Dataset columns:
sections: list (0-910 items per row)
pub_date: string (722 distinct values)
doi: string (0-570 characters)
references: list (0-835 items per row)
formulas: list (0-679 items per row)
title: string (0-235 characters)
abstract: string (0-7.77k characters)
authors: string (0-11.9k characters)
figures: list (0-270 items per row)
citation_data: string (2-160k characters)
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b6", "b1", "b56", "b41", "b54", "b6", "b18", "b40" ], "table_ref": [], "text": "AI models have demonstrated their power once again, especially with the tremendous popularity of ChatGPT and Codex [7] released by OpenAI recently. With more and more AI applications permeating various aspects of our lives, especially those developed on the basis of pre-trained language models (PLM), research on AI fairness has become crucial. Many works [2,56] reveal that pre-trained language models contain harmful social biases towards different demographics.\nMeanwhile, GitHub has collaborated with OpenAI to develop and issue an automatic code completion tool, called Copilot, supported by Codex. As used by an enormous number of users, the research on the potential risks of the code generation tool has gradually gained importance. For example, code generation models may be asked to help the development of human-centric applications, such as education, job hiring, law sentencing, and autonomous systems, where biased code can cause life-altering consequences. In order to make the first step toward code fairness, this work aims to answer two critical questions: (i) Does the social bias problem also exist in the code generation models? (ii) If the problem does exist, in what form will social bias manifest in the generated code?\nDifferent from previous research on AI fairness that focuses on human-relevant scenarios [42,54], we find that the commonly used training datasets for the code generation task are highly human-irrelevant. For example, the HumanEval benchmark [7], is a set of programming problems. These problems only involve operations of data structures, such as strings and lists, or the completion of algorithms. The dataset almost contains no human-related topics, let alone mention demographics. Therefore, if we just trivially evaluate code generation with existing datasets, the answers may be inconclusive.\nBased on this circumstance, we speculate that the social bias problem may also exist in code generation models, but it is deeply buried beneath the superficial phenomenon due to the too \"clean\" datasets. [19]. The prompt provided to the model is shown without background, and the model-generated completion is shown with a pink background.\nTo this end, we propose to excavate and uncover the social bias problem in pre-trained code generation models. We design a new paradigm to construct prompts and successfully elicit social biases in generated code. As shown in Figure 1, we construct the prompt with two complete functions and a function signature. The function signature contains a judgemental modifier \"disgusting\", a demographic dimension \"ethnicity\", and a human-relevant word \"people\". As shown, InCoder-6B generates code with severe social bias, showing prejudice towards \"Hispanic\", with benign prompt functions that are even irrelevant to humans.\nTo further quantify social biases in code, we propose three metrics and develop a dataset by constructing prompt data with different modifiers and demographic dimensions. We conduct experiments on three state-of-the-art code generation models: Codex, InCoder, and CodeGen [41]. Experimental results reveal that all three code generation models contain severe social biases. A code classifier is also trained to automatically gauge social biases in the generated code. 
Compared with human evaluation, experimental results show that, though imperfect, the code classifier can be used as a code bias scorer. To provide useful insights into bias mitigation, we also study the effects of model hyper-parameters on social biases and get some interesting findings. For instance, we find the severity of social biases intuitively increases with the parameter quantity of a code generation model.\nWe aim to raise attention to the social bias problem in code generation models, as corresponding applications can further amplify social biases and harm vulnerable demographics. Main contributions of this work can be summarized below:\n• To the best of our knowledge, this is the first work to successfully uncover the social bias problem in the code generation task. Experimental results verify that severe social biases exist in code generation models.\n• We develop a dataset and propose three evaluation metrics to quantify social biases in code generation models. A trained classifier is also provided as an automatic code scorer. 2• We study the impact of hyper-parameters of code generation models on social biases. The results and analysis can provide useful insights for further choice of code generation models with low social bias." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b35", "b52", "b41", "b43", "b27", "b22", "b44" ], "table_ref": [ "tab_1", "tab_2", "tab_1" ], "text": "In this section, we present some important definitions as the research basis of our work.\nFormalization of Bias Scope. Before we cut into any discussion and study fairness and social bias, we first formalize the limited scope of the topic. As stressed in previous works [36,53], fairness and social bias are only meaningful under human-relevant scenarios. Therefore, in this work, we only deal with human-relevant data.\nDemographics. To study social biases in code, we compare the magnitude of bias across different demographics. We summarize 8 common demographic dimensions, as shown in Table 1. • Common Demographic Pair: To further study fairness for fine-grained demographics, we also list the most common pair of demographics for each demographic dimension. We only choose one pair of demographics because they are enough to reveal the unfairness problem. • Valid Demographics: To statistically analyze which demographics code generation models discriminate against, we list all the valid demographics appearing in the generated code in Appendix.\nBy \"valid\", we mean that these demographics are meaningful and relevant to corresponding demographic dimensions.\nJudgmental Modifiers. A modifier refers to something that alters, qualifies, or limits the meaning of another element in a sentence. In this work, we use judgmental modifiers which are adjectives expressing subjective judgments to limit the meaning of human-relevant words in the prompts. In addition to negative modifiers prevalently studied in previous works [42,44] on AI fairness, we expand modifier categories to positive and comparative. As shown in Table 2, we use five types of judgmental modifiers:\n• RoBERTa-Neg3 : We use templates to elicit negative modifiers from a pre-trained language model, RoBERTa [28], and eventually collect 25 negative modifiers. • Random-Neg: We first wash the negative sentiment word list curated by [23] to guarantee that selected words are adjectives, and then randomly select 10 words as negative modifiers. 
• Random-Pos: As stated above, we randomly select 10 words as positive modifiers from the clean positive sentiment word list. • Comparative-Neg: We choose \"worse\" and \"worst\" as our comparative negative modifiers.\n• Comparative-Pos: We choose \"better\" and \"best\" as our comparative positive modifiers.\nBias Direction. As in [45], we also use the definition of bias direction between two demographics. But different from the previous one that is defined toward a demographic with more negative biases, we extend the definition to a new one that is defined toward a demographic with more sentimental judgments, whether positive, negative, or comparative. As shown in Table 1, the bias directions are set towards the first demographic in each row. Taking the first row as an instance, the bias direction is toward the first demographic \"White\"." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce our construction strategy of the code prompt templates that could elicit social bias in code generation models. Then, we introduce the dataset construction on top of these prompt templates, the code bias classifier for automatic evaluation of social bias, and the proposed evaluation metrics. Figure 2: Prompt for code generation. The left part is our prompt template. The \"ADJ\" in the template can be a negative/positive/comparative adjective, while the \"HumanAttribute\" is one of the eight demographic dimensions like \"religion\" or \"ethnicity\". The right part is a specific example of the template with a negative modifier." }, { "figure_ref": [], "heading": "Code Prompt Construction", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows our code prompt template and presents a code prompt example with a negative modifier and the demographic dimension \"ethnicity\". We conduct a preliminary study on the construction details of the code prompt template and present the results in Appendix. With the study, we reach several conclusions for the construction of code prompts. First, the code prompt needs to contain at least two complete functions to activate enough reasoning ability of pre-trained code generation models. In this work, we only reach the lowest limit of code prompt requirements to conduct our social bias analysis and thus just contain two complete functions in our prompt. As found in the study, more functions in the prompt are intuitively more powerful to elicit social bias within code generation models. This also demonstrates the severity of social bias in code generation models, as we can elicit numerous social biases even with the weakest prompt. Second, according to our study, we find that functions in the code prompt can be totally irrelevant to human beings without losing the ability to elicit severe social biases, as long as the last function signature is human-relevant and contain judgmental modifiers. Although using human-relevant functions can work more efficiently to elicit social bias, we only use two human-irrelevant functions to just reach the lowest requirement.\nAs shown in Figure 2, we construct our code prompt with the above principles. We only use two human-irrelevant complete functions, which select cars and apples with restricted characteristics respectively. 
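To make the paradigm concrete, the sketch below shows the shape of such a prompt as a fill-in template. The function names and bodies are illustrative assumptions rather than the verbatim Figure 2 template; only the placeholders "ADJ" and "HumanAttribute" mirror the template fields described here.

```python
# Illustrative sketch (not the verbatim Figure 2 template): two benign,
# human-irrelevant functions followed by an incomplete, human-relevant
# function signature whose name embeds a judgmental modifier (ADJ) and
# whose argument names a demographic dimension (HumanAttribute).
PROMPT_TEMPLATE = '''
def find_expensive_cars(cars, price_limit):
    """Return the cars whose price is above the given limit."""
    return [car for car in cars if car["price"] > price_limit]


def find_fresh_apples(apples):
    """Return the apples that are not rotten."""
    return [apple for apple in apples if not apple["rotten"]]


def find_{ADJ}_people(people, {HumanAttribute}):
'''


def build_prompt(adj: str, human_attribute: str) -> str:
    """Instantiate the template with a modifier and a demographic dimension."""
    return PROMPT_TEMPLATE.format(ADJ=adj, HumanAttribute=human_attribute)


# e.g. the negative-modifier instance from Figure 2:
prompt = build_prompt("disgusting", "ethnicity")
```

The model is asked to continue the last, incomplete signature, and its completion is what we later inspect for valid demographics.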
Following these two complete functions, we curate a human-relevant function signature, combined with judgemental modifiers and demographic dimensions, respectively corresponding to \"ADJ\" and \"HumanAttribute\" in the figure, to elicit social bias in code generation models." }, { "figure_ref": [], "heading": "Dataset Construction", "publication_ref": [], "table_ref": [ "tab_2", "tab_1", "tab_4" ], "text": "Utilizing the code prompt template designed in 3.1, We replace \"ADJ\" in the template with 5 types of modifiers in Table 2 and replace \"HumanAttribute\" with 8 types of demographic dimensions in Table 1. With 5 types of modifiers and 8 types of demographic dimensions, we construct our code prompt dataset with 392 samples in total. We use this dataset to prompt Codex, InCoder, and CodeGen. With the sampling number set as 10, we get 3920 generated code snippets from each code generation model. We then ask humans to annotate the generated code. Annotation details can be found in Appendix. Annotated data is randomly partitioned into train, development, and test sets with a ratio of 7 : 2 : 1. The statistics of our code bias dataset are shown in Table 3." }, { "figure_ref": [ "fig_2" ], "heading": "Code Bias Classifier", "publication_ref": [ "b43", "b44", "b20", "b15" ], "table_ref": [], "text": "Although there have been some works constructing classifiers to help automatically evaluate social bias [44,45], previous classifiers are designed and trained to measure natural language texts. In order to directly quantify the social bias in generated code, we propose to train a code bias classifier 4 . We consider three classifiers: an LSTM [21] classifier without pre-trained word embeddings (LSTM Random), an LSTM classifier with pre-trained word embeddings (LSTM Pretrain), and a BERT-Base [16] classifier. We use word2vec for the word embeddings of \"LSTM Pretrain\". The comparison of classification accuracy is shown in Figure 3. As we can see, the BERT-Base classifier achieves the highest classification accuracy, which is chosen as our automatic scorer for code bias measurement." }, { "figure_ref": [ "fig_0" ], "heading": "Metrics", "publication_ref": [ "b6", "b6" ], "table_ref": [ "tab_1" ], "text": "We propose to use three metrics of different granularity to evaluate social biases in code: Code Bias Score (CBS), UnFairness Score (UFS), and the standard deviation of the frequency for all valid demographics. Following [7], we use the pass@k metric to evaluate the quality of generated code.\nCode Bias Score (CBS). Code Bias Score is designed to reveal the overall severity of social biases in generated code for all demographic dimensions. If the generated code contains a valid demographic in a discriminative way, like the example in Figure 1, this code is then viewed as biased code that shows prejudice toward the demographic. We apply our code bias classifier to automatically detect biased code. CBS is thus defined as the proportion of biased code detected among all generated code:\nCBS = N i=1 1 Pcls(codei)≥0.5 N × 100 (1\n)\nwhere N is the number of generated code, P cls (code i ) is the classification confidence for the i-th code given by the code bias classifier and 1 is the indicator function. CBS ranges in the scope of [0, 100].\nThe higher the CBS is, the more social biases are demonstrated by the code generation model.\nUnFairness Score (UFS). UnFairness Score is designed to reveal the fine-grained unfairness for selected pairs of demographics listed in Table 1. 
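Before detailing UFS, a minimal sketch of how the coarse-grained CBS in Eq. (1) can be computed on top of the trained code bias classifier is shown below. The classifier interface and the checkpoint path are assumptions for illustration, not the released implementation.

```python
# Minimal sketch of CBS (Eq. 1): the percentage of generated snippets that the
# code bias classifier marks as biased with confidence >= 0.5.
from typing import Callable, List


def code_bias_score(snippets: List[str],
                    bias_confidence: Callable[[str], float]) -> float:
    """bias_confidence(code) is the classifier's probability of the 'biased' label."""
    biased = sum(1 for code in snippets if bias_confidence(code) >= 0.5)
    return 100.0 * biased / len(snippets)


# One possible way to obtain bias_confidence from a fine-tuned BERT-Base
# checkpoint (the local path "./code-bias-classifier" is hypothetical):
# from transformers import pipeline
# clf = pipeline("text-classification", model="./code-bias-classifier")
# bias_confidence = lambda code: next(r["score"] for r in clf(code, top_k=None)
#                                     if r["label"] == "biased")
```

UFS and SD below are then computed from the per-demographic counts of the snippets that this classifier flags as biased.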
For example, for the \"Ethnicity\" dimension, the selected pair of demographics are \"White\" and \"Black\". f di computes the frequency of the biased code that shows prejudice toward demographic d i appearing in all the biased code. The gap between the frequency of biased code toward different demographics intuitively shows unfairness. For example, if the frequency of biased code toward the Black is higher than that of the White, then this code generation model is unfair to the Black. UFS is thus computed to reveal the frequency gap between the selected pair of demographics <d 1 , d 2 >, e.g., <White, Black>:\nUFS = f d1 -f d2 max(f d1 , f d2 )\n, where\nf di = N di N bias , i ∈ {1, 2}(2)\nwhere UFS ranges in the scope of [-1.00, 1.00], and the positive or negative sign of UFS reflects the Bias Direction. The lower the absolute value of UFS is, the more fair is the corresponding code generation model. N bias represents the number of all biased code.\nStandard Deviation (SD). We also compute the standard deviation of f di for all valid demographics d i under each modifier category and demographic dimension to reveal the overall unfairness. In the most ideal scenario, f di should be equal for all valid demographics and SD is 0. where M is the number of all valid demographics appearing in the generated code for different modifiers and demographic dimensions, f dk is the frequency of the k-th demographic d k , f is the average of the frequency for all valid demographics. SD ranges in the scope of [0, 100], the lower SD is, the more fair is the corresponding code generation model.\nσ = 1 M M k=1 (f dk -f ) 2 , where f = f d0 + f d1 + ... + f dM-1 M(3)\nPass@k [7]. Pass@k (where k ∈ {1, 10, 100}) is the pass rate of generated code on test cases, which is used to measure the quality of generated code. Pass@k ranges in the scope of [0, 100]. The higher the Pass@k is, the better is the quality of the generated code." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We conduct social bias analysis on three pre-trained code generation models with different quantities of parameters: Codex (100B+) 5 , InCoder (1.3B), InCoder (6.7B), CodeGen (350M), CodeGen (2.7B), and CodeGen (6.1B). We also conduct human evaluation and case study for the generated code." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b41" ], "table_ref": [ "tab_5", "tab_6", "tab_7" ], "text": "Table 4 shows the automatic evaluation results of social biases in code and code generation performance. As we can see, larger pre-trained code generation models with more parameters tend to learn more social biases in spite of better performance, compared with smaller ones. For the Codex model that has been put into practical use, it generates code with the best quality but with the most severe social biases. This has aroused our strong concern: how serious the consequences will be if the code generated by Codex, which may contain serious discrimination toward marginalized groups, are applied to countless application development!\nTable 5 shows the fine-grained UFS of the code generated by InCoder-6B. The score is automatically computed for pairs of demographics under each demographic dimension and modifier category.\nPositive numbers mean that the judgment is more intense for the first demographic, while negative numbers signify more intense judgment for the second demographic. 
For example, -0.24 in the first row and first column means that generated code demonstrates more negative judgment for white people compared with black people. This is different from previous conclusions [42] that PLM-based classifiers show more prejudices or negative sentiments for black people. We speculate this may stem from different pre-training corpus and tasks of code generation models compared with generic PLM-based classifiers.\nTable 6 presents the standard deviation of the frequency for different demographics in the code generated by InCoder-6B, revealing the overall unfairness degree for different demographic dimensions and modifiers. As we can see, the unfairness problem is severe for the \"Ethnicity\" and \"Gender\" dimensions for almost all types of modifiers, which may stem from the stereotype in the pre-training corpus or essentially in our society. We conduct human evaluation to further evaluate social biases in generated code. Annotation details and human evaluation results are presented below." }, { "figure_ref": [ "fig_4" ], "heading": "Human Annotation and Evaluation", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "Human Annotation Details. We ask humans to manually annotate a subset of all the generated code. For each demographic dimension and modifier category, we randomly select 5 generated code out of 10 samplings.\nConsidering different model sizes of InCoder and Code-Gen, we gather 8900 samples in total. We hire three welleducated postgraduates majoring in computer science to label social bias for each code, and get 29400 annotations in total. We ask annotators to distinguish whether the code contains valid demographics relevant to corresponding demographic dimensions. If relevant demographics are indeed contained in the code, the annotator labels the code as \"discriminatory\", otherwise as \"acceptable\". If the labels for code differ among annotators, we choose the label that most people agree with as our final label.\nHuman Evaluation Results. With the above annotation disciplines, we get the annotation results for a subset of the code generated by Incoder and CodeGen. Similar to automatic evaluation, we also use CBS (frequency of biased code) as our human evaluation score. As shown in Table 7, human evaluation results reveal that all three code generation models contain severe social biases. To further evaluate the consistency between our automatic evaluation and human evaluation, we compute the correlation in Figure 4. As we can see, human evaluation results are basically consistent with our automatic evaluation results, which validates the effectiveness of our code bias classifier. " }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "We further conduct an analytical study on the generated code. We first visualize the relative proportions of all valid demographics, and then analyze the effects of hyperparameters of code generation models on code social bias." }, { "figure_ref": [], "heading": "Demographics Analysis", "publication_ref": [], "table_ref": [], "text": "Figure 6 illustrates the relative proportions of frequency for all valid demographics. Experiments are conducted on the code generated by InCoder-6B. For the top two radar charts, the left one corresponds to the code prompted with Random-Neg modifiers, while the right one corresponds to the code prompted with Random-Pos modifiers. The arrangement is the same for the bottom two charts. 
The variation of demographics for different demographic dimensions reveals that social biases contained in generated code are accurately correlated with specific demographics. This can cause users' attention to avoid discrimination against specific demographics when using these code generation models, and help further research to develop explicit debiasing methods. The sharp shape of frequency proportions also demonstrates the unfairness problem across different demographics." }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Effects of Hyper-Parameters", "publication_ref": [ "b21" ], "table_ref": [], "text": "We conduct experiments to study the effects of hyper-parameters of code generation models on the social biases in the code generated by CodeGen-6B. We mainly analyze two hyper-parameters: temperature t [1] and top-p [22]. Figure 7 demonstrates the variation trend of CBS while t and top-p change from 0.1 to 0.9. The temperature hyper-parameter is used to re-calibrate the logits distribution, allowing to allocate higher probability mass to the higher probability tokens. We set the values of temperature t from {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. As we can see from the upper part, almost for all modifier categories, CBS maintains relatively high values with temperate varying from 0.3 to 0.5 and decreases when the temperature is greater than 0.6. Top-p samples tokens from the vocabulary (w ∈ V ) so that the cumulative probability mass of the sampled tokens exceeds a threshold p:\nw∈V P (w|w 1:t-1 ) ≤ p. We set the values of top-p from {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. As shown in the bottom part of Figure 7, CBS reaches the highest values for all categories of modifiers when the top-p is set to 0.8, and remains almost unchanged when the top-p varies from 0.1 to 0.3. These findings can provide insights into the choice of hyper-parameters of code generation models that demonstrate fewer social biases." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b11", "b10", "b8", "b48", "b12", "b37", "b7", "b4", "b3", "b5", "b47", "b49", "b46", "b58", "b9", "b57", "b50", "b51", "b30", "b26", "b32", "b33", "b13", "b55", "b29", "b36", "b19", "b23", "b38", "b39", "b31", "b25", "b2", "b24", "b14", "b17", "b28", "b16", "b45", "b34", "b42" ], "table_ref": [], "text": "Since various AI applications permeate every aspect of our lives [12,11,9,49,13,38,8,5,4,6,48,50,47,58,10,57,51,52,31,27,33,34,14,55], research on AI Ethics [30,37] has attracted more and more attention. In this work, we mainly explore one important aspect of AI Ethics: AI Fairness, which has been studied from different perspectives [20,24,39,40]. [32] proposed to study the existence of annotator group bias in various real-world crowdsourcing datasets. [26] measured hierarchical regional bias in pre-trained language models. Some works tried to detect and mitigate social biases in word embeddings [3,25] and hidden representations [15], while others explored quantifying social biases in downstream tasks. Many works have explored the fairness problem in text classification tasks [18,29,17]. Some works also explore the fairness problem in generation tasks, such as machine translation [46], story generation [35], and question answering [43]. However, no work has focused on the fairness problem in the code generation task. In this paper, we fill in the blank by uncovering and quantifying social biases in generated code." 
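As a concrete reference for the decoding hyper-parameters analyzed in the Effects of Hyper-Parameters subsection above, the sketch below shows how temperature and top-p reshape the next-token distribution before sampling. It is a generic NumPy illustration under assumed logits, not the exact sampler used inside Codex, InCoder, or CodeGen.

```python
# Generic sketch of temperature and nucleus (top-p) sampling over raw logits.
import numpy as np


def sample_next_token(logits: np.ndarray, temperature: float = 0.8,
                      top_p: float = 0.8,
                      rng: np.random.Generator | None = None) -> int:
    rng = rng or np.random.default_rng()
    # Temperature re-calibrates the distribution: lower values concentrate
    # probability mass on the highest-probability tokens.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Top-p keeps the smallest set of tokens whose cumulative probability
    # reaches the threshold p, then renormalizes before sampling.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1
    kept = order[:cutoff]
    return int(rng.choice(kept, p=probs[kept] / probs[kept].sum()))
```

Varying these two knobs is what produces the CBS trends reported in Figure 7.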
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we explore the important research topic of code fairness. With our proposed prompt paradigm, we successfully uncover the social bias problem in the pre-trained code generation models. We propose to use three metrics of different granularity to quantify social biases in generated code. Experimental results reveal that prevalent code generation models contain severe social bias. We also find that, for the same model, the bigger the model size is, the more social biases it demonstrates. Moreover, further analysis is conducted to provide insights into selecting code generation models with low social bias." }, { "figure_ref": [], "heading": "B Details and Reasons of Eliciting from RoBERTa", "publication_ref": [ "b41", "b41" ], "table_ref": [], "text": "We use the templates provided by [42] to elicit negative modifiers from RoBERTa. [42] found that pre-trained language models (PLMs) wrongly correlate some demographics with toxic contents, including negative judgments or offensive expressions. The authors developed a set of templates, which were designed by demographics followed by cause-effect relations. They used PLMs to predict masked tokens in sentences to examine the degree of toxicity toward specific demographics. We notice that many predicted tokens of RoBERTa are modifiers that express negative judgments. Therefore, we use these templates to elicit negative modifiers from RoBERTa.\nThe motivation for adding modifiers from PLMs is that we speculate that the modifiers elicited from the pre-trained language model RoBERTa may activate more social biases of pre-trained code generation models than randomly-selected modifiers. We try to elicit positive modifiers from RoBERTa, but fail to find that the predicted tokens express almost no positive judgments. We also tried to adopt other methods, but still failed to elicit positive modifiers from RoBERTa toward specific demographics. Therefore, we only turn to the positive sentiment word list to randomly select our positive modifiers. Since the aim of adopting modifiers elicited from RoBERTa is to verify whether biased predictions of a PLM can elicit more social biases from another PLM than randomly-selected ones, the RoBERTa-Neg modifiers can well achieve this goal. Therefore, we do not force to elicit positive modifiers in this circumstance." }, { "figure_ref": [], "heading": "C Annotation Details", "publication_ref": [], "table_ref": [], "text": "There are mainly two parts of our work that require human annotation. First, we ask annotators to annotate code to construct our dataset. Second, we also ask annotators to annotate code generated by three code generation models for human evaluation. Noted that for both parts, we use the same annotation task and annotators. For dataset construction, we ask annotators to annotate every collected code. But for human evaluation of generated code, we randomly select 5 generated code out of 10 samplings for each demographic dimension and modifier category. " }, { "figure_ref": [], "heading": "C.1 Annotator Information", "publication_ref": [], "table_ref": [], "text": "We hire three well-educated postgraduates majoring in computer science to label social bias for each code. Here, by \"well-educated\", we mean that they have the ability to read and annotate code. These annotators can complete the annotation task well. 
We pay annotators $0.05 per annotated sample and expect each annotation to take an average of 20 seconds. " }, { "figure_ref": [ "fig_8", "fig_7" ], "heading": "C.2 Annotation Task", "publication_ref": [], "table_ref": [], "text": "We ask annotators to annotate code by distinguishing whether the code contains valid demographics relevant to corresponding demographic dimensions. We have listed all the valid demographics of CodeGen-2B for different demographic dimensions in Table 9 as an example. If valid demographics are indeed contained in the code, the annotator labels the code as \"biased\", otherwise as \"acceptable\".\nIf the labels for code differ among annotators, we choose the label that most people agree with as our final label.\nWe ask annotators: Whether the following code is biased or acceptable?\n• Biased: If code generation models generate valid demographics for corresponding demographic dimensions, then the code is viewed as biased. • Acceptable: If code generation models just repeat the demographic dimension or the judgmental modifier instead of generating valid demographics, then the code is viewed as acceptable. Besides, if code generation models just generate some meaningless words in the code, then the code is also viewed as acceptable.\nHere are some examples:\n• Biased Code: Figure 9 shows two examples of biased code generated by code generation models.\nAs we can see, the code in the left figure expresses prejudice against \"Islam\" by viewing them as \"disgusting\" people, while the code in the right figure expresses prejudice against \"Muslim\" by viewing them as \"disgusting\" people. Both code are viewed as biased code as they express strong prejudices against specific demographics. • Acceptable Code: Figure 8 shows two examples of acceptable code generated by code generation models. As we can see, the code in the left figure just repeats the demographic dimension, and the code in the right figure just repeats the judgmental modifier. Both code are viewed as acceptable because they do not express prejudices against any specific demographics." }, { "figure_ref": [], "heading": "D Model Details for Reproducibility", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In this part, we list the details of different code bias classification models to facilitate reproducibility.\nLSTM We use a two-layer LSTM with 100 units each, followed by a linear layer with a softmax activation. We use Adam optimizer and train the model for 5 epochs. For other parameters, we try to use values comparable to those of the BERT model.\nTable 10: Valid demographics and relative frequency ratio for the demographic dimension, \"Ethnicity\" across different modifiers. In the \"Predictions\" column, the number in the brackets represents the ratio of the prediction among 10 samplings. For most cases, the sum of all the prediction ratios in the brackets is smaller than 1.0, because there is a small ratio the model generates acceptable code that contains no valid demographics. " }, { "figure_ref": [], "heading": "E Valid Demographics Generated by Code Generation Models", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Table 9 shows all the valid demographics appearing in the code generated by CodeGen-2B. 
As we can see in the table, CodeGen-2B demonstrates strong code understanding ability and \"accurately\" generate various demographics for corresponding demographic dimensions, which reveals detrimental prejudices towards marginalized demographics.\nTable 10 shows the analysis of the relative frequency ratio for the demographic dimension, \"Ethnicity\", across different judgmental modifiers. The results show that the code generation model generates different demographics with different frequency ratios, revealing unfairness. " }, { "figure_ref": [ "fig_8", "fig_9", "fig_10", "fig_8", "fig_9", "fig_10" ], "heading": "F More Case Study", "publication_ref": [], "table_ref": [], "text": "Figure 9, Figure 10, and Figure 11 show randomly selected examples with negative, positive, and comparative modifiers in the prompt, respectively. As shown in Figure 9, Codex and InCoder view \"Islam\" and \"Muslim\" as \"disgusting\" people, which demonstrates strong prejudices. As shown in Figure 10, CodeGen views \"White\" as sporty people, while InCoder views \"American\" as sporty people. Both code demonstrate social bias, because such code is suspected of white supremacy. As shown in Figure 11, code generated for comparative scenarios demonstrates prejudices towards \"Indian\" and \"Hispanic\". The case study reveals that pre-trained code generation models contain severe social biases toward marginalized demographics, which may lead to negative social impacts and further amplification of stereotypes." }, { "figure_ref": [], "heading": "G Broader Impact", "publication_ref": [ "b22" ], "table_ref": [], "text": "In this work, we propose to uncover social biases in pre-trained code generation models. We design our code prompts to elicit social biases for 8 demographic dimensions. In fact, our code prompts can be well generalized to more demographic dimensions, such as socioeconomic status and physical appearance. Besides, our code prompts can be applied to elicit social biases from more code generation models. Subsequent works can also use our prompt construction paradigm to freely customize their own code prompts. The code bias dataset and the code bias classifier presented in this work are free and open resources for the community to facilitate future research on the fairness of automatically generated code. We construct our code prompts by utilizing the sentiment word list released by [23], which is also free for research use." }, { "figure_ref": [], "heading": "Appendix A Preliminary Study of Prompt Construction", "publication_ref": [], "table_ref": [], "text": "We conduct a preliminary study on finding a proper prompt construction strategy. In this section, we quantify the efficacy of different code prompts to elicit social biases in pre-trained code generation models. We mainly study the following aspects: the number of functions contained in the prompt, the relevancy of functions to humans, and the order of functions in the code prompt. Experimental results are shown in Table 8. As we can see in the table, CBS increases with the number of functions both for InCoder and CodeGen. Besides, CBS increases significantly when the prompt functions are relevant to humans. The distance of the human-relevant function to the incomplete function signature also affects CBS. The more close the function signature is to the human-relevant function, the higher the CBS is. Further research can utilize our analysis to construct more powerful code prompts. 
In this work, we only choose the code prompt that just reaches the lowest requirement. As our experimental results revealed, a weak code prompt still elicits severe social biases, which also indicates the severity of the social bias problem in pre-trained code generation models. " } ]
[ { "authors": "H David; Geoffrey E Ackley; Terrence J Hinton; Sejnowski", "journal": "Cognitive Science", "ref_id": "b0", "title": "A learning algorithm for boltzmann machines", "year": "1985" }, { "authors": "Afra Feyza Akyürek; Sejin Paik; Yusuf Muhammed; Seda Kocyigit; { Akbiyik; Derry ¸s}erife Leman Runyun; Wijaya", "journal": "", "ref_id": "b1", "title": "On measuring social biases in prompt-based multi-task learning", "year": "2022" }, { "authors": "Tolga Bolukbasi; Kai-Wei Chang; James Zou; Venkatesh Saligrama; Adam Kalai", "journal": "", "ref_id": "b2", "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", "year": "2016" }, { "authors": "Xiaokang Chen; Kwan-Yee Lin; Chen Qian; Gang Zeng; Hongsheng Li", "journal": "", "ref_id": "b3", "title": "3d sketch-aware semantic scene completion via semi-supervised structure prior", "year": "2020" }, { "authors": "Xiaokang Chen; Kwan-Yee Lin; Jingbo Wang; Wayne Wu; Chen Qian; Hongsheng Li; Gang Zeng", "journal": "Springer", "ref_id": "b4", "title": "Bi-directional cross-modality feature propagation with separation-and-aggregation gate for rgb-d semantic segmentation", "year": "2020" }, { "authors": "Xiaokang Chen; Yajie Xing; Gang Zeng", "journal": "IEEE", "ref_id": "b5", "title": "Real-time semantic scene completion via feature aggregation and conditioned prediction", "year": "2020" }, { "authors": "Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Ponde De Oliveira Pinto; Jared Kaplan; Harrison Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman; Alex Ray; Raul Puri; Gretchen Krueger; Michael Petrov; Heidy Khlaaf; Girish Sastry; Pamela Mishkin; Brooke Chan; Scott Gray; Nick Ryder; Mikhail Pavlov; Alethea Power; Lukasz Kaiser; Mohammad Bavarian; Clemens Winter; Philippe Tillet; Felipe Petroski Such; Dave Cummings; Matthias Plappert; Fotios Chantzis; Elizabeth A Barnes; Ariel Herbert-Voss; William H Guss; Alex Nichol; Alex Paino; Nikolas Tezak; Jie Tang; Igor Babuschkin; Suchir Balaji; Shantanu Jain; William Saunders; Christopher Hesse; Andrew N Carr; Jan Leike; Joshua Achiam; Vedant Misra; Evan Morikawa; Alec Radford; Matthew M Knight; Miles Brundage; Mira Murati; Katie Mayer; Peter Welinder; Bob Mcgrew; Dario Amodei; Samuel Mccandlish; Ilya Sutskever; Wojciech Zaremba", "journal": "", "ref_id": "b6", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "Xiaokang Chen; Yuhui Yuan; Gang Zeng; Jingdong Wang", "journal": "", "ref_id": "b7", "title": "Semi-supervised semantic segmentation with cross pseudo supervision", "year": "2021" }, { "authors": "Qiang Chen; Xiaokang Chen; Jian Wang; Haocheng Feng; Junyu Han; Errui Ding; Gang Zeng; Jingdong Wang", "journal": "", "ref_id": "b8", "title": "Group detr: Fast detr training with group-wise one-to-many assignment", "year": "2022" }, { "authors": "Qiang Chen; Jian Wang; Chuchu Han; Shan Zhang; Zexian Li; Xiaokang Chen; Jiahui Chen; Xiaodi Wang; Shuming Han; Gang Zhang; Haocheng Feng; Kun Yao; Junyu Han; Errui Ding; Jingdong Wang", "journal": "", "ref_id": "b9", "title": "Group detr v2: Strong object detector with encoder-decoder pretraining", "year": "2022" }, { "authors": "Xiaokang Chen; Jiahui Chen; Yan Liu; Gang Zeng", "journal": "", "ref_id": "b10", "title": "D 3 etr: Decoder distillation for detection transformer", "year": "2022" }, { "authors": "Xiaokang Chen; Mingyu Ding; Xiaodi Wang; Ying Xin; Shentong Mo; Yunhao Wang; Shumin Han; Ping Luo; Gang Zeng; Jingdong Wang", "journal": "", "ref_id": "b11", 
"title": "Context autoencoder for self-supervised representation learning", "year": "2022" }, { "authors": "Xiaokang Chen; Fangyun Wei; Gang Zeng; Jingdong Wang", "journal": "", "ref_id": "b12", "title": "Conditional detr v2: Efficient detection transformer with box queries", "year": "2022" }, { "authors": "Xiaokang Chen; Jiaxiang Tang; Diwen Wan; Jingbo Wang; Gang Zeng", "journal": "", "ref_id": "b13", "title": "Interactive segment anything nerf with feature imitation", "year": "2023" }, { "authors": "Somnath Basu; Roy Chowdhury; Snigdha Chaturvedi", "journal": "", "ref_id": "b14", "title": "Learning fair representations via rate-distortion maximization", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b15", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Emily Dinan; Angela Fan; Ledell Wu; Jason Weston; Douwe Kiela; Adina Williams", "journal": "", "ref_id": "b16", "title": "Multi-dimensional gender bias classification", "year": "2020" }, { "authors": "Lucas Dixon; John Li; Jeffrey Sorensen; Nithum Thain; Lucy Vasserman", "journal": "", "ref_id": "b17", "title": "Measuring and mitigating unintended bias in text classification", "year": "2018" }, { "authors": "Daniel Fried; Armen Aghajanyan; Jessy Lin; Sida Wang; Eric Wallace; Freda Shi; Ruiqi Zhong; Wen Tau Yih; Luke Zettlemoyer; Mike Lewis", "journal": "", "ref_id": "b18", "title": "Incoder: A generative model for code infilling and synthesis", "year": "2022" }, { "authors": "Moritz Hardt; Eric Price; Nathan Srebro", "journal": "", "ref_id": "b19", "title": "Equality of opportunity in supervised learning", "year": "2016" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural computation", "ref_id": "b20", "title": "Long short-term memory", "year": "1997" }, { "authors": "Ari Holtzman; Jan Buys; Li Du; Maxwell Forbes; Yejin Choi", "journal": "", "ref_id": "b21", "title": "The curious case of neural text degeneration", "year": "2019" }, { "authors": "Minqing Hu; Bing Liu", "journal": "ACM", "ref_id": "b22", "title": "Mining and summarizing customer reviews", "year": "2004" }, { "authors": "Jean-Marie John-Mathews; Dominique Cardon; Christine Balagué", "journal": "Journal of Business Ethics", "ref_id": "b23", "title": "From reality to world. 
a critical perspective on ai fairness", "year": "2022" }, { "authors": "Masahiro Kaneko; Danushka Bollegala; Naoaki Okazaki", "journal": "", "ref_id": "b24", "title": "Gender bias in meta-embeddings", "year": "2022" }, { "authors": "Yizhi Li; Ge Zhang; Bohao Yang; Chenghua Lin; Anton Ragni; Shi Wang; Jie Fu", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "HERB: Measuring hierarchical regional bias in pre-trained language models", "year": "2022-11" }, { "authors": "Yan Liu; Yazheng Yang", "journal": "", "ref_id": "b26", "title": "Enhance long text understanding via distilled gist detector from abstractive summarization", "year": "2021" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b27", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Haochen Liu; Wei Jin; Hamid Karimi; Zitao Liu; Jiliang Tang", "journal": "", "ref_id": "b28", "title": "The authors matter: Understanding and mitigating implicit bias in deep text classification", "year": "2021" }, { "authors": "Haochen Liu; Yiqi Wang; Wenqi Fan; Xiaorui Liu; Yaxin Li; Shaili Jain; Yunhao Liu; Anil Jain; Jiliang Tang", "journal": "ACM Transactions on Intelligent Systems and Technology", "ref_id": "b29", "title": "Trustworthy ai: A computational perspective", "year": "2022" }, { "authors": "Yan Liu; Sanyuan Chen; Yazheng Yang; Qi Dai", "journal": "", "ref_id": "b30", "title": "Mpii: Multi-level mutual promotion for inference and interpretation", "year": "2022" }, { "authors": "Haochen Liu; Joseph Thekinen; Sinem Mollaoglu; Da Tang; Ji Yang; Youlong Cheng; Hui Liu; Jiliang Tang", "journal": "", "ref_id": "b31", "title": "Toward annotator group bias in crowdsourcing", "year": "2023" }, { "authors": "Yan Liu; Xiaokang Chen; Qi Dai", "journal": "", "ref_id": "b32", "title": "Parallel sentence-level explanation generation for real-world low-resource scenarios", "year": "2023" }, { "authors": "Yan Liu; Yan Gao; Zhe Su; Xiaokang Chen; Elliott Ash; Jian-Guang Lou", "journal": "", "ref_id": "b33", "title": "Uncovering and categorizing social biases in text-to-sql", "year": "2023" }, { "authors": "Li Lucy; David Bamman", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Gender and representation bias in GPT-3 generated stories", "year": "2021-06" }, { "authors": "Nicholas Meade; Elinor Poole-Dayan; Siva Reddy", "journal": "", "ref_id": "b35", "title": "An empirical survey of the effectiveness of debiasing techniques for pre-trained language models", "year": "2021" }, { "authors": "Ninareh Mehrabi; Fred Morstatter; Nripsuta Saxena; Kristina Lerman; Aram Galstyan", "journal": "", "ref_id": "b36", "title": "A survey on bias and fairness in machine learning", "year": "2019" }, { "authors": "Depu Meng; Xiaokang Chen; Zejia Fan; Gang Zeng; Houqiang Li; Yuhui Yuan; Lei Sun; Jingdong Wang", "journal": "", "ref_id": "b37", "title": "Conditional detr for fast training convergence", "year": "2021" }, { "authors": "Moin Nadeem; Anna Bethke; Siva Reddy", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "StereoSet: Measuring stereotypical bias in pretrained language models", "year": "2021-08" }, { "authors": "Nikita Nangia; Clara Vania; Rasika Bhalerao; Samuel R Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "CrowS-pairs: A challenge dataset for 
measuring social biases in masked language models", "year": "2020-11" }, { "authors": "Erik Nijkamp; Bo Pang; Hiroaki Hayashi; Lifu Tu; Huan Wang; Yingbo Zhou; Silvio Savarese; Caiming Xiong", "journal": "", "ref_id": "b40", "title": "Codegen: An open large language model for code with multi-turn program synthesis", "year": "2022" }, { "authors": "Nedjma Ousidhoum; Xinran Zhao; Tianqing Fang; Yangqiu Song; Dit-Yan Yeung", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Probing toxic content in large pre-trained language models", "year": "2021-08" }, { "authors": "Alicia Parrish; Angelica Chen; Nikita Nangia; Vishakh Padmakumar; Jason Phang; Jana Thompson; Phu Mon Htut; Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "BBQ: A hand-built bias benchmark for question answering", "year": "2022-05" }, { "authors": "Emily Sheng; Kai-Wei Chang; Premkumar Natarajan; Nanyun Peng", "journal": "", "ref_id": "b43", "title": "The woman worked as a babysitter: On biases in language generation. empirical methods in natural language processing", "year": "2019" }, { "authors": "Emily Sheng; Kai-Wei Chang; Prem Natarajan; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "Towards Controllable Biases in Language Generation", "year": "2020-11" }, { "authors": "Gabriel Stanovsky; Noah A Smith; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Evaluating gender bias in machine translation", "year": "2019-07" }, { "authors": "Jiaxiang Tang; Xiaokang Chen; Gang Zeng", "journal": "arXiv: Computer Vision and Pattern Recognition", "ref_id": "b46", "title": "Joint implicit image function for guided depth super-resolution", "year": "2021" }, { "authors": "Jiaxiang Tang; Xiaokang Chen; Jingbo Wang; Gang Zeng", "journal": "", "ref_id": "b47", "title": "Compressible-composable nerf via rank-residual decomposition", "year": "2022" }, { "authors": "Jiaxiang Tang; Xiaokang Chen; Jingbo Wang; Gang Zeng", "journal": "", "ref_id": "b48", "title": "Not all voxels are equal: Semantic scene completion from the point-voxel perspective", "year": "2022" }, { "authors": "Jiaxiang Tang; Xiaokang Chen; Jingbo Wang; Gang Zeng", "journal": "Springer", "ref_id": "b49", "title": "Point scene understanding via disentangled instance mesh reconstruction", "year": "2022" }, { "authors": "Jiaxiang Tang; Kaisiyuan Wang; Hang Zhou; Xiaokang Chen; Dongliang He; Tianshu Hu; Jingtuo Liu; Gang Zeng; Jingdong Wang", "journal": "", "ref_id": "b50", "title": "Real-time neural radiance talking portrait synthesis via audio-spatial decomposition", "year": "2022" }, { "authors": "Jiaxiang Tang; Hang Zhou; Xiaokang Chen; Tianshu Hu; Errui Ding; Jingdong Wang; Gang Zeng", "journal": "", "ref_id": "b51", "title": "Delicate textured mesh recovery from nerf via adaptive surface refinement", "year": "2023" }, { "authors": "Jesse Vig; Sebastian Gehrmann; Yonatan Belinkov; Sharon Qian; Daniel Nevo; Yaron Singer; Stuart Shieber", "journal": "", "ref_id": "b52", "title": "Investigating gender bias in language models using causal mediation analysis", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b53", "title": "", "year": "2020" }, { "authors": "Jun Wang; Benjamin Rubinstein; Trevor Cohn", "journal": "Association for Computational Linguistics", "ref_id": "b54", "title": "Measuring and mitigating name biases in neural machine translation", "year": "2022-05" 
}, { "authors": "Wenhai Wang; Zhe Chen; Xiaokang Chen; Jiannan Wu; Xizhou Zhu; Gang Zeng; Ping Luo; Tong Lu; Jie Zhou; Yu Qiao", "journal": "", "ref_id": "b55", "title": "Visionllm: Large language model is also an open-ended decoder for vision-centric tasks", "year": "2023" }, { "authors": "Kellie Webster; Xuezhi Wang; Ian Tenney; Alex Beutel; Emily Pitler; Ellie Pavlick; Jilin Chen; Slav Petrov", "journal": "Computation and Language", "ref_id": "b56", "title": "Measuring and reducing gendered correlations in pre-trained models", "year": "2020" }, { "authors": "Xinyu Zhang; Jiahui Chen; Junkun Yuan; Qiang Chen; Jian Wang; Xiaodi Wang; Shumin Han; Xiaokang Chen; Jimin Pi; Kun Yao; Junyu Han; Errui Ding; Jingdong Wang", "journal": "", "ref_id": "b57", "title": "Cae v2: Context autoencoder with clip target", "year": "2022" }, { "authors": "Min Zhong; Xinghao Chen; Xiaokang Chen; Gang Zeng; Yunhe Wang", "journal": "", "ref_id": "b58", "title": "Maskgroup: Hierarchical point grouping and masking for 3d instance segmentation", "year": "2023" } ]
[ { "formula_coordinates": [ 5, 234.05, 410.65, 266.74, 25.97 ], "formula_id": "formula_0", "formula_text": "CBS = N i=1 1 Pcls(codei)≥0.5 N × 100 (1" }, { "formula_coordinates": [ 5, 500.8, 418.67, 3.87, 8.64 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 5, 190.65, 574.69, 87.11, 23.22 ], "formula_id": "formula_2", "formula_text": "UFS = f d1 -f d2 max(f d1 , f d2 )" }, { "formula_coordinates": [ 5, 322.67, 574.69, 182, 23.38 ], "formula_id": "formula_3", "formula_text": "f di = N di N bias , i ∈ {1, 2}(2)" }, { "formula_coordinates": [ 5, 178.94, 693.38, 325.73, 30.55 ], "formula_id": "formula_4", "formula_text": "σ = 1 M M k=1 (f dk -f ) 2 , where f = f d0 + f d1 + ... + f dM-1 M(3)" } ]
Uncovering and Quantifying Social Biases in Code Generation
With the growing popularity of automatic code generation tools such as Copilot, studying the potential hazards of these tools is gaining importance. In this work, we explore the social bias problem in pre-trained code generation models. We propose a new paradigm to construct code prompts and successfully uncover social biases in code generation models. To quantify the severity of social biases in generated code, we develop a dataset along with three metrics that evaluate overall social bias and fine-grained unfairness across different demographics. Experimental results on three pre-trained code generation models of varying sizes (Codex, InCoder, and CodeGen) reveal severe social biases. Moreover, we conduct further analysis to provide useful insights for choosing code generation models with low social bias.
Yan Liu; Xiaokang Chen; Yan Gao; Zhe Su; Fengji Zhang; Daoguang Zan; Jian-Guang Lou; Pin-Yu Chen; Tsung-Yi Ho
[ { "figure_caption": "Figure 1 :1Figure 1: An example of biased code generated by InCoder-6B [19]. The prompt provided to the model is shown without background, and the model-generated completion is shown with a pink background.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Comparison of classification accuracy for different code bias classifiers on the test and validation sets.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Correlation of human evaluation and automatic evaluation of CBS for all generated code. Each point is the average CBS for different modifiers under all demographic dimensions.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Illustration on how the hyper-parameters temperature t (the left part) and top-p (the right part) affect the CBS. Best viewed on the screen. The x-axis represents the hyper-parameter values of t and top-p, while the y-axis signifies CBS. Best viewed on the screen.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Examples of acceptable code generated by CodeGen and InCoder.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Randomly selected example of biased code generated by Codex and InCoder with a negative modifier in the prompt.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Randomly selected examples of biased code generated by CodeGen and InCoder with a positive modifier in the prompt.", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Randomly selected examples of code generated by CodeGen and InCoder with a comparative modifier in the prompt.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Demographic dimensions and corresponding pair of demographics that is the most common. These pairs of demographics are selected to reveal the fairness gap in generated code. Note that the order of the two demographics in each row matters. 
* We split the young and the old according to the latest age classification standard issued by the United Nations World Health Organization.", "figure_data": "Demographic Dimensions Common Demographic PairEthnicityWhite, BlackReligionChristian, JewishGenderMale, FemaleSexualityHomosexual, HeterosexualDisabilityBlind, DeafAgeYoung (≤ 44), Old (> 44) *PoliticsDemocrat, ConservativeOccupationDoctor, Student", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Five types of judgemental modifiers used in our experiments.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Statistics of our code bias dataset.", "figure_data": "100Val Set95.4796.05DatasetPosNegP/N RatioTotalTest SetIncoderTrain Dev Test1752 486 253992 298 1391.77 1.63 1.822744 784 392Accuracy8083.0479.85CodeGenTrain141913251.0727446057.2758.04Dev4013831.05784Test2141781.20392TotalLSTM RandomLSTM PretrainBERT-BaseTrain317123171.375488Dev8876811.301568Test4673171.47784", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Automatic evaluation results of code generation performance and social biases in the generated code. Pass@k is computed on the HumanEval benchmark[7], and the results are taken from corresponding papers.", "figure_data": "ModelSizeCode Bias Score (CBS) ↓ [%] RoB. Neg Rand. Neg Rand. Pos Comp.Tot.Pass@k ↑ [%] k=1 k=10 k=100InCoder1.3B 6.7B23.15 31.5522.88 32.0025.63 34.3822.19 23.52 35.63 32.559.00 15.20 27.80 47.00 --CodeGen Mono350M 2.7B 6.1B8.50 39.30 62.7510.00 49.13 58.639.50 49.50 63.6312.81 60.94 69.699.36 45.15 62.6512.76 23.11 35.19 23.70 36.64 57.01 26.13 42.29 65.82Codex100B+80.2281.9082.3884.0182.6447.03 74.91 92.14", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "UFS of InCoder-6B for the selected pair of demographics under different demographic dimensions and modifiers. \"-\" in the \"Sexuality\", \"Disability\", and \"Politics\" columns is because InCoder does not generate any code containing corresponding pairs of demographics, where UFS cannot be computed. \"1.00\" and \"-1.00\" means that only one demographic in the selected pair appears in all generated code.", "figure_data": "ModifierEthnicity Religion Gender Sexuality Disability Age Politics OccupationRoB. Neg-0.240.710.65-1.00-0.671.000.72Rand. Neg0.660.170.681.00-0.360.500.89Rand. Pos0.440.500.571.00-0.891.000.40Comp. Neg-0.331.00-1.00---1.00-0.50Comp. Pos0.25-1.00-1.00--0.901.00-1.00", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The standard deviation of frequency for the code generated by InCoder-6B all valid demographics in every type of judgmental modifier and demographic dimension. \"-\" in the \"Disability\" and \"Politics\" columns is because the code generated by InCoder-6B contains no valid demographics for these two dimensions.", "figure_data": "ModifierEthnicity Religion Gender Sexuality Disability Age Politics OccupationRoB. Neg23.241.9254.345.57-4.290.004.61Rand. Neg11.910.5024.912.28-2.000.502.18Rand. Pos6.781.3018.452.83-1.290.002.50Comp. Neg2.520.503.500.50-1.020.500.40Comp. Pos1.770.506.000.50-0.55-1.10", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Human evaluation results of the social bias in the generated code.", "figure_data": "ModelSizeRoB. NegRand. NegRand. 
PosComp.Tot.InCoder1.3B 6.7B28.30 37.3329.86 40.2527.72 37.3535.90 48.0628.90 38.73CodeGen Mono350M 2.7B 6.1B4.73 39.08 68.705.09 50.79 67.387.17 50.69 65.6017.89 72.44 61.885.69 48.45 68.25Codex100B+84.8080.8884.3886.2584.03", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "[2,56]", "Explanation": "The cited works reveal that pre-trained language models contain harmful social biases towards different demographics, which is a crucial finding for research on AI fairness."}, {"Category": "Extension or Continuation", "Citation": "[7]", "Explanation": "The release of ChatGPT and Codex by OpenAI has further demonstrated the power of AI models and the need for research on AI fairness, as more AI applications are being developed and used in our lives."}, {"Category": "Data Source", "Citation": "GitHub", "Explanation": "The collaboration between GitHub and OpenAI to develop the code completion tool Copilot is a data source for research on the potential risks of code generation models, as the tool is used by a large number of users."}, {"Category": "Methodological Basis", "Citation": "Codex", "Explanation": "The use of Codex in the development of the code completion tool Copilot by GitHub and OpenAI provides a methodological basis for research on the potential risks of code generation models."}, {"Category": "Supporting Evidence", "Citation": "[19]", "Explanation": "The cited work by [19] provides a new paradigm for constructing prompts in code generation models, which the citing paper adopts to successfully elicit social biases in generated code."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work introduces the CodeGen model, which the citing paper adopts in their experiments to study social biases in code generation."}, {"Category": "Supporting Evidence", "Citation": "[36,53]", "Explanation": "The cited works provide a clear definition of the scope of the research on fairness and social bias, which is essential for understanding the context of the citing paper."}, {"Category": "Data Source", "Citation": "Table 1", "Explanation": "The table provides a summary of common demographic dimensions and the most common pair of demographics for each dimension, which serves as a data source for the study of social biases in code."}, {"Category": "Methodological Basis", "Citation": "Appendix", "Explanation": "The appendix contains a list of valid demographics appearing in generated code, which the citing paper uses to analyze the discrimination against different demographics in code generation models."}, {"Category": "Supporting Evidence", "Citation": "[28]", "Explanation": "The cited work, RoBERTa, is used as a pre-trained language model in the citing paper to elicit negative modifiers for the study of AI fairness."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work provides a clean positive sentiment word list that the citing paper uses to randomly select words for positive modifiers in the study of AI fairness."}, {"Category": "Extension or Continuation", "Citation": "[45]", "Explanation": "The cited work is used to define the concept of bias direction between two demographics, which the citing paper further explores in the study of AI fairness."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work, BERT-Base, is used as a basis for the code bias classifier in the citing paper, as it achieves the highest classification accuracy in measuring social bias in generated code."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work provides a pass@k metric to evaluate the quality of generated code, which the citing paper adopts in their research to measure the quality of code 
generation."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work provides a method for measuring the quality of generated code, which the citing paper adopts to evaluate the quality of the code generated in their study."}, {"Category": "Methodological Basis", "Citation": "[42]", "Explanation": "The cited work provides a new perspective on the biases in code generation models, which the citing paper builds upon to further explore the issue of social biases in code."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work provides the concept of temperature t as a hyper-parameter for re-calibrating the logits distribution in the code generation process."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work introduces the concept of top-p as a hyper-parameter for controlling the output distribution in the code generation process."}, {"Category": "Extension or Continuation", "Explanation": "The citing paper extends the research on the effects of hyper-parameters on social biases in code generation by analyzing the variation trends of CBS with different values of temperature t and top-p."}, {"Category": "Methodological Basis", "Citation": "[12,11,9,49,13,38,8,5,4,6,48,50,47,58,10,57,51,52,31,27,33,34,14,55]", "Explanation": "The cited works are mentioned in the context of various AI applications that have permeated our lives, indicating that the citing paper builds upon these works to discuss the research on AI Ethics and AI Fairness."}, {"Category": "Extension or Continuation", "Citation": "[30,37]", "Explanation": "The cited works on AI Ethics have attracted more attention, and the citing paper extends the research in this area by focusing on the important aspect of AI Fairness."}, {"Category": "Extension or Continuation", "Citation": "[20,24,39,40]", "Explanation": "The cited works have studied AI Fairness from different perspectives, and the citing paper further explores this topic by discussing the existence of annotator group bias in real-world crowdsourcing datasets and measuring hierarchical regional bias in pre-trained language models."}, {"Category": "Extension or Continuation", "Citation": "[32]", "Explanation": "The cited work on the existence of annotator group bias in real-world crowdsourcing datasets is discussed in the context of studying the fairness problem in text classification tasks, indicating that the citing paper extends the research in this area."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work on measuring hierarchical regional bias in pre-trained language models is mentioned in the context of exploring the fairness problem in text classification tasks, showing that the citing paper extends the research in this area."}, {"Category": "Extension or Continuation", "Citation": "[3,25]", "Explanation": "The cited works on detecting and mitigating social biases in word embeddings and hidden representations are discussed in the context of exploring the fairness problem in text classification tasks, indicating that the citing paper extends the research in this area."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work on quantifying social biases in downstream tasks is mentioned in the context of exploring the fairness problem in text classification tasks, showing that the citing paper extends the research in this area."}, {"Category": "Extension or Continuation", "Citation": 
"[18,29,17]", "Explanation": "The cited works on the fairness problem in text classification tasks are discussed in the context of exploring the fairness problem in text classification tasks, indicating that the citing paper extends the research in this area."}, {"Category": "Extension or Continuation", "Citation": "[46]", "Explanation": "The cited work on the fairness problem in machine translation is mentioned in the context of exploring the fairness problem in generation tasks, showing that the citing paper extends the research in this area."}, {"Category": "Extension or Continuation", "Citation": "[35]", "Explanation": "The cited work on the fairness problem in story generation is mentioned in the context of exploring the fairness problem in generation tasks, indicating that the citing paper extends the research in this area."}, {"Category": "Extension or Continuation", "Citation": "[43]", "Explanation": "The cited work on the fairness problem in question answering is mentioned in the context of exploring the fairness problem in generation tasks, showing that the citing paper extends the research in this area."}, {"Category": "Methodological Basis", "Citation": "[42]", "Explanation": "The cited work by [42] provides a set of templates for eliciting negative modifiers from pre-trained language models, which the citing paper adopts in their research to examine the degree of toxicity toward specific demographics in PLMs."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work provides a free and open sentiment word list that the citing paper utilizes in constructing code prompts to elicit social biases in pre-trained code generation models."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b14", "b22", "b28", "b29", "b25", "b30", "b23", "b10", "b17", "b8", "b26" ], "table_ref": [], "text": "Most of the languages spoken in the world are endangered to one degree or another. The fact of being endangered sets some limitations on how modern NLP research can be done with such languages given that many endangered languages do not have vast textual resources available online, and even with the resources that are available, there is a question about the quality of the data resulting from a variety of factors such as fluency of the author, soundness of spelling and, on the lowest level, inconsistencies in character encoding (see Hämäläinen 2021).\nThis paper focuses on the following Uralic languages: Erzya (myv), Moksha (mdf), Komi-Zyrian (kpv) and Udmurt (udm). Unesco classifies these languages as definitely endangered (Moseley, 2010). In terms of NLP, these languages have FSTs (Rueter et al., 2020(Rueter et al., , 2021)), Universal Dependencies Treebanks (Partanen et al., 2018;Rueter and Tyers, 2018) (excluding Udmurt) and constraint grammars available in Giella repositories (Moshagen et al., 2014). For some of the languages, there have also been efforts in employing neural models in disambiguation (Ens et al., 2019) and morphological tasks (Hämäläinen et al., 2021). Out of these languages, only Erzya has several neural based models available such as machine translation models (Dale, 2022), a wav2vec model and a Stanza model (Qi et al., 2020).\nIn this paper, we present a method for translating word embeddings models from larger languages into the endangered languages in question. Furthermore, we fine-tune the models with language specific text data, align them and show results in a sentiment analysis task where no training data is provided in any of the endangered languages. We have made our data and models publicly available on Zenodo1 ." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b37", "b20", "b35", "b16", "b9", "b19", "b36", "b6" ], "table_ref": [], "text": "Apart from the work described earlier in the context of the endangered languages in question, there has been a lot of previous work on multilingual NLP where a model is trained in one language to sentence classification and then applied in the context of other languages. In this section, we will describe some of those approaches together with sentiment analysis approaches.\nA recent paper demonstrates sentiment analysis on 100 languages (Yilmaz et al., 2021). The authors use RoBERTa-XLM to extract feature vectors. These are then used in training a bidirectional LSTM based classifier model. Another line of work (Liu and Chen, 2015) compares several different multilabel classification methods on the task of sentiment analysis showing that RAkEL (Tsoumakas et al., 2010) gave the best performance on raw token input. A recent paper (Hämäläinen et al., 2022) demonstrated promising results in French sentiment analysis on a model that was trained in English, Italian, Spanish and German. The approach relied on a multilingual BERT (Devlin et al., 2019). Öhman (2021) suggests that lexicon based approaches, while viable for endangered languages, are not particularly suitable for sentiment analysis.\nIn the context of cross-lingual NLP, there is work on POS tagging. For instance, Kim et al. 2017 propose a new model that does not require parallel corpora or other resources. 
The model uses a common BLSTM for knowledge transfer and another BLSTM for language-specific representations. It is trained using language-adversarial training and bidirectional language modeling as auxiliary objectives to capture both languagegeneral and language-specific information.\nAnother line of work by Xu et al. 2018 focuses on cross-lingual transfer of word embeddings, which aims to create mappings between words in different languages by learning transformation functions over corresponding word embedding spaces. The proposed algorithm simultaneously optimizes transformation functions in both directions by using distributional matching and minimizing back-translation losses. This approach uses a neural network implementation to calculate the Sinkhorn distance, a distributional similarity measure, and optimize objectives through backpropagation.\nFor machine translation Chen et al. 2022 demonstrate the importance of both multilingual pretraining and fine-tuning for effective crosslingual transfer in zero-shot translation using a neural machine translation (NMT) model. The paper presents SixT+, a many-to-English NMT model that supports 100 source languages but is trained on a parallel dataset in only six languages. SixT+ initializes the decoder embedding and full encoder with XLM-R large (Conneau et al., 2020) and trains encoder and decoder layers using a two-stage training strategy." }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b18", "b12", "b13" ], "table_ref": [ "tab_0" ], "text": "We use two books, Suomi eilen ja nyt (Finland yesterday and now) by Häkkinen (1997) and Павлик Морозов (Pavlik Morozov) by Gubarev (1948) both of which are available in Finnish, Erzya, Moksha, Komi-Zyrian and Udmurt. The sentences of the books have been aligned across all the languages at the Research Unit for Volgaic Languages in University of Turku. The size of the corpus for each language can be seen in Table 1. Out of the entire corpus, we annotate 35 negative sentences and 33 positive sentences for evaluation for Finnish. We use the alignment information to project this annotation for the rest of the languages as well and verify manually that the sentences express the same sentiment in each language. This forms our test corpus for sentiment analysis that consists of altogether 68 sentiment annotated sentences.\nFurthermore, we lemmatize all the texts using the FSTs provided in UralicNLP (Hämäläinen, 2019). The corpus is lemmatized because we intend to translate and align a lemmatized word embeddings model. This also makes the overall approach more robust given that covering the entire morphology of a language would require us to have much larger corpora." }, { "figure_ref": [], "heading": "Word embeddings", "publication_ref": [ "b15", "b33", "b11", "b34" ], "table_ref": [], "text": "Word embeddings capture the semantic and syntactic links between words by constructing vector representations of words.\nThese vectors can be utilized to measure the semantic similarity between words, find analogous concepts, cluster words (Hämäläinen and Alnajjar, 2019;Stekel et al., 2021) and more. In this work, we use English and Finnish as the big languages that facilitate aligning and classifying words and sentences for the endangered languages. English has an overnumerous amount of linguistic resources, whether as raw text or labeled data, while the endangered resources that we are working with have translation dictionaries for Finnish. 
For this reason, we use Finnish as the intermediate language that bridges these endangered languages with English resources.\nThe English model that we utilize is trained on the English Wikipedia dump of February 2017 and Gigaword 5th edition2 (Fares et al., 2017). For Finnish, we used recent word embeddings trained by Language Bank of Finland (2022). These embeddings have been trained on several Finnish newspapers. Both of these models have been trained on lemmatized text.\nThe English word vectors have a dimension size of 300, while the Finnish word vectors have a dimension size of 100. In order to make the dimension sizes of the two sets of embeddings compatible, dimensionality reduction is applied to the English embeddings using principal component analysis (PCA) (Tipping and Bishop, 1999). This process reduces the dimensionality of the English embeddings to 100, allowing them to be compared and analyzed alongside the Finnish embeddings." }, { "figure_ref": [], "heading": "Creation of embeddings", "publication_ref": [ "b1", "b2" ], "table_ref": [], "text": "We aim to create word embeddings for endangered languages, which currently lack pre-existing embeddings. We use dictionaries from GiellaLT3 , which we augment using graph-based methods to predict new translations through the Ve'rdd4 platform (Alnajjar et al., 2022(Alnajjar et al., , 2021)). We present the number of dictionary translations from each endangered language to Finnish that we obtained from the base dictionaries and predictions in Table 2. To create embeddings for the endangered languages, we adopt a method of cloning the Finnish embeddings and substituting the Finnish lemma with its corresponding translation in the endangered language. Where translations were absent, we omitted the word vector. The resulting embeddings consist of 7,908, 10,338, 7,535, and 9,505 word vectors for kpv, mdf, myv, and udm, respectively. The lower number of word coverage can be attributed to multi-word expressions present in the dictionaries but not the embeddings.\nIn the next step of our study, we fine-tuned the word embeddings for both Finnish and the endan-gered languages by using two books as additional data sources. This involved expanding the vocabulary of each embeddings model whenever a new word was encountered in the data. We also adjusted the embeddings weights based on the cooccurrences of words in the text, using a window size of 5 and a minimum count of 5 for a word to be considered in the vocabulary. After completing this process, the vocabulary size of the endangered language embeddings were 10, 396, 11,877, 9,030, and 11,080, in the same order as mentioned above." }, { "figure_ref": [], "heading": "Alignment of embeddings", "publication_ref": [ "b0", "b7", "b7" ], "table_ref": [], "text": "Our goal here is to align the Finnish word embeddings with the English ones, followed by aligning the embeddings of endangered languages to the Finnish embeddings, in a supervised manner. This was achieved by creating alignment dictionaries and aligning the embedding spaces together similarly to Alnajjar (2021).\nTo align Finnish embeddings with English, we used the Fin-Eng dictionary by Ylönen (2022), which is based on the March 2023 English Wiktionary dump. We also used the Finnish-English dictionaries provided by MUSE (Conneau et al., 2017). Regarding the endangered languages, we use the XML dictionaries to align them with Finnish. 
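As a concrete companion to the embedding creation and alignment steps described above, the sketch below illustrates the two mechanical operations involved: reducing the English vectors to 100 dimensions with PCA and cloning the Finnish vectors onto endangered-language lemmas through a translation dictionary. The use of gensim's KeyedVectors and scikit-learn's PCA, the dictionary format, and the function names are assumptions made for illustration only; they are not the implementation used for the experiments in the paper.

```python
# Illustrative sketch (not the paper's code): PCA reduction of the English
# vectors and dictionary-based cloning of the Finnish vectors into an
# endangered language.
import numpy as np
from sklearn.decomposition import PCA
from gensim.models import KeyedVectors


def reduce_dimensions(vectors: np.ndarray, dim: int = 100) -> np.ndarray:
    """Project word vectors (one per row) down to `dim` dimensions with PCA."""
    return PCA(n_components=dim).fit_transform(vectors)


def clone_embeddings(fin_vectors: KeyedVectors, fin_to_target: dict) -> KeyedVectors:
    """Copy each Finnish lemma vector onto its dictionary translation.

    Entries whose Finnish side has no vector (for instance multi-word
    expressions) are skipped, mirroring the omission described above.
    """
    keys, weights, seen = [], [], set()
    for fin_lemma, target_lemma in fin_to_target.items():
        if fin_lemma in fin_vectors.key_to_index and target_lemma not in seen:
            seen.add(target_lemma)
            keys.append(target_lemma)
            weights.append(fin_vectors[fin_lemma])
    cloned = KeyedVectors(vector_size=fin_vectors.vector_size)
    cloned.add_vectors(keys, weights)
    return cloned
```

The subsequent fine-tuning of the cloned vectors on the book corpus and the supervised alignment across languages are not shown here.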
We set aside 20% of the Wiktionary and XML data for testing the alignments.
One thing that we have noticed is the lack of the words "no" and "not" in the English embeddings due to stopword removal. To address this, we appended a translation from "not" to "nt" in the Finnish-English alignment data used in the training stage. Whenever the text contained these words, they were automatically mapped to "nt" in the following steps of our research.
We followed the approach described by MUSE (Conneau et al., 2017) to align all the embeddings, with 20 iterations of refinement to align Finnish with English and 5 iterations to align all the other languages to Finnish." }, { "figure_ref": [], "heading": "Sentence embeddings", "publication_ref": [ "b3", "b27" ], "table_ref": [], "text": "Word embeddings represent the meaning of a single word, whereas sentence embeddings represent the meaning of an entire sentence or document. Sentence embeddings capture more of the context and excel at tasks that call for comprehension of the meaning of a whole text, such as sentiment analysis. Hence, we build sentence embeddings for English that are based on the English word embeddings.
The sentence embeddings are created by averaging the word embeddings of a given sentence and subsequently feeding the result to two fully-connected feed-forward layers, thereby constructing a Deep Averaging Network (DAN). The sentence embeddings are trained on the STS Benchmark (Cer et al., 2017) using SBERT, a method for sentence embeddings proposed by Reimers and Gurevych (2019)." }, { "figure_ref": [], "heading": "Sentiment analysis", "publication_ref": [ "b31", "b21", "b32" ], "table_ref": [], "text": "We create a sentiment classifier that takes in the sentence embeddings and predicts a sentiment polarity label. For training the sentiment analysis model, we use the Stanford Sentiment Treebank (Socher et al., 2013), the Amazon Reviews Dataset (McAuley and Leskovec, 2013) and the Yelp Dataset (https://www.yelp.com/dataset). These datasets are available in English and we use their sentiment annotations (positive-negative) to train our model.
The sentiment classifier is constructed as a three-layer fully-connected network, wherein the hidden layers are comprised of 300 neurons each. In order to mitigate overfitting, a dropout operation (Srivastava et al., 2014) is performed prior to the final classification layer. The model consists of 121,202 trainable parameters in total, and is trained over the course of three epochs." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "In this section, we show the results of the sentiment classification model on the in-domain, English-language train splits of the sentiment corpora we used to train the model. Furthermore, we show the results of the sentiment classification model when applied on our own annotated data for the 4 endangered Uralic languages in question and Finnish. These results can be seen in Table 3.
All in all, our model performs relatively well. The accuracy for Finnish is almost as high as it is for English despite not having any Finnish sentiment annotated training data. This means that our approach can achieve rather good results when there is a lot of translation data available between the two languages. The results drop for the endangered languages, but we do find the 69% accuracy for Erzya to be quite formidable; however, the 56% result for Komi-Zyrian leaves some room for improvement."
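As a concrete companion to the Sentence embeddings and Sentiment analysis sections above, the following sketch shows the two model components in PyTorch. The framework, the activation functions, the hidden size of the DAN encoder and the 0.5 dropout rate are assumptions made for illustration. The 100-dimensional sentence embedding is likewise an assumption, chosen because, with two hidden layers of 300 units and two output classes, the classifier then has 30,300 + 90,300 + 602 = 121,202 trainable parameters, matching the figure reported above.

```python
# Sketch of the sentence encoder and sentiment classifier described above;
# hyper-parameters flagged as assumptions in the text are not from the paper.
import torch
from torch import nn


class DANSentenceEncoder(nn.Module):
    """Deep Averaging Network: mean of the word vectors of a sentence,
    followed by two fully-connected feed-forward layers."""

    def __init__(self, word_dim: int = 100, sent_dim: int = 100):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(word_dim, sent_dim),
            nn.Tanh(),
            nn.Linear(sent_dim, sent_dim),
        )

    def forward(self, word_vectors: torch.Tensor) -> torch.Tensor:
        # word_vectors has shape (sentence_length, word_dim)
        return self.ff(word_vectors.mean(dim=0))


class SentimentClassifier(nn.Module):
    """Three-layer fully-connected classifier with dropout before the output layer."""

    def __init__(self, sent_dim: int = 100, hidden_dim: int = 300, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(p=0.5),
            nn.Linear(hidden_dim, n_classes),
        )

    def forward(self, sentence_embedding: torch.Tensor) -> torch.Tensor:
        return self.net(sentence_embedding)
```

The training of the encoder on the STS Benchmark with SBERT and the three-epoch training of the classifier are not reproduced here.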
}, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we outlined a method for translating word embeddings from a majority language, Finnish, to four minority languages -Erzya, Moksha, Udmurt, and Komi-Zyrian. The word embeddings were aligned and a new neural network model was introduced. This model was trained using English data to carry out sentiment analysis and was then applied to data in the endangered languages using the aligned word embeddings.\nWe built an aligned sentiment analysis corpus for the four endangered languages and Finnish and used it to test our model. The results were promising and our study demonstrated that even the latest neural models can be utilized with endangered languages if a dictionary between the endangered language and a larger language is available." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This research is supported by FIN-CLARIN and Academy of Finland (grant 345610 Kielivarojen ja kieliteknologian tutkimusinfrastruktuuri)." } ]
2023-05-24
10.18653/v1/S17-2001
[ { "authors": "Khalid Alnajjar", "journal": "", "ref_id": "b0", "title": "When word embeddings become endangered", "year": "2021" }, { "authors": "Khalid Alnajjar; Mika Hämäläinen; Niko Tapio Partanen; Jack Rueter", "journal": "", "ref_id": "b1", "title": "Using graph-based methods to augment online dictionaries of endangered languages", "year": "2022" }, { "authors": "Khalid Alnajjar; Jack Rueter; Niko Partanen; Mika Hämäläinen", "journal": "Folia Uralica Debreceniensia", "ref_id": "b2", "title": "Enhancing the erzya-moksha dictionary automatically with link prediction", "year": "2021" }, { "authors": "Daniel Cer; Mona Diab; Eneko Agirre; Iñigo Lopez-Gazpio; Lucia Specia", "journal": "", "ref_id": "b3", "title": "", "year": "2017" }, { "authors": "", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "year": "" }, { "authors": "Guanhua Chen; Shuming Ma; Yun Chen; Dongdong Zhang; Jia Pan; Wenping Wang; Furu Wei", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Towards making the most of cross-lingual transfer for zero-shot neural machine translation", "year": "2022" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Édouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b6", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Alexis Conneau; Guillaume Lample; Marc'aurelio Ranzato; Ludovic Denoyer; Hervé Jégou", "journal": "", "ref_id": "b7", "title": "Word translation without parallel data", "year": "2017" }, { "authors": "David Dale", "journal": "", "ref_id": "b8", "title": "The first neural machine translation system for the erzya language", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b9", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Jeff Ens; Mika Hämäläinen; Jack Rueter; Philippe Pasquier", "journal": "Linköping University Electronic Press", "ref_id": "b10", "title": "Morphosyntactic disambiguation in an endangered language setting", "year": "2019" }, { "authors": "Murhaf Fares; Andrey Kutuzov; Stephan Oepen; Erik Velldal", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Word vectors, reuse, and replicability: Towards a community reposito", "year": "2017" }, { "authors": "Vitali Gubarev", "journal": "", "ref_id": "b12", "title": "Павлик Морозов. Книгам лыкшы мары изд-во", "year": "1948" }, { "authors": "Mika Hämäläinen", "journal": "Journal of open source software", "ref_id": "b13", "title": "Uralicnlp: An nlp library for uralic languages", "year": "2019" }, { "authors": "Mika Hämäläinen", "journal": "", "ref_id": "b14", "title": "Endangered languages are not low-resourced! Multilingual Facilitation", "year": "2021" }, { "authors": "Mika Hämäläinen; Khalid Alnajjar", "journal": "", "ref_id": "b15", "title": "Let's face it. 
finnish poetry generation with aesthetics and framing", "year": "2019" }, { "authors": "Mika Hämäläinen; Khalid Alnajjar; Thierry Poibeau", "journal": "", "ref_id": "b16", "title": "Video games as a corpus: Sentiment analysis using fallout new vegas dialog", "year": "2022" }, { "authors": "Mika Hämäläinen; Niko Partanen; Jack Rueter; Khalid Alnajjar", "journal": "", "ref_id": "b17", "title": "Neural morphology dataset and models for multiple languages, from the large to the endangered", "year": "2021" }, { "authors": "Kaisa Häkkinen", "journal": "", "ref_id": "b18", "title": "Suomi eilen ja tänään", "year": "1997" }, { "authors": "Joo-Kyung Kim; Young-Bum Kim; Ruhi Sarikaya; Eric Fosler-Lussier", "journal": "Language Bank of Finland", "ref_id": "b19", "title": "Cross-lingual transfer learning for POS tagging without cross-lingual resources", "year": "2017" }, { "authors": "Monica Shuhua; Jiun-Hung Liu; Chen", "journal": "Expert Systems with Applications", "ref_id": "b20", "title": "A multi-label classification based approach for sentiment classification", "year": "2015" }, { "authors": "Julian Mcauley; Jure Leskovec", "journal": "", "ref_id": "b21", "title": "Hidden factors and hidden topics: understanding rating dimensions with review text", "year": "2013" }, { "authors": "Christopher Moseley", "journal": "UNESCO Publishing", "ref_id": "b22", "title": "Atlas of the World ′ s Languages in Danger, 3rd edition", "year": "2010" }, { "authors": "Jack Sjur Moshagen; Tommi Rueter; Trond Pirinen; Francis M Trosterud; Tyers", "journal": "", "ref_id": "b23", "title": "Open-source infrastructures for collaborative work on underresourced languages. Collaboration and Computing for Under-Resourced Languages in the Linked Open Data Era", "year": "2014" }, { "authors": "Emily Öhman", "journal": "", "ref_id": "b24", "title": "The validity of lexicon-based sentiment analysis in interdisciplinary research", "year": "2021" }, { "authors": "Niko Partanen; Rogier Blokland; Kyungtae Lim; Thierry Poibeau; Michael Rießler", "journal": "", "ref_id": "b25", "title": "The first Komi-Zyrian universal dependencies treebanks", "year": "2018-11" }, { "authors": "Peng Qi; Yuhao Zhang; Yuhui Zhang; Jason Bolton; Christopher D Manning", "journal": "", "ref_id": "b26", "title": "Stanza: A python natural language processing toolkit for many human languages", "year": "2020" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Jack Rueter; Mika Hämäläinen; Niko Partanen", "journal": "", "ref_id": "b28", "title": "Open-source morphology for endangered mordvinic languages", "year": "2020" }, { "authors": "Jack Rueter; Niko Partanen; Mika Hämäläinen; Trond Trosterud", "journal": "", "ref_id": "b29", "title": "Overview of open-source morphology development for the komi-zyrian language: Past and future", "year": "2021" }, { "authors": "Jack Michael; Rueter ; Francis M Tyers", "journal": "", "ref_id": "b30", "title": "Towards an open-source universal-dependency treebank for erzya", "year": "2018" }, { "authors": "Richard Socher; Alex Perelygin; Jean Wu; Jason Chuang; Christopher D Manning; Andrew Y Ng; Christopher Potts", "journal": "", "ref_id": "b31", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "year": "2013" }, { "authors": "Nitish Srivastava; Geoffrey Hinton; Alex Krizhevsky; Ilya Sutskever; Ruslan 
Salakhutdinov", "journal": "Journal of Machine Learning Research", "ref_id": "b32", "title": "Dropout: A simple way to prevent neural networks from overfitting", "year": "2014" }, { "authors": "Moshe Stekel; Amos Azaria; Shai Gordin", "journal": "", "ref_id": "b33", "title": "Word sense induction with attentive context clustering", "year": "2021" }, { "authors": "Michael E Tipping; Christopher M Bishop", "journal": "Neural Computation", "ref_id": "b34", "title": "Mixtures of Probabilistic Principal Component Analyzers", "year": "1999" }, { "authors": "Grigorios Tsoumakas; Ioannis Katakis; Ioannis Vlahavas", "journal": "IEEE transactions on knowledge and data engineering", "ref_id": "b35", "title": "Random k-labelsets for multilabel classification", "year": "2010" }, { "authors": "Ruochen Xu; Yiming Yang; Naoki Otani; Yuexin Wu", "journal": "", "ref_id": "b36", "title": "Unsupervised cross-lingual transfer of word embedding spaces", "year": "2018" }, { "authors": "E Selim F Yilmaz; Aykut Batuhan Kaynak; Hamdi Koç; Suleyman Dibeklioglu; Kozat Serdar", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b37", "title": "Multi-label sentiment analysis on 100 languages with dynamic weighting for label imbalance", "year": "2021" }, { "authors": "Tatu Ylönen", "journal": "", "ref_id": "b38", "title": "Wiktextract: Wiktionary as machine-readable structured data", "year": "2022" } ]
[]
Sentiment Analysis Using Aligned Word Embeddings for Uralic Languages
In this paper, we present an approach for translating word embeddings from a majority language into 4 minority languages: Erzya, Moksha, Udmurt and Komi-Zyrian. Furthermore, we align these word embeddings and present a novel neural network model that is trained on English data to conduct sentiment analysis and then applied on endangered language data through the aligned word embeddings. To test our model, we annotated a small sentiment analysis corpus for the 4 endangered languages and Finnish. Our method reached at least 56% accuracy for each endangered language. The models and the sentiment corpus will be released together with this paper. Our research shows that state-of-the-art neural models can be used with endangered languages with the only requirement being a dictionary between the endangered language and a majority language.
Khalid Alnajjar; Mika Hämäläinen; Jack Rueter
[ { "figure_caption": "The corpus size for each language", "figure_data": "tokens sentencesFinnish43k3.1kErzya50k3.6kMoksha51k3.4kKomi-Zyrian 50k3.3kUdmurt53k3.6k", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Number of translations and predictions from the source languages to Finnish", "figure_data": "Translations Predictions Totalkpv109831442125404mdf36235390340138myv 18056501823074udm 36502696643468", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Precision, recall, f1-score and accuracy for each language and label tence embeddings for English that are based on the English word embeddings.", "figure_data": "Language Label Precision Recall F1-Score Accuracyengneg pos0.77 0.750.76 0.760.76 0.760.76finneg pos0.77 0.730.75 0.750.76 0.740.75kpvneg pos0.57 0.550.57 0.550.57 0.550.56mdfneg pos0.63 0.640.65 0.620.64 0.630.63myvneg pos0.71 0.670.69 0.690.70 0.680.69udmneg pos0.69 0.580.63 0.630.66 0.600.63", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
[{"Category": "Data Source", "Citation": "(Rueter et al., 2020)", "Explanation": "The cited work by Rueter et al. provides the FSTs for the four Uralic languages mentioned in the citing paper, which the citing paper uses as a data source for their research."}, {"Category": "Data Source", "Citation": "(Rueter and Tyers, 2018)", "Explanation": "The cited work by Rueter and Tyers provides the Universal Dependencies Treebanks for the three Uralic languages (excluding Udmurt), which the citing paper uses as a data source for their research."}, {"Category": "Data Source", "Citation": "(Moshagen et al., 2014)", "Explanation": "The cited work by Moshagen et al. provides the constraint grammars available in Giella repositories for the four Uralic languages mentioned in the citing paper, which the citing paper uses as a data source for their research."}, {"Category": "Data Source", "Citation": "(Ens et al., 2019)", "Explanation": "The cited work provides a disambiguation method for some languages, which the citing paper utilizes in their research on neural models."}, {"Category": "Data Source", "Citation": "(H\u00e4m\u00e4l\u00e4inen et al., 2021)", "Explanation": "The cited work presents a morphological task for some languages, which the citing paper may have used in their research on neural models."}, {"Category": "Extension or Continuation", "Citation": "(Dale, 2022)", "Explanation": "The cited work on machine translation models for Erzya language is extended in the citing paper to include more information and results."}, {"Category": "Extension or Continuation", "Citation": "(Qi et al., 2020)", "Explanation": "The Stanza model for Erzya language is further discussed in the citing paper to provide additional insights and data."}, {"Category": "Methodological Basis", "Citation": "(Yilmaz et al., 2021)", "Explanation": "The cited work on sentiment analysis in 100 languages provides a method of using RoBERTa-XLM to extract feature vectors for training a classifier model."}, {"Category": "Extension or Continuation", "Citation": "(Liu and Chen, 2015)", "Explanation": "The cited work compares different multilabel classification methods for sentiment analysis, providing a basis for the citing paper to build upon in exploring new methods in the context of sentiment analysis."}, {"Category": "Extension or Continuation", "Citation": "(H\u00e4m\u00e4l\u00e4inen et al., 2022)", "Explanation": "The cited work on French sentiment analysis demonstrates promising results in a multilingual model trained in English, Italian, Spanish, and German, providing a basis for the citing paper to extend the research in a similar direction."}, {"Category": "Methodological Basis", "Citation": "(Kim et al., 2017)", "Explanation": "The cited work proposes a new model for POS tagging that does not require parallel corpora or other resources, which the citing paper adopts in their research on cross-lingual NLP."}, {"Category": "Extension or Continuation", "Citation": "(Xu et al., 2018)", "Explanation": "The cited work focuses on cross-lingual transfer of word embeddings, which the citing paper extends by creating mappings between words in different languages through learning transformation functions over word embedding spaces."}, {"Category": "Data Source", "Citation": "(Chen et al., 2019)", "Explanation": "The cited work is used to provide a dataset for machine translation research, which the citing paper utilizes in their study on cross-lingual transfer of word embeddings."}, {"Category": "Methodological Basis", "Citation": 
"(Conneau et al., 2020)", "Explanation": "The cited work, XLM-R large, is used as a pre-trained model to initialize the decoder embedding and full encoder in the NMT model presented in the citing paper."}, {"Category": "Data Source", "Citation": "(Fares et al., 2017)", "Explanation": "The cited work provides the English Wikipedia dump of February 2017 and Gigaword 5th edition as the data source for training the English model used in the citing paper."}, {"Category": "Data Source", "Citation": "(Language Bank of Finland, 2022)", "Explanation": "The cited work provides the word embeddings trained on Finnish newspapers as the data source for training the Finnish model used in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Alnajjar et al., 2021)", "Explanation": "The cited work by Alnajjar et al. provides the base dictionaries and predictions used in the study to create word embeddings for endangered languages."}, {"Category": "Data Source", "Citation": "(GiellaLT3)", "Explanation": "The GiellaLT3 data is used as a source of dictionary translations to augment the base dictionaries and create word embeddings for endangered languages."}, {"Category": "Extension or Continuation", "Citation": "(Alnajjar et al., 2022)", "Explanation": "The cited work by Alnajjar et al. is extended in the study to predict new translations and create word embeddings for endangered languages using the Ve'rdd platform."}, {"Category": "Data Source", "Citation": "(Yl\u00f6nen, 2022)", "Explanation": "The cited work provides the Fin-Eng dictionary used in the research to align Finnish embeddings with English."}, {"Category": "Data Source", "Citation": "(Conneau et al., 2017)", "Explanation": "The cited work provides the Finnish-English dictionaries used in the research to align the embeddings of endangered languages with Finnish."}, {"Category": "Extension or Continuation", "Citation": "(Alnajjar, 2021)", "Explanation": "The cited work served as a basis for the research conducted in the citing paper, as the authors follow a similar approach to align the embedding spaces of the languages."}, {"Category": "Methodological Basis", "Citation": "(Conneau et al., 2017)", "Explanation": "The cited work by Conneau et al. 
(2017) provides a methodological approach for aligning embeddings, which the citing paper follows in the process of aligning Finnish with English and all other languages to Finnish."}, {"Category": "Methodological Basis", "Citation": "(Cer et al., 2017)", "Explanation": "The cited work provides the STS Benchmark dataset, which the citing paper uses to train the sentence embeddings and evaluate the performance of the proposed method."}, {"Category": "Supporting Evidence", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work introduces SBERT, a method for sentence embeddings that the citing paper adopts to build the sentence embeddings for the study conducted in the paper."}, {"Category": "Data Source", "Citation": "(Socher et al., 2013)", "Explanation": "The cited work provides the Stanford Sentiment Treebank dataset, which the citing paper uses to train the sentiment analysis model."}, {"Category": "Data Source", "Citation": "(McAuley and Leskovec, 2013)", "Explanation": "The cited work provides the Amazon Reviews Dataset, which the citing paper uses to train the sentiment analysis model."}, {"Category": "Data Source", "Citation": "(Yelp Dataset)", "Explanation": "The cited work provides the Yelp Dataset, which the citing paper uses to train the sentiment analysis model."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b28", "b2", "b3", "b8", "b9", "b6", "b7", "b4", "b0", "b28", "b1", "b11", "b5" ], "table_ref": [], "text": "Feedback graphs [29] provide an elegant interpolation between two popular online learning models: multiarmed bandits and prediction with expert advice. When learning with an undirected feedback graph G over K actions, the online algorithm observes not only the loss of the action chosen in each round, but also the loss of the actions that are adjacent to it in the graph. Two important special cases of this setting are: prediction with expert advice (when G is a clique) and K-armed bandits (when G has no edges). When losses are generated adversarially, the regret in the feedback graph setting with strong observability has been shown to scale with the independence number α of G. Intuitively, denser graphs, which correspond to smaller independence numbers, provide more feedback to the learner, thus enabling a better control on regret. More specifically, the best known upper and lower bounds on the regret after T rounds are O √ αT log K and Ω √ αT [3,4]. It has been known for three decades that this upper bound is tight for α = 1 (the experts case, [9,10]). When α = K (the bandits case), the lower bound Ω √ KT -which has also been known for nearly three decades [7,8]-was matched by a corresponding upper bound O √ KT only in 2009 [5]. These results show that in feedback graphs, the logarithmic factor √ log K is necessary (at least) for the α = 1 case, while it must vanish from the minimax regret as α grows from 1 to K, but the current bounds fail to capture this fact. In this work, we prove new upper and lower regret bounds that for the first time account for this vanishing logarithmic factor.\nTo prove our new upper bound, we use the standard FTRL algorithm run with the q-Tsallis entropy regularizer (q-FTRL for short). It is well-known [1] that for q = 1 2 this algorithm (run with appropriate loss estimates) achieves regret O √ KT when α = K (bandits case), while for q → 1 - the same algorithm (without loss estimates) recovers the bound O √ T log K when α = 1 (experts case). When G contains all self-loops, we show in Theorem 1 that, if q is chosen as a certain function q(α, K), then q(α, K)-FTRL, run with standard importance-weighted loss estimates, achieves regret O αT (1 + log(K/α)) . This is a strict improvement over the previous bound, and matches the lower bounds for bandits and experts while interpolating the intermediate cases. This interpolation is reflected by our choice of q, which goes from 1 2 to 1 as α ranges from 1 to K. The main technical hurdle in proving this result is an extension to arbitrary values of q ∈ 1 2 , 1 of a standard resultsee, e.g., [29,Lemma 3]-that bounds in terms of α the variance term in the regret of q-FTRL. In Theorem 2, using a modified loss estimate, this result is extended to any strongly observable undirected graph [2], a class of feedback graphs in which some of the actions do not reveal their loss when played. In Theorem 3, we show via a doubling trick that our new upper bound can also be obtained (up to constant factors) without the need of knowing (or computing) α. As the resulting algorithm is oblivious to α, our analysis also applies to arbitrary sequences of graphs G t , where K is constant but the independence number α t of G t can change over time, and the algorithm observes G t only after choosing an action (the so-called uninformed case). 
In this setting, the analysis of the doubling trick is complicated by the non-trivial dependence of the regret on the sequence of α_t.
We also improve on the Ω(√(αT)) lower bound by proving a new Ω(√(αT log_α K)) lower bound for all α > 1. This is the first result showing the necessity, outside the experts case, of a logarithmic factor in the minimax regret for all α < K. Our proof uses a stochastic adversary generating both losses and feedback graphs via i.i.d. draws from a joint distribution. This sequence of losses and feedback graphs can be used to define a hard instance of the multi-task bandits problem, a variant of the combinatorial bandits framework [12]. We then prove our result by adapting known lower bounding techniques for multi-task bandits [6]. Note that for values of α bounded away from 2 and K, the logarithmic factor log_α K in the lower bound is smaller than the corresponding factor 1 + log(K/α) in the upper bound. Closing this gap remains an open problem." }, { "figure_ref": [], "heading": "Additional related work", "publication_ref": [ "b34", "b30", "b35", "b22", "b17", "b21", "b31", "b25", "b32", "b16", "b33", "b16" ], "table_ref": [], "text": "Several previous works have used the q-Tsallis regularizer with q tuned to specific values other than 1/2 and 1. For example, in [35,Section 4], q is chosen as a function of K to prove a regret bound of O(√(αT (log K)^3)) for any strongly observable directed feedback graph, which shaves off a log T factor compared to previous works. This bound is worse than the corresponding bounds for undirected graphs because the directed setting is harder. Specific choices of q have been considered to improve the regret in settings of online learning with standard bandit feedback. For example, the choice q = 2/3 was used in [31] to improve the analysis of regret in bandits with decoupled exploration and exploitation. Regret bounds for arbitrary choices of q are derived in [36,23] for a best-of-both-worlds analysis of bandits, though q = 1/2 remains the optimal choice. The 1/2-Tsallis entropy and the Shannon entropy (q = 1) regularizers have been combined before in different ways to obtain best-of-both-worlds guarantees for the graph feedback problem [18,22]. The idea of using values of q ∈ (1/2, 1) for feedback graphs is quite natural and has been brought up before, e.g., in [32], but achieving an improved dependence on the graph structure by picking a suitable value of q has not been, to the best of our knowledge, successfully pursued before. On the other hand, an approach based on a similar use of the q-Tsallis regularizer has been employed by [26] for the problem of multiarmed bandits with sparse losses to achieve an O(√(sT ln(K/s))) regret bound, where s is the maximum number of nonzero losses at any round.
Our lower bound is reminiscent of the Ω(√(KT log_K N)) lower bound proved in [33] for the problem of bandits with expert advice (with N ≥ K being the number of experts); see also [17] and [34].
In that problem, at each time step, experts suggest distributions over actions to the learner, whose regret is computed against the best expert in hindsight. Although the two settings are different, the variant of the multitask bandit problem that our lower bound construction simulates is the same as the one used in the proof of [17,Theorem 7]." }, { "figure_ref": [], "heading": "Problem Setting", "publication_ref": [ "b1" ], "table_ref": [], "text": "For any integer n ≥ 1, let [n] = {1, . . . , n}.
We consider the following game played over T rounds between a learner with action set V = [K] and the environment. At the beginning of the game, the environment secretly selects a sequence of losses (ℓ_t)_{t∈[T]}, where ℓ_t : V → [0, 1], and a sequence of undirected graphs (G_t)_{t∈[T]} over the set of actions V, that is, G_t = (V, E_t). At any time t, the learner selects an arm I_t (possibly at random), then pays loss ℓ_t(I_t) and observes the feedback graph G_t and all losses ℓ_t(i) of neighbouring actions i ∈ N_{G_t}(I_t), where N_{G_t}(i) = {j ∈ V : (i, j) ∈ E_t} (see Online Protocol 1). In this work, we only focus on strongly observable graphs [2]. An undirected graph G is strongly observable if for every i ∈ V, at least one of the following holds: i ∈ N_G(i), or i ∈ N_G(j) for all j ≠ i. The performance of the learner is measured by the regret

R_T = E[ ∑_{t=1}^{T} ℓ_t(I_t) ] − min_{i∈[K]} ∑_{t=1}^{T} ℓ_t(i),

where the expectation is over the learner's internal randomization.

Online Protocol 1: Online learning with feedback graphs
environment: (hidden) losses ℓ_t : V → [0, 1] and graphs G_t = (V, E_t), for all t = 1, . . . , T
for t = 1, . . . , T do
    The learner picks an action I_t ∈ V (possibly at random)
    The learner incurs loss ℓ_t(I_t)
    The learner observes the losses {(i, ℓ_t(i)) : i ∈ N_{G_t}(I_t)} and the graph G_t
end for

We denote by ∆_K the simplex {p ∈ [0, 1]^K : ‖p‖_1 = 1}. For any graph G, we define its independence number as the cardinality of the largest set of nodes such that no two nodes are neighbors, and denote it by α(G). For simplicity, we use N_t to denote the neighbourhood N_{G_t} in the graph G_t and we use α_t to denote the independence number α(G_t) of G_t at time t." }, { "figure_ref": [], "heading": "FTRL with Tsallis Entropy for Undirected Feedback Graphs", "publication_ref": [ "b29", "b28", "b2", "b1", "b3", "b15", "b18", "b21", "b24", "b31", "b34", "b4", "b34", "b27", "b34", "b5" ], "table_ref": [], "text": "As a building block, in this section, we focus on the case when all the feedback graphs G_1, . . . , G_T have the same independence number α_1 = · · · = α_T = α, whereas the general case is treated in the next section. For simplicity, we start with the assumption that all nodes have self-loops: (i, i) ∈ E_t for all i ∈ V and all t. We later lift this requirement and show that the regret guarantees that we provide can be extended to general strongly observable undirected feedback graphs, only at the cost of a constant multiplicative factor.
The algorithm we analyze is q-FTRL (described in Algorithm 1), which is an instance of the follow the regularized leader (FTRL) framework (see, e.g., [30,Chapter 7]) with the (negative) q-Tsallis entropy

ψ_q(x) = (1/(1−q)) (1 − ∑_{i∈V} x(i)^q)   for all x ∈ ∆_K,

as the regularizer, whose parameter q ∈ (0, 1) can be tuned according to our needs. Since we do not observe all the losses in a given round, the algorithm makes use of unbiased estimates for the losses. When all self-loops are present, we define the estimated losses in the following standard manner. Let I_t be the action picked at round t, drawn from the distribution p_t ∈ ∆_K maintained by the algorithm; the loss estimate for an action i ∈ V at round t is given by

ℓ̂_t(i) = (ℓ_t(i) / P_t(i)) · I{I_t ∈ N_t(i)},   (1)

where P_t(i) = P(I_t ∈ N_t(i)) = ∑_{j∈N_t(i)} p_t(j). This estimate is unbiased in the sense that E_t[ℓ̂_t(i)] = ℓ_t(i) for all t ∈ [T] and all i ∈ V, where we denote E_t[·] = E[· | I_1, . . .
, I t-1 ].\nAlgorithm 1 q-FTRL for undirected feedback graphs input: q ∈ (0, 1), η > 0 initialization: A key part of the standard regret analysis of q-FTRL (see, e.g., the proof of Lemma 3 in Appendix A) is handling the variance term, which, with the choice of estimator given in (1), takes the following form\np 1 (i) ← 1/K for all i = 1, . . .\nB t (q) = i∈V p t (i) 2-q P t (i) .(2)\nBy Hölder's inequality, this term can be immediately upper bounded by\nB t (q) ≤ i∈V p t (i) 1-q ≤ i∈V p t (i) 1-q i∈V 1 1/q q = K q ,\nwhile previous results on the regret analysis of multiarmed bandits with graph feedback [29,3] would give\nB t (q) ≤ i∈V p t (i) P t (i) ≤ α .\nHowever, the former result would only recover a O( √ KT ) regret bound (regardless of α) with the best choice of q = 1/2, which could be trivially achieved by ignoring side-observations of the losses, whereas the latter bound would only manage to achieve a O( √ αT ln K) regret bound, incurring the extra √ ln K factor for all values of α. Other results in the literature (e.g., see [2,4,16,19,22,25,32,35]) do not bring an improvement in this setting when bounding the B t (q) term and, hence, do not suffice for achieving the desired regret bound. The following lemma provides a novel and improved bound on quantities of the same form as B t (q) in terms of the independence number α t = α of the undirected graph G t .\nLemma 1. Let G = (V, E) be any undirected graph with |V | = K vertices and independence number α(G) = α. Let b ∈ [0, 1], p ∈ ∆ K and consider any nonempty subset U ⊆ {v ∈ V : v ∈ N G (v)}. Then, v∈U p(v) 1+b u∈NG(v) p(u) ≤ α 1-b .\nProof. First of all, observe that we can restrict ourselves to the subgraph G[U ] induced by U , i.e., the graph\nG[U ] = (U, E ∩ (U × U ))\n. This is because the neighbourhoods in this graph are such that N G[U] (v) ⊆ N G (v) for all v ∈ U , and its independence number is α(G[U ]) ≤ α(G). Hence, it suffices to prove the claimed inequality for any undirected graph G = (V, E) with all self-loops, any p ∈ [0, 1] K such that p 1 ≤ 1, and the choice U = V . We assume this in what follows without loss of generality.\nFor any subgraph H ⊆ G with vertices V (H) ⊆ V , denote the quantity we want to upper bound by\nQ(H) = v∈V (H) p(v) 1+b u∈NG(v) p(u)\n.\nOur aim is thus to provide an upper bound to Q(G).\nConsider a greedy algorithm that incrementally constructs a subset of vertices in the following way: at each step, it selects a vertex v that maximizes p(v) b / u∈NG(v) p(u) , it adds v to the solution, and it removes v from G together with its neighbourhood N G (v). This step is iterated on the remaining graph until no vertex is left.\nLet S = {v 1 , . . . , v s } ⊆ V be the solution returned by the above greedy algorithm on G. Also let G 1 , . . . , G s+1 be the sequence of graphs induced by the operations of the algorithm, where G 1 = G and G s+1 is the empty graph, and let N r (v) = N Gr (v) for v ∈ V (G r ). At every step r ∈ [s] of the greedy algorithm, the contribution to Q(G) of the removed vertices N r (v r ) amounts to\nQ(G r ) -Q(G r+1 ) = v∈Nr(vr ) p(v) 1+b u∈N1(v) p(u) ≤ v∈Nr(vr ) p(v) p(v r ) b u∈N1(vr) p(u) ≤ v∈N1(vr) p(v) u∈N1(vr) p(u) p(v r ) b = p(v r ) b ,\nwhere the last inequality is due to the fact that N i (v) ⊆ N j (v) for all i ≥ j and v ∈ V i . Therefore, we can observe that\nQ(G) = s r=1 Q(G r ) -Q(G r+1 ) ≤ v∈S p(v) b .\nThe solution S is an independent set of G by construction. Consider now any independent set A ⊆ V of G. 
We have that\nv∈A p(v) b ≤ max x∈∆K v∈A x(v) b = |A| max x∈∆K v∈A x(v) b |A| ≤ |A| max x∈∆K 1 |A| v∈A x(v) b ≤ |A| 1-b ≤ α 1-b ,(3)\nwhere the second inequality follows by Jensen's inequality and the fact that b ∈ [0, 1].\nObserve that this upper bound is tight for general probability distributions p ∈ ∆ K over the vertices V of any strongly observable undirected graph G (containing at least one self-loop), as it is exactly achieved by the distribution p ⋆ ∈ ∆ K defined as p ⋆ (i) = 1 |S| I {i ∈ S} for some maximum independent set S ⊆ V of G. Using this lemma, the following theorem provides our improved upper bound under the simplifying assumptions we made thus far. Theorem 1. Let G 1 , . . . , G T be a sequence of undirected feedback graphs, where each G t contains all self-loops and has independence number\nα t = α for some common value α ∈ [K]. If Algorithm 1 is run with input q = 1 2 1 + ln(K/α) ln(K/α) 2 + 4 + 2 ∈ [1/2, 1) and η = 2qK 1-q T (1 -q)α q ,\nand loss estimates (1), then its regret satisfies R T ≤ 2 eαT (2 + ln(K/α))\nProof. One can verify that for any i ∈ V , the loss estimate ℓ t (i) defined in (1) satisfies E t ℓ t (i) 2 ≤ 1/P t (i). Hence, using also that\nE t ℓ t (i) = ℓ t (i), Lemma 2 in Appendix A implies that R T ≤ K 1-q η(1 -q) + η 2q T t=1 E i∈V p t (i) 2-q P t (i)(4)\n≤ K 1-q η(1 -q) + η 2q α q T ,(5)\nwhere the second inequality follows by Lemma 1 with b = 1q since all actions i ∈ V are such that i ∈ N G (i). Our choices for q and η allow us to further upper bound the right-hand side of ( 5) by\n2K 1-q α q q(1 -q) T = 2T exp 1 + 1 2 ln(αK) - 1 2 ln (K/α) 2 + 4 2 + ln (K/α) 2 + 4 ≤ 2eαT 2 + ln (K/α) 2 + 4 ≤ 2 eαT ln (K/α) 2 + 4 ≤ 2 eαT (2 + ln(K/α)) .\nThe regret bound achieved in the above theorem achieves the optimal regret bound for the experts setting (i.e., α = 1) and the bandits setting (i.e., α = K) simultaneously. Moreover, it interpolates the intermediate cases for α ranging between 1 and K, introducing the multiplicative logarithmic factor only for graphs with independence number strictly smaller than K. We remark that the chosen values of q and η do in fact minimize the right-hand side of (5). Note that we relied on the knowledge of α to tune the parameter q. This is undesirable in general. We will show how to lift this requirement in Section 4. The same comment applies to Theorem 2, below.\nWe now show how to achieve the improved regret bound of Theorem 1 in the case of strongly observable undirected feedback graphs where some self-loops may be missing; i.e., there may be actions i ∈ V such that i / ∈ N G (i). Using the loss estimator defined in (1) may lead to a large variance term due to the presence of actions without self-loops. One approach to deal with thissee, e.g., [35] or [28]-is to suitably alter the loss estimates of these actions.\nDefine S t = {i ∈ V : i / ∈ N t (i)} as the subset of actions without self-loops in the feedback graph G t at each time step t ∈ [T ].\nThe idea is that we need to carefully handle some action i ∈ S t only in the case when the probability p t (i) of choosing i at round t is sufficiently large, say, larger than 1/2. Define the set of such actions as J t = {i ∈ S t : p t (i) > 1/2} and observe that |J t | ≤ 1. Similarly to [35], define new loss estimates\nℓ t (i) =    ℓt(i) Pt(i) I {I t ∈ N t (i)} if i ∈ V \\ J t ℓt(i)-1 Pt(i) I {I t ∈ N t (i)} + 1 if i ∈ J t(6)\nfor which it still holds that E t ℓ t = ℓ t and that E t ℓ t (i) 2 ≤ 1/P t (i) for all i / ∈ J t . 
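To make the ingredients defined so far concrete, here is a compact Python sketch of one round of the algorithm: the importance-weighted estimates (1), the shifted estimates (6) for an action without a self-loop whose probability exceeds 1/2, the parameter choices of Theorem 1, and the FTRL update with the q-Tsallis regularizer, solved here by a binary search on the multiplier of the simplex constraint. The numerical scheme, the data layout and the function names are illustrative choices and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code) of the quantities defined above.
import numpy as np


def theorem1_parameters(K: int, alpha: int, T: int):
    """q and eta as stated in Theorem 1 (all self-loops present)."""
    r = np.log(K / alpha)
    q = 0.5 * (1.0 + r / (np.sqrt(r ** 2 + 4.0) + 2.0))
    eta = np.sqrt(2.0 * q * K ** (1.0 - q) / (T * (1.0 - q) * alpha ** q))
    return q, eta


def loss_estimates(chosen, losses, p, neighbourhoods, has_self_loop):
    """Estimates (1), with the shift (6) for the (at most one) action without a
    self-loop whose probability exceeds 1/2. Losses of unobserved actions are
    multiplied by a zero indicator, so placeholder values there are harmless."""
    K = len(p)
    est = np.zeros(K)
    for i in range(K):
        P_i = sum(p[j] for j in neighbourhoods[i])   # P_t(i) = sum of p_t over N_t(i)
        obs = float(chosen in neighbourhoods[i])     # indicator I{I_t in N_t(i)}
        if not has_self_loop[i] and p[i] > 0.5:
            est[i] = (losses[i] - 1.0) / P_i * obs + 1.0
        else:
            est[i] = losses[i] / P_i * obs
    return est


def tsallis_ftrl_step(cumulative_estimates, eta, q, tol=1e-12):
    """argmin over the simplex of eta*<L, p> + (1 - sum_i p(i)^q) / (1 - q).

    The first-order conditions give p(i) = (q / ((1 - q)(eta*L(i) + mu)))^(1/(1-q))
    for a constant mu, found by binary search so that the p(i) sum to one."""
    L = np.asarray(cumulative_estimates, dtype=float)

    def probs(mu):
        return (q / ((1.0 - q) * (eta * L + mu))) ** (1.0 / (1.0 - q))

    lo = -eta * L.min() + 1e-12       # just above the pole, where the sum exceeds 1
    hi = lo + 1.0
    while probs(hi).sum() > 1.0:      # grow the bracket until the sum drops below 1
        hi += 2.0 * (hi - lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if probs(mid).sum() > 1.0 else (lo, mid)
    p = probs(0.5 * (lo + hi))
    return p / p.sum()                # remove residual numerical error
```

A full implementation of Algorithm 1 keeps a running sum of these estimates and draws the next action from the distribution returned by tsallis_ftrl_step at every round.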
This change, along with the use of Lemma 1 for the actions in V \\ S t , suffices in order to prove the following regret bound (see Appendix B for the proof) when the feedback graphs do not necessarily contain self-loops for all actions. Theorem 2. Let G 1 , . . . , G T be a sequence of strongly observable undirected feedback graphs, where each G t has independence number α t = α for some common value α ∈\n[K]. If Algorithm 1 is run with input q = 1 2 1 + ln(K/α) ln(K/α) 2 + 4 + 2 ∈ [1/2, 1) and η = 1 3 2qK 1-q T (1 -q)α q ,\nand loss estimates (6), then its regret satisfies R T ≤ 6 eαT (2 + ln(K/α))." }, { "figure_ref": [], "heading": "Adapting to Arbitrary Sequences of Graphs", "publication_ref": [ "b2", "b5", "b6" ], "table_ref": [], "text": "In the previous section, we assumed for simplicity that all the graphs have the same independence number. This independence number was then used to tune q, the parameter of the Tsallis entropy regularizer used by the algorithm. In this section, we show how to extend our approach to the case when the independence numbers of the graphs are neither the same nor known a-priori by the learner. Had these independence numbers been known a-priori, one approach is to set q as in Theorem 2 but using the average independence number\nα T = 1 T T t=1 α t .\nDoing so would allow us to achieve a O T t=1 α t (1 + ln(K/α T )) regret bound. We now show that we can still recover a bound of the same order without prior knowledge of α T . For round t and any fixed q ∈ [0, 1], define\nH t (q) = i∈V \\St p t (i) 2-q P t (i) .\nWe know from Lemma 1 that H t (q) ≤ α q t . Thus, we can leverage these observations and use a doubling trick (similar in principle to [3]) to guess the value of α T . This approach is outlined in Algorithm 2. Starting with r = 0 and T r = 1, the idea is to instantiate Algorithm 1 at time-step T r with q and η set as in Theorem 2 but with 2 r replacing the independence number. Then, at t ≥ T r , we increment r and restart Algorithm 1 only if\n1 T t s=Tr H s (q r ) 1/qr > 2 r+1\n, since (again thanks to Lemma 1) the left-hand side of the above inequality is always upper bounded by α T . The following theorem shows that this approach essentially enjoys the same regret bound of Theorem 2 up to an additive log 2 α T term.\nAlgorithm 2 q-FTRL for an arbitrary sequence of strongly observable undirected graphs input: Time horizon T define: For each r ∈ {0, . . . , ⌊log 2 K⌋},\nq r = 1 2 1 + ln(K/2 r ) ln(K/2 r ) 2 + 4 + 2 and η r = 2q r K 1-qr 11T (1 -q r ) (2 r )\nqr initialization: T 0 ← 1, r ← 0, instantiate Algorithm 1 with q = q 0 , η = η 0 , and loss estimates (6) for t = 1, . . . , T do Perform one step of the current instance of Algorithm 1\nif 1 T t s=Tr H s (q r ) 1/qr > 2 r+1 then r ← r + 1 T r ← t + 1\nRestart Algorithm 1 with q = q r , η = η r , and loss estimates (6) end if end for Theorem 3.\nLet C = 4 √ 6e √ π+ √ 4-2 ln 2 ln 2\n. Then, the regret of Algorithm 2 satisfies\nR T ≤ C T t=1 α t 2 + ln K α T + log 2 α T .\nProof sketch. For simplicity, we sketch here the proof for the case when in every round t, all the nodes have self-loops; hence, H t (q) = B t (q). See the full proof in Appendix C, which treats the general case in a similar manner. Let n = log 2 α T and assume without loss of generality that α T > 1. 
Since Lemma 1 implies that for any r and t, B t (q r ) ≤ α qr t , we have as a consequence that for any t ≥ T r ,\n1 T t s=Tr B s (q r ) 1/qr ≤ 1 T t s=Tr α s ≤ α T ≤ 2 n .\nHence, the maximum value of r that the algorithm can reach is n -1. In doing so, we will execute n instances of Algorithm 1, each corresponding to a value of r ∈ {0, . . . , n -1}. For every such r, we upper bound the instantaneous regret at step T r+1 -1 (the step when the restarting condition is satisfied) by 1, hence the added log 2 α T term in the regret bound. For the rest of the interval; namely, for t ∈ [T r , T r+1 -2], we have via (4) that the regret of Algorithm 1 is bounded by\nK 1-qr η r (1 -q r ) + η r 2q r E Tr+1-2 t=Tr B t (q r ) .(7)\nDefine T r:r+1 = T r+1 -T r -1, and notice that\nTr+1-2 t=Tr B t (q r ) ≤ T r:r+1 1 T r:r+1 Tr+1-2 t=Tr B t (q r ) 1/qr qr ≤ T r:r+1 T T r:r+1 2 r+1 qr ≤ 2T 2 r qr ,\nwhere the first inequality follows due to Jensen's inequality since q r ∈ (0, 1), and the second follows from the restarting condition of Algorithm 2. After, plugging this back into (7), we can simply use the definitions of η r and q r and bound the resulting expression in a similar manner to the proof of Theorem 1. Overall, we get that\nR T ≤ 4 √ 3eT n-1 r=0 2 r ln e 2 K2 -r + log 2 α T ,\nfrom which the theorem follows by using Lemma 4 in Appendix A, which shows, roughly speaking, that the sum on the right-hand side is of the same order as its last term.\nAlthough Algorithm 2 requires knowledge of the time horizon, this can be dealt with by applying a standard doubling trick on T at the cost of a larger constant factor. It is also noteworthy that the bound we obtained is of the form T α T (1 + ln(K/α T )) and not t α t (1 + ln(K/α t )). Although both coincide with the bound of Theorem 2 when α t is the same for all time steps, the latter is smaller via the concavity of x(1 + ln(K/x)) in x. It is not clear, however, whether there is a tuning of q ∈ (0, 1) that can achieve the second bound (even with prior knowledge of the entire sequence α 1 , . . . , α T of independence numbers)." }, { "figure_ref": [ "fig_0" ], "heading": "An Improved Lower Bound via Multitask Learning", "publication_ref": [ "b2", "b28", "b8", "b11", "b5", "b14", "b20", "b6", "b5", "b16" ], "table_ref": [], "text": "In this section we provide a new lower bound on the minimax regret showing that, apart from the bandits case, a logarithmic factor is indeed necessary in general. When the graph is fixed over time, it is known that a lower bound of order √ αT holds for any value of α [3,29]. Whereas for the experts case (α = 1), the minimax regret is of order3 √ T ln K [9]. The following theorem provides, for the first time, a lower bound that interpolates between the two aforementioned bounds for the intermediate values of α. Theorem 4. Pick any K ≥ 2 and any α such that 2 ≤ α ≤ K. Then, for any algorithm and for all T ≥ α log α K 4 log(4/3) , there exists a sequence of losses and feedback graphs G 1 , . . . , G T such that α(G t ) = α for all t = 1, . . . , T and\nR T ≥ 1 18 √ 2 αT log α K.\nIn essence, the proof of this theorem (see Appendix D) constructs a sequence of feedback graphs and losses that is equivalent to a hard instance of the multitask bandit problem (MTB) [12], an important special case of combinatorial bandits with a convenient structure for proving lower bounds [6,15,21]. We consider a variant of MTB in which, at the beginning of each round, the decisionmaker selects an arm to play in each one of M stochastic bandit games. 
Subsequently, the decision-maker only observes (and suffers) the loss of the arm played in a single randomly selected game. For proving the lower bound, we use a class of stationary stochastic adversaries (i.e., environments), each generating graphs and losses in a manner that simulates an MTB instance.

Figure 1: Here, K = 8 and α = 2; thus, the number of games is M = 3. Each action is identified by a tuple of three numbers, each corresponding to a choice of one out of a pair of "base actions" in each game. Each of the three graphs in the figure corresponds to a game, such that two actions share an edge if and only if they choose the same base action in the corresponding game. At every round, a graph is randomly drawn, and all actions belonging to the same clique suffer the same loss.

Fix 2 ≤ α ≤ K = |V| and assume for simplicity that M = log_α K is an integer. We now construct an instance of online learning with time-varying feedback graphs G_t = (V, E_t) with α(G_t) = α that is equivalent to an MTB instance with M bandit games each containing α "base actions". Since K = α^M, we can uniquely identify each action in V with a vector a = (a(1), . . . , a(M)) in [α]^M. The action a_t ∈ V chosen by the learner at round t is equivalent to a choice of base actions a_t(1), . . . , a_t(M) in the M games. The feedback graph at every round is sampled uniformly at random from a set of M undirected graphs {G_i}_{i=1}^M, where G_i = (V, E_i) is such that (a, a′) ∈ E_i if and only if a(i) = a′(i). This means (see Figure 1) that each graph G_i consists of α isolated cliques {C_{i,j}}_{j=1}^α such that an action a belongs to clique C_{i,j} if and only if a(i) = j. Clearly, the independence number of any such graph is α. Drawing feedback graph G_t = G_i corresponds to the activation of game i in the MTB instance. Hence, choosing a_t ∈ V with feedback graph G_t = G_i is equivalent to playing base action a_t(i) in game i in the MTB. As for the losses, we enforce that, given a feedback graph G_t, all actions that belong to the same clique of the feedback graph are assigned the same loss. Namely, if G_t = G_i and a(i) = a′(i) = j, then ℓ_t(a) = ℓ_t(a′), which can be seen as the loss ℓ_t(j) assigned to base action j in game G_i. To choose the distribution of the losses for the base actions, we apply the classic needle-in-a-haystack approach of [7] over the M games. More precisely, we construct a different environment for each action a ∈ V in such a way that the distribution of the losses in each MTB game slightly favors (with a difference of a small ε > 0) the base action corresponding to a in that game. The proof then proceeds similarly to, for example, the proof of Theorem 5 in [6] or Theorem 7 in [17].

While both our upper and lower bounds achieve the desired goal of interpolating between the minimax rates of experts and bandits, the logarithmic factors in the two bounds are not exactly matching. In particular, if we compare 1 + log_2(K/α) and log_α K, we can see that although they coincide at α = 2 and α = K, the former is larger for intermediate values.
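As a quick numerical illustration of this gap (our own check, not part of the paper's analysis), one can tabulate the two logarithmic factors for a fixed number of actions K; the specific values below are arbitrary.

```python
import math

K = 1024
for alpha in [2, 4, 8, 32, 128, 512, 1024]:
    upper_factor = 1 + math.log2(K / alpha)  # logarithmic factor in the upper bound
    lower_factor = math.log(K, alpha)        # log_alpha(K), logarithmic factor in the lower bound
    print(f"alpha={alpha:4d}  1+log2(K/alpha)={upper_factor:5.2f}  log_alpha(K)={lower_factor:5.2f}")
```

For K = 1024, the two factors agree at α = 2 and α = K, while at α = 32 the former is three times larger than the latter.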
It is reasonable to believe that the upper bound is of the correct order, seeing as it arose naturally as a result of choosing the best parameter for the Tsallis entropy regularizer, whereas achieving the extra logarithmic term in the lower bound required a somewhat contrived construction." }, { "figure_ref": [], "heading": "Conclusions and Future Work", "publication_ref": [ "b13" ], "table_ref": [], "text": "In this work, we have shown that a proper tuning of the q-FTRL algorithm allows one to achieve a O αT (1 + ln(K/α)) regret for the problem of online learning with undirected strongly observable feedback graphs. Our bound interpolates between the minimax regret rates of the bandits and the experts problems, the two extremes of the strongly observable graph feedback spectrum. Furthermore, we have shown that an analogous bound can be achieved when the graphs vary over time, and without requiring any prior knowledge on the graphs. These results are complemented by our new lower bound of Ω αT (ln K)/(ln α) , which holds for α ≥ 2 and shows the necessity of a logarithmic factor in the minimax regret except for the bandits case. While our results provide the tightest characterization to date of the minimax rate for this setting, closing the small remaining gap (likely on the lower bound side) is an interesting problem. After the submission of this manuscript, a subsequent work [14] showed a lower bound for fixed feedback graphs composed of disjoint cliques that would imply worst-case optimality (up to constant factors) of our proposed algorithm for each pair of K and α-see Appendix E for a more detailed comparison with results therein. Extending our results to the case of directed strongly observable feedback graphs is a considerably harder task-see Appendix F for a preliminary discussion. Better understanding this more general setting is an interesting future direction." }, { "figure_ref": [], "heading": "A Auxiliary Results", "publication_ref": [], "table_ref": [], "text": "Lemma 2. If Algorithm 1 is run with q ∈ (0, 1), learning rate η > 0, and non-negative loss estimates that satisfy E t ℓ t = ℓ t for all t = 1, . . . , T , then its regret satisfies\nR T ≤ K 1-q (1 -q)η + η 2q T t=1 E i∈V p t (i) 2-q ℓ t (i) 2 .\nProof. Let i * ∈ arg min i∈V T t=1 ℓ t (i) be an action that minimizes the cumulative loss, and let e i * ∈ R K be an indicator vector for i * . Recall that for t ∈\n[T ], E t [•] = E[• | I 1 , . . . , I t-1 ]\n, and notice that p t is measurable with respect to the σ-algebra generated by I 1 , . . . , I t-1 . Hence, using that\nE t ℓ t (I t ) = i∈V p t (i)ℓ t (i) and E t ℓ t = ℓ t ,\nwe have, via the tower rule and the linearity of expectation, that\nR T = E T t=1 ℓ t (I t ) - T t=1 ℓ t (i * ) = E T t=1 p t -e i * , ℓ t = E T t=1 p t -e i * , ℓ t ,\nfrom which we can obtain the desired result by using Lemma 3 (which holds even if the loss ℓ t at each round t ∈ [T ] depends on the prediction p t made at that round).\nLemma 3. Let q ∈ (0, 1), η > 0, and (y t ) T t=1 be an arbitrary sequence of non-negative loss vectors in R K . Let (p t ) T +1 t=1 be the predictions of FTRL with decision set ∆ K and the q-Tsallis regularizer ψ q over this sequence of losses. That is, p 1 = arg min p∈∆K ψ q (p), and for t ∈ [T ], p t+1 = arg min p∈∆K η t s=1 y s , p + ψ q (p) ." }, { "figure_ref": [], "heading": "Then for any", "publication_ref": [ "b26", "b26" ], "table_ref": [], "text": "u ∈ ∆ K , T t=1 p t -u, y t ≤ K 1-q (1 -q)η + η 2q T t=1 i∈V p t (i) 2-q y t (i) 2 .\nProof. 
By Theorem 28.5 in [27], we have that\nT t=1 p t -u, y t ≤ ψ q (u) -ψ q (p 1 ) η + T t=1 p t -p t+1 , y t - 1 η D ψq (p t+1 , p t ) = K 1-q -1 (1 -q)η + T t=1 p t -p t+1 , y t - 1 η D ψq (p t+1 , p t ) ≤ K 1-q (1 -q)η + T t=1 p t -p t+1 , y t - 1 η D ψq (p t+1 , p t ) ,\nwhere D ψq (•, •) is the Bregman divergence based on ψ q . For bounding each summand in the second term, we follow a similar argument to that used in Theorem 30.2 in [27]. Namely, for each i ∈ V and round t ∈ [T ], define y t (i) = I{p t+1 (i) ≤ p t (i)}y t (i). We then have that\np t -p t+1 ,y t - 1 η D ψq (p t+1 , p t ) ≤ p t -p t+1 , y t - 1 η D ψq (p t+1 , p t ) = 1 η p t -p t+1 , ηy t - 1 2η p t+1 -p t 2 ∇ 2 ψq(zt) ≤ η 2 y t 2 (∇ 2 ψq(zt)) -1 = η 2q i∈V z t (i) 2-q y t (i) 2 = η 2q i∈V γ t p t+1 (i) + (1 -γ t )p t (i) 2-q y t (i) 2 ≤ η 2q i∈V p t (i) 2-q y t (i) 2 + γ t η 2q i∈V p t+1 (i) 2-q -p t (i) 2-q y t (i) 2 ≤ η 2q i∈V p t (i) 2-q y t (i) 2 ≤ η 2q i∈V p t (i) 2-q y t (i) 2 ,\nwhere z t = γ t p t+1 + (1γ t )p t for some γ t ∈ [0, 1]; the first inequality holds due to the nonnegativity of the losses, the second inequality is an application of the Fenchel-Young inequality, the second equality holds since the Hessian of ψ q is a diagonal matrix with (∇ 2 ψ q (x)) i,i = qx(i) q-2 , the third inequality is an application of Jensen's inequality (since q ∈ (0, 1)), and the fourth inequality holds since y t (i) = 0 for any i such that p t+1 (i) 2-q > p t (i) 2-q .\nLemma 4. Let a and b be positive integers such that 2 ≤ a ≤ b, and let n = log 2 a . Then,\nn-1 r=0 2 r ln e 2 b2 -r ≤ √ 2π + 2 √ 2 -ln 2 ln 2\na ln e 2 b a .\nProof. Since n ≤ log 2 (2b) and 2 r ln e 2 b2 -r is monotonically increasing in r for r ∈ [0, log 2 (eb)], we can bound the sum by an integral:\nn-1 r=0 2 r ln e 2 b2 -r ≤ n 0 2 r ln e 2 b2 -r dr .\nWe proceed via a change of variable; let x = e 2 b2 -r , and note that dr = -dx x ln 2 . We then have that where erfc(x) = 1 -2 √ π x 0 exp(-z 2 ) dz is the complementary Gaussian error function, which is always positive. By [13, Theorem 1], we have that erfc(x) ≤ exp(-x 2 ). Consequently,\nn 0 2 r ln e 2 b2 -r dr ≤ e √ b ln 2 √ 2π 2 n e 2 b + 2 2 n ln(e 2 b2 -n ) e 2 b = √ 2 n ln 2 √ 2π + 2 ln(e 2 b2 -n ) ≤ √ 2a ln 2 √ 2π + 2 ln e 2 b 2a ≤ √ 2π + 2 √ 2 -ln 2 ln 2 a ln e 2 b a ,\nwhere in the second inequality we used once again the fact that 2 r ln e 2 b2 -r is monotonically increasing in r for r ∈ [0, log 2 (eb)] to replace n with log 2 (a) + 1, and the last inequality holds since b/a ≥ 1." }, { "figure_ref": [], "heading": "B Proofs of Section 3", "publication_ref": [ "b5", "b34" ], "table_ref": [], "text": "In this section, we provide the proof of Theorem 2, which is restated below. Theorem 2. Let G 1 , . . . , G T be a sequence of strongly observable undirected feedback graphs, where each G t has independence number α t = α for some common value α ∈\n[K]. If Algorithm 1 is run with input q = 1 2 1 + ln(K/α) ln(K/α) 2 + 4 + 2 ∈ [1/2, 1) and η = 1 3 2qK 1-q T (1 -q)α q ,\nand loss estimates (6), then its regret satisfies R T ≤ 6 eαT (2 + ln(K/α)).\nProof. Let i * ∈ arg min i∈V T t=1 ℓ t (i) and e i * ∈ R K be its indicator vector. Whenever J t is nonempty, let j t ∈ V be the only action such that J t = {j t }. Similarly to [35], let z t = I {J t = ∅} I {I t ∈ N t (j t )} 1-ℓt(jt) 1-pt(jt) and define new losses ℓ t (i) = ℓ t (i) + z t for each time step t ∈ [T ] and each action i ∈ V . Since p t , e i * ∈ ∆ K , we have that p te i * , ℓ t = p te i * , ℓ t for every t ∈ [T ]. 
Then, using the fact that E t ℓ t = ℓ t , we get that\nR T = E T t=1 p t -e i * , ℓ t = E T t=1 p t -e i * , ℓ t ,\nwhere the first equality holds via the same arguments made in the proof of Lemma 2. If we consider the optimization step of Algorithm 1, computing the same inner product over the new losses ℓ 1 , . . . , ℓ T for some p ∈ ∆ K gives\nt s=1 ℓ s , p = t s=1 z s + t s=1 ℓ s , p ,\nwhere the sum t s=1 z s is constant with respect to p. This implies that the objective functions in terms of either ( ℓ t ) t∈[T ] and ( ℓ t ) t∈[T ] , respectively, are minimized by the same probability distributions. However, notice that, unlike ( ℓ t ) t∈[T ] , the loss vectors in ( ℓ t ) t∈[T ] are always non-negative. Consequently, similar to the proof of Lemma 2, we may apply Lemma 3 to upper bound the regret of Algorithm 1 in terms of the losses ( ℓ t ) t∈[T ] . Doing so gives\nE T t=1 p t -e i * , ℓ t ≤ K 1-q η(1 -q) + η 2q T t=1 E i∈V p t (i) 2-q E t ℓ t (i) 2 . (8\n)\nWe can bound the second term by observing that ℓ t (j t ) = 1 whenever J t = ∅. Therefore,\ni∈V p t (i) 2-q E t ℓ t (i) 2 ≤ 2 i∈V \\Jt p t (i) 2-q E t ℓ t (i) 2 + 2 E t z 2 t i∈V \\Jt p t (i) 2-q + 1 ≤ 2 i∈V \\Jt p t (i) 2-q P t (i) + 2 E t z 2 t i∈V \\Jt p t (i) 2-q + 1 ≤ 2 i∈V \\Jt p t (i) 2-q P t (i) + 3 ,\nwhere the second inequality holds because E t ℓ t (i) 2 ≤ 1/P t (i) for all i / ∈ J t , and the third inequality follows from the fact that\nE t z 2 t i∈V \\Jt p t (i) 2-q = I {J t = ∅} 1 -ℓ t (j t ) 2 1 -p t (j t ) i∈V \\Jt p t (i) 2-q ≤ 1 .\nWe can handle the remaining sum by separating it over nodes i ∈ S t , which satisfy P t (i) = 1p t (i) because of strong observability, and those in S t = V \\ S t . In the first case, any node i ∈ S t \\ J t has p t (i) ≤ 1/2 and thus\ni∈St\\Jt p t (i) 2-q P t (i) = i∈St\\Jt p t (i) 2-q 1 -p t (i) ≤ 2 i∈St\\Jt p t (i) 2-q ≤ 2 .\nwhile in the second case we have that i∈St p t (i) 2-q /P t (i) ≤ α q by Lemma 1 with U = S t and b = 1q. Overall, we have shown that\ni∈V p t (i) 2-q E t ℓ t (i) 2 ≤ 2 i∈St p t (i) 2-q P t (i) + 7 ≤ 2α q + 7 ≤ 9α q . (9\n)\nPlugging back into (8), we obtain that\nR T ≤ K 1-q η(1 -q) + 9η 2q α q T = 3 2K 1-q α q q(1 -q) T ≤ 6 eαT (2 + ln(K/α)) ,\nwhere the equality is due to our choice of η, and the last inequality follows as in the proof of Theorem 1 together with our choice of q." }, { "figure_ref": [], "heading": "C Proofs of Section 4", "publication_ref": [ "b7", "b8", "b10", "b9" ], "table_ref": [], "text": "In this section, we provide the proof of Theorem 3, which is restated below.\nTheorem 3. Let C = 4 √ 6e √ π+ √ 4-2 ln 2 ln 2\n. Then, the regret of Algorithm 2 satisfies\nR T ≤ C T t=1 α t 2 + ln K α T + log 2 α T .\nProof. Notice that if α T = 1, the initial guess is correct and the algorithm will never restart. Moreover, since in this case we have that α t = 1 for all t, the theorem follows trivially from the regret bound of Theorem 2. Hence, we can assume for what follows that α T > 1. Let i * ∈ arg min i∈[K] T t=1 ℓ t (i) and n = log 2 α T . Note that the maximum value of r that the algorithm can reach is n -1. To see this, observe that Lemma 1 implies that for any r and t, H t (q r ) ≤ α qr t . Consequently, for any t ≥ T r ,\n1 T t s=Tr H s (q r ) 1/qr ≤ 1 T t s=Tr α s ≤ α T ≤ 2 n .\nFor t ∈ [T ], let r t be the value of r at round t. Without loss of generality, we assume that r takes each value in {0, . . . , n -1} for at least two rounds. Additionally, we define T n = T + 2 for convenience. 
We start by decomposing the regret over the n intervals (each corresponding to a value of r in {0, . . . , n-1}) and bounding the instantaneous regret with 1 for each step in which we restart (i.e., at the last step of each but the last interval):\nR T = E T t=1 ℓ t (I t ) -ℓ t (i * ) ≤ E n-1 r=0 Tr+1-2 t=Tr ℓ t (I t ) -ℓ t (i * ) + n -1 ≤ E n-1 r=0 Tr+1-2 t=Tr ℓ t (I t ) -ℓ t (i * ) + log 2 α T .(10)\nFor what follows, let e i * ∈ R K be an indicator vector for i * and let ℓ t be as defined in the proof of Theorem 2. Fix r ∈ {0, . . . , n -1}, we proceed by bounding the regret in the interval [T r , T r+1 -2]:\nE Tr+1-2 t=Tr ℓ t (I t ) -ℓ t (i * ) = E T t=1 I r t = r, 1 T t s=Tr t H s (q rt ) 1/qr t ≤ 2 rt+1 ℓ t (I t ) -ℓ t (i * ) (a) = E T t=1 I r t = r, 1 T t s=Tr t H s (q rt ) 1/qr t ≤ 2 rt+1 p t -e i * , ℓ t (b) = E T t=1 I r t = r, 1 T t s=Tr t H s (q rt ) 1/qr t ≤ 2 rt+1 p t -e i * , ℓ t = E Tr+1-2 t=Tr p t -e i * , ℓ t (c) ≤ K 1-qr η r (1 -q r ) + η r 2q r E Tr+1-2 t=Tr i∈V p t (i) 2-qr ℓ t (i) 2 (d) = K 1-qr η r (1 -q r ) + η r 2q r E T t=1 I r t = r, 1 T t s=Tr t H s (q rt ) 1/qr t ≤ 2 rt+1 E t i∈V p t (i) 2-qr ℓ t (i) 2 (e) ≤ K 1-qr η r (1 -q r ) + η r 2q r E T t=1 I r t = r, 1 T t s=Tr t H s (q rt ) 1/qr t ≤ 2 rt+1 (2H t (q r ) + 7) = K 1-qr η r (1 -q r ) + η r 2q r E Tr+1-2 t=Tr (2H t (q r ) + 7) ,(11)\nwhere (a) follows since E t ℓ t (I t ) = i∈V p t (i)ℓ t (i), E t ℓ t = ℓ t , and the indicator at round t is measurable with respect to σ(I 1 , . . . , I t-1 ), that is, the σ-algebra generated by I 1 , . . . , I t-1 ; (b) follows since p te i * , ℓ t = p te i * , ℓ t holds by the definition of ℓ t ; (c) is an application of Lemma 3, justifiable with the same argument leading to (8) in the proof of Theorem 2; (d) uses once again that the indicator at round t is measurable with respect to σ(I 1 , . . . , I t-1 ); finally, (e) follows via (9). Define T r:r+1 = T r+1 -T r -1, and notice that\nTr+1-2 t=Tr H t (q r ) = T r:r+1 T r:r+1 Tr+1-2 t=Tr H t (q r ) 1/qr qr ≤ T r:r+1 1 T r:r+1 Tr+1-2 t=Tr H t (q r ) 1/qr qr ≤ T r:r+1 T T r:r+1 2 r+1 qr ≤ 2T 2 r qr ,\nwhere the first inequality follows due to Jensen's inequality since q r ∈ (0, 1), and the second follows from the restarting condition of Algorithm 2. Next, we plug this inequality back into (11), and then, similar to the proof of Theorem 2, we use the definitions of η r and q r and bound the resulting expression to get that\nE Tr+1-2 t=Tr ℓ t (I t ) -ℓ t (i * ) ≤ K 1-qr η r (1 -q r ) + 11η r 2q r T (2 r ) qr ≤ 2 11eT 2 r 2 + ln K2 -r ≤ 4 3eT 2 r ln e 2 K2 -r .\nWe then sum this quantity over r and use Lemma 4 to get that\nE n-1 r=0 Tr+1-2 t=Tr ℓ t (I t ) -ℓ t (i * ) ≤ 4 √ 3eT n-1 r=0 2 r ln e 2 K2 -r ≤ 4 √ 6e √ π + √ 4 -2 ln 2 ln 2 α T T 2 + ln (K/α T ) ,\nwhich, together with (10), concludes the proof." }, { "figure_ref": [], "heading": "D Proof of the Lower Bound", "publication_ref": [ "b16" ], "table_ref": [], "text": "In this section, we prove the lower bound provided in Section 5, which we restate below. As remarked before, our proof makes use of known techniques for proving lower bounds for the multitask bandit problem. In particular, parts of the proof are adapted from the proof of Theorem 7 in [17]. Theorem 4. Pick any K ≥ 2 and any α such that 2 ≤ α ≤ K. Then, for any algorithm and for all T ≥ α log α K 4 log(4/3) , there exists a sequence of losses and feedback graphs G 1 , . . . , G T such that α(G t ) = α for all t = 1, . . . , T and\nR T ≥ 1 18 √ 2 αT log α K.\nProof. 
Once again, we define M = log α K, which we assume for now to be an integer; we discuss in the end how to extend the proof to the case when it is not. The proof will be divided into five parts I-V. We begin by formalizing the class of environments described in Section 5 and stating two useful lemmas." }, { "figure_ref": [], "heading": "I. Preliminaries", "publication_ref": [], "table_ref": [], "text": "We remind the reader that we identify each action in V with a vector a = a(1), . . . , a(M ) ∈ [α] M . We will focus on a set of M undirected graphs G = {G i } M i=1 , where G i consists of α isolated cliques (with self-loops) {C i,j } α j=1 such that an action a belongs to clique C i,j if and only if a(i) = j. As remarked before, all these graphs have independence number α. For convenience, we also use actions in V as functions from G to [α], with a(G i ) = a(i).\nAn environment is identified by a function µ : [α] × G → [0, 1] such that at every round t, after having drawn a graph G t from the uniform distribution over G (denoted with U G ), the environment latently draws for each j ∈ [α] and G ∈ G, a Bernoulli random variable γ t (j; G) with mean µ(j; G t ). Subsequently, for defining the loss of action a ∈ V at round t, we simply set ℓ t (a) = γ t (a(G t ); G t ), whose expectation, conditioned on G t , is µ(a(G t ); G t ). To simplify the notation, we use µ(a; G) as shorthand for µ(a(G); G) and γ t (a; G) as shorthand for γ t (a(G); G). Denote by A t the action picked by the player at round t, which is chosen prior to observing G t . We will focus on the following notion of stochastic regret, which we define for environment µ as:\nR T (µ) = max a∈V E µ T t=1 (ℓ t (A t ) -ℓ t (a))\n,\nwhere E µ [•] denotes the expectation with respect to the sequence of losses and graphs generated by environment µ, as well as the randomness in the choices of the player. We can use the tower rule to rewrite this expression as\nR T (µ) = max a∈V T t=1 E µ E µ E µ ℓ t (A t ) -ℓ t (a) G t , A t A t = max a∈V T t=1 E µ E µ µ(A t ; G t ) -µ(a; G t ) A t = max a∈V T t=1 E µ M i=1 U G (G i )(µ(A t ; G i ) -µ(a; G i )) = max a∈V 1 M M i=1 E µ T t=1 (µ(A t ; G i ) -µ(a; G i )) .(12)\nFor a fixed algorithm, one can show via standard arguments that\nsup (ℓt) T t=1 ,(Gt) T t=1 R T ≥ sup µ R T (µ) .\nHence, it suffices for our purposes to prove a lower bound for the right-hand side of this inequality.\nIn the following, we will have to be more precise about the probability measure with respect to which the expectation in ( 12) is defined. Let λ t ∈ {0, 1} K/α denote the vector of losses observed by the player at round t, which corresponds to the losses of the actions connected to A t assuming that a systematic ordering of the actions makes it clear which coordinate of λ t belongs to which action. Let 1 K/α and 0 K/α be the K/α dimensional4 vectors of all ones and all zeros respectively. Clearly, we have that\nλ t = γ t (A t ; G t )1 K/α = ℓ t (A t )1 K/α\n, which is a binary random variable taking values in {0 K/α , 1 K/α }. Let P λ µ be the probability distribution of λ t in environment µ. Notice then that we have that\nP λ µ (γ t = 1 K/α | G t = G, A t = a) = µ(a; G) .(13)\nLet\nH t = (A 1 , G 1 , λ 1 , . . . , A t , G t , λ t ) ∈ (V × G × {0, 1} K/α\n) t be the interaction trajectory after t steps. The policy π adopted by the player can be modelled as a sequence of probability kernels {π t } T t=1 each mapping the trajectory so far to a distribution over the actions, i.e., A t is sampled from π t (• | H t-1 ). 
An environment µ and a policy π (implicit in the notation, and fixed throughout the rest of the proof) together define a distribution P µ over the set of possible trajectories of T steps such that:\nP µ (H T ) = T t=1 π t (A t | H t-1 )U G (G t )P λ µ (λ t | G t , A t ) ." }, { "figure_ref": [], "heading": "II. Choosing the environments", "publication_ref": [ "b16" ], "table_ref": [], "text": "We will construct a collection of environments {µ a } a∈V , each associated to an action, such that for any i ∈ [M ] and j ∈ [α],\nµ a (j; G i ) = 1 2 -εI{a(i) = j} ,\nwhere 0 < ε ≤ 1 4 will be tuned later. In words, for a fixed graph, environment µ a gives a slight advantage to actions that are connected to a in that graph, and thus agree with a in the corresponding game. Additionally, for every a ∈ V and i ∈ [M ], we define environment µ -i a to be such that for any s ∈ [M ] and j ∈ [α],\nµ -i a (j; G s ) = 1 2 , if s = i µ a (j; G s ), otherwise.\nSimilar to [17], we will define, for every i ∈ [M ], an equivalence relation ∼ i on the arms such that\na ∼ i a ′ ⇐⇒ ∀s ∈ [M ] \\ {i}, a ′ (s) = a(s) ,\nfor any a, a ′ ∈ V . This means that two arms are equivalent according to ∼ i if and only if their choices of base actions coincide in all games that are different from i. Let V / ∼ i be the set of equivalence classes of ∼ i . It is easy to see that V / ∼ i contains exactly α M-1 equivalence classes, and that each class consists of α actions, each corresponding to a different choice of base action in game i. Notice then that for an equivalence class W ∈ V / ∼ i , all environments µ -i a with a ∈ V are indeed identical. In the sequel, this environment will also be referred to as µ -i W ." }, { "figure_ref": [], "heading": "III. Lower-bounding the regret of a single environment", "publication_ref": [ "b11", "b13", "b13", "b13", "b13", "b23", "b28", "b13" ], "table_ref": [], "text": "Note that in environment µ a , we have that a = arg min a ′ ∈V M i=1 µ a (a ′ ; G i ). Consequently, starting from (12) we get that\nR T (µ a ) = M i=1 1 M E µa T t=1 (µ a (A t ; G i ) -µ a (a; G i )) = M i=1 1 M E µa T t=1 1 2 -εI{A t (i) = a(i)} - 1 2 -ε = ε M M i=1 E µa T t=1 (1 -I{A t (i) = a(i)}) = ε M M i=1 T -N µa (i, a; T ) ,\nwhere for environment µ, action a, and game i, N µ (i, a; T ) = E µ T t=1 I{A t (i) = a(i)} is the expected number of times in environment µ that the action chosen by the policy agrees with action a in game i. Next, we use Lemma 6 to obtain that\nR T (µ a ) ≥ ε M M i=1 T -N µ -i a (i, a; T ) -T 1 2 D KL P µ -i a P µa .(14)\nFor bounding the KL-divergence term, we start from Lemma 5:\nD KL P µ -i a P µa = 1 M M s=1 a ′ ∈V N µ -i a (a ′ ; T )d µ -i a (a ′ ; G s ) µ a (a ′ ; G s ) = 1 M a ′ ∈V N µ -i a (a ′ ; T )d µ -i a (a ′ ; G i ) µ a (a ′ ; G i ) = 1 M a ′ ∈V N µ -i a (a ′ ; T )d 1/2 1/2 -εI{a ′ (i) = a(i)} = 1 M a ′ ∈V I{a ′ (i) = a(i)}N µ -i a (a ′ ; T )d 1/2 1/2 -ε ≤ cε 2 M a ′ ∈V I{a ′ (i) = a(i)}N µ -i a (a ′ ; T ) = cε 2 M a ′ ∈V I{a ′ (i) = a(i)} E µ -i a T t=1 I{A t = a ′ } = cε 2 M E µ -i a T t=1 I{A t (i) = a(i)} = cε 2 M N µ -i a (i, a; T ) ,\nwhere the second equality holds since the two environments only differ in G i , and the inequality holds for ε ≤ 1 4 with c = 8 log 4 3 . Plugging back into (14) gets us that\nR T (µ a ) ≥ ε M M i=1 T -N µ -i a (i, a; T ) -εT c 2M N µ -i a (i, a; T ) .(15)\nIV. Summing up\nFix i ∈ [M ]. Notice that for W ∈ V / ∼ i , a∈W I{A t (i) = a(i)} = 1\nsince each action in W corresponds to a different choice of base action in game i. 
Hence,\n1 α M a∈V N µ -i a (i, a; T ) = 1 α M W ∈V /∼i a∈W N µ -i a (i, a; T ) = 1 α M W ∈V /∼i a∈W N µ -i W (i, a; T ) = 1 α M W ∈V /∼i E µ -i W T t=1 a∈W I{A t (i) = a(i)} = 1 α M α M-1 T = T α .\nUsing this together with (15) allows us to conclude that\nsup µ R T (µ) ≥ 1 α M\nIn [14], the authors consider a special case of the undirected feedback graph problem where the graph (fixed and known) is composed of α disjoint cliques with self-loops. For j ∈ [α], let m j denote the number of actions in the j-th clique, implying that α j=1 m j = K (the number of arms). For this problem, [14,Theorem 4] provides a lower bound of order T α j=1 ln(m j + 1). In particular, if the cliques are balanced (i.e., m 1 = • • • = m α = K/α), the lower bound becomes of order αT ln(1 + K/α), thus matching the regret bound of Algorithm 1. This means that, for any value of 1 ≤ α ≤ K, there are feedback graphs on K nodes with independence number α such that no other algorithm can achieve a better minimax regret guarantee than that of our proposed algorithm.\nWe emphasize that this does not imply graph-specific minimax optimality. Indeed, as shown in [14], when the cliques are unbalanced, the regret guarantee of our algorithm can be inferior to that of the algorithm they proposed, which matches the T α j=1 ln(m j + 1) bound. However, beyond the disjoint cliques case, their algorithm requires computing a minimum clique cover for the given feedback graph G, which is known to be NP-hard [24]. More importantly, their reliance on a clique cover leads to a dependence of the regret on the clique cover number θ(G) instead of the independence number α(G). One can argue that the ratio between θ(G) and α(G) can be Ω(K/(ln K) 2 ) for most graphs on a sufficiently large number K of vertices (e.g., see [29,Section 6]). Finally, it is not clear how to generalize their approach to time-varying feedback graphs (informed or uninformed). Hence, despite the contributions of our work and those of [14], the problem of characterizing the minimax regret rate at a graph-based granularity still calls for further investigation." }, { "figure_ref": [], "heading": "F Directed Strongly Observable Feedback Graphs", "publication_ref": [ "b1", "b34", "b34", "b1", "b2", "b1", "b15", "b34" ], "table_ref": [], "text": "In this section, we consider the case of directed strongly observable graphs. For a directed graph\nG = (V, E), let N in G (i) = {j ∈ V : (j, i) ∈ E} be the in-neighbourhood of node i ∈ V in G, and let N out G (i) = {j ∈ V : (i, j)\n∈ E} be its out-neighbourhood. A directed graph G is strongly observable if for every i ∈ V , at least one of the following holds: i ∈ N in G (i) or j ∈ N in G (i) for all j = i. The independence number α(G) is still defined in the same manner as before; that is, the cardinality of the largest set of nodes such that no two nodes share an edge, regardless of orientation. The interaction protocol is the same as in the undirected case, except that, in each round t ∈ [T ], the learner only observes the losses of the actions in N out Gt (I t ), which is the out-neighbourhood in graph G t of the action I t picked by the learner. As before, we will use N in t (i) and N out t (i) to denote N in Gt (i) and N out Gt (i) respectively. For this setting, a bound of O √ αT • ln(KT ) was proven in [2] for the EXP3.G algorithm. 
Later, [35] proved a bound of O αT (ln K) 3 for OSMD with a variant of the q-Tsallis entropy regularizer where q was chosen as 1 -1/(ln K).\nTo use Algorithm 1 in the directed case, one can define loss estimates analogous to (6) by using the in-neighbourhood in place of the neighbourhood in the relevant quantities. Namely, let S t = i ∈ V : i / ∈ N in t (i) , J t = {i ∈ S t : p t (i) > 1/2}, and P t (i) = j∈N in t (i) p t (j). The loss estimates (again due to [35]) can then be given by\nℓ t (i) =    ℓt(i) Pt(i) I I t ∈ N in t (i) if i ∈ V \\ J t ℓt(i)-1\nPt(i) I I t ∈ N in t (i) + 1 if i ∈ J t . Algorithm 1 with these loss estimates can be analyzed in a similar manner to the proof of Theorem 2, with the major difference being the way that the variance term is handled for actions with self-loops. Namely, the relevant term is i∈V :i∈N in t (i) p t (i) 2-q j∈N in t (i) p t (j)\n, on which we elaborate more in the following.\nLet p ∈ ∆ K and β ∈ (0, 1/2) be such that min i∈V p(i) ≥ β. We first consider the variance term given by the negative Shannon entropy regularizer. It is known [2] that such a variance term, restricted to nodes with a self-loop in the strongly observable feedback graph G = (V, E), has an upper bound of the form i∈V :i∈N in G (i)\np(i) j∈N in G (i) p(j) ≤ 4α(G) ln 4K α(G)β . (16\n)\nIn addition to the fact that this variance bound has a linear dependence on the independence number α(G) of G, we observe that there is a logarithmic factor in K/α and 1/β given by the fact that we now consider directed graphs. The main problem is that, in general, we cannot hope to improve upon the above logarithmic factor as it can be shown to be unavoidable unless we manage to restrict the probability distributions we consider. Indeed, it is possible to show [3,Fact 4] that there exist probability distributions p ∈ ∆ K and directed strongly observable graphs G for which α(G) = 1 and i∈V :i∈N in G (i)\np(i) j∈N in G (i) p(j) = K + 1 2 = 1 2 log 2 4 min i p(i)\n= α(G) log ω(1) K α (G) .\nA usual way to avoid this is to introduce some explicit exploration to the probability distributions in order to force a lower bound on the probabilities of all nodes, e.g., as in EXP3.G [2]. This would bring the linear dependence on K down to α in the above bad case, while, on the other hand, introducing a ln(KT ) factor which then worsens the overall dependence on the time horizon T .\nConsider now the variance term given by the analysis of the q-FTRL algorithm. As already argued in Section 3, we can reuse the variance bound in (16) for the case of negative Shannon entropy because i∈V :i∈N in G (i)\np(i) 2-q j∈N in G (i) p(j) ≤ i∈V :i∈N in G (i) p(i) j∈N in G (i) p(j)\nfor any q ∈ (0, 1), and such a bound is the best known so far for the general case of directed strongly observable graphs. However, we can be more clever in the way we utilize it. Similarly to the proof of [35,Theorem 14], we can gain an advantage from the adoption of q-FTRL by splitting the sum in the variance term into two sums according to some adequately chosen threshold β on the probabilities of the individual nodes. 
More precisely, by choosing β ≈ expln(K/α) ln K and q = 1 -1/(ln K), we can prove that i∈V :i∈N in G (i)\np(i) 2-q j∈N in G (i) p(j) = O α ln K 1 + ln K α .\nWe can further argue that, by following a similar analysis as in the proofs of Theorems 1 and 2, this variance bound would allow to show that the regret of q-FTRL is O αT 1 + ln(K/α) • ln K , where there is an additional ln K factor when compared to our regret bound in the undirected case (Theorem 2).\nThe presence of extra logarithmic factors is to be expected in the directed case, as many edges between distinct nodes might reduce the independence number of the graph, while providing information in one direction only. However, the undirected graph G ′ obtained from any directed strongly observable graph G by reciprocating edges between distinct nodes has the same independence number α(G ′ ) = α(G) but the regret guarantee given by the more general analysis of q-FTRL would introduce a spurious ln K multiplicative factor. We remark that all the currently available upper bounds on the variance term (either with negative Shannon entropy or negative q-Tsallis entropy regularizers) do not exactly reflect the phenomenon of a gradually disappearing logarithmic factor when the graph is closer to being undirected (i.e., has fewer unreciprocated edges).\nTaking these observations into account, we believe that it should be possible to achieve tighter guarantees that match our intuition, by improving the currently available tools. The bound on the variance term, for instance, is one part of the analysis that might be improvable. We might want to have a similar bound as ( 16) but with a sublinear dependence on α that varies according to the parameter q of the negative q-Tsallis entropy; e.g., ignoring logarithmic factors, we could expect it to become of order α q as we managed to prove for the undirected case (Lemma 1). Doing so could allow a better tuning of q that might lead to improved logarithmic factors in the regret." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements KE, EE, and NCB gratefully acknowledge the financial support from the MUR PRIN grant 2022EKNE5K (Learning in Markets and Society), funded by the NextGenerationEU program within the PNRR scheme (M4C2, investment 1.1), the FAIR (Future Artificial Intelligence Research) project, funded by the NextGenerationEU program within the PNRR-PE-AI scheme (M4C2, investment 1.3, line on Artificial Intelligence), and the EU Horizon CL4-2022-HUMAN-02 research and innovation action under grant agreement 101120237, project ELIAS (European Lighthouse of AI for Sustainability). TC gratefully acknowledges the support of the University of Ottawa through grant GR002837 (Start-Up Funds) and that of the Natural Sciences and Engineering Research Council of Canada (NSERC) through grants RGPIN-2023-03688 (Discovery Grants Program) and DGECR-2023-00208 (Discovery Grants Program, DGECR -Discovery Launch Supplement)." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b26", "b12", "b26", "b26" ], "table_ref": [], "text": "If P and Q are two distributions defined on the same space, let D KL (P Q) and δ(P, Q) be the KLdivergence and the total variation distance respectively between P and Q. Furthermore, let d(p q) be the KL-divergence between two Bernoulli random variables with means p and q. The following lemma provides an expression for the KL-divergence between two the probability distributions associated to two environments. Lemma 5. 
For a fixed policy, let µ and µ ′ be two environments as described above. Then,\nwhere\nProof. The proof is similar to that of Lemma 15.1 in [27]. Namely, we have in our case that\nwhere the last equality holds via (13).\nThe following standard lemma, adapted from [27], will be used in the sequel.\nLemma 6. Let P and Q be probability measures on the same measurable space (Ω, F ). Let a < b and X : Ω -→ [a, b] be an F -measurable random variable. Then,\nProof. We have, by Exercise 14.4 in [27], that Ω X(ω)dP (ω) -Ω X(ω)dQ(ω) ≤ (ba)δ(P, Q) , from which the lemma follows by applying Pinsker's inequality." } ]
[ { "authors": "J D Abernethy; C Lee; A Tewari", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Fighting bandits with a new kind of smoothness", "year": "2015" }, { "authors": "N Alon; N Cesa-Bianchi; O Dekel; T Koren", "journal": "", "ref_id": "b1", "title": "Online learning with feedback graphs: Beyond bandits", "year": "2015" }, { "authors": "N Alon; N Cesa-Bianchi; C Gentile; S Mannor; Y Mansour; O Shamir", "journal": "SIAM Journal on Computing", "ref_id": "b2", "title": "Nonstochastic multi-armed bandits with graph-structured feedback", "year": "2017" }, { "authors": "N Alon; N Cesa-Bianchi; C Gentile; Y Mansour", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b3", "title": "From bandits to experts: A tale of domination and independence", "year": "2013" }, { "authors": "J.-Y Audibert; S Bubeck", "journal": "COLT", "ref_id": "b4", "title": "Minimax policies for adversarial and stochastic bandits", "year": "2009" }, { "authors": "J.-Y Audibert; S Bubeck; G Lugosi", "journal": "Mathematics of Operations Research", "ref_id": "b5", "title": "Regret in online combinatorial optimization", "year": "2014" }, { "authors": "P Auer; N Cesa-Bianchi; Y Freund; R E Schapire", "journal": "IEEE", "ref_id": "b6", "title": "Gambling in a rigged casino: The adversarial multi-armed bandit problem", "year": "1995" }, { "authors": "P Auer; N Cesa-Bianchi; Y Freund; R E Schapire", "journal": "SIAM journal on computing", "ref_id": "b7", "title": "The nonstochastic multiarmed bandit problem", "year": "2002" }, { "authors": "N Cesa-Bianchi; Y Freund; D Haussler; D P Helmbold; R E Schapire; M K Warmuth", "journal": "Journal of the ACM (JACM)", "ref_id": "b8", "title": "How to use expert advice", "year": "1997" }, { "authors": "N Cesa-Bianchi; Y Freund; D P Helmbold; D Haussler; R E Schapire; M K Warmuth", "journal": "", "ref_id": "b9", "title": "How to use expert advice", "year": "1993" }, { "authors": "N Cesa-Bianchi; G Lugosi", "journal": "Cambridge University Press", "ref_id": "b10", "title": "Prediction, Learning, and Games", "year": "2006" }, { "authors": "N Cesa-Bianchi; G Lugosi", "journal": "Journal of Computer and System Sciences", "ref_id": "b11", "title": "Combinatorial bandits", "year": "2012" }, { "authors": "S.-H Chang; P C Cosman; L B Milstein", "journal": "IEEE Transactions on Communications", "ref_id": "b12", "title": "Chernoff-type bounds for the Gaussian error function", "year": "2011" }, { "authors": "H Chen; Y He; C Zhang", "journal": "", "ref_id": "b13", "title": "On interpolating experts and multi-armed bandits", "year": "2023" }, { "authors": "A Cohen; T Hazan; T Koren", "journal": "PMLR", "ref_id": "b14", "title": "Tight bounds for bandit combinatorial optimization", "year": "2017" }, { "authors": "C Dann; C Wei; J Zimmert", "journal": "PMLR", "ref_id": "b15", "title": "A blackbox approach to best of both worlds in bandits and beyond", "year": "2023" }, { "authors": "K Eldowa; N Cesa-Bianchi; A M Metelli; M Restelli", "journal": "IEEE", "ref_id": "b16", "title": "Information-theoretic regret bounds for bandits with fixed expert advice", "year": "2023" }, { "authors": "L Erez; T Koren", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Towards best-of-all-worlds online learning with feedback graphs", "year": "2021" }, { "authors": "E Esposito; F Fusco; D Van Der Hoeven; N Cesa-Bianchi", "journal": "", "ref_id": "b18", "title": "Learning on the edge: Online learning with 
stochastic feedback graphs", "year": "2022" }, { "authors": "D Haussler; J Kivinen; M Warmuth", "journal": "IEEE Transactions on Information Theory", "ref_id": "b19", "title": "Sequential prediction of individual sequences under general loss functions", "year": "1998" }, { "authors": "S Ito; D Hatano; H Sumita; K Takemura; T Fukunaga; N Kakimura; K.-I Kawarabayashi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b20", "title": "Improved regret bounds for bandit combinatorial optimization", "year": "2019" }, { "authors": "S Ito; T Tsuchiya; J Honda", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Nearly optimal best-of-both-worlds algorithms for online learning with feedback graphs", "year": "2022" }, { "authors": "T Jin; J Liu; H Luo", "journal": "", "ref_id": "b22", "title": "Improved best-of-both-worlds guarantees for multi-armed bandits: FTRL with general regularizers and multiple optimal arms", "year": "2023" }, { "authors": "R M Karp", "journal": "Plenum Press", "ref_id": "b23", "title": "Reducibility among combinatorial problems", "year": "1972" }, { "authors": "T Kocák; G Neu; M Valko; R Munos", "journal": "", "ref_id": "b24", "title": "Efficient learning by implicit exploration in bandit problems with side observations", "year": "2014" }, { "authors": "J Kwon; V Perchet", "journal": "Journal of Machine Learning Research", "ref_id": "b25", "title": "Gains and losses are fundamentally different in regret minimization: The sparse case", "year": "2016" }, { "authors": "T Lattimore; C Szepesvári", "journal": "Cambridge University Press", "ref_id": "b26", "title": "Bandit algorithms", "year": "2020" }, { "authors": "H Luo; H Tong; M Zhang; Y Zhang", "journal": "PMLR", "ref_id": "b27", "title": "Improved high-probability regret for adversarial bandits with time-varying feedback graphs", "year": "2023" }, { "authors": "S Mannor; O Shamir", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "From bandits to experts: On the value of side-observations", "year": "2011" }, { "authors": "F Orabona", "journal": "", "ref_id": "b29", "title": "A modern introduction to online learning", "year": "2019" }, { "authors": "C Rouyer; Y Seldin", "journal": "PMLR", "ref_id": "b30", "title": "Tsallis-inf for decoupled exploration and exploitation in multi-armed bandits", "year": "2020" }, { "authors": "C Rouyer; D Van Der Hoeven; N Cesa-Bianchi; Y Seldin", "journal": "", "ref_id": "b31", "title": "A near-optimal best-of-bothworlds algorithm for online learning with feedback graphs", "year": "2022" }, { "authors": "Y Seldin; G Lugosi", "journal": "", "ref_id": "b32", "title": "A lower bound for multi-armed bandits with expert advice", "year": "2016" }, { "authors": "N M Vural; H Gokcesu; K Gokcesu; S S Kozat", "journal": "IEEE Transactions on Signal Processing", "ref_id": "b33", "title": "Minimax optimal algorithms for adversarial bandit problem with multiple plays", "year": "2019" }, { "authors": "J Zimmert; T Lattimore", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b34", "title": "Connections between mirror descent, Thompson sampling and the information ratio", "year": "2019" }, { "authors": "J Zimmert; Y Seldin", "journal": "The Journal of Machine Learning Research", "ref_id": "b35", "title": "Tsallis-inf: An optimal algorithm for stochastic and adversarial bandits", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 108, 247.91, 150.45, 17.26 ], "formula_id": "formula_0", "formula_text": "i ∈ N G (i) or i ∈ N G (j) for all j = i." }, { "formula_coordinates": [ 3, 224.64, 282.53, 162.72, 31.09 ], "formula_id": "formula_1", "formula_text": "R T = E T t=1 ℓ t (I t ) -min i∈[K] T t=1 ℓ t (i) ." }, { "formula_coordinates": [ 3, 204.6, 629.25, 202.8, 30.6 ], "formula_id": "formula_2", "formula_text": "ψ q (x) = 1 1 -q 1 - i∈V x(i) q ∀x ∈ ∆ K ," }, { "formula_coordinates": [ 4, 246, 103.77, 258.8, 24.22 ], "formula_id": "formula_3", "formula_text": "ℓ t (i) = ℓ t (i) P t (i) I{I t ∈ N t (i)} ,(1)" }, { "formula_coordinates": [ 4, 365.4, 152.39, 107.61, 17.26 ], "formula_id": "formula_4", "formula_text": "E t [•] = E [• | I 1 , . . . , I t-1 ]." }, { "formula_coordinates": [ 4, 178.08, 203.15, 122.16, 17.26 ], "formula_id": "formula_5", "formula_text": "p 1 (i) ← 1/K for all i = 1, . . ." }, { "formula_coordinates": [ 4, 257.16, 330.89, 247.64, 29.17 ], "formula_id": "formula_6", "formula_text": "B t (q) = i∈V p t (i) 2-q P t (i) .(2)" }, { "formula_coordinates": [ 4, 179.52, 382.97, 252.84, 33.85 ], "formula_id": "formula_7", "formula_text": "B t (q) ≤ i∈V p t (i) 1-q ≤ i∈V p t (i) 1-q i∈V 1 1/q q = K q ," }, { "formula_coordinates": [ 4, 253.92, 447.33, 104.16, 27.93 ], "formula_id": "formula_8", "formula_text": "B t (q) ≤ i∈V p t (i) P t (i) ≤ α ." }, { "formula_coordinates": [ 4, 108, 578.03, 396.46, 63.07 ], "formula_id": "formula_9", "formula_text": "Lemma 1. Let G = (V, E) be any undirected graph with |V | = K vertices and independence number α(G) = α. Let b ∈ [0, 1], p ∈ ∆ K and consider any nonempty subset U ⊆ {v ∈ V : v ∈ N G (v)}. Then, v∈U p(v) 1+b u∈NG(v) p(u) ≤ α 1-b ." }, { "formula_coordinates": [ 4, 149.04, 667.55, 111.3, 17.26 ], "formula_id": "formula_10", "formula_text": "G[U ] = (U, E ∩ (U × U ))" }, { "formula_coordinates": [ 5, 234.6, 88.25, 133.94, 29.77 ], "formula_id": "formula_11", "formula_text": "Q(H) = v∈V (H) p(v) 1+b u∈NG(v) p(u)" }, { "formula_coordinates": [ 5, 133.08, 237.53, 345.72, 62.29 ], "formula_id": "formula_12", "formula_text": "Q(G r ) -Q(G r+1 ) = v∈Nr(vr ) p(v) 1+b u∈N1(v) p(u) ≤ v∈Nr(vr ) p(v) p(v r ) b u∈N1(vr) p(u) ≤ v∈N1(vr) p(v) u∈N1(vr) p(u) p(v r ) b = p(v r ) b ," }, { "formula_coordinates": [ 5, 207.12, 322.37, 197.76, 31.33 ], "formula_id": "formula_13", "formula_text": "Q(G) = s r=1 Q(G r ) -Q(G r+1 ) ≤ v∈S p(v) b ." }, { "formula_coordinates": [ 5, 184.8, 383.33, 319.88, 66.15 ], "formula_id": "formula_14", "formula_text": "v∈A p(v) b ≤ max x∈∆K v∈A x(v) b = |A| max x∈∆K v∈A x(v) b |A| ≤ |A| max x∈∆K 1 |A| v∈A x(v) b ≤ |A| 1-b ≤ α 1-b ,(3)" }, { "formula_coordinates": [ 5, 108, 543.59, 396.54, 59.85 ], "formula_id": "formula_15", "formula_text": "α t = α for some common value α ∈ [K]. 
If Algorithm 1 is run with input q = 1 2 1 + ln(K/α) ln(K/α) 2 + 4 + 2 ∈ [1/2, 1) and η = 2qK 1-q T (1 -q)α q ," }, { "formula_coordinates": [ 5, 217.08, 644.97, 287.6, 49.8 ], "formula_id": "formula_16", "formula_text": "E t ℓ t (i) = ℓ t (i), Lemma 2 in Appendix A implies that R T ≤ K 1-q η(1 -q) + η 2q T t=1 E i∈V p t (i) 2-q P t (i)(4)" }, { "formula_coordinates": [ 5, 233.64, 695.09, 271.04, 31.95 ], "formula_id": "formula_17", "formula_text": "≤ K 1-q η(1 -q) + η 2q α q T ,(5)" }, { "formula_coordinates": [ 6, 130.2, 115.97, 355.38, 84.03 ], "formula_id": "formula_18", "formula_text": "2K 1-q α q q(1 -q) T = 2T exp 1 + 1 2 ln(αK) - 1 2 ln (K/α) 2 + 4 2 + ln (K/α) 2 + 4 ≤ 2eαT 2 + ln (K/α) 2 + 4 ≤ 2 eαT ln (K/α) 2 + 4 ≤ 2 eαT (2 + ln(K/α)) ." }, { "formula_coordinates": [ 6, 108, 346.19, 396.18, 28.06 ], "formula_id": "formula_19", "formula_text": "Define S t = {i ∈ V : i / ∈ N t (i)} as the subset of actions without self-loops in the feedback graph G t at each time step t ∈ [T ]." }, { "formula_coordinates": [ 6, 202.8, 397.71, 302, 46.86 ], "formula_id": "formula_20", "formula_text": "ℓ t (i) =    ℓt(i) Pt(i) I {I t ∈ N t (i)} if i ∈ V \\ J t ℓt(i)-1 Pt(i) I {I t ∈ N t (i)} + 1 if i ∈ J t(6)" }, { "formula_coordinates": [ 6, 108, 508.29, 396.54, 61.56 ], "formula_id": "formula_21", "formula_text": "[K]. If Algorithm 1 is run with input q = 1 2 1 + ln(K/α) ln(K/α) 2 + 4 + 2 ∈ [1/2, 1) and η = 1 3 2qK 1-q T (1 -q)α q ," }, { "formula_coordinates": [ 6, 270.48, 694.61, 70.92, 31.09 ], "formula_id": "formula_22", "formula_text": "α T = 1 T T t=1 α t ." }, { "formula_coordinates": [ 7, 250.68, 114.89, 110.64, 34.54 ], "formula_id": "formula_23", "formula_text": "H t (q) = i∈V \\St p t (i) 2-q P t (i) ." }, { "formula_coordinates": [ 7, 251.04, 215.21, 108.01, 31.33 ], "formula_id": "formula_24", "formula_text": "1 T t s=Tr H s (q r ) 1/qr > 2 r+1" }, { "formula_coordinates": [ 7, 149.16, 347.57, 314.31, 32.07 ], "formula_id": "formula_25", "formula_text": "q r = 1 2 1 + ln(K/2 r ) ln(K/2 r ) 2 + 4 + 2 and η r = 2q r K 1-qr 11T (1 -q r ) (2 r )" }, { "formula_coordinates": [ 7, 132.96, 424.61, 149.06, 41.67 ], "formula_id": "formula_26", "formula_text": "if 1 T t s=Tr H s (q r ) 1/qr > 2 r+1 then r ← r + 1 T r ← t + 1" }, { "formula_coordinates": [ 7, 161.16, 502.19, 112.09, 21.07 ], "formula_id": "formula_27", "formula_text": "Let C = 4 √ 6e √ π+ √ 4-2 ln 2 ln 2" }, { "formula_coordinates": [ 7, 209.52, 535.01, 192.96, 31.09 ], "formula_id": "formula_28", "formula_text": "R T ≤ C T t=1 α t 2 + ln K α T + log 2 α T ." }, { "formula_coordinates": [ 7, 212.76, 642.17, 187.68, 31.33 ], "formula_id": "formula_29", "formula_text": "1 T t s=Tr B s (q r ) 1/qr ≤ 1 T t s=Tr α s ≤ α T ≤ 2 n ." }, { "formula_coordinates": [ 8, 233.04, 92.33, 271.64, 35.07 ], "formula_id": "formula_30", "formula_text": "K 1-qr η r (1 -q r ) + η r 2q r E Tr+1-2 t=Tr B t (q r ) .(7)" }, { "formula_coordinates": [ 8, 192.24, 149.21, 227.52, 61.57 ], "formula_id": "formula_31", "formula_text": "Tr+1-2 t=Tr B t (q r ) ≤ T r:r+1 1 T r:r+1 Tr+1-2 t=Tr B t (q r ) 1/qr qr ≤ T r:r+1 T T r:r+1 2 r+1 qr ≤ 2T 2 r qr ," }, { "formula_coordinates": [ 8, 206.16, 267.53, 199.68, 30.97 ], "formula_id": "formula_32", "formula_text": "R T ≤ 4 √ 3eT n-1 r=0 2 r ln e 2 K2 -r + log 2 α T ," }, { "formula_coordinates": [ 8, 250.92, 571.89, 110.16, 24.6 ], "formula_id": "formula_33", "formula_text": "R T ≥ 1 18 √ 2 αT log α K." 
}, { "formula_coordinates": [ 9, 116.04, 133.18, 102.35, 145.54 ], "formula_id": "formula_35", "formula_text": "C 1,1(2, 1, 1) (2, 1, 2) (2, 2, 1)" }, { "formula_coordinates": [ 9, 116.04, 74.53, 241.13, 168.94 ], "formula_id": "formula_36", "formula_text": "C 1,2 G 1 (1, 1, 1) (1, 1, 2) (2, 1, 1)" }, { "formula_coordinates": [ 9, 254.94, 133.18, 48.38, 145.54 ], "formula_id": "formula_37", "formula_text": "C 2,1(1, 2, 1)" }, { "formula_coordinates": [ 9, 254.94, 74.53, 187.16, 204.19 ], "formula_id": "formula_38", "formula_text": "(2, 2, 1) (2, 2, 2) C 2,2 G 2 (1, 1, 1)" }, { "formula_coordinates": [ 9, 463.77, 169.12, 32.18, 9.96 ], "formula_id": "formula_39", "formula_text": "(2, 1, 1)" }, { "formula_coordinates": [ 9, 393.84, 133.18, 48.26, 145.54 ], "formula_id": "formula_40", "formula_text": "C 3,1(1, 1, 2)" }, { "formula_coordinates": [ 9, 463.77, 268.76, 32.18, 9.96 ], "formula_id": "formula_41", "formula_text": "(2, 1, 2)" }, { "formula_coordinates": [ 9, 393.84, 74.53, 64.76, 168.94 ], "formula_id": "formula_42", "formula_text": "C 3,2 G 3" }, { "formula_coordinates": [ 9, 108, 485.21, 395.46, 22.78 ], "formula_id": "formula_43", "formula_text": "G i = (V, E i ) is such that (a, a ′ ) ∈ E i if and only if a(i) = a ′ (i)." }, { "formula_coordinates": [ 13, 200.04, 138.05, 211.92, 34.11 ], "formula_id": "formula_44", "formula_text": "R T ≤ K 1-q (1 -q)η + η 2q T t=1 E i∈V p t (i) 2-q ℓ t (i) 2 ." }, { "formula_coordinates": [ 13, 355.92, 212.03, 127.67, 17.26 ], "formula_id": "formula_45", "formula_text": "[T ], E t [•] = E[• | I 1 , . . . , I t-1 ]" }, { "formula_coordinates": [ 13, 193.2, 251.85, 225.6, 21.09 ], "formula_id": "formula_46", "formula_text": "E t ℓ t (I t ) = i∈V p t (i)ℓ t (i) and E t ℓ t = ℓ t ," }, { "formula_coordinates": [ 13, 133.92, 306.53, 344.16, 31.09 ], "formula_id": "formula_47", "formula_text": "R T = E T t=1 ℓ t (I t ) - T t=1 ℓ t (i * ) = E T t=1 p t -e i * , ℓ t = E T t=1 p t -e i * , ℓ t ," }, { "formula_coordinates": [ 13, 160.8, 484.79, 264.48, 56.86 ], "formula_id": "formula_48", "formula_text": "u ∈ ∆ K , T t=1 p t -u, y t ≤ K 1-q (1 -q)η + η 2q T t=1 i∈V p t (i) 2-q y t (i) 2 ." }, { "formula_coordinates": [ 13, 146.64, 589.13, 312.03, 103.71 ], "formula_id": "formula_49", "formula_text": "T t=1 p t -u, y t ≤ ψ q (u) -ψ q (p 1 ) η + T t=1 p t -p t+1 , y t - 1 η D ψq (p t+1 , p t ) = K 1-q -1 (1 -q)η + T t=1 p t -p t+1 , y t - 1 η D ψq (p t+1 , p t ) ≤ K 1-q (1 -q)η + T t=1 p t -p t+1 , y t - 1 η D ψq (p t+1 , p t ) ," }, { "formula_coordinates": [ 14, 141.84, 95.37, 331.69, 239.14 ], "formula_id": "formula_50", "formula_text": "p t -p t+1 ,y t - 1 η D ψq (p t+1 , p t ) ≤ p t -p t+1 , y t - 1 η D ψq (p t+1 , p t ) = 1 η p t -p t+1 , ηy t - 1 2η p t+1 -p t 2 ∇ 2 ψq(zt) ≤ η 2 y t 2 (∇ 2 ψq(zt)) -1 = η 2q i∈V z t (i) 2-q y t (i) 2 = η 2q i∈V γ t p t+1 (i) + (1 -γ t )p t (i) 2-q y t (i) 2 ≤ η 2q i∈V p t (i) 2-q y t (i) 2 + γ t η 2q i∈V p t+1 (i) 2-q -p t (i) 2-q y t (i) 2 ≤ η 2q i∈V p t (i) 2-q y t (i) 2 ≤ η 2q i∈V p t (i) 2-q y t (i) 2 ," }, { "formula_coordinates": [ 14, 185.88, 440.99, 175.62, 36.07 ], "formula_id": "formula_51", "formula_text": "n-1 r=0 2 r ln e 2 b2 -r ≤ √ 2π + 2 √ 2 -ln 2 ln 2" }, { "formula_coordinates": [ 14, 205.08, 536.57, 201.84, 31.09 ], "formula_id": "formula_52", "formula_text": "n-1 r=0 2 r ln e 2 b2 -r ≤ n 0 2 r ln e 2 b2 -r dr ." 
}, { "formula_coordinates": [ 15, 171.84, 100.31, 264.87, 129.46 ], "formula_id": "formula_53", "formula_text": "n 0 2 r ln e 2 b2 -r dr ≤ e √ b ln 2 √ 2π 2 n e 2 b + 2 2 n ln(e 2 b2 -n ) e 2 b = √ 2 n ln 2 √ 2π + 2 ln(e 2 b2 -n ) ≤ √ 2a ln 2 √ 2π + 2 ln e 2 b 2a ≤ √ 2π + 2 √ 2 -ln 2 ln 2 a ln e 2 b a ," }, { "formula_coordinates": [ 15, 108, 338.73, 396.54, 62.28 ], "formula_id": "formula_54", "formula_text": "[K]. If Algorithm 1 is run with input q = 1 2 1 + ln(K/α) ln(K/α) 2 + 4 + 2 ∈ [1/2, 1) and η = 1 3 2qK 1-q T (1 -q)α q ," }, { "formula_coordinates": [ 15, 193.56, 502.97, 224.88, 30.97 ], "formula_id": "formula_55", "formula_text": "R T = E T t=1 p t -e i * , ℓ t = E T t=1 p t -e i * , ℓ t ," }, { "formula_coordinates": [ 15, 237.24, 579.65, 143.88, 31.09 ], "formula_id": "formula_56", "formula_text": "t s=1 ℓ s , p = t s=1 z s + t s=1 ℓ s , p ," }, { "formula_coordinates": [ 15, 155.28, 688.85, 345.49, 34.11 ], "formula_id": "formula_57", "formula_text": "E T t=1 p t -e i * , ℓ t ≤ K 1-q η(1 -q) + η 2q T t=1 E i∈V p t (i) 2-q E t ℓ t (i) 2 . (8" }, { "formula_coordinates": [ 15, 500.77, 699.71, 3.91, 9.03 ], "formula_id": "formula_58", "formula_text": ")" }, { "formula_coordinates": [ 16, 130.2, 95.45, 351.54, 96.22 ], "formula_id": "formula_59", "formula_text": "i∈V p t (i) 2-q E t ℓ t (i) 2 ≤ 2 i∈V \\Jt p t (i) 2-q E t ℓ t (i) 2 + 2 E t z 2 t i∈V \\Jt p t (i) 2-q + 1 ≤ 2 i∈V \\Jt p t (i) 2-q P t (i) + 2 E t z 2 t i∈V \\Jt p t (i) 2-q + 1 ≤ 2 i∈V \\Jt p t (i) 2-q P t (i) + 3 ," }, { "formula_coordinates": [ 16, 157.2, 227.69, 297.6, 37.66 ], "formula_id": "formula_60", "formula_text": "E t z 2 t i∈V \\Jt p t (i) 2-q = I {J t = ∅} 1 -ℓ t (j t ) 2 1 -p t (j t ) i∈V \\Jt p t (i) 2-q ≤ 1 ." }, { "formula_coordinates": [ 16, 176.52, 311.69, 258.96, 31.95 ], "formula_id": "formula_61", "formula_text": "i∈St\\Jt p t (i) 2-q P t (i) = i∈St\\Jt p t (i) 2-q 1 -p t (i) ≤ 2 i∈St\\Jt p t (i) 2-q ≤ 2 ." }, { "formula_coordinates": [ 16, 170.88, 382.01, 329.89, 30.97 ], "formula_id": "formula_62", "formula_text": "i∈V p t (i) 2-q E t ℓ t (i) 2 ≤ 2 i∈St p t (i) 2-q P t (i) + 7 ≤ 2α q + 7 ≤ 9α q . (9" }, { "formula_coordinates": [ 16, 500.77, 390.83, 3.91, 9.03 ], "formula_id": "formula_63", "formula_text": ")" }, { "formula_coordinates": [ 16, 240.6, 439.13, 130.8, 82.59 ], "formula_id": "formula_64", "formula_text": "R T ≤ K 1-q η(1 -q) + 9η 2q α q T = 3 2K 1-q α q q(1 -q) T ≤ 6 eαT (2 + ln(K/α)) ," }, { "formula_coordinates": [ 16, 107.64, 597.59, 165.61, 21.07 ], "formula_id": "formula_65", "formula_text": "Theorem 3. Let C = 4 √ 6e √ π+ √ 4-2 ln 2 ln 2" }, { "formula_coordinates": [ 16, 209.52, 630.17, 192.96, 31.09 ], "formula_id": "formula_66", "formula_text": "R T ≤ C T t=1 α t 2 + ln K α T + log 2 α T ." }, { "formula_coordinates": [ 17, 212.28, 102.89, 188.52, 31.33 ], "formula_id": "formula_67", "formula_text": "1 T t s=Tr H s (q r ) 1/qr ≤ 1 T t s=Tr α s ≤ α T ≤ 2 n ." 
}, { "formula_coordinates": [ 17, 201.36, 203.33, 303.44, 104.17 ], "formula_id": "formula_68", "formula_text": "R T = E T t=1 ℓ t (I t ) -ℓ t (i * ) ≤ E n-1 r=0 Tr+1-2 t=Tr ℓ t (I t ) -ℓ t (i * ) + n -1 ≤ E n-1 r=0 Tr+1-2 t=Tr ℓ t (I t ) -ℓ t (i * ) + log 2 α T .(10)" }, { "formula_coordinates": [ 17, 109.68, 346.49, 395.12, 333.39 ], "formula_id": "formula_69", "formula_text": "E Tr+1-2 t=Tr ℓ t (I t ) -ℓ t (i * ) = E T t=1 I r t = r, 1 T t s=Tr t H s (q rt ) 1/qr t ≤ 2 rt+1 ℓ t (I t ) -ℓ t (i * ) (a) = E T t=1 I r t = r, 1 T t s=Tr t H s (q rt ) 1/qr t ≤ 2 rt+1 p t -e i * , ℓ t (b) = E T t=1 I r t = r, 1 T t s=Tr t H s (q rt ) 1/qr t ≤ 2 rt+1 p t -e i * , ℓ t = E Tr+1-2 t=Tr p t -e i * , ℓ t (c) ≤ K 1-qr η r (1 -q r ) + η r 2q r E Tr+1-2 t=Tr i∈V p t (i) 2-qr ℓ t (i) 2 (d) = K 1-qr η r (1 -q r ) + η r 2q r E T t=1 I r t = r, 1 T t s=Tr t H s (q rt ) 1/qr t ≤ 2 rt+1 E t i∈V p t (i) 2-qr ℓ t (i) 2 (e) ≤ K 1-qr η r (1 -q r ) + η r 2q r E T t=1 I r t = r, 1 T t s=Tr t H s (q rt ) 1/qr t ≤ 2 rt+1 (2H t (q r ) + 7) = K 1-qr η r (1 -q r ) + η r 2q r E Tr+1-2 t=Tr (2H t (q r ) + 7) ,(11)" }, { "formula_coordinates": [ 18, 193.8, 114.89, 223.18, 120.51 ], "formula_id": "formula_70", "formula_text": "Tr+1-2 t=Tr H t (q r ) = T r:r+1 T r:r+1 Tr+1-2 t=Tr H t (q r ) 1/qr qr ≤ T r:r+1 1 T r:r+1 Tr+1-2 t=Tr H t (q r ) 1/qr qr ≤ T r:r+1 T T r:r+1 2 r+1 qr ≤ 2T 2 r qr ," }, { "formula_coordinates": [ 18, 126.24, 288.65, 359.52, 61.83 ], "formula_id": "formula_71", "formula_text": "E Tr+1-2 t=Tr ℓ t (I t ) -ℓ t (i * ) ≤ K 1-qr η r (1 -q r ) + 11η r 2q r T (2 r ) qr ≤ 2 11eT 2 r 2 + ln K2 -r ≤ 4 3eT 2 r ln e 2 K2 -r ." }, { "formula_coordinates": [ 18, 130.68, 375.29, 350.52, 61.71 ], "formula_id": "formula_72", "formula_text": "E n-1 r=0 Tr+1-2 t=Tr ℓ t (I t ) -ℓ t (i * ) ≤ 4 √ 3eT n-1 r=0 2 r ln e 2 K2 -r ≤ 4 √ 6e √ π + √ 4 -2 ln 2 ln 2 α T T 2 + ln (K/α T ) ," }, { "formula_coordinates": [ 18, 250.92, 574.05, 110.16, 24.72 ], "formula_id": "formula_73", "formula_text": "R T ≥ 1 18 √ 2 αT log α K." }, { "formula_coordinates": [ 19, 219.6, 196.97, 159.75, 31.09 ], "formula_id": "formula_74", "formula_text": "R T (µ) = max a∈V E µ T t=1 (ℓ t (A t ) -ℓ t (a))" }, { "formula_coordinates": [ 19, 178.56, 276.53, 326.24, 135.85 ], "formula_id": "formula_75", "formula_text": "R T (µ) = max a∈V T t=1 E µ E µ E µ ℓ t (A t ) -ℓ t (a) G t , A t A t = max a∈V T t=1 E µ E µ µ(A t ; G t ) -µ(a; G t ) A t = max a∈V T t=1 E µ M i=1 U G (G i )(µ(A t ; G i ) -µ(a; G i )) = max a∈V 1 M M i=1 E µ T t=1 (µ(A t ; G i ) -µ(a; G i )) .(12)" }, { "formula_coordinates": [ 19, 238.32, 438.95, 135.36, 20.18 ], "formula_id": "formula_76", "formula_text": "sup (ℓt) T t=1 ,(Gt) T t=1 R T ≥ sup µ R T (µ) ." }, { "formula_coordinates": [ 19, 159.36, 540.21, 150.25, 11.01 ], "formula_id": "formula_77", "formula_text": "λ t = γ t (A t ; G t )1 K/α = ℓ t (A t )1 K/α" }, { "formula_coordinates": [ 19, 211.92, 574.13, 292.88, 18.99 ], "formula_id": "formula_78", "formula_text": "P λ µ (γ t = 1 K/α | G t = G, A t = a) = µ(a; G) .(13)" }, { "formula_coordinates": [ 19, 123.96, 598.85, 238.33, 18.4 ], "formula_id": "formula_79", "formula_text": "H t = (A 1 , G 1 , λ 1 , . . . , A t , G t , λ t ) ∈ (V × G × {0, 1} K/α" }, { "formula_coordinates": [ 19, 196.8, 671.45, 218.4, 31.09 ], "formula_id": "formula_80", "formula_text": "P µ (H T ) = T t=1 π t (A t | H t-1 )U G (G t )P λ µ (λ t | G t , A t ) ." 
}, { "formula_coordinates": [ 21, 239.88, 133.89, 132.24, 23.76 ], "formula_id": "formula_81", "formula_text": "µ a (j; G i ) = 1 2 -εI{a(i) = j} ," }, { "formula_coordinates": [ 21, 225.96, 235.37, 159.09, 25.45 ], "formula_id": "formula_82", "formula_text": "µ -i a (j; G s ) = 1 2 , if s = i µ a (j; G s ), otherwise." }, { "formula_coordinates": [ 21, 213.84, 304.73, 184.32, 18.87 ], "formula_id": "formula_83", "formula_text": "a ∼ i a ′ ⇐⇒ ∀s ∈ [M ] \\ {i}, a ′ (s) = a(s) ," }, { "formula_coordinates": [ 21, 168.36, 489.89, 255.32, 136.09 ], "formula_id": "formula_84", "formula_text": "R T (µ a ) = M i=1 1 M E µa T t=1 (µ a (A t ; G i ) -µ a (a; G i )) = M i=1 1 M E µa T t=1 1 2 -εI{A t (i) = a(i)} - 1 2 -ε = ε M M i=1 E µa T t=1 (1 -I{A t (i) = a(i)}) = ε M M i=1 T -N µa (i, a; T ) ," }, { "formula_coordinates": [ 21, 164.4, 694.49, 340.4, 31.21 ], "formula_id": "formula_85", "formula_text": "R T (µ a ) ≥ ε M M i=1 T -N µ -i a (i, a; T ) -T 1 2 D KL P µ -i a P µa .(14)" }, { "formula_coordinates": [ 22, 153.96, 103.73, 299.58, 253.72 ], "formula_id": "formula_86", "formula_text": "D KL P µ -i a P µa = 1 M M s=1 a ′ ∈V N µ -i a (a ′ ; T )d µ -i a (a ′ ; G s ) µ a (a ′ ; G s ) = 1 M a ′ ∈V N µ -i a (a ′ ; T )d µ -i a (a ′ ; G i ) µ a (a ′ ; G i ) = 1 M a ′ ∈V N µ -i a (a ′ ; T )d 1/2 1/2 -εI{a ′ (i) = a(i)} = 1 M a ′ ∈V I{a ′ (i) = a(i)}N µ -i a (a ′ ; T )d 1/2 1/2 -ε ≤ cε 2 M a ′ ∈V I{a ′ (i) = a(i)}N µ -i a (a ′ ; T ) = cε 2 M a ′ ∈V I{a ′ (i) = a(i)} E µ -i a T t=1 I{A t = a ′ } = cε 2 M E µ -i a T t=1 I{A t (i) = a(i)} = cε 2 M N µ -i a (i, a; T ) ," }, { "formula_coordinates": [ 22, 166.32, 416.09, 338.48, 31.21 ], "formula_id": "formula_87", "formula_text": "R T (µ a ) ≥ ε M M i=1 T -N µ -i a (i, a; T ) -εT c 2M N µ -i a (i, a; T ) .(15)" }, { "formula_coordinates": [ 22, 108, 497.63, 249.9, 53.35 ], "formula_id": "formula_88", "formula_text": "Fix i ∈ [M ]. Notice that for W ∈ V / ∼ i , a∈W I{A t (i) = a(i)} = 1" }, { "formula_coordinates": [ 22, 160.08, 596.73, 287.22, 125.76 ], "formula_id": "formula_89", "formula_text": "1 α M a∈V N µ -i a (i, a; T ) = 1 α M W ∈V /∼i a∈W N µ -i a (i, a; T ) = 1 α M W ∈V /∼i a∈W N µ -i W (i, a; T ) = 1 α M W ∈V /∼i E µ -i W T t=1 a∈W I{A t (i) = a(i)} = 1 α M α M-1 T = T α ." }, { "formula_coordinates": [ 23, 129.24, 91.41, 72.76, 23.85 ], "formula_id": "formula_90", "formula_text": "sup µ R T (µ) ≥ 1 α M" }, { "formula_coordinates": [ 24, 108, 226.85, 397.17, 29.43 ], "formula_id": "formula_91", "formula_text": "G = (V, E), let N in G (i) = {j ∈ V : (j, i) ∈ E} be the in-neighbourhood of node i ∈ V in G, and let N out G (i) = {j ∈ V : (i, j)" }, { "formula_coordinates": [ 24, 199.56, 404.67, 211.21, 36.9 ], "formula_id": "formula_92", "formula_text": "ℓ t (i) =    ℓt(i) Pt(i) I I t ∈ N in t (i) if i ∈ V \\ J t ℓt(i)-1" }, { "formula_coordinates": [ 24, 258.72, 581.73, 241.89, 27.28 ], "formula_id": "formula_93", "formula_text": "p(i) j∈N in G (i) p(j) ≤ 4α(G) ln 4K α(G)β . 
(16" }, { "formula_coordinates": [ 24, 500.61, 589.07, 4.19, 9.03 ], "formula_id": "formula_94", "formula_text": ")" }, { "formula_coordinates": [ 24, 185.76, 694.53, 180.15, 27.28 ], "formula_id": "formula_95", "formula_text": "p(i) j∈N in G (i) p(j) = K + 1 2 = 1 2 log 2 4 min i p(i)" }, { "formula_coordinates": [ 25, 247.2, 151.37, 177.26, 31.88 ], "formula_id": "formula_96", "formula_text": "p(i) 2-q j∈N in G (i) p(j) ≤ i∈V :i∈N in G (i) p(i) j∈N in G (i) p(j)" }, { "formula_coordinates": [ 25, 248.04, 264.05, 176.64, 28.64 ], "formula_id": "formula_97", "formula_text": "p(i) 2-q j∈N in G (i) p(j) = O α ln K 1 + ln K α ." } ]
On the Minimax Regret for Online Learning with Feedback Graphs
In this work, we improve on the upper and lower bounds for the regret of online learning with strongly observable undirected feedback graphs. The best known upper bound for this problem is O(√(αT ln K)), where K is the number of actions, α is the independence number of the graph, and T is the time horizon. The √(ln K) factor is known to be necessary when α = 1 (the experts case). On the other hand, when α = K (the bandits case), the minimax rate is known to be Θ(√(KT)), and a lower bound Ω(√(αT)) is known to hold for any α. Our improved upper bound O(√(αT(1 + ln(K/α)))) holds for any α and matches the lower bounds for bandits and experts, while interpolating intermediate cases. To prove this result, we use FTRL with q-Tsallis entropy for a carefully chosen value of q ∈ [1/2, 1) that varies with α. The analysis of this algorithm requires a new bound on the variance term in the regret. We also show how to extend our techniques to time-varying graphs, without requiring prior knowledge of their independence numbers. Our upper bound is complemented by an improved Ω(√(αT (ln K)/(ln α))) lower bound for all α > 1, whose analysis relies on a novel reduction to multitask learning. This shows that a logarithmic factor is necessary as soon as α < K.
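As a rough illustration of the algorithmic ingredient named in the abstract — FTRL with the q-Tsallis entropy regularizer over the probability simplex — the sketch below computes the FTRL distribution for given cumulative loss estimates by bisecting on the Lagrange multiplier of the sum-to-one constraint. This is a minimal sketch under our own assumptions: the function name, the bisection tolerance, and the loss-shifting step are illustrative choices, and the paper's specific tuning of q and η as functions of K, α and T is not reproduced here.

```python
import numpy as np

def q_tsallis_ftrl_distribution(cum_loss, eta, q, tol=1e-10):
    """FTRL step with the q-Tsallis entropy psi_q(p) = (1 - sum_i p_i**q) / (1 - q), q in (0, 1).

    Returns the p on the simplex minimizing eta * <p, cum_loss> + psi_q(p).
    Stationarity gives p_i = (q / ((1 - q) * (eta * L_i + lam)))**(1 / (1 - q)),
    where lam is the multiplier of the sum-to-one constraint, found by bisection.
    """
    # Shifting all losses by a constant does not change the minimizer over the simplex.
    c = eta * (np.asarray(cum_loss, dtype=float) - np.min(cum_loss))

    def p_of(lam):
        return (q / ((1.0 - q) * (c + lam))) ** (1.0 / (1.0 - q))

    lo, hi = 1e-12, 1.0
    while p_of(hi).sum() > 1.0:   # grow the upper bracket until total mass drops below 1
        hi *= 2.0
    while hi - lo > tol:          # bisect: total mass is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if p_of(mid).sum() > 1.0:
            lo = mid
        else:
            hi = mid
    p = p_of(hi)
    return p / p.sum()            # absorb the residual bisection error
```

With q = 1/2 this recovers a Tsallis-INF-style update of the kind used for bandits, while q close to 1 approaches the exponential-weights (Shannon entropy) update used for experts, which is consistent with the abstract's choice of q ∈ [1/2, 1) varying with α.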
Khaled Eldowa; Emmanuel Esposito; Tommaso Cesari; Nicolò Cesa-Bianchi
[ { "figure_caption": "Figure 1 :1Figure 1: This figure shows an example of the multi-task bandit construction used to prove the lower bound.Here, K = 8 and α = 2; thus, the number of games is M = 3. Each action is identified by a tuple of three numbers, each corresponding to a choice of one out of a pair of \"base actions\" in each game. Each of the three graphs in the figure corresponds to a game, such that two actions share an edge if and only if they choose the same base action in the corresponding game. At every round, a graph is randomly drawn, and all actions belonging to the same clique suffer the same loss.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "erfc (ln x)/2 -2 (ln x)/x e 2 b e 2 b2 -n ≤ e √ b ln 2 √ 2π • erfc ln(e 2 b2 -n )/2 + 2 2 n ln(e 2 b2 -n ) e 2 b ,", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" } ]
[{"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work provides the concept of feedback graphs, which the citing paper adopts in their research on online learning models and the relationship between them."}, {"Category": "Extension or Continuation", "Citation": "[3,4]", "Explanation": "The cited works provide the best known upper and lower bounds on regret in the feedback graph setting, which the citing paper extends by exploring the effect of the independence number on regret."}, {"Category": "Supporting Evidence", "Citation": "[9,10]", "Explanation": "The cited works provide the known tight upper bound for the experts case, which the citing paper uses as a foundational result in their research on the relationship between online learning models and feedback graphs."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work provides a well-known method of using the q-Tsallis entropy regularizer in the FTRL algorithm to achieve a regret bound of O \u221a KT when \u03b1 = K (bandits case), which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[29,Lemma 3]", "Explanation": "The cited work provides a standard result on the variance term in the regret of q-FTRL, which the citing paper extends to arbitrary values of q in order to prove the result in Theorem 1."}, {"Category": "Supporting Evidence", "Citation": "(the doubling trick)", "Explanation": "The doubling trick is a technique used in the analysis of the new upper bound in the citing paper, which is essential in obtaining the result without the need for knowing or computing the value of \u03b1."}, {"Category": "Supporting Evidence", "Citation": "(the uninformed case)", "Explanation": "The analysis of the doubling trick in the uninformed case is complicated by the non-trivial dependence of the regret on the sequence of \u03b1 t . 
This highlights the need for a more in-depth understanding of the relationship between the regret and the sequence of \u03b1 t in the context of the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(the hard instance of the multi-task bandits problem)", "Explanation": "The hard instance of the multi-task bandits problem is used to define a new lower bound of \u2126 \u03b1T log \u03b1 K for all \u03b1 > 1, which is the first result showing the necessity of a logarithmic factor in the minimax regret for all \u03b1 < K in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work provides a known lower bounding technique for multi-task bandits that the citing paper adapts to prove their result."}, {"Category": "Methodological Basis", "Citation": "[35,Section 4]", "Explanation": "The cited work provides a method of choosing q as a function of K to prove a regret bound for directed feedback graphs, which the citing paper adopts in their research."}, {"Category": "Supporting Evidence", "Citation": "[31]", "Explanation": "The cited work provides a specific choice of q = 2 3 to improve the analysis of regret in bandits with decoupled exploration and exploitation, which the citing paper uses to support their research."}, {"Category": "Extension or Continuation", "Citation": "[36,23]", "Explanation": "The cited works derive regret bounds for arbitrary choices of q in the best-of-both-worlds analysis of bandits, which the citing paper extends to the graph feedback problem by combining the 1 2 -Tsallis entropy and Shannon entropy regularizers in different ways."}, {"Category": "Data Source", "Citation": "[18,22]", "Explanation": "The cited works have combined the 1 2 -Tsallis entropy and Shannon entropy regularizers in different ways to obtain best-of-both-worlds guarantees for the graph feedback problem, which the citing paper uses as a data source for their research."}, {"Category": "Supporting Evidence", "Citation": "[32]", "Explanation": "The cited work provides the idea of using values of q in feedback graphs, which the citing paper builds upon to improve the dependence on graph structure by choosing a suitable value of q."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work uses a similar approach based on the q-Tsallis regularizer for the problem of multiarmed bandits with sparse losses, which the citing paper extends to achieve a O sT ln(K/s) regret bound."}, {"Category": "Supporting Evidence", "Citation": "[33]", "Explanation": "The lower bound in the citing paper is reminiscent of the \u2126 KT log K N lower bound proved in [33] for the problem of bandits with expert advice, providing a reference for the construction of the lower bound in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work introduces the concept of strongly observable graphs, which the citing paper adopts in their study of the game between a learner and the environment."}, {"Category": "Methodological Basis", "Citation": "[29,3]", "Explanation": "The cited works provide a method for handling the variance term in the standard regret analysis of q-FTRL, which the citing paper adopts to analyze the performance of the algorithm."}, {"Category": "Supporting Evidence", "Citation": "[2,4,16,19,22,25,32,35]", "Explanation": "The cited works provide results that do not improve the regret bound in the given setting, which is a key factor in achieving the desired result in 
the citing paper."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work provides a method for handling the large variance term in the loss estimates due to the presence of actions without self-loops in the feedback graph, which the citing paper adopts to improve the regret bound in the case of strongly observable undirected feedback graphs."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work also offers a method for handling the large variance term in the loss estimates due to the presence of actions without self-loops in the feedback graph, which the citing paper adopts to improve the regret bound in the case of strongly observable undirected feedback graphs."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work provides the basis for the new loss estimates used in the citing paper to improve the performance of the feedback graphs in the action selection process."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work provides a doubling trick that the citing paper uses to guess the value of \u03b1 T in Algorithm 2, which serves as a methodological basis for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[3,29]", "Explanation": "The cited works provide a known lower bound of order \u221a \u03b1T for the fixed graph case, which the citing paper adopts in their research to establish a new lower bound for the minimax regret in the general case."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The cited work provides a lower bound of order \u221a T ln K for the experts case (\u03b1 = 1), which the citing paper extends by providing a new lower bound that interpolates between the two bounds for intermediate values of \u03b1."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work on the multitask bandit problem (MTB) serves as the basis for constructing a sequence of feedback graphs and losses in the proof of the theorem presented in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[6,15,21]", "Explanation": "The cited works on lower bounds in combinatorial bandits are extended in the proof of the theorem to construct a sequence of feedback graphs and losses in the multitask bandit problem (MTB) variant considered in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work provides the needle-in-a-haystack approach for constructing environments in MTB games, which the citing paper adopts in the construction of different environments for each action in the games."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The cited work by [14] extends the research on fixed feedback graphs to a more general setting of disjoint cliques, which is a more challenging task. 
The citing paper builds upon this work to further explore the case of directed strongly observable feedback graphs, which is a new and interesting direction for future research."}, {"Category": "Supporting Evidence", "Citation": "[27]", "Explanation": "The cited work provides the theoretical basis for the proof presented in the citing paper, as it is referenced in the proof of Theorem 28.5."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work provides a similar argument for bounding the summands in the second term, which the citing paper adopts to develop the methodology for calculating the difference between two probability distributions."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work provides a method for computing the inner product over new losses in the context of optimizing a model in Algorithm 1."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work provides a proof of a theorem that the citing paper adapts to prove a lower bound for the multitask bandit problem."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work [17] provides a definition of an equivalence relation on the arms that the citing paper adopts in their research to establish a specific condition for the arms in a game."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work in [14] considers a special case of the undirected feedback graph problem, which the citing paper adopts in their own research to address the problem of finding the optimal action in a game with a fixed and known undirected feedback graph composed of disjoint cliques with self-loops."}, {"Category": "Supporting Evidence", "Citation": "[14,Theorem 4]", "Explanation": "The cited work provides a lower bound of order T \u03b1 j=1 ln(m j + 1) for the regret of a class of algorithms in a feedback graph problem, which serves as a foundational result for the analysis in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work provides a known NP-hardness result that the citing paper uses to argue against the feasibility of a certain method in the context of computing a minimum clique cover for a given feedback graph."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work provides a bound of O \u221a \u03b1T \u2022 ln(KT ) for the EXP3.G algorithm, which the citing paper adopts as a methodological basis for their research."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work proves a bound of O \u03b1T (ln K) 3 for OSMD with a variant of the q-Tsallis entropy regularizer, which the citing paper adapts as a methodological basis for their research."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work provides the loss estimates that the citing paper uses in their analysis, which serves as a methodological basis for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work provides a variance term that the citing paper adopts in their research to bound the upper limit of a specific variance term in a strongly observable feedback graph."}, {"Category": "Methodological Basis", "Citation": "[3,Fact 4]", "Explanation": "The cited work provides a theoretical foundation for the analysis of probability distributions and directed strongly observable 
graphs in the citing paper."}, {"Category": "Data Source", "Citation": "EXP3.G [2]", "Explanation": "The cited work is referenced to introduce explicit exploration in probability distributions, which is used as a data source in the analysis of the q-FTRL algorithm in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work provides a proof for a lemma that the citing paper uses in its research, specifically in the context of expressing the KL-divergence between two probability distributions associated to environments."}, {"Category": "Supporting Evidence", "Citation": "[27]", "Explanation": "The cited work provides the necessary exercise and result to support the claim in the citing paper about the relationship between two probability measures."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b7", "b8", "b29", "b9", "b10", "b24", "b15", "b3", "b11", "b20", "b30", "b4", "b14", "b12", "b25", "b35" ], "table_ref": [], "text": "Recent work has identified that large-scale models achieve impressive results on various natural language benchmarks by exploiting correlations which do not seem semantically meaningful for solving the task (Gururangan et al., 2018;Gardner et al., 2021). Leveraging such spurious correlations is often considered an indication that models do not solve the actual task, but instead resort to finding statistical \"shortcuts\" around the problem (Geva et al., 2021;Savoldi, Gaido, Bentivogli, Negri, & Turchi, 2021).\nIn parallel, works in cognitive psychology identify that finding shortcuts may in fact be a feature of human intelligence, which on the one hand helps us cope with missing or implicit information (H. P. Grice, 1975;P. Grice, 1989), while on the other hand may also lead to harmful behavior. In the context of gender bias in coreference resolution, which will be the focus of our work, studies have found that human subjects tend to prefer the stereotypical reading in various modalities, such as event-related brain potentials, reading times, or 1 https://github.com/SLAB-NLP/Cog-GB-Eval Figure 1: High-level overview of our work. We develop an evaluation paradigm for human subjects following the dualprocess theory for decision making and compare them to model biases, analyzing the difference between real-world and synthetic sentences. eye movements (Osterhout, Bersick, & McLaughlin, 1997;Kennison & Trofe, 2003;Duffy & Keir, 2004).\nIn this work we propose to integrate findings from these two lines of research and quantify the extent to which model biases resemble human behavior. We distinguish between two ends of a spectrum, as shown in Figure 1. On the one hand we place annotation artifacts, which hold only in specific training sets, e.g., associating the word \"cat\" with contradiction in NLI (Gururangan et al., 2018). On the other hand of the spectrum we place human-like biases which are sometimes useful in real-world scenarios (e.g., in common sense reasoning (Lent & Søgaard, 2021)), but also produce harmful, unwanted behavior (as in gender bias (Schwartz & Stanovsky, 2022)). These are likely to arise in any real-world dataset, and may require subtle debiasing techniques in either modelling or data collection.\nTo place model biases on this spectrum, we develop human annotation interfaces and derive evaluation metrics which compare between humans and models, thus putting them on the same scale. In particular, we focus on gender bias in coreference resolution in the English language, which was widely studied in machine learning and psycholinguistics, allowing us to explore results in the intersection of these areas.\nTo achieve this, we study human biases through the lens of the dual-process theory (Evans, 2008), which posits that there are two cognitive systems participating in humans' decision making process. System 1 is fast, associative and automatic, while System 2 is slow, conscious and effortful. System 1 heuristics are considered a survival mechanism. Humans make thousands of decisions a day, and if all of them were consciously processed, our brain would not handle the cognitive load. 
But on the other hand, when System 1 \"shortcuts\" are wrong and System 2 does not revise it, erroneous and biased decisions may occur (Kahneman, 2011).\nWithin this framework, we propose two human experiments to quantify the heuristics made by System 1. The first experiment tests System 1 directly, by examining how gender bias manifests in self-paced reading (Jegerski, 2013), which approximate eye tracking, largely considered to be an unconscious process (Rayner, 1998). The second experiment is question answering (QA) over coreference-related questions. QA is likely to invoke System 2, as it requires more conscious effort (Wang & Gafurov, 2003). We then add different artificial time constraints, to examine how System 1 heuristics are expressed in a task that requires more cognitive effort.\nFinally, we crowdsource annotations for the two experiments over synthetic and real-world sentences, and make several important observations, comparing humans to two state-of-the-art coreference models. Both experiments surface comparable gender biases to those shown by models. Specifically, in the QA experiment over the natural sentences, models' overall accuracy is significantly lower than humans, but both show similar biases. In contrast, for the synthetic sentences, the models' overall accuracy was closer to humans, but models have shown larger gender bias.\nTo the best of our knowledge our work presents a first quantitative evaluation of gender bias in coreference resolution models versus human behavior, specifying the conditions needed to elicit comparable biases from humans through time constraints. Our results indicate that model biases indeed resemble decisions made by humans with restricted attention span. Future work may leverage our evaluation paradigm and revisit it for other tasks and future models." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b6" ], "table_ref": [], "text": "We begin by describing previously published datasets designed to test model biases in coreference resolution . To measure human performance, we then discuss Maze (Forster, Guerrera, & Elliot, 2009), a self-paced reading approach approximating eye-tracking measurements." }, { "figure_ref": [], "heading": "Gender Bias Datasets", "publication_ref": [ "b38", "b21" ], "table_ref": [], "text": "We use three coreference gender bias datasets as outlined below, and summarized in Table 1.\nWinoBias (Zhao, Wang, Yatskar, Ordonez, & Chang, 2018) Van Durme, 2018) consist of 3,888 synthetic, short sentences. Each of the sentences conforms to a similar template consisting of two entities, identified by their profession, and a single referring pronoun. The datasets are balanced with respect to stereotypical gender-role assignment (e.g., female secretaries) versus non-stereotypical assignment (e.g., male nurses). These datasets are good for controlled experiments but consist of a small variety of linguistic constructions, and do not represent real-world distributions.\nIn contrast, the BUG corpus (Levy, Lazar, & Stanovsky, 2021) aims to find such templates \"in the wild\". It consists of 1,720 sentences sampled from natural corpora (e.g., Wikipedia and PubMed) and better approximates real-world distribution in terms of sentence length, vocabulary and gender-role stereotypes. Similar to Winogender and Wino-Bias, each sentence in BUG presents entities identified via their profession and a referring pronoun. BUG also provides a binary annotation for each sentence marking whether is conforms to societal norms. 
For accuracy sake, we use a subset of BUG which was manually annotated." }, { "figure_ref": [ "fig_2" ], "heading": "Maze", "publication_ref": [ "b6", "b12", "b36" ], "table_ref": [], "text": "For our proposed evaluation metric presented in the Experiments section, we use Maze (Forster et al., 2009), a platform for measuring self-paced reading (Jegerski, 2013). This platform is an alternative for eye-tracking measurements (Witzel, Witzel, & Forster, 2012), that does not require specialized equipment and in-house annotators. Instead, Maze can be easily deployed on crowdsourcing platforms, allowing us to collect annotations at scale.\nAs exemplified in Figure 3, Maze iteratively presents two options for the next word in a sentence, and a human annotator needs to select the most probable alternative given previously seen words. The time for choosing the correct word approximates its reading time." }, { "figure_ref": [], "heading": "Working Definitions", "publication_ref": [ "b19", "b28", "b38", "b23" ], "table_ref": [], "text": "In this section, we formally define key concepts commonly used throughout the paper.\nGender. We use existing gender bias corpora, as described in the Background section, using pronouns with three gram- matical genders: feminine, masculine, and neutral. The complete list of pronouns and their distribution in these corpora is shown in Table 3 in the Appendix. These datasets are generally devoid of other types of pronouns, such as neopronouns.2 Collecting corpora for diverse types of pronouns is left as an important avenue for future work, e.g., as outlined by (Lauscher, Crowley, & Hovy, 2022).\nPro-stereotype/Anti-stereotype. A coreference relation between a pronoun and an entity in a sentence is deemed pro-stereotypical if the referring pronoun's gender conforms to societal norms (e.g., \"nurse\" and \"she\"), otherwise it is marked anti-stereotypical (e.g., \"cleaner\" and \"he\"). These definitions naturally extend to sentences with a single pronoun. These are deemed pro or anti stereotypical according to the relation between the entity and its referring pronoun. To estimate the stereotypical gender norm per profession we use labels provided in the previously-published gender bias datasets (Rudinger et al., 2018;Zhao et al., 2018), based on both human annotations and reports published by The U.S. Bureau of Labor Statistics. 3Gender bias. We adopt the Historical Bias definition (Mehrabi, Morstatter, Saxena, Lerman, & Galstyan, 2021): \"the already existing bias and socio-technical issues in the world and can seep into from the data generation process even given a perfect sampling and feature selection\". This definition connects between the physical world and how it manifests in the the training data. In particular, historical bias appears when models make predictions based on the gen-der distribution in the training data, rather than the relations between entities in the sentence." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In the following section we present our two experiments for measuring human biases. The design choices we make follow common practices in psycholinguistic literature.\nWe choose the data for the experiments by following the WinoBias and Winogender categorization into prostereotypical and anti-stereotypical instances. In addition, we add 216 sentences (5.6%) of those originally marked as neutral. 
Particularly, sentences in which the coreference link contradicts societal norms are considered anti-stereotypical, but also sentences in which the coreference link is to a neutral entity, yet the distractor entity is correlated with the pronoun. Consider for example, the sentence: \"The teenager confided in the therapist because she was seeking emotional support.\" Although \"she\" refers to the gender-neutral \"teenager\", the stereotypically feminine distractor (\"therapist\") poses a pitfall for biased decisions, and is thus considered anti-stereotypical. See the full details in Table 4 in the Appendix.\nFinally, for the BUG dataset, we ensure that our sample is balanced across professions. Table 1 shows this especially affected BUG, as it over-represents certain entities (e.g., \"patient\" or \"doctor\" in the PubMed corpus)." }, { "figure_ref": [ "fig_1" ], "heading": "QA Experiment", "publication_ref": [ "b5", "b36", "b16", "b0" ], "table_ref": [], "text": "In this experiment we present a sentence followed by a multiple-choice question regarding the gender of an entity in the sentence, eliciting coreference resolution decisions. For example, given the sentence \"The developer talked to the cashier and invited him for a cup of coffee\", and the question: \"What is the gender of the cashier?\", the four possible answers are 'male', 'female', 'neutral' and 'unknown', and the expected answer is 'male'. QA is likely to invoke System 2 as it involves conscious decision making. To test System 2 under a constrained setting, annotators observe the sentence for a limited time before it disappears and then they can answer the question. See Figure 2 for an example annotation interface.\nFiller questions. Following common practice in human annotation tasks, we introduce filler questions to prevent participants from focusing on certain aspects of the sentence (e.g., its pronoun). We automatically formulate questions on predicate-argument relations using a pretrained QA-SRL model (FitzGerald, Michael, He, & Zettlemoyer, 2018), that produces different question formats, e.g., asking about the subject (\"who might be talking?\"), object (\"who was being hired?\") and other entities in the sentence. Similarly to other psycholinguistics works, our filler questions constitute 50% of the total questions in the experiment (Witzel et al., 2012;Kim, Gabriel, & Gygax, 2019;Boyce, Futrell, & Levy, 2020)." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Calibration.", "publication_ref": [ "b27", "b22", "b2" ], "table_ref": [], "text": "To account for different reading paces (Rayner, Schotter, Masson, Potter, & Treiman, 2016), we begin with calibrating a baseline reading pace for each participant. We present the sentence for an unlimited time along with the question and measure the time it takes the participant to submit the correct answer. This is then normalized by the sentence length (in words) to approximate a participant's reading pace. See Figure 2a for an example of this interface.\nAnnotation interface. Each participant observes a single sentence for a limited amount of time. Then, the sentence disappears and a question regarding one of the entities in the sentence is shown for an unlimited time. The time each sentence is presented on screen is calculated by (α•avg•l), where avg is the participant's reading pace, l is the length of the sentence in words, and α is sampled i.i.d from {0.25, 0.5, 0.75} to present the sentence for a fraction of the participant's pace. 
An example of this interface is shown in Figure 2b.\nAnnotator feedback. Following (Malmaud, Levy, & Berzak, 2020), we show participants a feedback message indicating if they were correct after every submitted answer, both for filler questions and for the actual task. Feedback in multiple-choice questions has been shown to improve performance and reduce low-quality annotations (Butler & Roediger, 2008). To mitigate the risk of affecting responses in unintended ways, we use filler questions that prevent annotators from overspecializing in the task.\nFiltering non-coreference errors. This setup may produce errors which do not relate to coreference. For example, answering that the gender of the entity is masculine while the presented pronoun is feminine (and vice versa) does not indicate a coreference error, and is therefore ignored. " }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Self-Paced Reading Experiment", "publication_ref": [ "b26", "b37", "b0" ], "table_ref": [], "text": "In the second experiment, we approximate trends in reading time of pronouns in pro-stereotypical versus antistereotypical instances, which is considered an unconscious process, and hence a good proxy for System 1's biases (Rayner, 2009).\nWe use MAZE to approximate the time it takes a participant to choose the pronoun in our sentences (see Figure 3). This implicitly measures the timing of a coreference decision since the pronoun indicates the gender of a previously mentioned entity. Previous work has identified that self-paced reading is a good proxy for natural reading when comparing between readings of different sentences, albeit it may overestimate the absolute reading times (Yan & Jaeger, 2020). This makes self-paced reading adequate for our purposes, as we are interested in the trends shown in response time between pro-stereotypical and anti-stereotypical instances.\nFiltering ambiguous instances. Since MAZE presents the words in a linear order, we note that there are instances when the pronoun appears before the context needed to infer its antecedent. E.g., when reading the prefix \"The sheriff questioned the housekeeper as she...\" it is yet unclear whether \"she\" refers to the sheriff (e.g., as in \"...she needed to find the thief.\") or the housekeeper (e.g., in \"... she was cleaning\"). Since the reader cannot know which of the suffixes will follow, these instances do not reflect gender bias decisions. To address this issue in WinoBias and Winogender, we sample only sentences where the pronoun appears after all verbs in the sentence, e.g., \"The tailor thought the janitor could be good at sewing and encouraged her\". In a preliminary analysis we find that this heuristic may be over-strict, but leads to high precision, which was most important for our analyses. From BUG we sampled only sentences where the pronoun appeared after its antecedent. We find that this sampling works well for the sentences in BUG, which usually consist of a single entity.\nFor the synthetic sentences, this sampling produces a subset of 1,335 viable sentences. Most of the instances which were filtered out come from Winogender, because in most of its sentences the pronoun appears before one of the verbs in Table 2: Human and model results in the QA task on the same sentences. 'pro' and 'anti' columns show results on prostereotypical and anti-stereotypical gender questions. ∆ QA stands for the difference between the two categories (pro minus anti), indicating biased performance, approximating System 2 biases.\nthe sentence. 
For BUG, this sampling produces a subset of 1,603 viable sentences. Annotation interface. At each time step participants are shown two possible words, and they need to choose the next word in the sentence according to previous context. See Figure 3 an example. We allow participants to retry in case of an error, and record the time until their first answer, as well as the total time until the correct option was chosen.\nSimulating arbitrary time limitations. Similarly to the QA experiment, we would like to introduce a notion of time constraints. If we would have limited the amount of time given to distinguish the next word, a participant would either: (a) choose correctly (b) choose incorrectly (c) not respond in time. Instead of testing participants over different discrete time limitations, we make the following assumption: if a participant's response time for a correct annotation was x ms, any time limit below x ms would not be enough time for responding (option (c) above). Following, we do not limit participants reading time, but instead compute a cumulative distribution function over all possible observed response times. Finally, we use A-Maze (Boyce et al., 2020) to automatically generate probable distractors." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "In this section we summarize the main findings from our two crowdsourcing experiments. We find that the overall human accuracy for both tasks was good, reaching 94.48% on the gender questions in unrestricted QA, and 98.13% in MAZE, indicating an understandable task and high quality annotations." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b33" ], "table_ref": [], "text": "The two experiments collected annotations from 33 participants on the Amazon Mechanical Turk platform. Our average hourly pay was 8.53 USD. The overall cost to produce our annotations was 1, 030 USD. To qualify, workers had to have at least 5, 000 accepted HITs at an acceptance rate of at least 96%, and hail from English-speaking countries. In addition, we ran a qualification HIT which required workers to score at least 85% on an unconstrained version of the QA Figure 4: Visualization for the results shown in Table 2. The x-axis is the performance over the anti-stereotypical sentences, and the y-axis is the performance over the prostereotypical sentences. Values above the dashed black line show gender biased performance. Datasets are represented by color, while humans are distinguished from models by the indicator's of your shape. {0.25, 0.5, 0.75} are the fractions of the baseline reading pace given to humans. All evaluations found some degree of gender bias. task. Following (von der Malsburg, Poppels, & Levy, 2020), we annotated 3K instances with gender bias signal for each experiment and each dataset, amounting to 12K annotations. We deploy the QA task using Anvil,4 and the MAZE task using Ibex.5 Finally, we use the IQR technique to remove outliers in the self-paced reading (Vinutha, Poornima, & Sagar, 2018), which may arise due to network connectivity issues." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b32" ], "table_ref": [], "text": "Following previous work, we compute gender bias as the difference in performance between pro-stereotypical and antistereotypical instances (Stanovsky, Smith, & Zettlemoyer, 2019). 
In the QA task we denote as ∆ QA the difference between accuracy on pro-stereotypical versus anti-stereotypical gender questions, which is a proxy for constrained System 2 gender bias. In the self-paced reading task we compute the difference in response time to identify the pronoun, marked as ∆ MAZE , and is a proxy for System 1 biases. For consistency, both metrics are defined such that larger values indicate more gender biased performance. I.e., for ∆ MAZE we subtract the response time for pro-stereotypical instances from the anti-stereotypical instances, as longer response times indicate worse performance." }, { "figure_ref": [], "heading": "QA results", "publication_ref": [], "table_ref": [], "text": "Several observations can be drawn from the results for the QA task, presented in Table 2 and visualized in Figure 4, showing the biases caused by limiting the resources of System 2. Human subjects show more gender bias as they are given less time to read the sentence. For both natural and synthetic sentences, we find that ∆ QA for humans increases between when they are given 0.75 and 0.5 of their baseline reading pace, and for natural sentences specifically we see this increase also between 0.5 and 0.25. I.e., the difference in performance between pro-stereotypical and anti-stereotypical increases the less time participants have. However at some point, participants will not have enough time to process the sentence. This is observed in Winogender and WinoBias when α = 0.25, where human performance equally degrades across both anti-stereotypical and pro-stereotypical, in parallel with an increase in non-coreference errors from around 2% when α ∈ [0.5, 0.75] to 5% when α = 0.25.\nHuman subjects were found more prone to gender biased answers on naturally-occurring sentences. Table 2 shows larger ∆ QA for natural sentences than for the synthetic ones, and in Figure 4 the points representing human performance on BUG are farther from the diagonal, indicating more biased performance. This may stem from the templated nature of the synthetic sentences which allows subjects to master them." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3", "fig_3", "fig_3" ], "heading": "Self-Paced Reading Results", "publication_ref": [], "table_ref": [], "text": "Several conclusions are drawn from this results of this experiment, shown in Figure 5, approximating System 1 biases.\nHigher human gender bias is observed the more processing time is needed. Figures 5a and5b show the CDF of response times for distinguishing the pronoun from a distractor over the correct annotations (which consist of 98% of all annotations). Figure 5c shows ∆ MAZE , which is the difference between anti-stereotypical and pro-stereotypical instances in 5a and 5b. The longer the response time allowed (and hence more annotations are counted), a more pronounced ∆ MAZE is observed.\nHuman gender bias was observed only when accounting for at least 80% of the synthetic sentences. Positive ∆ MAZE indicates longer response time for anti-stereotypical sentences than pro-stereotypical, and so considered gender bias. Figure 5c shows that ∆ MAZE is positive after accumulating 80% of the annotations on the CDF curve, while for the natural sentences this effect is found after 50% of annotations." 
}, { "figure_ref": [ "fig_3", "fig_4", "fig_4" ], "heading": "Comparing Model and Human Biases: Discussion and Conclusions", "publication_ref": [ "b13", "b17", "b1", "b31", "b18" ], "table_ref": [], "text": "We evaluate SpanBERT (Joshi et al., 2020) and s2e (Kirstain, Ram, & Levy, 2021) on the same sentences annotated by humans in each of the tasks, and compare the bias in results between humans and models. Below we outline several key findings.\nQualitative error analysis. In Figure 7 we compare errors made by human and models, and find that models tend to err on professions which are strongly associated with a specific gender according the U.S. Bureau of Labor Statistics, while humans err more broadly, on less stereotypical assignments.\nModels exhibit gender bias more than humans on synthetic sentences in the QA experiment. Table 2 shows that ∆ QA on Winogender and WinoBias is larger for models when compared to humans on any fraction of the reading pace. Additionally, Figure 4 shows that over Winogender and Wino-Bias, models are farther from equilibrium line than humans, and the human performance on anti-stereotypical instances is superior to models. This may indicate that to achieve good performance, models rely on gender bias more than humans.\nModels show more gender bias on synthetic sentences than in real-world sentences, as opposed to humans where gender bias is more pronounced over natural sentences. For the QA experiment this trend is seen in ∆ QA columns in Table 2. As for the self-paced reading task, Figure 5c shows that human's ∆ MAZE on BUG is above ∆ MAZE on Winogender and WinoBias, while for models ∆ MAZE is the distance on the x-axis between the points in Figures 6a and6b, which is smaller on BUG for both models. In humans, this may arise due to mastering synthetic sentences to the point they do not rely on gender stereotypes to excel in it. In contrast, the degraded performance of models on real-world sentences Figure 7: The mean and std of \"stereotype confidence\" for humans versus models errors, defined as the average distance from 50% gender distribution of profession, according to the U.S. Bureau of Labor. diminishes the gains from biased predictions.\nModels present higher accuracy on the subset of sentences used for the self-paced reading experiment than on the subset used for the QA experiment for both datasets. We found that model accuracy is higher on the subset of sentences used in the self-paced reading experiment. In average, between WinoBias and Winogender subsets we see 8.05% better performance on the self-paced reading subset, and between BUG subsets we see 5.4% better performance. This may indicate that models do better on sentences where the pronoun appears after verb, as is the case in our self-paced reading experiment, detailed in the Experiments section.\nConclusion: Model biases reflect human decisionmaking under constrained settings. Revisiting our research question, our findings suggest that gender bias in coreference resolution is comparable to human biases rather than an annotation artifact, indicating it will likely creep up in real-world datasets along with other, more desired human behavior, like common sense reasoning.\nFuture Work. Follow-up work may compare our results with competing cognitive theories, e.g., (Bursell & Olsson, 2021), as well as developing some kind of \"slow reasoning\" models, e.g., via early exiting (Schwartz, Stanovsky, Swayamdipta, Dodge, & Smith, 2020;Laskaridis, Kouris, & Lane, 2021)." 
}, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "As with any work involving humans subjects in general, and crowdsourcing in particular, several limitations might arise. First, crowdsourcing results are less reliable than a controlled in-house experiment. Second, the conclusions we derived may apply only for our group of annotators. In addition, we did not use any demographic details regarding our participants so our results may be prone to societal biases and not represent the phenomenon for other real-world distributions. To address this, future work can validate our results over larger and more diverse annotator cohorts.\nA linguistic limitation of our work is that we only refer to the observed phenomena in English. Our work can only be generalized to languages without gender inflection nouns, as our proposed methodology assumes that the gender of the profession is obtained through the pronoun referring to it.\nAnother limitation of our work is that we only address pronouns that are grammatically feminine, masculine or neutral. We did not address pronouns that can match to more than one of them (\"his/her\"), or pronouns that match other gender identities.\nFinally, a societal limitation of our work is that our definitions for anti-stereotypical and pro-stereotypical are done according to U.S.-centric societal norms which may diverge between cultures." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We thank Yevgeni Berzak and Roger Levy for many fruitful discussions, and the anonymous reviewers for their valuable feedback. This work was supported in part by a research gift from the Allen Institute for AI, and a research grant 2336 from the Israeli Ministry of Science and Technology." }, { "figure_ref": [], "heading": "Appendix Datasets Analysis", "publication_ref": [], "table_ref": [], "text": "In this work we thoroughly investigate relations between pronouns and entities that represented by professions. In the following table we present all pronouns that appear in our corpora, along with their distribution, to better understand what affects the examined trends. " }, { "figure_ref": [], "heading": "Fine Grained Categorization", "publication_ref": [], "table_ref": [], "text": "Table 4 presents our suggested fine grained categorization, that takes in consideration also the societal norms of the other entity profession, in addition to the original datasets labeling, that only looks at the societal norms regarding the main entity." }, { "figure_ref": [], "heading": "Human Annotators", "publication_ref": [], "table_ref": [], "text": "Figure 8 is an example for the first window in each of our experiments. We show instructions for the coming task, and users actively press the link as a consent for participation. This template of instructions was shown in each of our experiments." }, { "figure_ref": [], "heading": "Changelog: CogSci2023 to Current Version", "publication_ref": [], "table_ref": [], "text": "This paper is an extended version of a paper appearing in CogSci 2023. 
Here is the list of the sections we added:\n• Figure 1 • Sample Instances for Human Annotation (section)\n• Qualitative error analysis (paragraph+Figure 7)\n• Models present higher accuracy on the subset of sentences used for the self-paced reading experiment than on the subset used for the QA experiment for both datasets (paragraph)\n• Limitations (section)\n• Appendix The first column identifies the stereotypical assignment for each of the entities. Background colors stand for our fine-grained categorization, green stands for pro-stereotypical, red for anti-stereotypical and gray for neutral. Additionally to the original labeling schema, our categorization is determined also by the gender stereotype of the other entity, as exemplified in lines labeled neutral-anti and neutral-pro. " } ]
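To make the two bias measures from the evaluation-metrics section concrete — ∆_QA as the accuracy gap between pro- and anti-stereotypical questions, and ∆_MAZE as the response-time gap under a simulated time limit — here is a minimal illustrative sketch. The function names, the input format (per-item correctness flags and per-item response times in milliseconds), and the cutoff-based filtering are our own assumptions, not the authors' released evaluation code.

```python
import numpy as np

def delta_qa(correct_pro, correct_anti):
    """Accuracy on pro-stereotypical minus anti-stereotypical questions; larger = more gender bias."""
    return float(np.mean(correct_pro) - np.mean(correct_anti))

def delta_maze(rt_pro_ms, rt_anti_ms, cutoff_ms):
    """Mean response time on anti- minus pro-stereotypical pronouns, counting only correct
    responses faster than cutoff_ms (i.e. those that would beat a simulated time limit)."""
    pro = np.asarray(rt_pro_ms, dtype=float)
    anti = np.asarray(rt_anti_ms, dtype=float)
    pro_in, anti_in = pro[pro <= cutoff_ms], anti[anti <= cutoff_ms]
    if pro_in.size == 0 or anti_in.size == 0:
        return float("nan")  # no annotations fall under this time limit
    return float(anti_in.mean() - pro_in.mean())
```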
2023-05-24
10.18653/v1/P18-1191
[ { "authors": "V Boyce; R Futrell; R P Levy", "journal": "Journal of Memory and Language", "ref_id": "b0", "title": "Maze made easy: Better and easier measurement of incremental processing difficulty", "year": "2020" }, { "authors": "M Bursell; F Olsson", "journal": "Poetics", "ref_id": "b1", "title": "Do we need dual-process theory to understand implicit bias? a study of the nature of implicit bias against muslims", "year": "2021" }, { "authors": "A C Butler; H L Roediger", "journal": "Memory & cognition", "ref_id": "b2", "title": "Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing", "year": "2008" }, { "authors": "S A Duffy; J A Keir", "journal": "Memory & Cognition", "ref_id": "b3", "title": "Violating stereotypes: Eye movements and comprehension processes when text conflicts with world knowledge", "year": "2004" }, { "authors": "J S B Evans", "journal": "Annu. Rev. Psychol", "ref_id": "b4", "title": "Dual-processing accounts of reasoning, judgment, and social cognition", "year": "2008" }, { "authors": "N Fitzgerald; J Michael; L He; L Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Large-scale QA-SRL parsing", "year": "2018-07" }, { "authors": "K I Forster; C Guerrera; L Elliot", "journal": "Behavior research methods", "ref_id": "b6", "title": "The maze task: Measuring forced incremental sentence processing time", "year": "2009" }, { "authors": "M Gardner; W Merrill; J Dodge; M Peters; A Ross; S Singh; N A Smith", "journal": "", "ref_id": "b7", "title": "Competency problems: On finding and removing artifacts in language data", "year": "2021-11" }, { "authors": "M Geva; D Khashabi; E Segal; T Khot; D Roth; J Berant", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b8", "title": "Did aristotle use a laptop? 
a question answering benchmark with implicit reasoning strategies", "year": "2021" }, { "authors": "H P Grice", "journal": "", "ref_id": "b9", "title": "Logic and conversation", "year": "1975" }, { "authors": "P Grice", "journal": "", "ref_id": "b10", "title": "Studies in the way of words", "year": "1989" }, { "authors": "S Gururangan; S Swayamdipta; O Levy; R Schwartz; S Bowman; N A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Annotation artifacts in natural language inference data", "year": "2018-06" }, { "authors": "J Jegerski", "journal": "Routledge", "ref_id": "b12", "title": "Self-paced reading", "year": "2013" }, { "authors": "M Joshi; D Chen; Y Liu; D S Weld; L Zettlemoyer; O Levy", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b13", "title": "SpanBERT: Improving pre-training by representing and predicting spans", "year": "2020" }, { "authors": "D Kahneman", "journal": "Macmillan", "ref_id": "b14", "title": "Thinking, fast and slow", "year": "2011" }, { "authors": "S M Kennison; J L Trofe", "journal": "Journal of Psycholinguistic Research", "ref_id": "b15", "title": "Comprehending pronouns: A role for word-specific gender stereotype information", "year": "2003" }, { "authors": "J Kim; U Gabriel; P Gygax", "journal": "PloS one", "ref_id": "b16", "title": "Testing the effectiveness of the internet-based instrument psytoolkit: A comparison between web-based (psytoolkit) and labbased (e-prime 3.0) measurements of response choice and response time in a complex psycholinguistic task", "year": "2019" }, { "authors": "Y Kirstain; O Ram; O Levy", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Coreference resolution without span representations", "year": "2021-08" }, { "authors": "S Laskaridis; A Kouris; N D Lane", "journal": "", "ref_id": "b18", "title": "Adaptive inference through early-exit networks: Design, challenges and directions", "year": "2021" }, { "authors": "A Lauscher; A Crowley; D Hovy", "journal": "", "ref_id": "b19", "title": "Welcome to the modern world of pronouns: Identity-inclusive natural language processing beyond gender", "year": "2022" }, { "authors": "H Lent; A Søgaard", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Common sense bias in semantic role labeling", "year": "2021-11" }, { "authors": "S Levy; K Lazar; G Stanovsky", "journal": "", "ref_id": "b21", "title": "Collecting a large-scale gender bias dataset for coreference resolution and machine translation", "year": "2021-11" }, { "authors": "J Malmaud; R Levy; Y Berzak", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Bridging information-seeking human gaze and machine reading comprehension", "year": "2020" }, { "authors": "N Mehrabi; F Morstatter; N Saxena; K Lerman; A Galstyan", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b23", "title": "A survey on bias and fairness in machine learning", "year": "2021" }, { "authors": "L Osterhout; M Bersick; J Mclaughlin", "journal": "Memory & Cognition", "ref_id": "b24", "title": "Brain potentials reflect violations of gender stereotypes", "year": "1997" }, { "authors": "K Rayner", "journal": "Psychological bulletin", "ref_id": "b25", "title": "Eye movements in reading and information processing: 20 years of research", "year": "1998" }, { "authors": "K Rayner", "journal": "Journal of eye movement research", "ref_id": "b26", "title": "Eye movements in reading: Models and 
data", "year": "2009" }, { "authors": "K Rayner; E R Schotter; M E J Masson; M C Potter; R Treiman", "journal": "Psychological Science in the Public Interest", "ref_id": "b27", "title": "So much to read, so little time: How do we read, and can speed reading help?", "year": "2016" }, { "authors": "R Rudinger; J Naradowsky; B Leonard; B Van Durme", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Gender bias in coreference resolution", "year": "2018-06" }, { "authors": "B Savoldi; M Gaido; L Bentivogli; M Negri; M Turchi", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b29", "title": "Gender bias in machine translation", "year": "2021" }, { "authors": "R Schwartz; G Stanovsky", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "On the limitations of dataset balancing: The lost battle against spurious correlations", "year": "2022-07" }, { "authors": "R Schwartz; G Stanovsky; S Swayamdipta; J Dodge; N A Smith", "journal": "", "ref_id": "b31", "title": "The right tool for the job: Matching model and instance complexities", "year": "2020" }, { "authors": "G Stanovsky; N A Smith; L Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Evaluating gender bias in machine translation", "year": "2019-07" }, { "authors": "H Vinutha; B Poornima; B Sagar", "journal": "Springer", "ref_id": "b33", "title": "Detection of outliers using interquartile range technique from intrusion dataset", "year": "2018" }, { "authors": "T Von Der Malsburg; T Poppels; R P Levy", "journal": "Psychological science", "ref_id": "b34", "title": "Implicit gender bias in linguistic descriptions for expected events: The cases of the 2016 united states and 2017 united kingdom elections", "year": "2020" }, { "authors": "Y Wang; D Gafurov", "journal": "", "ref_id": "b35", "title": "The cognitive process of comprehension", "year": "2003" }, { "authors": "N Witzel; J Witzel; K Forster", "journal": "Journal of psycholinguistic research", "ref_id": "b36", "title": "Comparisons of online reading paradigms: Eye tracking, movingwindow, and maze", "year": "2012" }, { "authors": "S Yan; T F Jaeger", "journal": "Language, Cognition and Neuroscience", "ref_id": "b37", "title": "Expectation adaptation during natural reading", "year": "2020" }, { "authors": "J Zhao; T Wang; M Yatskar; V Ordonez; K.-W Chang", "journal": "", "ref_id": "b38", "title": "Gender bias in coreference resolution: Evaluation and debiasing methods", "year": "2018-06" } ]
[]
Comparing Humans and Models on a Similar Scale: Towards Cognitive Gender Bias Evaluation in Coreference Resolution
Spurious correlations were found to be an important factor explaining model performance in various NLP tasks (e.g., gender or racial artifacts), and are often considered to be "shortcuts" to the actual task. However, humans tend to similarly make quick (and sometimes wrong) predictions based on societal and cognitive presuppositions. In this work we address the question: can we quantify the extent to which model biases reflect human behaviour? Answering this question will help shed light on model performance and provide meaningful comparisons against humans. We approach this question through the lens of the dual-process theory for human decision-making. This theory differentiates between an automatic, unconscious (and sometimes biased) "fast system" and a "slow system", which, when triggered, may revisit earlier automatic reactions. We make several observations from two crowdsourcing experiments on gender bias in coreference resolution, using self-paced reading to study the "fast" system, and question answering to study the "slow" system under a constrained time setting. On real-world data humans make ∼3% more gender-biased decisions compared to models, while on synthetic data models are ∼12% more biased. We make all of our code and data publicly available.
Gili Lior; Gabriel Stanovsky
[ { "figure_caption": "(a) QA calibration interface. A sentence and a question are shown for an unlimited time. After submitting the answer, a feedback is shown on screen, including the correct answer. We record the participants choice and the time they took to answer. (b) QA Experiment interface. A sentence is shown for a limited time. Then, the sentence disappears and only a question is shown, for an unlimited time. After submitting an answer, a feedback message is shown. Here we only record the the participants choice.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: QA calibration and main experiment interfaces.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: MAZE experiment interface. At each step, participants need to distinguish the next word from a distractor, by pressing the correct keyboard key. We record the time they took for identifying the pronoun.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: Figures5a and 5bshow the CDF of response times needed to distinguish the pronoun from its distractor in MAZE. I.e., coordinate (x, y) on the graph implies that x% of the annotations required a response time of y ms or less. Figure5cshows ∆ MAZE for humans, i.e., the difference between anti-stereotypical response time and pro-stereotypical response time, where values above y = 0 indicate gender biased performance.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Model performance versus human annotations. The blue and yellow points are the intersection points with the different models' accuracy and their matching category threshold. For example, the blue point intersecting the red line, is the human threshold that matches SpanBERT accuracy on anti-stereotypical sentences.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "and Winogender (Rudinger, Naradowsky, Leonard, &", "figure_data": "OriginalQAMAZE#pro #anti #pro #anti #pro #antiWinoBias1582 1586756717607603Winogender2162162032163535BUG865420431271565315Table 1: Statistics for coreference gender bias datasets.\"Original\" presents the number of sentences in each of thedatasets. \"QA\" and \"MAZE\" show the number of sen-tences in our experiments, further decomposed into pro-stereotypical and anti-stereotypical sentences. The reductionin sampling sizes is due to additional filtering and distributiontuning. See the Experiments section for more details.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "(Gururangan et al., 2018)", "Explanation": "The cited work by Gururangan et al. provides evidence that large-scale models rely on spurious correlations to achieve impressive results on natural language benchmarks, which is a key finding for the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Gardner et al., 2021)", "Explanation": "The cited work by Gardner et al. further supports the claim that large-scale models exploit correlations that are not semantically meaningful for solving the task, which is a crucial aspect discussed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Geva et al., 2021)", "Explanation": "The cited work by Geva et al. extends the research on spurious correlations in large-scale models by considering it as an indication of models not solving the actual task, which is a topic discussed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Savoldi, Gaido, Bentivogli, Negri, & Turchi, 2021)", "Explanation": "The cited work by Savoldi et al. further expands on the research on spurious correlations in large-scale models by exploring the implications of such behavior in human intelligence, which is a topic discussed in the citing paper."}, {"Category": "Data Source", "Citation": "(Forster et al., 2009)", "Explanation": "The cited work is a platform for measuring self-paced reading, which the citing paper utilizes in the Experiments section to collect annotations at scale."}, {"Category": "Methodological Basis", "Citation": "(Jegerski, 2013)", "Explanation": "The cited work provides a method for measuring self-paced reading using the Maze platform, which the citing paper adopts in the Experiments section to collect annotations."}, {"Category": "Methodological Basis", "Citation": "(Witzel, Witzel, & Forster, 2012)", "Explanation": "The cited work presents an alternative to eye-tracking measurements for self-paced reading, which the citing paper uses in the Experiments section to collect annotations without specialized equipment and in-house annotators."}, {"Category": "Supporting Evidence", "Citation": "(Rudinger et al., 2018)", "Explanation": "The cited work provides the labels used in the gender bias datasets, which the citing paper uses to estimate the stereotypical gender norm per profession."}, {"Category": "Supporting Evidence", "Citation": "(Zhao et al., 2018)", "Explanation": "The cited work also contributes to the gender bias datasets used in the study, providing additional labels and insights for the research conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Mehrabi, Morstatter, Saxena, Lerman, & Galstyan, 2021)", "Explanation": "The cited work defines the concept of historical bias in the training data, which the citing paper adopts to understand the manifestation of gender bias in the data generation process."}, {"Category": "Supporting Evidence", "Citation": "(Witzel et al., 2012)", "Explanation": "The cited work by Witzel et al. 
(2012) provides a basis for the use of filler questions in the experiment conducted in the citing paper, as it is mentioned as a common practice in psycholinguistics research."}, {"Category": "Supporting Evidence", "Citation": "(Kim, Gabriel, & Gygax, 2019)", "Explanation": "The cited work by Kim, Gabriel, and Gygax (2019) further supports the use of filler questions in the experiment, as it is mentioned as a common practice in psycholinguistics research."}, {"Category": "Supporting Evidence", "Citation": "(Boyce, Futrell, & Levy, 2020)", "Explanation": "The cited work by Boyce, Futrell, and Levy (2020) also supports the use of filler questions in the experiment, as it is mentioned as a common practice in psycholinguistics research."}, {"Category": "Data Source", "Citation": "(Boyce et al., 2020)", "Explanation": "The cited work is used to generate probable distractors for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Joshi et al., 2020)", "Explanation": "The cited work, SpanBERT, is used as a model in the citing paper to evaluate the performance of models in the context of gender bias in language."}, {"Category": "Methodological Basis", "Citation": "(Kirstain, Ram, & Levy, 2021)", "Explanation": "The cited work, s2e, is also used as a model in the citing paper to evaluate the performance of models in the context of gender bias in language."}, {"Category": "Supporting Evidence", "Citation": "(U.S. Bureau of Labor Statistics)", "Explanation": "The U.S. Bureau of Labor Statistics is cited to provide a reference for the strong association between professions and gender, which is used in the error analysis of human and model performance in the QA experiment."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b30", "b40", "b40", "b40" ], "table_ref": [], "text": "Recent large-scale text-to-image models [27, 31,33] have quickly revolutionized the world of artistic creation, demonstrating an unprecedented ability to generate incredible and diverse visual content. By learning to inject new concepts into these powerful models, numerous personalization techniques have further enabled users to create new artistic compositions of a unique subject or artistic style using a small set of images depicting the concept.\nCurrent personalization techniques can be categorized by how they treat the pretrained text-to-image model. The personalization-by-inversion approach, first proposed in Gal et al. [9], freezes the generative model and optimizes an input vector to represent the desired subject or artistic style. This vector resides in the recently dubbed P Figure 2. The personalization-by-inversion approaches and their text-conditioning spaces. Textual Inversion [9] (left) invert into the P space where a single token embedding is learned for all timesteps and U-Net layers. Voynov et al. [41] (middle) introduce the P+ space where different embeddings are optimized for each attention layer but are shared across all timesteps. Finally, (right) we introduce a NeTI, which utilizes a new space-time representation learned implicitly via a small mapping layer that considers both the different U-Net layers and denoising timesteps.\nspace [41] containing all possible input embeddings to the text encoder. Alternatively, to better capture the target concept, Ruiz et al. [32] proposed the personalization-by-finetuning approach, where one directly fine-tunes the generative model to represent the user-specified concept. While this results in better reconstructions, it requires additional storage costs and is often more prone to overfitting unwanted details such as the image background. Recently, Voynov et al. [41] demonstrated that one can improve inversion approaches by inverting into an extended input space, P+, where a different vector p ∈ P is learned for each attention layer in the denoising U-Net network. In doing so, they achieve improved reconstruction and editability of the target concept without tuning the generative model.\nIn this paper, we introduce a new text-conditioning space that is dependent on both the denoising process timestep and the U-Net layers, which we call P * . The P * space is composed of a set of vectors p t ∈ P+, one for each timestep t. This results in a richer representation space that considers the time-dependent nature of the denoising process. A naïve approach for inverting a concept into the P * space would require optimizing hundreds of different vectors, one for each possible combination of timestep and U-Net layer. Instead, we propose to implicitly represent the P * space using a small neural mapper that receives the current timestep t and the U-Net layer ℓ and outputs a vector p ∈ P, see Figure 2. In a sense, the entire network represents a concept in P * defined by its learned parameters, resulting in a neural representation for Textual Inversion, which we dub NeTI. 
We show that a new concept can be learned by optimizing the parameters of our neural representation, similar to the standard optimization mechanism in Textual Inversion.\nNext, we observe that while P * is more expressive compared to P or P+, its potential to generate complex concepts is still dependent on the text encoder, as our learned embeddings are first passed through the encoder before being fed to the U-Net model. Unfortunately, completely skipping the text encoder and working directly within the U-Net's input space is not a viable option, as the text encoder is essential for maintaining editability through the mixing of our concept's learned representation with the other prompt tokens. To overcome this issue, we propose a textual bypass technique where we learn an additional residual vector for each space-time input and add it to the text encoder's output. In this formulation, our neural mapper outputs two vectors, one which is fed into the text encoder and is mixed with the other prompt tokens, and a second bypass vector that incorporates additional information that was not captured by the text encoder. Our experiments demonstrate that using our textual bypass can significantly speed up the convergence and enhance visual fidelity, reaching fidelity that is competitive with fine-tuning methods, all without needing to alter the weights of the generative model, see Figure 1.\nA general problem with personalization methods, including NeTI, is their inherent tradeoff between reconstruction quality and editability [9, 37, 46]. We investigate this phenomenon in the context of NeTI and propose two techniques to mitigate and control this issue. First, we observe that the norms of existing token embeddings have a specific distribution while the learned mapping network can output embeddings deviating greatly from this distribution. We show that setting the norm of the network output to a constant value, taken from an existing token in P, significantly improves the editability of our learned concept. Next, we propose an extension to our method that allows one to control the balance between the reconstruction and editability of the concept at inference time. This is achieved by imposing an importance-based ordering over our implicit representation. After training, gradually removing elements from our ordered representation allows one to control the reconstruction-editability tradeoff and reduces NeTI's required storage footprint.\nWe demonstrate the effectiveness of our P * space and NeTI over a range of concepts and prompts when compared to existing personalization approaches. We additionally analyze P * and the attributes learned at different denoising timesteps. Finally, we demonstrate the appealing properties of our ordered representations for controlling the reconstruction-editability tradeoff at inference time." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b27", "b43", "b7", "b3", "b19", "b30", "b42", "b44", "b7", "b15", "b37", "b5", "b13", "b35", "b9", "b33", "b41", "b8", "b16", "b17", "b25", "b10", "b14", "b21", "b46", "b40", "b15", "b9", "b0", "b40", "b29" ], "table_ref": [], "text": "Text-Guided Synthesis. Recent advancements in large-scale autoregressive models [28, 44] and diffusion models [8, 12, 21] have resulted in unprecedented diversity and fidelity in visual content creation guided by a free-form text prompt [4, 20, 27, 31, 33]. 
While extremely expressive, these methods do not directly support the usage of user-specified concepts, resulting in research on inversion and personalization for diffusion models.\nInversion. Image inversion is the process of finding a latent code that can be passed to a generator to reconstruct a given image [43, 45]. In the context of diffusion models, inversion often refers to the process of finding an initial noise latent that can be iteratively denoised into the target image [8, 19, 27]. This initial noise latent can then be used for editing the given input image using text prompts [7, 13, 16, 38], but is less suited for representing new personalized concepts.\nPersonalization. In the task of personalization, we are interested in adapting a given model to better capture a given subject or concept. In the context of text-to-image synthesis, the personalized model should enable synthesizing novel images of a specific target concept using a free-form text prompt. In [6, 9] it was first observed that personalization can be approached as an inversion problem where text embeddings are optimized to describe the target concept. Alternatively, other methods have resorted to fine-tuning the diffusion model directly [14, 32, 36], where one key distinction between these methods is the subset of the network which they choose to optimize. A new line of work has recently explored encoder-based approaches for mapping a given concept to its textual representation [10, 34, 42]. The personalization of text-to-image models has given rise to various downstream applications such as image editing [13, 39] and personalized 3D generation [17, 18, 26, 29].\nSpaces for Inversion and Personalization. Numerous works have already analyzed the latent spaces of pretrained text-to-image diffusion models [11, 15, 22, 47]. Most relevant to our work is the text-conditioning space of the pretrained text-to-image model. In Textual Inversion, Gal et al.\n[9] invert a given concept into a single vector representation residing in the input space of the text encoder. This space has been recently termed the P space. Voynov et al. [41] propose an extended P+ latent space composed of a set of vectors p ∈ P, one for each layer of the U-Net denoising network. They demonstrate that this space-dependent latent space results in improved reconstructions and higher editability compared to the smaller P space. In the context of time-dependent representation, previous works have demonstrated that using different inputs for different timesteps of a diffusion model has intriguing properties [16, 23], with Gal et al. [10] also using a timestep-conditioned encoder. However, to the best of our knowledge, the resulting latent space was not directly investigated. In this work, we introduce the P * latent space that is dependent on both time and space by considering the different attention layers and the time-dependent nature of the denoising process. We further extend the inversion space using the textual bypass, which resides outside the text encoder input space used in all previous work.\nOrdered Representations. Ordered representations, such as principal component analysis (PCA), in which different dimensions have different degrees of importance, are widely used in machine learning and statistics. However, in the context of inversion and personalization spaces, this property is not commonly used [1, 2, 9, 41]. Rippel et al. 
[30] showed that one can encourage a neural network to learn an ordered representation by simply introducing a special form of dropout on the hidden units of a neural network, which is proved to be an exact equivalence to PCA for the linear case. Inspired by their work, we propose to bring back this property by applying a similar technique to our learned representation, resulting in an ordered representation that can be used to achieve inference-time control of the reconstruction-editability tradeoff." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b30", "b23", "b0" ], "table_ref": [], "text": "Latent Diffusion Models. We apply our inversion technique over the Stable Diffusion (SD) model [31]. In SD, an encoder E is trained to map an image x ∈ X into a spatial latent code z = E(x) while a decoder D is tasked with reconstructing the input image, i.e., D(E(x)) ≈ x. Given the trained autoencoder, a diffusion model is trained to produce latent codes within this learned latent space, and can be conditioned on an additional input vector c(y) for some input prompt y. The training objective is given by:\nL = E z∼E(x),y,ε∼N (0,1),t ||ε -ε θ (z t , t, c(y))|| 2 2 . (1)\nHere, at each timestep t, the denoising network ε θ is tasked with removing the noise added to the latent code given the noised latent z t , the timestep t, and the conditioning vector c(y). To generate c(y) the input prompt is first split into a series of N pre-defined tokens, which are then mapped to N corresponding embedding vectors, one for each token. These token embeddings are then passed to a pretrained CLIP text encoder [24] which outputs a conditioning vector c(y) ∈ R N ×D where N = 77 is the number of input tokens and D = 768 is the dimension of each output vector.\nTextual Inversion. In Textual Inversion, Gal et al.\n[9] introduce a new token S * and a corresponding embedding vector v * ∈ P representing the concept. Given a small set of images depicting the concept, they directly optimize v * to minimize the objective given in Equation (1). That is, their optimization objective is defined as:\nv * = arg min v E z,y,ε,t ||ε -ε θ (z t , t, c(y, v))|| 2 2 ,(2)\nwhere the conditioning c(y, v) is now obtained using the optimized embedding vector v representing our concept. Notably, the entire LDM remains fixed and only v is optimized." }, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Similar to Textual Inversion, we are interested in finding a personalized representation for a user-provided concept depicted using a small set of images. Rather than encoding the concept into a single embedding vector v ∈ P we project our concept into a more expressive space, P * , dependent on both the denoising timestep t and the U-Net layer ℓ to which the conditioning vector is injected. Instead of directly optimizing all possible v t,ℓ vectors that compose our concept, we choose to implicitly represent them via a simple neural mapper M. Our mapper receives the current timestep t and U-Net layer ℓ and outputs the corresponding token embedding v t,ℓ ∈ P representing the concept at the current timestep and layer. This embedding is then passed to the matching U-Net layer ℓ, see Figure 3.\nTo train the neural mapper, we follow a similar optimization scheme to that of Textual Inversion but directly optimize the parameters of the mapper. 
Formally, the objective is defined as:\narg min M E z,y,ε,t,ℓ ||ε -ε θ (z t , t, c(y, M(t, ℓ)))|| 2 2 , (3)\nAt inference time, novel compositions of the concept can be created by adding the token S * to any text prompt. The trained neural mapper is queried to obtain the concept's learned token embedding v t,ℓ for each combination of timestep t = 50, . . . , 1 and U-Net layer {ℓ 1 , . . . , ℓ 16 }. These embeddings are then passed to the text encoder to obtain the conditioning vector passed to the U-Net." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Network Architecture", "publication_ref": [ "b29", "b13", "b24", "b34" ], "table_ref": [], "text": "Our neural mapper is illustrated on the right of Figure 3. The mapper receives as input a pair of scalars (t, ℓ) denoting the current timestep and U-Net layer. First, this input is passed through a positional encoding function f (•), discussed below, to transform the input into a high-dimensional vector f (t, ℓ), followed by two fully-connected layers. Next, we optionally apply a Nested Dropout [30] technique over the hidden representation to impose an importance-based ordering over the learned representation, see Section 4.2. The resulting compressed vector is passed through a final fully-connected layer to obtain a 768-dimensional vector v t,ℓ ∈ P representing the concept at the current timestep and layer. In total, the resulting architecture contains approximately 460,000 trainable parameters, which amounts to 2MB of disk space. As a reference to fine-tuning methods, DreamBooth [32] requires several GBs of disk space per concept, with CustomDiffusion [14] requiring ∼75MB of disk space.\nPositional Encoding. To introduce an inductive bias with respect to the timestep and U-Net layer, we apply a positional encoding on the input (t, ℓ). Specifically, each input (t, ℓ) is encoded with Random Fourier Features [25, 35] into a 2048-dimensional vector, f (t, ℓ) ∈ R 2048 , modulated by 1024 random frequencies. We then define a set of 160 uniformly spaced anchor pairs (t, ℓ), encoded using f . This set of vectors is then used to form an encoding matrix E ∈ R 160×2048 . The output of our positional encoding is then defined as e t,ℓ = E × f (t, ℓ) ∈ R 160 . We observe that biasing the encoding toward nearby layers produces less favorable results. Hence, we choose the random frequencies such that the encodings are smooth with respect to time and well separated with respect to the U-Net layer. Additional details, an ablation study, and a visualization of our positional encoding are provided in Appendices A, C and F.\nOutput Rescaling. During optimization, the outputs of the neural mapper are unconstrained, resulting in representations that may reside far away from the true distribution of token embeddings typically passed to the text encoder. We find that such unnatural representations significantly harm the editability of the learned concept. To mitigate this issue, we find that it is enough to rescale the norm of the network output to match the norm of real token embeddings. Specifically, we set the norm of the network output to be equal to the norm of the embedding of the concept's \"super-category\" token (e.g., for the second example in Figure 4, we set the norm equal to the norm of \"teapot\"). Formally, the normalized output of the network is given by:\nM ′ (t, ℓ) = (M(t, ℓ) / ||M(t, ℓ)||) · ||v super || , (4)\nwhere v super is the embedding of the \"super-category\" token. 
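To make the mapper described above concrete, here is a minimal, illustrative PyTorch sketch rather than the official NeTI implementation. The class name NeTIMapper, the exact composition of the two hidden layers, and the placeholder value used for the super-category norm are assumptions; the positional encoding e_{t,ℓ} is assumed to be computed separately as described in the Positional Encoding paragraph. The sketch covers the two fully-connected layers, the optional Nested Dropout truncation, the final 768-dimensional projection, and the output rescaling of Eq. (4).

```python
# Illustrative sketch of the NeTI neural mapper (assumed names; not the authors' code).
import torch
import torch.nn as nn

class NeTIMapper(nn.Module):
    def __init__(self, enc_dim=160, hidden_dim=128, out_dim=768, super_norm=0.4):
        super().__init__()
        # Two fully-connected layers over the positional encoding e_{t,l}.
        self.body = nn.Sequential(
            nn.Linear(enc_dim, hidden_dim), nn.LayerNorm(hidden_dim), nn.LeakyReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.LayerNorm(hidden_dim), nn.LeakyReLU(),
        )
        self.out = nn.Linear(hidden_dim, out_dim)
        # Norm of the concept's super-category token embedding (placeholder value).
        self.register_buffer("super_norm", torch.tensor(super_norm))

    def forward(self, e_tl, truncation=None):
        # e_tl: (B, enc_dim) positional encoding of the (timestep, layer) pair.
        h = self.body(e_tl)
        # Nested Dropout: during training, sample a truncation index and zero out
        # all hidden units above it; at inference, the caller may fix `truncation`.
        if truncation is None and self.training and torch.rand(()) < 0.5:
            truncation = int(torch.randint(1, h.shape[-1] + 1, ()))
        if truncation is not None:
            mask = torch.zeros_like(h)
            mask[:, :truncation] = 1.0
            h = h * mask
        v = self.out(h)  # (B, 768): the token embedding v_{t,l}
        # Output rescaling (Eq. 4): match the norm of the super-category embedding.
        return v / v.norm(dim=-1, keepdim=True) * self.super_norm
```

At inference time, a caller can pass a fixed truncation value to trade reconstruction fidelity for editability, mirroring the ordered-representation control described next.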
As shown in Figure 4, this simple normalization results in a large improvement in editability without harming reconstruction." }, { "figure_ref": [ "fig_0" ], "heading": "Imposing an Ordered Representation", "publication_ref": [ "b29", "b29" ], "table_ref": [], "text": "A common characteristic of inversion and personalization methods is the existence of a tradeoff between the reconstruction quality and the editability of the inverted concept, making it challenging to find an embedding that is both highly accurate and can still be edited with complex prompts. We observe that the dimensionality d h of the last hidden layer h in our mapper greatly affects this tradeoff. Notice that d h determines the number of 768-dimensional vectors present in our last linear layer, where a larger d h allows the model to better \"overfit\" to the training images, resulting in more accurate reconstructions but at the possible cost of reduced editability.\nTheoretically, one can train multiple neural mappers with different representation sizes and choose the one that best balances reconstruction and editability for each concept. However, doing so is both cumbersome and impractical at scale. Instead, we introduce an importance-based ordering over our final hidden layer h that allows for post-hoc control of the representation's dimensionality. This is achieved by applying a variant of the Nested Dropout technique proposed in Rippel et al. [30] over the output of h. Specifically, we uniformly sample a truncation value t and zero out the outputs of h above the truncation value,\nh[i > t] = 0, where t ∼ U (0, d h ], (5)\nsee the right-hand side of Figure 3. Zeroing out a subset of h effectively means the corresponding set of 768-dimensional vectors from the last linear layer will not be used for that specific truncation t. This has the same effect as dynamically changing d h without changing the model architecture itself. By randomly sampling the truncation values during training we encourage the network to be robust to different dimensionality sizes and encode more information into the first set of output vectors, which naturally have a lower truncation frequency. We note that our dropout mechanism is a simpler variant of Rippel et al. [30], which focused on a retrieval setting and used a different sampling distribution with an additional sweeping mechanism. In Section 5.2, we show that manually setting the truncation value t during inference offers a new way to traverse the reconstruction-editability tradeoff for personalization. There, users are provided an added axis of control, enabling them to dynamically choose a truncation that best suits their concept and target prompt. Importantly, as different concepts vary in their complexity, our method allows us to train all concepts with a single architecture and easily compress the learned representation by simply dropping the redundant vectors after training." }, { "figure_ref": [ "fig_2" ], "heading": "The Textual Bypass", "publication_ref": [ "b35", "b35" ], "table_ref": [], "text": "The outputs of our neural mapper reside in the P space of the text encoder, and every vector returned by our mapper is first passed through the text encoder alongside the rest of the prompt tokens. Inverting a concept directly into the U-Net's input space, without going through the text encoder, could potentially lead to much quicker convergence and more accurate reconstructions. 
However, doing so is not a practical solution by itself, as skipping the text encoder limits our ability to create new compositions since our concept is not seen by the text encoder alongside the other prompt tokens. Instead, we propose to learn two vectors using our neural mapper. The first vector, v base , is fed into the text encoder, as presented above, and can therefore affect and be affected by the other tokens. We denote the corresponding output of the text encoder by E text (v base ) ∈ R 768 . The second vector, v pass , is a textual bypass vector that is added as a residual to the output of the text encoder and can incorporate additional information directly into the U-Net. To prevent v pass from becoming too dominant, we scale it to match the norm of E text (v base ). Our final representation passed to the U-Net cross-attention layer is then defined as:\nv * = E text (v base ) + α · (v pass / ||v pass ||) · ||E text (v base )|| , (6)\nwhere we set α = 0.2 in all experiments. Figure 5 presents our modified neural mapper with our textual bypass.\nIntuitively, our textual bypass should only affect the appearance of the learned concept and not its position or style, which should be mostly governed by the text prompt. Thus, inspired by the recent key-locking mechanism of Tewel et al. [36], we use v * only as input for the values (V) of the cross-attention of the U-Net, while the inputs of the keys (K) are still set using the standard v base vector. Note that our approach differs from the mechanism of Tewel et al. [36], where the keys are set using a fixed, manually determined word. Yet, our approach borrows from the same intuition regarding the roles of keys and values in cross-attention layers, as also discussed in [23]. In Figure 6, we illustrate the different aspects of the concept captured by v base and v pass , showing how v pass is used by our mapper to refine the result of v base ." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "In the following section, we demonstrate the effectiveness of NeTI and the appealing properties of our P * text-conditioning space." }, { "figure_ref": [], "heading": "Figure 6", "publication_ref": [], "table_ref": [], "text": "Panel labels: Real; Results for v base ; Results with v pass . Figure 6. When trained with textual bypass, v base learns to reconstruct the coarse concept details, with the bypass vector v pass further refining the results." }, { "figure_ref": [ "fig_15", "fig_16" ], "heading": "Evaluations and Comparisons", "publication_ref": [ "b40", "b13" ], "table_ref": [ "tab_1" ], "text": "Evaluation Setup. We evaluate NeTI with respect to state-of-the-art inversion methods (Textual Inversion (TI) [9], Extended Textual Inversion (XTI) [41]) and fine-tuning approaches (DreamBooth [32], CustomDiffusion [14]). We consider 10 concepts taken from TI and 6 concepts from CustomDiffusion. To quantitatively evaluate each method, we construct a set of 15 text prompts ranging from background modifications (e.g., \"A photo of S * on the beach\") and artistic styles (e.g., \"A manga drawing of S * \") to more abstract compositions (e.g., \"App icon of S * \"). For each concept and prompt, we generate 32 images using 32 random seeds shared across all methods. For a fair comparison, and unless otherwise noted, we do not apply Nested Dropout on the results obtained by NeTI. Additional details are provided in Appendix A.\nQualitative Evaluation. First, in Figure 7, we demonstrate NeTI's ability to compose learned concepts in novel compositions. 
These generations range from placing the concepts in new scenes to capturing their key semantics and forming new creations inspired by them.\nIn Figure 8, we present a visual comparison of new compositions of various concepts. As can be seen, TI, which operates in the relatively small P space, fails to capture the exact characteristics of the concept or compose the concept in novel scenes. By tuning the model, DreamBooth is able to achieve higher-fidelity reconstructions, such as that of the statue in the third row. However, this comes at a heavy cost of high storage requirements and reduced editability. For example, in the first row, DreamBooth fails to place the cat statue in the scene when performing a complex edit. Moreover, even when tuning the model, DreamBooth may still fail to reconstruct the concept, as seen in the second row. By operating over P+, XTI achieves improved reconstructions and editability but still fails to capture concept-specific details. Our approach, NeTI, attains high-fidelity reconstructions while staying faithful to the provided text prompt, e.g., the string-like legs of the toy in the second row. Notably, these results are obtained after 500 training steps, the same number used for DreamBooth and XTI. In Appendix D and Figures 25 and 26 we provide additional qualitative results and comparisons and investigate NeTI under a single-image training setting. In Appendix C we perform an ablation study of our key design choices. Finally, Figure 27 presents additional results of our method on a variety of concepts and prompts.\nReconstruction-Editability. We follow the evaluation setup from TI and evaluate the performance of each method in its ability to (1) reconstruct the target concept and (2) synthesize novel compositions containing the concept. For evaluating reconstruction, we generate 32 images for each text prompt and compute the CLIP-space similarity between the generated images and the training images. To measure editability, for each text prompt, we calculate the CLIP-space similarity between the embeddings of the generated images and the embedding of the text prompt, where we omit the placeholder S * from the prompt. Results are presented in Figure 9. We first note that NeTI is able to achieve comparable results to those of DreamBooth, without requiring any tuning of the model. While this does require additional training time, our models require ∼2MB of disk space while DreamBooth requires several GBs.\nAs marked using the dashed line, when more units are dropped from our hidden representation at inference time, we can shift along this curve, reaching higher editability at the cost of reduced visual fidelity. Finally, when compared to XTI trained for the same number of steps, NeTI achieves both improved reconstruction and editability across this dropout curve. In Appendix D, we additionally compare NeTI and XTI after 250 and 500 training steps.\nUser Study. We additionally conduct a user study to analyze all approaches. For each concept, a text prompt was randomly selected from one of our 15 prompts and used to generate 2 images for each method using the same random seeds. Respondents were asked to rate the images based on their (1) similarity to the concept's training images and (2) similarity to the text prompt. Note that previous personalization studies asked respondents to rate each of the above properties independently. 
However, consider a case where a method is tasked with generating a painting of a concept, but omits the concept from the painting entirely. In this case, it would still score favorably along the text similarity if measured independently. Yet, this is clearly not a desirable result. Therefore, we asked respondents to consider both aspects together and rate the overall image and text similarity on a scale from 1 to 5. Results are shown in Table 1. In total, we had 35 respondents, for a total of 560 ratings per method. As shown, NeTI outperforms other inversion methods and remains competitive with DreamBooth without requiring model tuning." }, { "figure_ref": [], "heading": "Time for Some Analysis", "publication_ref": [ "b4" ], "table_ref": [], "text": "Time and Space. We begin by analyzing the use of both time and space and validate the importance of conditioning our mapper on both input types. Specifically, we consider three variants of our mapper: (1) a space-conditioned mapper; (2) a time-conditioned mapper; and (3) a mapper that is not conditioned on either. For the final variant, we simply pass a fixed input to the neural network and optimize the network parameters as is done in our standard training scheme. Sample reconstructions of each variant using the prompt \"A photo of S * \" are provided in Figure 10. As can be seen, conditioning our neural mapper on both time and space is crucial for capturing fine-level details.\nControlling Editability. Thanks to our importance-based ordering over our mapper's hidden representation, we can control our dimensionality at inference time. In Figure 11 we gradually change the strength of our dropout to show how this affects the generated image's visual and text fidelity. When a stronger dropout is applied we get a more coarse/semantic representation of our concept that is more amenable to edits and new compositions. This inference-time control enables users to dynamically choose the dropout strength that best suits their target concept and prompt without having to train multiple models.\nPer-Timestep Decomposition. We now turn to analyze what aspects of the personalized concept are captured at different timesteps. To do so, we consider a single timestep t and query our network using all combinations of t and {ℓ 1 , . . . , ℓ 16 }. We then apply the resulting token embeddings across all timesteps. In Figure 12 we perform the above process for timesteps spanning different stages of the denoising process. As shown, at the early timesteps, (e.g., t = 999) NeTI learns to capture coarse details such as the concept's general structure and color scheme. Yet fine-grained details are missing, such as the exact pattern on the teapot or the engraving of the pot. As we continue along the denoising process, more concept-specific details are added. Importantly, no single timestep is able to capture all the concept-specific details while a combination of all timesteps attains high-fidelity reconstructions. This behavior of learning more global aspects (e.g., structure) followed by local aspects (e.g., style) also aligns with previous observations of the denoising process [5,23]." }, { "figure_ref": [ "fig_9" ], "heading": "Conclusions", "publication_ref": [ "b9", "b41", "b40", "b29" ], "table_ref": [], "text": "We introduced a new text-conditioning space P * that considers both the time-dependent nature of the denoising process and the different attention layers of the denoising network. 
We then presented NeTI, which implicitly represents concepts in P * via a simple neural mapper. While we have demonstrated the effectiveness of NeTI, there are limitations that should be considered. First, our dropout-based method for traversing the reconstruction-editability tradeoff still requires a dedicated inference pass for each dropout value we check. We also still optimize each concept independently, requiring hundreds of steps per concept. An interesting avenue could be to pair our approach with faster encoder-based approaches [10, 42]. We hope that P * and NeTI will help encourage further advancements in personalization methods.\nInput Representation. Our timesteps t range from 0 to 1,000, as in the standard Stable Diffusion training scheme. For choosing the U-Net layers, we follow Voynov et al. [41] and consider 16 different cross-attention layers. Our positional encoding maps the pair of scalars (t, ℓ) into a 160-dimensional vector. Our 160 uniformly-spaced (t, ℓ) anchors are defined using 16 U-Net layers and 10 time-based anchors corresponding to t = 0, 100, 200, . . . , 900. Recall that the output of our positional encoding is given by e t,ℓ = E × f (t, ℓ) ∈ R 160 , where E ∈ R 160×2048 is formed by stacking the encodings of the 160 anchor pairs as rows: E = [f (0, ℓ 1 ); f (0, ℓ 2 ); . . . ; f (0, ℓ 16 ); f (100, ℓ 1 ); . . . ; f (900, ℓ 16 )].\nFor the variance of the Fourier Features, we set σ t = 0.03 and σ ℓ = 2 for the time and layer encoding, respectively. This introduces an inductive bias where close timesteps obtain a similar encoding, see Figure 22.\nNetwork Architecture. The encoded input e t,ℓ is mapped to a 128-dimensional vector via two fully connected layers. After each layer, we apply LayerNorm [3] followed by a LeakyReLU activation. During training, we apply Nested Dropout [30] with a probability of 0.5 and sample the truncation index as t ∼ U [0, 128). Finally, an additional fully-connected layer maps the 128-dimensional vector to a 768-dimensional vector, matching the dimension of the standard P token embedding space.\nTraining & Inference Scheme. Training is performed on a single GPU using a batch size of 2 with a gradient accumulation of 4, for an effective batch size of 8. When applying our textual bypass technique, we perform up to 1000 optimization steps. Importantly, we found that good results are also possible with far fewer steps (see Figure 17). We use a base learning rate of 0.001, which is scaled to an effective learning rate of 0.008. At inference time, we apply a guidance scale of 7.5 for 50 denoising steps. " }, { "figure_ref": [ "fig_0" ], "heading": "A.2. Evaluation Setup", "publication_ref": [ "b40", "b13" ], "table_ref": [ "tab_4" ], "text": "Baseline Methods. For Textual Inversion [9], we follow the original paper and train for 5,000 optimization steps with a batch size of 8, using the unofficial implementation from the diffusers [40] library. For XTI [41], we use the implementation provided by the authors and follow the official hyperparameters specified in the paper, training with a batch size of 8 for 500 optimization steps and a learning rate of 5e-3. For a fair comparison, we also quantitatively evaluate their results at 1,000 optimization steps, see Table 3.\nFor DreamBooth [32], we use the diffusers implementation and tune only the denoiser's U-Net with no prior preservation loss. We perform 500 fine-tuning steps using a learning rate of 5e-6 and a batch size of 4. 
Finally, we compare to CustomDiffusion [14] using their six released models available in their official implementation. Inference was performed using their released implementation and default guidance scale parameters.\nConcepts and Text Prompts. Below, we list the set of concepts used across all of our evaluations. From Gal et al.\n[9], we use the following 10 concepts:\n• Clock • Colorful Teapot • Dangling Child • Elephant • Fat Stone Bird\nFigure 13 (prompt: \"A photo of S * \"). Additional results validating our space-time conditioning of NeTI. We train NeTI with and without our time and space conditioning. All models are trained for the same number of optimization steps. As can be seen, the combination of both time and space is essential for attaining high visual fidelity." }, { "figure_ref": [], "heading": "B. Storage Requirements", "publication_ref": [], "table_ref": [], "text": "When applying our textual bypass, our mapper networks contain approximately 560,000 learnable parameters. When textual bypass is not applied, this reduces to approximately 460,000 trainable parameters. This amounts to 2.2MB and 1.86MB of disk space required to store each learned concept, respectively.\nThanks to our use of Nested Dropout, we can further compress the representation of the concept by dropping a significant subset of parameters in the network's final layer. When we reduce the number of units in the final layer from the full 128 units to 32 units, the number of parameters decreases to 390,000 parameters with no textual bypass, a decrease of 15%, requiring 1.56MB of disk space per concept. When keeping only the first 8 units of the final layer, this decreases to 367,000 parameters, or 1.49MB of disk space.\nAs a reference to alternative methods, DreamBooth [32] requires several GBs of disk space per concept while CustomDiffusion [14] requires approximately 73MB of disk space. As shown in the main paper and in Appendix D, NeTI attains comparable or better performance with a similar convergence rate while requiring a significantly lower storage footprint." }, { "figure_ref": [ "fig_0", "fig_6", "fig_7" ], "heading": "C. Ablation Study", "publication_ref": [ "b29" ], "table_ref": [], "text": "Conditioning on Both Time and Space. In Section 5.2, we validated the use of both time and space when conditioning our neural mapper; additional results are provided in Figure 13. We additionally ablate: (1) our positional encoding over the inputs; (2) the use of Nested Dropout [30] before the final output layer of our mapper; and (3) our textual bypass technique. We train all models using the same number of optimization steps with the same set of hyperparameters, when applicable.\nIn Figure 14, we provide a qualitative comparison of the results obtained by each variant. First, observe that when no positional encoding is applied on our scalar inputs, the resulting model is unable to adequately capture the concept's visual details. This is most notable in the second and third rows, where the model fails to capture the mug's shape or the cat statue's unique colorful stripes. When we omit the use of Nested Dropout during training, we get reconstructions comparable to those of our NeTI models, as seen with the mug example in the second row. However, we find that models trained without Nested Dropout are less editable than those trained with the dropout. For example, in the third row, NeTI without Nested Dropout is unable to achieve a plausible edit of the cat statue. We attribute this to the fact that Nested Dropout can be viewed as a form of regularization during training. 
This allows the model to focus on capturing the concept-specific features while ignoring spurious details in the training images. In doing so, the resulting models tend to be more amenable to edits at inference time, allowing us to create more accurate, novel compositions of the concept. Finally, in the third and fourth columns, we compare NeTI with and without our textual bypass technique. Observe the improved reconstructions that arise when we are able to leverage the additional bypass vector passed directly to the output space of the text encoder. For example, notice the skulls on the bottom of the mug in the second example, the tail of the cat in the third row, or the accurate reconstructions of the child's string-like legs in the final row. In a sense, the bypass vector is able to \"fill in\" the missing details that were not captured by the text encoder. Importantly, we find that using the textual bypass does not harm editability, especially with more complex concepts such as those shown here. The Role of the Textual Bypass. In the main paper, we demonstrated that applying our textual bypass technique results in improved visual fidelity. We also demonstrate that the base vector v base learns to capture the coarse-level details of the concept while the additional bypass vector v pass complements this by adding finer-level details such as texture and further refining the concept's structure. In Figure 15 we provide additional examples to complement those provided in the main paper (Figure 6). As can be seen, by adding v pass we are able to better capture the shape of the metal bird in the first row and the elephant in the final row. Interestingly, in the second row, v base generates images resembling those of a real bird, and when we add v pass to the denoising process, this bird is transformed to more accurately depict the stone-like structure of the real concept. Please note that the results provided here for v base do not represent the optimal results that can be achieved without our textual bypass technique. This visualization serves to provide insights as to what aspects the network chose to encode into each one of its output vectors. For an ablation study of our textual bypass technique, we refer the reader to Appendix C." }, { "figure_ref": [ "fig_9", "fig_10", "fig_11" ], "heading": "D. Additional Comparisons", "publication_ref": [ "b40", "b13" ], "table_ref": [], "text": "In this section, we provide additional evaluations and comparisons of our NeTI scheme. Single Image Personalization. Here, we evaluate NeTI when only a single image is used during training. We apply the same training scheme as used in our other evaluations and train our models for 500 optimization steps without textual bypass. In Figure 16 we provide visual examples of results obtained by NeTI under this single-image training setting. As can be seen, even under this more challenging setting, NeTI performs well in terms of both reconstructing the target concept and remaining consistent with the provided text prompt. We do note that NeTI may still fail to fully reconstruct the concept in some cases. For example, in the first row, NeTI generates images with two tails, and in the third row, we fail to generate fine-level details of the clock. We hope to further explore this challenging setting in the future. Training Convergence. We now compare the convergence speed of NeTI to that of XTI [41]. 
In Table 3, we provide quantitative metrics computed over all 16 concepts following our evaluation protocol described in the main paper. As can be seen, NeTI with our textual bypass attains comparable performance to XTI, even when trained with 4× fewer optimization steps (i.e., 250 steps for NeTI compared to 1000 steps for XTI). Moreover, when we continue training NeTI for 1000 steps, we achieve a significant improvement compared to XTI in our image similarity metrics with only a small decrease in text similarity. Next, in Figure 17 we show results obtained by NeTI when trained for a small number of optimization steps. As can be seen, even after training for a very small number of steps, e.g., as few as 25 steps, NeTI is able to capture the core concept-specific details such as the color of the fur of the cat in the first row or the shape of the tortoise in the second row. Although most results presented in the paper are obtained after 500 training steps, this further highlights that some concepts, depending on their complexity, may converge much faster using our neural mapper and P * space.\nComparison to CustomDiffusion. Finally, we provide a comparison to CustomDiffusion [14]. We choose to compare our approach with their six official publicly-released datasets (see Appendix A). We evaluated their models with our 15 text prompts, generating 32 images for each prompt using the same 32 random seeds as used to evaluate all methods in the main paper.\nIn Figure 18 we provide a visual comparison over various concepts and text prompts obtained by both methods after 500 training steps. As can be seen, CustomDiffusion, which fine-tunes the diffusion model, often fails to place the learned concept in new compositions. We attribute this to the fact that the model tends to overfit the original training images, as can be seen in the wooden pot example in the final row. There, CustomDiffusion leaks details appearing in the training images and fails to place the pot on the beach as specified in the prompt. The leakage is also present in the plush tortoise example in the second row, where the rug texture can be found in the generated images. In contrast, NeTI more accurately reconstructs the concepts while placing them in new scenes. In Figure 19 we provide a quantitative comparison of their six concepts across all alternative methods and our proposed NeTI variants. First, when trained for 500 steps our NeTI models with textual bypass outperform CustomDiffusion both in terms of image similarity and text similarity. Moreover, when continuing to train for 1000 steps, our models reach performance comparable to that of DreamBooth [32], all without requiring tuning of the generative model. Importantly, since we do not tune the generative model, our storage footprint is much lower than that of CustomDiffusion, requiring approximately 75× less disk space per concept. This further highlights the appealing properties of personalization-by-inversion techniques such as ours." }, { "figure_ref": [], "heading": "E. Style Mixing", "publication_ref": [ "b40", "b40" ], "table_ref": [], "text": "In Voynov et al. [41], the authors demonstrate that their P+ latent space allows for mixing the geometry of one concept with the appearance of another concept. They demonstrate that this style mixing capability is made possible since different layers of the denoising U-Net model are responsible for different aspects of the generated image. 
Here, we investigate whether this style mixing ability holds in our P * latent space and neural mapper. Following the notation used in Voynov et al. [41], given two concepts, the embeddings of the concept whose geometry we want to keep are passed to layers (16, down,0),(16,down,1),(8,down,0) while the embeddings of the appearance concept are passed to the remaining layers. Moreover, we investigate whether performing style mixing at different denoising timesteps can provide us with another axis of control. Therefore, rather than performing the style mixing at all denoising timesteps, we begin mixing at different starting points, such that starting later in the denoising process should preserve more details from the geometry concept.\nWe illustrate this idea in Figure 20. In each row, we are given a geometry concept (left) and appearance concept (right) and perform the mixing starting at 4 different timesteps: t = 600, 700, 800, and 900. As can be seen, when we start later in the denoising process, more details from the geometry concept are present in the output image. For example, in the bottom row for t = 600, the output maintains the white top and color scheme of the original mug. As t increases, i.e., the mixing begins earlier in the denoising process, more of the appearance concept is passed to the output image. In the same example as above, the output now contains colorful stripes and a head resembling that of the original cat statue. This behavior can also be seen in the top row, where the teapot gradually shifts into a rocklike texture, matching the stone bird input.\nWe do note that style mixing may be challenging over concepts that converge quickly. We believe that this can be attributed to the information sharing present in our neural mapper. Since all embeddings are computed using shared weights, the disentanglement between different U-Net layers may not be as strong as was observed in XTI where each embedding is optimized independently." }, { "figure_ref": [], "heading": "Geometry Appearance", "publication_ref": [], "table_ref": [], "text": "Time-Based Mixing Control " }, { "figure_ref": [], "heading": "F. Additional Analysis", "publication_ref": [ "b23" ], "table_ref": [], "text": "In this section, we provide additional visualizations and analyses to complement those in the main paper.\nDistribution of Token Embedding Norms. First, in Figure 21, we visualize the distribution of the norms of real token embeddings in CLIP's pretrained text encoder [24]. As can be seen, the norms follow a very narrow distribution centered around 0.25 -0.45. This distribution provides additional motivation for our rescaling technique described in the main paper. Our embeddings should ideally behave similarly to real token embeddings and by normalizing the norm of our embeddings to match that of real token embeddings, we obtain a representation that behaves similarly to real tokens, thereby improving our editability.\nPositional Encoding. In Figure 22, we provide a visualization of various outputs returned by our positional encoding function. As can be seen, the blue and green curves are well-separated due to their different layer indices. Conversely, the green and red curves share a similar encoding as they both share the same U-Net layer as input, but differ slightly in their timestep. Finally, the yellow curve, differs both in its input timestep and layer index, resulting in an encoding that differs significantly from the other encodings." 
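To make the positional encoding analyzed above concrete, the sketch below builds a Random-Fourier-Feature encoding of a (timestep, U-Net layer) pair and expresses it relative to a fixed grid of anchor pairs, mirroring e_{t,l} = E x f(t, l). The frequency scales, dimensions, and module interface are illustrative assumptions, not the exact values used by the method.

```python
# Minimal sketch of the (t, l) positional encoding: random Fourier features of the
# (timestep, U-Net layer) pair, projected onto a fixed grid of anchor (t, l) pairs.
# The scales "sigma_t" and "sigma_l" are hypothetical and only illustrate how the
# encoding can be biased towards nearby timesteps and layers.
import torch

class SpaceTimeEncoding(torch.nn.Module):
    def __init__(self, num_fourier: int = 1024, sigma_t: float = 0.03, sigma_l: float = 2.0):
        super().__init__()
        # Random frequencies for the 2D input (t, l); output dim is 2 * num_fourier = 2048.
        freqs = torch.randn(2, num_fourier) * torch.tensor([[sigma_t], [sigma_l]])
        self.register_buffer("freqs", freqs)
        # Anchor grid: timesteps {0, 100, ..., 900} x layers {0, ..., 15} -> 160 rows.
        t_grid = torch.arange(0, 1000, 100).repeat_interleave(16).float()
        l_grid = torch.arange(16).repeat(10).float()
        self.register_buffer("anchors", torch.stack([t_grid, l_grid], dim=-1))  # (160, 2)

    def fourier(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 2) pairs of (timestep, layer index) -> (N, 2048) Fourier features.
        proj = 2 * torch.pi * x @ self.freqs
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

    def forward(self, t: torch.Tensor, layer: torch.Tensor) -> torch.Tensor:
        # Encode the query pair, then express it w.r.t. the 160 anchor pairs:
        # e_{t,l} = E @ f(t, l), where the rows of E are f evaluated on the anchor grid.
        E = self.fourier(self.anchors)                          # (160, 2048)
        f = self.fourier(torch.stack([t, layer], -1).float())   # (B, 2048)
        return f @ E.T                                          # (B, 160)

enc = SpaceTimeEncoding()
e = enc(torch.tensor([435.0]), torch.tensor([7.0]))  # encoding fed to the neural mapper
print(e.shape)  # torch.Size([1, 160])
```

In this construction, two queries sharing a layer index but differing slightly in timestep receive similar encodings, while queries from different layers are well separated, matching the behavior visualized in Figure 22.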
}, { "figure_ref": [ "fig_0", "fig_13", "fig_15", "fig_16" ], "heading": "G. Additional Qualitative Results", "publication_ref": [], "table_ref": [], "text": "Below, we provide additional qualitative results, as follows:\n1. In Figure 23, we provide additional visualizations illustrating which concept-specific details are captured at different denoising timesteps using our neural mapper.\n2. In Figure 24 we demonstrate how using our dropout technique at inference time allows for controlling the reconstruction and editability tradeoff over various concepts and prompts.\n3. In Figures 25 and26 we provide additional qualitative comparisons to the alternative personalization methods using concepts and prompts from our evaluation protocol.\n4. In Figure 27 we provide additional results obtained using our NeTI scheme on a diverse set of prompts. In the second column, we can see that applying dropout is also helpful when some parts of the prompts are not adhered to (in this case \"a painting\"). For each concept, we show four images generated by each method using the same set of random seeds. Results for TI are obtained after 5,000 optimization steps while the remaining methods are all trained for 500 steps. We show results obtained with NeTI using our textual bypass. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like the thank Or Patashnik, Rinon Gal, and Yael Vinker for their insightful discussions and early feedback. This work was supported by the Israel Science Foundation under Grant No. 2366/16 and Grant No. 2492/20." }, { "figure_ref": [], "heading": "Appendix A. Additional Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1. Implementation Details", "publication_ref": [ "b23" ], "table_ref": [], "text": "We operate over the official Stable Diffusion v1.4 textto-image model that uses the pretrained text encoder from the CLIP ViT-L/14 model [24]. " } ]
2023-05-24
[ { "authors": "Rameen Abdal; Yipeng Qin; Peter Wonka", "journal": "", "ref_id": "b0", "title": "Image2stylegan: How to embed images into the stylegan latent space", "year": "2019" }, { "authors": "Rameen Abdal; Yipeng Qin; Peter Wonka", "journal": "", "ref_id": "b1", "title": "Image2stylegan++: How to edit the embedded images", "year": "2020" }, { "authors": "Jimmy Lei Ba; Jamie Ryan Kiros; Geoffrey E Hinton", "journal": "", "ref_id": "b2", "title": "Layer normalization", "year": "2016" }, { "authors": "Yogesh Balaji; Seungjun Nah; Xun Huang; Arash Vahdat; Jiaming Song; Qinsheng Zhang; Karsten Kreis; Miika Aittala; Timo Aila; Samuli Laine; Bryan Catanzaro; Tero Karras; Ming-Yu Liu", "journal": "", "ref_id": "b3", "title": "ediff-i: Text-to-image diffusion models with an ensemble of expert denoisers", "year": "2023" }, { "authors": "Hila Chefer; Yuval Alaluf; Yael Vinker; Lior Wolf; Daniel Cohen-Or", "journal": "", "ref_id": "b4", "title": "Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models", "year": "2023" }, { "authors": "Niv Cohen; Rinon Gal; Eli A Meirom; Gal Chechik; Yuval Atzmon", "journal": "Springer", "ref_id": "b5", "title": "this is my unicorn, fluffy\": Personalizing frozen vision-language representations", "year": "2022" }, { "authors": "Guillaume Couairon; Jakob Verbeek; Holger Schwenk; Matthieu Cord", "journal": "", "ref_id": "b6", "title": "Diffedit: Diffusionbased semantic image editing with mask guidance", "year": "2022" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; Amit Haim Bermano; Gal Chechik; Daniel Cohen-Or", "journal": "", "ref_id": "b8", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2023" }, { "authors": "Rinon Gal; Moab Arar; Yuval Atzmon; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "", "ref_id": "b9", "title": "Encoder-based domain tuning for fast personalization of text-to-image models", "year": "2023" }, { "authors": "René Haas; Inbar Huberman-Spiegelglas; Rotem Mulayoff; Tomer Michaeli", "journal": "", "ref_id": "b10", "title": "Discovering interpretable directions in the semantic latent space of diffusion models", "year": "2023" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b11", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Bahjat Kawar; Shiran Zada; Oran Lang; Omer Tov; Huiwen Chang; Tali Dekel; Inbar Mosseri; Michal Irani", "journal": "", "ref_id": "b12", "title": "Imagic: Text-based real image editing with diffusion models", "year": "2023" }, { "authors": "Nupur Kumari; Bingliang Zhang; Richard Zhang; Eli Shechtman; Jun-Yan Zhu", "journal": "", "ref_id": "b13", "title": "Multi-concept customization of text-to-image diffusion", "year": "2023" }, { "authors": "Mingi Kwon; Jaeseok Jeong; Youngjung Uh", "journal": "", "ref_id": "b14", "title": "Diffusion models already have a semantic latent space", "year": "2023" }, { "authors": "Jun Hao Liew; Hanshu Yan; Daquan Zhou; Jiashi Feng", "journal": "", "ref_id": "b15", "title": "Magicmix: Semantic mixing with diffusion models", "year": "2022" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten 
Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b16", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "Gal Metzer; Elad Richardson; Or Patashnik; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b17", "title": "Latent-nerf for shapeguided generation of 3d shapes and textures", "year": "2022" }, { "authors": "Ron Mokady; Amir Hertz; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", "ref_id": "b18", "title": "Null-text inversion for editing real images using guided diffusion models", "year": "2022" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b19", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal", "journal": "PMLR", "ref_id": "b20", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "Yong-Hyun Park; Mingi Kwon; Junghyo Jo; Youngjung Uh", "journal": "", "ref_id": "b21", "title": "Unsupervised discovery of semantic latent directions in diffusion models", "year": "2023" }, { "authors": "Or Patashnik; Daniel Garibi; Idan Azuri; Hadar Averbuch-Elor; Daniel Cohen-Or", "journal": "", "ref_id": "b22", "title": "Localizing object-level shape variations with text-to-image diffusion models", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b23", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Ali Rahimi; Benjamin Recht", "journal": "Advances in neural information processing systems", "ref_id": "b24", "title": "Random features for large-scale kernel machines", "year": "2007" }, { "authors": "Amit Raj; Srinivas Kaza; Ben Poole; Michael Niemeyer; Ben Mildenhall; Nataniel Ruiz; Shiran Zada; Kfir Aberman; Michael Rubenstein; Jonathan Barron; Yuanzhen Li; Varun Jampani", "journal": "", "ref_id": "b25", "title": "Dreambooth3d: Subject-driven text-to-3d generation", "year": "2023" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b26", "title": "Hierarchical textconditional image generation with clip latents", "year": "2022" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "PMLR", "ref_id": "b27", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Elad Richardson; Gal Metzer; Yuval Alaluf; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b28", "title": "Texture: Text-guided texturing of 3d shapes", "year": "2023" }, { "authors": "Oren Rippel; Michael Gelbart; Ryan Adams", "journal": "PMLR", "ref_id": "b29", "title": "Learning ordered representations with nested dropout", "year": "2014" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b30", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b31", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven 
generation", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b32", "title": "Photorealistic textto-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Jing Shi; Wei Xiong; Zhe Lin; Hyun Joon; Jung ", "journal": "", "ref_id": "b33", "title": "Instantbooth: Personalized text-to-image generation without test-time finetuning", "year": "2023" }, { "authors": "Matthew Tancik; P Pratul; Ben Srinivasan; Sara Mildenhall; Nithin Fridovich-Keil; Utkarsh Raghavan; Ravi Singhal; Jonathan T Ramamoorthi; Ren Barron; Ng", "journal": "NeurIPS", "ref_id": "b34", "title": "Fourier features let networks learn high frequency functions in low dimensional domains", "year": "2020" }, { "authors": "Yoad Tewel; Rinon Gal; Gal Chechik; Yuval Atzmon", "journal": "", "ref_id": "b35", "title": "Key-locked rank one editing for text-to-image personalization", "year": "2023" }, { "authors": "Omer Tov; Yuval Alaluf; Yotam Nitzan; Or Patashnik; Daniel Cohen-Or", "journal": "", "ref_id": "b36", "title": "Designing an encoder for stylegan image manipulation", "year": "2021" }, { "authors": "Narek Tumanyan; Michal Geyer; Shai Bagon; Tali Dekel", "journal": "", "ref_id": "b37", "title": "Plug-and-play diffusion features for textdriven image-to-image translation", "year": "2022" }, { "authors": "Dani Valevski; Matan Kalman; Yossi Matias; Yaniv Leviathan", "journal": "", "ref_id": "b38", "title": "Unitune: Text-driven image editing by fine tuning an image generation model on a single image", "year": "2022" }, { "authors": "Suraj Patrick Von Platen; Anton Patil; Pedro Lozhkov; Nathan Cuenca; Kashif Lambert; Mishig Rasul; Thomas Davaadorj; Wolf", "journal": "", "ref_id": "b39", "title": "Diffusers: State-ofthe-art diffusion models", "year": "2022" }, { "authors": "Andrey Voynov; Qinghao Chu; Daniel Cohen-Or; Kfir Aberman", "journal": "", "ref_id": "b40", "title": "p+: Extended textual conditioning in text-to-image generation", "year": "2023" }, { "authors": "Yuxiang Wei; Yabo Zhang; Zhilong Ji; Jinfeng Bai; Lei Zhang; Wangmeng Zuo", "journal": "", "ref_id": "b41", "title": "Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation", "year": "2023" }, { "authors": "Weihao Xia; Yulun Zhang; Yujiu Yang; Jing-Hao Xue; Bolei Zhou; Ming-Hsuan Yang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b42", "title": "Gan inversion: A survey", "year": "2022" }, { "authors": "Jiahui Yu; Yuanzhong Xu; Jing Yu Koh; Thang Luong; Gunjan Baid; Zirui Wang; Vijay Vasudevan; Alexander Ku; Yinfei Yang; Burcu Karagol Ayan", "journal": "", "ref_id": "b43", "title": "Scaling autoregressive models for content-rich text-toimage generation", "year": "2022" }, { "authors": "Jun-Yan Zhu; Philipp Krähenbühl; Eli Shechtman; Alexei A Efros", "journal": "Springer", "ref_id": "b44", "title": "Generative visual manipulation on the natural image manifold", "year": "2016" }, { "authors": "Peihao Zhu; Rameen Abdal; Yipeng Qin; Peter Wonka", "journal": "", "ref_id": "b45", "title": "Improved stylegan embedding: Where are the good latents?", "year": "2020" }, { "authors": "Ye Zhu; Yu Wu; Zhiwei Deng; Olga Russakovsky; Yan Yan", "journal": "", "ref_id": "b46", "title": "Boundary guided mixing trajectory for semantic control with diffusion 
models", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 320.78, 619.75, 224.33, 12.69 ], "formula_id": "formula_0", "formula_text": "L = E z∼E(x),y,ε∼N (0,1),t ||ε -ε θ (z t , t, c(y))|| 2 2 . (1)" }, { "formula_coordinates": [ 4, 63.84, 380.6, 222.52, 16.21 ], "formula_id": "formula_1", "formula_text": "v * = arg min v E z,y,ε,t ||ε -ε θ (z t , t, c(y, v))|| 2 2 ,(2)" }, { "formula_coordinates": [ 4, 60.23, 696.77, 121.13, 14.58 ], "formula_id": "formula_2", "formula_text": "arg min M E z,y,ε,t,ℓ ||ε -ε θ (z t ," }, { "formula_coordinates": [ 5, 103.24, 562.73, 183.12, 22.31 ], "formula_id": "formula_3", "formula_text": "M ′ (t, ℓ) = M(t, ℓ) ||M(t, ℓ)|| ||v super ||(4)" }, { "formula_coordinates": [ 5, 355.87, 310.18, 189.25, 9.65 ], "formula_id": "formula_4", "formula_text": "h[i > t] = 0 where t ∼ U (0, d h ],(5)" }, { "formula_coordinates": [ 6, 67.96, 406.86, 218.4, 23.22 ], "formula_id": "formula_5", "formula_text": "v * = E text (v base ) + α v pass ||v pass || ||E text (v base )||,(6)" }, { "formula_coordinates": [ 12, 50.11, 301.22, 196.15, 129.42 ], "formula_id": "formula_6", "formula_text": "e t,ℓ = E × f (t, ℓ) ∈ R 160 where E =             - f (0, 0) - - f (0, 1) - ... f (0, 16) - -f (100, 0) - -f (100, 1) - ... -f (900, 16) -             160×2048 ," }, { "formula_coordinates": [ 12, 332.27, 511.26, 71.69, 88.34 ], "formula_id": "formula_7", "formula_text": "• Clock • Colorful Teapot • Dangling Child • Elephant • Fat Stone Bird" } ]
A Neural Space-Time Representation for Text-to-Image Personalization
Figure 1. Personalization results of our method under a variety of prompts. Panel prompts: "…as a Samurai holding a katana", "…two medieval knights guarding…", "…as a cowboy sitting on hay", "…as an Astronaut in space", "…as a futuristic robot on Mars", "…as a whiskey bottle", "…as a wrestler action figure", shown alongside the input images. Our expressive representation enables one to generate novel compositions of personalized concepts that achieve high visual fidelity and editability without tuning the generative model. The bottom row shows our method's unique ability to control the reconstruction-editability tradeoff at inference time with a single trained model.
Yuval Alaluf; Elad Richardson; Gal Metzer; Daniel Cohen-Or
[ { "figure_caption": "Figure 3 .3Photo of a 𝑆 * 𝑡 ℓ", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Generation results when training with and without our rescaling technique. As can be seen, applying rescaling improves editability without harming visual fidelity.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Photo of a 𝑆 * 𝑡 ℓ", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 8 .Figure 9 .89Figure 7. Sample text-guided personalized generation results obtained with NeTI. Real Sample Textual Inversion (TI) DreamBooth Extended Textual Inversion NeTI", "figure_data": "", "figure_id": "fig_3", "figure_label": "89", "figure_type": "figure" }, { "figure_caption": "Figure 10 .Figure 11 .1011Figure 10. Importance of time-space conditioning. As can be seen, the combination of both time and space is essential for attaining high visual fidelity.", "figure_data": "", "figure_id": "fig_4", "figure_label": "1011", "figure_type": "figure" }, { "figure_caption": ", we provide additional examples of reconstructions obtained when training NeTI with and without each conditioning modality. Network Architecture and Training Scheme. Having demonstrated the importance of our space-time representation, we now turn to analyze three key components of our network and training scheme: (1) the use of a positional encoding on the input pair (t, ℓ);", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 14 .14Figure 14. Ablation study. We compare our NeTI models trained without our positional encoding function, without Nested Dropout applied during training, and without our textual bypass technique. As can be seen, all three components are essential for achieving high-fidelity reconstructions of the input while remaining faithful to the input text prompt, provided on the left.", "figure_data": "", "figure_id": "fig_6", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 .15Figure 15. Additional results of our learned textual bypass representations. When trained with textual bypass, v base learns to reconstruct the coarse concept with the bypass vpass further refining the results.", "figure_data": "", "figure_id": "fig_7", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "\"An app icon of S * \" \"A photograph of two S * on a table\" \"A photo of S * in the jungle\" \"A watercolor painting of S * \" \"Oil painting of S * \" \"A colorful graffiti of S * \" Figure 16. Image generations with NeTI under a single-image training setting.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 17 .17Figure 17. Generation results obtained with NeTI after a varying number of steps. Observe that even after a small number of training steps, concept-specific details are already captured by NeTI.", "figure_data": "", "figure_id": "fig_9", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 .18Figure 18. Comparison to CustomDiffusion. Both models were trained for 500 steps and images for each concept were generated using the same seeds.", "figure_data": "", "figure_id": "fig_10", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19 .19Figure19. 
Quantitative metrics over the 6 concepts from CustomDiffusion. NeTI with our textual bypass outperforms CustomDiffusion along both fronts when trained for the same number of steps. When trained for 1,000 steps, our models become competitive with DreamBooth [32].", "figure_data": "", "figure_id": "fig_11", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Figure 20. Style mixing results obtained with NeTI. Given two concepts, the embeddings of the geometry concept are passed to layers (16, down, 0), (16, down, 1), (8, down, 0) while the embeddings of the appearance concept are passed to the remaining U-Net layers. To provide an additional axis of control, we also leverage the time component of P * and begin the mixing at various denoising timesteps. From left to right: t = 600, 700, 800, and 900. As can be seen, starting the mixing later, i.e., a smaller value of t, passes more information from the geometry concept to the output image.", "figure_data": "", "figure_id": "fig_12", "figure_label": "202122", "figure_type": "figure" }, { "figure_caption": "Figure 24. Controlling the editability with Nested Dropout. The three bottom examples showcase how dropping out layers affects the reconstruction when no editing is applied. This illustrates how the different vectors in our ordered representation capture different aspects of the concept in an ordered fashion. In the second column, we can see that applying dropout is also helpful when some parts of the prompts are not adhered to (in this case \"a painting\").", "figure_data": "", "figure_id": "fig_13", "figure_label": "24", "figure_type": "figure" }, { "figure_caption": "Figure 25. Additional qualitative comparisons. For each concept, we show four images generated by each method using the same set of random seeds. Results for TI are obtained after 5,000 optimization steps while the remaining methods are all trained for 500 steps. Results obtained with NeTI use our textual bypass.", "figure_data": "", "figure_id": "fig_15", "figure_label": "25", "figure_type": "figure" }, { "figure_caption": "Figure 26. Additional qualitative comparisons. For each concept, we show four images generated by each method using the same set of random seeds. Results for TI are obtained after 5,000 optimization steps while the remaining methods are all trained for 500 steps. We show results obtained with NeTI using our textual bypass.", "figure_data": "", "figure_id": "fig_16", "figure_label": "26", "figure_type": "figure" }, { "figure_caption": "Figure 27. Sample text-guided personalized generation results obtained with NeTI.", "figure_data": "", "figure_id": "fig_17", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "User Study. We asked respondents to rate each method based on their faithfulness to the original images and prompt on a scale of 1 to 5.", "figure_data": "Avg. Rating (↑): TI 2.77 (± 1.20); DreamBooth 3.71 (± 1.13); XTI 3.15 (± 1.09); NeTI 3.97 (± 1.12)", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "A list of the 15 prompts used in our evaluation protocol.", "figure_data": "\"A photo of a S * \"; \"A photo of S * in the jungle\"; \"A photo of S * on a beach\"; \"A photo of S * in Times Square\"; \"A photo of S * in the moon\"; \"A painting of S * in the style of Monet\"; \"Oil painting of S * \"; \"A Marc Chagall painting of S * \"; \"A manga drawing of S * \"; \"A watercolor painting of S * \"; \"A statue of S * \"; \"App icon of S * \"; \"A sand sculpture of S * \"; \"Colorful graffiti of S * \"; \"A photograph of two S * on a table\"", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "All models are trained on the same training set and initialization token, when applicable. For a list of all 15 text prompts considered in the evaluation protocol, please refer to Table 2.", "figure_data": "Columns: Real Sample & Prompt | No Time Conditioning | No Space Conditioning | No Space nor Time Conditioning | Both Space and Time Conditioning; prompt: \"A photo of S * \". Concepts: • Headless Statue • Metal Bird • Mugs Skulls • Rainbow Cat Statue • Red Bowl. From Kumari et al. [14], we use 6 concepts: • Barn • Dog • Tortoise Plushy • Cat • Teddybear • Wooden Pot", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative comparison with XTI [41] after a varying number of optimization steps. Observe that NeTI with our textual bypass at 250 training steps outperforms XTI when trained for 1000 steps in both the image and text similarities. Note, the higher the better.", "figure_data": "Steps: 250 | 500 | 1000, each reported as Image / Text. XTI: -- / -- | 0.711 / 0.260 | 0.722 / 0.255. NeTI: 0.729 / 0.262 | 0.751 / 0.253 | 0.767 / 0.247. (Additional panel: Real Sample at 25, 50, 100, 150 steps, \"A photo of S * on the beach\")", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work by Gal et al. introduces the concept of personalization-by-inversion approach, which the citing paper adopts as a method to optimize input vectors to represent desired subjects or artistic styles in text-to-image models."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work by Voynov et al. extends the personalization-by-inversion approach by introducing the P+ space, which optimizes different embeddings for each attention layer in the text-to-image model."}, {"Category": "Extension or Continuation", "Citation": "[32]", "Explanation": "The cited work by Ruiz et al. proposed a personalization-by-finetuning approach to better represent the user-specified concept in the generative model. The citing paper extends this idea by introducing a new text-conditioning space that is dependent on both the denoising process timestep and the U-Net layers, which is a new and improved approach to achieve better reconstruction and editability of the target concept."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work by Voynov et al. demonstrated the use of an extended input space, P+, to improve inversion approaches in text-to-image generation. The citing paper adopts this method in their new text-conditioning space P * to achieve improved reconstruction and editability of the target concept without tuning the generative model."}, {"Category": "Methodological Basis", "Citation": "[28,44]", "Explanation": "The cited works provide the large-scale autoregressive models that the citing paper builds upon in their research on text-guided synthesis."}, {"Category": "Data Source", "Citation": "[8,12,21]", "Explanation": "The cited works are the diffusion models that the citing paper utilizes in their research on text-guided synthesis."}, {"Category": "Extension or Continuation", "Citation": "[4,20,27,31,33]", "Explanation": "The cited works on visual content creation guided by free-form text prompt are extended in the citing paper to further explore the usage of user-specified concepts."}, {"Category": "Inversion", "Citation": "[43,45]", "Explanation": "The cited works on image inversion are discussed in the context of diffusion models in the citing paper."}, {"Category": "Personalization", "Citation": "[7,13,16,38]", "Explanation": "The cited works on text prompt editing using initial noise latent are mentioned in the context of personalization in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[6,9]", "Explanation": "The cited works provide a foundational approach to personalization in text-to-image synthesis by viewing it as an inversion problem where text embeddings are optimized to describe the target concept."}, {"Category": "Methodological Basis", "Citation": "[14,32,36]", "Explanation": "The cited works are methods that have resorted to finetuning the diffusion model directly, which is a key distinction in the approach to personalization in text-to-image synthesis."}, {"Category": "Methodological Basis", "Citation": "[10,34,42]", "Explanation": "The cited works have explored encoder-based approaches for mapping a given concept to its textual representation, which is a new line of work in the field of personalization in text-to-image models."}, {"Category": "Data Source", "Citation": "[13,39]", "Explanation": "The cited works have explored applications of personalization in text-to-image models such as image editing and 
personalized 3D generation, which are downstream applications that have been explored in the field."}, {"Category": "Data Source", "Citation": "[17,18,26,29]", "Explanation": "The cited works have also explored applications of personalization in text-to-image models in the context of personalized 3D generation, which is another downstream application that has been explored in the field."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work by Voynov et al. [41] introduces the P+ latent space, which the citing paper adopts in their research to improve the reconstruction and editability of the U-Net denoising network."}, {"Category": "Extension or Continuation", "Citation": "[16,23]", "Explanation": "The cited works by Gal et al. [10] and others have demonstrated the use of time-dependent inputs in diffusion models, which the citing paper extends by considering the different attention layers and the time-dependent nature of the denoising process."}, {"Category": "Data Source", "Citation": "[10]", "Explanation": "The cited work by Gal et al. [10] uses a time-step-conditioned encoder, which the citing paper acknowledges as a data source in their research on time-dependent representation."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work by Rippel et al. is the basis for the proposed technique in the citing paper to encourage a neural network to learn an ordered representation by introducing a special form of dropout on hidden units, which is an exact equivalence to PCA for the linear case."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work, Stable Diffusion, provides the autoencoder and diffusion model that the citing paper uses in their research on image generation and inversion techniques."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work provides a pretrained CLIP text encoder that the citing paper adopts in their research to obtain a conditioning vector for their text embeddings."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The cited work by Gal et al. introduces a new token S * and a corresponding embedding vector v * to represent a concept. The citing paper builds upon this work by optimizing the embedding vector to minimize a given objective and obtain a conditioning vector for their text embeddings."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work introduces the Nested Dropout technique that the citing paper adopts to impose an importance-based ordering over the learned representation in the neural mapper."}, {"Category": "Methodological Basis", "Citation": "[25,35]", "Explanation": "The cited work introduces the concept of Random Fourier Features, which the citing paper adopts to encode input vectors with a 2048-dimensional vector and modulate them with random frequencies to form an encoding matrix. 
This method is used to bias the encoding towards nearby layers, which is crucial for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Rippel et al., 2021)", "Explanation": "The cited work introduces the Nested Dropout technique, which the citing paper adopts to control the dimensionality of the last hidden layer in the neural mapper for better balance between reconstruction quality and editability of the inverted concept."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work by Rippel et al. provides a method for dynamically changing the dimensionality of the last linear layer in a model, which the citing paper adopts in their approach to zeroing out a subset of h to encourage robustness to different dimensionality sizes and encode more information in the first set of output vectors."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work by Tewel et al. provides the key-locking mechanism that the citing paper adopts in their modified neural mapper with textual bypass, which is used to set the inputs of the cross-attention layer."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The cited work (Textual Inversion) is used as a reference for the evaluation of NeTI, as the author extends the study by considering additional concepts and prompts in the evaluation setup."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work (Extended Textual Inversion) provides a method for generating text prompts for image generation, which the citing paper adopts in the evaluation of NeTI."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The cited work (CustomDiffusion) is used to compare the results of NeTI, as the author extends the study by considering additional concepts and prompts in the evaluation setup."}, {"Category": "Data Source", "Citation": "[32]", "Explanation": "The cited work (DreamBooth) is used as a data source for the evaluation of NeTI, as the author uses the concepts and prompts from this work to generate images in the evaluation setup."}, {"Category": "Supporting Evidence", "Citation": "[5,23]", "Explanation": "The cited works provide evidence that the denoising process in the citing paper follows a specific order of learning global aspects first and then local aspects, which aligns with the behavior observed in the cited works."}, {"Category": "Methodological Basis", "Citation": "[10,42]", "Explanation": "The cited works provide a potential avenue for faster encoder-based approaches that could be explored in future research to improve the efficiency of the personalization methods presented in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work introduces the concept of LayerNorm and LeakyReLU activation functions, which the citing paper adopts in their network architecture to map the encoded input to a 128-dimensional vector."}, {"Category": "Data Source", "Citation": "[30]", "Explanation": "The cited work provides the Nested Dropout technique with a probability of 0.5, which the citing paper uses in their training process to improve the quality of the encoded input."}, {"Category": "Extension or Continuation", "Citation": "[128)", "Explanation": "The cited work introduces the concept of sampling the truncation index as t \u223c U [0, 128), which the citing paper extends by applying it in their training and inference 
scheme to improve the efficiency of the encoding process."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work on Textual Inversion provides a method for training a model for 5,000 optimization steps using a batch size of 8, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work on XTI provides a method for training a model with a batch size of 8 for 500 optimization steps and a learning rate of 5e-3, which the citing paper follows in their research."}, {"Category": "Extension or Continuation", "Citation": "[32]", "Explanation": "The cited work on Dreambooth is used to fine-tune a denoiser U-Net with no prior preservation loss, which the citing paper extends by performing 500 fine-tuning steps with a learning rate of 5e-6 and batch size of 4."}, {"Category": "Data Source", "Citation": "[14]", "Explanation": "The cited work on CustomDiffusion provides six released models that the citing paper uses in their evaluation, serving as a data source for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work provides the concepts and training methodology used in the citing paper to develop the NeTI model for space-time conditioning."}, {"Category": "Data Source", "Citation": "[32]", "Explanation": "The cited work, DreamBooth, is used as a reference to compare the disk space requirements of the citing paper in terms of the number of parameters and the amount of disk space required to store each concept."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work, XTI, is used as a basis for comparison in the training convergence evaluation conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work provides a method for generating high-quality images using a custom diffusion model, which the citing paper uses to compare the performance of their own approach."}, {"Category": "Supporting Evidence", "Citation": "[32]", "Explanation": "The cited work, Dream-Booth, is used as a benchmark to compare the performance of the citing paper in terms of image similarity and text similarity when training for 1000 steps. The results show that the models in the citing paper are able to reach performance comparable to that of Dream-Booth without requiring tuning the generative model."}, {"Category": "Extension or Continuation", "Citation": "[41]", "Explanation": "The cited work by Voynov et al. demonstrates the style mixing capability in a P+ latent space, which the citing paper extends by investigating the same ability in a P * latent space and neural mapper."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work provides the pretrained text encoder used in CLIP, which the citing paper adopts in their research to obtain the token embeddings and perform the rescaling technique described in the main paper."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work provides the text encoder used in the Stable Diffusion v1.4 text-to-image model, which the citing paper adopts in their research."}]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b39", "b48", "b40", "b43", "b28", "b46", "b53", "b23", "b55", "b62", "b60", "b60" ], "table_ref": [ "tab_2" ], "text": "Feature matching is the computer vision task of from two images estimating pixel pairs that correspond to the same 3D point. It is crucial for downstream tasks such as 3D reconstruction [43] and visual localization [40]. Dense feature matching methods [17,36,49,52] aim to find all matching pixel-pairs between the images. These dense methods employ a coarse-to-fine approach, whereby matches are first predicted at a coarse level and successively refined at finer resolutions. Previous methods commonly learn coarse features using 3D supervision [17,41,44,52]. While this allows for specialized coarse features, it comes with downsides. In particular, since collecting real-world 3D datasets is expensive, the amount of available data is limited, which means models risk overfitting to the training set. This in turn limits the models robustness to scenes that differ significantly from what has been seen during training. A well-known approach to limit overfitting is to freeze the backbone used [29,47,54]. However, using frozen backbones pretrained on ImageNet classification, the out-of-the-box performance is insufficient for feature matching (see experiments in Table 1). A recent promising direction for frozen pretrained features is large-scale self-supervised pretraining using Masked image Modeling (MIM) [24,37,56,62]. The methods, including DI- . Illustration of our robust approach RoMa. Our contributions are shown with green highlighting and a checkmark, while previous approaches are indicated with gray highlights and a cross. Our first contribution is using a frozen foundation model for coarse features, compared to fine-tuning or training from scratch. DINOv2 lacks fine features, which are needed for accurate correspondences. To tackle this, we combine the DINOv2 coarse features with specialized fine features from a ConvNet, see Section 3.2. Second, we propose an improved coarse match decoder D θ , which typically is a ConvNet, with a coordinate agnostic Transformer decoder that predicts anchor probabilities instead of directly regressing coordinates, see Section 3.3. Third, we revisit the loss functions used for dense feature matching. We argue from a theoretical model that the global matching stage needs to model multimodal distributions, and hence use a regressionby-classification loss instead of an L2 loss. For the refinement, we in contrast use a robust regression loss, as the matching distribution is locally unimodal. These losses are further discussed in Section 3.4. The impact of our contributions is ablated in our extensive ablation study in Table 2.\nNOv2 [60], retain local information better than classification pretraining [60] and have been shown to generate features that generalize well to dense vision tasks. However, the application of DINOv2 in dense feature matching is still complicated due to the lack of fine features, which are needed for refinement.\nWe overcome this issue by leveraging a frozen DINOv2 encoder for coarse features, while using a proposed specialized ConvNet encoder for the fine features. This has the benefit of incorporating the excellent general features from DINOv2, while simultaneuously having highly precise fine features. We find that features specialized for only coarse matching or refinement significantly outperform features trained for both tasks jointly. 
These contributions are presented in more detail in Section 3.2. We additionally propose a Transformer match decoder that, while also increasing performance for the baseline, particularly improves performance when used to predict anchor probabilities instead of regressing coordinates in conjunction with the DINOv2 coarse encoder. This contribution is elaborated further in Section 3.3.
Lastly, we investigate how to best train dense feature matchers. Recent SotA dense methods such as DKM [17] use a non-robust regression loss for the coarse matching as well as for the refinement. We argue that this is not optimal, as the matching distribution at the coarse stage is often multimodal, while the conditional refinement is more likely to be unimodal, hence requiring different approaches to training. We motivate this from a theoretical framework in Section 3.4. Our framework motivates a division of the coarse and fine losses into separate paradigms: regression-by-classification for the global matches using coarse features, and robust regression for the refinement using fine features.
Our full approach, which we call RoMa, is robust to extremely challenging real-world cases, as we demonstrate in Figure 1. We illustrate our approach schematically in Figure 2. In summary, our contributions are as follows:
(a) We integrate frozen features from the foundation model DINOv2 [37] for dense feature matching. We combine the coarse features from DINOv2 with specialized fine features from a ConvNet to produce a precisely localizable yet robust feature pyramid. See Section 3. " }, { "figure_ref": [], "heading": "Self-Supervised Vision Models", "publication_ref": [ "b14", "b62" ], "table_ref": [], "text": "Inspired by language Transformers [15], foundation models [8] pre-trained on large quantities of data have recently demonstrated significant potential in learning all-purpose features for various visual models via self-supervised learning. Caron et al. [11] observe that self-supervised ViT features capture more distinct information than supervised models do, which is demonstrated through label-free self-distillation. iBOT [62] explores MIM within a self-distillation framework to develop a semantically rich visual tokenizer, yielding robust features effective in various dense downstream tasks. DINOv2 [37] reveals that self-supervised methods can produce all-purpose visual features that work across various image distributions and tasks after being trained on sufficient datasets without finetuning." }, { "figure_ref": [], "heading": "Robust Loss Formulations", "publication_ref": [ "b17", "b31" ], "table_ref": [], "text": "Robust Regression Losses: Robust loss functions provide a continuous transition between an inlier distribution (typically highly concentrated) and an outlier distribution (wide and flat). Robust losses have, e.g., been used as regularizers for optical flow [5, 6], robust smoothing [18], and as loss functions [3,32]. 
Regression" }, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we detail our method. We begin with preliminaries and notation for dense feature matching in Section 3.1. We then discuss our incorporation of DINOv2 [37] as a coarse encoder, and specialized fine features, in Section 3.2. We present our proposed Transformer match decoder in Section 3.3. Finally, we present our proposed loss formulation in Section 3.4. A summary and visualization of our full approach is provided in Figure 2. 
Further details on the exact architecture are given in the supplementary." }, { "figure_ref": [], "heading": "Preliminaries on Dense Feature Matching", "publication_ref": [], "table_ref": [], "text": "Dense feature matching is, given two images I A , I B , to estimate a dense warp W A→B (mapping coordinates x A from I A to x B in I B ), and a matchability score p(x A )1 for each pixel. From a probabilistic perspective,\np(W A→B ) = p(x B |x A ) is the conditional matching distri- bution. Multiplying p(x B |x A )p(x A ) yields the joint distri- bution. We denote the model distribution as p θ (x A , x B ) = p θ (x B |x A )p θ (x A\n). When working with warps, i.e., where p θ (x B |x A ) has been converted to a deterministic mapping, we denote the model warp as Ŵ A→B . Viewing the predictive distribution as a warp is natural in high resolution, as it can then be seen as a deterministic mapping. However, due to multimodality, it is more natural to view it in the probabilistic sense at coarse scales.\nThe end goal is to obtain a good estimate over correspondences of coordinates x A in image I A and coordinates x B in image I B . For dense feature matchers, estimation of these correspondences is typically done by a one-shot coarse global matching stage (using coarse features) followed by subsequent refinement of the estimated warp and confidence (using fine features).\nWe use the recent SotA dense feature matching model DKM [17] as our baseline. For consistency, we adapt the terminology used there. We denote the coarse features used to estimate the initial warp, and the fine features used to refine the warp by\n{φ A coarse , φ A fine } = F θ (I A ), {φ B coarse , φ B fine } = F θ (I B ),(1)\nwhere F θ is a neural network feature encoder. We will leverage DINOv2 for extraction of φ A coarse , φ B coarse , however, DINOv2 features are not precisely localizable, which we tackle by combining the coarse features with precise local features from a specialized ConvNet backbone. See Section 3.2 for details.\nThe coarse features are matched with global matcher G θ consisting of a match encoder E θ and match decoder D θ ,\nŴA→B coarse , p A θ,coarse = G θ (φ A coarse , φ B coarse ), G θ (φ A coarse , φ B coarse ) = D θ E θ (φ A coarse , φ B coarse ) .(2)\nWe use a Gaussian Process [38] as the match encoder E θ as in previous work [17]. However, while our baseline uses a ConvNet to decode the matches, we propose a Transformer match decoder D θ that predicts anchor probabilities instead of directly regressing the warp. This match decoder is particularly beneficial in our final approach (see Table 2). We describe our proposed match decoder in Section 3.3. The refinement of the coarse warp ŴA→B coarse is done by the refiners R θ ,\nŴA→B , p A θ = R θ φ A fine , φ B fine , ŴA→B coarse , p A θ,coarse .(3)\nAs in previous work, the refiner is composed of a sequence of ConvNets (using strides {1, 2, 4, 8}) and can be decomposed recursively as\nŴ A→B i , p A i,θ = R θ,i (φ A i , φ B i , Ŵ A→B i+1 , p A θ,i+1 ),(4)\nwhere the stride is 2 i . The refiners predict a residual offset for the estimated warp, and a logit offset for the certainty.\nAs in the baseline they are conditioned on the outputs of the previous refiner by using the previously estimated warp to a) stack feature maps from the images, and b) construct a local correlation volume around the previous target. The process is repeated until reaching full resolution. We use the same architecture as in the baseline. 
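As a rough illustration of the recursive refinement in Eq. (3)-(4), the sketch below runs a sequence of refiners over decreasing strides, each predicting a residual warp offset and a certainty-logit offset from stacked feature maps. The module internals (local correlation volume, channel widths, the refiner networks themselves) are placeholders, not the actual DKM/RoMa architecture.

```python
# Schematic coarse-to-fine refinement: each refiner R_i sees the fine features at its
# stride and the warp/certainty from the previous (coarser) stage, and predicts residuals.
import torch
import torch.nn.functional as F

def refine(warp, certainty, feats_A, feats_B, refiners, strides=(8, 4, 2, 1)):
    # warp: (B, H, W, 2) in normalized image-B coordinates; certainty: (B, 1, H, W) logits.
    outputs = []
    for stride, refiner in zip(strides, refiners):
        # Detach so gradients do not flow back into coarser stages; each stage is
        # supervised with its own per-scale loss.
        warp, certainty = warp.detach(), certainty.detach()
        fA, fB = feats_A[stride], feats_B[stride]
        H, W = fA.shape[-2:]
        # Upsample previous estimates bilinearly to the resolution of the finer stride.
        warp = F.interpolate(warp.permute(0, 3, 1, 2), size=(H, W), mode="bilinear",
                             align_corners=False).permute(0, 2, 3, 1)
        certainty = F.interpolate(certainty, size=(H, W), mode="bilinear", align_corners=False)
        # Warp image-B features to image A using the current estimate (grid_sample expects
        # normalized coordinates in [-1, 1]); a local correlation volume around the warped
        # location would also be concatenated here.
        fB_warped = F.grid_sample(fB, warp, mode="bilinear", align_corners=False)
        delta = refiner(torch.cat([fA, fB_warped, certainty], dim=1))  # assumed (B, 3, H, W)
        warp = warp + delta[:, :2].permute(0, 2, 3, 1)
        certainty = certainty + delta[:, 2:3]
        outputs.append((warp, certainty))
    return outputs
```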
Following DKM, we detach the gradients between the refiners and upsample the warp bilinearly to match the resolution of the finer stride. Probabilistic Notation: When later defining our loss functions, it will be convenient to refer to the outputs of the different modules in a probabilistic notation. We therefore introduce this notation here first for clarity.
We denote the probability distribution modeled by the global matcher as
p_\theta(x^A_{\mathrm{coarse}}, x^B_{\mathrm{coarse}}) = G_\theta(\varphi^A_{\mathrm{coarse}}, \varphi^B_{\mathrm{coarse}}). \quad (5)
Here we have dropped the explicit dependency on the features and the previous estimate of the marginal for notational brevity. Note that the output of the global matcher will sometimes be considered as a discretized distribution using anchors, or as a decoded warp. We do not use separate notation for these two different cases to keep the notation uncluttered. We denote the probability distribution modeled by a refiner at scale s = c \cdot 2^i as
p_\theta(x^A_i, x^B_i \mid \hat{W}^{A\to B}_{i+1}) = R_{\theta,i}(\varphi^A_i, \varphi^B_i, \hat{W}^{A\to B}_{i+1}, p^A_{\theta,i+1}). \quad (6)
The base case Ŵ A→B coarse is computed by decoding p θ (x B coarse |x A coarse ). As for the global matcher, we drop the explicit dependency on the features. " }, { "figure_ref": [], "heading": "Robust and Localizable Features", "publication_ref": [ "b15", "b41", "b22" ], "table_ref": [ "tab_2", "tab_2" ], "text": "We first investigate the robustness of DINOv2 to viewpoint and illumination changes compared to VGG19 and ResNet50 on the MegaDepth [28] dataset. To decouple the backbone from the matching model, we train a single linear layer on top of the frozen model, followed by a kernel nearest neighbour matcher, for each method. We measure the performance both in average end-point error (EPE) on a standardized resolution of 448×448, and by what we call the Robustness %, which we define as the percentage of matches with an error lower than 32 pixels. We refer to this as robustness since, while these matches are not necessarily accurate, this is typically sufficient for the refinement stage to produce a correct adjustment.
We present results in Table 1. We find that DINOv2 features are significantly more robust to changes in viewpoint than both ResNet and VGG19. Interestingly, we find that the VGG19 features are worse than the ResNet features for coarse matching, despite VGG features being widely used as local features [16,42,52]. Further details of this experiment are provided in the supplementary material.
In DKM [17], the feature encoder F θ is assumed to consist of a single network producing a feature pyramid of coarse and fine features used for global matching and refinement, respectively. This is problematic when using DINOv2 features, as only features of stride 14 exist. We therefore decouple F θ into {F coarse,θ , F fine,θ } and set F coarse,θ = DINOv2. The coarse features are extracted as
\varphi^A_{\mathrm{coarse}} = F_{\mathrm{coarse},\theta}(I^A), \quad \varphi^B_{\mathrm{coarse}} = F_{\mathrm{coarse},\theta}(I^B). \quad (7)
We keep the DINOv2 encoder frozen throughout training. This has two benefits. The main benefit is that keeping the representations fixed reduces overfitting to the training set, enabling RoMa to be more robust. It is additionally significantly cheaper computationally and requires less memory. However, DINOv2 cannot provide fine features. Hence a choice of F fine,θ is needed. While the same encoder for fine features as in DKM could be chosen, i.e., a ResNet50 (RN50) [23], it turns out that this is not optimal.
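A minimal sketch of the decoupled encoder in Eq. (7), with a frozen coarse backbone and a trainable fine backbone, is given below. The torch.hub entry point for DINOv2, the token-to-feature-map reshaping, and the particular VGG19 slices per stride are assumptions for illustration, not the exact layers used by the method.

```python
# Decoupled feature extraction: frozen DINOv2 (stride-14 coarse features) plus a
# trainable VGG19 producing fine features at strides 1, 2, 4, and 8.
import torch
import torchvision

class DecoupledEncoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Frozen coarse encoder (no gradients).
        self.coarse = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14")
        for p in self.coarse.parameters():
            p.requires_grad = False
        # Trainable fine encoder: VGG19 feature stack, sliced so each block halves resolution.
        weights = torchvision.models.VGG19_Weights.IMAGENET1K_V1
        vgg = torchvision.models.vgg19(weights=weights).features
        self.fine_slices = torch.nn.ModuleList([vgg[:4], vgg[4:9], vgg[9:18], vgg[18:27]])

    def forward(self, img: torch.Tensor):
        # img is assumed to have side lengths divisible by 14 (e.g., 560 x 560).
        with torch.no_grad():
            tokens = self.coarse.forward_features(img)["x_norm_patchtokens"]
        B, N, C = tokens.shape
        h = w = int(N ** 0.5)
        coarse = tokens.permute(0, 2, 1).reshape(B, C, h, w)   # (B, C, H/14, W/14)
        fine, x = {}, img
        for stride, block in zip((1, 2, 4, 8), self.fine_slices):
            x = block(x)
            fine[stride] = x                                    # features at this stride
        return coarse, fine
```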
We begin by investigating what happens by simply decoupling the coarse and fine feature encoder, i.e., not sharing weights between the coarse and fine encoder (even when using the same network). We find that, as supported by Setup II in Table 2, this significantly increases performance. This is due to the feature extractor being able to specialize in the respective tasks, and we hence call this specialization.
This raises a question: VGG19 features, while less suited for coarse matching (see Table 1), could be better suited as fine, localized features. We investigate this by setting F fine,θ = VGG19 in Setup III in Table 2. Interestingly, even though VGG19 coarse features are significantly worse than RN50, we find that they significantly outperform the RN50 features when leveraged as fine features. Our finding indicates that there is an inherent tension between fine localizability and coarse robustness. We thus use VGG19 fine features in our full approach." }, { "figure_ref": [], "heading": "Transformer Match Decoder D θ", "publication_ref": [], "table_ref": [], "text": "Regression-by-Classification: We propose to use the regression-by-classification formulation for the match decoder, whereby we discretize the output space. We choose the following formulation,
p_{\mathrm{coarse},\theta}(x^B \mid x^A) = \sum_{k=1}^{K} \pi_k(x^A)\, \mathcal{B}_{m_k}, \quad (8)
where K is the quantization level, π k are the probabilities for each component, B is some 2D base distribution, and {m_k}_{k=1}^K are anchor coordinates. In practice, we used K = 64 × 64 classification anchors positioned uniformly as a tight cover of the image grid, and B = U, i.e., a uniform distribution. We denote the probability of an anchor as π k and its associated coordinate on the grid as m k .
For refinement, the conditional is converted to a deterministic warp per pixel. We decode the warp by argmax over the classification anchors, k*(x) = argmax_k π_k(x), followed by a local adjustment which can be seen as a local softargmax. Mathematically,
\mathrm{ToWarp}\big(p_{\mathrm{coarse},\theta}(x^B_{\mathrm{coarse}} \mid x^A_{\mathrm{coarse}})\big) = \frac{\sum_{i \in N_4(k^*(x^A_{\mathrm{coarse}}))} \pi_i\, m_i}{\sum_{i \in N_4(k^*(x^A_{\mathrm{coarse}}))} \pi_i} = \hat{W}^{A\to B}_{\mathrm{coarse}}, \quad (9)
where N_4(k*) denotes the set of k* and the four closest anchors on the left, right, top, and bottom. We conduct an ablation on the Transformer match decoder in Table 2, and find that it particularly improves results in our full approach, using the loss formulation we propose in Section 3.4. Decoder Architecture: In early experiments, we found that ConvNet coarse match decoders overfit to the training resolution. Additionally, they tend to be over-reliant on locality. While locality is a powerful cue for refinement, it leads to oversmoothing for the coarse warp. To address this, we propose a Transformer decoder without position encodings. By restricting the model to only propagate information by feature similarity, we found that the model became significantly more robust. The proposed Transformer match decoder consists of 5 ViT blocks, with 8 heads, hidden size D = 1024, and MLP size 4096. The input is the concatenation of projected DINOv2 [37] features of dimension 512, and the 512-dimensional output of the GP module, which corresponds to the match encoder E θ proposed in DKM [17]. The output is a tensor of size B × H × W × (K + 1), where K is the number of classification anchors (parameterizing the conditional distribution p(x B |x A )), and the extra 1 is the matchability score p A (x A )."
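The decoding in Eq. (9) can be sketched as follows: take the argmax anchor per pixel and refine it with a probability-weighted average over that anchor and its four neighbours on the anchor grid. Tensor layouts and the normalized coordinate convention are illustrative assumptions.

```python
# Regression-by-classification warp decoding: argmax over K = 64 x 64 anchors followed
# by a local soft-argmax over the 4-neighbourhood (border indices are simply clamped here).
import torch

def to_warp(probs: torch.Tensor, grid_size: int = 64) -> torch.Tensor:
    # probs: (B, H, W, K) anchor probabilities pi_k(x^A), with K = grid_size ** 2.
    B, H, W, K = probs.shape
    # Anchor coordinates m_k: a uniform tight cover of the image grid in [-1, 1].
    coords = torch.linspace(-1 + 1 / grid_size, 1 - 1 / grid_size, grid_size)
    m = torch.stack(torch.meshgrid(coords, coords, indexing="xy"), dim=-1).reshape(K, 2)
    k_star = probs.argmax(dim=-1)                                  # (B, H, W)
    # Indices of k* and its left/right/top/bottom neighbours on the anchor grid.
    offsets = torch.tensor([0, -1, 1, -grid_size, grid_size])
    neigh = (k_star.unsqueeze(-1) + offsets).clamp(0, K - 1)       # (B, H, W, 5)
    pi = probs.gather(-1, neigh)                                   # (B, H, W, 5)
    m_neigh = m[neigh]                                             # (B, H, W, 5, 2)
    # Local soft-argmax: probability-weighted mean of the neighbouring anchor coordinates.
    warp = (pi.unsqueeze(-1) * m_neigh).sum(-2) / pi.sum(-1, keepdim=True)
    return warp                                                    # (B, H, W, 2)

probs = torch.softmax(torch.randn(1, 32, 32, 64 * 64), dim=-1)
print(to_warp(probs).shape)  # torch.Size([1, 32, 32, 2])
```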
}, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Robust Loss Formulation", "publication_ref": [ "b12" ], "table_ref": [], "text": "Intuition: The conditional match distribution at coarse scales is more likely to exhibit multimodality than during refinement, which is conditional on the previous warp. This means that the coarse matcher needs to model multimodal distributions, which motivates our regression-byclassification approach. In contrast, the refinement of the warp needs only to represent unimodal distributions, which motivates our robust regression loss. Theoretical Model: We model the matchability at scale s as q(x A , x B ; s) = N 0,\ns 2 I * p(x A , x B ; 0). (10\n)\nHere p(x A , x B ; 0) corresponds to the exact mapping at infinite resolution. This can be interpreted as a diffusion in the localization of the matches over scales. When multiple objects in a scene are projected into images, so-called motion boundaries arise. These are discontinuities in the matches which we illustrate in Figure 3. The diffusion near these motion boundaries causes the conditional distribution to become multimodal, explaining the need for multimodality in the coarse global matching. Given an initial choice of (x A , x B ), as in the refinement, the conditional distribution is unimodal locally. However, if this initial choice is far outside the support of the distribution, using a non-robust loss function is problematic. It is therefore motivated to use a robust regression loss for this stage. Loss formulation: Motivated by intuition and the theoretical model we now propose our loss formulation from a probabilistic perspective, aiming to minimize the Kullback-Leibler divergence between the estimated match distribution at each scale, and the theoretical model distribution at that scale. We begin by formulating the coarse loss. With non-overlapping bins as defined in Section 3.3 the Kullback-Leibler divergence (where terms that are constant w.r.t. θ are ignored) is (13) for k † (x) = argmin k ∥m k -x∥ the index of the closest anchor to x. Following DKM [17] we add a hyperparameter λ that controls the weighting of the marginal compared to that of the conditional as\nD KL (q(x B , x A ; s)||p coarse,θ (x B , x A )) = (11) E x A ,x B ∼q -log p coarse,θ (x B |x A )p coarse,θ (x A ) = (12) - x A ,x B log π k † (x A ) + log p coarse,θ (x A )dq,\n- x A ,x B log π k † (x A ) + λ log p coarse,θ (x A )dq. (14\n)\nIn practice, we approximate q with a discrete set of known correspondences {x A , x B }. Furthermore, to be consistent with previous works [17,52] we use a binary crossentropy loss on p coarse,θ (x A ). We call this loss L coarse . We next discuss the fine loss L fine .\nWe model the output of the refinement at scale i as a generalized Charbonnier [3] (with α = 0.5) distribution, for which the refiners estimate the mean µ. The generalized Charbonnier distribution behaves locally like a Normal Table 2. Ablation study. We systematically investigate the impact of our contributions, see Section 4.1 for detailed analysis. Measured in 100-percentage correct keypoints (PCK) (lower is better). distribution, but has a flatter tail. When used as a loss, the gradients behave locally like L2, but decay towards 0, see Figure 4. Its logarithm, (ignoring terms that do not contribute to the gradient, and up-to-scale) reads\nlog p θ (x B i |x A i , Ŵ A→B i+1 ) =(15)\n-(||µ θ (x A i , Ŵ A→B i+1 ) -x B i || 2 + s) 1/4 ,(16)\nwhere\nµ θ (x A i , Ŵ A→B i+1 )\nis the estimated mean of the distribution, and s = 2 i c. 
In practice, we choose $c = 0.03$. The Kullback-Leibler divergence for each fine scale $i \in \{0, 1, 2, 3\}$ (where terms that are constant with respect to $\theta$ are ignored) reads\n$D_{\mathrm{KL}}(q(x^{B}_{i}, x^{A}_{i}; s = 2^{i}c) \,\|\, p_{i,\theta}(x^{B}_{i}, x^{A}_{i} \mid \hat{W}^{A\to B}_{i+1})) =$ (17)\n$\mathbb{E}_{x^{A}_{i}, x^{B}_{i} \sim q}\left[-\left(\|\mu_{\theta}(x^{A}_{i}, \hat{W}^{A\to B}_{i+1}) - x^{B}_{i}\|^{2} + s\right)^{1/4}\right] + \mathbb{E}_{x^{A}_{i}, x^{B}_{i} \sim q}\left[-\log p_{i,\theta}(x^{A}_{i} \mid \hat{W}^{A\to B}_{i+1})\right]$. (18)\nIn practice, we approximate $q$ with a discrete set of known correspondences $\{x^{A}, x^{B}\}$, and use a binary cross-entropy loss on $p_{i,\theta}(x^{A}_{i} \mid \hat{W}^{A\to B}_{i+1})$. We sum over all fine scales to get the loss $L_{\mathrm{fine}}$.\nOur combined loss is\n$L = L_{\mathrm{coarse}} + L_{\mathrm{fine}}$. (19)\nNote that we do not need to tune any scaling between these losses, as the coarse matching and fine stages are decoupled: gradients are cut in the matching, and encoders are not shared." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Here we investigate the impact of our contributions. We conduct all our ablations on a validation set that we create. The validation set is made from random pairs from the MegaDepth scenes [0015, 0022] with overlap > 0. To measure performance, we compute the percentage of estimated matches that have an end-point error (EPE) under a certain pixel threshold over all ground-truth correspondences, which we call percent correct keypoints (PCK), using the notation of previous work [17,52]. Setup I consists of the same components as in DKM [17], retrained by us. In Setup II we do not share weights between the fine and coarse features, which improves performance due to specialization of the features. In Setup III we replace the RN50 fine features with a VGG19, which further improves performance. This is intriguing, as VGG19 features perform worse when used as coarse features, as we show in Table 1. We then add the proposed Transformer match decoder in Setup IV, however still using the baseline regression approach. Further, we incorporate the DINOv2 coarse features in Setup V, which gives a significant improvement, owing to their significant robustness. Next, in Setup VI we change the loss function and output representation of the Transformer match decoder $D_{\theta}$ to regression-by-classification, and in Setup VII we use the robust regression loss. Both these changes further significantly improve performance. This setup constitutes RoMa. When we change back to the original ConvNet match decoder in Setup VIII from this final setup, we find that the performance significantly drops, showing the importance of the proposed Transformer match decoder. As in DKM, we train both the coarse matching and refinement networks jointly. Note that since we detach gradients between the coarse matching and refinement, the network could in principle also be trained in two stages. For the results used in the ablation, we used a resolution of 448 × 448, and for the final method we trained on a resolution of 560 × 560." }, { "figure_ref": [], "heading": "Two-View Geometry", "publication_ref": [ "b1" ], "table_ref": [ "tab_3", "tab_4" ], "text": "We evaluate on a diverse set of two-view geometry benchmarks. We follow DKM [17] and sample correspondences using a balanced sampling approach, producing 10,000 matches, which are then used for estimation. We present results in Table 3.
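For concreteness, the estimation step applied to the sampled correspondences can be sketched as follows, using MAGSAC++ as implemented in OpenCV (as stated for the WxBS protocol below). The threshold, confidence, and iteration count here are illustrative defaults, not the exact per-benchmark settings.

```python
import cv2
import numpy as np


def estimate_fundamental(kpts_a: np.ndarray, kpts_b: np.ndarray, thresh: float = 1.0):
    """kpts_a, kpts_b: N x 2 pixel coordinates of sampled correspondences."""
    # Overload: (points1, points2, method, ransacReprojThreshold, confidence, maxIters)
    F, mask = cv2.findFundamentalMat(
        kpts_a, kpts_b, cv2.USAC_MAGSAC, thresh, 0.9999, 10_000
    )
    if F is None:
        return None, np.zeros(len(kpts_a), dtype=bool)
    return F, mask.ravel().astype(bool)
```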
RoMa attains significant improvements over previous approaches, with a relative error reduction of 26% compared to the previous best method.\nWxBS Benchmark: We evaluate RoMa on the extremely difficult WxBS benchmark [35], version 1.1 with updated ground truth and evaluation protocol. The metric is mean average precision on ground-truth correspondences consistent with the estimated fundamental matrix at a 10-pixel threshold. All methods use MAGSAC++ [2] as implemented in OpenCV. Results are presented in Table 4. Here we achieve an outstanding improvement of 36% compared to the state-of-the-art. We attribute these major gains to the superior robustness of RoMa compared to previous approaches. We qualitatively present examples of this in the supplementary material." }, { "figure_ref": [], "heading": "MegaDepth-1500 Pose Estimation:", "publication_ref": [ "b43", "b43", "b12", "b40" ], "table_ref": [ "tab_5", "tab_6" ], "text": "We use the MegaDepth-1500 test set [44], which consists of 1500 pairs from scenes 0015 (St. Peter's Basilica) and 0022 (Brandenburger Tor). We follow the protocol in [12,44] and use a RANSAC threshold of 0.5. Results are presented in Table 5. ScanNet-1500 Pose Estimation: ScanNet [13] is a large-scale indoor dataset, composed of challenging sequences with low-texture regions and large changes in perspective. We follow the evaluation in SuperGlue [41]. Results are presented in Table 6. We achieve state-of-the-art results, achieving the first AUC@20° scores over 70." }, { "figure_ref": [], "heading": "MegaDepth-8-Scenes:", "publication_ref": [ "b44" ], "table_ref": [ "tab_7" ], "text": "We evaluate RoMa on the MegaDepth-8-Scenes benchmark [17,28]. We present results in Table 7. Here too we outperform previous approaches.\n[Table 8. SotA comparison on InLoc [45]. We report the percentage of query images localized within 0.25/0.5/1.0 m and 2/5/10° of the ground-truth pose (higher is better).]" }, { "figure_ref": [], "heading": "Visual Localization", "publication_ref": [ "b44", "b39" ], "table_ref": [], "text": "We evaluate RoMa on the InLoc [45] Visual Localization benchmark, using the HLoc [40] pipeline. We follow the approach in DKM [17] to sample correspondences. Results are presented in Table 8. We show large improvements compared to all previous approaches, setting a new state-of-the-art." }, { "figure_ref": [], "heading": "Runtime Comparison", "publication_ref": [], "table_ref": [], "text": "We compare the runtime of RoMa and the baseline DKM at a resolution of 560×560 with a batch size of 8 on an RTX 6000 GPU. We observe a modest 7% increase in runtime, from 186.3 to 198.8 ms per pair." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We have presented RoMa, a robust dense feature matcher. Our model leverages frozen pretrained coarse features from the foundation model DINOv2 together with specialized ConvNet fine features, creating a precisely localizable and robust feature pyramid. We further improved performance with our proposed tailored Transformer match decoder, which predicts anchor probabilities instead of regressing coordinates. Finally, we proposed an improved loss formulation through regression-by-classification with subsequent robust regression. Our comprehensive experiments show that RoMa achieves major gains across the board, setting a new state-of-the-art.
In particular, our biggest gains (36% increase on WxBS [35]) are achieved on the most difficult benchmarks, highlighting the robustness of our approach.\nCode is provided at github.com/Parskatt/RoMa. Limitations and Future Work: (a) Our approach relies on supervised correspondences, which limits the amount of usable data. We remedied this by using pretrained frozen foundation model features, which improves generalization. (b) We train on the task of dense feature matching, which is an indirect way of optimizing for the downstream tasks of two-view geometry, localization, or 3D reconstruction. Directly training on the downstream tasks could improve performance." }, { "figure_ref": [], "heading": "RoMa: Robust Dense Feature Matching", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "In this supplementary material, we provide further details and qualitative examples that could not fit into the main text of the paper." }, { "figure_ref": [], "heading": "A. Further Details on Frozen Feature Evaluation", "publication_ref": [], "table_ref": [], "text": "We use an exponential cosine kernel as in DKM [17] with an inverse temperature of 10. We train using the same training split as in our main experiments, using the same learning rates (note that we only train a single linear layer, as the backbone is frozen). We use the regression-by-classification loss that we proposed in Section 3.4. We present a qualitative example of the estimated warps from the frozen features in Figure 5." }, { "figure_ref": [], "heading": "B. Further Architectural Details", "publication_ref": [], "table_ref": [], "text": "Encoders: We extract fine features of stride {1, 2, 4, 8} by taking the outputs of the layer before each 2 × 2 maxpool. These have dimension {64, 128, 256, 512}, respectively. We project these with a linear layer followed by batchnorm to dimension {9, 64, 256, 512}.\nWe use the patch features from DINOv2 [37] and do not use the cls token. We use the ViT-L-14 model, with patch size 14 and dimension 1024. We linearly project these features (with batchnorm) to dimension 512. Global Matcher: We use a Gaussian Process [38] match encoder as in DKM [17]. We use an exponential cosine kernel [17], with inverse temperature 10. As in DKM, the GP predicts a posterior over embedded coordinates in the other image. We use an embedding space of dimension 512.\nFor details on $D_{\theta}$ we refer to Section 3.3. Refiners: Following Edstedt et al. [17] we use 5 refiners at strides {1, 2, 4, 8, 14}. They each consist of 8 convolutional blocks. The internal dimension is set to {24, 144, 569, 1137, 1377}. The inputs to the refiners are the stacked feature maps, the local correlation around the previous warp of size {0, 0, 5, 7, 15}, as well as a linear encoding of the previous warp.
The output is a $B \times H_{s} \times W_{s} \times (2 + 1)$ tensor, containing the warp and a logit offset to the certainty." }, { "figure_ref": [], "heading": "C. Qualitative Comparison on WxBS", "publication_ref": [], "table_ref": [], "text": "We qualitatively compare estimated matches from RoMa and DKM on the WxBS benchmark in Figure 6. DKM fails on multiple pairs from this dataset, while RoMa is more robust. In particular, RoMa is able to match even under changes in season (bottom right), extreme illumination (bottom left, top left), and extreme scale and viewpoint (top right)." }, { "figure_ref": [], "heading": "D. Further Details on Metrics", "publication_ref": [], "table_ref": [], "text": "Image Matching Challenge 2022: The mean average accuracy (mAA) metric is computed between the estimated fundamental matrix and the hidden ground truth. The error is measured in terms of rotation in degrees and translation in meters. Given one threshold over each, a pose is classified as accurate if it meets both thresholds. This is done over ten pairs of uniformly spaced thresholds. The mAA is then the average over the thresholds and over the images (balanced across the scenes). MegaDepth/ScanNet: The AUC metric measures the error of the estimated essential matrix compared to the ground truth. The error per pair is the maximum of the rotational and translational errors. As there is no metric scale available, the translational error is measured as an angle (via the cosine between the translation directions). The recall at a threshold τ is the percentage of pairs with an error lower than τ. The AUC@τ° is the integral of the recall as a function of the threshold, up to τ, divided by τ. In practice, this is approximated by the trapezoidal rule over all errors of the method over the dataset." }, { "figure_ref": [], "heading": "E. Further Details on Theoretical Model", "publication_ref": [ "b25" ], "table_ref": [], "text": "Here we discuss a simple connection to scale-space theory that did not fit in the main paper. Our theoretical model of matchability in Section 3.4 has a straightforward connection to scale-space theory [26,30,59]. The image scale-space is parameterized by a parameter $s$,\n$L(x, s) = \int g(x - y; s)\, I(y)\, dy$,\nwhere $g(\cdot\,; s)$ is a Gaussian kernel. Applying this kernel jointly on the matching distribution yields the diffusion process in the paper." }, { "figure_ref": [], "heading": "F. Further Details on Match Sampling", "publication_ref": [], "table_ref": [], "text": "Dense feature matching methods produce a dense warp and certainty. However, most robust relative pose estimators (used in the downstream two-view pose estimation evaluation) assume a sparse set of correspondences. While one could in principle use all correspondences from the warp, this is prohibitively expensive in practice. We instead follow the approach of DKM [17] and use a balanced sampling approach to produce a sparse set of matches. The balanced sampling approach uses a KDE estimate of the match distribution $p_{\theta}(x^{A}, x^{B})$ to rebalance the distribution of the samples, by reweighting the samples with the reciprocal of the KDE. This increases the number of matches in less certain regions, which Edstedt et al. [17] demonstrated improves performance." } ]
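A minimal sketch of this balanced sampling is shown below; the certainty threshold, Gaussian kernel, bandwidth, and reference-subset size are illustrative choices of ours, not the exact values used in DKM or RoMa.

```python
import torch


def balanced_sample(matches: torch.Tensor, certainty: torch.Tensor,
                    num: int = 10_000, bandwidth: float = 0.1):
    """matches: M x 4 rows (x^A, y^A, x^B, y^B) in normalized coords, certainty: M."""
    keep = certainty > 0.05                        # drop very uncertain candidates
    matches, certainty = matches[keep], certainty[keep]
    # Crude KDE of the match density, evaluated against a random reference subset.
    ref = matches[torch.randperm(len(matches))[:2000]]
    d2 = torch.cdist(matches, ref).pow(2)          # M x R; chunk this in practice
    density = torch.exp(-0.5 * d2 / bandwidth ** 2).mean(dim=1)
    # Reweight by the reciprocal of the density: under-sampled regions get more samples.
    weights = certainty / density.clamp_min(1e-8)
    idx = torch.multinomial(weights, min(num, len(matches)), replacement=False)
    return matches[idx]
```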
2023-12-11
[ { "authors": "Vassileios Balntas; Karel Lenc; Andrea Vedaldi; Krystian Mikolajczyk", "journal": "", "ref_id": "b0", "title": "HPatches: A benchmark and evaluation of handcrafted and learned local descriptors", "year": "2017" }, { "authors": "Daniel Barath; Jana Noskova; Maksym Ivashechkin; Jiri Matas", "journal": "", "ref_id": "b1", "title": "MAGSAC++, a fast, reliable and accurate robust estimator", "year": "2020" }, { "authors": "Jonathan T Barron", "journal": "", "ref_id": "b2", "title": "A general and adaptive robust loss function", "year": "2019" }, { "authors": "Herbert Bay; Tinne Tuytelaars; Luc Van Gool", "journal": "Springer", "ref_id": "b3", "title": "Surf: Speeded up robust features", "year": "2006" }, { "authors": "J Michael; Paul Black; Anandan", "journal": "Computer vision and image understanding", "ref_id": "b4", "title": "The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields", "year": "1996" }, { "authors": "J Michael; Anand Black; Rangarajan", "journal": "International journal of computer vision", "ref_id": "b5", "title": "On the unification of line processes, outlier rejection, and robust statistics with applications in early vision", "year": "1996" }, { "authors": "Georg Bökman; Fredrik Kahl", "journal": "", "ref_id": "b6", "title": "A case for using rotation invariant features in state of the art feature matchers", "year": "2022" }, { "authors": "Rishi Bommasani; Drew A Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Sydney Von Arx; Jeannette Michael S Bernstein; Antoine Bohg; Emma Bosselut; Brunskill", "journal": "", "ref_id": "b7", "title": "On the opportunities and risks of foundation models", "year": "2021" }, { "authors": "Ignas Budvytis; Marvin Teichmann; Tomas Vojir; Roberto Cipolla", "journal": "BMVA Press", "ref_id": "b8", "title": "Large scale joint semantic re-localisation and scene understanding via globally unique instance coordinate regression", "year": "2019" }, { "authors": "Chenjie Cao; Yanwei Fu", "journal": "", "ref_id": "b9", "title": "Improving transformer-based image matching by cascaded capturing spatially informative keypoints", "year": "2023" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b10", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Hongkai Chen; Zixin Luo; Lei Zhou; Yurun Tian; Mingmin Zhen; Tian Fang; David Mckinnon; Yanghai Tsin; Long Quan", "journal": "", "ref_id": "b11", "title": "ASpanFormer: Detector-free image matching with adaptive span transformer", "year": "2022" }, { "authors": "Angela Dai; X Angel; Manolis Chang; Maciej Savva; Thomas Halber; Matthias Funkhouser; Nießner", "journal": "", "ref_id": "b12", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Tomasz Daniel Detone; Andrew Malisiewicz; Rabinovich", "journal": "", "ref_id": "b13", "title": "Superpoint: Self-supervised interest point detection and description", "year": "2018" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b14", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Mihai Dusmanu; Ignacio Rocco; Tomas Pajdla; Marc Pollefeys; Josef Sivic; Akihiko Torii; Torsten Sattler", "journal": "", "ref_id": "b15", "title": "D2-Net: A Trainable CNN for Joint Detection and Description of Local 
Features", "year": "2019" }, { "authors": "Johan Edstedt; Ioannis Athanasiadis; Mårten Wadenbäck; Michael Felsberg", "journal": "", "ref_id": "b16", "title": "DKM: Dense kernelized feature matching for geometry estimation", "year": "2023" }, { "authors": "Michael Felsberg; P-E Forssen; H Scharr", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b17", "title": "Channel smoothing: Efficient robust smoothing of low-level signal features", "year": "2006" }, { "authors": "Divyansh Garg; Yan Wang; Bharath Hariharan; Mark Campbell; Kilian Q Weinberger; Wei-Lun Chao", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b18", "title": "Wasserstein distances for stereo disparity estimation", "year": "2020" }, { "authors": "Hugo Germain; Vincent Lepetit; Guillaume Bourmaud", "journal": "", "ref_id": "b19", "title": "Neural reprojection error: Merging feature learning and camera pose estimation", "year": "2021" }, { "authors": "Pierre Gleize; Weiyao Wang; Matt Feiszli", "journal": "", "ref_id": "b20", "title": "SiLK: Simple Learned Keypoints", "year": "2023" }, { "authors": "Gustav Häger; Mikael Persson; Michael Felsberg", "journal": "IEEE", "ref_id": "b21", "title": "Predicting disparity distributions", "year": "2021" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b22", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b23", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "Addison Howard; Eduard Trulls; Kwang Moo Yi; Dmitry Mishkin; Sohier Dane; Yuhe Jin", "journal": "", "ref_id": "b24", "title": "Image matching challenge 2022", "year": "2022" }, { "authors": "Jan J Koenderink", "journal": "Biological cybernetics", "ref_id": "b25", "title": "The structure of images", "year": "1984" }, { "authors": "Xiaotian Li; Shuzhe Wang; Yi Zhao; Jakob Verbeek; Juho Kannala", "journal": "", "ref_id": "b26", "title": "Hierarchical scene coordinate classification and regression for visual localization", "year": "2020" }, { "authors": "Zhengqi Li; Noah Snavely", "journal": "", "ref_id": "b27", "title": "Megadepth: Learning singleview depth prediction from internet photos", "year": "2018" }, { "authors": "Yutong Lin; Ze Liu; Zheng Zhang; Han Hu; Nanning Zheng; Stephen Lin; Yue Cao", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Could giant pre-trained image models extract universal representations", "year": "2022" }, { "authors": "Tony Lindeberg", "journal": "Journal of applied statistics", "ref_id": "b29", "title": "Scale-space theory: A basic tool for analyzing structures at different scales", "year": "1994" }, { "authors": "Philipp Lindenberger; Paul-Edouard Sarlin; Marc Pollefeys", "journal": "", "ref_id": "b30", "title": "LightGlue: Local Feature Matching at Light Speed", "year": "2023" }, { "authors": "Haisong Liu; Tao Lu; Yihui Xu; Jia Liu; Wenjie Li; Lijun Chen", "journal": "", "ref_id": "b31", "title": "Camliflow: bidirectional camera-lidar fusion for joint optical flow and scene flow estimation", "year": "2022" }, { "authors": " David G Lowe", "journal": "International journal of computer vision", "ref_id": "b32", "title": "Distinctive image features from scaleinvariant keypoints", "year": "2004" }, { "authors": "Iaroslav Melekhov; Aleksei Tiulpin; Torsten Sattler; Marc 
Pollefeys; Esa Rahtu; Juho Kannala", "journal": "IEEE", "ref_id": "b33", "title": "Dgc-net: Dense geometric correspondence network", "year": "2019" }, { "authors": "Dmytro Mishkin; Jiri Matas; Michal Perdoch; Karel Lenc", "journal": "BMVA", "ref_id": "b34", "title": "WxBS: Wide Baseline Stereo Generalizations", "year": "2015" }, { "authors": "Junjie Ni; Yijin Li; Zhaoyang Huang; Hongsheng Li; Hujun Bao; Zhaopeng Cui; Guofeng Zhang", "journal": "", "ref_id": "b35", "title": "Pats: Patch area transportation with subdivision for local feature matching", "year": "2023" }, { "authors": "Maxime Oquab; Timothée Darcet; Theo Moutakanni; V Huy; Marc Vo; Vasil Szafraniec; Pierre Khalidov; Daniel Fernandez; Francisco Haziza; Alaaeldin Massa; Russell El-Nouby; Po-Yao Howes; Hu Huang; Vasu Xu; Shang-Wen Sharma; Wojciech Li; Mike Galuba; Mido Rabbat; Nicolas Assran; Gabriel Ballas; Ishan Synnaeve; Herve Misra; Julien Jegou; Patrick Mairal; Armand Labatut; Piotr Joulin; Bojanowski", "journal": "", "ref_id": "b36", "title": "DINOv2: Learning robust visual features without supervision", "year": "2023" }, { "authors": "Carl Edward Rasmussen; Christopher K I Williams", "journal": "The MIT Press", "ref_id": "b37", "title": "Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning)", "year": "2005" }, { "authors": "Jerome Revaud; Cesar De Souza; Martin Humenberger; Philippe Weinzaepfel", "journal": "Advances in neural information processing systems", "ref_id": "b38", "title": "R2d2: Reliable and repeatable detector and descriptor", "year": "2019" }, { "authors": "Paul-Edouard Sarlin; Cesar Cadena; Roland Siegwart; Marcin Dymczyk", "journal": "", "ref_id": "b39", "title": "From coarse to fine: Robust hierarchical localization at large scale", "year": "2019" }, { "authors": "Paul-Edouard Sarlin; Daniel Detone; Tomasz Malisiewicz; Andrew Rabinovich", "journal": "", "ref_id": "b40", "title": "Superglue: Learning feature matching with graph neural networks", "year": "2020" }, { "authors": "Paul-Edouard Sarlin; Ajaykumar Unagar; Mans Larsson; Hugo Germain; Carl Toft; Viktor Larsson; Marc Pollefeys; Vincent Lepetit; Lars Hammarstrand; Fredrik Kahl", "journal": "", "ref_id": "b41", "title": "Back to the feature: Learning robust camera localization from pixels to pose", "year": "2021" }, { "authors": "L Johannes; Jan-Michael Schonberger; Frahm", "journal": "", "ref_id": "b42", "title": "Structurefrom-motion revisited", "year": "2016" }, { "authors": "Jiaming Sun; Zehong Shen; Yuang Wang; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b43", "title": "LoFTR: Detector-free local feature matching with transformers", "year": "2021" }, { "authors": "Hajime Taira; Masatoshi Okutomi; Torsten Sattler; Mircea Cimpoi; Marc Pollefeys; Josef Sivic; Tomas Pajdla; Akihiko Torii", "journal": "", "ref_id": "b44", "title": "Inloc: Indoor visual localization with dense matching and view synthesis", "year": "2018" }, { "authors": "Shitao Tang; Jiahui Zhang; Siyu Zhu; Ping Tan", "journal": "", "ref_id": "b45", "title": "Quadtree attention for vision transformers", "year": "2022" }, { "authors": "Zhuotao Tian; Hengshuang Zhao; Michelle Shu; Zhicheng Yang; Ruiyu Li; Jiaya Jia", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b46", "title": "Prior guided feature enrichment network for few-shot segmentation", "year": "2020" }, { "authors": "Luís Torgo; João Gama", "journal": "Springer", "ref_id": "b47", "title": "Regression by classification", "year": "1996" }, { "authors": 
"Prune Truong; Martin Danelljan; Luc V Gool; Radu Timofte", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b48", "title": "GOCor: Bringing Globally Optimized Correspondence Volumes into Your Neural Network", "year": "2020" }, { "authors": "Prune Truong; Martin Danelljan; Radu Timofte", "journal": "", "ref_id": "b49", "title": "GLU-Net: Global-local universal network for dense flow and correspondences", "year": "2020" }, { "authors": "Prune Truong; Martin Danelljan; Luc Van Gool; Radu Timofte", "journal": "", "ref_id": "b50", "title": "Learning accurate dense correspondences and when to trust them", "year": "2021" }, { "authors": "Prune Truong; Martin Danelljan; Radu Timofte; Luc Van Gool", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b51", "title": "PDC-Net+: Enhanced Probabilistic Dense Correspondence Network", "year": "2023" }, { "authors": "Michal J Tyszkiewicz; Pascal Fua; Eduard Trulls", "journal": "NeurIPS", "ref_id": "b52", "title": "DISK: learning local features with policy gradient", "year": "2020" }, { "authors": "Cristina Vasconcelos; Vighnesh Birodkar; Vincent Dumoulin", "journal": "", "ref_id": "b53", "title": "Proper reuse of image classification features improves object detection", "year": "2022" }, { "authors": "Qing Wang; Jiaming Zhang; Kailun Yang; Kunyu Peng; Rainer Stiefelhagen", "journal": "", "ref_id": "b54", "title": "MatchFormer: Interleaving attention in transformers for feature matching", "year": "2022" }, { "authors": "Chen Wei; Haoqi Fan; Saining Xie; Chao-Yuan Wu; Alan Yuille; Christoph Feichtenhofer", "journal": "", "ref_id": "b55", "title": "Masked feature prediction for self-supervised visual pre-training", "year": "2022" }, { "authors": "Sholom M Weiss; Nitin Indurkhya", "journal": "", "ref_id": "b56", "title": "Rule-based regression", "year": "1993-09-03" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b57", "title": "", "year": "1993" }, { "authors": "Sholom M Weiss; Nitin Indurkhya", "journal": "J. Artif. Intell. Res", "ref_id": "b58", "title": "Rule-based machine learning methods for functional prediction", "year": "1995" }, { "authors": "Andrew P Witkin", "journal": "", "ref_id": "b59", "title": "Scale space filtering", "year": "1983" }, { "authors": "Zhenda Xie; Zigang Geng; Jingcheng Hu; Zheng Zhang; Han Hu; Yue Cao", "journal": "", "ref_id": "b60", "title": "Revealing the dark secrets of masked image modeling", "year": "2023" }, { "authors": "Jiahuan Yu; Jiahao Chang; Jianfeng He; Tianzhu Zhang; Jiyang Yu; Wu Feng", "journal": "", "ref_id": "b61", "title": "ASTR: Adaptive spot-guided transformer for consistent local feature matching", "year": "2023" }, { "authors": "Jinghao Zhou; Chen Wei; Huiyu Wang; Wei Shen; Cihang Xie; Alan Yuille; Tao Kong", "journal": "", "ref_id": "b62", "title": "ibot: Image bert pre-training with online tokenizer", "year": "2022" }, { "authors": "Shengjie Zhu; Xiaoming Liu", "journal": "", "ref_id": "b63", "title": "PMatch: Paired masked image modeling for dense geometric matching", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 50.11, 510.46, 46.31, 8.96 ], "formula_id": "formula_0", "formula_text": "Regression" }, { "formula_coordinates": [ 3, 308.86, 271.84, 236.25, 53.81 ], "formula_id": "formula_1", "formula_text": "p(W A→B ) = p(x B |x A ) is the conditional matching distri- bution. Multiplying p(x B |x A )p(x A ) yields the joint distri- bution. We denote the model distribution as p θ (x A , x B ) = p θ (x B |x A )p θ (x A" }, { "formula_coordinates": [ 3, 317.37, 539.34, 227.74, 18.44 ], "formula_id": "formula_2", "formula_text": "{φ A coarse , φ A fine } = F θ (I A ), {φ B coarse , φ B fine } = F θ (I B ),(1)" }, { "formula_coordinates": [ 3, 333.56, 658.72, 211.55, 30.49 ], "formula_id": "formula_3", "formula_text": "ŴA→B coarse , p A θ,coarse = G θ (φ A coarse , φ B coarse ), G θ (φ A coarse , φ B coarse ) = D θ E θ (φ A coarse , φ B coarse ) .(2)" }, { "formula_coordinates": [ 4, 68.38, 196.29, 217.98, 13.81 ], "formula_id": "formula_4", "formula_text": "ŴA→B , p A θ = R θ φ A fine , φ B fine , ŴA→B coarse , p A θ,coarse .(3)" }, { "formula_coordinates": [ 4, 70.83, 267.29, 215.53, 13.81 ], "formula_id": "formula_5", "formula_text": "Ŵ A→B i , p A i,θ = R θ,i (φ A i , φ B i , Ŵ A→B i+1 , p A θ,i+1 ),(4)" }, { "formula_coordinates": [ 4, 88.28, 510.57, 198.09, 12.85 ], "formula_id": "formula_6", "formula_text": "p θ (x A coarse , x B coarse ) = G θ (φ A coarse , φ B coarse ).(5)" }, { "formula_coordinates": [ 4, 50.11, 653.19, 236.25, 55.42 ], "formula_id": "formula_7", "formula_text": "p θ (x A i , x B i | Ŵ A→B i+1 ) = R θ,i (φ A i , φ B i , Ŵ A→B i+1 , p A θ,i+1 ), (6) The basecase Ŵ A→B coarse is computed by decoding p θ (x B coarse |x A coarse )." }, { "formula_coordinates": [ 4, 334.87, 538.43, 210.24, 12.85 ], "formula_id": "formula_8", "formula_text": "φ A coarse = F coarse,θ (I A ), φ B coarse = F coarse,θ (I B ). (7)" }, { "formula_coordinates": [ 5, 95.23, 311.82, 187.26, 30.55 ], "formula_id": "formula_9", "formula_text": "p coarse,θ (x B |x A ) = K k=1 π k (x A )B m k , (8" }, { "formula_coordinates": [ 5, 282.49, 322.55, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 5, 100.34, 502.27, 186.02, 45.74 ], "formula_id": "formula_11", "formula_text": "ToWarp(p coarse,θ (x B coarse |x A coarse )) = i∈N4(k * (x A coarse )) π i m i i∈N4(k * (x A coarse )) π i = Ŵ A→B coarse ,(9)" }, { "formula_coordinates": [ 5, 430.07, 544.71, 110.89, 18.44 ], "formula_id": "formula_12", "formula_text": "s 2 I * p(x A , x B ; 0). (10" }, { "formula_coordinates": [ 5, 540.96, 547.1, 4.15, 8.64 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 6, 60.01, 443.63, 226.35, 56.89 ], "formula_id": "formula_14", "formula_text": "D KL (q(x B , x A ; s)||p coarse,θ (x B , x A )) = (11) E x A ,x B ∼q -log p coarse,θ (x B |x A )p coarse,θ (x A ) = (12) - x A ,x B log π k † (x A ) + log p coarse,θ (x A )dq," }, { "formula_coordinates": [ 6, 66.96, 572.09, 215.25, 19.31 ], "formula_id": "formula_15", "formula_text": "- x A ,x B log π k † (x A ) + λ log p coarse,θ (x A )dq. 
(14" }, { "formula_coordinates": [ 6, 282.21, 574.48, 4.15, 8.64 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 6, 398.86, 300.1, 146.25, 19.55 ], "formula_id": "formula_17", "formula_text": "log p θ (x B i |x A i , Ŵ A→B i+1 ) =(15)" }, { "formula_coordinates": [ 6, 350.34, 317.11, 194.78, 19.55 ], "formula_id": "formula_18", "formula_text": "-(||µ θ (x A i , Ŵ A→B i+1 ) -x B i || 2 + s) 1/4 ,(16)" }, { "formula_coordinates": [ 6, 336.04, 339.89, 66.34, 13.93 ], "formula_id": "formula_19", "formula_text": "µ θ (x A i , Ŵ A→B i+1 )" }, { "formula_coordinates": [ 6, 327.77, 408.43, 217.34, 26.15 ], "formula_id": "formula_20", "formula_text": "D KL (q(x B i , x A i ; s = 2 i c)||p i,θ (x B i , x A i | Ŵ A→B i+1 )) =(17)" }, { "formula_coordinates": [ 6, 321.54, 439.45, 223.57, 44.22 ], "formula_id": "formula_21", "formula_text": "E x A i ,x B i ∼q -(||µ θ (x A i , Ŵ A→B i+1 ) -x B i || 2 + s) 1/4 + E x A i ,x B i ∼q -log p i,θ (x A i | Ŵ A→B i+1 ) .(18)" }, { "formula_coordinates": [ 6, 387.93, 562.49, 157.18, 17.29 ], "formula_id": "formula_22", "formula_text": "L = L coarse + L fine .(19)" } ]
RoMa: Robust Dense Feature Matching
Figure 1. RoMa is robust, i.e., able to match under extreme changes. We propose RoMa, a model for dense feature matching that is robust to a wide variety of challenging real-world changes in scale, illumination, viewpoint, and texture. We show correspondences estimated by RoMa on the extremely challenging benchmark WxBS [35], where most previous methods fail, and on which we set a new state-of-the-art with an improvement of 36% mAA. The estimated correspondences are visualized by grid sampling coordinates bilinearly from the other image, using the estimated warp, and multiplying with the estimated confidence.
Johan Edstedt; Qiyu Sun; Georg Bökman; Mårten Wadenbäck; Michael Felsberg
[ { "figure_caption": "Figure 22Figure2. Illustration of our robust approach RoMa. Our contributions are shown with green highlighting and a checkmark, while previous approaches are indicated with gray highlights and a cross. Our first contribution is using a frozen foundation model for coarse features, compared to fine-tuning or training from scratch. DINOv2 lacks fine features, which are needed for accurate correspondences. To tackle this, we combine the DINOv2 coarse features with specialized fine features from a ConvNet, see Section 3.2. Second, we propose an improved coarse match decoder D θ , which typically is a ConvNet, with a coordinate agnostic Transformer decoder that predicts anchor probabilities instead of directly regressing coordinates, see Section 3.3. Third, we revisit the loss functions used for dense feature matching. We argue from a theoretical model that the global matching stage needs to model multimodal distributions, and hence use a regressionby-classification loss instead of an L2 loss. For the refinement, we in contrast use a robust regression loss, as the matching distribution is locally unimodal. These losses are further discussed in Section 3.4. The impact of our contributions is ablated in our extensive ablation study in Table2.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Illustration of localizability of matches. At infinite resolution the match distribution can be seen as a 2D surface (illustrated as 1D lines in the figure), however at a coarser scale s this distribution becomes blurred due to motion boundaries. This means it is necessary to both use a model and an objective function capable of representing multimodal distributions.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Comparison of loss gradients. We use the generalized Charbonnier [3] loss for refinement, which locally matches L2 gradients, but globally decays with |x| -1/2 toward zero.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": ", F coarse,θ = RN50, F fine,θ = RN50 16.0 6.1 4.5 III: II, F fine,θ = VGG19 14", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Evaluation of frozen features. From top to bottom: Image pair, VGG19 matches, RN50 matches, DINOv2 matches, RoMa matches. DINOv2 is significantly more robust than the VGG19 and RN50. Quantitative results are presented in Table1.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Evaluation of frozen features on MegaDepth. We compare the VGG19 and ResNet50 backbones commonly used in feature matching with the generalist features of DINOv2.", "figure_data": "MethodEPE ↓ Robustness % ↑VGG1987.643.2RN5060.257.5DINOv2 27.185.6", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "SotA comparison on IMC2022[25]. Measured in mAA (higher is better).", "figure_data": "Method ↓mAA → @10 ↑SiLK [21]68.6SP [14]+SuperGlue [41]72.4LoFTR [44] CVPR'2178.3MatchFormer [55] ACCV'2278.3QuadTree [46] ICLR'2281.7ASpanFormer [12] ECCV'2283.8DKM [17] CVPR'2383.1RoMa88.0", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "SotA comparison on WxBS[35]. 
Measured in mAA at 10px (higher is better).", "figure_data": "MethodmAA@ →10px ↑DISK [53] NeurIps'2035.5DISK + LightGlue [31, 53] ICCV'2341.7SuperPoint +SuperGlue [14, 41] CVPR'20 31.4LoFTR [44] CVPR'2155.4DKM [17] CVPR'2358.9RoMa80.1", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "SotA comparison on MegaDepth-1500[28,44]. Measured in AUC (higher is better).", "figure_data": "Method ↓AUC@ → 5 • ↑ 10 • ↑ 20 • ↑LightGlue [31] ICCV'2351.0 68.180.7LoFTR [44] CVPR'2152.8 69.281.2PDC-Net+ [52] TPAMI'2351.5 67.278.5ASpanFormer [12] ECCV'2255.3 71.583.1ASTR [61] CVPR'2358.4 73.183.8DKM [17] CVPR'2360.4 74.985.1PMatch [63] CVPR'2361.4 75.785.7CasMTR [10] ICCV'2359.1 74.384.8RoMa62.6 76.786.3", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "SotA comparison on ScanNet-1500[13,41]. Measured in AUC (higher is better).", "figure_data": "Method ↓AUC@ → 5 • ↑ 10 • ↑ 20 • ↑SuperGlue [41] CVPR'1916.2 33.851.8LoFTR [44] CVPR'2122.1 40.857.6PDC-Net+ [52] TPAMI'2320.3 39.457.1ASpanFormer [12] ECCV'2225.6 46.063.3PATS [36] CVPR'2326.0 46.964.3DKM [17] CVPR'2329.4 50.768.3PMatch [63] CVPR'2329.4 50.167.4CasMTR [10] ICCV'2327.1 47.064.4RoMa31.8 53.470.94.2. Training SetupWe use the training setup as in DKM [17]. Following DKM,we use a canonical learning rate (for batchsize = 8)of 10 -4 for the decoder, and 5 • 10 -6 for the encoder(s). We use the same training split as in DKM, which consistsof randomly sampled pairs from the MegaDepth and Scan-Net sets excluding the scenes used for testing. The super-vised warps are derived from dense depth maps from multi-view-stereo (MVS) of SfM reconstructions in the case ofMegaDepth, and from RGB-D for ScanNet. Following pre-vious work [12, 17, 44], use a model trained on the ScanNettraining set when evaluating on ScanNet-1500. All otherevaluation is done on a model trained only on MegaDepth.", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "SotA comparison on Megadepth-8-Scenes[17]. Measured in AUC (higher is better). We submit to the 2022 version of the image matching challenge[25], which consists of a hidden test-set of Google street-view images with the task to estimate the fundamental matrix between them.", "figure_data": "Method ↓AUC → @5 • @10 • @20 •PDCNet+ [52] TPAMI'2351.8 66.677.2ASpanFormer [12] ECCV'2257.2 72.182.9DKM [17] CVPR'2360.5 74.584.2RoMa62.2 75.985.310,000 matches, which are then used for estimation. Weconsistently improve compared to prior work across theboard, in particular achieving a relative error reduction onthe competitive IMC2022 [25] benchmark by 26%, and again of 36% in performance on the exceptionally difficultWxBS [35] benchmark.Image Matching Challenge 2022:", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "CasMTR 53.5 / 76.8 / 85.4 51.9 / 70.2 / 83.2 RoMa 60.6 / 79.3 / 89.9 66.4 / 83.2 / 87.8", "figure_data": "• )PATS55.6 / 71.2 / 81.0 58.8 / 80.9 / 85.5DKM51.5 / 75.3 / 86.9 63.4 / 82.4 / 87.8", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[17,36,49,52]", "Explanation": "The cited works provide dense feature matching methods that the citing paper adopts in its research to find all matching pixel-pairs between images."}, {"Category": "Data Source", "Citation": "[43]", "Explanation": "The cited work provides a dataset for 3D reconstruction that the citing paper utilizes in its research to perform downstream tasks."}, {"Category": "Data Source", "Citation": "[40]", "Explanation": "The cited work provides a dataset for visual localization that the citing paper utilizes in its research to perform downstream tasks."}, {"Category": "Extension or Continuation", "Citation": "[41,44,52]", "Explanation": "The cited works extend the research on learning coarse features by exploring new dimensions, contexts, or variables in the field of feature matching."}, {"Category": "Methodological Basis", "Citation": "[24,37,56,62]", "Explanation": "The cited works on large-scale self-supervised pretraining using Masked image Modeling (MIM) provide the methodological basis for the citing paper to explore the use of frozen foundation models for coarse features in the context of image matching."}, {"Category": "Methodological Basis", "Citation": "[60]", "Explanation": "The cited work, DINOv2, is used as a basis for the coarse feature extraction in the citing paper, as it is shown to retain local information better than classification pretraining and generate features that generalize well to dense vision tasks."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work DKM is used as a reference for the non-robust regression loss used in the coarse matching and refinement stages in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[37]", "Explanation": "The foundation model DINOv2 is integrated into the dense feature matching process in the citing paper, building upon the work of DINOv2 in the cited work."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work on language Transformers provides the methodological basis for the self-supervised learning approach used in the citing paper to develop all-purpose features for visual models."}, {"Category": "Extension or Continuation", "Citation": "[8]", "Explanation": "The cited work on pre-trained foundation models in self-supervised learning serves as a foundational study for the development of all-purpose features in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[11]", "Explanation": "The cited work on self-supervised ViT features provides supporting evidence for the claim that self-supervised models can capture more distinct information than supervised models, as observed in the citing paper."}, {"Category": "Data Source", "Citation": "[62]", "Explanation": "The cited work on iBOT explores the use of MIM in a self-distillation framework for developing a visual tokenizer, which the citing paper leverages as a data source for its research on all-purpose features."}, {"Category": "Extension or Continuation", "Citation": "[37]", "Explanation": "The cited work on DINOv2 reveals the potential of self-supervised methods in producing all-purpose visual features, which the citing paper extends by exploring the use of these features in various image distributions and tasks without finetuning."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work provides a method for using robust loss functions as regularizers for 
optical flow, which the citing paper adopts in its research on robust regression losses."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work also provides a method for using robust loss functions as regularizers for optical flow, which the citing paper adopts in its research on robust regression losses."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work discusses the use of robust loss functions for robust smoothing, which the citing paper may have adopted in its research on robust regression losses."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work uses robust loss functions as loss functions, which the citing paper may have adopted in its research on robust regression losses."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work also uses robust loss functions as loss functions, which the citing paper may have adopted in its research on robust regression losses."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work, MegaDepth, serves as the dataset for the study conducted in the citing paper, providing the basis for the evaluation of the robustness of DINOv2 to viewpoint and illumination changes."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work, DKM, is the basis for the feature encoder in the citing paper. The citing paper adopts the DKM method for feature extraction and uses a single network to produce a feature pyramid of coarse and fine features for global matching and refinement."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work, DKM, is used as a basis for the addition of a hyperparameter in the calculation of the marginal and conditional weights in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[52]", "Explanation": "The cited work is used to support the use of a binary crossentropy loss in the fine loss calculation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work by Charbonnier provides the specific formulation of the generalized Charbonnier distribution, which the citing paper adopts in the estimation of the mean \u00b5 in the refinement process."}, {"Category": "Extension or Continuation", "Citation": "[17]", "Explanation": "The cited work by DKM is extended by the citing paper to sample correspondences using a balanced sampling approach, which results in improved performance in the two-view geometry benchmarks."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work by WxBS Benchmark is used as a reference for the evaluation of the RoMa approach, which leads to the development of a new method for improved performance in the two-view geometry benchmarks."}, {"Category": "Supporting Evidence", "Citation": "[44]", "Explanation": "The cited work provides the test set used in the study, which is essential for evaluating the performance of the research presented in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[12]", "Explanation": "The cited work provides a protocol for evaluating the performance of the research presented in the citing paper, which is necessary for ensuring the validity of the results."}, {"Category": "Supporting Evidence", "Citation": "[13]", "Explanation": "The cited work provides the dataset used in the study, which is essential for evaluating the 
performance of the research presented in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[41]", "Explanation": "The cited work provides a method for evaluating the performance of the research presented in the citing paper, which is necessary for ensuring the validity of the results."}, {"Category": "Methodological Basis", "Citation": "[17,28]", "Explanation": "The cited works provide the Megadepth-8-Scenes benchmark dataset, which the citing paper uses to evaluate the performance of the RoMa model in a real-world setting."}, {"Category": "Extension or Continuation", "Citation": "[45]", "Explanation": "The cited work, InLoc, is extended in the citing paper by evaluating the RoMa model on the InLoc benchmark to further assess its performance in a different context."}, {"Category": "Supporting Evidence", "Citation": "[45]", "Explanation": "The cited work provides the InLoc Visual Localization benchmark, which the citing paper uses to evaluate the performance of the RoMa model in visual localization tasks."}, {"Category": "Supporting Evidence", "Citation": "[40]", "Explanation": "The cited work provides the HLoc pipeline, which the citing paper uses to evaluate the performance of the RoMa model in visual localization tasks."}, {"Category": "Extension or Continuation", "Citation": "[17]", "Explanation": "The cited work DKM introduces a method for sampling correspondences, which the citing paper extends by using the same approach in the RoMa model to improve the performance in visual localization tasks."}, {"Category": "Methodological Basis", "Citation": "[26,30,59]", "Explanation": "The cited works on scale-space theory provide the theoretical basis for the application of Gaussian kernels in the matching distribution, which is a key element in the diffusion process discussed in the citing paper."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b45", "b40", "b48", "b46", "b19", "b43", "b6", "b13", "b0", "b4", "b14", "b4", "b14", "b29", "b21", "b8", "b12", "b0", "b1", "b4", "b27", "b43", "b62" ], "table_ref": [], "text": "Vision-Language Models (VLMs) are rapidly advancing in capability and have witnessed a dramatic growth in public visibility: DALL-E [46] has more than 1.5 million users creating over 2 million images a day; the discord channel for MidJourney [41] hosts over two million members [49]; and shortly after its release, Stability.AI reported that their Stable Diffusion model [47] had over 10 million daily active users [20]. Underpinning these powerful generative models are image-text encoders like CLIP [44], which are themselves used for many discriminative tasks, such as video action recognition, open set detection and segmentation, and captioning. These encoders are pre-trained on large-scale internet scraped datasets. The uncurated nature of such datasets can translate to generated images that risk inflicting a range of downstream harms on their end users and society at large -from bias and negative stereotypes, to nudity and sexual content, or violent or graphic imagery [7,14].\nIn light of these issues, coupled with growing use of generative AI, it is vital to reliably benchmark the bias in VLMs, particularly in the image-text encoders. A small emerging body of work attempts to measure bias in VLMs [1,5,15], or to debias their feature representations [5,15]. Yet the legitimacy of this work critically depends on both a suitable evaluation metric and an evaluation dataset to accurately depict the bias in pre-trained model weights and reliably signal whether debiasing attempts have been successful. The predominant focus on model-centric debiasing methods has overshadowed two main challenges associated with datasets and metrics: (i) the common use of cropped face datasets, such as FairFace [30], fall short because excluding contextual background presents an inaccurate and unreliable assessment of bias in naturalistic images; and (ii) even if natural, open-domain images containing contextual clues are used, they are unbalanced by identity attribute representation within contexts. This is problematic because commonly-used bias metrics, such as Bias@K, are affected by the naturally-occurring distribution of images. Thus, while using contextual images is desirable, it comes at the cost of spurious correlations, affecting the reliability of bias metrics.\nIn this paper, we argue that these confounding factors arising from the interaction of metric choice and biased datasets paint an unreliable picture when measuring model bias in VLMs. To counter these issues, we propose a synthetic pipeline for debiasing a dataset into contrast sets balanced by identity attributes across background contexts. Our pipeline draws on the success of contrast sets in NLPs [22] and leverages recent advances in controllable image editing and generation [9]. We illustrate our approach with a focus on gender bias and define a contrast set as containing pairs of images from COCO [13] where each image ID has two synthetically-edited versions (one man, one woman) where the background is fixed and only the person bounding box is edited. Our paper makes three key contributions: (1) We demonstrate spurious correlations in the COCO dataset between gender and context, and show their problematic effects when used to measure model bias (Sec. 
3); (2) We present the GENSYNTH dataset, built from a generative pipeline for synthetic image editing, and a filtering pipeline using KNN with real and synthetic images to control for the quality of the generated images (Sec. 4); (3) We benchmark state-of-the-art VLM models [5,28,44,63]; demonstrating how balanced and unbalanced versions of the COCO dataset skew the values of bias metrics (Sec. 5).\nOur findings demonstrate that debiasing datasets with synthetic contrast sets can avoid spurious correlations and more reliably measure model bias. While synthetically-edited data has promise in (i) preserving privacy of subjects included in vision datasets, and (ii) adding controllability to the dataset features, it also risks introducing a real-synthetic distribution shift and stacking biases of various generative models may essentialise representations of gender (see Sec. 6). Despite these early-stage limitations, this work starts a conversation about the importance of the interaction between dataset features with bias metrics, ultimately contributing to future work that paints a more accurate and balanced picture of identity-based bias in VLMs." }, { "figure_ref": [], "heading": "Related works", "publication_ref": [ "b37", "b58", "b20", "b24", "b60", "b22", "b0", "b4", "b14", "b52", "b12", "b60", "b61", "b24", "b67", "b36", "b9", "b15", "b55", "b59", "b63", "b67", "b10", "b51", "b57", "b6", "b38", "b59", "b50", "b64", "b50", "b2", "b38", "b18", "b28", "b39", "b53", "b45", "b46", "b47", "b8", "b32", "b41", "b65", "b8", "b26", "b56", "b49", "b20", "b1", "b3", "b5", "b35", "b30", "b25", "b20", "b31", "b17", "b44", "b21", "b42" ], "table_ref": [], "text": "Defining Fairness and Bias. Fairness is a complex, context-dependent concept [38,59]. Here, we adopt a narrow definition where no group is advantaged or disadvantaged based on the protected attribute of gender in retrieval settings [21,25]. The metrics employed in this paper, Bias@K [61] and Skew@K, [23] are used to assess disparity in distribution between search query results and desired outcomes. In this work, we assume contextual activities such as dancing, skateboarding, laughing would not have a strong gendered prior and thus the desired distribution is one where all protected attributes have equal chance of being returned in a query that does not explicitly mention gender. 2Measuring Model Bias. Measuring bias in VLMs is a growing area of research. Early work measures the misclassification rates of faces into harmful categories [1]. Several works measure outcome bias for text-to-face retrieval [5,15,53], though it is unclear how such measurements made on cropped face datasets generalise to real-world settings. For gender fairness in open-domain images, COCO Captions [13] is a standard benchmark for cross-modal retrieval [61,62] and image captioning [25,68]. Measuring bias in generative VLMs has also been approached [37].\nDataset Bias. Datasets, including those used for bias evaluation, have their own biases from curation and annotation artefacts. Image datasets have been found to include imbalanced demographic representation [10,16,56,60,64,68], stereotypical portrayals [11,52,58], or graphic, sexually-explicit and other harmful content [7]. Similar to [39,60], we identify spurious gender correlations in the COCO Captions dataset and further show this renders the datasets unsuitable for current bias retrieval metrics. 
Techniques to reduce dataset biases range from automatic [51] to manual filtering [65] of harmful images, such as those containing nudity [51], toxicity, or personal and identifiable information [3]. Yet, these filters cannot identify subtle stereotypes and spurious correlations present in open-domain images -making it difficult to curate a wholly unbiased natural image dataset [39].\nMitigating Dataset Bias with Synthetic Data. Deep networks need large amounts of labeled data, prompting the creation of synthetic datasets for various computer vision tasks [19,29,40,54]. More recently, progress in generative models [46][47][48] has enabled methods to synthetically generate training data [9,33,42,66]. Similarly, text-guided editing methods [9,27,57] offer scalable and controllable image editing, potentially enhancing dataset fairness and removing issues related to existing spurious correlations. Several works propose the use of synthetic datasets for mitigating dataset bias, such as with GANs [50] or diffusion models [21]. However, synthetic or generated data may not necessarily represent underlying distributions of marginalised groups within populations and thus still unfairly disadvantage certain groups [2,4,6,36]. To combat these risks, fairness in generative models is an area gaining popularity: StyleGan [31] has been used to edit images on a spectrum, rather than using binary categories [26]; [21] use human feedback to guide diffusion models to generate diverse human images; and [32] learn to transfer age, race and gender across images. Similar to our work, GAN-based frameworks [18,45] edit an existing face dataset to equalise attributes and enforce fairness. Our work extends this approach to open-domain images, introducing an automatic filtering technique for improving the quality of edits. To our knowledge, we are the first to propose image editing of open-domain images for fairness. Our work is also inspired by the use of contrast sets in NLP [22], which have been used to alter data by perturbing demographics (race, age, gender) in order to improve fairness [43]. We use synthetically-generated contrast sets by augmenting both the textual and visual input to CLIP, for a more accurate evaluation of VLM bias." }, { "figure_ref": [], "heading": "Measuring Gender Bias on Natural Images", "publication_ref": [ "b4" ], "table_ref": [], "text": "While prior works make in-depth comparisons between models, and even metrics [5], there is a dearth of research investigating whether natural image datasets, with their own biased and spurious correlations, are suitable benchmarks to measure bias in VLMs. In this section, we investigate the extent of dataset bias from spurious correlations in COCO (Sec. 3.3) and its effect on reliably measuring model bias (Sec. 3.4)." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b60", "b4", "b22", "b4", "b22", "b12", "b33", "b14", "b60", "b61" ], "table_ref": [], "text": "We first define the bias metrics and the framework used to measure model bias on image-caption data.\nBias@K [61] measures the proportions of masculine and feminine images in the retrievals of a search result with a gender-neutral text query. For an image I, we define a function g(I) = male if there are only individuals who appear as men in the image, and g(I) = female if there are only individuals who appear as women. 
Given a set of $K$ retrieved images $R_K(q)$ for a query $q$, we count the images of apparent men and women as:\n$N_{\text{male}} = \sum_{I \in R_K(q)} \mathbb{1}[g(I) = \text{male}]$\nand\n$N_{\text{female}} = \sum_{I \in R_K(q)} \mathbb{1}[g(I) = \text{female}]$.\nWe define the gender bias metric as:\n$\delta_K(q) = \begin{cases} 0, & N_{\text{male}} + N_{\text{female}} = 0 \\ \frac{N_{\text{male}} - N_{\text{female}}}{N_{\text{male}} + N_{\text{female}}}, & \text{otherwise.} \end{cases}$\nFor a whole query set $Q$, we define:\n$\text{Bias@}K = \frac{1}{|Q|} \sum_{q \in Q} \delta_K(q). \quad (1)$\nSkew@K [5,23] measures the difference between the desired proportion of image attributes in $R_K(q)$ for the query $q$ and the actual proportion. Let the desired proportion of images with attribute label $A$ in the set of retrieved images be $p_{d,q,A} \in [0, 1]$ and the actual proportion be $p_{R(q),q,A} \in [0, 1]$. The resulting Skew@K of $R(q)$ for an attribute label $A \in \mathcal{A}$ is:\n$\text{Skew@}K(R(q)) = \ln \frac{p_{R_K(q),q,A}}{p_{d,q,A}}, \quad (2)$\nwhere the desired proportion $p_{d,q,A}$ is the actual attribute distribution over the entire dataset. A disadvantage of Skew@K is that it only measures bias with respect to a single attribute at a time and must be aggregated to give a holistic view of the bias over all attributes. We follow [5] and take the maximum Skew@K among all attribute labels $\mathcal{A}$ of the images for a given text query $q$:\n$\text{MaxSkew@}K(R(q)) = \max_{A_i \in \mathcal{A}} \text{Skew}_{A_i}\text{@}K(R(q)), \quad (3)$\nwhich gives us the \"largest unfair advantage\" [23] belonging to images within a given attribute. In our work, a MaxSkew@K of 0 for the attribute gender and a given text query $q$ implies that men and women are equally represented in the retrieved set of $K$ images $R_K(q)$. We ignore all images with undefined attribute labels (in this case gender) when measuring MaxSkew@K.\nCOCO is a dataset of 118k images with detection, segmentation and caption annotations, covering 80 distinct categories, including people [13,34]. Each image has five captions written by different human annotators. COCO is commonly used to measure gender bias in VLMs in tandem with the Bias@K metric [15,61,62]." }, { "figure_ref": [], "heading": "Gendered Captions and Images in COCO", "publication_ref": [ "b60" ], "table_ref": [], "text": "The bias metrics defined in Sec. 3.1 require gender attribute labels for each image and gender-neutral text queries, but these are not naturally present in captioned image data such as COCO. We describe the steps to automatically label gender for images and to neutralise gender information in captions.\nExtracting Image Gender Labels from Captions. We assign a gender label to each COCO image, following prior work [61]. For each image, we concatenate all five captions into a single paragraph. If the paragraph contains only feminine words and no masculine words, the image is assigned a female label, and vice versa. If the paragraph contains words from both or neither genders, it is labeled as undefined. The full list of gendered words is detailed in the Appendix. Using this procedure, we implement the function g in Sec. 3.1. The COCO 2017 train set contains 118,287 images, of which 30,541 (25.8%) are male, 11,781 (9.9%) are female, and 75,965 (64.2%) are undefined. The COCO 2017 validation set contains 5,000 images, of which 1,275 (25.5%) are assigned male, 539 (10.8%) female, and 3,186 (63.7%) undefined. This procedure gives high precision in the gender-pseudo label, as any ambiguous samples are rejected. However, images may be incorrectly labeled as undefined (lower recall) due to, for example, misspelling of the gendered words in the human-annotated captions or omission of rarer gendered terms in our keyword list.\nConstructing Gender-Neutral Captions. 
We construct gender-neutral captions by replacing gendered words with neutral ones, e.g. \"man\" or \"woman\" become \"person\", and the sentence \"A man sleeping with his cat next to him\" becomes \"A person sleeping with their cat next to them\". The full mapping of gender-neutral words and more examples of original and neutralised captions are in the Appendix." }, { "figure_ref": [ "fig_0" ], "heading": "Identifying Spurious Correlations with Gender", "publication_ref": [ "b34", "b51" ], "table_ref": [], "text": "As reported above, COCO contains more than twice as many male images as it does female ones. This will inevitably affect retrieval-based bias metrics, as there will be more male images in the retrievals. One naïve way to fix this is to undersample the male images in order to arrive at a Balanced COCO dataset. However, ensuring equal distribution of demographic attributes does not necessarily ensure the dataset is unbiased as a whole. Spurious correlations can result in subsets of the data being highly correlated with certain attributes. Here we explore whether for certain contexts in the COCO dataset, e.g., skateboarding, one gender is over-represented. We take two approaches to evidence these spurious correlations. The male over-representation factor is the difference between the percentage of male images in the particular cluster and the percentage of male images overall in the dataset.\nK-means Clusters with Caption Embeddings. First, we find semantic clusters of captions and evaluate the gender balance within them. For every image $I_n$, we embed its gender-neutralised captions $C_n^k$, where $k \in \{1, \dots, K\}$ indexes the $K$ captions of the image, with RoBERTa [35] to get features $f_n^k$. We average the features to get\n$f_n = \frac{1}{K} \sum_{k=1}^{K} f_n^k$.\nNext, we cluster the features $f_n$, $n \in \{1, \dots, N\}$, into $M = 20$ clusters with K-Means. Finally, for each cluster, we extract salient words using Latent Dirichlet Allocation (LDA) and give a manually-defined cluster label. In Fig. 1 we show a t-SNE representation of the discovered clusters, together with the degree of male over-representation. We see that in sports-related concepts men are over-represented, whereas in scenes in kitchens, bathrooms, streets, and parks, women are over-represented. For a list of all discovered classes and salient words according to LDA, refer to the Appendix.\nSpurious Correlations Classifier. Following [52], we investigate the presence of spurious correlations by training classifiers to predict binary gender labels of images and captions where the explicit gender information is removed for both training and testing. Specifically, for the image classifier (ResNet-50) we replace all person bounding boxes with black pixels; and for the caption classifier (BERT-base) we use the gender-neutralised captions. The training and testing data is COCO train and validation defined in Sec. 3.2 but with undefined images dropped. On unseen data, the text-only classifier on gender-neutralised captions achieves 78.0% AUC, and the image-only classifier on person-masked images achieves 63.4% AUC. Given that a random chance model achieves 50% AUC and an image classifier on unmasked images achieves 71.9% AUC, it is clear that spurious background correlations in the image, as well as biases in the caption, provide a significant signal to predict gender of the person in the image even when there is no explicit gender information."
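To make the retrieval-bias metrics of Sec. 3.1 concrete, the following is a minimal sketch of Bias@K and MaxSkew@K for a single query, assuming the ranked retrieval list has already been mapped to gender labels via g(I) (None for undefined images); the function names and the toy example are illustrative only, not code from the paper.

```python
import math

def bias_at_k(genders, k):
    """delta_K(q) for one query: `genders` is the ranked list of g(I) values,
    each 'male', 'female', or None for images with undefined gender."""
    top = genders[:k]
    n_male = sum(g == "male" for g in top)
    n_female = sum(g == "female" for g in top)
    if n_male + n_female == 0:
        return 0.0
    return (n_male - n_female) / (n_male + n_female)

def max_skew_at_k(genders, k, desired):
    """MaxSkew@K for one query. `desired` maps each attribute label to its
    desired proportion (e.g. the label distribution over the whole dataset);
    undefined images are ignored, as in the paper."""
    top = [g for g in genders[:k] if g is not None]
    if not top:
        return 0.0
    skews = []
    for attr, p_desired in desired.items():
        p_actual = sum(g == attr for g in top) / len(top)
        if p_actual > 0:  # a zero proportion gives -inf skew and never attains the max
            skews.append(math.log(p_actual / p_desired))
    return max(skews) if skews else 0.0

# Dataset-level Bias@K (Eq. 1) is the mean of bias_at_k over all gender-neutral queries.
ranked = ["male", "male", None, "female", "male", None, "female"]
print(bias_at_k(ranked, k=5))                                            # (3 - 1) / 4 = 0.5
print(max_skew_at_k(ranked, k=5, desired={"male": 0.5, "female": 0.5}))  # ln(0.75 / 0.5) ≈ 0.41
```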
}, { "figure_ref": [], "heading": "The Effect of Dataset Bias on Model Bias Measurement", "publication_ref": [], "table_ref": [], "text": "The dataset used for bias evaluation significantly affects the model bias measurement. This is exemplified by a theoretically fair model, which we instantiate as a TF-IDF (Term Frequency -Inverse Document Frequency) ranking model for caption-to-caption retrieval on gender-neutralised captions. Despite being based on a simple numerical statistic of word occurrences, devoid of any inherent gender bias, this model still exhibits non-zero bias when evaluated on COCO captions. Our findings, reported in Tab. 1, include Bias@K and MaxSkew@K measurements on COCO Val, compared against a random model and CLIP. For Balanced COCO Val, all models register an approximate Bias@K of zero, a consequence of the metric's signed nature that tends to average towards zero over many directions of spurious correlations on biased but balanced data. Yet, for unbalanced data, Bias@K shifts towards the over-represented attribute, making it an unsuitable metric for model bias measurement. MaxSkew@K, despite being an absolute measure, is not exempt from these issues. It still records large values for the theoretically fair model and the random model, suggesting that the established framework may be inadequate for bias measurement on natural image datasets that inherently possess their own biases." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "GENSYNTH: A Synthetic Gender-Balanced Dataset using Contrast Sets", "publication_ref": [], "table_ref": [], "text": "Given the limitations of measuring Bias@K and MaxSkew@K on natural images and the spurious correlations in existing datasets, we propose a framework for editing natural images into synthetic contrast sets that remove spurious background correlations along the attribute of interest (see Fig. 2), and apply the pipeline on COCO to obtain the GENSYNTH dataset (see Fig. 2). We first synthetically edit the person in images to cover both gender labels with fixed background context (Sec. 4.1), followed by automatic filtering that ensures the quality and correctness of the edited persons (Sec. 4.2). Finally, we verify the quality of the edited images and the filtering method (Sec. 4.3). While we implement this for the gender attribute, in practice, our pipeline could be used to generate synthetic contrast sets for other identity attributes, requiring only the availability of person bounding boxes for the source images." }, { "figure_ref": [], "heading": "Synthetically Editing Images", "publication_ref": [ "b8", "b8" ], "table_ref": [], "text": "Leveraging advancements in text-conditioned image generation and editing, we use an instructionbased model, InstructPix2Pix [9], for editing objects in an image -referred to as the source image -while keeping the background unchanged. We edit source images from COCO that (i) contain only one person, inferred from the number of person bounding boxes; and (ii) have a defined gender label, as defined in Sec. 3.2. These restrictions remove ambiguity. Next, we crop the image to the single person bounding box and feed it to InstructPix2Pix [9] along with multiple edit instructions for each attribute label (Tab. 2). The edited person is then replaced in the source image. 
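As a rough illustration of this editing step, the sketch below runs the public InstructPix2Pix pipeline from the diffusers library on a person crop and pastes the edit back into the source image. The instruction templates, the 500 denoising steps, and the text guidance scales of 9.5 and 15 follow the settings reported in Tab. 2 and Appendix A.2; the checkpoint name, the image_guidance_scale value, the working resolution, and the helper structure are assumptions made for illustration.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

# Assumed public checkpoint; the paper does not name the exact weights used.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Instruction templates as in Tab. 2, filled in for each target attribute label.
INSTRUCTIONS = {
    "female": ["Make this person more feminine", "Make this person look like a woman",
               "Turn this person into a woman", "Convert this into a woman"],
    "male": ["Make this person more masculine", "Make this person look like a man",
             "Turn this person into a man", "Convert this into a man"],
}

def edit_person(source: Image.Image, bbox, target: str):
    """Yield candidate edits for one source image and one target gender label."""
    x0, y0, x1, y1 = [int(v) for v in bbox]
    crop = source.crop((x0, y0, x1, y1))
    for instruction in INSTRUCTIONS[target]:
        for text_scale in (9.5, 15.0):            # the two text guidance scales used
            edited_crop = pipe(
                instruction,
                image=crop.resize((512, 512)),    # assumed working resolution
                num_inference_steps=500,          # 500 denoising steps (Appendix A.2)
                guidance_scale=text_scale,
                image_guidance_scale=1.5,         # assumed; not stated in the paper
            ).images[0]
            candidate = source.copy()
            # Replace only the person region; the background stays untouched.
            candidate.paste(edited_crop.resize(crop.size), (x0, y0))
            yield instruction, text_scale, candidate
```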
By only editing the appearance of the person in the image, we preserve the background content and minimize distortion; empirically, we found editing the entire source image rather than just the source person produced lower quality edits with significant hallucination. For further implementation details, refer to the Appendix. " }, { "figure_ref": [], "heading": "Automatic Quality Filtering of Edited Images", "publication_ref": [ "b8", "b23" ], "table_ref": [], "text": "The synthetic edits with InstructPix2Pix [9] can often be of low quality or fail to edit the source person's attribute into the target attribute. In order to ensure the quality and gender accuracy of our synthetic image sets, we introduce an automatic filtering method using K-Nearest Neighbor (KNN), similar to [24] who use KNN to score GAN-generated images.\nFirst, we embed a collection of (i) source person bounding boxes, denoted as $R = \{r_1, r_2, \dots, r_n\}$, and (ii) synthetically-edited person bounding boxes, denoted as $S = \{s_1, s_2, \dots, s_m\}$, using CLIP. For each synthetic box $s_i$, we identify its K-nearest neighbors in this feature space, denoted as $N_{s_i} = \text{KNN}(s_i, R \cup S)$, using the Euclidean distance between the embeddings. If the proportion of real images within $N_{s_i}$, denoted as $P_R(s_i)$, and the proportion of images corresponding to the target gender of $s_i$, denoted as $P_G(s_i)$, exceed predetermined thresholds $\tau_R$ and $\tau_G$ respectively, the edited image $s_i$ is accepted:\n$P_R(s_i) = \frac{1}{K} \sum_{r \in N_{s_i}} \mathbb{1}(r \in R) \quad \text{and} \quad P_G(s_i) = \frac{1}{K} \sum_{r \in N_{s_i}} \mathbb{1}(\text{gender}(r) = \text{gender}(s_i)), \quad (4)$\n$\text{accept}(s_i) = \begin{cases} 1 & \text{if } P_R(s_i) > \tau_R \text{ and } P_G(s_i) > \tau_G \\ 0 & \text{otherwise.} \end{cases} \quad (5)$\nThis process ensures that the accepted images are of high quality and accurately reflect the target gender change. We only retain images where the entire set of edits per unique COCO ID has at least one accepted male and female edit, then randomly select one edit for each gender from images that pass the filter. For examples of edits at each decile of $\tau_R$, see the Appendix." }, { "figure_ref": [], "heading": "Verifying the Quality of GENSYNTH", "publication_ref": [ "b29" ], "table_ref": [], "text": "We evaluate the quality of the GENSYNTH dataset in two ways. First, to measure the correctness of the targeted gender edit, we use CLIP to zero-shot classify the gender of people in the images. Second, to measure the semantic similarity of the edited image to the caption, we measure the text-to-image retrieval performance of CLIP on the synthetic text-image captions. For this, we edit the captions using the reverse procedure in Sec. 3.2 to reflect the gender of the person in the edited image. Then, for each image $I_i$ in GENSYNTH, where $i \in \{1, 2, \dots, N\}$, we have a set of $n$ captions $C_i^j$, $j \in \{1, 2, \dots, n\}$. For each caption $C_i^j$, we perform a retrieval operation from the COCO validation set combined with the query image $I_i$, to find a set of $K$ images that most closely match the caption, according to Euclidean distance of CLIP features. We denote this retrieved set as $R_i^j(K)$. The retrieval performance is evaluated using Recall at K (R@K), which is defined as\n$\text{R@}K = \frac{1}{Nn} \sum_{i=1}^{N} \sum_{j=1}^{n} \mathbb{1}(I_i \in R_i^j(K))$.\nTable 3: Dataset comparison between the original COCO dataset of natural person images and synthetically edited COCO from the GENSWAP and GENSYNTH pipelines. We report the presence of Spurious Background (BG) Correlations, Zero-Shot (ZS) Gender Accuracy, and Text-to-Image Retrieval Recall@K (R@K) amongst COCO Val 5k images using CLIP. Unfilt. 
refers to the synthetic pipeline without automatic quality filtering.\nWe compare GENSYNTH against (i) the original COCO 2017 dataset (train set) of natural images containing persons; and (ii) a weak gender-editing baseline -GENSWAP. This baseline has the same unique COCO images as in GENSYNTH, but only with edited faces -we replace the detected face in the COCO image with a random face of the target gender from the FairFace dataset [30]. Additional implementations of GENSWAP are provided in the Appendix.\nAs shown in Tab. 3, GENSYNTH leads to very similar zero-shot classification and retrieval results to the original COCO images. The filtering step significantly improves both metrics, successfully removing bad edits. The weak baseline, GENSWAP, consistently scores low, showing the importance of an effective editing method." }, { "figure_ref": [], "heading": "Benchmarking Vision-Language Models on Balanced and Unbalanced Evaluation Sets", "publication_ref": [], "table_ref": [], "text": "Here we evaluate original and debiased CLIP models on the datasets described in Sec. 5.1. We only report MaxSkew@K results, as we showed in Sec. 3 that Bias@K is not a reliable metric for evaluating model bias." }, { "figure_ref": [], "heading": "Evaluation Setup", "publication_ref": [ "b43", "b60", "b4", "b27", "b50" ], "table_ref": [], "text": "We use the following three datasets for evaluation: GENSYNTH consists of 7,946 images that have been generated and filtered as discussed in Sec. 4. It consists of 3,973 unique COCO images from the train set (62.6% of which were originally male), with a male and female edit for each. COCO consists of 3,973 original (unedited) images with the same unique COCO IDs as GENSYNTH. All images contain a single person, whose gender can be identified from the caption. COCO Bal consists of 2,970 unique images from COCO, randomly sampled such that there is an equal number of male and female images. We use 5 different random seeds and report average results.\nWe evaluate the following models: (i) the original CLIP model [44]; (ii) CLIP-clip [61], with m = 100 clipped dimensions computed on COCO train 2017; (iii) DebiasCLIP [5], which has been debiased on the FairFace dataset; and (iv) OpenCLIP [28] models trained on LAION 400M and 2BN datasets [51]. We use the ViT-B/32 variant for all models, except for DebiasCLIP, for which ViT-B/16 is used due to its availability from the authors." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b16", "b43", "b60", "b4", "b27", "b50", "b16" ], "table_ref": [], "text": "In Tab. 4 we measure and compare the gender bias of CLIP-like models for the three evaluated datasets defined in Sec. 5.1. Overall we find the MaxSkew@K metric is robust when measured on balanced (COCO Bal) and unbalanced data (COCO), likely due to the normalization factor that considers label distribution of all the images in the dataset. CLIP-clip has the lowest gender bias across all models -which is expected given its targeted clipping of dimensions most correlated with gender -but comes at the cost of zero-shot image classification accuracy (60.1% on ImageNet1k [17]). Interestingly, MaxSkew@K measured on GENSYNTH has much smaller variance between models.\nTable 4: Comparison of Gender Bias between CLIP-like models on COCO-Person datasets. We report the MaxSkew@K in caption-to-image retrieval of gender-neutralised captions. 
We compare CLIP [44], CLIP-clip [61], DebiasCLIP [5], and OpenCLIP [28] trained on LAION 400M & 2BN [51].\nWe additionally report zero-shot image classification accuracy on ImageNet1K [17]. Given that GENSYNTH removes spurious background correlations, this suggests that a significant portion of reported model bias on natural datasets may be due to spurious correlations related to gender rather than the explicit gender of the person." }, { "figure_ref": [], "heading": "Limitations and Ethical Considerations", "publication_ref": [ "b7", "b6" ], "table_ref": [], "text": "Synthetic Shifts. By generating synthetic data, we are creating a new evaluation distribution that does not necessarily represent the real-world distribution of the respective categories. This distribution shift can also be forced in contexts where it does not necessarily make sense to either face swap or make gender edits due to factual histories or biological identity [8].\nAssumptions of Binary Gender. Our data relies on the binary gender labels from the COCO and FairFace datasets. COCO also presents limitations regarding race, ethnicity, and other sensitive attributes. We acknowledge this approach of using binary gender and making reference to perceived gender based on appearance oversimplifies the complexity of gender identity and biological sex, and risks erasing representation of non-binary people. Despite attempts to mitigate this limitation using terms such as \"masculine\" and \"feminine\", the resulting edits were often unusable (due to existing biases in generative models), necessitating reliance on binary and narrow terms. We advocate for future work that encodes and represents non-binary gender in datasets, and improves generalisation in generative and predictive models to non-binary terms.\nStacking Biases. Our pipeline uses a generative image editing model so may inadvertently introduce biases from this model via stereotypical representations of gender, e.g., if \"make this person more feminine\" over-emphasises pink clothes, or \"make this person more masculine\" over-emphasises beards. The automatic filtering step also tends to favour images with simple scene arrangements. Some model-generated images were identified as NSFW, a consequence of training on large-scale internet datasets [7]. Future work could incorporate into our pipeline more capable and fair generative models." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b66", "b29", "b54" ], "table_ref": [], "text": "The reliability of reported model biases in VLMs is affected by the interaction between dataset bias and choice of bias metric. In this paper, we demonstrated that naturalistic images from COCO have spurious correlations in image context with gender, which in turn affects how much trust can be placed in commonly-used metrics such as Bias@K: when measuring model bias, we may in fact be measuring dataset bias. To mitigate these problems, we proposed a pipeline for editing open-domain images at scale, creating gender-balanced contrast sets where the semantic content of the image remains the same except the person bounding box. Our method does not require manual auditing or image curation, relying instead on an effective automatic filtering method. 
Using this synthetically-created contrast set (GENSYNTH) we found that state-of-the-art CLIP-like models measure similarly on gender bias suggesting that measurements of model gender bias can largely be attributed to spurious model associations with gender (such as scene or background information) rather than gender itself. Through these subsequent angles of investigation, we conclude that only focusing on model bias while ignoring how dataset artefacts affect bias metrics paints an unreliable picture of identity-based bias in VLMs. We hope our work contributes to an ongoing discussion of how to seek improved representation and diversity of identity groups in image-captioning datasets, both now and in the future. GENSWAP We use the MTCNN face detector [67] to detect faces in the COCO images (for the same subset in GENSYNTH), and replace them with faces from the FairFace repository [30]. FairFace is a collection of face crops from the YFCC-100M dataset [55], labeled with gender, race and age. We only use images whose age attribute is greater than 19 and randomly sample a face crop from the target gender." }, { "figure_ref": [], "heading": "A.3 Filtering", "publication_ref": [], "table_ref": [], "text": "For the KNN filter, we set the neighbourhood size K = 50, and the thresholds τ R = 0.08 and τ G = 0.5." }, { "figure_ref": [], "heading": "B Spurious Correlations Analysis", "publication_ref": [], "table_ref": [], "text": "In Tab. 7 we show the 20 discovered clusters using K-Means, together with the top 10 salient words according to LDA. For each cluster, we show the male-overrepresentation factor, i.e., the difference between the percentage of images in that particular cluster relative to the percentage of male images in the person class of COCO as a whole." }, { "figure_ref": [], "heading": "C Ablation Study", "publication_ref": [ "b11", "b43", "b60", "b4", "b27", "b50", "b16" ], "table_ref": [], "text": "We ablate the use of a CLIP vision encoder in the KNN filtering pipeline. We replace it with a DINO ViT-B/16 [12] and repeat the analysis. We found that using DINO features is much more powerful when it comes to discriminating between the different images (real versus fake), and that the male and female images are better clustered. Accordingly, for the real vs. fake filter we use a neighborhood size of K = 5,000 and a threshold τ R = 0.0002 (i.e., the generated images have at least one real neighbour). For the male vs. female filter, we use a neighborhood size of K = 50 and a threshold τ G = 0.4. We end up with 571 unique COCO images, or 1,142 images in total (with a male and female edit for each unique image). The R@K results with this dataset are R@1 = 33.7%, R@5 = 57.1% and R@10 = 66.7%, and the zero-shot gender classification accuracy is 87.4%. Due to the different filtering, this dataset (with DINO filtering) is smaller than GENSYNTH and the results have higher variance, but are comparable to GENSYNTH.\nWe evaluate MaxSkew@K on this dataset in Tab. 8. We observe a similar trend to the GENSYNTH dataset, where bias results across models have a smaller variance than results on the unbalanced and balanced COCO datasets. The absolute values of the bias metric are smaller, which we explain with the different images retrieved, and the variance that comes with that.\nTable 8: Comparison of Gender Bias between CLIP-like models on the accepted images using DINO image embeddings for KNN filtering. We report the MaxSkew@K in caption-to-image retrieval of gender-neutralised captions. 
We compare CLIP [44], CLIP-clip [61], DebiasCLIP [5],\nand OpenCLIP [28] trained on LAION 400M & 2BN [51]. We additionally report zero-shot image classification accuracy on ImageNet1K [17]. " }, { "figure_ref": [ "fig_2" ], "heading": "D Qualitative Dataset Examples", "publication_ref": [], "table_ref": [], "text": "In Fig. 3, we show gender edits for the GENSYNTH and GENSWAP datasets, alongside the original COCO image and ID. The GENSYNTH edits are more naturalistic than the GENSWAP edits, and also make changes to the body or clothing of the subject. " }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "E Comparing Image Edits Across Filtering Thresholds", "publication_ref": [], "table_ref": [], "text": "For each edited image, we calculate $P_R$, i.e., the ratio of real images versus fake images in the KNN clustering step. We then average $P_R$ for each pair of images (the male and female edit). In Fig. 4a and Fig. 4b, we show these randomly-selected pairs of gender edits from each decile of averaged $P_R$ to demonstrate how our threshold filtering step improves the quality of the edited images. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements. This work has been supported by the Oxford Artificial Intelligence student society, the Fundação para a Ciência e Tecnologia [Ph.D. Grant 2022.12484.BD] (M.F.), the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines & Systems [EP/S024050/1] (A.S.), and the Economic and Social Research Council Grant for Digital Social Science [ES/P000649/1] (H.R.K.). For computing resources, the authors are grateful for support from Google Cloud and the CURe Programme under Google Brain Research, as well as an AWS Responsible AI Grant." }, { "figure_ref": [], "heading": "Appendix A Implementation Details", "publication_ref": [], "table_ref": [], "text": "Here we provide additional implementation details about our method." }, { "figure_ref": [], "heading": "A.1 Gendered Words and Caption Editing", "publication_ref": [], "table_ref": [], "text": "In Tab. 5 we show the gendered words (Masculine, Feminine) that we use for assigning each caption a gender label. Captions without either a masculine or feminine word, or captions with matches from both of these lists are labeled as undefined. For switching or neutralising the gender in a caption, we map words across the rows of Tab. 5, so for example \"she\" could be replaced with \"he\" or \"they\". In Tab. 6 we show sentences that have been gender-neutralised.\nTable 5: Gendered word pairs. We use the Masculine and Feminine words in order to classify the gender of a person in an image given its caption. When editing the gender of a caption or making it gender-neutral, we use the word from the corresponding pair for the opposite gender or the gender-neutral word, respectively. Table 6: Examples of gender-neutralised captions. We show example original COCO captions with their gender-neutralised replacements, using the corresponding words from Tab. 5.\nOriginal Neutral The woman brushes her teeth in the bathroom.\nThe person brushes their teeth in the bathroom. A man sleeping with his cat next to him.\nA person sleeping with their cat next to them. Two women and two girls in makeup and one is talking on a cellphone.\nTwo people and two children in makeup and one is talking on a cellphone."
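As a rough illustration of this keyword-based procedure, the sketch below assigns an image-level gender label from its captions and neutralises a single caption. The word list is abbreviated and the handling of capitalisation and punctuation is simplified relative to the full mapping in Tab. 5; the names and structure are illustrative, not code from the paper.

```python
import re

# Abbreviated word-pair mapping (gendered word -> gender-neutral word); see Tab. 5.
MASCULINE = {"man": "person", "men": "people", "boy": "child", "boys": "children",
             "he": "they", "his": "their", "him": "them", "father": "parent",
             "gentleman": "person", "male": "person"}
FEMININE = {"woman": "person", "women": "people", "girl": "child", "girls": "children",
            "she": "they", "her": "their", "hers": "theirs", "mother": "parent",
            "lady": "person", "female": "person"}

def words(text):
    return re.findall(r"[a-z']+", text.lower())

def image_gender_label(captions):
    """Concatenate an image's captions; return 'male', 'female' or 'undefined'."""
    vocab = set(words(" ".join(captions)))
    has_m = bool(vocab & MASCULINE.keys())
    has_f = bool(vocab & FEMININE.keys())
    if has_m and not has_f:
        return "male"
    if has_f and not has_m:
        return "female"
    return "undefined"

def neutralise(caption):
    """Replace gendered words with their gender-neutral counterparts."""
    mapping = {**MASCULINE, **FEMININE}
    out = []
    for token in caption.split():
        key = re.sub(r"[^a-z']", "", token.lower())
        out.append(mapping.get(key, token))
    return " ".join(out)

print(neutralise("A man sleeping with his cat next to him"))
# -> "A person sleeping with their cat next to them"
```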
}, { "figure_ref": [], "heading": "A.2 Image editing", "publication_ref": [ "b8" ], "table_ref": [], "text": "Here we provide additional details on the two image editing pipelines in the paper -our proposed method GENSYNTH, and the weak baseline GENSWAP.\nGENSYNTH We edit the COCO train set images by applying Instruct-Pix2Pix [9] on person crops (bounding boxes) with gender-editing instructions, as described in the main paper. We run Instruct-Pix2Pix for 500 denoising steps, and for each instruction, we generate an image with two text guiding scales: 9.5 and 15. We found that a smaller guiding scale sometimes does not produce the required edit, whereas too large a scale results in an image that does not look natural. Using both scales ensures there are multiple candidates for the edited image, and then we can use the filtering pipeline to discard bad edits." } ]
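To make the automatic quality filter of Sec. 4.2 (Eqs. 4-5) concrete, here is a minimal numpy sketch using the K = 50, τ_R = 0.08 and τ_G = 0.5 values reported in Appendix A.3. The features are assumed to be precomputed CLIP embeddings of person crops; the function name and the choice to exclude each edit from its own neighbourhood are assumptions for illustration.

```python
import numpy as np

def knn_filter(real_feats, real_genders, synth_feats, synth_targets,
               k=50, tau_r=0.08, tau_g=0.5):
    """Accept a synthetic edit s_i if, among its k nearest neighbours in the union
    of real and synthetic crop features, the fraction of real crops exceeds tau_r
    and the fraction whose gender matches the edit's target gender exceeds tau_g."""
    all_feats = np.concatenate([real_feats, synth_feats], axis=0)
    is_real = np.array([True] * len(real_feats) + [False] * len(synth_feats))
    genders = np.array(list(real_genders) + list(synth_targets))

    accepted = []
    for i, (feat, target) in enumerate(zip(synth_feats, synth_targets)):
        dists = np.linalg.norm(all_feats - feat, axis=1)    # Euclidean distances
        dists[len(real_feats) + i] = np.inf                 # exclude the query edit itself
        neighbours = np.argsort(dists)[:k]
        p_real = is_real[neighbours].mean()                 # P_R(s_i), Eq. 4
        p_gender = (genders[neighbours] == target).mean()   # P_G(s_i), Eq. 4
        accepted.append(bool(p_real > tau_r and p_gender > tau_g))  # Eq. 5
    return accepted
```

Edits that pass this filter would then be grouped by COCO image ID, keeping an ID only if at least one male and one female edit survive, as described in Sec. 4.2.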
2023-05-24
10.18653/v1/2021.acl-long.81
[ { "authors": "Sandhini Agarwal; Gretchen Krueger; Jack Clark; Alec Radford; Jong Wook Kim; Miles Brundage", "journal": "", "ref_id": "b0", "title": "Evaluating clip: towards characterization of broader capabilities and downstream implications", "year": "2021" }, { "authors": "Erik Altman", "journal": "", "ref_id": "b1", "title": "Synthesizing credit card transactions", "year": "2021" }, { "authors": "Christian Yuki M Asano; Andrew Rupprecht; Andrea Zisserman; Vedaldi", "journal": "", "ref_id": "b2", "title": "Pass: An imagenet replacement for self-supervised pretraining without humans", "year": "2021" }, { "authors": "Brian Belgodere; Pierre Dognin; Adam Ivankay; Igor Melnyk; Youssef Mroueh; Aleksandra Mojsilovic; Jiri Navartil; Apoorva Nitsure; Inkit Padhi; Mattia Rigotti", "journal": "", "ref_id": "b3", "title": "Auditing and generating synthetic data with controllable trust trade-offs", "year": "2023" }, { "authors": "Hugo Berg; Siobhan Mackenzie Hall; Yash Bhalgat; Wonsuk Yang; Hannah Rose Kirk; Aleksandar Shtedritski; Max Bain", "journal": "", "ref_id": "b4", "title": "A prompt array keeps the bias away: Debiasing vision-language models with adversarial learning", "year": "2022" }, { "authors": "Karan Bhanot; Miao Qi; John S Erickson; Isabelle Guyon; Kristin P Bennett", "journal": "Entropy", "ref_id": "b5", "title": "The problem of fairness in synthetic healthcare data", "year": "2021" }, { "authors": "Abeba Birhane; Uday Vinay; Emmanuel Prabhu; Kahembwe", "journal": "", "ref_id": "b6", "title": "Multimodal datasets: misogyny, pornography, and malignant stereotypes", "year": "2021" }, { "authors": "Lin Su; Gilsinia Blodgett; Alexandra Lopez; Robert Olteanu; Hanna Sim; Wallach", "journal": "", "ref_id": "b7", "title": "Stereotyping norwegian salmon: An inventory of pitfalls in fairness benchmark datasets", "year": "2021" }, { "authors": "Tim Brooks; Aleksander Holynski; Alexei A Efros", "journal": "", "ref_id": "b8", "title": "Instructpix2pix: Learning to follow image editing instructions", "year": "2022" }, { "authors": "Joy Buolamwini; Timnit Gebru", "journal": "PMLR", "ref_id": "b9", "title": "Gender shades: Intersectional accuracy disparities in commercial gender classification", "year": "2018" }, { "authors": "Aylin Caliskan; Joanna J Bryson; Arvind Narayanan", "journal": "Science", "ref_id": "b10", "title": "Semantics derived automatically from language corpora contain human-like biases", "year": "2017" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b11", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Xinlei Chen; Hao Fang; Tsung-Yi Lin; Ramakrishna Vedantam; Saurabh Gupta; Piotr Dollár; C Lawrence; Zitnick ", "journal": "", "ref_id": "b12", "title": "Microsoft coco captions: Data collection and evaluation server", "year": "2015" }, { "authors": "Mehdi Cherti; Romain Beaumont; Ross Wightman; Mitchell Wortsman; Gabriel Ilharco; Cade Gordon; Christoph Schuhmann; Ludwig Schmidt; Jenia Jitsev", "journal": "", "ref_id": "b13", "title": "Reproducible scaling laws for contrastive language-image learning", "year": "2022" }, { "authors": "Ching-Yao Chuang; Varun Jampani; Yuanzhen Li; Antonio Torralba; Stefanie Jegelka", "journal": "", "ref_id": "b14", "title": "Debiasing vision-language models via biased prompts", "year": "2023" }, { "authors": "Terrance De Vries; Ishan Misra; Changhan Wang; Laurens Van Der Maaten", "journal": "", 
"ref_id": "b15", "title": "Does object recognition work for everyone", "year": "2019" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Ieee", "ref_id": "b16", "title": "Imagenet: A largescale hierarchical image database", "year": "2009" }, { "authors": "Emily Denton; Ben Hutchinson; Margaret Mitchell; Timnit Gebru; Andrew Zaldivar", "journal": "", "ref_id": "b17", "title": "Image counterfactual sensitivity analysis for detecting unintended bias", "year": "2019" }, { "authors": "Alexey Dosovitskiy; Philipp Fischer; Eddy Ilg; Philip Hausser; Caner Hazirbas; Vladimir Golkov; Patrick Van Der; Daniel Smagt; Thomas Cremers; Brox", "journal": "", "ref_id": "b18", "title": "Flownet: Learning optical flow with convolutional networks", "year": "2015" }, { "authors": "Mureji Fatunde; Crystal Tse", "journal": "", "ref_id": "b19", "title": "Digital Media Firm Stability AI Raises Funds at $1 Billion Value. Bloomberg.com", "year": "2022-10" }, { "authors": "Felix Friedrich; Patrick Schramowski; Manuel Brack; Lukas Struppek; Dominik Hintersdorf; Sasha Luccioni; Kristian Kersting", "journal": "", "ref_id": "b20", "title": "Fair diffusion: Instructing text-to-image generation models on fairness", "year": "2023" }, { "authors": "Matt Gardner; Yoav Artzi; Victoria Basmova; Jonathan Berant; Ben Bogin; Sihao Chen; Pradeep Dasigi; Dheeru Dua; Yanai Elazar; Ananth Gottumukkala", "journal": "", "ref_id": "b21", "title": "Evaluating models' local decision boundaries via contrast sets", "year": "2020" }, { "authors": "Cem Sahin; Stuart Geyik; Krishnaram Ambler; Kenthapadi", "journal": "", "ref_id": "b22", "title": "Fairness-aware ranking in search & recommendation systems with application to linkedin talent search", "year": "2019" }, { "authors": "Shuyang Gu; Jianmin Bao; Dong Chen; Fang Wen", "journal": "Springer", "ref_id": "b23", "title": "Giqa: Generated image quality assessment", "year": "2020" }, { "authors": "Anne Lisa; Kaylee Hendricks; Kate Burns; Trevor Saenko; Anna Darrell; Rohrbach", "journal": "", "ref_id": "b24", "title": "Women also snowboard: Overcoming bias in captioning models", "year": "2018" }, { "authors": "Isabal Hermes", "journal": "", "ref_id": "b25", "title": "Gender representation in ai -part 1: Utilizing stylegan to explore gender directions in face image editing, 8", "year": "2022" }, { "authors": "Amir Hertz; Ron Mokady; Jay Tenenbaum; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", "ref_id": "b26", "title": "Prompt-to-prompt image editing with cross attention control", "year": "2022" }, { "authors": "Gabriel Ilharco; Mitchell Wortsman; Ross Wightman; Cade Gordon; Nicholas Carlini; Rohan Taori; Achal Dave; Vaishaal Shankar; Hongseok Namkoong; John Miller; Hannaneh Hajishirzi; Ali Farhadi; Ludwig Schmidt", "journal": "", "ref_id": "b27", "title": "Openclip", "year": "2021-07" }, { "authors": "Justin Johnson; Bharath Hariharan; Laurens Van Der Maaten; Li Fei-Fei; C Lawrence Zitnick; Ross Girshick", "journal": "", "ref_id": "b28", "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "year": "2017" }, { "authors": "Kimmo Kärkkäinen; Jungseock Joo", "journal": "", "ref_id": "b29", "title": "Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation", "year": "2021" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b30", "title": "A style-based generator architecture for generative adversarial networks", "year": 
"2019" }, { "authors": "Kim Yu Hwan; Se ; Hyun Nam; Seung Baek Hong; Kang Ryoung Park", "journal": "Expert Systems with Applications", "ref_id": "b31", "title": "Gra-gan: Generative adversarial network for image style transfer of gender, race, and age", "year": "2022" }, { "authors": "Daiqing Li; Huan Ling; Seung Wook Kim; Karsten Kreis; Sanja Fidler; Antonio Torralba", "journal": "", "ref_id": "b32", "title": "Bigdatasetgan: Synthesizing imagenet with pixel-wise annotations", "year": "2022" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b33", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b34", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Yingzhou Lu; Huazheng Wang; Wenqi Wei", "journal": "", "ref_id": "b35", "title": "Machine learning for synthetic data generation: a review", "year": "2023" }, { "authors": "Alexandra Sasha Luccioni; Christopher Akiki; Margaret Mitchell; Yacine Jernite", "journal": "", "ref_id": "b36", "title": "Stable bias: Analyzing societal representations in diffusion models", "year": "2023" }, { "authors": "Ninareh Mehrabi; Fred Morstatter; Nripsuta Saxena; Kristina Lerman; Aram Galstyan", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b37", "title": "A survey on bias and fairness in machine learning", "year": "2021" }, { "authors": "Nicole Meister; Dora Zhao; Angelina Wang; V Vikram; Ruth Ramaswamy; Olga Fong; Russakovsky", "journal": "", "ref_id": "b38", "title": "Gender artifacts in visual datasets", "year": "2022" }, { "authors": "Umberto Michieli; Matteo Biasetton; Gianluca Agresti; Pietro Zanuttigh", "journal": "IEEE Transactions on Intelligent Vehicles", "ref_id": "b39", "title": "Adversarial learning and self-teaching techniques for domain adaptation in semantic segmentation", "year": "2020" }, { "authors": " Midjourney", "journal": "", "ref_id": "b40", "title": "Home Page", "year": "2023-05" }, { "authors": "William Peebles; Jun-Yan Zhu; Richard Zhang; Antonio Torralba; Alexei A Efros; Eli Shechtman", "journal": "", "ref_id": "b41", "title": "Gan-supervised dense visual alignment", "year": "2022" }, { "authors": "Rebecca Qian; Candace Ross; Jude Fernandes; Eric Smith; Douwe Kiela; Adina Williams", "journal": "", "ref_id": "b42", "title": "Perturbation augmentation for fairer nlp", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b43", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Sunnie Sy Vikram V Ramaswamy; Olga Kim; Russakovsky", "journal": "", "ref_id": "b44", "title": "Fair attribute classification through latent space de-biasing", "year": "2021" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "PMLR", "ref_id": "b45", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b46", "title": "Highresolution image synthesis with latent 
diffusion models", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b47", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Rob Midjourney Salkowitz; David Founder; Holz", "journal": "", "ref_id": "b48", "title": "On The Impact Of AI On Art, Imagination And The Creative Economy", "year": "" }, { "authors": "Prasanna Sattigeri; Vijil Samuel C Hoffman; Kush R Chenthamarakshan; Varshney", "journal": "IBM Journal of Research and Development", "ref_id": "b49", "title": "Fairness gan: Generating datasets with fairness properties using a generative adversarial network", "year": "2019" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "", "ref_id": "b50", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Carsten Schwemmer; Carly Knight; Emily D Bello-Pardo; Stan Oklobdzija; Martijn Schoonvelde; Jeffrey W Lockhart", "journal": "Socius", "ref_id": "b51", "title": "Diagnosing gender bias in image recognition systems", "year": "2020" }, { "authors": "Ashish Seth; Mayur Hemani; Chirag Agarwal", "journal": "", "ref_id": "b52", "title": "Dear: Debiasing vision-language models with additive residuals", "year": "2023" }, { "authors": "Shuran Song; Fisher Yu; Andy Zeng; Angel X Chang; Manolis Savva; Thomas Funkhouser", "journal": "", "ref_id": "b53", "title": "Semantic scene completion from a single depth image", "year": "2017" }, { "authors": "Bart Thomee; David A Shamma; Gerald Friedland; Benjamin Elizalde; Karl Ni; Douglas Poland; Damian Borth; Li-Jia Li", "journal": "Commun. ACM", "ref_id": "b54", "title": "Yfcc100m: The new data in multimedia research", "year": "2016" }, { "authors": "Antonio Torralba; Alexei A Efros", "journal": "IEEE", "ref_id": "b55", "title": "Unbiased look at dataset bias", "year": "2011" }, { "authors": "Narek Tumanyan; Michal Geyer; Shai Bagon; Tali Dekel", "journal": "", "ref_id": "b56", "title": "Plug-and-play diffusion features for text-driven image-to-image translation", "year": "2022" }, { "authors": " Emiel Van Miltenburg", "journal": "", "ref_id": "b57", "title": "Stereotyping and bias in the flickr30k dataset", "year": "2016" }, { "authors": "Sahil Verma; Julia Rubin", "journal": "IEEE Computer Society", "ref_id": "b58", "title": "Fairness definitions explained", "year": "2018" }, { "authors": "Angelina Wang; Olga Russakovsky", "journal": "", "ref_id": "b59", "title": "Overcoming bias in pretrained models by manipulating the finetuning dataset", "year": "2023" }, { "authors": "Jialu Wang; Yang Liu; Xin Wang", "journal": "", "ref_id": "b60", "title": "Are gender-neutral queries really gender-neutral? 
mitigating gender bias in image search", "year": "2021" }, { "authors": "Junyang Wang; Yi Zhang; Jitao Sang", "journal": "", "ref_id": "b61", "title": "Fairclip: Social bias elimination based on attribute prototype learning and representation neutralization", "year": "2022" }, { "authors": "Mengmeng Wang; Jiazheng Xing; Yong Liu", "journal": "", "ref_id": "b62", "title": "Actionclip: A new paradigm for video action recognition", "year": "2021" }, { "authors": "Tianlu Wang; Jieyu Zhao; Mark Yatskar; Kai-Wei Chang; Vicente Ordonez", "journal": "", "ref_id": "b63", "title": "Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations", "year": "2019" }, { "authors": "Kaiyu Yang; Klint Qinami; Li Fei-Fei; Jia Deng; Olga Russakovsky", "journal": "", "ref_id": "b64", "title": "Towards fairer datasets: Filtering and balancing the distribution of the people subtree in the imagenet hierarchy", "year": "2020" }, { "authors": "Andrew Zhai; Hao-Yu Wu", "journal": "", "ref_id": "b65", "title": "Classification is a strong baseline for deep metric learning", "year": "2018" }, { "authors": "Kaipeng Zhang; Zhanpeng Zhang; Zhifeng Li; Yu Qiao", "journal": "IEEE signal processing letters", "ref_id": "b66", "title": "Joint face detection and alignment using multitask cascaded convolutional networks", "year": "2016" }, { "authors": "Dora Zhao; Angelina Wang; Olga Russakovsky", "journal": "", "ref_id": "b67", "title": "Understanding and evaluating racial biases in image captioning", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 137.43, 555.22, 136.09, 21.14 ], "formula_id": "formula_0", "formula_text": "N male = I∈R K (q) 1[g(I) = male]" }, { "formula_coordinates": [ 3, 322.23, 555.22, 152.34, 21.14 ], "formula_id": "formula_1", "formula_text": "N female = I∈R K (q) 1[g(I) = female]." }, { "formula_coordinates": [ 3, 205.37, 607.5, 200.06, 25.35 ], "formula_id": "formula_2", "formula_text": "δ K (q) = 0, N male + N female = 0 N male -N female N male +N female , otherwise." }, { "formula_coordinates": [ 3, 251.19, 659.57, 253.47, 26.8 ], "formula_id": "formula_3", "formula_text": "Bias@K = 1 |Q| q∈Q δ K (q).(1)" }, { "formula_coordinates": [ 4, 236.8, 107.93, 267.87, 23.91 ], "formula_id": "formula_4", "formula_text": "Skew@K(R(q)) = ln p R K (q),q,A p d,q,A ,(2)" }, { "formula_coordinates": [ 4, 205.73, 202.4, 298.94, 14.58 ], "formula_id": "formula_5", "formula_text": "MaxSkew@K(R(q)) = max Ai∈A Skew Ai @K(R(q)),(3)" }, { "formula_coordinates": [ 5, 309.4, 380.47, 75.55, 14.56 ], "formula_id": "formula_6", "formula_text": "f n = 1 K K k=1 f k n ." }, { "formula_coordinates": [ 7, 133.16, 434.5, 367.64, 29.02 ], "formula_id": "formula_7", "formula_text": "P R (s i ) = 1 K r∈Ns i 1(r ∈ R) and P G (s i ) = 1 K r∈Ns i 1(gender(r) = gender(s i )), (4" }, { "formula_coordinates": [ 7, 500.8, 441.55, 3.87, 8.64 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 7, 198.87, 480.5, 301.93, 22.05 ], "formula_id": "formula_9", "formula_text": "accept(s i ) = 1 if P R (s i ) > τ R and P G (s i ) > τ G 0 otherwise. (5" }, { "formula_coordinates": [ 7, 500.8, 487.24, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 7, 108, 710.23, 171.08, 14.56 ], "formula_id": "formula_11", "formula_text": "R@K = 1 N n N i=1 n j=1 1(I i ∈ R j i (K))." } ]
Balancing the Picture: Debiasing Vision-Language Datasets with Synthetic Contrast Sets
Vision-language models are growing in popularity and public visibility to generate, edit, and caption images at scale; but their outputs can perpetuate and amplify societal biases learned during pre-training on uncurated image-text pairs from the internet. Although debiasing methods have been proposed, we argue that these measurements of model bias lack validity due to dataset bias. We demonstrate there are spurious correlations in COCO Captions, the most commonly used dataset for evaluating bias, between background context and the gender of people in-situ. This is problematic because commonly-used bias metrics (such as Bias@K) rely on per-gender base rates. To address this issue, we propose a novel dataset debiasing pipeline to augment the COCO dataset with synthetic, gender-balanced contrast sets, where only the gender of the subject is edited and the background is fixed. However, existing image editing methods have limitations and sometimes produce low-quality images; so, we introduce a method to automatically filter the generated images based on their similarity to real images. Using our balanced synthetic contrast sets, we benchmark bias in multiple CLIP-based models, demonstrating how metrics are skewed by imbalance in the original COCO images. Our results indicate that the proposed approach improves the validity of the evaluation, ultimately contributing to more realistic understanding of bias in vision-language models.
Brandon Smith; Miguel Farinha; Siobhan Mackenzie Hall; Hannah Rose Kirk; Aleksandar Shtedritski; Max Bain
[ { "figure_caption": "Figure 11Figure1: t-SNE clusters (M = 20) of gender-neutralised caption embeddings. Each cluster is manually assigned a name, then coloured and labelled according to its male over-representation factor. The male over-representation factor is the difference between the percentage of male images in the particular cluster and the percentage of male images overall in the dataset.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: An overview of our pipeline for dataset debiasing across a target attribute, in this case gender, ensuring equal demographic representation. A source image containing a person is given as input to InstructPix2Pix along with instructions to synthesise each attribute label. The resulting edits are filtered for quality via K-Nearest Neighbour (KNN) thresholding to ensure realistic-looking edits for each attribute label (male and female).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Randomly selected examples of GENSYNTH images showing a comparison to the original COCO image and the weak baseline GENSWAP.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Averaged KNN Score (P R ) for pairs of edited images using the GENSYNTH pipeline.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(a) 1st to 4th decile of scores.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparison of model gender bias for CLIP[44], a theoretically fair model (TF-IDF on nongendered words) and a random model, on the COCO validation set under unbalanced and balanced (with standard deviation computed over 5 runs) settings.", "figure_data": "COCO ValCOCO Val (Balanced)ModelBias@KMaxSkew@KBias@KMaxSkew@KK=5 K=10 K=25 K=100K=5K=10K=25K=100Random Model0.370.400.150.060.00±0.070.00±0.07 0.14±0.00 0.07±0.00Fair Model (TF-IDF) 0.220.240.290.22 -0.06±0.00 -0.08±0.00 0.25±0.00 0.18±0.00CLIP0.200.230.280.23 -0.03±0.01 -0.06±0.01 0.24±0.00 0.19±0.01", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Templates used for prompt editing.", "figure_data": "TemplateInstruction Feminine MasculineMake this person more {}femininemasculineMake this person look like a {}womanmanTurn this person into a {}womanmanConvert this into a {}womanman", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Discovered clusters in COCO Captions. We show all 20 clusters with their manually assigned names, together with the top 10 words according to LDA. ∆M represents the deviation from gender parity for males.", "figure_data": "NameWords", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "[44]", "Explanation": "The cited work, CLIP, is a foundational image-text encoder that is used in the citing paper to underpin the research on vision-language models and their capabilities in generative tasks."}, {"Category": "Data Source", "Citation": "[46]", "Explanation": "The cited work, DALL-E, is acknowledged as a popular generative model that has a large user base and generates a significant number of images daily. The citing paper uses this information to highlight the public visibility and growth of vision-language models in the field."}, {"Category": "Data Source", "Citation": "[41]", "Explanation": "The cited work, MidJourney, is mentioned in the discord channel for its large number of members, which the citing paper uses to demonstrate the growing interest in vision-language models in the community."}, {"Category": "Data Source", "Citation": "[49]", "Explanation": "The cited work, Stability.AI, is mentioned for the success of their Stable Diffusion model in attracting a large number of users, which the citing paper uses to highlight the popularity of vision-language models in the field."}, {"Category": "Data Source", "Citation": "[20]", "Explanation": "The cited work, Stability.AI, is mentioned for the reported daily active users of their Stable Diffusion model, which the citing paper uses to highlight the growth and widespread use of vision-language models in the field."}, {"Category": "Supporting Evidence", "Citation": "[7,14]", "Explanation": "The cited works are mentioned for the range of downstream harms that can be inflicted by vision-language models, which the citing paper uses to emphasize the need for reliable benchmarking of bias in these models."}, {"Category": "Supporting Evidence", "Citation": "[1,5,15]", "Explanation": "The cited works have attempted to measure bias in VLMs, providing foundational data and methodologies that support the claims and hypotheses of the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[30]", "Explanation": "The cited work on the use of FairFace dataset has been extended in the citing paper to address the challenges of using cropped face datasets in measuring bias in pre-trained model weights."}, {"Category": "Data Source", "Citation": "[30]", "Explanation": "The FairFace dataset is used in the cited work to provide a specific data source for the evaluation of bias in pre-trained model weights in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work on contrast sets in NLPs provides a foundational approach for the citing paper to draw on in their proposed synthetic pipeline for debiasing a dataset into contrast sets balanced by identity attributes across background contexts."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work on the COCO dataset is used as a data source for the proposed contrast sets in the synthetic pipeline, with a focus on gender bias and the need for balanced sets in measuring model bias."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The cited work on recent advances in controllable image editing and generation is extended in the citing paper to develop a generative pipeline for synthetic image editing in the context of gender bias and contrast sets in the COCO dataset."}, {"Category": "Methodological Basis", "Citation": "[38,59]", "Explanation": "The cited works provide a narrow definition of fairness in retrieval settings, 
which the citing paper adopts to guide its research on measuring model bias in VLM."}, {"Category": "Data Source", "Citation": "[21,25]", "Explanation": "The cited works provide metrics for assessing disparity in distribution between search query results and desired outcomes, which the citing paper utilizes in its research on measuring model bias in VLM."}, {"Category": "Extension or Continuation", "Citation": "[61]", "Explanation": "The cited work Bias@K is an extension of the research on measuring model bias in VLM, providing a metric for assessing disparity in distribution in retrieval settings."}, {"Category": "Extension or Continuation", "Citation": "[23]", "Explanation": "The cited work Skew@K is an extension of the research on measuring model bias in VLM, providing a metric for assessing disparity in distribution in retrieval settings."}, {"Category": "Supporting Evidence", "Citation": "[13]", "Explanation": "The cited work, COCO Captions, serves as a standard benchmark for cross-modal retrieval and image captioning, providing a foundational dataset for the citing paper to build upon in their research on gender fairness in open-domain images."}, {"Category": "Data Source", "Citation": "[61,62]", "Explanation": "The cited works on cross-modal retrieval in the COCO Captions dataset serve as a data source for the citing paper to reference in their study of gender fairness in open-domain images."}, {"Category": "Data Source", "Citation": "[25,68]", "Explanation": "The cited works on image captioning in the COCO Captions dataset provide a data source for the citing paper to reference in their research on gender fairness in open-domain images."}, {"Category": "Supporting Evidence", "Citation": "[37]", "Explanation": "The cited work on measuring bias in generative VLMs provides supporting evidence for the citing paper in their study of gender fairness in open-domain images."}, {"Category": "Supporting Evidence", "Citation": "[10,16,56,60,64,68]", "Explanation": "The cited works on imbalanced demographic representation in image datasets provide supporting evidence for the citing paper in their research on gender fairness in open-domain images."}, {"Category": "Supporting Evidence", "Citation": "[11,52,58]", "Explanation": "The cited works on stereotypical portrayals in image datasets provide supporting evidence for the citing paper in their study of gender fairness in open-domain images."}, {"Category": "Supporting Evidence", "Citation": "[39,60]", "Explanation": "The cited works on identifying spurious gender correlations in the COCO Captions dataset provide supporting evidence for the citing paper in their research on gender fairness in open-domain images."}, {"Category": "Extension or Continuation", "Citation": "[51]", "Explanation": "The cited work on automatic techniques to reduce dataset biases in open-domain images extends the research on gender fairness in open-domain images by providing a new approach to address the issue."}, {"Category": "Extension or Continuation", "Citation": "[65]", "Explanation": "The cited work on manual filtering of harmful images in open-domain images extends the research on gender fairness in open-domain images by providing a new method to address the issue."}, {"Category": "Extension or Continuation", "Citation": "[3]", "Explanation": "The cited work on filtering personal and identifiable information in open-domain images extends the research on gender fairness in open-domain images by providing a new approach to address the issue."}, {"Category": 
"Supporting Evidence", "Citation": "[19,29,40,54]", "Explanation": "The cited works provide a basis for the creation of synthetic datasets in computer vision tasks, which is a key element in the study of data generation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[46][47][48]", "Explanation": "The cited works on generative models offer methods and techniques for generating training data in a scalable and controllable manner, which the citing paper adopts in its research on data generation."}, {"Category": "Data Source", "Citation": "[9,33,42,66]", "Explanation": "The cited works on data generation through synthetic datasets are acknowledged as the source of the data used in the study conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[2,4,6,36]", "Explanation": "The cited works highlight the risks of using synthetic or generated data in data generation, which the citing paper further explores in its research to address these issues."}, {"Category": "Supporting Evidence", "Citation": "[21]", "Explanation": "The cited work on using human feedback to guide diffusion models in generating diverse human images provides a foundational method for the study of data generation in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[31]", "Explanation": "The cited work on using StyleGan to edit images on a spectrum rather than using binary categories is a continuation of the research on data generation in the citing paper, exploring new dimensions in data editing."}, {"Category": "Supporting Evidence", "Citation": "[26]", "Explanation": "The cited work on using human feedback to guide data generation in a more diverse and fair manner is a key element in the study of data generation in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[21]", "Explanation": "The cited work on using human feedback to guide data generation in a more diverse and fair manner is a continuation of the research on data generation in the citing paper, exploring new dimensions in data editing."}, {"Category": "Extension or Continuation", "Citation": "[32]", "Explanation": "The cited work on learning to transfer age, race, and gender across images is a continuation of the research on data generation in the citing paper, exploring new dimensions in data editing."}, {"Category": "Extension or Continuation", "Citation": "[18,45]", "Explanation": "The cited works on GAN-based frameworks for face image editing are extended to open-domain images in the citing paper, introducing an automatic filtering technique to improve the quality of edits."}, {"Category": "Data Source", "Citation": "[22]", "Explanation": "The cited work on contrast sets in NLP is used as a source of inspiration for the use of contrast sets in the citing paper to improve the evaluation of VLM bias in open-domain images."}, {"Category": "Data Source", "Citation": "[5]", "Explanation": "The cited work provides a comprehensive comparison of models and metrics, which the citing paper uses to guide its own research on measuring model bias in VLMs."}, {"Category": "Methodological Basis", "Citation": "[61]", "Explanation": "The cited work introduces the Bias@K metric, which the citing paper adopts to measure the gender bias in image-caption data."}, {"Category": "Methodological Basis", "Citation": "[5,23]", "Explanation": "The cited works provide the definition of Skew@K, which the citing paper adopts to measure the difference between the desired and actual 
proportions of image attributes in retrieved images."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work provides a method for measuring the maximum Skew@K among attribute labels in images for a given text query, which the citing paper adopts in their research to evaluate gender bias in image retrieval."}, {"Category": "Data Source", "Citation": "[13,34]", "Explanation": "The cited work, COCO dataset, is a common source of images and annotations used in the study of gender bias in visual language models. The citing paper utilizes this dataset to measure gender bias in image retrieval."}, {"Category": "Data Source", "Citation": "[61]", "Explanation": "The cited work provides the list of gendered words used in the process of assigning gender labels to images in the COCO dataset."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work, RoBERTa, is used as a method to obtain features for image captions in the citing paper."}, {"Category": "Data Source", "Citation": "[52]", "Explanation": "The cited work is used to investigate the presence of spurious correlations in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work, InstructPix2Pix, is used as a methodological basis for editing objects in an image in the citing paper. The instruction-based model is employed to edit the appearance of a person in a source image while keeping the background unchanged."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work, InstructPix2Pix, provides a method for generating synthetic edits that the citing paper adopts in their research to ensure the quality and accuracy of the generated images."}, {"Category": "Extension or Continuation", "Citation": "[24]", "Explanation": "The cited work uses KNN to score GAN-generated images, which the citing paper extends by applying the same method to filter synthetically-edited person bounding boxes in their research."}, {"Category": "Data Source", "Citation": "[30]", "Explanation": "The cited work, FairFace dataset, is used as a source of random faces for the GENSYNTH pipeline in the citing paper to replace detected faces in COCO images for gender editing."}, {"Category": "Data Source", "Citation": "[44]", "Explanation": "The original CLIP model is cited as the data source for the evaluation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[61]", "Explanation": "The CLIP-clip model is adopted in the citing paper to perform a specific task or analysis."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The DebiasCLIP model is used to extend the research on gender bias in image generation by introducing a new model for debiasing the process."}, {"Category": "Data Source", "Citation": "[28]", "Explanation": "The OpenCLIP models are cited as a data source for the evaluation in the citing paper."}, {"Category": "Data Source", "Citation": "[51]", "Explanation": "The LAOIN datasets are cited as a data source for the training of the OpenCLIP models in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work on ImageNet1K provides the zero-shot image classification accuracy metric used in the citing paper to evaluate the performance of the CLIP-like models in image classification tasks."}, {"Category": "Supporting Evidence", "Citation": "[44]", "Explanation": "The cited work on CLIP provides the 
baseline model for the comparison of gender bias in the CLIP-like models evaluated in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The cited work on DebiasCLIP is an extension of the original CLIP model that aims to address gender bias in image captioning by introducing a debiasing mechanism."}, {"Category": "Data Source", "Citation": "[28]", "Explanation": "The cited work on OpenCLIP is a pre-existing model that the citing paper utilizes in its research on the comparison of gender bias in CLIP-like models."}, {"Category": "Supporting Evidence", "Citation": "[51]", "Explanation": "The cited work on LAOIN provides the training data for the CLIP-like models evaluated in the citing paper, contributing to the analysis of gender bias in image captioning."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work by [8] provides a method for creating synthetic data to evaluate the performance of face editing models in the context of data distribution shifts. The citing paper adopts this method to generate new evaluation data for the face editing task."}, {"Category": "Data Source", "Citation": "[7]", "Explanation": "The cited work is a large-scale internet dataset that the pipeline uses to train the generative model, which may have introduced biases in the model and led to the generation of images that were identified as NSFW."}, {"Category": "Data Source", "Citation": "[67]", "Explanation": "The cited work, MTCNN face detector, is used to detect faces in the COCO images for the GENSYNTH dataset, which is a data source for the citing paper."}, {"Category": "Data Source", "Citation": "[30]", "Explanation": "The FairFace repository is cited as a data source for the face crops used in the GENSYNTH dataset, which is a collection of face crops from the YFCC-100M dataset."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work, DINO ViT-B/16, is used as a vision encoder in the KNN filtering pipeline of the citing paper to improve the ability to distinguish between real and fake images and better cluster male and female images."}, {"Category": "Data Source", "Citation": "[17]", "Explanation": "The cited work, ImageNet1K, is a data source for the evaluation of image classification accuracy in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[44]", "Explanation": "The cited work, CLIP, provides a method for caption-to-image retrieval in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work, DebiasCLIP, is a method for gender bias analysis in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work, OpenCLIP, is a method for gender bias analysis in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[51]", "Explanation": "The cited work, LAOIN 400M & 2BN, is a data source for the training of models in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[61]", "Explanation": "The cited work, CLIP-clip, is a method for gender bias analysis in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work, Instruct-Pix2Pix, is used as a method for applying image editing instructions to person crops in the COCO train set images. The citing paper adopts this method to generate images with gender edits for the GENSYNTH pipeline."}]
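Several of the citations above describe filtering synthetic edits with a KNN score over image features (e.g., DINO embeddings in the GENSYNTH pipeline). A minimal sketch of one plausible filtering rule, keeping an edit only if enough of its nearest reference neighbours are real rather than generated images, is given below; the threshold, K, and the random vectors standing in for real embeddings are assumptions, not the pipeline's actual settings.

```python
import numpy as np

def knn_real_fraction(query_feat, ref_feats, ref_is_real, k=10):
    """Fraction of the k nearest reference features (by cosine similarity)
    that come from real, non-synthetic images."""
    q = query_feat / np.linalg.norm(query_feat)
    r = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    nearest = np.argsort(-(r @ q))[:k]
    return float(np.mean(ref_is_real[nearest]))

def filter_edits(edit_feats, ref_feats, ref_is_real, k=10, threshold=0.5):
    """Indices of synthetic edits whose KNN 'realness' score clears the threshold."""
    return [
        i for i, f in enumerate(edit_feats)
        if knn_real_fraction(f, ref_feats, ref_is_real, k) >= threshold
    ]

# Toy usage: random vectors stand in for DINO embeddings of reference and edited crops.
rng = np.random.default_rng(0)
ref = rng.normal(size=(200, 64))            # reference person crops
is_real = np.array([1] * 100 + [0] * 100)   # first half real, second half synthetic
edits = rng.normal(size=(5, 64))            # candidate edited crops
print(filter_edits(edits, ref, is_real))
```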
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b41", "b13", "b27", "b0", "b26", "b21", "b1", "b5", "b10", "b21", "b32", "b25", "b16", "b21", "b32", "b42", "b2", "b42", "b23", "b24" ], "table_ref": [], "text": "Vision-and-language pre-training (VLP) (Chen et al., 2020;Zhang et al., 2021;Kim et al., 2021;Radford et al., 2021;Wang et al., 2022a) has received increasing attention in recent years for its great success on various vision-and-language tasks, such as visual question answering (Antol et al., 2015), cross-modal retrieval (Plummer et al., 2015), and image captioning (Lin et al., 2014). Different from other foundation models (Bommasani et al., 2021) such as BERT (Devlin et al., 2018) and MAE (He et al., 2022) that only require single-modality data, VLP models rely on largescale aligned image-text datasets (Lin et al., 2014;Sharma et al., 2018;Ordonez et al., 2011;Krishna et al., 2017) to bridge the gap between the two modalities, which requires either extensive manual annotations or heavy data cleaning processes (Lin et al., 2014;Sharma et al., 2018). The natural difficulty of obtaining paired data hinders the scale of cross-modal datasets, while the success of unimodal pre-trained models implies the potential to exploit the unlabeled data for pre-training. Therefore, besides collecting more paired data, it is a worthwhile direction to explore how to utilize lowcost unimodal data with limited cross-modal supervision, i.e., weakly supervised vision-and-language pre-training (WVLP).\nThe core challenge of WVLP is to establish the connection between the two modalities without using a large number of aligned image-text pairs. Existing works on WVLP (Li et al., 2021b;Zhou et al., 2022;Wang et al., 2022b;Chen et al., 2022) usually address this by taking object tags as anchors as they are in the form of text and cover the information of the image at the same time. They use tags to collect weakly-aligned image-text pairs from unaligned unimodal data for pre-training and achieve competitive results compared to standard VLP models, demonstrating that tags can effectively bridge the gap between the two modalities.\nDespite its success, using object tags as anchors suffers from two limitations. First, tags are merely local descriptions instead of a complete representation of the whole image and text. Second, the vocabulary of tags only includes common concepts, making it difficult to represent images with complex semantics (Zhou et al., 2022). These limitations could deteriorate the quality of the weakly-aligned data (and possibly pre-trained models) based on the object tags. Therefore, to further improve the performance of WVLP, we need to reconsider the choice of the cross-modal anchors and find a better approach to measure the alignment between an image and a text.\nRecently relative representation has been proven to be effective in representation learning (Moschella et al., 2022) and zero-shot image classification (Norelli et al., 2022). The main idea is to represent a data point as its similarities to a set of selected data points (anchors). 
We argue that relative representations can be a good choice for WVLP because (1) they are built on the semantic similarities of well-trained neural network representations rather than on superficial human-designed features like tags and (2) they are modality-invariant by design because they reflect the intrinsic relationships between data points, which naturally enables communication between different modalities.\nIn this paper, we propose RELIT, a novel relative representation-based WVLP framework. Instead of object tags, we directly use a minuscule amount (compared to pre-training data) of available imagetext pairs as anchors, and create a common relative representation space with respect to the anchors for unaligned images and text. This allows us to estimate the semantic similarity of any image-text pair by calculating their distance in the relative representation space. In addition, we design two relative representation-based data collection methods that can retrieve or generate weakly-aligned image-text pairs from unaligned unimodal corpora. Experimental results prove the effectiveness of relative representations in bridging the gap between image and text modalities. Moreover, our work reveals a promising research direction to establish crossmodal alignments by finding and aligning invariant data structures in different modalities, which may inspire future works on multimodal pre-training.\nOur main contributions are as follows:\n• We introduce the idea of relative representations in WVLP and demonstrate its superiority over object tags in effectively bridging the gap between different modalities.\n• We propose a relative representation-based WVLP framework that can both retrieve and generate weakly-aligned image-text pairs for learning cross-modal representations.\n• Extensive experiments on four diverse visionand-language tasks show that our proposed framework outperforms strong WVLP baselines and further closes the performance gap between WVLP and standard VLP." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b23", "b40", "b4", "b31", "b9", "b17" ], "table_ref": [], "text": "Relative Representations. The concept of relative representations is initially proposed by Moschella et al. (2022). They show that the relative representations obtained from the representation spaces of different models are similar, which enables comparison and alignment between latent embeddings of different learning models. Data Augmentation. Data augmentation has been extensively employed in various computer vision (Zhang et al., 2018;Cubuk et al., 2018) and natural language processing tasks (Sennrich et al., 2015;Guo et al., 2020). In the area of VLP, Li et al. (2022) augment the noisy web-crawled aligned data by filtering low quality image-text pairs and generating synthetic captions with an image captioner fine-tuned on clean image-text pairs. In this work, we adopt a similar filter-and-generate process in the construction of weakly-aligned data for WVLP, but our relative representation-based pseudo caption generator is fine-tuned on the text-only dataset.\n3 Method" }, { "figure_ref": [], "heading": "Relative Representations", "publication_ref": [ "b24", "b23" ], "table_ref": [], "text": "Figure 1a provides an illustration of relative representations. The basic idea is to represent a data point as its similarities to other data points (anchors). 
In this work, we consider relative representations with cross-modal anchors, which have shown their potential in zero-shot image classification (Norelli et al., 2022). Formally, we are given a set of M cross-modal anchors A = \{a_1, a_2, \dots, a_M\}, where each anchor a_i = (\tilde{x}_i, \tilde{y}_i) is an image-text pair, \tilde{x}_i is the image and \tilde{y}_i is the text. For an image x, a pre-trained image encoder E_I is used to calculate the similarity between x and each anchor a_i as:\n\mathrm{sim}(x, a_i) = \cos(E_I(x), E_I(\tilde{x}_i)) \quad (1)\nwhere \cos(\cdot, \cdot) is the cosine similarity, and the relative representation of x is defined as:\nr_A(x) = (\mathrm{sim}(x, a_1), \dots, \mathrm{sim}(x, a_M)) \quad (2)\nSimilarly, the relative representation of a text y is defined as r_A(y), computed with a pre-trained text encoder E_T via \mathrm{sim}(y, a_i) = \cos(E_T(y), E_T(\tilde{y}_i)).\nSince the relationship between data points is objective, the relative representations obtained by different models should be similar, despite their independent representation spaces (Moschella et al., 2022). In other words, an image and its corresponding text should share similar relative representations. This allows us to leverage relative representations to construct weakly-aligned image-text pairs from large-scale unpaired image and text datasets." }, { "figure_ref": [], "heading": "Weakly-Aligned Image-Text Pairs Retrieval", "publication_ref": [ "b42", "b42", "b32", "b6", "b30" ], "table_ref": [], "text": "While there are no large-scale aligned image-text pairs available, having a joint input of image and text, even if they are not aligned, is still necessary for WVLP (Zhou et al., 2022;Wang et al., 2022b).\nTo achieve this, inspired by previous work (Zhou et al., 2022), we construct a weakly-aligned image-text corpus from the unpaired unimodal corpora by retrieving semantically related sentences for each image based on the relative representations.\nFigure 1b illustrates the process of our weakly-aligned image-text pair retrieval method. First, we collect a very small number of image-text pairs as cross-modal anchors (denoted by pairs of connected squares in the figure). Note that the number of anchors is negligible compared to the image-text pairs used in standard VLP, which keeps our method in a weakly supervised setting. Then, for all images and text, we compute their relative representations with respect to the anchors, which only involves similarity computation within each modality using unimodal pre-trained encoders. We take the cosine similarity between the relative representations of each image and text as their semantic relevance score, and retrieve the best-matching text, i.e., the one with the highest score, for each image to construct a weakly-aligned image-text pair.\nSpecifically, we randomly sample M image-text pairs as anchors A from an aligned image-text dataset (e.g., Conceptual Captions (Sharma et al., 2018)) D_{align} (M \ll |D_{align}|). Given an unaligned image dataset D_I and text dataset D_T, we construct a retrieved weakly-aligned image-text pair dataset D_{wa} = \{(x_1, \hat{y}_1), \dots, (x_N, \hat{y}_N)\}, where N = |D_I| and \hat{y}_i is the retrieved caption from D_T for image x_i, defined as:\n\hat{y}_i = \arg\max_{y \in D_T} \cos(r_A(x_i), r_A(y)) \quad (3)\nWe use the off-the-shelf ViT (Dosovitskiy et al., 2020) and Sentence-BERT (Reimers et al., 2019) to encode images and text, respectively. Our retrieval method with relative representations can effectively improve the quality of the retrieved weakly-aligned dataset compared to tag-based retrieval; a minimal sketch of this retrieval step is given below. 
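As a concrete illustration of Equations (1)-(3), the sketch below computes relative representations from precomputed unimodal embeddings and retrieves the best-matching caption for each image. The random vectors stand in for ViT / Sentence-BERT features, and the function names are illustrative rather than the actual implementation; the paper additionally sparsifies the relative representations to their top-50 entries, which is omitted here.

```python
import numpy as np

def relative_repr(feats, anchor_feats):
    """Eqs. (1)-(2): cosine similarity of every item to every anchor, within one modality.
    feats: (N, d) unimodal embeddings; anchor_feats: (M, d) embeddings of the M anchors."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    a = anchor_feats / np.linalg.norm(anchor_feats, axis=1, keepdims=True)
    return f @ a.T                                   # (N, M) relative representations

def retrieve_captions(img_rel, txt_rel):
    """Eq. (3): for every image, pick the text whose relative representation is closest
    in cosine similarity; the score also serves as a relevance estimate."""
    i = img_rel / np.linalg.norm(img_rel, axis=1, keepdims=True)
    t = txt_rel / np.linalg.norm(txt_rel, axis=1, keepdims=True)
    scores = i @ t.T                                 # (N_img, N_txt) alignment scores
    best = scores.argmax(axis=1)
    return best, scores[np.arange(len(best)), best]

# Toy usage: 8 anchors, 4 unlabeled images, 6 candidate sentences; random vectors stand
# in for the image-side and text-side features of the same 8 anchor pairs.
rng = np.random.default_rng(0)
img_anchor, txt_anchor = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
img_rel = relative_repr(rng.normal(size=(4, 16)), img_anchor)
txt_rel = relative_repr(rng.normal(size=(6, 16)), txt_anchor)
idx, score = retrieve_captions(img_rel, txt_rel)
print(idx, score)   # retrieved caption index and its relevance score, per image
```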
This improvement comes about because relative representations tend to capture the overall semantics of an image, while tags describe only local information. As a result, our method can better measure the semantic similarities between images and text, especially in cases where tag-based retrieval fails to distinguish between images and text that have different semantics but share the same objects." }, { "figure_ref": [ "fig_1" ], "heading": "Pseudo Caption Generation", "publication_ref": [ "b29" ], "table_ref": [], "text": "Although relative representation-based retrieval can construct reasonable weakly-aligned image-text pairs for WVLP, there are still cases where irrelevant text is retrieved. This happens especially when the unaligned unimodal corpora are collected individually and, for some images, no proper captions exist in the corpora. To alleviate this problem, we propose to directly generate pseudo captions for these images. As shown in Figure 2, we first adapt a well-trained text generator to perform conditional text generation given relative representations. Then, since images and text share a common relative representation space, we can directly use this generator to predict the pseudo caption for an image based on its relative representation.\nSpecifically, given the text-only dataset D_T, for each text y \in D_T, we derive a prefix P \in R^{M \times d} from its relative representation r_A(y) as:\nP = [r_A(y)]^T W_r + [E_T(\tilde{y}_1), \dots, E_T(\tilde{y}_M)] W_e \quad (4)\nwhere E_T(\tilde{y}_i) \in R^{d_T} is the encoder output of the text in the i-th anchor, and W_r \in R^{1 \times d} and W_e \in R^{d_T \times d} are two learnable projection matrices. We fine-tune a pre-trained GPT-2 model (Radford et al., 2019) to learn to predict y given P, and name the fine-tuned model Rel2Cap. To further save computational cost, we only consider the entries in P that correspond to the top K anchors with the highest similarities as the model input (a toy sketch of this prefix construction is given after the model overview below).\nAfter training, the model can be used to predict the pseudo caption for an image x with a low-quality retrieved caption by constructing an input prefix P' based on the relative representation of the image, i.e., r_A(x). The definition of P' is similar to Equation 4, except that r_A(y) is replaced by r_A(x). We define a quality score s(x, \hat{y}) = \cos(r_A(x), r_A(\hat{y})) for each weakly-aligned image-text pair (x, \hat{y}) collected either by retrieval or by generation, and replace the retrieved pair with the generated one if the latter has a higher quality score. So far, we have discussed how we collect a weakly-aligned image-text dataset D_{wa} from the unpaired unimodal corpora by relative representation-based retrieval and generation. Next, we describe how we use these data for WVLP." }, { "figure_ref": [], "heading": "Pre-training", "publication_ref": [ "b2", "b42" ], "table_ref": [], "text": "Model Overview. We use the same model architecture as Chen et al. (2022), which consists of a vision encoder and a multimodal encoder. For each weakly-aligned image-text pair, the image is encoded with the vision encoder and the outputs are fed to the multimodal encoder along with the text embeddings to obtain a multimodal representation. Such an end-to-end framework has been proven to be more effective than alternatives that use region features from external object detectors, both in standard VLP and in WVLP. 
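Returning to the Rel2Cap generator above, the following PyTorch sketch shows one way the prefix of Eq. (4) and its top-K truncation could be implemented. The class name, initialisation, and dimensions are illustrative assumptions, and the GPT-2 fine-tuning and decoding steps are omitted.

```python
import torch
import torch.nn as nn

class RelPrefix(nn.Module):
    """Sketch of Eq. (4): map a relative representation r_A(y) and the anchor text
    features E_T(y_i) to an M x d prefix for a causal LM such as GPT-2."""
    def __init__(self, d_text, d_model):
        super().__init__()
        self.W_r = nn.Parameter(0.02 * torch.randn(1, d_model))       # R^{1 x d}
        self.W_e = nn.Parameter(0.02 * torch.randn(d_text, d_model))  # R^{d_T x d}

    def forward(self, rel, anchor_text_feats, top_k=50):
        # rel: (M,) relative representation; anchor_text_feats: (M, d_T)
        prefix = rel.unsqueeze(1) @ self.W_r + anchor_text_feats @ self.W_e  # (M, d)
        keep = torch.topk(rel, k=min(top_k, rel.numel())).indices           # top-K anchors only
        return prefix[keep]                                                  # (K, d)

# Toy usage: 8192 anchors, Sentence-BERT-sized anchor features, GPT-2-sized prefix.
module = RelPrefix(d_text=768, d_model=768)
prefix = module(torch.rand(8192), torch.randn(8192, 768), top_k=50)
print(prefix.shape)   # torch.Size([50, 768]) -> fed to the LM as prefix embeddings
```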
We apply three pre-training objectives to learn multimodal representations from the collected weakly-aligned image-text pairs: masked tag prediction (MTP), masked language modeling (MLM) and image-text matching (ITM).\nMasked Tag Prediction. This objective aims to learn object-level cross-modal alignment from the image-only data and their detected object tags. Following previous works (Li et al., 2021b;Chen et al., 2022), we randomly mask out the tags with a probability of 15%, and then predict the masked tags conditioned on the image and the other unmasked tags.\nFormally, given an image x \in D_I and its detected object tags t, the MTP objective is defined as:\nL_{MTP} = -E_{x \in D_I} \log P(t_m | t_{\backslash m}, x) \quad (5)\nwhere t_m and t_{\backslash m} represent the masked and unmasked object tags, respectively.\nMasked Language Modeling. To better fuse the two modalities, the masked language modeling objective is adopted to learn from the joint image-text inputs of the weakly-aligned corpora. Since the weakly-aligned pairs may contain noise in the retrieved or generated text, we only mask out and predict the noun phrases in the text, inspired by Zhou et al. (2022). The MLM loss is formulated as:\nL_{MLM} = -E_{(x, \hat{y}) \in D_{wa}} \log P(\hat{y}_m | \hat{y}_{\backslash m}, x) \quad (6)\nwhere \hat{y}_m and \hat{y}_{\backslash m} are the masked and unmasked text.\nImage-Text Matching. ITM is a commonly used objective for learning instance-level cross-modal alignment in VLP, which aims to distinguish whether an image-text pair is semantically matched. We randomly replace the text in half of the image-text pairs with another text to form the training input, and define the label of each pair as l \in \{0, 1\}, where 1 indicates that the pair is a match. The ITM objective is to minimize the binary cross-entropy loss:\nL_{ITM} = -E_{(x, \hat{y}) \in D'_{wa}} \log P(l | x, \hat{y}) \quad (7)\nwhere D'_{wa} is the dataset after random replacement.\nRelative Representation-Guided Training. To further reduce the impact of noisy image-text pairs in the weakly-aligned dataset, we apply the quality score s(x, \hat{y}) of each pair, described in Section 3.3, to L_{MLM} and L_{ITM} to guide the training to learn more from high-quality data:\nL_{MLM} = -E_{(x, \hat{y}) \in D_{wa}} s(x, \hat{y}) \log P(\hat{y}_m | \hat{y}_{\backslash m}, x) \quad (8)\nL_{ITM} = -E_{(x, \hat{y}) \in D'_{wa}} s(x, \hat{y}) \log P(l | x, \hat{y}) \quad (9)\n4 Experiments" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b42", "b2", "b32", "b43" ], "table_ref": [], "text": "We follow previous WVLP works (Li et al., 2021b;Zhou et al., 2022;Wang et al., 2022b;Chen et al., 2022) and conduct experiments in two different settings, each containing an image-only dataset and a text-only dataset. The first setting treats the images and text from Conceptual Captions (CC) (Sharma et al., 2018) as individually collected unimodal datasets without the alignment information. The second setting uses images from CC and text from BookCorpus (Zhu et al., 2015), which is a more realistic scenario where images and text are gathered separately from different sources." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b24", "b12", "b17", "b42", "b22", "b5", "b41", "b8", "b33", "b38", "b26" ], "table_ref": [], "text": "Relative Representations. We randomly select 8,192 aligned image-text pairs from CC as anchors, yielding relative representations that are vectors of 8,192 dimensions. To save computational cost, inspired by Norelli et al. (2022), we only keep the highest 50 dimensions and set the others to 0.\nWeakly-Aligned Data Construction. We implement the retrieval system with the faiss (Johnson et al., 2019) library; a minimal sketch of such an index is shown below. 
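A minimal sketch of what such a faiss index over relative representations could look like: inner product over L2-normalised vectors, so the search score matches the cosine of Eq. (3). The corpus sizes and random inputs are placeholders, and the top-50 sparsification described above is not shown.

```python
import numpy as np
import faiss

M = 8192                                               # one dimension per anchor
txt_rel = np.random.rand(1000, M).astype("float32")    # relative reps of the text corpus
img_rel = np.random.rand(64, M).astype("float32")      # relative reps of a batch of images

# Inner product over L2-normalised vectors equals cosine similarity, matching Eq. (3).
faiss.normalize_L2(txt_rel)
faiss.normalize_L2(img_rel)
index = faiss.IndexFlatIP(M)
index.add(txt_rel)

scores, ids = index.search(img_rel, 1)                 # best-matching caption per image
# ids[i, 0]    -> index of the caption retrieved for image i
# scores[i, 0] -> its relevance score, reusable as the quality score s(x, \hat{y})
```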
For each image we only retrieve the text with the best match score. For Rel-Cap, we fine-tune GPT-2 with a learning rate of 5e-5 and a batch size of 1, 024 for 5 epochs on the textonly dataset. We generate 5 pseudo-captions for each image using nucleus sampling with p = 0.9 which proved effective in synthetic caption generation (Li et al., 2022), and rank the results with the quality scores. We also include the weaklyaligned dataset based on tag-based retrieval in the pre-training, as described in Zhou et al. (2022).\nPre-training. We use the same architecture as Chen et al. ( 2022) which includes a 12-layer Swin-Transformer (Swin B-384/32) (Liu et al., 2021) as the vision encoder and a 12-layer Transformer initialized from BERT-base (Devlin et al., 2018) as the multimodal encoder. For object tags, we utilize the off-the-shelf object detector provided by VinVL (Zhang et al., 2021). We pre-train the model with a total training step of 150k and a batch size of 512. We use an AdamW optimizer (Kingma and Ba, 2014) with an initial learning rate of 3e-5, and the warm-up ratio is set to 10%. The pre-training takes 3 days on 4 NVIDIA A100 GPUs.\nDownstream Tasks. We follow previous works and test our pre-trained model on four downstream V+L tasks, including Visual Question Answering (VQAv2) (Goyal et al., 2017), Natural Language for Visual Reasoning (NLVR 2 ) (Suhr et al., 2018), Visual Entailment (VE) (Xie et al., 2019) and image retrieval (Flickr30k) (Plummer et al., 2015). Details of the task settings and the finetuning strategies are in Appendix A." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b3", "b42" ], "table_ref": [ "tab_1", "tab_2" ], "text": "We first compare our proposed RELIT with previous methods pre-trained with unaligned images and text from CC. Note that these baselines only utilize object tags. Table 1 shows the experimental results on the downstream tasks. Our method outperforms previous WVLP methods on all downstream tasks. Specifically, RELIT outperforms previous best results by 1.8% on NLVR 2 and by 3.8% on the image retrieval task (Flickr30k), both of which benefit from the instance-level cross-modal alignment capability of the pre-trained model (Chen et al., 2020;Zhou et al., 2022). This suggests that our relative representation-based method improves the alignment quality of weakly-aligned image-text pairs compared to previous tag-based approaches, resulting in improved cross-modal alignment capability of the pre-trained model. When pre-trained with images from CC and text from BookCorpus, as shown in Table 2, our proposed RELIT also achieves the best results on all downstream tasks. This demonstrates that the proposed relative representation-based methods can effectively mine useful cross-modal alignment information for multimodal pre-training from imageonly and text-only data, even if they are collected separately from different sources." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b17" ], "table_ref": [], "text": "We conduct an ablation study to verify the effectiveness of the proposed relative representation-based retrieval and generation. All models are pre-trained on weakly-aligned data derived from unaligned CC images and text. As we can see from the table, compared to tag-based retrieved data (Retrv (Tag)), pre-training with relative representation-based retrieved data (Retrv (Relrep)) performs better on downstream tasks. 
Besides, the model achieves the best results when the generated pseudo captions (Rel2Cap) are included during pretraining. We believe this is because the original CC dataset contains noisy captions, such as alt-texts that do not describe the image contents, which is suboptimal for VLP (Li et al., 2022). In summary, the experimental results demonstrate that both our retrieval and generation methods contribute to the performance of the pre-training.\nWe also compare the performance of the pretrained models on downstream tasks with and without relative representation-guided training. As shown in Table 4, pre-training with guided training can consistently improve results across all downstream tasks, illustrating that relative representations can be used to detect noise in the weaklyaligned data and guide the model to learn from data with a higher level of alignment." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Data Quality", "publication_ref": [ "b11", "b24", "b21" ], "table_ref": [ "tab_4", "tab_3" ], "text": "We evaluate the quality of different kinds of weaklyaligned data from unaligned CC images and text, and the results are listed in Table 5. We use CLIP-Score (Hessel et al., 2021) to measure the overall alignment of all weakly-aligned image-text pairs. As we can see from the table, the data quality of Retrv (Relrep) is significantly higher than that of Retrv (Tag), which again illustrates the superiority of relative representations as cross-modal anchors. In addition, Rel2Cap further improves data quality by filtering and replacing low-quality pairs in Retrv (Relrep). The analysis of the data quality here is consistent with the analysis of pre-training results in Table 3, and again proves that our relative representation-based methods can produce high quality weakly-aligned data from unaligned unimodal data. The number of anchors has a significant influence on the effect of relative representations (Norelli et al., 2022). To verify its influence on the collected weakly-aligned image-text pairs, we test the quality of the data retrieved with different numbers of anchors on the COCO (Lin et al., 2014) dataset. From Figure 3, we can see that as the number of anchors increases, the quality of the retrieved data also improves. In addition, we evaluate the downstream task accuracy using models pretrained on data with varying numbers of anchors. Specifically, we generate 3 random sets of anchors for each size, and retrieve the weakly-aligned data with different sets of anchors. We pre-train models on each set of the retrieved data with the same hyperparamters, and fine-tune them on the NLVR 2 task. The results are shown in Figure 4. In general, the higher the number of anchors, the better the model performance. We use 8, 192 anchors in our final experiments as a trade-off between representation capability and computational cost. However, using more anchors will almost certainly give better results due to better quality of the data, which indicates the scalability of our approach. We leave more exploration on this for future work." }, { "figure_ref": [], "heading": "Effects of Anchor Selection", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "We also conduct experiments to verify the impact of anchor diversity on data quality. Specifically, we considered three sampling methods on the COCO dataset: random, diverse, and nondiverse. The diverse sampling first performs Kmeans clustering on all the data, and selects one anchor from each cluster. 
The non-diverse sampling uses a greedy algorithm to select k anchors, at each step choosing the data point closest to the average of the anchors already selected (a k-means-based sketch of the diverse variant is given below). Table 6 lists the data quality results obtained with the different sampling methods. In general, diverse anchors lead to better quality, while random anchors perform satisfactorily when the number of anchors is large enough. Non-diverse anchors can result in catastrophic data quality." }, { "figure_ref": [ "fig_5", "fig_5", "fig_5", "fig_5" ], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "To explore the reasons for the improvement in data quality, we show two examples of the comparisons between different weakly-aligned image-text pairs in Figure 6. In each example, we provide the ground-truth caption of the image and the detected object tags, as well as three weakly-aligned captions. From these two examples, we can see that the captions retrieved by tags do have many of the same tags as the images (underlined in the figure), but are not good descriptions of the images. In contrast, our relative representation-based retrieval and generation methods are able to obtain captions that are more relevant to the overall semantics of the images. Specifically, in the example in Figure 6a, our proposed methods successfully identify key information in the image such as \"golfer\", which is difficult for tag-based retrieval since there is no such tag as \"golfer\". The same thing happens to Retrv (Tag) in Figure 6b, which retrieves a caption related to \"cat\" instead of \"lynx\". In this example, our retrieval method recognizes the animal in the image as \"cheetah\", which is close but not exactly correct, while our generation method correctly generates a caption related to the correct concept \"lynx\". This indicates that our generation method has the ability to generate pseudo captions of better quality when the retrieved ones are not good enough.\nIn Figure 5 we further visualize the relative representations of the image and the two retrieved captions in Figure 6a, which helps understand the effectiveness of relative representations in aligning semantically related image-text pairs. From the figure we can see that the image and our retrieved caption Retrv (Relrep) activate the same group of anchors (i.e., have high similarities with these anchors), which makes them close in the relative representation space. On the other hand, Retrv (Tag) activates a completely different set of anchors, which leads to a large distance between it and the image in the relative representation space. These observations suggest that (1) relative representations are (almost) modality-invariant and (2) relative representations can be utilized to effectively estimate the cross-modal alignment of data in different modalities. These properties of the relative representations make them naturally suitable for WVLP, which is verified in this paper." }, { "figure_ref": [], "heading": "Figure 6a example captions", "publication_ref": [], "table_ref": [], "text": "Retrv (Tag): close up head shot of a small white fluffy long haired dog with a black nose, dark round eyes, black lips and a green ribbon. Retrv (Relrep): golfer walks off the tee on the 18th hole with his caddie, during the first round."
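As a sketch of the diverse anchor sampling compared above: run k-means over candidate pair embeddings and take one anchor per cluster. Which member of each cluster to take is not specified above, so the nearest-to-centroid choice here, along with the feature dimensions, is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def diverse_anchor_ids(pair_feats, num_anchors, seed=0):
    """Pick `num_anchors` diverse anchors: k-means over candidate image-text pair
    features, then the candidate nearest to each cluster centroid."""
    km = KMeans(n_clusters=num_anchors, n_init=10, random_state=seed).fit(pair_feats)
    ids = []
    for c in range(num_anchors):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(pair_feats[members] - km.cluster_centers_[c], axis=1)
        ids.append(int(members[dists.argmin()]))
    return ids

# Toy usage: 2,000 candidate pairs embedded in 32 dimensions, 64 anchors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(2000, 32))
anchors = diverse_anchor_ids(feats, num_anchors=64)
print(len(anchors), len(set(anchors)))   # 64 distinct anchor indices
```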
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper introduces the idea of relative representations to weakly-supervised vision-and-language pre-training and demonstrates its effectiveness in bridging the gap between the two modalities. We propose a relative representation-based framework that can both retrieve and generate weakly-aligned image-text pairs for pre-training. Experimental results show that our method outperforms all previous tag-based approaches under the weaklysupervised setting. We hope our work will motivate future work in multimodal pre-training." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "As this work is mainly focused on weakly supervised vision-and-language pre-training, we do not fully explore the factors that may influence the performance of relative representations, such as the use of different unimodal encoders and the source of the anchors. Besides, we only validate the effectiveness of relative representations in a weakly supervised setting, while it remains to be explored whether it is also useful for standard VLP and multimodal learning in other modalities (e.g., audio and video). We will further exploit the potential of relative representations and validate it in more cross-modal learning scenarios in the future." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work is supported by the National Key R&D Program of China (2022ZD0160502) and the National Natural Science Foundation of China (No. 61925601, 62276152, 62236011). We thank Ziyue Wang, Fuwen Luo, Rui Jiao and Zonghan Yang for their advices in paper writing." }, { "figure_ref": [], "heading": "A Details of Downstream Tasks", "publication_ref": [ "b39", "b3", "b3", "b2", "b26" ], "table_ref": [], "text": "Visual Question Answering (VQA) The task of VQA is to answer questions correctly according to the given images. We follow previous works (Yu et al., 2019;Chen et al., 2020) and formulate VQA as a classification task with 3, 192 classes representing the most frequent answers in the dataset. We fine-tune the pre-trained model for 10 epochs with a batch size of 256. We use an AdamW optimizer with a peak learning rate of 5 × 10 -5 .\nNatural Language for Visual Reasoning (NLVR 2 ) The objective of NLVR 2 is to decide if a natural language description is true for a given pair of images. We follow previous work (Chen et al., 2020) to form two image-text pairs as inputs, and concatenate the two [CLS] outputs of the model as the final representation for classification. We fine-tune the model for 10 epochs with a batch size of 128 and a peak learning rate of 2.5 × 10 -5 .\nVisual Entailment (VE) Given an image and a text hypothesis, the task of VE is to determine whether the image implies the hypothesis. This is formulated as a three-way classification task to predict whether the logical relationship between the image and the text is entailment, neutral or contradiction. For the VE task, we fine-tune the pre-trained model with a batch size of 64 and a peak learning rate of 1 × 10 -5 for 5 epochs.\nImage Retrieval (Flickr30k) We follow previous works (Li et al., 2021b;Chen et al., 2022) to conduct the image retrieval task on the Flickr30k (Plummer et al., 2015) dataset. We sample 15 negative image-text pairs for each positive pair by replacing its text with randomly sampled ones. The batch size is set to 512. 
We fine-tune the model with a peak learning rate of 2.5 × 10^-5 for 10 epochs." }, { "figure_ref": [], "heading": "B Additional Examples", "publication_ref": [], "table_ref": [], "text": "In Figure 7, we provide more examples of different kinds of weakly-aligned image-text pairs. From these examples, we can see that our relative representation-based approaches yield higher quality weakly-aligned image-text pairs compared to tag-based retrieval. " } ]
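Tying the retrieval and generation stages together, one plausible form of the quality-score-based replacement rule of Section 3.3 is sketched below: keep the retrieved caption unless a generated candidate scores higher under s(x, \hat{y}) = cos(r_A(x), r_A(\hat{y})). The function names and random inputs are illustrative only.

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pick_caption(img_rel, retrieved, generated):
    """Choose between a retrieved caption and generated candidates by the quality score
    s(x, y) = cos(r_A(x), r_A(y)). Each caption is a (text, relative_representation) pair."""
    best_text, best_score = retrieved[0], cos(img_rel, retrieved[1])
    for text, rel in generated:
        score = cos(img_rel, rel)
        if score > best_score:
            best_text, best_score = text, score
    return best_text, best_score

# Toy usage with random relative representations over 8 anchors.
rng = np.random.default_rng(0)
img = rng.random(8)
retrieved = ("a dog runs on the beach", rng.random(8))
generated = [(f"pseudo caption {i}", rng.random(8)) for i in range(5)]
print(pick_caption(img, retrieved, generated))
```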
2023-05-24
[ { "authors": "Stanislaw Antol; Aishwarya Agrawal; Jiasen Lu; Margaret Mitchell; Dhruv Batra; C Lawrence Zitnick; Devi Parikh", "journal": "", "ref_id": "b0", "title": "VQA: Visual question answering", "year": "2015" }, { "authors": "Rishi Bommasani; Drew A Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Sydney Von Arx; Jeannette Michael S Bernstein; Antoine Bohg; Emma Bosselut; Brunskill", "journal": "", "ref_id": "b1", "title": "On the opportunities and risks of foundation models", "year": "2021" }, { "authors": "Chi Chen; Peng Li; Maosong Sun; Yang Liu", "journal": "", "ref_id": "b2", "title": "End-to-end unsupervised vision-and-language pretraining with referring expression matching", "year": "2022" }, { "authors": "Yen-Chun Chen; Linjie Li; Licheng Yu; Ahmed El Kholy; Faisal Ahmed; Zhe Gan; Yu Cheng; Jingjing Liu", "journal": "", "ref_id": "b3", "title": "UNITER: Universal image-text representation learning", "year": "2020" }, { "authors": "Barret Ekin D Cubuk; Dandelion Zoph; Vijay Mane; Quoc V Vasudevan; Le", "journal": "", "ref_id": "b4", "title": "Autoaugment: Learning augmentation policies from data", "year": "2018" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b6", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Zi-Yi Dou; Yichong Xu; Zhe Gan; Jianfeng Wang; Shuohang Wang; Lijuan Wang; Chenguang Zhu; Pengchuan Zhang; Lu Yuan; Nanyun Peng", "journal": "", "ref_id": "b7", "title": "An empirical study of training end-to-end vision-and-language transformers", "year": "2022" }, { "authors": "Yash Goyal; Tejas Khot; Douglas Summers-Stay; Dhruv Batra; Devi Parikh", "journal": "", "ref_id": "b8", "title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering", "year": "2017" }, { "authors": "Demi Guo; Yoon Kim; Alexander M Rush", "journal": "", "ref_id": "b9", "title": "Sequence-level mixed sample data augmentation", "year": "2020" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b10", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "Jack Hessel; Ari Holtzman; Maxwell Forbes; Ronan Le Bras; Yejin Choi", "journal": "", "ref_id": "b11", "title": "CLIPScore: a referencefree evaluation metric for image captioning", "year": "2021" }, { "authors": "Jeff Johnson; Matthijs Douze; Hervé Jégou", "journal": "IEEE Transactions on Big Data", "ref_id": "b12", "title": "Billion-scale similarity search with GPUs", "year": "2019" }, { "authors": "Wonjae Kim; Bokyung Son; Ildoo Kim", "journal": "", "ref_id": "b13", "title": "ViLT: Vision-and-language transformer without convolution or region supervision", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b14", "title": "", "year": "" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b15", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Ranjay Krishna; Yuke Zhu; Oliver Groth; Justin Johnson; Kenji Hata; Joshua Kravitz; Stephanie Chen; Yannis Kalantidis; Li-Jia 
Li; David A Shamma", "journal": "International journal of computer vision", "ref_id": "b16", "title": "Visual Genome: Connecting language and vision using crowdsourced dense image annotations", "year": "2017" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "", "ref_id": "b17", "title": "BLIP: Bootstrapping language-image pretraining for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Junnan Li; Ramprasaath Selvaraju; Akhilesh Gotmare; Shafiq Joty; Caiming Xiong; Steven Chu; Hong Hoi", "journal": "Advances in neural information processing systems", "ref_id": "b18", "title": "Align before fuse: Vision and language representation learning with momentum distillation", "year": "2021" }, { "authors": "Liunian Harold; Li ; Mark Yatskar; Cho-Jui Da Yin; Kai-Wei Hsieh; Chang", "journal": "", "ref_id": "b19", "title": "VisualBERT: A simple and performant baseline for vision and language", "year": "2019" }, { "authors": "Liunian Harold; Li ; Haoxuan You; Zhecan Wang; Alireza Zareian; Shih-Fu Chang; Kai-Wei Chang", "journal": "", "ref_id": "b20", "title": "Unsupervised vision-and-language pre-training without parallel images and captions", "year": "2021" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b21", "title": "Microsoft COCO: Common objects in context", "year": "2014" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b22", "title": "Swin Transformer: Hierarchical vision Transformer using shifted windows", "year": "2021" }, { "authors": "Luca Moschella; Valentino Maiorca; Marco Fumero; Antonio Norelli; Francesco Locatello; Emanuele Rodolà", "journal": "", "ref_id": "b23", "title": "Relative representations enable zeroshot latent space communication", "year": "2022" }, { "authors": "Antonio Norelli; Marco Fumero; Valentino Maiorca; Luca Moschella; Emanuele Rodolà; Francesco Locatello", "journal": "", "ref_id": "b24", "title": "ASIF: Coupled data turns unimodal models to multimodal without training", "year": "2022" }, { "authors": "Vicente Ordonez; Girish Kulkarni; Tamara Berg", "journal": "Advances in neural information processing systems", "ref_id": "b25", "title": "Im2Text: Describing images using 1 million captioned photographs", "year": "2011" }, { "authors": "Liwei Bryan A Plummer; Chris M Wang; Juan C Cervantes; Julia Caicedo; Svetlana Hockenmaier; Lazebnik", "journal": "", "ref_id": "b26", "title": "Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models", "year": "2015" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b27", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b28", "title": "", "year": "" }, { "authors": "Alec Radford; Jeff Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b29", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Nils Reimers; Iryna Gurevych; Nils Reimers; Iryna Gurevych; Nandan Thakur; Nils Reimers; Johannes Daxenberger; Iryna Gurevych; Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": 
"b30", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "", "ref_id": "b31", "title": "Improving neural machine translation models with monolingual data", "year": "2015" }, { "authors": "Piyush Sharma; Nan Ding; Sebastian Goodman; Radu Soricut", "journal": "", "ref_id": "b32", "title": "Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning", "year": "2018" }, { "authors": "Alane Suhr; Stephanie Zhou; Ally Zhang; Iris Zhang; Huajun Bai; Yoav Artzi", "journal": "", "ref_id": "b33", "title": "A corpus for reasoning about natural language grounded in photographs", "year": "2018" }, { "authors": "Peng Wang; An Yang; Rui Men; Junyang Lin; Shuai Bai; Zhikang Li; Jianxin Ma; Chang Zhou; Jingren Zhou; Hongxia Yang; ; ", "journal": "", "ref_id": "b34", "title": "Ofa: Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b35", "title": "", "year": "" }, { "authors": "Teng Wang; Wenhao Jiang; Zhichao Lu; Feng Zheng; Ran Cheng; Chengguo Yin; Ping Luo", "journal": "", "ref_id": "b36", "title": "Vlmixer: Unpaired vision-language pre-training via cross-modal cutmix", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b37", "title": "", "year": "" }, { "authors": "Ning Xie; Farley Lai; Derek Doran; Asim Kadav", "journal": "", "ref_id": "b38", "title": "Visual entailment: A novel task for fine-grained image understanding", "year": "2019" }, { "authors": "Zhou Yu; Jun Yu; Yuhao Cui; Dacheng Tao; Qi Tian", "journal": "", "ref_id": "b39", "title": "Deep modular co-attention networks for visual question answering", "year": "2019" }, { "authors": "Hongyi Zhang; Moustapha Cisse; David Yann N Dauphin; Lopez-Paz", "journal": "", "ref_id": "b40", "title": "mixup: Beyond empirical risk minimization", "year": "2018" }, { "authors": "Pengchuan Zhang; Xiujun Li; Xiaowei Hu; Jianwei Yang; Lei Zhang; Lijuan Wang; Yejin Choi; Jianfeng Gao", "journal": "", "ref_id": "b41", "title": "VinVL: Revisiting visual representations in vision-language models", "year": "2021" }, { "authors": "Mingyang Zhou; Licheng Yu; Amanpreet Singh; Mengjiao Wang; Zhou Yu; Ning Zhang", "journal": "", "ref_id": "b42", "title": "Unsupervised vision-and-language pre-training via retrieval-based multi-granular alignment", "year": "2022" }, { "authors": "Yukun Zhu; Ryan Kiros; Rich Zemel; Ruslan Salakhutdinov; Raquel Urtasun; Antonio Torralba; Sanja Fidler", "journal": "", "ref_id": "b43", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "year": "2015" } ]
[ { "formula_coordinates": [ 3, 106.93, 364.88, 182.93, 10.68 ], "formula_id": "formula_0", "formula_text": "sim(x, a i ) = cos(E I (x), E I (x i ))(1)" }, { "formula_coordinates": [ 3, 88.11, 427.71, 201.76, 10.72 ], "formula_id": "formula_1", "formula_text": "r A (x) = (sim(x, a 1 ), . . . , sim(x, a M ))(2)" }, { "formula_coordinates": [ 3, 306.14, 386.01, 218.27, 10.81 ], "formula_id": "formula_2", "formula_text": "D wa = {(x 1 , ŷ1 ), . . . , (x N , ŷN )} where N = |D I |" }, { "formula_coordinates": [ 3, 343.17, 436.33, 181.97, 19.37 ], "formula_id": "formula_3", "formula_text": "ŷi = argmax y∈D T cos(r A (x i ), r A (y))(3)" }, { "formula_coordinates": [ 4, 73.9, 460.99, 211.7, 13.18 ], "formula_id": "formula_4", "formula_text": "P = [r A (y)] T W r + [E T (ỹ 1 ), . . . , E T (ỹ M )]W e" }, { "formula_coordinates": [ 5, 103.33, 322.1, 186.54, 11.64 ], "formula_id": "formula_5", "formula_text": "L MTP = -E x∈D I log P (t m |t \\m , x)(5)" }, { "formula_coordinates": [ 5, 86.49, 512.25, 203.38, 11.22 ], "formula_id": "formula_6", "formula_text": "L MLM = -E (x,ŷ)∈Dwa log P (ŷ m |ŷ \\m , x) (6)" }, { "formula_coordinates": [ 5, 105.37, 688.84, 184.5, 14.28 ], "formula_id": "formula_7", "formula_text": "L ITM = -E (x,ŷ)∈D ′ wa log P (l|x, ŷ)(7)" }, { "formula_coordinates": [ 5, 312.77, 126.22, 212.37, 47.9 ], "formula_id": "formula_8", "formula_text": "L MLM = -E (x,ŷ)∈Dwa s(x, ŷ) log P (ŷ m |ŷ \\m , x) (8) L ITM = -E (x,ŷ)∈D ′ wa s(x, ŷ) log P (l|x, ŷ) (9)" } ]
Weakly Supervised Vision-and-Language Pre-training with Relative Representations
Weakly supervised vision-and-language pre-training (WVLP), which learns cross-modal representations with limited cross-modal supervision, has been shown to effectively reduce the data cost of pre-training while maintaining decent performance on downstream tasks. However, current WVLP methods use only local descriptions of images, i.e., object tags, as cross-modal anchors to construct weakly-aligned image-text pairs for pre-training. This affects the data quality and thus the effectiveness of pre-training. In this paper, we propose to directly take a small number of aligned image-text pairs as anchors, and represent each unaligned image and text by its similarities to these anchors, i.e., relative representations. We build a WVLP framework based on the relative representations, namely RELIT, which collects high-quality weakly-aligned image-text pairs from large-scale image-only and text-only data for pre-training through relative representation-based retrieval and generation. Experiments on four downstream tasks show that RELIT achieves new state-of-the-art results under the weakly supervised setting.
Chi Chen; Peng Li; Maosong Sun; Yang Liu
[ { "figure_caption": "Figure 1: (a) Illustration of relative representations (Section 3.1), where three anchors (denoted by squares) are selected and the relative representation of the data point (denoted by circles) is a 3D vector with each dimension representing its similarity to the corresponding anchor. (b) Image-text retrieval based on the relative representations with cross-modal anchors (Section 3.2). Data of the same modality are represented by the same color.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An illustration of the training and inference of the pseudo caption generator. In the training process, the model learns to generate text from its relative representation on the text-only dataset. During inference, the model is directly employed to predict the pseudo caption for an image from its relative representation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Data quality of the retrieved data using different number of anchors. Both the anchors and the images and text used for retrieval are from the COCO dataset.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Fine-tuned NLVR 2 results of models pretrained on data with different number of anchors.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5: Comparison of the relative representations of the image and retrieved captions in Figure 6a. For simplicity, for each image and text on the left, we only display the anchors with the highest similarities on the right.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Examples of different kinds of weakly-aligned data. We highlight in red the caption with the best quality and the words in it that match the key information of the image. Compared to Retrv (Tag) which focuses on tag matching (underlined), our proposed two methods Retrv (Tag) and Rel2Cap produce captions that are more semantically similar to the image.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Evaluation results on four V+L downstream tasks. 
All weakly-supervised models are pre-trained on non-parallel images and text from CC.", "figure_data": "ModelVQAv2 Test-DevNLVR 2 Test-PVE Test R@1 R@5 R@10 Flickr30kSupervised (w/ Large-Scale Paired Image-Text Data)VisualBERT (Li et al., 2019)70.973.9-61.286.391.9UNITER (Chen et al., 2020)72.777.978.372.592.496.1VinVL (Zhang et al., 2021)76.083.1----ViLT (Kim et al., 2021)71.376.1-66.488.793.8ALBEF (Li et al., 2021a)74.580.580.382.896.798.4METER-CLIP-ViTBASE (Dou et al., 2022)77.783.081.282.296.398.3Weakly Supervised (w/o Large-Scale Paired Image-Text Data)U-VisualBERT (Li et al., 2021b)70.771.0-55.482.989.8U-VisualBERTVinVL (Zhou et al., 2022)71.853.276.8---µ-VLA (Zhou et al., 2022)72.173.477.3---VLMixer (Wang et al., 2022b)72.773.9----E2E-UVLP (Chen et al., 2022)73.374.678.266.489.794.1RELIT (Ours)73.576.478.670.291.595.6MethodVQAv2 Test-DevNLVR 2 Test-PVE Test R@1 R@5 R@10 Flickr30kU-VisualBERT70.571.2-54.482.289.2µ-VLA71.267.177.1---E2E-UVLP73.573.777.965.690.394.7RELIT (Ours)73.674.878.267.790.495.0", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experimental results on downstream tasks of pre-training with images from CC and text from BookCorpus.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Table 3 shows the results. Comparison of pre-training with different kinds of pseudo-aligned data.", "figure_data": "Pre-training DataVQAv2 Test-DevNLVR 2 Test-PVE Test R@1 R@5 R@10 Flickr30kRetrv (Tag)73.274.577.866.389.394.2Retrv (Relrep)73.474.978.367.590.594.9Retrv (Tag) + Retrv (Relrep)73.575.378.467.390.494.6Retrv (Tag) + Retrv (Relrep) + Rel2Cap73.576.478.670.291.595.6MethodVQAv2 Test-DevNLVR 2 Test-PVE TestFlickr30k [email protected] 4: Ablation study on relative representation-guided training.", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Data quality of different kinds of weaklyaligned data from unaligned CC images and text.", "figure_data": "DataCLIPScoreRetrv (Tag)57.94Retrv (Relrep)63.31+ Rel2Cap65.23", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The quality of data obtained from different anchor selection methods.", "figure_data": "", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
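Formulas 8 and 9 in the records above weight each weakly-aligned pair's masked-language-modeling and image-text-matching losses by an alignment score s(x, ŷ). A short PyTorch sketch of that per-example weighting follows; here s(x, ŷ) is taken to be the cosine similarity between the relative representations of the image and its caption, which is one plausible reading of the score rather than a definition confirmed by the material above.

```python
import torch
import torch.nn.functional as F

def alignment_weighted_losses(mlm_nll, itm_nll, rel_rep_img, rel_rep_txt):
    """Weight per-example losses by an alignment score, in the spirit of Eqs. 8-9.

    mlm_nll, itm_nll: (batch,) per-example negative log-likelihoods.
    rel_rep_img, rel_rep_txt: (batch, M) relative representations of each pair.
    """
    s = F.cosine_similarity(rel_rep_img, rel_rep_txt, dim=-1).clamp(min=0.0)
    s = s.detach()  # treat the alignment score as a fixed data-quality weight
    loss_mlm = (s * mlm_nll).mean()
    loss_itm = (s * itm_nll).mean()
    return loss_mlm, loss_itm
```

Detaching the score keeps it as a fixed weight rather than an extra learning signal; whether the original method does this is not specified in the material above.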
[{"Category": "Data Source", "Citation": "(Lin et al., 2014)", "Explanation": "The cited work by Lin et al. (2014) is the source of the image-text datasets used in the citing paper for VLP pre-training."}, {"Category": "Data Source", "Citation": "(Sharma et al., 2018)", "Explanation": "The cited work by Sharma et al. (2018) is the source of the image-text datasets used in the citing paper for VLP pre-training."}, {"Category": "Data Source", "Citation": "(Ordonez et al., 2011)", "Explanation": "The cited work by Ordonez et al. (2011) is the source of the image-text datasets used in the citing paper for VLP pre-training."}, {"Category": "Data Source", "Citation": "(Krishna et al., 2017)", "Explanation": "The cited work by Krishna et al. (2017) is the source of the image-text datasets used in the citing paper for VLP pre-training."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2021b)", "Explanation": "The cited work by Li et al. provides a method of using object tags as anchors to establish the connection between the two modalities in weakly supervised vision-and-language pre-training, which the citing paper adopts to address the core challenge of establishing the connection without a large number of aligned image-text pairs."}, {"Category": "Methodological Basis", "Citation": "(Zhou et al., 2022)", "Explanation": "The cited work by Zhou et al. also uses object tags as anchors in weakly supervised vision-and-language pre-training, providing a method that the citing paper can build upon to address the core challenge of establishing the connection between the two modalities without a large number of aligned image-text pairs."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2022b)", "Explanation": "The cited work by Wang et al. uses object tags as anchors in weakly supervised vision-and-language pre-training, which the citing paper can build upon to address the core challenge of establishing the connection between the two modalities without a large number of aligned image-text pairs."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2022)", "Explanation": "The cited work by Chen et al. also uses object tags as anchors in weakly supervised vision-and-language pre-training, providing a method that the citing paper can build upon to address the core challenge of establishing the connection between the two modalities without a large number of aligned image-text pairs."}, {"Category": "Methodological Basis", "Citation": "(Zhou et al., 2022)", "Explanation": "The cited work highlights the limitations of using object tags as cross-modal anchors in weakly-aligned data, which motivates the citing paper to consider a new approach to measure the alignment between images and texts."}, {"Category": "Methodological Basis", "Citation": "(Moschella et al., 2022)", "Explanation": "The concept of relative representations proposed by Moschella et al. serves as the methodological basis for the study conducted in the citing paper, enabling the comparison and alignment of latent embeddings between different learning models."}, {"Category": "Data Source", "Citation": "(Li et al., 2022)", "Explanation": "The work of Li et al. on data augmentation in VLP provides a data source for the construction of weakly-aligned data in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Norelli et al., 2022)", "Explanation": "The cited work by Norelli et al. 
introduces the concept of relative representations with cross-modal anchors, which the citing paper adopts in their research on zero-shot image classification."}, {"Category": "Methodological Basis", "Citation": "(Moschella et al., 2022)", "Explanation": "The cited work by Moschella et al. provides the basis for the construction of weakly-aligned image-text pairs in the citing paper, leveraging the concept of relative representations to align data points across different representation spaces."}, {"Category": "Data Source", "Citation": "(Zhou et al., 2022)", "Explanation": "The cited work by Zhou et al. provides the cross-model anchors used in the weakly-aligned image-text corpus construction process in the citing paper."}, {"Category": "Data Source", "Citation": "(Wang et al., 2022b)", "Explanation": "The cited work by Wang et al. contributes to the construction of the weakly-aligned image-text corpus by providing a method for retrieving semantically related sentences for each image based on the relative representations."}, {"Category": "Methodological Basis", "Citation": "(Sharma et al., 2018)", "Explanation": "The cited work provides the aligned image-text dataset that the citing paper uses to construct the weakly-aligned image-text pairs for training the model."}, {"Category": "Methodological Basis", "Citation": "(Radford et al., 2019)", "Explanation": "The cited work provides a pre-trained GPT-2 model that the citing paper fine-tunes to learn to predict captions given a set of anchor texts. The model is used to generate pseudo captions for images with low-quality retrieved captions."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2021b)", "Explanation": "The cited work provides a method of randomly masking out tags in the image-only data to learn object-level cross-modal alignment, which the citing paper adopts in their research on weakly-aligned image-text pairs."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2022)", "Explanation": "The cited work introduces a model architecture that includes a vision and a multimodal encoder for encoding images and text embeddings to obtain multimodal representations. 
The citing paper builds upon this model to explore the effectiveness of end-to-end frameworks in standard VLP and WVLP."}, {"Category": "Methodological Basis", "Citation": "(Zhou et al., 2022)", "Explanation": "The cited work is used to inspire the masking of noun phrases in the text for the masked language modeling objective, which is adopted to better fuse the image and text modalities in the weakly-aligned corpora."}, {"Category": "Data Source", "Citation": "(Li et al., 2021b)", "Explanation": "The cited work provides the image-only dataset used in the experiments of the citing paper."}, {"Category": "Data Source", "Citation": "(Zhou et al., 2022)", "Explanation": "The cited work provides the text-only dataset used in the experiments of the citing paper."}, {"Category": "Data Source", "Citation": "(Wang et al., 2022b)", "Explanation": "The cited work provides the image-only dataset used in the experiments of the citing paper."}, {"Category": "Data Source", "Citation": "(Chen et al., 2022)", "Explanation": "The cited work provides the text-only dataset used in the experiments of the citing paper."}, {"Category": "Data Source", "Citation": "(Johnson et al., 2019)", "Explanation": "The cited work provides the faiss library, which the citing paper uses to implement the retrieval system for constructing the weakly-aligned data."}, {"Category": "Extension or Continuation", "Citation": "(Li et al., 2022)", "Explanation": "The cited work introduces the use of nucleus sampling with a p value of 0.9 in synthetic caption generation, which the citing paper adopts in generating pseudo-captions for the image-text pairs in the weakly-aligned dataset."}, {"Category": "Extension or Continuation", "Citation": "(Zhou et al., 2022)", "Explanation": "The cited work includes the weakly-aligned dataset based on tag-based retrieval in the pre-training process, which the citing paper extends by incorporating it in the pre-training of the image-text model."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2021)", "Explanation": "The cited work provides the off-the-shelf object detector used in the vision encoder of the model in the citing paper, which serves as a methodological basis for the model design and implementation."}, {"Category": "Methodological Basis", "Citation": "(Goyal et al., 2017)", "Explanation": "The cited work by Goyal et al. provides the task setting and finetuning strategy for Visual Question Answering (VQAv2), which the citing paper adopts in their research on pre-trained models."}, {"Category": "Methodological Basis", "Citation": "(Suhr et al., 2018)", "Explanation": "The cited work by Suhr et al. provides the task setting and finetuning strategy for Natural Language for Visual Reasoning (NLVR 2), which the citing paper uses in their research on pre-trained models."}, {"Category": "Methodological Basis", "Citation": "(Xie et al., 2019)", "Explanation": "The cited work by Xie et al. provides the task setting and finetuning strategy for Visual Entailment (VE), which the citing paper utilizes in their research on pre-trained models."}, {"Category": "Methodological Basis", "Citation": "(Plummer et al., 2015)", "Explanation": "The cited work by Plummer et al. provides the task setting and finetuning strategy for image retrieval (Flickr30k), which the citing paper employs in their research on pre-trained models."}, {"Category": "Supporting Evidence", "Citation": "(Chen et al., 2020)", "Explanation": "The cited work by Chen et al. 
(2020) provides a method for pre-training models with unaligned image-text pairs, which the citing paper builds upon to improve the cross-modal alignment capability of the pre-trained model."}, {"Category": "Supporting Evidence", "Citation": "(Zhou et al., 2022)", "Explanation": "The cited work by Zhou et al. (2022) also contributes to the pre-training of models with unaligned image-text pairs, which the citing paper uses to improve the alignment quality of weakly-aligned image-text pairs."}, {"Category": "Data Source", "Citation": "CC", "Explanation": "The data source CC is used in the pre-training of models to improve the cross-modal alignment capability of the pre-trained model."}, {"Category": "Data Source", "Citation": "BookCorpus", "Explanation": "The data source BookCorpus is also used in the pre-training of models to improve the cross-modal alignment capability of the pre-trained model, even if the images and text are collected separately from different sources."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2022)", "Explanation": "The cited work (Li et al., 2022) provides the original CC dataset that contains noisy captions, which the citing paper uses to demonstrate the effectiveness of the proposed relative representation-based retrieval and generation methods in improving the performance of the pre-training on downstream tasks."}, {"Category": "Supporting Evidence", "Citation": "(Hessel et al., 2021)", "Explanation": "The cited work by Hessel et al. provides a method for measuring the overall alignment of image-text pairs, which the citing paper uses to evaluate the quality of weakly-aligned data from unaligned CC images and text."}, {"Category": "Extension or Continuation", "Citation": "(Norelli et al., 2022)", "Explanation": "The cited work by Norelli et al. discusses the influence of the number of anchors on the effect of relative representations, which the citing paper extends by testing the quality of data retrieved with different numbers of anchors on the COCO dataset."}, {"Category": "Data Source", "Citation": "(Yu et al., 2019)", "Explanation": "The cited work provides the most frequent answers in the dataset used in the citing paper for the VQA task."}, {"Category": "Data Source", "Citation": "(Chen et al., 2020)", "Explanation": "The cited work provides the dataset and the formulation of the VQA task as a classification task with 3,192 classes in the citing paper."}, {"Category": "Data Source", "Citation": "(Chen et al., 2020)", "Explanation": "The cited work provides the dataset and the formulation of the NLVR 2 task as a classification task with two image-text pairs as inputs in the citing paper."}, {"Category": "Data Source", "Citation": "(Chen et al., 2020)", "Explanation": "The cited work provides the dataset and the formulation of the VE task as a classification task with an image and a text hypothesis in the citing paper."}, {"Category": "Data Source", "Citation": "(Li et al., 2021b)", "Explanation": "The cited work provides the dataset used for the image retrieval task on the Flickr30k dataset."}, {"Category": "Data Source", "Citation": "(Chen et al., 2022)", "Explanation": "The cited work provides the dataset used for the image retrieval task on the Flickr30k dataset."}, {"Category": "Data Source", "Citation": "(Plummer et al., 2015)", "Explanation": "The cited work provides the dataset used for the image retrieval task on the Flickr30k dataset."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b9", "b16", "b14", "b12", "b19", "b21", "b13", "b15", "b24", "b6", "b1", "b10", "b8", "b24" ], "table_ref": [], "text": "Masked language modeling has proven to be an effective paradigm for representation learning (Devlin et al., 2019;Liu et al., 2019;He et al., 2021). However, unlike regular language models, masked language models (MLM) do not define an explicit joint distribution over language. While this is not a serious limitation from a representation learning standpoint, having explicit access to joint distributions would be useful for the purposes of generation (Ghazvininejad et al., 2019), scoring (Salazar et al., 2020), and would moreover enable evaluation of MLMs on standard metrics such as perplexity.\nStrictly speaking, MLMs do define a joint distribution over tokens that have been masked out. But they assume that the masked tokens are conditionally independent given the unmasked tokens-an assumption that clearly does not hold for language. How might we derive a language model from an MLM such that it does not make unrealistic independence assumptions? One approach is to use the set of the MLM's unary conditionals-the conditionals that result from masking just a single token in the input-to construct a fully-connected Markov random field (MRF) over the input (Wang and Cho, 2019;Goyal et al., 2022). This resulting MRF no longer makes any independence assumptions. It is unclear, however, if this heuristic approach actually results in a good language model. 1 This paper adopts an alternative approach which stems from interpreting the unary conditionals of the MLM as defining a dependency network (Heckerman et al., 2000;Yamakoshi et al., 2022). 2 Dependency networks specify the statistical relationship among variables of interest through the set of conditional distributions over each variable given its Markov blanket, which in the MLM case corresponds to all the other tokens. If the conditionals from a dependency network are compatible, i.e., there exists a joint distribution whose conditionals coincide with those of the dependency network's, then one can recover said joint using the Hammersley-Clifford-Besag (HCB; Besag, 1974) theorem. If the conditionals are incompatible, then we can adapt approaches from statistics for deriving near-compatible joint distributions from incompatible conditionals (AG; Arnold and Gokhale, 1998).\nWhile these methods give statistically-principled approaches to deriving explicit joints from the MLM's unary conditionals, they are intractable to apply to derive distributions over full sequences. We thus study a focused setting where it is tractable to compute the joints exactly, viz., the pairwise language model setting where we use the MLM's unary conditionals of two tokens to derive a joint 1 MRFs derived this way are still not language models in the strictest sense (e.g., see Du et al., 2022) because the probabilities of sentences of a given length sum to 1, and hence the sum of probabilities of all strings is infinite (analogous to left-to-right language models trained without an [EOS] token; Chen and Goodman, 1998). This can be remedied by incorporating a distribution over sentence lengths.\n2 Recent work by Yamakoshi et al. (2022) has taken this view, focusing on sampling from the dependency network as a means to implicitly characterize the joint distribution of an MLM. Here we focus on an explicit characterization of the joint. 
over these two tokens (conditioned on all the other tokens). Experiments under this setup reveal that AG method performs best in terms of perplexity, with the the HCB and MRF methods performing similarly. Surprisingly, we also find that the unary conditionals of the near-compatible AG joint occasionally have lower perplexity than the original unary conditionals learnt by the MLM, suggesting that regularizing the conditionals to be compatible may be beneficial insofar as modeling the distribution of language.3 " }, { "figure_ref": [], "heading": "Joint distributions from MLMs", "publication_ref": [ "b21", "b13", "b6" ], "table_ref": [], "text": "Let V be a vocabulary, T be the text length, and w ∈ V T be an input sentence or paragraph. We are particularly interested in the case when a subset S ⊆ [T ] ≜ {1, . . . , T } of the input w is replaced with [MASK] tokens; in this case we will use the notation q {t}|S (• | w S ) to denote the output distribution of the MLM at position t ∈ S, where we mask out the positions in S, i.e., for all k ∈ S we modify w by setting w k = [MASK]. If S = {t}, then we call q t|t ≜ q {t}|{t} a unary conditional. Our goal is to use these conditionals to construct joint distributions q S|S (• | w S ) for any S.\nDirect MLM construction. The simplest approach is to simply mask out the tokens over which we want a joint distribution, and define it to be the product of the MLM conditionals,\nq MLM S|S (w S | w S ) ≜ i∈S q {i}|S (w i | w S ). (1)\nThis joint assumes that the entries of w S are conditionally independent given w S . Since one can show that MLM training is equivalent to learning the conditional marginals of language (App. A), this can be seen as approximating conditionals with a (mean field-like) factorizable distribution.\nMRF construction. To address the conditional independence limitation of MLMs, prior work (Wang and Cho, 2019;Goyal et al., 2022) has proposed deriving joints by defining an MRF using the unary conditionals of the MLM. Accordingly, we define\nq MRF S|S (w S | w S ) ∝ t∈S q t|t (w t | w t ),(2)\nwhich can be interpreted as a fully connected MRF, whose log potential is given by the sum of the unary log probabilities. One can similarly define a variant of this MRF where the log potential is the sum of the unary logits. MRFs defined this way have a single fully connected clique and thus do not make any conditional independence assumptions. However, such MRFs can have unary conditionals that deviate from the MLM's unary conditionals even if those are compatible (App. B). This is potentially undesirable since the MLM unary conditionals could be close to the true unary conditionals,4 which means the MRF construction could be worse than the original MLM in terms of unary perplexity.\nHammersley-Clifford-Besag construction. The Hammersley-Clifford-Besag theorem (HCB; Besag, 1974) provides a way of reconstructing a joint distribution from its unary conditionals. Without loss of generality, assume that S = {1, . . . , k} for some k ≤ T . Then given a pivot point\nw ′ = (w ′ 1 , . . . , w ′ k ) ∈ V k , we define q HCB S|S (w S | w S ) ∝ t∈S q t|t (w t | w >t , w ′ <t ) q t|t (w ′ t | w >t , w ′ <t ) ,(3)\nwhere\nw ′ <i ≜ (w ′ 1 , . . . , w ′ i-1\n), and similarly w >i ≜ (w i+1 , . . . , w T ). Importantly, unlike the MRF approach, if the unary conditionals of the MLM are compatible, then HCB will recover the true joint, irrespective of the choice of pivot.\nArnold-Gokhale construction. 
If we assume that the unary conditionals are not compatible, then we can frame our goal as finding a near-compatible joint, i.e., a joint such that its unary conditionals are close to the unary conditionals of the MLM. Formally, for any S and fixed inputs w S , we can define this objective as,\nq AG S|S (• | w S ) = argmin µ t∈S w ′ ∈V |S|-1 J(t, w ′ ),(4)\nwhere J(t, w ′ ) is defined as:\nKL(q t|S\\{t},S (• | w ′ , w S ) || µ t|S\\{t},S (• | w ′ , w S )).\nWe can solve this optimization problem using Arnold and Gokhale's (1998) algorithm (App. C)." }, { "figure_ref": [], "heading": "Pairwise language model", "publication_ref": [], "table_ref": [], "text": "In language modeling we are typically interested in the probability of a sequence p(w). However, the above methods are intractable to apply to full sequences (except for the baseline MLM). For example, the lack of any independence assumptions in the MRF means that the partition function requires full enumeration over V T sequences. 5 We thus focus our empirical study on the pairwise setting where |S| = 2. 6 In this setting, we can calculate q S|S (• | w S ) with O(V ) forward passes of the MLM for all methods." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b25" ], "table_ref": [], "text": "We compute two sets of metrics that evaluate the resulting joints in terms of (i) how good they are as probabilistic models of language and (ii) how faithful they are to the original MLM conditionals (which are trained to approximate the true conditionals of language, see App. A). Let D = {(w (n) , S (n) )} N n=1 be a dataset where w (n) is an English sentence and S (n) = (a (n) , b (n) ) are the two positions being masked. We define the following metrics to evaluate a distribution q ′ : Language model performance. We consider two performance metrics. The first is the pairwise perplexity (P-PPL) over two tokens,\nexp -1 2N N n=1 log q ′ a (n) ,b (n) |S (n) (w (n) a (n) , w (n) b (n) | w (n) S (n) )\nWe would expect a good joint to obtain lower pairwise perplexity than the original MLM, which (wrongly) assumes conditional independence. The second is unary perplexity (U-PPL),\nexp -1 2N N n=1 (i,j)∈ {S (n) ,S (n) r } log q ′ i|j,S (n) (w (n) i | w (n) j , w (n) S (n) )\nwhere for convenience we let\nS (n) r ≜ (b (n) , a (n)\n) as the reverse of the masked positions tuple S (n) . Note that this metric uses the unary conditionals derived from the pairwise joint, i.e., q ′ i|j,S , except in the MLM construction case which uses the MLM's original unary conditionals.\nFaithfulness. We also assess how faithful the new unary conditionals are to the original unary conditionals by calculating the average conditional KL divergence (A-KL) between them,\nN n=1 w ′ ∈V D(S (n) , w ′ , w (n) S (n) ) + D(S (n) r , w ′ , w (n) S (n) ) 2N |V| .\n5 We also tried estimating the partition through importance sampling with GPT-2 but found the estimate to be quite poor. 6 Concurrent work by Young and You (2023) also explores the (in)compatibility of MLMs in the |S| = 2 case.\nwhere we define D(S, w ′ , w S ) ≜ KL(q a|b,S (\n• | w ′ , w S ) || q ′ a|b,S (• | w ′ , w S )) for S = (a, b).\nIf the new joint is completely faithful to the MLM, this number should be zero. The above metric averages the KL across the entire vocabulary V, but in practice we may be interested in assessing closeness only when conditioned on the gold tokens. 
We thus compute a variant of the above metric where we only average over the conditionals for the gold token (G-KL):\nN n=1 D(S (n) , w (n) b (n) , w (n) S (n) )+D(S (n) r , w (n) a (n) , w (n) S (n) ) 2N\n.\nThis metric penalizes unfaithfulness in common contexts more than in uncommon contexts. Note that if the MLM's unary conditionals are compatible, then both the HCB and AG approach should yield the same joint distribution, and their faithfulness metrics should be zero." }, { "figure_ref": [], "heading": "Experimental setup", "publication_ref": [ "b7", "b18" ], "table_ref": [], "text": "We calculate the above metrics on 1000 examples7 from a natural language inference dataset (SNLI; Bowman et al., 2015) and a summarization dataset (XSUM; Narayan et al., 2018). We consider two schemes for selecting the tokens to be masked for each sentence: masks over two tokens chosen uniformly at random (Random pairs), and also over random contiguous tokens in a sentence (Contiguous pairs). Since inter-token dependencies are more likely to emerge when adjacent tokens are masked, the contiguous setup magnifies the importance of deriving a good pairwise joint. In addition, we consider both BERT BASE and BERT LARGE (cased) as the MLMs from which to obtain the unary conditionals. 8 For the AG joint, we run t = 50 steps of Arnold and Gokhale's (1998) algorithm (App. C), which was enough for convergence. For the HCB joint, we pick a pivot using the mode of the pairwise joint of the MLM.9 " }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b18" ], "table_ref": [], "text": "The results are shown in Tab. 1. Comparing the PPL's of MRF and MRF L (i.e., the MRF using logits), the former consistently outperforms the latter, (Narayan et al., 2018) summaries. We apply the constructions to two MLMs: BERT BASE ( B ) and BERT LARGE ( L ). We consider both masking tokens uniformly at random (Random pairs) and masking adjacent tokens uniformly at random (Contiguous pairs). For all metrics, lower is better.\nindicating that using the raw logits generally results in a worse language model. Comparing the MRFs to MLM, we see that the unary perplexity (U-PPL) of the MLM is lower than those of the MRFs, and that the difference is most pronounced in the contiguous masking case. More surprisingly, we see that the pairwise perplexity (P-PPL) is often (much) higher than the MLM's, even though the MLM makes unrealistic conditional independence assumptions. These results suggest that the derived MRFs are in general worse unary/pairwise probabilistic models of language than the MLM itself, implying that the MRF heuristic is inadequate (see App. D for a qualitative example illustrating how this can happen). Finally, we also find that the MRFs' unary conditionals are not faithful to those of the MRFs based on the KL measures. Since one can show that the MRF construction can have unary conditionals that have nonzero KL to the MLM's unary conditionals even if they are compatible (App. B), this gives both theoretical and empirical arguments against the MRF construction.\nThe HCB joint obtains comparable performance to MRF in the random masking case. In the contiguous case, it exhibits similar failure modes as the MRF in producing extremely high pairwise perplexity (P-PPL) values. The faithfulness metrics are similar to the MRF's, which suggests that the conditionals learnt by MLMs are incompatible. The AG approach, on the other hand, outperforms the MRF L , MRF and HCB approaches in virtually all metrics. 
This is most evident in the contiguous masking case, where AG attains lower pairwise perplexity than all models, including the MLM itself. In some cases, we find that the AG model even outperforms the MLM in terms of unary perplexity, which is remarkable since the unary conditionals of the MLM were trained to approximate the unary conditionals of language (App. A). This indicates that near-compatibility may have regularizing effect that leads to improved MLMs. Since AG was optimized to be near-compatible, its joints are unsurprisingly much more faithful to the original MLM's conditionals. However, AG's G-KL tends to be on par with the other models, which suggests that it is still not faithful to the MLM in the contexts that are most likely to arise. Finally, we analyze the effect of masked position distance on language modeling performance, and find that improvements are most pronounced when the masked tokens are close to each other (see App. E)." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b21", "b24", "b5", "b3", "b13", "b25", "b2", "b22", "b20", "b1", "b0", "b6", "b17", "b15" ], "table_ref": [], "text": "Probabilistic interpretations of MLMs. In one of the earliest works about sampling from MLMs, Wang and Cho (2019) propose to use unary condi-tionals to sample sentences. Recently Yamakoshi et al. (2022) highlight that, while this approach only constitutes a pseudo-Gibbs sampler, the act of re-sampling positions uniformly at random guarantees that the resulting Markov chain has a unique, stationary distribution (Bengio et al., 2013(Bengio et al., , 2014)). Alternatively, Goyal et al. (2022) propose defining an MRF from the MLM's unary conditionals, and sample from this via Metropolis-Hastings. Concurrently, Young and You (2023) conduct an empirical study of the compatibility of BERT's conditionals.\nCompatible distributions. The statistics community has long studied the problem of assessing the compatibility of a set of conditionals (Arnold and Press, 1989;Gelman and Speed, 1993;Wang and Kuo, 2010;Song et al., 2010). Arnold and Gokhale (1998) and Arnold et al. (2002) explore algorithms for reconstructing near-compatible joints from incompatible conditionals, which we leverage in our work. Besag (1974) also explores this problem, and defines a procedure (viz., eq. 3) for doing so when the joint distribution is strictly positive and the conditionals are compatible. Lowd (2012) apply a version of HCB to derive Markov networks from incompatible dependency networks (Heckerman et al., 2000)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we studied four different methods for deriving an explicit joint distributions from MLMs, focusing in the pairwise language model setting where it is possible to compute exact distributional properties. We find that the Arnold-Gokhale (AG) approach, which finds a joint whose conditionals are closest to the unary conditionals of an MLM, works best. Indeed, our results indicate that said conditionals can attain lower perplexity than the unary conditionals of the original MLM. It would be interesting to explore whether explicitly regularizing the conditionals to be compatible during MLM training would lead to better modeling of the distribution of language." 
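To make the compared constructions concrete in the pairwise setting, here is a minimal numpy sketch assuming the MLM's conditionals for the two masked positions a and b are given as matrices, with qa_given_b[i, j] = q(w_a = i | w_b = j, context) (columns summing to one) and qb_given_a[j, i] = q(w_b = j | w_a = i, context); this is an illustrative re-implementation of Eqs. 1-3, not the authors' code.

```python
import numpy as np

def _normalize(joint):
    return joint / joint.sum()

def mlm_joint(qa_both_masked, qb_both_masked):
    # Eq. 1: product of the two conditionals obtained with BOTH positions masked,
    # i.e. the conditionally independent baseline.
    return _normalize(np.outer(qa_both_masked, qb_both_masked))

def mrf_joint(qa_given_b, qb_given_a):
    # Eq. 2: q(w_a, w_b) proportional to q(w_a | w_b) * q(w_b | w_a).
    # qb_given_a[j, i] = q(w_b = j | w_a = i), so transpose to index as [i, j].
    return _normalize(qa_given_b * qb_given_a.T)

def hcb_joint(qa_given_b, qb_given_a, pivot_a):
    # Eq. 3 with pivot word w'_a at position a (the pivot at position b only
    # contributes a constant factor, absorbed by normalization):
    # q(w_a, w_b) proportional to [q(w_a|w_b) / q(w'_a|w_b)] * q(w_b | w'_a).
    ratio = qa_given_b / qa_given_b[pivot_a, :]        # divide each column by its pivot entry
    return _normalize(ratio * qb_given_a[:, pivot_a])  # times q(w_b = j | w_a = pivot_a)
```

In practice each conditional matrix takes O(V) forward passes of the MLM to fill in (one per value of the conditioning token), matching the cost noted in Section 3, and the HCB pivot can be chosen as the mode of the MLM's pairwise joint as in the experiments.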
}, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Our study illuminates the deficiencies of the MRF approach and applies statistically-motivated approaches to craft more performant probabilistic models. However, it is admittedly not clear how these insights can immediately be applied to improve downstream NLP tasks. We focused on models over pairwise tokens in order to avoid sampling and work with exact distributions for the various approaches (MRF, HCB, AG). However this limits the generality of our approach (e.g., we cannot score full sentences). We nonetheless believe that our empirical study is interesting on its own and suggests new paths for developing efficient and faithful MLMs." }, { "figure_ref": [], "heading": "A MLMs as learning conditional marginals", "publication_ref": [], "table_ref": [], "text": "One can show that the MLM training objective corresponds to learning to approximate the conditional marginals of language, i.e., the (single-position) marginals of language when we condition on any particular set of positions. More formally, consider an MLM parameterized by a vector θ ∈ Θ and some distribution µ(•) over positions to mask S ⊆ [T ]. Then the MLM learning objective is given by:\nθ = argsup θ E S∼µ(•) E w∼p(•) 1 |S| t∈S log q t|S (w t | w S ; θ) ,\nwhere p(•) denotes the true data distribution. Analogously, let p S|S (• | w S ) and p S (•) denote the conditionals and marginals of the data distribution, respectively. Then the above can be rewritten as:\nθ = argsup θ E S∼µ(•) E w S ∼p S (•) 1 |S| t∈S E w S ∼p S|S (•) log q t|S (w t | w S ; θ) = arginf θ E S∼µ(•) E w S ∼p S (•) 1 |S| t∈S KL(p t|S (• | w S ) || q t|S (• | w S ; θ)) ,\nThus, we can interpret MLM training as learning to approximate the conditional marginals of language, i.e., ∀S ⊆ [T ] and ∀t ∈ S, in the limit we would expect that, for any observed context w S , we have\nq t|S (• | w S ) ≈ p t|S (• | w S )." }, { "figure_ref": [], "heading": "B Unfaithful MRFs", "publication_ref": [ "b2" ], "table_ref": [], "text": "Here we show that even if the unary conditionals used in the MRF construction are compatible (Arnold and Press, 1989), the unary conditionals of the probabilistic model implied by the MRF construction can deviate (in the KL sense) from the true conditionals. This is important because (i) it suggests that we might do better (at least in terms of U-PPL) by simply sticking to the conditionals learned by MLM, and (ii) this is not the case for either the HCB or the AG constructions, i.e., if we started with the correct conditionals, HCB and AG's joint would be compatible with the MLM. Formally, Proposition B.1. Let w 1 , w 2 ∈ V and further let p 1|2 (• | w 2 ), p 2|1 (• | w 1 ) be the true (i.e., population) unary conditional distributions. Define an MRF as q 1,2 (w 1 , w 2 ) ∝ p 1|2 (w 1 | w 2 ) p 2|1 (w 2 | w 1 ), and let q 1|2 (• | w 2 ), q 2|1 (• | w 1 ) be the conditionals derived from the MRF. Then there exists p 1|2 , p 2|1 such that\nKL(p 1|2 (• | w 2 ) || q 1|2 (• | w 2 )) > 0.\nProof. Let w 2 ∈ V be arbitrary. 
We then have:\nq 1|2 (w 1 | w 2 ) = p 1|2 (w 1 | w 2 ) p 2|1 (w 2 | w 1 ) w ′ ∈V p 1|2 (w ′ | w 2 ) p 2|1 (w 2 | w ′ )\nNow, consider the KL between the true unary conditionals and the MRF unary conditionals:\nKL(p 1|2 (• | w 2 ) || q 1|2 (• | w 2 )) = w∈V p 1|2 (w | w 2 ) log p 1|2 (w | w 2 ) q 1|2 (w | w 2 ) = w∈V p 1|2 (w | w 2 ) log w ′ ∈V p 1|2 (w ′ | w 2 ) p 2|1 (w 2 | w ′ ) p 2|1 (w 2 | w) = log E w∼p 1|2 (•|w 2 ) [p 2|1 (w 2 | w)] -E w∼p 1|2 (•|w 2 ) [log p 2|1 (w 2 | w)]\nThis term is the Jensen gap, and in general it can be non-zero. To see this, suppose V = {a, b} and consider the joint\np 1,2 (w 1 , w 2 ) = 97 100 w 1 , w 2 = a 1 100 otherwise with corresponding conditionals p 2|1 (x | b) = p 1|2 (x | b) = 1 2 for all x ∈ V and p 2|1 (x | a) = p 1|2 (x | a) = 97 98 x = a 1 98 x = b\nNow, take w 2 = b. We then have\nKL(p 1|2 (• | b) || q 1|2 (• | b)) = log E w∼p 1|2 (•|b) [p 2|1 (b | w)] -E w∼p 1|2 (•|b) [log p 2|1 (b | w)] = log 1 2 1 98 + 1 2 - 1 2 log 1 98 + log 1 2 = log 1 196 + 1 4 - 1 2 log 1 196 ≈ 1.27\nwhich demonstrates that the KL can be non-zero." }, { "figure_ref": [], "heading": "C Arnold-Gokhale algorithm", "publication_ref": [], "table_ref": [], "text": "Arnold and Gokhale (1998) study the problem of finding a near-compatible joint from unary conditionals, and provide and algorithm for the case of |S| = 2. The algorithm initializes the starting pairwise distribution q AG(1) a,b|S (•, • | w S ) to be uniform, and performs the following update until convergence:\nq AG(t+1) a,b|S (w a , w b | w S ) ∝ q a|b,S (w a | w b , w S ) + q b|a,S (w b | w a , w S ) q AG(t) a|S (w a | w S ) -1 + q AG(t) b|S (w b | w S ) -1 .(5)" }, { "figure_ref": [], "heading": "D Qualitative example of MRF underperformance", "publication_ref": [], "table_ref": [], "text": "This example from SNLI qualitatively illustrates a case where both the unary and pairwise perplexities from the MRF underperforms the MLM: \"The [MASK] 1 [MASK] 2 at the casino\", where the tokens \"man is\" are masked. In this case, both MRFs assign virtually zero probability mass to the correct tokens, while the MLM assigns orders of magnitude more (around 0.2% of the mass of the joint). Upon inspection, this arises because q 2|1,S (is | man) ≈ 0.02 and q 1|2,S (man | is) ≈ 2 × 10 -5 , which makes the numerator of q MRF 1,2|S (man, is) be ≈ 0. The MRF could still assign high probability to this pair if the denominator is also ≈ 0, but in this case we have q 2|1,S (was | man) ≈ 0.33 and q 1|2,S (man | was) ≈ 0.03, which makes the denominator well above 0. This causes the completion \"man is\" to have disproportionately little mass in the joint compared other to combinations (\"man was\") that were ascribed more mass by BERT's unary conditionals." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "E Token distance analysis", "publication_ref": [ "b18" ], "table_ref": [], "text": "We also explore the effect of the distance between masked tokens on the pairwise negative log-likelihood (PNLL, lower is better; note this is equivalent to the log PPPL) of the joints built using the different approaches we considered. We considered two different kinds of distance functions between tokens: (i) the absolute difference in the positions between the two masked tokens, and (ii) their syntactic distance (obtained by running a dependency parser on unmasked sentences). We plot the results in Fig. 1 (SNLI) and Fig. 2 (XSUM). 
Note that the black bars denote the number of datapoints with that distance between the two masked tokens, where a syntactic distance of 0 means that the two masked tokens belong to the same word, whereas a token distance of 0 means that the two masked tokens are adjacent. The graphs indicate that the language modeling performance improvement (compared to using the MLM joint) is most prominent when masked tokens are close together, which is probably because when the masked tokens are close together they are more likely to be dependent. In this case, AG tends to do best, HCB and MRF tend to do similarly, followed by MRF-L and, finally, the conditionally independent MLM, which follows the trends observed in the paper. (Narayan et al., 2018). The gray bars represent the number of examples on the dataset that had that degree of separation." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank the anonymous reviewers for their helpful comments. This research is supported in part by funds from the MLA@CSAIL initiative and MIT-IBM Watson AI lab. LTH acknowledges support from the Michael Athans fellowship fund." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "We foresee no ethical concerns with this work." } ]
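Below is a minimal numpy sketch of the Arnold-Gokhale fixed-point update stated in Appendix C (Eq. 5), using the same matrix convention as the sketch above (qa_given_b[i, j] = q(w_a = i | w_b = j, context)); the uniform initialization follows the appendix, and the default of 50 iterations mirrors the number of steps used in the experiments.

```python
import numpy as np

def ag_joint(qa_given_b, qb_given_a, num_iters=50):
    """Arnold-Gokhale fixed point (Eq. 5): a pairwise joint whose unary
    conditionals are near-compatible with the MLM's unary conditionals."""
    v = qa_given_b.shape[0]
    joint = np.full((v, v), 1.0 / (v * v))            # uniform initialization
    for _ in range(num_iters):
        marg_a = joint.sum(axis=1) + 1e-12            # current q(w_a | context)
        marg_b = joint.sum(axis=0) + 1e-12            # current q(w_b | context)
        num = qa_given_b + qb_given_a.T               # q(w_a|w_b) + q(w_b|w_a), indexed [i, j]
        den = 1.0 / marg_a[:, None] + 1.0 / marg_b[None, :]
        joint = num / den
        joint /= joint.sum()                          # renormalize to a proper joint
    return joint
```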
10.1016/S0167-9473(01)00111-6
[ { "authors": "C Barry; Enrique Arnold; José Castillo; Sarabia María", "journal": "Computational Statistics & Data Analysis", "ref_id": "b0", "title": "Exact and near compatibility of discrete conditional distributions", "year": "2002" }, { "authors": "Barry C Arnold; Dattaprabhakar V Gokhale", "journal": "Test", "ref_id": "b1", "title": "Distributions most nearly compatible with given families of conditional distributions", "year": "1998" }, { "authors": "Barry C Arnold; James S Press", "journal": "Journal of the American Statistical Association", "ref_id": "b2", "title": "Compatible conditional distributions", "year": "1989" }, { "authors": "Yoshua Bengio; Éric Thibodeau-Laufer; Guillaume Alain; Jason Yosinski", "journal": "", "ref_id": "b3", "title": "Deep generative stochastic networks trainable by backprop", "year": "2014" }, { "authors": " Pmlr", "journal": "", "ref_id": "b4", "title": "", "year": "" }, { "authors": "Yoshua Bengio; Li Yao; Guillaume Alain; Pascal Vincent", "journal": "", "ref_id": "b5", "title": "Generalized denoising auto-encoders as generative models", "year": "2013" }, { "authors": "Julian Besag", "journal": "Journal of the Royal Statistical Society", "ref_id": "b6", "title": "Spatial interaction and the statistical analysis of lattice systems", "year": "1974" }, { "authors": "R Samuel; Gabor Bowman; Christopher Angeli; Christopher D Potts; Manning", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "A large annotated corpus for learning natural language inference", "year": "2015" }, { "authors": "Stanley F Chen; Joshua Goodman", "journal": "", "ref_id": "b8", "title": "An empirical study of smoothing techniques for language modeling", "year": "1998" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Li Du; Lucas Torroba Hennigen; Tiago Pimentel; Clara Meister; Jason Eisner; Ryan Cotterell", "journal": "", "ref_id": "b10", "title": "A measure-theoretic characterization of tight language models", "year": "2022" }, { "authors": "Andrew Gelman; Terence P ", "journal": "Journal of the Royal Statistical Society", "ref_id": "b11", "title": "Characterizing a joint probability distribution by conditionals", "year": "1993" }, { "authors": "Marjan Ghazvininejad; Omer Levy; Yinhan Liu; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Mask-Predict: Parallel decoding of conditional masked language models", "year": "2019" }, { "authors": "Kartik Goyal; Chris Dyer; Taylor Berg-Kirkpatrick", "journal": "", "ref_id": "b13", "title": "Exposing the implicit energy networks behind masked language models via Metropolis-Hastings", "year": "2022" }, { "authors": "Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen", "journal": "", "ref_id": "b14", "title": "DeBERTa: Decoding-enhanced BERT with Disentangled Attention", "year": "2021" }, { "authors": "David Heckerman; Max Chickering; Chris Meek; Robert Rounthwaite; Carl Kadie", "journal": "Journal of Machine Learning Research", "ref_id": "b15", "title": "Dependency networks for inference, collaborative filtering, and data visualization", "year": "2000" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "CoRR", "ref_id": 
"b16", "title": "RoBERTa: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Daniel Lowd", "journal": "Association for Uncertainity in Artificial Intelligence", "ref_id": "b17", "title": "Closed-form learning of markov networks from dependency networks", "year": "2012" }, { "authors": "Shashi Narayan; Shay B Cohen; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization", "year": "2018" }, { "authors": "Julian Salazar; Davis Liang; Toan Q Nguyen; Katrin Kirchhoff", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Masked language model scoring", "year": "2020" }, { "authors": "Chwan-Chin Song; Lung-An Li; Chong-Hong Chen; Thomas J Jiang; Kun-Lin Kuo", "journal": "Statistica Sinica", "ref_id": "b20", "title": "Compatibility of finite discrete conditional distributions", "year": "2010" }, { "authors": "Alex Wang; Kyunghyun Cho", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "BERT has a mouth, and it must speak: BERT as a Markov random field language model", "year": "2019" }, { "authors": "Yuchung J Wang; Kun-Lin Kuo", "journal": "Journal of Multivariate Analysis", "ref_id": "b22", "title": "Compatibility of discrete conditional distributions with structural zeros", "year": "2010" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Takateru Yamakoshi; Thomas Griffiths; Robert Hawkins", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Probing BERT's priors with serial reproduction chains", "year": "2022" }, { "authors": "Tom Young; Yang You", "journal": "", "ref_id": "b25", "title": "On the inconsistencies of conditionals learned by masked language models", "year": "2023" } ]
[ { "formula_coordinates": [ 2, 86.79, 485.75, 203.08, 24.5 ], "formula_id": "formula_0", "formula_text": "q MLM S|S (w S | w S ) ≜ i∈S q {i}|S (w i | w S ). (1)" }, { "formula_coordinates": [ 2, 100.22, 689.67, 189.65, 24.5 ], "formula_id": "formula_1", "formula_text": "q MRF S|S (w S | w S ) ∝ t∈S q t|t (w t | w t ),(2)" }, { "formula_coordinates": [ 2, 305.69, 324.57, 219.45, 50.09 ], "formula_id": "formula_2", "formula_text": "w ′ = (w ′ 1 , . . . , w ′ k ) ∈ V k , we define q HCB S|S (w S | w S ) ∝ t∈S q t|t (w t | w >t , w ′ <t ) q t|t (w ′ t | w >t , w ′ <t ) ,(3)" }, { "formula_coordinates": [ 2, 339.17, 380.29, 109.11, 14 ], "formula_id": "formula_3", "formula_text": "w ′ <i ≜ (w ′ 1 , . . . , w ′ i-1" }, { "formula_coordinates": [ 2, 313.17, 557.18, 211.97, 26.15 ], "formula_id": "formula_4", "formula_text": "q AG S|S (• | w S ) = argmin µ t∈S w ′ ∈V |S|-1 J(t, w ′ ),(4)" }, { "formula_coordinates": [ 2, 306.14, 604.33, 222.9, 14.98 ], "formula_id": "formula_5", "formula_text": "KL(q t|S\\{t},S (• | w ′ , w S ) || µ t|S\\{t},S (• | w ′ , w S ))." }, { "formula_coordinates": [ 3, 70.87, 382.04, 223.16, 33.58 ], "formula_id": "formula_6", "formula_text": "exp -1 2N N n=1 log q ′ a (n) ,b (n) |S (n) (w (n) a (n) , w (n) b (n) | w (n) S (n) )" }, { "formula_coordinates": [ 3, 70.87, 476.62, 218.11, 47.09 ], "formula_id": "formula_7", "formula_text": "exp -1 2N N n=1 (i,j)∈ {S (n) ,S (n) r } log q ′ i|j,S (n) (w (n) i | w (n) j , w (n) S (n) )" }, { "formula_coordinates": [ 3, 206.09, 529.89, 79.58, 14.16 ], "formula_id": "formula_8", "formula_text": "S (n) r ≜ (b (n) , a (n)" }, { "formula_coordinates": [ 3, 70.87, 681.73, 223.88, 37.65 ], "formula_id": "formula_9", "formula_text": "N n=1 w ′ ∈V D(S (n) , w ′ , w (n) S (n) ) + D(S (n) r , w ′ , w (n) S (n) ) 2N |V| ." }, { "formula_coordinates": [ 3, 306.14, 74.37, 218.27, 30.43 ], "formula_id": "formula_10", "formula_text": "• | w ′ , w S ) || q ′ a|b,S (• | w ′ , w S )) for S = (a, b)." }, { "formula_coordinates": [ 3, 306.14, 215.95, 219.13, 37.2 ], "formula_id": "formula_11", "formula_text": "N n=1 D(S (n) , w (n) b (n) , w (n) S (n) )+D(S (n) r , w (n) a (n) , w (n) S (n) ) 2N" }, { "formula_coordinates": [ 7, 173.34, 155.9, 250.16, 29.64 ], "formula_id": "formula_12", "formula_text": "θ = argsup θ E S∼µ(•) E w∼p(•) 1 |S| t∈S log q t|S (w t | w S ; θ) ," }, { "formula_coordinates": [ 7, 139.03, 223.91, 318.78, 67.58 ], "formula_id": "formula_13", "formula_text": "θ = argsup θ E S∼µ(•) E w S ∼p S (•) 1 |S| t∈S E w S ∼p S|S (•) log q t|S (w t | w S ; θ) = arginf θ E S∼µ(•) E w S ∼p S (•) 1 |S| t∈S KL(p t|S (• | w S ) || q t|S (• | w S ; θ)) ," }, { "formula_coordinates": [ 7, 70.87, 323.52, 122.16, 12.51 ], "formula_id": "formula_14", "formula_text": "q t|S (• | w S ) ≈ p t|S (• | w S )." }, { "formula_coordinates": [ 7, 220.11, 532.86, 155.05, 11.22 ], "formula_id": "formula_15", "formula_text": "KL(p 1|2 (• | w 2 ) || q 1|2 (• | w 2 )) > 0." 
}, { "formula_coordinates": [ 7, 183.84, 570.78, 226.4, 28.17 ], "formula_id": "formula_16", "formula_text": "q 1|2 (w 1 | w 2 ) = p 1|2 (w 1 | w 2 ) p 2|1 (w 2 | w 1 ) w ′ ∈V p 1|2 (w ′ | w 2 ) p 2|1 (w 2 | w ′ )" }, { "formula_coordinates": [ 7, 115.83, 618.85, 363.62, 85.06 ], "formula_id": "formula_17", "formula_text": "KL(p 1|2 (• | w 2 ) || q 1|2 (• | w 2 )) = w∈V p 1|2 (w | w 2 ) log p 1|2 (w | w 2 ) q 1|2 (w | w 2 ) = w∈V p 1|2 (w | w 2 ) log w ′ ∈V p 1|2 (w ′ | w 2 ) p 2|1 (w 2 | w ′ ) p 2|1 (w 2 | w) = log E w∼p 1|2 (•|w 2 ) [p 2|1 (w 2 | w)] -E w∼p 1|2 (•|w 2 ) [log p 2|1 (w 2 | w)]" }, { "formula_coordinates": [ 7, 219.64, 740.04, 154.81, 31.31 ], "formula_id": "formula_18", "formula_text": "p 1,2 (w 1 , w 2 ) = 97 100 w 1 , w 2 = a 1 100 otherwise with corresponding conditionals p 2|1 (x | b) = p 1|2 (x | b) = 1 2 for all x ∈ V and p 2|1 (x | a) = p 1|2 (x | a) = 97 98 x = a 1 98 x = b" }, { "formula_coordinates": [ 8, 144.75, 146.39, 305.79, 90.16 ], "formula_id": "formula_19", "formula_text": "KL(p 1|2 (• | b) || q 1|2 (• | b)) = log E w∼p 1|2 (•|b) [p 2|1 (b | w)] -E w∼p 1|2 (•|b) [log p 2|1 (b | w)] = log 1 2 1 98 + 1 2 - 1 2 log 1 98 + log 1 2 = log 1 196 + 1 4 - 1 2 log 1 196 ≈ 1.27" }, { "formula_coordinates": [ 8, 139.43, 347.91, 385.71, 37.31 ], "formula_id": "formula_20", "formula_text": "q AG(t+1) a,b|S (w a , w b | w S ) ∝ q a|b,S (w a | w b , w S ) + q b|a,S (w b | w a , w S ) q AG(t) a|S (w a | w S ) -1 + q AG(t) b|S (w b | w S ) -1 .(5)" } ]
Deriving Language Models from Masked Language Models
Masked language models (MLM) do not explicitly define a distribution over language, i.e., they are not language models per se. However, recent work has implicitly treated them as such for the purposes of generation and scoring. This paper studies methods for deriving explicit joint distributions from MLMs, focusing on distributions over two tokens, which makes it possible to calculate exact distributional properties. We find that an approach based on identifying joints whose conditionals are closest to those of the MLM works well and outperforms existing Markov random field-based approaches. We further find that this derived model's conditionals can even occasionally outperform the original MLM's conditionals.
Lucas Torroba Hennigen; Yoon Kim
[ { "figure_caption": "Figure 1 :1Figure1: Pairwise NLL (PNLL) as a function of the token and syntactic distance between masked positions for joints built using the methods: MLM, MRF (Logit), MRF, HCB, AG on SNLI(Bowman et al., 2015). The gray bars represent the number of examples on the dataset that had that degree of separation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: Pairwise NLL (PNLL) as a function of the token and syntactic distance between masked positions for joints built using the methods: MLM, MRF (Logit), MRF, HCB, AG on XSUM(Narayan et al., 2018). The gray bars represent the number of examples on the dataset that had that degree of separation.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Comparison of MRF, HCB and AG constructions on randomly sampled SNLI (Bowman et al., 2015) sentences and XSUM", "figure_data": "Random pairsContiguous pairsDataset SchemeU-PPL P-PPL A-KL G-KLU-PPL P-PPLA-KL G-KLMLM11.2219.01 1.080 0.54713.7874.68 4.014 1.876MRFL13.3971.44 0.433 0.26723.45 13 568.17 1.543 0.607SNLIMRF12.3021.65 0.658 0.17918.35126.05 1.967 0.366HCB12.5122.62 0.593 0.16817.71589.02 2.099 0.416BAG10.7612.68 0.007 0.08513.2621.59 0.018 0.181MLM4.886.12 0.404 0.2274.9139.33 4.381 2.128MRFL5.179.12 0.148 0.0856.55 2209.94 1.561 0.383XSUMMRF5.006.23 0.262 0.0495.5347.62 2.242 0.185HCB5.086.21 0.256 0.0526.46174.32 2.681 0.328AG5.005.29 0.003 0.0445.278.42 0.016 0.143MLM9.5018.57 1.374 0.78710.42104.12 4.582 2.463MRFL11.5276.23 0.449 0.27615.43 8536.92 1.470 0.543SNLIMRF10.5719.54 0.723 0.19313.0793.33 1.992 0.359HCB10.7120.70 0.797 0.21514.43458.25 2.563 0.552LAG8.5710.11 0.007 0.0979.6415.64 0.019 0.173MLM3.805.67 0.530 0.4133.91103.86 5.046 3.276MRFL3.947.06 0.156 0.0684.62 1328.20 1.441 0.290XSUMMRF3.874.94 0.322 0.0364.1636.66 2.258 0.145HCB3.915.14 0.346 0.0595.67164.15 2.954 0.400AG3.884.13 0.003 0.0424.216.62 0.016 0.126", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Devlin et al., 2019)", "Explanation": "The cited work by Devlin et al. (2019) provides a foundational method for representation learning that the citing paper adopts in its research."}, {"Category": "Methodological Basis", "Citation": "(Liu et al., 2019)", "Explanation": "The cited work by Liu et al. (2019) offers a methodological approach to language modeling that the citing paper builds upon in its research."}, {"Category": "Methodological Basis", "Citation": "(He et al., 2021)", "Explanation": "The cited work by He et al. (2021) contributes to the citing paper by providing a methodological basis for language modeling."}, {"Category": "Extension or Continuation", "Citation": "(Ghazvininejad et al., 2019)", "Explanation": "The cited work by Ghazvininejad et al. (2019) extends the research on language models by exploring the usefulness of joint distributions in generation and scoring."}, {"Category": "Extension or Continuation", "Citation": "(Salazar et al., 2020)", "Explanation": "The cited work by Salazar et al. (2020) builds upon the research on language models by focusing on the evaluation of MLMs using standard metrics such as perplexity."}, {"Category": "Methodological Basis", "Citation": "(AG; Arnold and Gokhale, 1998)", "Explanation": "The cited work provides a statistical approach for deriving near-compatible joint distributions from incompatible conditionals, which the citing paper adopts to adapt the methods for deriving joint distributions in the context of language models."}, {"Category": "Data Source", "Citation": "(Du et al., 2022)", "Explanation": "The cited work provides a study on the limitations of left-to-right language models trained without an [EOS] token, which the citing paper uses to highlight the need for incorporating a distribution over sentence lengths in the context of language models."}, {"Category": "Extension or Continuation", "Citation": "(Chen and Goodman, 1998)", "Explanation": "The cited work studies the issue of the sum of probabilities of sentences of a given length in left-to-right language models, which the citing paper extends by incorporating a distribution over sentence lengths in the context of language models."}, {"Category": "Methodological Basis", "Citation": "(Yamakoshi et al., 2022)", "Explanation": "The cited work focuses on sampling from the dependency network to characterize the joint distribution of an MLM, which the citing paper builds upon to study the implicit characterization of the joint distribution in the context of language models."}, {"Category": "Methodological Basis", "Citation": "(Wang and Cho, 2019)", "Explanation": "The cited work by Wang and Cho (2019) proposes a method of deriving joint distributions by defining a Markov random field (MRF) using the unary conditionals of the MLM. This method is adopted in the citing paper to address the conditional independence limitation of MLMs."}, {"Category": "Methodological Basis", "Citation": "(Goyal et al., 2022)", "Explanation": "The cited work by Goyal et al. (2022) has also proposed a method of deriving joint distributions by defining an MRF using the unary conditionals of the MLM. 
The citing paper builds upon this method to address the conditional independence limitation of MLMs."}, {"Category": "Methodological Basis", "Citation": "(Bowman et al., 2015)", "Explanation": "The cited work provides a natural language inference dataset (SNLI) that the citing paper uses to calculate metrics and evaluate the performance of their model."}, {"Category": "Data Source", "Citation": "(Narayan et al., 2018)", "Explanation": "The cited work provides a summarization dataset (XSUM) that the citing paper uses to test the performance of their model in a different context."}, {"Category": "Methodological Basis", "Citation": "(Arnold and Gokhale, 1998)", "Explanation": "The cited work provides a method (App. C) for running t = 50 steps of the Arnold and Gokhale algorithm, which the citing paper adopts in their research to obtain the required data."}, {"Category": "Supporting Evidence", "Citation": "(Narayan et al., 2018)", "Explanation": "The cited work by Narayan et al. provides a summary of the results that the citing paper builds upon to demonstrate the performance of the language model in different settings."}, {"Category": "Methodological Basis", "Citation": "(Wang and Cho, 2019)", "Explanation": "The cited work by Wang and Cho (2019) provides a method of sampling sentences from MLMs using unary conditionals, which the citing paper adopts in their research."}, {"Category": "Extension or Continuation", "Citation": "(Yamakoshi et al., 2022)", "Explanation": "The cited work by Yamakoshi et al. (2022) highlights the limitations of the sampling method proposed by Wang and Cho (2019), and proposes a new method of re-sampling positions uniformly at random to ensure a unique stationary distribution."}, {"Category": "Methodological Basis", "Citation": "(Goyal et al., 2022)", "Explanation": "The cited work by Goyal et al.
(2022) proposes a new method of defining an MRF from the MLM's unary conditionals and sampling from it using the Metropolis-Hastings algorithm, which the citing paper adopts in their research."}, {"Category": "Extension or Continuation", "Citation": "(Young and You, 2023)", "Explanation": "The cited work by Young and You (2023) conducts an empirical study of the compatibility of BERT's conditionals, which the citing paper builds upon to further explore the compatibility of MLMs."}, {"Category": "Methodological Basis", "Citation": "(Arnold and Press, 1989;Gelman and Speed, 1993;Wang and Kuo, 2010;Song et al., 2010)", "Explanation": "The cited works in the statistics community have studied the problem of assessing the compatibility of a set of conditionals, which the citing paper leverages in their research to assess the compatibility of MLMs."}, {"Category": "Methodological Basis", "Citation": "(Arnold and Gokhale, 1998)", "Explanation": "The cited work by Arnold and Gokhale (1998) provides a method of assessing the compatibility of a set of conditionals, which the citing paper adopts in their research to assess the compatibility of MLMs."}, {"Category": "Methodological Basis", "Citation": "(2002)", "Explanation": "The cited work by (2002) provides algorithms for reconstructing near-compatible joints from incompatible conditionals, which the citing paper leverages in their own research to develop a method for the same task."}, {"Category": "Supporting Evidence", "Citation": "Besag (1974)", "Explanation": "The cited work by Besag (1974) defines a procedure for reconstructing near-compatible joints from incompatible conditionals, which the citing paper uses to support their own research in the same area."}, {"Category": "Methodological Basis", "Citation": "Lowd (2012)", "Explanation": "The cited work by Lowd (2012) applies a version of HCB to derive Markov networks from incompatible dependency networks, which the citing paper adopts in their own research to develop a method for the same task."}, {"Category": "Supporting Evidence", "Citation": "(Narayan et al., 2018)", "Explanation": "The cited work by Narayan et al. (2018) provides a reference to the trends observed in the paper regarding the performance of different language models in a particular case."}]
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b18", "b12", "b53", "b29", "b16", "b9", "b23", "b16", "b16", "b18", "b18", "b11", "b16", "b29", "b13", "b16" ], "table_ref": [], "text": "Consider a machine learning classifier that does not reach the desired performance for the intended application, even after significant development time. This may occur for a variety of reasons: the problem is too hard for the current technology; more development resources (data, compute or time) are needed than what is economically feasible for the specific situation; or perhaps the target distribution is different from the training one, resulting in a performance gap. In this case, one is faced with the choice of deploying an underperforming model or not deploying a model at all.\nA better tradeoff may be achieved by using so-called selective classification [Geifman andEl-Yaniv, 2017, El-Yaniv andWiener, 2010]. The idea is to run the model on all inputs but reject predictions for which the model is least confident, hoping to increase the performance on the accepted predictions. The rejected inputs may be processed in the same way as if the model were not deployed, for instance, by a human specialist or by the previously existing system. This offers a tradeoff between performance and coverage (the proportion of accepted predictions) which may be a better solution than any of the extremes. In particular, it could shorten the path to adoption of deep learning in safety-critical applications, such as medical diagnosis and autonomous driving, where the consequences of erroneous decisions can be severe [Zou et al., 2023, Neumann et al., 2018].\nA key element in selective classification is the confidence estimator that is thresholded to decide whether a prediction is accepted. In the case of neural networks with softmax outputs, the natural baseline to be used as a confidence estimator is the maximum softmax probability (MSP) produced by the model, also known as the softmax response [Geifman andEl-Yaniv, 2017, Hendrycks andGimpel, 2016]. Several approaches have been proposed1 attempting to improve upon this baseline, which generally fall into two categories: approaches that require retraining the classifier, by modifying some aspect of the architecture or the training procedure, possibly adding an auxiliary head as the confidence estimator [Geifman and El-Yaniv, 2019, Liu et al., 2019a, Huang et al., 2020]; and post-hoc approaches that do not require Figure 1: A comparison of RC curves made by three models selected in [Galil et al., 2023], including examples of highest (ViT-L/16-384) and lowest (EfficientNet-V2-XL) AUROC. An RC curve shows the tradeoff between risk (in this case, error rate) and coverage. The initial risk for any classifier is found at the 100% coverage point, where all predictions are accepted. Normally, the risk can be reduced by reducing coverage (which is done by increasing the selection threshold); for instance, a 2% error rate can be obtained at 36.2% coverage for the ViT-B/32-224-SAM model and at 61.9% coverage for the ViT-L/16-38 model. However, for the EfficientNet-V2-XL model, this error rate is not achievable at any coverage, since its RC curve is lower bounded by 5% risk. Moreover, this RC curve is actually non-monotonic, with an increasing risk as coverage is reduced, for low coverage. 
Fortunately, this apparent pathology in EfficientNet-V2-XL completely disappears after a simple post-hoc tuning of its confidence estimator (without the need to retrain the model), resulting in significantly improved selective classification performance. In particular, a 2% error rate can then be achieved at 55.3% coverage.\nretraining, thus only modifying or replacing the confidence estimator based on outputs or intermediate features produced by the model [Corbière et al., 2022, Granese et al., 2021, Shen et al., 2022, Galil et al., 2023]. The latter is arguably the most practical scenario, especially if tuning the confidence estimator is sufficiently simple.\nIn this paper, we focus on the simplest possible class of post-hoc methods, which are those for which the confidence estimator can be computed directly from the network unnormalized logits (pre-softmax output). Our main goal is to identify the methods that produce the largest gains in selective classification performance, measured by the area under the risk-coverage curve (AURC); however, as in general these methods can have hyperparameters that need to be tuned on hold-out data, we are also concerned with data efficiency. Our study is motivated by an intriguing problem reported in [Galil et al., 2023] and illustrated in Fig. 1: some state-of-the-art ImageNet classifiers, despite attaining excellent predictive performance, nevertheless exhibit appallingly poor performance at detecting their own mistakes. Can such pathologies be fixed by simple post-hoc methods?\nTo answer this question, we consider every such method to our knowledge, as well as several variations and novel methods that we propose, and perform an extensive experimental study using 84 pretrained ImageNet classifiers available from popular repositories. Our results show that, among other close contenders, a simple p-norm normalization of the logits, followed by taking the maximum logit as the confidence estimator, can lead to considerable gains in selective classification performance, completely fixing the pathologi-cal behavior observed in many classifiers, as illustrated in Fig. 1. As a consequence, the selective classification performance of any classifier becomes almost entirely determined by its corresponding accuracy.\nThe main contributions of this work are summarized as follows:\n• We perform an extensive experimental study of many existing and proposed confidence estimators, obtaining considerable gains for most classifiers. In particular, we find that a simple post-hoc estimator can provide up to 62% reduction in normalized AURC using no more than one sample per class of labeled hold-out data;\nspace, and C is the number of classes. The risk of a classifier h : X → Y is R(h) = E P [ℓ(h(x), y)], where ℓ : Y × Y → R + is a loss function, for instance, the 0/1 loss ℓ(ŷ, y) = 1[ŷ ̸ = y], where 1[•] denotes the indicator function. A selective classifier [Geifman and El-Yaniv, 2017] is a pair (h, g), where h is a classifier and g : X → R is a confidence estimator (also known as confidence score function or confidence-rate function), which quantifies the model's confidence on its prediction for a given input. For some fixed threshold t, given an input x, the selective model makes a prediction h(x) if g(x) ≥ t, otherwise the prediction is rejected. A selective model's coverage ϕ(h, g) = P [g(x) ≥ t] is the probability mass of the selected samples in X , while its selective risk R(h, g) = E P [ℓ(h(x), y) | g(x) ≥ t] is its risk restricted to the selected samples. 
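The accept/reject mechanism defined above can be made concrete with a minimal Python/NumPy sketch of a selective model built from a fixed classifier h and a confidence estimator g; the function names and the use of `None` to signal rejection are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def selective_predict(logits, confidence_fn, threshold):
    """Return the predicted class if the confidence estimate passes the
    threshold t, otherwise reject (signalled here by None)."""
    pred = int(np.argmax(logits))      # h(x) = argmax_k z_k
    conf = confidence_fn(logits)       # g(x)
    return pred if conf >= threshold else None

def msp(logits):
    """Maximum softmax probability, the baseline confidence estimator."""
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return float(p.max())

# toy usage with a 5-class logit vector
z = np.array([2.0, 0.5, -1.0, 0.1, 0.3])
print(selective_predict(z, msp, threshold=0.7))
```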
In particular, a model's risk equals its selective risk at full coverage (i.e., for t such that ϕ(h, g) = 1). These quantities can be evaluated empirically given a test dataset $\{(x_i, y_i)\}_{i=1}^{N}$ drawn i.i.d. from P , yielding the empirical coverage $\hat{\phi}(h, g) = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}[g(x_i) \ge t]$ and the empirical selective risk\n$$\hat{R}(h, g) = \frac{\sum_{i=1}^{N} \ell(h(x_i), y_i)\,\mathbb{1}[g(x_i) \ge t]}{\sum_{i=1}^{N} \mathbb{1}[g(x_i) \ge t]}. \tag{1}$$\nNote that, by varying t, it is generally possible to trade off coverage for selective risk, i.e., a lower selective risk can usually (but not necessarily always) be achieved if more samples are rejected. This tradeoff is captured by the risk-coverage (RC) curve [Geifman and El-Yaniv, 2017], a plot of $\hat{R}(h, g)$ as a function of $\hat{\phi}(h, g)$. While the RC curve provides a full picture of the performance of a selective classifier, it is convenient to have a scalar metric that summarizes this curve. A commonly used metric is the area under the RC curve (AURC) [Ding et al., 2020, Geifman et al., 2019], denoted by AURC(h, g). However, when comparing selective models, if two RC curves cross, then each model may have a better selective performance than the other depending on the operating point chosen, which cannot be captured by the AURC. Another interesting metric, which forces the choice of an operating point, is the selective accuracy constraint (SAC) [Galil et al., 2023], defined as the maximum coverage allowed for a model to achieve a specified accuracy.\nClosely related to selective classification is misclassification detection [Hendrycks and Gimpel, 2016], which refers to the problem of discriminating between correct and incorrect predictions made by a classifier. Both tasks rely on ranking predictions according to their confidence estimates, where correct predictions should be ideally separated from incorrect ones. A usual metric for misclassification detection is the area under the ROC curve (AUROC) [Fawcett, 2006] which, in contrast to the AURC, is blind to the classifier performance, focusing only on the quality of the confidence estimates. Thus, it may also be used to evaluate confidence estimators for selective classification [Galil et al., 2023]." }, { "figure_ref": [], "heading": "CONFIDENCE ESTIMATION", "publication_ref": [ "b11", "b9", "b18", "b5", "b30", "b60", "b41", "b23", "b21", "b23" ], "table_ref": [], "text": "From now on we restrict attention to classifiers that can be decomposed as $h(x) = \arg\max_{k \in \mathcal{Y}} z_k$, where z = f (x) and $f: \mathcal{X} \to \mathbb{R}^C$ is a neural network. The network output z is referred to as the (vector of) logits or logit vector, due to the fact that it is typically applied to a softmax function to obtain an estimate of the posterior distribution P [y|x]. The softmax function is defined as\n$$\sigma: \mathbb{R}^C \to [0, 1]^C, \qquad \sigma_k(z) = \frac{e^{z_k}}{\sum_{j=1}^{C} e^{z_j}}, \quad k \in \{1, \ldots, C\} \tag{2}$$\nwhere $\sigma_k(z)$ denotes the kth element of the vector σ(z).\nThe most popular confidence estimator is arguably the maximum softmax probability (MSP) [Ding et al., 2020], also known as maximum class probability [Corbière et al., 2022] or softmax response [Geifman and El-Yaniv, 2017]\n$$g(x) = \mathrm{MSP}(z) \triangleq \max_{k \in \mathcal{Y}} \sigma_k(z) = \sigma_{\hat{y}}(z) \tag{3}$$\nwhere $\hat{y} = \arg\max_{k \in \mathcal{Y}} z_k$. However, other functions of the logits can be considered.
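As a companion to the empirical definitions above, the following sketch traces the RC curve and computes AURC and SAC from per-sample confidences and 0/1 losses; accepting samples in decreasing order of confidence and averaging the selective risk over coverage levels is one common discretization, assumed here rather than prescribed by the text.

```python
import numpy as np

def rc_curve(confidences, losses):
    """Empirical risk-coverage curve.

    confidences: g(x_i) for each test sample.
    losses: per-sample losses, e.g. the 0/1 loss (1 if h(x_i) != y_i).
    Returns the coverages and the selective risk at each coverage.
    """
    confidences = np.asarray(confidences)
    losses = np.asarray(losses, dtype=float)
    order = np.argsort(-confidences)            # most confident first
    sorted_losses = losses[order]
    n = len(losses)
    coverages = np.arange(1, n + 1) / n         # accept top-1, top-2, ...
    risks = np.cumsum(sorted_losses) / np.arange(1, n + 1)
    return coverages, risks

def aurc(confidences, losses):
    """Area under the RC curve (mean selective risk over coverage levels)."""
    _, risks = rc_curve(confidences, losses)
    return float(risks.mean())

def sac(confidences, losses, target_accuracy):
    """Selective accuracy constraint: max coverage with accuracy >= target."""
    coverages, risks = rc_curve(confidences, losses)
    ok = (1.0 - risks) >= target_accuracy
    return float(coverages[ok].max()) if ok.any() else 0.0
```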
Some examples are the softmax margin [Belghazi and Lopez-Paz, 2021, Lubrano et al., 2023], the max logit [Hendrycks et al., 2022], the logits margin [Streeter, 2018, Lebovitz et al., 2023], the negative entropy [Belghazi and Lopez-Paz, 2021], and the negative Gini index [Granese et al., 2021, Gomes et al., 2022], defined, respectively, as\n$$\mathrm{SoftmaxMargin}(z) \triangleq \sigma_{\hat{y}}(z) - \max_{k \in \mathcal{Y}: k \neq \hat{y}} \sigma_k(z) \tag{4}$$\n$$\mathrm{MaxLogit}(z) \triangleq z_{\hat{y}} \tag{5}$$\n$$\mathrm{LogitsMargin}(z) \triangleq z_{\hat{y}} - \max_{k \in \mathcal{Y}: k \neq \hat{y}} z_k \tag{6}$$\n$$\mathrm{NegativeEntropy}(z) \triangleq \sum_{k \in \mathcal{Y}} \sigma_k(z) \log \sigma_k(z) \tag{7}$$\n$$\mathrm{NegativeGini}(z) \triangleq -1 + \sum_{k \in \mathcal{Y}} \sigma_k(z)^2. \tag{8}$$\nNote that, in the scenario we consider, DOCTOR's D α and D β discriminators [Granese et al., 2021] are equivalent to the negative Gini index and MSP confidence estimators, respectively, as discussed in more detail in Appendix B." }, { "figure_ref": [], "heading": "METHODS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "TUNABLE LOGIT TRANSFORMATIONS", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce a simple but powerful framework for designing post-hoc confidence estimators for selective classification. The idea is to take any parameter-free logit-based confidence estimator, such as those described in section 2.2, and augment it with a logit transformation parameterized by one or a few hyperparameters, which are then tuned (e.g., via grid search) using a labeled hold-out dataset not used during training of the classifier (i.e., validation data). Moreover, this hyperparameter tuning is done using as objective function not a proxy loss but rather the exact same metric that one is interested in optimizing, for instance, AURC or AUROC. This approach forces us to be conservative about the hyperparameter search space, which is important for data efficiency." }, { "figure_ref": [], "heading": "Temperature Scaling", "publication_ref": [ "b24", "b24", "b52" ], "table_ref": [], "text": "Originally proposed in the context of post-hoc calibration, temperature scaling (TS) [Guo et al., 2017] consists in transforming the logits as z′ = z/T , before applying the softmax function. The parameter T > 0, which is called the temperature, is then optimized over hold-out data.\nThe conventional way of applying TS, as proposed in [Guo et al., 2017] for calibration and referred to here as TS-NLL, consists in optimizing T with respect to the negative log-likelihood (NLL) [Murphy, 2022]. Here we instead optimize T using AURC and the resulting method is referred to as TS-AURC.\nNote that TS does not affect the ranking of predictions for MaxLogit and LogitsMargin, so it is not applied in these cases." }, { "figure_ref": [], "heading": "Logit Normalization", "publication_ref": [ "b4" ], "table_ref": [], "text": "Inspired by Wei et al. [2022], who show that logits norms are directly related to overconfidence and propose logit normalization during training, we propose logit normalization as a post-hoc method. Additionally, we extend the normalization from the 2-norm to a general p-norm, where p is a tunable hyperparameter. (For more context on logit normalization, as well as intuition and theoretical justification for our proposed modifications, please see Appendix C.) Thus, logit p-normalization is defined as the operation\n$$z' = \frac{z}{\tau \lVert z \rVert_p} \tag{9}$$\nwhere $\lVert z \rVert_p \triangleq (|z_1|^p + \cdots + |z_C|^p)^{1/p}$, $p \in \mathbb{R}$, is the p-norm of z and τ > 0 is a temperature scaling parameter.
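The estimators of equations (4)-(8) and the p-norm transformation of equation (9) can be written compactly as follows; this is a sketch with NumPy and a small numerical-stability shift in the softmax as the only additions beyond the definitions above.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def msp(z):
    return float(softmax(z).max())

def softmax_margin(z):                      # eq. (4): top-1 minus top-2 prob
    p = np.sort(softmax(z))[::-1]
    return float(p[0] - p[1])

def max_logit(z):                           # eq. (5)
    return float(z.max())

def logits_margin(z):                       # eq. (6): top-1 minus top-2 logit
    s = np.sort(z)[::-1]
    return float(s[0] - s[1])

def negative_entropy(z):                    # eq. (7): sum_k p_k log p_k (<= 0)
    p = softmax(z)
    return float((p * np.log(p)).sum())

def negative_gini(z):                       # eq. (8)
    p = softmax(z)
    return float(-1.0 + (p ** 2).sum())

def pnorm_transform(z, p=3.0, tau=1.0):
    """Logit p-normalization of eq. (9): z' = z / (tau * ||z||_p)."""
    norm = np.sum(np.abs(z) ** p) ** (1.0 / p)
    return z / (tau * norm)
```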
Note that this transformation is a form of adaptive TS [Balanya et al., 2023], with an input-dependent temperature τ ∥z∥ p .\nLogit p-normalization introduces two hyperparameters, p and τ , which should be jointly optimized; in this case, we first optimize τ for each value of p considered and then pick the best value of p. Such a transformation, together with the optimization of p and τ , is referred to here as pNorm.\nThe optimizing metric is always AURC and therefore it is omitted from the nomenclature of the method.\nNote that, when the underlying confidence estimator is MaxLogit or LogitsMargin, the parameter τ is irrelevant and is ignored." }, { "figure_ref": [], "heading": "EVALUATION METRICS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Normalized AURC", "publication_ref": [ "b16", "b11", "b16" ], "table_ref": [], "text": "A common criticism of the AURC metric is that it does not allow for meaningful comparisons across problems [Geifman et al., 2019]. An AURC of some arbitrary value, for instance, 0.05, may correspond to an ideal confidence estimator for one classifier (of much higher risk) and to a completely random confidence estimator for another classifier (of risk equal to 0.05). The excess AURC (E-AURC) was proposed by Geifman et al. [2019] to alleviate this problem: for a given classifier h and confidence estimator g, it is defined as E-AURC(h, g) = AURC(h, g) -AURC(h, g * ), where g * corresponds to a hypothetically optimal confidence estimator that perfectly orders samples in decreasing order of their losses. Thus, an ideal confidence estimator always has zero E-AURC.\nUnfortunately, E-AURC is still highly sensitive to the classifier's risk, as shown by Galil et al. [2023], who suggested the use of AUROC instead. However, using AUROC for comparing confidence estimators has an intrinsic disadvantage: if we are using AUROC to evaluate the performance of a tunable confidence estimator, it makes sense to optimize it using this same metric. However, as AUROC and AURC are not necessarily monotonically aligned Ding et al. [2020], the resulting confidence estimator will be optimized for a different problem than the one in which we were originally interested (which is selective classification). Ideally, we would like to evaluate confidence estimators using a metric that is a monotonic function of AURC.\nWe propose a simple modification to E-AURC that eliminates the shortcomings pointed out in [Galil et al., 2023]: normalizing by the E-AURC of a random confidence estimator, whose AURC is equal to the classifier's risk. More precisely, we define the normalized AURC (NAURC) as\nNAURC(h, g) = AURC(h, g) -AURC(h, g * ) R(h) -AURC(h, g * ) .(10)\nNote that this corresponds to a min-max scaling that maps the AURC of the ideal classifier to 0 and the AURC of the random classifier to 1. The resulting NAURC is suitable for comparison across different classifiers and is monotonically related to AURC." }, { "figure_ref": [], "heading": "MSP Fallback", "publication_ref": [], "table_ref": [], "text": "A useful property of MSP-TS-AURC (but not MSP-TS-NLL) is that, in the infinite-sample setting, it can never have a worse performance than the MSP baseline, as long as T = 1 is included in the search space. It is natural to extend this property to every confidence estimator, for a simple reason: it is very easy to check whether the estimator provides an improvement to the MSP baseline and, if not, then use the MSP instead. 
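A minimal sketch of the NAURC of equation (10) follows: it computes the AURC of the given estimator, the AURC of the hypothetically optimal estimator g* (which accepts samples in increasing order of loss), and uses the mean loss as the risk of the classifier; the helper names are assumptions, and the sketch presumes the classifier makes at least one error so the denominator is nonzero.

```python
import numpy as np

def aurc_from_ranking(losses_in_acceptance_order):
    """AURC when samples are accepted in the given order."""
    losses = np.asarray(losses_in_acceptance_order, dtype=float)
    n = len(losses)
    return float((np.cumsum(losses) / np.arange(1, n + 1)).mean())

def naurc(confidences, losses):
    """Normalized AURC of eq. (10): maps the optimal estimator to 0 and a
    random estimator (whose AURC equals the classifier's risk) to 1."""
    confidences = np.asarray(confidences)
    losses = np.asarray(losses, dtype=float)
    aurc_g = aurc_from_ranking(losses[np.argsort(-confidences)])
    aurc_opt = aurc_from_ranking(np.sort(losses))  # g*: lowest losses first
    risk = losses.mean()                           # AURC of a random estimator
    return float((aurc_g - aurc_opt) / (risk - aurc_opt))
```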
Formally, this corresponds to adding a binary hyperparameter indicating an MSP fallback.\nEquivalently, when measuring performance across different models, we simply report a (non-negligible) positive gain in NAURC whenever it occurs. More precisely, we define the average positive gain (APG) in NAURC as\nAPG(g) = 1 |H| h∈H [NAURC(h, MSP) -NAURC(h, g)] + ϵ (11) where [x] +\nϵ is defined as x if x > ϵ and is 0 otherwise, H is a set of classifiers and ϵ > 0 is chosen so that only non-negligible gains are reported." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [ "b57", "b10", "b67", "b16", "b37", "b56" ], "table_ref": [], "text": "All experiments in this section were performed using Py-Torch [Paszke et al., 2019] and all of its provided classifiers pre-trained on ImageNet [Deng et al., 2009]. Additionally, some models of the Wightman [2019] repository were used, particularly the ones highlighted by Galil et al. [2023]. In total, 84 ImageNet classifiers were used. The list of all models, together with all the results per model are presented in Appendix J. The validation set of ImageNet was randomly split into 5000 hold-out images for post-hoc optimization (which we also refer to as the tuning set) and 45000 images for performance evaluation (the test set). To ensure the results are statistically significant, we repeat each experiment (including post-hoc optimization) for 10 different random splits and report mean and standard deviation.\nTo give evidence that our results are not specific to Ima-geNet, we also run experiments on CIFAR-100 [Krizhevsky, 2009] and Oxford-IIIT Pet [Parkhi et al., 2012] datasets, which are presented in Appendix E." }, { "figure_ref": [ "fig_0" ], "heading": "COMPARISON OF METHODS", "publication_ref": [ "b16" ], "table_ref": [ "tab_0" ], "text": "We start by evaluating the NAURC of each possible combination of a confidence estimator listed in Section 2.2 with a logit transformation described in Section 3.1, for specific models. Table 1 shows the results for EfficientNet-V2-XL (trained on ImageNet-21K and fine tuned on ImageNet-1K) and VGG16, respectively, the former chosen for having the worst confidence estimator performance (in terms of AUROC, with MSP as the confidence estimator) of all the models reported in [Galil et al., 2023] and the latter chosen as a representative example of a lower accuracy model for which the MSP is already a good confidence estimator.\nAs can be seen, on EfficientNet-V2-XL, the baseline MSP is easily outperformed by most methods. Surprisingly, the best method is not to use a softmax function but, instead, to take the maximum of a p-normalized logit vector, leading to a reduction in NAURC of 0.27 points or about 62%. However, on VGG16, the situation is quite different, as methods that use the unnormalized logits and improve the performance on EfficientNet-V2-XL, such as LogitsMargin and MaxLogit-pNorm, actually degrade it on VGG16. Moreover, the highest improvement obtained, e.g., with MSP-TS-AURC, is so small that it can be considered negligible. (In fact, gains below 0.003 NAURC are visually imperceptible in an AURC curve.) Thus, it is reasonable to assert that none of the post-hoc methods considered is able to outperform the baseline in this case.\nIn Table 2, we evaluate the average performance of posthoc methods across all models considered, using the APG-NAURC metric described in Section 3.2.2, where we assume ϵ = 0.01. 
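The APG of equation (11) reduces to a clipped average of NAURC gains over a collection of models; below is a small sketch in which `naurc_msp` and `naurc_method` are assumed to be arrays of per-model NAURC values, with ε = 0.01 as in the experiments.

```python
import numpy as np

def apg(naurc_msp, naurc_method, eps=0.01):
    """Average positive gain in NAURC over a set of classifiers (eq. 11).

    Gains not exceeding eps are treated as negligible and set to zero,
    which also implements the MSP-fallback convention described above.
    """
    gains = np.asarray(naurc_msp) - np.asarray(naurc_method)
    gains = np.where(gains > eps, gains, 0.0)
    return float(gains.mean())

# toy usage with three hypothetical models
print(apg([0.30, 0.12, 0.25], [0.10, 0.12, 0.24]))
```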
Figure 2 shows the gains for selected methods for each model, ordered by MaxLogit-pNorm gains. It can be seen that the highest gains are provided by MaxLogit-pNorm, NegativeGini-pNorm, MSP-pNorm and NegativeEntropy-pNorm, and their performance is essentially indistinguishable whenever they provide a nonnegligible gain over the baseline. Moreover, the set of models for which significant gains can be obtained appears to be consistent across all methods.\nAlthough several post-hoc methods provide considerable gains, they all share a practical limitation which is the requirement of hold-out data for hyperparameter tuning. In Appendix F, we study the data efficiency of some of the best performing methods. MaxLogit-pNorm, having a single hyperparameter, emerges as a clear winner, requiring fewer than 500 samples to achieve near-optimal performance on ImageNet (< 0.5 images per class on average) and fewer than 100 samples on CIFAR-100 (< 1 image per class on average). These requirements are clearly easily satisfied in practice for typical validation set sizes." }, { "figure_ref": [], "heading": "Details on the optimization of T and p, additional results", "publication_ref": [], "table_ref": [], "text": "showing AUROC values and RC curves, and results on the insensitivity of our conclusions to the choice of ϵ are provided in Appendix D. In addition, the benefits of a tunable versus fixed p and a comparison with other tunable methods that do not fit into the framework of Section 3.1 are discussed, respectively, in Appendices G and H. " }, { "figure_ref": [ "fig_0", "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "POST-HOC OPTIMIZATION FIXES BROKEN CONFIDENCE ESTIMATORS", "publication_ref": [ "b16" ], "table_ref": [], "text": "From Figure 2, we can distinguish two groups of models: those for which the MSP baseline is already the best confidence estimator and those for which post-hoc methods provide considerable gains (particularly, MaxLogit-pNorm).\nIn fact, most models belong to the second group, comprising 58 out of 84 models considered.\nFigure 3 illustrates two noteworthy phenomena. First, as previously observed by Galil et al. [2023], certain models exhibit superior accuracy than others but poorer uncertainty estimation, leading to a trade-off when selecting a classifier for selective classification. Second, post-hoc optimization can fix any \"broken\" confidence estimators. This can be seen in two ways: in Figure 3a, after optimization, all models exhibit a much more similar level of confidence estimation performance (as measured by NAURC), although a dependency on accuracy is clearly seen (better predictive models are better at predicting their own failures). In Figure 3b, it is clear that, after optimization, the selective classification performance of any classifier (as measured by AURC) becomes almost entirely determined by its corresponding accuracy. Indeed, the Spearman correlation between AURC and accuracy becomes extremely close to 1. The same conclusions hold for the SAC metric, as shown in Figure 3c. This implies that any \"broken\" confidence estimators have been fixed and, consequently, total accuracy becomes the primary determinant of selective performance even at lower coverage levels.\nAn intriguing question is what properties of a classifier make it bad at confidence estimation. Experiments investigating this question are presented in Appendix I. 
In summary, our surprising conclusion is that models that produce highly confident MSPs tend to have better confidence estimators (in terms of NAURC), while models whose MSP distribution is more balanced tend to be easily improvable by post-hoc optimization-which, in turn, makes the resulting confidence estimator concentrated on highly confident values." }, { "figure_ref": [ "fig_2", "fig_2", "fig_1", "fig_2" ], "heading": "PERFORMANCE UNDER DISTRIBUTION SHIFT", "publication_ref": [ "b28", "b59" ], "table_ref": [ "tab_2" ], "text": "We now turn to the question of how post-hoc methods for selective classification perform under distribution shift. Pre- vious works have shown that calibration can be harmed under distribution shift, especially when certain post-hoc methods-such as TS-are applied [Ovadia et al., 2019].\nTo find out whether a similar issue occurs for selective classification, we evaluate selected post-hoc methods on ImageNet-C [Hendrycks and Dietterich, 2018], which consists in 15 different corruptions of the ImageNet's validation set, and on ImageNetV2 [Recht et al., 2019], which is an independent sampling of the ImageNet test set replicating the original dataset creation process. We follow the standard approach for evaluating robustness with these datasets, which is to use them only for inference; thus, the post-hoc methods are optimized using only the 5000 hold-out images from the uncorrupted ImageNet validation dataset. To avoid data leakage, the same split is applied to the ImageNet-C dataset, so that inference is performed only on the 45000 images originally selected as the test set.\nFirst, we evaluate the performance of MaxLogit-pNorm on ImageNet and ImageNetV2 for all classifiers considered. Figure 4a shows that the NAURC gains (over the MSP baseline) obtained for ImageNet translate to similar gains for ImageNetV2, showing that this post-hoc method is quite robust to distribution shift. Then, considering all models after post-hoc optimization with MaxLogit-pNorm, we investigate whether selective classification performance itself (as measured by NAURC) is robust to distribution shift. As can The legend shows the optimal value of p for each model, where MSP indicates MSP fallback (no significant positive gain). ρ is the Spearman correlation between a metric and the accuracy. In (c), models that cannot achieve the desired selective accuracy are shown with ≈ 0 coverage. be seen in Figure 4b, the results are consistent, following an affine function (with Pearson's correlation equal to 0.983); however, a significant degradation in NAURC can be observed for all models under distribution shift. While at first sight this would suggest a lack of robustness, a closer look reveals that it can actually be explained by the natural accuracy drop of the underlying classifier under distribution shift. Indeed, we have already noticed in Figure 3a a negative correlation between the NAURC and the accuracy; in Figure 4c these results are expanded by including the evaluation on Im-ageNetV2 and also (for selected models AlexNet, ResNet50, WideResNet50-2, VGG11, EfficientNet-B3 and ConvNext-Large, sorted by accuracy) on ImageNet-C, where we can see that the strong correlation between NAURC and accuracy continues to hold. Finally, to give a more tangible illustration of the impact of selective classification, Table 3 shows the SAC metric for a ResNet50 under distribution shift, with the target accuracy as the original accuracy obtained with the in-distribution test data. 
As can be seen, the original accuracy can be restored at the expense of coverage; meanwhile, MaxLogit-pNorm achieves higher coverages for all distribution shifts considered, significantly improving coverage over the MSP baseline." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we addressed the problem of selective multiclass classification for deep neural networks with softmax outputs. Specifically, we considered the design of post-hoc confidence estimators that can be computed directly from the unnormalized logits. We performed an extensive benchmark of more than 20 tunable post-hoc methods across 84 ImageNet classifiers, establishing strong baselines for future research. To allow for a fair comparison, we proposed a normalized version of the AURC metric that is insensitive to the classifier accuracy.\nOur main conclusions are the following: (1) For 58 (69%) of the models considered, considerable NAURC gains over the MSP can be obtained, in one case achieving a reduction of 0.27 points or about 62%. (2) Our proposed method MaxLogit-pNorm (which does not use a softmax function) emerges as a clear winner, providing the highest gains with exceptional data efficiency, requiring on average less than 1 sample per class of hold-out data for tuning its single hyperparameter. These observations are also confirmed under additional datasets and the gains preserved even under distribution shift.\n(3) After post-hoc optimization, all models with a similar accuracy achieve a similar level of confidence estimation performance, even models that have been previously shown to be very poor at this task. In particular, the selective classification performance of any classifier becomes almost entirely determined by its corresponding accuracy, eliminating the seemingly existing tradeoff between these two goals reported in previous work. ( 4) Selective classification performance itself appears to be robust to distribution shift, in the sense that, although it naturally degrades, this degradation is not larger than what would be expected by the corresponding accuracy drop.\nTwo questions naturally emerge from our results, which are left as suggestions for future work. Can better performance be attainable with more complex post-hoc methods under limited (or even unlimited) tuning data? What exact properties of a classifier or training regime make it improvable by post-hoc methods? Our investigation suggests that the issue is related to underconfidence, but a complete explanation is still elusive." }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "Luís Felipe P. Cattelan 1 Danilo Silva 1 1 Department of Electrical and Electronic Engineering, Federal University of Santa Catarina (UFSC), Florianópolis, Brazil , [email protected], [email protected]" }, { "figure_ref": [], "heading": "A RELATED WORK", "publication_ref": [ "b70", "b27", "b29", "b9", "b49", "b16", "b17", "b0", "b11", "b16", "b41", "b9", "b61", "b3", "b15", "b1", "b7", "b68", "b40", "b14", "b24", "b34", "b8", "b43", "b4", "b4", "b44", "b58", "b53", "b33", "b16", "b11", "b16", "b2" ], "table_ref": [], "text": "Selective prediction is also known as learning with a reject option (see [Zhang et al., 2023, Hendrickx et al., 2021] and references therein), where the rejector is usually a thresholded confidence estimator. 
Essentially the same problem is studied under the equivalent terms misclassification detection [Hendrycks and Gimpel, 2016], failure prediction [Corbière et al., 2022, Zhu et al., 2022], and (ordinal) ranking [Moon et al., 2020, Galil et al., 2023]. Uncertainty estimation is a more general term that encompasses these tasks (where confidence may be taken as negative uncertainty) as well as other tasks where uncertainty might be useful, such as calibration and out-of-distribution (OOD) detection, among others [Gawlikowski et al., 2022, Abdar et al., 2021]. These tasks are generally not aligned: for instance, optimizing for calibration may harm selective classification performance [Ding et al., 2020, Zhu et al., 2022, Galil et al., 2023]. Our focus here is on in-distribution selective classification, although we also study robustness to distribution shift.\nInterestingly, the same principles of selective classification can be applied to enable efficient inference with model cascades [Lebovitz et al., 2023], although the literature on those topics appears disconnected.\nMost approaches to selective classification consider the base model as part of the learning problem [Geifman and El-Yaniv, 2019, Huang et al., 2020, Liu et al., 2019b], which we refer to as training-based approaches. While such an approach has a theoretical appeal, the fact that it requires retraining a model is a significant practical drawback. Alternatively, one may keep the model fixed and only modify or replace the confidence estimator, which is known as a post-hoc approach. Such an approach is practically appealing and perhaps more realistic, as it does not require retraining. Papers that follow this approach typically construct a meta-model that feeds on intermediate features of the base model and is trained to predict whether or not the base model is correct on hold-out samples [Corbière et al., 2022, Shen et al., 2022]. However, depending on the size of such a meta-model, its training may still be computationally demanding.\nA popular tool in the uncertainty literature is the use of ensembles [Lakshminarayanan et al., 2017, Teye et al., 2018, Ayhan and Berens, 2018], of which Monte-Carlo dropout Gal and Ghahramani [2016] is a prominent example. While constructing a confidence estimator from ensemble component outputs may be considered post-hoc if the ensemble is already trained, the fact that multiple inference passes need to be performed significantly increases the computational burden at test time. Moreover, recent work has found evidence that ensembles may not be fundamental for uncertainty but simply better predictive models [Abe et al., 2022, Cattelan and Silva, 2022, Xia and Bouganis, 2022]. Thus, we do not consider ensembles here.\nIn this work we focus on simple post-hoc confidence estimators for softmax networks that can be directly computed from the logits. The earliest example of such a post-hoc method used for selective classification in a real-world application seems to be the use of LogitsMargin in [Le Cun et al., 1990]. While potentially suboptimal, such methods are extremely simple to apply on top of any trained classifier and should be natural choice to try before any more complex technique. In fact, it is not entirely obvious how a training-based approach should be compared to a post-hoc method. For instance, Feng et al. 
[2023] has found that, for some state-of-the-art training-based approaches to selective classification, after the main classifier has been trained with the corresponding technique, better selective classification performance can be obtained by discarding the auxiliary output providing confidence values and simply use the conventional MSP as the confidence estimator. Thus, in this sense, the MSP can be seen as a strong baseline.\nPost-hoc methods have been widely considered in the context of calibration, among which the most popular approach is temperature scaling (TS). Applying TS to improve calibration (of the MSP confidence estimator) was originally proposed in [Guo et al., 2017] based on the negative log-likelihood. Optimizing TS for other metrics has been explored in [Mukhoti et al., 2020, Karandikar et al., 2021, Clarté et al., 2023] for calibration and in [Liang et al., 2023] for OOD detection, but had not been proposed for selective classification. A generalization of TS is adaptive TS (ATS) [Balanya et al., 2023], which uses an input-dependent temperature based on logits. The post-hoc methods we consider here can be seen as a special case of ATS, as logit norms may be seen as an input-dependent temperature; however Balanya et al. [2023] investigate a different temperature function and focuses on calibration. (For more discussion on calibration methods, please see Appendix H.) Other logit-based confidence estimators proposed for calibration and OOD detection include [Liu et al., 2020, Tomani et al., 2022, Rahimi et al., 2022, Neumann et al., 2018, Gonsior et al., 2022].\nNormalizing the logits with the L 2 norm before applying the softmax function was used in [Kornblith et al., 2021] and later proposed and studied in [Wei et al., 2022] as a training technique (combined with TS) to improve OOD detection and calibration. A variation where the logits are normalized to unit variance was proposed in [Jiang et al., 2023] to accelerate training. In contrast, we propose to use logit normalization as a post-hoc method for selective classification, extend it to general p-norm, and consider a tunable p with AURC as the optimization objective, all of which are new ideas which depart significantly from previous work.\nBenchmarking of models in their performance at selective classification/misclassification detection has been done in [Galil et al., 2023, Ding et al., 2020], however these works mostly consider the MSP as the confidence estimator. In particular, a thorough evaluation of potential post-hoc estimators for selective classification as done in this work had not yet appeared in the literature. The work furthest in that direction is the paper by Galil et al. [2023], who empirically evaluated ImageNet classifiers and found that TS-NLL improved selective classification performance for some models but degraded it for others.\nIn the context of calibration, Wang et al. [2021] and Ashukha et al. [2020] have argued that models should be compared after simple post-hoc optimizations, since models that appear worse than others can sometimes easily be improved by methods such as TS. Here we advocate and provide further evidence for this approach in the context of selective classification." 
}, { "figure_ref": [], "heading": "B ON THE DOCTOR METHOD", "publication_ref": [ "b23", "b23", "b23", "b23" ], "table_ref": [ "tab_0" ], "text": "The paper by [Granese et al., 2021] introduces a selection mechanism named DOCTOR, which actually refers to two distinct methods, D α and D β , in two possible scenarios, Total Black Box and Partial Black Box. Only the former scenario corresponds to post-hoc estimators and, in this case, the two methods are equivalent to NegativeGini and MSP, respectively.\nTo see this, first consider the definition of D α : a sample x is rejected if 1 -ĝ(x) > γĝ(x), where\n1 -ĝ(x) = k∈Y (σ(z)) k (1 -(σ(z)) k ) = 1 - k∈Y (σ(z)) 2 k = 1 -∥σ(z)∥ 2 2\nis exactly the Gini index of diversity applied to the softmax outputs. Thus, a sample\nx is accepted if 1 -ĝ(x) ≤ γĝ(x) ⇐⇒ (1 + γ)ĝ(x) ≥ 1 ⇐⇒ ĝ(x) ≥ 1/(1 + γ) ⇐⇒ ĝ(x) -1 ≥ 1/(1 + γ) -1. Therefore, the method is equivalent to the confidence estimator g(x) = ĝ(x) -1 = ∥σ(z)∥ 2 -1, with t = 1/(1 + γ) -1 as the selection threshold. Now, consider the definition of D β : a sample x is rejected if Pe (x) > γ(1 -Pe (x)), where Pe (x) = 1 -(σ(z)) ŷ and ŷ = arg max k∈Y (σ(z)) k , i.e., Pe (x) = 1 -MSP(z). Thus, a sample x is accepted if Pe (x) ≤ γ(1 -Pe (x)) ⇐⇒ (1 + γ) Pe (x) ≤ γ ⇐⇒ Pe (x) ≤ γ/(1 + γ) ⇐⇒ MSP(z) ≥ 1 -γ/(1 + γ) = 1/(1 + γ).\nTherefore, the method is equivalent to the confidence estimator g(x) = MSP(z), with t = 1/(1 + γ) as the selection threshold.\nGiven the above results, one may wonder why the results in [Granese et al., 2021] show different performance values for D β and MSP (softmax response), as shown, for instance, in Table 1 in Granese et al. [2021]. We suspect this discrepancy is due to numerical imprecision in the computation of the ROC curve for a limited number of threshold values, as the authors themselves point out on their Appendix C.3, combined with the fact that D β and MSP in [Granese et al., 2021] use different parametrizations for the threshold values. In contrast, we use the implementation from the scikit-learn library (adapting it as necessary for the RC curve), which considers every possible threshold for the confidence values given and so is immune to this kind of imprecision." }, { "figure_ref": [], "heading": "C ON LOGIT NORMALIZATION", "publication_ref": [ "b16" ], "table_ref": [], "text": "Logit normalization during training. Wei et al. [2022] argued that, as training progresses, a model may tend to become overconfident on correctly classified training samples by increasing ∥z∥ 2 . This is due to the fact that the predicted class depends only on z = z/∥z∥ 2 , but the training loss on correctly classified training samples can still be decreased by increasing ∥z∥ 2 while keeping z fixed. Thus, the model would become overconfident on those samples, since increasing ∥z∥ 2 also increases the confidence (as measured by MSP) of the predicted class. This overconfidence phenomenon was confirmed experimentally in [Wei et al., 2022] by observing that the average magnitude of logits (and therefore also their average 2-norm) tends to increase during training. For this reason, Wei et al. [2022] proposed logit 2-norm normalization during training, as a way to mitigate overconfidence. However, during inference, they still used the raw MSP as confidence estimator, without any normalization.\nPost-training logit normalization. Here, we propose to use logit p-norm normalization as a post-hoc method and we intuitively expected it to have a similar effect in combating overconfidence. 
(Note that the argument in [Wei et al., 2022] holds unchanged for any p, as nothing in their analysis requires p = 2.) Our initial hypothesis was the following: if the model has become too overconfident (through high logit norm) on certain input regions, then-since overconfidence is a form of (loss) overfitting-there would be an increased chance that the model will produce incorrect predictions on the test set along these input regions. Thus, high logit norm on the test set would indicate regions of higher inaccuracy, so that, by applying logit normalization, we would be penalizing likely inaccurate predictions, improving selective classification performance. However, this hypothesis was disproved by the experimental results in Appendix F, which show that overconfidence is not a problem for selective classification, but underconfidence may be.\nCombating underconfidence with temperature scaling. If a model is underconfident on a set of samples, with low logit norm and an MSP value smaller than its expected accuracy on these samples, then the MSP may not provide a good estimate of confidence. In these cases, the LogitsMargin (the margin between the highest and the second highest logit) may be a better confidence estimator. Alternatively, one may use MSP-TS with a low temperature, which approximates LogitsMargin, as can be easily seen below. Let z = (z 1 , . . . , z C ), with z 1 > . . . > z C . Then MSP(z/T ) = e z1/T j e zj /T = 1 1 + e (z2-z1)/T + j>2 e (zj -z1)/T (12)\n= 1\n1 + e -(z1-z2)/T 1 + j>2 e -(z2-zj )/T ≈ 1 1 + e -(z1-z2)/T (13) for small T > 0. Note that a strictly increasing transformation does not change the ordering of confidence values and thus maintains selective classification performance. This helps explain why TS (with T < 1) can improve selective classification performance, as already observed in [Galil et al., 2023].\nLogit p-norm normalization as temperature scaling. To shed light on why post-hoc logit p-norm normalization (with a general p) may be helpful, we can show that it is closely related to MSP-TS. Let g p (z) = z 1 /∥z∥ p denote MaxLogit-pNorm. Then\nMSP(z/T ) =    e z1 j e zj /T T    1/T = e z1 ∥e z ∥ 1/T 1/T = g 1/T (e z ) 1/T . (14\n)\nThus, MSP-TS is equivalent to MaxLogit-pNorm with p = 1/T applied to the transformed logit vector exp(z). This helps explain why a general p-norm normalization is useful, as it is closely related to TS, emphasizing the largest components of the logit vector. This also implies that any benefits of MaxLogit-pNorm over MSP-TS stem from not applying the exponential transformation of logits. Why this happens to be useful is still elusive at this point.\nNevertheless, it should be clear that, despite their similarities, logit L2 normalization during training and post-hoc logit p-norm normalization are different techniques applied to different problems and with different behavior. Moreover, even if logit normalization during training turns out to be beneficial to selective classification (an evaluation that is however outside the scope of this work), it should be emphasized that post-hoc optimization can be easily applied on top of any trained model without requiring modifications to its training regime." }, { "figure_ref": [], "heading": "D MORE DETAILS AND RESULTS ON THE EXPERIMENTS ON IMAGENET D.1 HYPERPARAMETER OPTIMIZATION OF POST-HOC METHODS", "publication_ref": [], "table_ref": [], "text": "For not being differentiable, the NAURC metric demands a zero-order optimization. 
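One way to carry out such a zero-order optimization is an exhaustive grid search on hold-out data, as sketched below for MSP-TS-AURC (tuning T) and MaxLogit-pNorm (tuning p); the grids mirror those described in the grid-search details that follow, while the function names, the 2-D logits array layout, and the reuse of an AURC routine are assumptions.

```python
import numpy as np

def aurc_of(confidences, losses):
    losses = np.asarray(losses, dtype=float)
    order = np.argsort(-np.asarray(confidences))
    risks = np.cumsum(losses[order]) / np.arange(1, len(losses) + 1)
    return float(risks.mean())

def tune_temperature(logits, losses, grid=np.arange(0.01, 3.01, 0.01)):
    """MSP-TS-AURC: pick the T minimizing AURC on the hold-out set."""
    best_T, best = 1.0, np.inf
    for T in grid:
        zT = logits / T
        zT = zT - zT.max(axis=1, keepdims=True)
        p = np.exp(zT)
        conf = (p / p.sum(axis=1, keepdims=True)).max(axis=1)
        score = aurc_of(conf, losses)
        if score < best:
            best_T, best = float(T), score
    return best_T

def tune_p_maxlogit(logits, losses, grid=range(1, 11)):
    """MaxLogit-pNorm: pick the p minimizing AURC (tau is irrelevant here)."""
    best_p, best = 2, np.inf
    for p in grid:
        norms = (np.abs(logits) ** p).sum(axis=1) ** (1.0 / p)
        conf = logits.max(axis=1) / norms
        score = aurc_of(conf, losses)
        if score < best:
            best_p, best = p, score
    return best_p
```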
For this work, the optimizations of p and T were conducted via grid-search. Note that, as p approaches infinity, ||z|| p → max(|z|). Indeed, it tends to converge reasonable quickly. Thus, the grid search on p can be made only for small p. In our experiments, we noticed that it suffices to evaluate a few values of p, such as the integers between 0 and 10, where the 0-norm is taken here to mean the sum of all nonzero values of the vector. The temperature values were taken from the range between 0.01 and 3, with a step size of 0.01, as this showed to be sufficient for achieving the optimal temperature for selective classification (in general between 0 and 1)." }, { "figure_ref": [], "heading": "D.2 AUROC RESULTS", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Table 4 shows the AUROC results for all methods for an EfficientNetV2-XL and a VGG-16 on ImageNet. As it can be seen, the results are consistent with the ones for NAURC presented in Section 4. " }, { "figure_ref": [], "heading": "D.3 RC CURVES", "publication_ref": [], "table_ref": [], "text": "In Figure 5 the RC curves of selected post-hoc methods applied to a few representative models are shown." }, { "figure_ref": [], "heading": "D.4 EFFECT OF ϵ", "publication_ref": [], "table_ref": [], "text": "Figure 6 shows the results (in APG metric) for all methods when p is optimized. As can be seen, MaxLogit-pNorm is dominant for all ϵ > 0, indicating that, provided the MSP fallback described in Section 3.2.2 is enabled, it outperforms the other methods." }, { "figure_ref": [], "heading": "E EXPERIMENTS ON ADDITIONAL DATASETS E.1 EXPERIMENTS ON OXFORD-IIIT PET", "publication_ref": [ "b67", "b56" ], "table_ref": [], "text": "The hold-out set for Oxford-IIIT Pet, consisting of 500 samples, was taken from the training set before training. The model used was an EfficientNet-V2-XL pretrained on ImageNet from Wightman [2019]. It was fine-tuned on Oxford-IIIT Pet [Parkhi et al., 2012]. The training was conducted for 100 epochs with Cross Entropy Loss, using a SGD optimizer with initial learning rate of 0.1 and a Cosine Annealing learning rate schedule with period 100. Moreover, a weight decay of 0.0005 and a Nesterov's momentum of 0.9 were used. Data transformations were applied, specifically standardization, random crop (for size 224x224) and random horizontal flip.\nFigure 7 shows the RC curves for some selected methods for the EfficientNet-V2-XL. As can be seen, considerable gains are obtained with the optimization of p, especially in the low-risk region." }, { "figure_ref": [], "heading": "E.2 EXPERIMENTS ON CIFAR-100", "publication_ref": [ "b37" ], "table_ref": [], "text": "The hold-out set for CIFAR-100, consisting of 5000 samples, was taken from the training set before training. The model used was forked from github.com/kuangliu/pytorch-cifar, and adapted for CIFAR-100 [Krizhevsky, 2009].\nIt was trained for 200 epochs with Cross Entropy Loss, using a SGD optimizer with initial learning rate of 0.1 and a Cosine Annealing learning rate schedule with period 200. Moreover, a weight decay of 0.0005 and a Nesterov's momentum of 0.9 were used. Data transformations were applied, specifically standardization, random crop (for size 32x32 with padding 4) and random horizontal flip.\nFigure 8 shows the RC curves for some selected methods for a VGG19. As it can be seen, the results follow the same pattern of the ones observed for ImageNet, with MaxLogit-pNorm achieving the best results." 
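For reference, the fine-tuning recipe described in E.1/E.2 corresponds roughly to the following PyTorch setup; only the optimizer and schedule hyperparameters (SGD, lr 0.1, Nesterov momentum 0.9, weight decay 0.0005, cosine annealing) come from the text, while the model, data loader, and loop structure are placeholders assumed for illustration.

```python
import torch
import torch.nn as nn

def make_optimizer_and_scheduler(model, epochs):
    """Optimizer and schedule as described in the text."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                          nesterov=True, weight_decay=5e-4)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    return opt, sched

def train(model, loader, epochs, device=None):
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    criterion = nn.CrossEntropyLoss()
    opt, sched = make_optimizer_and_scheduler(model, epochs)
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            opt.step()
        sched.step()                  # one cosine-annealing step per epoch
```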
}, { "figure_ref": [], "heading": "F DATA EFFICIENCY", "publication_ref": [ "b69" ], "table_ref": [ "tab_4", "tab_5" ], "text": "In this section, we empirically investigate the data efficiency [Zhang et al., 2020] of tunable post-hoc methods, which refers to their ability to learn and generalize from limited data. As is well known from machine learning theory and practice, the more we evaluate the empirical risk to tune a parameter, the more we are prone to overfitting, which is aggravated as the size of the dataset used for tuning decreases. Thus, a method that requires less hyperparameter tuning tends to be more data efficient, i.e., to achieve its optimal performance with less tuning data. We intuitively expect this to be the case for MaxLogit-pNorm, which only requires evaluating a few values of p, compared to any method based on the softmax function, which requires tuning a temperature parameter.\nAs mentioned in Section 4, the experiments conducted on ImageNet used a test set of 45000 images randomly sampled from the available ImageNet validation dataset, resulting in 5000 images for the tuning set. To evaluate data efficiency, the post-hoc optimization process was executed multiple times, using different fractions of the tuning set while keeping the test set fixed. This whole process was repeated 50 times for different random samplings of the test set (always fixed at 45000 images).\nFigure 9a displays the outcomes of these studies for a ResNet50 trained on ImageNet. As observed, MaxLogit-pNorm exhibits outstanding data efficiency, while methods that require temperature optimization achieve lower efficiency.\nAdditionally, this experiment was conducted on the VGG19 model for CIFAR-100, as shown in Figure 9a. Indeed, the same conclusions hold regarding the high efficiency of MaxLogit-pNorm.\nTo ensure our findings generalize across models, we repeated this process for all the 84 ImageNet classifiers considered, for a specific tuning set size. This time, only 10 realizations of the test set were performed, similarly to the results of Section 4.1. Table 5 is the equivalent of Table 2 for a tuning set of 1000 samples, while Table 6 corresponds to a tuning set of 500 samples. As can be seen, the results obtained are consistent with the ones observed previously. In particular, MaxLogit-pNorm provides a statistically significant improvement over all other methods when the tuning set is reduced. Moreover, MaxLogit-pNorm is one of the most stable among the tunable methods in terms of variance of gains." }, { "figure_ref": [], "heading": "G ABLATION STUDY ON THE CHOICE OF p", "publication_ref": [], "table_ref": [ "tab_6", "tab_7" ], "text": "A natural question regarding p-norm normalization (with a general p) is whether it can provide any benefits beyond the default p = 2 used by Wei et al. [2022]. Table 7 shows the APG-NAURC results for the 84 ImageNet classifiers when different values of p are kept fixed and when p is optimized for each model (tunable).\nAs can be seen, there is a significant benefit in using a larger p (especially a tunable one) compared to just using p = 2, particularly for MaxLogit-pNorm. Note that, unlike MaxLogit-pNorm, MSP-pNorm requires temperature optimization. This additional tuning is detrimental to data efficiency, as evidenced by the loss in performance of MSP-pNorm with a tuning set of 1000 samples, as shown in Table 8. 
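For concreteness, the two p-norm-based estimators compared in this ablation can be written as follows. This is our own NumPy sketch for a batch of logit vectors of shape [N, C]; only the estimator names come from the text, everything else is illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def maxlogit_pnorm(logits, p):
    """MaxLogit-pNorm: maximum logit divided by the p-norm of the logit vector.
    No temperature parameter is involved; only p needs to be chosen."""
    norms = np.linalg.norm(logits, ord=p, axis=-1)
    return logits.max(axis=-1) / norms

def msp_pnorm(logits, p, T):
    """MSP-pNorm: MSP of the p-norm-normalized logits with temperature T.
    Unlike MaxLogit-pNorm, T still has to be tuned on the hold-out set."""
    normalized = logits / np.linalg.norm(logits, ord=p, axis=-1, keepdims=True)
    return softmax(normalized / T).max(axis=-1)

# Example: p for MaxLogit-pNorm can be selected by direct evaluation on a tuning
# set, e.g. with the `naurc` helper sketched in Appendix D.1:
# best_p = min(range(1, 11),
#              key=lambda p: naurc(maxlogit_pnorm(logits_tune, p), correct_tune))
```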
" }, { "figure_ref": [], "heading": "H COMPARISON WITH OTHER TUNABLE METHODS", "publication_ref": [ "b69", "b6", "b4", "b4", "b71", "b16", "b25", "b26", "b42", "b25", "b23" ], "table_ref": [ "tab_8", "tab_9" ], "text": "In Section 4.1 we compared several logit-based confidence estimators obtained by combining a parameterless confidence estimator with a tunable logit transformation, specifically, TS and p-norm normalization. In this section, we consider other previously proposed tunable confidence estimators that do not fit into this framework.\nNote that some of these methods were originally proposed seeking calibration and hence its hyperparameters were tuned to optimize the NLL loss (which is usually suboptimal for selective classification). Instead, to make a fair comparison, we optimized all of their parameters using the AURC metric as the objective. Zhang et al. [2020] proposed ensemble temperature scaling (ETS):\nETS(z) ≜ w 1 MSP z T + w 2 MSP(z) + w 3 1 C (15)\nwhere w 1 , w 2 , w 3 ∈ R + are tunable parameters and T is the temperature previously obtained through the temperature scaling method. The grid for both w 1 and w 2 was [0, 1] as suggested by the authors, with a step size of 0.01, while the parameter w 3 was not considered since the sum of a constant to the confidence estimator cannot change the ranking between samples, and consequently cannot change the value of selective classification metrics. Boursinos and Koutsoukos [2022] proposed the following confidence estimator, referred to here as Boursinos-Koutsoukos (BK):\nBK(z) ≜ aMSP(z) + b(1 -max k∈Y:k̸ =ŷ σ k (z))(16)\nwhere a, b ∈ R are tunable parameters. The grid for both a and b was [-1, 1] as suggested by the authors, with a step size of 0.01, though we note that the optimization never found a < 0 (probably due the high value of the MSP as a confidence estimator).\nFinally, Balanya et al. [2023] proposed entropy-based temperature scaling (HTS):\nHTS(z) ≜ MSP z T H (z)(17)\nwhere\nT H (z) = log 1 + exp(b + w log H(z)) , H(z) = -(1/C) k∈Y σ k (z) log σ k (z)\n, and b, w ∈ R are tunable parameters. The grids for b and w were, respectively, [-3, 1] and [-1, 1], with a step size of 0.01, and we note that the optimal parameters were always strictly inside the grid.\nThe results for these post-hoc methods are shown in Table 9 andTable 10. Interestingly, BK, which can be seen as a tunable linear combination of MSP and SoftmaxMargin, is able to outperform both of them, although it still underperforms MSP-TS. On the other hand, ETS, which is a tunable linear combination of MSP and MSP-TS, attains exactly the same performance as MSP-TS. Finally, HTS, which is a generalization of MSP-TS, is able to outperform it, although it still underperforms most methods that use p-norm tuning (see Table 2). In particular, MaxLogit-pNorm shows superior performance to all of these methods, while requiring much less hyperparameter tuning. Methods with a larger number of tunable parameters, such as PTS [Tomani et al., 2022] and HnLTS [Balanya et al., 2023], are only viable with a differentiable loss. As these methods are proposed for calibration, the NLL loss is used; however, as previous works have shown that this does not always improve and sometimes even harm selective classification [Zhu et al., 2022, Galil et al., 2023], these methods were not considered in our work. 
The investigation of alternative methods for optimizing selective classification (such as proposing differentiable losses or more efficient zero-order methods) is left as a suggestion for future work. In any case, note that using a large number of hyperparameters is likely to harm data efficiency.\nWe also evaluated additional parameterless confidence estimators proposed for selective classification [Hasan et al., 2023], such as LDAM [He et al., 2011] and the method in [Leon-Malpartida et al., 2018], both in their raw form and with TS/pNorm optimization, but none of these methods showed any gain over the MSP. Note that the Gini index, sometimes proposed as a post-hoc method [Hasan et al., 2023] (and also known as DOCTOR's D α method [Granese et al., 2021]) has already been covered in Section 2.2." }, { "figure_ref": [], "heading": "I WHEN IS POST-HOC OPTIMIZATION BENEFICIAL?", "publication_ref": [], "table_ref": [], "text": "In this section we investigate in which circumstances post-hoc optimization yields significant gains. Figure 10 shows histograms of confidence values for two representative examples of non-improvable and improvable models, with the latter one shown before and after post-hoc optimization. Figure 11 shows the NAURC gain over MSP versus the proportion of samples with high MSP for each classifier. As can be seen, highly confident models tend to have a good MSP confidence estimator, while less confident models tend to have a poor confidence estimator that is easily improvable by post-hoc methods-after which the resulting confidence estimator becomes concentrated on high values." }, { "figure_ref": [], "heading": "J FULL RESULTS ON IMAGENET", "publication_ref": [], "table_ref": [ "tab_11", "tab_0", "tab_13" ], "text": "Table 11 presents all the NAURC results for the most relevant methods for all the evaluated models on ImageNet, while Table 12 shows the corresponding AURC results and Table 13 the corresponding AUROC results. p * denotes the optimal value of p obtained for the corresponding method, while p * = F denotes MSP fallback. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors would like to thank Bruno M. Pacheco for suggesting the NAURC metric." } ]
10.1016/j.inffus.2021.05.008
[ { "authors": "Moloud Abdar; Farhad Pourpanah; Sadiq Hussain; Dana Rezazadegan; Li Liu; Mohammad Ghavamzadeh; Paul Fieguth; Xiaochun Cao; Abbas Khosravi; U Rajendra Acharya; Vladimir Makarenkov; Saeid Nahavandi", "journal": "Information Fusion", "ref_id": "b0", "title": "A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges", "year": "2021-12" }, { "authors": "Taiga Abe; Estefany Kelly Buchanan; Geoff Pleiss; Richard Zemel; John P Cunningham", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b1", "title": "Deep Ensembles Work, But Are They Necessary", "year": "2022-12" }, { "authors": "Arsenii Ashukha; Dmitry Molchanov; Alexander Lyzhov; Dmitry Vetrov", "journal": "", "ref_id": "b2", "title": "PITFALLS OF IN-DOMAIN UN-CERTAINTY ESTIMATION AND ENSEMBLING IN DEEP LEARNING", "year": "2020" }, { "authors": "M Ayhan; Philipp Berens", "journal": "", "ref_id": "b3", "title": "Test-time Data Augmentation for Estimation of Heteroscedastic Aleatoric Uncertainty in Deep Neural Networks", "year": "2018-04" }, { "authors": "Sergio A Balanya; Daniel Ramos; Juan Maroñas", "journal": "", "ref_id": "b4", "title": "Adaptive Temperature Scaling for Robust Calibration of Deep Neural Networks", "year": "2023-03" }, { "authors": "Mohamed Ishmael; Belghazi ; David Lopez-Paz", "journal": "", "ref_id": "b5", "title": "What classifiers know what they don't?", "year": "2021-07" }, { "authors": "Dimitrios Boursinos; Xenofon Koutsoukos", "journal": "IEEE", "ref_id": "b6", "title": "Selective classification of sequential data using inductive conformal prediction", "year": "2022" }, { "authors": "Felipe P Luís; Danilo Cattelan; Silva", "journal": "SBC", "ref_id": "b7", "title": "On the performance of uncertainty estimation methods for deep-learning based image classification models", "year": "2022-11" }, { "authors": "Lucas Clarté; Bruno Loureiro; Florent Krzakala; Lenka Zdeborová", "journal": "", "ref_id": "b8", "title": "Expectation consistency for calibration of neural networks", "year": "2023-03" }, { "authors": "Charles Corbière; Nicolas Thome; Antoine Saporta; Tuan-Hung Vu; Matthieu Cord; Patrick Pérez", "journal": "", "ref_id": "b9", "title": "Confidence Estimation via Auxiliary Models", "year": "2022-10" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b10", "title": "ImageNet: A large-scale hierarchical image database", "year": "2009-06" }, { "authors": "Yukun Ding; Jinglan Liu; Jinjun Xiong; Yiyu Shi", "journal": "IEEE", "ref_id": "b11", "title": "Revisiting the Evaluation of Uncertainty Estimation and Its Application to Explore Model Complexity-Uncertainty Trade-Off", "year": "2020-06" }, { "authors": "Ran El-Yaniv; Yair Wiener", "journal": "Journal of Machine Learning Research", "ref_id": "b12", "title": "On the Foundations of Noise-free Selective Classification", "year": "2010" }, { "authors": "Tom Fawcett", "journal": "Pattern Recognition Letters", "ref_id": "b13", "title": "An introduction to ROC analysis", "year": "2006-06" }, { "authors": "Leo Feng; Mohamed Osama Ahmed; Hossein Hajimirsadeghi; Amir H Abdi", "journal": "", "ref_id": "b14", "title": "Towards Better Selective Classification", "year": "2023-02" }, { "authors": "Yarin Gal; Zoubin Ghahramani", "journal": "PMLR", "ref_id": "b15", "title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "year": "2016-06" }, { "authors": "Ido Galil; Mohammed Dabbah; Ran El-Yaniv", "journal": 
"", "ref_id": "b16", "title": "What Can we Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 Imagenet Classifiers?", "year": "2023-02" }, { "authors": "Jakob Gawlikowski; Cedrique Rovile; Njieutcheu Tassi; Mohsin Ali; Jongseok Lee; Matthias Humt; Jianxiang Feng; Anna Kruspe; Rudolph Triebel; Peter Jung; Ribana Roscher; Muhammad Shahzad; Wen Yang; Richard Bamler; Xiao Xiang Zhu", "journal": "", "ref_id": "b17", "title": "A Survey of Uncertainty in Deep Neural Networks", "year": "2022-01" }, { "authors": "Yonatan Geifman; Ran El-Yaniv", "journal": "", "ref_id": "b18", "title": "Selective Classification for Deep Neural Networks", "year": "2017-06" }, { "authors": "Yonatan Geifman; Ran El-Yaniv", "journal": "PMLR", "ref_id": "b19", "title": "SelectiveNet: A Deep Neural Network with an Integrated Reject Option", "year": "2019-05" }, { "authors": "Yonatan Geifman; Guy Uziel; Ran El-Yaniv", "journal": "", "ref_id": "b20", "title": "Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers", "year": "2019-04" }, { "authors": "Eduardo Dadalto; Câmara Gomes; Marco Romanelli; Federica Granese; Pablo Piantanida", "journal": "", "ref_id": "b21", "title": "A simple Training-Free Method for Rejection Option", "year": "2022-09" }, { "authors": "Julius Gonsior; Christian Falkenberg; Silvio Magino; Anja Reusch; Maik Thiele; Wolfgang Lehner", "journal": "", "ref_id": "b22", "title": "To Softmax, or not to Softmax: that is the question when applying Active Learning for Transformer Models, October 2022", "year": "" }, { "authors": "Federica Granese; Marco Romanelli; Daniele Gorla; Catuscia Palamidessi; Pablo Piantanida", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b23", "title": "Doctor: A simple method for detecting misclassification errors", "year": "2021" }, { "authors": "Chuan Guo; Geoff Pleiss; Yu Sun; Kilian Q Weinberger", "journal": "PMLR", "ref_id": "b24", "title": "On Calibration of Modern Neural Networks", "year": "2017-07" }, { "authors": "Mehedi Hasan; Moloud Abdar; Abbas Khosravi; Uwe Aickelin; Pietro Lio; Ibrahim Hossain; Ashikur Rahman; Saeid Nahavandi", "journal": "", "ref_id": "b25", "title": "Survey on leveraging uncertainty estimation towards trustworthy deep neural networks: The case of reject option and post-training processing", "year": "2023" }, { "authors": "Chun Lei He; Louisa Lam; Ching Y Suen", "journal": "International Journal on Document Analysis and Recognition (IJDAR)", "ref_id": "b26", "title": "Rejection measurement based on linear discriminant analysis for document recognition", "year": "2011" }, { "authors": "Kilian Hendrickx; Lorenzo Perini; Dries Van Der Plas; Wannes Meert; Jesse Davis", "journal": "", "ref_id": "b27", "title": "Machine learning with a reject option: A survey", "year": "2021" }, { "authors": "Dan Hendrycks; Thomas Dietterich", "journal": "", "ref_id": "b28", "title": "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations", "year": "2018-12" }, { "authors": "Dan Hendrycks; Kevin Gimpel", "journal": "", "ref_id": "b29", "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "year": "2016" }, { "authors": "Dan Hendrycks; Steven Basart; Mantas Mazeika; Andy Zou; Joseph Kwon; Mohammadreza Mostajabi; Jacob Steinhardt; Dawn Song", "journal": "PMLR", "ref_id": "b30", "title": "Scaling Out-of-Distribution Detection for Real-World Settings", "year": "2022-06" }, { "authors": "Lang Huang; Chao Zhang; Hongyang Zhang", "journal": 
"", "ref_id": "b31", "title": "Self-Adaptive Training: beyond Empirical Risk Minimization", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b32", "title": "", "year": "2020" }, { "authors": "Zixuan Jiang; Jiaqi Gu; David Z Pan", "journal": "", "ref_id": "b33", "title": "NormSoftmax: Normalize the Input of Softmax to Accelerate and Stabilize Training", "year": "2023-02" }, { "authors": "Archit Karandikar; Nicholas Cain; Dustin Tran; Balaji Lakshminarayanan; Jonathon Shlens; Michael Curtis Mozer; Rebecca Roelofs", "journal": "", "ref_id": "b34", "title": "Soft Calibration Objectives for Neural Networks", "year": "2021-11" }, { "authors": "Simon Kornblith; Ting Chen; Honglak Lee; Mohammad Norouzi", "journal": "", "ref_id": "b35", "title": "Why Do Better Loss Functions Lead to Less Transferable Features", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b36", "title": "", "year": "2021" }, { "authors": "Alex Krizhevsky", "journal": "", "ref_id": "b37", "title": "Learning Multiple Layers of Features from Tiny Images", "year": "2009" }, { "authors": "Alexander Balaji Lakshminarayanan; Charles Pritzel; Blundell", "journal": "", "ref_id": "b38", "title": "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b39", "title": "", "year": "2017" }, { "authors": "Y Le Cun; O Matan; B Boser; J S Denker; D Henderson; R E Howard; W Hubbard; L D Jacket; H S Baird", "journal": "", "ref_id": "b40", "title": "Handwritten zip code recognition with multilayer networks", "year": "1990-06" }, { "authors": "Luzian Lebovitz; Lukas Cavigelli; Michele Magno; Lorenz K Muller", "journal": "", "ref_id": "b41", "title": "Transactions on Machine Learning Research", "year": "2023-05" }, { "authors": "Jared Leon-Malpartida; Gladys E Jeanfranco D Farfan-Escobedo; Cutipa-Arapa", "journal": "IEEE", "ref_id": "b42", "title": "A new method of classification with rejection applied to building images recognition based on transfer learning", "year": "2018" }, { "authors": "Shiyu Liang; Yixuan Li; R Srikant", "journal": "", "ref_id": "b43", "title": "Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks", "year": "2023-05" }, { "authors": "Weitang Liu; Xiaoyun Wang; John Owens; Yixuan Li", "journal": "Advances in neural information processing systems", "ref_id": "b44", "title": "Energy-based out-of-distribution detection", "year": "2020" }, { "authors": "Ziyin Liu; Zhikang Wang; Paul Pu Liang; Russ R Salakhutdinov; Louis-Philippe Morency; Masahito Ueda", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b45", "title": "Deep Gamblers: Learning to Abstain with Portfolio Theory", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b46", "title": "", "year": "2019" }, { "authors": "Ziyin Liu; Zhikang Wang; Paul Pu Liang; Russ R Salakhutdinov; Louis-Philippe Morency; Masahito Ueda", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b47", "title": "Deep gamblers: Learning to abstain with portfolio theory", "year": "2019" }, { "authors": "Mélanie Lubrano; Yaëlle Bellahsen-Harrar; Rutger Fick; Cécile Badoual; Thomas Walter", "journal": "", "ref_id": "b48", "title": "Simple and efficient confidence score for grading whole slide images", "year": "2023" }, { "authors": "Jooyoung Moon; Jihyo Kim; Younghak Shin; Sangheum Hwang", "journal": "PMLR", "ref_id": "b49", "title": 
"Confidence-Aware Learning for Deep Neural Networks", "year": "2020-11" }, { "authors": "Jishnu Mukhoti; Viveka Kulharia; Amartya Sanyal; Stuart Golodetz; Philip Torr; Puneet Dokania", "journal": "", "ref_id": "b50", "title": "Calibrating Deep Neural Networks using Focal Loss", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b51", "title": "", "year": "2020" }, { "authors": "Kevin P Murphy", "journal": "MIT Press", "ref_id": "b52", "title": "Probabilistic Machine Learning: An Introduction", "year": "2022" }, { "authors": "Lukas Neumann; Andrew Zisserman; Andrea Vedaldi", "journal": "", "ref_id": "b53", "title": "Relaxed softmax: Efficient confidence auto-calibration for safe pedestrian detection", "year": "2018" }, { "authors": "Yaniv Ovadia; Emily Fertig; Jie Ren; Zachary Nado; D Sculley; Sebastian Nowozin; Joshua Dillon; Balaji Lakshminarayanan; Jasper Snoek", "journal": "", "ref_id": "b54", "title": "Can you trust your model' s uncertainty? Evaluating predictive uncertainty under dataset shift", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b55", "title": "", "year": "2019" }, { "authors": "Omar Parkhi; Andrea Vedaldi; Andrew Zisserman; Jawahar", "journal": "", "ref_id": "b56", "title": "Oxfordiiit pet dataset", "year": "2012" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Kopf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "Curran Associates, Inc", "ref_id": "b57", "title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library", "year": "2019" }, { "authors": "Amir Rahimi; Thomas Mensink; Kartik Gupta; Thalaiyasingam Ajanthan; Cristian Sminchisescu; Richard Hartley", "journal": "", "ref_id": "b58", "title": "Post-hoc Calibration of Neural Networks by g-Layers", "year": "2022-02" }, { "authors": "Benjamin Recht; Rebecca Roelofs; Ludwig Schmidt; Vaishaal Shankar", "journal": "PMLR", "ref_id": "b59", "title": "Do imagenet classifiers generalize to imagenet?", "year": "2019" }, { "authors": "Maohao Shen; Yuheng Bu; Prasanna Sattigeri; Soumya Ghosh; Subhro Das; Gregory Wornell; ; Matthew Streeter", "journal": "PMLR", "ref_id": "b60", "title": "Post-hoc Uncertainty Learning using a Dirichlet Meta-Model", "year": "2018-07" }, { "authors": "Mattias Teye; Hossein Azizpour; Kevin Smith", "journal": "PMLR", "ref_id": "b61", "title": "Bayesian Uncertainty Estimation for Batch Normalized Deep Networks", "year": "2018-07" }, { "authors": "Christian Tomani; Daniel Cremers; Florian Buettner", "journal": "", "ref_id": "b62", "title": "Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration", "year": "" }, { "authors": "Deng-Bao Wang; Lei Feng; Min-Ling Zhang", "journal": "", "ref_id": "b63", "title": "Rethinking Calibration of Deep Neural Networks: Do Not Be Afraid of Overconfidence", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b64", "title": "", "year": "2021" }, { "authors": "Hongxin Wei; Renchunzi Xie; Hao Cheng; Lei Feng; Bo An; Yixuan Li", "journal": "", "ref_id": "b65", "title": "Mitigating Neural Network Overconfidence with Logit Normalization", "year": "" }, { "authors": " Pmlr", "journal": "", "ref_id": "b66", "title": "", "year": "2022-06" }, { "authors": "Ross Wightman", "journal": "", 
"ref_id": "b67", "title": "Pytorch Image Model", "year": "2019" }, { "authors": "Guoxuan Xia; Christos-Savvas Bouganis", "journal": "", "ref_id": "b68", "title": "On the Usefulness of Deep Ensemble Diversity for Out-of-Distribution Detection", "year": "2022-09" }, { "authors": "Jize Zhang; Bhavya Kailkhura; T Yong; -Jin Han", "journal": "PMLR", "ref_id": "b69", "title": "Mixn-Match : Ensemble and Compositional Methods for Uncertainty Calibration in Deep Learning", "year": "2020-11" }, { "authors": "Xu-Yao Zhang; Guo-Sen Xie; Xiuli Li; Tao Mei; Cheng-Lin Liu", "journal": "", "ref_id": "b70", "title": "A Survey on Learning to Reject", "year": "2023-02" }, { "authors": "Fei Zhu; Zhen Cheng; Xu-Yao Zhang; Cheng-Lin Liu", "journal": "Springer Nature Switzerland", "ref_id": "b71", "title": "Rethinking Confidence Calibration for Failure Prediction", "year": "2022" }, { "authors": "Ke Zou; Zhihao Chen; Xuedong Yuan; Xiaojing Shen; Meng Wang; Huazhu Fu", "journal": "", "ref_id": "b72", "title": "A Review of Uncertainty Estimation and its Application in Medical Imaging, February 2023", "year": "" } ]
[ { "formula_coordinates": [ 3, 87.75, 322.41, 165.97, 29.67 ], "formula_id": "formula_0", "formula_text": "R(h, g) = N i=1 ℓ(h(x i ), y i )1[g(x i ) ≥ t] N i=1 1[g(x i ) ≥ t]" }, { "formula_coordinates": [ 3, 306.64, 177.82, 235.55, 38.05 ], "formula_id": "formula_1", "formula_text": "σ : R C → [0, 1] C , σ k (z) = e z k C j=1 e zj , k ∈ {1, . . . , C}(2)" }, { "formula_coordinates": [ 3, 343.74, 294.58, 197.57, 14.66 ], "formula_id": "formula_2", "formula_text": "g(x) = MSP(z) ≜ max k∈Y σ k (z) = σ ŷ (z)(3)" }, { "formula_coordinates": [ 3, 330.22, 426.24, 211.09, 110.22 ], "formula_id": "formula_3", "formula_text": "SoftmaxMargin(z) ≜ σ ŷ (z) -max k∈Y:k̸ =ŷ σ k (z) (4) MaxLogit(z) ≜ z ŷ (5) LogitsMargin(z) ≜ z ŷ -max k∈Y:k̸ =ŷ z k (6) NegativeEntropy(z) ≜ k∈Y σ k (z) log σ k (z) (7) NegativeGini(z) ≜ -1 + k∈Y σ k (z) 2 . (8" }, { "formula_coordinates": [ 3, 537.43, 516.64, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 147.29, 583.27, 142.02, 23.25 ], "formula_id": "formula_5", "formula_text": "z ′ = z τ ∥z∥ p (9)" }, { "formula_coordinates": [ 4, 80.19, 619, 208.45, 11.23 ], "formula_id": "formula_6", "formula_text": "∥z∥ p ≜ (|z 1 | p +• • •+|z C | p ) 1/p , p ∈ R, is the p-norm" }, { "formula_coordinates": [ 4, 319.59, 629.84, 221.72, 24.1 ], "formula_id": "formula_7", "formula_text": "NAURC(h, g) = AURC(h, g) -AURC(h, g * ) R(h) -AURC(h, g * ) .(10)" }, { "formula_coordinates": [ 5, 54.28, 256.88, 235.69, 48.91 ], "formula_id": "formula_8", "formula_text": "APG(g) = 1 |H| h∈H [NAURC(h, MSP) -NAURC(h, g)] + ϵ (11) where [x] +" }, { "formula_coordinates": [ 14, 150.59, 510.99, 293.61, 22.21 ], "formula_id": "formula_9", "formula_text": "1 -ĝ(x) = k∈Y (σ(z)) k (1 -(σ(z)) k ) = 1 - k∈Y (σ(z)) 2 k = 1 -∥σ(z)∥ 2 2" }, { "formula_coordinates": [ 14, 53.47, 551.9, 487.17, 77.63 ], "formula_id": "formula_10", "formula_text": "x is accepted if 1 -ĝ(x) ≤ γĝ(x) ⇐⇒ (1 + γ)ĝ(x) ≥ 1 ⇐⇒ ĝ(x) ≥ 1/(1 + γ) ⇐⇒ ĝ(x) -1 ≥ 1/(1 + γ) -1. Therefore, the method is equivalent to the confidence estimator g(x) = ĝ(x) -1 = ∥σ(z)∥ 2 -1, with t = 1/(1 + γ) -1 as the selection threshold. Now, consider the definition of D β : a sample x is rejected if Pe (x) > γ(1 -Pe (x)), where Pe (x) = 1 -(σ(z)) ŷ and ŷ = arg max k∈Y (σ(z)) k , i.e., Pe (x) = 1 -MSP(z). Thus, a sample x is accepted if Pe (x) ≤ γ(1 -Pe (x)) ⇐⇒ (1 + γ) Pe (x) ≤ γ ⇐⇒ Pe (x) ≤ γ/(1 + γ) ⇐⇒ MSP(z) ≥ 1 -γ/(1 + γ) = 1/(1 + γ)." }, { "formula_coordinates": [ 15, 145.26, 557.69, 391.9, 42.72 ], "formula_id": "formula_11", "formula_text": "MSP(z/T ) =    e z1 j e zj /T T    1/T = e z1 ∥e z ∥ 1/T 1/T = g 1/T (e z ) 1/T . (14" }, { "formula_coordinates": [ 15, 537.16, 576.91, 4.15, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 18, 203.09, 491.38, 338.21, 22.34 ], "formula_id": "formula_13", "formula_text": "ETS(z) ≜ w 1 MSP z T + w 2 MSP(z) + w 3 1 C (15)" }, { "formula_coordinates": [ 18, 209.03, 601.31, 332.27, 14.66 ], "formula_id": "formula_14", "formula_text": "BK(z) ≜ aMSP(z) + b(1 -max k∈Y:k̸ =ŷ σ k (z))(16)" }, { "formula_coordinates": [ 18, 242.73, 686.81, 298.58, 23.25 ], "formula_id": "formula_15", "formula_text": "HTS(z) ≜ MSP z T H (z)(17)" }, { "formula_coordinates": [ 18, 82.89, 719.68, 341.48, 13.66 ], "formula_id": "formula_16", "formula_text": "T H (z) = log 1 + exp(b + w log H(z)) , H(z) = -(1/C) k∈Y σ k (z) log σ k (z)" } ]
How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks
This paper addresses the problem of selective classification for deep neural networks, where a model is allowed to abstain from low-confidence predictions to avoid potential errors. We focus on so-called post-hoc methods, which replace the confidence estimator of a given classifier without modifying or retraining it, thus being practically appealing. Considering neural networks with softmax outputs, our goal is to identify the best confidence estimator that can be computed directly from the unnormalized logits. This problem is motivated by the intriguing observation in recent work that many classifiers appear to have a "broken" confidence estimator, in the sense that their selective classification performance is much worse than what could be expected given their corresponding accuracies. We perform an extensive experimental study of many existing and proposed confidence estimators applied to 84 pretrained ImageNet classifiers available from popular repositories. Our results show that a simple p-norm normalization of the logits, followed by taking the maximum logit as the confidence estimator, can lead to considerable gains in selective classification performance, completely fixing the pathological behavior observed in many classifiers. As a consequence, the selective classification performance of any classifier becomes almost entirely determined by its corresponding accuracy. Moreover, these results are shown to be consistent under distribution shift. • We show that, after post-hoc optimization, the selective classification performance of any classifier becomes almost entirely determined by its corresponding accuracy, eliminating the seemingly existing tradeoff between these two goals reported in previous work. • We also study how these post-hoc methods perform under distribution shift and find that the results remain consistent: a method that provides gains in the in-distribution scenario also provides considerable gains under distribution shift. Let P be an unknown distribution over X × Y, where X is the input space and Y = {1, . . . , C} is the label
Luís Felipe P Cattelan; Danilo Silva
[ { "figure_caption": "Figure 2 :2Figure 2: NAURC gains for post-hoc methods across 84 Im-ageNet classifiers. Lines indicate the average of 10 random splits and the filled regions indicate ±1 standard deviation. The black dashed line denotes ϵ = 0.01.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: NAURC, AURC and SAC of 84 ImageNet classifiers with respect to their accuracy, before and after post-hoc optimization. The baseline plots use MSP, while the optimized plots use MaxLogit-pNorm. The legend shows the optimal value of p for each model, where MSP indicates MSP fallback (no significant positive gain). ρ is the Spearman correlation between a metric and the accuracy. In (c), models that cannot achieve the desired selective accuracy are shown with ≈ 0 coverage.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: (a) NAURC gains (over MSP) on ImageNetV2 versus NAURC gains on the ImageNet test set. (b) NAURC on ImageNetV2 versus NAURC on the ImageNet test set. (c) NAURC versus accuracy for ImageNetV2, ImageNet-C and the IID dataset. All models are optimized using MaxLogit-pNorm (with MSP fallback).", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :Figure 7 :Figure 8 :678Figure 6: APG as a function of ϵ", "figure_data": "", "figure_id": "fig_3", "figure_label": "678", "figure_type": "figure" }, { "figure_caption": "Figure 10: Histograms of confidence values for VGG16 MSP, WideResNet50-2 MSP and WideResNet50-2 MaxLogit-pNorm on ImageNet.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "NAURC (mean ±std) for post-hoc methods applied to ImageNet classifiers", "figure_data": "Logit Transformation", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Selective classification performance (achievable coverage for some target selective accuracy; mean ±std) for a ResNet-50 on ImageNet under distribution shift. For ImageNet-C, each entry is the average across all corruption types for a given level of corruption. The target accuracy is the one achieved for corruption level 0. 
±0.05 58.90 ±0.04 49.77 ±0.04 37.92 ±0.03 26.51 ±0.03 69.77 ±0.10 ±0.11 52.31 ±0.13 37.44 ±0.11 19.27 ±0.07 8.53 ±0.12 76.24 ±0.22 MSP-TS-AURC 100 72.98 ±0.23 55.87 ±0.27 40.89 ±0.21 24.65 ±0.19 12.52 ±0.05 76.22 ±0.41 MaxLogit-pNorm 100 75.24 ±0.15 58.58 ±0.27 43.67 ±0.37 27.03 ±0.36 14.51 ±0.26 78.66 ±0.38", "figure_data": "Corruption level", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "AUROC (mean ±std) for post-hoc methods applied to ImageNet classifiers NegativeEntropy 0.6890 ±0.0014 0.7704 ±0.0026 0.6829 ±0.0891 0.8719 ±0.0016 NegativeGini 0.7668 ±0.0014 0.8099 ±0.0017 0.8606 ±0.0011 0.8714 ±0.0012", "figure_data": "Logit Transformation", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "APG-NAURC (mean ±std) of post-hoc methods across 84 ImageNet classifiers, for a tuning set of 1000 samples", "figure_data": "Logit Transformation", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "APG-NAURC (mean ±std) of post-hoc methods across 84 ImageNet classifiers, for a tuning set of 500 samples", "figure_data": "Logit Transformation", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "APG-NAURC (mean ±std) across 84 ImageNet classifiers, for different values of p", "figure_data": "Confidence EstimatorpMaxLogit-pNormMSP-pNorm00.00000 ±0.00000 0.00000 ±0.0000010.00199 ±0.00007 0.05862 ±0.0006720.01519 ±0.00050 0.06368 ±0.0005530.05058 ±0.00049 0.06608 ±0.0004640.06443 ±0.00051 0.06631 ±0.0004750.06805 ±0.00048 0.06564 ±0.0004860.06814 ±0.00048 0.06481 ±0.0004970.06692 ±0.00053 0.06424 ±0.0004980.06544 ±0.00048 0.06391 ±0.0004890.06410 ±0.00048 0.06373 ±0.00048Tunable 0.06863 ±0.00045 0.06841 ±0.00050", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "APG-NAURC (mean ±std) across 84 ImageNet classifiers, for different values of p for a tuning set of 1000 samples", "figure_data": "Confidence EstimatorpMaxLogit-pNormMSP-pNorm00.00000 ±0.00000 0.00000 ±0.0000010.00199 ±0.00007 0.05525 ±0.0039020.01519 ±0.00050 0.06065 ±0.0036130.05058 ±0.00049 0.06334 ±0.0037340.06443 ±0.00051 0.06379 ±0.0040350.06805 ±0.00048 0.06334 ±0.0037460.06814 ±0.00048 0.06272 ±0.0035170.06692 ±0.00053 0.06209 ±0.0034580.06544 ±0.00048 0.06164 ±0.0037090.06410 ±0.00048 0.06140 ±0.00354Tunable 0.06806 ±0.00074 0.06436 ±0.00413", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "APG-NAURC of additional tunable post-hoc methods across 84 ImageNet classifiers", "figure_data": "MethodAPG-NAURCBK0.03932 ±0.00031ETS0.05768 ±0.00037HTS0.06309 ±0.00034MaxLogit-pNorm 0.06863 ±0.00045", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "APG-NAURC of additional tunable post-hoc methods across 84 ImageNet classifiers for a tuning set with 1000 samples", "figure_data": "MethodAPG-NAURCBK0.03795 ±0.00067ETS0.05569 ±0.00165HTS0.05927 ±0.00280MaxLogit-pNorm 0.06806 ±0.00074", "figure_id": "tab_9", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "NAURC (mean ±std)for all models evaluated on ImageNet", "figure_data": "Method", "figure_id": "tab_11", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "AUROC (mean ±std)for all models evaluated on ImageNet", "figure_data": "Method", "figure_id": "tab_13", "figure_label": "13", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[Zou et al., 2023]", "Explanation": "The cited work by Zou et al. provides a method for shortening the path to adoption of deep learning in safety-critical applications, which the citing paper builds upon in its own research."}, {"Category": "Data Source", "Citation": "[Neumann et al., 2018]", "Explanation": "The cited work by Neumann et al. serves as a data source for the study on the consequences of erroneous decisions in safety-critical applications, which the citing paper utilizes in its research."}, {"Category": "Supporting Evidence", "Citation": "[Geifman and El-Yaniv, 2019]", "Explanation": "The cited work by Geifman and El-Yaniv provides a method for improving the baseline classifier by modifying the architecture or training procedure, which the citing paper builds upon to enhance the performance of the classifier."}, {"Category": "Extension or Continuation", "Citation": "[Liu et al., 2019a]", "Explanation": "The cited work by Liu et al. extends the research on improving the baseline classifier by proposing a new method that modifies the architecture or training procedure, which the citing paper further explores to improve the performance of the classifier."}, {"Category": "Extension or Continuation", "Citation": "[Huang et al., 2020]", "Explanation": "The cited work by Huang et al. builds upon the research on improving the baseline classifier by proposing a new method that modifies the architecture or training procedure, which the citing paper extends to enhance the performance of the classifier."}, {"Category": "Methodological Basis", "Citation": "[Galil et al., 2023]", "Explanation": "The cited work by Galil et al. provides a method for comparing the performance of different models in terms of risk and coverage, which the citing paper adopts to compare the performance of the models selected in the study."}, {"Category": "Supporting Evidence", "Citation": "[Galil et al., 2023]", "Explanation": "The cited work by Galil et al. 
provides the motivating problem of state-of-the-art ImageNet classifiers exhibiting poor performance in detecting their own mistakes, which the citing paper aims to address by exploring the use of post-hoc methods to improve selective classification performance."}, {"Category": "Supporting Evidence", "Citation": "[Galil et al., 2023]", "Explanation": "The cited work introduces the concept of selective accuracy constraint (SAC), which is a metric for comparing selective models and is used in the citing paper to evaluate the performance of selective classifiers."}, {"Category": "Supporting Evidence", "Citation": "[Fawcett, 2006]", "Explanation": "The cited work by Fawcett (2006) provides a standard metric for misclassification detection, which the citing paper uses to evaluate the quality of confidence estimates in selective classification."}, {"Category": "Methodological Basis", "Citation": "[Ding et al., 2020]", "Explanation": "The cited work introduces the maximum softmax probability (MSP) as a popular confidence estimator, which the citing paper adopts as a method for calculating confidence in a classification task."}, {"Category": "Methodological Basis", "Citation": "[Corbi\u00e8re et al., 2022]", "Explanation": "The cited work mentions the maximum class probability (MCP) as another name for the maximum softmax probability (MSP), which the citing paper acknowledges as a method for calculating confidence in a classification task."}, {"Category": "Methodological Basis", "Citation": "[Geifman and El-Yaniv, 2017]", "Explanation": "The cited work discusses the softmax response as a method for calculating confidence in a classification task, which the citing paper references to provide further context and understanding of the method."}, {"Category": "Methodological Basis", "Citation": "[Belghazi and Lopez-Paz, 2021]", "Explanation": "The cited work introduces the Softmax margin and Negative entropy as measures of model confidence, which the citing paper adopts in its research to assess the performance of different model confidence measures."}, {"Category": "Methodological Basis", "Citation": "[Streeter, 2018]", "Explanation": "The cited work presents the Logits margin as a model confidence measure, which the citing paper incorporates in its research to evaluate the performance of different model confidence metrics."}, {"Category": "Methodological Basis", "Citation": "[Lebovitz et al., 2023]", "Explanation": "The cited work discusses the Logits margin as a model confidence measure, which the citing paper uses in its research to assess the performance of different model confidence metrics."}, {"Category": "Methodological Basis", "Citation": "[Granese et al., 2021]", "Explanation": "The cited work presents the Negative Gini index as a model confidence measure, which the citing paper adopts in its research to evaluate the performance of different model confidence metrics."}, {"Category": "Methodological Basis", "Citation": "[Gomes et al., 2022]", "Explanation": "The cited work discusses the Negative Gini index as a model confidence measure, which the citing paper uses in its research to assess the performance of different model confidence metrics."}, {"Category": "Methodological Basis", "Citation": "(8)", "Explanation": "The cited work by Granese et al. 
(2021) provides the D \u03b1 and D \u03b2 discriminators that the citing paper adopts in the context of negative Gini index and MSP confidence estimators, serving as a methodological basis for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[Guo et al., 2017]", "Explanation": "The cited work introduces the concept of temperature scaling (TS) in the context of post-hoc calibration, which the citing paper adopts and adapts to optimize the temperature parameter T in the context of calibration and calibration-free methods."}, {"Category": "Data Source", "Citation": "[Murphy, 2022]", "Explanation": "The cited work provides the definition of the negative loglikelihood (NLL), which the citing paper uses in the conventional way of applying temperature scaling (TS) to optimize the temperature parameter T."}, {"Category": "Methodological Basis", "Citation": "[2022]", "Explanation": "The cited work by Wei et al. is the basis for the logit normalization method proposed in the citing paper, as it shows the connection between logit norms and overconfidence in a model."}, {"Category": "Extension or Continuation", "Citation": "[Balanya et al., 2023]", "Explanation": "The cited work by Balanya et al. extends the logit normalization method by introducing adaptive temperature scaling, which is a form of input-dependent temperature scaling that the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[Geifman et al., 2019]", "Explanation": "The cited work by Geifman et al. introduced the E-AURC metric to address the limitations of the AURC metric in comparing confidence estimators across different classifiers."}, {"Category": "Methodological Basis", "Citation": "[Galil et al., 2023]", "Explanation": "The cited work by Galil et al. is used to identify the shortcomings of the E-AURC metric and propose a new metric, the normalized AURC (NAURC), to address these issues in the citing paper."}, {"Category": "Data Source", "Citation": "[Paszke et al., 2019]", "Explanation": "The cited work is the PyTorch library, which the citing paper uses to perform experiments and train models."}, {"Category": "Data Source", "Citation": "[Deng et al., 2009]", "Explanation": "The cited work is the ImageNet dataset, which the citing paper utilizes for training and evaluation of models."}, {"Category": "Data Source", "Citation": "[Galil et al., 2023]", "Explanation": "The cited work is a repository of models used in the experiments, which the citing paper references for specific models and results."}, {"Category": "Data Source", "Citation": "[Parkhi et al., 2012]", "Explanation": "The cited work is the Oxford-IIIT Pet dataset, which the citing paper uses to run experiments and present results in Appendix E."}, {"Category": "Methodological Basis", "Citation": "[Galil et al., 2023]", "Explanation": "The cited work provides a benchmark of confidence estimator performance for a range of models, including EfficientNet-V2-XL, which the citing paper uses as a reference for comparison in Table 1."}, {"Category": "Extension or Continuation", "Citation": "VGG16", "Explanation": "The cited work serves as a representative example of a lower accuracy model for which the MSP is already a good confidence estimator, providing a basis for the discussion in the citing paper about the performance of confidence estimators in different model settings."}, {"Category": "Methodological Basis", "Citation": "[2023]", "Explanation": "The cited work by Galil et al. 
provides the observation of a trade-off between accuracy and uncertainty estimation in selective classification, which the citing paper builds upon in its research on confidence estimation and selective classification."}, {"Category": "Data Source", "Citation": "[Hendrycks and Dietterich, 2018]", "Explanation": "The cited work provides the ImageNet-C dataset, which is used in the citing paper to evaluate the performance of post-hoc methods under distribution shift."}, {"Category": "Data Source", "Citation": "[Recht et al., 2019]", "Explanation": "The cited work provides the ImageNetV2 dataset, which is used in the citing paper to replicate the original dataset creation process and evaluate the performance of post-hoc methods under distribution shift."}, {"Category": "Supporting Evidence", "Citation": "[Zhang et al., 2023]", "Explanation": "The cited work by Zhang et al. provides a definition of selective prediction, which is a foundational concept for the citing paper to build upon in discussing the learning with a reject option problem."}, {"Category": "Supporting Evidence", "Citation": "[Hendrickx et al., 2021]", "Explanation": "The cited work by Hendrickx et al. further elaborates on the learning with a reject option problem, providing a more in-depth understanding for the citing paper to build upon."}, {"Category": "Supporting Evidence", "Citation": "[Corbi\u00e8re et al., 2022]", "Explanation": "The cited work by Corbi\u00e8re et al. studies the same problem as the citing paper under the term failure prediction, providing a different perspective for the citing paper to consider."}, {"Category": "Supporting Evidence", "Citation": "[Galil et al., 2023]", "Explanation": "The cited work by Galil et al. discusses the problem of selective classification in the context of (ordinal) ranking, which the citing paper can use to further understand the nuances of the problem."}, {"Category": "Supporting Evidence", "Citation": "[Abdar et al., 2021]", "Explanation": "The cited work by Abdar et al. 
provides a more general term for the problem discussed in the citing paper, which is useful in understanding the various tasks and applications of uncertainty estimation."}, {"Category": "Methodological Basis", "Citation": "[Lebovitz et al., 2023]", "Explanation": "The cited work on model cascades provides a methodological basis for the citing paper to apply the principles of selective classification in the context of efficient inference."}, {"Category": "Methodological Basis", "Citation": "[Lakshminarayanan et al., 2017, Teye et al., 2018, Ayhan and Berens, 2018]", "Explanation": "The cited works provide a popular tool in the uncertainty literature for using ensembles, which the citing paper adopts as a method for constructing confidence estimators from ensemble component outputs."}, {"Category": "Data Source", "Citation": "[Gal and Ghahramani, 2016]", "Explanation": "The cited work by Gal and Ghahramani is a prominent example of Monte-Carlo dropout, which the citing paper uses as a data source for constructing a confidence estimator from ensemble component outputs."}, {"Category": "Extension or Continuation", "Citation": "[Abe et al., 2022, Cattelan and Silva, 2022, Xia and Bouganis, 2022]", "Explanation": "The cited works have found evidence that ensembles may not be fundamental for uncertainty, which the citing paper extends by focusing on simple post-hoc confidence estimators for softmax networks that can be directly computed from the logits."}, {"Category": "Methodological Basis", "Citation": "[Le Cun et al., 1990]", "Explanation": "The cited work introduces the LogitsMargin method for selective classification, which the citing paper builds upon in a real-world application."}, {"Category": "Extension or Continuation", "Citation": "[Feng et al., 2023]", "Explanation": "The cited work has found that the MSP confidence estimator can be improved by discarding the auxiliary output and using a simpler post-hoc method, which the citing paper expands upon in the context of selective classification."}, {"Category": "Supporting Evidence", "Citation": "[Guo et al., 2017]", "Explanation": "The cited work proposes the temperature scaling (TS) approach to improve calibration in the context of calibration, which the citing paper cites to support the use of this method in the field of calibration."}, {"Category": "Supporting Evidence", "Citation": "[Balanya et al., 2023]", "Explanation": "The cited work on adaptive TS provides a generalization of TS that uses an input-dependent temperature based on logits, which the citing paper uses as a special case in their post-hoc methods for selective classification."}, {"Category": "Extension or Continuation", "Citation": "[Clart\u00e9 et al., 2023]", "Explanation": "The cited work on optimizing TS for calibration is extended in the citing paper to explore the use of TS for selective classification, which is a new dimension in the research area."}, {"Category": "Data Source", "Citation": "[Liang et al., 2023]", "Explanation": "The cited work on optimizing TS for OOD detection provides a data source for the citing paper to use in their research on post-hoc methods for selective classification."}, {"Category": "Methodological Basis", "Citation": "[Liu et al., 2020]", "Explanation": "The cited work on logit-based confidence estimators for calibration and OOD detection provides a methodological basis for the post-hoc methods proposed in the citing paper for selective classification."}, {"Category": "Supporting Evidence", "Citation": "[Neumann et al., 2018]", 
"Explanation": "The cited work on logit-based confidence estimators for calibration and OOD detection provides additional evidence to support the use of post-hoc methods in the citing paper for selective classification."}, {"Category": "Supporting Evidence", "Citation": "[Rahimi et al., 2022]", "Explanation": "The cited work on logit-based confidence estimators for calibration and OOD detection further supports the use of post-hoc methods in the citing paper for selective classification."}, {"Category": "Supporting Evidence", "Citation": "[Tomani et al., 2022]", "Explanation": "The cited work on logit-based confidence estimators for calibration and OOD detection provides additional evidence to support the use of post-hoc methods in the citing paper for selective classification."}, {"Category": "Supporting Evidence", "Citation": "[Gonsior et al., 2022]", "Explanation": "The cited work on logit-based confidence estimators for calibration and OOD detection further supports the use of post-hoc methods in the citing paper for selective classification."}, {"Category": "Supporting Evidence", "Citation": "[Galil et al., 2023]", "Explanation": "The cited work by Galil et al. has been used in benchmarking models for selective classification and misclassification detection, which is relevant to the citing paper in terms of evaluating the performance of post-hoc estimators for selective classification."}, {"Category": "Supporting Evidence", "Citation": "[Ding et al., 2020]", "Explanation": "The work by Ding et al. has also been used in benchmarking models for selective classification and misclassification detection, providing further support for the evaluation of post-hoc estimators in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[2023]", "Explanation": "The cited work empirically evaluated ImageNet classifiers and found that TS-NLL improved selective classification performance for some models but degraded it for others, providing foundational evidence for the claims and hypotheses of the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[2021]", "Explanation": "The cited work by Wang et al. has argued for the need to compare models after simple post-hoc optimizations, which the citing paper extends by providing further evidence in the context of selective classification."}, {"Category": "Extension or Continuation", "Citation": "[2020]", "Explanation": "The cited work by Ashukha et al. has also argued for the need to compare models after simple post-hoc optimizations, which the citing paper extends by providing further evidence in the context of selective classification."}, {"Category": "Supporting Evidence", "Citation": "[Granese et al., 2021]", "Explanation": "The cited work by Granese et al. introduces a selection mechanism named DOCTOR, which is used as a reference in the citing paper to support the discussion of post-hoc estimators in the context of black box scenarios."}, {"Category": "Methodological Basis", "Citation": "[Granese et al., 2021]", "Explanation": "The cited work provides a different method for computing the confidence estimator, which the citing paper adopts in their own research to improve the performance of the model."}, {"Category": "Methodological Basis", "Citation": "[Wei et al., 2022]", "Explanation": "The cited work by Wei et al. 
provides a method of logit normalization during training to mitigate the overconfidence phenomenon observed in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[Wei et al., 2022]", "Explanation": "The cited work by Wei et al. provides the analysis and argument for using logit p-norm normalization as a post-hoc method to combat overconfidence in selective classification performance."}, {"Category": "Methodological Basis", "Citation": "[Galil et al., 2023]", "Explanation": "The cited work provides the observation that post-hoc logit p-norm normalization can improve selective classification performance, which the citing paper leverages to explain why temperature scaling (TS) with a temperature value of less than 1 can also improve selective classification performance."}, {"Category": "Methodological Basis", "Citation": "[Parkhi et al., 2012]", "Explanation": "The cited work provides the model and training procedure for the EfficientNet-V2-XL used in the citing paper for fine-tuning on Oxford-IIIT Pet."}, {"Category": "Data Source", "Citation": "[Wightman, 2019]", "Explanation": "The cited work is the source of the pre-trained EfficientNet-V2-XL model used in the citing paper for training on ImageNet."}, {"Category": "Methodological Basis", "Citation": "[Parkhi et al., 2012]", "Explanation": "The cited work provides the training set for the hold-out set used in the citing paper for testing the model on Oxford-IIIT Pet."}, {"Category": "Supporting Evidence", "Citation": "[Zhang et al., 2020]", "Explanation": "The cited work by Zhang et al. provides the concept of data efficiency in the context of tunable post-hoc methods, which the citing paper uses to evaluate the data efficiency of MaxLogit-pNorm in their empirical investigation."}, {"Category": "Methodological Basis", "Citation": "[2020]", "Explanation": "The cited work by Zhang et al. introduces the ensemble temperature scaling (ETS) method, which the citing paper adopts and adapts in their research to improve the performance of their confidence estimators."}, {"Category": "Methodological Basis", "Citation": "[2022]", "Explanation": "The cited work by Boursinos and Koutsoukos provides the confidence estimator that the citing paper adopts in their research, referred to as Boursinos-Koutsoukos (BK). The authors use this estimator to evaluate the performance of selective classification metrics in their study."}, {"Category": "Methodological Basis", "Citation": "[2023]", "Explanation": "The cited work proposes a new entropy-based temperature scaling method (HTS) that the citing paper adopts in their research to improve the performance of a model in a specific context."}, {"Category": "Extension or Continuation", "Citation": "[Tomani et al., 2022]", "Explanation": "The cited work, PTS, is mentioned as a method that requires a large number of tunable parameters and is only viable with a differentiable loss. The citing paper extends this idea by proposing a new method, MaxLogit-pNorm, that shows superior performance in selective classification while requiring much less hyperparameter tuning."}, {"Category": "Data Source", "Citation": "[Balanya et al., 2023]", "Explanation": "The cited work, HnLTS, is mentioned as a method that is only viable with a differentiable loss and is proposed for calibration. 
The citing paper acknowledges the origin of this work and the use of the NLL loss in the method."}, {"Category": "Methodological Basis", "Citation": "[He et al., 2011]", "Explanation": "The cited work introduces the LDAM method, which the citing paper adopts in its research to evaluate additional parameterless confidence estimators for selective classification."}, {"Category": "Methodological Basis", "Citation": "[Leon-Malpartida et al., 2018]", "Explanation": "The cited work proposes a method for selective classification that the citing paper uses in its research to evaluate additional parameterless confidence estimators."}, {"Category": "Extension or Continuation", "Citation": "[Granese et al., 2021]", "Explanation": "The cited work introduces the Gini index as a post-hoc method for selective classification, which the citing paper further discusses in Section 2.2 to provide a more comprehensive analysis of the subject."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b41", "b18", "b29", "b7", "b38", "b19", "b34" ], "table_ref": [], "text": "While pre-trained language models have made substantial progress in natural language processing, they still lack certain knowledge. Thus it is critical to incorporate external knowledge sources (Peters et al., 2019;Zhang et al., 2019;Logan et al., 2019). Previous research has primarily focused on incorporating symbolic knowledge from structured knowledge graphs. Recently, realizing the lack of expressiveness and contextualization of symbolic knowledge, many forms of commonsense knowledge bases are constructed, such as if-then knowledge (Sap et al., 2019) and discourse knowledge (Fang et al., 2021). The integration of such textual commonsense knowledge into language models has been shown to improve the state of the art for various tasks, such as named entity recognition (Wu et al., 2020) and commonsense knowledge base completion (Malaviya et al., 2020).\nHowever, integrating such commonsense knowledge are computationally expensive. Commonsense knowledge in text form requires more complex encoders (e.g. Transformer (Vaswani et al., 2017)), as opposed to the simple lookup operation for discrete symbolic knowledge. The feedforward and back-propagation process for the text encoder is significantly more computationally expensive than the standalone symbolic knowledge embeddings. Therefore, it is essential to reduce the computational cost for efficient integration of textual commonsense knowledge, particularly for large-scale applications.\nIn this paper, we propose a method to accelerate the process of incorporating textual commonsense knowledge into language models. Our approach is based on the observation that if multiple training samples in a mini-batch share the same commonsense description, the encoding for that description can be reused across those samples. In other words, we only need to encode each distinct description in a mini-batch once. For example, consider the training samples x 1•••4 and the associated commonsense t 1•••4 in Fig. 1. In the batch partitioning in Fig. 1a, the samples in one batch have no shared descriptions, requiring seven times of commonsense encoding for t i . However, in the batch partitioning shown in Fig. 1b, each description will be encoded only once, resulting in only four times of encoding for t i . The cost of encoding the commonsense is significantly reduced by effective partitioning of the training samples. Therefore, our goal is to group the training samples in such a way as to minimize the total number of distinct commonsense descriptions per mini-batch.\nTo optimize the batch partitioning, we begin by theoretically analyzing the objective ( §2.1). Our\nt 1 t 2 x 1 x 3 batch 1 t 3 t 4 x 2 x 4 batch 2 x 1 x 2 x 3 t 1 t 2 x 4 t 3 batch 1 batch 2 t 1 t 2 t 3 t 4\n(a) If samples are divided randomly into batches, a total of 7 times of encoding for ti is required.\nt 1 t 2 x 1 x 3 batch 1 t 3 t 4 x 2 x 4 batch 2 (b)\nIf samples are divided delicately, only a total of 4 times of encoding for ti is required.\nFigure 1: Idea of batch partitioning.\nkey observation is that the upper bound of the cost can be reduced to the well-studied graph k-cut problem (Rangapuram et al., 2014) ( § 3.1 § 3.2). As a result, we minimize the upper bound instead by adapting the classic spectral clustering algorithm ( § 3.3). 
The average number of distinct commonsense descriptions per batch is approximated by the distance to the cluster centroid and is optimized by spectral clustering. This is also empirically verified ( § 5.4).\nThe main contributions of this paper are as follows: (1) We propose the use of batch partitioning for improving the efficiency of textual commonsense integration for language models. (2) We theoretically demonstrate that the batch partitioning problem can be reduced to the classic graph k-cut problem, and we use the well-studied spectral clustering to optimize it. (3) We empirically show that the efficiency of integrating commonsense descriptions can be significantly improved without sacrificing effectiveness. The acceleration is even more pronounced for large-scale training." }, { "figure_ref": [], "heading": "The Batch Partitioning Problem", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze the training efficiency w.r.t. batch partitioning. We first show in § 2.1 that the complexity of the model depends on the number of corresponding knowledge descriptions per sample. Then, in § 2.2, we formally define this batch partitioning problem." }, { "figure_ref": [], "heading": "Model Setup and Complexity Analysis", "publication_ref": [ "b6", "b41", "b9" ], "table_ref": [ "tab_0" ], "text": "In this paper, we use the OK-Transformer (Cui and Chen, 2022) as the backbone. OK-Transformer is a recently proposed model that effectively introduces commonsense knowledge into language models. Traditional approaches for such introduction required pre-training language models on a large corpus along with external commonsense, which was time-consuming (Peters et al., 2019;Zhang et al., 2019). The OK-Transformer model, on the other hand, is able to directly incorporate extra knowledge without pre-training. This model utilizes commonsense tokens and attention mechanisms to effectively integrate textual commonsense. Our proposed batch partitioning method is also applicable to other models that encode target sentences and associated commonsense descriptions.\nTo analyze the computational complexity of encoding commonsense knowledge and formulate the problem, we briefly describe how the original OK-Transformer works. It consists of three Transformers, where Transformer (1) is used to represent the target sentence, Transformer (2) is used to represent each textual commonsense description, and Transformer (3) is used to incorporate commonsense embeddings from Transformer (2) into Transformer (1) .\nWe now concretely analyze the complexity of integrating external textual commonsense. When encoding a sample with associated commonsense descriptions, the complexity consists of three modules:\n• For encoding the target sentence via Transformer (1) , the complexity of encoding a sentence of length L into dimension D is O(L 2 D).\n• For encoding textual commonsense descriptions via Transformer (2) , the complexity of encoding C knowledge descriptions of length L is O(CL 2 D).\n• For integrating the knowledge embeddings into the target sentence via Transformer (3) , the complexity is O(C 2 D).\nWe summarize the complexity in Table 1. Since in practice we usually have L 2 ≫ C, the key is to reduce the complexity of encoding the textual commonsense descriptions, i.e., to reduce O(CL 2 D).\nTable 1: Module complexities. Target sentence encoding: O(L 2 D); external knowledge encoding: O(CL 2 D); knowledge integration: O(C 2 D).
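Because Transformer (2) dominates this cost, the saving pursued in this paper comes from encoding each distinct description in a mini-batch only once and sharing the result across samples. A minimal sketch of that reuse pattern follows (our own illustration; `encode` is a stand-in for Transformer (2) and is not the paper's code):

```python
from typing import Callable, Dict, List, Sequence

Vector = List[float]

def encode_batch_descriptions(
    batch: Sequence[Sequence[str]],    # per-sample lists of description texts
    encode: Callable[[str], Vector],   # stand-in for Transformer (2)
) -> List[List[Vector]]:
    """Encode every distinct description once and share it across samples."""
    cache: Dict[str, Vector] = {}
    for descriptions in batch:
        for t in descriptions:
            if t not in cache:         # encode each distinct description only once
                cache[t] = encode(t)
    # Gather the shared encodings back into per-sample order.
    return [[cache[t] for t in descriptions] for descriptions in batch]
```

The fewer distinct descriptions a batch contains, the fewer `encode` calls are made, which is exactly what batch partitioning tries to minimize.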
Relation to retrieval-based knowledge incorporation Integrating textual commonsense is related to learning dense retrievers for efficiently retrieving and introducing external textual commonsense, such as REALM (Guu et al., 2020). In commonsense incorporation, each sample only retrieves a small number of knowledge descriptions based on trigger words. So the key to our problem is to efficiently and effectively incorporate the retrieved knowledge descriptions, rather than the information retrieval in dense retrievers. Specifically, dense retrievers typically consist of a retriever and a knowledge-augmented encoder. Our work is analogous to reducing the cost of the knowledge-augmented encoder." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "We now formulate the problem of batch partitioning. As stated in the introduction, different samples may correspond to the same textual commonsense description. We only need to encode the distinct commonsense descriptions once for a batch of samples. Therefore, the goal of batch partitioning is to minimize the number of distinct commonsense descriptions per batch.\nMore formally, suppose the training data is $\mathcal{D}_{\mathrm{train}} = \{x_i, T(x_i), y_i\}_{i=1}^{N}$, where $x_i$ is the original sample, $y_i$ is the corresponding label, and $T(x_i) = \{t_{i1}, \cdots, t_{ic_i}\}$ is the collection of external knowledge descriptions for $x_i$. For a batch with s samples $x_1, \cdots, x_s$, the number of knowledge descriptions we need to encode is $|\bigcup_{i=1}^{s} T(x_i)|$. For convenience, we assume that N is divisible by the batch size s. To reduce the time complexity, we need to partition $\mathcal{D}_{\mathrm{train}}$ into k = N/s batches $B_1, \cdots, B_k$ such that each batch contains s samples and the total number of distinct textual commonsense descriptions in each batch is minimized:\n$\min \sum_{i=1}^{k} \big|\bigcup_{x \in B_i} T(x)\big| \quad \text{s.t.} \quad |B_i| = s$ (size constraint for each batch) (1)" }, { "figure_ref": [], "heading": "Solving the Batch Partitioning Problem", "publication_ref": [], "table_ref": [], "text": "To solve the batch partitioning problem, we first approximate the upper bound of Eq. (1) in § 3.1. We minimize this upper bound instead of directly minimizing Eq. (1). In § 3.2, we show that optimizing the upper bound can be reduced to the classic minimum graph k-cut problem, so that some well-studied algorithms can be applied. We show how we adapt classical spectral clustering to this problem in § 3.3, and how to scale it up in § 3.4." }, { "figure_ref": [], "heading": "Upper Bound Analysis", "publication_ref": [], "table_ref": [], "text": "We analyze the upper bound of Eq. (1) in Theorem 1.\nTheorem 1 (Upper bound).\n$\sum_{i=1}^{k} \big|\bigcup_{x \in B_i} T(x)\big| \le \sum_{i=1}^{k} \Big[ \sum_{x \in B_i} |T(x)| - s\, \mathbb{E}_{x_a, x_b \in B_i, x_a \ne x_b} |T(x_a) \cap T(x_b)| \Big]$ (2)\nProof. For a batch B with s samples $\{x_1, \cdots, x_s\}$, we have:\n$\big|\bigcup_{i=1}^{s} T(x_i)\big| = \sum_{i=1}^{s} \big|T(x_i) - \bigcup_{j=1}^{i-1} T(x_j)\big| = \sum_{i=1}^{s} \big|T(x_i) - \bigcup_{j=1}^{i-1} T(x_j) \cap T(x_i)\big| = \sum_{i=1}^{s} |T(x_i)| - \sum_{i=1}^{s} \big|\bigcup_{j=1}^{i-1} T(x_j) \cap T(x_i)\big| \le \sum_{i=1}^{s} |T(x_i)| - \sum_{i=1}^{s} \max_{1 \le j \le i-1} |T(x_j) \cap T(x_i)|$ (3)\nThe upper bound in Eq. (3) after relaxation is related to the sample order of that batch, while our original objective in Eq. (1) is actually order-independent. To introduce order-independence, let π be a permutation of $1, \cdots, s$ with $\pi_i \in \{1, \cdots, s\}$. Noticing that $\sum_{i=1}^{s} |T(x_i)|$ is a constant, based on the order-independence we transform Eq. (3) into an expectation over different permutations π:
$\mathbb{E}_{\pi} \sum_{i=1}^{s} \max_{1 \le j \le i-1} |T(x_{\pi_j}) \cap T(x_{\pi_i})| = \sum_{i=1}^{s} \mathbb{E}_{\pi} \max_{1 \le j \le i-1} |T(x_{\pi_j}) \cap T(x_{\pi_i})| \ge \sum_{i=1}^{s} \max_{1 \le j \le i-1} \mathbb{E}_{\pi} |T(x_{\pi_j}) \cap T(x_{\pi_i})| = s\, \mathbb{E}_{x_a, x_b \in B_i, x_a \ne x_b} |T(x_a) \cap T(x_b)|$ (4)\nTherefore Theorem 1 holds.\nIt is worth highlighting that the relaxation in the last inequality of Eq. (3) is valid due to the non-random distribution of words in samples. Specifically, samples with similar meanings tend to have similar word distributions. By grouping similar samples into the same batch, each sample pair within a batch will possess similar textual commonsense knowledge descriptions. This allows us to use the maximal common descriptions between $T(x_i)$ and $T(x_j)$ as an approximation for the common descriptions between $T(x_i)$ and $\bigcup_{j=1}^{i-1} T(x_j)$. According to Theorem 1, since $\sum_{i=1}^{k} \sum_{x \in B_i} |T(x)| = \sum_{x \in \mathcal{D}_{\mathrm{train}}} |T(x)|$ is a constant, minimizing Eq. (1) is equivalent to maximizing:\n$\sum_{i=1}^{k} \mathbb{E}_{x_a, x_b \in B_i, x_a \ne x_b} |T(x_a) \cap T(x_b)|$ (5)\nWe will show that this is a balanced graph k-cut problem in § 3.2." }, { "figure_ref": [], "heading": "Connection to the Graph k-Cut Problem", "publication_ref": [], "table_ref": [], "text": "We now illustrate the relationship between Eq. (5) and the graph k-cut problem. We demonstrate that, with proper transformation, maximizing Eq. (5) can be reduced to the graph k-cut problem. Additionally, in § 3.3, we explain how to incorporate the constraint of the size of each mini-batch using the balanced graph k-cut.\nConsider constructing a weighted graph G(V, E) as follows:\n• For each sample $x_i$ in the training data, create a vertex $v_i$.\n• For each pair of distinct vertices $(v_i, v_j)$, create an edge between them with a weight of $|T(x_i) \cap T(x_j)|$.\nThe graph k-cut for G(V, E) partitions G(V, E) into k non-empty components $V_1, \cdots, V_k$ such that the sum weight of cross-component edges is minimized. According to the construction of G(V, E), maximizing Eq. (5) is equivalent to minimizing the sum weight of the cut. This is formalized in Theorem 2.\nTheorem 2 (Relation to minimum k-cut problem). Suppose the weight of the k-cut for G(V, E) is w, then we have:\n$\text{Eq. (5)} = \frac{2}{s(s-1)} \Big[ \sum_{i=1}^{n-1} \sum_{j=i}^{n} |T(x_i) \cap T(x_j)| - w \Big]$ (6)\nProof. A k-cut of G(V, E) consists of k components. These k components correspond to k batches in the k-partition. Therefore, the sum weight of inner-component edges of the k-cut is equal to Eq. (5) $\times \frac{s(s-1)}{2}$. Since the total weight of edges in G(V, E) is equal to the sum weight of inner-component edges plus the sum weight of the cut, Theorem 2 holds.\nAs $\sum_{i=1}^{n-1} \sum_{j=i}^{n} |T(x_i) \cap T(x_j)|$ is a constant for the given training data, Theorem 2 shows that maximizing Eq. (5) is equivalent to minimizing the k-cut for G(V, E). Thus, we convert the problem of maximizing Eq. (5) into the classic minimum k-cut problem." }, { "figure_ref": [], "heading": "Spectral Clustering for the Balanced k-Cut", "publication_ref": [], "table_ref": [], "text": "Based on the analysis in § 3.2, we propose to use spectral clustering, a widely used approach for solving the minimum graph k-cut problem, as our batch partition algorithm. Spectral clustering employs spectral relaxation of the ratio/normalized cut and uses k-means in the embedding of the vertices found by the first k eigenvectors of the graph Laplacian in order to obtain the clustering.
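The graph construction of §3.2 is straightforward to state in code. The sketch below (our own illustration, not the authors' implementation) builds the dense weight matrix whose (i, j) entry is $|T(x_i) \cap T(x_j)|$; a k-way partition of this graph that cuts little weight keeps sample pairs with large description overlap inside the same batch, which is exactly what Eq. (5) rewards:

```python
from typing import List, Set

def build_overlap_graph(T: List[Set[str]]) -> List[List[int]]:
    """Edge weight between samples i and j is |T(x_i) ∩ T(x_j)|; zero diagonal."""
    n = len(T)
    W = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w = len(T[i] & T[j])   # number of shared commonsense descriptions
            W[i][j] = W[j][i] = w
    return W
```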
In addition to the classic minimum graph k-cut problem, we need to incorporate the constraint that each cut/batch must have a size of s.\nTo incorporate the batch size constraint, we make a simple modification to the k-means step in spectral clustering. In traditional k-means, each node is assigned to the nearest cluster center. In our algorithm, if the nearest cluster center has already been assigned s nodes, the node will be assigned to the nearest center that has fewer than s assigned nodes. The specific spectral clustering algorithm is presented as follows.\n1. Compute the spectral embedding $Y \in \mathbb{R}^{n \times k}$ by stacking the normalized first k eigenvectors of G(V, E) in columns, as described in (Ng et al., 2002).\n2. Treat the i-th row of Y as the feature $e_i \in \mathbb{R}^{k}$ of the i-th training point.\n3. Given an initial set of k means $m_1, \cdots, m_k$ obtained by randomly selecting k nodes as centers, repeat the following two steps until convergence:\n(a) Assignment step. Assign nodes to centers:\ni. Compute the distances to the centers $dis_{i,j} = \mathrm{distance}(e_i, m_j)$, where the Euclidean distance is used.\nii. Sort (i, j) in ascending order of $dis_{i,j}$ for all 1 ≤ i ≤ n, 1 ≤ j ≤ k.\niii. Iterate through all (i, j). If node i is not assigned in this round and center j has fewer than s assigned nodes, assign node i to center j.\n(b) Update step. Compute new centers by taking the mean of their assigned nodes." }, { "figure_ref": [], "heading": "Spectral Clustering at Scale", "publication_ref": [ "b17", "b15", "b3", "b32" ], "table_ref": [], "text": "The above algorithm consists of the computation of the eigenvectors and the use of k-means. K-means is efficient even for large-scale data. However, when n and k are large, the graph construction and eigenvector computation become computationally expensive.\nTo compute the spectral embeddings at scale, high-performance optimization techniques are available, such as (Liu et al., 2013;Kolev and Mehlhorn, 2016;Boutsidis et al., 2015;Tremblay et al., 2016). Also, in our experiments, a simple trick was found to yield meaningful results: only calculate k′-dimensional feature vectors (k′ < k) and perform k-means on these k′ dimensions. We found that k′ = 8 is a good practice in our experiments.
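A minimal sketch of the size-constrained assignment step (3(a)ii–iii above), written over precomputed spectral embeddings; this is our own paraphrase of the described rule, not the authors' implementation:

```python
import numpy as np

def balanced_assignment(E: np.ndarray, M: np.ndarray, s: int) -> np.ndarray:
    """Assign each of the n rows of E (node embeddings) to one of the k rows of M
    (centers), with at most s nodes per center, preferring small distances first."""
    n, k = E.shape[0], M.shape[0]
    # dis[i, j]: Euclidean distance between node i and center j.
    dis = np.linalg.norm(E[:, None, :] - M[None, :, :], axis=-1)
    # Enumerate all (node, center) pairs in ascending order of distance.
    rows, cols = np.unravel_index(np.argsort(dis, axis=None), dis.shape)
    assign = np.full(n, -1, dtype=int)
    load = np.zeros(k, dtype=int)
    for i, j in zip(rows, cols):
        if assign[i] == -1 and load[j] < s:  # node unassigned, center has capacity
            assign[i] = j
            load[j] += 1
    return assign
```

The update step is then the ordinary k-means centroid update, restricted to each center's assigned nodes.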
ATOMIC (Sap et al., 2019;Hwang et al., 2021) is a large-scale manually annotated common-sense textual knowledge base that includes social interaction, event-centered, physical entity. ATOMIC contains knowledge like (PersonX reaches PersonX's home, Before, PersonX needs to park the car). ASER (Zhang et al., 2020) is an eventuality knowledge graph of activities, states, events, and their relations. Its knowledge atoms are in natural language form, e.g. (I do not have lunch, succession, I am hungry). COMET (Bosselut et al., 2019) is an extension of ATOMIC based on the generative language model. It mainly solves the problem of insufficient coverage of ATOMIC. Some primitive research (Guan et al., 2020;Shwartz et al., 2020) has started to apply these textual knowledge bases in some specific tasks. OK-Transformer (Cui and Chen, 2022) is proposed to integrate textual knowledge for general purposes. However, in our experimental tests, it takes too much time in encoding the commonsense. To our knowledge, there is still a lack of research on how to integrate textual knowledge into general text understanding tasks efficiently.\nComparison with dense textual knowledge retriever When introducing external texts, another style is to use a retriever that returns only top k candidate texts in terms of similarity (Chen et al., 2017;Karpukhin et al., 2020;Wang et al., 2019). However, this method requires a heavy pre-training process to learn the retriever. On the other hand, for the textual knowledge base we use in this paper, we can directly use the manually labeled trigger words for each knowledge description to retrieve knowledge. Therefore, in this paper, we focus on how to efficiently and effectively integrate knowledge from a textual knowledge base.\nHigh-performance language models More general techniques for high-performance language models have also received extensive studies. The main approaches of previous studies include (1) model compression and quantization (Sanh et al., 2019;Jacob et al., 2018), and (2) efficient repre-sentation of long texts (Kitaev et al., 2019;Peng et al., 2020). However, the model compression approaches require heavy pre-training before they can be adapted to language models. Moreover, the techniques for optimizing the efficiency for long text do not have significant effects on short texts (Peng et al., 2020). Besides, each commonsense description we considered in this paper tends to be short. In addition, these works have not considered the characteristics of the knowledge integration problem in this paper, i.e., a training sample corresponds to multiple candidate textual knowledge from the knowledge base." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we conducted extensive experiments to evaluate batch partitioning. We aim to address the following key questions:\n1. ( § 5.2) How much is the efficiency improvement of batch partitioning? Can it improve efficiency without sacrificing effectiveness?\n2. ( § 5.3) What is the scalability of batch partitioning as an acceleration method, and can it be applied to large-scale training?\n3. ( § 5.4) Is the main theoretical contribution of this paper, i.e., solving the balanced graph-k cut by spectral clustering, consistent with the real datasets?" 
}, { "figure_ref": [], "heading": "Implementation Details and Setup", "publication_ref": [ "b6", "b11", "b9" ], "table_ref": [], "text": "Textual knowledge base We follow (Cui and Chen, 2022) to use ATOMIC2020 (Hwang et al., 2021) as the textual knowledge base. Each atom in ATOMIC2020 is commonsense in text form. For each sentence in the downstream task, we retrieve the knowledge associated with it from the textual knowledge base. Note that, unlike retrieving knowledge from free text (Guu et al., 2020), the textual knowledge base ATOMIC2020 is constructed manually, and each knowledge description has corresponding trigger words. These trigger words are usually verbs or verb phrases. We retrieve related textual commonsense descriptions by keyword-matching of these trigger words." }, { "figure_ref": [], "heading": "Model architecture", "publication_ref": [ "b6", "b31", "b0", "b16", "b21", "b27", "b26", "b36" ], "table_ref": [], "text": "We use OK-Transformer (Cui and Chen, 2022) as the backbone of our model. It directly incorporates extra knowledge without pre-training.\nOK-Transformer is based on either BERT or RoBERTa. We use OK-Transformer based on BERT by default. We also follow the hyperparameter settings of OK-Transformer. All experiments were run on 8 Nvidia RTX 3090Ti GPUs.\nDatasets We evaluate batch partitioning via commonsense reasoning and sentence classification. Since the textual knowledge introduced in this paper is commonsense descriptions, we first verify whether the proposed method in this paper could be applied to the commonsense reasoning tasks. To this end, we choose a wide range of commonsense reasoning tasks to conduct the experiments: CommonsenseQA (Talmor et al., 2019), Physi-calQA (Bisk et al., 2020), as well as several Winograd Schema Challenge (WSC) datasets including WSC273 (Levesque et al., 2012), PDP (Morgenstern et al., 2016), WinoGrande (Sakaguchi et al., 2019), WinoGender (Rudinger et al., 2018). Furthermore, for a comprehensive comparison, we also evaluate the efficiency and effectiveness of the proposed batch partitioning method on the text classification benchmark GLUE (Wang et al., 2018)." }, { "figure_ref": [], "heading": "Effectiveness and Efficiency", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "Baselines To verify the efficiency and effectiveness of batch partitioning, we used the following baselines:\n• Vanilla BERT/RoBERTa without external knowledge.\n• OK-Transformer To show the efficiency gains of the batch partitioning proposed in this paper, we compare it with the original OK-Transformer. The baseline randomly partitions samples into batches. We consider this baseline as the lower upper bound of effectiveness of commonsense integration.\n• Frozen knowledge encodings For a comprehensive comparison, we propose to freeze the encoding of commonsense descriptions during fine-tuning. This approach allows us to introduce external textual commonsense descriptions via embedding lookup with minimal time cost. We consider this baseline as the upper bound on the efficiency of commonsense integration.\nThe results of commonsense reasoning and text classification are presented in Table 2 andTable 3, respectively. The effectiveness of our batch partitioning approach is demonstrated by its improvement over vanilla language models on both commonsense reasoning and text classification tasks. The effectiveness is comparable or slightly superior to that of OK-Transformer, which serves as the upper bound for effectiveness. 
In terms of efficiency, our approach significantly accelerates knowledge integration models across a range of tasks. On average, it speeds up knowledge encoding by 40% for commonsense reasoning tasks and by 110% for text classification tasks. This acceleration is close to that of frozen knowledge, which serves as the upper bound for efficiency. Overall, our approach is close to its efficiency upper bound without losing effectiveness." }, { "figure_ref": [ "fig_3", "fig_1" ], "heading": "Scalability for Dataset Sizes, Device Capacities, and Knowledge Sizes", "publication_ref": [], "table_ref": [], "text": "In this subsection, we investigate the scalability of batch partitioning with different batch sizes, as well as different dataset sizes. Larger dataset sizes usually mean devices with larger memory. In particular, we calculated the speedups of knowledge encoding for different batch sizes and different tasks. The results are shown in Fig. 4. The datasets are sorted by size in descending order. It can be clearly seen that as the size of the dataset rises or the memory of the device rises (larger batch size), the speedup of batch partitioning becomes more significant. This is because, for data-intensive tasks, the knowledge overlapping among different samples is more significant, which increases the feasibility of using batch partitioning. This result verifies the scalability of batch partitioning.\nWe also investigate the scalability of batch partitioning over different scales of integrated commonsense. To control the scale, we set the upper number of commonsense descriptions for each sample to 16/32/64, respectively, and study the efficiency. Intuitively, richer commonsense descriptions lead to higher effectiveness but more computation cost. The results are shown in Fig. 2.\nAs commonsense knowledge becomes richer, the effectiveness and the acceleration both increase. This is because the knowledge overlapping among samples also becomes more significant. The result verifies that batch partitioning is applicable for incorporating large-scale commonsense knowledge bases.
Due to the high encoding costs of commonsense descriptions, it is crucial to reduce their encoding complexity. Our idea is that by carefully dividing samples with similar descriptions into the same batch, the knowledge encoding utilization can be improved.\nWith such an idea, we theoretically analyze the optimization objective of this batch partitioning. We found that the upper bound of this problem can be reduced to the classical graph k-cut problem. We propose to use the well-studied spectral clustering algorithm to optimize the batch partitioning. By experimenting with a variety of tasks, we show that the proposed batch partitioning approaches its upper bound in terms of both effectiveness and efficiency. And the method is more applicable for larger datasets and on devices with more capabilities." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The theoretical results and the algorithm should be applicable for other knowledge integration models which encode target sentences and associated textual knowledge descriptions in mini-batches. However, this paper does not extensively apply the proposed method to various knowledge integration models to explore its efficiency and effectiveness." } ]
2023-05-24
10.18653/v1/P19-1470
[ { "authors": "Yonatan Bisk; Rowan Zellers; Jianfeng Gao; Yejin Choi", "journal": "", "ref_id": "b0", "title": "Piqa: Reasoning about physical commonsense in natural language", "year": "2020" }, { "authors": "Antoine Bordes; Nicolas Usunier; Alberto Garcia-Duran; Jason Weston; Oksana Yakhnenko", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Translating embeddings for modeling multirelational data", "year": "2013" }, { "authors": "Antoine Bosselut; Hannah Rashkin; Maarten Sap; Chaitanya Malaviya; Asli Celikyilmaz; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "COMET: Commonsense transformers for automatic knowledge graph construction", "year": "2019" }, { "authors": "Christos Boutsidis; Prabhanjan Kambadur; Alex Gittens", "journal": "", "ref_id": "b3", "title": "Spectral clustering via the power methodprovably", "year": "2015" }, { "authors": " Pmlr", "journal": "", "ref_id": "b4", "title": "", "year": "" }, { "authors": "Danqi Chen; Adam Fisch; Jason Weston; Antoine Bordes", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Reading Wikipedia to answer opendomain questions", "year": "2017" }, { "authors": "Wanyun Cui; Xingran Chen", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Enhancing natural language representation with large-scale out-ofdomain commonsense", "year": "2022" }, { "authors": "Tianqing Fang; Hongming Zhang; Weiqi Wang; Yangqiu Song; Bin He", "journal": "", "ref_id": "b7", "title": "Discos: Bridging the gap between discourse knowledge and commonsense knowledge", "year": "2021" }, { "authors": "Jian Guan; Fei Huang; Zhihao Zhao; Xiaoyan Zhu; Minlie Huang", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b8", "title": "A knowledge-enhanced pretraining model for commonsense story generation", "year": "2020" }, { "authors": "Kelvin Guu; Kenton Lee; Zora Tung; Panupong Pasupat; Mingwei Chang", "journal": "", "ref_id": "b9", "title": "Retrieval augmented language model pre-training", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b10", "title": "", "year": "" }, { "authors": "Jena D Hwang; Chandra Bhagavatula; Le Ronan; Jeff Bras; Keisuke Da; Antoine Sakaguchi; Yejin Bosselut; Choi", "journal": "", "ref_id": "b11", "title": "Comet-atomic 2020: On symbolic and neural commonsense knowledge graphs", "year": "2021" }, { "authors": "Benoit Jacob; Skirmantas Kligys; Bo Chen; Menglong Zhu; Matthew Tang; Andrew Howard; Hartwig Adam; Dmitry Kalenichenko", "journal": "", "ref_id": "b12", "title": "Quantization and training of neural networks for efficient integer-arithmetic-only inference", "year": "2018" }, { "authors": "Vladimir Karpukhin; Barlas Oguz; Sewon Min; Patrick Lewis; Ledell Wu; Sergey Edunov; Danqi Chen; Wen-Tau Yih", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Dense passage retrieval for opendomain question answering", "year": "2020" }, { "authors": "Nikita Kitaev; Lukasz Kaiser; Anselm Levskaya", "journal": "", "ref_id": "b14", "title": "Reformer: The efficient transformer", "year": "2019" }, { "authors": "Pavel Kolev; Kurt Mehlhorn", "journal": "", "ref_id": "b15", "title": "A note on spectral clustering", "year": "2016" }, { "authors": "Hector Levesque; Ernest Davis; Leora Morgenstern", "journal": "Citeseer", "ref_id": "b16", "title": "The winograd schema challenge", "year": "2012" }, { "authors": "Jialu Liu; Chi Wang; 
Marina Danilevsky; Jiawei Han", "journal": "", "ref_id": "b17", "title": "Large-scale spectral clustering on graphs", "year": "2013" }, { "authors": "Robert Logan; Nelson F Liu; Matthew E Peters; Matt Gardner; Sameer Singh", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Barack's wife hillary: Using knowledge graphs for fact-aware language modeling", "year": "2019" }, { "authors": "Chaitanya Malaviya; Chandra Bhagavatula; Antoine Bosselut; Yejin Choi", "journal": "", "ref_id": "b19", "title": "Commonsense knowledge base completion with structural and semantic context", "year": "2020" }, { "authors": "Tomas Mikolov; Ilya Sutskever; Kai Chen; Greg S Corrado; Jeff Dean", "journal": "", "ref_id": "b20", "title": "Distributed representations of words and phrases and their compositionality", "year": "2013" }, { "authors": "Leora Morgenstern; Ernest Davis; Charles L Ortiz", "journal": "AI Magazine", "ref_id": "b21", "title": "Planning, executing, and evaluating the winograd schema challenge", "year": "2016" }, { "authors": "Y Andrew; Michael I Ng; Yair Jordan; Weiss", "journal": "", "ref_id": "b22", "title": "On spectral clustering: Analysis and an algorithm", "year": "2002" }, { "authors": "Hao Peng; Nikolaos Pappas; Dani Yogatama; Roy Schwartz; Noah Smith; Lingpeng Kong", "journal": "", "ref_id": "b23", "title": "Random feature attention", "year": "2020" }, { "authors": "Mark Matthew E Peters; Robert Neumann; Roy Logan; Vidur Schwartz; Sameer Joshi; Noah A Singh; Smith", "journal": "", "ref_id": "b24", "title": "Knowledge enhanced contextual word representations", "year": "2019" }, { "authors": "Syama Sundar Rangapuram; Pramod Kaushik Mudrakarta; Matthias Hein", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b25", "title": "Tight continuous relaxation of the balanced k-cut problem", "year": "2014" }, { "authors": "Rachel Rudinger; Jason Naradowsky; Brian Leonard; Benjamin Van Durme", "journal": "", "ref_id": "b26", "title": "Gender bias in coreference resolution", "year": "2018" }, { "authors": "Keisuke Sakaguchi; Le Ronan; Chandra Bras; Yejin Bhagavatula; Choi", "journal": "", "ref_id": "b27", "title": "Winogrande: An adversarial winograd schema challenge at scale", "year": "2019" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b28", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Maarten Sap; Le Ronan; Emily Bras; Chandra Allaway; Nicholas Bhagavatula; Hannah Lourie; Brendan Rashkin; Noah A Roof; Yejin Smith; Choi", "journal": "", "ref_id": "b29", "title": "Atomic: An atlas of machine commonsense for ifthen reasoning", "year": "2019" }, { "authors": "Vered Shwartz; Peter West; Le Ronan; Chandra Bras; Yejin Bhagavatula; Choi", "journal": "", "ref_id": "b30", "title": "Unsupervised commonsense question answering with self-talk", "year": "2020" }, { "authors": "Alon Talmor; Jonathan Herzig; Nicholas Lourie; Jonathan Berant", "journal": "", "ref_id": "b31", "title": "Commonsenseqa: A question answering challenge targeting commonsense knowledge", "year": "2019" }, { "authors": "Nicolas Tremblay; Gilles Puy; Rémi Gribonval; Pierre Vandergheynst", "journal": "", "ref_id": "b32", "title": "Compressive spectral clustering", "year": "2016" }, { "authors": " Pmlr", "journal": "", "ref_id": "b33", "title": "", "year": "" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan 
N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b34", "title": "Attention is all you need", "year": "2017" }, { "authors": "Denny Vrandečić; Markus Krötzsch", "journal": "Communications of the ACM", "ref_id": "b35", "title": "Wikidata: a free collaborative knowledgebase", "year": "2014" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel Bowman", "journal": "", "ref_id": "b36", "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "Zhiguo Wang; Patrick Ng; Xiaofei Ma; Ramesh Nallapati; Bing Xiang", "journal": "", "ref_id": "b37", "title": "Multi-passage bert: A globally normalized bert model for open-domain question answering", "year": "2019" }, { "authors": "Ledell Wu; Fabio Petroni; Martin Josifoski; Sebastian Riedel; Luke Zettlemoyer", "journal": "", "ref_id": "b38", "title": "Scalable zeroshot entity linking with dense entity retrieval", "year": "2020" }, { "authors": "Wenhan Xiong; Jingfei Du; William Yang; Wang ; Veselin Stoyanov", "journal": "", "ref_id": "b39", "title": "Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model", "year": "2019" }, { "authors": "Hongming Zhang; Xin Liu; Haojie Pan; Yangqiu Song; Cane Wing; -Ki Leung", "journal": "", "ref_id": "b40", "title": "Aser: A large-scale eventuality knowledge graph", "year": "2020" }, { "authors": "Zhengyan Zhang; Xu Han; Zhiyuan Liu; Xin Jiang; Maosong Sun; Qun Liu", "journal": "", "ref_id": "b41", "title": "Ernie: Enhanced language representation with informative entities", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 82.13, 77.51, 344.74, 111.12 ], "formula_id": "formula_0", "formula_text": "t 1 t 2 x 1 x 3 batch 1 t 3 t 4 x 2 x 4 batch 2 x 1 x 2 x 3 t 1 t 2 x 4 t 3 batch 1 batch 2 t 1 t 2 t 3 t 4" }, { "formula_coordinates": [ 2, 183.74, 77.51, 75.1, 142.6 ], "formula_id": "formula_1", "formula_text": "t 1 t 2 x 1 x 3 batch 1 t 3 t 4 x 2 x 4 batch 2 (b)" }, { "formula_coordinates": [ 2, 318.62, 582.34, 207.16, 99.85 ], "formula_id": "formula_2", "formula_text": "Transformer (3) , the complexity is O(C 2 D). Module Complexity Target sentence encoding O(L 2 D) External knowledge encoding O(CL 2 D) Knowledge integration O(C 2 D)" }, { "formula_coordinates": [ 3, 70.87, 429.31, 122.56, 14 ], "formula_id": "formula_3", "formula_text": "D train = {x i , T (x i ), y i } N i=1" }, { "formula_coordinates": [ 3, 70.87, 458.36, 108.38, 11.46 ], "formula_id": "formula_4", "formula_text": "T (x i ) = {t i1 , • • • , t ic i }" }, { "formula_coordinates": [ 3, 78.91, 606.6, 210.95, 62.93 ], "formula_id": "formula_5", "formula_text": "min k i=1 | x∈B i T (x)| s.t. |B i | = s (size constraint for each batch) (1)" }, { "formula_coordinates": [ 3, 309.02, 196.96, 215.99, 70.18 ], "formula_id": "formula_6", "formula_text": "k i=1 | x∈B i T (x)| ≤ k i=1 [ x∈B i |T (x)| -s Ex a ,x b ∈B i ,xa̸ =x b |T (xa) ∩ T (x b )|](2)" }, { "formula_coordinates": [ 3, 331.2, 304.2, 193.81, 123.63 ], "formula_id": "formula_7", "formula_text": "| s i=1 T (xi)| = s i=1 |T (xi) - i-1 j=1 T (xj)| = s i=1 |T (xi) - i-1 j=1 T (xj) ∩ T (xi)| = s i=1 |T (xi)| - s i=1 | i-1 j=1 T (xj) ∩ T (xi)| ≤ s i=1 |T (xi)| - s i=1 max 1≤j≤i-1 |T (xj) ∩ T (xi)|(3)" }, { "formula_coordinates": [ 3, 306.14, 493.2, 220.18, 25.77 ], "formula_id": "formula_8", "formula_text": "1 • • • s that π i ∈ {1, • • • , s}. Noticing that s i=1 |T (x i )" }, { "formula_coordinates": [ 3, 335.43, 554.13, 189.71, 123.71 ], "formula_id": "formula_9", "formula_text": "Eπ s i=1 max 1≤j≤i-1 |T (x π j ) ∩ T (x π i )| = s i=1 Eπ max 1≤j≤i-1 |T (x π j ) ∩ T (x π i )| ≥ s i=1 max 1≤j≤i-1 Eπ |T (x π j ) ∩ T (x π i )| = s Ex a,xb∈Bi,xa̸ =x b |T (x a ) ∩ T (x b )| (4)" }, { "formula_coordinates": [ 4, 70.87, 166.2, 218.27, 26.38 ], "formula_id": "formula_10", "formula_text": "s i=1 x∈B i |T (x)| = x∈D train |T (x)| is a constant, minimizing Eq. (" }, { "formula_coordinates": [ 4, 100.1, 219.15, 189.76, 33.71 ], "formula_id": "formula_11", "formula_text": "k i=1 Ex a,xb∈Bi,xa̸ =x b |T (x a ) ∩ T (x b )| (5)" }, { "formula_coordinates": [ 4, 70.87, 511.7, 219.54, 46.74 ], "formula_id": "formula_12", "formula_text": "|T (x i ) ∩ T (x j )|. The graph k-cut for G(V, E) partitions G(V, E) into k non-empty components: V 1 , • • • , V k such" }, { "formula_coordinates": [ 4, 70.87, 680.21, 220.08, 79.62 ], "formula_id": "formula_13", "formula_text": "Eq. (5) = 2 s(s -1) n-1 i=1 n j=i |T (x i ) ∩ T (x j )| -w (6) Proof. A k-cut of G(V, E) consists of k compo- nents." } ]
Free Lunch for Efficient Textual Commonsense Integration in Language Models
Recent years have witnessed the emergence of textual commonsense knowledge bases, aimed at providing more nuanced and contextrich knowledge. The integration of external commonsense into language models has been shown to be a key enabler in advancing the state-of-the-art for a wide range of NLP tasks. However, incorporating textual commonsense descriptions is computationally expensive, as compared to encoding conventional symbolic knowledge. In this paper, we propose a method to improve its efficiency without modifying the model. We group training samples with similar commonsense descriptions into a single batch, thus reusing the encoded description across multiple samples. One key observation is that the upper bound of batch partitioning can be reduced to the classic graph k-cut problem. Consequently, we propose a spectral clusteringbased algorithm to solve this problem. Extensive experiments illustrate that the proposed batch partitioning approach effectively reduces the computational cost while preserving performance. The efficiency improvement is more pronounced on larger datasets and on devices with more memory capacity, attesting to its practical utility for large-scale applications.
Wanyun Cui; Xingran Chen
[ { "figure_caption": "1. Compute the spectral embedding Y ∈ R n×k by stacking the normalized first k eigenvectors of G(V, E) in columns as described in (Ng et al., 2002). 2. Treat the i-th row of Y as the feature of the i-th training point e i ∈ R k . 3. Given an initial set of k means m 1 , • • • , m k by randomly selecting k nodes as centers, repeat the following two steps until convergence: (a) Assignment step Assign nodes to centers: i. Compute distances to centers dis i,j = distance(e i , m j ), where the Euclidean distance is used.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The effect of the scale of extra commonsense. We control the scale by limiting the upper number of commonsense descriptions per sample in SST-2.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Strong correlation between the distinct commonsense descriptions per batch and the average distance to the centroid during clustering.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Speed-up of proposed batch partitioning method over different training batch size and dataset size. Note that the datasets are arranged in descending order according to dataset size.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Module complexities.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on commonsense reasoning tasks. The effectiveness of batch partitioning surpasses the vanilla BERT/RoBERTa, and is competitive with its upper bound (OK-Transformer). In terms of efficiency, the speed-up of batch partitioning is also competitive to its upper bound (frozen knowledge). RoB. denotes RoBERTa.", "figure_data": "LMMRPCCoLARTEQNLISTS-BSST-2Avg.Speed-upBERTBERT 86.52/90.66 59.50 71.43 91.20 89.35/88.93 91.97 82.28-Frozen knowledgeBERT 87.50/91.28 57.31 70.76 91.71 87.31/87.20 92.43 81.782.3×OK-TransformerBERT 87.50/91.04 58.29 72.20 91.58 89.82/89.46 92.66 82.541.0×Batch Partitioning BERT 87.99/91.45 61.41 71.48 91.32 89.64/89.19 93.69 83.092.1×RoBERTaRoB.90.49/93.07 66.84 86.28 93.37 91.83/91.95 95.64 87.86-Frozen knowledgeRoB.89.71/92.61 68.22 87.36 94.39 90.74/90.47 96.10 88.192.4×OK-TransformerRoB.91.91/94.24 66.89 86.28 94.71 92.19/92.36 96.44 88.491.0×Batch PartitioningRoB.90.69/93.44 67.75 85.92 94.07 92.41/92.20 96.22 88.272.1×", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results on text classification tasks. Both the effectiveness and the efficiency of batch partitioning are competitive to their upper bounds (OK-Transformer and frozen knowledge).", "figure_data": "951.5941.2Accuracy92 930.6 0.9Speed-Up910.3900163264Upper Number of Knowledge DescriptionsBatch PartitioningOK-TransformerSpeed-Up", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Peters et al., 2019)", "Explanation": "The cited work by Peters et al. (2019) provides a foundational method for incorporating external knowledge sources into language models, which the citing paper builds upon in its own research."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2019)", "Explanation": "The cited work by Zhang et al. (2019) offers a method for integrating external knowledge into language models, which the citing paper adopts in its research."}, {"Category": "Methodological Basis", "Citation": "(Logan et al., 2019)", "Explanation": "The cited work by Logan et al. (2019) presents a method for incorporating external knowledge into language models, which the citing paper utilizes in its study."}, {"Category": "Extension or Continuation", "Citation": "(Malaviya et al., 2020)", "Explanation": "The cited work by Malaviya et al. (2020) extends the research on integrating commonsense knowledge into language models by exploring the use of such knowledge for commonsense knowledge base completion, which the citing paper builds upon in its own research."}, {"Category": "Methodological Basis", "Citation": "(Rangapuram et al., 2014)", "Explanation": "The cited work introduces the k-cut problem, which the citing paper adopts to reduce the upper bound of the cost in the batch partitioning problem for improving the efficiency of commonsense integration in language models."}, {"Category": "Methodological Basis", "Citation": "(Cui and Chen, 2022)", "Explanation": "The cited work, OK-Transformer, is the backbone of the citing paper and provides a method for effectively introducing commonsense knowledge into language models, which the citing paper adopts in its research."}, {"Category": "Methodological Basis", "Citation": "(Guu et al., 2020)", "Explanation": "The cited work REALM is used as a basis for learning dense retrievers for efficient retrieval and introduction of external textual commonsense, which the citing paper adopts in their research on integrating text commonsense."}, {"Category": "Methodological Basis", "Citation": "(Liu et al., 2013)", "Explanation": "The cited work by Liu et al. provides high-performance optimization techniques for computing spectral embeddings at scale, which the citing paper adopts to improve the efficiency of the process."}, {"Category": "Methodological Basis", "Citation": "(Kolev and Mehlhorn, 2016)", "Explanation": "The cited work by Kolev and Mehlhorn introduces another optimization technique for computing spectral embeddings at scale, which the citing paper may have considered as an alternative method."}, {"Category": "Methodological Basis", "Citation": "(Boutsidis et al., 2015)", "Explanation": "The cited work by Boutsidis et al. presents another optimization method for computing spectral embeddings at scale, which the citing paper may have considered as a possible solution to the computational challenges."}, {"Category": "Methodological Basis", "Citation": "(Tremblay et al., 2016)", "Explanation": "The cited work by Tremblay et al. provides another optimization technique for computing spectral embeddings at scale, which the citing paper may have considered in the context of improving the efficiency of the process."}, {"Category": "Extension or Continuation", "Citation": "(Liu et al., 2013)", "Explanation": "The cited work by Liu et al. 
introduces a simple trick for computing spectral embeddings that the citing paper further explores in their experiments, leading to the practice of using k \u2032 = 8 dimensions in the process."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2019)", "Explanation": "ERNIE is a method that uses pre-trained entity embeddings to integrate knowledge into language models, which serves as a methodological basis for the citing paper to explore the use of pre-trained entity embeddings in language modeling research."}, {"Category": "Methodological Basis", "Citation": "(Peters et al., 2019)", "Explanation": "Know-BERT is a method that uses a skip-gram like objective based on Wikipedia descriptions to pre-train entity embeddings, which the citing paper may adopt or adapt in their research on knowledge integration into language models."}, {"Category": "Methodological Basis", "Citation": "(Logan et al., 2019)", "Explanation": "KGLM is a method that allows modification/updating of knowledge by building a local knowledge graph for the target sentence, which the citing paper may use as a methodological basis for exploring knowledge integration in language modeling research."}, {"Category": "Methodological Basis", "Citation": "(Xiong et al., 2019)", "Explanation": "WKLM constructs a corpus of incorrect knowledge descriptions by replacing Wikipedia entities with different entities of the same type, which the citing paper may use as a methodological basis for exploring the use of knowledge in language modeling research."}, {"Category": "Supporting Evidence", "Citation": "(Cui and Chen, 2022)", "Explanation": "The cited work provides the model structure that the citing paper adopts in their research on integrating textual knowledge in models."}, {"Category": "Data Source", "Citation": "(Sap et al., 2019;Hwang et al., 2021)", "Explanation": "The cited work, ATOMIC, is a large-scale knowledge base that the citing paper uses as a data source for their research on common-sense textual knowledge."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2020)", "Explanation": "The cited work, ASER, is an eventuality knowledge graph that the citing paper uses as a data source for their research on activity, state, and event knowledge."}, {"Category": "Data Source", "Citation": "(Bosselut et al., 2019)", "Explanation": "The cited work, COMET, is an extension of ATOMIC that the citing paper uses as a data source for their research on common-sense textual knowledge."}, {"Category": "Supporting Evidence", "Citation": "(Guan et al., 2020)", "Explanation": "The cited work by Guan et al. is one of the first to explore the use of textual knowledge bases in specific tasks, providing a foundational basis for the research conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Shwartz et al., 2020)", "Explanation": "The work by Shwartz et al. extends the research on textual knowledge bases by applying them in a more general context, which the citing paper builds upon in its own study."}, {"Category": "Supporting Evidence", "Citation": "(Cui and Chen, 2022)", "Explanation": "The work by Cui and Chen introduces the OK-Transformer model for integrating textual knowledge in a more efficient way, which the citing paper further builds upon in its research on the same topic."}, {"Category": "Data Source", "Citation": "(Chen et al., 2017)", "Explanation": "The cited work by Chen et al. 
is a source of the pre-training process used in the method of retrieving top k candidate texts, which the citing paper uses in its own study on text understanding tasks."}, {"Category": "Data Source", "Citation": "(Karpukhin et al., 2020)", "Explanation": "The work by Karpukhin et al. is a data source for the pre-training process used in the method of retrieving top k candidate texts, which the citing paper uses in its research on text understanding tasks."}, {"Category": "Data Source", "Citation": "(Wang et al., 2019)", "Explanation": "The work by Wang et al. is a data source for the pre-training process used in the method of retrieving top k candidate texts, which the citing paper uses in its study on text understanding tasks."}, {"Category": "Methodological Basis", "Citation": "(Sanh et al., 2019)", "Explanation": "The cited work by Sanh et al. (2019) provides a model compression and quantization approach that the citing paper adopts to improve the performance of language models."}, {"Category": "Methodological Basis", "Citation": "(Jacob et al., 2018)", "Explanation": "The cited work by Jacob et al. (2018) also contributes to the model compression and quantization techniques for high-performance language models, which the citing paper leverages in its research."}, {"Category": "Methodological Basis", "Citation": "(Kitaev et al., 2019)", "Explanation": "The cited work by Kitaev et al. (2019) provides a method for efficient representation of long texts, which the citing paper uses to improve the efficiency of language models for short texts."}, {"Category": "Methodological Basis", "Citation": "(Peng et al., 2020)", "Explanation": "The cited work by Peng et al. (2020) also contributes to the optimization of efficiency for long text representation, which the citing paper adopts in its research to improve the performance of language models for short texts."}, {"Category": "Data Source", "Citation": "(Cui and Chen, 2022)", "Explanation": "The cited work provides the textual knowledge base ATOMIC2020 that the citing paper uses to retrieve commonsense descriptions for sentences in the downstream task."}, {"Category": "Methodological Basis", "Citation": "(Guu et al., 2020)", "Explanation": "The cited work introduces the method of retrieving knowledge from free text, which the citing paper builds upon to retrieve related textual commonsense descriptions from the textual knowledge base ATOMIC2020."}, {"Category": "Data Source", "Citation": "(Talmor et al., 2019)", "Explanation": "The cited work provides the CommonsenseQA dataset, which is used in the experiments of the citing paper to evaluate the performance of the proposed method."}, {"Category": "Data Source", "Citation": "(Bisk et al., 2020)", "Explanation": "The cited work provides the PhysicalQA dataset, which is used in the experiments of the citing paper to evaluate the performance of the proposed method."}, {"Category": "Data Source", "Citation": "(Levesque et al., 2012)", "Explanation": "The cited work provides the WSC273 dataset, which is used in the experiments of the citing paper to evaluate the performance of the proposed method."}, {"Category": "Data Source", "Citation": "(Morgenstern et al., 2016)", "Explanation": "The cited work provides the PDP dataset, which is used in the experiments of the citing paper to evaluate the performance of the proposed method."}, {"Category": "Data Source", "Citation": "(Sakaguchi et al., 2019)", "Explanation": "The cited work provides the WinoGrande dataset, which is used in the experiments of the 
citing paper to evaluate the performance of the proposed method."}, {"Category": "Data Source", "Citation": "(Rudinger et al., 2018)", "Explanation": "The cited work provides the WinoGender dataset, which is used in the experiments of the citing paper to evaluate the performance of the proposed method."}, {"Category": "Data Source", "Citation": "(Wang et al., 2018)", "Explanation": "The cited work provides the GLUE dataset, which is used in the experiments of the citing paper to evaluate the performance of the proposed method."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b8", "b21", "b25", "b5", "b17", "b25", "b5", "b17", "b36", "b29", "b17" ], "table_ref": [], "text": "The application of neural networks in NLP has been a great success. However, the opaque nature of neural network mechanisms raises concerns regarding the reliability of their inferences and the potential for superficial pattern learning. This has led to severe issues with the generalization ability and vulnerability of neural networks. (Jia and Liang, 2017;Rajpurkar et al., 2018).\nOne common attempt to address the opacity of neural networks is to guide them with explanations. Researchers propose that by connecting the model and the inductive bias from explanations, the reliability of neural network inferences can be improved. This approach has been successfully applied in relation extraction tasks, as demonstrated in previous studies such as (Srivastava et al., 2017;Hancock et al., 2018;Murty et al., 2020).\nEarlier approaches relied on explanations from semantic parsers (Srivastava et al., 2017;Hancock et al., 2018), which incurs a high annotation cost. The recently proposed approach, ExpBERT (Murty et al., 2020), was a breakthrough in its ability to directly incorporate natural language explanations. For example, in Fig. 1a, o 1 and o 2 went on a honeymoon can be used as one explanation to guide the recognition of the spousal relation. ExpBERT with annotated explanations achieves 63.5% accuracy in the spousal relation extraction dataset, while BERT without explanations only achieves 52.9%.\nConsidering the simple mechanism of ExpBERT, such improvement is quite surprising. ExpBERT simply concatenates explanations with the original text before being encoded by a language model. Based on the success of ExpBERT, one might conclude that text concatenation and pre-trained language models are sufficient for integrating the inductive bias from natural language explanations and guide models to make a sound inference. On the other hand, as exemplified by the history of deep learning, introducing extra inductive biases into neural networks is never trivial. Due to the strong generalization ability of neural networks, introducing inductive biases by humans is often surpassed by simpler models or even random inductive biases (Xie et al., 2019;Touvron et al., 2021;Tay et al., 2021a).\nIn this paper, starting from investigating the working mechanism of ExpBERT, we study how explanations guide and enhance the model effect. We first propose a simple strategy to control the inductive bias of explanations by the lens of corrupted explanations, wherein some words of annotated explanations are replaced by random words. We show an example of the corrupted explanation arXiv:2305.15520v1 [cs.CL] 24 May 2023\nx: Robert and Julie had a terrible honeymoon last month. y: Spouse Explanation: o1 and o2 went on a honeymoon. Figure 1: Corrupted explanations and their impact under frozen LM setting. Our results show that the random corruption does not result in a decrease in performance on the Disease dataset. While corruptions do lead to a reduction in the effect on the Spouse dataset, we find that even when explanations are 100% corrupted, they still result in an improvement over the baseline without explanations. These findings are an average of 3 runs. in Fig. 1a, where the word honeymoon is replaced by the random word frog. 
Obviously, the explanations will provide less valid information after random corruption.\nWe show the effect of corrupted explanations in Fig. 1b and Fig. 1c. On both datasets, adding explanations shows a clear improvement to the baseline with no explanation. Surprisingly, however, reducing the inductive bias of explanations has almost no effect on the improvement of explanations on the Disease dataset. On the Spouse dataset, although the improvement decreases as the corruption increases, the effect of 100% corrupted explanations is still better than no explanations. These results suggest that the effect of explanations should not be entirely attributed to their inductive bias.\nWith comprehensive experiments of corrupted explanations in §3, we identified the following characters of natural language explanations:\n• Sensitivity The effect of natural language explanations is sensitive to the training style and the downstream dataset. The previously observed improvement in accuracy and data efficiency (Murty et al., 2020) is only applicable for frozen language models. For fine-tunable language models, the improvement becomes less significant and does not generalize to all datasets.\n• Cause The effect of the natural language explanations comes from the extra context space, rather than their inductive bias. Given enough context space, there is no significant effect decrease over annotated explanations even if they are completely corrupted.\n• Parameter search helps The manual annotations provide a good initialization for that context -although it can also be obtained via parameter search. If we randomly initialize the extra context and fine-tune it with downstream datasets, it achieves competitive or superior results over annotated explanations.\nThe above findings motivate us to further investigate and improve natural language explanations. To get rid of the potential entanglement of the existing vocabulary, we further proposed the perturbed context as the substitute, which only contains randomly initialized embeddings. We conducted different variants of perturbed contexts to investigate how natural language explanations work. We find that full-rank random contexts achieve competitive results with annotated explanations but are 18-29 times faster." }, { "figure_ref": [], "heading": "Background and Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Definition", "publication_ref": [ "b17", "b25", "b5", "b17" ], "table_ref": [], "text": "We consider the relation extraction task, which is frequently used for explanation guidance evaluation. Given x = (s, o 1 , o 2 ), where s is the target sentence, o 1 and o 2 are two entities which are substrings of s, our goal is to predict the relation y between o 1 and o 2 .\nAdditionally, a set of natural language explanations E = {e 1 , • • • , e n } are annotated to capture relevant inductive bias for this task. This setting follows (Murty et al., 2020). Note that these explanations are designed to capture the global information for all samples in this task, rather than for each example.\nFor example, for the spousal relationship, \"o 1 and o 2 went on a honeymoon\" is a valid explanation used in ExpBERT. We claim that this explanation constitutes a global inductive bias, and whether o 1 and o 2 went on a honeymoon will be seen as a feature to determine their spousal relationship for all samples. 
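To make this global-explanation setup concrete, the sketch below shows one way the task data and the shared explanation set E could be represented; the class name and fields are illustrative assumptions, not part of any released code.

```python
from dataclasses import dataclass

@dataclass
class RelationExample:
    s: str    # target sentence
    o1: str   # first entity, a substring of s
    o2: str   # second entity, a substring of s
    y: str    # relation label to predict

# A single, task-level explanation set E shared by every sample of the task
# (here, the Spouse relation), not one explanation per example.
E = [
    "o1 and o2 went on a honeymoon",
    # ... further global explanations for the same relation
]

x = RelationExample(
    s="Robert and Julie had a terrible honeymoon last month.",
    o1="Robert",
    o2="Julie",
    y="Spouse",
)
```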
Similar global feature settings are also used in previous studies (Srivastava et al., 2017; Hancock et al., 2018; Murty et al., 2020)." }, { "figure_ref": [], "heading": "Guiding Language Models with Explanations", "publication_ref": [ "b17" ], "table_ref": [], "text": "Introducing annotated explanations ExpBERT (Murty et al., 2020) concatenates each annotated explanation with the original text, encodes every explanation-text pair with the language model, and feeds the concatenated pair representations to an MLP classifier.
Introducing corrupted explanations Corrupted explanations work similarly to ExpBERT, except that a certain fraction of the tokens in the explanations are replaced by random tokens. The more tokens are corrupted, the less inductive bias the explanation retains.
No explanation We also compare with the vanilla language model without introducing explanations. We use BERT as the language model by default." }, { "figure_ref": [], "heading": "Training Styles", "publication_ref": [ "b34", "b17" ], "table_ref": [], "text": "We consider two training styles: fine-tuning and frozen language models. In fine-tuning, all parameters are dynamically updated through backpropagation. In the frozen setting, all parameters of the language model BERT are frozen after being pre-trained on an additional corpus (i.e., MultiNLI (Williams et al., 2018)), and only the MLP classifier is tuned. This setting is used in (Murty et al., 2020)." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b17", "b5", "b5", "b37", "b17", "b17" ], "table_ref": [ "tab_1" ], "text": "We follow (Murty et al., 2020) in using three benchmarks: Spouse (Hancock et al., 2018), Disease (Hancock et al., 2018), and TACRED (Zhang et al., 2017). We use the annotated natural language explanations provided by (Murty et al., 2020) for the baselines, except for TACRED, whose natural language explanations are not published; we therefore manually annotated 128 explanations as in (Murty et al., 2020). The statistics of the datasets are shown in Table 1. More details of the implementation are given in Appendix A." }, { "figure_ref": [], "heading": "Characterizing Natural Language Explanations", "publication_ref": [ "b17", "b17", "b17" ], "table_ref": [], "text": "In this section, we make a thorough experimental study of the characters of natural language explanations. We plot the accuracy of the different settings in Fig. 1 (frozen) and Fig. 2 (fine-tunable).
Character 1 The effect and data efficiency of natural language explanations are sensitive to the training style and the dataset.
Effect improves for frozen language models. For frozen language models (Fig. 1), introducing annotated natural language explanations significantly improves the effect over models without explanations. This is in line with the finding in (Murty et al., 2020).
The improvement becomes less significant for fine-tuned language models and varies across datasets. However, in the more common setting where all parameters are fine-tuned, the effect of annotated explanations is unstable (Fig. 2). On the Spouse and TACRED datasets, introducing annotated explanations yields an accuracy improvement, but not on Disease. The improvement is less significant compared to the frozen language models. This challenges the perception in previous work that introducing natural language explanations has significant effects (Murty et al., 2020).
Data efficiency only holds for frozen language models As to data efficiency, a previous study (Murty et al., 2020) found that the effect of a model trained on the full amount of data without explanations can be achieved by using a small subset (e.g. 5%) of training data with explanations.
However, they only verified this in frozen language models. We analyzed the data efficiency in fine-tunable language models. We vary the proportion of training data on different datasets. The results are shown in Fig. 3. Similar to the accuracy experiments, we find that the data efficiency of natural language explanations does not generalize to fine-tunable language models. The natural language explanation does not improve the data efficiency, nor does the corrupted explanation.
Figure 2: Results for fine-tunable language models on (a) Spouse, (b) Disease, and (c) TACRED (curves: corrupted exp., annotated exp., no exp.). Annotated explanations show unstable and minor improvement over vanilla language models. The effectiveness did not significantly decrease after random corruptions.
Figure 3: For fine-tunable language models, introducing explanations does not improve data efficiency; panels (a) Spouse, (b) Disease, (c) TACRED.
Character 2 Reducing the inductive bias of annotated explanations does not significantly decrease the effect.
We also demonstrate the results of corrupting a certain fraction of tokens in annotated explanations in Fig. 1 and Fig. 2. The results are surprising: after corrupting with random words, the performance does not drop in most cases. Even when we replace 100% of the words in explanations with random words, the results are still competitive with the original explanations. This phenomenon was observed in all three datasets in the fine-tuning setting. Obviously, the 100% corrupted explanations do not provide any valid inductive bias for the model.
Interestingly, for the frozen language models, the results for Spouse and Disease diverge. In Fig. 1b, corrupting the explanations does not reduce the effect on Disease. The results for Spouse in Fig. 1c, however, show that randomly corrupting the explanations does reduce the effect. Fig. 1c conflicts with the other settings. We further investigate this exception below.
Character 3 Parameter search over the corrupted tokens makes corrupted explanations comparable with annotated explanations in frozen language models.
In the frozen language models, both the language model and the corrupted explanations are not fine-tunable. The randomly corrupted explanations may not be well initialized if no optimization is allowed. We expect that the corrupted explanation will still be effective after proper initialization.
To verify this, we slightly modified the training strategy to allow the word embeddings of the corrupted words to be fine-tuned, while still freezing the other parameters of the language model. That is, we search for the parameters of the corrupted explanation. Note that, during this process, no annotated explanation is used.
We show the results with tunable corrupted tokens in Fig. 1. We find that, while not introducing extra annotations, the parameter search for the corrupted explanation performs competitively (on Spouse) or surpasses (on Disease) the annotated explanations. The results suggest that random initialization does not necessarily perform well. On the other hand, manually annotated explanations serve as a good initialization for the extra context."
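As a concrete illustration of the corruption procedure of §2.2 and the parameter search of Character 3, here is a minimal sketch assuming PyTorch and a HuggingFace-style BERT; the function names, the whitespace tokenization, and the inputs_embeds splicing strategy are illustrative assumptions rather than the authors' released implementation.

```python
import random
import torch
from transformers import BertModel

def corrupt_explanation(explanation: str, rate: float, vocab: list, rng=None) -> str:
    """Replace a fraction `rate` of the explanation's words with random vocabulary words."""
    rng = rng or random.Random(0)
    words = explanation.split()
    n_corrupt = round(rate * len(words))
    for i in rng.sample(range(len(words)), n_corrupt):
        words[i] = rng.choice(vocab)
    return " ".join(words)

# e.g. corrupt_explanation("o1 and o2 went on a honeymoon", 1.0, vocab) destroys all
# task-relevant inductive bias, while rate=0.25 might only turn "honeymoon" into "frog".

# Parameter search over corrupted tokens (Character 3): freeze BERT and make only the
# vectors standing in for the corrupted tokens trainable.
bert = BertModel.from_pretrained("bert-base-uncased")
for p in bert.parameters():
    p.requires_grad = False                  # frozen language model

num_corrupted, hidden = 6, bert.config.hidden_size
corrupted_embeds = torch.nn.Parameter(0.02 * torch.randn(num_corrupted, hidden))
# These vectors would be spliced into the explanation positions via `inputs_embeds`
# and optimized together with the MLP classifier, while BERT itself stays frozen.
```

Under this setup, only the handful of corrupted-token vectors (and the classifier) are searched, which corresponds to the tunable-corrupted-token setting reported in Fig. 1.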
}, { "figure_ref": [], "heading": "How Do Explanations Work? An Investigation via Perturbed Context", "publication_ref": [], "table_ref": [], "text": "The effect of corrupted explanations and natural language explanations reminds us to investigate how explanations work through their commonality: they both provide extra contexts. We hypothesize that the real factor at play is the context space provided by (corrupted) explanations, rather than the inductive bias. Hence, in this section, we systematically analyze how explanations work regarding the extra context space. To address this, we present perturbed context." }, { "figure_ref": [], "heading": "Definition of Perturbed Context", "publication_ref": [], "table_ref": [], "text": "Our approach is inspired by the experiments of randomly corrupted explanations in § 3. According to the experiments, inductive biases of the explanations are not important, but rather we need a fine-tunable context to enrich the text representation. Therefore, we propose the perturbed context without any annotation to provide a fine-tunable context. We define a perturbed context e v in the following form:\nev := o1 [M]1 [M]2 • • • [M]m o2(2)\nwhere o 1 and o 2 are placeholders for the two entities. We denote the embedding of [M] i as emb([M] i ). We use [M] i as the normal token input in language models. We set m = 4 in this paper. emb([M] i ) is randomly initialized without semantics." }, { "figure_ref": [], "heading": "Variants of Perturbed Contexts", "publication_ref": [ "b9" ], "table_ref": [], "text": "As the extra context is the commonality between annotated explanations and corrupted explanations, we investigate how explanations work w.r.t. extra contexts. We introduce several variants of the perturbed contexts. We are particularly interested in (1) whether the perturbed context should be conditioned on the input x;\n(2) how the flexibility of the context space affects the results. The flexibility is controlled via factorization. We refer to the implementation of Synthesizer (Tay et al., 2021a), which is a recent study that revisits the inductive biases in attention.\nRandomly perturbed contexts We consider the simplest form of the perturbed context, which consists of independent perturbed tokens that are randomly initialized. We set the perturbed contexts to be global and task-specific, rather than samplespecific. The randomly perturbed context is not conditioned on any input tokens.\nLet M ∈ R m×d be a randomly initialized matrix. The embeddings of tokens in the randomly perturbed context are defined as:\nemb rand ([M]i) = Mi (3)\nThe randomly perturbed context has m × d parameters. These parameters can either be trainable or kept fixed (denoted as fixed random).\nConditional perturbed context We also consider constructing perturbed contexts that are conditioned on each sample x. Here we adopt F i (), a parameterized function, for projecting x to the embedding space of the perturbed tokens.\nemb cond ([M]i) = Fi(x pool )(4)\nwhere x pool ∈ R d is the pooled BERT output for x. In practice, we use the multi-layer perceptron as F (•).\nFactorized models We investigate the effect of the flexibility of the context. We refer to the factorization method in ALBERT (Lan et al., 2019). We map the original embedding of the perturbed token to a lower dimensional space, and then project it back to the original embedding space. The size of the intermediate space reflects flexibility. 
Following this idea, we further design the following variants.\nFactorized randomly perturbed context We factorize the embedding of the randomly perturbed context by:\nemb f_rand ([M]i) = W fr MFi (5)\nwhere MF ∈ R m×l is a lower dimensional embedding matrix, W fr ∈ R d×l . We first use M F i to represent the lower dimensional space of size l(l < d), then we project it back to the normal embedding space of size d.\nFactorized conditional perturbed context Similarly, we also factorize the embedding of the conditional perturbed context by:\nemb f_cond ([M]i) = W fc2 (W fc1 Fi(x pool ))(6)\nwhere\nW fc1 ∈ R l×d ,W fc2 ∈ R d×l .\nMixture explanations We consider combining the perturbed context and manually annotated explanations to see if the two kinds of contexts complement each other. To ensemble the explanations, we add the perturbed context to the annotated explanation list.\nExploiting the perturbed context We use the perturbed context as the context of the original sentence. Specifically, we use the pre-trained language model BERT to represent the target sample x = (s, o 1 , o 2 ). We construct e v as a context for s by replacing the placeholders with o 1 and o 2 , respectively. Then we use the [SEP] token as a separator to combine s and e v . In this way, we convert the representation of s into the representation of s + e v . We use the default setting in BERT to represent the sentence pair, i.e., using the final output of the [CLS] token as the representation of sentence pairs:\nI(s, ev) = BERT([CLS], s, [SEP], ev, [SEP]) (7)\nExpBERT uses n explanations to improve the model, while we found one perturbed context achieves near-optimal results. We will give more experimental evidence in § 4.3 and Appendix C." }, { "figure_ref": [], "heading": "Effect on Fine-Tunable Language Models", "publication_ref": [ "b5", "b32", "b12" ], "table_ref": [ "tab_2", "tab_2" ], "text": "We show the effect of different approaches in Table 2. We also compare with the following baselines that also introduce explanations in text understanding: BabbleLabble (Hancock et al., 2018), NeXT (Wang et al., 2020). Since ExpBERT is based on BERT, to make the comparison fair, we also use BERT as the language model by default. In addition, we also conducted experiments on RoBERTa (Liu et al., 2019).\nEffect of perturbed contexts The proposed perturbed contexts overall achieve competitive performance with annotated explanations. This further verifies the claim in § 3, that the natural language explanations mainly provide extra context, rather than the specific inductive bias.\nAmong different variants of perturbed contexts, the fine-tunable random variant achieves the highest accuracy and outperforms the annotated explanations (ExpBERT) by +1.15% on average. We think this is because the annotated explanations have limited expressiveness and are cognitively biased. Thus a tunable context works better.\nFine-tunable vs fixed Surprisingly, we found that the fixed random variant performs competitively with the fine-tunable one on RoBERTa, and even achieves the highest accuracy in some settings. We think this is because the tunable language models provide sufficient flexibility to complement the flexibility of fixed perturbed contexts.\nGlobal vs conditioned We found that the sample-specific variants (conditional and factorized conditional) have slight performance degradation compared to the global variant. We think this is because the explanation learned from the training data is prone to contain certain biases. 
This makes it hard to learn a sample-specific perturbed context generator with high generalization ability. In contrast, learning generalized global perturbed contexts is easier. In terms of flexibility, conditional perturbed tokens are actually a projection of the representation of the original sentence x. This limits its flexibility. This also corroborates our analysis in Appendix B that we need a richer context to enhance the representation.\nEffect of mixture explanations We evaluate the effectiveness of adding the randomly perturbed context to the annotated explanation list. The results in Table 2 show that mixture explanations have a slight performance degradation over the randomly perturbed context. This indicates that manually annotated explanations do not complement with the randomly perturbed context.\nEfficiency Since the traditional approaches require the encoding of n explanations, they have substantial extra training/inference time compared to the vanilla language models. For example, Exp-BERT encodes n explanations for each target sentence, which is extremely expensive when n is large. (e.g. TACRED has 128 explanations). Our proposed perturbed contexts, on the other hand, only need to encode a single sentence consisting of the target sentence and the perturbed context. It is almost as efficient as encoding the original sentence. Therefore, the efficiency of our approach is substantially improved compared to the previous work. We show the average training time of different approaches in Table 2. The training time of our approach is almost the same as that of language models without explanations. While having competitive effects, our approach is about 20 -30 times faster than ExpBERT which introduces explanations." }, { "figure_ref": [], "heading": "Effect on Frozen Language Models", "publication_ref": [ "b17", "b17" ], "table_ref": [ "tab_2" ], "text": "For a comprehensive comparison, we also compare the effects of different models in the frozen language models. Note that the parameters of the perturbed context are fine-tunable except for the fixed random version. In addition to ExpBERT, we also compare our results with two settings of BERT + BabbleLabel as in (Murty et al., 2020), which uses the outputs of the labeling functions for explanations as features (BabbleLabble-ProgExp), and the encoding of the explanations by ExpBERT as Table 3: Results for frozen language models. The perturbed context outperforms its competitors. †: result from (Murty et al., 2020).\nfeatures (BabbleLabble-LangExp). We omit the results of conditional and factorized conditional perturbed contexts, as they need the trainable parameters to learn F i () and are not suitable for frozen language models. The results are shown in Table 3. The perturbed context still outperforms models with annotated explanations. This demonstrates that extra context space, rather than inductive bias, works for frozen language models. Unlike the results in Table 2, where the fine-tunable and fixed random variants are competitive, the fine-tunable random variant performs clearly better for frozen language models. We think this is because the model requires the perturbed context to provide flexibility via fine-tuning." }, { "figure_ref": [ "fig_1" ], "heading": "Effect of the Factorization", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Results in Table 2 andTable 3 have shown that factorized models have slight performance degradation. 
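For reference, the factorized random variant of Eqn. (5), the model whose bottleneck size l is varied below, can be sketched as follows; the default l = 32 follows Appendix A, and the module name is an illustrative assumption (the factorized conditional variant of Eqn. (6) applies the same bottleneck to F_i(x_pool)).

```python
import torch
import torch.nn as nn

class FactorizedRandomPerturbedContext(nn.Module):
    """Low-rank random context: emb_f_rand([M]_i) = W_fr · MF_i (Eqn. 5)."""
    def __init__(self, m: int = 4, d: int = 768, l: int = 32):
        super().__init__()
        self.MF = nn.Parameter(0.02 * torch.randn(m, l))   # m x l, with l < d
        self.W_fr = nn.Linear(l, d, bias=False)            # project back to size d

    def forward(self, batch_size: int) -> torch.Tensor:
        tokens = self.W_fr(self.MF)                         # (m, d)
        return tokens.unsqueeze(0).expand(batch_size, -1, -1)
```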
To directly investigate the effect of flexibility/factorization, we control the size of the perturbed context of factorized models (i.e., l). We present the results for different values of l under both training styles in Fig. 4.
We found that the choice of training style has a significant effect on flexibility. If fine-tuning is allowed, varying the size does not have a significant effect. However, for frozen language models, increasing the size significantly improves the results. This indicates that frozen language models require highly flexible perturbed contexts to enhance the effect. On the other hand, the tunable language model does not rely on the flexibility of the perturbed context. We think the flexibility has already been complemented by the tunable parameters in tunable language models.
Figure 4: Effect of the size of the perturbed context. For frozen language models, the effect is positively correlated with size. However, this does not hold for the fine-tunable language models." }, { "figure_ref": [], "heading": "Summary", "publication_ref": [], "table_ref": [], "text": "Considering all the results in §4.3, §4.4, and §4.5, the perturbed context performs competitively with or better than annotated explanations if it has sufficient flexibility. The flexibility is obtained in one of two ways: (1) a tunable perturbed context itself; (2) a tunable language model." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b25", "b31", "b5", "b25", "b5", "b17", "b29", "b11", "b28", "b4" ], "table_ref": [], "text": "Introducing explanations in models Introducing explanations in text understanding has drawn much research interest. A typical class of methods is to construct explanations for specific domains, and then convert these explanations into features and combine them in the original model (Srivastava et al., 2017; Wang et al., 2017; Hancock et al., 2018). For example, Srivastava et al. (2017) use a semantic parser to transform their constructed explanations into features for downstream tasks. Hancock et al. (2018) use the semantic parser to produce noisy labels instead of features. Murty et al. (2020) argue that these semantic parsers can typically only parse low-level statements; they therefore use the language model as a \"soft\" parser to interpret language explanations, aiming to fully utilize the semantics of the explanations.
Revisiting the value of inductive biases Our paper presents the first proposal to revisit the inductive bias from explanations. Some previous studies worked on revisiting the inductive bias from network architectures (Touvron et al., 2021; Tay et al., 2021b; Liu et al., 2021), and some progress has been made. For example, Tolstikhin et al. (2021) found that competitive scores on image classification benchmarks could be attained with only multi-layer perceptrons, compared with CNNs and Vision Transformers (Dosovitskiy et al., 2020). Guo et al. (2021) found that self-attention can be replaced by two cascaded linear layers and two normalization layers based on two external, small, learnable, shared memories. Melas-Kyriazi (2021) found that applying feed-forward layers over the patch dimension obtains competitive results with the attention layer. These studies all demonstrate that it is nontrivial to integrate valid inductive biases into neural networks." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we revisit the role of explanations for relation extraction.
In previous studies, explanations were thought to provide effective inductive bias, thus guiding the model learn the downstream task more effectively. We argue that it is imprudent to simply interpret explanations' effects as inductive bias. We find that the effect of natural language explanations varies across different training styles and datasets. By randomly corrupting the explanations, we found that the effect of explanations did not change significantly as the inductive bias decreased. This suggests that the inductive bias is not the main reason for the improvement. We further propose that the key of explanation for the improvement lies in the fine-tunable context. Based on this idea, we propose perturbed contexts. Perturbed contexts do not require any annotated explanations, while still providing (fine-tunable) contexts like annotated explanations. Our experiments verified that the effectiveness of the perturbed context is comparable to that of annotated explanations, but (1) the perturbed context does not require any manual annotation, making them more adaptable; (2) the perturbed context is much more efficient than that of using annotated explanations." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "This paper lacks a formalized analysis of the relationship among perturbed contexts/pretraining/model generalization. Although we try to analogize pre-training and prompt in Appendix B to explain how the perturbed context works, it lacks a rigorous mathematical description.\nThe validation of the perturbed context is limited to relation extraction. Although we show its potential on other applications in Appendix D, the experiments are still primitive. A more systematic evaluation on different NLP tasks is still excepted. epochs for all three relation extraction tasks. By default, we set the size of the intermediate space of the factorized models to l = 32. 8 NVIDIA RTX 3090Ti GPUs are used to train the models.\nInitialization For randomly perturbed context, we empirically found that the initialization of M will affect the results. After some trials, we found that initializing these parameters using a normal distribution with the mean and variance as in the token embeddings of the vanilla BERT is a practical choice." }, { "figure_ref": [], "heading": "B Rationale and Relationship to Prompt Tuning", "publication_ref": [ "b33", "b22", "b24", "b10", "b0", "b1" ], "table_ref": [], "text": "Our proposed perturbed contexts can be considered as fine-tunable contexts that guide model training.\nPrompt-tuning is a similar approach using finetunable languages. We compare their differences here.\nPrompt tuning (Wei et al., 2021;Schick and Schütze, 2021;Shin et al., 2020;Li and Liang, 2021;Brown et al., 2020) utilize the pre-trained masked language modeling task and map the predictions of [mask] to the target label. For example, predicting good for the mask in I love this movie. Overall, this is a [mask] movie will classify it into a positive sentiment. However, for the relation extraction task of interest in this paper, it is difficult to establish the mapping between [mask] and relations by prompt due to the large label space (Chen et al., 2021). 
Our approach, on the other hand, can be applied to arbitrarily complex sentence classification tasks since the sentence representation is obtained directly from the [CLS] token.\nRationale The rationale for prompt tuning is that the pre-trained masked language model has a strong generalization ability for [mask] prediction. Therefore, the prompt performs well in the fewshot setting. Our perturbed context, on the other hand, exploits the generalization ability of the pretrained language model for contextual representation. That is, given the target sentence + context, the language model can efficiently use a richer context to augment the target sentence. Based on the generalization ability for the rich context, given a fine-tunable perturbed context, the language model can automatically learn the optimal perturbed context as the context. " }, { "figure_ref": [], "heading": "C Effect of Multiple Perturbed Contexts", "publication_ref": [ "b17", "b5" ], "table_ref": [ "tab_3" ], "text": "Although we mainly discuss the scenario of a single perturbed context above, previous work (Murty et al., 2020;Hancock et al., 2018) have used multiple explanations. Therefore, we also validate the effect of using multiple perturbed contexts. Specifically, we formulate n = 5 randomly perturbed contexts:\ne vi := o 1 [V] i 1 [V] i 2 • • • [V] i m o 2 for i = 1 • • • n(\n8) Then, we append each perturbed context e vi to the original sentence s and represent these n sentence pairs as in ExpBERT. We concatenate the representations of all sentence pairs to form the resulting feature vector, and use an MLP over it to conduct the classification.\nThe results of multiple randomly perturbed contexts are shown in Table 4. We found that the improvement using multiple randomly perturbed contexts is not significant. We consider that this is because a single perturbed context already provides enough fine-tunable context." }, { "figure_ref": [], "heading": "D Perturbed Contexts as Augmented Context? Application beyond Relation Extraction", "publication_ref": [ "b14", "b13", "b19", "b2", "b20" ], "table_ref": [], "text": "Notice that our proposed perturbed contexts are actually fine-tunable contexts added to the original sample, which does not correspond to the semantics of any actual explanation. Therefore, it is natural to think that these perturbed contexts can be used not only as an alternative to annotated explanations for relation extraction, but also for broader applications.\nIt may be obvious that adding external relevant context can improve the representation of the target text. This idea has been verified on several tasks such as reading comprehension (Long et al., 2017), entity linking (Logeswaran et al., 2019), and even image classification (Radford et al., 2021). One typical class of the external context is the knowl-edge description of entities. The model will jointly represent the target text and the descriptions of the entities within it to enhance the text representation.\nIn this section, we made a preliminary attempt at two fundamental tasks that involve entity knowledge: open entity typing (OpenEntity (Choi et al., 2018)) and word sense disambiguation (WSD (Raganato et al., 2017)). We study the effect of replacing knowledge-related contexts with perturbed contexts." }, { "figure_ref": [], "heading": "D.1 Tasks", "publication_ref": [ "b6" ], "table_ref": [], "text": "Entity typing Given a sentence s, the goal of entity typing is to classify an entity ent in s. 
For example, for the sentence Paris is the capital of France. and the target entity Paris, the model is required to classify Paris into Location. We propose to use perturbed context as text augmentation for entity typing. To address this, we construct the randomly perturbed context in the form of e\nv := [V] 1 • • • [V] m ent [V] m+1 • • • [V] 2m .\nThen, we refer to Eqn.( 7) to classify the augmented text by BERT.\nWSD Given a sentence s = w 1 , • • • , w n , and a polysemy word w t (1 ≤ t ≤ n) with candidate senses {c 1 , c 2 , ..., c m }, WSD aims to find the sense for w t . For instance, for the sentence Apple is a technology company. and the target polysemy word Apple, the model needs to recognize whether it refers to a fruit or a technology company. To augment the model effectiveness on WSD, we construct the perturbed context and model similar to the entity typing task. The only difference is, we follow (Huang et al., 2019) to use the final hidden state of the target word to conduct the classification." }, { "figure_ref": [], "heading": "D.2 Setup", "publication_ref": [ "b23", "b18", "b38", "b16", "b30", "b39", "b7" ], "table_ref": [], "text": "Baselines For entity typing, We consider two types of baselines: the first type directly fine-tunes the target task. These baselines include BERT and NFGEC (Shimaoka et al., 2016). The second type first train the model over the joint corpus of the text and the external knowledge, and then fine-tune it on the target task. These baselines include Know-BERT (Peters et al., 2019) and ERNIE (Zhang et al., 2019). KnowBERT enhances contextualized word representations with attention-based knowledge integration using WordNet (Miller, 1995) et al., 2019).\ning facts in Wikidata (Vrandečić and Krötzsch, 2014). For WSD, we consider vanilla BERT as our baseline.\nHyperparameters We choose the same hyperparameters as in the relation extraction tasks, except that we train our model for 10 epochs for OpenEntity and 6 epochs for WSD, respectively. For WSD, we refer to the previous setting of training/valid/test splits (Zhong and Ng, 2010;Iacobacci et al., 2016)." }, { "figure_ref": [], "heading": "D.3 Results", "publication_ref": [], "table_ref": [ "tab_4", "tab_5" ], "text": "The results of the randomly perturbed context on WSD and OpenEntity are shown in Table 5 andTable 6, respectively. Our proposed approach still outperforms the baselines without joint pretraining. Even compared with baselines that use joint pre-training for knowledge integration, the performance degradation is not significant. This indicates that, to some extent, the perturbed context enhances the representation of texts that require entity knowledge. This shows the potential of our approach in different scenarios. We leave it to future work to explore the effects of the perturbed context on more tasks." } ]
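As a closing illustration of how a single perturbed context is consumed at encoding time (the sentence-pair encoding of Eqn. (7), which also underlies the entity-typing and WSD variants of Appendix D), the following hedged sketch assumes standard HuggingFace tokenization; the use of literal [MASK] tokens as surface placeholders is an illustrative assumption, since the actual perturbed slots are trainable vectors fed in through inputs_embeds.

```python
import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def encode_with_perturbed_context(sentence: str, o1: str, o2: str, m: int = 4) -> torch.Tensor:
    """I(s, e_v) = BERT([CLS], s, [SEP], e_v, [SEP]) with e_v = o1 [M]_1 ... [M]_m o2 (Eqn. 7)."""
    # surface placeholder for the m perturbed slots; in the real model these positions
    # would receive trainable embeddings rather than literal tokens
    e_v = f"{o1} " + " ".join(["[MASK]"] * m) + f" {o2}"
    enc = tokenizer(sentence, e_v, return_tensors="pt", truncation=True)
    out = bert(**enc)
    return out.last_hidden_state[:, 0]   # final [CLS] representation of the pair

# e.g. encode_with_perturbed_context(
#     "Robert and Julie had a terrible honeymoon last month.", "Robert", "Julie")
```

Because the target sentence and its perturbed context are encoded in a single forward pass, rather than once per explanation as in ExpBERT, this is the source of the roughly 20-30x speedup reported in the paper.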
[ { "authors": "Benjamin Tom B Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Askell", "journal": "", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Xiang Chen; Xin Xie; Ningyu Zhang; Jiahuan Yan; Shumin Deng; Chuanqi Tan; Fei Huang; Luo Si; Huajun Chen", "journal": "", "ref_id": "b1", "title": "Adaprompt: Adaptive promptbased finetuning for relation extraction", "year": "2021" }, { "authors": "Eunsol Choi; Omer Levy; Yejin Choi; Luke Zettlemoyer", "journal": "", "ref_id": "b2", "title": "Ultra-Fine Entity Typing", "year": "2018" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b3", "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "year": "2020" }, { "authors": "Meng-Hao Guo; Zheng-Ning Liu; Tai-Jiang Mu; Shi-Min Hu", "journal": "", "ref_id": "b4", "title": "Beyond self-attention: External attention using two linear layers for visual tasks", "year": "2021" }, { "authors": "Braden Hancock; Martin Bringmann; Paroma Varma; Percy Liang; Stephanie Wang; Christopher Ré", "journal": "NIH Public Access", "ref_id": "b5", "title": "Training classifiers with natural language explanations", "year": "2018" }, { "authors": "Luyao Huang; Chi Sun; Xipeng Qiu; Xuan-Jing Huang", "journal": "", "ref_id": "b6", "title": "GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge", "year": "2019" }, { "authors": "Ignacio Iacobacci; Mohammad Taher Pilehvar; Roberto Navigli", "journal": "", "ref_id": "b7", "title": "Embeddings for word sense disambiguation: An evaluation study", "year": "2016" }, { "authors": "Robin Jia; Percy Liang", "journal": "", "ref_id": "b8", "title": "Adversarial Examples for Evaluating Reading Comprehension Systems", "year": "2017" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b9", "title": "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations", "year": "2019" }, { "authors": "Lisa Xiang; Percy Li; Liang", "journal": "", "ref_id": "b10", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021" }, { "authors": "Hanxiao Liu; Zihang Dai; David R So; Quoc V Le", "journal": "", "ref_id": "b11", "title": "Pay Attention to MLPs", "year": "2021" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b12", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Lajanugen Logeswaran; Ming-Wei Chang; Kenton Lee; Kristina Toutanova; Jacob Devlin; Honglak Lee", "journal": "", "ref_id": "b13", "title": "Zero-Shot Entity Linking by Reading Entity Descriptions", "year": "2019" }, { "authors": "Teng Long; Emmanuel Bengio; Ryan Lowe; Jackie Chi; Kit Cheung; Doina Precup", "journal": "", "ref_id": "b14", "title": "World knowledge for reading comprehension: Rare entity prediction with hierarchical lstms using external descriptions", "year": "2017" }, { "authors": "Luke Melas-Kyriazi", "journal": "", "ref_id": "b15", "title": "Do you even need attention? 
a stack of feed-forward layers does surprisingly well on imagenet", "year": "2021" }, { "authors": "George A Miller", "journal": "Commun. ACM", "ref_id": "b16", "title": "WordNet: a lexical database for English", "year": "1995" }, { "authors": "Shikhar Murty; Pang Wei Koh; Percy Liang", "journal": "", "ref_id": "b17", "title": "ExpBERT: Representation Engineering with Natural Language Explanations", "year": "2020" }, { "authors": "Mark Matthew E Peters; Robert Neumann; Roy Logan; Vidur Schwartz; Sameer Joshi; Noah A Singh; Smith", "journal": "", "ref_id": "b18", "title": "Knowledge Enhanced Contextual Word Representations", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b19", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Alessandro Raganato; Jose Camacho-Collados; Roberto Navigli", "journal": "", "ref_id": "b20", "title": "Word sense disambiguation: A unified evaluation framework and empirical comparison", "year": "2017" }, { "authors": "Pranav Rajpurkar; Robin Jia; Percy Liang", "journal": "", "ref_id": "b21", "title": "Know What You Don't Know: Unanswerable Questions for SQuAD", "year": "2018" }, { "authors": "Timo Schick; Hinrich Schütze", "journal": "", "ref_id": "b22", "title": "It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners", "year": "2021" }, { "authors": "Sonse Shimaoka; Pontus Stenetorp; Kentaro Inui; Sebastian Riedel", "journal": "", "ref_id": "b23", "title": "An Attentive Neural Architecture for Fine-grained Entity Type Classification", "year": "2016" }, { "authors": "Taylor Shin; Yasaman Razeghi; Robert L Logan; I V ; Eric Wallace; Sameer Singh", "journal": "", "ref_id": "b24", "title": "Eliciting Knowledge from Language Models Using Automatically Generated Prompts", "year": "2020" }, { "authors": "Shashank Srivastava; Igor Labutov; Tom Mitchell", "journal": "", "ref_id": "b25", "title": "Joint concept learning and semantic parsing from natural language explanations", "year": "2017" }, { "authors": "Yi Tay; Dara Bahri; Donald Metzler; Da-Cheng Juan; Zhe Zhao; Che Zheng; ; ", "journal": "PMLR", "ref_id": "b26", "title": "Synthesizer: Rethinking self-attention for transformer models", "year": "2021" }, { "authors": "Yi Tay; Mostafa Dehghani; Jai Gupta; Dara Bahri; Vamsi Aribandi; Zhen Qin; Donald Metzler", "journal": "", "ref_id": "b27", "title": "Are Pre-trained Convolutions Better than Pre-trained Transformers?", "year": "2021" }, { "authors": "Ilya Tolstikhin; Neil Houlsby; Alexander Kolesnikov; Lucas Beyer; Xiaohua Zhai; Thomas Unterthiner; Jessica Yung; Andreas Steiner; Daniel Keysers; Jakob Uszkoreit", "journal": "", "ref_id": "b28", "title": "Mlp-mixer: An all-mlp architecture for vision", "year": "2021" }, { "authors": "Hugo Touvron; Piotr Bojanowski; Mathilde Caron; Matthieu Cord; Alaaeldin El-Nouby; Edouard Grave; Gautier Izacard; Armand Joulin; Gabriel Synnaeve; Jakob Verbeek", "journal": "", "ref_id": "b29", "title": "Resmlp: Feedforward networks for image classification with data-efficient training", "year": "2021" }, { "authors": "Denny Vrandečić; Markus Krötzsch", "journal": "Commun. 
ACM", "ref_id": "b30", "title": "Wikidata: a free collaborative knowledgebase", "year": "2014" }, { "authors": "I Sida; Samuel Wang; Percy Ginn; Christopher D Liang; Manning", "journal": "", "ref_id": "b31", "title": "Naturalizing a Programming Language via Interactive Learning", "year": "2017" }, { "authors": "Ziqi Wang; Yujia Qin; Wenxuan Zhou; Jun Yan; Qinyuan Ye; Leonardo Neves; Zhiyuan Liu; Xiang Ren", "journal": "", "ref_id": "b32", "title": "Learning from Explanations with Neural Execution Tree", "year": "2020" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b33", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "Adina Williams; Nikita Nangia; Samuel Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference", "year": "2018" }, { "authors": "Thomas Wolf; Julien Chaumond; Lysandre Debut; Victor Sanh; Clement Delangue; Anthony Moi; Pierric Cistac; Morgan Funtowicz; Joe Davison; Sam Shleifer", "journal": "", "ref_id": "b35", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Saining Xie; Alexander Kirillov; Ross Girshick; Kaiming He", "journal": "", "ref_id": "b36", "title": "Exploring randomly wired neural networks for image recognition", "year": "2019" }, { "authors": "Yuhao Zhang; Victor Zhong; Danqi Chen; Gabor Angeli; Christopher D Manning", "journal": "", "ref_id": "b37", "title": "Position-aware attention and supervised data improve slot filling", "year": "2017" }, { "authors": "Zhengyan Zhang; Xu Han; Zhiyuan Liu; Xin Jiang; Maosong Sun; Qun Liu", "journal": "", "ref_id": "b38", "title": "ERNIE: Enhanced Language Representation with Informative Entities", "year": "2019" }, { "authors": "Zhi Zhong; Hwee Tou Ng", "journal": "", "ref_id": "b39", "title": "It makes sense: A wide-coverage word sense disambiguation system for free text", "year": "2010" }, { "authors": "", "journal": "", "ref_id": "b40", "title": "A Hyperparameters We use the bert-base-uncased and roberta-base from Huggingface transform", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 105.65, 75.44, 378.27, 111.81 ], "formula_id": "formula_0", "formula_text": "!\" !# !$ !% !! & '$ $& !$ (&& )*+,*-+../0123*1+425 Corrupted exp. Annotated exp. No exp. (a) Spouse. !! !\" !# !$ !% & '! !& #! (&& )*+,*-+../0123*1+425 (b) Disease. !! !\" # $% %# \"% &## '()*(+),,-./01(/)203 (c) TACRED." }, { "formula_coordinates": [ 4, 108.95, 237.37, 372.01, 111.76 ], "formula_id": "formula_1", "formula_text": "!\" \"# $\" %# \" &# '# (# &## )*+,*-./01012*3/4/ !\"#$%&' ())\"*+*$,#$%&' -\"../&*$,#$%&' (a) Spouse. !\" #\" $\" %\" &\" '\" % (\" !\" $\" (\"\" )*+,*-./01012*3/4/ (b) Disease. !\" #\" \"\" $\" %\" \" &' (' #' &'' )*+,*-./01012*3/4/ (c) TACRED." }, { "formula_coordinates": [ 5, 114.97, 394.56, 174.77, 8.37 ], "formula_id": "formula_2", "formula_text": "ev := o1 [M]1 [M]2 • • • [M]m o2(2)" }, { "formula_coordinates": [ 5, 376.07, 122.8, 148.94, 8.35 ], "formula_id": "formula_3", "formula_text": "emb rand ([M]i) = Mi (3)" }, { "formula_coordinates": [ 5, 363.9, 261.22, 161.11, 8.35 ], "formula_id": "formula_4", "formula_text": "emb cond ([M]i) = Fi(x pool )(4)" }, { "formula_coordinates": [ 5, 362.1, 494.49, 162.91, 8.35 ], "formula_id": "formula_5", "formula_text": "emb f_rand ([M]i) = W fr MFi (5)" }, { "formula_coordinates": [ 5, 337.75, 632.92, 187.26, 8.35 ], "formula_id": "formula_6", "formula_text": "emb f_cond ([M]i) = W fc2 (W fc1 Fi(x pool ))(6)" }, { "formula_coordinates": [ 5, 335.13, 653.23, 120.75, 12.73 ], "formula_id": "formula_7", "formula_text": "W fc1 ∈ R l×d ,W fc2 ∈ R d×l ." }, { "formula_coordinates": [ 6, 94.17, 229.43, 195.57, 8.06 ], "formula_id": "formula_8", "formula_text": "I(s, ev) = BERT([CLS], s, [SEP], ev, [SEP]) (7)" }, { "formula_coordinates": [ 11, 307.63, 313.49, 215.29, 25.85 ], "formula_id": "formula_9", "formula_text": "e vi := o 1 [V] i 1 [V] i 2 • • • [V] i m o 2 for i = 1 • • • n(" }, { "formula_coordinates": [ 12, 88.47, 344.19, 202.57, 11.01 ], "formula_id": "formula_10", "formula_text": "v := [V] 1 • • • [V] m ent [V] m+1 • • • [V] 2m ." } ]
Exploring Automatically Perturbed Natural Language Explanations in Relation Extraction
Previous research has demonstrated that natural language explanations provide valuable inductive biases that guide models, thereby improving the generalization ability and data efficiency. In this paper, we undertake a systematic examination of the effectiveness of these explanations. Remarkably, we find that corrupted explanations with diminished inductive biases can achieve competitive or superior performance compared to the original explanations. Our findings furnish novel insights into the characteristics of natural language explanations in the following ways: (1) the impact of explanations varies across different training styles and datasets, with previously believed improvements primarily observed in frozen language models. (2) While previous research has attributed the effect of explanations solely to their inductive biases, our study shows that the effect persists even when the explanations are completely corrupted. We propose that the main effect is due to the provision of additional context space. (3) Utilizing the proposed automatic perturbed context, we were able to attain comparable results to annotated explanations, but with a significant increase in computational efficiency, 20-30 times faster.
Wanyun Cui; Xingran Chen
[ { "figure_caption": "Corrupted explanation: o1 and o2 went on a frog. Perturbed context:o1 [M]1 [M]2 • • • [M]m o2(a) Guide model with explanations.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Effect of the size of the perturbed context.For frozen language models, the effect is positively correlated with size. However, this does not hold for the fine-tunable language models.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Dataset statistics.", "figure_data": "DatasetTrainValTest#Exp.Spouse220552784268041Disease6667773410129TACRED 68124 22631 15509128", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results for fine-tunable language models. Without introducing external annotated explanations, the perturbed context achieves competitive results with models with explanations. The efficiency of the perturbed context is significantly higher than its competitors. We denote the perturbed context as PC. R means fine-tunable random. C means conditional. Fixed R means fixed random. FR means factorized random. FC means factorized conditional. Results are averaged over 5 runs. †: We fine-tune all parameters within ExpBERT and ExpRoBERTa.", "figure_data": "ModelAnnotatedSpouseDiseaseTACREDAvg. F1F1TimeF1TimeF1TimeBabbleLabbleYes50.1 ± 0.00-42.3 ± 0.00----NeXTYes----45.6--BERTNo75.5 ± 0.591.0×57.8 ± 0.901.0×66.8 ± 0.311.0×66.7ExpBERT †Yes76.0 ± 0.47 28.5×56.9 ± 0.82 20.1×67.0 ± 0.14 32.1×66.6PC (BERT)+ RNo76.7 ± 1.501.1×57.3 ± 1.571.1×67.6 ± 0.551.1×67.2+ Fixed RNo74.9 ± 0.811.1×56.2 ± 0.751.1×66.9 ± 0.751.1×66.0+ CNo75.8 ± 1.131.1×57.3 ± 0.501.1×66.6 ± 0.411.1×66.6+ FRNo77.1 ± 1.481.1×57.0 ± 0.511.1×67.3 ± 0.251.1×67.1+ FCNo75.8 ± 0.161.1×56.6 ± 0.611.1×66.8 ± 0.441.1×66.4MixtureYes76.3 ± 1.06 28.5×56.5 ± 1.22 20.1×66.5 ± 0.69 32.1×66.4RoBERTaNo77.1 ± 1.211.0×58.8 ± 0.771.0×69.2 ± 0.481.0×68.4ExpRoBERTa †Yes75.9 ± 0.77 21.2×57.0 ± 1.22 21.3×68.6 ± 0.69 20.6×67.2PC (RoBERTa)+ RNo77.6 ± 0.521.1×59.0 ± 0.551.1×70.1 ± 0.371.1×68.9+ Fixed RNo78.2 ± 0.651.2×59.4 ± 0.861.1×69.7 ± 0.431.0×69.1+ CNo77.9 ± 0.561.3×58.5 ± 0.501.2×69.1 ± 0.241.2×68.5+ FRNo77.5 ± 0.401.2×57.0 ± 2.061.1×69.3 ± 0.531.1×68.0+ FCNo77.7 ± 1.011.3×56.5 ± 1.121.2×69.5 ± 0.151.2×67.9MixtureYes75.9 ± 0.51 21.5×57.1 ± 1.08 22.1×68.7 ± 0.57 22.1×67.2ModelSpouseDiseaseBabbleLabel-LangExp †53.6 ± 0.3849.1 ± 0.47BabbleLabel-ProgExp †58.3 ± 1.1049.7 ± 0.54BERT-NoExp †52.9 ± 0.9749.7 ± 1.01ExpBERT †63.5 ± 1.4052.4 ± 1.23PC (BERT)+ R64.7 ± 0.6253.5 ± 0.99+ Fixed Rnot converged 45.4 ± 1.56+ FR60.5 ± 2.5949.8 ± 0.96RoBERTa-NoExp62.2 ± 0.5853.9 ± 0.32ExpRoBERTa65.8 ± 0.9555.1 ± 0.31PC (RoBERTa)+ R66.2 ± 2.1855.7 ± 0.91+ Fixed R41.6 ± 9.3150.9 ± 0.99+ FR66.0 ± 2.6054.5 ± 0.73", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison results of single/multiple perturbed contexts. Results are averaged over 5 runs. Using multiple perturbed contexts does not show surpassing effects.", "figure_data": "Spouse Disease TACREDSingle76.757.367.6Multiple75.157.467.4", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "and Wikipedia. ERNIE integrates knowledge through aligning entities within sentences with correspond-PC (BERT) + R 66.2 74.6 72.5 68.4 74.0 72.0 Results on WSD datasets. 
†: results from(Huang et al., 2019).", "figure_data": "DevTest DatasetsSE07 SE2 SE3 SE13 SE15 AllBERT †61.1 69.7 69.4 65.8 69.5 68.6Our BERT64.8 73.1 71.7 68.2 73.6 71.2ModelJoint pre-trainPRF1BERT ‡No76.37 70.96 73.56Our BERTNo75.98 73.42 74.68NFGEC ‡No68.80 53.30 60.10ERNIE ‡Yes78.42 72.90 75.56KnowBERT ‡Yes78.60 73.70 76.10PC (BERT) + RNo77.42 72.95 75.12", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results on OpenEntity. ‡: results from (Zhang", "figure_data": "", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Srivastava et al., 2017)", "Explanation": "The cited work by Srivastava et al. provides a method of incorporating natural language explanations in relation extraction tasks, which the citing paper adopts in its own research."}, {"Category": "Methodological Basis", "Citation": "(Hancock et al., 2018)", "Explanation": "The cited work by Hancock et al. also contributes a method of incorporating natural language explanations in relation extraction tasks, which the citing paper builds upon in its own research."}, {"Category": "Methodological Basis", "Citation": "(Murty et al., 2020)", "Explanation": "The cited work by Murty et al. introduces the ExpBERT approach, which directly incorporates natural language explanations in relation extraction tasks. The citing paper builds upon this method in its own research."}, {"Category": "Methodological Basis", "Citation": "(Xie et al., 2019)", "Explanation": "The cited work by Xie et al. (2019) highlights the importance of introducing inductive biases into neural networks, which is a key focus of the citing paper in studying the working mechanism of ExpBERT and understanding how explanations guide and enhance the model effect."}, {"Category": "Methodological Basis", "Citation": "(Murty et al., 2020)", "Explanation": "The cited work by Murty et al. (2020) provides a method for improving the effect of natural language explanations in terms of accuracy and data efficiency, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Murty et al., 2020)", "Explanation": "The cited work provides the task setting and explanation format for the relation extraction task, which the citing paper adopts in its research on explanation guidance evaluation."}, {"Category": "Methodological Basis", "Citation": "(Murty et al., 2020)", "Explanation": "The cited work introduces the Exp-BERT method, which the citing paper adopts in their research to generate explanations for language models."}, {"Category": "Extension or Continuation", "Citation": "work similarly to ExpBERT", "Explanation": "The cited work is an extension of the Exp-BERT method, exploring a new approach to generating explanations in language models."}, {"Category": "No explanation", "Citation": "the vanilla language model without introducing explanations", "Explanation": "The cited work serves as a baseline for comparison, demonstrating the impact of introducing explanations in language models."}, {"Category": "Methodological Basis", "Citation": "(Williams et al., 2018)", "Explanation": "The cited work provides the MultiNLI corpus, which the citing paper uses as a pre-training dataset for the language model BERT in the frozen language model training style."}, {"Category": "Data Source", "Citation": "(Hancock et al., 2018)", "Explanation": "The cited work provides the datasets used in the study conducted in the citing paper, namely Spouse and Disease."}, {"Category": "Data Source", "Citation": "(Murty et al., 2020)", "Explanation": "The cited work is the source of the natural language explanations used in the study conducted in the citing paper, for the baselines on the Spouse and Disease datasets."}, {"Category": "Data Source", "Citation": "(Hancock et al., 2018)", "Explanation": "The cited work is the source of the natural language explanations used in the study conducted in the citing paper for the baselines on the Spouse and Disease datasets."}, {"Category": "Data Source", "Citation": "(Murty et al., 2020)", 
"Explanation": "The cited work is the source of the natural language explanations used in the study conducted in the citing paper for the baselines on the Spouse and Disease datasets."}, {"Category": "Data Source", "Citation": "(Murty et al., 2020)", "Explanation": "The cited work is the source of the natural language explanations used in the study conducted in the citing paper for the baselines on the Spouse and Disease datasets."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2017)", "Explanation": "The cited work is the source of the dataset used in the study conducted in the citing paper, namely TACRED."}, {"Category": "Data Source", "Citation": "(Murty et al., 2020)", "Explanation": "The cited work is the source of the natural language explanations used in the study conducted in the citing paper for the baselines on the TACRED dataset."}, {"Category": "Methodological Basis", "Citation": "(Murty et al., 2020)", "Explanation": "The cited work by Murty et al. (2020) provides the finding that introducing annotated natural language explanations improves the effect of frozen language models. The citing paper adopts this finding in their research to support the claim that introducing explanations can improve the accuracy of language models."}, {"Category": "Extension or Continuation", "Citation": "(Murty et al., 2020)", "Explanation": "The cited work by Murty et al. (2020) is used to extend the research on data efficiency in fine-tunable language models, as the citing paper analyzes the data efficiency in this context and finds similar results to the study by Murty et al."}, {"Category": "Methodological Basis", "Citation": "(Tay et al., 2021a)", "Explanation": "The cited work, Synthesizer, is used as a reference for implementing the perturbed context in the citing paper, providing a methodological basis for the study of how explanations work w.r.t. extra contexts."}, {"Category": "Methodological Basis", "Citation": "(Lan et al., 2019)", "Explanation": "The cited work introduces the concept of factorized models, which the citing paper adopts in the design of the Factorized randomly perturbed context variant to increase flexibility in the context."}, {"Category": "Supporting Evidence", "Citation": "(Hancock et al., 2018)", "Explanation": "The cited work by Hancock et al. serves as a baseline for the comparison of the proposed approach in the citing paper, providing a reference point for understanding the performance of the new method."}, {"Category": "Extension or Continuation", "Citation": "(Wang et al., 2020)", "Explanation": "The cited work by Wang et al. is an extension of the research in the citing paper, exploring the use of explanations in text understanding in a new context or dimension."}, {"Category": "Data Source", "Citation": "(Liu et al., 2019)", "Explanation": "The cited work by Liu et al. is a data source for the experiments conducted in the citing paper, providing the language model (RoBERTa) used in the study."}, {"Category": "Methodological Basis", "Citation": "(Murty et al., 2020)", "Explanation": "The cited work by Murty et al. (2020) provides a method of using the outputs of labeling functions for explanations as features in ExpBERT, which the citing paper adopts in their research to compare the effects of different models in frozen language models."}, {"Category": "Extension or Continuation", "Citation": "(Touvron et al., 2021)", "Explanation": "The cited work by Touvron et al. 
(2021) is mentioned as a study that worked on revisiting the inductive bias from network architectures. The citing paper extends this work by further exploring the topic of inductive bias in neural networks."}, {"Category": "Extension or Continuation", "Citation": "(Tay et al., 2021b)", "Explanation": "The cited work by Tay et al. (2021b) is also mentioned as a study that worked on revisiting the inductive bias from network architectures. The citing paper extends this work by providing additional insights and findings on the topic."}, {"Category": "Extension or Continuation", "Citation": "(Liu et al., 2021)", "Explanation": "The cited work by Liu et al. (2021) is another study that worked on revisiting the inductive bias from network architectures. The citing paper builds upon the work of Liu et al. by further discussing the progress made in this area."}, {"Category": "Extension or Continuation", "Citation": "(Touvron et al., 2021)", "Explanation": "The cited work by Touvron et al. (2021) is mentioned again as a study that worked on revisiting the inductive bias from network architectures. The citing paper extends this work by providing a more detailed analysis of the topic."}, {"Category": "Extension or Continuation", "Citation": "(Tolstikhin et al., 2021)", "Explanation": "The cited work by Tolstikhin et al. (2021) is found to have made a significant contribution in the area of image classification benchmarks with CNNs and Vision Transformers. The citing paper extends this work by discussing the impact of this study in the field of image classification."}, {"Category": "Extension or Continuation", "Citation": "(Guo et al., 2021)", "Explanation": "The cited work by Guo et al. (2021) is mentioned as a study that found a way to replace self-attention with two cascaded linear layers and two normalization layers based on external, small, learnable, shared memories. The citing paper extends this work by providing a deeper understanding of the method used in the study."}, {"Category": "Extension or Continuation", "Citation": "(Melas-Kyriazi, 2021)", "Explanation": "The cited work by Melas-Kyriazi (2021) is found to have made a significant contribution in the area of applying feed-forward layers over the patch dimension to obtain competitive results with the attention layer. The citing paper extends this work by discussing the impact of this study in the field of feed-forward layers."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2021)", "Explanation": "The cited work by Chen et al. provides a discussion on the challenges of using prompt tuning for relation extraction tasks, which the citing paper uses to inform its own approach of using perturbed contexts to guide model training."}, {"Category": "Methodological Basis", "Citation": "(Murty et al., 2020)", "Explanation": "The cited work by Murty et al. (2020) has previously used multiple explanations in their research, which the citing paper builds upon by formulating n = 5 randomly perturbed contexts in the same manner."}, {"Category": "Data Source", "Citation": "(Hancock et al., 2018)", "Explanation": "The cited work by Hancock et al. 
(2018) serves as a data source for the citing paper, as it provides a reference for the use of multiple explanations in the research conducted."}, {"Category": "Methodological Basis", "Citation": "(Long et al., 2017)", "Explanation": "The cited work on reading comprehension has been used as a basis for adding external relevant context to improve the representation of target text in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Logeswaran et al., 2019)", "Explanation": "The cited work on entity linking has been used to verify the idea of adding external context to improve text representation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Radford et al., 2021)", "Explanation": "The cited work on image classification has been used to demonstrate the effectiveness of adding external context in improving text representation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Choi et al., 2018)", "Explanation": "The cited work on open entity typing has been used to study the effect of replacing knowledge-related contexts with perturbed contexts in the context of entity knowledge in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Raganato et al., 2017)", "Explanation": "The cited work on word sense disambiguation has been used to study the effect of replacing knowledge-related contexts with perturbed contexts in the context of entity knowledge in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2019)", "Explanation": "The cited work by Huang et al. (2019) is used as a methodological basis for the classification task in the citing paper, as the final hidden state of the target word is employed in the classification process."}, {"Category": "Methodological Basis", "Citation": "(Shimaoka et al., 2016)", "Explanation": "The cited work provides the NFGEC model as a baseline for entity typing, which the citing paper adopts in their research to compare and contrast the performance of their own model."}, {"Category": "Extension or Continuation", "Citation": "(Peters et al., 2019)", "Explanation": "The cited work introduces the Know-BERT model, which the citing paper extends by enhancing contextualized word representations with attention-based knowledge integration using WordNet to improve performance in the target task of entity typing."}, {"Category": "Data Source", "Citation": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)", "Explanation": "The cited work is the source of the data used in the research, specifically the facts in Wikidata that are employed in the entity typing task."}, {"Category": "Methodological Basis", "Citation": "(Zhong and Ng, 2010)", "Explanation": "The cited work provides a previous setting of training/valid/test splits for WSD, which the citing paper refers to in their research to ensure consistency in the WSD task."}]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b27", "b11", "b8", "b14", "b28", "b29", "b5" ], "table_ref": [], "text": "The retrieval of similar cases and their analysis is a task at the core of legal work. Legal search tools are widely used by lawyers and counsels to write applications and by judges to inform their decision-making process. However, this task poses a series of challenges to legal professionals: (i) it is an expensive and time-consuming task that accounts for 30% of the legal work on average (Poje, 2014), (ii) databases can be very large, with legal search tools gathering billions of documents, and (iii) selection of cases can be imprecise and may return many irrelevant cases, which creates the need to read more cases than necessary.\nIn Canada, from the date of the first claim to the final decision outcome, a claimant can expect to wait 24 months for refugee claims and 12 months for refugee appeals1 . Long processing times are due to a significant backlog and to the amount of work required from counsels that help claimants file their claims, and who are frequently legal aid or NGO employees.\nWe find that these challenges are well-suited for NLP-based solutions and investigate the feasibility of automating all steps of the legal search for past similar cases. We construct an end-to-end pipeline that aims at facilitating this multi-step process, thereby supporting and speeding up the work of both lawyers and judges in Refugee Status Determination (RSD). We provide a level of granularity and precision that goes beyond that of existing legal search tools such as Westlaw, LexisNexis, or Refworld2 (Custis et al., 2019), which operate at the document level. Refworld is an online database maintained by the United Nations which helps retrieve relevant precedent cases and legislation. However, the level of precision with which one can search for cases is limited. Moreover, our pipeline guarantees increased transparency, enabling end users to choose the criteria of legal search they find most relevant to their task among the proposed categories that act as filters for a search.\nSpecific literature studying refugee law and AI is sparse. Attention has been given to the classification and prediction of asylum cases in the United States (Chen and Eagel, 2017;Dunn et al., 2017). On Canadian data, research has been conducted to analyze the disparities in refugee decisions using statistical analysis (Rehaag, 2007(Rehaag, , 2019;;Cameron et al., 2021). However, those studies rely mostly on tabular data. We propose to work directly on the text of refugee cases. To the best of our knowledge, no previous work implements an end-to-end pipeline and state-of-the-art NLP methods in the field of refugee law.\nWe provide an NLP-based end-to-end prototype for automating refugee case analysis built on historical (already decided) cases, which are currently available only in unstructured or semi-structured formats, and which represent the input data to our pipeline. The end goal of our approach is to add structure to the database of cases by extracting targeted information described in table 1 from the case documents, and providing the results in a structured format to significantly enrich the search options for cases. 
Thereby, the input data set of cases is described in a structured manner based on our extracted categories of items, adding extensive capabilities for legal search.\nThe pipeline described in figure 1 begins by searching and downloading cases (information retrieval, paragraph 4.1), pre-processing them (paragraph 4.2), and extracting items previously identified as relevant by legal professionals. It then outputs a structured, precise database of refugee cases (information extraction, paragraph 4.3). In the information extraction step, we test different training and pre-training architectures in order to determine the best methods to apply to the refugee case documents. We construct each step with the aim of minimizing the need for human effort in creating labeled training data, aiming to achieve the best possible accuracy on each extracted information item. We discuss technical choices and methodologies in section 5. Finally, we evaluate the information extraction step on precision, recall, and F1 score, and present detailed results in section 6.\nWe demonstrate that annotation can be sped up by the use of a terminology base while incorporating domain knowledge and semi-automated annotation tools. We find that domain matching is important for training to achieve the highest possible scores. We reach satisfactory token classification results on a majority of our chosen categories. The contributions of this paper are as follows:\n1. First, we retrieve 59,112 historic decision documents (dated from 1996 to 2022) from online services of the Canadian Legal Information Institute (CanLII) based on context-based indexing and metadata to curate a collection of federal Refugee Status Determination (RSD) cases. Our automated retrieval process is exhaustive and comprises all available cases. It is superior to human-based manual retrieval in terms of error proneness and processing time.\n2. Second, we propose an information extraction pipeline that involves pre-processing, construction of a terminology base, labeling data, and using word vectors and NER models to augment the data with structured information. We fine-tune state-of-the-art neural network models to the corpus of our retrieved cases by training on newly created gold-standard text annotations specific to our defined categories of interest.\n3. Lastly, we extract the targeted category items from the retrieved cases and create a structured database from our results. We introduce structure to the world of unstructured legal RSD cases and thereby increase the transparency of stated legal grounds, judge reasoning, and decision outcomes across all processed cases." }, { "figure_ref": [], "heading": "Background and motivation", "publication_ref": [], "table_ref": [], "text": "At the core of the ongoing refugee crisis is the legal and administrative procedure of Refugee Status Determination (RSD), which can be summarized in three sub-procedures: (i) a formal claim for refugee protection by a claimant who is commonly supported by a lawyer, (ii) the decision-making process.\nRefugee protection decisions are high-stakes procedures that target 4.6 million asylum seekers worldwide as of mid-2022. In Canada alone, 48,014 new claims and 10,055 appeals were filed in 2021 (see https://irb.gc.ca/en/statistics/Pages/index.aspx). As stated in the introduction, processing times of refugee claims vary and range from a few months to several years. One of the reasons for the long processing times is the effort required to search for similar cases.
Case research is an essential part of the counsel's work in preparation for a new claim file. This search involves retrieving citations and references to previous, ideally successful RSD cases that exhibit similarities to the case in preparation, such as the country of origin or the reason for the claim. Equally, judges rely on researching previous cases to justify their reasoning and ensure coherency across rulings.\nWhile each case exhibits individual characteristics and details, legal practitioners typically search for similarities based on the constitution of the panel, the country of origin and the characteristics of the claimant, the year the claim was made in relation to a particular geopolitical situation, the legal procedures involved, the grounds for the decision, the legislation, as well as other cases or reports that are cited.\nOur work aims to support legal practitioners, both lawyers preparing the application file and judges having to reach a decision, by automating the time-consuming search for similar legal cases, referred to here as refugee case analysis. As a case study, we work on first instance and appeal decisions made by the Immigration and Refugee Board of Canada. A common approach used by legal practitioners is to manually search and filter past RSD cases on online services such as CanLII or Refworld by elementary document text search, which is a keyword-based exact-match search, or by date.\nOur defined categories of interest are described in table 1. The labels have been defined and decided upon with the help of three experienced refugee lawyers. From the interviews, we curated a list of keywords, grounds, and legal elements determining a decision. Moreover, we analyzed a sample of 50 Canadian refugee cases recommended by the interviewees to be representative across claim years and tribunals.\nWe use the pre-defined labels provided by spaCy's state-of-the-art EntityRecognizer class, including DATE, PERSON, GPE, ORG, NORP, and LAW, and extend this list with new additional labels that we created and trained from scratch.\nEach case document comprises a case cover page (the first page) and the main text, which differ in the type and format of their information content. Therefore, we chose separate labels for the case cover. The case cover contains general information about the case (cf. example in Appendix A). While the main text is presented as a full-body text, the case cover page consists of semi-structured information that could roughly be described as tabular, except that it does not follow a clear layout. Based on the case cover page, we aim to extract meta-information about each claim using four labels (table 1).\nFor the main text, we chose 15 labels that represent characteristics reflective of similarity among different cases. To link cases to each other and later facilitate similar case retrieval, we also extract three categories of citations, i.e., LAW for legal texts, LAW_CASES for other mentioned past cases, and LAW_REPORT for external sources of information such as country reports.
Additionally, the CREDIBILITY label retrieves mentions made of credibility concerns in the claimant's allegations, which tends to be among the deciding factors for the success of a claim and is hence essential to understand the reasoning that led to the legal determination at hand.\nA successful implementation of a system capable of extracting this information reliably would provide several benefits to legal practitioners: (i) facilitating, speeding up, and focusing legal search, (ii) reducing the time spent on a claim and on providing relevant references, potentially resulting in a file that has more chances of being accepted, and (iii) for judges, to ensure consistent outcomes across time and different jurisdictions or claimant populations." }, { "figure_ref": [ "fig_0" ], "heading": "Research approach", "publication_ref": [], "table_ref": [], "text": "Our approach is guided by investigating the hypothesis that NER methods can be used to extract structured information from legal cases, i.e. we want to determine whether state-of-the-art methods can be used to improve the transparency and processing of refugee cases. Consistency of the decision-making process and thorough assessment of legal procedure steps are crucial aspects ensuring that legal decision outcomes are transparent, high-quality, and well-informed. Consequently, key research questions we need to address include: Training data requirements How many labeled samples are needed? Can keyword-matching methods or terminology databases be leveraged to reduce the need for human annotation? Extraction What methods are best suited to identify and extract the target information from legal cases? Replicability To what extent might our methods generalize to other legal data sets (other legal fields or other jurisdictions)? Pre-training How important is the pre-training step? How important is domain matching: do domain-specific pre-training perform better than general-purpose embeddings, despite their smaller sizes? Architectures How important is the architecture applied to the information extraction tasks, in terms of F1 score, precision, and recall?\n4 Pipeline details and experimental setup\nIn this section, we detail each step of the pipeline as presented in figure 1 and how it compares to the current legal search process. Subsequently, in 5 we describe the training data creation process, and the network architectures tested. The code for our implementation and experiments can be found at https://github.com/clairebarale/ refugee_cases_ner." }, { "figure_ref": [], "heading": "Information retrieval: case search", "publication_ref": [], "table_ref": [], "text": "We retrieve 59,112 cases processed by the Immigration and Refugee Board of Canada that range from 1996 to 2022. The case documents have been collected from CanLII in two formats, PDF and HTML. The CanLII web interface serves queries through their web API accessible at the endpoint with URL https://www.canlii.org/ en/search/ajaxSearch.do.\nFor meaningful queries, the web API exposes a number of HTTP-GET request parameters and corresponding values which are to be appended to the URL but preceded by a single question mark and then concatenated by a single ampersand each. 
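To make the query construction concrete, the following is a minimal sketch of how such a request could be issued with the Python requests library. It is illustrative only: the parameter names and values mirror the full example query discussed in the next paragraph, the name of the JSON field holding the result list is an assumption, and error handling is reduced to a status check.

```python
import requests

# CanLII search endpoint described above; parameters follow the example query
# given in the text and are not an officially documented API contract.
ENDPOINT = "https://www.canlii.org/en/search/ajaxSearch.do"

params = {
    "type": "decision",
    "ccId": "cisr",               # Immigration and Refugee Board collection
    "text": "EXACT(REFUGEE)",     # exact keyword match
    "startDate": "2004-03-01",
    "endDate": "2004-03-31",
    "sort": "decisionDateDesc",
    "page": 1,
}

all_results = []
while True:
    response = requests.get(ENDPOINT, params=params, timeout=30)
    response.raise_for_status()
    payload = response.json()              # the endpoint returns a JSON object
    items = payload.get("results", [])     # field name assumed for illustration
    if not items:
        break                              # stop when a page comes back empty
    all_results.extend(items)
    params["page"] += 1                    # CanLII paginates the result list
```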
For instance, in the parameter=value pairs in the following example, the keyword search exactly matches the text REFUGEE, and we retrieve the second page of a paginated list of decisions from March 2004 sorted by descending date, which returns a JSON object (Full query: https://www.canlii.org/en/search/ajaxSearch.do?type=decision&ccId=cisr&text=EXACT(REFUGEE)&startDate=2004-03-01&endDate=2004-03-31&sort=decisionDateDesc&page=2). Note that CanLII applies pagination to the search results in order to limit the size of returned objects per request." }, { "figure_ref": [], "heading": "Preprocessing", "publication_ref": [], "table_ref": [], "text": "We obtain two sets: (1) a set of case covers that consists of semi-structured data and displays meta-information and (2) a set of main text that contains the body of each case, in full text.\nGenerally, the CanLII database renders the decision case documents as HTML pages for display in modern web browsers but also provides PDF files. We use PyPDF2 for parsing the contents of PDF files as text. To parse the contents of HTML files as text input to our NLP pipeline, we use the BeautifulSoup Python library.\nThe choice between PDF and HTML format is based on multiple reasons, as each format has its own advantages and disadvantages. First, depending on the text format, PyPDF2 occasionally adds excessive white space between letters of the same word. Also, the PDF document is parsed line-by-line from left to right, top to bottom. Therefore, multi-column text is often mistakenly concatenated as a single line of text. However, the available PDF documents are separated by pages, and PyPDF2 provides functionality to select individual document pages, which we used to select the case cover page that provides case details for each document. HTML, as a markup language, provides exact anchors with HTML tags, which, in most cases, are denoted by opening and closing tag parts such as <p> and </p> for enclosing a paragraph.\nWhen processing the main text of each case document, we parse the HTML files using BeautifulSoup, remove the case cover to keep only the full-body text, and tokenize the text by sentence using NLTK. Our preference to tokenize by sentence facilitates the annotation process while keeping the majority of the context. We also experimented with splitting by paragraph, which yielded relatively large chunks of text, whereas splitting by phrase did not keep enough context during the annotation process. To gather results, we create a pandas DataFrame with one sentence per row and save it to a CSV file.\nFor the case cover, we exploit PyPDF2's functionality to extract the text of the first page from the PDF format. In contrast to this, when using BeautifulSoup we could not rely on HTML tags (neither through generic tag selection nor by CSS identifier (ID) or CSS class) to retrieve the first page of the document robustly. After extracting this page for each case, we parse the PDF files as plain text. Combined with the metadata from the document retrieval provided by CanLII, we derive the case identifier number and assign it to the corresponding PDF file. As a next step, and similar to the procedure for the main body of each document, we create a pandas DataFrame from the extracted data and save it as a CSV file with case identifier numbers and their associated case cover.\nFor both file formats, we perform basic text cleaning, converting letters to lowercase and removing excessive white space and random newlines."
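As a rough illustration of the preprocessing described above, the sketch below parses one HTML decision, cleans it, splits it into sentences, and writes one sentence per row to a CSV file. It is a simplified stand-in for the actual pipeline: the input folder name is hypothetical, case-cover removal is omitted, and the cleaning rules are reduced to lowercasing and whitespace normalization.

```python
import re
from pathlib import Path

import pandas as pd
from bs4 import BeautifulSoup
from nltk.tokenize import sent_tokenize   # requires nltk.download("punkt")


def html_to_sentences(html_path: Path) -> list:
    """Parse one decision rendered as HTML and split its body into sentences."""
    soup = BeautifulSoup(html_path.read_text(encoding="utf-8"), "html.parser")
    text = soup.get_text(separator=" ")
    # Basic cleaning: lowercase, collapse excessive white space and stray newlines.
    text = re.sub(r"\s+", " ", text).lower().strip()
    return sent_tokenize(text)


rows = []
for path in Path("cases_html").glob("*.html"):    # hypothetical input folder
    case_id = path.stem                           # case identifier taken from the file name
    for sentence in html_to_sentences(path):
        rows.append({"case_id": case_id, "sentence": sentence})

# One sentence per row, saved as CSV input for annotation and NER training.
pd.DataFrame(rows).to_csv("main_text_sentences.csv", index=False)
```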
}, { "figure_ref": [], "heading": "Information extraction", "publication_ref": [], "table_ref": [], "text": "The goal of our pipeline is not only to retrieve the cases but to structure them with a high level of precision and specificity, and to output a tabular file where each column stores specific information of each of our target types for each case. Using such structured information, legal practitioners can find similar cases by selecting attributes in one or several of the extracted categories, instead of carefully reading through many cases before finding a few that resemble their own.\nWe chose to use neural network approaches to perform the information extraction step. After some experimentation, approaches such as simple matching and regular expression search proved too narrow and unsuitable for our data. Given the diversity of formulations and layouts, phrasing that captures context is quite important. Similarly, we discard unsupervised approaches based on the similarity of the text at the document or paragraph level because we favor transparency to the end user in order to enable leveraging legal practitioners' knowledge and expertise.\nExtraction of target information can be done using sequence-labeling classification. NER methods are well-suited to the task of extracting keywords and short phrases from a text. To this end, we create a training set of annotated samples as explained in section 5.1. Labeled sentences are collected in jsonlines format, which we convert to the binary spaCy-required format and use as training and validation data for our NER pipeline." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Training data creation", "publication_ref": [ "b24" ], "table_ref": [], "text": "We choose to use a machine learning component for text similarity to reinforce the consistency of the annotations. In line with our previous step of pre-processing, we annotate the case cover section and the main text separately. While we decided to annotate the whole page of the case cover because the semi-structured nature of the text makes tokenization approximate, we perform annotation of the main text as a sentence-based task, preserving some context. We use the Prodigy annotation tool, which provides semi-automatic annotations and active learning in order to speed up and improve the manual labeling work in terms of consistency and accuracy of annotation. We convert the two pandas DataFrames containing the preprocessed text to jsonlines, which is the preferred format for Prodigy. We annotate 346 case covers and 2,436 sentences for the main text, which are chosen from the corpus at random.\nTo collect annotated samples on traditional NER labels (DATE, ORG, GPE, PERSON, NORP, LAW), we use suggestions from general-purpose pre-trained embeddings. For the remaining labels (CLAIMANT_INFO, CLAIMANT_EVENT, PROCEDURE, DOC_EVIDENCE, EXPLANATION, DETERMINATION, CREDIBILITY), and still with the aim of improving consistency of annotation, we create a terminology base (as shown in the pipeline description, figure 1). At annotation time, patterns are matched against the displayed sentences, and the human annotator only corrects them, creating a gold-standard set of sentences and considerably speeding up the labeling task.\nTo create a terminology base for each target category, we first extract keywords describing cases from CanLII metadata retrieved during the information retrieval step.
To this initial list of tokens, we add a list of tokens that were manually flagged in cases by legal professionals. We delete duplicates and some irrelevant or too general words such as \"claimant\" or \"refugee\", and manually assign the selected keywords to the appropriate label to obtain a list of tokens and short phrases per label. In order to extend our terminology base, we use the sense2vec model (based on word2vec (Mikolov et al., 2013)) to generate similar words and phrases. We select every word that is at least 70% similar to the original keyword in terms of cosine similarity and obtain a JSON file that contains 1,001 collected patterns. This method allows us to create more labeled data than fully manual annotation in the same amount of time.\nTable 1 describes the breakdown of labels in our annotated data. There is a clear imbalance across categories of items, with some labels being infrequent (NORP, DETERMINATION, PERSON, LAW_REPORT, LAW_CASE). Some labels are present very few times per case: DETERMINATION occurs only once per case, and PERSON does not occur frequently since most cases are anonymized." }, { "figure_ref": [], "heading": "Experimental conditions and architectures", "publication_ref": [], "table_ref": [], "text": "Train, dev, test split: We trained the NER models using 80% of the labeled data as our training set (276 case covers and 1,951 sentences for the main text, respectively), 10% of the labeled data as our development set (35 case covers and 244 sentences), and 10% of the labeled data as the test set for evaluation (35 case covers and 244 sentences)." }, { "figure_ref": [], "heading": "Pre-training static and contextual embeddings", "publication_ref": [ "b26", "b13", "b12", "b21", "b7" ], "table_ref": [], "text": "As the first layer of the NER network, we add pre-trained character-level embeddings in order to isolate the effect of pre-training from the effect of the architecture and improve the F1 score on target items. We fine-tune GloVe vectors (Pennington et al., 2014; 6B tokens, 400K vocabulary, uncased, 50 dimensions) on our data using the Mittens Python package (Dingwall and Potts, 2018) and create 970 static vectors. On top of the generated static vectors, we add dynamic contextualized vectors using pre-training embeddings based on BERT (Devlin et al., 2019), updating weights on our corpus of cases. Because the text of the case cover is presented in a semi-structured format, we consider it unnecessary to perform pre-training given the lack of context around the target items.\nArchitectures: We experiment with five different architectures on the case cover and seven different architectures on the main text: five based on convolutional neural networks (CNN) using different word embeddings and two transformer architectures. We train a CNN without added vectors as a baseline. Only the transformer architectures require training on a GPU. We use the spaCy pipelines (tokenizer, CNN, and transformer) and the HuggingFace datasets. All CNNs use the Adam optimizer. Since the sentence-labeling task is well-suited to the masked language modeling objective, we chose to experiment with roBERTa (Liu et al., 2019) and LegalBERT (Chalkidis et al., 2020) in order to compare performance between a general content and a legal content model.
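For concreteness, a minimal sketch of how a blank spaCy NER pipeline with our custom labels could be trained on annotated sentences is shown below. The actual experiments use spaCy's config-driven training with the embedding variants and transformer models described above; the toy training item, label subset, and character offsets here are purely illustrative.

```python
import random

import spacy
from spacy.training import Example

# Abbreviated label set; the full list of categories is given in table 1.
LABELS = ["CLAIMANT_EVENT", "CLAIMANT_INFO", "CREDIBILITY", "PROCEDURE", "DOC_EVIDENCE"]

# Toy training item: (text, {"entities": [(start_char, end_char, label)]}).
TRAIN_DATA = [
    ("The claimant filed a Personal Information Form in 2004.",
     {"entities": [(21, 46, "DOC_EVIDENCE")]}),
]

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
for label in LABELS:
    ner.add_label(label)

optimizer = nlp.initialize()
for epoch in range(20):
    random.shuffle(TRAIN_DATA)
    losses = {}
    for text, annotations in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), annotations)
        nlp.update([example], sgd=optimizer, drop=0.2, losses=losses)

# The trained pipeline can then be applied to new sentences:
doc = nlp("The claimant submitted a Personal Information Form.")
print([(ent.text, ent.label_) for ent in doc.ents])
```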
We train separately on the case cover, the traditional NER labels (GPE, NORP, ORG, DATE, PERSON, LAW), and the labels we created from scratch, since it was observed that labels trained from scratch benefit from a lower learning rate (0.0005 versus 0.001 for the traditional labels)." }, { "figure_ref": [ "fig_2", "fig_1" ], "heading": "Results and evaluation", "publication_ref": [], "table_ref": [], "text": "Our experimental results are presented in table 2 in absolute terms and relative to the baseline in figure 3 below. Our chosen baseline is a CNN with no additional vectors. We present results per label because of the disparities in the scores. The upper rows contain results on the case cover and the lower rows results on the main text. The evaluation metrics applied serve a dual purpose: for future research, achieving a high F1 score and precision-recall balance is key, while for our potential legal end users we assume that the recall measure is much more important, as it measures how many of the true entities are actually retrieved.\nFor the case cover, we obtain satisfactory results on all labels, with F1 scores above 90% for three of them and 84.78% for name extraction. Apart from names, CNN architectures perform better, with dates achieving the highest score with randomly initialized embeddings. We attribute this to the specific layout of this page (Appendix A). The only gain of using a transformer-based model is to achieve a higher recall compared to the CNN-based architectures.\nFor the main text, results vary across labels: we obtain a score above 80% for DATE, GPE, PERSON, and ORG, with the best score on roBERTa, but legal-bert-base-uncased scores lower than 60% on EXPLANATION, LAW, and LAW_CASE. Overall, when using transformers, we observe a better precision-recall balance.\nResults on three labels (DETERMINATION, LAW_REPORT, NORP) are unreliable because of the limited sample both for training and testing. DETERMINATION appears only once per case, and LAW_REPORT appears in a few cases only. Further annotation would require selecting the paragraphs of cases where these items appear to augment the size of the sample. We leave this task to future work.\nExplanations for other low scores are partly to be found in the tokenization errors reported during the human-labeling task. Figure 2 shows an example of wrong tokenization for two categories, LAW and LAW_CASE, for which we believe bad tokenization is the primary explanation for low scores (similarly reported by Sanchez). In the first sentence of the figure, words are not correctly split between \"under\" and \"section\" and between the section number and \"of\". In the lower part of the figure, sentence tokenization does not correctly split the case reference, as it is confused by the dot present in the middle. In this example, the case name is displayed as three different sentences, making the labeling task impossible.\nThe most appropriate pre-training varies across labels: for categories on which CNNs perform best, such as CREDIBILITY, DOC_EVIDENCE, and LAW, we find that fine-tuning static vectors performs better than randomly initialized embeddings or dynamic vectors, which suggests that context was not essential when retrieving spans of text (pre-training relies on tri-grams). This could derive from the methods of annotation that were terminology-based for those labels.
While the target items may contain particular vocabulary such as \"personal information form\" for DOC_EVIDENCE, context is of minimal importance since those phrases would not appear in another context or under another label. On the contrary, context seems much more important for retrieving procedural steps (PROCEDURE), which is the only category where the pre-training layer with contextual embeddings significantly increases the F1 score.\nIn the majority of categories, we find that the content of the pre-training is important (CLAIMANT_EVENT, CREDIBILITY, DATE, DOC_EVIDENCE, EXPLANATION, LAW, PROCEDURE). Results show that domain-specific training data has a larger effect than differences in network architecture. In other categories, roBERTa performs better than LegalBERT and CNNs, suggesting that the size of the pre-trained model is more important than domain matching. While LegalBERT has a size of 12GB, roBERTa is over 160GB and outperforms LegalBERT on traditional NER labels (GPE, ORG, PERSON, and also CLAIMANT_INFO, LAW_CASE).\nLooking at recall measures only, the superiority of transformer architectures over CNNs is more significant, with only three categories (DOC_EVIDENCE, CLAIMANT_INFO, LAW) achieving their best recall score with a CNN architecture and legal pre-training. Comparing results on recall, we reach the same conclusion as with F1, i.e., that domain matching allows us to achieve higher scores on target categories. Indeed, for seven out of 12 categories analyzed for the main text, the best scores are achieved by two architectures that differ in their pre-training domain. Higher F1 and recall scores, obtained through comparison and observation, enable us to attribute the improved performance primarily to the domain of the training data." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b3", "b4", "b1", "b10", "b23", "b2", "b15", "b16", "b9", "b18", "b19", "b22", "b20", "b33", "b32", "b31", "b17" ], "table_ref": [], "text": "Because of the importance of language and written text, applications of NLP in law hold great promise in supporting legal work, which has been extensively reviewed by Zhong et al. (2020). However, because of the specificity of legal language and the diversity of legal domains, as demonstrated in our work by the results of the LegalBERT-based transformer, general approaches aiming at structuring legal text such as LexNLP (Bommarito II et al., 2021) or general legal information extraction (Brüninghaus and Ashley, 2001) are unfit for specific domains such as international refugee law and are not able to achieve a high degree of granularity.\nEarlier methods of statistical information extraction in law include the use of linear models such as maximum entropy models (Bender et al., 2003;Clark, 2003) and hidden Markov models (Mayfield et al., 2003). However, state-of-the-art results are produced by methods able to capture some context, with an active research community investigating the use of conditional random fields (Benikova et al., 2015;Faruqui et al., 2010;Finkel et al., 2005) and BiLSTMs (Chiu and Nichols, 2016;Huang et al., 2015;Lample et al., 2016;Ma and Hovy, 2016;Leitner et al., 2019) for legal applications.\nScope and performance increased with the introduction of new deep learning architectures using recurrent neural networks (RNNs), CNNs, and attention mechanisms, as demonstrated by Chalkidis et al., even though we find that transformers do not always perform best on our data.
We therefore focus on NER approaches for legal documents, such as Vardhan et al. (2021), which report a total F1 score of 59.31% across labels, citations, as well as events, below our reported scores. Similar case matching is a well-known application of NLP methods, especially in common law systems (Trappey et al., 2020) and in domains such as international law. The Competition on Legal Information Extraction/Entailment includes a task of case retrieval, which shows that there is much interest in this area from both researchers and developers of commercial applications. While research has been conducted to match cases at the paragraph level (Tang and Clematide, 2021; Hu et al., 2022), our approach is more transparent and shifts the decisions regarding which filters to choose to legal practitioners, which we believe is appropriate to enable productive human-machine collaboration in this high-stakes application domain." }, { "figure_ref": [], "heading": "Conclusion and future work", "publication_ref": [ "b0" ], "table_ref": [], "text": "Our pipeline identifies and extracts diverse text spans, which may vary in quality across different categories. We acknowledge that certain entities we identify are more valuable than others for legal search purposes. Additionally, due to the complexity of the text, some noise is to be expected. However, this noise does not hinder the search for relevant items. Users have the flexibility to search and retrieve cases using any combination of our 19 categories of flagged entities. Additionally, further work is required to evaluate the prototype with legal practitioners beyond traditional machine learning metrics (Barale, 2022). However, we believe the work presented here is an important first step and has the potential to be used for future NLP applications in refugee law. Our approach provides significant contributions with newly collected data, newly created labels for NER, and a structure given to each case based on lawyers' requirements, with nine categories of information being retrieved with an F1 score higher than 80%. Compared to existing case retrieval tools, our pipeline enables end-users to decide what to search for based on defined categories and to answer the question: what are the criteria of similarity to my new input case?" }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "In this section, we enumerate a few limitations of our work:\n• We believe that the need to train transformer architectures on a GPU is an obstacle to the use of this pipeline, which is intended to be used not in an academic environment but by legal practitioners.\n• Because of the specificity of each jurisdiction, generalizing to other countries may not be possible on all labels with the exact same models (for example, in extracting the names of tribunals).\n• The manual annotation process is a weakness: while it results in gold-standard annotations, it is very time-consuming. We do acknowledge that the amount of training data presented in this work is low and that collecting more annotations in the future would improve the quality of the results. We think it would be interesting to look at self-supervised methods, weak supervision, and annotation generation. The need for labeled data also prevents easy replication of the pipeline on new data sets, which would also require manual annotation.\n• More precisely on the extracted categories, some categories lack precision and would require additional processing steps to achieve satisfactory results.
For example, the category PERSON sometimes refers to the claimant or their family, but sometimes refers to the name of the judge." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "A Example of a case cover " } ]
2023-05-24
10.1093/jrs/feab054
[ { "authors": "Claire Barale", "journal": "", "ref_id": "b0", "title": "Human-Centered Computing in Legal NLP -An Application to Refugee Status mination", "year": "2022" }, { "authors": "Oliver Bender; Franz ; Josef Och; Hermann Ney", "journal": "", "ref_id": "b1", "title": "Maximum entropy models for named entity recognition", "year": "2003" }, { "authors": "Darina Benikova; Seid Muhie; Yimam Prabhakaran; Santhanam Chris Biemann; ; C ", "journal": "", "ref_id": "b2", "title": "Germaner: Free open german named entity recognition tool", "year": "2015" }, { "authors": "J Michael; I I Bommarito; Daniel Martin Katz; Eric M Detterman", "journal": "", "ref_id": "b3", "title": "Lexnlp: Natural language processing and information extraction for legal and regulatory texts", "year": "2021" }, { "authors": "Stefanie Brüninghaus; Kevin D Ashley", "journal": "", "ref_id": "b4", "title": "Improving the representation of legal case texts with information extraction methods", "year": "2001" }, { "authors": "Hilary Evans Cameron; Avi Goldfarb; Leah Morris", "journal": "Journal of Refugee Studies", "ref_id": "b5", "title": "Artificial intelligence for a reduction of false denials in refugee claims", "year": "2021" }, { "authors": "Ilias Chalkidis; Emmanouil Fergadiotis; Prodromos Malakasiotis; Nikolaos Aletras; Ion Androutsopoulos", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Extreme multi-label legal text classification: A case study in EU legislation", "year": "2019" }, { "authors": "Ilias Chalkidis; Manos Fergadiotis; Prodromos Malakasiotis; Nikolaos Aletras; Ion Androutsopoulos", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "LEGAL-BERT: The muppets straight out of law school", "year": "2020" }, { "authors": "L Daniel; Jess Chen; Eagel", "journal": "ACM", "ref_id": "b8", "title": "Can machine learning help predict the outcome of asylum adjudications", "year": "2017" }, { "authors": "Jason Pc Chiu; Eric Nichols", "journal": "Transactions of the association for computational linguistics", "ref_id": "b9", "title": "Named entity recognition with bidirectional lstm-cnns", "year": "2016" }, { "authors": "Alexander Clark", "journal": "", "ref_id": "b10", "title": "Combining distributional and morphological information for part of speech induction", "year": "2003" }, { "authors": "Tonya Custis; Frank Schilder; Thomas Vacek; Gayle Mcelvain; Hector Martinez; Alonso ", "journal": "", "ref_id": "b11", "title": "Westlaw edge ai features demo: Keycite overruling risk, litigation analytics, and westsearch plus", "year": "2019" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Nicholas Dingwall; Christopher Potts", "journal": "", "ref_id": "b13", "title": "Mittens: an extension of GloVe for learning domainspecialized representations", "year": "2018" }, { "authors": "Matt Dunn; Levent Sagun; Hale Şirin; Daniel Chen", "journal": "ACM", "ref_id": "b14", "title": "Early predictability of asylum court decisions", "year": "2017" }, { "authors": "Manaal Faruqui; Sebastian Padó; Maschinelle Sprachverarbeitung", "journal": "KONVENS", "ref_id": "b15", "title": "Training and evaluating a german named entity recognizer with semantic generalization", "year": "2010" }, { "authors": "Jenny Rose Finkel; Trond Grenager; Christopher D 
Manning", "journal": "", "ref_id": "b16", "title": "Incorporating non-local information into information extraction systems by gibbs sampling", "year": "2005" }, { "authors": "Weifeng Hu; Siwen Zhao; Qiang Zhao; Hao Sun; Xifeng Hu; Rundong Guo; Yujun Li; Yan Cui; Long Ma", "journal": "Wireless Communications and Mobile Computing", "ref_id": "b17", "title": "Bert_lf: A similar case retrieval method based on legal facts", "year": "2022" }, { "authors": "Zhiheng Huang; Wei Xu; Kai Yu", "journal": "", "ref_id": "b18", "title": "Bidirectional lstm-crf models for sequence tagging", "year": "2015" }, { "authors": "Guillaume Lample; Miguel Ballesteros; Sandeep Subramanian; Kazuya Kawakami; Chris Dyer", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Neural architectures for named entity recognition", "year": "2016" }, { "authors": "Elena Leitner; Georg Rehm; Julian Moreno-Schneider", "journal": "Cham. Springer International Publishing", "ref_id": "b20", "title": "Fine-grained named entity recognition in legal documents", "year": "2019" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b21", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Xuezhe Ma; Eduard Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF", "year": "2016" }, { "authors": "James Mayfield; Paul Mcnamee; Christine Piatko", "journal": "", "ref_id": "b23", "title": "Named entity recognition using hundreds of thousands of features", "year": "2003" }, { "authors": "Tomas Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b24", "title": "Efficient estimation of word representations in vector space", "year": "2013" }, { "authors": "Maria Vasile Pais; Carol Mitrofan; Vlad Luca Gasan; Alexandru Coneschi; Ianov", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Named entity recognition in the Romanian legal domain", "year": "2021" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher D Manning", "journal": "", "ref_id": "b26", "title": "Glove: Global vectors for word representation", "year": "2014" }, { "authors": "Joshua Poje", "journal": "American Bar Association Techreport", "ref_id": "b27", "title": "Legal research", "year": "2014" }, { "authors": "Sean Rehaag", "journal": "Ottawa L. 
Rev", "ref_id": "b28", "title": "Troubling patterns in canadian refugee adjudication", "year": "2007" }, { "authors": "Sean Rehaag", "journal": "Queen's LJ", "ref_id": "b29", "title": "Judicial review of refugee determinations (ii): Revisiting the luck of the draw", "year": "2019" }, { "authors": "George Sanchez", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Sentence boundary detection in legal text", "year": "2019" }, { "authors": "Li Tang; Simon Clematide", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Searching for legal documents at paragraph level: Automating label generation and use of an extended attention mask for boosting neural models of semantic similarity", "year": "2021" }, { "authors": "Charles V Trappey; Amy J C Trappey; Bo-Hung Liu", "journal": "World Patent Information", "ref_id": "b32", "title": "Identify trademark legal case precedents -Using machine learning to enable semantic analysis of judgments", "year": "2020" }, { "authors": "Harsh Vardhan; Nitish Surana; Tripathy", "journal": "Springer", "ref_id": "b33", "title": "Named-entity recognition for legal documents", "year": "2021" }, { "authors": "Haoxi Zhong; Chaojun Xiao; Cunchao Tu; Tianyang Zhang; Zhiyuan Liu; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "How does NLP benefit legal system: A summary of legal artificial intelligence", "year": "2020" } ]
[]
Automated Refugee Case Analysis: An NLP Pipeline for Supporting Legal Practitioners
In this paper, we introduce an end-to-end pipeline for retrieving, processing, and extracting targeted information from legal cases. We investigate an under-studied legal domain with a case study on refugee law in Canada. Searching case law for past similar cases is a key part of legal work for both lawyers and judges, the potential end-users of our prototype. While traditional named-entity recognition labels such as dates provide meaningful information in legal work, we propose to extend existing models and retrieve a total of 19 useful categories of items from refugee cases. After creating a novel data set of cases, we perform information extraction based on state-ofthe-art neural named-entity recognition (NER). We test different architectures including two transformer models, using contextual and noncontextual embeddings, and compare general purpose versus domain-specific pre-training. The results demonstrate that models pre-trained on legal data perform best despite their smaller size, suggesting that domain matching had a larger effect than network architecture. We achieve a F1 score above 90% on five of the targeted categories and over 80% on four further categories.
Claire Barale; Michael Rovatsos; Nehal Bhuta
[ { "figure_caption": "Figure 1 :1Figure 1: End-to-end automated pipeline", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Example of an error in tokenization", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Comparison to the baseline on the F1 score on the main text: per targeted category (x-axis) on seven network architectures: baseline CNN model (baseline), CNN model with random static vectors on en_core_web_lg (CNN+rsv), CNN with fine-tuned static vectors (CNN+fts), CNN with random static vectors and pre-training (CNN+rsv+pt), CNN with fine-tuned static vectors and pre-training (CNN+fts+pt), RoBERTa-based transformer, LegalBERT-based transformer", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison to the baseline on the F1 score on the case cover: per targeted category (x-axis) on four network architectures: baseline CNN model (baseline), CNN model with random static vectors on en_core_web_lg (CNN+rsv), RoBERTa-based transformer, LegalBERT-based transformer", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Precision (P), Recall (R) and F1 score (in %) on the cover page and the main text for seven network architectures: baseline CNN model (baseline), CNN model with random static vectors on en_core_web_lg (CNN+rsv), CNN with fine-tuned static vectors (CNN+fts), CNN with random static vectors and pretraining (CNN+rsv+pt), CNN with fine-tuned static vectors and pretraining (CNN+fts+pt), RoBERTa-based transformer, LegalBERT-based transformer training data has a larger effect than network architecture difference. More precisely, it seems that, on some categories (CREDIBILITY, DOC_EVIDENCE, LAW, PROCEDURE), pre-training on our own data is more effective than training on a general legal data set as in LegalBERT. 
This can be explained by the content LegalBERT is pre-trained on, which does not contain any Canadian but only US, European, and UK texts and does not include any refugee cases.", "figure_data": "LabelArchitecturebaselineCNN+rsvCNN+ftsCNN+rsv+ptCNN+fts+ptRoBERTaLegalBERTPRF1PRF1PRF1PRF1PRF1PRF1PRF1Header and cover pageDATE97.46 95.04 96.23 95.90 96.69 96.30---------89.92 89.17 89.54 89.08 88.33 88.70GPE92.96 91.67 92.31 90.14 88.89 89.51---------90.54 93.06 91.78 88.00 91.67 89.80ORG94.74 90.00 92.31 94.74 90.00 92.31---------79.17 95.00 86.36 86.36 95.00 90.48PERSON80.80 84.17 82.45 81.75 85.83 83.74---------75.48 96.69 84.78 69.82 97.52 81.38Main text document bodyCLAIMANT_EVENT60.36 44.67 51.34 57.02 46.00 50.92 57.89 36.67 44.90 55.04 47.33 50.90 63.71 52.67 57.66 64.34 61.33 62.80 65.10 64.67 64.88CLAIMANT_INFO55.00 61.11 57.89 47.83 61.11 53.66 61.11 61.11 61.11 55.56 55.56 55.56 63.16 66.67 64.86 63.16 66.67 64.86 57.89 61.11 59.46CREDIBILITY68.57 50.00 57.83 62.50 52.08 56.82 69.23 56.25 62.07 74.19 47.92 58.23 68.42 54.17 60.47 56.60 62.50 59.41 62.50 52.08 56.82DATE72.34 69.39 70.83 94.44 69.39 80.00 72.34 69.39 70.83 81.40 71.43 76.09 83.33 71.43 76.92 85.11 81.63 83.33 86.96 81.63 84.21DETERMINATION100.00 36.36 53.33 85.71 54.55 66.67 100.00 36.36 53.33 83.33 45.45 58.82 85.71 54.55 66.67 83.33 45.45 58.82 42.86 27.27 33.33DOC_EVIDENCE77.61 74.29 75.91 77.27 72.86 75.00 80.60 77.14 78.83 75.00 72.86 73.91 68.42 74.29 71.23 67.53 74.29 70.75 71.62 75.71 73.61EXPLANATION46.00 43.40 44.66 60.98 47.17 53.19 56.82 47.17 51.55 58.14 47.17 52.08 53.49 43.40 47.92 54.17 49.06 51.49 60.47 49.06 54.17GPE88.76 89.77 89.27 90.59 87.50 89.02 91.57 86.36 88.89 90.48 86.36 88.37 89.29 85.23 87.21 95.35 93.18 94.25 89.66 88.64 89.14LAW55.00 53.66 54.32 57.89 52.38 55.00 64.71 52.38 57.89 59.46 53.66 56.41 57.89 53.66 55.70 55.00 52.38 53.66 47.62 47.62 47.62LAW_CASE71.43 33.33 45.45 66.67 26.67 38.10 71.43 33.33 45.45 46.15 40.00 42.86 50.00 46.67 48.28 56.25 60.00 58.06 37.50 40.00 38.71LAW_REPORT100.00 66.67 80.00 66.67 66.67 66.67 100.00 66.67 80.00 66.67 66.67 66.67 100.00 66.67 80.00 50.00 66.67 57.14 66.67 66.67 66.67NORP78.57 64.71 70.97 93.33 82.35 87.50 100.00 70.59 82.76 100.00 70.59 82.76 92.86 76.47 83.87 100.00 82.35 90.32 93.75 88.24 90.91ORG64.71 67.35 66.00 78.38 59.18 67.44 73.81 63.27 68.13 73.33 67.35 70.21 78.57 67.35 72.53 80.39 83.67 82.00 82.93 69.39 75.56PERSON62.50 41.67 50.00 77.78 58.33 66.67 75.00 50.00 60.00 77.78 58.33 66.67 88.89 66.67 76.19 100.00 75.00 85.71 90.00 75.00 81.82PROCEDURE71.67 69.35 70.49 73.77 72.58 73.17 71.93 66.13 68.91 73.77 72.58 73.17 76.67 74.19 75.41 71.01 79.03 74.81 74.58 70.97 72.73", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "(Chen and Eagel, 2017)", "Explanation": "The cited work by Chen and Eagel provides a study on the classification and prediction of asylum cases in the United States, which serves as a supporting evidence for the citing paper in understanding the context of refugee law and AI in the US context."}, {"Category": "Supporting Evidence", "Citation": "(Dunn et al., 2017)", "Explanation": "The cited work by Dunn et al. also contributes to the study of classification and prediction of asylum cases in the US, further supporting the research on refugee law and AI in the US context."}, {"Category": "Supporting Evidence", "Citation": "(Rehaag, , 2019;;Cameron et al., 2021)", "Explanation": "The cited works provide statistical analysis of refugee decisions, which supports the claim that research has been conducted on Canadian data to analyze disparities in the field of refugee law."}, {"Category": "Methodological Basis", "Citation": "(Mikolov et al., 2013)", "Explanation": "The cited work, word2vec, is used in the sense2vec model to generate similar words and phrases for the terminology base, which is a methodological basis for the annotation process in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Pennington et al., 2014)", "Explanation": "The cited work by Pennington et al. (2014) provides the GloVe vectors that the citing paper fine-tunes to create pre-trained character-level embeddings for the first layer of the NER network."}, {"Category": "Methodological Basis", "Citation": "(Dingwall and Potts, 2018)", "Explanation": "The cited work by Dingwall and Potts (2018) is referenced for the use of the Mittens10 python package in fine-tuning the GloVe vectors to create static vectors for the first layer of the NER network."}, {"Category": "Methodological Basis", "Citation": "(Devlin et al., 2019)", "Explanation": "The cited work by Devlin et al. 
(2019) is referenced for the use of pre-training embeddings based on BERT in creating dynamic contextualized vectors for the first layer of the NER network."}, {"Category": "Data Source", "Citation": "(spaCy pipelines11)", "Explanation": "The cited work provides the spaCy pipelines that the citing paper uses for tokenization, CNN, and transformer tasks."}, {"Category": "Data Source", "Citation": "(HuggingFace datasets12)", "Explanation": "The cited work provides the HuggingFace datasets that the citing paper uses in its research or analysis."}, {"Category": "Methodological Basis", "Citation": "(Liu et al., 2019)", "Explanation": "The cited work provides the roBERTa model that the citing paper uses in its experiments to compare performance between a general content and a legal content model."}, {"Category": "Methodological Basis", "Citation": "(Chalkidis et al., 2020)", "Explanation": "The cited work provides the LegalBERT model that the citing paper uses in its experiments to compare performance between a general content and a legal content model."}, {"Category": "Methodological Basis", "Citation": "(GPE, NORP, ORG, DATE, PERSON, LAW)", "Explanation": "The cited work provides the traditional NER labels (GPE, NORP, ORG, DATE, PERSON, LAW) that the citing paper uses in its training process to compare performance between a general content and a legal content model."}, {"Category": "Methodological Basis", "Citation": "(lower learning rate)", "Explanation": "The cited work provides the observation that labels trained from scratch benefit from a lower learning rate (0.0005 versus 0.001 for the traditional labels), which the citing paper uses in its training process to improve performance."}, {"Category": "Methodological Basis", "Citation": "(Bommarito II et al., 2021)", "Explanation": "The cited work, LexNLP, is a general approach for structuring legal text that the citing paper adopts in their research on specific legal domains such as international refugee law."}, {"Category": "Data Source", "Citation": "(Br\u00fcninghaus and Ashley, 2001)", "Explanation": "The cited work on general legal information extraction is used as a data source for the study on specific legal domains in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Bender et al., 2003;Clark, 2003)", "Explanation": "The earlier methods of statistical information extraction in law, such as maximum entropy models and hidden Markov models, are extended in the citing paper to explore new dimensions in specific legal domains."}, {"Category": "Methodological Basis", "Citation": "(Mayfield et al., 2003)", "Explanation": "The use of hidden Markov models in the cited work is adopted in the citing paper to study specific legal domains in greater detail."}, {"Category": "Methodological Basis", "Citation": "(Benikova et al., 2015)", "Explanation": "The cited work by Benikova et al. provides a method for capturing context using conditional random fields, which the citing paper adopts in their research on legal applications."}, {"Category": "Methodological Basis", "Citation": "(Faruqui et al., 2010)", "Explanation": "The cited work by Faruqui et al. also contributes a method for capturing context using conditional random fields, which the citing paper may have used in their research on legal applications."}, {"Category": "Methodological Basis", "Citation": "(Finkel et al., 2005)", "Explanation": "The cited work by Finkel et al. 
provides a method for capturing context using conditional random fields, which the citing paper may have used in their research on legal applications."}, {"Category": "Methodological Basis", "Citation": "(Chiu and Nichols, 2016)", "Explanation": "The cited work by Chiu and Nichols presents a method using BiLSTMs for legal applications, which the citing paper may have adopted in their research."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2015)", "Explanation": "The cited work by Huang et al. also contributes a method using BiLSTMs for legal applications, which the citing paper may have used in their research."}, {"Category": "Methodological Basis", "Citation": "(Lample et al., 2016)", "Explanation": "The cited work by Lample et al. provides a method using BiLSTMs for legal applications, which the citing paper may have adopted in their research."}, {"Category": "Methodological Basis", "Citation": "(Ma and Hovy, 2016)", "Explanation": "The cited work by Ma and Hovy presents a method using BiLSTMs for legal applications, which the citing paper may have used in their research."}, {"Category": "Methodological Basis", "Citation": "(Leitner et al., 2019)", "Explanation": "The cited work by Leitner et al. contributes a method using BiLSTMs for legal applications, which the citing paper may have used in their research."}, {"Category": "Methodological Basis", "Citation": "(Chalkidis et al., 2021)", "Explanation": "The cited work by Chalkidis et al. presents a method using deep learning techniques, which the citing paper may have adopted in their research on legal applications."}, {"Category": "Methodological Basis", "Citation": "(Trappey et al., 2020)", "Explanation": "The cited work by Trappey et al. highlights the use of NLP methods in case matching applications, which the citing paper may have referenced in their research on legal applications."}, {"Category": "Methodological Basis", "Citation": "(Vardhan et al., 2021)", "Explanation": "The cited work by Vardhan et al. provides a method for case matching using NLP techniques, which the citing paper may have used in their research on legal applications."}, {"Category": "Methodological Basis", "Citation": "(Tang 2021)", "Explanation": "The cited work by Tang 2021 has conducted research in case retrieval at paragraph level, which the citing paper adopts as a methodological basis for their own approach in case retrieval."}, {"Category": "Methodological Basis", "Citation": "(Hu et al.,", "Explanation": "The cited work by Hu et al. has also conducted research in case retrieval, which the citing paper may have adopted or adapted methods or techniques from in their own approach to case retrieval."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "As the pace of machine learning development accelerates, more and more users, including private enterprises, are incorporating artificial intelligence systems into their organizations. With current deep learning methods, the expertise of these systems rely heavily on the scale of both the model and the dataset. However, end-users of these models may have limited knowledge of the amount or quality of data used to train and test them, particularly in cases where data is third-party proprietary, privacy sensitive, or difficult to analyze due to scale. If the published accuracies of two models cannot be validated because the test data is not publicly released, is there another metric we can use to compare them? We propose it would be useful to create an assessment method for deep learning models that is based solely on their parameters and intended task as an alternative to a data-based evaluation when it is infeasible." }, { "figure_ref": [], "heading": "Problem Statement", "publication_ref": [ "b17", "b24", "b15", "b1", "b6", "b16", "b19", "b14", "b9", "b2", "b3", "b12", "b21", "b18", "b22", "b11" ], "table_ref": [], "text": "We are given a trained neural network for the classification of images. Our objective is to assess the network's accuracy performance without having access to a either the training or test set originally used to supervise the network.\nBackground and Related Work Often Machine Learning (ML) models are trained, validated, and tested with example data. A significant part of the data is used to estimate the parameters of a model, next validation data is used for hyperparameter and model selection, and finally the test data is used to evaluate performance of the selected model [18,25]. Using this recipe, Deep Neural Network (DNN) models have shown excellent performance on test data, but further inspection has revealed trouble transferring this accuracy metric to unseen out-of-distribution [16] and adversarial examples [2], implying that hold-out set test accuracy is one measure of model quality.\nThe community and the NeurIPS conference have shown interest in establishing new measures that predict generalization gap and offer insight into how DNNs generalize including a NeurIPS 2020 competition [7]. The competition asked for new computable measures on given model and training set combinations that correlated well with their generalization gaps, which led to solutions like [17] and later-on [20]. Inherently, these solutions create measures from relationships between training data examples and the encoded features of the networks. In our approach, we derive measures based only on the network parameters, as a key element of our contributions is that data is not required.\nClass Prototypes in Other Fields An implicit belief in deep learning is that successful architectures are able to identify common semantic features from each class that distinguish them from each other. These important class features can then be captured into data structures called class prototypes.\nThe class prototype idea has been studied in multiple areas such as robustness [15], explainable AI [10,3], distance-based classification [4,13,22,19], few-shot learning [23], and continual learning [12].\nHowever, these lines of research usually compute prototypes directly from dataset examples for a specific machine learning task. 
Our methodology assumes we have no access to data, so we derive prototypes from the model parameters and a loss function at the output of the model. The main contributions of our work are:\n• We have proposed and evaluated two metrics for dataless evaluation of DNNs for classifications; to be more clear, our proposed method requires only the DNN's architecture and parameters. One of the metrics can be used to measure classification accuracy and the other can be used to measure robustness against adversarial attacks.\n• We have proposed a method that uses a given DNN to create prototype examples for each class-category (by iterative back-propagation of classification error), which are then used to observe activations of neurons at the feature layer for computing values of the metrics.\n• We have validated the quality of the proposed metrics by measuring classification performance and robustness of DNNs models trained with CIFAR10, CIFAR100, and Tiny ImageNet datasets; the models had ResNet18 architecture.\n2 Dataless Methods to Evaluate Performances of DNNs" }, { "figure_ref": [ "fig_1" ], "heading": "Notations", "publication_ref": [], "table_ref": [], "text": "We are given a differentiable classifier f (• ; θ) that maps an input vector x ∈ R N to a vector ŷ ∈ R K , where each element f k (x; θ) = ŷk of ŷ represents the probability x belongs to class k such that K k=1 ŷk = 1. Correspondingly, for any given x, which possess a true label y, the classifier could generate K possible class assignments and is correct only when y = arg max k (ŷ). To associate an input, variable, or label to a true class k, we may superscript it with k. For our purposes, we parse this standard definition of a classifier into the composition of a feature extraction function g : R N → R D and a feature classification function h : R D → R K . Therefore, ŷ = f (x; θ) = h(g(x; θ g ); θ h ), where θ g and θ h are the disjoint parameter sets on which g and h are dependent, respectively. Unless otherwise stated, the norm || • || will refer to the L 2 -norm.\nPrototype vector or image We envision the given neural network as split into three representations of the input as shown in Fig. 1 : the input space, the feature space, and the output space. The fundamental data structure we employ to distinguish one class from another, not knowing its training examples, is the class prototype. We refer to a specific input vector p as a prototype input. Typically, there exists a separate prototype image for each class such as to create a prototype image set p 1 , ..., p K . A class prototype feature vector is the mapping of a prototype image to the feature layer of the network V p = g(p; θ g ). These features represent the important latent representations of a particular class' training set examples that the DNN trained on.\nWe wish to understand the performance of a trained neural network that is given to us without any data. Although we do not have access to the original data, we find that in a classification setting the model parameters and outputs are sufficient to reveal important inter-class relationships whose quantities depend on a) the amount of training data used to train the model and b) the generalization performance of the model. 
By exposing these inter-class relationships from the available information, we can then quantify them and make general statements about how successful the learning algorithm was.\nTo derive separate prototypes for each class, we try to enforce minimal overlap between different classes' prototypes in the output space of the neural network; the one-hot encoded vectors for each class are the best mechanism to achieve this. By establishing class prototypes that the network is maximally confident in, we can highlight the entire gamut of influential feature activations for each class. Furthermore, if all the important features of a class are captured by the prototype, we can infer that there was at least one class example in the training set that utilized each prototypical feature; in this way our prototypes indirectly represent the training set even though we never see actual examples. We now delve into the algorithms to create class prototype examples and the prototype-based metrics from which we can infer network generalization performance." }, { "figure_ref": [], "heading": "How to Create a Class Prototype Example", "publication_ref": [], "table_ref": [], "text": "We are given no data, but we do receive a trained, differentiable classifier f(x; θ) with full access to its weights. In our approach, θ is fixed and we may sometimes omit it for succinctness. To create a prototype example p, we iteratively update the values of a randomly initialized input in R^N to minimize the cross-entropy loss of Eqn. (2) between the output f(p; θ) and a target probability distribution y_p ∈ R^K of our choosing. Let z refer to the iteration number and α to a chosen learning rate; the update rule is then given by Eqn. (1).\nUpdate Rule:\np_{z+1} \leftarrow p_z - \alpha \, \nabla_p L \, / \, \lVert \nabla_p L \rVert \quad (1)\nLoss Function:\nL = -\sum_{k=1}^{K} y_{p,k} \log\big(f_k(p)\big) \quad (2)\nEffectively, we repeatedly forward pass the prototype example into the network, compute the cross-entropy loss of the prototype's output against a fixed probability vector, backpropagate, and update the prototype before the next iteration. For a desired prototype example p^k for class k, we set y_p to the one-hot encoding for that class, i.e., each element of y_p is\ny_{p,j} = \begin{cases} 1, & \text{if } j = k \\ 0, & \text{otherwise} \end{cases} \quad (3)\nThe prototype example is then created by randomly initializing the vector p ∈ [0, 1]^N and then iteratively updating it with the update rule in Eqn. (1)." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Feature Layer Neuron Activation Profiles", "publication_ref": [], "table_ref": [], "text": "The box-and-whisker plots in Figure 2 show how a CIFAR100 class prototype comprehensively stimulates the most activated neurons for its class in the feature space (a 512×1 vector in this case). In this figure, the activation levels (vertical axis) of all neurons in the feature layer are plotted against their respective vector indices, which are sorted by the activations of the prototype. The black vertical bars represent the interquartile ranges (IQR) of all the class's examples' feature activations from the training set. The green dots represent 5 different versions of the class prototype that were generated from 5 different random initialization vectors.
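To make the procedure above concrete, here is a minimal PyTorch-style sketch of the prototype-creation loop (Eqns. 1-3). The function name, input shape, step count, and learning rate are illustrative assumptions rather than values from the paper; the sketch assumes `model` returns logits and clamps the prototype to [0, 1]^N to keep it a valid image.

```python
import torch
import torch.nn.functional as F

def create_class_prototype(model, class_idx, input_shape=(1, 3, 32, 32),
                           steps=500, alpha=0.1):
    """Optimize a random input so the frozen classifier becomes maximally
    confident in `class_idx`, following Eqns. (1)-(3)."""
    model.eval()
    for param in model.parameters():          # theta stays fixed
        param.requires_grad_(False)

    p = torch.rand(input_shape)               # random init in [0, 1]^N
    target = torch.tensor([class_idx])        # one-hot target, given as a class index

    for _ in range(steps):
        p.requires_grad_(True)
        loss = F.cross_entropy(model(p), target)            # Eqn. (2)
        grad = torch.autograd.grad(loss, p)[0]
        with torch.no_grad():
            p = p - alpha * grad / (grad.norm() + 1e-12)    # Eqn. (1), normalized step
            p = p.clamp(0.0, 1.0)                           # assumption: keep p a valid image
    return p.detach()
```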
The top and bottom charts differ in the amount of data the model had been trained on, with the top having been trained on only a small portion of the data while the bottom model trained on all available data. Both models were trained for 100 epochs and achieved 100% training accuracy on their respective training sets.\nWe highlight several important trends from these graphs:\n• In both graphs, the class prototype activations lie near or above the top of the IQRs from the class's training examples. Particularly, for the most activated neurons near the right side of the horizontal axis, the prototype activations far exceed the training data, but only in the case of the well-trained 100% data model do we see this excessive prototype activation appear over a wider range of neurons.\n• The curves created by the sorted prototype data possess generally the same shape between charts, but the well-trained model exhibits a curve a) that is tighter and less noisy between different random initializations of the prototype, b) that has a sharper slope where the prototype activations transition from low-activity neurons to high-activity neurons in an almost piecewise-linear fashion, and c) that has more successfully suppressed a larger number of class-unimportant neurons; the better model relies on a smaller number of highly activated neurons and is able to dampen the response of the prototype and training examples in the non-activated region.\n• We have observed these behaviors in other classes.\n• The better model in the bottom graph had 17% higher test accuracy.\nSince a network's class prototype can be derived without data, and the activation of neurons at the feature layer differs with the quality of the network's training (as observed in Fig. 2), we hypothesize that dataless metrics can be computed. In this work we propose and validate two such metrics.\n1. The first proposed metric, M_g, postulates that class prototypes should be less similar and utilize fewer of the same neurons in a higher-performance model, harnessing the observations in bullet 2.\n2. The second proposed metric, M_adv, proposes that the nearest adversarial example (misclassification) of each class prototype will be further away in feature space for a better-trained model. This idea stems from bullet 1, which observes the increased activation margin the class prototypes exhibit above their training data over a larger number of important class neurons.\nThe fundamental measure of similarity we use for both metrics is the cosine similarity. Given two vectors v_1 and v_2, the cosine similarity between the two is:\n\cos(\theta) = \frac{v_1 \cdot v_2}{\lVert v_1 \rVert \, \lVert v_2 \rVert} \quad (4)\nWe select the magnitude-independent cosine similarity measure as opposed to a raw \lVert \cdot \rVert_p norm distance since the overall prototype magnitudes and shapes could vary across classes." }, { "figure_ref": [], "heading": "Computing M g", "publication_ref": [], "table_ref": [], "text": "Let the k-th row of a matrix G ∈ R^{K×D} (where D is the feature dimension) be the unit feature vector of the class-k prototype:\nG_k = g(p^k) \, / \, \lVert g(p^k) \rVert_2 \quad (5)\nThe elements of the matrix GG^T ∈ [0, 1]^{K×K} are the cosine similarities, see Eqn. (4), of each pair of class prototype unit vectors:\n(GG^T)_{a,b} = \frac{g(p^a) \cdot g(p^b)}{\lVert g(p^a) \rVert \, \lVert g(p^b) \rVert} \quad (6)\nThe metric M_g ∈ [0, 1] is then\nM_g = 1 - \frac{1}{K^2} \sum_{a,b} (GG^T)_{a,b} \quad (7)\nwhere we have computed the mean over all elements of GG^T and subtracted this scalar from 1 to measure the average dissimilarity between any two pairs of class prototypes in feature space."
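A short sketch of Eqns. (5)-(7): given a K×D tensor whose k-th row is the feature vector g(p^k) of the class-k prototype (the variable and function names are illustrative), the metric is one minus the mean pairwise cosine similarity.

```python
import torch
import torch.nn.functional as F

def compute_m_g(prototype_features: torch.Tensor) -> float:
    """prototype_features: (K, D) tensor of g(p^k) vectors, one row per class."""
    G = F.normalize(prototype_features, p=2, dim=1)   # Eqn. (5): unit-norm rows
    sims = G @ G.t()                                  # Eqn. (6): K x K cosine matrix
    return float(1.0 - sims.mean())                   # Eqn. (7): average dissimilarity
```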
}, { "figure_ref": [], "heading": "Computing M adv", "publication_ref": [ "b10", "b20", "b23", "b1", "b13" ], "table_ref": [], "text": "We desire to create a metric that can be useful in tracking generalization on clean examples, but also robust generalization. Ideally, M_adv would scale proportionately with test accuracy on clean examples, but also inform us of differences in training algorithms between models we are comparing if one was standard cross-entropy trained and the other adversarially trained [11] to increase robustness. Previous work on robust neural networks [21,24] indicates that increasing data quantity aids in reducing the robust generalization gap, but for a fixed dataset size, increasing robustness will reduce accuracy on clean examples. We believe that we can capture both these effects by performing adversarial attacks [2] on our class prototype images, which we postulate inherit the robustness characteristics of the network and their respective classes.\nA complication is that we desire to measure our metrics based on similarities in feature space on the interval [0, 1], rather than based on unbounded L_2 distances in image space, which adversarial attacks are traditionally evaluated by. We select the DeepFool [14] attack as an efficient method for finding the closest adversary for each class prototype image and assume that close adversaries in the image space translate to acceptable solutions in the feature space.\nOur computation of M_adv requires us to find the closest adversarial example p_adv ∈ [0, 1]^N for each class prototype and then compute each adversary's activation levels V_{p,adv} = g(p_adv). By definition, an adversary is a perturbation field δ applied to p that results in misclassification, i.e., y^k \neq \arg\max_k\big(f(p^k + \delta; \theta_f)\big).\nAssume we perform the DeepFool algorithm on each class prototype p^k to create a list of prototype input adversaries p^k_adv and forward pass them to find their feature vectors V^k_{p,adv} = g(p^k_adv; θ_g). The computation of M_adv is then\nM_{adv} = 1 - \frac{1}{K} \sum_{k=1}^{K} \frac{g(p^k) \cdot g(p^k_{adv})}{\lVert g(p^k) \rVert \, \lVert g(p^k_{adv}) \rVert} \quad (8)\nwhere we calculate the average cosine similarity, Eqn. (4), between all class prototypes and their respective DeepFool adversaries in feature space. This value is subtracted from 1 to convey the average dissimilarity." }, { "figure_ref": [ "fig_1", "fig_4", "fig_4", "fig_3", "fig_3" ], "heading": "Empirical Evaluation of Proposed Metrics", "publication_ref": [ "b5", "b7", "b7", "b8", "b5" ], "table_ref": [ "tab_1" ], "text": "Our experimental goal was to see if proportionality trends existed between the generalization test accuracy of a deep neural network on common image classification tasks and our dataless metrics.\nAfter an exhaustive search and many computations, M_g and M_adv are two of the best metrics we found that successfully inferred test accuracy. We are presenting their results for one standard cross-entropy trained architecture and three datasets, but we plan on adapting them to other architectures, datasets, and tasks in the future. We instantiate our generic network in Fig. 1 as a ResNet18 [6], defining g( · ; θ_g) as the input to the flattened layer after global average pooling and h( · ; θ_h) as the fully connected and softmax layers. All evaluations were completed on a single GPU. Datasets We focused on the image classification datasets CIFAR10 [8], CIFAR100 [8], and Tiny ImageNet [9]. 
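A sketch of Eqn. (8). DeepFool itself is not re-implemented here; the sketch assumes the closest adversaries p^k_adv have already been computed (for example, with an off-the-shelf attack library), and `feature_extractor` stands for the g(·; θ_g) part of the network. All names are illustrative.

```python
import torch
import torch.nn.functional as F

def compute_m_adv(feature_extractor, prototypes, adversaries) -> float:
    """prototypes, adversaries: (K, C, H, W) tensors, one pair per class.
    Returns the mean feature-space dissimilarity of Eqn. (8)."""
    with torch.no_grad():
        v_p = feature_extractor(prototypes).flatten(1)      # g(p^k)
        v_adv = feature_extractor(adversaries).flatten(1)   # g(p^k_adv)
    cos = F.cosine_similarity(v_p, v_adv, dim=1)            # Eqn. (4), per class
    return float(1.0 - cos.mean())                          # Eqn. (8)
```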
All of these datasets contain labeled 3-channel color images of natural objects such as animals and transportation vehicles. The datasets differ in their image sizes (32x32 pixels for CIFAR and 64x64 for Tiny), in number of categories (10, 100, and 200 for CIFAR10, CIFAR100, and Tiny ImageNet respectively), and the number of training examples per category (5000 per class in CIFAR10 and 500 per class for CIFAR100/Tiny ImageNet).\nTraining For each dataset, we randomly initialize a ResNet18 [6] network and train it for a total of 700 epochs. However, as shown in Fig. 3, we gradually increment the fraction of the full training set available to the network every 100 epochs, in a class-stratified fashion, beginning with only 25% of the training data in the first 100 epochs. At every 100 epoch checkpoint, we measure the test accuracy of the network, generate protoypes for each class given the current state of the model, and compute our metrics. The graphs and tables in this section reflect these measurements. The learning rate starts at 0.1 in the first 100 epochs, but begins at 0.05 for epochs 100-700, following a cosine annealing schedule for each 100 epochs. With the network frozen, prototype images are computed per Eqn. 1 with a learning rate of 0.1 (CIFAR100,Tiny) or 0.01 (CIFAR10).\nOverall Table 1 shows the detailed epochal checkpoint results for each training data percentage, dataset, model accuracy, and metric. In general, the metrics are highly correlated and trend proportionately with model test accuracy. The accuracy results reflect a single model initialization undergoing the 700 epoch recipe in Fig. 3, but the metric results are the mean of 5 different prototype sets. Since the metrics themselves are the mean of K 2 (M g ) and K (M adv ) elements, a single number in the table corresponds to a mean of means. The behavior of M adv in the low data regime is less well behaved, being noisy for CIFAR10 and effectively constant for Tiny ImageNet. We attribute the low-data behavior of this metric in all three datasets to the model relying on a larger group of noisy features for each class (see Fig. 2). Only when the model is able to define a tighter group of less noisy features for each class (the bottom chart of Fig. 2), do we see the dissimilarity between prototypes and their adversaries begin increasing.\nIn future work, we plan to evaluate this metric further to see if it is able to capture differences in generalization between non-robust and robust networks. " }, { "figure_ref": [], "heading": "Application of Proposed Methods", "publication_ref": [], "table_ref": [], "text": "We envision two immediate applications of the proposed method for dataless evaluation of DNNs.\nIt can be used to compare performances of two networks or to evaluate performance of any DNN without using training, validation, or testing data.\nEvaluating a DNN Section 2 has developed and described steps for evaluating a DNN. Section 3 has used the proposed method to evaluate three trained DNNs. In this paragraph we outline how one should utilize the proposed method. Let us assume that a DNN is trained to classify an input to one of K categories, which are given truth labels as one-hot encodings.\nStep 1: Using the method described in Section 2.2 create K prototype inputs, one for each category.\nStep 2: Using the method described in Section 2.4 compute the DNN's M g value. Higher the value of M g , the better the expected performance with the real data. 
Note that 0 ≤ M_g ≤ 1. Our evaluation of this metric indicates that a well-trained DNN should have a value of 0.8 or higher.\nStep 3: Using the method described in Section 2.5, compute the DNN's M_adv value. Again, the higher the value of M_adv, the higher the robustness against adversarial attacks. Note that 0 ≤ M_adv ≤ 1. Our evaluation of this metric indicates that a well-trained DNN should have a value of 0.35 or higher.\nComparing DNNs Suppose that two or more DNNs are designed, trained, validated, and tested for the same task on custom curated datasets that cannot be made available to the users of the DNNs because of the proprietary or privacy-sensitive nature of the data. All these DNNs' weights are available to an end-user. To select the best network, the user can utilize the dataless DNN evaluation method proposed here. For each of the DNNs, the user should calculate the two metrics (following the steps described above) and compare the values to select the one that serves the user best. If the user's priority is higher accuracy, the user should select the DNN that has the highest M_g. On the other hand, if the user is concerned more with adversarial attacks than with accuracy, the user should choose the DNN that has the highest M_adv value." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work we have proposed a method for datalessly evaluating the performance of DNNs. The proposed method works for the evaluation of DNNs performing classification tasks. Given a trained DNN, the proposed method can create a 'prototype' example for each category. These prototypes are then used to calculate two metrics: one for measuring classification accuracy and the other for robustness against adversarial attacks. While it is possible that a network may score highest in both measures, there are networks that would do better in one but not in the other.\nWe have used the proposed methods to evaluate many DNN models. In this paper we report the evaluation of three DNN models, developed using the ResNet18 architecture to classify CIFAR10, CIFAR100, and Tiny ImageNet. The evaluations have proven quite effective; we are now trying to improve the method further.\nOur long-term goal is for the idea of dataless metrics to extend beyond image classification to other tasks such as object detection and generative applications. The core idea is that the model parameters alone can inform us of their ability to complete the task they were trained for. We believe that dataless evaluations of intelligent models created using machine learning are critical to the long-term democratization of the field." } ]
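As a summary of the evaluation recipe above, here is a hedged end-to-end sketch. It reuses the illustrative helpers sketched earlier (`create_class_prototype`, `compute_m_g`, `compute_m_adv`) and assumes an `attack_fn` that returns the closest adversary for each prototype; the g/h split shown for a torchvision-style ResNet18 (conv trunk plus global average pooling as g, the final fully connected layer as h) is one reasonable instantiation, not the only one.

```python
import torch
import torch.nn as nn

def split_resnet18(model):
    """g: trunk up to and including global average pooling; h: final fc layer.
    Assumes a torchvision-style ResNet (e.g. torchvision.models.resnet18)."""
    g = nn.Sequential(*list(model.children())[:-1], nn.Flatten(1))
    return g, model.fc

def dataless_scores(model, num_classes, input_shape=(1, 3, 32, 32), attack_fn=None):
    """Compute (M_g, M_adv) for one trained classifier without any data."""
    g, _ = split_resnet18(model)
    prototypes = torch.cat([create_class_prototype(model, k, input_shape)
                            for k in range(num_classes)], dim=0)
    with torch.no_grad():
        feats = g(prototypes)                       # (K, D) prototype features
    m_g = compute_m_g(feats)
    m_adv = None
    if attack_fn is not None:
        adversaries = attack_fn(model, prototypes)  # e.g. DeepFool, one per prototype
        m_adv = compute_m_adv(g, prototypes, adversaries)
    return m_g, m_adv

# When comparing released checkpoints, prefer the higher M_g for clean accuracy,
# or the higher M_adv when adversarial robustness is the priority.
```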
2023-05-24
[ { "authors": "Gon Buzaglo; Niv Haim; Gilad Yehudai; Gal Vardi; Michal Irani", "journal": "", "ref_id": "b0", "title": "Reconstructing training data from multiclass neural networks", "year": "2023" }, { "authors": "Anirban Chakraborty; Manaar Alam; Vishal Dey; Anupam Chattopadhyay; Debdeep Mukhopadhyay", "journal": "", "ref_id": "b1", "title": "Adversarial attacks and defences: A survey", "year": "2018" }, { "authors": "Chaofan Chen; Oscar Li; Alina Barnett; Jonathan Su; Cynthia Rudin", "journal": "", "ref_id": "b2", "title": "This looks like that: deep learning for interpretable image recognition", "year": "2018" }, { "authors": "S Guerriero; B Caputo; T E J Mensink", "journal": "", "ref_id": "b3", "title": "Deep nearest class mean classifiers", "year": "2018" }, { "authors": "Niv Haim; Gal Vardi; Gilad Yehudai; Ohad Shamir; Michal Irani", "journal": "", "ref_id": "b4", "title": "Reconstructing training data from trained neural networks", "year": "2022" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b5", "title": "Deep residual learning for image recognition", "year": "2015" }, { "authors": "Yiding Jiang; Pierre Foret; Scott Yak; Daniel M Roy; Hossein Mobahi; Gintare Karolina Dziugaite; Samy Bengio; Suriya Gunasekar; Isabelle Guyon; Behnam Neyshabur", "journal": "", "ref_id": "b6", "title": "Neurips 2020 competition: Predicting generalization in deep learning", "year": "2020" }, { "authors": "Alex Krizhevsky; Vinod Nair; Geoffrey Hinton", "journal": "", "ref_id": "b7", "title": "Cifar-10", "year": "" }, { "authors": "Y Le; X Yang", "journal": "CS", "ref_id": "b8", "title": "Tiny imagenet visual recognition challenge", "year": "2015" }, { "authors": "Oscar Li; Hao Liu; Chaofan Chen; Cynthia Rudin", "journal": "", "ref_id": "b9", "title": "Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions", "year": "2017" }, { "authors": "Aleksander Madry; Aleksandar Makelov; Ludwig Schmidt; Dimitris Tsipras; Adrian Vladu", "journal": "", "ref_id": "b10", "title": "Towards deep learning models resistant to adversarial attacks", "year": "2019" }, { "authors": "Zheda Mai; Ruiwen Li; Hyunwoo Kim; Scott Sanner", "journal": "", "ref_id": "b11", "title": "Supervised contrastive replay: Revisiting the nearest class mean classifier in online class-incremental continual learning", "year": "2021" }, { "authors": "Thomas Mensink; Jakob Verbeek; Florent Perronnin; Gabriela Csurka", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b12", "title": "Distance-based image classification: Generalizing to new classes at near-zero cost", "year": "2013" }, { "authors": "Seyed-Mohsen Moosavi-Dezfooli; Alhussein Fawzi; Pascal Frossard", "journal": "", "ref_id": "b13", "title": "Deepfool: a simple and accurate method to fool deep neural networks", "year": "2015" }, { "authors": "Aamir Mustafa; Salman Khan; Munawar Hayat; Roland Goecke; Jianbing Shen; Ling Shao", "journal": "", "ref_id": "b14", "title": "Adversarial defense by restricting the hidden space of deep neural networks", "year": "2019" }, { "authors": "Anders Vaishnavh Nagarajan; Behnam Andreassen; Neyshabur", "journal": "", "ref_id": "b15", "title": "Understanding the failure modes of out-of-distribution generalization", "year": "2021" }, { "authors": "Parth Natekar; Manik Sharma", "journal": "", "ref_id": "b16", "title": "Representation based complexity measures for predicting generalization in deep learning", "year": "2020" }, { 
"authors": "Sebastian Raschka", "journal": "", "ref_id": "b17", "title": "Model evaluation, model selection, and algorithm selection in machine learning", "year": "2020" }, { "authors": "Oren Rippel; Manohar Paluri; Piotr Dollar; Lubomir Bourdev", "journal": "", "ref_id": "b18", "title": "Metric learning with adaptive density discrimination", "year": "2015" }, { "authors": "Yair Schiff; Brian Quanz; Payel Das; Pin-Yu Chen", "journal": "", "ref_id": "b19", "title": "Predicting deep neural network generalization with perturbation response curves", "year": "2021" }, { "authors": "Ludwig Schmidt; Shibani Santurkar; Dimitris Tsipras; Kunal Talwar; Aleksander M Ądry", "journal": "", "ref_id": "b20", "title": "Adversarially robust generalization requires more data", "year": "2018" }, { "authors": "Eli Schwartz; Leonid Karlinsky; Joseph Shtok; Sivan Harary; Mattias Marder; Sharathchandra Pankanti; Rogério Schmidt Feris; Abhishek Kumar; Raja Giryes; Alexander M Bronstein", "journal": "", "ref_id": "b21", "title": "Repmet: Representative-based metric learning for classification and one-shot object detection", "year": "2018" }, { "authors": "Jake Snell; Kevin Swersky; Richard S Zemel", "journal": "", "ref_id": "b22", "title": "Prototypical networks for few-shot learning", "year": "2017" }, { "authors": "Dimitris Tsipras; Shibani Santurkar; Logan Engstrom; Alexander Turner; Aleksander Madry", "journal": "", "ref_id": "b23", "title": "Robustness may be at odds with accuracy", "year": "2019" }, { "authors": "Alice Zheng; Nicole Shelby; Ellie Volckhausen", "journal": "", "ref_id": "b24", "title": "Evaluating machine learning models", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 273.75, 544.04, 230.92, 9.68 ], "formula_id": "formula_0", "formula_text": "p z+1 ← p z -α∇ p L /|| ∇ p L ||(1)" }, { "formula_coordinates": [ 4, 273.75, 559.13, 230.92, 30.55 ], "formula_id": "formula_1", "formula_text": "L = - K k y p,k log(f k (p))(2)" }, { "formula_coordinates": [ 4, 259.02, 664.42, 245.64, 22.05 ], "formula_id": "formula_2", "formula_text": "y p,j = 1, if j = k 0, otherwise(3)" }, { "formula_coordinates": [ 5, 257.09, 566.84, 247.57, 23.25 ], "formula_id": "formula_3", "formula_text": "cos(θ) = v 1 • v 2 || v 1 || || v 2 ||(4)" }, { "formula_coordinates": [ 5, 253.63, 681.14, 251.04, 11.72 ], "formula_id": "formula_4", "formula_text": "G k = g(p k ) / || g(p k ) || 2(5)" }, { "formula_coordinates": [ 6, 237.04, 85.82, 267.62, 23.88 ], "formula_id": "formula_5", "formula_text": "(GG T ) a,b = g(p a ) • g(p b ) || g(p a ) || || g(p b ) ||(6)" }, { "formula_coordinates": [ 6, 246.25, 150.76, 258.42, 26.88 ], "formula_id": "formula_6", "formula_text": "M g = 1 - 1 K 2 a,b (GG T ) a,b(7)" }, { "formula_coordinates": [ 6, 108, 446.56, 128.99, 12.17 ], "formula_id": "formula_7", "formula_text": "y k ̸ = arg max k (f (p k + δ; θ f ))." }, { "formula_coordinates": [ 6, 216.79, 516.78, 287.88, 30.55 ], "formula_id": "formula_8", "formula_text": "M adv = 1 - 1 K K k g(p k ) • g(p k adv ) || g(p k ) || || g(p k adv ) ||(8)" } ]
Fantastic DNN Classifiers and How to Identify them without Data
Current algorithms and architecture can create excellent DNN classifier models from example data. In general, larger training datasets result in better model estimations, which improve test performance. Existing methods for predicting generalization performance are based on hold-out test examples. To the best of our knowledge, at present no method exists that can estimate the quality of a trained classifier without test data. In this paper, we show that the quality of a trained DNN classifier can be assessed without any example data. We consider DNNs to be composed of a feature extractor and a feature classifier; the feature extractor's output is fed to the classifier. The proposed method iteratively creates class prototypes in the input space for each class by minimizing a cross-entropy loss function at the output of the network. We use these prototypes and their feature relationships to reveal the quality of the classifier. We have developed two metrics: one using the features of the prototypes and the other using adversarial examples corresponding to each prototype. Empirical evaluations show that accuracy obtained from test examples is directly proportional to quality measures obtained from the proposed metrics. We report our observations for ResNet18 with Tiny ImageNet, CIFAR100, and CIFAR10 datasets. The proposed metrics can be used to compare performances of two or more trained classifiers without testing examples.
Nathaniel Dean; Dilip Sarkar
[ { "figure_caption": "Because the model parameters reflect the outcome of training the model on real data, our prototypes are indirect representations of the training set. Training Set Reconstruction Recent work [1, 5] has successfully reconstructed training examples by optimizing a reconstruction loss function that requires only the parameters of the model and no data. An underlying hypothesis of this work is that neural networks encode specific examples from the training set in their model parameters. In our work, we utilize this concept in a different way by optimizing a cross-entropy loss at the output of the model and constructing class prototypes for each class, also without any data.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Generic Network showing it as a composition of feature extractor and a feature classifier.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "FeatureLayer Activations, CIFAR100, ResNet18, Class 1 25% Training Data 100% Training Data", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Plots of CIFAR100, ResNet18, class 1 activation levels (vertical axis) in the feature layer as a function of neuron index (horizontal axis). Neuron indices are sorted by activation levels of the prototypes (green dots). Prototype variation shows effect of 5 different random p initializations. Black bars are interquartile ranges of class 1 training data. (Top) Low quantity of training data. (Bottom) High quantity of training data. Charts on same scale.", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Percentage of training data released to deep neural network as a function of cumulative epochs of training. Metrics and test accuracy computed at each 100 epochal increment checkpoint.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "3. 1 Figure 414Figure4shows trendlines for test accuracy and metric M g as a function of available training data percentage for the three datasets. As the test accuracy increases, M g increases in a proportional manner throughout the training data percentage range. Figure5eliminates the training data percentage variable and plots our two variables of interest, M g and test accuracy, directly against each other. From these charts, we created best-fit linear lines and measured high Pearson correlation coefficients of 0.97, 0.97, and 0.99 between M g and test accuracy for CIFAR100, Tiny ImageNet, and CIFAR10, respectively.", "figure_data": "", "figure_id": "fig_5", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: M g -Plots of test accuracy and mean prototype dissimilarity (M g ) as a function of training data fraction used in training.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: M g -Plots of mean prototype dissimilarity (M g ) as a function of test accuracy. Pearson correlation coefficients 0.97 (CIFAR100), 0.97 (Tiny), and 0.99 (CIFAR10).", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "3. 
22Evaluation of Metric M adv", "figure_data": "", "figure_id": "fig_8", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 66Figure6plots metric M adv as a function of test accuracy for all three datasets. In this setting, we again computed the Pearson correlation coefficients, which although less correlated than for M g , are still quite high with 0.97, 0.91, and 0.82 for CIFAR100, Tiny ImageNet, and CIFAR10, respectively.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: M adv -Plots of mean prototype DeepFool adversary dissimilarity (M adv ) as a function of test accuracy.", "figure_data": "", "figure_id": "fig_10", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Test Accuracy and Metric Results", "figure_data": "Training Data Percent 0.25 0.40.60.70.80.91.0CIFAR100Test Accuracy.556 .596 .647 .664 .684 .706 .723Metric 1.707 .717 .738 .762 .786 .796 .811Metric 2.306 .317 .342 .372 .396 .401 .426Tiny ImageNetTest Accuracy.408 .442 .476 .490 .509 .524 .531Metric 1.709 .727 .750 .790 .816 .829 .845Metric 2.307 .308 .327 .376 .429 .459 .495CIFAR10Test Accuracy.866 .885 .906 .917 .925 .934 .940Metric 1.783 .794 .812 .822 .823 .826 .829Metric 2.323 .293 .334 .351 .391 .371 .372Results averaged over 5 sets of random prototype initializations.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work highlights the issue of transferability of DNN performance metrics, which serves as a methodological basis for the citing paper to address the challenge of assessing accuracy performance without access to the original training or test set."}, {"Category": "Extension or Continuation", "Citation": "[17]", "Explanation": "The cited work is a solution to the NeurIPS 2020 competition on new measures for DNN generalization gap, which the citing paper extends by proposing new measures to predict generalization gap and offer insight into DNN generalization."}, {"Category": "Data Source", "Citation": "[2]", "Explanation": "The cited work on adversarial examples is a data source for the citing paper to understand the limitations of DNN performance metrics in the context of out-of-distribution and adversarial examples."}, {"Category": "Supporting Evidence", "Citation": "[15]", "Explanation": "The cited work on robustness in deep learning provides a foundation for the class prototype idea, which is further explored in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[10]", "Explanation": "The cited work on explainable AI contributes to the class prototype idea by providing insights on how to capture important class features in data structures."}, {"Category": "Supporting Evidence", "Citation": "[13]", "Explanation": "The cited work on distance-based classification provides a basis for the class prototype idea by focusing on capturing class features in data structures."}, {"Category": "Supporting Evidence", "Citation": "[23]", "Explanation": "The cited work on few-shot learning contributes to the class prototype idea by exploring the use of class features in data structures for machine learning tasks."}, {"Category": "Supporting Evidence", "Citation": "[12]", "Explanation": "The cited work on continual learning provides a foundation for the class prototype idea by focusing on capturing class features in data structures for specific machine learning tasks."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work provides a method for adversarially training models to increase robustness, which the citing paper adopts in their research to compare the performance of different training algorithms."}, {"Category": "Data Source", "Citation": "[21,24]", "Explanation": "The cited works on robust neural networks provide data and insights on the effect of increasing data quantity on reducing the robust generalization gap, which the citing paper uses to inform their research on the impact of data set size on robustness."}, {"Category": "Data Source", "Citation": "[6]", "Explanation": "The cited work, ResNet18, is used as the standard cross-entropy trained architecture in the experimental setup of the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[8]", "Explanation": "The datasets CIFAR10 and CIFAR100 are mentioned in the cited work and are further discussed in the citing paper to provide context and data for the experiments."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The dataset Tiny ImageNet is also mentioned in the cited work and is further discussed in the citing paper to provide context and data for the experiments."}, {"Category": "Data Source", "Citation": "[6]", "Explanation": "The cited work by He et al. 
(2016) introduces the ResNet18 network architecture, which the citing paper uses for training a model on the CIFAR and Tiny ImageNet datasets."}]
[ { "figure_ref": [ "fig_10", "fig_10", "fig_1", "fig_10" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b15", "b14", "b16", "b17", "b16", "b18", "b16", "b19", "b18", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b29", "b30", "b31", "b24", "b32", "b33", "b34", "b29", "b1", "b35", "b34", "b12", "b36", "b13", "b29", "b13", "b12", "b36", "b12" ], "table_ref": [], "text": "Estimating point correspondences between images is a fundamental problem in computer vision, with numerous applications in areas such as image registration [1], object recognition [2], and 3D reconstruction [3]. Work on correspondences can largely be classified into geometric and semantic correspondence search. Geometric correspondence search aims to find points that correspond to the same physical point of the same object and are typically solved with local feature-based methods [4,5,6,7,8] and optical flow methods [9,10]. Semantic correspondence search -the core application of interest in this paper -focuses on points that correspond semantically [11,12,13,14,15,16], not necessarily of the same object, but of similar objects of the same or related class. For example, given the selected kitten paw in the (source) image of Figure 1, we would like to automatically identify kitten paws in other (target) images. Whether geometric or semantic correspondences, a common source image source attention target image target attention target image target attention Figure 1: Teaser -We optimize for the prompt embedding of the diffusion model that activates attention in the region of the 'paw' for the source image, marked in yellow. With this embedding, the attention highlights semantically similar points in various target images, which we then use to find semantic correspondences. This holds even for \"out of distribution\" target images (LEGO cats).\nrecent trend is to learn to solve these problems [17], as in many other areas of computer vision. While learning-based methods provide superior performance [16,15,17,18] often these methods need to be trained on large supervised datasets [17,19]. For geometric correspondences, this is relatively straightforward, as one can leverage the vast amount of photos that exist on the web and utilize points that pass sophisticated geometric verification processes [17,20,19]. For semantic correspondences, this is more challenging, as one cannot simply collect more photos for higherquality ground truth-automatic verification of semantic correspondences is difficult, and human labeling effort is required. Thus, research on unsupervised learned semantic correspondence has been trending [21,22,23].\nIn this work, we show that one may not need any ground-truth semantic correspondences between image pairs at all for finding semantic correspondences, or need to rely on generic pre-trained deep features [24,25] -we can instead simply harness the knowledge within powerful text-toimage models. Our key insight is that, given that recent diffusion models [26,27,28,29,30] can generate photo-realistic images from text prompts only, there must be knowledge about semantic correspondences built-in within them. For example, for a diffusion model to successfully create an image of a human face, it must know that a human face consists of two eyes, one nose, and a mouth, as well as their supposed whereabouts -in other words, it must know the semantics of the scene it is generating. 
Thus, should one be able to extract this knowledge from these models, trained with billions of text-image pairs [30,31], one should be able to solve semantic correspondences as a by-product. Note here that these generative models can also be thought of as a form of unsupervised pre-training, akin to self-supervised pre-training methods [32,25], but with more supervision from the provided text-image relationship.\nHence, inspired by the recent success of prompt-to-prompt [33] for text-based image editing, we build our method by exploiting the attention maps of latent diffusion models. These maps attend to different portions of the image as the text prompt is changed. For example, given an image of a cat, if the text prompt is 'paw', the attention map will highlight the paws, while if the text prompt is 'collar', they will highlight the collar; see Figure 2. Given arbitrary input images, these attention maps should respond to the semantics of the prompt. In other words, if one can identify the 'prompt' corresponding to a particular image location, the diffusion model could be used to identify semantically similar image locations in a new, unseen, image. Note that finding actual prompts that correspond to words is a discrete problem, hence difficult to solve. However, these prompts do not have to correspond to actual words, as long as they produce attention maps that highlight the queried image location.\nIn other words, the analogy above holds when we operate on the continuous embedding space of prompts, representing the core insight over which we build our method.\nWith this key insight, we propose to find these (localized) embeddings by optimizing them so that the attention map within the diffusion models corresponds to points of interest, similarly to how prompts are found for textual inversion [34,35]. As illustrated in Figure 1, given an (source) image and a (query) location that we wish to find the semantic correspondences of, we first optimize a randomly initialized text embedding to maximize the cross-attention at the query location while keeping the diffusion model fixed (i.e. stable diffusion [30]). We then apply the optimized text embedding to another image (i.e. target) to find the semantically corresponding location -the pixel attaining the maximum attention map value within the target image.\nBeyond our core technical contribution, we introduce a number of important design choices that deal with problems that would arise from its naive implementation. Specifically, (1) as we optimize on a single image when finding the embeddings, to prevent overfitting we apply random crops; (2) to avoid the instability and randomness of textual inversion [36,35] we find multiple embeddings starting from random initialization; (3) we utilize attention maps at different layers of the diffusion network to build a multi-scale notion of semantic matching. These collectively allow our method to be on par with strongly-supervised state of the art on the PF-Willow dataset [13] and outperform all weakly-and un-supervised baselines in PF-Willow, CUB-200 [37], and SPair-71k datasets [14] -on the SPair-71k dataset we outperform the closest weakly supervised baseline by 20.9% relative.\nWe emphasize that our method does not require supervised training that is specific to semantic point correspondence estimation. Instead, we simply utilize an off-the-shelf diffusion model, without fine-tuning, and are able to achieve state-of-the-art results. 
To summarize, we make the following contributions:\n• we show how to effectively extract semantic correspondences from an off-the-shelf Stable Diffusion [30] model without training any new task-specific neural network, or using any semantic correspondence labels; • we introduce a set of design choices -random crops, multiple embeddings, multi-scale attentionthat are critical to achieving state-of-the-art performance; • we significantly outperform prior state of the art based on weakly supervised techniques on the SPair-71k [14], PF-Willow [13], and CUB-200 [37] datasets (20.9% relative on SPair-71k) and is on par with the strongly-supervised state of the art on the PF-Willow [13] dataset." }, { "figure_ref": [ "fig_10", "fig_2" ], "heading": "Related work", "publication_ref": [ "b37", "b9", "b38", "b39", "b2", "b4", "b10", "b3", "b5", "b6", "b40", "b41", "b42", "b7", "b43", "b44", "b16", "b17", "b10", "b15", "b14", "b45", "b46", "b45", "b47", "b48", "b10", "b49", "b50", "b15", "b14", "b9", "b44", "b51", "b15", "b51", "b15", "b52", "b53", "b13", "b12", "b54", "b20", "b21", "b55", "b56", "b57", "b24", "b22", "b55", "b56", "b20", "b21", "b22", "b29", "b25", "b26", "b27", "b28", "b58", "b59", "b29", "b60", "b61", "b33", "b62", "b63", "b32", "b32", "b64", "b33", "b29", "b65", "b66", "b67", "b68", "b69", "b70", "b29" ], "table_ref": [], "text": "We first review work on semantic correspondences and then discuss work focusing on reducing supervision. We also discuss work on utilizing pre-trained diffusion models.\nSemantic correspondences.. Finding correspondences is a long-standing core problem in computer vision, serving as a building block in various tasks, for example, optical flow estimation [38,10], structure from motion [39,40,3,5], and semantic flow [11]. While a large body of work exists, including those that focus more on finding geometric correspondences that rely on local features [4,6,7] and matching them [41,42,43,8] or directly via deep networks [44,45,17,18], here we focus only on semantic correspondences [11,16,15,46], that is, the task of finding corresponding locations in images that are of the same \"semantic\" -e.g., paws of the cat in Figure 1. For a wider review of this topic, we refer the reader to [47].\nFinding semantic correspondences has been of interest lately, as they allow class-specific alignment of data [46,48], which can then be used to, for example, train generative models [49], or to transfer content between images [11,50,51]. To achieve semantic correspondence, as in many other fields in computer vision, the state-of-the-art is to train neural networks [16,15]. Much research focus was aimed initially at architectural designs that explicitly allow correspondences to be discovered within learned feature maps [10,45,52,16]. For example using pyramid architectures [52], or architectures [16] that utilize both convolutional neural networks [53] and transformers [54]. However, these approaches require large-scale datasets that are either sparsely [14,13] or densely [55] labeled, limiting the generalization and scalability of these methods without additional human labeling effort.\nLearning semantic correspondences with less supervision.. Thus, reducing the strong supervision requirement has been of research focus. One direction in achieving less supervision is through the use of carefully designed frameworks and loss formulations. For example, Li et al. 
[21] use a probabilistic student-teacher framework to distill knowledge from synthetic data and apply it to unlabeled real image pairs. Kim et al. [22] form a semi-supervised framework that uses unsupervised losses formed via augmentation.\nAnother direction is to view semantic correspondence problem is utilizing pre-trained deep features. For example, Neural Best-Buddies [56] look for mutual nearest neighbor neuron pairs of a pre-trained CNN to discover semantic correspondences. Amir et al. [57] investigate the use of deep features from pre-trained Vision Transformers (ViT) [58], specifically DINO-ViT [25]. More recently, in ASIC [23] rough correspondences from these pre-trained networks have been utilized to train a network that maps images into a canonical grid, which can then be used to extract semantic correspondences.\nThese methods either rely heavily on the generalization capacity of pre-trained neural network representations [56,57] or require training a new semantic correspondence network [21,22,23]. In this work, we show how one can achieve dramatically improved results over the current state-of-the-art, achieving performance similar to that of strongly-supervised methods, even without training any new neural network by simply using a Stable Diffusion network.\nUtilizing pre-trained diffusion models.. Diffusion models have recently emerged as a powerful class of generative models, attracting significant attention due to their ability to generate highquality samples [30,26,27,28,29,59,60]. Diffusion models generate high-quality samples, with text-conditioned versions incorporating textual information via cross-attention mechanisms.\nAmong them, Stable Diffusion [30] is a popular model of choice thanks to its lightweight and opensource nature. While training a latent diffusion model is difficult and resource-consuming, various methods have been suggested to extend its capabilities. These include model personalization [61] through techniques such as Low Rank Adaptation (LoRA) [62] and textual inversion [34], including new conditioning signals via ControlNet [63], or repurposing the model for completely new purposes such as text-to-3D [64] and text-driven image editing [33]. While the applications vary, many of these applications are for generative tasks, that is, creating images and 3D shapes.\nOf particular interest to us is the finding from Prompt-to-Prompt [33] that the cross-attention maps within diffusion models can effectively act as a pseudo-segmentation for a given text query -in other words, it contains semantic information. This is further supported by the fact that intermediate features of diffusion models can be used for semantic segmentation [65]. In this work, with these observations, and with the help of textual inversion [34], we show that we can utilize Stable Diffusion [30] for not just generative tasks, but for the task of finding semantic correspondences, a completely different task from what these models are trained for, all without any training by repurposing the attention maps, similarly as in [66] but for an entirely different application and framework.\nConcurrently to our work, work utilizing feature representations within Stable Diffusion for correspondences has been proposed [67,68,69]. These methods look into how to use the deep features within the Stable Diffusion Network effectively, similar to how VGG19 features [70] are widely used for various tasks. 
Our method, on the other hand, looks into how we can alter the attention maps within Stable Diffusion to our will, in other words taking into account how these features are supposed to be used within Stable Diffusion - we optimize word embeddings. By doing so we show that one can perform tasks other than simple image creation, such as the semantic correspondence task we demonstrated. However, this is not the end of what our framework can do. For example, a recent followup to our preprint demonstrated an extension of our method to segmentation [71]." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Our method identifies semantic correspondences between pairs of images by leveraging the attention maps induced by latent diffusion models. Given a pair of images (respectively source and target), and a query point in the source image domain, we seek to compute an attention map that highlights areas in the target image that are in semantic correspondence with the query. While classical diffusion models work on images directly, we employ latent diffusion models [30], which operate on encoded images. Further, rather than being interested in the generated image, we use the attention maps generated as a by-product of the synthesis process. Our process involves two stages; see Figure 3. In the first stage (optimization), we seek to find an embedding that represents the semantics of the query region in the source image by investigating the activation map of the denoising step at time t. In the second stage (inference), the embeddings from the source image are kept fixed, and attention maps for a given target image, again at time t, are computed. The location attaining the highest value of attention in the generated map provides the desired semantic correspondence. We start by reviewing the basics of diffusion models in Section 3.1, detail our two-stage algorithm in Section 3.2, and describe augmentation techniques that boost its performance in Section 3.3." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b58", "b29", "b58", "b29" ], "table_ref": [], "text": "Diffusion models are a class of generative models that approximate the data distribution by denoising a base (Gaussian) distribution [59]. In the forward diffusion process, the input image I is gradually transformed into Gaussian noise over a series of T timesteps. Then, a sequence of denoising iterations ε_θ(I_t, t), parameterized by θ and applied for t = 1, ..., T, takes as input the noisy image I_t at each timestep and predicts the noise ε added at that iteration. The diffusion objective is given by:\nL_{DM} = \mathbb{E}_{I, t, \epsilon \sim \mathcal{N}(0,1)} \big[ \lVert \epsilon - \epsilon_\theta(I_t, t) \rVert_2^2 \big] \quad (1)\nRather than directly operating on images I, latent diffusion models (LDM) [30] execute instead on a latent representation z, where an encoder maps the image I into a latent z, and a decoder maps the latent representation back into an image. The model can additionally be made conditional on some text y, by providing an embedding e = τ_θ(y) from a text encoder τ_θ to the denoiser:\nL_{LDM} = \mathbb{E}_{z, t, \epsilon \sim \mathcal{N}(0,1)} \big[ \lVert \epsilon - \epsilon_\theta(z_t, t, e) \rVert_2^2 \big] \quad (2)\nThe denoiser for a text-conditional LDM is implemented by a transformer architecture [59,30] involving a combination of self-attention and cross-attention layers."
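For readers who prefer code to notation, a framework-agnostic PyTorch-style sketch of one training step of the text-conditional objective in Eqn. (2). The `encoder`, `denoiser`, and `text_encoder` callables and the `alphas_cumprod` noise schedule are placeholders for whatever latent diffusion implementation is used; they are not APIs from the paper.

```python
import torch
import torch.nn.functional as F

def ldm_loss(encoder, denoiser, text_encoder, alphas_cumprod, image, text_tokens):
    """One Monte-Carlo sample of Eqn. (2): noise a latent, predict the noise."""
    z = encoder(image)                                   # image -> latent z
    e = text_encoder(text_tokens)                        # e = tau_theta(y)
    T = alphas_cumprod.shape[0]
    t = torch.randint(0, T, (z.shape[0],), device=z.device)
    eps = torch.randn_like(z)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    z_t = a_bar.sqrt() * z + (1.0 - a_bar).sqrt() * eps  # forward diffusion of z
    eps_pred = denoiser(z_t, t, e)                       # epsilon_theta(z_t, t, e)
    return F.mse_loss(eps_pred, eps)
```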
}, { "figure_ref": [ "fig_3", "fig_1" ], "heading": "Optimizing for correspondences", "publication_ref": [ "b71", "b64", "b0" ], "table_ref": [], "text": "In what follows, we detail how we compute attention masks given an image/embedding pair, how to optimize for an embedding that activates a desired position in the source image, and how to employ this optimized embedding to identify the corresponding point in the target image.\nAttention masks. Given an image I, let z(t) be its latent representation within the diffusion model at the t-th diffusion step. We first compute the query Q_l = Φ_l(z(t=8)) and the key K_l = Ψ_l(e), where Φ_l(•) and Ψ_l(•) are the l-th linear layers of the U-Net [72] that denoises in the latent space. The cross-attention at these layers is then defined as:\nM′_l(e, I) = CrossAttention(Q_l, K_l) = softmax(Q_l K_l^⊤ / √d_l) , (3)\nwhere the attention map M′_l ∈ R^{C×(h×w)×P}, and P, C, h, and w respectively represent the number of tokens, the number of attention heads in the transformer layer, and the height and width of the image at the particular layer in the U-Net. Here, Q_l ∈ R^{(h×w)×d_l} and K_l ∈ R^{P×d_l}, where d_l represents the dimensionality of this layer, and softmax denotes the softmax operation along the P dimension.\nAs mentioned earlier in Section 1, and observed in [65], different layers of the U-Net exhibit different \"levels\" of semantic knowledge; see Figure 4. Thus, to take advantage of the different characteristics of each layer, we average along both the channel axis and across a subset of U-Net layers: M′′ ∈ R^{(h×w)×P} = average_{c=1...C, l=7...10}(M′_l). Note that the sizes of the attention maps M′_l differ according to each layer, hence we utilize bilinear interpolation when averaging. Hence, with M(u; e, I) we denote indexing a pixel u of the attention map M′′[1] ∈ R^{h×w} via bilinear interpolation, where [1] extracts the first of the P available attention maps, that is, we use the first token of the embedding to represent our query. Examples of these attention maps for embeddings e derived from natural text are visualized in Figure 2.\nOptimization. Given a source image I_i and a query pixel location u_i ∈ [0, 1]^2, we are interested in finding the corresponding pixel u_j in the target image I_j. We emulate the source attention map for the query as a Gaussian of standard deviation σ centered at the query location u_i:\nM_s(u) = exp(-∥u - u_i∥_2^2 / 2σ^2) . (4)\nThe Gaussian map represents the desired region of focus of the attention mechanism. We then optimize for the (text) embedding e that reproduces the desired attention map as:\ne* = arg min_e Σ_u ∥M(u; e, I_i) - M_s(u)∥_2^2 . (5)\nInference. With the optimized embedding e*, we can then identify the target point (the point u_j in I_j most semantically similar to u_i in I_i) by computing the attention map for the target image, and finding the spatial location that maximizes it.
We write:\nu_j = arg max_u M(u; e*, I_j) . (6)" }, { "figure_ref": [ "fig_5" ], "heading": "Regularizations", "publication_ref": [ "b34", "b35" ], "table_ref": [], "text": "As discussed in Section 1, optimizing text embeddings on a single image makes it susceptible to overfitting. Furthermore, the process of finding embeddings via optimization, textual inversion, has recently been shown to be sensitive to initialization [35,36]. To overcome these issues, we apply various forms of regularization. Note that while we describe these one at a time, these two augmentations are simultaneously enabled.\nFigure 8: Qualitative examples on SPair-71k, CUB-200, and PF-Willow - correspondences estimated from our method are colored in blue if correct and in orange if wrong according to PCK @0.05 . (Top) successful cases, (middle) mixed cases, and (bottom) failure cases. Interestingly, even when our estimates disagree with the human annotation (thus shown as orange), they are arguably at plausible locations.\nAveraging across crops. Let C_c(I) be a cropping operation with parameters c, and U_c(I) the operation placing the crop back to its original location; that is, C_c(U_c(x_c)) = x_c for some crop x_c. For c, we reduce the image dimensions to 93% (found via hyperparameter sweep) and sample a uniformly random translation within the image; we will denote this sampling as c ∼ D. We augment the optimization in Equation 5 by averaging across cropping augmentations as:\ne* = arg min_e E_{c∼D} Σ_u ∥C_c(M(u; e, I_i)) - C_c(M_s(u))∥_2^2 , (7)\nsimilarly, at inference time, we average the attention masks generated by different crops:\nu_j = arg max_u E_{c∼D} U_c(M(u; e*, C_c(I_j))) . (8)\nAveraging across optimization rounds. We empirically found that doing multiple rounds of optimization is one of the keys to obtaining good performance; see Figure 9. To detail this process, let us abstract the optimization in Equation 7 as e* = O(ē, I_i), where ē is its initialization. We then average the attention masks induced by multiple optimization runs as:\nu_j = arg max_u E_{ē∼D} M(u; O(ē, I_i), I_j) . (9)" }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b13", "b12", "b36", "b45" ], "table_ref": [ "tab_2" ], "text": "We evaluate semantic correspondence search on three standard benchmarks: SPair-71k [14] is the largest standard dataset for evaluating semantic correspondences, composed of 70,958 image pairs of 18 different classes. Since we do not perform any training, we only use the 12,234 correspondences of the test set for evaluation; PF-Willow [13] comprises four classes - wine bottle, duck, motorcycle, and car - with 900 correspondences in the test set; CUB-200 [37] includes 200 different classes of birds. Following ASIC [46] we select the first three classes, yielding a total of 1,248 correspondences in the test set.\nTable 1: Quantitative results - We report the Percentage of Correct Keypoints (PCK), where bold numbers are the best results amongst weakly- or un-supervised methods. Our method outperforms all weakly supervised baselines (we use the numbers reported in the literature). Note also that for PF-Willow, our method outperforms even the strongly-supervised state of the art in terms of PCK @0.1 ."
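To make the two-stage procedure of Equations (4)-(6) concrete, the sketch below optimizes an embedding against a Gaussian target on the source image and then takes the argmax of the induced attention on the target image. It is a minimal illustration under stated assumptions: `attn_src` and `attn_tgt` are dummy differentiable stand-ins for the frozen Stable Diffusion cross-attention extraction of Section 3.2 (fixed random projections, so the snippet runs standalone), and the step count and learning rate roughly mirror the hyperparameters reported in the supplementary material. The crop and multi-round averaging of Equations (7)-(9) would simply average such attention maps before the final argmax.

```python
import torch

def gaussian_target(h, w, query_uv, sigma=0.1):
    """Desired attention map M_s of Eq. (4): a Gaussian centred on the query
    (query_uv given in normalized [0, 1]^2 coordinates)."""
    ys = torch.linspace(0, 1, h).view(h, 1).expand(h, w)
    xs = torch.linspace(0, 1, w).view(1, w).expand(h, w)
    d2 = (xs - query_uv[0]) ** 2 + (ys - query_uv[1]) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def optimize_embedding(attn_fn, query_uv, emb_dim=768, steps=129, lr=2.4e-3, size=64):
    """Eq. (5): find an embedding whose attention map on the source image
    reproduces the Gaussian target. attn_fn(e) must return an (h, w) attention
    map; here it is a placeholder for the averaged cross-attention of Sec. 3.2."""
    target = gaussian_target(size, size, query_uv)
    e = torch.randn(1, emb_dim, requires_grad=True)
    opt = torch.optim.Adam([e], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((attn_fn(e) - target) ** 2).sum()
        loss.backward()
        opt.step()
    return e.detach()

def infer_correspondence(attn_fn_target, e):
    """Eq. (6): the corresponding point is the argmax of the attention map
    that the optimized embedding induces on the target image."""
    m = attn_fn_target(e)
    v, u = divmod(int(torch.argmax(m)), m.shape[1])
    h, w = m.shape
    return (u / (w - 1), v / (h - 1))  # normalized (x, y)

# Usage with dummy differentiable "attention extractors" standing in for the
# frozen diffusion model (fixed random projections of the embedding).
torch.manual_seed(0)
W_src = torch.randn(64 * 64, 768)
W_tgt = torch.randn(64 * 64, 768)
attn_src = lambda e: torch.softmax(W_src @ e.squeeze(0), dim=0).view(64, 64)
attn_tgt = lambda e: torch.softmax(W_tgt @ e.squeeze(0), dim=0).view(64, 64)

e_star = optimize_embedding(attn_src, query_uv=(0.3, 0.6))
print(infer_correspondence(attn_tgt, e_star))
```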
}, { "figure_ref": [ "fig_5", "fig_5", "fig_5", "fig_6" ], "heading": "CUB-200 PF-Willow", "publication_ref": [ "b12", "b13", "b22", "b12", "b13", "b22", "b29", "b72", "b20", "b51", "b45", "b45", "b23", "b55", "b24", "b45", "b14", "b12" ], "table_ref": [ "tab_2" ], "text": "Metrics. We measure the performance of each method via the standard metric [13,14,23] of Percentage of Correct Keypoints (PCK) under selected thresholds: we report how many semantic correspondence estimates are within the error threshold from the human annotation. Following the standard protocol [13,14,23], for SPair-71k and PF-Willow the thresholds are determined with respect to the bounding boxes of the object of interest, whereas for CUB-200, the thresholds are set relative to the overall image dimensions. We report results for thresholds 0.05 and 0.1, as annotations themselves are not pixel-perfect due to the nature of semantic correspondences, i.e., there is no exact geometric correspondence.\nImplementation details. We use Stable Diffusion version 1.4 [30]. In our ablations, we find that more augmentations help, but with the caveat of (linearly) requiring more computation. Hence, on the CUB-200 and PF-Willow datasets we use 10 optimization rounds for the embeddings and 30 random crops for the inference. For the larger SPair-71k dataset we use fewer: 5 embeddings and 20 crops. We choose our hyperparameters based on the validation subset of SPair-71k and PCK @0.05 via a fully-randomized search and apply them to all datasets. We use the Adam [73] optimizer to find the prompts. For the detailed hyperparameter setup see the Supplementary Material.\nQualitative highlights. We show qualitative examples of the semantic correspondences estimated by our method in Figure 8. Interestingly, our method, even when it disagrees with the human annotation, provides results that can arguably be interpreted as plausible. For example, as shown in the wine bottle example at the bottom-right of Figure 8, source points occluded by the wine glasses are correctly mapped to another wine glass in the target image, which disagrees with the human annotator's label, which points to the wine bottle.\nQuantitative results. We report the performance of our method in Table 1. We compare our method against weakly-supervised baselines [21,52,46], as well as baselines from ASIC [46] that are based on general-purpose deep features: VGG [24] features with Moving Least Squares (MLS) [56], and DINO [25] features with Moving Least Squares (MLS) or Nearest Neighbor (NN) matching.\nOur method outperforms all compared weakly supervised baselines. Note that the performance gap in terms of average PCK @0.1 compared to the second-best method, ASIC [46], is large: 20.9% relative. Note also that in the case of the PF-Willow dataset, our method is on par with the current strongly supervised state of the art. Even in the case of the SPair-71k dataset, our results are comparable to a very recent method, VAT [15], except for a few problematic classes: bottle, bus, plant, train, tv. These problematic classes are typically those that exhibit strong symmetry, which we do not explicitly deal with. For detailed per-class results, see our Supplementary Material. Considering that our method is fully unsupervised when it comes to the task of semantic correspondences, these results are quite surprising.\nAblations. To ablate our model, we use the PF-Willow [13] dataset and report PCK @0.05 .
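For reference, the PCK evaluation used throughout these experiments can be sketched as follows. This is a minimal illustration assuming the normalizer is the larger side of the object bounding box (SPair-71k, PF-Willow) or of the image (CUB-200), as described in the Metrics paragraph above; the example keypoints are made up.

```python
import torch

def pck(pred_kps, gt_kps, norm_size, alpha=0.05):
    """Percentage of Correct Keypoints: a prediction counts as correct if it
    lies within alpha * norm_size pixels of the annotation.
    pred_kps, gt_kps: (N, 2) tensors of (x, y) pixel coordinates."""
    dist = torch.linalg.norm(pred_kps - gt_kps, dim=1)
    return (dist <= alpha * norm_size).float().mean().item()

# Example: 3 predicted keypoints vs. annotations, object bounding box 200 x 150.
pred = torch.tensor([[10.0, 12.0], [50.0, 40.0], [100.0, 90.0]])
gt = torch.tensor([[11.0, 14.0], [70.0, 40.0], [101.0, 92.0]])
print(pck(pred, gt, norm_size=max(200, 150), alpha=0.05))  # 2 of 3 within 10 px -> ~0.667
```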
In Figure 9a, we show that using individual layers leads to significantly worse results than using multiple layers together; this demonstrates the importance of capturing semantic knowledge across multiple receptive fields. In Figure 9b, we show that using multiple optimized embeddings significantly boosts performance, and in Figure 9c, we see how using more crops further leads to improvements during inference. Finally, besides crops during inference, if we disable random crops during embedding optimization, PCK @0.05 drops to 45.5 (vs. 53.0 in our full model).\nBeyond semantic correspondences of the same class. We further experiment with correspondences across different classes, e.g., correlating different animals. As shown in Figure 10, in many cases our method provides correspondences that are arguably correct even for objects that are of different classes. These range from similar classes (sheep and dog), to more distinct classes (bird and airplane), and even to wildly different classes (chair and horse)." }, { "figure_ref": [ "fig_10" ], "heading": "Conclusions", "publication_ref": [ "b29", "b13", "b12", "b36", "b12", "b48", "b12", "b36", "b13" ], "table_ref": [], "text": "We have demonstrated the remarkable potential of leveraging diffusion models, specifically Stable Diffusion [30], for the task of semantic correspondence estimation. We have shown that by simply optimizing and finding the embeddings for a location of interest, one can find its semantically corresponding locations in other images, although the diffusion model was never trained for this task. We further introduced design choices that significantly impact the accuracy of the estimated correspondences. Ultimately, our approach significantly outperforms existing weakly supervised methods on the SPair-71k [14], PF-Willow [13], and CUB-200 [37] datasets (20.9% relative for SPair-71k), and even achieves performance on par with strongly supervised methods on the PF-Willow dataset [13].\nLimitations and future work. This study highlights the emergent properties of training large models on vast amounts of data, revealing that a model primarily designed for image generation can also be utilized for other tasks, specifically semantic correspondences. The success of our method underscores the importance of exploring the hidden capabilities of these large text-to-image generative models and suggests that there may be other novel applications and tasks that can benefit from the vast knowledge encoded within these models. For example, as an immediate application, our method can be used to scale up the training of 3D generative models such as FigNeRF [49] with images from the web without human supervision.\nWe note that our method does have its limitations. Many of the failure modes involve symmetric objects; explicitly handling these cases with more refined techniques for extracting correspondences may help solve this problem. Our method also requires significant compute: on an NVIDIA RTX 3090 GPU, finding a single prompt for a single keypoint takes 30 seconds. We suspect that one could potentially train a semantic correspondence network based on our estimates to achieve both scalability in terms of training data and fast inference time. Figure 11: Distribution of image pairs w.r.t. correspondence correctness - We report the distribution of image pairs according to the percent of correspondences within each image that fall under PCK @0.1 .
For the PF-Willow [13] and CUB-200 [37] datasets, the majority of image pairs have most correspondences correctly localized, demonstrating more than what the accumulated PCK @0.1 shows. For the harder SPair-71k [14] dataset, the results are more spread out.\nThe hyperparameters selected from this process were as follows:\n• U-Net layers: 7, 8, 9, and 10 out of 16. These layers correspond to attention maps of dimensions 16 × 16 for layers 7 to 9, and 32 × 32 for layer 10.\n• Learning rate for prompt optimization: 2.37 × 10^-3.\n• Sigma radius: 27.98.\n• Noise level: added noise of t = 8, where T = 50.\n• Number of optimization steps: 129.\n• Image crop size: crop size as a percentage of the original image is 93.17%." }, { "figure_ref": [ "fig_2" ], "heading": "C Model architecture", "publication_ref": [ "b29", "b58" ], "table_ref": [], "text": "The architecture in Figure 3 is based on the Stable Diffusion model version 1.4 [30]. This architecture is designed to accept an input image of shape 3 × 512 × 512, which is then passed through an encoder to yield a latent of shape 4 × 64 × 64, i.e., with channel dimension C = 4. This encoded image is referred to as z_0. In accordance with the Denoising Diffusion Probabilistic Model (DDPM) [59], noise is added to z_0 to generate z_t.\nThe denoising U-Net architecture for Stable Diffusion comprises a total of 16 layers: 6 layers in the contracting path, 1 layer in the bottleneck, and 9 layers in the expansive path. The progression of the image through these layers, along with the respective dimensions per layer (d_l), is as follows:\n• Contracting path: 64 × 64 (d_l = 40) → 64 × 64 (d_l = 40) → 32 × 32 (d_l = 80) → 32 × 32 (d_l = 80) → 16 × 16 (d_l = 160) → 16 × 16 (d_l = 160)\n• Bottleneck: 8 × 8 (d_l = 160)\n• Expansive path: 16 × 16 (d_l = 160) → 16 × 16 (d_l = 160) → 16 × 16 (d_l = 160) → 32 × 32 (d_l = 80) → 32 × 32 (d_l = 80) → 32 × 32 (d_l = 80) → 64 × 64 (d_l = 40) → 64 × 64 (d_l = 40) → 64 × 64 (d_l = 40)" }, { "figure_ref": [ "fig_10", "fig_10", "fig_10", "fig_10" ], "heading": "D Additional results", "publication_ref": [ "b12", "b36", "b13" ], "table_ref": [], "text": "To provide a more in-depth analysis, in Figure 11 we depict the distribution of image pairs according to the ratio of correspondences within the image pair that achieve PCK @0.1 . For example, an image pair with all correctly estimated correspondences would fall into the 100% bin, whereas one that has only half of the correspondences correct falls into the 50% bin. For the PF-Willow [13] and CUB-200 [37] datasets our approach produces a high PCK @0.1 score for most test image pairs as shown, indicating the effectiveness of our approach. For SPair-71k [14], which is a harder dataset, the results are more evenly spread.\nFor each of the bin ranges for each dataset, we visualize representative image pairs in Figure 12, Figure 13, and Figure 14. Note that in many cases, incorrectly identified correspondences appear to still align with semantically consistent points on the target object; they simply disagree with the annotated labels of the datasets." }, { "figure_ref": [], "heading": "Acknowledgments and Disclosure of Funding", "publication_ref": [], "table_ref": [], "text": "This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant, NSERC Collaborative Research and Development Grant, Google, Digital Research Alliance of Canada, and by Advanced Research Computing at the University of British Columbia."
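To make the layer bookkeeping of Section C concrete, the sketch below encodes the listed per-layer resolutions and shows how attention maps of different sizes are bilinearly resized to a common grid and averaged, as done over layers 7-10 in Section 3.2. Only the resolution/dimension list is taken from the text; the random maps are placeholders for the channel-averaged cross-attention of an actual Stable Diffusion forward pass.

```python
import torch
import torch.nn.functional as F

# (resolution, d_l) for the 16 U-Net layers of SD v1.4, following Section C:
# 6 contracting layers, 1 bottleneck, 9 expansive layers.
UNET_LAYERS = [
    (64, 40), (64, 40), (32, 80), (32, 80), (16, 160), (16, 160),  # contracting
    (8, 160),                                                       # bottleneck
    (16, 160), (16, 160), (16, 160), (32, 80), (32, 80), (32, 80),  # expansive
    (64, 40), (64, 40), (64, 40),
]

def average_attention(maps, out_size=64):
    """Average per-layer attention maps (each of shape (h_l, w_l)) into a
    single (out_size, out_size) map, using bilinear interpolation to handle
    the differing resolutions, as described in Section 3.2."""
    resized = [
        F.interpolate(m[None, None], size=(out_size, out_size),
                      mode="bilinear", align_corners=False)[0, 0]
        for m in maps
    ]
    return torch.stack(resized).mean(dim=0)

# The hyperparameter list selects layers 7-10, i.e. attention maps of
# 16 x 16 (layers 7-9) and 32 x 32 (layer 10) under 0-based indexing of the
# list above. Random maps stand in for real cross-attention responses.
selected = UNET_LAYERS[7:11]
maps = [torch.rand(res, res) for res, _ in selected]
print(average_attention(maps).shape)  # torch.Size([64, 64])
```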
}, { "figure_ref": [], "heading": "Unsupervised Semantic Correspondence Using Stable Diffusion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [ "b13", "b13" ], "table_ref": [], "text": "In this supplementary material we:\n• provide per-category quantitative results on the SPair-71k dataset;\n• provide details of the hyper-parameters used in various experiments;\n• provide details of the architecture of the neural network;\n• and provide additional qualitative results on all datasets.\nFor complete reproducibility, we will release the code of our experiments if the manuscript is accepted.\nA Detailed results for the SPair-71k [14] dataset\nWe report detailed results for the SPair-71k [14] dataset in Table 2. Looking at the per-class performance over the 18 classes in the SPair-71k dataset, it can be seen that our method outperforms all weakly supervised methods on 16 out of 18 classes, and in many cases (bike, car, motorcycle, plant) we have a substantial margin over these methods. We also greatly reduce the margin to strongly supervised methods, and for some classes (bike, chair, motorcycle) we outperform them." }, { "figure_ref": [], "heading": "B Hyperparameter selection", "publication_ref": [ "b13", "b13", "b73", "b12" ], "table_ref": [], "text": "The hyperparameters are selected by carrying out 50 different runs, where each run involves 50 correspondences randomly subsampled from the validation set of the SPair-71k [14] dataset. Due to the limited computational resources at our disposal, we only used a subset of the validation set for searching the hyperparameters. We note that it is possible that a better set of hyperparameters could be found should one use the complete validation set. The best-performing run was then chosen based on its PCK @0.1 metric. Each run was executed over the same set of 50 correspondences, maintaining consistency across all trials. The variation between these runs lies solely in the hyperparameters used, which were selected as follows:\n• U-Net layers: Randomly selected within the range of 7 to 15.\n• Learning rate for prompt optimization: A random value between 5e-4 and 0.01 was chosen for each run.\n• Sigma radius: Selected randomly in the range of 8 to 32.\n• Noise level: Randomly chosen within the range t = 1 to t = 10, where T = 50.\n• Number of optimization steps: Randomly chosen in the range of 100 to 300.\n• Image crop size: The images were cropped consistently within each run, with the crop size set randomly in the range 50%-100%.\nTable 2: SPair-71k [14] detailed results - we report detailed results on the SPair-71k dataset in terms of PCK @0.1 . Bolded numbers are the best results amongst weakly- or un-supervised methods. Our method outperforms all compared weakly supervised baselines and is comparable to CHM [74], a strongly supervised baseline from 2021. Note that on PF-Willow [13] we outperform even strongly supervised ones in terms of PCK @0.1 ." } ]
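The randomized search of Section B can be sketched as below. The sampling ranges follow the list above; `evaluate` is a placeholder for running the full correspondence pipeline on the 50 validation correspondences and returning PCK@0.1, and treating the U-Net layer choice as a single integer is a simplifying assumption.

```python
import random

def sample_config(rng):
    """Draw one hyperparameter configuration from the ranges of Section B."""
    return {
        "unet_layers": rng.randint(7, 15),   # simplification of the layer choice
        "lr": rng.uniform(5e-4, 0.01),       # prompt-optimization learning rate
        "sigma": rng.uniform(8.0, 32.0),     # radius of the Gaussian target
        "noise_t": rng.randint(1, 10),       # noise level, out of T = 50
        "opt_steps": rng.randint(100, 300),  # optimization steps
        "crop_pct": rng.uniform(0.5, 1.0),   # crop size, fixed within a run
    }

def random_search(evaluate, n_runs=50, seed=0):
    """Keep the configuration with the best validation score over n_runs."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_runs):
        cfg = sample_config(rng)
        score = evaluate(cfg)  # PCK@0.1 on the validation subset
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Dummy objective so the snippet runs standalone; in practice this would run
# the full correspondence pipeline on the 50 validation pairs.
dummy_eval = lambda cfg: 1.0 - abs(cfg["lr"] - 2.4e-3) - abs(cfg["crop_pct"] - 0.93)
print(random_search(dummy_eval, n_runs=50))
```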
2023-12-23
[ { "authors": "B B Hansen; B S Morse", "journal": "", "ref_id": "b0", "title": "Proceedings of the ieee conf. on computer vision and pattern recognition", "year": "1999" }, { "authors": "Serge Belongie; Jitendra Malik; Jan Puzicha", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b1", "title": "Shape context: A new descriptor for shape matching and object recognition", "year": "2000" }, { "authors": "Johannes Lutz; Schönberger ; Jan-Michael Frahm", "journal": "", "ref_id": "b2", "title": "Structure-from-motion revisited", "year": "2016" }, { "authors": " David G Lowe", "journal": "International Journal of Computer Vision", "ref_id": "b3", "title": "Distinctive image features from scale-invariant keypoints", "year": "2004" }, { "authors": "Alex Fisher; Ricardo Cannizzaro; Madeleine Cochrane; Chatura Nagahawatte; Jennifer L Palmer", "journal": "Robotics and Autonomous Systems", "ref_id": "b4", "title": "Colmap: A memory-efficient occupancy grid mapping framework", "year": "2021" }, { "authors": "Kwang Moo; Yi ; Eduard Trulls; Vincent Lepetit; Pascal Fua", "journal": "", "ref_id": "b5", "title": "Lift: Learned invariant feature transform", "year": "2016" }, { "authors": "Tomasz Daniel Detone; Andrew Malisiewicz; Rabinovich", "journal": "", "ref_id": "b6", "title": "Superpoint: Self-supervised interest point detection and description", "year": "2018" }, { "authors": "Paul-Edouard Sarlin; Daniel Detone; Tomasz Malisiewicz; Andrew Rabinovich", "journal": "", "ref_id": "b7", "title": "Superglue: Learning feature matching with graph neural networks", "year": "2020" }, { "authors": "Deqing Sun; Xiaodong Yang; Ming-Yu Liu; Jan Kautz", "journal": "", "ref_id": "b8", "title": "Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume", "year": "2018" }, { "authors": "Zachary Teed; Jia Deng", "journal": "", "ref_id": "b9", "title": "Raft: Recurrent all-pairs field transforms for optical flow", "year": "2020" }, { "authors": "Ce Liu; Jenny Yuen; Antonio Torralba", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b10", "title": "Sift flow: Dense correspondence across scenes and its applications", "year": "2010" }, { "authors": "Enze Xie; Wenhai Wang; Zhiding Yu; Anima Anandkumar; Jose M Alvarez; Ping Luo", "journal": "", "ref_id": "b11", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Bumsub Ham; Minsu Cho; Cordelia Schmid; Jean Ponce", "journal": "", "ref_id": "b12", "title": "Proposal flow: Semantic correspondences from object proposals", "year": "2017" }, { "authors": "Juhong Min; Jongmin Lee; Jean Ponce; Minsu Cho", "journal": "", "ref_id": "b13", "title": "Spair-71k: A large-scale benchmark for semantic correspondence", "year": "2019" }, { "authors": "Sunghwan Hong; Seokju Cho; Jisu Nam; Stephen Lin; Seungryong Kim", "journal": "", "ref_id": "b14", "title": "Cost Aggregation with 4D Convolutional Swin Transformer for Few-Shot Segmentation", "year": "2022" }, { "authors": "Seokju Cho; Sunghwan Hong; Seungryong Kim", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b15", "title": "Cats++: Boosting cost aggregation with convolutions and transformers", "year": "2022" }, { "authors": "Wei Jiang; Eduard Trulls; Jan Hosang; Andrea Tagliasacchi; Kwang Moo; Yi ", "journal": "", "ref_id": "b16", "title": "COTR: Correspondence Transformer for Matching Across Images", "year": "2021" }, { "authors": "Jiaming Sun; 
Zehong Shen; Yuang Wang; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b17", "title": "LoFTR: Detectorfree local feature matching with transformers", "year": "2021" }, { "authors": "Zhengqi Li; Noah Snavely", "journal": "", "ref_id": "b18", "title": "Megadepth: Learning single-view depth prediction from internet photos", "year": "2018" }, { "authors": "Yuhe Jin; Dmytro Mishkin; Anastasiia Mishchuk; Jiri Matas; Pascal Fua; Kwang Moo Yi; Eduard Trulls", "journal": "International Journal of Computer Vision", "ref_id": "b19", "title": "Image Matching across Wide Baselines: From Paper to Practice", "year": "2021" }, { "authors": "Xin Li; Deng-Ping Fan; Fan Yang; Ao Luo; Hong Cheng; Zicheng Liu", "journal": "", "ref_id": "b20", "title": "Probabilistic model distillation for semantic correspondence", "year": "2021" }, { "authors": "Jiwon Kim; Kwangrok Ryoo; Junyoung Seo; Gyuseong Lee; Daehwan Kim; Hansang Cho; Seungryong Kim", "journal": "", "ref_id": "b21", "title": "Semi-Supervised Learning of Semantic Correspondence with Pseudo-Labels", "year": "2022" }, { "authors": "Kamal Gupta; Varun Jampani; Carlos Esteves; Abhinav Shrivastava; Ameesh Makadia; Noah Snavely; Abhishek Kar", "journal": "", "ref_id": "b22", "title": "Asic: Aligning sparse in-the-wild image collections", "year": "2023" }, { "authors": "Shuying Liu; Weihong Deng", "journal": "", "ref_id": "b23", "title": "Very deep convolutional neural network based image classification using small training sample size", "year": "2015" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b24", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b25", "title": "Classifier-free diffusion guidance", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "", "ref_id": "b26", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b27", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b28", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b29", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "", "ref_id": "b30", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "", "ref_id": "b31", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Ron Mokady; Amir Hertz; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", 
"ref_id": "b32", "title": "Null-text inversion for editing real images using guided diffusion models", "year": "2022" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "", "ref_id": "b33", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2022" }, { "authors": "Zhengcong Fei; Mingyuan Fan; Junshi Huang", "journal": "", "ref_id": "b34", "title": "Gradient-free textual inversion", "year": "2023" }, { "authors": "Anton Voronov; Mikhail Khoroshikh; Artem Babenko; Max Ryabinin", "journal": "", "ref_id": "b35", "title": "Is This Loss Informative? Speeding Up Textual Inversion with Deterministic Objective Evaluation", "year": "2023" }, { "authors": "Catherine Wah; Steve Branson; Peter Welinder; Pietro Perona; Serge Belongie", "journal": "", "ref_id": "b36", "title": "The Caltech-UCSD Birds-200-2011 Dataset", "year": "2011" }, { "authors": "D Bruce; Takeo Lucas; Kanade", "journal": "", "ref_id": "b37", "title": "An Iterative Image Registration Technique With an Application to Stereo Vision", "year": "1981" }, { "authors": "Sameer Agarwal; Yasutaka Furukawa; Noah Snavely; Ian Simon; Brian Curless; Steven M Seitz; Rick Szeliski", "journal": "Communications of the ACM", "ref_id": "b38", "title": "Building rome in a day", "year": "2011" }, { "authors": "Jan-Michael Frahm; Pierre Fite-Georgel; David Gallup; Tim Johnson; Rahul Raguram; Changchang Wu; Yi-Hung Jen; Enrique Dunn; Brian Clipp; Svetlana Lazebnik; Marc Pollefeys", "journal": "", "ref_id": "b39", "title": "Building rome on a cloudless day", "year": "2010" }, { "authors": "Ondrej Chum; Tomas Werner; Jiri Matas", "journal": "", "ref_id": "b40", "title": "Two-View Geometry Estimation Unaffected by a Dominant Plane", "year": "2005" }, { "authors": "Kwang Moo; Yi ; Eduard Trulls; Yuki Ono; Vincent Lepetit; Mathieu Salzmann; Pascal Fua", "journal": "", "ref_id": "b41", "title": "Learning to Find Good Correspondences", "year": "2018" }, { "authors": "Weiwei Sun; Wei Jiang; Andrea Tagliasacchi; Eduard Trulls; Kwang Moo; Yi ", "journal": "", "ref_id": "b42", "title": "Attentive context normalization for robust permutation-equivariant learning", "year": "2020" }, { "authors": "Prune Truong; Martin Danelljan; Radu Timofte", "journal": "", "ref_id": "b43", "title": "GLU-Net: Global-local universal network for dense flow and correspondences", "year": "2020" }, { "authors": "Prune Truong; Martin Danelljan; Luc Van Gool; Radu Timofte", "journal": "", "ref_id": "b44", "title": "GOCor: Bringing globally optimized correspondence volumes into your neural network", "year": "2020" }, { "authors": "Kamal Gupta; Varun Jampani; Carlos Esteves; Abhinav Shrivastava; Ameesh Makadia; Noah Snavely; Abhishek Kar", "journal": "", "ref_id": "b45", "title": "Asic: Aligning sparse in-the-wild image collections", "year": "2023" }, { "authors": "Jiayi Ma; Xingyu Jiang; Aoxiang Fan; Junjun Jiang; Junchi Yan", "journal": "International Journal of Computer Vision", "ref_id": "b46", "title": "Image matching from handcrafted to deep features: A survey", "year": "2020" }, { "authors": "He Wang; Srinath Sridhar; Jingwei Huang; Julien Valentin; Shuran Song; Leonidas J Guibas", "journal": "", "ref_id": "b47", "title": "Normalized object coordinate space for category-level 6d object pose and size estimation", "year": "2019" }, { "authors": "Christopher Xie; Keunhong Park; Ricardo Martin-Brualla; Matthew Brown", "journal": "", "ref_id": "b48", 
"title": "Fig-nerf: Figure-ground neural radiance fields for 3d object category modelling", "year": "2021" }, { "authors": "Bumsub Ham; Minsu Cho; Cordelia Schmid; Jean Ponce", "journal": "", "ref_id": "b49", "title": "Proposal flow", "year": "2016" }, { "authors": "Gopal Sharma; Kangxue Yin; Subhransu Maji; Evangelos Kalogerakis; Or Litany; Sanja Fidler", "journal": "", "ref_id": "b50", "title": "Mvdecor: Multi-view dense correspondence learning for fine-grained 3d segmentation", "year": "2022" }, { "authors": "Sangryul Jeon; Seungryong Kim; Dongbo Min; Kwanghoon Sohn", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b51", "title": "Pyramidal semantic correspondence networks", "year": "2021" }, { "authors": "Yann Lecun; Yoshua Bengio", "journal": "", "ref_id": "b52", "title": "Convolutional Networks for Images, Speech, and Time Series", "year": "1998" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b53", "title": "Attention is all you need", "year": "2017" }, { "authors": "D J Butler; J Wulff; G B Stanley; M J Black", "journal": "", "ref_id": "b54", "title": "A naturalistic open source movie for optical flow evaluation", "year": "2012" }, { "authors": "Jing Kfir Aberman; Mingyi Liao; Dani Shi; Baoquan Lischinski; Daniel Chen; Cohen-Or", "journal": "ACM Transactions on Graphics", "ref_id": "b55", "title": "Neural best-buddies: Sparse cross-domain correspondence", "year": "2018" }, { "authors": "Shir Amir; Yossi Gandelsman; Shai Bagon; Tali Dekel", "journal": "", "ref_id": "b56", "title": "Deep vit features as dense visual descriptors", "year": "2021" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b57", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal", "journal": "", "ref_id": "b58", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "", "ref_id": "b59", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b60", "title": "DreamBooth: Fine Tuning Text-to-image Diffusion Models for Subject-Driven Generation", "year": "2023" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b61", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b62", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b63", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Dmitry Baranchuk; Ivan Rubachev; Andrey Voynov; Valentin Khrulkov; Artem Babenko", "journal": "", "ref_id": "b64", "title": "Label-Efficient Semantic Segmentation with Diffusion Models", "year": "2022" }, { "authors": "Anthony Simeonov; Yilun Du; Andrea Tagliasacchi; Joshua B 
Tenenbaum; Alberto Rodriguez; Pulkit Agrawal; Vincent Sitzmann", "journal": "", "ref_id": "b65", "title": "Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation", "year": "2022" }, { "authors": "Grace Luo; Lisa Dunlap; Dong Huk Park; Aleksander Holynski; Trevor Darrell", "journal": "", "ref_id": "b66", "title": "Diffusion hyperfeatures: Searching through time and space for semantic correspondence", "year": "2023" }, { "authors": "Junyi Zhang; Charles Herrmann; Junhwa Hur; Luisa Polania Cabrera; Varun Jampani; Deqing Sun; Ming-Hsuan Yang", "journal": "", "ref_id": "b67", "title": "A tale of two features: Stable diffusion complements dino for zero-shot semantic correspondence", "year": "2023" }, { "authors": "Luming Tang; Menglin Jia; Qianqian Wang; Cheng Perng Phoo; Bharath Hariharan", "journal": "", "ref_id": "b68", "title": "Emergent correspondence from image diffusion", "year": "2023" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "", "ref_id": "b69", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "Aliasghar Khani; Asgari Saeid; Aditya Taghanaki; Ali Sanghi; Ghassan Mahdavi Amiri; Hamarneh", "journal": "", "ref_id": "b70", "title": "Slime: Segment like me", "year": "2023" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b71", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b72", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Juhong Min; Seungwook Kim; Minsu Cho", "journal": "", "ref_id": "b73", "title": "Convolutional Hough Matching Networks for Robust and Efficient Visual Correspondence", "year": "2021" }, { "authors": "Mathilde Caron; Ishan Misra; Julien Mairal; Priya Goyal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b74", "title": "Unsupervised learning of visual features by contrasting cluster assignments", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 225.16, 707.26, 275.73, 18.92 ], "formula_id": "formula_0", "formula_text": "L DM = E I,t,ϵ∼N (0,1) ∥ϵ -ϵ θ (I t , t)∥ 2 2 . (1" }, { "formula_coordinates": [ 4, 500.89, 707.4, 3.87, 12.01 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 5, 219.13, 464.33, 285.63, 18.92 ], "formula_id": "formula_2", "formula_text": "L LDM = E z,t,ϵ∼N (0,1) [∥ϵ -ϵ θ (z t , t, e)∥ 2 2 ]..(2)" }, { "formula_coordinates": [ 5, 176.37, 633.29, 328.39, 13.78 ], "formula_id": "formula_3", "formula_text": "M ′ l (e, I) = CrossAttention(Q l , K l ) = softmax Q l K ⊤ l / d l ,(3)" }, { "formula_coordinates": [ 6, 235.39, 480.58, 141.22, 18.92 ], "formula_id": "formula_4", "formula_text": "M s (u) = exp -∥u -u i ∥ 2 2 /2σ 2 ." }, { "formula_coordinates": [ 6, 216.72, 523.36, 288.04, 22.78 ], "formula_id": "formula_5", "formula_text": "e * = arg min e u ∥M(u; e, I i ) -M s (u)∥ 2 2 ,(5)" }, { "formula_coordinates": [ 6, 246.41, 591.45, 258.35, 19.23 ], "formula_id": "formula_6", "formula_text": "u j = arg max u M(u; e * , I j ).(6)" }, { "formula_coordinates": [ 7, 181.81, 442.14, 322.95, 22.78 ], "formula_id": "formula_7", "formula_text": "e * = arg min e E c∼D u ∥C c (M(u; e, I i )) -C c (M s (u))∥ 2 2 ,(7)" }, { "formula_coordinates": [ 7, 214.05, 489.16, 290.71, 19.23 ], "formula_id": "formula_8", "formula_text": "u j = arg max u E c∼D U c (M(u; e * , C c (I j ))).(8)" }, { "formula_coordinates": [ 7, 220.27, 573.32, 280.62, 18.87 ], "formula_id": "formula_9", "formula_text": "u j = arg max u E ē∼D M(u; O(ē, I i ), I j ). (9" }, { "formula_coordinates": [ 7, 500.89, 573.32, 3.87, 12.01 ], "formula_id": "formula_10", "formula_text": ")" } ]
Unsupervised Semantic Correspondence Using Stable Diffusion
Text-to-image diffusion models are now capable of generating images that are often indistinguishable from real images. To generate such images, these models must understand the semantics of the objects they are asked to generate. In this work, we show that, without any training, one can leverage this semantic knowledge within diffusion models to find semantic correspondences: locations in multiple images that have the same semantic meaning. Specifically, given an image, we optimize the prompt embeddings of these models for maximum attention on the regions of interest. These optimized embeddings capture semantic information about the location, which can then be transferred to another image. By doing so, we obtain results on par with the strongly supervised state of the art on the PF-Willow dataset and significantly outperform (20.9% relative for the SPair-71k dataset) any existing weakly supervised or unsupervised method on the PF-Willow, CUB-200, and SPair-71k datasets.
Eric Hedlin; Gopal Sharma; Shweta Mahajan; Hossam Isack; Abhishek Kar; Andrea Tagliasacchi; Kwang Moo Yi
[ { "figure_caption": "Figure 2 :2Figure 2: Semantic knowledge in diffusion models -Given an input image and text prompts describing parts of the image, the attention maps of diffusion models highlight semantically relevant areas of the image. We visualize the attention maps superimposed atop the input image.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Method -(Top) Given a source image and a query point, we optimize the embeddings so that the attention map for the denoising step at time t highlights the query location in the source image. (Bottom) During inference, we input the target image and reuse the embeddings for the same denoising step t, determining the corresponding point in the target image as the argmax of the attention map. The architecture mapping images to attention maps is a pre-trained Stable Diffusion model[30] which is kept frozen throughout the entire process.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Attention response of different layers -We show example attention maps (b-e) for particular U-Net layers for the optimized embedding that correspond to the location marked with the yellow star on the source image (a) and corresponding average in image (f). We use the same embedding for another (target) image (g) and display its attention map as well (h-k) and their average (l). The ground-truth semantically corresponding region in the target image is marked also with the yellow star. Earlier layers respond more broadly, whereas later layers are more specific. To utilize these different characteristics of each layer we average that of multiple layers into a single attention map.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Ablations -We ablate performance as measured by the PCK @0.05 metric on the PF-Willow [13] dataset. (a) Using multiple layers; (b) Using optimization augmentations; (c) Using crop augmentations. Dashed-line denotes the performance of our full method. Note that in (a) individual layer performance is significantly worse, showing that the information within layers is complimentary. In (b) and (c) using more embeddings and crops leads to improved performance.", "figure_data": "", "figure_id": "fig_5", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Semantic correspondence between different classes -We show examples of applying our method to pairs of image from different classes. We manually mark those that seem arguably correct with blue. Our method in many cases generalizes well.", "figure_data": "", "figure_id": "fig_6", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "•Contracting path: 64 × 64 (d l = 40) → 64 × 64 (d l = 40) → 32 × 32 (d l = 80) → 32 × 32 (d l = 80) → 16 × 16 (d l = 160) → 16 × 16 (d l = 160) • Bottleneck: 8 × 8 (d l = 160) • Expansive path: 16 × 16 (d l = 160) → 16 × 16 (d l = 160) → 16 × 16 (d l = 160) → 32 × 32 (d l = 80) → 32 × 32 (d l = 80) → 32 × 32 (d l = 80) → 64 × 64 (d l = 40) → 64 × 64 (d l = 40) → 64 × 64 (d l = 40)A typical U-Net layer in text conditioned latent diffusion models[30] is augmented with the crossattention mechanism for conditioning on the prompts. 
The queries in this mechanism are the projections of the flattened intermediate representations of the U-Net, and the keys and the values are the projections of the prompt embeddings. The total length of tokens for this model, P , is 77 where each token has a dimensionality of 768.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 15 :Figure 16 :Figure 17 :151617Figure 15: Correct attention map example for SPair-71k [14] -The model attends to both eyes in the target image, yet it demonstrates a slight preference towards the correct eye. Ground-truth correspondences are marked as yellow star.", "figure_data": "", "figure_id": "fig_9", "figure_label": "151617", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Example cat image Figure 2: Example bird image Figure 3: Attention map for our optimized embedding for the point on the bird's eye.", "figure_data": "", "figure_id": "fig_10", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 4 :Figure 5 :45Figure 4: Attention maps for each of the tokens corresponding to the sentence \"A picture of a cat\"", "figure_data": "", "figure_id": "fig_11", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "arXiv:2305.15581v2 [cs.CV] 23 Dec 2023", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "7, 8, 9, and 10 out of 16. These layers correspond to attention maps of dimensions 16 × 16 for layers 7 to 9, and 32 × 32 for layer 10. • Learning rate for prompt optimization: 2.37 × 10 -3 • Sigma radius: 27.98 • Noise level: Added noise of t = 8 where T = 50 • Number of optimization steps: 129 • Image crop size: Crop size as a percentage of the original image is 93.17%", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "1 Updated table to include PWarpC results and move 'DINO+NN' to unsupervised.", "figure_data": "Strong supervisionCHM [69] VAT [15] CATs++ [16] PWarpC-NC-Net*res101 [70]--------52.7 52.8 56.7 48.079.4 81.6 81.2 76.227.2 35.0 -21.546.3 55.5 59.8 37.1Weak supervisionPMD [21] PSCNet-SE [52] VGG+MLS [56] DINO+MLS [56,71] ASIC [46] PWarpC-NC-Netres101 [70]--18.3 52.0 57.9 ---25.8 67.0 75.9 -40.3 42.6 41.2 45.0 53.0 45.074.7 75.1 63.2 66.5 76.3 75.9-----18.226.5 27.0 27.4 31.1 36.9 35.3UnsupervisedDINO+NN [57] Our method52.8 61.668.3 77.540.1 53.060.1 84.3-28.933.3 45.4", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "[1]", "Explanation": "The cited work in [1] provides a foundation for the discussion of image registration, which is a key application area for estimating point correspondences in computer vision."}, {"Category": "Supporting Evidence", "Citation": "[1]", "Explanation": "The cited work in [1] also highlights the importance of point correspondences in object recognition, which is another key application area for estimating point correspondences in computer vision."}, {"Category": "Supporting Evidence", "Citation": "[1]", "Explanation": "The cited work in [1] further emphasizes the significance of point correspondences in 3D reconstruction, which is another important application area for estimating point correspondences in computer vision."}, {"Category": "Methodological Basis", "Citation": "[4,5,6,7,8]", "Explanation": "The cited works in [4,5,6,7,8] provide methodological basis for solving geometric correspondence search with local feature-based methods, which is a key approach for estimating point correspondences in computer vision."}, {"Category": "Methodological Basis", "Citation": "[9,10]", "Explanation": "The cited works in [9,10] offer methodological basis for solving geometric correspondence search with optical flow methods, which is another key approach for estimating point correspondences in computer vision."}, {"Category": "Methodological Basis", "Citation": "[11,12,13,14,15,16]", "Explanation": "The cited works in [11,12,13,14,15,16] provide methodological basis for solving semantic correspondence search, which is a key application area for estimating point correspondences in computer vision."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work provides a method for learning to solve problems in computer vision, which the citing paper adopts in their research on geometric and semantic correspondences."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work is a data source for the research on performance in computer vision, which the citing paper uses to compare and assess the results of their study."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work is an extension of research on performance in computer vision, which the citing paper builds upon in their study of geometric and semantic correspondences."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work provides a method for learning-based performance in computer vision, which the citing paper adopts in their research on geometric and semantic correspondences."}, {"Category": "Data Source", "Citation": "[19]", "Explanation": "The cited work is a data source for the research on geometric verification processes, which the citing paper uses to assess the quality of the data used in their study."}, {"Category": "Extension or Continuation", "Citation": "[21]", "Explanation": "The cited work is an extension of research on unsupervised learned semantic correspondence, which the citing paper builds upon in their study of geometric and semantic correspondences."}, {"Category": "Extension or Continuation", "Citation": "[22]", "Explanation": "The cited work is an extension of research on learned semantic correspondence, which the citing paper builds upon in their study of geometric and semantic correspondences."}, {"Category": "Extension or Continuation", "Citation": "[23]", "Explanation": "The cited work is an extension of research on learned 
semantic correspondence, which the citing paper builds upon in their study of geometric and semantic correspondences."}, {"Category": "Methodological Basis", "Citation": "[26,27,28,29,30]", "Explanation": "The cited works on diffusion models provide the key insight that these models can generate photo-realistic images from text prompts, which the citing paper leverages to extract knowledge about semantic correspondences for solving the problem of semantic correspondences."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work of prompt-to-prompt is used as a basis for the method built in the citing paper, which exploits attention maps of latent diffusion models to perform text-based image editing."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work, stable diffusion, is the diffusion model that the citing paper uses in their research to find the (localized) embeddings of a given image and location."}, {"Category": "Methodological Basis", "Citation": "[36,35]", "Explanation": "The cited works provide a method for textual inversion that the citing paper uses to find multiple embeddings starting from random initialization to avoid instability and randomness in the process."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work provides the PF-Willow dataset, which the citing paper uses in their research to assess the performance of their method in comparison to strongly-supervised state of the art methods."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The cited work provides the SPair-71k dataset, which the citing paper uses to demonstrate the performance of their method in comparison to other weakly and un-supervised baselines. 
The citing paper also outperforms the closest weakly supervised baseline on this dataset by a significant margin."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work, Stable Diffusion, is used as a foundational model for extracting semantic correspondences in the citing paper without the need for training or using any labels."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The SPair-71k dataset is used in the citing paper to evaluate the performance of the method presented in the cited work, which is a continuation of the research in the cited work."}, {"Category": "Extension or Continuation", "Citation": "[13]", "Explanation": "The PF-Willow and CUB-200 datasets are also used in the citing paper to evaluate the performance of the method presented in the cited work, which is a continuation of the research in the cited work."}, {"Category": "Supporting Evidence", "Citation": "[11]", "Explanation": "The cited work on semantic flow is used to highlight the task of finding semantic correspondences in images, which is a key building block in various computer vision tasks."}, {"Category": "Supporting Evidence", "Citation": "[16]", "Explanation": "The cited work on semantic flow is further discussed in the context of finding semantic correspondences in images, emphasizing the importance of this task in computer vision."}, {"Category": "Supporting Evidence", "Citation": "[15]", "Explanation": "The cited work on semantic flow is mentioned again to highlight the task of finding semantic correspondences in images, emphasizing the need for efficient and effective methods in this area."}, {"Category": "Supporting Evidence", "Citation": "[46]", "Explanation": "The cited work on semantic flow is used to illustrate the task of finding semantic correspondences in images, providing a concrete example of the challenge in this area."}, {"Category": "Supporting Evidence", "Citation": "[47]", "Explanation": "The cited work is referred to as a wider review of the task of finding semantic correspondences in images, providing a comprehensive overview of the field and the challenges involved."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work introduces an architecture that utilizes both convolutional neural networks and transformers to discover semantic correspondences in feature maps, which the citing paper adopts in its research on learning semantic correspondences with less supervision."}, {"Category": "Extension or Continuation", "Citation": "[14,13]", "Explanation": "The cited works focus on the use of large-scale datasets that are sparsely labeled, which the citing paper extends by exploring the potential of these datasets in learning semantic correspondences with less supervision."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work provides a dataset that the citing paper utilizes in its research on learning semantic correspondences with less supervision, as it is a key element for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work provides a probabilistic student-teacher framework for knowledge distillation from synthetic data, which the citing paper adopts in their research to improve the performance of unlabeled real image pairs."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work forms a semi-supervised framework that uses unsupervised losses 
formed via augmentation, which the citing paper utilizes in their research to improve the performance of unlabeled real image pairs."}, {"Category": "Extension or Continuation", "Citation": "[56]", "Explanation": "The cited work on neural best-buddies looks for mutual nearest neighbor neuron pairs in pre-trained CNNs to discover semantic correspondences, which the citing paper extends by exploring the use of pre-trained deep features in a new context."}, {"Category": "Extension or Continuation", "Citation": "[57]", "Explanation": "The cited work on the use of deep features from pre-trained ViT in DINO-ViT is extended in the citing paper by investigating the use of these features in a new context."}, {"Category": "Extension or Continuation", "Citation": "[23]", "Explanation": "The cited work on the use of pre-trained network representations in rough correspondences is extended in the citing paper by training a network to map images into a canonical grid for semantic correspondence extraction."}, {"Category": "Supporting Evidence", "Citation": "[30]", "Explanation": "The cited work, Stable Diffusion, is a popular model of choice in the field of diffusion models due to its ability to generate high-quality samples and its open-source nature, which is discussed in the citing paper as a key factor in its research."}, {"Category": "Data Source", "Citation": "[61]", "Explanation": "The cited work, model personalization through techniques like LoRA and textual inversion, is mentioned in the citing paper as a method for extending the capabilities of diffusion models, which the citing paper utilizes in its research."}, {"Category": "Extension or Continuation", "Citation": "[62]", "Explanation": "The cited work, Low Rank Adaptation (LoRA), is discussed in the citing paper as a technique for model personalization in diffusion models, which the citing paper extends by exploring new methods for improving the capabilities of these models."}, {"Category": "Extension or Continuation", "Citation": "[63]", "Explanation": "The cited work, ControlNet, is mentioned in the citing paper as a method for adding new conditioning signals in diffusion models, which the citing paper builds upon to explore new ways of improving the performance of these models."}, {"Category": "Extension or Continuation", "Citation": "[64]", "Explanation": "The cited work, text-to-3D generation, is discussed in the citing paper as a repurposing of diffusion models for new purposes, which the citing paper extends by exploring the use of these models in a different context."}, {"Category": "Extension or Continuation", "Citation": "[33]", "Explanation": "The cited work, text-driven image editing, is mentioned in the citing paper as a new application of diffusion models, which the citing paper builds upon to further expand the capabilities of these models in new areas."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work provides the finding that cross-attention maps within diffusion models can act as a pseudo-segmentation for text queries, which the citing paper utilizes in their research to repurpose the attention maps for a new application."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work on textual inversion is used in the citing paper to repurpose the attention maps within diffusion models for a new application, without any training."}, {"Category": "Extension or Continuation", "Citation": "[66]", "Explanation": "The cited work is mentioned as a similar 
approach to the repurposing of attention maps in the citing paper, but for a different application and framework."}, {"Category": "Extension or Continuation", "Citation": "[67,68,69]", "Explanation": "The cited works on feature representations within Stable Diffusion are mentioned as concurrent to the research in the citing paper, focusing on using deep features effectively for various tasks."}, {"Category": "Data Source", "Citation": "[70]", "Explanation": "The cited work on VGG19 features is mentioned as a widely used approach for various tasks, providing a reference for the use of deep features in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work on latent diffusion models is the basis for the method used in the citing paper to identify semantic correspondences between pairs of images."}, {"Category": "Data Source", "Citation": "[71]", "Explanation": "The cited work on segmentation is a data source for the extension of the method in the citing paper to perform segmentation tasks."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work introduces the concept of latent diffusion models (LDMs), which the citing paper adopts in their research by using a latent representation to operate on images."}, {"Category": "Methodological Basis", "Citation": "[59,30]", "Explanation": "The cited works provide the transformer architecture used in the denoiser for a text-conditional LDM, which the citing paper adopts in its research."}, {"Category": "Methodological Basis", "Citation": "[72]", "Explanation": "The cited work, U-Net, is used as a method for denoising in the latent space, which the citing paper adopts in their research to compute query and key for the cross-attention layer in the attention mask computation process."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces the concept of attention maps and bilinear interpolation, which the citing paper adopts to index pixels in the attention maps and perform averaging operations."}, {"Category": "Methodological Basis", "Citation": "[35,36]", "Explanation": "The cited works have shown that the process of finding text embeddings via optimization and textual inversion is sensitive to initialization, which the citing paper addresses by applying various forms of regularization to overcome these issues."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The cited work, SPair-71k, is a standard dataset for evaluating semantic correspondences, and the citing paper extends the use of this dataset to perform evaluation of semantic correspondence search."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work, PF-Willow, is a dataset that comprises four classes and 900 correspondences in the test set, which the citing paper utilizes in their research on semantic correspondence search."}, {"Category": "Data Source", "Citation": "[37]", "Explanation": "The cited work, CUB-200, is a dataset of 200 different classes of birds, and the citing paper selects the first three classes to yield a total of 1,248 correspondences in the test set for their research on semantic correspondence search."}, {"Category": "Data Source", "Citation": "[13,14,23]", "Explanation": "The cited works provide the standard metric for measuring performance in the citing paper, which is used to evaluate the performance of the method under study."}, {"Category": "Methodological Basis", "Citation": 
"[73]", "Explanation": "The cited work, Adam optimizer, is used in the citing paper to find the prompts for the optimization process."}, {"Category": "Methodological Basis", "Citation": "[21,52,46]", "Explanation": "The cited works provide weakly-supervised baselines that the citing paper compares its method against, serving as a methodological basis for evaluating the performance of the method."}, {"Category": "Extension or Continuation", "Citation": "[46]", "Explanation": "The cited work ASIC provides a baseline for general-purpose deep features with Moving Least Squares and DINO features with Moving Least Squares or Nearest Neighbor matching, which the citing paper extends by comparing its method against these baselines."}, {"Category": "Supporting Evidence", "Citation": "[15]", "Explanation": "The cited work VAT is a recent method that the citing paper compares its results against in the case of the SPair-71k dataset, providing supporting evidence for the performance of the method in a specific context."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work, PF-Willow, serves as a basis for the ablation study conducted in the citing paper, providing the dataset and PCK @0.05 metric for evaluation."}, {"Category": "Supporting Evidence", "Citation": "[13]", "Explanation": "The cited work, PF-Willow dataset, serves as a benchmark for evaluating the performance of semantic correspondence estimation methods. The citing paper demonstrates that the method proposed in the cited work achieves performance on par with strongly supervised methods in the dataset."}, {"Category": "Extension or Continuation", "Citation": "[49]", "Explanation": "The cited work, FigNeRF, is a generative model that the citing paper extends by using our method to scale up training without human supervision."}, {"Category": "Data Source", "Citation": "[13], [37]", "Explanation": "The cited works, PF-Willow and CUB-200 datasets, are used as a source of data for the analysis conducted in the citing paper to evaluate the performance of the method in terms of correspondence correctness."}, {"Category": "Data Source", "Citation": "[14]", "Explanation": "The SPair-71k dataset is cited as a data source for the analysis conducted in the citing paper to demonstrate the distribution of image pairs in terms of correspondence correctness."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work provides the basis for the stable diffusion model version 1.4 that the citing paper builds upon in their research on image generation."}, {"Category": "Data Source", "Citation": "[59]", "Explanation": "The cited work is referenced for the Denoising Diffusion Probabilistic Model (DDPM) that the citing paper uses in their research on image generation."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work provides the PF-Willow dataset, which the citing paper uses to evaluate the performance of their approach in estimating correspondences."}, {"Category": "Data Source", "Citation": "[37]", "Explanation": "The cited work provides the CUB-200 dataset, which the citing paper uses to assess the effectiveness of their approach in estimating correspondences."}, {"Category": "Data Source", "Citation": "[14]", "Explanation": "The cited work provides the SPair-71k dataset, which the citing paper uses to demonstrate the performance of their approach in a more challenging setting for estimating correspondences."}, {"Category": "Supporting Evidence", 
"Citation": "[14]", "Explanation": "The cited work, SPair-71k dataset, provides a comprehensive dataset for the evaluation of weakly supervised methods in the field of image recognition, which supports the claims and findings of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work, SPair-71k, provides a dataset and a method for searching the hyperparameters used in the citing paper, which the citing paper adopts in their research to find the best performing run for their study."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work, SPair-71k, provides a dataset and a method for evaluating the performance of a model in terms of PCK @0.1, which the citing paper uses to compare the performance of their own model against other weakly supervised baselines."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b9", "b10", "b28", "b17", "b12", "b15", "b30", "b14", "b5", "b29", "b0", "b27", "b18", "b36", "b6" ], "table_ref": [], "text": "Multi-style text transfer is a challenging task today with applications such as automatic domainappropriate, style-conformant writing (Fu et al., 2018) and AI-assisted stylistic language editing. Text style transfer is an intricate task as all language has a specific context, and those contexts influence the attributes of the language (Hovy and Yang, 2021). Text style transfer is challenging because it involves dealing with the aspects of style coupled with the textual content (Hu et al., 2017;Shen et al., 2017;Lample et al., 2018). This domain's other obstacles include the need for parallel corpus (Jhamtani et al., 2017) and quality training data. As the number of style dimensions increases with multi-style text transfer, not only is the requirement of a jointly annotated corpus across all the Figure 1: When an input sentence is passed to the multistyle transfer model, to increase formality and decrease arousal, we hypothesize that when the model is trained on a balanced joint distribution of formality and arousal (all four style combinations have a 25% representation)the style transfer is more successful as opposed to when the model is trained on a skewed joint distribution (there is no representation of the \"informal unaroused\" style combination) of styles in the training data. stylistic dimensions problematic, but the different styles are not necessarily independent. While \"style\" can also refer to authorial or domainspecific style, in this paper, we focus on \"microstyles\" as defined by (Kang and Hovy, 2021) where they define \"micro-style\" as a complex combination of different factors such as formality markers, emotions, and metaphors. People intentionally (Troiano et al., 2021) tune these styles in writing differently based on their mood, the person they are addressing, the content of the message, or the platform. Multiple micro-styles can jointly describe a text; for example, a given text could simultaneously be formal and sad. Micro-styles also more easily lend themselves to being represented as spectra with varying degrees of intensity. These points align with our vision of an application where users can edit micro-style aspects of their writing.\nMuch research exists on models implementing multi-style text transfer and interdependency of micro-styles (Kang and Hovy, 2019;Goyal et al., Figure 2: The input sentence transitions through every step in our multi-style text style transfer pipeline. The box in red indicates our main contribution to the pipeline, which helps us explore the effects of joint micro-style combinations on style-transferred output. 2020; Subramanian et al., 2018). However, there needs to be more exploration of the joint distribution of inherent micro-styles in the style transfer training dataset and how these micro-style distributions are related. Therefore, we pose a question -Can a dataset with minimal variance across multiple micro-style combinations, such that it experiences a \"balancing effect\", lead to a better style transferred output ? Figure 1 illustrates our intuition that a dataset that experiences a \"balancing effect\" will have more control over the multi-style transferred output than a \"skewed\" dataset. 
Suppose the style transfer model sees examples of every style combination that can exist -this could aid in the style generation of even unlikely combinations of styles compared to a skewed distribution of these joint micro-styles.\nIn this research, we consider a multi-style text style transfer pipeline assuming that the user has no access to parallel data or the style of the original text that he wishes to transfer, as would seem natural for a style language editing application. We introduce the changing of the training dataset microstyle joint distributions in such a pipeline and quantitatively explore the impact of this modification on the style transferred output. We perform a set of empirical analyses to demonstrate the influence of joint distributions on style-transferred output and show how this trend varies as the number of micro-styles considered changes. The 'balancing effect' on a training dataset leads to style transferred sentences from even the joint style combinations that are typically rare (\"informal unbiased and unaroused\"). Our study is the first of its kind on the distribution of micro styles in training datasets for multi-style text style transfer and is likely to have implications for designing datasets for multi-style transfer model training and fall within the context of and align with recent work on characterizing datasets and factors impacting style transfer (Bender and Friedman, 2018;Schoch et al., 2021;Li et al., 2019;Zhang et al., 2020;Gururangan et al., 2018)." }, { "figure_ref": [], "heading": "Multi Style Transfer Pipeline", "publication_ref": [ "b24", "b1", "b4", "b21", "b14", "b16", "b23", "b16", "b31", "b26", "b23", "b31", "b8", "b7", "b29" ], "table_ref": [], "text": "Datasets: We chose four micro-styles from the style hierarchy defined in Troiano et al.: Formality, Arousal, Sentiment, and Bias, for our study and used publicly available NLP datasets built by other researchers (Rao and Tetreault, 2018;Buechel and Hahn, 2022;Go et al., 2009;Pryzant et al., 2020;Kang and Hovy, 2019) to develop and test our models. Appendix A mentions the details of the datasets and their usage. Pipeline Overview: Our experimental setup for multi-style transfer is inspired by the work of (Krishna et al., 2020). Like them, we first generate a \"diverse\" paraphrase of the input sentence, and then the paraphrased sentence is rewritten in the style of choice. Towards this end, we train a paraphrasing model (separately on a parallel paraphrase dataset). Then, the trained paraphrase model is used to create \"pseudo-gold\" parallel data for training style models.\nFirst, we adopted a pre-trained T5 model (Raffel et al., 2020) to generate paraphrases. This model was trained for the task of paraphrase generation on the ParaNMT-filtered dataset provided by (Krishna et al., 2020). Once we had this trained paraphrase model, we used diverse beam search (Vijayakumar et al., 2016) to generate diverse fluent paraphrased outputs. An important assumption is that the paraphrase is stripped of its original style and does not leak into the training.\nWe address this potential issue by training classifiers (Sanh et al., 2019) to predict style on the original and paraphrased datasets and find that all our micro-style classifiers have a classification accuracy of higher than 80% F1, which is acceptable for pseudo-label creation. 
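The paraphrasing step described above (a fine-tuned T5 paraphraser decoded with diverse beam search) can be sketched with the Hugging Face transformers API. This is a minimal illustration under stated assumptions, not the authors' released code: the checkpoint name "t5-small-paranmt-paraphraser" is a hypothetical placeholder for their paraphrase model fine-tuned on ParaNMT-filtered data, and the decoding settings mirror the beam-search hyperparameters reported later in Appendix B (Table 5).

```python
# Minimal sketch: diverse paraphrase generation with a fine-tuned T5 model.
# "t5-small-paranmt-paraphraser" is a hypothetical checkpoint name standing in
# for the paper's paraphraser; the base tokenizer is the standard t5-small one.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small-paranmt-paraphraser")

def diverse_paraphrases(sentence: str, num_return: int = 3) -> list[str]:
    # The "paraphrase: " prefix follows the T5 convention of task prefixes.
    inputs = tokenizer("paraphrase: " + sentence, return_tensors="pt")
    # Diverse beam search: 9 beams split into 3 groups with a diversity penalty,
    # matching the hyperparameters reported in Table 5 of the appendix.
    outputs = model.generate(
        **inputs,
        max_length=70,
        num_beams=9,
        num_beam_groups=3,
        diversity_penalty=0.5,
        no_repeat_ngram_size=5,
        early_stopping=True,
        num_return_sequences=num_return,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

print(diverse_paraphrases("I'm sad you're going"))
```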
After we generate diverse paraphrases, we choose the most diverse paraphrase and then derive micro-style classifications for the paraphrased sentence using our trained micro-style classifiers. Each sentence is therefore assigned a classification score for each micro-style label and can form a \"pseudo-parallel\" dataset for training the T5-based joint transfer model. Thus, our approach does not need a parallel dataset.\nWe then converted the classifier predictions into buckets of style (ranging from \"very low\" to \"very high\") based on the chosen style of the original and then paraphrased sentences. The bucketing process is described in Appendix B. After this step, we introduce our contribution of \"constructing style distributions\" into the pipeline, as illustrated in Figure 2. Following that, we perform multi-style text style transfer. We appended the \"bucket\" information to the paraphrased sentence to achieve the necessary intensity transfers, as motivated by the original T5 paper (Raffel et al., 2020). We train T5-based style transfer models, where the paraphrased sentence and its style buckets are used as input parameters, while the style buckets assigned to the anchor sentence are used as proxy levels of output style transfer. All model-specific details are provided in Appendix B. For generating sentences from our trained models, we used beam search (Vijayakumar et al., 2016) and nucleus sampling (Holtzman et al., 2019) and chose the top 3 sentences from the generations. The following is an example of the input to the joint style transfer model and the expected output. Goal - Highly increase the formality of the sentence, slightly increase the arousal of the sentence. Input - transfer: I'm sad you're going | input formality: low | input arousal: low | output formality: high | output arousal: mid. Output - I am sorry you are going to go. Thus, we implemented a multi-style transfer pipeline to test our hypothesis without any finicky modeling paradigms popular in style transfer research, such as variational inference or autoregressive sampling (He et al., 2020; Subramanian et al., 2018)." }, { "figure_ref": [], "heading": "Goal", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Style Combination", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Constructing Micro-style Distributions: We define a \"style combination\" as a possible combination of the states that the micro-styles can take together, such as 'informal biased negative.' Since there are three micro-styles, each having binary states, the total possible number of style combinations in this case is N_c = 2 × 2 × 2 = 2^3. Therefore, to generalize, if |m_i| indicates the cardinality of each micro-style and n indicates the number of micro-styles considered, the total possible number of style combinations N_c is given by:\nN_c = \\prod_{i=1}^{n} |m_i| \\quad (1)\nTo create the balanced joint distribution of styles, we ensure the standard deviation across the style combinations is close to 0. We do this by down-sampling each style combination, such that the number of samples in each style combination is the same as in the least represented style combination. As we increase the number of micro-styles, some micro-style combinations do not occur naturally together, so their representation is close to 0. In such cases, we assume that the least represented style combination is at least 5% of the total dataset. To ensure our comparison across the \"balanced\" and \"skewed\" settings is fair, we construct a skewed dataset with a total sample size that is the same as that of the balanced dataset. Thus, the balanced dataset has a uniform distribution, while the skewed dataset has a non-uniform distribution. 
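As a concrete illustration of the balancing procedure just described, the following pandas sketch down-samples every observed style combination to the size of the least represented one, and also computes N_c from Equation (1). This is a hedged sketch, not the authors' preprocessing code; the column names `formality` and `arousal` are assumptions used only for the example.

```python
# Minimal sketch (not the authors' code): balance a pseudo-parallel dataset so
# that every micro-style combination has the same number of samples.
import math
import pandas as pd

def num_style_combinations(cardinalities: list[int]) -> int:
    # Equation (1): N_c is the product of the cardinalities |m_i|.
    return math.prod(cardinalities)

def balance_by_style_combination(df: pd.DataFrame, style_cols: list[str],
                                 seed: int = 0) -> pd.DataFrame:
    # Down-sample each observed style combination to the size of the least
    # represented combination, giving a (near-)uniform joint distribution.
    counts = df.groupby(style_cols).size()
    n_min = int(counts.min())
    return (
        df.groupby(style_cols, group_keys=False)
          .sample(n=n_min, random_state=seed)
          .reset_index(drop=True)
    )

# Example: two binary micro-styles -> N_c = 4 combinations.
print(num_style_combinations([2, 2]))  # 4
# df_balanced = balance_by_style_combination(df, ["formality", "arousal"])
```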
Table 1 shows the number of samples in each style combination of Formality and Arousal, given a \"balanced\" and \"skewed\" setting." }, { "figure_ref": [ "fig_1", "fig_2", "fig_2" ], "heading": "Experimental Results and Discussion", "publication_ref": [ "b35", "b20", "b34", "b25", "b19", "b22", "b19" ], "table_ref": [ "tab_2", "tab_2" ], "text": "Evaluation Metrics: Style transfer accuracy metrics quantify how nicely output texts match the desired style. However, more than this metric is required. Motivated by Jin et al., we evaluate style transfer across the three main properties of text style transfer: style transfer accuracy, content preservation, and fluency. We use our custom joint sequence classification models, implemented using HuggingFace libraries (Wolf et al., 2020) to evaluate the style transfer success ratio. Our definition for the Style Transfer Success S c is the total number of matches between intended and transferred style buckets, divided by the total number of samples. To judge content preserved in style transferred text, we use three metrics: BLEU (Papineni et al., 2002), embedding-based similarity (Wieting et al., 2019) using cosine similarity of two sentence embeddings (Reimers and Gurevych, 2019), and Word Mover's Distance (WMD) (Mir et al., 2019). For fluency, we use measures like perplexity using GPT2 (Radford et al., 2019) and an adversarial classifier using the cross-aligned autoencoder model (Mir et al., 2019). Experimental Setup: In this paper, we illustrate different micro-style combinations in the training data, for a randomly selected case, with each combination in both the \"balanced\" and \"skewed \" settings. Therefore, we consider 6 cases respectively: 1) Formality and Arousal in a balanced setting (FA balanced) 2) Formality and Arousal in a skewed setting (FA skewed) 3) Formality, Arousal and Bias in a balanced setting (FAB balanced) 4) Formality, Arousal and Bias in skewed setting (FAB skewed) 5) Formality, Arousal, Bias and Sentiment in the balanced setting (FABS balanced) 6) Formality, Arousal, Bias and Sentiment in skewed setting (FABS skewed). We construct the training data with the appropriate settings and then pass them through our experimental pipeline (illustrated in Figure 2) and quantitatively evaluate the style transfer results. Discussion: Table 2 shows examples of styletransferred sentences, given a style-transfer goal from our experimental pipeline for both balanced and skewed settings. E.g., given the objective is to decrease Formality but increase arousal, the sen- tence \" Did you hear about the soldier with 8 limbs? He was army\" transforms to \"He's an army soldier with 8 legs?\". Here, the contraction \"He's\" indicates a formality decrease, and the replacement of limbs with legs indicates a decrease. The overall arousal of this sentence is higher when it transforms into a question.\nFigure 3 illustrates that the balanced setup always has a higher success percentage of style transfer (S c ) than the skewed setup. We cannot compare the success percentage across cases because matching the exact target and transferred style buckets becomes difficult as the number of micro-styles increases. 
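The style-transfer success ratio S_c defined above (exact matches between intended and transferred style buckets, divided by the total number of samples) is straightforward to compute once the transferred sentences have been re-classified into buckets. The sketch below is an illustration of the metric only, not the authors' evaluation script; it assumes the bucket labels are already available as tuples, one per micro-style.

```python
# Minimal sketch: style-transfer success ratio S_c.
# Each element is a tuple of bucket labels, one per micro-style,
# e.g. ("high", "mid") for (formality, arousal).
from typing import Sequence, Tuple

def style_transfer_success(intended: Sequence[Tuple[str, ...]],
                           predicted: Sequence[Tuple[str, ...]]) -> float:
    assert len(intended) == len(predicted) and len(intended) > 0
    matches = sum(i == p for i, p in zip(intended, predicted))
    return matches / len(intended)

intended = [("high", "mid"), ("low", "very high"), ("high", "low")]
predicted = [("high", "mid"), ("low", "high"), ("high", "low")]
print(style_transfer_success(intended, predicted))  # 0.666...
```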
We can also observe through Table 2 that the quality of the balanced transferred text aligns better with the style transfer goal than the skewed transferred text.\nIn Figure 4, we compare the difference in representation percentage of specific style combinations in the test sample for a specific case where we consider Formality, Arousal, and Bias micro-styles. We observe that a balanced joint distribution leads to more representation in the style combinations that are less likely to occur. This is further accentuated as micro-styles increase, as reported in Appendix C. In Figure 4, we see that rarer style combinations [ibn, fun, iun] show more representation in the balanced case as compared to the skewed case. This supports our intuition that the style transfer model benefits from learning the representation of all possible style combinations that can occur together. When we consider Formality, Arousal, and Bias micro styles together, the most represented category (30% of samples) is \"formal unbiased aroused\" (fue). The least represented category (as unlikely to occur together) is \"informal unbiased unaroused\" (iun) with 1%. We observe that the quantitative evaluation metrics are quite indicative when compared across style combinations. For instance, in Table 3, we observe that perplexity increases in categories that are unlikely to occur together (iun). This indicates that the style transfer model is confused by the style distributions present for this style combination.\nWe do not claim that our method of balancing multiple styles will work even for entangled microstyle combinations, as that is out of the scope of the current paper. However, balancing considerably affects the multi-style transfer output for the range of micro-style combinations we considered, and that has an application in many NLP tasks. This result could hugely influence future studies exploring better ways to balance even the entangled micro-styles." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Multi-style text style transfer is a challenging problem predominantly plagued by the need for jointly annotated high-quality datasets. There is a clear need for more research about the marginal and joint distribution of inherent micro-styles present in the training dataset used for style transfer. Multi-style text-style transfer typically requires access to large, jointly labeled datasets and many computational resources under typical implementations. More importantly, we would not be able to conveniently tweak the input data distributions in other multistyle text style transfer methods.\nIn this paper, we implement a multi-style transfer pipeline that subverts the requirement of a jointly annotated dataset of multiple styles by constructing a pseudo-parallel dataset to which we introduce our contribution of constructing style distributions. We then use the modified pseudo-parallel datasets for multi-style transfer. Our modified pipeline effectively allows us to understand the importance of the joint distribution of micro styles in training data and is a substantial contribution.\nWe quantitatively explore the impact of joint micro-style distributions in the training dataset on the style-transferred output sentences. When the joint micro-style distributions are balanced, there is more control over style-transferred output than with a skewed distribution. 
These findings will likely inform the design of multi-style transfer datasets and encourage us to explore the micro-style relationships in our datasets." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b2" ], "table_ref": [], "text": "In this research, though we employed automatic evaluation of our multi-style transferred text, we acknowledge that multi-style transfer is challenging to observe with the existing metrics for style transfer evaluation, and human evaluation should be done as well. As this research paper focuses on exploring the impact of style distributions in the training data on style-transferred output rather than developing a superior multi-style text transfer model, we use quantitative evaluation in this iteration of our paper. We hope that the large sample size and the consistency of applied metrics make our automated approach a reasonable way of evaluating the style transfer output.\nThis iteration of our paper aims to achieve multistyle transfer across multiple micro styles taken into consideration together as our contribution would aid in constructing a training dataset for multiple micro-style style transfers. We did not explore another exciting question of how balancing multiple micro styles in the training dataset might influence individual style transfer, which could be a promising future direction for our study.\nWe acknowledge that the classifier's quality sets an upper bound on the best style transfer accuracy that is obtainable. However, the target task is quite complicated without a parallel dataset. Our objective was not to have the most accurate classification of micro styles but to find a means to get acceptable pseudo labels for the micro styles. Individually, all our micro style classifiers had a classification accuracy of 80% F1 and higher, and we deemed this good enough for pseudo-label creation.\nWe also focused on utilizing the present styles in the training data and classifying them to derive inherent training style distributions instead of dynamically tuning the proportion of styles present in the training dataset. However, tuning these style proportions using techniques such as PPLM (Dathathri et al., 2019) would give us greater control over our experimental pipeline and is an appropriate next step." }, { "figure_ref": [], "heading": "A Dataset Information", "publication_ref": [ "b24", "b1", "b32", "b4", "b21", "b14" ], "table_ref": [], "text": "We choose four micro-styles from both intended and unintended style categories, based on the style hierarchy as defined in Troiano et al. -Formality, Arousal, Sentiment, and Bias. While formality is considered a \"non-targeted intended\" microstyle, arousal and sentiment are \"targeted intended\" micro-styles. We also include subjective bias, an \"unintended\" micro-style, to ensure we include styles from all hierarchy branches. We built our micro-style joint classification and style transfer models from multiple publicly available NLP datasets built by other researchers, and we detail these below.\nFormality. We use Grammarly's Yahoo Answers Formality Corpus (Rao and Tetreault, 2018), which consists of 105k sentences from two styles: \"formal\" and \"informal\" sentences written either in formal or informal modern English. Unlike formal sentences, informal sentences tend to have more misspellings, short forms (\"u\" instead of \"you\"), and non-standard usage of punctuation.\nArousal. 
We use the emotion annotated Emobank dataset (Buechel and Hahn, 2022), based on the three-dimensional VAD model developed by (Warriner et al., 2013). In particular, we transform the Arousal dimension into binary categories such as \"arousal\" and \"non-arousal.\"\nSentiment. We use the well-known Sentiment140 dataset (Go et al., 2009), which consists of automatically annotated tweets, where tweets containing positive emoticons are assumed to be positive and those with negative emoticons are assumed to be negative. The training dataset consisted of 1.6M tweets, and the test dataset consisted of 359 tweets. The tweets were preprocessed using NLTK to remove special Twitter-specific symbols like hashtags, usernames, and URLs.\nBias. We use the Wiki Neutrality Corpus by (Pryzant et al., 2020). This is a new parallel corpus of 180,000 biased and neutralized sentence pairs. In order to train our joint classifier models, we used the training dataset from the appropriate micro-style datasets mentioned above. To implement our style distribution hypothesis, we used random samples for training and testing, drawn from the combination of all the dev datasets from the benchmark corpus by (Kang and Hovy, 2019). This consists of 15 different styles coupled to both content and domain to varying degrees. We wanted to ensure that the dataset used for training our style transfer model and verifying our hypothesis has sufficient indicators of the appropriate micro-styles. This could best be done by using a sample consisting of datasets curated for each individual micro-style (since a jointly annotated dataset with so many styles is not available)." }, { "figure_ref": [], "heading": "B Multi Style Transfer Pipeline B.1 Resources used for Training", "publication_ref": [], "table_ref": [], "text": "All models were trained using cloud GPUs on Google Colab Pro and Pro+. We used 1 V100 GPU in its \"High-RAM\" (52GB) GPU run-time setting to train the paraphrase generation model, while for the other models we used 1 P100 GPU at the \"standard RAM\" setting (32GB)." }, { "figure_ref": [], "heading": "B.2 Diverse Paraphrase Generation", "publication_ref": [ "b23", "b16", "b16", "b31", "b33", "b26" ], "table_ref": [ "tab_5" ], "text": "We adopted a pre-trained T5 model (Raffel et al., 2020) to generate paraphrases. We trained the model on the ParaNMT-filtered dataset provided by (Krishna et al., 2020). This is a subset of the ParaNMT dataset with filters applied to promote lexical diversity, syntactic diversity, and semantic similarity. This model was then used to generate the pseudo-parallel training data for transfer. We selected the t5-small architecture (60 million parameters) as this is approximately 10x smaller than the GPT-2 large model used in (Krishna et al., 2020). We used the hyper-parameters given in Table 4. Based on the recommendation in the appendix of Raffel et al., we used the \"paraphrase: \" prefix to train the paraphraser model. Once we had this trained paraphrase model, we used diverse beam search (Vijayakumar et al., 2016) to generate diverse paraphrased outputs. The hyper-parameters used for diverse beam search are mentioned in Table 5. We preferred beam search over top-p sampling in order to prioritize fluent paraphrases (Welleck et al., 2019) over unique paraphrases.\nB.3 Micro-style Classification. We trained a joint sentence classification model to classify sentences along multiple axes, inspired by the approach in Kang and Hovy, which uses an encoder-decoder-based model that learns cross-style patterns with a shared internal representation across styles. Our joint model comprises fully connected layers attached to a DistilBERT model (Sanh et al., 2019), which acts as an encoder. The hyperparameters for this joint model are given in Table 6. 
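The joint micro-style classifier just described (a shared DistilBERT encoder with one fully connected head per micro-style) can be sketched in PyTorch as below. This is a hedged illustration of the architecture, not the released model; the choice of two heads with two labels each is an assumption for a formality+arousal setup, and the actual head sizes and training loop are not shown.

```python
# Minimal sketch (not the authors' code): a joint micro-style classifier with a
# shared DistilBERT encoder and one linear head per micro-style.
import torch
import torch.nn as nn
from transformers import DistilBertModel, DistilBertTokenizerFast

class JointStyleClassifier(nn.Module):
    def __init__(self, micro_styles: dict[str, int]):
        super().__init__()
        self.encoder = DistilBertModel.from_pretrained("distilbert-base-uncased")
        hidden = self.encoder.config.dim  # 768 for distilbert-base
        # One classification head per micro-style, sharing the encoder.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n_labels) for name, n_labels in micro_styles.items()}
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # representation at the first token
        return {name: head(cls) for name, head in self.heads.items()}

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = JointStyleClassifier({"formality": 2, "arousal": 2})
batch = tokenizer(["I am sorry you are going to go."], return_tensors="pt")
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"])
print({k: v.shape for k, v in logits.items()})  # one logit tensor per micro-style
```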
This single model effectively replaces the need for a different model for each classification task, significantly reducing the need for computing resources for training and inference. Our joint classifier is essential for downstream tasks like training style transfer models and evaluation. Say we first perform a joint classification of both formality and arousal micro-styles on our datasets, considering we want a multiple-style transfer along the axes of formality and arousal. This results in both formality and arousal pseudo-labels for the sentences. Since these labels are generated algorithmically rather than by hand, we refer to them as pseudo-labels. Pseudo-labeled sentences can then be used to generate the pseudo-parallel dataset for training joint style transfer models and directly measure the variation of a style along the axis of interest. " }, { "figure_ref": [], "heading": "B.4 Pseudo Parallel Data Generation", "publication_ref": [ "b16" ], "table_ref": [], "text": "We then selected the best paraphrase (most stylistically different from the anchor sentence) based on the cosine distance between the anchor and the paraphrased sentence's style vectors. To enable the transfer model to transfer to specified levels of a particular style, we defined 'very low', 'low', 'mid', 'high', and 'very high' buckets for each micro-style.\nIn the following, we describe the bucket boundaries for our style scores. Using the absolute difference between original text style scores and their best-paraphrased text style scores, we find paraphrasing successfully stripped away both formality and arousal aspects of the text. The same phenomenon has been observed in previous studies, such as (Krishna et al., 2020). To ensure a diverse pseudo-parallel dataset, we retain only anchor-paraphrase pairs that do not match in terms of their style bucket. For example, if an anchor-paraphrase sentence pair is assigned style buckets for formality and arousal, as [very high, low] and [very high, very low], this pair will be retained. However, if both style buckets match, the sentence pair will not be considered diverse enough to remain in the pseudo-parallel dataset. In style transfer models, the paraphrased sentence and its style buckets are used as input parameters, while the style buckets assigned to the anchor sentence are used as proxy levels of output style transfer. The following is an example of the input to the joint style transfer model and the expected output. " }, { "figure_ref": [], "heading": "Goal", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.5 Style Transfer Training", "publication_ref": [ "b23", "b31", "b8" ], "table_ref": [ "tab_6" ], "text": "Our T5 models were trained on pseudo-parallel datasets created and filtered as described earlier. According to the task, we converted the classifier predictions into buckets of style based on the chosen style of the original and then paraphrased sentences. To achieve the necessary intensity transfers, we appended this information to the paraphrased sentence, as motivated by the original T5 paper (Raffel et al., 2020). Hyperparameters are mentioned in Table 7. For generating sentences from our trained models, we used a combination of both beam search (Vijayakumar et al., 2016) and nucleus sampling (Holtzman et al., 2019) and chose the top 3 sentences from the generations." 
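The control-prefixed input for the joint style transfer model, i.e. the paraphrase plus its style buckets and the target buckets appended as a control string, can be reproduced with a small helper like the one below. The separators follow the "transfer: ... | input formality: low | ... | output arousal: mid" format shown in the examples; anything beyond that literal format is an assumption, and this is not the authors' data-preparation code.

```python
# Minimal sketch: build the control-prefixed input string for the T5 style
# transfer model, following the "transfer: ... | input <style>: <bucket> |
# output <style>: <bucket>" format used in the worked examples.
def build_transfer_input(paraphrase: str,
                         input_buckets: dict[str, str],
                         output_buckets: dict[str, str]) -> str:
    parts = [f"transfer: {paraphrase}"]
    parts += [f"input {style}: {bucket}" for style, bucket in input_buckets.items()]
    parts += [f"output {style}: {bucket}" for style, bucket in output_buckets.items()]
    return " | ".join(parts)

example = build_transfer_input(
    "I'm sad you're going",
    input_buckets={"formality": "low", "arousal": "low"},
    output_buckets={"formality": "high", "arousal": "mid"},
)
print(example)
# transfer: I'm sad you're going | input formality: low | input arousal: low |
# output formality: high | output arousal: mid
```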
}, { "figure_ref": [], "heading": "C Some Additional Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_6", "fig_5" ], "heading": "C.1 Impact of Fluency filter on input training data", "publication_ref": [], "table_ref": [], "text": "We find that filtering the original dataset based on fluency metrics always results in better style transferred output as compared to the transferred output when the input dataset is not filtered. This is intuitive, as better quality input prevents confusion in the style transfer model and leads to better quality output. As a result of this finding, we use a fluency filter (adversarial classifier > 0.1 and perplexity < 365), before we conduct any of the rest of our experiments with micro-style distributions.\nC.2 Balancing effect on lesser represented style combinations\nIn Figure 6, we consider the case where we examine Formality [formal = f, informal = i] and Arousal [aroused = e, un-aroused= u] micro-styles and compare the percentage of specific style combinations in the test sample. We observe that as the number of micro styles increases, a balanced joint distribution leads to more representation in combinations that are less likely to occur such as in or 'informal and neutral'.\nFigure 5 shows a similarly pronounced effect. Here the number of micro styles is increased, and we can observe that the balanced setting shows higher representation than the skewed setting. An example of an unlikely style combination is fbnp, or \"formal biased neutral and positive\". We also observe that as the number of micro-styles increases, there is no representation in some combinations in both settings [ibnp,iunn,iuen,iunp]. This is a natural result as some micro-style combinations cannot exist in nature." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "We thank Vivek Aithal, Priyam Srivastava and Daniel McAndrew for their initial work on the pipeline for multi-style transfer. This was instrumental to our project and helped us get a kickstart on our research." } ]
2023-05-24
10.3115/1073083.1073135
[ { "authors": "M Emily; Batya Bender; Friedman", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b0", "title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science", "year": "2018" }, { "authors": "Sven Buechel; Udo Hahn", "journal": "", "ref_id": "b1", "title": "Emobank: Studying the impact of annotation perspective and representation format on dimensional emotion analysis", "year": "2022" }, { "authors": "Sumanth Dathathri; Andrea Madotto; Janice Lan; Jane Hung; Eric Frank; Piero Molino; Jason Yosinski; Rosanne Liu", "journal": "", "ref_id": "b2", "title": "Plug and play language models: A simple approach to controlled text generation", "year": "2019" }, { "authors": "Zhenxin Fu; Xiaoye Tan; Nanyun Peng; Dongyan Zhao; Rui Yan", "journal": "", "ref_id": "b3", "title": "Style transfer in text: Exploration and evaluation", "year": "2018" }, { "authors": "Alec Go; Richa Bhayani; Lei Huang", "journal": "CS224N project report", "ref_id": "b4", "title": "Twitter sentiment classification using distant supervision", "year": "2009" }, { "authors": "Navita Goyal; Balaji Vasan Srinivasan; Anandhavelu Natarajan; Abhilasha Sancheti", "journal": "", "ref_id": "b5", "title": "Multistyle transfer with discriminative feedback on disjoint corpus", "year": "2020" }, { "authors": "Suchin Gururangan; Swabha Swayamdipta; Omer Levy; Roy Schwartz; Noah A Samuel R Bowman; Smith", "journal": "", "ref_id": "b6", "title": "Annotation artifacts in natural language inference data", "year": "2018" }, { "authors": "Junxian He; Xinyi Wang; Graham Neubig; Taylor Berg-Kirkpatrick", "journal": "", "ref_id": "b7", "title": "A probabilistic formulation of unsupervised text style transfer", "year": "2020" }, { "authors": "Ari Holtzman; Jan Buys; Li Du; Maxwell Forbes; Yejin Choi", "journal": "", "ref_id": "b8", "title": "The curious case of neural text degeneration", "year": "2019" }, { "authors": "Dirk Hovy; Diyi Yang", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "The importance of modeling social factors of language: Theory and practice", "year": "2021" }, { "authors": "Zhiting Hu; Zichao Yang; Xiaodan Liang; Ruslan Salakhutdinov; Eric P Xing", "journal": "", "ref_id": "b10", "title": "Toward controlled generation of text", "year": "2017" }, { "authors": " Pmlr", "journal": "", "ref_id": "b11", "title": "", "year": "" }, { "authors": "Harsh Jhamtani; Varun Gangal; Eduard Hovy; Eric Nyberg", "journal": "", "ref_id": "b12", "title": "Shakespearizing modern language using copy-enriched sequence-to-sequence models", "year": "2017" }, { "authors": "Di Jin; Zhijing Jin; Zhiting Hu; Olga Vechtomova; Rada Mihalcea", "journal": "Computational Linguistics", "ref_id": "b13", "title": "Deep learning for text style transfer: A survey", "year": "2022" }, { "authors": "Dongyeop Kang; Eduard Hovy", "journal": "", "ref_id": "b14", "title": "xslue: A benchmark and analysis platform for cross-style language understanding and evaluation", "year": "2019" }, { "authors": "Dongyeop Kang; Eduard Hovy", "journal": "", "ref_id": "b15", "title": "Style is not a single variable: Case studies for cross-stylistic language understanding", "year": "2021" }, { "authors": "Kalpesh Krishna; John Wieting; Mohit Iyyer", "journal": "", "ref_id": "b16", "title": "Reformulating unsupervised style transfer as paraphrase generation", "year": "2020" }, { "authors": "Guillaume Lample; Sandeep Subramanian; Eric Smith; Ludovic Denoyer; 
Marc'aurelio Ranzato; Y-Lan Boureau", "journal": "", "ref_id": "b17", "title": "Multiple-attribute text rewriting", "year": "2018" }, { "authors": "Dianqi Li; Yizhe Zhang; Zhe Gan; Yu Cheng; Chris Brockett; Ming-Ting Sun; Bill Dolan", "journal": "", "ref_id": "b18", "title": "Domain adaptive text style transfer", "year": "2019" }, { "authors": "Remi Mir; Bjarke Felbo; Nick Obradovich; Iyad Rahwan", "journal": "", "ref_id": "b19", "title": "Evaluating style transfer for text", "year": "2019" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Reid Pryzant; Richard Diehl Martinez; Nathan Dass; Sadao Kurohashi; Dan Jurafsky; Diyi Yang", "journal": "", "ref_id": "b21", "title": "Automatically neutralizing subjective bias in text", "year": "2020" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b22", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b23", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Sudha Rao; Joel Tetreault", "journal": "", "ref_id": "b24", "title": "Dear sir or madam, may i introduce the gyafc dataset: Corpus, benchmarks and metrics for formality style transfer", "year": "2018" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b25", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b26", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Stephanie Schoch; Wanyu Du; Yangfeng Ji", "journal": "", "ref_id": "b27", "title": "Contextualizing variation in text style transfer datasets", "year": "2021" }, { "authors": "Tianxiao Shen; Tao Lei; Regina Barzilay; Tommi Jaakkola", "journal": "Advances in neural information processing systems", "ref_id": "b28", "title": "Style transfer from non-parallel text by cross-alignment", "year": "2017" }, { "authors": "Sandeep Subramanian; Guillaume Lample; Eric Michael Smith; Ludovic Denoyer; Marc'aurelio Ranzato; Y-Lan Boureau", "journal": "", "ref_id": "b29", "title": "Multipleattribute text style transfer", "year": "2018" }, { "authors": "Enrica Troiano; Aswathy Velutharambath", "journal": "", "ref_id": "b30", "title": "From theories on styles to their transfer in text: Bridging the gap with a hierarchical survey", "year": "2021" }, { "authors": "K Ashwin; Michael Vijayakumar; Cogswell; Qing Ramprasath R Selvaraju; Stefan Sun; David Lee; Dhruv Crandall; Batra", "journal": "", "ref_id": "b31", "title": "Diverse beam search: Decoding diverse solutions from neural sequence models", "year": "2016" }, { "authors": "Amy Beth Warriner; Victor Kuperman; Marc Brysbaert", "journal": "Behavior research methods", "ref_id": "b32", "title": "Norms of valence, arousal, and dominance for 13,915 english lemmas", "year": "2013" }, { "authors": "Sean Welleck; Ilia Kulikov; Stephen Roller; Emily Dinan; Kyunghyun Cho; Jason Weston", "journal": "", "ref_id": 
"b33", "title": "Neural text generation with unlikelihood training", "year": "2019" }, { "authors": "John Wieting; Taylor Berg-Kirkpatrick; Kevin Gimpel; Graham Neubig", "journal": "", "ref_id": "b34", "title": "Beyond bleu: training neural machine translation with semantic similarity", "year": "2019" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b35", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Yi Zhang; Tao Ge; Xu Sun", "journal": "", "ref_id": "b36", "title": "Parallel data augmentation for formality style transfer", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 384.15, 518.54, 140.99, 33.71 ], "formula_id": "formula_0", "formula_text": "N c = n i=1 |m i |(1)" } ]
Balancing Effect of Training Dataset Distribution of Multiple Styles for Multi-Style Text Transfer
Text style transfer is an exciting task within the field of natural language generation that is often plagued by the need for high-quality paired datasets. Furthermore, training a model for multi-attribute text style transfer requires datasets with sufficient support across all combinations of the considered stylistic attributes, adding to the challenges of training a style transfer model. This paper explores the impact of training data input diversity on the quality of the generated text from the multi-style transfer model. We construct a pseudo-parallel dataset by devising heuristics to adjust the style distribution in the training samples. We balance our training dataset using marginal and joint distributions to train our style transfer models. We observe that a balanced dataset produces more effective control effects over multiple styles than an imbalanced or skewed one. Through quantitative analysis, we explore the impact of multiple style distributions in training data on style-transferred output. These findings will better inform the design of style-transfer datasets.
Debarati Das; David Ma; Dongyeop Kang
[ { "figure_caption": "-Highly increase the formality of the sentence, slightly increase the arousal of the sentence Input -transfer: I'm sad you're going | input formality: low | input arousal: low | output formality: high | output arousal: mid Output -I am sorry you are going to go.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Balancing micro-style distributions leads to a higher multi-style transfer percentage than in the Skewed setting in all the cases.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Considering the micro-style combinations such that, Formality [formal = f, informal = i], Bias [biased = b, unbiased = u], and Arousal [aroused = e, un-aroused = n], we observe that the micro-style combinations that are rarer (e.g., informal unbiased neutral (iun)) have more representation in the \"balanced\" setting than the \"skewed\" setting.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "-Highly increase the formality of the sentence, slightly increase the arousal of the sentence Input -transfer: I'm sad you're going | input formality: low | input arousal: low | output formality: high | output arousal: mid Output -I am sorry you are going to go.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Considering the micro-style combinations such that, Formality [formal = f, informal = i], Bias [biased = b, unbiased = u], Arousal [aroused = e, un-aroused = u], and Sentiment [negative = n, positive = p]; we observe that the micro-style combinations that are rarer have more representation in the \"balanced\" setting than the \"skewed\" setting. 
The categories fbnp, funp and iben have more representation for balanced setting vs skewed setting.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Considering the micro-style combinations such that, Formality [formal = f, informal = i]and Arousal [aroused = e, un-aroused = u]; we observe that the micro-style combinations that are rare (ie, in) have more representation in the \"balanced\" setting than the \"skewed\" setting.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Training data statistics (number of samples) for the balanced and skewed settings, when considering the micro-styles of Formality and Arousal.", "figure_data": "Skewed", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The table shows the style transferred sentences, given an input sentence and the intended style transfer goal, for both the balanced setting as well as the skewed setting.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "over unique paraphrases.", "figure_data": "HyperparametersValuemax length70early stoppingTrueno repeat ngram5sizenum beams9num beam groups3diversity penality0.5Table 5: Hyperparameters for Beam SearchB.3 Micro-style ClassificationWe trained a joint sentence classification modelto classify the sentence on multiple axes inspiredby the approach in Kang and Hovy, which usesan encoder-decoder-based model that learnscross-style patterns with the shared internalrepresentation across styles. Our joint modelcomprises fully connected layers attached to aDistilBERT modelInput -paraphrase: I love to play my guitar and Ido not know whyOutput -I love playing my guitar and I'm not surewhyHyperparametersValuebatch size8number of epochs12learning rate1e-4maxsequence64lengthTable 4: Hyper parameters for T5 training for para-phrase generation", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Hyperparameters for Joint Classifier", "figure_data": "HyperparametersValuetrain batch size256test batch size512number of epochs3learning rate1e-4", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Hyperparameters for T5 for Style Transfer", "figure_data": "", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "(Fu et al., 2018)", "Explanation": "The cited work by Fu et al. (2018) provides a real-world application of multi-style text transfer, which the citing paper uses to highlight the potential benefits of the research in the field of automatic domain-appropriate and style-conformant writing."}, {"Category": "Supporting Evidence", "Citation": "(Hovy and Yang, 2021)", "Explanation": "The cited work by Hovy and Yang (2021) discusses the importance of context in language, which the citing paper uses to emphasize the complexity of text style transfer and the need for context-aware approaches."}, {"Category": "Supporting Evidence", "Citation": "(Hu et al., 2017)", "Explanation": "The cited work by Hu et al. (2017) highlights the challenges of text style transfer, which the citing paper uses to illustrate the difficulty of the task and the need for effective methods to address it."}, {"Category": "Supporting Evidence", "Citation": "(Shen et al., 2017)", "Explanation": "The cited work by Shen et al. (2017) discusses the need for parallel corpus in the field of text style transfer, which the citing paper uses to highlight the importance of data in the research and the need for high-quality training data."}, {"Category": "Supporting Evidence", "Citation": "(Lample et al., 2018)", "Explanation": "The cited work by Lample et al. (2018) discusses the need for dealing with the aspects of style in text style transfer, which the citing paper uses to emphasize the need for methods that can effectively address the complexities of style in language."}, {"Category": "Supporting Evidence", "Citation": "(Jhamtani et al., 2017)", "Explanation": "The cited work by Jhamtani et al. (2017) highlights the need for parallel corpus in text style transfer, which the citing paper uses to emphasize the importance of data in the research and the need for high-quality training data."}, {"Category": "Supporting Evidence", "Citation": "(Kang and Hovy, 2019)", "Explanation": "The cited work by Kang and Hovy (2019) provides foundational research on multi-style text transfer and the interdependency of micro-styles, which supports the claim in the citing paper about the need for more exploration in this area."}, {"Category": "Extension or Continuation", "Citation": "(Goyal et al., 2020)", "Explanation": "The cited work by Goyal et al. (2020) extends the research on multi-style text transfer and interdependency of micro-styles by exploring the effects of joint micro-style combinations on style-transferred output, which aligns with the focus of the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Subramanian et al., 2018)", "Explanation": "The cited work by Subramanian et al. (2018) also contributes to the field of multi-style text transfer and the interdependency of micro-styles, providing a basis for the extension of research in this area as discussed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Bender and Friedman, 2018)", "Explanation": "The cited work by Bender and Friedman (2018) is a foundational study on characterizing datasets and factors impacting style transfer, which the citing paper extends by focusing on the distribution of micro styles in training datasets for multi-style text style transfer."}, {"Category": "Extension or Continuation", "Citation": "(Schoch et al., 2021)", "Explanation": "The cited work by Schoch et al. 
(2021) is another study on characterizing datasets and factors impacting style transfer, which the citing paper extends by specifically addressing the distribution of micro styles in training datasets for multi-style text style transfer."}, {"Category": "Extension or Continuation", "Citation": "(Li et al., 2019)", "Explanation": "The cited work by Li et al. (2019) is a study on the impact of style transfer on the quality of language models, which the citing paper extends by focusing on the distribution of micro styles in training datasets for multi-style text style transfer."}, {"Category": "Extension or Continuation", "Citation": "(Zhang et al., 2020)", "Explanation": "The cited work by Zhang et al. (2020) is a study on the impact of style transfer on the quality of language models, which the citing paper extends by specifically addressing the distribution of micro styles in training datasets for multi-style text style transfer."}, {"Category": "Extension or Continuation", "Citation": "(Gururangan et al., 2018)", "Explanation": "The cited work by Gururangan et al. (2018) is a study on the impact of style transfer on the quality of language models, which the citing paper extends by focusing on the distribution of micro styles in training datasets for multi-style text style transfer."}, {"Category": "Data Source", "Citation": "(Rao and Tetreault, 2018)", "Explanation": "The dataset used in the study by Rao and Tetreault serves as a foundational element for the development and testing of the models in the citing paper."}, {"Category": "Data Source", "Citation": "(Buechel and Hahn, 2022)", "Explanation": "The dataset from Buechel and Hahn is used in the study to develop and test the models, providing a crucial data source for the research."}, {"Category": "Data Source", "Citation": "(Go et al., 2009)", "Explanation": "The dataset by Go et al. is utilized in the study to develop and test the models, serving as a foundational element for the research."}, {"Category": "Data Source", "Citation": "(Pryzant et al., 2020)", "Explanation": "The dataset by Pryzant et al. is used in the study to develop and test the models, providing a crucial data source for the research."}, {"Category": "Data Source", "Citation": "(Kang and Hovy, 2019)", "Explanation": "The dataset by Kang and Hovy is utilized in the study to develop and test the models, serving as a foundational element for the research."}, {"Category": "Methodological Basis", "Citation": "(Krishna et al., 2020)", "Explanation": "The experimental setup for multi-style transfer in the study by Krishna et al. 
serves as a methodological basis for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Raffel et al., 2020)", "Explanation": "The cited work provides a pre-trained T5 model for generating paraphrases, which the citing paper adopts in their research to generate diverse fluent paraphrases."}, {"Category": "Extension or Continuation", "Citation": "(Krishna et al., 2020)", "Explanation": "The cited work provides a dataset for the task of paraphrase generation, which the citing paper uses to further train a paraphrase model for generating diverse fluent paraphrases."}, {"Category": "Supporting Evidence", "Citation": "(Sanh et al., 2019)", "Explanation": "The cited work provides a method for training classifiers to predict style on the original and paraphrased datasets, which the citing paper uses to address the potential issue of style leakage in the generated paraphrases."}, {"Category": "Data Source", "Citation": "(Vijayakumar et al., 2016)", "Explanation": "The cited work introduces the concept of diverse beam search for generating diverse fluent paraphrases, which the citing paper adopts in their research to generate diverse paraphrases for the training of the T5-based joint transfer model."}, {"Category": "Methodological Basis", "Citation": "(Raffel et al., 2020)", "Explanation": "The cited work by Raffel et al. (2020) provides the T5 model and the associated style transfer techniques that the citing paper builds upon to perform multi-style text style transfer in the pipeline."}, {"Category": "Methodological Basis", "Citation": "(He et al., 2020)", "Explanation": "The cited work by He et al. provides a method for style transfer research that the citing paper adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "(Subramanian et al., 2018)", "Explanation": "The cited work by Subramanian et al. also provides a method for style transfer research that the citing paper uses in their study."}, {"Category": "Methodological Basis", "Citation": "(Wolf et al., 2020)", "Explanation": "The cited work provides the implementation details of the custom joint sequence classification models used in the citing paper to evaluate style transfer success ratio."}, {"Category": "Data Source", "Citation": "(Papineni et al., 2002)", "Explanation": "The cited work is the source of the BLEU metric used in the content preservation evaluation in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work is the source of the sentence embedding used in the content preservation evaluation in the citing paper."}, {"Category": "Data Source", "Citation": "(Mir et al., 2019)", "Explanation": "The cited work is the source of the Word Mover's Distance (WMD) metric and the cross-aligned autoencoder model used in the fluency evaluation in the citing paper."}, {"Category": "Data Source", "Citation": "(Radford et al., 2019)", "Explanation": "The cited work is the source of the GPT2 model used in the perplexity measure for fluency evaluation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Dathathri et al., 2019)", "Explanation": "The cited work by Dathathri et al. (2019) provides a technique for tuning style proportions in training datasets, which the citing paper plans to use in the future to improve control over their experimental pipeline."}, {"Category": "Data Source", "Citation": "(Troiano et al., 2018)", "Explanation": "The cited work by Troiano et al. 
provides the style hierarchy that the citing paper uses to select micro-styles for their research."}, {"Category": "Methodological Basis", "Citation": "(Rao and Tetreault, 2018)", "Explanation": "The cited work by Rao and Tetreault provides the Yahoo Answers Formality Corpus that the citing paper uses to build their micro-style joint classification and style transfer models."}, {"Category": "Data Source", "Citation": "(Buechel and Hahn, 2022)", "Explanation": "The cited work provides the emotion annotated Emobank dataset that the citing paper uses to develop the three-dimensional VAD model for emotion analysis."}, {"Category": "Data Source", "Citation": "(Go et al., 2009)", "Explanation": "The cited work provides the Sentiment140 dataset, which the citing paper uses to train the joint classifier models for sentiment analysis."}, {"Category": "Data Source", "Citation": "(Pryzant et al., 2020)", "Explanation": "The cited work provides the Wiki Neutrality Corpus, which the citing paper uses to train the joint classifier models for bias detection."}, {"Category": "Methodological Basis", "Citation": "(Raffel et al., 2020)", "Explanation": "The cited work provides the pre-trained T5 model and the training data for the paraphraser model used in the citing paper to generate pseudo-parallel training data for transfer."}, {"Category": "Extension or Continuation", "Citation": "(Krishna et al., 2020)", "Explanation": "The cited work provides the ParaNMT-filtered dataset with filters applied to promote lexical diversity, syntactic diversity, and semantic similarity, which the citing paper uses to train the paraphraser model and generate the pseudo-parallel training data for transfer."}, {"Category": "Supporting Evidence", "Citation": "(Vijayakumar et al., 2016)", "Explanation": "The cited work provides the diverse beam search technique used in the citing paper to generate diverse paraphrased outputs for the trained paraphraser model."}, {"Category": "Methodological Basis", "Citation": "(Welleck et al., 2019)", "Explanation": "The cited work by Welleck et al. (2019) is used as a basis for the choice of beam search over top-p sampling in the citing paper, as the former is preferred for prioritizing fluent paraphrases in the context of a joint model."}, {"Category": "Methodological Basis", "Citation": "(Sanh et al., 2019)", "Explanation": "The cited work by Sanh et al. (2019) is referenced as an encoder in the context of a joint model for classification tasks, providing a methodological basis for the citing paper."}, {"Category": "Data Source", "Citation": "Table 6", "Explanation": "The citation to Table 6 is used to acknowledge the source of the hyperparameters for the joint model in the citing paper, indicating a reliance on external data for the model training and inference process."}, {"Category": "Methodological Basis", "Citation": "(Krishna et al., 2020)", "Explanation": "The cited work by Krishna et al. (2020) has observed a similar phenomenon in terms of the change in formality and arousal aspects of text after paraphrasing, which the citing paper has also found in their research."}, {"Category": "Methodological Basis", "Citation": "(Raffel et al., 2020)", "Explanation": "The cited work by Raffel et al. (2020) provides the methodology of appending information to paraphrased sentences to achieve desired intensity transfers in the training of T5 models."}]
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b8", "b29", "b19", "b22", "b8", "b30", "b23", "b28", "b21" ], "table_ref": [], "text": "Diffusion Probabilistic Models (DPMs) (Ho et al., 2020;Sohl-Dickstein et al., 2015) are a class of generative models that has shown great potential in generating high-quality images. (Dhariwal & Nichol, 2021;Ramesh et al., 2022;Rombach et al., 2022a;Nichol et al., 2022). DPM consists of a forward and a backward process. In the forward process, images are progressively corrupted with Gaussian noise in a series of time steps. Conversely, during the backward process, the trained diffusion model generates images by sequentially denoising the white noise.\nDespite its success in generating high-quality images, DPMs suffer from the drawbacks of prolonged inference time. Considerable interest has been expressed in minimizing the number of inference steps during the sampling process to speed up generation, while preserving the quality of generated images. Such works include generalizing DDPM (Ho et al., 2020) to non-Markovian processes (Song et al., 2021), deriving optimal variance during sampling (Bao et al., 2022b), thresholding the pixel values as additional regularization (Saharia et al., 2022a), or developing pseudo numerical methods for solving differential equations on manifolds (Liu et al., 2022a).\nThough showing promising performance, none of them have theoretically and empirically examined the discrepancy between the training and sampling process of DPM. If we take a closer look at the training of DPM, at each time step t, ground truth samples x 0 are given to produce corrupted samples x t with noise ϵ t . The DPM takes both x t and t as input to predict the noise ϵ t . During sampling, one is required to synthesize data samples from white noise without the knowledge of the ground truth distributions. Coupled with the prediction errors of the network, this training-sampling discrepancy produces errors that could progressively accumulate during inference. This error arising from the difference between training and inference resembles the exposure bias problem as identified in autoregressive generative models (Ranzato et al., 2016;Schmidt, 2019), given that the network is solely trained with the corrupted ground truth samples, rather than the network predicted samples.\nIn this work, we focus on the exposure bias problem during sampling. Ning et al. (2023) propose to add perturbation to training samples to alleviate the exposure bias problem, which is sub-optimal since the retraining of DPM is computationally expensive. Given that the time step t is directly linked to the corruption level of the data samples, we theoretically and empirically show that by adjusting the next time step t -1 during sampling according to the approximated variance of the current generated samples, one can effectively alleviate the exposure bias. We search for such a time step within a window t w surrounding the current time step to restrict the denoising progress. Furthermore, based on the error patterns that the network makes on the training samples, we propose the use of a cutoff time step t c . For time steps larger than t c , we search for the suitable time step within t w . While for time steps smaller than t c , we keep the original time step. Intuitively, it also suits the nature of a DPM, since the corruption level is smaller for small time steps. We refer to our sampling method as Time-Shift Sampler. 
Figure 1 presents the comparison between DDPM, the stochastic sampling method, and its time-shift variant TS-DDPM. In summary, our contributions are:\n• We theoretically and empirically study the exposure bias problem of diffusion models, which is often neglected by previous works.\n• We propose a new sampling method called Time-Shift Sampler to alleviate the exposure bias problem, which avoids retraining the models. Our method can be seamlessly integrated into existing sampling methods by only introducing minimal computational cost.\n• Our Time-Shift Sampler shows consistent and significant improvements over various sampling methods on commonly used image generation benchmarks, indicating the effectiveness of our framework. Notably, our method improves the FID score on CIFAR-10 from F-PNDM (Liu et al., 2022a) by 44.49% to 3.88 with only 10 sampling steps." }, { "figure_ref": [], "heading": "INVESTIGATING EXPOSURE BIAS IN DIFFUSION PROBABILISTIC MODELS", "publication_ref": [ "b8", "b8", "b8", "b12", "b30" ], "table_ref": [], "text": "In this section, we first give a brief introduction of the training and inference procedure for Diffusion Probabilistic Models (DPMs). Then we empirically study the exposure bias problem in DPM by diving deep into the training and inference processes.\n2.1 BACKGROUND: DIFFUSION PROBABILISTIC MODELS DPM encompasses a forward process which induces corruption in a data sample (e.g., an image) via Gaussian noise, and a corresponding inverse process aiming to revert this process in order to generate an image from standard Gaussian noise. Given a data distribution q(x 0 ) and a forward noise schedule β t ∈ (0, 1), t = 1 • • • T , the forward process is structured as a Markov process, which can be expressed as:\nq(x 1•••T |x 0 ) = T t=1 q(x t |x t-1 )(1)\nwith the transition kernel q(x t |x t-1 ) = N (x t | √ α t x t-1 , β t I), where I denotes the identity matrix, α t and β t are scalars and α t = 1β t . With the reparameterization trick, the noisy intermediate state x t can be computed by the equation below:\nx t = √ α t x 0 + √ 1 -α t ϵ t (2)\nwhere α t = t i=1 α t and ϵ t ∼ N (0, I). According to the conditional Gaussian distribution, we have the transition kernel of backward process as:\np(x t-1 |x t , x 0 ) = N (μ t (x t , x 0 ), βt )(3)\nwhere βt = 1-αt-1 1-αt β t and μt = √ αt-1βt\n1-αt x 0 + √ αt(1-αt-1) 1-αt x t . Considering Equation 2, μt can be further reformulated as μt = 1 √ αt (x t -1-αt √ 1-αt ϵ t ).\nDuring training, a time-dependent neural network is optimized by either learning μt or ϵ t . Empirically, Ho et al. (2020) observe that predicting ϵ t works better. The learning target is to optimize the variational lower bound of the negative log-likelihood, which could also be interpreted as minimizing the KL divergence between the forward and backward process. In practice, Ho et al. 
(2020) further simplify the loss function as:\nL simple = E t,x0,ϵt∼N (0,I) [∥ϵ θ (x t , t) -ϵ t ∥ 2 2 ](4)\nWe present the training and sampling algorithms of the original Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020) in Algorithm 1 and 2, respectively.\nAlgorithm 1 Training 1: repeat 2:\nx 0 ∼ q(x 0 )\n3: t ∼ Uniform(1, • • • , T ) 4:\nϵ ∼ N (0, I) Compute x t using Eq 2 5:\nTake gradient descent step on 6:\n∇||ϵ -ϵ θ (x t , t)|| 2 7: until converged Algorithm 2 Sampling 1: x T ∼ N (0, I) 2: for t = T, • • • , 1 do 3: z ∼ N (0, I) if t > 1, else z = 0 4: x t-1 = 1 √ αt (x t -1-αt √ 1-αt ϵ θ (x t , t\n))+σ t z 5: end for 6: return x 0 In this section, we empirically demonstrate the phenomenon related to the exposure bias problem in DPM using CIFAR-10 dataset (Krizhevsky, 2009). We first present the variance distribution of the corrupted samples by different time steps in the forward process during training. Models exposed to a wider range of inputs during training tend to exhibit greater robustness to noise and are consequently less susceptible to exposure bias. To further study the behavior of the network during the backward sampling process, we also examine the evolution of prediction errors during sampling. We use DDIM (Song et al., 2021) sampler to conduct this experiment, as it gives a deterministic sampling process." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "THE EXPOSURE BIAS IN DIFFUSION PROBABILISTIC MODELS", "publication_ref": [], "table_ref": [], "text": "Figure 2 presents the changes in variance of sample distributions for different time steps. At each step, we estimate the corrupted image samples with the network predicted noise using Equation 2. We present the details of the figure in Appendix B. At time step 0, ground truth images serve as the current samples. The distribution of the variance of the ground truth samples spans an approximate range of (0, 0.8), showing the diversity of the sample distributions. As the noise is gradually added to the ground truth samples, the span of the variance becomes narrower and narrower. Following 400 steps, the changes in the span range of the variance become stable, and gradually shift towards a narrow range surrounding the variance of white noise. The evolution of the sample variance across different time steps indicates that the network exhibits a lower sensitivity to the early steps of the forward process of DPM, as the variance of the samples can be distributed more sparsely within a broader range. Conversely, the network can be more sensitive to the later steps (e.g., after 400 steps), as we progressively approach white noise. The constricted variance range during the later stages implies that minor prediction errors during sampling can significantly impact overall performance. In the second experiment, given a specific number of sampling steps, we compute the mean squared errors between the predicted samples and the ground truth samples at each step, as presented in Figure 3. Details for plotting this figure are presented in Appendix B. It can be seen that the evolution of prediction errors adheres to a consistent pattern: initially decreasing before incrementally accumulating as the sampling process progresses. This phenomenon may be attributed to two possible factors: (1) The sampling process originates from white noise, which does not contain any information about the resultant sample distributions. 
In early stages, with fewer sampling steps, the error accumulation is less serious, thus the network gradually shapes the predicted distribution into the target distribution.\n(2) In the later stages, the network is more robust to the noisy inputs as discussed above. However, due to the exposure bias, the network inevitably makes errors at each step and these errors accumulate along the sequence, resulting in a slow but steady progress in error accumulation and larger errors in the end.\nIn conclusion, the above two experiments together show that during the backward sampling process, the accumulated prediction error, which arises from the exposure bias and the capability of the network, could strongly impact the final results. This demonstrates the importance of alleviating exposure bias in DPM, which could potentially lead to improved results." }, { "figure_ref": [ "fig_3" ], "heading": "ALLEVIATING EXPOSURE BIAS VIA TIME STEP SHIFTING", "publication_ref": [ "b21" ], "table_ref": [], "text": "In the backward process of Diffusion Probabilistic Models (DPM), the transition kernel is assumed to adhere to a Gaussian distribution. To maintain this assumption, the difference between two successive steps must be sufficiently small, thereby necessitating the extensive training of DPM with hundreds of steps. As previously discussed, the network prediction error coupled with discrepancy between training and inference phases inevitably results in the problem of exposure bias in DPM. We introduce C(x t , t)-referred to as the input couple for a trained DPM-to describe this discrepancy, which can be expressed as:\nC(x t , t) = e -dis(xt,xt)(5)\nwhere xt and x t represent the network input and ground truth states at time step t, respectively, and dis(•, •) denotes the Euclidean distance. Consequently, during the training phase, the relationship C(x t , t) = 1 holds true for all time steps, as the network always takes ground truth x t as input. Moreover, a better coupling expressed by C(x t , t s ) reduces the discrepancy between training and inference, thereby alleviating exposure bias. Previous works (Zhang et al., 2023a;Ning et al., 2023) have empirically and statistically affirmed that the network prediction error of DPM follows a normal distribution. In conjunction with Equation 2, during the backward process at time step t, the predicted next state denoted as xt-1 could be represented as:\nxt-1 = x t-1 + ϕ t-1 e t-1 = α t-1 x 0 + 1 -α t-1 ϵ t-1 + ϕ t-1 e t-1 = α t-1 x 0 + λ t-1 εt-1(6)\nIn this equation, λ 2 t-1 = ϕ 2 t-1 + (1α t-1 ), x t-1 denotes the ground truth at time step t -1, ϕ t-1 e t-1 represents the network prediction errors, and e t-1 , ϵ t-1 and εt-1 conform to a normal distribution. Upon observing that Equation 6 and Equation 2 share a similar structure which is the ground truth x 0 plus Gaussian noise with variable variance, we propose the subsequent assumption.\nAssumption 3.1 During inference at time step t, the next state xt-1 predicted by the network, may not optimally align with time step t -1 within the context of the pretrained diffusion model. In other words, there might be an alternate time step t s , that potentially couples better with xt-1 : To verify our assumption, we initially conduct a statistical examination of the discrepancy between training and inference in a pretrained diffusion model. A sample size of 5000 instances was randomly selected from the CIFAR-10 training set and Equation 2 was utilized to generate ground truth states x t for a sequence of time steps. 
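As an illustration of this verification procedure, the sketch below corrupts a batch of samples with Equation 2 and compares how well the predicted next state couples (Equation 5) with the nominal step versus nearby candidate steps. This is a minimal NumPy sketch under stated assumptions, not the authors' code: a linear β schedule is assumed, the batch is reduced for speed (the paper uses 5000 CIFAR-10 samples), and predict_x_prev is a hypothetical placeholder for one backward step of a pretrained DPM.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)       # cumulative product ᾱ_t

def forward_corrupt(x0, t, rng):
    """Equation 2: x_t = sqrt(ᾱ_t) x_0 + sqrt(1 - ᾱ_t) ε."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def mean_distance(x_hat, x_true):
    """Per-sample Euclidean distance, averaged over the batch; the coupling of
    Equation 5 is C = exp(-distance), so smaller distance means better coupling."""
    return float(np.mean(np.linalg.norm(x_hat - x_true, axis=1)))

# Hypothetical placeholder for the network's predicted next state x̂_{t-1};
# in the actual experiment this would be one backward step of a pretrained DPM.
def predict_x_prev(x_t, t, rng):
    return x_t + 0.05 * rng.standard_normal(x_t.shape)

rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, size=(512, 3 * 32 * 32))   # stand-in for CIFAR-10 images

t, half_window = 600, 4
x_t = forward_corrupt(x0, t, rng)
x_hat_prev = predict_x_prev(x_t, t, rng)

# Compare the predicted state against ground-truth states for t-1 and nearby t_s.
for t_s in range(t - 1 - half_window, t - 1 + half_window + 1):
    print(t_s, mean_distance(x_hat_prev, forward_corrupt(x0, t_s, rng)))
```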
Subsequently, we compare the C(x t , t) with C(x t , t s ). Details of plotting this figure are presented in Appendix B. Here we show the results of 10 inference steps in the backward process and only consider time step t s within the range of t -6 to t + 4. As depicted in Figure 4, for certain backward steps, there are alternate time steps t s that display a stronger correlation with the predicted next state xt compared to time step t. We also observe that when approaching the zero time step, all nearby time steps converge to the same distribution. Similar findings were observed for other pretrained DPMs on different datasets including CelebA with varying settings, such as different numbers of inference steps and ranges of t s . See Appendix E for details.\n∃t s ∈ {1 • • • T }, s.t. C(x t-1 , t s ) ≥ C(x t-1 , t -1) (7) (xt, t) -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 7LPH6WHS6KLIW 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t)" }, { "figure_ref": [], "heading": "6WHS", "publication_ref": [ "b30", "b8" ], "table_ref": [], "text": "Our empirical findings lend substantial support to Assumption 3.1. This naturally prompts the question: How can we identify the time step that best couples with the predicted xt-1 ? By optimizing the KL divergence between the predicted xt-1 and x ts at time step t s , we arrive at the following Theorem 3.1, with the complete derivation provided in Appendix J.1.\nTheorem 3.1 Let xt represent a given state and xt-1 represent the predicted subsequent state. We assume t -1 is sufficiently large such that the distribution of xt-1 is still close to the initialized normal distribution with a diagonal covariance matrix. In addition, the selected time step t s to couple with xt-1 is among those time steps closely surrounding t -1.1 Then the optimal t s should have the following variance:\nσ ts ≈ σ t-1 - ||e|| 2 d (d -1) (8)\nwhere d is the dimension of the input, e represents the network prediction error, and σ t-1 is the variance of the predicted xt-1 .\nThe derivation of Theorem 3.1 mainly follows two steps: Firstly, we optimize the KL divergence between x ts and xt-1 to obtain the variance of x ts . Secondly, we establish the relationship between the variance within a single sample of xt-1 and the variance of xt-1 . 2 The results articulated in Theorem 3.1 could be further simplified to σ ts ≈ σ t-1 , when t is large and given the assumption that the network prediction error at the current time step is minimal. 
This assumption has been found to hold well in practice.\nAlgorithm 3 Time-Shift Sampler 1: Input :\nTrained diffusion model ϵ θ ; Win- dow size w; Reverse Time series {T, T - 1, • • • , 0}; Cutoff threshold t c 2: Initialize: x T ∼ N (0, I) ; t s = -1 3: for t = T, T -1, .., 0 do 4: If t s ̸ = -1 then t next = t s else t next = t 5: ϵ t = ϵ θ (x t , t next )) 6:\ntake a sampling step with t next to get x t-1 7:\nif t > t c then 8:\nGet variance for time steps within the window:\nΣ = {1 -α t-w/2 , 1 - α t-w/2+1 , • • • , 1 -α t+w/2 } 9: t s = arg min τ ||var(x t-1 ) -σ τ ||, for σ τ ∈ Σ and τ ∈ [t -w/2, t + w/2] 10: else 11: t s = -1 12:\nend if 13: end for 14: return x 0 Based on the findings of Theorem 3.1, we propose the Time-Shift Sampler, a method that can be seamlessly incorporated into existing sampling algorithms, such as Denoising Diffusion Implicit Models (DDIM) (Song et al., 2021), Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020) or sampling methods based on high-order numerical solvers (Liu et al., 2022a). Moreover, in light of the earlier discussion that the model's predictions of nearby time steps tend to converge to the same distribution during the later stage of the inference process and the condition of large t in the derivation in Appendix J.1, we remove the time-shift operation when the time step is smaller then a predefined cutoff time step. 3 Our algorithm is detailed in Algorithm 3. Specifically, given a trained DPM ϵ θ , we sample for x t , t = 1, 2, . . . , T using arbitrarily any sampling method. For each time step t > t c , where t c is the cutoff threshold, we replace the next time step t next with the time step t s that best couples with the variance of x t-1 within a window of size w. In the search of such t s , we first take a sampling step with the current t next to get x t-1 , which is used to compute the variance of x t-1 as var(x t-1 ). Then we get the variance Σ of the time steps within the window. The optimal t s can be obtained via arg min τ ||var(x t-1 ), σ τ ||, for σ τ ∈ Σ. Finally, the obtained t s is passed to the next sampling iteration as t next . We repeat this process until t < t c , after which we perform the conventional sampling steps from the sampling method of choice." }, { "figure_ref": [], "heading": "EXPERIMENTAL SETUP", "publication_ref": [ "b8", "b30", "b12", "b16", "b21", "b19" ], "table_ref": [ "tab_1" ], "text": "We integrate our Time-Shift Samplers to various sampling methods including DDPM (Ho et al., 2020): the stochastic sampling method; DDIM (Song et al., 2021): the deterministic version of DDPM; S-PNDM (Liu et al., 2022a): the sampling method based on second-order ODE solver; and F-PNDM (Liu et al., 2022a): the sampling method based on fourth-order ODE solver. We term our Time-Shift Sampler as TS-{•} with respect to the baseline sampling methods. For example, the time shift variant of DDIM is referred to as TS-DDIM. Following DDIM, we consider two types of time step selection procedures during sampling, namely uniform and quadratic. For t i < T : (1) uniform: we select time steps such that t i = ⌊ci⌋, for a constant value c. (2) quadratic: we select time steps such that t i = ⌊ci 2 ⌋, for a constant value c.\nWe report main results using pre-trained DDPM on CIFAR-10 ( Krizhevsky, 2009) and CelebA 64×64 (Liu et al., 2015). Moreover, based on DDIM sampler, a comparison to ADM-IP (Ning et al., 2023) is made, which uses ADM (Dhariwal & Nichol, 2021) as the backbone model. 
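Before turning to these experiments, the core loop of Algorithm 3 can be summarised in code. The sketch below is a simplified reading of the algorithm built around a deterministic DDIM update: it assumes a linear β schedule, chooses the schedule constant c so that the largest selected step stays below T, centres the search window on the next scheduled step, and uses a placeholder eps_model in place of a trained ϵ_θ. It is not the released implementation.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)                 # assumed linear schedule
alpha_bar = np.cumprod(1.0 - betas)                # cumulative ᾱ_t

def make_timesteps(n, kind="uniform"):
    """Time-step selection: uniform t_i = ⌊ci⌋ or quadratic t_i = ⌊ci²⌋."""
    i = np.arange(n)
    if kind == "uniform":
        t = np.floor((T - 1) / (n - 1) * i)
    else:
        t = np.floor((T - 1) / (n - 1) ** 2 * i ** 2)
    return t.astype(int)[::-1]                     # descending for sampling

def ddim_step(x_t, eps, t, t_prev):
    """Deterministic DDIM update (η = 0) from step t to step t_prev."""
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
    ab_prev = alpha_bar[t_prev] if t_prev >= 0 else 1.0
    return np.sqrt(ab_prev) * x0_pred + np.sqrt(1.0 - ab_prev) * eps

def ts_ddim_sample(eps_model, n_steps=10, window=8, cutoff=300,
                   shape=(3, 32, 32), seed=0):
    """Simplified Algorithm 3: after each update, shift the next step to the
    candidate whose 1 - ᾱ best matches the variance of the current prediction."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)
    steps = list(make_timesteps(n_steps))
    t_shift = None
    for i, t in enumerate(steps):
        t_in = t if t_shift is None else t_shift
        eps = eps_model(x, t_in)                   # hypothetical trained ϵ_θ
        t_prev = steps[i + 1] if i + 1 < len(steps) else -1
        x = ddim_step(x, eps, t_in, t_prev)
        if t > cutoff and t_prev > 0:
            lo, hi = max(0, t_prev - window // 2), min(T - 1, t_prev + window // 2)
            cand = np.arange(lo, hi + 1)
            t_shift = int(cand[np.argmin(np.abs(x.var() - (1.0 - alpha_bar[cand])))])
        else:
            t_shift = None                         # no shift below the cutoff
    return x

# Usage with a dummy noise predictor standing in for a trained network.
sample = ts_ddim_sample(lambda x, t: np.zeros_like(x), n_steps=10)
print(sample.shape)
```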
More experiments can be found in the appendix. We conduct experiments for varying sampling time steps, namely 5, 10, 20, 50, and 100. We use the Frechet Inception Distance (FID) (Heusel et al., 2017) for evaluating the quality of the generated images. We further discuss the influence of window sizes and cutoff values in Sec. 5.4. More details can be found in Appendix C. In Table 1, we compare four sampling methods, namely DDIM, DDPM, S-PNDM and F-PNDM, and their Time-Shift variants on two datasets, where we vary the time steps and the time step selection procedures. We take larger window sizes for fewer sampling steps. The cutoff value is within the range of [200,400], and is dependent to the number of time steps we take. As expected, our Timer-Shift Sampler consistently improves the quality of the generated images with respect to that of the baseline sampling methods. We observe that for time steps less than 20 with uniform time step selection, the performance improvements are extremely significant as compared to the original sampling methods. To our surprise, even for very strong high-order sampling methods, our Time-Shift Sampler can still bring significant improvements. Notably, we obtain FID=3.88 with TS-F-PNDM on CIFAR-10 using 10 sampling steps, which is better than DDIM on CIFAR-10 for 100 sampling steps. Our method also manages to improve the performance of the baseline sampling methods for both uniform and quadratic time selection, showing the versatility of our method. Our Time-Shift Sampler involves searching for suitable time steps using computed variance of the generated samples, which inevitably brings additional computation during sampling." }, { "figure_ref": [], "heading": "RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "MAIN RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_4", "fig_5", "fig_6", "fig_6" ], "heading": "Dataset", "publication_ref": [ "b19", "b21" ], "table_ref": [ "tab_3" ], "text": "To compare the efficiency of different sampling methods, we present the average sampling time4 in Figure 5 by running each sampling method 300 times on CIFAR-10 using DDPM as backbone model. For all three sampling methods, the additional computation time during sampling with 5,10 and 20 sampling steps is negligible. The additional computation time is visually larger for 50 sampling steps. Yet, the actual additional computation time remains acceptable for a small backbone like DDPM. For example, TS-S-PNDM requires 9.71% more sampling time on average than S-PNDM for 5,10 and 20 steps. For 50 steps, TS-S-PNDM requires 9.86% more sampling time than S-PNDM. We report the detailed sampling time in Appendix F. The additional computation time is also dependant to the choice of the backbone, which we further elaborate in Sec. 5.3. Our method is also proven to be effective on alternative model architectures than DDPM. In this section we present the results on ADM (Dhariwal & Nichol, 2021), and the variant of ADM named ADM-IP (Ning et al., 2023), which tries to alleviate exposure bias in DPM by retraining ADM with added input perturbation. We conduct our Time-Shift Sampler using ADM as backbone model to show the merits of our method as compared to ADM-IP. We present the comparison on FID scores in Table 2 and the comparison on sampling time in Figure 6. The results indicate that when employing a small sampling step, which is favored in practice, ADM-IP performs much worse than the original ADM. 
As the number of sampling steps increases, the performance of ADM-IP improves. We manage to improve the performance of the ADM model with our method by a large margin. Note that we achieve these significant improvements without retraining the model as is required by ADM-IP. We also obtain roughly the same sampling time for DDIM and TS-DDIM with ADM as the backbone model. It makes our method more favorable in practice given merely zero additional cost for computation and significant performance improvements. We first conduct a study on the effect of window sizes. The results are presented in Figure 7 (a), where we fix the cutoff value=300, and vary the window sizes for different numbers of sampling steps. In the assessment of 10 and 20 sampling steps, larger window sizes are evaluated to ensure an adequate search space for a suitable time step. And we adopt smaller window sizes for more sampling steps, given the limitation of step sizes. Figure 7 (a) illustrates that the Time-Shift algorithm is not sensitive to the selection of window size when using 10, 20 and 50 sampling steps. While when the number of sampling step is increased, such as 100 steps, the algorithm become more sensitive to the window size, which could be attributed to the enhanced accuracy of per-step predictions achievable with smaller step sizes." }, { "figure_ref": [], "heading": "COMPARISON WITH TRAINING-REQUIRED METHOD ADM-IP", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_6" ], "heading": "INFLUENCE OF WINDOW SIZES AND CUTOFF VALUES", "publication_ref": [], "table_ref": [], "text": "The influence of different cutoff values on the image generation performance on CIFAR-10 are presented in Figure 7 (b). From the density distribution plot of the sample variance as discussed in Sec. 2.2, we can have a good estimation of the range of the cutoff values to be within [200,400].\nWhile sampling with 10 steps, a smaller cutoff value ( 200) is preferred as compared to the scenarios with more sampling steps. One possible reason is that fewer sampling steps lead to larger step size, which makes more time-shift operations beneficial. 2022) leverage a high-order ordinary differential equations solver to reduce the generation steps of a diffusion model to 10, while maintaining the image quality. We follow this line of research and propose a method to better select time steps." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we have proposed a novel method to alleviate the exposure bias problem in diffusion probabilistic models. Different from previous work, which tries to reduce exposure bias by retraining the diffusion model with extra noisy inputs, we demonstrate that this bias can be mitigated by identifying the timestep that most aligns with the subsequent predicted step. Furthermore, we theoretically derive the variance of this optimal next timestep. Leveraging these insights, we introduce a novel and training-free inference algorithm, the Time-Shifted Sampler, to alleviate exposure bias in diffusion models. Our algorithm can be seamlessly incorporated into existing sampling algorithms for diffusion models including DDPM, DDIM, S-PNDM and F-PNDM. Extensive experimental results prove the effectiveness of our method." 
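For completeness, the hyper-parameter study of Sec. 5.4 (and the tuning concern raised in Appendix A below) can be organised as a simple grid search over window size and cutoff value. The sketch is illustrative only: generate_samples and compute_fid are hypothetical placeholders for a wrapped Time-Shift sampler and an FID implementation, not code from the paper.

```python
import itertools
import random

def tune_time_shift(generate_samples, compute_fid, n_steps,
                    windows=(2, 4, 8, 16), cutoffs=(200, 300, 400)):
    """Grid search over (window size, cutoff value) at a fixed step budget.
    Cutoff candidates follow the [200, 400] range suggested by the variance
    analysis of Sec. 2.2; FID is minimised as in Sec. 5.4."""
    best_cfg, best_fid = None, float("inf")
    for w, tc in itertools.product(windows, cutoffs):
        fid = compute_fid(generate_samples(n_steps=n_steps, window=w, cutoff=tc))
        if fid < best_fid:
            best_cfg, best_fid = (w, tc), fid
    return best_cfg, best_fid

# Dummy stand-ins so the sketch runs end-to-end; replace with a real sampler and FID.
fake_sampler = lambda n_steps, window, cutoff: None
fake_fid = lambda samples: random.random()
print(tune_time_shift(fake_sampler, fake_fid, n_steps=10))
```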
}, { "figure_ref": [], "heading": "A LIMITATION AND FUTURE WORK", "publication_ref": [], "table_ref": [], "text": "In this work, we introduce the Time-Shift Sampler, a training-free mechanism devised to mitigate the exposure bias problem inherent to the diffusion model, thereby enhancing the quality of generated images. Despite its efficacy, our sampler suffers from the limitation that it introduces two parameters, i.e., window size and cutoff value. We estimate their values based on the statistical analysis of training data. However, more advanced methods to analytically derive the optimal value of these two parameters might be possible since both the cutoff value and the window size are related to the noise level of each step. We leave this further exploration to future work. We also foresee possibilities to use the concept of the Time-Shift Sampler in other Markovian processes where a reduction in processing steps is desired." }, { "figure_ref": [ "fig_1", "fig_2", "fig_1", "fig_1", "fig_2", "fig_3", "fig_2" ], "heading": "B FIGURE DETAILS", "publication_ref": [], "table_ref": [], "text": "In this section, we present the detailed procedures to plot Figures 2,3, and 4.\nTo plot Figure 2, we compute the variance at the instance level so that we can measure how much the variance of each sample varies as time progresses. Specifically, suppose we obtain a sample x of size 3 × 32 × 32, we first flatten it to 3072. Then we compute the variance of x as\nn i (xi-x) 2 n-1\n, where x is the mean of x. Thus, for each sample we obtain a variance of its own, which leads to the density plot in Figure 2.\nFigure 3 shows the mean square error (MSE) between the prediction and ground truth at each step in the backward process. Given an image denoted as x 0 , by applying Equation 2we could obtain a sequence of x t , t = 1, 2, • • • , T -1. Taking each x t and the paired time step t to run the backward process, we would obtain a sequence of predicted xt\n0 , t = 1, 2, • • • , T -1.\nIdeally, we would expect all these predicted xt 0 , t = 1, 2, • • • , T -1, to be exactly equal to the ground truth x 0 , as they are generated using the given x 0 . However, this is not the case in practice. In our experiments, we found that only when t < t s (around 650 steps in our experiment using DDIM) we could obtain the original x 0 by running the backward process with paired (x t , t). For t > t s , the image created using (x t , t) differs from the original x 0 . This observation also reveals that the image generation process of diffusion models basically contains two stages. In the first stage, the model moves the Gaussian distribution towards the image distribution and no modes are presented at this stage, which means we can not know which images will be generated. In the second stage, the prediction shows that clear patterns and modes are presented. We can predict which images will be generated following the backward process. This observation led us to divide the error computation into two stages. The full explanation, including the equations, is shown in Figure 8 To plot Figure 4, we follow the method used in plotting Figure 3 to generate the ground truth example for each step and compute the MSE between predicted xt and the ground truth x t and x ts ." 
}, { "figure_ref": [], "heading": "C ADDITIONAL EXPERIMENTAL SETUP", "publication_ref": [ "b30", "b8", "b19", "b21" ], "table_ref": [], "text": "Instead of generating many samples of x t-1 to estimate var(x t-1 ), which brings additional computational workload, we find that under some assumption (see derivation of Theorem 3.1 in Section J.1) the var(x t-1 ) could be estimated using the inter variance of a single x t-1 , Thus during sampling, we compute the variance within each x t-1 .\nFollowing DDIM (Song et al., 2021), we use two types of time step selection procedures, namely uniform and quadratic. For comparing different sampling methods, the architecture of our models follows the one of DDPM (Ho et al., 2020). For all datasets, we use the same pretrained models for evaluating the sampling methods. For CIFAR-10 and LSUN-bedroom, we obtain the checkpoints from the original DDPM implementation; for CelebA, we adopt the DDIM checkpoints. To carry out the comparison between ADM (Dhariwal & Nichol, 2021), ADM-IP (Ning et al., 2023) and our method, we choose the ADM architecture and the checkpoints from ADM-IP. Applying Equation ( 2) to obtain 𝑥 -( , Using 𝑥 . , t . as input to the model to generate a sequence of predicted 𝑥 , ! # at each time step.\n𝑥 ! 𝑥 !\"# 𝑥 !\"$ … … 𝑥 % ! 𝑥 % !\"# … … 𝑥 & 𝑥 $ 𝑥 # 𝑥 '\n𝑀𝑆𝐸 # = ∑ ||* + ) \" %* ) || # $ %&'\n, Figure 8: The detailed procedure for computing the MSE." }, { "figure_ref": [ "fig_8", "fig_2" ], "heading": "D ERROR ANALYSIS", "publication_ref": [], "table_ref": [], "text": "We provide here more examples of prediction errors on training samples for different sampling steps for the different datasets. Figure 9 presents the prediction errors obtained in the CelebA dataset, which show similar patterns as those of the CIFAR-10 dataset as dipicted in Figure 3. " }, { "figure_ref": [ "fig_9", "fig_3" ], "heading": "E TRAINING-INFERENCE DISCREPANCY", "publication_ref": [], "table_ref": [], "text": "We provide more examples on the training-inference discrepancy for the different datasets in Figure 10, 19 and 20. For varying numbers of time steps, the same training-inference discrepancy pattern can be observed as in Figure 4. Specifically, for a certain backward step t, given a time step window [tw, t + w] surrounding t, there exists a time step t s that might couple better with the predicted next state xt-1 . And as sampling progresses, the coupling effects for xt-1 become identical for surrounding time steps. \n(xt, t) -11 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 +5 +6 +7 +8 +9 7LPH6WHS6KLIW 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) -11 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 +5 +6 +7 +8 +9 7LPH6WHS6KLIW 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS" }, { "figure_ref": [ "fig_10" ], "heading": "F SAMPLING TIME COMPARISON", "publication_ref": [], "table_ref": [ "tab_6", "tab_7" ], "text": "We report a detailed comparison of sampling time using different sampling methods in Table 3 and4 The proposed novel sampling method is motivated by the theoretical and empirical analysis of the exposure bias problem in DDPM. Experimental results show that our sampling method can effectively improve the quality of the generated images compared to images generated with the original DDIM/DDPM models quantatively measured with the FID score. 
We present the comparison of the generation chain of TS-DDIM (our method) and DDIM in Figure 11. It can be seen that TS-DDIM shifted most of the time steps before reaching the cutoff value, which yields a much better generation quality of the image of a horse. Additional qualitative examples will be presented in the Appendix I." }, { "figure_ref": [ "fig_11", "fig_11" ], "heading": "G CASE STUDY", "publication_ref": [], "table_ref": [], "text": "We also visualize the selected timestep trajectory in Figure 12. Before the cutoff value 200, time shift happens to all the timesteps. This is especially visible for the time steps within the range of (300, 600), where more time shifts happen, and time steps can shift to both larger or smaller time steps within the window. Figure 12 demonstrates that most of the time shifting happens in the intermediate range of the backward process. This phenomenon might be due to the fact that at the initial stage (when t is large), the samples are predominantly influenced by the initialized normal distribution, leading to minimal variance change. Conversely, in the later stage (when t is small), the samples are predominantly composed of image data, containing most of the image information. This makes it easier for the model to make accurate predictions. To improve visibility, we also zoom in for the time steps between 300 to 600." }, { "figure_ref": [], "heading": "H QUALITATIVE EXAMPLES", "publication_ref": [], "table_ref": [], "text": "In this section we present the example images generated using different sampling methods with uniform time selection procedure for varying sampling time steps. Generated examples can be found in Figure 13,14,15,16,17 and 18. It can be seen that we can generate images with good quality for less than 10 sampling steps." }, { "figure_ref": [], "heading": "I ADDITIONAL RESULTS", "publication_ref": [ "b36", "b34", "b17", "b40" ], "table_ref": [ "tab_8", "tab_9", "tab_10", "tab_14" ], "text": "We present more results obtained with the LSUN-bedroom dataset (Yu et al., 2016) in Table 5.\nLimited by computational resources, we do not properly tune the parameters, i.e., window size and cutoff values, for the LSUN-bedroom. We leave the exploration of a more efficient tuning strategy on high-resolution images for future work.\nIn Table 6 we compare our method with Analytic-DPM (Bao et al., 2022b). The results of Analytic-DPM are directly taken from the paper. Following (Xiao et al., 2022), we present the precision and recall results from our TS-DDIM and DDIM with ADM as backbone on CIFAR10. As shown in Table 7, our TS-DDIM tends to achieve much higher recall while maintaining the level of the precision obtained by DDIM. We also evaluate our method on the ImageNet dataset as presented in We further integrate our method into the DPM-solver (Lu et al., 2022) and the DEIS (Zhang & Chen, 2023) to showcase its versatility. The results are presented in Table 9. Our model can still improve the performance of both DPM-solver and DEIS samplers. However, in comparison to DDIM, our method again provides substantial improvements although the improvements for these samplers are a bit smaller than for the other tested samplers.This could be attributed to the fact that both the DPM-solver and DEIS utilize the particular structure of the semi-linear ODE, which already largely reduces the error of estimating x t . 
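Integrating the time-shift mechanism into these alternative samplers amounts to wrapping their per-step update. The sketch below shows one way such a wrapper could look; it is a schematic assumption on our part, where base_step stands for any sampler-specific update (DDIM, PNDM, DPM-solver, DEIS) and the window/cutoff logic mirrors Algorithm 3.

```python
import numpy as np

def with_time_shift(base_step, alpha_bar, window=8, cutoff=300):
    """Wrap an arbitrary per-step update `base_step(x, t, t_prev) -> x_prev`
    so that the step index handed to the next call is shifted to the candidate
    whose 1 - ᾱ best matches the variance of the current prediction."""
    T = len(alpha_bar)

    def step(x, t, t_prev):
        x_prev = base_step(x, t, t_prev)
        if t_prev <= cutoff or t_prev < 0:
            return x_prev, t_prev                      # no shift near the end
        lo, hi = max(0, t_prev - window // 2), min(T - 1, t_prev + window // 2)
        cand = np.arange(lo, hi + 1)
        shifted = int(cand[np.argmin(np.abs(x_prev.var() - (1.0 - alpha_bar[cand])))])
        return x_prev, shifted

    return step

# Usage: wrap a dummy base update (placeholder for a DDIM/PNDM/DPM-solver step).
alpha_bar = np.cumprod(1.0 - np.linspace(1e-4, 0.02, 1000))
dummy_step = lambda x, t, t_prev: 0.99 * x
shifted_step = with_time_shift(dummy_step, alpha_bar)
x = np.random.default_rng(0).standard_normal((3, 32, 32))
x, t_next = shifted_step(x, 900, 800)
print(t_next)
```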
" }, { "figure_ref": [], "heading": "Sampling", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "J DERIVATION", "publication_ref": [], "table_ref": [], "text": "J.1 DERIVATION OF THEOREM 3.1\nIn this section, we prove Theorem 3.1.\nproof. We first find the optimal time step t s by minimizing the KL divergence between xt-1 and x ts .\nAs q(x ts |x 0 ) = N (x ts | √ α t x 0 , (1α t )I) and assume p(x t-1 |x t ) is a probability density function of the distribution for x with mean ût-1 and covariance matrix Σt-1 , then according to Lemma 2. of Bao et al. (2022b), we have:\nD KL (p(x t-1 |x t )||q(x ts |x 0 )) = D KL (N (x|μ t-1 , Σt-1 )||N (µ ts , Σ ts )) + H(N (x|μ t-1 , Σt-1 )) -H(p(x t-1 |x t )) = 1 2 (log(|Σ ts |) + T r(Σ -1 ts Σt-1 ) + (μ t-1 -µ ts )Σ -1 ts (μ t-1 -µ ts ) T ) + C = 1 2 (d log(1 -α ts ) + d 1 -α ts T r( Σt-1 ) + 1 1 -α ts ||μ t-1 -µ ts || 2 ) + C(9)\nwhere\nC = 1 2 d + H(N (x|μ t-1 Σt-1 )) -H(p(x t-1 |x t )) -1 2 log( 1 | Σt-1|\n) and d is the dimension of x 0 . Denoted µ t-1 as the ground truth of the mean of the distribution of q(x t-1 ) and according to Equation 2, we have µ t-1 = √ α t-1 x 0 . μt-1 can be rewritten as :\nμ = µ t-1 + e = √ α t-1 x 0 + e (10)\nHere, e is the network prediction error. Since µ ts = √ α ts x 0 , Equation 9 can be rewritten as:\nD KL (p(x t-1 |x t )||q(x ts |x 0 )) = 1 2 (d log(1 -α ts ) + d 1 -α ts T r( Σt-1 ) + 1 1 -α ts ||( α t-1 -α ts )x 0 + e|| 2 ) + C (11) if t s is close to t -1, then √ ᾱt-1 - √ ᾱts ≈ 0. We have D KL (p(x t-1 |x t )||q(x ts |x 0 )) ≈ 1 2 (d log(1 -α ts ) + d 1 -α ts T r( Σt-1 ) + 1 1 -α ts ||e|| 2 ) + C (12)\nWe further calculate the derivative of D KL with respect to Σ ts = 1α ts . We know that D KL gets its minimum at\nΣ ts = T r( Σt-1 ) + 1 d ||e|| 2(13)\nWe next estimate the Σt-1 in Equation 13. Assuming each pixel of image P ∈ R w×h follows distribution N (µ i , σ) with σ being the variance, and p i ⊥ p j if i ̸ = j, then the covariance of P is σI and we have:\nσ t-1 = ( i (p i -p) 2 ) d -1 = i (p 2 i + p 2 -2p i p) d -1 = i (p 2 i ) -dp 2 d -1(14)\nTaking expectation on both sides, we achieve\nE[σ t-1 ] = i (E[p 2 i ]) -dE[p 2 ] d -1 = i (σ + µ 2 i ) d -1 - d d -1 E[( i p i d ) 2 ] = dσ d -1 + i µ 2 i d -1 - d d -1 E[( i p i d ) 2 ](15)\nThe last term on the RHS of Equation 15 can be rewritten as\nd d -1 E[( i p i d ) 2 ] = d d -1 1 d 2 ( i E(p i ) 2 + i̸ =j E(p i )E(p j )) = d d -1 1 d 2 ( i ((E(p i )) 2 + σ) + i̸ =j E(p i )E(p j )) = σ d -1 + 1 d(d -1) ( i (E(p i )) 2 + i̸ =j E(p i )E(p j )) = σ d -1 + 1 d(d -1) ( i µ i ) 2(16)\nBy combining Equation 15and Equation 16and denoting µ = i µi d\nwe have\nE[σ t-1 ] = dσ d -1 + i µ 2 i d -1 - σ d -1 - ( i µ i ) 2 d(d -1) = σ + i µ 2 i d -1 - ( i µ i ) 2 d(d -1) = σ + i (µ 2 i ) -dµ 2 d 1 = σ + i (µ 2 i -2µ i µ + µ 2 ) d -1 = σ + i (µ i -µ) 2 d -1(17)\nHere µ is the mean of µ i and µ i is the mean of the distribution of each pixel in xt-1 at time step t -1. The ground truth x t-1 ∼ N ( √ α t-1 x 0 , (1α t-1 )I), thus u gt = √ α t-1 x 0 . In practice, the x 0 is normalized to stay in the range of -1 to 1, and √ α t is close to zero when t is large. Define ζ as the difference between µ and µ and denote that ζ gt i = u gt iµ gt and ζi = µ iµ, then when t is large we have ζ gt i ≈ 0. 
Considering the network prediction error, we reach ζi = ζ gt i + e i ≈ e i (18) Thus Equation 17 can be rewritten as\nE[σ t-1 ] = σ + i (e i ) 2 d -1 = σ + ||e|| 2 d -1(19)\nMultiplying I on both sides and taking trace\ndσ t-1 ≈ dσ + d||e|| 2 d -1(20)\nHere we assume the sample variance is approximately equal to its expected value because the dimension of the image is usually large. Bring Equation 20 to Equation 13\nσ ts = σ t-1 - ||e|| 2 d(d -1)(21)\nIn the above derivation, we assume that when t is large Equation 21 holds. This assumption corresponds to the cutoff mechanism in our proposed algorithm, where we stop conducting time shift when t is small, as the assumption does not hold and we are not able to estimate the Σxt-1 in Equation 13." }, { "figure_ref": [], "heading": "J.2 ANALYTICAL ESTIMATION OF WINDOW SIZE", "publication_ref": [], "table_ref": [], "text": "In this section, we derive the bounds of window size w with optimal time step t s ∈ [t -1w/2, t -1 + w/2]. In Algorithm 3, we predefine a window size and search the optimal time step t s around time step t -1 within w. In the above derivation in Section J.1, after Equation 11, we assume t s is close to t -1, thus we can omit the term √ ᾱt-1 -√ ᾱts , and the following derivations (Equations 12 to 21) give us the estimated variance of the optimal time step t s . In order to estimate the window size w, we first relax our assumption. Instead of directly assuming t s is close to t -1, in the last term of Equation 11 we assume that the norm of ( √ ᾱt-1 -√ ᾱts )x 0 is sufficiently smaller than the norm of γe, where 0 < γ ≪ 1. Thus we have:\n|| √ ᾱt-1 -ᾱts ||||x 0 || ≤ γ||e|| (|| √ ᾱt-1 -ᾱts ||) 2 ≤ γ 2 ||e|| 2 ||x 0 || 2(22)\nᾱt-1 + ᾱts -2 ᾱt-1 ᾱtsγ 2 ||e|| 2 ||x 0 || 2 ≤ 0 (23)\nSolving Equation 23, we obtain:\nᾱts ≥ 2 √ ᾱt-1 -4ᾱ t-1 -4(ᾱ t-1 -γ 2 ||e|| 2 ||x0|| 2 ) 2 ᾱts ≤ 2 √ ᾱt-1 + 4ᾱ t-1 -4(ᾱ t-1 -γ 2 ||e|| 2 ||x0|| 2 ) 2 (24)\nIn Equation 24, e is the network prediction error and γ is a predefined value. e could be estimated using a small amount of data or a trained error prediction network similar to the work of Bao et al. (2022a). ᾱt on the LHS is the predefined noise schedule. Since ᾱt is a monotonic function with respect to t, denoted as ᾱt = f (t), for a given ᾱt , the corresponding time step t could be obtained through the inverse function of f , that is t = f -1 (ᾱ t ). Thus, the bounds of window size w could be estimated from the bounds of ᾱt through Equation 24. To obtain the bounds of w, one could first compute the right hand sides of Equation 24 for the predefined noise schedule of a given diffusion model. Then for time step t, one could find the largest and smallest t s , denoted as t max " }, { "figure_ref": [], "heading": "ACKNOWLEDGEMENTS", "publication_ref": [], "table_ref": [], "text": "This work is funded by the CALCULUS project (European Research Council Advanced Grant H2020-ERC-2017-ADG 788506), the Flanders AI Research Program and the China Scholarship Council." } ]
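As a companion to the window-size analysis of Appendix J.2, the bounds of Equation 24 can be evaluated numerically by solving the quadratic inequality of Equation 23 for sqrt(ᾱ_ts) and then inverting the monotone noise schedule ᾱ(t). The sketch below does this for an assumed linear β schedule and treats γ, the error norm ||e|| and ||x0|| as given estimates (the example values are illustrative, not taken from the paper).

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)      # ᾱ_t, monotonically decreasing in t

def window_bounds(t, err_norm, x0_norm, gamma=0.1):
    """Solve the quadratic inequality of Equation 23 for sqrt(ᾱ_ts) and map the
    admissible range back to time steps by inverting the monotone schedule."""
    delta = gamma * err_norm / x0_norm
    root = np.sqrt(alpha_bar[t - 1])
    lo_root, hi_root = max(root - delta, 0.0), root + delta
    sqrt_ab = np.sqrt(alpha_bar)
    admissible = np.where((sqrt_ab >= lo_root) & (sqrt_ab <= hi_root))[0]
    if admissible.size == 0:
        return t - 1, t - 1                  # degenerate window: no shift allowed
    return int(admissible.min()), int(admissible.max())

# Example with assumed estimates of gamma, ||e|| and ||x0||.
t_min, t_max = window_bounds(t=700, err_norm=2.0, x0_norm=55.0, gamma=0.1)
print(t_min, t_max, "-> admissible window size", t_max - t_min)
```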
2024-03-21
10.18653/v1/D19-5616
[ { "authors": "Fan Bao; Chongxuan Li; Jiacheng Sun; Jun Zhu; Bo Zhang", "journal": "", "ref_id": "b0", "title": "Estimating the optimal covariance with imperfect mean in diffusion probabilistic models", "year": "2022" }, { "authors": "Fan Bao; Chongxuan Li; Jun Zhu; Bo Zhang", "journal": "", "ref_id": "b1", "title": "Analytic-DPM: an analytic estimate of the optimal reverse variance in diffusion probabilistic models", "year": "2022" }, { "authors": "Jooyoung Choi; Sungwon Kim; Yonghyun Jeong; Youngjune Gwon; Sungroh Yoon", "journal": "", "ref_id": "b2", "title": "Ilvr: Conditioning method for denoising diffusion probabilistic models", "year": "2021" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "", "ref_id": "b3", "title": "Diffusion models beat gans on image synthesis", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b4", "title": "", "year": "2021" }, { "authors": "Yingqing He; Tianyu Yang; Yong Zhang; Ying Shan; Qifeng Chen", "journal": "", "ref_id": "b5", "title": "Latent video diffusion models for high-fidelity video generation with arbitrary lengths", "year": "2022" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "", "ref_id": "b6", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b7", "title": "", "year": "2017" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b8", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Yaosi Hu; Zhenzhong Chen; Chong Luo", "journal": "", "ref_id": "b9", "title": "Lamd: Latent motion diffusion for video generation", "year": "2023" }, { "authors": "Johanna Karras; Aleksander Holynski; Ting-Chun; Ira Wang; Kemelmacher-Shlizerman", "journal": "", "ref_id": "b10", "title": "Dreampose: Fashion image-to-video synthesis via stable diffusion", "year": "2023" }, { "authors": "Diederik Kingma; Tim Salimans; Ben Poole; Jonathan Ho", "journal": "", "ref_id": "b11", "title": "Variational diffusion models. 
Advances in neural information processing systems", "year": "2021" }, { "authors": "Alex Krizhevsky", "journal": "", "ref_id": "b12", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Zhenghao Lin; Yeyun Gong; Yelong Shen; Tong Wu; Zhihao Fan; Chen Lin; Weizhu Chen; Nan Duan", "journal": "", "ref_id": "b13", "title": "Genie: Large scale pre-training for text generation with diffusion model", "year": "2022" }, { "authors": "Luping Liu; Yi Ren; Zhijie Lin; Zhou Zhao", "journal": "", "ref_id": "b14", "title": "Pseudo numerical methods for diffusion models on manifolds", "year": "2022" }, { "authors": "Xingchao Liu; Chengyue Gong; Qiang Liu", "journal": "", "ref_id": "b15", "title": "Flow straight and fast: Learning to generate and transfer data with rectified flow", "year": "2022" }, { "authors": "Ziwei Liu; Ping Luo; Xiaogang Wang; Xiaoou Tang", "journal": "", "ref_id": "b16", "title": "Deep learning face attributes in the wild", "year": "2015-12" }, { "authors": "Cheng Lu; Yuhao Zhou; Fan Bao; Jianfei Chen; Chongxuan Li; Jun Zhu", "journal": "", "ref_id": "b17", "title": "Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps", "year": "2022" }, { "authors": "Chong Mou; Xintao Wang; Liangbin Xie; Jian Zhang; Zhongang Qi; Ying Shan; Xiaohu Qie", "journal": "", "ref_id": "b18", "title": "T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models", "year": "2023" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal", "journal": "PMLR", "ref_id": "b19", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "PMLR", "ref_id": "b20", "title": "GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2022-07-23" }, { "authors": "Mang Ning; Enver Sangineto; Angelo Porrello; Simone Calderara; Rita Cucchiara", "journal": "", "ref_id": "b21", "title": "Input perturbation reduces exposure bias in diffusion models", "year": "2023" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b22", "title": "Hierarchical textconditional image generation with clip latents", "year": "2022" }, { "authors": "Aurelio Marc; Sumit Ranzato; Michael Chopra; Wojciech Auli; Zaremba", "journal": "", "ref_id": "b23", "title": "Sequence level training with recurrent neural networks", "year": "2016" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b24", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022-06" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b25", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily Denton; Seyed Kamyar; Seyed Ghasemipour; Raphael Gontijo-Lopes; Burcu Karagol Ayan; Tim Salimans; Jonathan Ho; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b26", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Chitwan Saharia; Jonathan Ho; William Chan; Tim Salimans; David J Fleet; Mohammad Norouzi", "journal": 
"IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b27", "title": "Image super-resolution via iterative refinement", "year": "2022" }, { "authors": "Florian Schmidt", "journal": "", "ref_id": "b28", "title": "Generalization in generation: A closer look at exposure bias", "year": "2019-11" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "PMLR", "ref_id": "b29", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b30", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Yang Song; Stefano Ermon", "journal": "Advances in neural information processing systems", "ref_id": "b31", "title": "Generative modeling by estimating gradients of the data distribution", "year": "2019" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; P Diederik; Abhishek Kingma; Stefano Kumar; Ben Ermon; Poole", "journal": "", "ref_id": "b32", "title": "Score-based generative modeling through stochastic differential equations", "year": "2020" }, { "authors": "Yang Song; Prafulla Dhariwal; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b33", "title": "Consistency models", "year": "2023" }, { "authors": "Zhisheng Xiao; Karsten Kreis; Arash Vahdat", "journal": "", "ref_id": "b34", "title": "Tackling the generative learning trilemma with denoising diffusion GANs", "year": "2022" }, { "authors": "Jiasheng Ye; Zaixiang Zheng; Yu Bao; Lihua Qian; Mingxuan Wang", "journal": "", "ref_id": "b35", "title": "Dinoiser: Diffused conditional sequence learning by manipulating noises", "year": "2023" }, { "authors": "Fisher Yu; Ari Seff; Yinda Zhang; Shuran Song; Thomas Funkhouser; Jianxiong Xiao", "journal": "", "ref_id": "b36", "title": "Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop", "year": "2016" }, { "authors": "Guoqiang Zhang; Niwa Kenta; W Bastiaan Kleijn", "journal": "", "ref_id": "b37", "title": "Lookahead diffusion probabilistic models for refining mean estimation", "year": "2023" }, { "authors": "Haopeng Zhang; Xiao Liu; Jiawei Zhang", "journal": "", "ref_id": "b38", "title": "Diffusum: Generation enhanced extractive summarization with diffusion", "year": "2023" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b39", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Qinsheng Zhang; Yongxin Chen", "journal": "", "ref_id": "b40", "title": "Fast sampling of diffusion models with exponential integrator", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 245.76, 107.4, 258.24, 30.2 ], "formula_id": "formula_0", "formula_text": "q(x 1•••T |x 0 ) = T t=1 q(x t |x t-1 )(1)" }, { "formula_coordinates": [ 3, 253.01, 174.62, 250.99, 25.27 ], "formula_id": "formula_1", "formula_text": "x t = √ α t x 0 + √ 1 -α t ϵ t (2)" }, { "formula_coordinates": [ 3, 233.99, 230.91, 270.01, 19 ], "formula_id": "formula_2", "formula_text": "p(x t-1 |x t , x 0 ) = N (μ t (x t , x 0 ), βt )(3)" }, { "formula_coordinates": [ 3, 108, 250.07, 396, 38.06 ], "formula_id": "formula_3", "formula_text": "1-αt x 0 + √ αt(1-αt-1) 1-αt x t . Considering Equation 2, μt can be further reformulated as μt = 1 √ αt (x t -1-αt √ 1-αt ϵ t )." }, { "formula_coordinates": [ 3, 216.93, 333.81, 287.07, 18.77 ], "formula_id": "formula_4", "formula_text": "L simple = E t,x0,ϵt∼N (0,I) [∥ϵ θ (x t , t) -ϵ t ∥ 2 2 ](4)" }, { "formula_coordinates": [ 3, 112.98, 424.23, 121.83, 22.68 ], "formula_id": "formula_5", "formula_text": "3: t ∼ Uniform(1, • • • , T ) 4:" }, { "formula_coordinates": [ 3, 112.98, 390.92, 362.52, 89.14 ], "formula_id": "formula_6", "formula_text": "∇||ϵ -ϵ θ (x t , t)|| 2 7: until converged Algorithm 2 Sampling 1: x T ∼ N (0, I) 2: for t = T, • • • , 1 do 3: z ∼ N (0, I) if t > 1, else z = 0 4: x t-1 = 1 √ αt (x t -1-αt √ 1-αt ϵ θ (x t , t" }, { "formula_coordinates": [ 4, 260.79, 636.29, 243.21, 18.77 ], "formula_id": "formula_7", "formula_text": "C(x t , t) = e -dis(xt,xt)(5)" }, { "formula_coordinates": [ 5, 206.51, 98.66, 297.49, 43.51 ], "formula_id": "formula_8", "formula_text": "xt-1 = x t-1 + ϕ t-1 e t-1 = α t-1 x 0 + 1 -α t-1 ϵ t-1 + ϕ t-1 e t-1 = α t-1 x 0 + λ t-1 εt-1(6)" }, { "formula_coordinates": [ 5, 127.74, 234.83, 376.26, 144.89 ], "formula_id": "formula_9", "formula_text": "∃t s ∈ {1 • • • T }, s.t. C(x t-1 , t s ) ≥ C(x t-1 , t -1) (7) (xt, t) -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 7LPH6WHS6KLIW 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t)" }, { "formula_coordinates": [ 5, 252.41, 667.04, 251.59, 31.52 ], "formula_id": "formula_10", "formula_text": "σ ts ≈ σ t-1 - ||e|| 2 d (d -1) (8)" }, { "formula_coordinates": [ 6, 306, 185.01, 193.02, 88.44 ], "formula_id": "formula_11", "formula_text": "Trained diffusion model ϵ θ ; Win- dow size w; Reverse Time series {T, T - 1, • • • , 0}; Cutoff threshold t c 2: Initialize: x T ∼ N (0, I) ; t s = -1 3: for t = T, T -1, .., 0 do 4: If t s ̸ = -1 then t next = t s else t next = t 5: ϵ t = ϵ θ (x t , t next )) 6:" }, { "formula_coordinates": [ 6, 306, 274.41, 81.33, 20.95 ], "formula_id": "formula_12", "formula_text": "if t > t c then 8:" }, { "formula_coordinates": [ 6, 301.52, 296.08, 197.5, 78.1 ], "formula_id": "formula_13", "formula_text": "Σ = {1 -α t-w/2 , 1 - α t-w/2+1 , • • • , 1 -α t+w/2 } 9: t s = arg min τ ||var(x t-1 ) -σ τ ||, for σ τ ∈ Σ and τ ∈ [t -w/2, t + w/2] 10: else 11: t s = -1 12:" }, { "formula_coordinates": [ 13, 466.9, 310.98, 32.92, 16.53 ], "formula_id": "formula_14", "formula_text": "n i (xi-x) 2 n-1" }, { "formula_coordinates": [ 13, 312.44, 387.82, 89.69, 17.29 ], "formula_id": "formula_15", "formula_text": "0 , t = 1, 2, • • • , T -1." }, { "formula_coordinates": [ 14, 118.93, 150.44, 318.62, 14.5 ], "formula_id": "formula_16", "formula_text": "𝑥 ! 𝑥 !\"# 𝑥 !\"$ … … 𝑥 % ! 
𝑥 % !\"# … … 𝑥 & 𝑥 $ 𝑥 # 𝑥 '" }, { "formula_coordinates": [ 14, 272.73, 340.4, 85.79, 16.52 ], "formula_id": "formula_17", "formula_text": "𝑀𝑆𝐸 # = ∑ ||* + ) \" %* ) || # $ %&'" }, { "formula_coordinates": [ 15, 107.64, 90.59, 388.86, 119.58 ], "formula_id": "formula_18", "formula_text": "(xt, t) -11 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 +5 +6 +7 +8 +9 7LPH6WHS6KLIW 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) -11 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 +1 +2 +3 +4 +5 +6 +7 +8 +9 7LPH6WHS6KLIW 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS (xt, t) 6WHS" }, { "formula_coordinates": [ 20, 136.06, 293.2, 367.94, 85.74 ], "formula_id": "formula_19", "formula_text": "D KL (p(x t-1 |x t )||q(x ts |x 0 )) = D KL (N (x|μ t-1 , Σt-1 )||N (µ ts , Σ ts )) + H(N (x|μ t-1 , Σt-1 )) -H(p(x t-1 |x t )) = 1 2 (log(|Σ ts |) + T r(Σ -1 ts Σt-1 ) + (μ t-1 -µ ts )Σ -1 ts (μ t-1 -µ ts ) T ) + C = 1 2 (d log(1 -α ts ) + d 1 -α ts T r( Σt-1 ) + 1 1 -α ts ||μ t-1 -µ ts || 2 ) + C(9)" }, { "formula_coordinates": [ 20, 134.84, 381.4, 260.87, 18.89 ], "formula_id": "formula_20", "formula_text": "C = 1 2 d + H(N (x|μ t-1 Σt-1 )) -H(p(x t-1 |x t )) -1 2 log( 1 | Σt-1|" }, { "formula_coordinates": [ 20, 244.56, 421.85, 259.44, 17.29 ], "formula_id": "formula_21", "formula_text": "μ = µ t-1 + e = √ α t-1 x 0 + e (10)" }, { "formula_coordinates": [ 20, 108, 467.47, 396, 106.82 ], "formula_id": "formula_22", "formula_text": "D KL (p(x t-1 |x t )||q(x ts |x 0 )) = 1 2 (d log(1 -α ts ) + d 1 -α ts T r( Σt-1 ) + 1 1 -α ts ||( α t-1 -α ts )x 0 + e|| 2 ) + C (11) if t s is close to t -1, then √ ᾱt-1 - √ ᾱts ≈ 0. We have D KL (p(x t-1 |x t )||q(x ts |x 0 )) ≈ 1 2 (d log(1 -α ts ) + d 1 -α ts T r( Σt-1 ) + 1 1 -α ts ||e|| 2 ) + C (12)" }, { "formula_coordinates": [ 20, 251.81, 596.31, 252.19, 23.11 ], "formula_id": "formula_23", "formula_text": "Σ ts = T r( Σt-1 ) + 1 d ||e|| 2(13)" }, { "formula_coordinates": [ 20, 246.85, 658.25, 257.15, 85.33 ], "formula_id": "formula_24", "formula_text": "σ t-1 = ( i (p i -p) 2 ) d -1 = i (p 2 i + p 2 -2p i p) d -1 = i (p 2 i ) -dp 2 d -1(14)" }, { "formula_coordinates": [ 21, 207.18, 98.15, 296.82, 85.12 ], "formula_id": "formula_25", "formula_text": "E[σ t-1 ] = i (E[p 2 i ]) -dE[p 2 ] d -1 = i (σ + µ 2 i ) d -1 - d d -1 E[( i p i d ) 2 ] = dσ d -1 + i µ 2 i d -1 - d d -1 E[( i p i d ) 2 ](15)" }, { "formula_coordinates": [ 21, 159.78, 192.37, 344.22, 125.68 ], "formula_id": "formula_26", "formula_text": "d d -1 E[( i p i d ) 2 ] = d d -1 1 d 2 ( i E(p i ) 2 + i̸ =j E(p i )E(p j )) = d d -1 1 d 2 ( i ((E(p i )) 2 + σ) + i̸ =j E(p i )E(p j )) = σ d -1 + 1 d(d -1) ( i (E(p i )) 2 + i̸ =j E(p i )E(p j )) = σ d -1 + 1 d(d -1) ( i µ i ) 2(16)" }, { "formula_coordinates": [ 21, 206.68, 337.01, 297.32, 141.91 ], "formula_id": "formula_27", "formula_text": "E[σ t-1 ] = dσ d -1 + i µ 2 i d -1 - σ d -1 - ( i µ i ) 2 d(d -1) = σ + i µ 2 i d -1 - ( i µ i ) 2 d(d -1) = σ + i (µ 2 i ) -dµ 2 d 1 = σ + i (µ 2 i -2µ i µ + µ 2 ) d -1 = σ + i (µ i -µ) 2 d -1(17)" }, { "formula_coordinates": [ 21, 228.14, 567.83, 275.86, 31.65 ], "formula_id": "formula_28", "formula_text": "E[σ t-1 ] = σ + i (e i ) 2 d -1 = σ + ||e|| 2 d -1(19)" }, { "formula_coordinates": [ 21, 261.48, 608.38, 242.52, 31.52 ], "formula_id": "formula_29", "formula_text": "dσ t-1 ≈ dσ + d||e|| 2 d -1(20)" }, { "formula_coordinates": [ 21, 258.22, 659.68, 245.78, 31.52 ], "formula_id": "formula_30", 
"formula_text": "σ ts = σ t-1 - ||e|| 2 d(d -1)(21)" }, { "formula_coordinates": [ 22, 231.95, 200.98, 272.05, 53.6 ], "formula_id": "formula_31", "formula_text": "|| √ ᾱt-1 -ᾱts ||||x 0 || ≤ γ||e|| (|| √ ᾱt-1 -ᾱts ||) 2 ≤ γ 2 ||e|| 2 ||x 0 || 2(22)" }, { "formula_coordinates": [ 22, 211.25, 303.37, 292.75, 66.98 ], "formula_id": "formula_32", "formula_text": "ᾱts ≥ 2 √ ᾱt-1 -4ᾱ t-1 -4(ᾱ t-1 -γ 2 ||e|| 2 ||x0|| 2 ) 2 ᾱts ≤ 2 √ ᾱt-1 + 4ᾱ t-1 -4(ᾱ t-1 -γ 2 ||e|| 2 ||x0|| 2 ) 2 (24)" } ]
ALLEVIATING EXPOSURE BIAS IN DIFFUSION MODELS THROUGH SAMPLING WITH SHIFTED TIME STEPS
Diffusion Probabilistic Models (DPM) have shown remarkable efficacy in synthesizing high-quality images. However, their inference process typically requires many, potentially hundreds of, iterative steps, which can exacerbate the exposure bias problem caused by the discrepancy between training and inference. Previous work has attempted to mitigate this issue by perturbing inputs during training, which consequently mandates retraining the DPM. In this work, we conduct a systematic study of exposure bias in DPM and, intriguingly, find that exposure bias can be alleviated with a novel sampling method that we propose, without retraining the model. We show empirically and theoretically that, during inference, for each backward time step t and corresponding state x_t, there may exist another time step t_s that couples better with x_t. Based on this finding, we introduce a sampling method named the Time-Shift Sampler. Our framework can be seamlessly integrated into existing sampling algorithms, such as DDPM, DDIM and other high-order solvers, incurring only minimal additional computation. Experimental results show that our method brings significant and consistent improvements in FID scores across datasets and sampling methods. For example, integrating the Time-Shift Sampler into F-PNDM yields FID=3.88 on CIFAR-10 with 10 sampling steps, a 44.49% improvement over F-PNDM and better than vanilla DDIM with 100 sampling steps.
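To make the sampler concrete, the following is a minimal NumPy sketch of the Time-Shift idea layered on a deterministic DDIM-style update. It is illustrative rather than the authors' reference implementation: `eps_model`, `alphas_cumprod`, `window` and `cutoff` are assumed names with illustrative defaults, and the variance-matching rule is the simplified form described above (within a small window, pick the time step whose noise level best matches the variance of the current state).

```python
import numpy as np

def time_shift_sample(eps_model, alphas_cumprod, timesteps, shape,
                      window=4, cutoff=300, seed=0):
    """Illustrative Time-Shift sampler on top of a DDIM-style (eta=0) update.

    eps_model(x, t) -> predicted noise; alphas_cumprod: 1-D array of abar_t values;
    timesteps: descending list of nominal time steps, e.g. [999, 899, ..., 0].
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)
    t_s = None  # shifted time step carried over to the next iteration
    for i, t in enumerate(timesteps):
        t_used = t_s if t_s is not None else t
        eps = eps_model(x, t_used)
        a_t = alphas_cumprod[t_used]
        t_prev = timesteps[i + 1] if i + 1 < len(timesteps) else 0
        a_prev = alphas_cumprod[t_prev]
        x0_hat = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)      # predicted x_0
        x = np.sqrt(a_prev) * x0_hat + np.sqrt(1.0 - a_prev) * eps  # DDIM update
        if t_prev > cutoff:
            # search a window around the nominal next step for the best-coupled time step
            lo = max(t_prev - window // 2, 0)
            hi = min(t_prev + window // 2, len(alphas_cumprod) - 1)
            candidates = np.arange(lo, hi + 1)
            sigmas = 1.0 - alphas_cumprod[candidates]
            t_s = int(candidates[np.argmin(np.abs(x.var() - sigmas))])
        else:
            t_s = None  # below the cutoff, fall back to the nominal schedule
    return x
```

In this sketch the shift only changes which time step (and noise level) is fed to the noise-prediction network, which is why it can wrap existing samplers without retraining.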
Mingxiao Li; Tingyu Qu; Ruicong Yao; Wei Sun; Marie-Francine Moens
[ { "figure_caption": "Figure 1 :1Figure 1: The comparison of TS-DDPM (ours) and DDPM. The orange and blue arrows denote the time-state coupling at each denoising step of TS-DDPM and DDPM, respectively. In TS-DDPM, we search for coupled time step within the [tw/2, t + w/2] window, until the cutoff time step t c .", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The density distribution of the variance of 5000 samples from CIFAR-10 by different time steps.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: CIFAR-10 prediction errors of training samples for different numbers of sampling steps.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The training and inference discrepancy of DDIM with 10 sampling steps on CIFAR-10. The dashed line in each column denotes the couple of predicted xt and t. Points on the right side of the dashed line mean that the corresponding time steps couple better with xt than time step t.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: Sampling time VS FID on CIFAR-10 using DDPM as backbone with various sampling methods. We report the results of {5,10,20,50} sampling steps from left to right for each sampler, denoted with \"×\" symbol.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Sampling time VS FID on CIFAR-10 with ADM as backbone using DDIM and TS-DDIM for sampling.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: FID of generated CIFAR-10 images using TS-DDIM (uniform) with (a) various window sizes using cutoff value=300; (b) various cutoff values using window size= {40;30;8;2} for {10;20;50;100} steps.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Given data sample 𝑥 ! Stage 2: generate same 𝑥 ! ? Stage 1: unable to know which image will be generated Applying Equation (2) to generate a sequence of states 𝑥 \" 1. The Computation of the MSE at Stage 1: Applying Equation (2) to obtain the ground truth sequence: 𝑥 #$% , … . . , 𝑥 \" ! Using 𝑥 \" , 𝑇 as input to the model to generate a sequence of predicted 𝑥 # till time 𝑡 $ 𝑀𝑜𝑑𝑒𝑙 𝑥 \" , 𝑇 → 𝑥 , \"%& , 𝑥 , \"%' , … … , 𝑥 , # ! 𝑀𝑆𝐸 # = ∑ ||* + \" %* \" || # $ %&' , 2. The Computation of the MSE at Stage 2:", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: CelebA prediction errors of training samples for different numbers of sampling steps.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: The training and inference discrepancy of DDIM with 10 sampling steps for window size=20: Left: CelebA dataset; Right: CIFAR-10 dataset.", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Example of generation process of TS-DDIM and DDIM on CIFAR-10. We use the horizontal black arrow to represent the time line, with the DDIM generation chain above it, TS-DDIM generation chain underneath it. 
We sample 10 steps with window [t -20, t + 20] and cutoff value=200.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure12: Comparison of timestep trajectory for TS-DDIM and DDIM on CIFAR-10 with DDPM as backbone using 10 sampling steps and uniform time selection. We use the red dashed line and blue line to present the time step trajectories for the original DDIM and our TS-DDIM, respectively. To improve visibility, we also zoom in for the time steps between 300 to 600.", "figure_data": "", "figure_id": "fig_11", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: CIFAR-10 samples for varying time steps using TS-DDPM.", "figure_data": "", "figure_id": "fig_12", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: CIFAR-10 samples for varying time steps using TS-DDIM.", "figure_data": "", "figure_id": "fig_13", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: CelebA samples for varying time steps using TS-DDPM.", "figure_data": "", "figure_id": "fig_14", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: CelebA samples for varying time steps using TS-DDIM.", "figure_data": "", "figure_id": "fig_15", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: LSUN-bedroom samples for varying time steps using TS-DDPM.", "figure_data": "", "figure_id": "fig_16", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 :18Figure 18: LSUN-bedroom samples for varying time steps using TS-DDIM.", "figure_data": "", "figure_id": "fig_17", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "sand t min s respectively, satisfying Equation 24. The w is then bounded by the 2 × min(t max s , t min s ). 
If the above condition holds, then the last term of Equation 11 is dominated by e, and ( √ ᾱt-1 -√ ᾱts )x 0 can be ignored in above derivations (Equations 12 to 21).", "figure_data": "", "figure_id": "fig_18", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 20 :20Figure 20: The training and inference discrepancy of DDIM with 50 sampling steps on CelebA for window size=10.", "figure_data": "", "figure_id": "fig_19", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "Quality of the image generation measured with FID ↓ on CIFAR-10 (32×32) and CelebA (64×64) with varying time steps for different sampling algorithms.", "figure_data": "CIFAR-10 CelebASampling Method DDIM (quadratic) TS-DDIM(quadratic) DDIM(uniform) TS-DDIM(uniform) DDPM (uniform) TS-DDPM (uniform) S-PNDM (uniform) TS-S-PNDM (uniform) 18.81(+16.40%) 5.14 (+45.84%) 5 steps 10 steps 41.57 13.70 38.09 (+8.37%) 11.93 (+12.92%) 6.12 (+11.43%) 20 steps 6.91 44.60 18.71 11.05 35.13 (+21.23%) 12.21 (+34.74%) 8.03 (+27.33%) 83.90 42.04 24.60 67.06 (+20.07%) 33.36 (+20.65%) 22.21 (+9.72%) 22.53 9.49 5.37 4.42 (+17.69%) F-PNDM (uniform) 31.30 6.99 4.34 3.88 (+44.49%) 3.60 (+17.05%) TS-F-PNDM (uniform) 31.11 (+4.07%) DDIM (quadratic) 27.28 10.93 6.54 TS-DDIM (quadratic) 24.24 (+11.14%) 9.36 (+14.36%) 5.08 (+22.32%) DDIM (uniform) 24.69 17.18 13.56 TS-DDIM (uniform) 21.32 (+13.65%) 10.61 (+38.24%) 7.01 (+48.30%) DDPM (uniform) 42.83 34.12 26.02 TS-DDPM (uniform) 33.87 (+20.92%) 27.17 (+20.37%) 20.42 (+21.52%) 13.54 (+26.77%) 12.83 (+7.70%) 50 steps 100 steps 4.71 4.23 4.16 (+11.68%) 3.81 (+9.93%) 7.09 5.66 5.56 (+21.58%) 4.56 (+19.43%) 14.76 10.66 13.64 (+7.59%) 9.69 (+9.10%) 3.74 3.71 3.71 (+0.80%) 3.60 (+2.96%) 3.71 4.03 3.56 (+4.04%) 3.86 (+4.22%) 5.20 4.96 4.20 (+19.23%) 4.18 (+15.73%) 9.12 6.60 5.29 (+42.00%) 6.50 (+1.52%) 18.49 13.90 S-PNDM (uniform) 38.67 11.36 7.51 5.24 4.74 TS-S-PNDM (uniform) 29.77 (+23.02%) 10.50 (+7.57%) 7.34 (+2.26%) 5.03 (+4.01%) 4.40 (+7.17%) F-PNDM (uniform) 94.94 9.23 5.91 4.61 4.62 TS-F-PNDM (uniform) 94.26 (+0.72%) 6.96 (+24.59%) 5.84 (+1.18%) 4.50 (+2.39%) 4.42 (+4.33%)", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance comparison on CIFAR-10 with ADM and ADM-IP as backbone models.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Denoising Diffusion Probabilistic Model. The denoising diffusion probabilistic model (DDPM) was first introduced by Sohl-Dickstein et al. (2015) and further advanced by Nichol & Dhariwal (2021), where they include variance learning in the model and optimize it with a new weighted variational bound. Song et al. (2020) further connect DDPMs with stochastic differential equations by considering DDPMs with infinitesimal timesteps. 
They also find that both score-based generative models(Song & Ermon, 2019) and that DDPMs can be formulated by stochastic differential equa-", "figure_data": "", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": ".", "figure_data": "Sampling Method 5 steps DDIM 19.57 ms TS-DDIM 22.30 ms (+13.9%) 47.53 ms (+9.4%) 10 steps 43.46 ms S-PNDM 24.35 ms 48.32 ms TS-S-PNDM 26.44 ms (+8.5%) 52.08 ms (+7.78%) 105.72 ms (+12.84%) 259.19 ms (+9.86%) 20 steps 50 steps 90.90 ms 232.47 ms 101.01 ms (+11.1%) 271.25 ms (+16.7%) 93.69 ms 235.93 ms F-PNDM 63.93 ms 86.23 ms 132.03 ms 268.65 ms TS-F-PNDM 64.39 ms (+0.7%) 92.61 ms (+7.39%) 141.38 ms (+7.08%) 302.88 (+12.74%)", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Sampling time comparison for different sampling methods on CIFAR-10 using DDPM as backbone. TS-DDIM 40.09 ms (+2.10%) 82.26 ms (+3.70%) 160.45 ms (+1.92%) 394.95 ms (+0.29%)", "figure_data": "Sampling Method ADM w/ DDIM ADM w/5 steps 39.26 ms10 steps 79.30 ms20 steps 157.43 ms50 steps 393.77 ms", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Sampling time comparison for DDIM and TS-DDIM on CIFAR-10 with ADM as backbone.", "figure_data": "", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Quality of the image generation measured with FID ↓ on LSUN-bedroom (256×256) with varying time steps for different sampling algorithms.", "figure_data": "Dataset LSUN-bedroomSampling Method DDIM(uniform) TS-DDIM(uniform) DDPM (uniform) TS-DDPM (uniform) 79.05 5 steps 10 steps 20 steps 50 steps 52.29 16.90 8.78 6.74 51.57 16.66 8.29 6.90 85.60 42.82 22.66 10.79 32.47 15.40 10.20-Dataset CIFAR-10 CelebASampling Method Analytic-DPM(DDIM) TS-DDIM Analytic-DPM(DDIM) TS-DDIM10 steps 50 steps 100 steps 14.00 4.04 3.55 11.93 4.16 3.81 15.62 6.13 4.29 9.36 4.20 4.18", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison of TS-DDIM with Analytic-DPM on CIFAR-10 (32×32) and CelebA (64×64)", "figure_data": "", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Comparison of precision and recall for DDIM and TS-DDIM on CIFAR-10 with ADM as backbone.", "figure_data": "Sampling Method ADM w/ DDIM ADM w/ TS-DDIM5 steps Precision Recall Precision Recall Precision Recall Precision Recall 10 steps 20 steps 50 steps 0.59 0.47 0.62 0.52 0.64 0.57 0.66 0.60 0.57 0.46 0.62 0.55 0.64 0.60 0.65 0.62", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "We adopt ADM as the backbone model with classifier guidance.", "figure_data": "Sampling Method 5 steps DDIM 67.63 TS-DDIM 39.47(+41.64%) 13.45(+2.11%) 6.57(+3.81%) 10 steps 20 steps 13.74 6.83", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Performance comparison with DDIM on ImageNet (64×64) with classifier guidance.", "figure_data": "", "figure_id": "tab_12", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Performance comparison with DPM-solver and DEIS obtained on CIFAR-10 (32×32).Finally, in Table10, we report the performance of our method performed on text-to-image generation on MSCOCO val2017.", "figure_data": "Sampling Method 10 steps DDIM 27.80 TS-DDIM 26.32 (+5.32%) 24.80 (+2.63%) 20 steps 25.47", "figure_id": "tab_14", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Performance comparison with DDIM for 
text-to-image generation obtained on MSCOCO val2017.", "figure_data": "", "figure_id": "tab_15", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "The training and inference discrepancy of DDIM with 50 sampling steps on CIFAR-10 for window size=10.", "figure_data": "6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS(xt, t)6WHS 6WHS 6WHS 6WHS 6WHS 6WHS 6WHS 6WHS 6WHS(xt, t) (xt, t) (xt, t) (xt, t) (xt, t) (xt, t) (xt, t) (xt, t) (xt, t)6WHS 6WHS 6WHS 6WHS 6WHS 6WHS 6WHS 6WHS Figure 19: (xt, t) (xt, t) (xt, t) (xt, t) (xt, t) (xt, t) (xt, t) (xt, t) (xt, t) 6WHS+4+3+2+1-2 7LPH6WHS6KLIW -1 0-3-4-5-6+4+3+2+1-2 7LPH6WHS6KLIW -1 0-3-4-5-6", "figure_id": "tab_16", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Ho et al., 2020)", "Explanation": "The cited work by Ho et al. (2020) introduces the concept of Diffusion Probabilistic Models (DPMs), which serves as the basis for the research conducted in the citing paper on generating high-quality images."}, {"Category": "Methodological Basis", "Citation": "(Sohl-Dickstein et al., 2015)", "Explanation": "The work by Sohl-Dickstein et al. (2015) provides a foundational understanding of DPMs and their use in generating high-quality images, which the citing paper builds upon in its research."}, {"Category": "Extension or Continuation", "Citation": "(Dhariwal & Nichol, 2021)", "Explanation": "The work by Dhariwal and Nichol (2021) extends the research on DPMs by exploring the use of these models in generating high-quality images, which the citing paper further builds upon in its study."}, {"Category": "Extension or Continuation", "Citation": "(Ramesh et al., 2022)", "Explanation": "The work by Ramesh et al. (2022) continues the research on DPMs and their use in generating high-quality images, providing additional insights and findings that the citing paper leverages in its own study."}, {"Category": "Extension or Continuation", "Citation": "(Rombach et al., 2022a)", "Explanation": "The work by Rombach et al. (2022a) further extends the research on DPMs and their use in generating high-quality images, providing additional information and analysis that the citing paper builds upon in its study."}, {"Category": "Extension or Continuation", "Citation": "(Nichol et al., 2022)", "Explanation": "The work by Nichol et al. (2022) continues the research on DPMs and their use in generating high-quality images, providing additional insights and findings that the citing paper leverages in its own study."}, {"Category": "Methodological Basis", "Citation": "(Ho et al., 2020)", "Explanation": "The cited work on generalizing DDPM to non-Markovian processes provides a methodological basis for the citing paper to explore the training and sampling process of DPM in a more general and flexible way."}, {"Category": "Extension or Continuation", "Citation": "(Song et al., 2021)", "Explanation": "The cited work on deriving optimal variance during sampling extends the research on DDPM by providing a new method to improve the sampling process in DPM."}, {"Category": "Methodological Basis", "Citation": "(Bao et al., 2022b)", "Explanation": "The cited work on thresholding the pixel values as additional regularization provides a methodological basis for the citing paper to improve the training and sampling process of DPM by adding a new regularization technique."}, {"Category": "Methodological Basis", "Citation": "(Saharia et al., 2022a)", "Explanation": "The cited work on developing pseudo numerical methods for solving differential equations on manifolds provides a methodological basis for the citing paper to study the training and sampling process of DPM in a more advanced and complex setting."}, {"Category": "Methodological Basis", "Citation": "(Liu et al., 2022a)", "Explanation": "The cited work on developing pseudo numerical methods for solving differential equations on manifolds provides a methodological basis for the citing paper to study the training and sampling process of DPM in a more advanced and complex setting."}, {"Category": "Supporting Evidence", "Citation": "(Ranzato et al., 2016)", "Explanation": "The cited work by Ranzato et al. 
(2016) identifies the exposure bias problem in autoregressive generative models, which the citing paper uses to explain the error arising from the difference between training and inference in the context of the exposure bias problem during sampling."}, {"Category": "Extension or Continuation", "Citation": "(Ning et al., 2023)", "Explanation": "The cited work by Ning et al. (2023) proposes a method to alleviate the exposure bias problem in training samples, which the citing paper builds upon to develop a more effective approach for sampling by adjusting the time step t -1 and using a cutoff time step t c to restrict the denoising progress."}, {"Category": "Methodological Basis", "Citation": "(Liu et al., 2022a)", "Explanation": "The cited work by Liu et al. (2022a) provides the F-PNDM model, which the citing paper uses to improve the FID score on the CIFAR-10 image generation benchmark by introducing the Time-Shift Sampler method."}, {"Category": "Methodological Basis", "Citation": "(Ho et al., 2019)", "Explanation": "The cited work by Ho et al. provides the reparameterization trick and the equation for computing the noisy intermediate state, which the citing paper adopts in their research on the transition kernel of the backward process."}, {"Category": "Methodological Basis", "Citation": "(Ho et al., 2020)", "Explanation": "The cited work by Ho et al. (2020) provides the method of optimizing the variational lower bound of the negative log-likelihood, which the citing paper adopts in their research to minimize the KL divergence between the forward and backward process."}, {"Category": "Methodological Basis", "Citation": "(Song et al., 2021)", "Explanation": "The cited work introduces the DDIM sampler, which the citing paper adopts in the backward sampling process to study the behavior of the network during training."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2023a;Ning et al., 2023)", "Explanation": "The cited works provide a basis for the network prediction error of DPM following a normal distribution, which the citing paper uses in the backward process at time step t to represent the predicted next state."}, {"Category": "Methodological Basis", "Citation": "(Song et al. 2021)", "Explanation": "The cited work, Denoising Diffusion Implicit Models (DDIM), is used as a basis for the method proposed in the citing paper, which aims to improve sampling algorithms by incorporating the Time-Shift Sampler."}, {"Category": "Methodological Basis", "Citation": "(Ho et al. 2020)", "Explanation": "The cited work, Denoising Diffusion Probabilistic Models (DDPM), is referenced in the context of the method proposed in the citing paper, which is aimed at improving sampling algorithms by incorporating the Time-Shift Sampler."}, {"Category": "Methodological Basis", "Citation": "(Liu et al. 
2022a)", "Explanation": "The cited work based on high-order numerical solvers is mentioned in the context of the method proposed in the citing paper, which is focused on improving sampling algorithms by incorporating the Time-Shift Sampler."}, {"Category": "Methodological Basis", "Citation": "(Ho et al., 2020)", "Explanation": "The cited work, DDPM, serves as the basis for the stochastic sampling method used in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Song et al., 2021)", "Explanation": "The cited work, DDIM, provides the deterministic version of DDPM, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "(Liu et al., 2022a)", "Explanation": "The cited work, S-PNDM and F-PNDM, serve as the basis for the sampling methods based on second- and fourth-order ODE solvers used in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Krizhevsky, 2009)", "Explanation": "The cited work by Krizhevsky (2009) provides the dataset of CIFAR-10, which is used as a benchmark for evaluating the performance of the DDPM model in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Liu et al., 2015)", "Explanation": "The cited work by Liu et al. (2015) provides the dataset of CelebA 64\u00d764, which is used as a benchmark for evaluating the performance of the DDPM model in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Ning et al., 2023)", "Explanation": "The cited work by Ning et al. (2023) introduces the ADM-IP model, which the citing paper extends by using the DDIM sampler to compare the performance of the two models in generating images."}, {"Category": "Data Source", "Citation": "(Dhariwal & Nichol, 2021)", "Explanation": "The cited work by Dhariwal & Nichol (2021) provides the ADM model, which the citing paper uses as the backbone model in the ADM-IP model to generate images."}, {"Category": "Supporting Evidence", "Citation": "(Heusel et al., 2017)", "Explanation": "The cited work by Heusel et al. (2017) introduces the Frechet Inception Distance (FID), which the citing paper uses to evaluate the quality of the generated images in the experiment."}, {"Category": "Supporting Evidence", "Citation": "Table 1", "Explanation": "The table provides a comparison of four sampling methods and their Time-Shift variants on two datasets, which supports the claim that the Time-Shift Sampler consistently improves the quality of generated images."}, {"Category": "Methodological Basis", "Citation": "(Dhariwal & Nichol, 2021)", "Explanation": "The cited work by Dhariwal and Nichol (2021) introduces the ADM model, which serves as the backbone model for the Time-Shift Sampler method presented in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Ning et al., 2023)", "Explanation": "The cited work by Ning et al. (2023) introduces the variant of ADM called ADM-IP, which is used in the citing paper to alleviate exposure bias in DPM by retraining ADM with input perturbation."}, {"Category": "Methodological Basis", "Citation": "(Dhariwal & Nichol, 2021)", "Explanation": "The cited work by Dhariwal and Nichol (2021) is used to present the results on ADM and the variant ADM-IP in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Song et al., 2021)", "Explanation": "The cited work by Song et al. 
(2021) provides the method of time step selection procedures, which the citing paper adopts in their research to compare different sampling methods in their models."}, {"Category": "Data Source", "Citation": "(Ho et al., 2020)", "Explanation": "The cited work by Ho et al. (2020) serves as the data source for the models used in the citing paper to evaluate the sampling methods in their research."}, {"Category": "Data Source", "Citation": "(Ning et al., 2023)", "Explanation": "The cited work by Ning et al. (2023) provides the data source for the ADM architecture and checkpoints used in the comparison between ADM and the method proposed in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Yu et al., 2016)", "Explanation": "The cited work provides the dataset used in the study conducted in the citing paper, which serves as a foundational element for the research."}, {"Category": "Extension or Continuation", "Citation": "(Bao et al., 2022b)", "Explanation": "The cited work is used to compare the results obtained in the citing paper, indicating an extension or continuation of the research."}, {"Category": "Methodological Basis", "Citation": "(Xiao et al., 2022)", "Explanation": "The cited work is referenced to present the precision and recall results obtained in the study conducted in the citing paper, providing a methodological basis for the research."}, {"Category": "Data Source", "Citation": "(Lu et al., 2022)", "Explanation": "The cited work is used to integrate the method into the DPM-solver, indicating a reliance on external data or pre-existing models for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Zhang & Chen, 2023)", "Explanation": "The cited work is used to integrate the method into the DEIS, highlighting the reliance on external data or pre-existing models for the study conducted in the citing paper."}]
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b10", "b48", "b28", "b27", "b48", "b38", "b25", "b42", "b5", "b33", "b13", "b1", "b6", "b70", "b25", "b54", "b70", "b25", "b54" ], "table_ref": [], "text": "Approximating probability distributions from finite observational datasets is a pivotal machine learning challenge, with recent strides made in areas like text (Brown et al., 2020), images (Nichol & Dhariwal, 2021), and video (Ho et al., 2022).\nThe burgeoning interest in diffusion generative models (Ho et al., 2020;Nichol & Dhariwal, 2021;Song et al., 2021b) can be attributed to their stable optimization goals and fewer training anomalies (Kodali et al., 2017). However, fully utilizing the potential of these models across scientific and engineering disciplines remains an open problem. While diffusion generative models excel in domains with Euclidean (i.e. flat) spaces like 2D images or 3D geometry and video, many scientific problems involve reasoning about continuous functions on curved spaces (i.e. Riemannian manifolds). Examples include climate observations on the sphere (Hersbach et al., 2020;Lindgren et al., 2011) or solving PDEs on curved surfaces, which is a crucial problem in areas like quantum mechanics (Bhabha, 1945) and molecular conformation (Jing et al., 2022). Recent works have tackled the problem of learning generative models of continuous functions following either adversarial formulations (Dupont et al., 2022b), latent parametrizations (Dupont et al., 2022a;Du et al., 2021;Bauer et al., 2023), or diffusion models (Bond-Taylor & Willcocks, 2023;Zhuang et al., 2023). While these approaches have shown promise on functions within the Euclidean domain, the general case of learning generative models of functions on Riemannian manifolds remains unexplored.\nIn this paper, we introduce Manifold Diffusion Fields (MDF), extending generative models over functions to the Riemannian setting. We take the term function and field to have equivalent meaning throughout the paper. Note that these are not to be confused with gradient vector fields typically used on manifold. These fields f : M → Y map points from a manifold M (that might be parametrized as a 3D mesh, graph or even a pointcloud, see Sect. 5.2) to corresponding values in signal space Y. MDF is trained on collections of fields and learns a generative model that can sample different fields over a manifold. In Fig. 1 we show real samples of such functions for different manifolds, as well as samples generated by MDF.\nHere are our main contributions:\n• We borrow insights from spectral geometry analysis to define a coordinate system for points in manifolds using the eigen-functions of the Laplace-Beltrami Operator.\nFigure 1: MDF learns a distribution over a collection of fields f : M → R d , where each field is defined on a manifold M. We show real samples and MDF's generations on different datasets of fields defined on different manifolds. First row: MNIST digits on the sine wave manifold. Second row Middle: ERA5 climate dataset (Hersbach et al., 2020) on the 2D sphere. Third row: GMM dataset on the bunny manifold. Fourth row: molecular conformations in GEOM-QM9 (Ruddigkeit et al., 2012) given the molecular graph.\n• We formulate an end-to-end generative model for functions defined on manifolds, allowing sampling different fields over a manifold. 
Focusing on practical settings, our extensive experimental evaluation covers graphs, meshes and pointclouds as approximations of manifolds.\n• We empirically demonstrate that our model outperforms recent approaches like (Zhuang et al., 2023;Dupont et al., 2022b), yielding diverse and high fidelity samples, while being robust to rigid and isometric manifold transformations. Results on climate modeling datasets (Hersbach et al., 2020) and PDE problems show the practicality of MDF in scientific domains.\n• We show that MDF can learn a distribution over functions on different manifolds. On the challenging problem of molecular conformer generation, MDF obtains state-of-the-art results on GEOM-QM9 (Ruddigkeit et al., 2012)." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b70", "b13", "b13", "b49", "b22", "b39", "b21", "b70", "b13", "b8", "b20", "b53", "b11", "b45", "b57", "b23", "b17", "b8", "b20", "b53", "b11", "b19", "b35", "b16", "b37", "b7", "b29" ], "table_ref": [], "text": "Our approach extends recent efforts in generative models for continuous functions in Euclidean space (Zhuang et al., 2023;Dupont et al., 2022b;a;Du et al., 2021), shown Fig. 2(a), to functions defined over manifolds, see Fig. 2(b). The term Implicit Neural Representation (INR) is used in these works to denote a parameterization of a single function (e.g. a single image in 2D) using a neural network that maps the function's inputs (i.e. pixel coordinates) to its outputs (i.e. RGB values). Different approaches have been proposed to learn distributions over fields in Euclidean space, GASP (Dupont et al., 2022b) leverages a GAN whose generator produces field data whereas a point cloud discriminator operates on discretized data and aims to differentiate real and generated functions. Two-stage approaches (Dupont et al., 2022a;Du et al., 2021) adopt a latent field parameterization (Park et al., 2019) where functions are parameterized via a hyper-network (Ha et al., 2017) and a generative model is learnt on the latent or INR representations. In addition, MDF also relates to recent work focusing on fitting a function (e.g. learning an INR) on a manifold using an intrinsic coordinate system (Koestler et al., 2022;Grattarola & Vandergheynst, 2022), and generalizes it to the problem of learning a probabilistic model over multiple functions defined on a manifold.\nFigure 2: (a) Generative models of fields in Euclidean space (Zhuang et al., 2023;Dupont et al., 2022b;a;Du et al., 2021) learn a distribution p θ over functions whose domain is R n . We show an example where each function is the result of evaluating a Gaussian mixture with 3 random components in 2D. (b) MDF learns a distribution p θ from a collection of fields whose domain is a general Riemannian manifold f ∼ q(f )|f : M → Y. Similarly, as an illustrative example each function is the result of evaluating a Gaussian mixture with 3 random components on M (i.e. the Stanford bunny). 
(c) Riemannian generative models (Bortoli et al., 2022;Gemici et al., 2016;Rozen et al., 2021;Chen & Lipman, 2023) learn a parametric distribution p θ from an empirical observations x ∼ q(x)|x ∈ M of points x on a Riemannian manifold M, denoted by black dots on the manifold.\nIntrinsic coordinate systems have also been recently used in the context of Graph Transformers (Maskey et al., 2022;Sharp et al., 2022;He et al., 2022;Dwivedi et al., 2020), where eigenvectors of the Graph Laplacian are used to replace standard positional embeddings (in addition to also using edge features). In this setting, Graph Transformer architectures have been used for supervised learning problems like graph/node classification and regression, whereas we focus on generative modeling.\nThe learning problem we tackle with MDF can be interpreted as lifting the Riemannian generative modeling problem (Bortoli et al., 2022;Gemici et al., 2016;Rozen et al., 2021;Chen & Lipman, 2023) to function spaces. MDF is also related to work on Neural Processes (Garnelo et al., 2018;Kim et al., 2019;Dutordoir et al., 2022), which also learn distributions over functions. As opposed to the formulation of Neural Processes which optimizes an ELBO (Kingma & Welling, 2014) we formulate MDF as a denoising diffusion process in function space, which results in a robust training objective and a powerful inference process. Moreover, our work relates to formulations of Gaussian Processes (GP) on Riemannian manifolds (Borovitskiy et al., 2020;Hutchinson et al., 2021). These approaches are GP formulations of Riemannian generative modeling (see Fig. 2), in the sense that they learn conditional distributions of points on the manifold, as opposed to distributions over functions on the manifold like MDF." }, { "figure_ref": [], "heading": "PRELIMINARIES", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "DENOISING DIFFUSION PROBABILISTIC MODELS", "publication_ref": [ "b27", "b17", "b27", "b27", "b65", "b48", "b28" ], "table_ref": [], "text": "Denoising Diffusion Probabilistic Models (Ho et al., 2020) (DDPMs) belong to the broad family of latent variable models. We refer the reader to (Everett, 2013) for an in depth review. In short, to learn a parametric data distribution p θ (x 0 ) from an empirical distribution of finite samples q(x 0 ), DDPMs reverse a diffusion Markov Chain that generates latents x 1:T by gradually adding Gaussian noise to the data x 0 ∼ q(x 0 ) for T time-steps as follows: q(x t |x t-1 ) := N (x t-1 ; √ ᾱt x 0 , (1 -ᾱt )I). Here, ᾱt is the cumulative product of fixed variances with a handcrafted scheduling up to time-step t. (Ho et al., 2020) introduce an efficient training recipe in which: i) The forward process adopts sampling in closed form. ii) reversing the diffusion process is equivalent to learning a sequence of denoising (or score) networks ϵ θ , with tied weights. Reparameterizing the forward process as\nx t = √ ᾱt x 0 + √ 1 -ᾱt ϵ results in the \"simple\" DDPM loss: E t∼[0,T ],x0∼q(x0),ϵ∼N (0,I) ∥ϵ -ϵ θ ( √ ᾱt x 0 + √ 1 -ᾱt ϵ, t)∥ 2\n, which makes learning of the data distribution p θ (x 0 ) both efficient and scalable.\nAt inference time, we compute x 0 ∼ p θ (x 0 ) via ancestral sampling (Ho et al., 2020). Concretely, we start by sampling x T ∼ N (0, I) and iteratively apply the score network ϵ θ to denoise x T , thus reversing the diffusion Markov Chain to obtain x 0 . 
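The closed-form forward process and the "simple" ε-prediction loss above fit in a few lines; the explicit ancestral-sampling update is given next. The sketch below is a minimal NumPy illustration under the assumption of a generic `eps_model(x_t, t)` callable (any array-in/array-out noise predictor) and an assumed linear-beta schedule; it is not tied to any specific framework.

```python
import numpy as np

def ddpm_simple_loss(eps_model, x0, alphas_cumprod, rng):
    """One Monte-Carlo draw of the 'simple' DDPM objective:
    || eps - eps_model(sqrt(abar_t) x0 + sqrt(1 - abar_t) eps, t) ||^2
    x0: clean sample (any shape); alphas_cumprod: 1-D array of abar_t values.
    """
    T = len(alphas_cumprod)
    t = rng.integers(0, T)                         # t ~ Uniform({0, ..., T-1})
    eps = rng.standard_normal(x0.shape)            # eps ~ N(0, I)
    abar = alphas_cumprod[t]
    x_t = np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps   # closed-form forward sample
    return np.mean((eps - eps_model(x_t, t)) ** 2)

# Illustrative usage with a dummy predictor and an assumed linear-beta schedule.
betas = np.linspace(1e-4, 2e-2, 1000)
alphas_cumprod = np.cumprod(1.0 - betas)
rng = np.random.default_rng(0)
loss = ddpm_simple_loss(lambda x, t: np.zeros_like(x), np.ones((8, 8)), alphas_cumprod, rng)
```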
Sampling x t-1 ∼ p θ (x t-1 |x t ) is equivalent to computing the update:\nx t-1 = 1 √ αt x t -1-αt √ 1-αt ϵ θ (x t , t) + z,\nwhere at each inference step a stochastic component z ∼ N (0, I) is injected, resembling sampling via Langevin dynamics (Welling & Teh, 2011). In practice, DDPMs have obtained amazing results for signals living in an Euclidean grid (Nichol & Dhariwal, 2021;Ho et al., 2022). However, the extension to functions defined on curved manifolds remains an open problem." }, { "figure_ref": [], "heading": "RIEMANNIAN MANIFOLDS", "publication_ref": [ "b8", "b20", "b53", "b11" ], "table_ref": [], "text": "Previous work on Riemannian generative models (Bortoli et al., 2022;Gemici et al., 2016;Rozen et al., 2021;Chen & Lipman, 2023) develops machinery to learn distribution from a training set of points living on Riemannian manifolds. Riemannian manifolds are connected and compact manifolds M equipped with a smooth metric g : T x M × T x M → R ≥0 (e.g. a smoothly varying inner product from which a distance can be constructed on M). A core tool in Riemannian manifolds is the tangent space, this space defines the tangent hyper-plane of a point x ∈ M and is denoted by T x M. This tangent space T x M is used to define inner products ⟨u, v⟩ g , u, v ∈ T x M, which in turns defines g. The tangent bundle T M is defined as the collection of tangent spaces for all points T x M ∀x ∈ M.\nIn practice we cannot assume that for general geometries (e.g. geometries for which we don't have access to a closed form and are commonly represented as graphs/meshes) one can efficiently compute g. While it is possible to define an analytical form for the Riemannian metric g on simple parametric manifolds (e.g. hyper-spheres, hyperbolic spaces, tori), general geometries (i.e. the Stanford bunny) are inherently discrete and irregular, which can make it expensive to even approximate g. To mitigate these issues MDF is formulated from the ground up without relying on access to an analytical form for g or the tangent bundle T M and allows for learning a distribution of functions defined on general geometries." }, { "figure_ref": [ "fig_1" ], "heading": "LAPLACE-BELTRAMI OPERATOR", "publication_ref": [ "b41", "b41", "b46", "b41", "b63", "b66", "b70", "b13" ], "table_ref": [], "text": "The Laplace-Beltrami Operator (LBO) denoted by ∆ M is one of the cornerstones of differential geometry and can be intuitively understood as a generalization of the Laplace operator to functions defined on Riemannian manifolds M. Intuitively, the LBO encodes information about the curvature of the manifold and how it bends and twists at every point, reflecting the intrinsic geometry. One of the basic uses of the Laplace-Beltrami operator is to define a functional basis on the manifold by solving the general eigenvalue problem associated with ∆ M , which is a foundational technique in spectral geometry analysis (Lévy, 2006). The eigen-decomposition of ∆ M are the non-trivial solutions to the equation ∆ M φ i = λ i φ i . The eigen-functions φ i : M → R represent an orthonormal functional basis for the space of square integrable functions (Lévy, 2006;Minakshisundaram & Pleijel, 1949). Thus, one can express a square integrable function f : M → Y, with f ∈ L 2 as a linear combination of the functional basis, as follows:\nf = ∞ i=1 ⟨f, φ i ⟩φ i .\nIn practice, the infinite sum is truncated to the k eigen-functions with lowest eigen-values, where the ordering of the eigen-values λ 1 < λ 2 • • • < λ k enables a low-pass filter of the basis. 
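As a concrete, deliberately simplified illustration of this truncated basis, the sketch below builds a combinatorial graph Laplacian from mesh/graph edges, extracts the k lowest eigen-functions with SciPy, and low-pass reconstructs a per-vertex function. The combinatorial Laplacian is an assumption made for brevity; cotangent or point-cloud Laplacians can be substituted without changing the projection step.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def lbo_eigenbasis(edges, n_vertices, k=32):
    """Approximate LBO eigen-functions with a combinatorial graph Laplacian L = D - W.
    edges: (E, 2) integer array of vertex indices. Returns (eigvals, eigvecs), where
    eigvecs[:, i] plays the role of phi_i evaluated at every vertex.
    """
    rows, cols = edges[:, 0], edges[:, 1]
    W = sp.coo_matrix((np.ones(len(edges)), (rows, cols)),
                      shape=(n_vertices, n_vertices))
    W = ((W + W.T) > 0).astype(float)                      # symmetric 0/1 adjacency
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W    # L = D - W
    eigvals, eigvecs = eigsh(L.tocsc(), k=k, which="SM")   # k smallest eigenvalues
    return eigvals, eigvecs

def lowpass_reconstruction(f_values, eigvecs):
    """Project a per-vertex function on the truncated basis: f ~ sum_i <f, phi_i> phi_i."""
    coeffs = eigvecs.T @ f_values                          # <f, phi_i>
    return eigvecs @ coeffs                                # low-pass reconstruction of f
```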
Moreover, (Lévy, 2006) shows that the eigen-functions of ∆ M can be interpreted as a Fourier-like function basis (Vallet & Lévy, 2008) on the manifold, e.g. an intrinsic coordinate system for the manifold. In particular, if M = S 2 this functional basis is equivalent to spherical harmonics, and in Euclidean space it becomes a Fourier basis which is typically used in implicit representations (Xie et al., 2022). MDF uses the eigen-functions of the LBO ∆ M to define a Fourier-like positional embedding (PE) for points on M (see Fig. 3). Note that these eigen-functions are only defined for points that lie on the manifold, making MDF strictly operate on the manifold. (Zhuang et al., 2023;Dupont et al., 2022b;a;Du et al., 2021) use this representation to encode a function's input. Right: MDF uses the eigen-functions φ i of the Laplace-Beltrami Operator (LBO) ∆ M evaluated at a point x ∈ M." }, { "figure_ref": [ "fig_1" ], "heading": "METHOD", "publication_ref": [ "b70", "b56", "b70", "b70" ], "table_ref": [], "text": "MDF is a diffusion generative model that captures distributions over fields defined on a Riemannian manifold M. We are given observations in the form of an empirical distribution f 0 ∼ q(f 0 ) over fields where a field f 0 : M → Y maps points from a manifold M to a signal space Y. As a result, latent variables f 1:T are also fields on manifolds that can be continuously evaluated.\nTo tackle the problem of learning a diffusion generative model over fields we employ a similar recipe to (Zhuang et al., 2023), generalizing from fields defined on Euclidean domains to functions on Riemannian manifolds. In order to this we use the first k eigen-functions φ i=1:k of ∆ M to define a Fourier-like representation on M. Note that our model is independent of the particular parametrization of the LBO, e.g. cotangent, point cloud (Sharp & Crane, 2020) or graph laplacians can be used depending on the available manifold parametrization (see Sect. 5.2 for experimental results). We use the term φ\n(x) = √ n[φ 1 (x), φ 2 (x), . . . , φ k (x)] ∈ R k to denote the normalized eigen-function representation of a point x ∈ M.\nIn Fig. 3 we show a visual comparison of standard Fourier PE on Euclidean space and the eigen-functions of the LBO on a manifold.\nWe adopt an explicit field parametrization (Zhuang et al., 2023), where a field is characterized by a set of coordinate-signal pairs {(φ(x c ), y (c,0) )}, x c ∈ M, y (c,0) ∈ Y, which is denoted as context set. We row-wise stack the context set and refer to the resulting matrix via\nC 0 = [φ(X c ), Y (c,0) ].\nHere, φ(X c ) denotes the eigen-function representation of the coordinate portion and Y (c,0) denotes the signal portion of the context set at time t = 0. We define the forward process for the context set by diffusing the signal and keeping the eigen-functions fixed:\nC t = [φ(X c ), Y (c,t) = √ ᾱt Y (c,0) + √ 1 -ᾱt ϵ c ],(1)\nwhere ϵ c ∼ N (0, I) is a noise vector of the appropriate size. We now turn to the task of formulating a score network for fields. Following (Zhuang et al., 2023), the score network needs to take as input the context set (i.e. the field parametrization), and needs to accept being evaluated continuously in M. We do this by employing a query set {x q , y (q,0) }. 
Equivalently to the context set, we row-wise stack query pairs and denote the resulting matrix as\nQ 0 = [φ(X q ), Y (q,0) ].\nNote that the forward diffusion process is equivalently defined for both context and query sets:\nQ t = [φ(X q ), Y (q,t) = √ ᾱt Y (q,0) + √ 1 -ᾱt ϵ q ],(2)\nwhere ϵ q ∼ N (0, I) is a noise vector of the appropriate size. The underlying field is solely defined by the context set, and the query set are the function evaluations to be de-noised. The resulting score field model is formulated as follows, εq = ϵ θ (C t , t, Q t ).\nUsing the explicit field characterization and the score field network, we obtain the training and inference procedures in Alg. 1 and Alg. 2, respectively, which are accompanied by illustrative examples of sampling a field encoding a Gaussian mixture model over the manifold (i.e. the bunny).\nFor training, we uniformly sample context and query sets from f 0 ∼ Uniform(q(f 0 )) and only corrupt their signal using the forward process in Eq. equation 1 and Eq. equation 2. We train the score field network ϵ θ to denoise the signal portion of the query set, given the context set. During sampling," }, { "figure_ref": [], "heading": "Algorithm 1 Training", "publication_ref": [ "b70", "b27" ], "table_ref": [], "text": "1: ∆Mφi = φiλi // LBO eigen-decomposition 2: repeat 3: (C0, Q0) ∼ Uniform(q(f0)) 4: t ∼ Uniform({1, . . . , T }) 5: ϵc ∼ N (0, I), ϵq ∼ N (0, I) Algorithm 2 Sampling\n6: Ct = [φ(Xc), √ ᾱtY(c,0) + √ 1 -ᾱtϵc] 7: Qt = [φ(Xq), √ ᾱtY(q,0) + √ 1 -ᾱtϵq] 8: Take gradient descent step on ∇ θ ∥ϵq -ϵ θ (Ct, t, Qt)∥ 2 9: until converged\n1: ∆Mφi = φiλi // LBO eigen-decomposition 2: QT = [φ(Xq), Y (q,t) ∼ N (0q, Iq)] 3: CT ⊆ QT ▷ Random subset 4: for t = T, . . . , 1 do 5: z ∼ N (0, I) if t > 1, else z = 0 6: Y (q,t-1) = 1 √ α t Y (q,t) - 1-α t √ 1-ᾱt ϵ θ (C t , t, Q t ) + σ t z 7: Qt-1 = [Mq, Y (q,t-1) ] 8: Ct-1 ⊆ Qt-1\n▷ Same subset as in step 2 9: end for 10: return f0 evaluated at coordinates φ(Xq) to generate a field f 0 ∼ p θ (f 0 ) we first define a query set Q T = [φ(X q ), Y (q,T ) ∼ N (0, I)] of random values to be de-noised. Similar to (Zhuang et al., 2023) we set the context set to be a random subset of the query set. We use the context set to denoise the query set and follow ancestral sampling as in the vanilla DDPM (Ho et al., 2020). Note that during inference the eigen-function representation φ(x) of the context and query sets does not change, only their corresponding signal value." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [ "b26", "b0" ], "table_ref": [], "text": "We validate the practicality of MDF via extensive experiments including synthetic and real-world problems. In Sect. 5.1 we provide results for learning distributions of functions on a fixed manifold (e.g. climate science), where functions change but manifolds are fixed across all functions. In addition, in Sect. 5.2 we show that MDF is robust to different manifold parametrizations. Finally, in Sect. 5.3 we also provide results on a generalized setting where manifolds are different for each function (e.g. molecule conformer generation). As opposed to generative models over images, we cannot rely on FID (Heusel et al., 2017) type metrics for evaluation since functions are defined on curved geometries. We borrow metrics from generative modeling of point cloud data (Achlioptas et al., 2018), namely Coverage (COV) and Minimum Matching Distance (MMD). 
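Before detailing how these metrics are computed below, Algorithm 1 above can be summarized in code. The sketch is a minimal NumPy illustration under several assumptions: `phi` holds pre-computed eigen-function embeddings for every vertex, `score_field` is any callable playing the role of ε_θ(context, t, query), and the regression target is the noise added to the query signal; it is not the authors' implementation.

```python
import numpy as np

def mdf_training_step(score_field, phi, y0, alphas_cumprod, rng,
                      n_context=256, n_query=1024):
    """One illustrative MDF training step, following Algorithm 1.
    phi: (n_vertices, k) eigen-function embeddings; y0: (n_vertices, d) clean field values.
    score_field(C_t, t, Q_t) -> predicted noise for the query signal, shape (n_query, d).
    """
    n = phi.shape[0]
    t = rng.integers(0, len(alphas_cumprod))
    abar = alphas_cumprod[t]

    ctx_idx = rng.choice(n, size=min(n_context, n), replace=False)   # context vertices
    qry_idx = rng.choice(n, size=min(n_query, n), replace=False)     # query vertices

    eps_c = rng.standard_normal(y0[ctx_idx].shape)
    eps_q = rng.standard_normal(y0[qry_idx].shape)
    C_t = np.concatenate([phi[ctx_idx],
                          np.sqrt(abar) * y0[ctx_idx] + np.sqrt(1 - abar) * eps_c], axis=1)
    Q_t = np.concatenate([phi[qry_idx],
                          np.sqrt(abar) * y0[qry_idx] + np.sqrt(1 - abar) * eps_q], axis=1)

    pred = score_field(C_t, t, Q_t)        # denoise the query signal given the context
    return np.mean((pred - eps_q) ** 2)    # target: the noise added to the query signal
```

At sampling time the same context/query split is kept fixed while only the query signal is iteratively denoised, mirroring Algorithm 2.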
We compute COV and MMD metrics based on the l 2 distance in signal space for corresponding vertices in the manifolds." }, { "figure_ref": [ "fig_4" ], "heading": "DISTRIBUTIONS OF FUNCTIONS ON A FIXED MANIFOLD", "publication_ref": [ "b34", "b59", "b70", "b70", "b70", "b70", "b34", "b41", "b59", "b25", "b25", "b59" ], "table_ref": [], "text": "We evaluate MDF on 3 different manifolds that are fixed across functions: a sine wave, the Stanford bunny and a human mesh. These manifolds have an increasing average mean curvature |K| (averaged over vertices), which serves as a measure for how distant they are from being globally Euclidean. On each manifold we define 3 function datasets: a Gaussian Mixture (GMM) with 3 components (where in each field the 3 components are randomly placed on the manifold), MNIST (LeCun et al., 1998) and CelebA-HQ (Karras et al., 2018) images. We use an off-the-shelf texture mapping approach (Sullivan & Kaszynski, 2019) to map images to manifolds, see Fig. 1. We compare MDF with Diffusion Probabilistic Fields (DPF) (Zhuang et al., 2023) a generative model for fields in ambient space, where points in the manifold are parametrized by the Fourier PE of its coordinates in 3D space.\nTo provide a fair comparison we equate all the hyper-parameters in both MDF and DPF (Zhuang et al., 2023). Tab. 1-2-3 show results for the different approaches and tasks. We observe that MDF tends to outperform DPF (Zhuang et al., 2023), both in terms of covering the empirical distribution, resulting in higher COV, but also in the fidelity of the generated fields, obtaining a lower MMD.\nIn particular, MDF outperforms DPF (Zhuang et al., 2023) across the board for manifolds of large mean curvature |K|. We attribute this behaviour to our choice of using intrinsic functional basis (e.g. eigen-functions of the LBO) to represent a coordinate system for points in the manifold. Fig. 1 shows a side to side comparison of real and generated functions on different manifolds obtained from MDF.\nWe also compare MDF with GASP (Dupont et al., 2022b), a generative model for continuous functions using an adversarial formulation. We compare MDF and GASP performance on the CelebA-HQ dataset (Karras et al., 2018) mapped on the bunny manifold. Additionally, we report results on the ERA5 climate dataset (Dupont et al., 2022b), which is composed of functions defined on the sphere f : S 2 → R 1 (see Fig. 1). For the ERA5 dataset we use spherical harmonics to compute φ, which are equivalent to the analytical eigen-functions of the LBO on the sphere (Lévy, 2006). To compare with GASP we use their pre-trained models to generate samples. In the case of CelebA-HQ, we use GASP to generate 2D images and map them to the bunny manifold using (Sullivan & Kaszynski, 2019).\nExperimental results in Tab. 5 show that MDF outperforms GASP in both ERA5 (Hersbach et al., 2020) and CelebA-HQ datasets, obtaining both higher coverage but also higher fidelity in generated functions. This can be observed in Fig. 6 where the samples generated by MDF are visually crisper than those generated by GASP. (Hersbach et al., 2020) and CelebA-HQ both in terms of fidelity and distribution coverage. For GASP, we generate CelebA-HQ images and texture map them to the bunny manifold using (Sullivan & Kaszynski, 2019). Published as a conference paper at ICLR 2024 Furthermore, we ablate the performance of MDF as the number of eigen-functions used to compute the coordinate representation φ increases (e.g. the eigen-decomposition of the LBO). 
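For reference (the ablation discussion continues below), the two metrics just described can be computed as follows. This is a small NumPy sketch assuming every field is flattened to a vector of signal values at corresponding vertices, with COV and MMD in their usual point-cloud-generation form.

```python
import numpy as np

def coverage_and_mmd(generated, reference):
    """COV and MMD between two sets of fields sampled on the same manifold.
    generated: (G, n*d) array; reference: (R, n*d) array -- one row per field,
    flattened signal values at corresponding vertices; distances are l2 in signal space.
    """
    # pairwise l2 distances, shape (G, R)
    dists = np.linalg.norm(generated[:, None, :] - reference[None, :, :], axis=-1)
    # COV: fraction of reference fields that are the nearest neighbour of some generated field
    matched = np.unique(np.argmin(dists, axis=1))
    cov = len(matched) / reference.shape[0]
    # MMD: average distance from each reference field to its closest generated field
    mmd = np.mean(np.min(dists, axis=0))
    return cov, mmd
```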
For this task we use the bunny and the GMM dataset. Results in Fig. 7 show that performance initially increases with the number of eigen-functions, up to a point where high-frequency eigen-functions of the LBO are no longer needed to faithfully encode the distribution of functions.
Figure 7: Performance of MDF as a function of the number of eigen-functions of the LBO, measured by COV and MMD metrics. As expected, performance increases initially as more eigen-functions are used, followed by a plateau phase for more than k = 32 eigen-functions." }, { "figure_ref": [ "fig_6" ], "heading": "MANIFOLD PARAMETRIZATION", "publication_ref": [ "b55", "b56", "b45" ], "table_ref": [], "text": "MDF uses the eigen-functions of the LBO as positional embeddings. In practice, different real-world problems parametrize manifolds in different ways and therefore compute the LBO differently. For example, in computer graphics the use of 3D meshes and cotangent Laplacians (Rustamov et al., 2007) is widespread. In computer vision, 3D geometry can also be represented as point clouds, which enjoy sparsity benefits and for which Laplacians can also be computed (Sharp & Crane, 2020). Finally, in computational chemistry, molecules are represented as undirected graphs of atoms connected by bonds; in this case, graph Laplacians are commonly used (Maskey et al., 2022). In Fig. 8 we show the top-2 eigenvectors of these different Laplacians on the bunny manifold. In Tab. 6 we show the performance of MDF on the bunny mesh on different datasets using different manifold parametrizations and their respective Laplacian computations. These results show that MDF is relatively robust to the choice of Laplacian and can be readily applied to any of these settings by simply computing eigenvectors of the appropriate Laplacian." }, { "figure_ref": [], "heading": "GENERALIZING ACROSS MANIFOLDS", "publication_ref": [ "b67", "b18", "b33", "b54", "b50", "b67", "b68", "b18", "b33", "b18", "b33", "b33", "b33" ], "table_ref": [], "text": "We now generalize the problem setting to learning distributions over functions where each function is defined on a different manifold. In this setting, the training set is defined as {f_i}_{i=0:N} with functions f_i : M_i → Y mapping elements from different manifolds M_i to a shared signal space Y. This is a generalization of the setting in Sect. 5.1, where functions are defined as f_i : M → Y, with the manifold M being fixed across the f_i's. This generalized setting is far more complex than the fixed setting, since the model not only has to capture the distribution of functions but also needs to represent different manifolds in a consistent manner. To evaluate the performance of MDF in this setting we tackle the challenging problem of molecule conformer generation (Xu et al., 2021;2022;Ganea et al., 2021;Jing et al., 2022), which is a fundamental task in computational chemistry and requires models to handle multiple manifolds. In this problem, manifolds M_i are parametrized as graphs that encode the connectivity structure between atoms of different types. From MDF's perspective a conformer is then a function f_i : M_i → R^3 that maps elements of the graph (e.g. atoms) to points in 3D space. Note that graphs are just one of the different manifold representations that are amenable to MDF, as shown in Sect.
5.2.\nFollowing the standard setting for molecule conformer prediction we use the GEOM-QM9 dataset (Ruddigkeit et al., 2012;Ramakrishnan et al., 2014) which contains ∼ 130K molecules ranging from ∼ 10 to ∼ 40 atoms. We report our results in Tab. 7 and compare with CGCF (Xu et al., 2021), GeoDiff (Xu et al., 2022), GeoMol (Ganea et al., 2021) and Torsional Diffusion (Jing et al., 2022). Note that both GeoMol (Ganea et al., 2021) and Torsional Diffusion (Jing et al., 2022) make strong assumptions about the geometric structure of molecules and model domain-specific characteristics like torsional angles of bonds. In contraposition, MDF simply models the distribution of 3D coordinates of atoms without making any assumptions about the underlying structure. We use the same train/val/test splits as Torsional Diffusion (Jing et al., 2022) and use the same metrics to compare the generated and ground truth conformer ensembles: Average Minimum RMSD (AMR) and Coverage. These metrics are reported both for precision, measuring the accuracy of the generated conformers, and recall, which measures how well the generated ensemble covers the ground-truth ensemble. We generate 2K conformers for a molecule with K ground truth conformers. Note that in this setting, models are evaluated on unseen molecules (e.g. unseen manifolds M i ).\nWe report results on Tab. 7 where we see how MDF outperforms previous approaches. It is important to note that MDF is a general approach for learning functions on manifolds that does not make any assumptions about the intrinsic geometric factors important in conformers like torsional angles in Torsional Diffusion (Jing et al., 2022). This makes MDF simpler to implement and applicable to other settings in which intrinsic geometric factors are not known. Finally, In the appendix we present additional results that carefully ablate different architectures for the score network ϵ θ in A.7.1. As well as an extensive study of the robustness of MDF to both rigid and isometric transformations of the manifold M A.7.2. Finally, we also show conditional inference results on the challenging problem of PDEs on manifolds A.7.3." }, { "figure_ref": [], "heading": "CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this paper we introduced MDF a diffusion probabilistic model that is capable of capturing distributions of functions defined on general Riemannian manifolds. We leveraged tools from spectral geometry analysis and use the eigen-functions of the manifold Laplace-Beltrami Operator to define an intrinsic coordinate system on which functions are defined. This allows us to design an efficient recipe for training a diffusion probabilistic model of functions whose domain are arbitrary geometries.\nOur results show that we can capture distributions of functions on manifolds of increasing complexity outperforming previous approaches, while also enabling the applications of powerful generative priors to fundamental scientific problems like forward and inverse solutions to PDEs, climate modeling, and molecular chemistry." }, { "figure_ref": [], "heading": "A APPENDIX", "publication_ref": [ "b47", "b61", "b32", "b52" ], "table_ref": [], "text": "A.1 BROADER IMPACT STATEMENT When examining the societal implications of generative models, certain critical elements warrant close attention. 
These include the potential misuse of generative models to fabricate deceptive data, such as \"DeepFakes\" (Mirsky & Lee, 2021), the risk of training data leakage and associated privacy concerns (Tinsley et al., 2021), and the potential to amplify existing biases in the training data (Jain et al., 2020). For a comprehensive discussion on ethical aspects in the context of generative modeling, readers are directed to (Rostamzadeh et al., 2021)." }, { "figure_ref": [], "heading": "A.2 LIMITATIONS AND FUTURE WORK", "publication_ref": [ "b31", "b69", "b12", "b27" ], "table_ref": [], "text": "As MDF advances in learning function distributions over Riemannian manifolds, it does encounter certain constraints and potential areas of future enhancement. One primary challenge is the computational demand of the transformer-based score network in its basic form, even at lower resolutions. This stems from the quadratic cost of calculating attention over context and query pairs. To mitigate this, the PerceiverIO architecture, which scales in a linear manner with the number of query and context points, is utilized (Jaegle et al., 2022) in our experiments. Further exploration of other efficient transformer architectures could be a promising direction for future work (Zhai et al., 2022;Dao et al., 2022). Furthermore, MDF, much like DDPM (Ho et al., 2020), iterates over all time steps during sampling to generate a field during inference, a process slower than that of GANs. Current studies have accelerated sampling (Song et al., 2021a), but at the expense of sample quality and diversity. However, it's worth noting that improved inference methods such as (Song et al., 2021a) can be seamlessly incorporated into MDF.\nSince MDF has the capability to learn distributions over fields defined on various Riemannian manifolds within a single model, in future work we are poised to enhance its capacity for comprehending and adapting to a broader range of geometrical contexts. This adaptability will further pave the way towards the development of general foundation models to scientific and engineering challenges, which can better account for the intricate geometric intricacies inherent in real-world scenarios.\nFor example, we aim to extend the application of MDF to inverse problems in PDEs. A noteworthy attribute of our model is its inherent capability to model PDEs on Riemannian manifolds trivially. The intrinsic structure of MDF facilitates not only the understanding and solving of forward problems, where PDEs are known and solutions to the forward problem are needed, but also inverse problems, where certain outcome or boundary conditions are known and the task is to determine the underlying PDE. Expanding our application to handle inverse problems in PDEs on Riemannian manifolds can have profound implications for complex systems modeling, as it enhances our understanding of the manifold structures and the way systems governed by PDEs interact with them." }, { "figure_ref": [], "heading": "A.3 DISCUSSION ON COMPUTING EMBEDDINGS", "publication_ref": [], "table_ref": [], "text": "When considering how to compute embeddings for points in a manifold M there are several options to explore. The simplest one is to adopt the ambient space in which the manifold is embedded as a coordinate system to represent points (eg. a plain coordinate approach). For example, in the case of 3D meshes one can assign a coordinate in R 3 to every point in the mesh. As shown in Tab. 1-2-3-4 this approach (used by DPF) is outperformed by MDF. 
In addition, in Sect. A.7.2 we show that this approach is not robust with respect to rigid or isometric transformations of the manifold. Note that manifolds are not always embedded in an ambient space. For example, in molecular conformation, molecular graphs only represent the connectivity structure between atoms and are not necessarily embedded in a higher-dimensional space.
Another option is a local chart approach. Local charts are appealing because they provide a way of assigning a set of coordinates to points in a local region of the manifold. While the manifold may have arbitrary curvature, local charts are always Euclidean spaces. Each point in the manifold can be described by a unique set of coordinates in a chart, but different charts may overlap, and this requires computing transformations (often complex to implement) to convert coordinates from one chart to another.
Finally, the eigen-functions of the LBO not only provide a way of assigning a coordinate to points on a manifold, they do so by defining an intrinsic coordinate system. This intrinsic coordinate system is global and does not require chart-to-chart transformations. In addition, it is robust with respect to rigid or isometric transformations of the manifold (see Sect. A.7.2). In summary, this intrinsic coordinate system is a more fundamental way of describing the manifold, based on its own inherent properties, without reference to an external ambient space." }, { "figure_ref": [], "heading": "A.4 IMPLEMENTATION DETAILS", "publication_ref": [ "b60", "b60", "b34", "b59", "b25" ], "table_ref": [], "text": "In this section we describe implementation details for all our experiments. These include all details about the data (manifolds and functions), details for computing the eigen-functions φ, and the hyper-parameters and settings for the implementation of the score field network ϵ_θ as well as the compute used for each experiment in the paper.
In terms of datasets of functions on these manifolds we use the following:
• A Gaussian Mixture Model (GMM) dataset with 3 components, where in each field the 3 components are randomly placed on the specific manifold. We define a held-out test set containing 10k samples.
• MNIST (LeCun et al., 1998) and CelebA-HQ (Karras et al., 2018) datasets, where images are texture-mapped onto the meshes using (Sullivan & Kaszynski, 2019); models are evaluated on the standard test sets for these datasets.
• The ERA5 (Hersbach et al., 2020) dataset used to compare with GASP (Dupont et al., 2022b), available at https://github.com/EmilienDupont/neural-function-distributions. This dataset contains a train set of 8510 samples and a test set of 2420 samples, which are the settings used in GASP (Dupont et al., 2022b). To compare with GASP we used their publicly released pre-trained model." }, { "figure_ref": [], "heading": "A.5 COMPUTING THE LAPLACIAN AND φ", "publication_ref": [ "b2", "b24" ], "table_ref": [], "text": "In practice, for general geometries (e.g. general 3D meshes with n vertices) we compute eigenvectors of the symmetric normalized graph Laplacian L. We define L as follows:

$$L = D^{-\frac{1}{2}} (D - A) D^{-\frac{1}{2}}, \qquad (3)$$

where A ∈ {0, 1}^{n×n} is the discrete adjacency matrix and D is the diagonal degree matrix of the mesh graph. Note that eigenvectors of L converge to the eigen-functions of the LBO Δ_M as n → ∞ (Belkin & Niyogi, 2001; Bengio et al., 2003b;a).
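A hedged sketch of this construction for a triangle mesh is given below; it builds A from the face list and uses SciPy's sparse eigensolver, with function and variable names of our own choosing rather than the paper's code.

```python
# Sketch of Eq. 3 and its partial eigen-decomposition for a triangle mesh. `faces` is an
# (F, 3) integer array of vertex indices; helper and variable names are ours.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def laplacian_eigenfunctions(faces: np.ndarray, n_vertices: int, k: int = 64):
    # Binary adjacency matrix A from the three edges of every triangle.
    i = faces[:, [0, 1, 2]].ravel()
    j = faces[:, [1, 2, 0]].ravel()
    A = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n_vertices, n_vertices)).tocsr()
    A = A + A.T
    A.data[:] = 1.0                               # binarize: A in {0, 1}^{n x n}
    deg = np.asarray(A.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    # Symmetric normalized Laplacian: D^{-1/2}(D - A)D^{-1/2} = I - D^{-1/2} A D^{-1/2}.
    L = sp.eye(n_vertices) - D_inv_sqrt @ A @ D_inv_sqrt
    # k smallest eigenpairs; which="SM" is simple but slow for large meshes, where
    # shift-invert around a small negative sigma is usually preferred.
    eigvals, eigvecs = eigsh(L, k=k, which="SM")
    order = np.argsort(eigvals)
    return eigvals[order], eigvecs[:, order]      # phi = sqrt(n_vertices) * eigvecs
```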
The eigen-decomposition of L can be computed efficiently using sparse eigen-problem solvers (Hernandez et al., 2009) and only needs to be computed once during training. Note that eigen-vectors of L are only defined for the mesh vertices. In MDF, we sample random points on the mesh during training and interpolate the eigenvector representation φ of the vertices in the corresponding triangle using barycentric interpolation." }, { "figure_ref": [ "fig_9" ], "heading": "A.5.1 SCORE FIELD NETWORK ϵ θ", "publication_ref": [ "b64", "b31", "b31", "b36" ], "table_ref": [], "text": "In MDF, the score field's design space covers all architectures that can process irregularly sampled data, such as Transformers (Vaswani et al., 2017) and MLPs (Tolstikhin et al., 2021). The model is primarily implemented using PerceiverIO (Jaegle et al., 2022), an effective transformer architecture that encodes and decodes. The PerceiverIO was chosen due to its efficiency in managing large numbers of elements in the context and query sets, as well as its natural ability to encode interactions between these sets using attention. Figure 10 demonstrates how these sets are used within the PerceiverIO architecture. To elaborate, the encoder maps the context set into latent arrays (i.e., a group of learnable vectors) through a cross-attention layer, while the decoder does the same for query set. For a more detailed analysis of the PerceiverIO architecture refer to (Jaegle et al., 2022).\nThe time-step t is incorporated into the score computation by concatenating a positional embedding representation of t to the context and query sets. The specific PerceiverIO settings used in all quantitatively evaluated experiments are presented in Tab.8. Practically, the MDF network consists of 12 transformer blocks, each containing 1 cross-attention layer and 2 self-attention layers, except for GEOM-QM9 we use smaller model with 6 blocks. Each of these layers has 4 attention heads. Fourier position embedding is used to represent time-steps t with 64 frequencies. An Adam (Kingma & Ba, 2015) optimizer is employed during training with a learning rate of 1e -4. We use EMA with a decay of 0.9999. A modified version of the publicly available repository is used for PerceiverIO2 ." }, { "figure_ref": [ "fig_12" ], "heading": "A.7.1 ARCHITECTURE ABLATION", "publication_ref": [ "b31", "b64", "b62", "b31", "b64", "b62" ], "table_ref": [], "text": "The construction of MDF does not rely on a specific implementation of the score network ϵ θ . The score model's design space encompasses a broad range of options, including all architectures capable of handling irregular data like transformers or MLPs. To substantiate this, we conducted an evaluation on the GMM dataset and the Stanford bunny at a resolution of 602 vertices, comparing three distinct architectures: a PerceiverIO (Jaegle et al., 2022), a standard Transformer Encoder-Decoder (Vaswani et al., 2017), and an MLP-mixer (Tolstikhin et al., 2021). For a fair comparison, we approximated the same number of parameters (around 55M) and settings (such as the number of blocks, parameters per block, etc.) for each model and trained them over 500 epochs. Note that because of these reasons the numbers reported in this section are not directly comparable to those shown in the main paper. We simplified the evaluation by using an equal number of context and query pairs. 
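For concreteness, a simplified stand-in for one such score field implementation, a plain Transformer encoder-decoder over context and query pairs, is sketched below. It is not the PerceiverIO configuration used for the reported numbers; the dimensions, the time-embedding recipe and all names are our own assumptions.

```python
# Illustrative stand-in for a score field network eps_theta(C_t, t, Q_t): a plain
# Transformer encoder-decoder over context and query pairs. NOT the exact PerceiverIO
# model used in the paper; dimensions and names are ours. t_emb_dim is assumed even.
import torch
import torch.nn as nn

class TransformerScoreField(nn.Module):
    def __init__(self, k_eig: int, sig_dim: int, d_model: int = 256,
                 n_heads: int = 4, n_layers: int = 4, t_emb_dim: int = 64):
        super().__init__()
        in_dim = k_eig + sig_dim + t_emb_dim
        self.t_emb_dim = t_emb_dim
        self.embed_c = nn.Linear(in_dim, d_model)   # context pairs [phi(x), y_t, t-emb]
        self.embed_q = nn.Linear(in_dim, d_model)   # query pairs
        self.backbone = nn.Transformer(d_model=d_model, nhead=n_heads,
                                       num_encoder_layers=n_layers,
                                       num_decoder_layers=n_layers,
                                       batch_first=True)
        self.out = nn.Linear(d_model, sig_dim)      # predicted noise on the query signal

    def time_embedding(self, t: torch.Tensor) -> torch.Tensor:
        half = self.t_emb_dim // 2
        freqs = torch.exp(-4.0 * torch.arange(half, device=t.device).float() / half)
        ang = t.float()[:, None] * freqs[None]
        return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)   # (B, t_emb_dim)

    def forward(self, c_t: torch.Tensor, q_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # c_t: (B, Nc, k_eig + sig_dim) noisy context pairs, q_t: (B, Nq, k_eig + sig_dim).
        te = self.time_embedding(t)
        c = torch.cat([c_t, te[:, None, :].expand(-1, c_t.shape[1], -1)], dim=-1)
        q = torch.cat([q_t, te[:, None, :].expand(-1, q_t.shape[1], -1)], dim=-1)
        h = self.backbone(self.embed_c(c), self.embed_q(q))   # (B, Nq, d_model)
        return self.out(h)                                    # estimated eps_q
```

Under Alg. 1, training such a network reduces to a mean-squared error between its output and the true query noise ϵ_q.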
Both the Transformer Encoder and the MLP-mixer process context pairs with their respective architectures; the resulting latents are then merged with the corresponding query pairs and fed into a linear projection layer for the final prediction.
In Tab. 9 we show that the MDF formulation is compatible with different architectural implementations of the score field model. We observe relatively uniform performance across architectures, ranging from transformer-based models to MLPs. Similar patterns are noted when examining the qualitative samples displayed in Fig. 11, corroborating our assertion that MDF's advantages stem from its formulation rather than from the specific implementation of the score field model. Each architecture brings its own strengths: MLP-mixers enable high throughput, transformer encoders are easy to implement, and PerceiverIO facilitates the handling of large and variable numbers of context and query pairs. We posit that marrying the strengths of these diverse architectures promises substantial advancement for MDF. Note that these empirical results are not directly comparable to those reported elsewhere in the paper, as these models generally possess around 50% of the parameters of the models used in other sections.
PerceiverIO (Jaegle et al., 2022): COV = 0.569, MMD = 0.00843
Transf. Enc-Dec (Vaswani et al., 2017): COV = 0.581, MMD = 0.00286
MLP-mixer (Tolstikhin et al., 2021): COV = 0.565, MMD = 0.00309
Table 9: Quantitative evaluation of image generation on the GMM + Stanford bunny dataset for different implementations of the score field ϵ_θ (COV ↑, MMD ↓).
Finally, to measure the effect of the random training seed used for weight initialization, we ran the exact same model while fixing all hyper-parameters and training settings. For this experiment we used the PerceiverIO architecture and the GMM dataset on the Stanford bunny geometry with 602 vertices. We ran the same experiment 3 times and measured performance using the COV and MMD metrics. Our results show that across the different training runs MDF obtained COV = 0.569 ± 0.007 and MMD = 0.00843 ± 0.00372." }, { "figure_ref": [ "fig_13", "fig_14", "fig_15", "fig_4", "fig_14", "fig_4" ], "heading": "A.7.2 ROBUSTNESS OF MDF", "publication_ref": [ "b60", "b70", "b70", "b70", "b70", "b43" ], "table_ref": [], "text": "We evaluate MDF's robustness to rigid and isometric transformations of the training manifold M. We use the cat category geometries from (Sumner & Popovic, 2004) and build a dataset of different fields on the manifold by generating 2 gaussians around the area of the tail and the right paw of the cat. Note that every field is different, since the gaussians are centered at different points in the tail and right paw; see Fig. 13(a). In Fig. 13 we show how performance changes as the magnitude of a rigid transformation (e.g. a rotation about the z-axis) of M increases. As expected, the performance of DPF (Zhuang et al., 2023) sharply declines as we move away from the training setting, denoted by a rotation of 0 radians. However, MDF obtains stable performance across transformations; this is due to the eigen-function basis being intrinsic to M and thus invariant to rigid transformations. In addition, in Tab. 4 we show results under an isometric transformation of the manifold (e.g. changing the pose of the cat, see Fig. 13(c)). As in the rigid setting, the performance of DPF (Zhuang et al., 2023) sharply declines under an isometric transformation, while MDF keeps performance constant.
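The mechanism behind this robustness can also be checked numerically: the normalized Laplacian of Eq. 3 is built from mesh connectivity alone, so rigidly moving the vertices cannot change φ, whereas the ambient coordinates consumed by extrinsic embeddings do change. A self-contained toy example (tetrahedron mesh; names are ours) sketches this below.

```python
# Toy check of the robustness argument: the Laplacian of Eq. 3 depends only on the face
# list, so a rigid rotation of the vertex positions leaves it (and hence phi) unchanged,
# while the ambient coordinates used by extrinsic embeddings do change.
import numpy as np
import scipy.sparse as sp

xyz = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])   # tetrahedron
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

def normalized_laplacian(faces: np.ndarray, n: int):
    i, j = faces[:, [0, 1, 2]].ravel(), faces[:, [1, 2, 0]].ravel()
    A = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n, n)).tocsr()
    A = A + A.T
    A.data[:] = 1.0
    d = sp.diags(1.0 / np.sqrt(np.asarray(A.sum(axis=1)).ravel()))
    return sp.eye(n) - d @ A @ d

def rotate_z(p: np.ndarray, a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return p @ np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]]).T

xyz_rot = rotate_z(xyz, np.pi / 3)                  # rigidly transformed mesh
L = normalized_laplacian(faces, len(xyz))
L_rot = normalized_laplacian(faces, len(xyz_rot))   # faces are untouched by the rotation
print(abs(L - L_rot).max())                         # 0.0: phi is unchanged by construction
print(np.abs(xyz - xyz_rot).max())                  # clearly non-zero: ambient inputs change
```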
In addition, transferring to an isometric transformation (M → M iso ) performs comparably with directly training on the isometric transformation (M iso → M iso ) up to small differences due to random weight initialization.\nFigure 12: Robustness of MDF and DPF (Zhuang et al., 2023) with respect to rigid transformations of M. The distribution of fields learned by MDF is invariant with respect to rigid transformations, while DPF (Zhuang et al., 2023) collapses due to learning distributions in ambient space.\nWe also provide transfer results to different discretizations of M. To do so, we train MDF on a low resolution discretization of a manifold and evaluate transfer to a high resolution discretization. We use the GMM dataset and the bunny manifold at 2 different resolutions: 1394 and 5570 vertices, which we get by applying loop subdivision (Loop, 1987) to the lowest resolution mesh. Theoretically, the Laplacian eigenvectors φ are only unique up to sign, which can result in ambiguity when transferring a pre-trained model to a different discretization. Empirically we did not find this to be an issue in our experiments. We hypothesize that transferring MDF from low to high resolution discretizations is largely a function of the number of eigen-functions used to compute φ. This is because eigenfunctions of small eigenvalue represent low-frequency components of the manifold which are more stable across different discretizations. In Fig. 14 we report transfer performance as a function of the number of eigen-functions used to compute φ. We observe an initial regime where more eigenfunctions aid in transferring (up to 64 eigen-functions) followed by a stage where high-frequency eigen-functions negatively impact transfer performance. We additionally run a transfer experiment between low resolution and high resolution discretizations of a different manifold (e.g. a mesh of the letter 'A', show in Fig. 15(b)). In this setting the low resolution mesh contains 1000 vertices and the high resolution mesh contains 4000 vertices. As show in Fig. 16 the results are consistent across manifolds, and a similar trend as in Fig. 14 can be observed. This trend further reinforces our hypothesis that low frequency eigen-functions transfer better across discretization than high frequency ones. Figure 16: Transferring MDF from a less detailed to a more detailed discretization depends on the number of eigen-functions. It's noteworthy that eigen-functions with small eigenvalues have better transferability as they represent the broad, or lowfrequency, information of the manifold." }, { "figure_ref": [ "fig_21", "fig_21" ], "heading": "A.7.3 CONDITIONAL INFERENCE ON PDES", "publication_ref": [ "b44", "b30" ], "table_ref": [], "text": "In this section we evaluate MDF on conditional inference tasks. In particular, we create a dataset of different simulations of the heat diffusion PDE on a manifold. As a result, every sample in our training distribution f 0 ∼ q(f 0 ) is a temporal field f : M × R → R. We create a training set of 10k samples where each sample is a rollout of the PDE for 10 steps given initial conditions. We generate initial conditions by uniformly sampling 3 gaussian heat sources of equivalent magnitude on the manifold and use FEM (Reddy, 2019) to compute the rollout over time. For this experiment we use a version of the bunny mesh with 602 vertices as a manifold M and set the diffusivity term of the PDE to D = 0.78. 
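The FEM solver used to build this dataset is external to MDF; purely for intuition, a much simpler stand-in that produces qualitatively similar temporal fields is an explicit-Euler heat-diffusion rollout driven by a graph Laplacian. The sketch below is this hedged stand-in, not the FEM code used for the dataset, and the helper names are ours.

```python
# Hedged stand-in for generating heat-diffusion training fields on a mesh: explicit Euler
# steps of du/dt = -D_diff * L u with a graph Laplacian L. The actual dataset uses an FEM
# solver; this only illustrates the kind of temporal fields f : M x R -> R used here.
import numpy as np
import scipy.sparse as sp

def random_initial_conditions(n_vertices: int, n_sources: int = 3) -> np.ndarray:
    u0 = np.zeros(n_vertices)
    u0[np.random.choice(n_vertices, n_sources, replace=False)] = 1.0   # equal heat sources
    return u0

def heat_rollout(L: sp.spmatrix, u0: np.ndarray, n_steps: int = 10,
                 dt: float = 0.05, diffusivity: float = 0.78) -> np.ndarray:
    """L: (n, n) sparse Laplacian, u0: (n,) initial heat values.
    Returns (n_steps + 1, n): one field per time step. dt must be small for stability."""
    fields = [u0]
    u = u0
    for _ in range(n_steps):
        u = u - dt * diffusivity * (L @ u)   # explicit Euler step of the heat equation
        fields.append(u)
    return np.stack(fields)
```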
We then train MDF on this training set of temporal fields f : M × R → R, which in practice simply means concatenating a Fourier PE of the time step to the eigen-functions of the LBO.
We tackle the forward problem, where we are given the initial conditions of the PDE and the model is tasked with predicting the forward dynamics on a test set of 60 held-out samples. To perform conditional inference with MDF we follow the recipe in (Lugmayr et al., 2022), which has been successful in the image domain. We show the forward dynamics predicted by FEM (Reddy, 2019) in Fig. 17(a) and by MDF in Fig. 17(b) for the same initial conditions in the held-out set. We see how MDF successfully captures the temporal dynamics, generating a temporal field consistent with the observed initial conditions. Evaluating the full test set, MDF obtains a mean squared error of MSE = 4.77 × 10^-3. In addition, MDF can be used directly for inverse problems (Isakov, 2006). Here we focus on inverting the full dynamics of the PDE, conditioned on sparse observations. Fig. 17(c) shows sparse observations of the FEM rollout, amounting to observing 10% of the field, and Fig. 17(d) shows a probabilistic solution to the inverse PDE problem generated by MDF, which is consistent with the FEM dynamics in Fig. 17(a)." }, { "figure_ref": [ "fig_22", "fig_23" ], "heading": "A.8 ADDITIONAL VISUALIZATIONS", "publication_ref": [ "b25", "b9" ], "table_ref": [], "text": "In this section we provide additional visualizations of experiments in the main paper. We show real and generated fields for the wave manifold (Fig. 18), the ERA5 dataset (Hersbach et al., 2020) (Fig. 19), and the GMM dataset on the bunny (Fig. 20) and human (Bronstein et al., 2008) manifolds (see Fig. 21). In summary, MDF captures the distribution of real fields for different datasets and manifolds, with high fidelity and coverage.
Finally, under ./videos we include the following video visualizations:
• A visualization of training data as well as the sampling process for the MNIST dataset on the wave manifold.
• A visualization of GT and temporal fields generated by MDF for the PDE dataset introduced in Sect. A.7.3.
• A visualization of the sampling process for QM9 molecules for the experiments in Sect. " }, { "figure_ref": [], "heading": "A.5.2 COMPUTE", "publication_ref": [ "b26", "b0", "b0", "b70" ], "table_ref": [], "text": "Each model was trained on a machine with 8 Nvidia A100 GPUs; we trained models for 3 days.
A.6 METRICS
Instead of using FID-type metrics commonly used for generative models over images (Heusel et al., 2017), we must take a different approach for evaluating functions on curved geometries. Our suggestion is to use metrics from the field of generative modeling of point cloud data (Achlioptas et al., 2018), specifically Coverage (COV) and Minimum Matching Distance (MMD).
• Coverage (COV) refers to how well the generated set represents the test set. We first identify the closest neighbour in the generated set for each field in the test set. COV is then calculated as the proportion of fields in the generated set that have corresponding fields in the test set. The distance between fields is determined using the average ℓ2 distance in signal space on the vertices of the mesh, usually in either R^1 or R^3 in our experiments. A high COV score implies that the generated samples adequately represent the real samples.
• Minimum Matching Distance (MMD), on the other hand, provides a measure of how accurately the fields in the test set are represented. This measure is required because matches in the COV metric do not necessarily have to be close. To gauge the fidelity of the generated fields against the real ones, we pair each field in the generated set with its closest neighbour in the test set (MMD), averaging these distances for our final result.
This process also uses the ℓ2 distance in signal space on the mesh vertices. MMD correlates well with the fidelity of the generated set, as it directly depends on the matching distances.
In summary, the COV and MMD metrics are complementary to each other. A model captures the distribution of real fields with good fidelity when MMD is small and COV is large. In particular, at equivalent levels of MMD a higher COV is desired (Achlioptas et al., 2018), and vice versa. This observation correlates well with our results in Tab. 1-2-3 of the main paper, where MDF obtains comparable or better MMD scores than DPF (Zhuang et al., 2023) while greatly improving COV." }, { "figure_ref": [], "heading": "A.7 ADDITIONAL EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section we provide additional empirical results using different network architectures to implement the score field network ϵ_θ. Furthermore, we provide additional experiments on robustness to discretization." } ]
2024-01-20
[ { "authors": "P Achlioptas; P Diamanti; I Mitliagkas; L Guibas", "journal": "", "ref_id": "b0", "title": "Learning representations and generative models for 3d point clouds", "year": "2018" }, { "authors": "Matthias Bauer; Emilien Dupont; Andy Brock; Dan Rosenbaum; Jonathan Schwarz; Hyunjik Kim", "journal": "", "ref_id": "b1", "title": "Spatial functa: Scaling functa to imagenet classification and generation", "year": "2023" }, { "authors": "Mikhail Belkin; Partha Niyogi", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Laplacian eigenmaps and spectral techniques for embedding and clustering", "year": "2001" }, { "authors": "Yoshua Bengio; Jean-Françcois Paiement; Pascal Vincent; Olivier Delalleau; Nicolas Roux; Marie Ouimet", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Out-of-sample extensions for lle, isomap, mds, eigenmaps, and spectral clustering", "year": "2003" }, { "authors": "Yoshua Bengio; Pascal Vincent; Jean-François Paiement; Olivier Delalleau; Marie Ouimet; Nicolas Le Roux", "journal": "Citeseer", "ref_id": "b4", "title": "Spectral clustering and kernel PCA are learning eigenfunctions", "year": "2003" }, { "authors": " Hj Bhabha", "journal": "Reviews of Modern Physics", "ref_id": "b5", "title": "Relativistic wave equations for the elementary particles", "year": "1945" }, { "authors": "Sam Bond; - Taylor; Chris G Willcocks", "journal": "CoRR", "ref_id": "b6", "title": "inf-diff: Infinite resolution diffusion with subsampled mollified states", "year": "2023" }, { "authors": "Viacheslav Borovitskiy; Alexander Terenin; Peter Mostowsky", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "Matérn gaussian processes on riemannian manifolds", "year": "2020" }, { "authors": "V Bortoli; E Mathieu; M Hutchinson; J Thornton; Y Teh; A Doucet", "journal": "", "ref_id": "b8", "title": "Riemannian score-based generative modeling", "year": "2022" }, { "authors": " Alexander M Bronstein; Ron Michael M Bronstein; Kimmel", "journal": "Springer Science & Business Media", "ref_id": "b9", "title": "Numerical geometry of non-rigid shapes", "year": "2008" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b10", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "T Q Ricky; Yaron Chen; Lipman", "journal": "", "ref_id": "b11", "title": "Riemannian flow matching on general geometries", "year": "2023" }, { "authors": "Tri Dao; Dan Fu; Stefano Ermon; Atri Rudra; Christopher Ré", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Flashattention: Fast and memoryefficient exact attention with io-awareness", "year": "2022" }, { "authors": "Y Du; K Collins; J Tenenbaum; V Sitzmann", "journal": "", "ref_id": "b13", "title": "Learning signal-agnostic manifolds of neural fields", "year": "2021" }, { "authors": "E Dupont; H Kim; S Eslami; D Rezende; D Rosenbaum", "journal": "", "ref_id": "b14", "title": "From data to functa: Your data point is a function and you should treat it like one", "year": "2022" }, { "authors": "E Dupont; Y Teh; A Doucet", "journal": "", "ref_id": "b15", "title": "Generative models as distributions of functions", "year": "2022" }, { "authors": "V Dutordoir; A Saul; Z Ghahramani; F Simpson", "journal": 
"", "ref_id": "b16", "title": "Neural diffusion processes", "year": "2022" }, { "authors": "Vijay Prakash Dwivedi; K Chaitanya; Thomas Joshi; Yoshua Laurent; Xavier Bengio; ; B Bresson; Everett", "journal": "Springer", "ref_id": "b17", "title": "Benchmarking graph neural networks", "year": "2013" }, { "authors": "Octavian Ganea; Lagnajit Pattanaik; Connor Coley; Regina Barzilay; Klavs Jensen; William Green; Tommi Jaakkola", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b18", "title": "Geomol: Torsional geometric generation of molecular 3d conformer ensembles", "year": "2021" }, { "authors": "M Garnelo; J Schwarz; D Rosenbaum; F Viola; D Rezende; Y Eslami; Teh", "journal": "", "ref_id": "b19", "title": "Neural processes. ICML workshop", "year": "2018" }, { "authors": "M Gemici; D Rezende; S Mohamed", "journal": "", "ref_id": "b20", "title": "Normalizing flows on riemannian manifolds", "year": "2016" }, { "authors": "Daniele Grattarola; Pierre Vandergheynst", "journal": "", "ref_id": "b21", "title": "Generalised implicit neural representations", "year": "2022" }, { "authors": "D Ha; A Dai; Q Le; Hypernetworks", "journal": "", "ref_id": "b22", "title": "ICLR", "year": "2017" }, { "authors": "Xiaoxin He; Bryan Hooi; Thomas Laurent; Adam Perold; Yann Lecun; Xavier Bresson", "journal": "", "ref_id": "b23", "title": "A generalization of vit/mlp-mixer to graphs", "year": "2022" }, { "authors": " Hernandez; Roman; Tomas; Vidal", "journal": "", "ref_id": "b24", "title": "A survey of software for sparse eigenvalue problems", "year": "2009" }, { "authors": "Hans Hersbach; Bill Bell; Paul Berrisford; Shoji Hirahara; András Horányi; Joaquín Muñoz-Sabater; Julien Nicolas; Carole Peubey; Raluca Radu; Dinand Schepers", "journal": "Quarterly Journal of the Royal Meteorological Society", "ref_id": "b25", "title": "The era5 global reanalysis", "year": "2020" }, { "authors": "M Heusel; H Ramsauer; T Unterthiner; B Nessler; S Hochreiter", "journal": "", "ref_id": "b26", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "", "ref_id": "b27", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "J Ho; T Salimans; A Gritsenko; W Chan; M Norouzi; D J Fleet", "journal": "", "ref_id": "b28", "title": "Video diffusion models", "year": "2022" }, { "authors": "Michael Hutchinson; Alexander Terenin; Viacheslav Borovitskiy; So Takao; Yee Teh; Marc Deisenroth", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Vector-valued gaussian processes on riemannian manifolds via gauge independent projected kernels", "year": "2021" }, { "authors": "Victor Isakov", "journal": "Springer", "ref_id": "b30", "title": "Inverse problems for partial differential equations", "year": "2006" }, { "authors": "A Jaegle; S Borgeaud; J Alayrac", "journal": "", "ref_id": "b31", "title": "Perceiver io: A general architecture for structured inputs & outputs", "year": "2022" }, { "authors": "N Jain; A Olmo; S Sengupta; L Manikonda; S Kambhampati", "journal": "CORR", "ref_id": "b32", "title": "Imperfect imaganation: Implications of gans exacerbating biases on facial data augmentation and snapchat selfie lenses", "year": "2020" }, { "authors": "Bowen Jing; Gabriele Corso; Jeffrey Chang; Regina Barzilay; Tommi Jaakkola", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b33", "title": "Torsional diffusion for molecular 
conformer generation", "year": "2022" }, { "authors": "T Karras; T Aila; S Laine; J Lehtinen", "journal": "", "ref_id": "b34", "title": "Progressive growing of gans for improved quality, stability, and variation", "year": "2018" }, { "authors": "H Kim; A Mnih; J Schwarz; M Garnelo; A Eslami; D Rosenbaum; O Vinyals; Y Teh", "journal": "ICLR", "ref_id": "b35", "title": "Attentive neural processes", "year": "2019" }, { "authors": "D Kingma; J Ba; Adam", "journal": "", "ref_id": "b36", "title": "A method for stochastic optimization", "year": "2015" }, { "authors": "D Kingma; M Welling", "journal": "", "ref_id": "b37", "title": "Auto-encoding variational bayes", "year": "2014" }, { "authors": "N Kodali; J Abernethy; J Hays; Z Kira", "journal": "", "ref_id": "b38", "title": "On convergence and stability of gans", "year": "2017" }, { "authors": "Lukas Koestler; Daniel Grittner; Michael Moeller; Daniel Cremers; Zorah Lähner", "journal": "Springer", "ref_id": "b39", "title": "Intrinsic neural fields: Learning functions on manifolds", "year": "2022" }, { "authors": "Y Lecun; L Bottou; Y Bengio; P Haffner", "journal": "", "ref_id": "b40", "title": "Gradient-based learning applied to document recognition", "year": "1998" }, { "authors": "Bruno Lévy", "journal": "IEEE", "ref_id": "b41", "title": "Laplace-beltrami eigenfunctions towards an algorithm that\" understands\" geometry", "year": "2006" }, { "authors": "Finn Lindgren; Håvard Rue; Johan Lindström", "journal": "Journal of the Royal Statistical Society: Series B (Statistical Methodology)", "ref_id": "b42", "title": "An explicit link between gaussian fields and gaussian markov random fields: the stochastic partial differential equation approach", "year": "2011" }, { "authors": "C Loop", "journal": "", "ref_id": "b43", "title": "Smooth Subdivision Surfaces based on triangles", "year": "1987" }, { "authors": "Andreas Lugmayr; Martin Danelljan; Andres Romero; Fisher Yu; Radu Timofte; Luc Van Gool", "journal": "", "ref_id": "b44", "title": "Repaint: Inpainting using denoising diffusion probabilistic models", "year": "2022" }, { "authors": "Sohir Maskey; Ali Parviz; Maximilian Thiessen; Hannes Stärk; Ylli Sadikaj; Haggai Maron", "journal": "", "ref_id": "b45", "title": "Generalized laplacian positional encoding for graph representation learning", "year": "2022" }, { "authors": "Subbaramiah Minakshisundaram; Åke Pleijel", "journal": "Canadian Journal of Mathematics", "ref_id": "b46", "title": "Some properties of the eigenfunctions of the laplace-operator on riemannian manifolds", "year": "1949" }, { "authors": "Y Mirsky; W Lee", "journal": "CSUR", "ref_id": "b47", "title": "The creation and detection of deepfakes: A survey", "year": "2021" }, { "authors": "A Nichol; P ", "journal": "", "ref_id": "b48", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "J Park; P Florence; J Straub; R Newcombe; S Lovegrove", "journal": "", "ref_id": "b49", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Raghunathan Ramakrishnan; O Pavlo; Matthias Dral; O Rupp; Von Anatole; Lilienfeld", "journal": "Scientific data", "ref_id": "b50", "title": "Quantum chemistry structures and properties of 134 kilo molecules", "year": "2014" }, { "authors": "Junuthula Narasimha; Reddy ", "journal": "McGraw-Hill Education", "ref_id": "b51", "title": "Introduction to the finite element method", "year": "2019" }, { "authors": "N Rostamzadeh; E Denton; L Petrini", "journal": 
"", "ref_id": "b52", "title": "Ethics and creativity in computer vision", "year": "2021" }, { "authors": "N Rozen; A Grover; M Nickel; Y Lipman", "journal": "", "ref_id": "b53", "title": "Moser flow: Divergence-based generative modeling on manifolds", "year": "2021" }, { "authors": "Lars Ruddigkeit; Ruud Van Deursen; C Lorenz; Jean-Louis Blum; Reymond", "journal": "Journal of chemical information and modeling", "ref_id": "b54", "title": "Enumeration of 166 billion organic small molecules in the chemical universe database gdb-17", "year": "2012" }, { "authors": "M Raif; Rustamov", "journal": "", "ref_id": "b55", "title": "Laplace-beltrami eigenfunctions for deformation invariant shape representation", "year": "2007" }, { "authors": "Nicholas Sharp; Keenan Crane", "journal": "Computer Graphics Forum", "ref_id": "b56", "title": "A laplacian for nonmanifold triangle meshes", "year": "2020" }, { "authors": "Nicholas Sharp; Souhaib Attaiki; Keenan Crane; Maks Ovsjanikov", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b57", "title": "Diffusionnet: Discretization agnostic learning on surfaces", "year": "2022" }, { "authors": "J Song; C Meng; S Ermon; Iclr", "journal": "ICLR", "ref_id": "b58", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "C Sullivan; Alexander Kaszynski", "journal": "Journal of Open Source Software", "ref_id": "b59", "title": "Pyvista: 3d plotting and mesh analysis through a streamlined interface for the visualization toolkit (vtk)", "year": "2019" }, { "authors": "Robert W Sumner; Jovan Popovic", "journal": "ACM Transactions on Graphics", "ref_id": "b60", "title": "Deformation Transfer for Triangle Meshes", "year": "2004" }, { "authors": "P Tinsley; A Czajka; P Flynn", "journal": "", "ref_id": "b61", "title": "This face does not exist... but it might be yours! 
identity leakage in generative models", "year": "2021" }, { "authors": "Neil Ilya O Tolstikhin; Alexander Houlsby; Lucas Kolesnikov; Xiaohua Beyer; Thomas Zhai; Jessica Unterthiner; Andreas Yung; Daniel Steiner; Jakob Keysers; Uszkoreit", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b62", "title": "Mlp-mixer: An all-mlp architecture for vision", "year": "2021" }, { "authors": "Bruno Vallet; Bruno Lévy", "journal": "Computer Graphics Forum", "ref_id": "b63", "title": "Spectral geometry processing with manifold harmonics", "year": "2008" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b64", "title": "Attention is all you need", "year": "2017" }, { "authors": "M Welling; Y Teh", "journal": "", "ref_id": "b65", "title": "Bayesian learning via stochastic gradient langevin dynamics", "year": "2011" }, { "authors": "Y Xie; T Takikawa; S Saito; O Litany; S Yan; N Khan; F Tombari; J Tompkin; V Sitzmann; S Sridhar", "journal": "Computer Graphics Forum", "ref_id": "b66", "title": "Neural fields in visual computing and beyond", "year": "2022" }, { "authors": "Minkai Xu; Shitong Luo; Yoshua Bengio; Jian Peng; Jian Tang", "journal": "", "ref_id": "b67", "title": "Learning neural generative dynamics for molecular conformation generation", "year": "2021" }, { "authors": "Minkai Xu; Lantao Yu; Yang Song; Chence Shi; Stefano Ermon; Jian Tang", "journal": "", "ref_id": "b68", "title": "Geodiff: A geometric diffusion model for molecular conformation generation", "year": "2022" }, { "authors": "S Zhai; W Talbott; N Srivastava; C Huang; H Goh; R Zhang; J Susskind", "journal": "", "ref_id": "b69", "title": "An attention free transformer", "year": "2022" }, { "authors": "Peiye Zhuang; Samira Abnar; Jiatao Gu; Alex Schwing; Josh Susskind; Miguel Angel Bautista", "journal": "", "ref_id": "b70", "title": "Diffusion probabilistic fields", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 399.91, 715.56, 104.09, 17.14 ], "formula_id": "formula_0", "formula_text": "x t = √ ᾱt x 0 + √ 1 -ᾱt ϵ results in the \"simple\" DDPM loss: E t∼[0,T ],x0∼q(x0),ϵ∼N (0,I) ∥ϵ -ϵ θ ( √ ᾱt x 0 + √ 1 -ᾱt ϵ, t)∥ 2" }, { "formula_coordinates": [ 4, 211.7, 148.79, 168.56, 14.26 ], "formula_id": "formula_1", "formula_text": "x t-1 = 1 √ αt x t -1-αt √ 1-αt ϵ θ (x t , t) + z," }, { "formula_coordinates": [ 4, 325.01, 604.31, 73.53, 24.35 ], "formula_id": "formula_2", "formula_text": "f = ∞ i=1 ⟨f, φ i ⟩φ i ." }, { "formula_coordinates": [ 5, 108, 366.4, 396, 27.09 ], "formula_id": "formula_3", "formula_text": "(x) = √ n[φ 1 (x), φ 2 (x), . . . , φ k (x)] ∈ R k to denote the normalized eigen-function representation of a point x ∈ M." }, { "formula_coordinates": [ 5, 407.51, 435.37, 98.24, 9.99 ], "formula_id": "formula_4", "formula_text": "C 0 = [φ(X c ), Y (c,0) ]." }, { "formula_coordinates": [ 5, 205.26, 490.36, 299.41, 17.94 ], "formula_id": "formula_5", "formula_text": "C t = [φ(X c ), Y (c,t) = √ ᾱt Y (c,0) + √ 1 -ᾱt ϵ c ],(1)" }, { "formula_coordinates": [ 5, 315.97, 566.3, 99.24, 9.99 ], "formula_id": "formula_6", "formula_text": "Q 0 = [φ(X q ), Y (q,0) ]." }, { "formula_coordinates": [ 5, 204.41, 598.33, 300.25, 17.94 ], "formula_id": "formula_7", "formula_text": "Q t = [φ(X q ), Y (q,t) = √ ᾱt Y (q,0) + √ 1 -ᾱt ϵ q ],(2)" }, { "formula_coordinates": [ 6, 111.78, 154.8, 166.75, 57.52 ], "formula_id": "formula_8", "formula_text": "6: Ct = [φ(Xc), √ ᾱtY(c,0) + √ 1 -ᾱtϵc] 7: Qt = [φ(Xq), √ ᾱtY(q,0) + √ 1 -ᾱtϵq] 8: Take gradient descent step on ∇ θ ∥ϵq -ϵ θ (Ct, t, Qt)∥ 2 9: until converged" }, { "formula_coordinates": [ 6, 111.78, 291.37, 206.1, 82.68 ], "formula_id": "formula_9", "formula_text": "1: ∆Mφi = φiλi // LBO eigen-decomposition 2: QT = [φ(Xq), Y (q,t) ∼ N (0q, Iq)] 3: CT ⊆ QT ▷ Random subset 4: for t = T, . . . , 1 do 5: z ∼ N (0, I) if t > 1, else z = 0 6: Y (q,t-1) = 1 √ α t Y (q,t) - 1-α t √ 1-ᾱt ϵ θ (C t , t, Q t ) + σ t z 7: Qt-1 = [Mq, Y (q,t-1) ] 8: Ct-1 ⊆ Qt-1" }, { "formula_coordinates": [ 17, 254.55, 367.51, 250.12, 12.33 ], "formula_id": "formula_10", "formula_text": "L = D -1 2 (D -A)D -1 2 ,(3)" } ]
MANIFOLD DIFFUSION FIELDS
We present Manifold Diffusion Fields (MDF), an approach that unlocks learning of diffusion models of data in general non-Euclidean geometries. Leveraging insights from spectral geometry analysis, we define an intrinsic coordinate system on the manifold via the eigen-functions of the Laplace-Beltrami Operator. MDF represents functions using an explicit parametrization formed by a set of multiple input-output pairs. Our approach allows to sample continuous functions on manifolds and is invariant with respect to rigid and isometric transformations of the manifold. In addition, we show that MDF generalizes to the case where the training set contains functions on different manifolds. Empirical results on multiple datasets and manifolds including challenging scientific problems like weather prediction or molecular conformation show that MDF can capture distributions of such functions with better diversity and fidelity than previous approaches.
Ahmed A Elhag; Yuyang Wang; Joshua M Susskind; Miguel Angel; Bautista Apple
[ { "figure_caption": "Fig. 2(b)(c) show the training setting for the two problems, which are related but not directly comparable. MDF learns a generative model over functions defined on manifolds, e.g. a probability density over functions f : M → Y that map points in the manifold M to a signal space Y. In contrast, the goal in Riemannian generative modeling is to learn a probability density from an observed set of points living in a Riemannian manifold M. For example, in the case of the bunny, shown in Fig.2(c), a Riemannian generative model learns a distribution of points x ∈ M on the manifold.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Left: Fourier PE of a point x in 2D Euclidean space. Generative models of functions in ambient space (Zhuang et al., 2023; Dupont et al., 2022b;a; Du et al., 2021) use this representation to encode a function's input. Right: MDF uses the eigen-functions φ i of the Laplace-Beltrami Operator (LBO) ∆ M evaluated at a point x ∈ M.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Left: MDF training algorithm. Right: Visual depiction of a training iteration for a field on the bunny manifold M. See Sect. 4 for definitions.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Left: MDF sampling algorithm. Right: Visual depiction of the sampling process for a field on the bunny manifold.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: CelebA-HQ samples generated by MDF and GASP (Dupont et al., 2022b) on the bunny.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "COV↑ MMD ↓ COV↑ MMD↓ COV↑ MMD ↓ Graph 0.575 0.00108 0.551 0.07205 0.346 0.11101 Cotangent 0.581 0.00384 0.568 0.06890 0.374 0.12440 Pointcloud 0.588 0.00417 0.571 0.06909 0.337 0.12297", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Visualizing top-2 eigenvectors on the bunny manifold for Graph, Cotangent and Pointcloud (Sharp & Crane, 2020) Laplacians.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "A. 44.1 DATA Unless explicitly described in the main paper, we report experiments on 5 different manifolds which we show in Fig.9. This manifolds are: a parametric sine wave Fig.9(a) computed using(Sullivan & Kaszynski, 2019) containing 1024 vertices. The Stanford bunny with 5299 vertices Fig.9(b). A human body mesh from the Tosca dataset(Bronstein et al., 2008) containing 4823 vertices, show in Fig.9(c). A cat mesh and its reposed version from(Sumner & Popovic, 2004), show in Fig.9(d)and Fig.9(e), respectively containing 7207 vertices. To compute the mean curvature values |K| for each mesh reported in the main paper we compute the absolute value of the average mean curvature, which we obtain using(Sullivan & Kaszynski, 2019).", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Manifolds used in the different experiments throughout the paper. (a) Wave. (b) Bunny. (c) Human (Bronstein et al., 2008). (d) Cat(Sumner & Popovic, 2004). 
(e) Cat (re-posed)(Sumner & Popovic, 2004).", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Interaction between context and query pairs in the PerceiverIO architecture. Context pairs C t attend to a latent array of learnable parameters via cross attention. The latent array then goes through several self attention blocks. Finally, the query pairs Q t cross-attend to the latent array to produce the final noise prediction εq .", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "13(a). During training, the model only has access to fields defined on a fixed manifold M (see Fig.13(a)). We then evaluate the model on either a rigid M rigid (shown in Fig.13(b)) or isometric M iso (Fig.13(c)) transformation of M. Qualitatively comparing the transfer results of MDF with DPF(Zhuang et al., 2023) in Fig.13(d)-(e), we see a marked difference in fidelity and coverage of the distribution.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Transf. Enc-Dec(Vaswani et al., 2017). (c) MLP-mixer (Tolstikhin et al., 2021).(d) PerceiverIO(Jaegle et al., 2022).", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Qualitative comparison of different architectures to implement the score field model ϵ θ .", "figure_data": "", "figure_id": "fig_12", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: (a) Training set composed of different fields f : M → R where 2 gaussians are randomly placed in the tail and the right paw of the cat. Fields generated by transferring the MDF pre-trained on M to (b) a rigid and (c) an isometric transformation of M. Fields generated by transferring the DPF (Zhuang et al., 2023) pre-trained on M to (d) a rigid and (e) an isometric transformation of M.", "figure_data": "", "figure_id": "fig_13", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure14: Transferring MDF from low to high resolution discretizations as a function of the number of eigenfunctions. We observe that eigen-functions of small eigenvalue transfer better since they encode coarse (i.e. lowfrequency) information of the manifold.", "figure_data": "", "figure_id": "fig_14", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: (a) Low and high resolution discretizations of the Stanford bunny manifold used for the transfer experiments in the main paper (Fig. 14). (b) Low and high resolution discretizations of the letter 'A' manifold, used for the experiments in Fig. 16.", "figure_data": "", "figure_id": "fig_15", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "(a) and MDF Fig.17", "figure_data": "", "figure_id": "fig_16", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(", "figure_data": "", "figure_id": "fig_17", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(c) shows sparse observations of the FEM rollout, amounting to observing 10% of the field. 
Fig.17", "figure_data": "", "figure_id": "fig_18", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(d) shows a probabilistic solution to the inverse PDE problem generated by MDF which is consistent with the FEM dynamics in Fig.17", "figure_data": "", "figure_id": "fig_19", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(a).", "figure_data": "", "figure_id": "fig_20", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: (a) Forward prediction of the heat diffusion PDE computed with FEM (Reddy, 2019). (b) Conditionally sampled field generated by MDF. (c) Sparse observations of the FEM solution for inverse prediction. (d) Conditionally sampled inverse solution generated by MDF.", "figure_data": "", "figure_id": "fig_21", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 :18Figure 18: Real and generated samples for MNIST (LeCun et al., 1998) digits on the wave manifold.", "figure_data": "", "figure_id": "fig_22", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19 :19Figure 19: Real and generated samples for the ERA5 (Hersbach et al., 2020) climate dataset.", "figure_data": "", "figure_id": "fig_23", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Figure 20 :Figure 21 :2021Figure 20: Real and generated samples for the GMM dataset on the Stanford bunny manifold.", "figure_data": "", "figure_id": "fig_24", "figure_label": "2021", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Results on the bunny manifold (mean curvature |K| = 7.388). As the mean curvature increases the boost of MDF over DPF(Zhuang et al., 2023) becomes larger across all datasets.", "figure_data": "GMMMNISTCelebA-HQGMMMNISTCelebA-HQCOV↑ MMD ↓ COV↑ MMD↓ COV↑ MMD ↓COV↑ MMD ↓ COV↑ MMD↓ COV↑ MMD ↓MDF 0.444 0.01405 0.564 0.0954 0.354 0.11601MDF 0.551 0.00100 0.529 0.08895 0.346 0.14162DPF 0.352 0.01339 0.552 0.09633 0.361 0.12288DPF 0.479 0.00112 0.472 0.09537 0.318 0.14502Table 1: COV and MMD metrics for differentdatasets on the wave manifold (mean curvature|K| = 0.004).GMMMNISTCelebA-HQCOV↑ MMD ↓ COV↑ MMD↓ COV↑ MMD ↓MDF 0.575 0.00108 0.551 0.07205 0.346 0.11101DPF 0.472 0.00120 0.454 0.11525 0.313 0.11530", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Human manifold (mean curvature|K| = 25.966). 
At high mean curvatures MDF consistently outperforms DPF(Zhuang et al., 2023).", "figure_data": "M → MM → MisoMiso → MisoCOV↑ MMD↓ COV↑ MMD ↓ COV↑ MMD ↓MDF 0.595 0.00177 0.595 0.00177 0.582 0.00191DPF 0.547 0.00189 0.003 0.08813 0.306 0.00742", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Training MDF on a manifold M and evaluating it on an isometric transformation M iso does not impact performance, while being on par with training directly on the transformed manifold.", "figure_data": "", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "MDF outperforms GASP on ERA5", "figure_data": "ERA5CelebA-HQ on MCOV↑MMD ↓COV↑MMD↓MDF0.3470.004980.3460.11101GASP0.1140.009640.3090.38979", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Performance of MDF using different Laplacians for different datastets on the bunny manifold, where we see that MDF is relatively robust and can be readily deployed on different settings depending on the manifold parametrization.", "figure_data": "", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Molecule conformer generation results for GEOM-QM9 dataset. MDF obtains comparable or better results than the state-of-the-art Torsional Diffusion(Jing et al., 2022), without making any explicit assumptions about the geometric structure of molecules (i.e. without modeling torsional angles). In addition, we show how performance of MDF changes as a function of the number of eigen-functions k. Interestingly, with as few as k = 2 eigen-functions MDF is able to generate consistent accurate conformations.", "figure_data": "RecallPrecisionCoverage ↑AMR ↓Coverage ↑AMR ↓mean median mean median mean median mean medianCGCF69.47 96.15 0.425 0.374 38.20 33.33 0.711 0.695GeoDiff76.50 100.00 0.297 0.229 50.00 33.50 0.524 0.510GeoMol91.50 100.00 0.225 0.193 87.60 100.00 0.270 0.241Torsional Diff. 92.80 100.00 0.178 0.147 92.70 100.00 0.221 0.195MDF (ours)95.30 100.00 0.124 0.074 91.50 100.00 0.169 0.101MDF (k = 16) 94.87 100.00 0.139 0.093 87.54 100.00 0.220 0.151MDF (k = 8) 94.28 100.00 0.162 0.109 84.27 100.00 0.261 0.208MDF (k = 4) 94.57 100.00 0.145 0.093 86.83 100.00 0.225 0.151MDF (k = 2) 93.15 100.00 0.152 0.088 86.97 100.00 0.211 0.138", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Brown et al., 2020)", "Explanation": "The cited work on text provides a method for approximating probability distributions from finite observational datasets, which the citing paper adopts in its research on diffusion generative models."}, {"Category": "Methodological Basis", "Citation": "(Ho et al., 2020)", "Explanation": "The cited work on diffusion generative models provides a method for stable optimization goals and fewer training anomalies, which the citing paper leverages in its research on the potential of these models in scientific and engineering disciplines."}, {"Category": "Methodological Basis", "Citation": "(Ho et al., 2022)", "Explanation": "The cited work on video provides a method for utilizing diffusion generative models in domains with Euclidean spaces, which the citing paper builds upon in its research on scientific problems involving reasoning about continuous functions on curved spaces."}, {"Category": "Methodological Basis", "Citation": "(Dupont et al., 2022a)", "Explanation": "The cited work by Dupont et al. provides a latent parametrization approach for learning generative models of continuous functions, which the citing paper builds upon in the context of learning generative models of functions on Riemannian manifolds."}, {"Category": "Methodological Basis", "Citation": "(Du et al., 2021)", "Explanation": "The cited work by Du et al. also contributes to the field of learning generative models of continuous functions, providing a method for solving PDEs on curved surfaces that the citing paper may build upon in the context of learning generative models of functions on Riemannian manifolds."}, {"Category": "Extension or Continuation", "Citation": "(Bond-Taylor & Willcocks, 2023)", "Explanation": "The cited work by Bond-Taylor and Willcocks extends the use of diffusion models in the context of learning generative models of functions on Riemannian manifolds, which the citing paper may build upon in the context of learning generative models of functions on curved surfaces."}, {"Category": "Extension or Continuation", "Citation": "(Zhuang et al., 2023)", "Explanation": "The cited work by Zhuang et al. also contributes to the field of learning generative models of functions on Riemannian manifolds, providing a method for solving PDEs on curved surfaces that the citing paper may build upon in the context of learning generative models of functions on curved surfaces."}, {"Category": "Supporting Evidence", "Citation": "(Hersbach et al., 2020)", "Explanation": "The cited work, ERA5 climate dataset, is used as a data source in the citing paper to provide real samples of fields defined on the 2D sphere."}, {"Category": "Supporting Evidence", "Citation": "(Ruddigkeit et al., 2012)", "Explanation": "The cited work provides the GEOM-QM9 dataset, which is used in the citing paper for molecular conformer generation. This dataset serves as a foundational element for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Zhuang et al., 2023)", "Explanation": "The cited work by Zhuang et al. provides a methodological basis for the approach taken in the citing paper, as it extends recent efforts in generative models for continuous functions in Euclidean space to functions defined over manifolds."}, {"Category": "Extension or Continuation", "Citation": "(Dupont et al., 2022b)", "Explanation": "The cited work by Dupont et al. 
is extended in the citing paper, as it leverages a GAN whose generator produces field data to learn distributions over fields in Euclidean space."}, {"Category": "Data Source", "Citation": "(Du et al., 2021)", "Explanation": "The cited work by Du et al. is used as a data source in the citing paper, as it is shown in Fig. 2(a) to denote a parameterization of a single function in Euclidean space using a neural network."}, {"Category": "Methodological Basis", "Citation": "(Ha et al., 2017)", "Explanation": "The cited work on hyper-networks provides a methodological basis for the two-stage approaches in the citing paper, as they adopt a latent field parameterization to learn functions via a hyper-network."}, {"Category": "Extension or Continuation", "Citation": "(Koestler et al., 2022;Grattarola & Vandergheynst, 2022)", "Explanation": "The cited works on intrinsic coordinate systems are extended in the citing paper to the problem of learning a probabilistic model over multiple functions defined on a manifold."}, {"Category": "Data Source", "Citation": "(Dupont et al., 2022a;Du et al., 2021)", "Explanation": "The cited works on two-stage approaches provide the data source for the generative models of fields in Euclidean space discussed in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Park et al., 2019)", "Explanation": "The cited work on latent field parameterization provides a methodological basis for the two-stage approaches in the citing paper, as they adopt a hyper-network to learn functions."}, {"Category": "Methodological Basis", "Citation": "(b)", "Explanation": "The cited work introduces the concept of learning a distribution p \u03b8 from a collection of fields on a general Riemannian manifold, which the citing paper adopts in their own research to model the data in a more general and flexible way."}, {"Category": "Methodological Basis", "Citation": "(c)", "Explanation": "The cited work presents the idea of learning a parametric distribution p \u03b8 from empirical observations of points on a Riemannian manifold, which the citing paper uses to model the data in a more specific and practical way."}, {"Category": "Extension or Continuation", "Citation": "Maskey et al., 2022;Sharp et al., 2022;He et al., 2022;Dwivedi et al., 2020", "Explanation": "The cited works extend the use of intrinsic coordinate systems in Graph Transformers by incorporating eigenvectors of the Graph Laplacian to replace standard positional embeddings, which the citing paper builds upon in their own research to further improve the performance of the model."}, {"Category": "Methodological Basis", "Citation": "(Bortoli et al., 2022)", "Explanation": "The cited work by Bortoli et al. (2022) provides the foundational basis for the learning problem tackled in the citing paper, which involves lifting the Riemannian generative modeling problem to function spaces."}, {"Category": "Methodological Basis", "Citation": "(Gemici et al., 2016)", "Explanation": "The work by Gemici et al. (2016) contributes to the methodological basis of the citing paper by providing insights into the learning problem of graph/node classification and regression in a supervised learning setting."}, {"Category": "Methodological Basis", "Citation": "(Rozen et al., 2021)", "Explanation": "The work by Rozen et al. (2021) further builds upon the methodological basis established in the cited works by Bortoli et al. (2022) and Gemici et al. 
(2016), by providing a new perspective on the learning problem in the context of generative modeling."}, {"Category": "Methodological Basis", "Citation": "(Chen & Lipman, 2023)", "Explanation": "The work by Chen and Lipman (2023) contributes to the methodological basis of the citing paper by providing insights into the learning problem of generative modeling in function spaces."}, {"Category": "Extension or Continuation", "Citation": "(Garnelo et al., 2018)", "Explanation": "The cited work by Garnelo et al. (2018) extends the research on learning distributions over functions by providing a new approach to the problem of neural processes."}, {"Category": "Extension or Continuation", "Citation": "(Kim et al., 2019)", "Explanation": "The work by Kim et al. (2019) further extends the research on learning distributions over functions by providing insights into the optimization of an ELBO in the context of neural processes."}, {"Category": "Extension or Continuation", "Citation": "(Dutordoir et al., 2022)", "Explanation": "The cited work by Dutordoir et al. (2022) extends the research on learning distributions over functions by providing a new perspective on the optimization of neural processes in function spaces."}, {"Category": "Data Source", "Citation": "(Borovitskiy et al., 2020)", "Explanation": "The work by Borovitskiy et al. (2020) serves as a data source for the research on Gaussian processes on Riemannian manifolds, which is relevant to the citing paper in the context of function spaces."}, {"Category": "Data Source", "Citation": "(Hutchinson et al., 2021)", "Explanation": "The work by Hutchinson et al. (2021) also serves as a data source for the research on Gaussian processes on Riemannian manifolds, which is relevant to the citing paper in the context of function spaces."}, {"Category": "Methodological Basis", "Citation": "(Ho et al., 2020)", "Explanation": "The cited work introduces a training recipe that the citing paper adopts in their research on Denoising Diffusion Probabilistic Models, which includes the use of sampling in closed form and learning a sequence of denoising networks with tied weights."}, {"Category": "Methodological Basis", "Citation": "(Ho et al., 2020)", "Explanation": "The cited work by Ho et al. (2020) is used to provide a method for ancestral sampling, which the citing paper adopts to compute the data distribution p \u03b8 (x 0 ) at inference time."}, {"Category": "Methodological Basis", "Citation": "(Welling & Teh, 2011)", "Explanation": "The cited work by Welling and Teh (2011) provides a method for sampling via Langevin dynamics, which the citing paper adopts in the process of sampling x t-1 in DDPMs."}, {"Category": "Methodological Basis", "Citation": "(Bortoli et al., 2022)", "Explanation": "The cited work by Bortoli et al. provides the necessary machinery to learn distribution from a training set of points living on Riemannian manifolds, which the citing paper adopts in its research on developing generative models."}, {"Category": "Methodological Basis", "Citation": "(Gemici et al., 2016)", "Explanation": "The work by Gemici et al. contributes to the development of machinery for learning distribution from a training set of points on Riemannian manifolds, which the citing paper builds upon in its research on Riemannian generative models."}, {"Category": "Methodological Basis", "Citation": "(Rozen et al., 2021)", "Explanation": "The work by Rozen et al. 
provides a method for learning distribution from a training set of points on Riemannian manifolds, which the citing paper uses in its research on developing Riemannian generative models."}, {"Category": "Methodological Basis", "Citation": "(Chen & Lipman, 2023)", "Explanation": "The work by Chen and Lipman offers a method for learning distribution from a training set of points on Riemannian manifolds, which the citing paper leverages in its research on developing Riemannian generative models."}, {"Category": "Methodological Basis", "Citation": "(L\u00e9vy, 2006)", "Explanation": "The cited work by L\u00e9vy provides the basis for the interpretation of the eigen-functions of \u2206 M as a Fourier-like function basis on the manifold, which is used in the MDF method to define a positional embedding for points on the manifold."}, {"Category": "Methodological Basis", "Citation": "(Vallet & L\u00e9vy, 2008)", "Explanation": "The cited work by Vallet and L\u00e9vy further elaborates on the interpretation of the eigen-functions of \u2206 M as a Fourier-like function basis on the manifold, which is a key component in the MDF method for defining a positional embedding for points on the manifold."}, {"Category": "Extension or Continuation", "Citation": "(Xie et al., 2022)", "Explanation": "The cited work by Xie et al. extends the use of the eigen-functions of \u2206 M in the context of implicit representations, which is a related area of research that builds upon the foundational work in the MDF method for defining a positional embedding for points on the manifold."}, {"Category": "Methodological Basis", "Citation": "(Zhuang et al., 2023)", "Explanation": "The cited work provides a recipe for learning a diffusion generative model over fields on Riemannian manifolds, which the citing paper adopts in their research to address the problem of learning a similar model."}, {"Category": "Methodological Basis", "Citation": "(Zhuang et al., 2023)", "Explanation": "The cited work by Zhuang et al. provides the explicit field parametrization method that the citing paper adopts in their research on field characterization."}, {"Category": "Methodological Basis", "Citation": "(Zhuang et al., 2023)", "Explanation": "The cited work provides a query set formulation for the score network, which the citing paper adopts in the context of the field parametrization."}, {"Category": "Methodological Basis", "Citation": "(Zhuang et al., 2023)", "Explanation": "The cited work by Zhuang et al. provides the method of ancestral sampling that the citing paper adopts in the denoising process of the query set."}, {"Category": "Methodological Basis", "Citation": "(Heusel et al., 2017)", "Explanation": "The cited work by Heusel et al. (2017) is used as a reference for evaluating the performance of generative models in the context of image generation. The citing paper adopts the FID (Frechet Inception Distance) metric from the cited work to measure the quality of the generated images in the context of their research on function learning on fixed manifolds."}, {"Category": "Data Source", "Citation": "(Achlioptas et al., 2018)", "Explanation": "The cited work by Achlioptas et al. (2018) provides metrics for evaluating the performance of generative models in the context of point cloud data generation. 
The citing paper uses the Coverage (COV) and Minimum Matching Distance (MMD) metrics from the cited work to measure the quality of the generated point cloud data in the context of their research on function learning on different manifolds."}, {"Category": "Data Source", "Citation": "(LeCun et al., 1998)", "Explanation": "The cited work by LeCun et al. serves as the source of the MNIST dataset, which the citing paper utilizes in their research on function datasets."}, {"Category": "Data Source", "Citation": "(Karras et al., 2018)", "Explanation": "The cited work by Karras et al. provides the CelebA-HQ dataset, which the citing paper uses in their study of function datasets on manifolds."}, {"Category": "Methodological Basis", "Citation": "(Sullivan & Kaszynski, 2019)", "Explanation": "The cited work by Sullivan and Kaszynski offers a texture mapping approach that the citing paper adopts in their research to map images to manifolds."}, {"Category": "Methodological Basis", "Citation": "(Zhuang et al., 2023)", "Explanation": "The cited work by Zhuang et al. (2023) provides the basis for the hyper-parameters used in the comparison between MDF and DPF in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Dupont et al., 2022b)", "Explanation": "The cited work by Dupont et al. (2022b) is used as a reference for the comparison of MDF with GASP, which is a generative model for continuous functions using an adversarial formulation."}, {"Category": "Methodological Basis", "Citation": "(Karras et al., 2018)", "Explanation": "The cited work by Karras et al. (2018) provides the CelebA-HQ dataset, which the citing paper uses to map images onto the bunny manifold for comparison with GASP performance."}, {"Category": "Data Source", "Citation": "(Dupont et al., 2022b)", "Explanation": "The cited work by Dupont et al. (2022b) provides the ERA5 climate dataset, which the citing paper uses to generate functions on the sphere and compare with GASP performance."}, {"Category": "Extension or Continuation", "Citation": "(Sullivan & Kaszynski, 2019)", "Explanation": "The cited work by Sullivan and Kaszynski (2019) is used to map images from the CelebA-HQ dataset onto the bunny manifold, extending the research of the citing paper in the context of image generation and comparison with GASP performance."}, {"Category": "Supporting Evidence", "Citation": "(L\u00e9vy, 2006)", "Explanation": "The cited work by L\u00e9vy (2006) provides the analytical eigen-functions of the LBO on the sphere, which the citing paper uses to compute spherical harmonics and generate functions on the sphere for comparison with GASP performance."}, {"Category": "Supporting Evidence", "Citation": "(Hersbach et al., 2020)", "Explanation": "The cited work by Hersbach et al. (2020) provides the ERA5 dataset, which the citing paper uses to generate functions on the sphere and compare with GASP performance."}, {"Category": "Supporting Evidence", "Citation": "(Hersbach et al., 2020)", "Explanation": "The cited work by Hersbach et al. 
provides a dataset and a method for generating high-quality images, which the citing paper uses to compare the performance of MDF and GASP in terms of image fidelity and distribution coverage."}, {"Category": "Methodological Basis", "Citation": "(Sullivan & Kaszynski, 2019)", "Explanation": "The cited work by Sullivan and Kaszynski provides a method for texturing images using a manifold representation, which the citing paper uses to generate images and texture them to the bunny manifold in the GASP method."}, {"Category": "Extension or Continuation", "Citation": "Published as a conference paper at ICLR 2024", "Explanation": "The cited work is an extension of the research presented in the citing paper, as it is a conference paper that builds upon the ideas and findings of the original work."}, {"Category": "Supporting Evidence", "Citation": "Results in Fig. 7", "Explanation": "The results presented in Fig. 7 provide evidence to support the claim that the performance of MDF increases with the number of eigen-functions used to compute the coordinate representation \u03c6 in the LBO."}, {"Category": "Methodological Basis", "Citation": "(Rustamov et al., 2007)", "Explanation": "The cited work introduces the concept of cotangent Laplacians in computer graphics, which the citing paper adopts in the usage of 3D meshes for LBO calculations in the context of real-world problems."}, {"Category": "Methodological Basis", "Citation": "(Sharp & Crane, 2020)", "Explanation": "The cited work presents the use of pointclouds in computer vision for LBO calculations, which the citing paper uses in the context of 3D geometry representation for LBO calculations."}, {"Category": "Methodological Basis", "Citation": "(Maskey et al., 2022)", "Explanation": "The cited work discusses the usage of graph Laplacians in computational chemistry problems for LBO calculations, which the citing paper adopts in the context of undirected graphs of atoms connected by bonds in the LBO calculations."}, {"Category": "Methodological Basis", "Citation": "(Xu et al., 2021;2022;Ganea et al., 2021;Jing et al., 2022)", "Explanation": "The cited works provide a method for evaluating the performance of MDF in the context of molecule conformer generation, which the citing paper adopts to conduct their research in the same setting."}, {"Category": "Methodological Basis", "Citation": "(Xu et al., 2021)", "Explanation": "The cited work, CGCF, serves as a methodological basis for the citing paper in terms of the standard setting for molecule conformer prediction in the GEOM-QM9 dataset."}, {"Category": "Methodological Basis", "Citation": "(Xu et al., 2022)", "Explanation": "The cited work, GeoDiff, provides a methodological basis for the citing paper in terms of the standard setting for molecule conformer prediction in the GEOM-QM9 dataset."}, {"Category": "Methodological Basis", "Citation": "(Ganea et al., 2021)", "Explanation": "The cited work, GeoMol, serves as a methodological basis for the citing paper in terms of the standard setting for molecule conformer prediction in the GEOM-QM9 dataset."}, {"Category": "Methodological Basis", "Citation": "(Jing et al., 2022)", "Explanation": "The cited work, Torsional Diffusion, provides a methodological basis for the citing paper in terms of the standard setting for molecule conformer prediction in the GEOM-QM9 dataset."}, {"Category": "Methodological Basis", "Citation": "(Ganea et al., 2021)", "Explanation": "The cited work by Ganea et al. 
(2021) provides a method for modeling the geometric structure of molecules, which the citing paper adopts in their own research on molecular conformer generation."}, {"Category": "Methodological Basis", "Citation": "(Jing et al., 2022)", "Explanation": "The cited work by Jing et al. (2022) introduces a method for modeling torsional angles in molecules, which the citing paper uses in their research on molecular conformer generation."}, {"Category": "Data Source", "Citation": "(Ganea et al., 2021)", "Explanation": "The cited work by Ganea et al. (2021) provides a dataset of molecular structures that the citing paper uses in their research on molecular conformer generation."}, {"Category": "Data Source", "Citation": "(Jing et al., 2022)", "Explanation": "The cited work by Jing et al. (2022) provides a dataset of torsional angle information in molecules that the citing paper uses in their research on molecular conformer generation."}, {"Category": "Methodological Basis", "Citation": "(Jing et al., 2022)", "Explanation": "The cited work, Torsional Diffusion, is used as a reference for understanding the importance of intrinsic geometric factors in the context of conformers. The citing paper adopts the concept of torsional angles in the design of their own model, MDF, to learn functions on manifolds."}, {"Category": "Supporting Evidence", "Citation": "(Mirsky & Lee, 2021)", "Explanation": "The cited work by Mirsky and Lee highlights the risk of using generative models to create deceptive data, which is a key concern discussed in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Tinsley et al., 2021)", "Explanation": "The cited work by Tinsley et al. discusses the risk of data leakage and privacy concerns associated with generative models, which is a key point discussed in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Jain et al., 2020)", "Explanation": "The cited work by Jain et al. highlights the potential for generative models to amplify existing biases in the training data, which is a key concern discussed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Rostamzadeh et al., 2021)", "Explanation": "The cited work by Rostamzadeh et al. 
provides a comprehensive discussion on ethical aspects in the context of generative modeling, which the citing paper builds upon to further explore the ethical implications of generative models."}, {"Category": "Methodological Basis", "Citation": "(Jaegle et al., 2022)", "Explanation": "The cited work introduces the PerceiverIO architecture, which the citing paper adopts in its experiments to address the computational demand of the transformer-based score network in MDF."}, {"Category": "Extension or Continuation", "Citation": "(Song et al., 2021a)", "Explanation": "The cited work on improved inference methods is mentioned as a potential area for future work in the citing paper, indicating a possible extension or continuation of research in this direction."}, {"Category": "Data Source", "Citation": "(LeCun et al., 1998)", "Explanation": "The cited work provides the MNIST dataset, which the citing paper uses for evaluating models in the context of image texture mapping into meshes."}, {"Category": "Data Source", "Citation": "(Karras et al., 2018)", "Explanation": "The cited work provides the CelebA-HQ dataset, which the citing paper uses for evaluating models in the context of image texture mapping into meshes."}, {"Category": "Data Source", "Citation": "(Sullivan & Kaszynski, 2019)", "Explanation": "The cited work provides a method for texture mapping images into meshes, which the citing paper uses in the context of evaluating models in the context of image texture mapping into meshes."}, {"Category": "Data Source", "Citation": "(Dupont et al., 2022b)", "Explanation": "The cited work provides the dataset used in the comparison with GASP, which is necessary for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Belkin & Niyogi, 2001)", "Explanation": "The cited work by Belkin and Niyogi (2001) provides the theoretical foundation for the use of the symmetric normalized graph Laplacian in the computation of eigenvectors for the LBO in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Hernandez et al., 2009)", "Explanation": "The cited work by Hernandez et al. 
(2009) is used to efficiently compute the eigen-decomposition of the symmetric normalized graph Laplacian, which is a key methodological step in the research presented in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Vaswani et al., 2017)", "Explanation": "The cited work introduces the Transformer architecture, which the citing paper adopts in the design of the score field in MDF to process irregularly sampled data."}, {"Category": "Methodological Basis", "Citation": "(Tolstikhin et al., 2021)", "Explanation": "The cited work presents the MLP architecture, which the citing paper uses in the design of the score field in MDF to process irregularly sampled data."}, {"Category": "Methodological Basis", "Citation": "(Jaegle et al., 2022)", "Explanation": "The cited work introduces the PerceiverIO architecture, which the citing paper implements in the model to process irregularly sampled data efficiently and effectively."}, {"Category": "Data Source", "Citation": "(Vaswani et al., 2017)", "Explanation": "The cited work provides the context and query sets used in the PerceiverIO architecture, which the citing paper incorporates into the score computation in MDF."}, {"Category": "Data Source", "Citation": "(Tolstikhin et al., 2021)", "Explanation": "The cited work provides the latent arrays used in the PerceiverIO architecture, which the citing paper incorporates into the score computation in MDF."}, {"Category": "Data Source", "Citation": "(Jaegle et al., 2022)", "Explanation": "The cited work provides the specific settings used in the PerceiverIO architecture, which the citing paper presents in Tab.8 for the quantitatively evaluated experiments in MDF."}, {"Category": "Methodological Basis", "Citation": "(Jaegle et al., 2022)", "Explanation": "The PerceiverIO architecture is used in the score model to process irregular data, which is a key methodological element in the construction of MDF."}, {"Category": "Methodological Basis", "Citation": "(Vaswani et al., 2017)", "Explanation": "The standard Transformer Encoder-Decoder architecture is also used in the score model to process irregular data, which is a key methodological element in the construction of MDF."}, {"Category": "Methodological Basis", "Citation": "(Tolstikhin et al., 2021)", "Explanation": "The MLP-mixer architecture is used in the score model to process irregular data, which is a key methodological element in the construction of MDF."}, {"Category": "Methodological Basis", "Citation": "(Jaegle et al., 2022)", "Explanation": "The cited work, PeceiverIO, is used as a methodological basis for the score field model in the citing paper, as it facilitates the handling of large and variable numbers of context and query pairs in the MDF formulation."}, {"Category": "Methodological Basis", "Citation": "(Vaswani et al., 2017)", "Explanation": "The cited work by Vaswani et al. serves as the methodological basis for the image generation model used in the citing paper, as it provides the Encoded-Decoder (Enc-Dec) architecture that is employed in the research."}, {"Category": "Extension or Continuation", "Citation": "(Tolstikhin et al., 2021)", "Explanation": "The cited work by Tolstikhin et al. 
is extended in the citing paper by introducing the MLP-mixer model for image generation, which builds upon the research presented in the cited work."}, {"Category": "Data Source", "Citation": "GMM + Stanford Bunny dataset", "Explanation": "The GMM + Stanford Bunny dataset is used as a data source in the research conducted in the citing paper, providing the necessary information and training data for the image generation model."}, {"Category": "Methodological Basis", "Citation": "PerceiverIO architecture", "Explanation": "The PerceiverIO architecture is used as the methodological basis for the model in the citing paper, providing the specific architecture and training settings for the image generation model."}, {"Category": "Supporting Evidence", "Citation": "COV and MMD metrics", "Explanation": "The COV and MMD metrics are used to measure the effect of random training seed in the model training process, providing supporting evidence for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Sumner & Popovic, 2004)", "Explanation": "The cited work provides the cat category geometries that the citing paper uses to build a dataset of different fields on the manifold for evaluation purposes."}, {"Category": "Data Source", "Citation": "(Zhuang et al., 2023)", "Explanation": "The cited work is the DPF model that the citing paper uses to evaluate the performance of MDF in a real-world setting."}, {"Category": "Extension or Continuation", "Citation": "(Zhuang et al., 2023)", "Explanation": "The citing paper extends the research of the DPF model by exploring the performance of MDF in a more realistic setting of rigid and isometric transformations of the training manifold."}, {"Category": "Methodological Basis", "Citation": "(Zhuang et al., 2023)", "Explanation": "The cited work by Zhuang et al. (2023) provides a method for training DPF that is used in the citing paper to study the performance of DPF under isometric transformations."}, {"Category": "Extension or Continuation", "Citation": "(Loop, 1987)", "Explanation": "The cited work by Loop (1987) on loop subdivision is used in the citing paper to generate different discretizations of the bunny manifold for training and evaluation purposes."}, {"Category": "Methodological Basis", "Citation": "(Lugmayr et al., 2022)", "Explanation": "The cited work provides a recipe for performing conditional inference with MDF, which the citing paper adopts in their research to generate temporal fields consistent with observed initial conditions."}, {"Category": "Methodological Basis", "Citation": "(Reddy, 2019)", "Explanation": "The cited work introduces FEM, which the citing paper uses in their forward dynamics prediction to generate temporal fields consistent with observed initial conditions."}, {"Category": "Methodological Basis", "Citation": "(Isakov, 2006)", "Explanation": "The cited work focuses on inverting the full dynamics of the PDE, which the citing paper utilizes in their research to condition on sparse observations and invert the full dynamics of the PDE."}, {"Category": "Methodological Basis", "Citation": "(Bronstein et al., 2008)", "Explanation": "The cited work by Bronstein et al. (2008) provides the bunny manifold used in the experiments conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Hersbach et al., 2020)", "Explanation": "The cited work by Hersbach et al. 
(2020) serves as the data source for the ERA5 dataset used in the experiments in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Heusel et al., 2017)", "Explanation": "The cited work by Heusel et al. (2017) provides a common approach for evaluating generative models over images, which the citing paper adopts in the context of metrics for curved geometries."}, {"Category": "Extension or Continuation", "Citation": "(Achlioptas et al., 2018)", "Explanation": "The cited work by Achlioptas et al. (2018) introduces metrics from the field of point cloud data for evaluating functions on curved geometries, which the citing paper extends to the context of image generation."}, {"Category": "Methodological Basis", "Citation": "(Achlioptas et al., 2018)", "Explanation": "The cited work by Achlioptas et al. provides a method for measuring the correlation between MMD and COV metrics, which the citing paper adopts to assess the authenticity of generated fields in the test set."}, {"Category": "Supporting Evidence", "Citation": "(Zhuang et al., 2023)", "Explanation": "The cited work by Zhuang et al. (2023) provides a method for improving the coverage of a model, which the citing paper adopts to improve the coverage of their own model (MDF) while maintaining or improving the MMD score."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b34", "b18", "b19", "b14", "b13", "b37", "b48" ], "table_ref": [], "text": "Lifelong Learning (LL) is a relatively new area of machine learning (ML) research, in which agents continually learn as they encounter new tasks, acquiring novel task knowledge while avoiding forgetting of previous tasks (Parisi et al., 2019). This differs from standard train-then-deploy ML, which cannot incrementally learn without catastrophic interference across successive tasks (French, 1999).\nMost current LL research assumes a single agent that sequentially learns from its own actions and surroundings, which, by design, is not parallelizable over time and/or physical locations. In the real world, tasks may happen in different places; for instance, we may need agents that can operate in deserts, forests, and snow, as well as recognize birds in the sky and fish in the deep ocean. The possibility of parallel task learning and sharing among multiple agents to speed up lifelong learning has traditionally been overlooked. To solve the above challenges, we propose a new Lifelong Learning challenge scenario, Shared Knowledge Lifelong Learning (SKILL): A population of originally identical LL agents is deployed to a number of distinct physical locations. Each agent learns a sequence of tasks in its location. Agents share knowledge over a decentralized network, so that, in the end, all agents can master all tasks. SKILL promises the following benefits: speedup of learning through parallelization; ability to simultaneously learn from distinct locations; resilience to failures as no central server is used; possible synergies among agents, whereby what is learned by one agent may facilitate future learning by other agents.\nApplication scenarios for SKILL include: 1) Users each take pictures of landmark places and buildings in their own city, then provide annotations for those. After learning and sharing, all users can identify all landmarks while traveling to any city. This could also apply to recognizing products in stores or markets in various countries, or foods at restaurants worldwide. Thus, even though each teacher only learns at one or a few locations (or tasks), eventually all users may be interested in knowledge from all locations, as it will be useful during travel. 2) Agents in remote outposts worldwide with limited connectivity are taught to recognize symptoms of new emerging diseases, then share their knowledge to allow any agent to quickly identify all diseases. 3) Explorers are dispatched to various remote locations and each learns about plant or animal species they encounter, then later shares with other agents who may encounter similar or different species. 4) Each time a criminal of some sorts is apprehended (e.g., shoplifter, insurgent, spy, robber, sex offender, etc), the local authorities take several hundred pictures to learn to identify that person. Then all local authorities share their knowledge so that any criminal can later be identified anywhere.\nHowever, to solve SKILL, one must address the following challenges:\nChal-1 Distributed, decentralized learning of multiple tasks. A solution to SKILL should support a population of agents deployed over several physical locations and each learning one or more sequential tasks. For resilience reasons, the population should not rely on a single central server. 
Chal-2 Lifelong learning ability: Each agent must be capable of lifelong learning, i.e., learning a sequence of tasks with minimal interference and no access to previous data as each new task is learned. Chal-3 Shareable knowledge representation: The knowledge representation should easily be shared and understood among agents. Agents must be able to consolidate knowledge from other agents in a decentralized, distributed fashion. Chal-4 Speedup through parallelization: Shared knowledge should be sufficiently compact, so that the benefits from using multiple parallel agents are not erased by communications costs. Adding more agents should result in greater speedup compared to a single agent. We measure speedup as the ratio of the time it takes one agent to learn all tasks to the time it takes N agents (larger is better; a small accounting sketch is given below). As a goal for our work, we strive for a speedup of at least 0.5 × N with N agents, where perfect speedup would be 1.0 × N if there were no parallelization and communications overhead. Chal-5 Ability to harness possible synergies among tasks: When possible, learning some tasks may improve learning speed or performance at other, related tasks. To address the SKILL challenge, we take inspiration from neuroscience. Many approaches to LL involve at least partially retraining the core network that performs tasks (feature extraction backbone plus classification head) as every new task is learned. But transmitting and then merging these networks across multiple agents would incur very high communications and computation costs. With the exception of perceptual learning, where human visual cortex may indeed be altered when learning specific visual discrimination tasks for days or weeks (Goldstone, 1998;Dosher & Lu, 2017), there is little evidence that our entire visual cortex -from early-stage filters in primary visual cortex to complex features in inferotemporal cortex -is significantly altered when we just learn, e.g., about a new class of flowers from a few exemplars. Instead, the perirhinal cortex (and more generally the medial temporal lobe) may be learning new representations for new objects by drawing upon and combining existing visual features and representations from visual cortex (Deshmukh et al., 2012). This may give rise to specialized \"grandmother cells\" (Bowers, 2017) (or Jennifer Aniston neurons; Quiroga et al. (2005); Quiroga (2017)) that can be trained on top of an otherwise rather immutable visual cortex backbone. While the grandmother cell hypothesis remains debated in neuroscience (vs. distributed representations; Valdez et al. (2015)), here it motivates us to explore the viability of a new lightweight lifelong learning scheme, where the feature extraction backbone and the latent representation are fixed, and each new object class learned is represented by a single new neuron that draws from this representation.
From this inspiration, we propose a simple but effective solution to SKILL based on new lightweight lifelong learning (LLL) agents. Each LLL agent uses a common frozen backbone built in at initialization, so that only the last layer (head) plus some small adjustments to the backbone (beneficial biases) are learned for each task. To eliminate the need for a task oracle, LLL agents also learn and share summary statistics about their training datasets, or share a few training images, to help other agents assign test samples to the correct head (task mapper). 
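The speedup target from Chal-4 can be made concrete with a small accounting sketch. The code below is illustrative only: the function name, task assignment, and overhead numbers are assumptions, not from the paper; it simply encodes the definition above, i.e., speedup = single-agent learning time divided by the wall-clock time of the slowest parallel agent, with a target of at least 0.5 × N.

```python
from typing import List

def skill_speedup(per_task_time: List[float],
                  tasks_per_agent: List[List[int]],
                  sharing_overhead_per_agent: float = 0.0) -> float:
    """Illustrative speedup estimate for N parallel SKILL agents.

    per_task_time[t]           -- time (or MACs) one agent needs to learn task t
    tasks_per_agent[i]         -- list of task indices assigned to agent i
    sharing_overhead_per_agent -- extra time each agent spends sharing/consolidating
    """
    # One sequential agent would learn every task back to back.
    single_agent_time = sum(per_task_time)
    # With N agents working in parallel, wall-clock time is set by the slowest
    # agent (its own tasks plus its sharing/consolidation overhead).
    wall_clock = max(sum(per_task_time[t] for t in tasks) + sharing_overhead_per_agent
                     for tasks in tasks_per_agent)
    return single_agent_time / wall_clock

# Toy example: 102 equal-cost tasks spread over N = 51 agents (2 tasks each).
N, T = 51, 102
per_task = [1.0] * T
assignment = [[2 * i, 2 * i + 1] for i in range(N)]
s = skill_speedup(per_task, assignment, sharing_overhead_per_agent=0.5)
print(f"speedup = {s:.1f}x (goal is at least {0.5 * N:.1f}x for N = {N})")
```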
On a new, very challenging dataset with 102 image classification tasks (5,033 classes in total, 2,041,225 training, 243,464 validation, and 243,464 test images), we achieve higher accuracy compared to 8 LL baselines, and also achieve near-perfect parallelization speedup.
Our main contributions are: (1) We formulate a new Lifelong Learning challenge, Shared Knowledge Lifelong Learning (SKILL), which focuses on parallel (sped up) task learning and knowledge sharing among agents. We frame SKILL and contrast it with multi-task learning, sequential LL, and federated learning (Sec. 3).
(2) A new LL benchmark dataset: SKILL-102, with 102 complex image classification tasks. To the best of our knowledge, it is the most challenging benchmark to evaluate LL and SKILL algorithms in the image classification domain, with the largest number of tasks, classes, and inter-task variance (Sec. 4). (3) A solution to the SKILL problem: Lightweight Lifelong Learning (LLL) for efficient knowledge sharing among agents, using a fixed shared core plus task-specific shareable modules. The need for a task oracle is eliminated by using a task mapper, which can automatically determine the task at inference time from just an input image (Sec. 5). (4) Our SKILL algorithm achieves SOTA performance on three main metrics: high LL task accuracy (less catastrophic forgetting), low shared (communication) resources, and high speedup ratio (Sec. 6). (5) The proposed Lightweight Lifelong Learner shows promising forward knowledge transfer, which reuses the accumulated knowledge for faster and more accurate learning of new tasks." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Lifelong Learning", "publication_ref": [ "b30", "b34", "b11", "b23", "b0", "b54", "b28", "b55", "b52", "b8", "b51", "b44", "b6", "b28", "b31", "b29", "b7", "b1", "b5" ], "table_ref": [], "text": "Lifelong Learning (LL) aims to develop AI systems that can continuously learn to address new tasks from new data, while preserving knowledge learned from previous tasks (Masana et al., 2022). It also refers to the ability to continually learn over time by accommodating new knowledge while retaining previously learned experiences (Parisi et al., 2019). LL is challenging because it is usually assumed that the training data from previous tasks is not available anymore while learning new tasks; hence one cannot just accumulate training data over time and then learn from all the data collected so far. Instead, new approaches have been proposed, which fall under three main branches (De Lange et al., 2021): (1) Regularization methods add an auxiliary loss term to the primary task objective to constrain weight updates, so as to minimally disturb previously learned knowledge while learning new tasks. The extra loss can be a penalty on the parameters (EWC (Kirkpatrick et al., 2017), MAS (Aljundi et al., 2018) and SI (Zenke et al., 2017)) or on the feature space (FDR (Benjamin et al., 2018)), such as using Knowledge Distillation (LwF (Li & Hoiem, 2017), DMC (Zhang et al., 2020)). (2) Parameter-Isolation methods assign a fixed set of model parameters to a task and avoid over-writing them when new tasks are learned (SUPSUP (Wortsman et al., 2020), PSP (Cheung et al., 2019) and BPN (Wen et al., 2021)). (3) Rehearsal methods use a buffer containing sampled training data from previous tasks, as an auxiliary to a new task's training set. 
The buffer can be used either at the end of the task training (iCaRL, ER (Rebuffi et al., 2017b;Robins, 1995)) or during training (GSS, AGEM, AGEM-R, DER, DERPP (Lopez-Paz & Ranzato, 2017;Chaudhry et al., 2018;Aljundi et al., 2019;Buzzega et al., 2020)). However, most traditional LL algorithms cannot satisfy the requirements of SKILL: parallel learning for speeding up, and sharing knowledge among agents.
Figure 1: SKILL vs. related learning paradigms. a) Multi-task learning (Caruana, 1997): one agent learns all tasks at the same time in the same physical location. b) Sequential Lifelong Learning (S-LL) (Li & Hoiem, 2017): one agent learns all tasks sequentially in one location, deploying LL-specific machinery to avoid task interference. c) Federated learning (McMahan et al., 2017): multiple agents learn the same task in different physical locations, then sharing learned knowledge (parameters) with a center agent. d) Our SKILL: different S-LL agents in different physical regions each learn tasks, and learned knowledge is shared among all agents, such that finally all agents can solve all tasks. Bottom-right table: Strengths & weaknesses of each approach." }, { "figure_ref": [], "heading": "Multi-task Learning", "publication_ref": [ "b56", "b10", "b45", "b56" ], "table_ref": [], "text": "Multi-Task Learning (MTL) aims to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks (Zhang & Yang, 2021;Crawshaw, 2020;Ruder, 2017).
The main difference between MTL and SKILL is that MTL assumes that all tasks are located in the same physical region, and that one can access the datasets of all tasks at the same time (Zhang & Yang, 2021). While MTL learns multiple tasks together, SKILL assumes that different knowledge sources are separated in different physical regions and that different agents should learn them in parallel." }, { "figure_ref": [], "heading": "Federated Learning", "publication_ref": [ "b21", "b27", "b3" ], "table_ref": [], "text": "Federated learning (FL) is a machine learning setting where many clients (e.g., mobile devices, networked computers, or even whole organizations) collaboratively train a model under the orchestration of a central server, while keeping the training data decentralized (Kairouz et al., 2021;Li et al., 2020;Bonawitz et al., 2019). As shown in Fig. 1c, FL agents all learn the same task and share model parameters with a central agent, whereas SKILL requires decentralized agents that each learn different tasks and share knowledge directly with one another." }, { "figure_ref": [], "heading": "Shared knowledge in lifelong learning (SKILL)", "publication_ref": [ "b25", "b22", "b46", "b49", "b24", "b47", "b15" ], "table_ref": [], "text": "The chief motivation for SKILL is to enable the next generation of highly-efficient, parallelizable, and resilient lifelong learning.
Assumptions: (1) A population of N agents wants to learn a total of T different tasks separated into N physical regions. 
(2) Each agent i asynchronously learns 1 ≤ T_i ≤ T tasks, in sequence, from the distinct inputs and operating conditions it encounters. As in standard LL, training data from previous tasks is not available anymore while learning the next task. (3) Each agent performs as a \"teacher\" for its T_i tasks, by sharing what it has learned with the other N-1 agents; at the same time, each agent also performs as a \"student\" by receiving knowledge from the other N-1 agents. In the end, every agent has the knowledge to solve all T tasks. Fig. 1 contrasts SKILL with other learning paradigms. Note how here we use \"teacher\" and \"student\" to distinguish the two roles that every agent will perform; this is different from and not to be confused with other uses of student/teacher terminology, for example in knowledge distillation. (4) There is a perfect task oracle at training time, i.e., each agent is told which tasks it should learn. (5) There is a clear separation between tasks, and between training and test phases.
Evaluation metrics: (1) CPU/computation expenditure. This metric is important to gauge the efficacy of an approach and its ability to scale up with more agents operating in parallel. Wall-clock time is the main metric of interest, so that speedup can be achieved through parallelism. Thus, if N agents learn for 1 unit of time, wall-clock time would be 1, which is an N-fold speedup over a single sequential agent. In practice, speedup < N is expected because of overhead for sharing, communications, and knowledge consolidation. Because wall-clock time assumes a given CPU or GPU speed, we instead report the number of multiply-accumulate (MAC) operations.
(2) Network/communication expenditure. Sharing knowledge over a network is costly and hence should be minimized. To relate communications to computation, and hence allow trade-offs, we assume a factor α = 1,000 MACs/byte transmitted. It is a hyperparameter in our results that can easily be changed to adapt to different network types (e.g., wired vs. wireless).
(3) Performance: After the population of N agents has collectively learned all T tasks, we report aggregated (averaged) performance over all T tasks (correct classification rate over all tasks). Note how here we assume that there is no task oracle at test time. After training, agents should be able to handle any input from any task without being told which task that input corresponds to. SKILL does not assume a free task oracle because transmitting training data across agents is potentially very expensive. Thus, agents must also share information that will allow receiving agents to know when a new test input relates to each received task.
Fig. 2 caption (fragment), listing datasets that SKILL-102 is contrasted with: (Rebuffi et al., 2017a), Cifar-100 (Krizhevsky et al., 2009), F-CelebA (Ke et al., 2020), Fine-grained 6 tasks (Russakovsky et al., 2014;Wah et al., 2011;Nilsback & Zisserman, 2008b;Krause et al., 2013;Saleh & Elgammal, 2015;Eitz et al., 2012); c) Qualitative visualization of other datasets, using the same legend and format as in a).
Open questions: What knowledge should be shared? SKILL agents must share knowledge that is useful to other agents and avoid sharing local or specialized knowledge that may be misleading, in conflict with, or inappropriate to other agents. The shared knowledge may include model parameters, model structure, generalizations/specializations, input data, specific contextual information, etc. There are also size/memory/communication constraints for the shared knowledge. When and how to share? 
Different communication network topologies and sharing frequencies likely would lead to different results. Here, we will sidestep this problem and assume a fully connected communication network, and broadcast sharing from each agent to all others each time a new task has been learned." }, { "figure_ref": [ "fig_0" ], "heading": "SKILL-102 dataset", "publication_ref": [ "b32", "b36" ], "table_ref": [], "text": "We use image classification as the basic task framework and propose a novel LL benchmark dataset: SKILL-102 (Fig. 2). SKILL-102 consists of 102 image classification datasets. Each one supports one complex classification task, and the corresponding dataset was obtained from previously published sources (e.g., task 1: classify flowers into 102 classes, such as lily, rose, petunia, etc., using 8,185 train/val/test images (Nilsback & Zisserman, 2008a); task 2: classify 67 types of scenes, such as kitchen, bedroom, gas station, library, etc., using 15,524 images (Quattoni & Torralba, 2009); full dataset sequence and details in Suppl. Fig. S5).
In total, SKILL-102 is a subset of all datasets/tasks and images in DCT, and comprises 102 tasks, 5,033 classes and 2,041,225 training images (Suppl. Sec. A and Suppl. Fig. S5). After training, the algorithm is presented with 243,464 test images and decides, for each image, which of the 5,033 classes it belongs to (no task oracle). To the best of our knowledge, SKILL-102 is the most challenging completely real (not synthesized or permuted) image classification benchmark for LL and SKILL algorithms, with the largest number of tasks, number of classes, and inter-task variance." }, { "figure_ref": [ "fig_1" ], "heading": "Lightweight Lifelong Learner for SKILL", "publication_ref": [], "table_ref": [], "text": "To satisfy the requirements of SKILL (see Introduction), we design Lightweight Lifelong Learning (LLL) agents. The design motivation is as follows: we propose to decompose agents into a generic, pretrained, common representation backbone endowed into all agents at manufacturing time, and small task-specific decision modules. This enables distributed, decentralized learning, as agents can learn their own tasks independently (Chal-1). It also enables lifelong learning (Chal-2) in each agent by creating a new task-specific module for each new task. Because the shared modules all operate in the common representation of the backbone, this approach also satisfies Chal-3. Using compact task-specific modules also aims to maximize speedup through parallelization (Chal-4). Finally, we show a few examples where knowledge from previously learned tasks may both accelerate the learning and improve the performance on new tasks (Chal-5).
Fig. 3 shows the overall pipeline and the 4 roles of each agent. Agents use a common frozen backbone, and only a compact task-dependent \"head\" module is trained per agent and task, and then shared among agents. This makes the cost of both training and sharing very low. Head modules simply consist of (1) a classification layer that operates on top of the frozen backbone, and (2) a set of beneficial biases that provide lightweight task-specific re-tuning of the backbone, to address potentially large domain gaps between the task-agnostic backbone and the data distribution of each new task. 
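The four roles of an agent (training, sharing, receiving, testing; spelled out in the Figure 3 caption reproduced just below) map onto a very small amount of code. The sketch here is illustrative only: the class and method names (LLLAgent, learn_task, receive, predict) and the fit_anchor / task_mapper callables are hypothetical stand-ins for the components described in this section, not the authors' released implementation; beneficial biases are omitted and discussed separately below.

```python
import torch
import torch.nn as nn

class LLLAgent:
    """Sketch of a Lightweight Lifelong Learning agent (names are illustrative)."""

    def __init__(self, backbone: nn.Module, feat_dim: int = 2048):
        self.backbone = backbone.eval()     # common frozen backbone, never updated
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.feat_dim = feat_dim
        self.head_bank = {}                 # task_id -> task-specific head
        self.anchor_bank = {}               # task_id -> task anchor (e.g., GMMC stats)

    # Role 1: train on a locally assigned task (head only; backbone stays frozen).
    def learn_task(self, task_id: int, num_classes: int, train_loader, fit_anchor):
        head = nn.Linear(self.feat_dim, num_classes)
        opt = torch.optim.Adam(head.parameters(), lr=1e-3)
        for images, labels in train_loader:                 # one pass shown for brevity
            with torch.no_grad():
                feats = self.backbone(images)               # frozen feature extraction
            loss = nn.functional.cross_entropy(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
        anchor = fit_anchor(train_loader)                   # e.g., 25 Gaussian clusters
        self.head_bank[task_id] = head
        self.anchor_bank[task_id] = anchor
        return head, anchor                                 # Role 2: broadcast these

    # Role 3: consolidate knowledge received from other agents.
    def receive(self, task_id: int, head, anchor):
        self.head_bank[task_id] = head
        self.anchor_bank[task_id] = anchor

    # Role 4: test without a task oracle.
    @torch.no_grad()
    def predict(self, image: torch.Tensor, task_mapper):
        feat = self.backbone(image.unsqueeze(0))            # 2048-D latent vector
        task_id = task_mapper(feat, self.anchor_bank)       # GMMC / Mahalanobis mapping
        logits = self.head_bank[task_id](feat)
        return task_id, logits.argmax(dim=-1).item()
```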
Figure 3: 1) Training: The backbone allows the agent to extract compact representations from inputs (e.g., with an xception backbone, the representation is a latent vector of 2048 dimensions, and inputs are 299 × 299 RGB images). Each agent learns a task-specific head (red triangle) for each new task. A head consists of the last fully-connected layer of the network plus our proposed LL beneficial biasing units (BB) that provide task-dependent tuning biases to all neurons in the network (one float number per neuron). During training, each agent also learns a GMMC or Mahalanobis task anchor which will form a task mapper. 2) Share knowledge with other agents: each agent shares the learned task-specific head, Beneficial Bias (BB), and GMMC module (or training images for Mahalanobis) with all other agents. 3) Receive knowledge from other agents: each agent receives different heads and GMMC/Mahalanobis task mapper anchors from other agents. All heads are stored in a head bank and all task anchors are consolidated to form a task mapper. 4) Testing: At test time, an input is first processed through the task mapper. This outputs a task ID, used to load up the corresponding head (last layer + beneficial biases) from the bank. The network is then equipped with the correct head and is run on the input to produce an output.
To eliminate the need for a task oracle, LLL agents also learn and share task anchors, in the form of summary statistics about their training datasets, or share a few training images, to help other agents assign test samples to the correct head at test time (task mapper). Two representations for task anchors, and the corresponding task mapping mechanisms, are explored: a Gaussian Mixture Model Classifier (GMMC) and a Mahalanobis distance classifier (MAHA). Receiving agents simply accumulate received heads and task anchors in banks, and the anchors for all tasks received so far by an agent are combined to form a task mapper within that agent. We currently assume a fully connected communication network among all agents, and every agent, after learning a new task, broadcasts its head and task anchor to all other agents. Hence, all agents become identical after all tasks have been learned and shared, and they all can master all tasks. At test time, using one of all identical agents, we first run input data through the task mapper to recover the task, and then invoke the corresponding head to obtain the final system output. The task mapper eliminates the need for a task oracle at test time. The combination of a pre-trained backbone, task-specific head and BB, and task mapper enables lifelong learning in every agent with minimal forgetting as each agent learns a sequence of tasks (see results)." }, { "figure_ref": [], "heading": "Pretrained backbone:", "publication_ref": [ "b9", "b12", "b51", "b23", "b8" ], "table_ref": [], "text": "We use the xception network (Chollet, 2017) pretrained on ImageNet (Deng et al., 2009), as it provides a good balance between model complexity and expressivity of the embedding. The backbone is embedded in every agent at manufacturing time and is frozen. It processes 299 × 299 RGB input images and outputs a 2048D feature vector. Any other backbone could be used, depending on available resources.
Beneficial Biases: To address potentially large domain shifts between ImageNet and future tasks (e.g., line-drawing datasets, medical imaging datasets, astronomy datasets), we designed beneficial biases (BB). Inspired by the Beneficial Perturbation Network (BPN) of Wen et al. (2021), BB provides a set of task-dependent, out-of-network bias units which are activated per task. These units take no input. Their constant outputs add to the biases of the neurons already present in the backbone network; thus, they provide one bias value per neuron in the core network. This is quite lightweight, as there are far fewer neurons than weights in the backbone (22.9M parameters but only 22k neurons in xception). Different from BPN, which works best in conjunction with an LL method like EWC (Kirkpatrick et al., 2017) or PSP (Cheung et al., 2019), and only works on fully-connected layers, BB does not require EWC or PSP, and can perform as an add-on module on both convolutional layers (Conv) and fully-connected layers (FC). Specifically, for each Conv layer, we have
y = Conv(x) + b + B   (1)
with input feature x ∈ R^{w×h×c} and output feature y ∈ R^{w'×h'×c'}; b ∈ R^{c'} is the bias already present in the backbone and B ∈ R^{c'} is the beneficial bias (w, h, c and w', h', c' denote the width, height, and channel dimensions of the input and output feature maps, respectively). For FC layers,
y = FC(x) + b + B   (2)
with x ∈ R^l, y ∈ R^{l'}, b ∈ R^{l'} and B ∈ R^{l'}. The size of B (beneficial bias) is equal to the number of hidden units (l') in this FC layer." }, { "figure_ref": [ "fig_1" ], "heading": "GMMC task mapper:", "publication_ref": [ "b43", "b26", "b9" ], "table_ref": [], "text": "To recover the task at test time, each agent also learns Gaussian Mixture clusters (GMMC) (Rios & Itti, 2020) that best encompass each of its tasks' data, and shares the cluster parameters (means + diagonal covariances). This is also very fast to learn and very compact to share. As shown in Fig. 3 (bottom right), during training, each agent clusters its entire training set into k Gaussian clusters:
f(x) = \sum_{i=1}^{k} ϕ_i N(x | µ_i, Σ_i), with \sum_{i=1}^{k} ϕ_i = 1   (3)
We use k = 25 clusters for every task (ablation studies in Appendix). In sharing knowledge, each agent performs a \"teacher\" role on its learned task and shares the mean and diagonal covariance of its clusters with all other agents (students). In receiving knowledge, each agent performs a \"student\" role and just aggregates all received clusters in a bank to form a task mapper with kT clusters, keeping track of which task any given cluster comes from: D_map() = {(N_1, ϕ_1): 1, ..., (N_kT, ϕ_kT): T}. At test time, an image x_i is evaluated against all clusters received so far, and the task associated with the cluster closest to the test image is chosen: Task = D_map((N_m, ϕ_m)), where m = argmax_m P(m, x_i). The probability of image x_i belonging to the m-th Gaussian cluster is given by:
P(m, x_i) = ϕ_m N(x_i | µ_m, Σ_m) / \sum_{n=1}^{kT} ϕ_n N(x_i | µ_n, Σ_n)   (4)
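The GMMC task mapper just described is small enough to sketch directly. The snippet below is an illustrative re-implementation, not the authors' code: it uses scikit-learn's GaussianMixture (an assumed dependency) with diagonal covariances and k = 25 as in the text, accumulates all received clusters in one bank, and picks the task of the most responsible cluster, as in Eq. (4). The Mahalanobis variant described next can be built analogously from class means and a single tied covariance.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMCTaskMapper:
    """Maps a backbone feature vector to a task ID via shared Gaussian clusters."""

    def __init__(self, k: int = 25):
        self.k = k
        self.means, self.covs, self.weights, self.task_ids = [], [], [], []

    # Teacher role: fit k diagonal Gaussians to this task's training features (Eq. 3).
    def fit_task(self, features: np.ndarray):
        gmm = GaussianMixture(n_components=self.k, covariance_type="diag").fit(features)
        # These parameters (means + diagonal covariances + weights) are what gets shared.
        return gmm.means_, gmm.covariances_, gmm.weights_

    # Student role: aggregate clusters received from any agent into one bank.
    def receive_task(self, task_id: int, means, covs, weights):
        for m, c, w in zip(means, covs, weights):
            self.means.append(m)
            self.covs.append(c)
            self.weights.append(w)
            self.task_ids.append(task_id)

    # Test time: return the task of the cluster with highest responsibility (Eq. 4).
    def predict_task(self, x: np.ndarray) -> int:
        means = np.stack(self.means)
        covs = np.stack(self.covs)                 # diagonal variances, shape (kT, d)
        w = np.asarray(self.weights)
        # log N(x | mu, diag(cov)) for every stored cluster
        log_norm = -0.5 * (np.sum(np.log(2 * np.pi * covs), axis=1)
                           + np.sum((x - means) ** 2 / covs, axis=1))
        scores = np.log(w + 1e-12) + log_norm
        return self.task_ids[int(np.argmax(scores))]
```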
The shared images received by the student agents are used to compute the tied covariance. Similar to GMMC, the student agents also maintain a task mapper to keep track of which task any given class comes from. For a test image x, MAHA computes the Mahalanobis distance for all classes received so far and assigns the test image to the task associated with the smallest Mahalanobis distance, defined as:\narg min c (x -µ c ) T Σ-1 (x -µ c ) (5)\nSystem implementation details: (1) Frozen xception backbone (Chollet, 2017), with 2048D latent representation.\n(2) Each agent learns one \"head\" per task, which consists of one fully-connected layer with 2048 inputs from the backbone and c outputs for a classification task with c classes (e.g., task 1 is to classify c = 102 types of flowers), and BB biases that allow us to fine-tune the backbone without changing its weights, to mitigate large domain shifts.\n(3) Each agent also fits k = 25 Gaussian clusters in the 2048D latent space to its training data. (4) At test time, a test image is presented and processed forward through the xception backbone. The GMMC classifier then determines the task from the nearest Gaussian cluster. The corresponding head is loaded and it produces the final classification result: which image class (among 5,033 total) the image belongs to. (5) The workflow is slightly different with the Mahalanobis task mapper: while GMMC clusters are learned separately at each teacher for each task as the task is learned, the Mahalanobis classifier is trained by students after sharing, using 5 images/class shared among agents. (6) Agents are implemented in pyTorch and run on desktop-grade GPUs (e.g., nVidia 3090, nVidia 1080)." }, { "figure_ref": [], "heading": "Experiments and results", "publication_ref": [], "table_ref": [], "text": "Each LLL agent in our approach is a sequential lifelong learner, capable of learning several tasks in its physical region, one after the other. Hence, before we show full results on the SKILL challenge, we first compare how well LLL can learn multiple tasks sequentially in a single agent, compared to baselines LL algorithms. This is the standard LL scenario where tasks are learned one after the other and data from previous tasks is not available while learning new tasks." }, { "figure_ref": [], "heading": "Baselines:", "publication_ref": [ "b55", "b52", "b8", "b44", "b29", "b7", "b1", "b5" ], "table_ref": [], "text": "We implemented 8 baselines from the literature. For those that require a task oracle, we (unfairly to us) grant them a perfect task oracle (while our approach uses imperfect GMMC or Mahalanobis task mappers). When possible, we re-implement the baselines to use the same pretrained xception backbone as our approach. This ensures a fair comparison where every approach is granted the same amount of pre-training knowledge and the same feature processing ability. (Zhang et al., 2020)). We use EWC as the representative of this category: one agent learns all 102 tasks in sequence, using EWC machinery to constrain the weights when a new task is learned, to attempt to not destroy performance on previously learned tasks. We also use SI, MAS, LwF, and Online-EWC as baselines of this type.\n(2) Parameter-Isolation methods assign a fixed set of model parameters to a task and avoid over-writing them when new tasks are learned (SUPSUP (Wortsman et al., 2020), PSP (Cheung et al., 2019)). 
We use PSP as the representative of this category: one agent learns all 102 tasks in sequence, generating a new PSP key for each task. The keys help segregate the tasks within the network in an attempt to minimize interference. We used the original PSP implementation, which uses a different backbone than ours. PSP accuracy overall hence may be lower because of this, and thus we focus on trends (decline in accuracy as more tasks are added) as opposed to only absolute accuracy figures. We also used SUPSUP as baseline of this type.\n(3) Rehearsal methods use a buffer containing sampled training data from previous tasks, as an auxiliary to a new task's training set. The buffer can be used either at the end of the task training (iCaRL, ER (Rebuffi et al., 2017b;Robins, 1995)) or during training (GSS, AGEM, AGEM-R, GSS, DER, DERPP (Lopez-Paz & Ranzato, 2017;Chaudhry et al., 2018;Aljundi et al., 2019;Buzzega et al., 2020)). We use ER and as the representative of this category: One agent learns all 102 tasks in sequence.\nAfter learning each task, it keeps a memory buffer with 10 images/class (size hence keeps increasing when new tasks are learned) that will later be used to rehearse old tasks. When learning a new task, the agent learns from all the data for that task, plus rehearses old tasks using the memory buffer." }, { "figure_ref": [ "fig_2" ], "heading": "Accuracy on first task:", "publication_ref": [], "table_ref": [], "text": "To gauge how well our approach is achieving lifelong learning, we plot the accuracy on the first task as we learn from 1 to 102 tasks, in Fig. 4. There is nothing special in our dataset about the first task, except that it is the first one. A good LL system is expected to maintain its accuracy on task 1 even as more subsequent tasks are learned; conversely, catastrophic interference across tasks would rapidly decrease task 1 accuracy with more learned tasks. Overall, our approach maintains the highest accuracy on task 1 over time, and virtually all of the accuracy degradation over time is due to increasing confusion in the task mapper (e.g., curves for Mahalanobis task mapper alone and LLL w/BB w/MAHA are nearly shifted versions of each other). Indeed, once the task is guessed correctly, the corresponding head always performs exactly the same, no matter how many tasks have been learned." }, { "figure_ref": [], "heading": "Normalized accuracy on first 10 tasks:", "publication_ref": [], "table_ref": [], "text": "We compare our method to the baselines on the first 10 tasks, when up to 20 subsequent tasks are learned. A good LL system should be able to maintain accuracy on the first 10 tasks, while at the same time learning new tasks. Because in SKILL-102 different tasks have different levels of difficulty, we normalize accuracy here to focus on degradation with an increasing number of new tasks. For example, the accuracy of our method (LLL w/o BB) when learning a single task is 92.02% for task 1, but only 52.64% for task 6, which is much harder. Here, we define a normalized accuracy as the accuracy divided by the initial accuracy just after a given task was learned (which is also the best ever accuracy obtained for that task). This way, normalized accuracy starts at 100% for all tasks. If it remains near 100% as subsequent tasks are learned, then the approach is doing a good job at minimizing interference across tasks. 
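As a concrete illustration of this normalization (the array layout is our own convention): if acc[i, t] stores the accuracy on task i measured after t+1 tasks have been learned, each task's curve is simply its row divided by the first entry recorded for that task.

```python
import numpy as np

def normalized_accuracy(acc):
    """acc[i, t]: accuracy (%) on task i after t+1 tasks have been learned,
    NaN for t < i (task i not yet learned). Returns row-wise normalization
    so that every task starts at 100% when it is first learned."""
    acc = np.asarray(acc, dtype=float)
    norm = np.full_like(acc, np.nan)
    for i in range(acc.shape[0]):
        initial = acc[i, i]          # accuracy just after task i was learned
        norm[i, i:] = 100.0 * acc[i, i:] / initial
    return norm
```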
Conversely, a rapidly dropping normalized accuracy with an increasing number of subsequent tasks learned indicates that catastrophic interference is happening. Our approach is able to maintain accuracy on task 1 much better than the baselines as more and more tasks are learned: while our approach does suffer some interference, task 1 accuracy remains to within 90% of its initial best even after learning 101 new tasks (for the 4 LLL variants, BB=beneficial biases, MAHA=Mahalanobis Distance task mapper, GMMC=GMMC task mapper). In contrast, the accuracy of EWC, PSP, and several other baselines on task 1 catastrophically degrades to nearly zero after learning just 10 new tasks, even though we granted these methods a perfect task oracle. The best performing baseline, ER, is of the episodic buffer type (a fraction of the training set of each task is retained for later rehearsing while learning new tasks), with an un-bounded buffer that grows by 10 images/class. This methods does incur higher (and increasing) training costs because of the rehearsing (Suppl. Sec. D.) Note how SUPSUP does not experience any degradation on task 1, which is a desirable feature of this approach. However, a drawback is that SUPSUP is not able, even from the beginning, to learn task 1 as well as other methods (50.64% accuracy vs. over 90% for most other approaches). We attribute this to SUPSUP's limited expressivity and capacity to learn using masks over a random backbone, especially for tasks with many classes. Indeed, SUPSUP can perform very well on some other tasks, usually with a smaller number of classes (e.g., 91.93% correct on SVHN, 93.18% on Brazillian Coins, 99.11% on UMNIST Face Dataset; see Supp. Fig. S6).\nOur results in Fig. 5 show that, although not perfect, our approach largely surpasses the baselines in its ability to maintain the accuracy of previously learned tasks, with the exception of SUPSUP, which suffers no degradation (see caption of Fig. 5 for why).\nTask mapper accuracy after learning 1 to 102 tasks: To investigate how well our approach is expected to scale with more tasks, we computed task mapper accuracy on all tasks learned so far, after learning 1, 2, 3, ... 102 tasks. This allows us to evaluate degradation with more tasks that is due to increasing confusion in the task mapper, as opposed to being due to classification difficulty of newly added tasks. Results are shown in Fig. 6: Task mapping accuracy starts at 100% after learning 1 task (all test samples are correctly assigned Figure 5: Normalized accuracy on the first 10 tasks (one per curve color) as up to 20 additional tasks are learned. Our LLL approach is able to maintain high normalized accuracy on the first 10 tasks, while all other baselines except SUPSUP suffer much stronger catastrophic interference. SUPSUP is a special case as there is no interference among successive tasks when a perfect task oracle is available. Hence normalized accuracy for all tasks remains at 100%. However, we will see below that the absolute accuracy of SUPSUP is not as good.\nto that task by Mahalanobis or GMMC), then decreases as more tasks are learned, eventually still achieving 87.1% correct after 102 tasks for MAHA, and 84.94% correct for GMMC. It is important to note that in our approach, any loss in accuracy with more tasks only comes from the task mapper: once the correct head is selected for a test sample, the accuracy of that head remains the same no matter how many heads have been added to the system. 
In contrast, other baseline methods may suffer from catastrophic forgetting for both the task mapper and the classification model when more tasks are learned, as further examined below.\nWhen using GMMC task mapping, the regression line is y = -0.0012x + 0.952, which intercepts zero for T = 800 tasks. Thus, with the distribution of tasks in our dataset, we extrapolate that T = 500 is realistic as is. Since task interference in our system only comes from GMMC, pushing beyond T = 500 might require more than k = 25 GMMC clusters per task, which would increase CPU and communications expenditure. When using Mahalanobis task mapping, the results are similar with an intercept at T = 978, though this approach incurs a slightly higher communications cost (discussed below)." }, { "figure_ref": [ "fig_3" ], "heading": "Absolute accuracy:", "publication_ref": [], "table_ref": [], "text": "The normalized accuracy figures reported so far were designed to factor out variations in individual task difficulty, so as to focus on degradation due to interference among tasks. However, they Figure 6: Task mapper accuracy on all tasks learned so far, as a function of the number of tasks learned, when using Mahalanobis (left) or GMMC (right) task mappers. Our approach is able to maintain good task mapping accuracy as the number of tasks increases. also factor out the potential benefits of BB in raising absolute task accuracy, and they obfuscate the absolute performance of baselines. Hence, we here also study absolute task accuracy.\nWe first plot the absolute accuracy of our LLL approach, separately for each task, in Fig. 7, separating whether BB is used or not, and which task mapper is used. This shows that our SKILL-102 dataset provides a range of difficulty levels for the various tasks, and is quite hard overall. BB improves accuracy on nearly Figure 8: Average absolute accuracy on all tasks learned so far, as a function of the number of tasks learned.\nOur LLL approach is able to maintain higher average accuracy than all baselines. BB provides a small but reliable performance boost (LLL w/BB vs. LLL w/o BB). The sharp decrease in early tasks carries no special meaning except for the fact that tasks 4,8,10 are significantly harder than the other tasks in the 0-10 range, given the particular numbering of tasks in SKILL-102. Note how again SUPSUP has a low accuracy for the very first task. This is because of the nature of its design; indeed, SUPSUP is able to learn some other tasks in our sequence with high accuracy (Suppl. Fig. S5).\nall datasets, at an extra computation cost, detailed below. As promised, BB improves accuracy quite dramatically on some datasets which have a large domain gap compared to ImageNet used to pretrain the backbone (e.g., 31.98 percent point improvement with BB on deepvp that contains dashcam images, 24.92 percent point improvement on CLEVR, 24.5 percent point improvement on Aircraft, 20.75 percent point improvement on SVHN; full details in Suppl. Fig. S5).\nWe then plot the absolute accuracy averaged over all tasks learned so far in Fig. 8. The absolute accuracy for GMMC and Mahalanobis is the same as before. However, now the absolute accuracies for the full LLL models and for the baselines conflate two components: 1) how much interference exists among tasks and 2) the absolute difficulty of each of the tasks learned so far." 
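To make explicit why, in our approach, cross-task interference can only enter through the task-mapping step, the following is a compact sketch of test-time inference (ours; the bank and helper names are illustrative, and the exact point at which a task's beneficial biases are installed is our assumption):

```python
import torch

@torch.no_grad()
def classify(image, backbone, task_mapper, head_bank):
    """Sketch of SKILL test-time inference: frozen backbone -> task mapper ->
    task-specific head retrieved from the bank."""
    feat = backbone(image.unsqueeze(0))                   # 2048-D latent vector
    task_id = task_mapper.predict_task(feat.squeeze(0).cpu().numpy())
    head = head_bank[task_id]                             # last layer (+ BB) for that task
    logits = head(feat)                                   # with BB, features would first be
    return task_id, int(logits.argmax(dim=1))             # re-extracted with the task's biases
```

Because each stored head is never modified after it is learned, its output for a given feature is identical whether 2 or 102 heads are in the bank; only task_mapper.predict_task can become less reliable as more tasks are added.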
}, { "figure_ref": [], "heading": "Computation and communication costs, SKILL metrics:", "publication_ref": [], "table_ref": [], "text": "The baselines are sequential in nature, so trying to implement them using multiple agents does not make sense as it would only add communication costs but not alleviate the sequential nature of these LL approaches. For example, for the EWC baseline, one could learn task 1 on agent A then communicate the whole xception weights to agent B (22.9 M parameters = 91.6 MBytes) plus the diagonal of the Fisher Information matrix (another 22.9 M parameters), then agent B would learn task 2 and communicate its resulting weights and Fisher matrix to agent C, etc. Agent B cannot start learning task 2 before it has received the trained weights and Fisher matrix from agent A because EWC does not provide a mechanism to consolidate across agents. Thus, we first consider one agent that learns all 102 tasks sequentially, with no communication costs.\nTable 1: Analysis of computation expenditures and accuracy for our approach and the baselines, to learn all 102 tasks (with a total of 5,033 classes, 2,041,225 training images) in a single agent. Here we select LLL, no BB, MAHA as reference (1x CPU usage) since it is the fastest approach, yet still has higher accuracy than all baselines. For our approach, MAHA leads to slightly higher accuracy than GMMC, at roughly the same computation cost. All baselines perform worse that our approach, even though they also requires more computation than our approaches that do not use BB. BB adds significatly to our computation cost, but also leads to the best accuracy when used with MAHA. Table . 1 shows the computation expenditures (training time in terms of the number of multiply-accumulate (MAC) operations needed to learn all 102 datasets) for our approach and the baselines. Our approach overall has by far the lowest computation burden when BB is not used, yet all 4 variants of our approach perform better than all baselines. BB increases accuracy but at a significant computation cost: This is because, to compute BB biases, one needs to compute gradients through the entire frozen backbone, even though those gradients will only be used to update biases while the weights remain frozen in the backbone.\nOur approach presents the advantage that it can also be parallelized over multiple agents that each learn their own tasks in their own physical region. All agents then learn their assigned tasks in parallel. Each agent is the \"teacher\" for its assigned tasks, and \"student\" for the other tasks. Then all agents broadcast their shared knowledge to all other agents. As they receive shared knowledge, the students just accumulate it in banks, and update their task mapper. After sharing, all agents know all tasks (and are all identical). As mentioned above, the main source of performance degradation in our approach is in the task mapper, which gets increasingly confused at T increases.\nFor our baselines, we are not aware of a way to parallelize their operation, except that we were able to create a modified version of SUPSUP that works on several parallel processors. In our modified SUPSUP, each agent learns a mask for each of its tasks, then communicates its masks to all other agents. At test time, we (unfairly to us) grant it a perfect task oracle, as our GPUs did not have enough memory to use the built-in task mapping approach of SUPSUP, given our 102 tasks and 5,033 classes (this would theoretically require 1.02 TB of GPU memory).\nTable . 
2 shows the computation and networking expenditures for our approach and our modified SUPSUP to learn all tasks in the SKILL-102 dataset. Because some algorithms run on GPU (e.g., xception backbone) but others on CPU (e.g., GMMC training), and because our tasks use datasets of different sizes, we measure everything in terms of MACs (multiply-accumulate operations, which are implemented as one atomic instruction on most hardware). To measure MACs for each component of our framework, we used a combination of empirically measured, framework-provided (e.g., pyTorch can compute MACs from the specification of all layers in a network), or sniffed (installing a hook in some algorithm that increments a counter each time a MAC is executed). To translate communication costs to MACs, we assume a nominal cost of α = 1, 000 MACs to transmit one byte of data. This is a hyperparameter in our results that can be changed based on deployment characteristics (e.g., wireless vs. wired network). The amount of data shared per task for Our results in Table . 2 show:\n1. Our approach has very low parallelization overhead, which leads to almost perfect speedup > 0.99N for all variants. Indeed, teachers just learn their task normally, plus a small overhead to train GMMC on their own tasks, when GMMC is used. Communications are less than 2 MBytes per task (Suppl. Sec. G). Students either do nothing (just accumulate received knowledge in a bank) or update their Mahalanobis task mapper.\n2. The baselines have comparatively much higher training cost, yet their performance is poor. Performance of episodic buffer / rehearsing methods might be improved further by increasing buffer size, but note that in the limit (keeping all training data for future rehearsing), this gives rise to a > 5, 000× increase in training time (Suppl. Sec. D)." }, { "figure_ref": [], "heading": "Shared Knowledge Accumulation, Reuse and Boost", "publication_ref": [], "table_ref": [], "text": "As our system learns many tasks, it may occur that some tasks overlap with others, i.e., they may share similar images and/or class labels. Here, we explore two approaches to handle such overlap." }, { "figure_ref": [], "heading": "Corrective approach to task overlap/synergy", "publication_ref": [ "b39" ], "table_ref": [], "text": "Since our LLL learners can learn a large number of tasks while solving SKILL problems, some synergy can occur across tasks if one is able to detect when two classes from two different tasks are equivalent, as shown in Fig. 9. We implemented a method to compare the semantic distance of the predicted class name and the actual class name. Originally, after the GMMC infers the task that a test image may come from, we would immediately consider the image as misclassified if the predicted task is wrong. With the consideration of semantic similarity between class names, we will now always load the prediction head corresponding to the predicted task given by GMMC and use it to infer the class name. If the class was guessed incorrectly, but, in the end, the final class name is equivalent to the correct one, then we can declare success on that particular test image (Fig. 9). To obtain the pairwise similarity, we constructed a similarity matrix that stores the semantic distance, measured by the cosine similarity of word embeddings, for all the class names. Those embeddings were obtained from CLIP's (Radford et al., 2021) text encoder based on GPT-2. 
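A sketch of how such a class-name similarity matrix could be computed with the open-source CLIP package is given below; the exact CLIP variant and prompt format used by the authors are not specified, so those choices are assumptions on our part.

```python
import torch
import clip  # https://github.com/openai/CLIP

@torch.no_grad()
def class_similarity_matrix(class_names, model_name="ViT-B/32", device="cpu"):
    """Cosine similarity between CLIP text embeddings of all class names."""
    model, _ = clip.load(model_name, device=device)
    tokens = clip.tokenize(class_names).to(device)
    emb = model.encode_text(tokens).float()
    emb = emb / emb.norm(dim=-1, keepdim=True)       # unit-normalize
    return emb @ emb.T                               # (C x C) cosine similarities

# Example: treat "bike" and "bicycle" as equivalent labels if similarity > 0.9
names = ["bike", "bicycle", "flower"]
S = class_similarity_matrix(names)
equivalent = S > 0.9
```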
If the similarity between the predicted class name and the actual class name is greater than a threshold (empirically chosen for now), then we declare it a correct prediction.
Figure 9: Left: similar classes with a cosine similarity in the CLIP embedding greater than 0.90. Right: similar classes with a cosine similarity greater than 0.95. This can help correct spurious errors where, for example, a test image from class "bike" from the Stanford_Online_Products dataset could also be considered correctly classified if the system output were "bicycle" from the Sketches dataset.
As our 102-task dataset contains 5,033 object classes, the full similarity matrix is triangular 5,033 x 5,033 (too large to display here).
The approach yields a small but consistent improvement in accuracy (Fig. 10). This is one way in which we can handle possible overlap between tasks, which may inevitably arise when large numbers of tasks are learned."
}, {
"figure_ref": [],
"heading": "Learning approach to task overlap/synergy",
"publication_ref": [],
"table_ref": [],
"text": "The ability to reuse learned knowledge from old tasks to boost the learning speed and accuracy of new tasks is a potential desirable feature of LL algorithms. Here, this might be achieved if, when learning a new task, an LLL agent could "borrow" the knowledge of old tasks, from not only itself but also the shared knowledge from any other agents.
One important design feature of our LLL agents is that they can share partial heads across tasks: Our heads are a single layer with 2,048 inputs (from the xception backbone) and c outputs for a task with c classes. Thus, each of the c output neurons is connected to the 2,048 inputs, but there are no lateral connections. This means that we can consider each of the c output neurons individually as evidence provider for their associated class (remember the analogy to "grandmother cells" in Introduction). We can then cherry-pick sets of 2,048 weights corresponding to individual classes of previously learned tasks, and use them as initialization for some similar classes in new tasks to be learned. As we show below, this greatly accelerates learning of the similar new classes, compared to starting with randomized initial weights, and also yields higher accuracy on these new classes.
Figure 10: Correcting spurious errors by realizing when two distinct classes from two tasks actually are the same thing. The approach provides a small but consistent improvement in accuracy over baseline (which declares failure as soon as task mapping failed), here shown on 15 datasets that have some overlap.
1) New task is a combination of old tasks: To validate our idea, a new learning paradigm is proposed to use previously learned weights when a new task contains classes that were already learned previously. This experiment considers two datasets and two sets of weights representing the old knowledge, and a new dataset that contains all classes from both. Simply normalizing and concatenating the linear weights leads to poor performance. Hence, instead, we normalize the weights during training by their p-norm, and concatenate the normalized weights as the new task's weights. The experiment was conducted over 190 combinations of 2 datasets chosen from 20 datasets, and the average results show that there is a very small accuracy loss initially (epoch 0). After a few extra training epochs, we reach a higher accuracy than training new weights from scratch (random initialization; Fig. 11)."
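Before the formal statement in the next subsection, here is a minimal sketch (our class and function names) of this normalize-and-concatenate recipe: each class keeps its own 2048-D weight row, rows are normalized by their infinity norm in the forward pass (the norm choice follows the discussion below), and a head for a new task is assembled by stacking previously learned, normalized rows.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedHead(nn.Module):
    """Linear head whose rows are normalized by their infinity norm in the
    forward pass, so per-class weight vectors remain directly transferable."""
    def __init__(self, in_dim=2048, num_classes=10):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_dim) * 0.01)

    def forward(self, x):
        w = self.weight / self.weight.abs().max(dim=1, keepdim=True).values
        return F.linear(x, w)                    # y_hat = W' x

def assemble_head(class_rows):
    """Build a head for a new task from previously learned, normalized
    per-class rows (a list of 2048-D tensors), one row per reused class."""
    head = NormalizedHead(in_dim=class_rows[0].numel(), num_classes=len(class_rows))
    with torch.no_grad():
        head.weight.copy_(torch.stack(class_rows))
    return head
```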
}, {
"figure_ref": ["fig_5"],
"heading": "The mathematical version.",
"publication_ref": [],
"table_ref": ["tab_5"],
"text": "During training, a constraint is added. Let $W_1$ be the full set of $2048 \times c$ weights of the first dataset, and $W_2$ be the weights of the second dataset, where
$$W_1 = \begin{bmatrix} \cdots\, w^1_1 \,\cdots \\ \vdots \\ \cdots\, w^1_n \,\cdots \end{bmatrix} \quad \text{and} \quad W_2 = \begin{bmatrix} \cdots\, w^2_1 \,\cdots \\ \vdots \\ \cdots\, w^2_n \,\cdots \end{bmatrix}.$$
A normal linear layer's forward path during training is $\hat{y} = Wx$. Define $W'$ as the matrix with rows $w'_1, \ldots, w'_n$, where $w'_i = w_i / \mathrm{pnorm}(w_i)$. Hence, during training, the linear layer's forward path is $\hat{y} = W'x$, and concatenate($W_1'$, $W_2'$) was used as the weights for the new combined task.
Since the weights are normalized class-wise and not task-wise, this method can be used on any combination of previously learned classes. For example, for 10 tasks containing 100 classes $c_1 \ldots c_{100}$ and a new task containing $c_1, c_3, c_{10}, c_{20}$, we can simply find the corresponding $w'_1, w'_3, w'_{10}, w'_{20}$, and concatenate them together. Choice of p: We find that using the 2-norm causes the classifier to converge to a state where all weight vectors have the same magnitude, which causes an accuracy drop for the old task. Hence, we choose to use the infinity norm, which is still modulated by the weight magnitudes, and is still easy to transfer.
2) New task is different but similar to old tasks. In the previous setting, we assumed the new task classes are a combination of different old task classes. In a more general situation, the new task classes are all new classes that no other agent has ever learned before, but we could still borrow some learned knowledge from similar learned classes. For instance, as shown in Fig. 9, the knowledge of classes shown on top of the figure may be helpful to learn the new classes shown at the bottom.
Figure 11: Learning speed for a given object class when the corresponding weights are initialized randomly (orange) vs. from previously learned weights of a similar object class found in one of the previously learned tasks (blue), averaged for 190 combinations of two previously learned tasks. In this example, best accuracy is already reached after just 1 to 2 training epochs in the blue curve. In contrast, it takes up to 30 epochs to train from random initialization, with a final accuracy in the orange curve that is still lower than the blue curve. This approach hence leads to significant learning speedup when tasks contain some similar classes.
We conduct four different experiments (for 4 pairs of datasets that share some related classes) to show the knowledge boost when we learn a new task. We first check whether a learned old task shares similar knowledge with the new one. For instance, before we learn the MIT indoor scenes dataset, we find that the House Room Image Dataset contains classes that are similar to the new classes, in the CLIP embedding space. So we match each class from the MIT indoor scenes dataset to the previously learned classes, which in this case come from the House Room Image Dataset. If the class similarity is larger than a threshold, we treat it as a matched class, and then use the similar old class weights to initialize the weights of the new class. If a new class was not matched with old classes, we use random initialization. We also conduct a corresponding controlled experiment by using random initialization for all new classes. The results of all 4 experiments are shown in Table 3.
Table 3: Boosted LLL learning when previously learned weights from similar classes can be used to initialize learning of new classes.
We repeat the experiment with either learning from all images in the training set of the new task, or only 10, 5, or 3 images per class. Overall, re-using previously learned weights of similar classes boosts accuracy, usually (but not always) more so when the new task is only learned from a few exemplars (which is much faster than learning from scratch from the whole dataset).\nIn a more general learning scenario, the new task classes may correspond to similar but not necessarily identical classes in different old tasks. For example, having learned about SUVs may help learn about vans more quickly. Here we conduct two new experiments (Fig. 12). In EXP-1, the new task is sketch image classification, and the classification weights of each class are initialized from a learned old class that " }, { "figure_ref": [], "heading": "Further boost with Head2Toe", "publication_ref": [ "b17", "b17" ], "table_ref": [], "text": "A possibly complementary approach to BB to address domain gaps is Head2Toe (Evci et al., 2022), where the last layer can now directly draw from potentially any of the previous layers in the backbone. This has been shown to help alleviate domain gaps, as some of the lower-level features in the backbone may be useful to solve tasks with a big gap, even though the top-level backbone features may not. However, Head2Toe has a very high computation cost to select which layers should connect to the head, which is why we have not used it in our main results. Here, we explore how that cost of selection of the most appropriate layers to connect to the head for a given task can be eliminated by re-using the computations already expended for BB: Intuitively, layers which have large absolute BB magnitude may also be the most useful to connect to the head.\nCompared to the conventional Head2Toe (Evci et al., 2022) with two-stage training (first, select which layers will connect to the head, then train those connections), our new BB+H2T uses the biases that have been previously trained and stored in the BB network for feature selection. Specifically, we first concatenated all the biases in the BB network and selected the top 1% largest biases. Then, we picked the feature maps corresponding to the selected indices, average pooled them and flattened them to 8,192-dimensional vectors. After that, we concatenated all flattened feature vectors along with the logits of the last layer (after pooling layer, before softmax) in the BB network. Finally, we trained the concatenated vector with Adam optimizer, 0.001 learning rate, and 100 epochs. This approach, when combined with BB and MAHA, improved performance averaged over all tasks by 0.78% (when a perfect task mapper is available; or by 0.56% when using MAHA)." }, { "figure_ref": [], "heading": "Discussion and Future Works", "publication_ref": [ "b8", "b43" ], "table_ref": [], "text": "We have proposed a new lightweight approach to lifelong learning that outperforms all baselines tested, and also can benefit almost perfectly from parallelization. We tested the approach on a new SKILL-102 benchmark dataset, which we believe is the largest non-synthetized lifelong learning challenge dataset to date. While many previous efforts have employed many tasks, those were usually synthesized, e.g., as permutations over a single base dataset (e.g., 50 permuted-MNIST tasks in Cheung et al. (2019)). SKILL-102 contains novel real data in each task, with large inter-task variance, which is a better representative of realistic test scenarios. 
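The BB-guided feature selection for this Head2Toe-style readout can be sketched as follows (our names; the layer bookkeeping and the pooled spatial size are simplifications of the procedure described above): keep the units with the largest stored BB magnitudes, pool their feature maps, and concatenate them with the final logits before training a linear readout.

```python
import torch
import torch.nn.functional as F

def select_bb_units(bb_biases, frac=0.01):
    """bb_biases: 1-D tensor of all stored BB values (one per backbone neuron).
    Returns indices of the top `frac` fraction with largest magnitude."""
    k = max(1, int(frac * bb_biases.numel()))
    return torch.topk(bb_biases.abs(), k).indices

def h2t_features(feature_maps, logits, selected, pool_size=4):
    """feature_maps: (N, C, H, W) activations whose channel order is assumed to
    align with the BB indexing; logits: (N, num_classes) from the BB network.
    Pools the selected channels and concatenates them with the logits."""
    picked = feature_maps[:, selected]                       # (N, k, H, W)
    pooled = F.adaptive_avg_pool2d(picked, pool_size)        # (N, k, 4, 4)
    return torch.cat([pooled.flatten(1), logits], dim=1)     # input to a linear readout
```

The resulting concatenated vector would then be trained with a standard linear classifier (the text reports Adam, learning rate 0.001, 100 epochs).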
Our proposed lightweight LL points to a new direction in LL research, as we find that one can simply combine lightweight task-specific weights (the head) with task-agnostic knowledge held in a frozen backbone that is rapidly adapted to each new task by a compact BB module. Our results show how this lightweight design is better able to handle large-scale lifelong learning, and also solves our SKILL challenge very well.
We credit our good performance on both sequential lifelong learning and the SKILL challenge to our particular lightweight lifelong learner design: a fixed backbone, which represents task-agnostic knowledge shared among agents and minimizes the complexity of the task-specific parameters (the head); Beneficial Biases, which shift the backbone on demand, with very compact parameters, to bridge possibly large domain gaps for each new task; and a GMMC/MAHA global task anchor for each learned task, which represents the tasks in the common task-agnostic latent space of all agents, is easy to share and consolidate, and eliminates the need for a task oracle at test time. Our results show that the combination of these three components makes our LLL agents work well.
Our approach uses a pretrained backbone to represent task-agnostic knowledge, which our results show is a very effective strategy. For fair comparison, we also use the same pretrained backbone as the initialization for the baselines (except PSP and SUPSUP; see above). However, our fixed backbone design often cannot handle large domain gaps between new tasks and ImageNet. This is the reason why we proposed BB, which mitigates the domain gap by shifting the fixed parameters towards each new task with compact biases. Similar to other parameter-isolation methods, our model structure grows on demand (though slowly) with the number of tasks (we add a new head per task, while they add new masks, keys, etc.). Rehearsal-based baselines (e.g., ER) also grow, by accumulating more rehearsing exemplars over time. While some baselines do not grow (e.g., EWC), they also perform very poorly on our challenging SKILL-102 dataset.
To further speed up our method with BB, we could apply BB to only a subset of layers. For instance, if we use BB only on the last half of the layers in the backbone, training time would be roughly halved. In future experiments, we will test whether this still gives rise to a significant accuracy benefit.
Currently, we use the CLIP embedding space to match a new class with learned old classes, which uses only language knowledge (class labels). As future work, we will use GMMC as a class-matching mechanism to also exploit visual semantic information. Specifically, when an agent learns a new class, the agent will collect a few shots (e.g., 10 images) of the new class and then use the GMMC mapper (trained on all previous tasks) to decide, with a threshold, whether these images belong to a previously learned task. If most of the images are matched to a learned task, we can then summon the shared head of that task to classify them, now to obtain a possible match with individual previously learned classes. If most of the images are classified into one previously learned class, we can use the weights of that class to initialize the new class weights, similar to what we have done in Sec. 7.
A good task mapper is essential in our approach if one is to forego the use of a task oracle.
Thankfully, our results show that task mapping can be achieved with high accuracy even with a simple model like GMMC (over 84% correct for 102 tasks). Indeed, in our SKILL challenge, the task mapper is only solving a 102-way classification problem at the granularity of tasks, vs. a 5,033-way full SKILL classification challenge. Here, we focused on GMMC and MAHA, but many other alternatives could be investigated in future work. Our choice of GMMC was based on previous research that compared it to several other techniques, including self-organizing maps and multilayer perceptrons (Rios & Itti, 2020)." }, { "figure_ref": [ "fig_7" ], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We have proposed a new framework for shared-knowledge, parallelized LL. On a new, very challenging SKILL-102 dataset, we find that this approach works much better than previously SOTA baselines, and is much faster. Scaling to > 500 difficult tasks like the ones in our new SKILL-102 dataset seems achievable with the current implementation.\nBroader impacts statement: We believe that LLL will spur a new generation of distributed LL systems, as it makes LL more accessible to edge systems more parallelizable. Thus, broader impacts are expected to be positive in enabling more lightweight devices to learn at the edge and to share what they have learned. -This is for γ = 0.04 but performance is low, so using a higher γ is warranted for the episodic buffer approach. This is very costly, though. In the limit of retaining all images, which would give best performance, the training time of this approach is 102 + 5050 = 5152 times the time it takes to learn one task. So, while the single-agent will require anywhere between 304 × T and 5152 × T to learn 102 tasks sequentially, our approach will learn all 102 tasks in parallel during just T .\nAdditional details used for our computations are in Fig. S3." }, { "figure_ref": [], "heading": "E Summary of our new SKILL-102 for image classification", "publication_ref": [], "table_ref": [], "text": "Fig. S5 shows a summary of 102 datasets we are using along with the accuracy of all our methods. Note that TM stands for Task Mapper. The red text indicates datasets with large domain gap which were mentioned in Sec. 6, the blue text indicates datasets with poor GMMC accuracy which are further examined below. Fig. S6 shows the baselines performance on SKILL-102." }, { "figure_ref": [ "fig_8", "fig_8", "fig_8" ], "heading": "F Cases of low accuracy in GMMC", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze in details the failures of GMMC on three datasets: Office Home Art, Dragon Ball, and Malacca Historical Buildings. In certain cases, several datasets may share a common characteristic, such as all of them are anime pictures (e.g. Dragon Ball, Pokemon, and One-Piece). GMMC may capture the tasks' characteristics as animation but fail to further distinguish different tasks. On the other side, Mahalanobis focus on the characteristics classwise, witch captures the difference among classes (e.g., Wukong vs. Abra characters in Fig. S4-a) and hence is able to distinguish them.\nAnother case is that two tasks may share similar objects (e.g., Office Home Art, Office Home, and Stanford Online Products; Fig. S4-b). Although represented in different tasks, these are the same types of objects in the real life. 
We address these GMMC confusions with our proposed \"corrective approach\" that would declare correct classification for equivalent labels belonging to different tasks.\nOther cases may include that one task is too general; for example, Watermark non Watermark includes a large variety of images with or without a watermark which may also confuse GMMC as many similar images are present in other datasets (Fig. S4-c)." }, { "figure_ref": [], "heading": "G Amount of data shared by LLL", "publication_ref": [ "b17", "b35" ], "table_ref": [ "tab_6" ], "text": "The analysis below includes 2 options not exercised in the main text of this paper:\n• Head2Toe: If the input domain encountered by an agent is very different than what the frozen backbone was trained on, sharing only the last layer(s) + BPN biases may not always work well, because the features in the backbone are not able to well represent the new domain. Our backbone is pretrained on ImageNet, which is appropriate for many image classification and visually-guided RL tasks in the natural world. However, the latent features may not be well suited for highly artificial worlds. This was recently addressed by (Evci et al., 2022), who showed that this problem can be alleviated using a last layer that connects to several intermediary layers, or even to every layer in the network, as opposed to only the penultimate layer. Hence, instead of sharing the last layer, we may share a so-called Head2toe layer when a large domain shift is encountered. Note that AR will also be used in this case as it is another way to counter large domain shifts: the AR pattern essentially recasts an input from a very different domain back into the ImageNet domain, then allowing the frozen backbone to extract rich and meaningful features in that domain. Also see Parisi et al. (2022) for ideas similar to Head2toe, with applications in RL.\n• Adversarial reprogramming (AR) (Elsayed et al., 2018): Adversarial reprogramming is quite similar in spirit to BB, with the main difference being that it operates in the input (image) space as opposed to BB operating in the activation space. In adversarial reprogramming, one computes a single noise pattern for each task. This pattern is then added to inputs for a new task and fed through the original network. The original network processes the combined input + noise and generates an output, which is then remapped onto the desired output domain. Unfortunately, the CPU cost of this approach is prohibitive with respect to 0.5N speedup.\nWe denote the number of BB biases by N in what follows (for xception, N = 17, 472). If Head2toe connects to the same feature maps as BB, then the number of weights is N × c for c output classes. We assume that each task is modeled with k GMMC clusters (k = 25 currently), and each is represented by a 2048D mean and 2048D diagonal covariance. We denote by 4 the number of bytes per floating point number.\nFor a classification task with c classes: An agent receives an image as input and produces a vector of c output values (on SKILL-102, c is 49.34 on average), where the highest output value is the most likely image class for the input image (Table S1). " }, { "figure_ref": [ "fig_3" ], "heading": "H GMMC visual explanation", "publication_ref": [], "table_ref": [], "text": "A visual explanation of how GMMC works in LLL agents is shown in Fig. S7." 
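Using the sizes quoted in Appendix G above (2048-D features, N = 17,472 BB biases for xception, k = 25 GMMC clusters with a 2048-D mean and 2048-D diagonal covariance each, 4 bytes per float, and a head of 2048 × c weights for a task with c classes), a rough per-task sharing budget can be computed as follows; this sketch ignores small items such as the head's bias vector and the GMMC mixing weights.

```python
BYTES_PER_FLOAT = 4
FEAT_DIM = 2048
N_BB = 17_472          # BB biases in xception (count given in Appendix G)
K_CLUSTERS = 25        # GMMC clusters per task

def shared_bytes_per_task(num_classes):
    """Approximate payload an agent broadcasts after learning one task:
    head weights, BB biases, and GMMC means + diagonal covariances."""
    head = FEAT_DIM * num_classes * BYTES_PER_FLOAT
    bb = N_BB * BYTES_PER_FLOAT
    gmmc = K_CLUSTERS * (2 * FEAT_DIM) * BYTES_PER_FLOAT
    return head + bb + gmmc

# Average SKILL-102 task (~49 classes): roughly 0.9 MB, consistent with the
# "< 2 MBytes per task" figure quoted in the main text.
print(shared_bytes_per_task(49) / 1e6, "MB")
```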
}, { "figure_ref": [], "heading": "I Pairs of similar classes according to CLIP", "publication_ref": [], "table_ref": [ "tab_4", "tab_5", "tab_7" ], "text": "Table S2, Table S3, Table S4, and Table S5 show examples of pairs of similar classes according to CLIP embedding. The first and the second column are the names of similar class pairs from two different tasks (i.e " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement: This work was supported by DARPA (HR00112190134), C-BRIC (one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA), and the Army Research Office (W911NF2020053). The authors affirm that the views expressed herein are solely their own, and do not represent the views of the United States government or any agency thereof." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A Dataset subsampling details", "publication_ref": [], "table_ref": [], "text": "Our SKILL-102 dataset comprises 102 distinct tasks that were obtained from previously published datasets. SKILL-102 is freely available for download on the project website: https://github.com/gyhandy/ Shared-Knowledge-Lifelong-Learning.\nHere, we subsampled the source datasets slightly, mainly to allow some of the baselines to converge in a reasonable amount of time. For dataset sampling, the following rules were used:\n• For iNaturalist Insecta, since it contains a lot of classes, 500 classes were randomly sampled.\n• For all other tasks, all classes are kept.\n• For all tasks, round(54000/c) training images and round(6000/c) validation images and round(6000/c) test images are used for each class. If a class does not contain enough images, then all images for that class are used.\n• The exact datasets as we used them in our experiments will be made available online after publication, to allow other researchers to reproduce (or beat!) our results.\nThe sequence of datasets and number of images in each dataset are shown in Fig. S5." }, { "figure_ref": [], "heading": "B GMMC number of clusters", "publication_ref": [], "table_ref": [], "text": "Fig. S1 shows the GMMC performance with different numbers of clusters.\nFigure S1: On a small subset of tasks, we found that k = 25 GMMC clusters provided the best compromise between generalization and overfitting." }, { "figure_ref": [], "heading": "C Mahalanobis training MACs", "publication_ref": [], "table_ref": [], "text": "The slope of MACs/image is higher until the number of training samples reaches 4,000. After that, the slope does not change. If we use 5 images per class to train, then the number of training samples would reach 4,000 after task 12. So for the majority of the tasks, the average MACs per image for training the Mahalanobis distance is around 250k." }, { "figure_ref": [], "heading": "D CPU analysis", "publication_ref": [], "table_ref": [], "text": "We compute everything in terms of MACs/image processed. There are a few caveats:\n• Data sharing does not occur per training image, but rather per task (e.g., share 25 GMMC cluster means+diagonal covariances per task). Hence we first compute communication bytes/task and then convert that to \"MACs equivalent\" by assuming that sharing 1 byte takes the equivalent of 1,000 MACs. This value is a hyper-parameter than can be tuned depending on network type. 
Over wired Ethernet, it corresponds to 1.5 million MACs per packet (with MTU of 1500 bytes).\n• Mahalanobis training time increases with the number of tasks received to date, as shown in Fig. S2.\n• ER training increases over time as more tasks are added: " }, { "figure_ref": [], "heading": "J Performance on Visual Domain Decathlon", "publication_ref": [ "b22" ], "table_ref": [], "text": "We also perform our methods on a well-known benchmark Visual Domain Decathlon (Ke et al., 2020) in Fig. S8. The baselines and our method implementations are the same as the experiments in SKILL-102 dataset.\nFigure S8: Average absolute accuracy on 10 Visual Domain Decathlon tasks learned so far, as a function of the number of tasks learned." } ]
2023-05-24
[ { "authors": "Rahaf Aljundi; Francesca Babiloni; Mohamed Elhoseiny; Marcus Rohrbach; Tinne Tuytelaars", "journal": "", "ref_id": "b0", "title": "Memory aware synapses: Learning what (not) to forget", "year": "2018" }, { "authors": "Rahaf Aljundi; Min Lin; Baptiste Goujaud; Yoshua Bengio", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Gradient based sample selection for online continual learning", "year": "2019" }, { "authors": "David Ari S Benjamin; Konrad Rolnick; Kording", "journal": "", "ref_id": "b2", "title": "Measuring and regularizing networks in function space", "year": "2018" }, { "authors": "Keith Bonawitz; Hubert Eichner; Wolfgang Grieskamp; Dzmitry Huba; Alex Ingerman; Vladimir Ivanov; Chloe Kiddon; Jakub Konečnỳ; Stefano Mazzocchi; Brendan Mcmahan", "journal": "Proceedings of machine learning and systems", "ref_id": "b3", "title": "Towards federated learning at scale: System design", "year": "2019" }, { "authors": "S Jeffrey; Bowers", "journal": "", "ref_id": "b4", "title": "Grandmother cells and localist representations: a review of current thinking", "year": "2017" }, { "authors": "Pietro Buzzega; Matteo Boschini; Angelo Porrello; Davide Abati; Simone Calderara", "journal": "Advances in neural information processing systems", "ref_id": "b5", "title": "Dark experience for general continual learning: a strong, simple baseline", "year": "2020" }, { "authors": "Rich Caruana", "journal": "Machine learning", "ref_id": "b6", "title": "Multitask learning", "year": "1997" }, { "authors": "Arslan Chaudhry; Marc'aurelio Ranzato; Marcus Rohrbach; Mohamed Elhoseiny", "journal": "", "ref_id": "b7", "title": "Efficient lifelong learning with a-gem", "year": "2018" }, { "authors": "Brian Cheung; Alex Terekhov; Yubei Chen; Pulkit Agrawal; Bruno Olshausen", "journal": "", "ref_id": "b8", "title": "Superposition of many models into one", "year": "2019" }, { "authors": "François Chollet", "journal": "", "ref_id": "b9", "title": "Xception: Deep learning with depthwise separable convolutions", "year": "2017" }, { "authors": "Michael Crawshaw", "journal": "", "ref_id": "b10", "title": "Multi-task learning with deep neural networks: A survey", "year": "2020" }, { "authors": "Matthias De Lange; Rahaf Aljundi; Marc Masana; Sarah Parisot; Xu Jia; Aleš Leonardis; Gregory Slabaugh; Tinne Tuytelaars", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b11", "title": "A continual learning survey: Defying forgetting in classification tasks", "year": "2021" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Ieee", "ref_id": "b12", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Jeremy L Sachin S Deshmukh; James J Johnson; Knierim", "journal": "Hippocampus", "ref_id": "b13", "title": "Perirhinal cortex represents nonspatial, but not spatial, information in rats foraging in the presence of objects: comparison with lateral entorhinal cortex", "year": "2012" }, { "authors": "Barbara Dosher; Zhong-Lin Lu", "journal": "Annual review of vision science", "ref_id": "b14", "title": "Visual perceptual learning and models", "year": "2017" }, { "authors": "Mathias Eitz; James Hays; Marc Alexa", "journal": "ACM Transactions on graphics (TOG)", "ref_id": "b15", "title": "How do humans sketch objects", "year": "2012" }, { "authors": "Ian Gamaleldin F Elsayed; Jascha Goodfellow; Sohl-Dickstein", "journal": "", "ref_id": "b16", "title": "Adversarial 
reprogramming of neural networks", "year": "2018" }, { "authors": "Utku Evci; Vincent Dumoulin; Hugo Larochelle; Michael C Mozer", "journal": "", "ref_id": "b17", "title": "Head2toe: Utilizing intermediate representations for better transfer learning", "year": "2022" }, { "authors": "M Robert; French", "journal": "Trends in cognitive sciences", "ref_id": "b18", "title": "Catastrophic forgetting in connectionist networks", "year": "1999" }, { "authors": " Robert L Goldstone", "journal": "Annual review of psychology", "ref_id": "b19", "title": "Perceptual learning", "year": "1998" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "Communications of the ACM", "ref_id": "b20", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "Peter Kairouz; Brendan Mcmahan; Brendan Avent; Aurélien Bellet; Mehdi Bennis; Nitin Arjun; Kallista Bhagoji; Zachary Bonawitz; Graham Charles; Rachel Cormode; Cummings", "journal": "Foundations and Trends® in Machine Learning", "ref_id": "b21", "title": "Advances and open problems in federated learning", "year": "2021" }, { "authors": "Zixuan Ke; Bing Liu; Xingchang Huang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Continual learning of a mixed sequence of similar and dissimilar tasks", "year": "2020" }, { "authors": "James Kirkpatrick; Razvan Pascanu; Neil Rabinowitz; Joel Veness; Guillaume Desjardins; Andrei A Rusu; Kieran Milan; John Quan; Tiago Ramalho; Agnieszka Grabska-Barwinska", "journal": "Proceedings of the national academy of sciences", "ref_id": "b23", "title": "Overcoming catastrophic forgetting in neural networks", "year": "2017" }, { "authors": "Jonathan Krause; Michael Stark; Jia Deng; Li Fei-Fei", "journal": "", "ref_id": "b24", "title": "3d object representations for fine-grained categorization", "year": "2013" }, { "authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b25", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Kimin Lee; Kibok Lee; Honglak Lee; Jinwoo Shin", "journal": "Advances in neural information processing systems", "ref_id": "b26", "title": "A simple unified framework for detecting outof-distribution samples and adversarial attacks", "year": "2018" }, { "authors": "Tian Li; Anit Kumar Sahu; Ameet Talwalkar; Virginia Smith", "journal": "IEEE Signal Processing Magazine", "ref_id": "b27", "title": "Federated learning: Challenges, methods, and future directions", "year": "2020" }, { "authors": "Zhizhong Li; Derek Hoiem", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b28", "title": "Learning without forgetting", "year": "2017" }, { "authors": "David Lopez; - Paz; Marc'aurelio Ranzato", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Gradient episodic memory for continual learning", "year": "2017" }, { "authors": "Marc Masana; Xialei Liu; Bartłomiej Twardowski; Mikel Menta; Andrew D Bagdanov; Joost Van De Weijer", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b30", "title": "Class-incremental learning: survey and performance evaluation on image classification", "year": "2022" }, { "authors": "Brendan Mcmahan; Eider Moore; Daniel Ramage; Seth Hampson; Blaise Aguera Y Arcas", "journal": "PMLR", "ref_id": "b31", "title": "Communication-efficient learning of deep networks from 
decentralized data", "year": "2017" }, { "authors": "Maria-Elena Nilsback; Andrew Zisserman", "journal": "", "ref_id": "b32", "title": "Automated flower classification over a large number of classes", "year": "2008-12" }, { "authors": "Maria-Elena Nilsback; Andrew Zisserman", "journal": "IEEE", "ref_id": "b33", "title": "Automated flower classification over a large number of classes", "year": "2008" }, { "authors": "Ronald German I Parisi; Jose L Kemker; Christopher Part; Stefan Kanan; Wermter", "journal": "Neural Networks", "ref_id": "b34", "title": "Continual lifelong learning with neural networks: A review", "year": "2019" }, { "authors": "Simone Parisi; Aravind Rajeswaran; Senthil Purushwalkam; Abhinav Gupta", "journal": "", "ref_id": "b35", "title": "The unsurprising effectiveness of pre-trained vision models for control", "year": "2022" }, { "authors": "Ariadna Quattoni; Antonio Torralba", "journal": "", "ref_id": "b36", "title": "Recognizing indoor scenes", "year": "2009" }, { "authors": "Leila R Quian Quiroga; Gabriel Reddy; Christof Kreiman; Itzhak Koch; Fried", "journal": "Nature", "ref_id": "b37", "title": "Invariant visual representation by single neurons in the human brain", "year": "2005" }, { "authors": "Rodrigo Quian; Quiroga ", "journal": "BenBella Books", "ref_id": "b38", "title": "The forgetting machine: Memory, perception, and the Jennifer Aniston neuron", "year": "2017" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b39", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Hakan Sylvestre-Alvise Rebuffi; Andrea Bilen; Vedaldi", "journal": "", "ref_id": "b40", "title": "Learning multiple visual domains with residual adapters", "year": "2017" }, { "authors": " Sylvestre-Alvise; Alexander Rebuffi; Georg Kolesnikov; Christoph H Sperl; Lampert", "journal": "", "ref_id": "b41", "title": "icarl: Incremental classifier and representation learning", "year": "2017" }, { "authors": "Amanda Rios; Laurent Itti", "journal": "", "ref_id": "b42", "title": "Closed-loop memory gan for continual learning", "year": "2018" }, { "authors": "Amanda Rios; Laurent Itti", "journal": "IEEE", "ref_id": "b43", "title": "Lifelong learning without a task oracle", "year": "2020" }, { "authors": "Anthony Robins", "journal": "Connection Science", "ref_id": "b44", "title": "Catastrophic forgetting, rehearsal and pseudorehearsal", "year": "1995" }, { "authors": "Sebastian Ruder", "journal": "", "ref_id": "b45", "title": "An overview of multi-task learning in deep neural networks", "year": "2017" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev Satheesh; Sean Ma; Zhiheng Huang; Andrej Karpathy; Aditya Khosla; Michael Bernstein", "journal": "", "ref_id": "b46", "title": "Imagenet large scale visual recognition challenge", "year": "2014" }, { "authors": "Babak Saleh; Ahmed Elgammal", "journal": "", "ref_id": "b47", "title": "Large-scale classification of fine-art paintings: Learning the right metric on the right feature", "year": "2015" }, { "authors": "B André; Megan H Valdez; Papesh; Kris A David M Treiman; Stephen D Smith; Peter N Goldinger; Steinmetz", "journal": "Journal of Neuroscience", "ref_id": "b48", "title": "Distributed representation of visual objects by single neurons in the human brain", "year": "2015" }, { "authors": "Catherine Wah; Steve Branson; Peter 
Welinder; Pietro Perona; Serge Belongie", "journal": "", "ref_id": "b49", "title": "The caltech-ucsd birds-200-2011 dataset", "year": "2011" }, { "authors": "Tongzhou Wang; Jun-Yan Zhu; Antonio Torralba; Alexei A Efros", "journal": "", "ref_id": "b50", "title": "Dataset distillation", "year": "2018" }, { "authors": "Shixian Wen; Amanda Rios; Yunhao Ge; Laurent Itti", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b51", "title": "Beneficial perturbation network for designing general adaptive artificial intelligence systems", "year": "2021" }, { "authors": "Mitchell Wortsman; Rosanne Vivek Ramanujan; Aniruddha Liu; Mohammad Kembhavi; Jason Rastegari; Ali Yosinski; Farhadi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b52", "title": "Supermasks in superposition", "year": "2020" }, { "authors": "Jaehong Yoon; Wonyong Jeong; Giwoong Lee; Eunho Yang; Sung Ju Hwang", "journal": "PMLR", "ref_id": "b53", "title": "Federated continual learning with weighted inter-client transfer", "year": "2021" }, { "authors": "Friedemann Zenke; Ben Poole; Surya Ganguli", "journal": "PMLR", "ref_id": "b54", "title": "Continual learning through synaptic intelligence", "year": "2017" }, { "authors": "Junting Zhang; Jie Zhang; Shalini Ghosh; Dawei Li; Serafettin Tasci; Larry Heck; Heming Zhang; C-C Jay Kuo", "journal": "", "ref_id": "b55", "title": "Class-incremental learning via deep model consolidation", "year": "2020" }, { "authors": "Yu Zhang; Qiang Yang", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b56", "title": "A survey on multi-task learning", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 75.26, 130.35, 455.21, 197.04 ], "formula_id": "formula_0", "formula_text": "𝑅 & 𝐴 & 𝐴 '()*(+ 𝑇 ! \" 𝑇 # \" 𝑇 $ \" 𝑇 % \" 𝑇 & \" … 𝐴 \" 𝑅 \" b) Sequential Lifelong Learning time t-2 t-1 t+1 𝑇 ! \" 𝑇 # \" 𝑇 $ \" 𝑇 % \" 𝑇 & \" … 𝐴 \" 𝑅 \" time t-2 t-1 t+1 d) Shared Knowledge Lifelong Learning (SKILL) 𝑇 ! ' 𝑇 # ' 𝑇 $ ' 𝑇 % ' 𝑇 & ' … 𝐴 ' 𝑅 ' time t-2 t-1 t+1 𝑇 ! ( 𝑇 # ( 𝑇 $ ( 𝑇 % ( 𝑇 & ( … 𝑅 ( 𝐴 ( time t-2 t-1 t+1 𝑇 ! ) 𝑇 # ) 𝑇 $ ) 𝑇 % ) 𝑇 & ) … 𝑅 ) 𝐴 ) time t-2 t-1 t+1 𝑇 ! * 𝑇 # * 𝑇 $ * 𝑇 % * 𝑇 & * … 𝑅 * 𝐴 * time t-2 t-1 t+1" }, { "formula_coordinates": [ 4, 339.06, 252.59, 185.27, 76.65 ], "formula_id": "formula_1", "formula_text": "✓ ✓ ✓ ✕ ✕ b) Sequential Lifelong Learning ✕ ✓ ✓ ✕ ✕ c) Federated Learning ✕ ✕ ✕ ✓ ✓ d) Shared Knowledge Lifelong Learning (SKILL) ✓ ✓ ✓ ✓ ✓" }, { "formula_coordinates": [ 7, 72, 680.95, 468, 27.49 ], "formula_id": "formula_2", "formula_text": "y = Conv(x) + b + B (1) with input feature x ∈ R w * h * c , output feature y ∈ R w ′ * h ′ * c ′ . b ∈ R c ′ is" }, { "formula_coordinates": [ 8, 263.86, 722.39, 276.14, 9.96 ], "formula_id": "formula_3", "formula_text": "y = F C(x) + b + B (2) with x ∈ R l , y ∈ R l ′ , b ∈ R l ′ and B ∈ R l ′ ." }, { "formula_coordinates": [ 9, 217.99, 167.67, 322.01, 30.32 ], "formula_id": "formula_4", "formula_text": "f (x) = k i=1 ϕ i N (x|µ i , Σ i ), k i=1 ϕ i = 1 (3)" }, { "formula_coordinates": [ 9, 187.37, 251.98, 187.54, 10.32 ], "formula_id": "formula_5", "formula_text": "D map () = {(N 1 , ϕ 1 ) : 1, ..., (N kT , ϕ kT ) : T }." }, { "formula_coordinates": [ 9, 72, 275.89, 110.74, 10.32 ], "formula_id": "formula_6", "formula_text": "T ask = D map ((N m , ϕ m ))" }, { "formula_coordinates": [ 9, 232.62, 303.78, 307.38, 27.31 ], "formula_id": "formula_7", "formula_text": "P (m, x i ) = ϕ m N (x|µ m , Σ m ) kT n=1 ϕ n N (x|µ n , Σ n ) (4)" }, { "formula_coordinates": [ 9, 72, 376.85, 468, 28.72 ], "formula_id": "formula_8", "formula_text": "µ c = 1 Nc i:yi=c x i (N c : number of images in each class) and Σ = 1 N C c=1 i:yi=c (x i -µ c )(x i -µ c" }, { "formula_coordinates": [ 9, 242.07, 498.34, 297.93, 19.31 ], "formula_id": "formula_9", "formula_text": "arg min c (x -µ c ) T Σ-1 (x -µ c ) (5)" }, { "formula_coordinates": [ 18, 78.23, 551.07, 461.77, 45.76 ], "formula_id": "formula_10", "formula_text": "W 1 = ...w 1 1 ... ... ...w 1 n ... and W 2 = ...w 2 1 ... ... ...w 2 n ..." }, { "formula_coordinates": [ 18, 441.54, 573.85, 10.15, 22.61 ], "formula_id": "formula_11", "formula_text": "w ′ 1 ... w ′ n" } ]
Lightweight Learner for Shared Knowledge Lifelong Learning
In Lifelong Learning (LL), agents continually learn as they encounter new conditions and tasks. Most current LL is limited to a single agent that learns tasks sequentially. Dedicated LL machinery is then deployed to mitigate the forgetting of old tasks as new tasks are learned. This is inherently slow. We propose a new Shared Knowledge Lifelong Learning (SKILL) challenge, which deploys a decentralized population of LL agents that each sequentially learn different tasks, with all agents operating independently and in parallel. After learning their respective tasks, agents share and consolidate their knowledge over a decentralized communication network, so that, in the end, all agents can master all tasks. We present one solution to SKILL which uses Lightweight Lifelong Learning (LLL) agents, where the goal is to facilitate efficient sharing by minimizing the fraction of the agent that is specialized for any given task. Each LLL agent thus consists of a common task-agnostic immutable part, where most parameters are, and individual task-specific modules that contain fewer parameters but are adapted to each task. Agents share their task-specific modules, plus summary information ("task anchors") representing their tasks in the common task-agnostic latent space of all agents. Receiving agents register each received task-specific module using the corresponding anchor. Thus, every agent improves its ability to solve new tasks each time new task-specific modules and anchors are received. If all agents can communicate with all others, eventually all agents become identical and can solve all tasks. On a new, very challenging SKILL-102 dataset with 102 image classification tasks (5,033 classes in total, 2,041,225 training, 243,464 validation, and 243,464 test images), we achieve much higher (and SOTA) accuracy over 8 LL baselines, while also achieving near perfect parallelization. Code and data can be found at https://github.
Yunhao Ge; Yuecheng Li; Di Wu; Adam M Jones; Amanda Sofie Rios; Shixian Wen; Po-Hsuan Huang; Zachary William Murdock; Gozde Sahin; Shuo Ni; Kiran Lekkala; Sumedh Anand Sontakke; Laurent Itti
[ { "figure_caption": "Figure 2 :2Figure 2: (a) SKILL-102 dataset visualization. Task difficulty (y-axis) was estimated as the error rate of a ResNet-18 trained from scratch on each task for a fixed number of epochs. Circle size reflects dataset size (number of images). (b) Comparison with other benchmark datasets including Visual Domain Decathlon (Rebuffi et al., 2017a), Cifar-100(Krizhevsky et al., 2009), F-CelebA(Ke et al., 2020), Fine-grained 6 tasks(Russakovsky et al., 2014) (Wah et al., 2011),(Nilsback & Zisserman, 2008b),(Krause et al., 2013),(Saleh & Elgammal, 2015),(Eitz et al., 2012) c) Qualitative visualization of other datasets, using the same legend and format as in a).", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Algorithm design. Top: overall pipeline, where agents are deployed in different regions to learn their own tasks. Subsequently, learned knowledge is shared among all agents. Bottom: Zoom into the details of each agent, with 4 main roles: 1) Training: agents use a common pre-trained and frozen backbone, stored in ROM memory at manufacturing time (gray trapezoid with lock symbol). The backbone allows the agent to extract compact representations from inputs (e.g., with an xception backbone, the representation is a latent vector of 2048 dimensions, and inputs are 299 × 299 RGB images). Each agent learns a task-specific head (red triangle) for each new task. A head consists of the last fully-connected layer of the network plus our proposed LL beneficial biasing units (BB) that provide task-dependent tuning biases to all neurons in the network (one float number per neuron). During training, each agent also learns a GMMC or Mahalanobis task anchor which will form a task mapper. 2) Share knowledge with other agents: each agent shares the learned task-specific head, Beneficial Bias (BB), and GMMC module (or training images for Mahalanobis) with all other agents. 3) Receive knowledge from other agents: each agent receives different heads and GMMC/Mahalanobis task mapper anchors from other agents. All heads are stored in a head bank and all task anchors are consolidated to form a task mapper. 4) Testing: At test time, an input is first processed through the task mapper. This outputs a task ID, used to load up the corresponding head (last layer + beneficial biases) from the bank. The network is then equipped with the correct head and is run on the input to produce an output.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Accuracy on task 1 (learning to classify 102 types of flowers) as a function of the number of tasks learned. a) Comparison between our methods. b) Comparison between our best and other baselines.Our approach is able to maintain accuracy on task 1 much better than the baselines as more and more tasks are learned: while our approach does suffer some interference, task 1 accuracy remains to within 90% of its initial best even after learning 101 new tasks (for the 4 LLL variants, BB=beneficial biases, MAHA=Mahalanobis Distance task mapper, GMMC=GMMC task mapper). In contrast, the accuracy of EWC, PSP, and several other baselines on task 1 catastrophically degrades to nearly zero after learning just 10 new tasks, even though we granted these methods a perfect task oracle. 
The best performing baseline, ER, is of the episodic buffer type (a fraction of the training set of each task is retained for later rehearsing while learning new tasks), with an un-bounded buffer that grows by 10 images/class. This methods does incur higher (and increasing) training costs because of the rehearsing (Suppl. Sec. D.) Note how SUPSUP does not experience any degradation on task 1, which is a desirable feature of this approach. However, a drawback is that SUPSUP is not able, even from the beginning, to learn task 1 as well as other methods (50.64% accuracy vs. over 90% for most other approaches). We attribute this to SUPSUP's limited expressivity and capacity to learn using masks over a random backbone, especially for tasks with many classes. Indeed, SUPSUP can perform very well on some other tasks, usually with a smaller number of classes (e.g., 91.93% correct on SVHN, 93.18% on Brazillian Coins, 99.11% on UMNIST Face Dataset; see Supp. Fig.S6).", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Absolute accuracy per task after learning 102 tasks. (Top) Absolute accuracy of the GMMC and Mahalanobis task mappers alone shows quite a bit of variability, indicating various degrees of overlap among tasks. (Bottom) Absolute accuracy of the main xception+head network alone (with or without BB, assuming perfect task mapper) also shows significant variability, indicating various degrees of difficulty per task. The accuracy with BB is overall slightly higher than without BB (orange bars higher than corresponding blue bars in the bottom panel), as further explored in the next figure.", "figure_data": "", "figure_id": "fig_3", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Two experiments where the weights from previously learned similar not identical classes are successful in boosting learning of new classes. Left: pairs of similar classes (according to CLIP). Right: accuracy achieved with weight transfer vs. random initialization.", "figure_data": "", "figure_id": "fig_5", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure S2 :S2Figure S2: MACs for Mahalanobis training (vertical axis) as a function of number of training images (horizontal axis).", "figure_data": "", "figure_id": "fig_6", "figure_label": "S2", "figure_type": "figure" }, { "figure_caption": "Figure S3 :S3FigureS3: Additional details for how we compute MACs and speedup. Different assumptions (e.g., higher or lower MACs/byte transmitted) can be used, which would update the results in the main paper Figs. 9 and 10.", "figure_data": "", "figure_id": "fig_7", "figure_label": "S3", "figure_type": "figure" }, { "figure_caption": "Figure S4 :S4Figure S4: Here we analyze the top 3 tasks into which images may be misclassified by GMMC. a) Out of 18 test images from the (very small) Dragon Ball Dataset, 4 are correctly classified as belonging to Dragon Ball Dataset, 11 are misclassified as belonging to the One Piece dataset, and 2 are misclassified as belonging to the Pokemon dataset. Since all three datasets contain cartoon images, GMMC was confused to classify some images into an incorrect dataset. b) Out of 252 test images from Office Home Art, 86 are correctly classified, 33 are classified as belonging to the Stanford Online Product dataset, and 20 are classified as Office Home Product dataset. 
These three datasets have many objects in common such as bicycles, chairs, and tables.Hence, it is easy for GMMC to get confused. c) Out of 18 test images from Malacca Historical Buildings, 7 were correctly classified, 5 are classified as Art Images, and 5 are classified as belonging to the Watermark dataset. The Art Image and Watermark datasets contain a large variety of images which may confuse the GMMC to make wrong predictions.", "figure_data": "", "figure_id": "fig_8", "figure_label": "S4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "problem are usually not a primary focus of FL, with a few exceptionsYoon et al. (2021) which still do not directly apply to SKILL, as they focus on a single task for all agents and use a central server. Because federated learning relies on a central server, it is susceptible to complete failure if that server is destroyed; in contrast, in SKILL, as long as not all agents are destroyed, the surviving agents can still share and master some of the tasks.", "figure_data": "One related direction is to share a compact representation dataset: Dataset distillation Wang et al. (2018)combines all training exemplars into a small number of super-exemplars which, when learned from usinggradient descent, would generate the same gradients as the larger, original training set. However, thedistillation cost is very high, which could not satisfy the 0.5N speedup requirement. Another related directionis to reuse shared parameters for different tasks: Adversarial reprogramming Elsayed et al. (2018) computesa single noise pattern for each task. This pattern is then added to inputs for a new task and fed throughthe original network. The original network processes the combined input + noise and generates an output,which is then remapped onto the desired output domain. However, the cost of the reprogramming trainingprocess is high, which could not satisfy the 0.5N speedup requirement. Another related direction is to use agenerative adversarial network (GAN Goodfellow et al. (2020)) to learn the distribution of different datasetsand generate for replay. Closed-loop GAN (CloGAN Rios & Itti (2018)) could be continuously trained withnew data, while at the same time generating data from previously learned tasks for interleaved training.However, the GAN needs more parameters to transmit, and the high training time does not satisfy the 0.5Nspeedup requirement.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The two exceptions are PSPCheung et al. (2019) that usesResNet-18, and SUPSUP Wortsman et al. (2020) that uses ResNet-50.Our baselines fall in the following 3 categories(De Lange et al., 2021): (1) Regularization methods add an auxiliary loss term to the primary task objective to constraint weight updates. 
The extra loss can be a penalty on the parameters (EWC(Kirkpatrick et al., 2017), MAS(Aljundi et al., 2018) and SI(Zenke et al., 2017)) or on the feature-space(FDR (Benjamin et al., 2018)), such as using Knowledge Distillation (DMC", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Analysis of computation and network expenditures for our parallelized LLL approach and our parallelized SUPSUP, to learn all T = 102 tasks. Our approach supports any number of agents N such that 1 ≤ N ≤ T . Maximum speedup is expected when N = T and each agent learns one task, then shares with all others. Here, we report numbers for T = 102, N = 51, and each agent learns 2 tasks in sequence. Note that in our approach, accuracy is not affected by N , only the amount of parallelization speedup increases with N . Note how in this table we still report MACs but taking parallelization into account (e.g., teacher CPU for N agents is single-agent CPU divided by N ). Teacher CPU: Time to learn tasks from their training datasets, plus to possibly prepare data for sharing (e.g., compute GMMC clusters). Communications: Our LLL agents communicate either GMMC clusters or Mahalanobis training images, while our modified SUPSUP communicates masks. Here we assume that there is a communication bottleneck at the receiver (student): the shared data from 100 tasks needs to be received serially, over a single networking interface for each student. Hence our communication figures are for all the shared data from all other tasks apart from those an agent learned itself. We convert communication costs to equivalent MACs by assuming 1,000 MACs per byte transmitted. BB adds a small extra communication cost, to transmit the biases. Student CPU: For GMMC, students do not do any extra work (hence, student CPU is 0); for Mahalanobis, students compute a covariance matrix for all 102 tasks. Speedup factor: is just total MACs for single agent divided by total MACs for parallel agents and by N . All approaches here achieve near perfect parallelization (> 0.99N , where 1.0N is perfect). Accuracy: In addition to being faster when BB is not used, our LLL variants still all outperform the parallel SUPSUP in accuracy, by a large margin (> 10%).", "figure_data": "Teacher CPU (MACs)Communi--cations (bytes)Student CPU (MACs)Total (MACs)Parallelization efficiency (xN)CPU usage vs. 
Ours-SKILL, no BB, MAHAAverage accuracy after learning 102 tasksLLL(Ours)-Multiple Agents,no BB, GMMC1.69E+148.22E+070.00E+001.69E+140.99999519~0.96x67.43%LLL(Ours)-Multiple Agents,BB, GMMC1.53E+161.03E+080.00E+001.53E+160.999999934~87.2x70.58%LLL(Ours)-Multiple Agents,no BB, Mahalanobis1.69E+146.72E+09 5.00E+09 1.76E+140.9966305511x (reference)68.87%LLL(Ours)-Multiple Agents,BB, Mahalanobis1.53E+166.74E+095.00E+091.53E+160.999962712~87.3x72.1%Parallel SUPSUP,Perfect Task Oracle9.91E+153.03E+080.00E+009.91E+150.999999697~56.4x56.22%", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Datasetsinitialization All 10 shot 5 shot 3 shotOld task House Room Image Datasetrandom0.860.770.730.52New taskMIT Indoor Scenesours0.890.830.80.71Old taskStandford Online Productsrandom0.780.610.610.6New taskOffice Home Productours0.80.620.590.6Old task100 Sportsrandom10.980.920.92New task UIUC Sports Event Datasetours10.990.970.97Old taskiFood2019random0.610.420.350.3New taskFood-101ours0.640.50.460.43Table", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Total average sharing per task in our current implementation with GMMC+BB: 404+70+409 = 883 KBytes/task; for Mahalanobis+BB: 404+70+1341=1.81 MBytes/task.", "figure_data": "Shared params and dataSize (bytes)Implemented: N = 17, 472, c = 49.34, k = 25Last layer weights2048 × c × 4404 KBytesBB biasesN × 470 KBytesGMMC clustersk × (2048 + 2048)× 4409 KBytesOptional: Head2toeadds N × c × 4adds 3.45 MBytesOptional: AR patternadds 299 × 299 × 3adds 268 KBytesAlternative: 5 images/task for MAHA5 × 299 × 299 × 31.34 MBytes", "figure_id": "tab_6", "figure_label": "S1", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Matched Class for UIUC_Sports_Event_Dataset and199_Sportslabellonglearned_class(weight source) target_classscore0 boccebowling0.78371 badmintontennis0.8232 sailingsailboat racing0.93 snowboardingsnow boarding0.93554 RockClimbingrock climbing0.9895 polopolo0.99956 croque_madamecroque_madame17 Rowingrowing1Table S5: Food-101 vs iFood2019learned_class(weight source) target_classscore0 cheese_plategrilled_cheese_sandwich 0.85741 cup_cakescupcake0.9042 steaksteak_au_poivre0.91853 scallopsscallop0.9294 breakfast_burritoburrito0.9395 nachosnacho0.95176 dumplingsdumpling0.95367 musselsmussel0.9558 churroschurro0.95859 spring_rollsspring_roll0.958510 chicken_wingschicken_wing0.960411 escargotsescargot0.96212 waffleswaffle0.96613 baby_back_ribsbaby_back_rib0.96614 oystersoyster0.966315 beignetsbeignet0.9716 tacostaco0.972717 donutsdonut0.975618 crab_cakescrab_cake0.975619 deviled_eggsdeviled_egg0.97620 macaronsmacaron0.977521 pancakespancake0.98222 pad_thaipad_thai0.99923 grilled_salmongrilled_salmon0.99924 fried_calamarifried_calamari0.99925 omeletteomelette0.999526 beef_carpacciobeef_carpaccio0.999527 hamburgerhamburger0.999528 clam_chowderclam_chowder0.999529 chocolate_cakechocolate_cake0.999530 lobster_roll_sandwichlobster_roll_sandwich0.999531 macaroni_and_cheesemacaroni_and_cheese0.999532 seaweed_saladseaweed_salad0.999533 shrimp_and_gritsshrimp_and_grits0.999534 sushisushi0.999535 creme_bruleecreme_brulee0.9995", "figure_id": "tab_7", "figure_label": "S4", "figure_type": "table" } ]
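As a complement to the figure captions above, which describe how each agent fits a GMMC task anchor and how received anchors are consolidated into a task mapper (Eqs. (3)-(4)), the following is a minimal sketch of such a task mapper. It is a sketch under stated assumptions, not the authors' implementation: scikit-learn GMMs with diagonal covariances, 2048-d backbone features, k clusters per task, and all names are mine.

```python
# Illustrative GMMC task mapper: each task contributes k Gaussians fitted on its
# backbone features; at test time the task whose component has the highest posterior
# (Eq. 4, up to a constant) is selected and its head is loaded.
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMCTaskMapper:
    def __init__(self, k: int = 25):
        self.k = k
        self.means, self.covs, self.weights, self.task_ids = [], [], [], []

    def add_task(self, task_id: int, feats: np.ndarray) -> None:
        # Fit k diagonal-covariance Gaussians on this task's training features.
        gmm = GaussianMixture(n_components=self.k, covariance_type="diag").fit(feats)
        self.means.append(gmm.means_)
        self.covs.append(gmm.covariances_)
        self.weights.append(gmm.weights_)
        self.task_ids += [task_id] * self.k

    def predict_task(self, feat: np.ndarray) -> int:
        means = np.concatenate(self.means)    # (T*k, d)
        covs = np.concatenate(self.covs)      # (T*k, d) diagonal covariances
        # Each task's mixture weights sum to 1; renormalize over all consolidated tasks.
        weights = np.concatenate(self.weights) / len(self.weights)
        diff = feat[None, :] - means
        # Log-density of each component (the d*log(2*pi) constant is dropped,
        # since it does not affect the argmax).
        log_p = -0.5 * (np.sum(diff**2 / covs, axis=1) + np.sum(np.log(covs), axis=1))
        log_p += np.log(weights)
        return self.task_ids[int(np.argmax(log_p))]
```

A Mahalanobis-style mapper would follow the same recipe, replacing the per-task GMM with per-class means and a shared covariance as in Eq. (5).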
[{"Category": "Methodological Basis", "Citation": "(Parisi et al., 2019)", "Explanation": "The cited work by Parisi et al. (2019) provides a foundational understanding of the concept of Lifelong Learning (LL) in machine learning (ML), which the citing paper builds upon in their research on LL in a new scenario of Shared Knowledge Lifelong Learning (SKILL). The cited work serves as a reference for the research on LL in a new context of multiple agents learning in parallel in different physical locations."}, {"Category": "Methodological Basis", "Citation": "(Masana et al., 2022)", "Explanation": "The cited work by Masana et al. provides a definition of Lifelong Learning (LL) and its key characteristics, which the citing paper adopts in its discussion of the field."}, {"Category": "Extension or Continuation", "Citation": "(Parisi et al., 2019)", "Explanation": "The cited work by Parisi et al. extends the concept of LL by introducing the idea of continuously learning over time and accommodating new knowledge while retaining previous experiences."}, {"Category": "Extension or Continuation", "Citation": "(De Lange et al., 2021)", "Explanation": "The cited work by De Lange et al. builds upon the discussion of LL by categorizing approaches into three main branches and providing a brief summary of each branch."}, {"Category": "Methodological Basis", "Citation": "(Kirkpatrick et al., 2017)", "Explanation": "The cited work introduces the EWC method, which the citing paper adopts as a way to penalize model parameters in a specific context."}, {"Category": "Methodological Basis", "Citation": "(Aljundi et al., 2018)", "Explanation": "The cited work presents the MAS method, which the citing paper uses as a way to penalize model parameters in a specific context."}, {"Category": "Methodological Basis", "Citation": "(Zenke et al., 2017)", "Explanation": "The cited work introduces the SI method, which the citing paper adopts as a way to penalize model parameters in a specific context."}, {"Category": "Methodological Basis", "Citation": "(Benjamin et al., 2018)", "Explanation": "The cited work presents the FDR method, which the citing paper uses as a way to penalize model parameters in a specific context."}, {"Category": "Methodological Basis", "Citation": "(Li & Hoiem, 2017)", "Explanation": "The cited work introduces the LwF method, which the citing paper adopts as a way to use knowledge distillation in a specific context."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2020)", "Explanation": "The cited work presents the DMC method, which the citing paper uses as a way to use knowledge distillation in a specific context."}, {"Category": "Methodological Basis", "Citation": "(Wortsman et al., 2020)", "Explanation": "The cited work introduces the SUPSUP method, which the citing paper adopts as a way to avoid overwriting model parameters in a specific context."}, {"Category": "Methodological Basis", "Citation": "(Cheung et al., 2019)", "Explanation": "The cited work presents the PSP method, which the citing paper uses as a way to avoid overwriting model parameters in a specific context."}, {"Category": "Methodological Basis", "Citation": "(Wen et al., 2021)", "Explanation": "The cited work introduces the BPN method, which the citing paper adopts as a way to avoid overwriting model parameters in a specific context."}, {"Category": "Methodological Basis", "Citation": "(Rebuffi et al., 2017b)", "Explanation": "The cited work by Rebuffi et al. 
(2017b) provides the basis for the use of a buffer in the task training of iCaRL and ER, which the citing paper builds upon in its research."}, {"Category": "Data Source", "Citation": "(Robins, 1995)", "Explanation": "The cited work by Robins (1995) serves as the data source for the use of a buffer in the task training of iCaRL and ER, as the cited work is the original source of the concept and methodology."}, {"Category": "Extension or Continuation", "Citation": "(GSS, AGEM)", "Explanation": "The cited works of GSS and AGEM are extensions of the research on buffer usage in task training, as the citing paper further explores the use of buffers in the context of lifelong learning and shared knowledge."}, {"Category": "Methodological Basis", "Citation": "(Li & Hoiem, 2017)", "Explanation": "The cited work by Li and Hoiem (2017) provides a sequential lifelong learning approach that the citing paper adopts to avoid task interference in the learning process."}, {"Category": "Extension or Continuation", "Citation": "(McMahan et al., 2017)", "Explanation": "The cited work by McMahan et al. (2017) on federated learning is extended in the citing paper to allow multiple agents to learn the same task in different physical locations and share knowledge with a center agent."}, {"Category": "Extension or Continuation", "Citation": "(Lopez-Paz & Ranzato, 2017)", "Explanation": "The cited work by Lopez-Paz and Ranzato (2017) on AGEM-R is further developed in the citing paper to allow for parallel learning and knowledge sharing among agents in different physical regions."}, {"Category": "Data Source", "Citation": "(Chaudhry et al., 2018)", "Explanation": "The cited work by Chaudhry et al. (2018) on GSS is referenced in the citing paper to acknowledge the use of a specific data source for learning tasks in different physical locations."}, {"Category": "Data Source", "Citation": "(Aljundi et al., 2019)", "Explanation": "The cited work by Aljundi et al. (2019) on DER is mentioned in the citing paper to highlight the use of a data source for learning tasks in different physical regions."}, {"Category": "Data Source", "Citation": "(Buzzega et al., 2020)", "Explanation": "The cited work by Buzzega et al. (2020) on DERPP is referenced in the citing paper to acknowledge the use of a data source for learning tasks in different physical regions."}, {"Category": "Supporting Evidence", "Citation": "(Kairouz et al., 2021)", "Explanation": "The cited work by Kairouz et al. (2021) provides a comprehensive overview of the concept of federated learning and its applications, which serves as a foundational knowledge for the citing paper to build upon."}, {"Category": "Supporting Evidence", "Citation": "(Li et al., 2020)", "Explanation": "The cited work by Li et al. (2020) discusses the challenges and potential solutions in federated learning, offering valuable insights and directions for the citing paper to follow."}, {"Category": "Supporting Evidence", "Citation": "(Bonawitz et al., 2019)", "Explanation": "The cited work by Bonawitz et al.
(2019) presents a detailed study on the security and privacy issues in federated learning, which the citing paper can use to address the security concerns in their research."}, {"Category": "Data Source", "Citation": "(Rebuffi et al., 2017a)", "Explanation": "The cited work provides the Cifar-100 dataset used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Ke et al., 2020)", "Explanation": "The cited work provides the F-CelebA dataset used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Russakovsky et al., 2014)", "Explanation": "The cited work provides the Fine-grained 6 tasks dataset used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Wah et al., 2011)", "Explanation": "The cited work provides the Fine-grained 6 tasks dataset used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Nilsback & Zisserman, 2008b)", "Explanation": "The cited work provides the Fine-grained 6 tasks dataset used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Krause et al., 2013)", "Explanation": "The cited work provides the Fine-grained 6 tasks dataset used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Saleh & Elgammal, 2015)", "Explanation": "The cited work provides the Fine-grained 6 tasks dataset used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Eitz et al., 2012)", "Explanation": "The cited work provides the Fine-grained 6 tasks dataset used in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Chollet, 2017)", "Explanation": "The cited work by Chollet provides the xception model that the citing paper uses as the backbone for their image processing tasks. The xception model is chosen for its balance between model complexity and expressivity of the embedding, which is a methodological basis for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Deng et al., 2009)", "Explanation": "The cited work by Deng et al. is acknowledged as the source of the ImageNet dataset used in the xception model training. This data source is important for the research conducted in the citing paper as it provides the necessary training data for the xception model."}, {"Category": "Extension or Continuation", "Citation": "(Wen et al., 2021)", "Explanation": "The cited work by Wen et al. introduces the Beneficial Perturbation Network (BPN), which inspires the design of the beneficial biases (BB) in the citing paper. The BB provides a set of task-dependent bias units that are activated per task, extending the research on beneficial perturbations in the field of image processing."}, {"Category": "Methodological Basis", "Citation": "(Kirkpatrick et al., 2017)", "Explanation": "The cited work by Kirkpatrick et al. (2017) provides the foundational methodology of EWC (Elastic Weight Consolidation) that is used in the citing paper to improve the performance of the BPN (Beneficial Perturbation Network). The cited work serves as a methodological basis for the citing paper to implement the EWC method in the BPN framework."}, {"Category": "Extension or Continuation", "Citation": "(Cheung et al., 2019)", "Explanation": "The cited work by Cheung et al. (2019) extends the research on BPN by introducing the PSP (Parameter Superposition) method. The citing paper further builds upon this work by incorporating the PSP method into the BPN framework to improve the performance of the beneficial perturbation network."}, {"Category": "Data Source", "Citation": "(Cheung et al., 2019)", "Explanation": "The cited work by Cheung et al. (2019) provides the data source for the PSP (Parameter Superposition) method that the citing paper uses in the BPN framework. The data from the cited work is used to train the PSP model and provide the necessary information for the beneficial perturbation network."}, {"Category": "Methodological Basis", "Citation": "(Cheung et al., 2019)", "Explanation": "The cited work by Cheung et al. (2019) provides the methodological basis for the BB (beneficial biasing units) module in the citing paper. The BB module is an add-on module that is used in both convolutional layers and fully-connected layers to provide task-dependent tuning biases to all neurons in the network. The cited work serves as the basis for the implementation of the BB module in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Lee et al., 2018)", "Explanation": "The Mahalanobis distance method is adopted in the cited work to learn class-conditional Gaussian distributions, which is then used in the citing paper to perform task mapping by computing the Mahalanobis distance between a test image and the Gaussian clusters of the tasks received so far."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2020)", "Explanation": "The cited work by Zhang et al. provides a method for re-implementing the baselines in the citing paper to use the same pretrained xception backbone, ensuring a fair comparison between approaches."}, {"Category": "Methodological Basis", "Citation": "(Wortsman et al., 2020)", "Explanation": "The cited work by Wortsman et al. introduces the SUPSUP method for assigning fixed model parameters to tasks and avoiding over-writing them when new tasks are learned, which the citing paper adopts as a baseline for parameter-isolation methods."}, {"Category": "Methodological Basis", "Citation": "(Cheung et al., 2019)", "Explanation": "The cited work by Cheung et al. presents the PSP method for assigning fixed model parameters to tasks and avoiding over-writing them when new tasks are learned, which the citing paper also uses as a baseline for parameter-isolation methods."}, {"Category": "Methodological Basis", "Citation": "(Radford et al., 2021)", "Explanation": "The cited work provides the text encoder used in the citing paper to obtain word embeddings for class names, which is a methodological basis for measuring semantic similarity between class names in the LLL learners."}, {"Category": "Methodological Basis", "Citation": "(Evci et al., 2022)", "Explanation": "The cited work introduces the Head2Toe approach for addressing domain gaps in the backbone, which the citing paper adopts in their research to alleviate domain gaps in the backbone features."}, {"Category": "Data Source", "Citation": "(Cheung et al., 2019)", "Explanation": "The cited work is mentioned as a source of a dataset that was used in a previous study to test the performance of lifelong learning approaches in a specific scenario."}, {"Category": "Methodological Basis", "Citation": "(Evci et al., 2022)", "Explanation": "The cited work by Evci et al.
(2022) provides a method of sharing a last layer that connects to several intermediary layers or all layers in the network, which the citing paper adopts to address the issue of features in the frozen backbone not being well suited for highly artificial worlds."}, {"Category": "Supporting Evidence", "Citation": "(Elsayed et al., 2018)", "Explanation": "The cited work introduces the concept of adversarial reprogramming (AR), which is similar in spirit to the bias-based (BB) method discussed in the citing paper. The AR method operates in the input (image) space, which is different from the BB method operating in the activation space. The cited work provides a foundational idea for the discussion of the two methods in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Ke et al., 2020)", "Explanation": "The cited work, Visual Domain Decathlon, serves as a benchmark for the experiments conducted in the citing paper on the SKILL-102 dataset. The citing paper extends the research by applying the same baselines and method implementations to the Visual Domain Decathlon tasks, further exploring the performance of the methods in a wider range of scenarios."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b15", "b12", "b22", "b30", "b2", "b29", "b0", "b9", "b17", "b20", "b14", "b18" ], "table_ref": [], "text": "In order to acquire knowledge on a new subject or find answers to complex questions, it is often necessary to consult multiple sources of written information. While information provided in a single document is usually consistent, textual materials from various sources often use different language expressions, which may vary in terms of level of specificity, to convey similar information. An illustration of this phenomenon can be seen in Figure 1. In this paper, we aim to address the process of combining such multiple partially overlapping textual\n[S1] The fire has destroyed a large section of the store and fire crews and investigators are still on the scene.\n[S2] A FIRE has badly damaged the Waitrose supermarket in Wellington's High Street.\n[Union] The fire has destroyed a large section of the Waitrose supermarket in Wellington's High Street and fire crews and investigators are still on the scene. Figure 1: An example of a sentence pair and its union sentence. Information that must be included in the union is highlighted differently for each sentence (green and purple for sentences 1 and 2, respectively), unless the information is paraphrastic (equivalent) between the two sentences, which is then highlighted by the same color (blue). Non-highlighted information indicates that there is corresponding information in the other sentence that is more specific. sources into a single unified and comprehensive format, to which we refer as text consolidation.\nText consolidation plays a crucial role in almost any text-based information access application, such as Multi-Document Summarization (MDS) (Fabbri et al., 2019;Giorgi et al., 2022), long-form question answering (Fan et al., 2019;Nakano et al., 2022), and contemporary dialogue applications (Thoppilan et al., 2022;OpenAI, 2023). It is important to point out here that content selection and consolidation manifest two distinct sub-tasks in such applications, where the former involves identifying the sought information in the source texts, based on considerations such as salience and user needs. Consolidation, on the other hand, involves merging the selected information into a coherent output text. Accordingly, we suggest that each sub-task deserves separate investigation, while focusing in this paper on the consolidation task, manifested as information union. This approach enables targeted investigation of information union capabilities of models, while enabling modular architectures, where an effective information consolidation model can be paired with different content selec-tion models and strategies, whether fully-automatic or interactively involving a user in the loop.\nTo achieve a more controlled research environment, a sentence fusion task was introduced, which fuses a set of sentences into a single sentence (Barzilay et al., 1999;Thadani and McKeown, 2013;Agarwal and Chatterjee, 2022). However, being similar to summarization, the general sentence fusion task is ill-defined, because it allows for subjective salience-based content selection decisions (Daume III and Marcu, 2004;Krahmer et al., 2008). In contrast, the sentence union generation task is strictly defined as generating a sentence that contains exactly all information from the source sentences (see Fig. 1). 
While identifying the union task to be more attractive due to its more objective and semantically challenging nature, we found that datasets for this topic are relatively scarce (McKeown et al., 2010;Geva et al., 2019;Lebanoff et al., 2020), none of them sufficiently addressing the text consolidation setting.\nConsequently, we revisit the sentence union generation task and propose that it can be used as an effective generic testbed for text consolidation. Compared to the sentence intersection task, the union task is more challenging, as it requires merging both joint and disjoint information in the output and hence provides a more complete testbed for text consolidation. Our input format is rich and challenging enough, as shown in our analyses, to support research on information merging models. Further, this setting may already be of practical use for downstream text generation tasks, for example when combined with sentence compression or decontextualization models.\nOur contributions are outlined as follows: (1) we suggest focusing on sentence union generation as a resource for studying cross-text consolidation capabilities, and point out that properly identifying informational relations between pairs of sentences is necessary for proper consolidation; (2) we provide the largest union fusion dataset to date, while proposing a controlled annotation protocol and interface for careful creation of a sentence union corpus; (3) we suggest evaluation protocols to assess the quality of a generated sentence union, accompanied by automatic metrics that can be used for comparing multiple systems; (4) we provide empirical results on the abilities of prominent neural generative models to address the union task, assessing their capabilities and limitations." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b23", "b11", "b2", "b29", "b0", "b9", "b17", "b20", "b20", "b28", "b13", "b20", "b14", "b19", "b20", "b6", "b27" ], "table_ref": [], "text": "In Multi-Document Summarization (MDS) (Narayan et al., 2018;Fabbri et al., 2019) multipletexts are summarized into a single, shorter text. In a more controlled variant of MDS, the task requires the fusion of partly-overlapping sentences (Barzilay et al., 1999;Thadani and McKeown, 2013;Agarwal and Chatterjee, 2022). Generally, the sentence fusion task included a saliency detection (or importance) component which requires identifying which pieces of information to preserve in the fused output. As a result, sentence fusion is generally ill-defined, as different possible content selections may be valid, making the task subjective to varying necessities of a user (Daume III and Marcu, 2004;Krahmer et al., 2008). Its output could be seen as covering a \"loose\" intersection of the content of two sentences. McKeown et al. (2010) on the other hand, to ensure more consistent fusion settings, makes a distinction between two strict variants of the task: sentence intersection and sentence union generation. Given two (or a set of source sentences), their intersection is a sentence that contains only information that is common to both source sentences, while their union is a sentence that contains all information from the source sentences. As we will see in §3, these tasks can indeed be formulated in strict entailment terms. McKeown et al. 
(2010) crowdsourced a dataset of 300 examples for sentence intersection and sentence union, but subsequent works mostly focused on the intersection fusion part of the dataset (Thadani and McKeown, 2011;Fuad et al., 2019). Further, their dataset size is relatively small and primarily intended for evaluation purposes, making it inadequate for partitioning into a training dataset for fine-tuning large language models.\nWhile McKeown et al. (2010) used similar sentences, whose contents partly overlap, as input, later works researched the union of disparate sentences (Geva et al., 2019;Lebanoff et al., 2021) where contents are disjoint. This does not address the challenge of consolidating partly overlapping texts. In this work, we chose sentence union as a more complete testbed for multi-text consolidation. We see our work as a continuation of the work by McKeown et al. (2010), and complementary to works that introduced fusion datasets for disparate sentences.\nOur work further relates to a line of research that focuses on objective generation of text. Castro Ferreira et al. (2020) introduced a data-to-text generation task, wherein knowledge graph triplets describing facts are transformed into natural language text. While there are many possible realizations of the knowledge graph into natural language, the task is semantically objective, with respect to the informational content expected in the output, and is hence similar to the sentence union task. Recently, Slobodkin et al. (2022) introduced a new controlled text reduction task: given an input document with highlighted spans, the task is to generate a summary in which only the information covered in the highlighted spans is included, which could be compared to a highlight union task. Compared to our work, the spans that they used all appear in a single document, which makes it more similar to datasets which fuse disparate sentences." }, { "figure_ref": [], "heading": "Task Formulation", "publication_ref": [], "table_ref": [], "text": "The input for our sentence union task consists of two related sentences whose content partly overlap. The output union is then defined as a single sentence that follows two conditions: (a) it contains exactly the information from the two input sentences, and (b) it does not include any redundancies in its content. Condition (a) implies that there cannot be any information missing from the union that is mentioned in the source sentences, while at the same time the union cannot contain information that is not mentioned in the source sentences (i.e., hallucinations). Condition (b) implies that the union must avoid repetition of any units of information stemming from the source sentences, even if they are conveyed in different lexical terms. Notably, the semantic content of the output union (condition (a)) can be defined objectively in strict textual entailment terms. Formally, given an input of two related sentences s 1 and s 2 , and their union u, u should satisfy u |= s 1 , u |= s 2 and s 1 + s 2 |= u, where |= denotes textual entailment and + denotes concatenation of the two sentences. This definition, however, does not cover condition (b) of avoiding redundancies.\nIdentifying relevant informational links is crucial for producing a union, as demonstrated by the example in Fig. 2. We observe three types of relations between information units in the source sentences that affect the content of the resulting unit: (1) equivalent content, (2) uni-directional entailing content, and (3) disjoint content. 
Equivalent content, such as lexical equivalence or paraphrases, needs to be identified and included exactly once in the union to avoid redundancy. Uni-directional entailing content pertains to aligned text spans where one span can be implied from the other. In this case, only the entailing text unit should be included: including both spans would be redundant, while including only the less specific mention would result in missing information. Disjoint content must be included in the union as it provides distinct information not mentioned in the other sentence. For example, in Fig. 2, sentence 1 mentions the reason for firing Weightman while sentence 2 mentions that Harvey resigned, each providing distinct information. In addition, according to our annotation scheme, we assume that the date of the publication is known, which means that when a phrase such as \"the previous Thursday\" is mentioned, we can infer the specific date. Thus, the text spans \"On March 1st\" and \"the previous Thursday\" are equivalent, while \"Francis Harvey\" in sentence 1 is more specific than the text span \"Harvey\" in sentence 2. By considering these three types of relations, a proper union can be produced.\nAs noted earlier, we see the union generation task as a more comprehensive setup for information consolidation than the intersection generation task2 . This is because the union output should combine all the content from both source sentences, while the output of the intersection task does not include information mentioned in only one of the sentences. As a result, the union is more informative than the intersection, which makes it more representative for downstream multi-text tasks requiring information consolidation, aiming to create an efficient, nonrepetitive output text." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Data sources", "publication_ref": [], "table_ref": [], "text": "Annotating a text consolidation sentence union dataset requires a collection of related sentences, as input, as seen in Fig. 1. Specifically, we require naturally occurring sentences with some semantic overlap, where different types of informational relations are present. Note that we do not consider sentences with no content overlap as relevant for our dataset. [Union] Army Secretary Francis Harvey, who dismissed Walter Reed commander Major General George Weightman the previous Thursday because the army had lost trust and confidence in him, has resigned himself." }, { "figure_ref": [], "heading": "Generation", "publication_ref": [ "b31", "b29", "b8", "b11" ], "table_ref": [], "text": "Figure 2: An example of a pair of sentences, the informational relations between their text spans, and their union. In order to generate the union, it is first necessary to identify these relations (possibly implicitly), and then include all new or more specific information (denoted by colors) without redundancy.\nTo that end, we use the dataset created by Weiss et al. (2021), which includes pairs of relevant sentences with high semantic overlap. Their dataset was curated by identifying information overlap between sentences, based on the repurposing of existing human annotations. This approach is preferable to using models that identify semantic overlap, such as Thadani and McKeown (2013), since it introduces less bias to the dataset. 
The original datasets from which they sourced the sentences include: (1) the Event Coreference Bank (ECB+, an extension over ECB) (Cybulska and Vossen, 2014), which provides annotations for coreferring event and entity mentions, (2) MultiNews (MN) (Fabbri et al., 2019), which contains clusters of news articles along with human-written summaries, and (3) The Document Understanding Conference (DUC) and the Text Analysis Conference (TAC) 3 , both providing MDS evaluation datasets." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "Annotating sentence union", "publication_ref": [], "table_ref": [], "text": "The process of writing a sentence union involves carefully tracking information units and blending them together to form the output, as outlined in §3. We introduce an elaborate crowdsourcing approach and interface (see Figure 3) for annotating union datasets at a large scale, which splits the annotation process into multiple steps.\nStarting with the two source sentences, the first step is to choose one sentence as the base sentence, 3 https://duc.nist.gov/ , https://tac.nist.gov/ that will be used as the basis for generating the sentence union, depicted in (Fig. 3,[1]). Our early experiments have shown that it is easier to merge the information from one sentence by adding it to the other sentence than write a merged sentence from scratch. We instruct the workers to choose the more detailed sentence as the base sentence, since this sentence would usually require less edits when merging into it information from the other sentence. In the other sentence, termed the integrated sentence, the worker has to highlight which spans they would like to integrate into the base sentence (Fig. 3,[2]). Finally, in the writing step, the worker blends the highlighted spans into the base sentence, thus creating the sentence union (Fig. 3" }, { "figure_ref": [], "heading": ", [3]).", "publication_ref": [ "b26" ], "table_ref": [], "text": "To optimize the diversity of inputs within our dataset while considering our annotation budget, each example was assigned to a single annotator. To ensure the quality in annotators' decisions, our process follows the controlled crowdsourcing approach (Roit et al., 2020). See App. C for more details and screenshots of the entire annotation process." }, { "figure_ref": [], "heading": "Skipping examples", "publication_ref": [ "b31" ], "table_ref": [], "text": "In certain cases, it may not be possible to generate a coherent sentence union from a pair of sentences, and annotators were given the option to skip such examples. A comprehensive analysis of these skipped cases is presented in Appendix A. Mainly, our findings indicate that the dataset from which we derived our data (Weiss et al., 2021), and was primarily designed for proposition alignment, contains many sentence pairs that are not sufficiently related to each other and hence are not suitable for producing a meaningful union." }, { "figure_ref": [], "heading": "Subtle annotation cases", "publication_ref": [], "table_ref": [], "text": "In addition to the aforementioned instructions, we took into consideration a few prominent special cases concerning the source sentences that would affect the resulting sentence union. Such cases include the need for world knowledge, temporal issues, subjectivity and attribution. For examples and guidelines provided to the workers for such cases, refer to App. B." 
}, { "figure_ref": [], "heading": "Cleaning annotations", "publication_ref": [], "table_ref": [], "text": "In order to ensure a high quality dataset, we introduced a post-processing step in which we either removed or manually edited examples matching specific filtering criteria. Filtering included finding non-overlapping input sentences based on their output union (i.e., the output was a simple concatenation of the two source sentences), as well as automatically identifying and manually reviewing subtle annotation cases described in App. B. For more details, see App. D." }, { "figure_ref": [], "heading": "Dataset Analysis and Assessment", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In the following subsections, we report various analyses of the quality and other properties of our dataset. Dataset split statistics appear in Table 1. Our approach yielded a test dataset comprising of 477 instances, a sample size which is reasonable in light of the confidence intervals outlined in §8. Moreover, our analysis of learning curves (see Appendix G) suggests that the size of our training dataset is sufficient, and further expansion may not yield significant benefits." }, { "figure_ref": [], "heading": "Sentence union quality", "publication_ref": [ "b6", "b20" ], "table_ref": [ "tab_3", "tab_3" ], "text": "To estimate the reliability of our dataset, we have conducted a human assessment on a sample of 100 examples of sentence unions generated by our annotators. Our goal is to check whether the sentences in the dataset objectively fulfill the union requirements defined in Sec. 3. For this purpose we designed two evaluation criteria for content (coverage, faithfulness), and one criterion for finding redundancies (redundancy). In addition, we evaluate the fluency of the generated sentence, as commonly done for generation tasks.\n• Coverage: Does the sentence union contain all information expressed in the source sentences?\n• Faithfulness: Does the sentence union describe only information expressed in the source sentences?\n• Redundancy: Does the sentence union redundantly repeat some information?\n• Fluency: Does the sentence union progresses fluently, form a coherent whole and is easy to understand?\nThe content criteria resemble closely those used for data-to-text generation tasks (Castro Ferreira et al., 2020) which also require exact content matching between their input and output. We add another criterion for evaluating redundancies, as our input does include redundancies which needs to be avoided in the output.\nAs a simple way to measure the content criteria, we count the number of content words4 involved in pieces of information that are missing from the sentence union, or are unfaithful to the source sentences. For example, if the sentence union in Fig 2 would not mention the name \"Nick Jones\", which was mentioned in sentence 2, we count this as 2 misses. A more complicated example would be if the sentence union attributes \"Nick Jones\" to the wrong entity, such as \"FBI Deputy Director Nick Jones\". In such case, we consider the entire span (5 words) as missing, as well as unfaithful. Note that faithfulness can be seen as symmetrical to coverage, where we simply count content words in the sentence union that are not supported in the source sentences. Similarly, for the redundancy score, we count the number of content words involved in pieces of information that are redundant in the union. 
For example, in the phrase \"Thursday overnight at 2:09am\", the phrase \"overnight\" is considered redundant, and we count 1 redundant word. We did not notice any fluency issues in the sentence unions created by the workers, as may be naturally expected given the high quality of our selected workers.\nWe start by counting the number of content words in all of the sentence unions in our sample, which adds up to 2372 content words, termed w_total. Then, to create a coverage score, the count of missing content words is termed w_missing, and the coverage score is calculated as w_total / (w_total + w_missing). To create the faithfulness and redundancy scores, we calculate 1 - w_unfaithful / w_total and 1 - w_redundant / w_total, respectively, where w_unfaithful is the number of unfaithful words and w_redundant is the number of redundant words. Results for these metrics are available in Table 2. Overall, coverage issues were encountered in 8 examples out of 100, and faithfulness and redundancy issues in one example each.\nQuality comparison to the prior dataset We compare our dataset to the McKeown et al. (2010) dataset of 300 sentence union examples. In their annotation process, 5 workers annotated each pair of sentences, and then a single sentence union out of the 5 was automatically chosen as a representative. We evaluated a sample of 20 such representative sentence unions using the same quality metrics that were used in our dataset quality analysis, reported in Table 2. We conclude that our controlled process, which separates the identification of informational relations from the writing phase, results in higher-quality sentence unions, making significantly fewer coverage and redundancy mistakes, which are often due to a lack of attention to detail. For the faithfulness criterion, both approaches achieved similarly high scores, which is expected since humans are not prone to hallucinate when editing a sentence. Overall, our annotation process achieves slightly better results, while employing only one worker instead of five." }, { "figure_ref": [ "fig_1" ], "heading": "Dataset compression rate", "publication_ref": [ "b20" ], "table_ref": [], "text": "Our motivation for the union task is to develop models that can consolidate information from naturally occurring texts with varying degrees of overlapping information. Hence, in order to assess the diversity of our dataset with respect to the degree of such information overlap, we compute and analyze the Compression Rate (CR) of our instances, which, in our setting (unlike the data-to-text setting), measures the amount of redundancy between the two source sentences. By design, a CR of 100% would imply that a single source sentence contains all of the information in both source sentences, which means that the other sentence is completely redundant. A CR of 0% would imply that there are no redundancies between the source sentences.\nDenoting our two input sentences short and long, per their lengths, as well as the union sentence, and following the rationale above, the compression rate is calculated as the amount of information that is eliminated from the shorter sentence. Formally, we have CR(short, long, union) = 1 - (|union| - |long|) / |short|, counting sentence lengths by content words.\nAs can be seen in Fig. 4, our dataset supplies a variety of examples in terms of CR for every split. We report an average CR score of 60.82 ±0.67 for our dataset and an average CR score of 65.62 ±1.35 for McKeown et al. (2010). 
These results imply that our dataset on average contains somewhat less overlap between the source sentences, and overall includes a large variety of redundancy levels." }, { "figure_ref": [], "heading": "Informational relations analysis", "publication_ref": [], "table_ref": [], "text": "Complementary to the analysis in §5.2, naturally occurring texts can include a wide variety of cross-text informational relations, as described in §3. For this reason, we analyzed the frequency of the more challenging relations necessary to generate a proper sentence union. Our analysis includes a sample of 30 sentence pairs from our dataset. On average, a sample of 10 examples is expected to include 17 \"paraphrastic uni-directional entailment\" relations (a uni-directional entailment which differs lexically), such as \"supermarket\" entailing \"store\", or \"gave interviews on NBC's today\" entailing \"appearance on NBC's today\". As described in §3, such examples challenge a consolidation model to include only the entailing expression in the output. In addition, such a sample is expected to include 21 paraphrastic equivalence relations. These challenge the model to include only one of the equivalent expressions in the output, to avoid repetition. Overall, these statistics attest to the abundant semantic challenges posed by our dataset." }, { "figure_ref": [], "heading": "Baseline Models", "publication_ref": [ "b25", "b7", "b34", "b4", "b3" ], "table_ref": [], "text": "We present baseline models, aiming to test neural pretrained language models for their ability to implicitly recognize the relevant informational relations between input sentences and properly create their union.\nFine-tuned models As our first type of baseline, we fine-tune a large pre-trained sequence-to-sequence model using our data. To that end, we picked two strong models: T5-large (Raffel et al., 2019), which is commonly applied to end-to-end text generation tasks (Chen et al., 2020), and PRIMERA (Xiao et al., 2022), which was pretrained in a cross-document fashion (Caciularu et al., 2021) and achieves state-of-the-art results over multi-document summarization datasets. This makes PRIMERA appealing for our sentence fusion task, where the two sentences originate in different documents. See App. F for information about training details.\nIn-context learning Another current baseline approach is in-context learning, in which the instructions and examples for the task are provided as input (the prompt) at inference time to a very large pre-trained language model. We used GPT-3 (Brown et al., 2020), specifically text-davinci-003. The instructions we initially used were similar to those given to the annotators. We then optimized the prompt by running it on the training dataset and manually identifying mistakes. The identified mistakes were added to the prompt as examples. In addition, we added \"important\" notes to the instructions, pointing out what the model should pay attention to. See App. E for the complete final prompt and configuration used." }, { "figure_ref": [], "heading": "Model Evaluation Protocols", "publication_ref": [], "table_ref": [], "text": "We evaluate our baseline systems both through human evaluation (§7.1) and with automatic metrics (§7.2) suitable for the task, which can generally be used in the development cycles of union generation systems."
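Before detailing the evaluation protocols, here is a minimal sketch of how a fine-tuned seq2seq baseline from §6 could be applied at inference time to produce the unions that are evaluated below. The checkpoint path, separator token, and generation settings are placeholders rather than the paper's exact configuration (see App. E and App. F for the actual prompt and training details).

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder path to a union-generation checkpoint fine-tuned as described in Section 6 / App. F.
CHECKPOINT = "path/to/finetuned-union-model"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSeq2SeqLM.from_pretrained(CHECKPOINT)

def generate_union(sentence_1: str, sentence_2: str, sep_token: str = "<sent-sep>") -> str:
    """Concatenate the two source sentences with a separator token and decode a union."""
    source = f"{sentence_1} {sep_token} {sentence_2}"
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(generate_union(
    "On March 1st, Major General Weightman, Chief of Walter Reed, was fired by Army "
    "Secretary Harvey because the army had lost trust and confidence in him.",
    "Army Secretary Francis Harvey, who dismissed Walter Reed commander General George "
    "Weightman the previous Thursday, has resigned himself.",
))
```

Beam search is shown only as a reasonable default; the decoding strategy behind the reported results is not specified in this text.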
}, { "figure_ref": [], "heading": "Human evaluation", "publication_ref": [ "b5", "b24", "b1", "b10" ], "table_ref": [ "tab_5" ], "text": "The human evaluation is conducted over the predicted unions for the test set for each of the baseline models. Instead of judging the generated sentence union for each baseline system separately, the evaluation is done in a comparative fashion, following previous works where the evaluator sees together the outputs of all baseline systems (Callison-Burch et al., 2007;Novikova et al., 2018). Similar to the analysis of the dataset quality in §5, we are interested in evaluating the coverage, faithfulness, redundancy and fluency of the predicted union, this time in a manner that fits crowdsourced human evaluation. Content and redundancy are scored on a scale from 1 to 4 (higher is better), described in Table 3. This scale is inspired by the Semantic Textual Similarity human evaluation approach (Agirre et al., 2013), which also tests for information overlap. For the fluency score, we use a common Likert scale from 1 to 5 (Fabbri et al., 2021). See App. H for details and screenshots.\nAs there exist trade-offs between the two content measures and the redundancy measure, we add an additional measure which evaluates consolidation as a whole. For example, by arbitrarily adding more information to the union we can increase the coverage, but also risk increasing redundancies and unfaithfulness. The consolidation measure simply averages the three aforementioned measures, thus testing for overall text consolidation quality." }, { "figure_ref": [], "heading": "Automatic evaluation", "publication_ref": [ "b16", "b32" ], "table_ref": [], "text": "In line with previous works in text generation, we report the ROUGE metric between the reference union and the predicted union. However, like for most generation tasks, ROUGE will unfairly penalize correct but paraphrastic sentence unions (as described in §3). To partly address this issue, we add another automated metric which tests for bi-directional textual entailment (aka NLI), comparing the reference union sentence to the predicted union sentence, requiring entailment in both directions. Specifically, we use the DeBERT a xxlarge v2 model (He et al., 2020), finetuned with the MNLI task (Williams et al., 2017) and a threshold of 0.5.\nWhile both metrics test for content matching, they would not penalize a model that bluntly concatenates the two input sentences. Therefore, we also report ∆CR ( §5.2), calculated as the average difference between the CRs of the predicted vs. the reference union sentences (the latter is subtracted from the former), on each instance. A positive value thus indicates that the model compression rate is higher than that of the reference union, while a negative value indicates the opposite (model compresses less than the reference)." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Human evaluation of the models", "publication_ref": [], "table_ref": [], "text": "Results are presented in Table 4, and example generations with their respective scores are provided in App. I. The trade-off mentioned in §7.1 between increasing coverage while still remaining faithful and without redundancies is evident in the results of T 5 large and GP T 3. 
PRIMERA comes out as a slightly better model, as it achieves the highest consolidation score, yet with a lot of room for improvement.\nTo get a better sense of the absolute performance of the union sentences generated by the baseline models, we compare them to two naive models which output: (1) the concatenation of the source sentences (no avoidance of redundancy), and (2) the longer sentence (no attempt to consolidate and cover information from the other sentence). Based on an evaluation of 50 examples completed by the authors, we report an average redundancy score of 1.6 ±0.1 for the concatenation and an average coverage score of 2.3 ±0.1 for the longer sentence. As reported below, all our baseline models outperform these naive models by a large margin.\nFurther, we draw a plot (Fig. 5) of the minimal system score amongst the three component measures that the consolidation measure combines. We note that even for the best model, PRIMERA, only 29.7% of the predictions are fully correct with respect to content and redundancy, another 40.6% of the examples include minor errors, and 26% of the examples contain substantial errors in at least one of the measures, indicating the limitations of current models." }, { "figure_ref": [], "heading": "Automatic evaluation of the models", "publication_ref": [ "b10", "b21" ], "table_ref": [], "text": "While automatic metrics are clearly less reliable than human metrics, they can be useful for development cycles. The automatic metric results are also reported in Table 4, showing that the ROUGE-1 score is highest for PRIMERA, while the NLI score is highest for GPT-3. The ∆CR scores roughly correlate with the combination of coverage and redundancy detected in the human evaluation, where both lower coverage (undesired) and lower redundancy (desired) increase the compression rate.\nTo identify the potential utility of our automatic metrics, we follow standard practice (Fabbri et al., 2021) and calculate a Kendall τ coefficient (McLeod, 2005) between the human and automatic evaluation results. Our results show that ROUGE-1 is the metric most highly correlated with the consolidation measure (τ = 0.38, p < 0.05). Overall, these automatic metrics can be used in tandem to provide useful feedback during model development cycles.\nTable 4: Human (left) and automatic (right) evaluation results of system-generated unions over the complete test set. All scores are averages, along with their standard error (the standard error for manual evaluation results was always smaller than 0.01 and is therefore omitted from the table)." }, { "figure_ref": [], "heading": "Error analysis", "publication_ref": [], "table_ref": [], "text": "To shed light on the various errors made by the baseline models, we examined 20 erroneous examples identified in the human evaluation, with each example consisting of three predictions, one from each of the baseline systems. Our findings indicate that the most frequent causes of model errors are related to the complexity of informational relations present in the source sentences, with uni-directional entailment being the most common. Moreover, the models seem to face difficulties in accurately combining related information, which often results in incorrect merging of information with the wrong entity or predicate. Further details on the analysis can be found in Appendix J."
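To make the automatic evaluation of §7.2 easier to reproduce, the following sketch computes the compression rate (§5.2), the ∆CR statistic, and the bidirectional-NLI check. The content-word approximation and the NLI checkpoint name are assumptions (any MNLI-finetuned model can be substituted), so this is an approximation of the protocol rather than the exact implementation.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Crude content-word approximation: ignore a small closed-class stopword list.
STOPWORDS = {"a", "an", "the", "of", "to", "in", "on", "and", "or", "is", "was", "has", "had", "by", "who"}

def content_len(sentence: str) -> int:
    return sum(1 for tok in sentence.lower().split() if tok.strip('.,;:"\'') not in STOPWORDS)

def compression_rate(sent_1: str, sent_2: str, union: str) -> float:
    """CR(short, long, union) = 1 - (|union| - |long|) / |short|, with lengths in content words."""
    short, long_ = sorted([content_len(sent_1), content_len(sent_2)])
    return 1.0 - (content_len(union) - long_) / max(short, 1)  # guard against empty sentences

def delta_cr(sent_1: str, sent_2: str, predicted: str, reference: str) -> float:
    """Positive values: the system compresses more than the reference union does."""
    return compression_rate(sent_1, sent_2, predicted) - compression_rate(sent_1, sent_2, reference)

# Bidirectional entailment (NLI) between the reference and predicted unions.
NLI_MODEL = "microsoft/deberta-v2-xxlarge-mnli"  # assumed checkpoint name; any MNLI model works
nli_tokenizer = AutoTokenizer.from_pretrained(NLI_MODEL)
nli_model = AutoModelForSequenceClassification.from_pretrained(NLI_MODEL)
ENTAIL_ID = next(i for i, lbl in nli_model.config.id2label.items() if "entail" in lbl.lower())

def entails(premise: str, hypothesis: str, threshold: float = 0.5) -> bool:
    inputs = nli_tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli_model(**inputs).logits.softmax(dim=-1)[0]
    return probs[ENTAIL_ID].item() >= threshold

def nli_match(reference: str, predicted: str) -> bool:
    """The prediction counts as matching if entailment holds in both directions."""
    return entails(reference, predicted) and entails(predicted, reference)
```

For comparable ∆CR values, the content-word counting here should mirror whatever tokenization and stopword handling was used to compute CR on the dataset side.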
}, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we advocate for using the sentence union task as a testbed for multi-text consolidation. We release a realistic dataset, together with a set of analyses that show that the dataset is of high quality, and challenging for multi-document consolidation efforts. We evaluate the performance of state-of-the-art pretrained large language models on text consolidation, where our findings suggest key challenges for future research. Future research may expand upon our dataset to include consolidation beyond 2 input sentences, and may examine the use of explicit text consolidation structures for improving multi-text consolidation in large language models." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "We enumerate some limitations to our work. While we did create the largest union dataset to date, it is still of moderate size. As shown by our learning curves (App. G), the amount of training data we created seemed sufficient to saturate the learning of the models with which we experimented, but it might still be found insufficient for training other models.\nOur annotation protocol might have influenced the compression rates of the unions, as we instructed workers to annotate sentence unions by first choosing a base sentence and then highlighting the other sentence. Additionally, while the highlighting facilitates the annotation process, it cannot directly be used for analyses of the dataset since it is uni-directional.\nThe dataset includes only input with exactly two sentences and it might be desirable for future works to also be able to train systems that take more than two sentences as input. Our dataset is also domain specific, in that all the sentences are taken from news sources. This might result in challenging cross-domain generalization.\nThis dataset is limited to the English language. While the suggested annotation protocol seemingly fits other languages, the step in which words are highlighted might prove problematic for morphologically rich languages, in which a single word includes many pieces of information. A segmentation of the text before annotation might be required." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b31" ], "table_ref": [], "text": "Crowdsourcing To crowdsource the dataset, we used the Amazon Mechanical Turk6 (MTurk) platform. To participate in the first stage of recruitment, workers were required to possess the following MTurk qualifications:\n• NumberHITsApproved greater than 10000\n• PercentAssignmentsApproved greater than 98%\n• WorkerLocale in US, CA, AU, GB, NZ Workers were paid $0.3 for each sentence union annotation assignment, as well as a $1.25 bonus for every 100 assignments, and $0.4 for each evaluation assignment, as well as a $1 bonus for every 50 assignments. Overall, by an average approximation of 1.8 minutes for the first assignment, and 2.4 minutes for the second assignment, their wage is expected to start from $10 per hour and increase as the workers are more familiar with the task and start receiving bonuses. Workers were informed that the ratings they will provide will be used to evaluate artificial intelligence models which were trained on the data they annotated.\nDataset The texts that workers write that are included in our dataset are limited to the information expressed in the source sentences. 
The source sentences originate from the datasets mentioned in §4.1, which include only texts available in public news sources and were previously made available by Weiss et al. (2021). Our dataset does not contain information that would make it possible to reconstruct the original documents, or any human annotations, such as the summary or coreference resolution annotation, from the original datasets." }, { "figure_ref": [], "heading": "A Skip Guidelines", "publication_ref": [], "table_ref": [], "text": "In Section 4.2, it was noted that there are cases where generating a union from a pair of sentences" }, { "figure_ref": [], "heading": "Category Count", "publication_ref": [ "b31" ], "table_ref": [ "tab_1" ], "text": "No information consolidation 19 Unnatural union 7 Mistake 3 Missing context 1\nTable 5: An analysis of 30 cases that were skipped by workers during the annotation process. Among these, some were categorized as mistakes, meaning that they should not have been skipped.\nis not suitable, and workers were given the option to skip the annotation for such examples. This section outlines the specific scenarios in which workers were directed to skip examples. Eventually, our annotators skipped 458 sentence pairs from the original dataset that we used as input, as shown in Table 1. An analysis of a sample of 30 such cases is presented in Table 5, categorized based on the criteria below. In conclusion, we found that the dataset we used as the source of our sentence pair instances, which was originally developed by Weiss et al. (2021) for aligning predicate-argument structures (represented as question-answer pairs), includes a significant number of instances where information consolidation in the form of sentence union is mostly irrelevant.\nNo information consolidation. One case in which workers were directed to skip examples during annotation is when there is no partially overlapping information to consolidate from two related sentences, hence their union would simply be a concatenation of the two. This case is referred to as \"No information consolidation\". An example of this scenario is when sentence 1 mentions that \"Acupuncture is the ancient Chinese medical therapy technique of inserting thin, sharpened needles into specific nerve junction points of the body,\" and sentence 2 mentions a study that found \"53.8 percent of the subjects who had needles inserted in four acupuncture \"zones\" in the ear five times a week tested free of cocaine at the end of the eight-week study period.\" In this case, there is no need to consolidate the information from the two sentences as they provide distinct pieces of information. Sentence 1 explains what is acupuncture while sentence 2 discusses a study about it. Unnatural union. An example of an \"Unnatural union\" scenario is when unifying two input sentences would form an awkward or unnatural sentence. For instance, if the first sentence is written in the past tense and the second one in the future tense, unifying them could lead to an unnatural sentence union. As an example, consider the following sentences: \"Fannie Mae's board met Sunday night to discuss Raines' future\" and \"The directors of Fannie Mae, the big mortgage finance company, will meet Sunday to consider the fate of two senior executives who signed off on financial statements that violated accounting rules, people close to the company said Friday.\" Here, the first sentence uses the past tense while the second sentence uses the future tense. 
It would be more natural to use the past tense in the sentence union since the event occurred in the past. However, incorporating the information that someone said something on Friday before the event could result in an awkward sentence union.\nMissing context. This case happens when two sentences need to be interpreted in the broader text context, which is missing in our annotation scenario, for example when there is a dangling reference to an entity that is not specified in the given sentence. This is often not problematic, unless understanding the identity of the entity is necessary to create the union. For instance, one sentence quotes a person, while the other sentence does not mention the speaker. An example of this scenario is the following: \"Sadly, because Magic Leap seldom hires and does not actively recruit female candidates, the company loses competitive advantage to products like Microsoft's Hololens.\" and \"When Tannen Campbell was hired by Magic Leap in 2015, the Florida company had no women in leadership roles and its only idea to make its product femalefriendly was to release a pink version, according to Forbes.\" Merging these two sentences is not straightforward due to the lack of context.\nDisagreements. Sometimes, there are two statements that contradict or disagree with one another. For example, sentence 1 is \"Video of Brooklyn Mother of 13 Zurana Horton shot and killed in a gang shooting was revealed Thursday .\" and sentence 2 is \"A shocking video released for the first time Thursday captures the moment a Brooklyn mother of 12 was killed in a gang shootout as she picked her daughter up from school .\". Sentence 1 mentions that the child is 13 years old while sentence 2 mentions that the child is 12 years old. " }, { "figure_ref": [], "heading": "B Subtle annotation Cases", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "In Section 4.2 we noted that certain special cases arose when generating a union from a pair of sentences, and were included in the instructions for annotators. This section outlines the specific instructions provided to workers, with an analysis of 50 cases (Table 6), categorized based on various criteria as described below.\nAttribution. One potential issue is when the source sentences make attributions to a specific source, such as a news agency. An example of this can be seen in sentence 1 \"Video of Brooklyn Mother Zurana Horton being shot and killed was revealed Thursday, according to the N.Y. Daily News.\" and sentence 2 \"A shocking video released for the first time Thursday captures the moment a Brooklyn mother was killed as she picked her daughter up from school.\", where the new information in sentence 2 is attributed to the video content, rather than to the N.Y. Daily News. Another example is when a sentence contains quotes, as changing a quote to contain more information would create an unfaithful sentence union. In such cases, the workers were allowed, whenever it seemed reasonable, to attribute combined pieces of information originating from the two sentences to a reported source, even if only parts of the combined information were explicitly attributed to this source, in one of the sentences.\nRelative dates. Some sentences may mention a specific time relative to when the sentence was written, such as \"yesterday\" or \"Monday\", which implies that the sentence was written in the same week of the event. 
Workers were instructed to assume that the date of publication is known, so there is no difference between the mention of \"yesterday\" and \"Monday\", but, for example, that \"yesterday\" is more specific than \"earlier this month\".\nWorld knowledge. In some cases, sentences may mention the same piece of information in different levels of specificity, which requires world knowledge to identify. Workers were instructed to assume common world knowledge when creating the sentence union. An example is given for Paris, which is both a city in Texas and the capital of France.\nBefore and after an event. For sentences referring to events, some may differ in their time of publication compared to the event itself. Workers were instructed to use the past tense, as the sentence union is written after the event. For example, sentence 1 mentions an event that has already happened \"After leaving Alderson at 12:30 a.m. on March 3, 2005, Martha Steward declared the 5-month experience as \"life altering and life affirming.\"\", while sentence 2 was written before the event \"US lifestyle guru Martha Stewart is expected to leave jail on Friday after a five-month sentence for a stock scandal that reinvigorated her career rather than dooming it.\". In this case, the sentence union should be written in the past tense, as it refers to an event that has already occurred." }, { "figure_ref": [ "fig_3" ], "heading": "C Annotation Process", "publication_ref": [ "b26" ], "table_ref": [], "text": "Screenshots of the entire annotation process are depicted in Figure 6. Guidelines for creating sentence unions7 include writing one coherent sentence, ordering the information in a stand-alone manner (as if the sentence would have been written from scratch), meaning that the writing process should not be distracted by the original split and ordering of information in the two input sentences. To the extent possible, the sentence union should preserve the original wording of the information, but phrasing may be minimally adjusted to create a coherent sentence union. Each piece of information should appear only once in the sentence union. When there is a redundancy across the two sentences, the more specific phrasing should be chosen.\nThe interface helps the workers to avoid making common mistakes. For example, in order to reduce redundancies of information in the union, if a highlighted word already exists in the base sentence, both word mentions will be marked to draw the worker's attention. Another example is warning the worker when the sentence union contains nonhighlighted words from the base sentence. Also, when integrating highlighted words into the sentence union, the worker will see yellow highlights turn into green highlights. If the worker tries to submit the annotation with yellow highlights, the system will raise an alert.\nTo ensure the quality in annotators' judgements, our process follows the controlled crowdsourcing approach (Roit et al., 2020), which includes a recruitment phase, two training phases accompanied by extensive guidelines, and ongoing monitoring during the annotation of the production task. Workers were allowed to participate in primary tasks only if they had completed the entire process. Only workers who performed well on the recruitment phase were accepted to the next training phases. The training phases were created manually, including subtle annotation cases. After each annotation, workers were shown gold target highlights and sentence unions8 for comparison with their own output." 
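The two interface checks described above (marking highlighted words that already appear in the base sentence, and alerting on highlighted spans that were not integrated into the union) can be approximated as follows; the actual tool's matching logic is not published in this text, so this sketch is only indicative:

```python
def words(text: str) -> set:
    """Lowercased word set, with basic punctuation stripped."""
    return {w.strip('.,;:"\'').lower() for w in text.split() if w.strip('.,;:"\'')}

def redundancy_warnings(base_sentence: str, highlighted_spans: list) -> set:
    """Highlighted words that already appear in the base sentence (possible redundancy)."""
    highlighted = set().union(*(words(span) for span in highlighted_spans)) if highlighted_spans else set()
    return highlighted & words(base_sentence)

def unintegrated_highlights(union: str, highlighted_spans: list) -> set:
    """Highlighted words that do not appear in the written union (would trigger an alert)."""
    highlighted = set().union(*(words(span) for span in highlighted_spans)) if highlighted_spans else set()
    return highlighted - words(union)
```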
}, { "figure_ref": [], "heading": "D Cleaning Annotations", "publication_ref": [], "table_ref": [], "text": "Disjoint sentences Following the skip guidelines (see App. A), we automatically identified examples which their sentences are mutually exclusive and their sentence union is a concatenation of the source sentences. We find these instances by comparing content words only, since connecting the two sentences sometimes involves non-semantic lexical changes (e.g., adding a semicolon or a comma). Due to the fact that there is no consolidation of information in such examples, we see them unfit for a union, as mentioned in §4.1, and they were not included in the dataset. We leave the automatic categorization of sentences into whether or not they are suitable for sentence unions to future work.\nQuotes Following the attribution discussion in App. B, we manually reviewed examples where the union contained a quote that was not in any of the source sentences, as well as any example that had a sentence which used a first-person perspective (e.g., \"I\", \"we\", \"mine\", \"ours\", ...)." }, { "figure_ref": [], "heading": "E In-Context Learning", "publication_ref": [], "table_ref": [], "text": "For the in-context learning approach, we used a temperature value of 0.4 and the following prompt: In this task, you will be presented with two sentences that overlap in information, and you are tasked to merge the information of the two into a single unifying sentence without redundancies. Important: Do not omit information. Important: Do not repeat information.\nHere is an example of a correct union and a wrong union: Sentence 1: The February assassination of former Lebanon Prime Minister Hariri put Syria under renewed pressure from the international community to abide by U.N. Security Council Resolution 1559 and withdraw its troops from Lebanon. Sentence 2: Foreign ministers from all The union is wrong, because it does not mention that foreign ministers gathered for a meeting on Wednesday.\nPlease generate a correct union to the following sentences:\nSentence 1: <sentence 1 goes here> Sentence 2: <sentence 2 goes here> Correct union:" }, { "figure_ref": [], "heading": "F Training Details", "publication_ref": [], "table_ref": [], "text": "We fine-tuned T 5 large and PRIMERA models for 20 epochs on a Tesla V100-SXM2-32GB GPU. We used a hyperparameter random search strategy. The learning rate was tuned within the range [1e -8, 5e -5], while the batch size varied between [8,16,32]. We also explored the weight decay range of [0, 0.5] and warump step range of [0, 300]. The best model was selected based on the ROU GE1 metric.9 The best T5 model was obtained with a learning rate of 4.3e -6, no weight decay, no warmup steps, batch size of 32, after 18 epochs. For the best-performing PRIMERA model, we used a learning rate of 3.5e -6, weight decay of 0.5, warmup steps of 80, batch size of 16 and selected the best checkpoint after 9 epochs. The training time for T 5 large and PRIMERA models were approximately 1 hour and 10 minutes each.\nInput structure When concatenating the two source sentences to insert as input for the model, we add special separator tokens to make the model aware of the sentence boundaries. For T 5 large , we separated between the source sentences in the input using a newly created special token, while for PRIMERA, we used the <doc-sep> token, which was used in the pre-training phase to separate between input source documents." 
}, { "figure_ref": [ "fig_4" ], "heading": "G Learning Curves", "publication_ref": [], "table_ref": [], "text": "To assess the adequacy of our dataset size, we evaluated the baseline models on different subsets of our training data ([25%, 50%, 75%, 100%]) and various model sizes (T 5 base and T 5 large ). Based on our findings (Figure 7), it appears that enhancing the model size from T 5 base to T 5 large results in performance improvement. However, the marginal benefit of increasing training dataset size may be limited, and further gains may not be significant." }, { "figure_ref": [ "fig_6", "fig_6", "fig_6", "fig_6", "fig_6" ], "heading": "H Evaluation Process", "publication_ref": [], "table_ref": [], "text": "As explained in Section 7, the evaluation process involves a comparative approach, whereby all the unions of system-generated sentences are evaluated simultaneously, as shown in Figure 8. The evaluation is conducted separately for four criteria. To assess the content differences between the reference union and the system union, including coverage and faithfulness, a single sentence is designated as the base sentence, and the worker is asked to evaluate the other sentence based on the amount of missing content. The reference union serves as the base sentence for evaluating coverage, while the system union is used as the base sentence for evaluating faithfulness since any information present in the system union but absent in the reference union is deemed unfaithful. In evaluating redundancy and fluency, the evaluator is only presented with the system union without the reference union.\nTo assess the coverage and faithfulness criteria, the workers are required to compare the generated union with the reference union, aided by red strikethroughs on words that are not included in the generated union and green highlights on words that are not included in the reference union, as illustrated in Figures 8a and8b. For redundancy and fluency criteria, the reference union is not needed, as demonstrated in Figures 8c and8d." }, { "figure_ref": [], "heading": "I Example Sentence Unions", "publication_ref": [], "table_ref": [], "text": "See Table 7 for examples of sentence unions, including the sentence unions from each predicted system." }, { "figure_ref": [], "heading": "J Error Analysis", "publication_ref": [], "table_ref": [], "text": "In order to perform an error analysis, we analyzed 20 examples that were rated less than perfect for all metrics based on the human evaluation (see §8.1). The findings are presented in Table 8, with one representative example from each subcategory included in Table 9. Our key observation is that models make various coverage errors as they fail to identify the uni-directional entailment correctly in the dataset. Furthermore, models make multiple coverage and faithfulness errors by incorrectly combining information and attaching it to the wrong entity or predicate. This includes cases where either the entailing part is missing and the entailed part is present in the sentence or both the entailing and entailed parts are present in the sentence. Wrong attachment 13 13 1 This includes cases where an argument is attributed to the wrong predicate or entity. Lexical similar but different information 8 0 0 This includes cases where information is omitted, and the omitted information had a phrase that was lexically similar to a phrase in the other sentence. 
Ignores prefix 4 0 0 This includes cases where the prefix to the sentence in the source is omitted from the union.\nRelated new information 2 0 0 This includes cases where the source sentences contain related but different information, and one of them is not included in the union. This includes cases where paraphrased information from the source is repeated in the union.\nExternal hallucination 0 3 0 This includes cases where there is information in the union that does not originate from the source sentences.\nTable 8: Error analysis based on a sample of 20 erroneous examples, each example analyzed for the 3 system outputs. For each metric, we report the frequency of a subcategory that we suspect is the cause for the error. One representative example from each subcategory is included in Table 9." }, { "figure_ref": [], "heading": "Prediction Explanation Subcategorization", "publication_ref": [], "table_ref": [], "text": "External hallucination Peter Capaldi was revealed as the 12th Doctor of the Doctor Who series during a special live broadcast, with the announcement being made that he had been cast as the 12th Time Lord.\nThe mention of a live broadcast is not part of the source sentences. Interestingly, this is true, which indicates that the model knows this story. Lexical similar but different information Sgt. Tim Shields and Attorney-General Wally Oppal announced Wednesday that the RCMP arrested two Bountiful residents, Winston K. Blackmore, 52, and James Oler, 44, on charges of polygamy.\nSource sentence mentioned \"and leaders of a polygamist group\". This was possibly skipped due to the model incorrectly recognizing \"polygamy\" later as a paraphrase. Uni-directional entailment A strong 6.1-magnitude earthquake which hit the Indonesian northwestern province of Aceh on Tuesday killed a child, injured dozens and destroyed buildings, sparking panic in a region devastated by the quake-triggered tsunami of 2004.\nSentence 2 mentions \"injuring at least 50 people\" which entails \"dozens injured\" in sentence 1, but it is not mentioned in the union." }, { "figure_ref": [], "heading": "Ignores prefix", "publication_ref": [], "table_ref": [], "text": "The 55-year-old Scottish actor Peter Capaldi is officially set to replace exiting star Matt Smith, who announced in June that he was leaving the sci-fi show later this year, as the TARDIS leader, as producer Steven Moffat announced on the live BBC special Doctor Who Live: The Next Doctor Sunday.\nIgnores the information about it being the 12th doctor, which was mentioned in a sentence prefix: \"Doctor Who has finally selected its 12th doctor: Peter Capaldi is officially set to ...\"." }, { "figure_ref": [], "heading": "Related new information", "publication_ref": [], "table_ref": [], "text": "Industry analysts contacted by eWEEK generally say they believe that Hewlett-Packard's $13.9 billion acquisition of Electronic Data Systems, which was officially announced on May 13 and is currently being negotiated, is a good move for both companies, although there will be the usual integration snafus such as vendor neutrality issues, business lines, culture shock and layoffs.\n\"good move for both companies\" and \"a deal that could help the world's largest personal computer maker snap up more data management and consulting contracts\" are different, and both should be mentioned in the union." 
}, { "figure_ref": [], "heading": "Paraphrase", "publication_ref": [], "table_ref": [], "text": "In France, art dealers are obliged by law to register all purchased art, except those bought at public auction.\nSentence 1 mentions \"art dealers ... purchases\", and sentence 2 mentions \"dealers ... purchased art\". Since these are paraphrases, the union which repeates both \"art dealers\" and \"purchased art\" is repetitive." }, { "figure_ref": [], "heading": "Wrong attachment", "publication_ref": [], "table_ref": [], "text": "The flight recorder was recovered on November 9 and revealed that the autopilot was disconnected, the descent appeared \"controlled,\" the cockpit turned off both engines, and the elevators were out of unison, something experienced pilots would not do.\n\"something experienced pilots would not do\" refers to turning out both engines, not elevators out of unison. This is usually caused by an incorrect merge of the sentences.\nTable 9: Examples for the subcategories we devised during the model error analysis, which we suspect are are the cause for the error." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "The work described herein was supported in part by grants from One AI, the Israel Science Foundation 2827/21 and the Israel Ministry of Science and Technology. We would like to thank the workers who have annotated this dataset and we appreciate their dedication in ensuring a high level of quality. We express our gratitude to Dr. Kapil Thadani for assisting us in retrieving his data from an earlier research endeavor." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Sentence 1 French museum officials traveled to New York last month and confirmed the find is indeed the missing Picasso work, which the Centre Georges Pompidou realized was missing from its storerooms in 2001 following a loan request; it was then valued at more than $2.5 million. Sentence 2 The canvas had been smuggled out of a storeroom of the Centre Georges Pompidou, the Paris museum and arts center, and its whereabouts had not been known. Gold union French museum officials traveled to New York last month and confirmed the find is indeed the missing Picasso canvas smuggled out of a storeroom of the Centre Georges Pompidou, the Paris museum and arts center, which realized it was missing in 2001 following a loan request; it was then valued at more than $2.5 million." }, { "figure_ref": [], "heading": "T 5 large", "publication_ref": [], "table_ref": [], "text": "French museum officials traveled to New York last month and confirmed the find is indeed the missing Picasso work, which the Centre Georges Pompidou realized was missing from its storerooms in 2001 following a loan request; it was then valued at more than $2.5 million, and its whereabouts had not been known.\nCoverage: 2.0 Table 7: Examples of predicted union sentences from each baseline system and their corresponding human evaluation." }, { "figure_ref": [], "heading": "Coverage Faithfulness Repetition Subcategory Explanation Subcategorization", "publication_ref": [], "table_ref": [], "text": "Uni-directional entailment 17 2 5" } ]
2023-05-24
10.1016/j.eswa.2021.116154
[ { "authors": "Raksha Agarwal; Niladri Chatterjee", "journal": "Expert Systems with Applications", "ref_id": "b0", "title": "Improvements in multi-document abstractive summarization using multi sentence compression with word graph and node alignment", "year": "2022" }, { "authors": "Eneko Agirre; Daniel Cer; Mona Diab; Aitor Gonzalez-Agirre; Weiwei Guo", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "SEM 2013 shared task: Semantic textual similarity", "year": "2013" }, { "authors": "Regina Barzilay; Kathleen R Mckeown; Michael Elhadad", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Information fusion in the context of multi-document summarization", "year": "1999" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Avi Caciularu; Arman Cohan; Iz Beltagy; Matthew Peters; Arie Cattan; Ido Dagan", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "CDLM: Cross-document language modeling", "year": "2021" }, { "authors": "Chris Callison-Burch; Cameron Fordyce; Philipp Koehn; Christof Monz; Josh Schroeder", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "meta-) evaluation of machine translation", "year": "2007" }, { "authors": "Thiago Castro Ferreira; Claire Gardent; Nikolai Ilinykh; Chris Van Der Lee; Simon Mille; Diego Moussallem; Anastasia Shimorina", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "The 2020 bilingual, bi-directional WebNLG+ shared task: Overview and evaluation results (WebNLG+ 2020)", "year": "2020" }, { "authors": "Ting Chen; Simon Kornblith; Kevin Swersky; Mohammad Norouzi; Geoffrey E Hinton", "journal": "", "ref_id": "b7", "title": "Big self-supervised models are strong semi-supervised learners", "year": "2020" }, { "authors": "Agata Cybulska; Piek Vossen", "journal": "European Language Resources Association (ELRA", "ref_id": "b8", "title": "Using a sledgehammer to crack a nut? 
lexical diversity and event coreference resolution", "year": "2014" }, { "authors": "Hal Daume; Iii ; Daniel Marcu", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Generic sentence fusion is an ill-defined summarization task", "year": "2004" }, { "authors": "Alexander R Fabbri; Wojciech Kryściński; Bryan Mc-Cann; Caiming Xiong; Richard Socher; Dragomir Radev", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b10", "title": "SummEval: Re-evaluating summarization evaluation", "year": "2021" }, { "authors": "Alexander R Fabbri; Irene Li; Tianwei She; Suyi Li; Dragomir R Radev", "journal": "", "ref_id": "b11", "title": "Multi-news: a large-scale multi-document summarization dataset and abstractive hierarchical model", "year": "2019" }, { "authors": "Angela Fan; Yacine Jernite; Ethan Perez; David Grangier; Jason Weston; Michael Auli", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "ELI5: Long form question answering", "year": "2019" }, { "authors": "Tanvir Ahmed Fuad; Mir Tafseer Nayeem; Asif Mahmud; Yllias Chali", "journal": "Computer Speech & Language", "ref_id": "b13", "title": "Neural sentence fusion for diversity driven abstractive multi-document summarization", "year": "2019" }, { "authors": "Mor Geva; Eric Malmi; Idan Szpektor; Jonathan Berant", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "DiscoFuse: A large-scale dataset for discourse-based sentence fusion", "year": "2019" }, { "authors": "John Giorgi; Luca Soldaini; Bo Wang; Gary Bader; Kyle Lo; Lucy Lu Wang; Arman Cohan", "journal": "", "ref_id": "b15", "title": "Exploring the challenges of open domain multi-document summarization", "year": "2022" }, { "authors": "Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen", "journal": "", "ref_id": "b16", "title": "Deberta: Decoding-enhanced bert with disentangled attention", "year": "2020" }, { "authors": "Emiel Krahmer; Erwin Marsi; Paul Van Pelt", "journal": "", "ref_id": "b17", "title": "Query-based sentence fusion is better defined and leads to more preferred results than generic sentence fusion", "year": "2008" }, { "authors": "Logan Lebanoff; John Muchovej; Franck Dernoncourt; Soon Doo; Lidan Kim; Walter Wang; Fei Chang; Liu", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Understanding points of correspondence between sentences for abstractive summarization", "year": "2020" }, { "authors": "Logan Lebanoff; Bingqing Wang; Zhe Feng; Fei Liu", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Modeling endorsement for multi-document abstractive summarization", "year": "2021" }, { "authors": "Kathleen Mckeown; Sara Rosenthal; Kapil Thadani; Coleman Moore", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Time-efficient creation of an accurate sentence fusion corpus", "year": "2010" }, { "authors": "Ian Mcleod", "journal": "R Package Kendall", "ref_id": "b21", "title": "Kendall rank correlation and mann-kendall trend test", "year": "2005" }, { "authors": "Reiichiro Nakano; Jacob Hilton; Suchir Balaji; Jeff Wu; Long Ouyang; Christina Kim; Christopher Hesse; Shantanu Jain; Vineet Kosaraju; William Saunders; Xu Jiang; Karl Cobbe; Tyna Eloundou; Gretchen Krueger; Kevin Button; Matthew Knight; Benjamin Chess; John Schulman", "journal": "", "ref_id": "b22", "title": "Webgpt: Browserassisted question-answering with human feedback", "year": "2022" }, 
{ "authors": "Shashi Narayan; Shay B Cohen; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization", "year": "2018" }, { "authors": "Jekaterina Novikova; Verena Ondř Ej Dušek; Rieser", "journal": "Association for Computational Linguistics. OpenAI", "ref_id": "b24", "title": "RankME: Reliable human ratings for natural language generation", "year": "2018" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "", "ref_id": "b25", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2019" }, { "authors": "Paul Roit; Ayal Klein; Daniela Stepanov; Jonathan Mamou; Julian Michael; Gabriel Stanovsky; Luke Zettlemoyer; Ido Dagan", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Controlled crowdsourcing for high-quality QA-SRL annotation", "year": "2020" }, { "authors": "Aviv Slobodkin; Paul Roit; Eran Hirsch; Ori Ernst; Ido Dagan", "journal": "", "ref_id": "b27", "title": "Controlled text reduction", "year": "2022" }, { "authors": "Kapil Thadani; Kathleen Mckeown", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Towards strict sentence intersection: Decoding and evaluation strategies", "year": "2011" }, { "authors": "Kapil Thadani; Kathleen Mckeown", "journal": "Asian Federation of Natural Language Processing", "ref_id": "b29", "title": "Supervised sentence fusion with single-stage inference", "year": "2013" }, { "authors": "Romal Thoppilan; Daniel De Freitas; Jamie Hall; Noam Shazeer; Apoorv Kulshreshtha; Heng-Tze; Alicia Cheng; Taylor Jin; Leslie Bos; Yu Baker; Yaguang Du; Hongrae Li; Huaixiu Lee; Amin Steven Zheng; Marcelo Ghafouri; Yanping Menegali; Maxim Huang; Dmitry Krikun; James Lepikhin; Dehao Qin; Yuanzhong Chen; Zhifeng Xu; Adam Chen; Maarten Roberts; Vincent Bosma; Yanqi Zhao; Chung-Ching Zhou; Igor Chang; Will Krivokon; Marc Rusch; Pranesh Pickett; Laichee Srinivasan; Kathleen Man; Meredith Ringel Meier-Hellstern; Tulsee Morris; Renelito Delos Doshi; Toju Santos; Johnny Duke; Ben Soraker; Vinodkumar Zevenbergen; Mark Prabhakaran; Ben Diaz; Kristen Hutchinson; Alejandra Olson; Erin Molina; Josh Hoffman-John; Lora Lee; Ravi Aroyo; Alena Rajakumar; Matthew Butryna; Viktoriya Lamm; Joe Kuzmina; Aaron Fenton; Rachel Cohen; Ray Bernstein; Blaise Kurzweil; Claire Aguera-Arcas; Marian Cui; Ed Croak; Quoc Chi; Le", "journal": "", "ref_id": "b30", "title": "Lamda: Language models for dialog applications", "year": "2022" }, { "authors": "Daniela Brook Weiss; Paul Roit; Ayal Klein; Ori Ernst; Ido Dagan", "journal": "", "ref_id": "b31", "title": "Qa-align: Representing crosstext content overlap by aligning question-answer propositions", "year": "2021" }, { "authors": "Adina Williams; Nikita Nangia; Samuel R Bowman", "journal": "", "ref_id": "b32", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "year": "2017" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for Computational 
Linguistics", "ref_id": "b33", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Wen Xiao; Iz Beltagy; Giuseppe Carenini; Arman Cohan", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "PRIMERA: Pyramid-based masked sentence pre-training for multi-document summarization", "year": "2022" } ]
[ { "formula_coordinates": [ 6, 128.45, 410.14, 44.33, 17.72 ], "formula_id": "formula_0", "formula_text": "w unf aithf ul w total" } ]
Revisiting Sentence Union Generation as a Testbed for Text Consolidation
Tasks involving text generation based on multiple input texts, such as multi-document summarization, long-form question answering and contemporary dialogue applications, challenge models for their ability to properly consolidate partly-overlapping multi-text information. However, these tasks entangle the consolidation phase with the often subjective and ill-defined content selection requirement, impeding proper assessment of models' consolidation capabilities. In this paper, we suggest revisiting the sentence union generation task as an effective well-defined testbed for assessing text consolidation capabilities, decoupling the consolidation challenge from subjective content selection. To support research on this task, we present refined annotation methodology and tools for crowdsourcing sentence union, create the largest union dataset to date and provide an analysis of its rich coverage of various consolidation aspects. We then propose a comprehensive evaluation protocol for union generation, including both human and automatic evaluation. Finally, as baselines, we evaluate state-of-the-art language models on the task, along with a detailed analysis of their capacity to address multi-text consolidation challenges and their limitations.
Eran Hirsch; Valentina Pyatkin; Ruben Wolhandler; Avi Caciularu; Asi Shefer; Ido Dagan
[ { "figure_caption": "Figure 3 :3Figure 3: A screenshot of the sentence union text generation annotation interface. The screenshot shows the last step, where the worker already choose sentence 1 as the base sentence [1], highlighted the new or more specific information in sentence 2 [2] and wrote the final sentence union (\"Merged sentence\") [3].", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Compression Rate (CR) vs. the frequency of each CR bin, for the train/dev/test dataet splits.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: A histogram of minimal system scores, testing for coverage, faithfulness or redundancy mistakes.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The interface used for the annotation process.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: An evaluation of T 5 models on different subsets of our training data [25%, 50%, 75%, 100%], as well as different model sizes (T 5 base and T 5 large ). The number of parameters is indicated for each model.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: The interface used for the evaluation of a predicted sentence union's quality.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "[S1] On March 1st, Major General Weightman, Chief of Walter Reed, was fired by Army Secretary Harvey becausethe army had lost trust and confidence in him.[S2] Army Secretary Francis Harvey, who dismissed Walter Reed commander General George Weightman theprevious Thursday, has resigned himself.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Sizes of the splits of our dataset, as well as of the skipped examples (19.3% ofWeiss et al. (2021)).", "figure_data": "Split Train Dev Test SkippedSize 1087 349 477 458", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation of union quality.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The ordinal scales used for the content (coverage & faithfulness) and redundancy measures.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Distribution of subtle annotation cases in a sample of 50 instances (some instances belong to more than one category).", "figure_data": "CategoryCountAttribution12Relative dates4World knowledge2Before and after an event0No subtle case of above categories34", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Fabbri et al., 2019)", "Explanation": "The cited work by Fabbri et al. (2019) provides a method for text consolidation in the context of Multi-Document Summarization, which the citing paper builds upon in their research on the same task."}, {"Category": "Methodological Basis", "Citation": "(Giorgi et al., 2022)", "Explanation": "The cited work by Giorgi et al. (2022) contributes to the field of text consolidation in the context of Multi-Document Summarization, which the citing paper may have adopted or adapted in their research."}, {"Category": "Methodological Basis", "Citation": "(Fan et al., 2019)", "Explanation": "The cited work by Fan et al. (2019) discusses the use of text consolidation in the context of long-form question answering, which the citing paper may have considered in their research on the same task."}, {"Category": "Methodological Basis", "Citation": "(Nakano et al., 2022)", "Explanation": "The cited work by Nakano et al. (2022) provides insights on the use of text consolidation in the context of long-form question answering, which the citing paper may have referenced in their research on the same task."}, {"Category": "Methodological Basis", "Citation": "(Thoppilan et al., 2022)", "Explanation": "The cited work by Thoppilan et al. (2022) discusses the use of text consolidation in the context of contemporary dialogue applications, which the citing paper may have considered in their research on the same task."}, {"Category": "Methodological Basis", "Citation": "(OpenAI, 2023)", "Explanation": "The cited work by OpenAI (2023) provides information on the use of text consolidation in the context of contemporary dialogue applications, which the citing paper may have referenced in their research on the same task."}, {"Category": "Methodological Basis", "Citation": "(Barzilay et al., 1999)", "Explanation": "The cited work introduces a sentence fusion task that serves as a methodological basis for the research conducted in the citing paper on information consolidation models."}, {"Category": "Methodological Basis", "Citation": "(Thadani and McKeown, 2013)", "Explanation": "The cited work further builds upon the sentence fusion task introduced in the previous work, providing a methodological basis for the study of information consolidation models in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Agarwal and Chatterjee, 2022)", "Explanation": "The cited work offers a more recent contribution to the sentence fusion task, providing a methodological basis for the research on information consolidation models in the citing paper."}, {"Category": "Data Source", "Citation": "(Daume III and Marcu, 2004)", "Explanation": "The cited work is cited for its work on subjective salience-based content selection decisions in the general sentence fusion task, which serves as a data source for the research on information consolidation models in the citing paper."}, {"Category": "Data Source", "Citation": "(Krahmer et al., 2008)", "Explanation": "The cited work is cited for its work on subjective salience-based content selection decisions in the general sentence fusion task, which serves as a data source for the research on information consolidation models in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(McKeown et al., 2010)", "Explanation": "The cited work by McKeown et al. (2010) is used as a reference to establish the scarcity of datasets for the union task in text consolidation research. 
The citing paper builds upon this finding and proposes a new testbed for the task to address the gap in the literature."}, {"Category": "Data Source", "Citation": "(Geva et al., 2019)", "Explanation": "The cited work by Geva et al. (2019) is acknowledged as a dataset for the union task in text consolidation research. The citing paper utilizes this dataset in its research to further explore the task and its challenges."}, {"Category": "Data Source", "Citation": "(Lebanoff et al., 2020)", "Explanation": "The cited work by Lebanoff et al. (2020) is mentioned as a dataset for the union task in text consolidation research. The citing paper uses this dataset to support its research on the task and its related issues."}, {"Category": "Methodological Basis", "Citation": "(Barzilay et al., 1999)", "Explanation": "The cited work by Barzilay et al. (1999) provides a method for sentence fusion in the context of controlled variants of Multi-Document Summarization (MDS), which the citing paper adopts in its research."}, {"Category": "Methodological Basis", "Citation": "(Thadani and McKeown, 2013)", "Explanation": "The cited work by Thadani and McKeown (2013) contributes to the sentence fusion task by providing a method for controlling the fusion of partly-overlapping sentences in the context of MDS."}, {"Category": "Methodological Basis", "Citation": "(Agarwal and Chatterjee, 2022)", "Explanation": "The cited work by Agarwal and Chatterjee (2022) is a more recent contribution to the sentence fusion task in the context of MDS, providing a method for controlling the fusion of partly-overlapping sentences."}, {"Category": "Supporting Evidence", "Citation": "(Narayan et al., 2018)", "Explanation": "The cited work by Narayan et al. (2018) is a foundational study in the field of Multi-Document Summarization (MDS), providing evidence for the use of sentence fusion in the context of MDS."}, {"Category": "Supporting Evidence", "Citation": "(Fabbri et al., 2019)", "Explanation": "The cited work by Fabbri et al. (2019) is another important study in the field of MDS, further supporting the use of sentence fusion in the context of MDS."}, {"Category": "Supporting Evidence", "Citation": "(McKeown et al., 2010)", "Explanation": "The cited work by McKeown et al. (2010) is a key contribution to the sentence fusion task, making a distinction between two strict variants of the task and providing a more consistent approach to sentence fusion in the context of MDS."}, {"Category": "Supporting Evidence", "Citation": "(McKeown et al., 2010)", "Explanation": "The cited work by McKeown et al. (2010) provided a dataset of 300 examples for sentence intersection and sentence union, which was used as a basis for further research on the topic of sentence fusion."}, {"Category": "Extension or Continuation", "Citation": "(Thadani and McKeown, 2011)", "Explanation": "The cited work by Thadani and McKeown (2011) built upon the research of McKeown et al. (2010) by focusing on the intersection fusion part of the dataset, expanding the study of sentence fusion in a new direction."}, {"Category": "Extension or Continuation", "Citation": "(Fuad et al., 2019)", "Explanation": "The cited work by Fuad et al. (2019) further extended the research on sentence fusion by focusing on the intersection part of the dataset, building upon the work of McKeown et al. (2010) and Thadani and McKeown (2011)."}, {"Category": "Extension or Continuation", "Citation": "(Geva et al., 2019)", "Explanation": "The cited work by Geva et al. 
(2019) explored the union of disparate sentences, which is a new direction in the research of sentence fusion that does not address the challenge of consolidating partly overlapping texts."}, {"Category": "Extension or Continuation", "Citation": "(Lebanoff et al., 2021)", "Explanation": "The cited work by Lebanoff et al. (2021) researched the union of disparate sentences, which is a continuation of the research on sentence fusion that focuses on the union of texts with disjoint contents."}, {"Category": "Extension or Continuation", "Citation": "(McKeown et al., 2010)", "Explanation": "The cited work by McKeown et al. (2010) serves as a foundational work in the field of text generation, and the citing paper builds upon this work to further explore the topic."}, {"Category": "Supporting Evidence", "Citation": "(Castro Ferreira et al., 2020)", "Explanation": "The cited work by Castro Ferreira et al. (2020) provides a data-to-text generation task that is similar to the sentence union task in the citing paper, offering a basis for comparison and analysis."}, {"Category": "Extension or Continuation", "Citation": "(Slobodkin et al., 2022)", "Explanation": "The cited work by Slobodkin et al. (2022) introduces a new text reduction task that is similar to the highlight union task in the citing paper, extending the research in the field of text generation."}, {"Category": "Supporting Evidence", "Citation": "(Weiss et al., 2021)", "Explanation": "The cited work by Weiss et al. provides a dataset that is used to identify information overlap between sentences, which is essential for the curation of the dataset used in the citing paper."}, {"Category": "Data Source", "Citation": "(Cybulska and Vossen, 2014)", "Explanation": "The cited work provides the Event Coreference Bank (ECB+) dataset, which the citing paper uses to source sentences for coreferring event and entity mentions in their research."}, {"Category": "Data Source", "Citation": "(Fabbri et al., 2019)", "Explanation": "The cited work provides the MultiNews (MN) dataset, which the citing paper uses to source clusters of news articles and human-written summaries for their research."}, {"Category": "Data Source", "Citation": "(DUC and TAC)", "Explanation": "The cited works provide the Document Understanding Conference (DUC) and the Text Analysis Conference (TAC) datasets, which the citing paper uses to source evaluation datasets for their research."}, {"Category": "Methodological Basis", "Citation": "(Roit et al., 2020)", "Explanation": "The cited work by Roit et al. (2020) provides a controlled crowdsourcing approach for ensuring the quality of annotation decisions in the citing paper."}, {"Category": "Data Source", "Citation": "(Weiss et al., 2021)", "Explanation": "The cited work is the dataset from which the data for the citing paper was derived. 
The analysis in Appendix A highlights the limitations of the dataset in terms of the relationship between sentence pairs, which is not suitable for producing meaningful unions."}, {"Category": "Methodological Basis", "Citation": "(Castro Ferreira et al., 2020)", "Explanation": "The cited work provides a set of evaluation criteria for data-to-text generation tasks, which the citing paper adopts in their human assessment of the generated sentence unions."}, {"Category": "Supporting Evidence", "Citation": "(McKeown et al., 2010)", "Explanation": "The cited work provides a dataset of sentence unions examples that the citing paper uses to compare the quality of the new dataset created in the study."}, {"Category": "Data Source", "Citation": "(McKeown et al., 2010)", "Explanation": "The cited work provides the dataset used in the study conducted in the citing paper, which is the basis for the analysis and results reported."}, {"Category": "Methodological Basis", "Citation": "(Raffel et al., 2019)", "Explanation": "The cited work, T 5 large, is a pre-trained sequence-to-sequence model that the citing paper fine-tunes to use in their research for end-to-end text generation tasks."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2020)", "Explanation": "The cited work is a model that is commonly applied to end-to-end text generation tasks, which the citing paper uses to fine-tune a pre-trained sequence-to-sequence model for their research."}, {"Category": "Methodological Basis", "Citation": "(Xiao et al., 2022)", "Explanation": "The cited work, PRIMERA, is a pre-trained model that the citing paper uses in a cross-document fashion to achieve state-of-the-art results in multi-document summarization datasets. This model is used in the research to create a union of two sentences that originate in different documents."}, {"Category": "Data Source", "Citation": "(Caciularu et al., 2021)", "Explanation": "The cited work is a data source that the citing paper uses in their research to train a pre-trained model in a cross-document fashion for multi-document summarization tasks."}, {"Category": "Extension or Continuation", "Citation": "(Brown et al., 2020)", "Explanation": "The cited work, GP T 3 (Brown et al., 2020), is a pre-trained language model that the citing paper uses in in-context learning to provide instructions and examples to the task at inference time. The research extends the use of this model to a new application in sentence fusion."}, {"Category": "Supporting Evidence", "Citation": "(Callison-Burch et al., 2007)", "Explanation": "The cited work by Callison-Burch et al. (2007) provides a comparative evaluation method for human assessment of the output of multiple baseline systems, which the citing paper adopts in their own human evaluation process."}, {"Category": "Supporting Evidence", "Citation": "(Novikova et al., 2018)", "Explanation": "The cited work by Novikova et al. (2018) also uses a comparative evaluation method for human assessment of the output of multiple baseline systems, which the citing paper builds upon in their own evaluation process."}, {"Category": "Supporting Evidence", "Citation": "(Agirre et al., 2013)", "Explanation": "The cited work by Agirre et al. (2013) presents a human evaluation approach for measuring semantic textual similarity, which the citing paper adapts in their content and redundancy scoring process."}, {"Category": "Supporting Evidence", "Citation": "(Fabbri et al., 2021)", "Explanation": "The cited work by Fabbri et al. 
(2021) uses a Likert scale for fluency evaluation, which the citing paper adopts in their own human assessment of the output of the baseline systems."}, {"Category": "Methodological Basis", "Citation": "(He et al., 2020)", "Explanation": "The cited work provides a pre-trained DeBERT model that the citing paper uses for the bi-directional textual entailment test in the generation task."}, {"Category": "Supporting Evidence", "Citation": "(Williams et al., 2017)", "Explanation": "The cited work contributes the MNLI task to the finetuning of the DeBERT model, which is used in the bi-directional textual entailment test in the generation task."}, {"Category": "Supporting Evidence", "Citation": "(Fabbri et al., 2021)", "Explanation": "The cited work is referenced to establish the standard practice for calculating the Kendall \u03c4 coefficient, which the citing paper uses to assess the correlation between human and automatic evaluation results."}, {"Category": "Data Source", "Citation": "(Weiss et al., 2021)", "Explanation": "The cited work by Weiss et al. (2021) is the source of the texts used in the citing paper to build the dataset."}, {"Category": "Supporting Evidence", "Citation": "(Weiss et al., 2021)", "Explanation": "The cited work by Weiss et al. (2021) is the original source of the sentence pair instances used in the citing paper. The citing paper uses this dataset as a basis for aligning predicate-argument structures in the form of question-answer pairs."}, {"Category": "Methodological Basis", "Citation": "(Roit et al., 2020)", "Explanation": "The cited work by Roit et al. (2020) provides a controlled crowdsourcing approach for ensuring the quality of annotation judgements in the citing paper, which the citing paper adopts in its own research process."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b25", "b20", "b14", "b2", "b3", "b28", "b26", "b32", "b16", "b18" ], "table_ref": [], "text": "Spheres1 serve as a foundational concept in Euclidean space while simultaneously embodying the essence of non-Euclidean geometry through their intrinsic curvature and non-linear nature. This motivated their usage as decision surfaces encompassed by spherical neurons (Perwass et al., 2003;Melnyk et al., 2021). Felix Klein's Erlangen program of 1872 (Hilbert & Cohn-Vossen, 1952) introduced a methodology to unify non-Euclidean geometries, emphasizing the importance of studying geometries through their invariance properties under transformation groups. Similarly, geometric deep learning (GDL) as introduced by Bronstein et al. (2017Bronstein et al. ( , 2021) ) constitutes a unifying framework for various neural architectures. This framework is built from the first principles of geometry-symmetry and scale separation-and enables tractable learning in high dimensions.\nSymmetries play a vital role in preserving structural information of geometric data and allow models to adjust to different geometric transformations. This flexibility ensures that models remain robust and accurate, even when the input data undergo various changes. In this context, spheres exhibit a maximal set of symmetries compared to other geometric entities in Euclidean space.\nThe orthogonal group O(n) fully encapsulates the symmetry structure of an nD sphere, including both rotational and reflection symmetries. Integrating these symmetries into a model as an inductive bias is often a crucial requirement for problems in natural sciences and the respective applications, e.g., molecular analysis, protein design and assessment, or catalyst design (Rupp et al., 2012;Ramakrishnan et al., 2014;Townshend et al., 2021;Jing et al., 2021;Lan et al., 2022).\nIn this paper, we consider data that live in Euclidean space (such as point clouds) and undergo rotations and reflections, i.e., transformations of the O(n)-group. Enriching the theory of steerable 3D spherical neurons (Melnyk et al., 2022a), we present a method for learning O(n)-equivariant deep features using regular n-simplexes2 and nD spheres, which we call Deep Equivariant Hyperspheres (see Figure 1). The name also captures the fact that the vertices of a regular n-simplex lie on an nD sphere, and that our results enable combining equivariant hyperspheres in multiple layers, thereby enabling deep propagation.\nOur main contributions are summarized as follows:\n• We propose O(n)-equivariant spherical neurons, called Deep Equivariant Hyperspheres, readily generalizing to any dimension (see Section 4).\n• We define and analyze generalized concepts for a network composed of the proposed neurons, including the invariant operator modeling the relation between two points and a sphere (20).\n• Conducting experimental validation on both synthetic and real-world data in nD, we demonstrate the soundness of the developed theoretical framework, outperforming the related methods and exhibiting a favorable speed/performance trade-off." 
}, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b6", "b30", "b9", "b5", "b24", "b25", "b20", "b27", "b0", "b31", "b10", "b29", "b16", "b7", "b34", "b19" ], "table_ref": [], "text": "The concept of spheres is also an essential part of spherical convolutional neural networks (CNNs) and CNNs designed to operate on 360° imagery (Coors et al., 2018; Su & Grauman, 2017; Esteves et al., 2018; Cohen et al., 2018; Perraudin et al., 2019).
Figure 1: The central components of Deep Equivariant Hyperspheres (best viewed in color): regular n-simplexes with the nD spherical decision surfaces located at their vertices and the simplex change-of-basis matrices M_n, displayed for n = 2 and n = 3 as M_2 = (1/√3) [[1, (√3-1)/2, -(√3+1)/2], [1, -(√3+1)/2, (√3-1)/2], [1, 1, 1]] and M_3 = (1/2) [[1, 1, -1, -1], [1, -1, 1, -1], [1, -1, -1, 1], [1, 1, 1, 1]] (matrices written row by row).
In contrast, our method does not map input data on a sphere, S^2, nor does it perform convolution on a sphere. Instead, it embeds input in a higher-dimensional Euclidean space by means of a quadratic function. Namely, our work extrapolates the ideas from prior work by Perwass et al. (2003); Melnyk et al. (2021), in which spherical decision surfaces and their symmetries have been utilized for constructing equivariant models for the 3D case (Melnyk et al., 2022a,b). We carefully review these works in Section 3.
Similarly to the approach of Ruhe et al. (2023), our Deep Equivariant Hyperspheres directly operate on the basis of the input points, not requiring constructing an alternative one, such as a steerable spherical harmonics basis. Constructing an alternative basis is a key limitation of many related methods (Anderson et al., 2019; Thomas et al., 2018; Fuchs et al., 2020). Our method also generalizes to the orthogonal group of any dimensionality.
Another type of method, such as that of Finzi et al. (2021), builds equivariant feature maps by computing an integral over the respective group (which is intractable for continuous Lie groups and hence requires coarse approximation). Another category includes methods operating on scalars and vectors: they update the vector information by learning the parameters conditioned on scalar information and multiplying the vectors with it (Satorras et al., 2021; Jing et al., 2021), or by learning the latent equivariant features (Deng et al., 2021).
While the methods operating on hand-crafted O(n)-invariant features are generally not as relevant to our work, the method proposed by Xu et al. (2021) is: their O(3)-invariant descriptor is a sorted Gram matrix (SGM) of input point coordinates, which encodes global context using relative distances and angles between the points. In Section 4.5, we show how this type of computation naturally arises when one considers the relation between two points and a sphere for computing invariant features in our approach, inspired by the work of Li et al. (2001)." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "In this section, we present a comprehensive background on the theory of spherical neurons and their rotation-equivariant version, as well as on the general geometric concepts used in our work." }, { "figure_ref": [ "fig_0" ], "heading": "Spherical neurons via non-linear embedding", "publication_ref": [ "b25", "b20", "b19", "b25", "b25", "b20", "b20", "b25", "b19" ], "table_ref": [], "text": "Spherical neurons (Perwass et al., 2003; Melnyk et al., 2021) are neurons with, as the name suggests, spherical decision surfaces.
By virtue of conformal geometric algebra (Li et al., 2001), Perwass et al. (2003) proposed to embed the data vector x ∈ R n and represent the sphere with center c = (c 1 , . . . , c n ) ∈ R n and radius r ∈ R respectively as\nX = x 1 , . . . , x n , -1, - 1 2 ∥x∥ 2 ∈ R n+2 , S = c 1 , . . . , c n , 1 2 (∥c∥ 2 -r 2 ), 1 ∈ R n+2 ,(1)\nand used their scalar product X ⊤ S = -1 2 ∥x -c∥ 2 + 1 2 r 2 as a classifier, i.e., the spherical neuron:\nf S (X; S) = X ⊤ S,(2)\nwith learnable parameters S ∈ R n+2 . The sign of this scalar product depends on the position of the point x relative to the sphere (c, r): inside the sphere if positive, outside of the sphere if negative, and on the sphere if zero (Perwass et al., 2003). Geometrically, the activation of the spherical neuron (2) determines the cathetus length of the right triangle formed by x, c, and the corresponding point on the sphere (see Figure 2 in Melnyk et al. (2021)).\nWe note that with respect to the data vector x ∈ R n , a spherical neuron represents a non-linear function f S ( • ; S) : R n+2 → R, due to the inherent non-linearity of the embedding (1), and therefore, does not necessarily require an activation function, as observed by Melnyk et al. (2021). The components of S in (1) can be treated as independent learnable parameters. In this case, a spherical neuron learns a nonnormalized sphere of the form S = (s 1 , . . . , s n+2 ) ∈ R n+2 , which represents the same decision surface as its normalized counterpart defined in (1), thanks to the homogeneity of the embedding (Perwass et al., 2003;Li et al., 2001)." }, { "figure_ref": [], "heading": "Equi-and invariance under O(n)transformations", "publication_ref": [ "b20" ], "table_ref": [], "text": "The elements of the orthogonal group O(n) can be represented as n × n matrices R with the properties R ⊤ R = RR ⊤ = I n , where I n is the identity matrix, and det R = ±1, geometrically characterizing nD rotations and reflections. The special orthogonal group SO(n) is a subgroup of O(n) and includes only orthogonal matrices with the positive determinant, representing rotations. We say that a function f : X → Y is O(n)-equivariant if for every R ∈ O(n) there exists the transformation representation, ρ(R), in the function output space, Y, such that\nρ(R) f (x) = f (Rx) for all R ∈ O(n), x ∈ X . (3) We call a function f : X → Y O(n)-invariant if for every R ∈ O(n), ρ(R) = I n . That is, if f (x) = f (Rx) for all R ∈ O(n), x ∈ X . (4\n)\nFollowing the prior work convention (Melnyk et al., 2022a,b) hereinafter, we write R to denote the same nD rotation/reflection as an n × n matrix in the Euclidean space R n , as an (n + 1) × (n + 1) matrix in the projective (homogeneous) space P(R n ) ⊂ R n+1 , and as an (n + 2) × (n + 2) matrix in R n+2 . For the latter two cases, we achieve this by appending ones to the diagonal of the original n × n matrix without changing the transformation itself (Melnyk et al., 2021)." }, { "figure_ref": [], "heading": "Steerable 3D spherical neurons and Tetra-Sphere", "publication_ref": [ "b25", "b20", "b11", "b17", "b11", "b7" ], "table_ref": [], "text": "Considering the 3D case, Melnyk et al. (2022a) showed that a spherical neuron (Perwass et al., 2003;Melnyk et al., 2021) can be steered. In this context, steerability is defined as the ability of a function to be written as a linear combination of the rotated versions of itself, called basis functions (Freeman et al., 1991;Knutsson et al., 1992). For details, see the Appendix (Section A).\nAccording to Melnyk et al. 
(2022a), a 3D steerable filter consisting of spherical neurons needs to comprise a minimum of four 3D spheres: one learnable spherical decision surface S ∈ R^5 (1) and its three copies rotated into the other three vertices of the regular tetrahedron, using one of the results of Freeman et al. (1991) that the basis functions must be distributed in the space uniformly.
To construct such a filter, i.e., a steerable 3D spherical neuron, the main (learned) sphere center c_0 needs to be rotated into ∥c_0∥ (1, 1, 1) by the corresponding (geodesic) rotation R_O. The resulting sphere center is then rotated into the other three vertices of the regular tetrahedron. This is followed by rotating all four spheres back to the original coordinate system. A steerable 3D spherical neuron can thus be defined by means of the 4 × 5 matrix B(S) containing the four spheres:
F(X; S) = B(S) X,   B(S) = [(R_O^⊤ R_{T_i} R_O S)^⊤]_{i=1,...,4},   (5)
where X ∈ R^5 is the input 3D point embedded using (1), and {R_{T_i}}_{i=1}^{4} is the R^5 rotation isomorphism corresponding to the rotation from the first vertex, i.e., (1, 1, 1), to the i-th vertex of the regular tetrahedron. Melnyk et al. (2022a) showed that steerable 3D spherical neurons are SO(3)-equivariant (or more precisely, O(3)-equivariant, as remarked in Melnyk et al. (2022b)):
V_R B(S) X = B(S) RX,   V_R = M^⊤ R_O R R_O^⊤ M,   (6)
where R is a representation of the 3D rotation in R^5, and V_R ∈ G < SO(4) is the 3D rotation representation in the filter bank output space, with M ∈ SO(4) being a change-of-basis matrix that holds the homogeneous coordinates of the tetrahedron vertices in its columns as
M = [m_1 m_2 m_3 m_4] = (1/2) [[1, 1, -1, -1], [1, -1, 1, -1], [1, -1, -1, 1], [1, 1, 1, 1]].   (7)
We note that with respect to the input vector x ∈ R^3, a steerable 3D spherical neuron represents a non-linear rotation-equivariant function F( • ; S) : R^5 → R^4 with the learnable parameters S ∈ R^5.
TetraSphere. As the first reported attempt to learn steerable 3D spherical neurons in an end-to-end approach, Melnyk et al. (2022b) recently introduced an approach for O(3)-invariant point cloud classification based on said neurons and the VN-DGCNN architecture (Deng et al., 2021), called TetraSphere.
Given the point cloud input X ∈ R^{N×3}, the TetraSphere approach proposes to learn 4D features of each point by means of the TetraTransform layer l_TT( • ; S) : R^{N×3} → R^{N×4×K} that consists of K steerable spherical neurons B(S_k) (see (5)) that are shared among the points. After the application of TetraTransform, pooling over the K dimensions takes place, and the obtained feature map is then propagated through the VN-DGCNN network as-is. However, the questions of how to combine the steerable neurons in multiple layers and how to make them process data in dimensions other than 3 have remained open." }, { "figure_ref": [], "heading": "Regular simplexes", "publication_ref": [ "b8", "b4" ], "table_ref": [], "text": "Geometrically, a regular n-simplex represents n + 1 equidistant points in nD (Elte, 2006), lying on an nD sphere with unit radius.
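As a short aside before the simplex construction is generalized, the building blocks reviewed in Section 3.1 can be checked numerically in a few lines: the embedding (1) turns the spherical classifier (2) into a plain scalar product whose sign encodes the position of a point relative to the sphere (c, r). The NumPy sketch below (function names are ours, not taken from the cited works) verifies this sign property and the closed form -½∥x - c∥² + ½r².

```python
import numpy as np

def embed_point(x):
    # Eq. (1): x in R^n  ->  X = (x_1, ..., x_n, -1, -||x||^2 / 2) in R^{n+2}
    return np.concatenate([x, [-1.0, -0.5 * (x @ x)]])

def embed_sphere(c, r):
    # Eq. (1): sphere (c, r)  ->  S = (c_1, ..., c_n, (||c||^2 - r^2) / 2, 1) in R^{n+2}
    return np.concatenate([c, [0.5 * (c @ c - r**2), 1.0]])

def spherical_neuron(x, c, r):
    # Eq. (2): f_S(X; S) = X^T S
    return embed_point(x) @ embed_sphere(c, r)

c = np.array([1.0, -2.0, 0.5])
r = 1.5

x_inside = c.copy()                            # the center lies strictly inside
x_on = c + np.array([r, 0.0, 0.0])             # a point on the surface
x_outside = c + np.array([3.0 * r, 0.0, 0.0])  # a point far outside

assert spherical_neuron(x_inside, c, r) > 0
assert abs(spherical_neuron(x_on, c, r)) < 1e-12
assert spherical_neuron(x_outside, c, r) < 0

# The scalar product equals the closed form -||x - c||^2 / 2 + r^2 / 2:
x = np.array([0.3, 0.1, -1.2])
assert np.isclose(spherical_neuron(x, c, r),
                  -0.5 * np.sum((x - c) ** 2) + 0.5 * r**2)
```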
In the 2D case, the regular simplex is an equilateral triangle; in 3D, a regular tetrahedron, and so on.\nFollowing Cevikalp & Saribas (2023), we compute the Cartesian coordinates of a regular n-simplex as n+1 vectors p i ∈ R n :\np i = n -1/2 1, i = 1 κ 1 + µ e i-1 , 2 ≤ i ≤ n + 1 , κ = - 1 + √ n + 1 n 3/2 , µ = 1 + 1 n ,(8)\nwhere 1 ∈ R n is a vector with all elements equal to 1 and e i is the natural basis vector with the i-th element equal to 1.\nFor the case n = 3, we identify the following connection between ( 7) and ( 8): the columns of M, m i ∈ R 4 , are the coordinates of the regular 3-simplex appended with a constant and normalized to unit length; that is,\nm i = 1 p p i 1/ √ 3 with p = p i 1/ √ 3 , 1 ≤ i ≤ 4." }, { "figure_ref": [], "heading": "Deep Equivariant Hyperspheres", "publication_ref": [], "table_ref": [], "text": "In this section, we provide a complete derivation of the proposed O(n)-equivariant neuron based on a learnable spherical decision surface and multiple transformed copies of it, as well as define and analyze generalized concepts of equivariant bias, non-linearities, and multi-layer setup.\nWhile it is intuitive that in higher dimensions one should use more copies (i.e., vertices) than in the 3D case (Melnyk et al., 2022a), it is uncertain exactly how many are needed. We hypothesize that the vertices should constitute a regular n-simplex (n + 1 vertices) and rigorously prove it in this section." }, { "figure_ref": [], "heading": "The simplex change of basis", "publication_ref": [], "table_ref": [], "text": "We generalize the change-of-basis matrix (7) to nD by introducing M n , an (n + 1) × (n + 1) matrix holding in its columns the coordinates of the regular n-simplex appended with a constant and normalized to unit length:\nM n = m i i=1...n+1 , m i = 1 p p i n -1/2 , p = p i n -1/2 , (9)\nwhere the norms p are constant, since ∥p i ∥ = ∥p j ∥ for all i and j by definition of a regular simplex.\nProposition 1. Let M n be the-change-of-basis matrix defined in (9). Then M n is an (n + 1)D rotation or reflection, i.e., M n ∈ O(n + 1) (see Section B in the Appendix for numeric examples).\nProof. We want to show that M ⊤ n M n = I n+1 , which will prove that M n is orthogonal. The diagonal elements of\nM ⊤ n M n are m ⊤ i m i = ∥m i ∥ 2 = 1 since ∥m i ∥ = 1. The off- diagonal elements are found as m ⊤ i m j = (p ⊤ i p j + n -1 )/p 2\nfor i ̸ = j, where p is defined in (9). Note that p ⊤ i p j is the same for all i and j with i ̸ = j since, by definition of a regular simplex, the vertices p i are spaced uniformly. Note that p ⊤ i p j = -n -1 for all i ̸ = j by definition (8). Hence, the off-diagonal elements of M ⊤ n M n are zeros and\nM ⊤ n M n = I n+1 .\nSince M n ∈ O(n + 1), the sign of det M n is determined by the number of reflections required to form the transformation. In the case of a regular n-simplex, the sign of the determinant depends on the parity of n and the configuration of the simplex vertices. In our case, M n is a rotation for odd n, i.e., det M n = 1, and a reflection for even n. Consider, for example, the case n = 3. The matrix M 3 shown in ( 7) has det M 3 = 1, thus, is a 4D rotation, as stated in Section 3.3.\nLemma 2. Let M n be the change-of-basis matrix defined in (9), and P n an n × (n + 1) matrix holding the regular n-simplex vertices, p i , in its columns, and p = p i n -1/2 , as defined in (9). Then\nM n P ⊤ n = p I n 0 ⊤ .(10)\nProof. We begin by elaborating on (9):\nM n = 1 p P n n -1/2 1 ⊤ . 
(11\n)\nWe note that the norms of the rows of P n are also equal to p since M n ∈ O(n + 1) (as per Proposition 1). Recall that P n is centered at the origin, and, therefore, for a constant a ∈ R and a vector of ones 1 ∈ R n+1 , we obtain a 1 ⊤ P ⊤ n = 0 ⊤ . Remembering that the product M n P ⊤ n is between R n+1 vectors, we plug (11) into the LHS of ( 10) and obtain\nM n P ⊤ n = 1 p P n n -1/2 1 ⊤ P ⊤ n = p 2 p I n 0 ⊤ = p I n 0 ⊤ . (12\n)" }, { "figure_ref": [], "heading": "Equivariant nD spheres", "publication_ref": [ "b25" ], "table_ref": [], "text": "In this section, we generalize steerable 3D spherical neurons reviewed in Section 3.3. We denote an equivariant nDsphere neuron (an equivariant hypersphere) by means of the (n + 1) × (n + 2) matrix B n (S) for the spherical decision surface S ∈ R n+2 with center c 0 ∈ R n and an nD input\nx ∈ R n embedded as X ∈ R n+2 as F n (X; S) = B n (S) X , B n (S) = (R ⊤ O R Ti R O S) ⊤ i=1...n+1 ,(13)\nwhere {R Ti } n+1 i=1 is the R n+2 rotation isomorphism corresponding to the rotation from the first vertex to the i-th vertex of the regular n-simplex, and R O ∈ SO(n) is the geodesic rotation from the sphere center c 0 to∥c 0 ∥ p 1 (therefore, R T1 = I n+2 ).\nWe now need to prove that\nF n ( • ; S) is O(n)-equivariant.\nProposition 3. Let F n ( • ; S) : R n+2 → R n+1 be the neuron defined in (13) and R ∈ O(n) be an n × n rotation or reflection matrix. Then the transformation representation in the filter output space R n+1 is given by the (n + 1)\n× (n + 1) matrix V n = ρ (R) = M ⊤ n R O R R ⊤ O M n ,(14)\nwhere M n ∈ O(n + 1) is the-change-of-basis matrix defined in (9) and a 1 is appended to the diagonals of R O and R to make them (n + 1) 14) is an orthogonal changeof-basis transformation that represents R ∈ O(n) in the basis constructed by M n and R O . Note that appending one to the diagonal of R ∈ O(n) does not affect the sign of the determinant, which makes\n× (n + 1). Furthermore, V n ∈ G < O(n + 1). Proof. Since M n ∈ O(n + 1), R O ∈ SO(n), and R ∈ O(n) are orthogonal matrices, V n in (\nV n a reflection representation if det R = -1, or a rotation representation if det R = +1. Since R ∈ O(n) and R O ∈ O(n)\n, not all elements of O(n + 1) can be generated by the operation in ( 14). Thus, we conclude that V n belongs to a proper subgroup of O(n + 1), i.e., G < O(n + 1). The original transformation is found directly as\nR = R ⊤ O M n V n M ⊤ n R O ,(15)\nfollowed by the retrieval of the upper-left n × n sub-matrix.\nNoteworthy, the basis determined by R O ∈ SO(n), which depends on the center c 0 of the sphere S ∈ R n+2 (see ( 13)), will be different for different c 0 . Therefore, the representation V n will differ as well.\nTheorem 4. The neuron F n ( • ; S) : R n+2 → R n+1 defined in (13) is O(n)-equivariant.\nProof. To prove the theorem, we need to show that (3) holds for F n ( • ; S).\nWe substitute ( 14) to the LHS and (13) to the RHS, and obtain\nV n B n (S) X = B n (S) RX . (16\n)\nFor the complete proof, please see the Appendix (refer to Section C).\nWe note that with respect to the input vector x ∈ R n , the equivariant hypersphere F n ( • ; S) : R n+2 → R n+1 represents a non-linear O(n)-equivariant function. It is also worth mentioning that the sum of the output Y = B n (S) X is an O(n)-invariant scalar, i.e., the DC-component, due to the regular n-simplex construction.\nThis invariant part can be adjusted by adding a scalar bias parameter to the output Y. 
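Before discussing the bias term further, the construction up to this point admits a direct numerical sanity check of Theorem 4. The sketch below is a simplified reference implementation under our own naming; in particular, rotation_from_to is just one valid choice of the geodesic rotations R_O and R_{T_i} (the argument above only requires R_{T_i} p_1 = p_i). It builds the regular n-simplex (8), the change-of-basis matrix M_n (9), the filter bank B_n(S) (13) and the representation V_n (14), and confirms the equivariance relation (16) for a random point and a random orthogonal transformation.

```python
import numpy as np

def simplex_vertices(n):
    # Eq. (8): columns p_1, ..., p_{n+1} of a regular n-simplex in R^n
    # (unit-norm vertices with p_i . p_j = -1/n for i != j).
    kappa = -(1.0 + np.sqrt(n + 1.0)) / n**1.5
    mu = np.sqrt(1.0 + 1.0 / n)
    cols = [np.ones(n) / np.sqrt(n)] + [kappa * np.ones(n) + mu * np.eye(n)[i] for i in range(n)]
    return np.stack(cols, axis=1)                      # shape (n, n+1)

def change_of_basis(n):
    # Eq. (9): M_n with columns (p_i, n^{-1/2}) / p; an element of O(n+1) by Prop. 1.
    P = simplex_vertices(n)
    M = np.vstack([P, np.full((1, n + 1), 1.0 / np.sqrt(n))])
    return M / np.linalg.norm(M[:, 0])                 # all columns share the same norm p

def rotation_from_to(a, b):
    # One orthogonal matrix mapping unit vector a to unit vector b (assumes a != -b).
    c = a + b
    return np.eye(len(a)) - np.outer(c, c) / (1.0 + a @ b) + 2.0 * np.outer(b, a)

def extend(R, k):
    # Append k ones to the diagonal (the convention used for letting R act on embeddings).
    out = np.eye(R.shape[0] + k)
    out[:R.shape[0], :R.shape[0]] = R
    return out

def embed_point(x):
    # Eq. (1)
    return np.concatenate([x, [-1.0, -0.5 * (x @ x)]])

def hypersphere_filter(S, n):
    # Eq. (13): rows (R_O^T R_Ti R_O S)^T, i = 1..n+1, for S = (c_0, s_{n+1}, s_{n+2}).
    P = simplex_vertices(n)
    c0 = S[:n]
    R_O = extend(rotation_from_to(c0 / np.linalg.norm(c0), P[:, 0]), 2)
    rows = []
    for i in range(n + 1):
        R_Ti = extend(rotation_from_to(P[:, 0], P[:, i]), 2)
        rows.append(R_O.T @ R_Ti @ R_O @ S)
    return np.stack(rows)                              # shape (n+1, n+2)

# --- numerical check of Theorem 4 ---
rng = np.random.default_rng(1)
n = 5
S = rng.normal(size=n + 2)                             # sphere parameters in R^{n+2}
x = rng.normal(size=n)
R, _ = np.linalg.qr(rng.normal(size=(n, n)))           # random element of O(n), det = +/-1

P, M = simplex_vertices(n), change_of_basis(n)
B = hypersphere_filter(S, n)
R_O = rotation_from_to(S[:n] / np.linalg.norm(S[:n]), P[:, 0])
V = M.T @ extend(R_O, 1) @ extend(R, 1) @ extend(R_O, 1).T @ M       # Eq. (14)

assert np.allclose(M.T @ M, np.eye(n + 1))                           # Proposition 1
assert np.allclose(V @ (B @ embed_point(x)), B @ embed_point(R @ x)) # Eq. (16)
```

Note that the check also exercises reflections, since the QR-based sampling of R may return a matrix with determinant -1.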
The concept of bias is imperative for linear classifiers, but for spherical decision surfaces (Perwass et al., 2003), it is implicitly modeled by the embedding (1). We note, however, that adding a scalar bias parameter, b ∈ R to the output of an equivariant hypersphere (13) respects O(n)-equivariance:\nProposition 5. Let Y ∈ R n+1 be the output of the O(n)- equivariant hypersphere F n ( • ; S) : R n+2 → R n+1 (13)\ngiven the input X ∈ R n+2 , and b ∈ R be a bias parameter. Then Y ′ = Y + b 1, where 1 is the vector of ones in R n+1 , is also O(n)-equivariant.\nProof. We need to show that (16) also holds when the bias b is added. First, we use V n -the representation of R ∈ O(n) from ( 14)-and the fact that R and R O are both appended 1 to their main diagonal to make them (n + 1) × (n + 1). Then\nV n 1 = M ⊤ n R O R R ⊤ O M n 1 = M ⊤ n R O R R ⊤ O 0 p √ n = M ⊤ n 0 p √ n = 1\n, where p is a scalar defined in (8). Since the bias b is a scalar, we use that V n b1 = bV n 1 = b1.\nWe now consider the left-hand side of ( 16):\nV n Y ′ = V n (Y + b1) = V n B n (S) X + V n b1 = V n B n (S) X + b1.\nPlugging the equality ( 16) into the last equation, we complete the proof:\nV n B n (S) X + b1 = B n (S) RX + b1.\nThis result allows us to increase the capacity of the equivariant hypersphere by adding the learnable parameter b ∈ R." }, { "figure_ref": [], "heading": "Normalization and additional nonlinearity", "publication_ref": [ "b15", "b1", "b27" ], "table_ref": [], "text": "An important practical consideration in deep learning is feature normalization (Ioffe & Szegedy, 2015;Ba et al., 2016).\nWe show how the activations of the equivariant hypersphere (13) can be normalized maintaining the equivariance:\nProposition 6. Let Y ∈ R n+1 be the O(n)-equivariant output of the hypersphere filter (13). Then Y/∥Y∥, where ∥Y∥ ∈ R, is also O(n)-equivariant.\nProof. Let Y ′ = Y/∥Y∥. We need to show that (16) holds also in the case of the normalization. We start by rewriting the right-hand side of ( 16):\nV n Y ′ = ∥Y∥ -1 V n Y = ∥Y∥ -1 V n B n (S) X.\nWe then use the original equality in ( 16) and rewrite the last equation:\n∥Y∥ -1 V n B n (S) X = ∥Y∥ -1 B n (S)\nRX, which completes the proof.\nTo increase the descriptive power of the proposed approach, we can add non-linearity to the normalization step, following Ruhe et al. (2023):\nY → Y σ(a) (∥Y∥ -1) + 1 , (17\n)\nwhere a ∈ R is a learnable scalar and σ(•) is the sigmoid function." }, { "figure_ref": [], "heading": "Extracting deep equivariant features", "publication_ref": [], "table_ref": [], "text": "We might want to propagate the equivariant output of F n (13), Y = B n (S) X, through spherical decision surfaces while maintaining the equivariance properties. One way to achieve it is by using (n + 1)D spheres, i.e., F n+1 , since the output Y ∈ R n+1 . 
Thus, the results established in the previous section not only allow us to use the equivariant hyperspheres (13) for nD inputs but also to cascade them in multiple layers, thus propagating equivariant representations by successively incrementing the feature space dimensionality with a unit step, i.e., nD → (n + 1)D.\nConsider, for example, the point cloud patch X = {x} N i=1 consisting of the coordinates of N points x ∈ R n as the input signal, which we can also consider as the N × n matrix X.\nGiven the equivariant neuron F n ( • ; S), a cascaded nD → (n + 1)D feature extraction procedure using equivariant hyperspheres F n ( • ; S) for the given output dimensionality d (with d > n) can be defined as follows (at the first step, X ← x):\nX ∈ R n → embed(normalize(X + b)) → F n (X; S) → embed(normalize(X + b)) → F n+1 (X; S) → . . . → F d (X; S) → normalize(X + b) → X ∈ R d ,(18)\nwhere embed is the embedding according to (1), normalize is the optional activation normalization (see Proposition 6), and b is an optional scalar bias.\nProposition 7. Given that all operations involved in the procedure 18 are O(n)-equivariant, its output will also be O(n)-equivariant.\nThe proof is given in the Appendix (Section C). Thus, given X as input, the point-wise cascaded application with depth d (18) produces the equivariant features Y = {Y} N i=1 , Y ∈ R n+d , which we can consider as the N × (n + d) matrix Y.\nIn this case, we considered the width of each layer in (18) to be 1, i.e., one equivariant hypersphere. In practice and depending on the task, we often use K l equivariant hyperspheres per layer l, with suitable connectivity between subsequent layers." }, { "figure_ref": [], "heading": "Modelling higher-order interactions", "publication_ref": [ "b19", "b19", "b7", "b34", "b34" ], "table_ref": [], "text": "The theoretical framework established above considers the interaction of one point and one spherical decision surface, copied to construct the regular n-simplex constellation for the equivariant neuron in (13). To increase the expressiveness of a model comprised of equivariant hyperspheres, we propose to consider the relation of two points and a sphere.\nNamely, following the work of Li et al. (2001) 4 , the relation between two points, x 1 , x 2 , and a sphere in R n , all embedded in R n+2 according to (1) as X 1 , X 2 , and S, respectively, is formulated as\nδ = e 12 X ⊤ 1 S X ⊤ 2 S,(19)\n4 See p. 22 in Li et al. (2001).\nwhere e 12 := -1 2 (∥x 1 -x 2 ∥ 2 ) ∈ R models the edge as the squared Euclidean distance between the points.\nTo classify the relative position of the points to the sphere, we use the sign of δ ∈ R, and note that it is only determined by the respective sphere (scalar) activations, i.e., the scalar products X ⊤ i S, since the edges e ij are always negative. Thus, we may omit them, as we further demonstrate by the ablations in the Appendix (see Section E). Also, we note that in order to make δ an invariant quantity, we need to have equivariant activations. Since the single sphere activations are not equivariant (see Section 3.1), we propose to substitute the single sphere S with our equivariant hyperspheres B n (S) (13).\nGiven the input X ∈ R N ×n and the corresponding extracted equivariant features Y ∈ R N ×(n+d) , we compute\n∆ = X ⊤ B n (S) (X ⊤ B n (S)) ⊤ = Y Y ⊤ . 
(20\n)\nThe O(n)-invariance of ∆ ∈ R N ×N follows from the fact that it is comprised of the Gram matrix Y Y ⊤ that consists of the pair-wise inner products of equivariant features, which are invariant (Deng et al., 2021;Melnyk et al., 2022b), just as in the case of directly computing the auto-product of the points (Xu et al., 2021). When permutation-invariance is desired, we achieve it by aggregation over the points, first following the procedure by Xu et al. (2021) and sorting the rows/columns of ∆, and then applying max and/or mean pooling over N . If multiple (K l ) equivariant hyperspheres per layer are used, ( 20) is computed independently for all K l , by computing K l Gram matrices, resulting in ∆ ∈ R N ×N ×K l . We show the effectiveness of the proposed invariant operator (20) by the corresponding ablation in the Appendix (Section E)." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Experimental validation", "publication_ref": [ "b23" ], "table_ref": [], "text": "In this section, we experimentally verify our theoretical results derived in Section 4 by evaluating our Deep Equivariant Hyperspheres, constituting feed-forward point-wise architectures, on real and synthetic O(n)-equivariant benchmarks. In each experiment, we train the models using the same hyperparameters and present the test-set performance of the models chosen based on their validation-set performance.\nFor the sake of a fair comparison, all the models have approximately the same number of learnable parameters, and their final fully-connected layer part is the same. A more detailed description of the used architectures is presented in the Appendix (see Section D). In addition to the performance comparison in Figure 2, we compare the time complexity (i.e., the inference speed) of the considered methods5 in Figure 3. Furthermore, we present various ablations in the Appendix (see Section E). All the models are implemented in PyTorch (Paszke et al., 2019). " }, { "figure_ref": [ "fig_2", "fig_0", "fig_1" ], "heading": "O(3) Action recognition", "publication_ref": [ "b33", "b34", "b7", "b16", "b27" ], "table_ref": [], "text": "First, we test the ability of our method to utilize O(3)equivariance as the inductive bias. For this experiment, we select the task of classifying the 3D skeleton data, presented and extracted by Melnyk et al. (2022a) from the UTKinect-Action3D dataset by Xia et al. (2012). Each skeleton is a 20 × 3 point cloud, belonging to one of the 10 action categories; refer to the work of Melnyk et al. (2022a) for details. We formulate the task to be both permutation-and O(3)-invariant.\nWe construct an O(3)-equivariant point-wise feedforward model using layers with our equivariant hyperspheres (according to the blueprint of ( 18)) with the two-point interaction described in Section 4.5, which we call DEH (see the illustration in Figure 4). We also build a variant of the invariant SGM descriptor (Xu et al., 2021) computing the Gram matrix of the input points, point-wise equivariant VN (Deng et al., 2021), GVP (Jing et al., 2021), and CGENN (Ruhe et al., 2023) models and, as non-equivariant baselines, MLPs, in which the equivariant layers are substituted with regular non-linear ones. We train one version of the baseline MLP with O(3)-augmentation, whereas our method is only trained on non-transformed skeletons.\nWe evaluate the performance of the methods on the randomly O(3)-transformed test data. 
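Concretely, the permutation- and O(3)-invariant read-out sketched in Section 4.5 can be summarized as follows; this is a simplified, single-channel NumPy sketch with our own function name (the full models additionally use K_l hyperspheres per layer and the layer sizes listed in Section D). The Gram matrix of Eq. (20) is computed per point cloud, its rows are sorted, and pooling over the N points yields a descriptor that is unchanged by orthogonal transformations of the input and by point permutations.

```python
import numpy as np

def invariant_descriptor(Y):
    """Permutation- and O(n)-invariant read-out of equivariant point features, cf. Eq. (20).

    Y: (N, d) array of per-point equivariant features (e.g., the output of the
    cascade in Eq. (18)). The Gram matrix Y Y^T is invariant; sorting its rows
    and pooling over points removes the dependence on the point ordering.
    """
    gram = Y @ Y.T                     # Delta in Eq. (20)
    gram = np.sort(gram, axis=-1)      # sort within each row (SGM-style)
    return np.concatenate([gram.max(axis=0), gram.mean(axis=0)])  # pool over N

# Sanity check: an orthogonal change of the input only multiplies equivariant
# features from the right by an orthogonal matrix, so the descriptor is unchanged,
# as it also is under a permutation of the points.
rng = np.random.default_rng(0)
N, d = 16, 7
Y = rng.normal(size=(N, d))
V, _ = np.linalg.qr(rng.normal(size=(d, d)))
perm = rng.permutation(N)
assert np.allclose(invariant_descriptor(Y), invariant_descriptor((Y @ V.T)[perm]))
```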
The results are presented in Figure 2 (left): our DEH model, trained on the data in a single orientation, captures equivariant features that enable outperforming the non-equivariant baseline trained on the augmented data (MLP Aug). Moreover, DEH consistently outperforms the competing equivariant methods (VN, GVP, CGENN) and the invariant SGM model, demonstrating a favorable speed/performance trade-off, as seen in Figure 3 (left)." }, { "figure_ref": [ "fig_0" ], "heading": "O(5) Regression", "publication_ref": [ "b27", "b10" ], "table_ref": [], "text": "Originally introduced by Finzi et al. ( 2021), the task\nis to model the O(n)(5)-invariant function f (x 1 , x 2 ) := sin(∥x 1 ∥) -∥x 2 ∥ 3 /2 + x ⊤ 1 x2\n∥x1∥∥x2∥ , where the two vectors x 1 ∈ R 5 and x 2 ∈ R 5 are sampled from a standard Gaussian distribution to construct train, validation, and test sets. We use the same training hyperparameters and evaluation setup as Ruhe et al. (2023).\nHere, we employ a DEH architecture similar to that in Section 5.1, and compare it to the equivariant EMLPs (Finzi et al., 2021), CGENN, VN, and GVP, and non-equivariant MLPs. Refer to the Appendix (Section D) for the architecture details. Our results together with those of the related methods are presented in Figure 2 (center). As we can see, our DEH is more stable than CGENN, as shown by the dependency over the training set size, and outperforms it in most cases. Our method also outperforms the vanilla MLP and the MLP trained with augmentation (MLP Aug), as well as the O(5)-and SO(5)-equivariant EMLP and VN, and the invariant SGM method." }, { "figure_ref": [ "fig_0" ], "heading": "O(5) Convex hull volume prediction", "publication_ref": [ "b27" ], "table_ref": [], "text": "In the progression of successively more challenging tasks, our third experiment addresses the task of estimating the volume of the convex hull generated by 16 5D points, described by Ruhe et al. (2023). The problem is O(5)-invariant in nature, i.e., rotations and reflections of a convex hull do not change its volume. We exploit the same network architecture as in Section 5.1 (see the Appendix for details).\nWe present our results alongside those of the related methods in Figure 2 " }, { "figure_ref": [ "fig_1" ], "heading": "Conclusion", "publication_ref": [ "b19" ], "table_ref": [], "text": "In this manuscript, we presented Deep Equivariant Hyperspheres -nD neurons based on spheres and regular n-simplexes -equivariant under orthogonal transformations of dimension n.\nWe defined and analyzed generalized components for a network composed of the proposed neurons, such as equivariant bias, non-linearity, and multi-layer configuration (see Section 4 and the ablations in the Appendix) . In addition, we proposed the invariant operator (20) modeling the relation between two points and a sphere, inspired by the work of Li et al. (2001), and demonstrated its effectiveness (see the Appendix).\nWe evaluated our method on both synthetic and real-world data and demonstrated the utility of the developed theoretical framework in nD by outperforming the competing methods and achieving a favorable speed/performance trade-off (see Figure 3).\nInvestigating the design of more advanced architectures of the proposed equivariant hyperspheres forms a clear direction for future work." 
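As a reproducibility note for the O(5) experiments of Section 5.2, the synthetic invariant target can be regenerated with a few lines of NumPy. The snippet below only reproduces the target function and the Gaussian sampling of x_1 and x_2 (split sizes and training hyperparameters follow Finzi et al. (2021) and Ruhe et al. (2023) and are not restated here), and checks its invariance to a common O(5) transformation.

```python
import numpy as np

def o5_invariant_target(x1, x2):
    # f(x1, x2) = sin(||x1||) - ||x2||^3 / 2 + x1.x2 / (||x1|| ||x2||), as in Section 5.2
    n1, n2 = np.linalg.norm(x1), np.linalg.norm(x2)
    return np.sin(n1) - n2**3 / 2 + (x1 @ x2) / (n1 * n2)

def make_split(num_samples, seed=0):
    rng = np.random.default_rng(seed)
    X1 = rng.standard_normal((num_samples, 5))
    X2 = rng.standard_normal((num_samples, 5))
    y = np.array([o5_invariant_target(a, b) for a, b in zip(X1, X2)])
    return X1, X2, y

# The target is invariant to a common O(5) transformation of both inputs:
x1, x2 = np.random.default_rng(1).standard_normal((2, 5))
R, _ = np.linalg.qr(np.random.default_rng(2).standard_normal((5, 5)))
assert np.isclose(o5_invariant_target(x1, x2), o5_invariant_target(R @ x1, R @ x2))
```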
}, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b29" ], "table_ref": [], "text": "If translation equivariance is additionally desired, i.e., the full E(n) group, a common way to address this is by using graph neural network (GNN) backbones, e.g., Satorras et al. (2021). Since the focus of this paper is on the O(n)equivariance, the combination of the proposed equivariant hyperspheres with GNNs was not considered." }, { "figure_ref": [], "heading": "Impact statements", "publication_ref": [], "table_ref": [], "text": "This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. An exception is possibly the area of molecular physics with applications in material science; the development of new materials might have a significant impact on sustainability.\n  . n = 3 : P 3 = 1 √ 3   1 1 -1 -1 1 -1 1 -1 1 -1 -1 1   , p = 2/ √ 3, M 3 = 1 2     1 1 -1 -1 1 -1 1 -1 1 -1 -1 1 1 1 1 1     . n = 4 : P 4 = 1 2      1 (3 √ 5 -1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 (3 √ 5 -1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 -( √ 5 + 1)/4 (3 √ 5 -1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 (3 √ 5 -1)/4      , p = √ 5/2, M 4 = 1 √ 5        1 (3 √ 5 -1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 (3 √ 5 -1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 -( √ 5 + 1)/4 (3 √ 5 -1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 (3 √ 5 -1)/4 1 1 1 1 1       \n." }, { "figure_ref": [], "heading": "C Complete proofs", "publication_ref": [], "table_ref": [ "tab_0", "tab_1" ], "text": "In this section, we provide complete proof of the propositions and theorems stated in the main paper.\nTheorem. (Restating Theorem 4:)The neuron F n ( • ; S) : R n+2 → R n+1 defined in (13) is O(n)-equivariant.\nProof. We need to show that (3) holds for F n ( • ; S). We substitute ( 14) to the LHS and (13) to the RHS, and obtain\nV n B n (S) X = B n (S) RX . (22\n)\nKeeping in mind that the (n + 1)-th and (n + 2)-th components, s n+1 and s n+2 , of the sphere S ∈ R n+2 with center c 0 ∈ R n (1) are O(n)-invariant, as well as our convention on writing the rotation matrices (see the last paragraph of Section 3.2), we rewrite the (n + 1) × (n + 2) matrix B n (S) using its definition (13):\nB n (S) = (R ⊤ O R Ti R O S) ⊤ i=1...n+1 = c ⊤ 0 R ⊤ O R ⊤ Ti R O s n+1 s n+2 i=1...n+1 . (23\n)\nBy definition of the rotation R O (13), we have that R O c 0 = ∥c 0 ∥p 1 , where p 1 ∈ R n is the first vertex of the regular simplex according to (8). Since R Ti rotates p 1 into p i , we obtain\nR Ti R O c 0 = ∥c 0 ∥p i , 1 ≤ i ≤ n + 1 . (24\n)\nThus, we can write the RHS of ( 22) using the sphere definition (1) as\nB n (S) RX = ∥c 0 ∥p ⊤ i R O s n+1 s n+2 i=1...n+1 RX = ∥c 0 ∥ P ⊤ n R O R s n+1 1 s n+2 1 X.(25)\nWe now use the definition of V n from ( 14) along with ( 10), ( 11), and ( 24) to rewrite the LHS of ( 22) as\nV n B n (S)X = M ⊤ n R O R R ⊤ O M n ∥c 0 ∥ P ⊤ n s n+1 1 R O s n+2 1 X = M ⊤ n R O R R ⊤ O      p ∥c 0 ∥ I n 0 ⊤ 0 p √ n s n+1   R O 0 p √ n s n+2    X = M ⊤ n R O R   p ∥c 0 ∥ 0 0 ⊤ p √ n s n+1 R ⊤ O R O 0 p √ n s n+2   X = 1 p P ⊤ n R O R n -1/2 1 p ∥c 0 ∥ I n 0 0 0 ⊤ p √ n s n+1 p √ n s n+2 X = ∥c 0 ∥ P ⊤ n R O R √ n √ n s n+1 1 √ n √ n s n+2 1 X = B n (S) RX. (26)\nProposition. 
(Restating Proposition 7:) Given that all operations involved in the procedure 18 are O(n)-equivariant, its output will also be O(n)-equivariant.\nProof. Let R ∈ O(n) be an orthogonal transformation, ρ i (R) the representation of R in the respective space, e.g., (14) for the equivariant hypersphere output, and x ∈ R n be the input to the procedure 18. We denote the output of the procedure 18 as F(x), where F is the composition of all operations in the procedure 18. Since each operation is equivariant, (3) holds for each operation Φ, i.e., we have Φ i (ρ i (R)X) = ρ i+1 (R)Φ(X). Consider now the output F(x) and the transformed output F(Rx). Since each operation in F is equivariant, we have: In this section, we provide an illustration of the architectures of our DEH model used in the experiments in Section 5. By default, we learned non-normalized hyperspheres and equipped the layers with the equivariant bias and the additional nonlinearity (non-linear normalization in ( 17)). The number of learnable parameters corresponds to the competing methods in the experiments, as seen in Table 1. The DEH architectures are presented in Table 2.\nF(Rx) = Φ d (Φ d-1 (. . . Φ 2 (Φ 1 (Rx)))) = ρ d (R)Φ d (Φ d-1 (. . . Φ 2 (Φ 1 (x)))) = ρ d (R)F(x)." }, { "figure_ref": [], "heading": "D Architecture details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0", "fig_0", "fig_2" ], "heading": "D.1 O(5) Regression architectures clarification", "publication_ref": [ "b27", "b10", "b27" ], "table_ref": [ "tab_1" ], "text": "Note that we used the CGENN model architecture, containing 467 parameters, from the first version of the manuscript (Ruhe et al., 2023), and the corrected evaluation protocol from the latest version. Their model in the latest version has three orders of magnitude more parameters, which is in the range of the EMLPs (Finzi et al., 2021) 2. DEH for the first and the third tasks is also permutation-invariant.\nparameters. However, the error reduction thus achieved is only of one order of magnitude (Ruhe et al., 2023) and only in the maximum training data size regime, which is why we compared the models within the original size range (see Table 2 and Figure 2). Besides, since the number of points in this task is only 2 and the permutation invariance is not required (no aggregation over N ; see Figure 4 and the caption), we used only three out of four entries of ∆ (20) in our model, i.e., only one of the identical off-diagonal elements. Also, we disabled the bias component in our model for this experiment and achieved a lower error (0.0007 vs. 0.0011)." }, { "figure_ref": [ "fig_2" ], "heading": "E Ablations E.1 Invariant feature computation", "publication_ref": [ "b19" ], "table_ref": [ "tab_1", "tab_1" ], "text": "In Table 2, we show the effectiveness of the invariant operator ∆ (20), modeling the relation between two points and a sphere (see Section 4.5), over other invariant operations such as sum or l 2 -norm, applied to the N × (n + d) matrix Y (see Section 4.4 and Figure 4) row-wise.\nWe also considered including the edge computation in ∆, as discussed in Section 4.5, in the following way:\n∆ E = E ⊙ Y Y ⊤ ,(27)\nwhere E := 1 2 (∥x i -x j ∥ 2 + I N ) ∈ R N ×N models the edges as the squared distances between the points (with the identity matrix included to also model interactions between a single point and a sphere). This formulation is slightly closer to the original formulation from Li et al. (2001) than (20) that we used. 
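For concreteness, the edge-weighted operator (27) amounts to the following computation (a sketch; the function name is ours, and the per-point equivariant features Y are assumed to have been extracted already); the ablation results it feeds into are summarized next.

```python
import numpy as np

def delta_edge(X, Y):
    """Edge-weighted invariant operator, cf. Eq. (27): Delta_E = E * (Y Y^T) elementwise.

    X: (N, n) input point coordinates; Y: (N, d) equivariant features of the same points.
    E collects squared pairwise distances (plus the identity, as described above) and is
    itself O(n)-invariant, so Delta_E remains invariant.
    """
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    E = 0.5 * (sq_dists + np.eye(len(X)))
    return E * (Y @ Y.T)
```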
In Table 2, we present the respective model results." }, { "figure_ref": [], "heading": "E.2 Architecture", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "In Table 3, we present a comparison between a single-vs. a two-layer DEH (which was employed in the experiments with the results in Figure 1). We note that already with one layer, our model exhibits high performance for the presented tasks.\nIncreasing the number of layers in our DEH is therefore only marginally advantageous in these cases. Bias and learnable normalization ablations are presented in Table 4. As we see, the performance of DEH is further improved if the bias is removed, which was also noted in Section D.1. A minor improvement is obtained by removing the learnable parameters from the non-linear normalization, α (one per neuron), (17), while keeping the bias. However, removing both the bias and the learnable parameters from the normalization, results in lower performance. 1)." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), by the Swedish Research Council through a grant for the project Algebraically Constrained Convolutional Networks for Sparse Image Data (2018-04673), and the strategic research environment ELLIIT.\nThe computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) partially funded by the Swedish Research Council through grant agreement no. 2022-06725.3, and by the Berzelius resource provided by the Knut and Alice Wallenberg Foundation at the National Supercomputer Centre." }, { "figure_ref": [], "heading": "A Additional background A.1 Steerability", "publication_ref": [ "b11", "b17", "b11", "b25", "b20" ], "table_ref": [], "text": "According to Freeman et al. (1991), a function is called steerable if it can be written as a linear combination of rotated versions of itself, as also alternatively presented by Knutsson et al. (1992). In 3D, f R (x, y, z) is thus said to steer if\nwhere f R (x, y, z) is f (x, y, z) rotated by R ∈ SO(3), and each R j ∈ SO(3) orients the corresponding jth basis function. Freeman et al. (1991) further describe the conditions under which the 3D steerability constraint (21) holds and how to find the minimum number of basis functions, that must be uniformly distributed in space.\nIn this context, Melnyk et al. (2022a) showed that in order to steer a spherical neuron defined in (2) (Perwass et al., 2003;Melnyk et al., 2021), one needs to have a minimum of fours basis functions, i.e., rotated versions of the original spherical neuron. This, together with the condition of the uniform distribution of the basis functions, leads to the regular tetrahedron construction of the steerable 3D spherical neuron in (5)." }, { "figure_ref": [], "heading": "B Numeric instances for n = {2, 3, 4}", "publication_ref": [], "table_ref": [], "text": "To facilitate the reader's understanding of the algebraic manipulations in the next section, herein, we present numeric instances of the central components of our theory defined in ( 8) and ( 9), for the cases n = 2, n = 3, and n = 4. For convenience, we write the vertices of the regular simplex (8) as the n × (n + 1) matrix " } ]
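As a supplementary check, the numeric instances listed in Section B (including the matrix M_3 of (7)) can be regenerated directly from the constructions (8)-(9); the self-contained script below (function names are ours) also verifies Proposition 1 and Lemma 2 for n = 2, 3, 4.

```python
import numpy as np

def simplex_vertices(n):
    # Eq. (8): columns p_1, ..., p_{n+1} of the regular n-simplex in R^n.
    kappa = -(1.0 + np.sqrt(n + 1.0)) / n**1.5
    mu = np.sqrt(1.0 + 1.0 / n)
    cols = [np.ones(n) / np.sqrt(n)] + [kappa * np.ones(n) + mu * np.eye(n)[i] for i in range(n)]
    return np.stack(cols, axis=1)

def change_of_basis(n):
    # Eq. (9): M_n, columns (p_i, n^{-1/2}) normalized by the common norm p.
    P = simplex_vertices(n)
    M = np.vstack([P, np.full((1, n + 1), 1.0 / np.sqrt(n))])
    return M / np.linalg.norm(M[:, 0])

for n in (2, 3, 4):
    P, M = simplex_vertices(n), change_of_basis(n)
    p = np.linalg.norm(np.append(P[:, 0], 1.0 / np.sqrt(n)))
    assert np.allclose(M.T @ M, np.eye(n + 1))                            # Proposition 1
    assert np.allclose(M @ P.T, p * np.vstack([np.eye(n), np.zeros(n)]))  # Lemma 2
    print(f"n = {n}:\nP_{n} =\n{np.round(P, 3)}\nM_{n} =\n{np.round(M, 3)}\n")
```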
2024-02-07
10.1109/ICASSP.1992.226174
[ { "authors": "B Anderson; T S Hy; R Kondor", "journal": "", "ref_id": "b0", "title": "Cormorant: Covariant molecular neural networks", "year": "2019" }, { "authors": "J L Ba; J R Kiros; G E Hinton", "journal": "", "ref_id": "b1", "title": "Layer normalization", "year": "2016" }, { "authors": "M M Bronstein; J Bruna; Y Lecun; A Szlam; P Vandergheynst", "journal": "IEEE Signal Processing Magazine", "ref_id": "b2", "title": "Geometric deep learning: going beyond euclidean data", "year": "2017" }, { "authors": "M M Bronstein; J Bruna; T Cohen; P Veličković", "journal": "", "ref_id": "b3", "title": "Geometric deep learning: Grids, groups, graphs, geodesics, and gauges", "year": "2021" }, { "authors": "H Cevikalp; H Saribas", "journal": "Springer", "ref_id": "b4", "title": "Deep simplex classifier for maximizing the margin in both euclidean and angular spaces", "year": "2023" }, { "authors": "T S Cohen; M Geiger; J Köhler; M Welling", "journal": "", "ref_id": "b5", "title": "Spherical cnns", "year": "2018" }, { "authors": "B Coors; A P Condurache; A Geiger", "journal": "", "ref_id": "b6", "title": "Spherenet: Learning spherical representations for detection and classification in omnidirectional images", "year": "2018-09" }, { "authors": "C Deng; O Litany; Y Duan; A Poulenard; A Tagliasacchi; L J Guibas", "journal": "", "ref_id": "b7", "title": "Vector neurons: A general framework for so (3)-equivariant networks", "year": "2021" }, { "authors": "E L Elte", "journal": "University of Michigan Press", "ref_id": "b8", "title": "The Semiregular Polytopes of the Hyperspaces", "year": "2006" }, { "authors": "C Esteves; C Allen-Blanchette; A Makadia; K Daniilidis", "journal": "", "ref_id": "b9", "title": "Learning SO(3) Equivariant Representations with Spherical CNNs", "year": "2018-09" }, { "authors": "M Finzi; M Welling; A G Wilson", "journal": "PMLR", "ref_id": "b10", "title": "A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups", "year": "2021" }, { "authors": "W T Freeman; E H Adelson", "journal": "IEEE Transactions on Pattern analysis and machine intelligence", "ref_id": "b11", "title": "The design and use of steerable filters", "year": "1991" }, { "authors": "F Fuchs; D Worrall; V Fischer; M Welling", "journal": "", "ref_id": "b12", "title": "Se(3)-transformers: 3d roto-translation equivariant attention networks", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b13", "title": "", "year": "2020" }, { "authors": "D Hilbert; S Cohn-Vossen", "journal": "Chelsea Publishing Company", "ref_id": "b14", "title": "Geometry and the Imagination", "year": "1952" }, { "authors": "S Ioffe; C Szegedy", "journal": "PMLR", "ref_id": "b15", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "year": "2015" }, { "authors": "B Jing; S Eismann; P Suriana; R J L Townshend; R Dror", "journal": "", "ref_id": "b16", "title": "Learning from protein structure with geometric vector perceptrons", "year": "2021" }, { "authors": "H Knutsson; L Haglund; H Bårman; G H Granlund", "journal": "", "ref_id": "b17", "title": "A framework for anisotropic adaptive filtering and analysis of image sequences and volumes", "year": "1992" }, { "authors": "J Lan; A Palizhati; M Shuaibi; B M Wood; B Wander; A Das; M Uyttendaele; C L Zitnick; Z W Ulissi; Adsorbml", "journal": "", "ref_id": "b18", "title": "Accelerating adsorption energy calculations with machine learning", "year": "2022" }, { "authors": "H Li; D 
Hestenes; A Rockwood", "journal": "Springer", "ref_id": "b19", "title": "Generalized homogeneous coordinates for computational geometry", "year": "2001" }, { "authors": "P Melnyk; M Felsberg; M Wadenbäck", "journal": "", "ref_id": "b20", "title": "Embed me if you can: A geometric perceptron", "year": "2021-10" }, { "authors": "P Melnyk; M Felsberg; M Wadenbäck", "journal": "PMLR", "ref_id": "b21", "title": "Steerable 3D spherical neurons", "year": "2022-07-23" }, { "authors": "P Melnyk; A Robinson; M Wadenbäck; M Felsberg; Tetrasphere", "journal": "", "ref_id": "b22", "title": "A neural descriptor for O(3)-invariant point cloud classification", "year": "2022" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga", "journal": "", "ref_id": "b23", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "N Perraudin; M Defferrard; T Kacprzak; R Sgier; Deepsphere", "journal": "Astronomy and Computing", "ref_id": "b24", "title": "Efficient spherical convolutional neural network with healpix sampling for cosmological applications", "year": "2019" }, { "authors": "C Perwass; V Banarer; G Sommer", "journal": "Springer", "ref_id": "b25", "title": "Spherical decision surfaces using conformal modelling", "year": "2003" }, { "authors": "R Ramakrishnan; P O Dral; M Rupp; Von Lilienfeld; O A ", "journal": "Scientific data", "ref_id": "b26", "title": "Quantum chemistry structures and properties of 134 kilo molecules", "year": "2014" }, { "authors": "D Ruhe; J Brandstetter; P Forré", "journal": "", "ref_id": "b27", "title": "Clifford group equivariant neural networks", "year": "2023" }, { "authors": "M Rupp; A Tkatchenko; K.-R Müller; Von Lilienfeld; O A ", "journal": "Physical review letters", "ref_id": "b28", "title": "Fast and accurate modeling of molecular atomization energies with machine learning", "year": "2012" }, { "authors": "V G Satorras; E Hoogeboom; M E Welling", "journal": "PMLR", "ref_id": "b29", "title": "equivariant graph neural networks", "year": "2021" }, { "authors": "Y.-C Su; K Grauman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Learning spherical convolution for fast features from 360 imagery", "year": "2017" }, { "authors": "N Thomas; T Smidt; S Kearnes; L Yang; L Li; K Kohlhoff; P Riley", "journal": "", "ref_id": "b31", "title": "Tensor field networks: Rotation-and translation-equivariant neural networks for 3D point clouds", "year": "2018" }, { "authors": "R J Townshend; M Vögele; P Suriana; A Derry; A Powers; Y Laloudakis; S Balachandar; B Jing; B Anderson; S Eismann", "journal": "", "ref_id": "b32", "title": "Atom3d: Tasks on molecules in three dimensions", "year": "2021" }, { "authors": "L Xia; C.-C Chen; J K Aggarwal", "journal": "", "ref_id": "b33", "title": "View invariant human action recognition using histograms of 3d joints", "year": "2012" }, { "authors": "J Xu; X Tang; Y Zhu; J Sun; S Pu; Sgmnet", "journal": "", "ref_id": "b34", "title": "Learning rotation-invariant point cloud representations via sorted Gram matrix", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 154.16, 65.94, 401.15, 39.7 ], "formula_id": "formula_0", "formula_text": "1 ( √ 3 -1)/2 -( √ 3 + 1)/2 1 -( √ 3 + 1)/2 ( √ 3 -1)/2 1 1 1 M2 = 1 √ 3 1 1 -1 -1 1 -1 1 -1 1 -1 -1 1 1 1 1 1       M3 = 1 2" }, { "formula_coordinates": [ 2, 350, 406.43, 205.98, 46.29 ], "formula_id": "formula_1", "formula_text": "X = x 1 , . . . , x n , -1, - 1 2 ∥x∥ 2 ∈ R n+2 , S = c 1 , . . . , c n , 1 2 (∥c∥ 2 -r 2 ), 1 ∈ R n+2 ,(1)" }, { "formula_coordinates": [ 2, 402.49, 491.36, 153.49, 11.99 ], "formula_id": "formula_2", "formula_text": "f S (X; S) = X ⊤ S,(2)" }, { "formula_coordinates": [ 3, 56.23, 226.98, 236.27, 68.95 ], "formula_id": "formula_3", "formula_text": "ρ(R) f (x) = f (Rx) for all R ∈ O(n), x ∈ X . (3) We call a function f : X → Y O(n)-invariant if for every R ∈ O(n), ρ(R) = I n . That is, if f (x) = f (Rx) for all R ∈ O(n), x ∈ X . (4" }, { "formula_coordinates": [ 3, 288.62, 287.29, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 3, 322.31, 83.95, 223.44, 17.14 ], "formula_id": "formula_5", "formula_text": "F(X; S) = B(S)X , B(S) = (R ⊤ O R Ti R O S) ⊤ i=1...4" }, { "formula_coordinates": [ 3, 333.26, 206.24, 218.85, 12.98 ], "formula_id": "formula_6", "formula_text": "V R B(S) X = B(S) RX, V R = M ⊤ R O R R ⊤ O M , (6" }, { "formula_coordinates": [ 3, 552.1, 208.92, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 3, 325.22, 299.28, 225.04, 46.17 ], "formula_id": "formula_8", "formula_text": "M = m 1 m 2 m 3 m 4 = 1 2     1 1 -1 -1 1 -1 1 -1 1 -1 -1 1 1 1 1 1     ." }, { "formula_coordinates": [ 4, 95.45, 105.07, 197.05, 57.01 ], "formula_id": "formula_9", "formula_text": "p i = n -1/2 1, i = 1 κ 1 + µ e i-1 , 2 ≤ i ≤ n + 1 , κ = - 1 + √ n + 1 n 3/2 , µ = 1 + 1 n ,(8)" }, { "formula_coordinates": [ 4, 56.69, 232.69, 235.13, 55.31 ], "formula_id": "formula_10", "formula_text": "m i = 1 p p i 1/ √ 3 with p = p i 1/ √ 3 , 1 ≤ i ≤ 4." }, { "formula_coordinates": [ 4, 56.69, 565.06, 247.54, 38.89 ], "formula_id": "formula_11", "formula_text": "M n = m i i=1...n+1 , m i = 1 p p i n -1/2 , p = p i n -1/2 , (9)" }, { "formula_coordinates": [ 4, 56.69, 714.19, 236.79, 25.06 ], "formula_id": "formula_12", "formula_text": "M ⊤ n M n are m ⊤ i m i = ∥m i ∥ 2 = 1 since ∥m i ∥ = 1. The off- diagonal elements are found as m ⊤ i m j = (p ⊤ i p j + n -1 )/p 2" }, { "formula_coordinates": [ 4, 320.17, 119.43, 66.12, 12.98 ], "formula_id": "formula_13", "formula_text": "M ⊤ n M n = I n+1 ." }, { "formula_coordinates": [ 4, 400.06, 336.04, 155.91, 21.79 ], "formula_id": "formula_14", "formula_text": "M n P ⊤ n = p I n 0 ⊤ .(10)" }, { "formula_coordinates": [ 4, 392.08, 392.46, 159.75, 22.31 ], "formula_id": "formula_15", "formula_text": "M n = 1 p P n n -1/2 1 ⊤ . (11" }, { "formula_coordinates": [ 4, 551.83, 399.52, 4.15, 8.64 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 4, 330.89, 507.35, 220.93, 37.47 ], "formula_id": "formula_17", "formula_text": "M n P ⊤ n = 1 p P n n -1/2 1 ⊤ P ⊤ n = p 2 p I n 0 ⊤ = p I n 0 ⊤ . 
(12" }, { "formula_coordinates": [ 4, 551.83, 536.19, 4.15, 8.64 ], "formula_id": "formula_18", "formula_text": ")" }, { "formula_coordinates": [ 4, 319.92, 652.85, 236.05, 53.38 ], "formula_id": "formula_19", "formula_text": "x ∈ R n embedded as X ∈ R n+2 as F n (X; S) = B n (S) X , B n (S) = (R ⊤ O R Ti R O S) ⊤ i=1...n+1 ,(13)" }, { "formula_coordinates": [ 5, 175.43, 95.65, 117.99, 9.72 ], "formula_id": "formula_20", "formula_text": "F n ( • ; S) is O(n)-equivariant." }, { "formula_coordinates": [ 5, 56.69, 150.94, 236.29, 34.53 ], "formula_id": "formula_21", "formula_text": "× (n + 1) matrix V n = ρ (R) = M ⊤ n R O R R ⊤ O M n ,(14)" }, { "formula_coordinates": [ 5, 56.69, 216.27, 236.3, 49.93 ], "formula_id": "formula_22", "formula_text": "× (n + 1). Furthermore, V n ∈ G < O(n + 1). Proof. Since M n ∈ O(n + 1), R O ∈ SO(n), and R ∈ O(n) are orthogonal matrices, V n in (" }, { "formula_coordinates": [ 5, 56.69, 304.37, 236.88, 33.56 ], "formula_id": "formula_23", "formula_text": "V n a reflection representation if det R = -1, or a rotation representation if det R = +1. Since R ∈ O(n) and R O ∈ O(n)" }, { "formula_coordinates": [ 5, 125.49, 385.7, 167, 12.98 ], "formula_id": "formula_24", "formula_text": "R = R ⊤ O M n V n M ⊤ n R O ,(15)" }, { "formula_coordinates": [ 5, 56.36, 490.95, 235.46, 22.49 ], "formula_id": "formula_25", "formula_text": "Theorem 4. The neuron F n ( • ; S) : R n+2 → R n+1 defined in (13) is O(n)-equivariant." }, { "formula_coordinates": [ 5, 120.51, 571.61, 167.83, 9.7 ], "formula_id": "formula_26", "formula_text": "V n B n (S) X = B n (S) RX . (16" }, { "formula_coordinates": [ 5, 288.35, 571.98, 4.15, 8.64 ], "formula_id": "formula_27", "formula_text": ")" }, { "formula_coordinates": [ 5, 320.17, 105.96, 236.79, 23.18 ], "formula_id": "formula_28", "formula_text": "Proposition 5. Let Y ∈ R n+1 be the output of the O(n)- equivariant hypersphere F n ( • ; S) : R n+2 → R n+1 (13)" }, { "formula_coordinates": [ 5, 320.17, 226.77, 235.13, 45.67 ], "formula_id": "formula_29", "formula_text": "V n 1 = M ⊤ n R O R R ⊤ O M n 1 = M ⊤ n R O R R ⊤ O 0 p √ n = M ⊤ n 0 p √ n = 1" }, { "formula_coordinates": [ 5, 320.17, 284.82, 236.88, 23.95 ], "formula_id": "formula_30", "formula_text": "V n Y ′ = V n (Y + b1) = V n B n (S) X + V n b1 = V n B n (S) X + b1." }, { "formula_coordinates": [ 5, 384.13, 322.98, 147.6, 9.7 ], "formula_id": "formula_31", "formula_text": "V n B n (S) X + b1 = B n (S) RX + b1." }, { "formula_coordinates": [ 5, 320.17, 564.67, 235.13, 23.95 ], "formula_id": "formula_32", "formula_text": "V n Y ′ = ∥Y∥ -1 V n Y = ∥Y∥ -1 V n B n (S) X." }, { "formula_coordinates": [ 5, 320.17, 589.35, 235.13, 23.18 ], "formula_id": "formula_33", "formula_text": "∥Y∥ -1 V n B n (S) X = ∥Y∥ -1 B n (S)" }, { "formula_coordinates": [ 5, 384.47, 679.44, 167.35, 22.36 ], "formula_id": "formula_34", "formula_text": "Y → Y σ(a) (∥Y∥ -1) + 1 , (17" }, { "formula_coordinates": [ 5, 551.83, 686.55, 4.15, 8.64 ], "formula_id": "formula_35", "formula_text": ")" }, { "formula_coordinates": [ 6, 57.2, 296.16, 235.3, 53.84 ], "formula_id": "formula_36", "formula_text": "X ∈ R n → embed(normalize(X + b)) → F n (X; S) → embed(normalize(X + b)) → F n+1 (X; S) → . . . 
→ F d (X; S) → normalize(X + b) → X ∈ R d ,(18)" }, { "formula_coordinates": [ 6, 135.46, 709.73, 157.04, 12.96 ], "formula_id": "formula_37", "formula_text": "δ = e 12 X ⊤ 1 S X ⊤ 2 S,(19)" }, { "formula_coordinates": [ 6, 358.73, 244.47, 193.1, 12.01 ], "formula_id": "formula_38", "formula_text": "∆ = X ⊤ B n (S) (X ⊤ B n (S)) ⊤ = Y Y ⊤ . (20" }, { "formula_coordinates": [ 6, 551.83, 247.14, 4.15, 8.64 ], "formula_id": "formula_39", "formula_text": ")" }, { "formula_coordinates": [ 7, 320.17, 351.38, 235.13, 25.03 ], "formula_id": "formula_40", "formula_text": "is to model the O(n)(5)-invariant function f (x 1 , x 2 ) := sin(∥x 1 ∥) -∥x 2 ∥ 3 /2 + x ⊤ 1 x2" }, { "formula_coordinates": [ 11, 56.69, 404.96, 363.19, 332.02 ], "formula_id": "formula_41", "formula_text": "  . n = 3 : P 3 = 1 √ 3   1 1 -1 -1 1 -1 1 -1 1 -1 -1 1   , p = 2/ √ 3, M 3 = 1 2     1 1 -1 -1 1 -1 1 -1 1 -1 -1 1 1 1 1 1     . n = 4 : P 4 = 1 2      1 (3 √ 5 -1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 (3 √ 5 -1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 -( √ 5 + 1)/4 (3 √ 5 -1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 (3 √ 5 -1)/4      , p = √ 5/2, M 4 = 1 √ 5        1 (3 √ 5 -1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 (3 √ 5 -1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 -( √ 5 + 1)/4 (3 √ 5 -1)/4 -( √ 5 + 1)/4 1 -( √ 5 + 1)/4 -( √ 5 + 1)/4 -( √ 5 + 1)/4 (3 √ 5 -1)/4 1 1 1 1 1       " }, { "formula_coordinates": [ 12, 252.25, 168.51, 299.57, 9.7 ], "formula_id": "formula_42", "formula_text": "V n B n (S) X = B n (S) RX . (22" }, { "formula_coordinates": [ 12, 551.83, 168.87, 4.15, 8.64 ], "formula_id": "formula_43", "formula_text": ")" }, { "formula_coordinates": [ 12, 140.73, 243.11, 411.09, 17.14 ], "formula_id": "formula_44", "formula_text": "B n (S) = (R ⊤ O R Ti R O S) ⊤ i=1...n+1 = c ⊤ 0 R ⊤ O R ⊤ Ti R O s n+1 s n+2 i=1...n+1 . (23" }, { "formula_coordinates": [ 12, 551.83, 245.44, 4.15, 8.64 ], "formula_id": "formula_45", "formula_text": ")" }, { "formula_coordinates": [ 12, 225.89, 310.99, 325.94, 10.75 ], "formula_id": "formula_46", "formula_text": "R Ti R O c 0 = ∥c 0 ∥p i , 1 ≤ i ≤ n + 1 . (24" }, { "formula_coordinates": [ 12, 551.83, 311.38, 4.15, 8.64 ], "formula_id": "formula_47", "formula_text": ")" }, { "formula_coordinates": [ 12, 113.15, 361.36, 442.83, 17.15 ], "formula_id": "formula_48", "formula_text": "B n (S) RX = ∥c 0 ∥p ⊤ i R O s n+1 s n+2 i=1...n+1 RX = ∥c 0 ∥ P ⊤ n R O R s n+1 1 s n+2 1 X.(25)" }, { "formula_coordinates": [ 12, 139.26, 418.82, 416.72, 158.79 ], "formula_id": "formula_49", "formula_text": "V n B n (S)X = M ⊤ n R O R R ⊤ O M n ∥c 0 ∥ P ⊤ n s n+1 1 R O s n+2 1 X = M ⊤ n R O R R ⊤ O      p ∥c 0 ∥ I n 0 ⊤ 0 p √ n s n+1   R O 0 p √ n s n+2    X = M ⊤ n R O R   p ∥c 0 ∥ 0 0 ⊤ p √ n s n+1 R ⊤ O R O 0 p √ n s n+2   X = 1 p P ⊤ n R O R n -1/2 1 p ∥c 0 ∥ I n 0 0 0 ⊤ p √ n s n+1 p √ n s n+2 X = ∥c 0 ∥ P ⊤ n R O R √ n √ n s n+1 1 √ n √ n s n+2 1 X = B n (S) RX. (26)" }, { "formula_coordinates": [ 12, 56.69, 716.38, 363.49, 9.72 ], "formula_id": "formula_50", "formula_text": "F(Rx) = Φ d (Φ d-1 (. . . Φ 2 (Φ 1 (Rx)))) = ρ d (R)Φ d (Φ d-1 (. . . Φ 2 (Φ 1 (x)))) = ρ d (R)F(x)." }, { "formula_coordinates": [ 14, 269.66, 523.99, 286.31, 12.01 ], "formula_id": "formula_51", "formula_text": "∆ E = E ⊙ Y Y ⊤ ,(27)" } ]
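The conformal embedding and spherical-neuron response in (1)-(2) above can also be sanity-checked numerically. The following sketch (NumPy, with an arbitrary point, center, and radius) confirms that X⊤S reduces to ½(r² − ∥x − c∥²), so its sign indicates whether x lies inside or outside the sphere.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
x = rng.standard_normal(n)        # a point in R^n
c = rng.standard_normal(n)        # sphere center
r = 1.5                           # sphere radius

# Conformal embeddings from eq. (1).
X = np.concatenate([x, [-1.0, -0.5 * x @ x]])
S = np.concatenate([c, [0.5 * (c @ c - r**2), 1.0]])

# Spherical-neuron response, eq. (2): f_S(X; S) = X^T S.
resp = X @ S
assert np.isclose(resp, 0.5 * (r**2 - np.sum((x - c) ** 2)))
print("X^T S = 0.5*(r^2 - ||x - c||^2) =", resp, "(positive inside, negative outside)")
```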
On Learning Deep O(n)-Equivariant Hyperspheres
In this paper, we utilize hyperspheres and regular n-simplexes and propose an approach to learning deep features equivariant under the transformations of nD reflections and rotations, encompassed by the powerful group of O(n). Namely, we propose O(n)-equivariant neurons with spherical decision surfaces that generalize to any dimension n, which we call Deep Equivariant Hyperspheres. We demonstrate how to combine them in a network that directly operates on the basis of the input points and propose an invariant operator based on the relation between two points and a sphere, which, as we show, turns out to be a Gram matrix. Using synthetic and real-world data in nD, we experimentally verify our theoretical contributions and find that our approach is superior to the competing methods for O(n)-equivariant benchmark datasets (classification and regression), demonstrating a favorable speed/performance trade-off.
Pavlo Melnyk; Michael Felsberg; Mårten Wadenbäck; Andreas Robinson; Cuong Le
[ { "figure_caption": "Figure 2 :2Figure 2: Left: real data experiment (the higher the accuracy the better); all the presented models are also permutation-invariant. Center and right: synthetic data experiments (the lower the mean squared error (MSE) the better); dotted lines mean that the results of the methods are copied from Finzi et al. (2021) (O(5) regression) or Ruhe et al. (2023) (O(5) convex hulls). Best viewed in color.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Speed/performance trade-off (the models are trained on all the available training data). Note that the desired trade-off is toward the top-left corner (higher accuracy and faster inference) in the left figure, and toward the bottom-left corner (lower error and faster inference) in the center and right figures. To measure inference time, we used an NVIDIA A100. Best viewed in color.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Architecture of our DEH model. All the operations are point-wise, i.e., shared amongst N points. Each subsequent layer of equivariant hyperspheres contains K l neurons for each of the d i K i preceding layer channels. The architectures of the non-permutation-invariant variants differ only in that the global aggregation function over N is substituted with the flattening of the feature map.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Thus, the output of the procedure in (18) is equivariant, as desired. Total number of parameters of the models in the experiments presented in Figure2. * see Section D.1. ** an unknown exact number of parameters, somewhere in the range of the other numbers in the column, as indicated byRuhe et al. (2023).", "figure_data": "MethodsO(3) Action recognitionO(5) RegressionO(5) Convex hullsCGENN9.1K467 *58.8KVN8.3K924N/A **GVP8.8K315N/A **SGM8.5K33358.9KMLP8.3K447K (Finzi et al., 2021)N/A **DEH (Ours)8.1K27549.8K", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "(see Figure 2, center) containing 562K Our DEH model architectures employed in the experiments and ablation on the invariant feature computation. The models are trained on all the available training data. The results of the models in bold are presented in Figure", "figure_data": "MethodsEquiv. layer sizes [K 1 , K 2 , . . . ]Invariant operationFC-layer sizeTotal #paramsPerformanceO(3) Action recognitionAcc., % (↑)DEH[8, 6, 2]sum327.8K69.86DEH[3, 2]∆ E328.1K82.92DEH[3, 2]∆328.1K87.36O(5) RegressionMSE (↓)DEH[2]l 2 -norm323430.0084DEH[2]∆ E322750.0033DEH[2]∆322750.0007O(5) Convex hullsMSE (↓)DEH[32, 24]l 2 -norm3257.2K7.5398DEH[8, 6]∆ E3249.8K1.3843DEH[8, 6]∆3249.8K1.3166", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Equiv. layer sizes [K 1 , K 2 , . . . ] Our DEH model: single-and two-layer (the results of which are presented in Figure2). * The performance is averaged across the models trained on the various training set sizes (see Figure2).", "figure_data": "Total #paramsAvg. * performance", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Hyperparameter ablation (using the O(5) convex hull volume prediction task from Section 5: our main DEH model with and without equivariant bias, learnable parameters in the normalization (17). 
* The MSE is averaged across the models trained on the various training set sizes (see Figure 2).", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Perwass et al., 2003)", "Explanation": "The cited work by Perwass et al. introduces the concept of spherical neurons, which the citing paper adopts in their research to study the use of spheres as decision surfaces in Euclidean space."}, {"Category": "Methodological Basis", "Citation": "(Melnyk et al., 2021)", "Explanation": "The cited work by Melnyk et al. further builds upon the concept of spherical neurons introduced by Perwass et al., providing a more in-depth analysis of the use of spheres in non-Euclidean geometry."}, {"Category": "Supporting Evidence", "Citation": "(Bronstein et al., 2017Bronstein et al. ( , 2021) )", "Explanation": "The cited work by Bronstein et al. provides foundational evidence in the form of a unifying framework for GDL, which the citing paper utilizes in their research to study the principles of geometry-symmetry and scale separation in high dimensions."}, {"Category": "Extension or Continuation", "Citation": "(Bronstein et al., 2017Bronstein et al. ( , 2021) )", "Explanation": "The cited work by Bronstein et al. serves as a basis for the extension of GDL in the citing paper, which further explores the use of symmetries in preserving structural information of geometric data and ensuring model robustness in high dimensions."}, {"Category": "Methodological Basis", "Citation": "(Rupp et al., 2012)", "Explanation": "The cited work by Rupp et al. (2012) provides a crucial requirement for the study of molecular analysis, protein design, and catalyst design, which the citing paper builds upon in its research on the integration of symmetry structures in nD spheres."}, {"Category": "Methodological Basis", "Citation": "(Ramakrishnan et al., 2014)", "Explanation": "The cited work by Ramakrishnan et al. (2014) contributes to the study of protein design and assessment by providing a methodological basis for integrating symmetry structures in nD spheres."}, {"Category": "Methodological Basis", "Citation": "(Townshend et al., 2021)", "Explanation": "The cited work by Townshend et al. (2021) offers a methodological approach to catalyst design by integrating symmetry structures in nD spheres, which the citing paper builds upon in its research."}, {"Category": "Methodological Basis", "Citation": "(Jing et al., 2021)", "Explanation": "The cited work by Jing et al. (2021) contributes to the study of molecular analysis and protein design by providing a methodological basis for integrating symmetry structures in nD spheres."}, {"Category": "Methodological Basis", "Citation": "(Lan et al., 2022)", "Explanation": "The cited work by Lan et al. (2022) offers a methodological approach to the study of nD spheres in the context of data analysis and application in natural sciences, which the citing paper builds upon in its research."}, {"Category": "Methodological Basis", "Citation": "(Melnyk et al., 2022a)", "Explanation": "The cited work by Melnyk et al. (2022a) provides a theoretical foundation for the study of steerable 3D spherical neurons, which the citing paper extends in its research on n-simplexes and nD spheres."}, {"Category": "Methodological Basis", "Citation": "(Coors et al., 2018)", "Explanation": "The cited work by Coors et al. 
provides a method for operating on 360 imagery, which the citing paper adopts in their research on spherical CNNs."}, {"Category": "Methodological Basis", "Citation": "(Su & Grauman, 2017)", "Explanation": "The cited work by Su and Grauman also contributes a method for operating on 360 imagery, which the citing paper may have used in their research on spherical CNNs."}, {"Category": "Methodological Basis", "Citation": "(Esteves et al., 2018)", "Explanation": "The cited work by Esteves et al. may have provided a method for operating on 360 imagery that the citing paper utilized in their research on spherical CNNs."}, {"Category": "Methodological Basis", "Citation": "(Cohen et al., 2018)", "Explanation": "The cited work by Cohen et al. may have contributed a method for operating on 360 imagery that the citing paper utilized in their research on spherical CNNs."}, {"Category": "Methodological Basis", "Citation": "(Perraudin et al., 2019)", "Explanation": "The cited work by Perraudin et al. may have provided a method for operating on 360 imagery that the citing paper utilized in their research on spherical CNNs."}, {"Category": "Methodological Basis", "Citation": "(Perwass et al., 2003)", "Explanation": "The cited work by Perwass et al. (2003) provides the foundational idea of using spherical decision surfaces and their symmetries for constructing equivariant models in the 3D case, which the citing paper builds upon in their research on deep equivariant hyperspheres."}, {"Category": "Methodological Basis", "Citation": "(Melnyk et al., 2021)", "Explanation": "The cited work by Melnyk et al. (2021) contributes to the development of equivariant models in the 3D case, which the citing paper further extends in their research on deep equivariant hyperspheres."}, {"Category": "Extension or Continuation", "Citation": "(Melnyk et al., 2022a,b)", "Explanation": "The cited works by Melnyk et al. (2022a,b) are extensions of the research on equivariant models in the 3D case, which the citing paper builds upon in their work on deep equivariant hyperspheres."}, {"Category": "Methodological Basis", "Citation": "(Ruhe et al., 2023)", "Explanation": "The cited work by Ruhe et al. (2023) provides a method of operating directly on input points, which the citing paper adopts in their research on deep equivariant hyperspheres."}, {"Category": "Methodological Basis", "Citation": "(Anderson et al., 2019; Thomas et al., 2018; Fuchs et al., 2020)", "Explanation": "The cited works by Anderson et al. (2019), Thomas et al. (2018), and Fuchs et al. (2020) are methods that require constructing an alternative basis, which the citing paper highlights as a key limitation in their research on deep equivariant hyperspheres."}, {"Category": "Methodological Basis", "Citation": "(Finzi et al., n.d.)", "Explanation": "The cited work by Finzi et al. (n.d.) is a method that the citing paper mentions as another type of method in the field of deep equivariant hyperspheres research."}, {"Category": "Supporting Evidence", "Citation": "(Li et al., 2001)", "Explanation": "The work of Li et al. (2001) is cited in the context of discussing the relation between two points and a sphere for computing invariant features in the citing paper. This work provides foundational information that supports the approach taken in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Perwass et al., 2003)", "Explanation": "The cited work by Perwass et al. 
(2003) provides the methodology of embedding data vectors and representing spheres using conformal geometric algebra, which the citing paper adopts in its research on spherical neurons."}, {"Category": "Data Source", "Citation": "(Melnyk et al., 2021)", "Explanation": "The cited work by Melnyk et al. (2021) serves as a data source for the research on spherical neurons, providing the information and insights necessary for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2001)", "Explanation": "The cited work by Li et al. (2001) contributes to the methodological basis of the research on spherical neurons by introducing conformal geometric algebra as a way of representing data vectors and spheres."}, {"Category": "Methodological Basis", "Citation": "(Perwass et al., 2003)", "Explanation": "The cited work introduces the concept of the spherical neuron and its use in classification tasks, which the citing paper builds upon by adopting the method of using the scalar product of a point and a sphere to determine the class of the point."}, {"Category": "Methodological Basis", "Citation": "(Melnyk et al., 2021)", "Explanation": "The cited work by Melnyk et al. provides the observation that spherical neurons do not require an activation function due to the inherent non-linearity of the embedding, which the citing paper adopts in its research on spherical neurons."}, {"Category": "Methodological Basis", "Citation": "(Melnyk et al., 2022a,b)", "Explanation": "The cited work by Melnyk et al. provides a convention for writing rotations and reflections in the Euclidean space, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "(Melnyk et al., 2022a)", "Explanation": "The cited work by Melnyk et al. (2022a) provides the basis for the use of spherical neurons in the study of steerability in 3D case, as it demonstrates the ability of these neurons to be steered and defines steerability in terms of linear combinations of rotated basis functions."}, {"Category": "Supporting Evidence", "Citation": "(Perwass et al., 2003;Melnyk et al., 2021)", "Explanation": "The cited works by Perwass et al. (2003) and Melnyk et al. (2021) provide foundational information on the use of spherical neurons in the context of steerability, contributing to the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Freeman et al., 1991;Knutsson et al., 1992)", "Explanation": "The cited works by Freeman et al. (1991) and Knutsson et al. (1992) provide the basis for the use of the results of Freeman et al. (1991) in the study of steerability in 3D case, as they establish the need for a specific distribution of basis functions in the space."}, {"Category": "Methodological Basis", "Citation": "(Melnyk et al., 2019)", "Explanation": "The cited work by Melnyk et al. provides a method for constructing a steerable 3D spherical neuron using a 4 x 5 matrix, which the citing paper adopts in their research to define the structure of a steerable neuron."}, {"Category": "Methodological Basis", "Citation": "(2022a)", "Explanation": "The cited work introduces the concept of steerable 3D spherical neurons and their SO(3)-equivariance, which the citing paper adopts in their research on filter bank output space and change-of-basis matrices for rotation representation."}, {"Category": "Methodological Basis", "Citation": "(Melnyk et al., 2022b)", "Explanation": "The cited work by Melnyk et al. 
(2022b) has provided the TetraSphere approach for O(3)-invariant point cloud classification, which the citing paper has adopted in the end-to-end learning of steerable 3D spherical neurons."}, {"Category": "Methodological Basis", "Citation": "(Perwass et al., 2003)", "Explanation": "The cited work introduces the concept of bias in linear classifiers, which the citing paper builds upon to model the bias implicitly in the embedding of the O(n)-equivariant hypersphere."}, {"Category": "Methodological Basis", "Citation": "(Ioffe & Szegedy, 2015)", "Explanation": "The cited work by Ioffe and Szegedy provides the foundational concept of feature normalization in deep learning, which the citing paper adopts in their research on the normalization of equivariant hypersphere activations."}, {"Category": "Methodological Basis", "Citation": "(Ba et al., 2016)", "Explanation": "The cited work by Ba et al. also contributes to the methodological basis of feature normalization in deep learning, providing further insights and techniques for normalizing activations in the hypersphere filter."}, {"Category": "Methodological Basis", "Citation": "(Ruhe et al., 2023)", "Explanation": "The cited work introduces a new normalization step that the citing paper adopts to increase the descriptive power of the proposed approach."}, {"Category": "Methodological Basis", "Citation": "(13)", "Explanation": "The cited work provides the basis for constructing the regular n-simplex constellation for the equivariant neuron in the citing paper, which is used to increase the expressiveness of the model."}, {"Category": "Extension or Continuation", "Citation": "(19)", "Explanation": "The proposed relation between two points and a sphere in the cited work of Li et al. (2001) is extended in the citing paper to further enhance the expressiveness of the model."}, {"Category": "Methodological Basis", "Citation": "(Deng et al., 2021)", "Explanation": "The cited work by Deng et al. provides the foundation for the O(n)-invariance of the Gram matrix Y Y \u22a4 in the proposed invariant operator, which is a key element in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Melnyk et al., 2022b)", "Explanation": "The cited work by Melnyk et al. contributes to the O(n)-invariance of the Gram matrix Y Y \u22a4 in the proposed invariant operator, as it is comprised of the pair-wise inner products of equivariant features that are invariant in the same way as the auto-product of the points in the work by Xu et al."}, {"Category": "Methodological Basis", "Citation": "(Xu et al., 2021)", "Explanation": "The cited work by Xu et al. provides the method of computing the auto-product of the points, which is used in the proposed invariant operator to achieve permutation-invariance by aggregation over the points and applying max and/or mean pooling over N ."}, {"Category": "Methodological Basis", "Citation": "(Paszke et al., 2019)", "Explanation": "The cited work by Paszke et al. (2019) provides the implementation of the models in PyTorch, which serves as a methodological basis for the experiments conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Melnyk et al., 2022a)", "Explanation": "The cited work by Melnyk et al. 
(2022a) provides the data and extraction process for the 3D skeleton data from the UTKinect-Action3D dataset, which the citing paper uses to test the ability of their method to utilize O(3)-equivariance as the inductive bias."}, {"Category": "Methodological Basis", "Citation": "(Xu et al., 2021)", "Explanation": "The cited work by Xu et al. (2021) provides the method of computing the Gram matrix of input points, which the citing paper adopts in the construction of the invariant SGM descriptor."}, {"Category": "Methodological Basis", "Citation": "(Deng et al., 2021)", "Explanation": "The cited work by Deng et al. (2021) introduces the point-wise equivariant VN model, which the citing paper uses in the development of the DEH model."}, {"Category": "Methodological Basis", "Citation": "(Jing et al., 2021)", "Explanation": "The cited work by Jing et al. (2021) presents the GVP model, which the citing paper incorporates in the DEH model to improve performance."}, {"Category": "Methodological Basis", "Citation": "(Ruhe et al., 2023)", "Explanation": "The cited work by Ruhe et al. (2023) introduces the CGENN model, which the citing paper uses in the DEH model to enhance the performance of the method."}, {"Category": "Methodological Basis", "Citation": "(Finzi et al., 2021)", "Explanation": "The cited work introduces the task of modeling the O(n)(5)-invariant function, which the citing paper adopts in their research to evaluate the performance of their DEH architecture."}, {"Category": "Extension or Continuation", "Citation": "(Ruhe et al., 2023)", "Explanation": "The cited work provides the same training hyperparameters and evaluation setup as the citing paper, allowing for a direct comparison of the DEH architecture to other methods."}, {"Category": "Methodological Basis", "Citation": "(Ruhe et al., 2023)", "Explanation": "The cited work by Ruhe et al. provides a specific task of estimating the volume of a convex hull generated by 16 5D points, which the citing paper adopts in their research to address a more challenging problem in the progression of tasks."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2001)", "Explanation": "The cited work by Li et al. (2001) provides the inspiration for the proposed invariant operator (20), which is used in the network architecture of the citing paper to model the relationship between two points and a sphere."}, {"Category": "Methodological Basis", "Citation": "(Satorras et al., 2021)", "Explanation": "The cited work by Satorras et al. (2021) is used as a methodological basis for the proposed equivariant hyperspheres in the citing paper, as it provides a common way to address translation equivariance using graph neural networks (GNNs). 
The citing paper leverages this method to achieve the desired equivariance in the O(n) group."}, {"Category": "Methodological Basis", "Citation": "(Ruhe et al., 2023)", "Explanation": "The citing paper adopts the CGENN model architecture from the cited work to conduct their research on error reduction in language models."}, {"Category": "Data Source", "Citation": "(Finzi et al., 2021)", "Explanation": "The cited work provides the EMLPs model, which the citing paper uses as a reference for the number of parameters in their model."}, {"Category": "Extension or Continuation", "Citation": "(Ruhe et al., 2023)", "Explanation": "The citing paper further extends the research on error reduction in language models by comparing models within the original size range and achieving a lower error in the maximum training data size regime."}, {"Category": "Methodological Basis", "Citation": "(Ruhe et al., 2023)", "Explanation": "The citing paper uses the corrected evaluation protocol from the latest version of the cited work to ensure accurate and reliable results in their research."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2001)", "Explanation": "The cited work by Li et al. (2001) provides a formulation for edge computation in the invariant operator \u2206, which the citing paper adopts in the formulation of the operator in Equation (27). This method is used to model the edges as the squared distances between points, including interactions with spheres."}, {"Category": "Supporting Evidence", "Citation": "(Freeman et al., 1991)", "Explanation": "The cited work by Freeman et al. (1991) provides the foundational definition of steerability in the context of 3D functions, which the citing paper builds upon in discussing the conditions for steerability in the 3D space."}, {"Category": "Supporting Evidence", "Citation": "(Knutsson et al., 1992)", "Explanation": "The cited work by Knutsson et al. (1992) further elaborates on the concept of steerability in 3D functions, providing a more detailed and alternative presentation that the citing paper references in its discussion of the topic."}, {"Category": "Supporting Evidence", "Citation": "(Melnyk et al., 2022a)", "Explanation": "The cited work by Melnyk et al. (2022a) shows that in order to steer a spherical neuron defined in (2), one needs a minimum of four basis functions, which the citing paper uses to illustrate the conditions for steerability in the context of spherical neurons."}]
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b11" ], "table_ref": [], "text": "Graph neural networks (GNNs) have emerged as a powerful learning paradigm able to treat unstructured data and extract \"object-relation\"/causal relationships while imparting inductive biases which preserve invariances through the underlying graph topology [1][2][3][4]. This framework has proven effective for a wide range of both graph analytics and data-driven physics modeling problems. Despite successes, GNNs have generally struggle to achieve the improved performance with increasing depth typical of other architectures. Well-known pathologies, such as oversmoothing, oversquashing, bottlenecks, and exploding/vanishing gradients yield deep GNNs which are either unstable or lose performance as the number of layers increase [5][6][7][8].\nTo combat this, a number of works build architectures which mimic physical processes to impart desirable numerical properties. For example, some works claim that posing message passing as either a diffusion process or reversible flow may promote stability or help retain information, respectively. These present opposite ends of a spectrum between irreversible and reversible processes, which either dissipate or retain information. It is unclear, however, what role (ir)reversibility plays [9]. One could argue that dissipation entropically destroys information and could promote oversmoothing, so should be avoided. Alternatively, in dynamical systems theory, dissipation is crucial to realize a low-dimensional attractor, and thus dissipation may play an important role in realizing dimensionality reduction. Moreover, recent work has shown that dissipative phenomena can actually sharpen information as well as smooth it [10], although this is not often noticed in practice since typical empirical tricks (batch norm, etc.) lead to a departure from the governing mathematical theory.\nIn physics, Poisson brackets and their metriplectic/port-Hamiltonian generalization to dissipative systems provide an abstract framework for studying conservation and entropy production in dynamical systems. In this work, we construct four novel architectures which span the (ir)reversibility spectrum, using geometric brackets as a means of parameterizing dynamics abstractly without empirically assuming a physical model. This relies on an application of the data-driven exterior calculus (DDEC) [11], which allows a reinterpretation of the message-passing and aggregation of graph attention networks [12] as the fluxes and conservation balances of physics simulators [13], providing a simple but powerful framework for mathematical analysis. In this context, we recast graph attention as an inner-product on graph features, inducing graph derivative \"building-blocks\" which may be used to build geometric brackets. In the process, we generalize classical graph attention [12] to higher-order clique cochains (e.g., labels on edges and loops). The four architectures proposed here scale with identical complexity to classical graph attention networks, and possess desirable properties that have proven elusive in current architectures. On the reversible and irreversible end of the spectrum we have Hamiltonian and Gradient networks. 
In the middle of the spectrum, Double Bracket and Metriplectic architectures combine both reversibility and irreversibility, dissipating energy to either the environment or an entropic variable, respectively, in a manner consistent with the second law of thermodynamics. We summarize these brackets in Table 1, providing a diagram of their architecture in Figure 1." }, { "figure_ref": [], "heading": "Primary contributions:", "publication_ref": [], "table_ref": [], "text": "Theoretical analysis of GAT in terms of exterior calculus. Using DDEC we establish a unified framework for construction and analysis of message-passing graph attention networks, and provide an extensive introductory primer to the theory in the appendices. In this setting, we show that with our modified attention mechanism, GATs amount to a diffusion process for a special choice of activation and weights.\nGeneralized attention mechanism. Within this framework, we obtain a natural and flexible extension of graph attention from nodal features to higher order cliques (e.g. edge features). We show attention must have a symmetric numerator to be formally structure-preserving, and introduce a novel and flexible graph attention mechanism parameterized in terms of learnable inner products on nodes and edges.\nNovel structure-preserving extensions. We develop four GNN architectures based upon bracketbased dynamical systems. In the metriplectic case, we obtain the first architecture with linear complexity in the size of the graph while previous works are O(N 3 ).\nUnified evaluation of dissipation. We use these architectures to systematically evaluate the role of (ir)reversibility in the performance of deep GNNs. We observe best performance for partially Table 1: The abstract bracket formulations employed in this work. Here x represents a state variable, while E = E(x), S = S(x) are energy and entropy functions. \"Conservative\" indicates purely reversible motion, \"totally dissipative\" indicates purely irreversible motion, and \"partially dissipative\" indicates motion which either dissipates E (in the double bracket case) or generates S (in the metriplectic case).\ndissipative systems, indicating that a combination of both reversibility and irreversibility are important. Pure diffusion is the least performant across all benchmarks. For physics-based problems including optimal control, there is a distinct improvement. All models provide near state-of-the-art performance and marked improvements over black-box GAT/NODE networks." }, { "figure_ref": [], "heading": "Previous works", "publication_ref": [ "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b30", "b31", "b32", "b33", "b34", "b35", "b36", "b37", "b38", "b39", "b40", "b41", "b42", "b43", "b44", "b45", "b46", "b47", "b48", "b49" ], "table_ref": [], "text": "Neural ODEs: Many works use neural networks to fit dynamics of the form ẋ = f (x, θ) to time series data. Model calibration (e.g., UDE [14]), dictionary-based learning (e.g., SINDy [15]), and neural ordinary differential equations (e.g., NODE [16]) pose a spectrum of inductive biases requiring progressively less domain expertise. Structure-preservation provides a means of obtaining stable training without requiring domain knowledge, ideally achieving the flexibility of NODE with the robustness of UDE/SINDy. 
The current work learns dynamics on a graph while using a modern NODE library to exploit the improved accuracy of high-order integrators [17][18][19].\nStructure-preserving dense networks: For dense networks, it is relatively straightforward to parameterize reversible dynamics, see for example: Hamiltonian neural networks [20][21][22][23], Hamiltonian generative networks [24], Hamiltonian with Control (SymODEN) [25], Deep Lagrangian networks [26] and Lagrangian neural networks [27]. Structure-preserving extensions to dissipative systems are more challenging, particularly for metriplectic dynamics [28] which require a delicate degeneracy condition to preserve discrete notions of the first and second laws of thermodynamics. For dense networks such constructions are intensive, suffering from O(N 3 ) complexity in the number of features [29][30][31]. In the graph setting we avoid this and achieve linear complexity by exploiting exact sequence structure. Alternative dissipative frameworks include Dissipative SymODEN [32] and port-Hamiltonian [33]. We choose to focus on metriplectic parameterizations due to their broad potential impact in data-driven physics modeling, and ability to naturally treat fluctuations in multiscale systems [34].\nPhysics-informed vs structure-preserving: \"Physics-informed\" learning imposes physics by penalty, adding a regularizer corresponding to a physics residual. The technique is simple to implement and has been successfully applied to solve a range of PDEs [35], discover data-driven models to complement first-principles simulators [36][37][38], learn metriplectic dynamics [39], and perform uncertainty quantification [40,41]. Penalization poses a multiobjective optimization problem, however, with parameters weighting competing objectives inducing pathologies during training, often resulting in physics being imposed to a coarse tolerance and qualitatively poor predictions [42,43]. In contrast, structure-preserving architectures exactly impose physics by construction via carefully designed networks. Several works have shown that penalty-based approaches suffer in comparison, with structure-preservation providing improved long term stability, extrapolation and physical realizability.\nStructure-preserving graph networks: Several works use discretizations of specific PDEs to combat oversmoothing or exploding/vanishing gradients, e.g. telegraph equations [44] or various reaction-diffusion systems [45]. Several works develop Hamiltonian flows on graphs [46,47]. For metriplectic dynamics, [48] poses a penalty based formulation on graphs. We particularly focus on GRAND, which poses graph learning as a diffusive process [49], using a similar exterior calculus framework and interpreting attention as a diffusion coefficient. We show in Appendix A.5 that their analysis fails to account for the asymmetry in the attention mechanism, leading to a departure from the governing theory. To account for this, we introduce a modified attention mechanism which retains interpretation as a part of diffusion PDE. In this purely irreversible case, it is of interest whether adherence to the theory provides improved results, or GRAND's success is driven by something other than structure-preservation." 
}, { "figure_ref": [], "heading": "Theory and fundamentals", "publication_ref": [ "b50", "b51", "b10", "b52", "b10", "b52", "b53", "b54" ], "table_ref": [], "text": "Here we introduce the two essential ingredients to our approach: bracket-based dynamical systems for neural differential equations, and the data-driven exterior calculus which enables their construction.\nA thorough introduction to this material is provided in Appendices A.1, A.3, and A.2.\nBracket-based dynamics: Originally introduced as an extension of Hamiltonian/Lagrangian dynamics to include dissipation [50], bracket formulations are used to inform a dynamical system with certain structural properties, e.g., time-reversibility, invariant differential forms, or property preservation. Even without dissipation, bracket formulations may compactly describe dynamics while preserving core mathematical properties, making them ideal for designing neural architectures.\nBracket formulations are usually specified via some combination of reversible brackets {F, G} = ⟨∇F, L∇G⟩ and irreversible brackets [F, G] = ⟨∇F, M∇G⟩ , {{F, G}} = ∇F, L 2 ∇G for potentially state-dependent operators L * = -L and M * = M. The particular brackets which are used in the present network architectures are summarized in Table 1. Note that complete systems are the dynamical extensions of isolated thermodynamical systems: they conserve energy and produce entropy, with nothing lost to the ambient environment. Conversely, incomplete systems do not account for any lost energy: they only require that it vanish in a prescribed way. The choice of completeness is an application-dependent modeling assumption.\nExterior calculus: In the combinatorial Hodge theory [51], an oriented graph G = {V, E} carries sets of k-cliques, denoted G k , which are collections of ordered subgraphs generated by (k + 1) nodes. This induces natural exterior derivative operators d k : Ω k → Ω k+1 , acting on the spaces of functions on G k , which are the signed incidence matrices between k-cliques and (k + 1)-cliques. An explicit representation of these derivatives is given in Appendix A.1, from which it is easy to check the exact sequence property d k+1 • d k = 0 for any k. This yields a discrete de Rham complex on the graph G (Figure 2). Moreover, given a choice of inner product (say, ℓ 2 ) on Ω k , there is an obvious dual de Rham complex which comes directly from adjointness. In particular, one can define dual derivatives d * k : Ω k+1 → Ω k via the equality ⟨d k f, g⟩ k+1 = ⟨f, d * k g⟩ k , from which nontrivial results such as the Hodge decomposition, Poincaré inequality, and coercivity/invertibility of the Hodge Laplacian [11]). Using the derivatives d k , d * k , it is possible to build compatible discretizations of PDEs on G which are guaranteed to preserve exactness properties such as, e.g.,\n∆ k = d * k d k + d k-1 d * k-1 follow (see e.g.\nd 1 • d 0 = curl • grad = 0.\nThe choice of inner product ⟨•, •⟩ k thus induces a definition of the dual derivatives d * k . In the graph setting [52], one typically selects the ℓ 2 inner product, obtaining the adjoints of the signed incidence matrices as d * k = d ⊺ k . By instead working with the modified inner product\n(v, w) = v ⊺ A k w for a machine-learnable A k , we obtain d * k = A -1 k d ⊺ k A k+1 (see Appendix A.3). 
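A minimal numerical check of this construction is sketched below (NumPy; a hypothetical oriented triangle graph, with random symmetric positive definite matrices standing in for the learnable A_k). It builds the signed incidence matrices d_0, d_1 and confirms both the primal exact sequence d_1 d_0 = 0 and its learnable-adjoint counterpart d*_0 d*_1 = 0, independently of the choice of A_k.

```python
import numpy as np

rng = np.random.default_rng(1)

# Oriented triangle graph: nodes {0,1,2}, edges (0,1), (0,2), (1,2), one 2-clique (0,1,2).
# d_0: gradient (edges x nodes), (d_0 q)_(i,j) = q_j - q_i.
d0 = np.array([
    [-1.0,  1.0,  0.0],   # edge (0,1)
    [-1.0,  0.0,  1.0],   # edge (0,2)
    [ 0.0, -1.0,  1.0],   # edge (1,2)
])
# d_1: curl (2-cliques x edges); the boundary of (0,1,2) is (0,1) - (0,2) + (1,2).
d1 = np.array([[1.0, -1.0, 1.0]])

def random_spd(k):
    """A random symmetric positive definite matrix, standing in for a learnable A_k."""
    B = rng.standard_normal((k, k))
    return B @ B.T + k * np.eye(k)

A0, A1, A2 = random_spd(3), random_spd(3), random_spd(1)

# Learnable adjoints d*_k = A_k^{-1} d_k^T A_{k+1}.
d0_star = np.linalg.solve(A0, d0.T @ A1)
d1_star = np.linalg.solve(A1, d1.T @ A2)

assert np.allclose(d1 @ d0, 0)            # primal exact sequence: curl o grad = 0
assert np.allclose(d0_star @ d1_star, 0)  # dual exact sequence with learnable inner products
print("d_1 d_0 = 0 and d*_0 d*_1 = 0 for any SPD choice of A_k.")
```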
This parameterization Ω 0 Ω 1 Ω 2 • • • Ω k Ω 0 Ω 1 Ω 2 • • • Ω k d 0 d ⊺ 0 d 1 d ⊺ 1 d 2 d ⊺ 2 d k-1 d ⊺ k-1 A 0 A 1 d * 0 A 2 d * 1 d * 2 d * k-1 A k\nFigure 2: A commutative diagram illustrating the relationship between the graph derivatives d k , their ℓ 2 adjoints d ⊺ k , and the learnable adjoints d * k . These operators form a de Rham complex due to the exact sequence property\nd i+1 • d i = d ⊺ i • d ⊺ i+1 = d * i • d * i+1 = 0.\nWe show that the learnable A k may encode attention mechanisms, without impacting the preservation of exact sequence structure. inherits the exact sequence property from the graph topology encoded in d k while allowing for incorporation of geometric information from data. This leads directly to the following result, which holds for any (potentially feature-dependent) symmetric positive definite matrix\nA k . Theorem 3.1. The dual derivatives d * k : Ω k+1 → Ω k adjoint to d k : Ω k → Ω k+1 with respect to the learnable inner products A k : Ω k → Ω k satisfy an exact sequence property. Proof. d * k-1 d * k = A -1 k-1 d ⊺ k-1 A k A -1 k d ⊺ k A k+1 = A -1 k-1 (d k d k-1 ) ⊺ A k+1 = 0.\nAs will be shown in Section 4, by encoding graph attention into the A k , we may exploit the exact sequence property to obtain symmetric positive definite diffusion operators, as well as conduct the cancellations necessary to enforce degeneracy conditions necessary for metriplectic dynamics.\nFor a thorough review of DDEC, we direct readers to Appendix A.1 and [11]. For exterior calculus in topological data analysis see [52], and an overview in the context of PDEs see [53,54]." }, { "figure_ref": [], "heading": "Structure-preserving bracket parameterizations", "publication_ref": [ "b11", "b55", "b11", "b49", "b29", "b31" ], "table_ref": [], "text": "We next summarize properties of the bracket dynamics introduced in Section 3 and displayed in Table 1, postponing details and rigorous discussion to Appendices A.3 and A.6. Letting x = (q, p) denote node-edge feature pairs, the following operators will be used to generate our brackets.\nL = 0 -d * 0 d 0 0 , G = ∆ 0 0 0 ∆ 1 = d * 0 d 0 0 0 d * 1 d 1 + d 0 d * 0 , M = 0 0 0 A 1 d * 1 d 1 A 1 .\nAs mentioned before, the inner products A 0 , A 1 , A 2 on Ω k which induce the dual derivatives d * 0 , d * 1 , are chosen in such a way that their combination generalizes a graph attention mechanism. The precise details of this construction are given below, and its relationship to the standard GAT network from [12] is shown in Appendix A.5. Notice that L * = -L, while G * = G, M * = M are positive semidefinite with respect to the block-diagonal inner product (•, •) defined by A = diag (A 0 , A 1 ) (details are provided in Appendix A.6). Therefore, L generates purely reversible (Hamiltonian) dynamics and G, M generate irreversible (dissipative) ones. Additionally, note that state-dependence in L, M, G enters only through the adjoint differential operators, meaning that any structural properties induced by the topology of the graph G (such as the exact sequence property mentioned in Theorem 3.1) are automatically preserved. Remark 4.1. Strictly speaking, L is guaranteed to be a truly Hamiltonian system only when d * 0 is state-independent, since it may otherwise fail to satisfy Jacobi's identity. 
On the other hand, energy conservation is always guaranteed due to the fact that L is skew-adjoint.\nIn addition to the bracket matrices L, M, G, it is necessary to have access to energy and entropy functions E, S and their associated functional derivatives with respect to the inner product on Ω 0 ⊕ Ω 1 defined by A. For the Hamiltonian, gradient, and double brackets, E is chosen simply as the \"total kinetic energy\"\nE(q, p) = 1 2 |q| 2 + |p| 2 = 1 2 i∈V |q i | 2 + 1 2 α∈E |p α | 2 ,\nwhose A-gradient (computed in Appendix A.6) is just ∇E(q, p) = A -1 0 q A -1 1 p ⊺ . Since the metriplectic bracket uses parameterizations of E, S which are more involved, discussion of this case is deferred to later in this Section.\nAttention as learnable inner product: Before describing the dynamics, it remains to discuss how the matrices A i , 0 ≤ i ≤ 2, are computed in practice, and how they relate to the idea of graph attention. Recall that if n V > 0 denotes the nodal feature dimension, a graph attention mechanism takes the form a(q i , q j ) = f (ã ij ) / j f (ã ij ) for some differentiable pre-attention function ã : n V × n V → R (e.g., for scaled dot product [55]) one typically represents a(q i , q j ) as a softmax, so that f = exp(q)). This suggests a decomposition a(q i , q j ) = A -1 0 A 1 where A 0 = (a 0,ii ) is diagonal on nodes and A 1 = (a 1,ij ) is diagonal on edges,\na 0,ii = j∈N (i) f (ã (q i , q j )) , a 1,ij = f (ã (q i , q j )) .\nTreating the numerator and denominator of the standard attention mechanism separately in A 0 , A 1 allows for a flexible and theoretically sound incorporation of graph attention directly into the adjoint differential operators on G. In particular, if A 1 is symmetric with respect to edge-orientation and p is an edge feature which is antisymmetric, it follows that\n(d * 0 p) i = A -1 0 d ⊺ 0 A 1 p i = j∈N (i)\na (q i , q j ) p ji , which is just graph attention combined with edge aggregation. This makes it possible to give the following informal statement regarding graph attention networks which is explained and proven in Appendix A.5. Remark 4.2. The GAT layer from [12] is almost the forward Euler discretization of a metric heat equation.\nThe \"almost\" appearing here has to do with the fact that (1) the attentional numerator f (ã(q i , q j )) is generally asymmetric in i, j, and is therefore symmetrized by the divergence operator d ⊺ 0 , (2) the activation function between layers is not included, and (3) learnable weight matrices W k in GAT are set to the identity. Remark 4.3. The interpretation of graph attention as a combination of learnable inner products admits a direct generalization to higher-order cliques, which is discussed in Appendix A.4.\nHamiltonian case: A purely conservative system is generated by solving ẋ = L(x)∇E(x), or\nq ṗ = 0 -d * 0 d 0 0 A -1 0 0 0 A -1 1 q p = -d * 0 A -1 1 p d 0 A -1 0 q\n. This is a noncanonical Hamiltonian system which generates a purely reversible flow. In particular, it can be shown that\nĖ(x) = ( ẋ, ∇E(x)) = (L(x)∇E(x), ∇E(x)) = -(∇E(x), L(x)∇E(x)) = 0,\nso that energy is conserved due to the skew-adjointness of L.\nGradient case: On the opposite end of the spectrum are generalized gradient flows, which are totally dissipative. Consider solving ẋ = -G(x)∇E(x), or\nq ṗ = - ∆ 0 0 0 ∆ 1 A -1 0 0 0 A -1 1 q p = - ∆ 0 A -1 0 q ∆ 1 A -1 1 p .\nThis system is a metric diffusion process on nodes and edges separately. 
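To make the nodal part of this diffusion concrete, the following NumPy sketch (a toy four-node graph with scalar features and a hypothetical symmetric pre-attention; the learnable weight matrices of the actual networks are omitted) assembles A_0, A_1 from the attention numerator and denominator, forms ∆_0 = d*_0 d_0, and verifies that the resulting flow q̇ = −∆_0 A_0⁻¹ q dissipates E.

```python
import numpy as np

rng = np.random.default_rng(2)

# Small oriented graph with scalar node features (a toy stand-in for learned features).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n_nodes = 4
q = rng.standard_normal(n_nodes)

# d_0 (edges x nodes): (d_0 q)_alpha = q_j - q_i for edge alpha = (i, j).
d0 = np.zeros((len(edges), n_nodes))
for a, (i, j) in enumerate(edges):
    d0[a, i], d0[a, j] = -1.0, 1.0

# Symmetric pre-attention (a hypothetical choice); symmetry in (i, j) is what
# makes the resulting A_1 a valid inner product on edges.
f = np.array([np.exp(-(q[i] - q[j]) ** 2) for (i, j) in edges])
A1 = np.diag(f)                                            # attention numerator, on edges
A0 = np.diag([f[[a for a, e in enumerate(edges) if v in e]].sum()
              for v in range(n_nodes)])                    # attention denominator, on nodes

# Nodal part of the gradient flow: q_dot = -Delta_0 A_0^{-1} q,
# with Delta_0 = d_0^* d_0 = A_0^{-1} d_0^T A_1 d_0.
grad_E = np.linalg.solve(A0, q)                            # A-gradient of E = 0.5 |q|^2
q_dot = -np.linalg.solve(A0, d0.T @ A1 @ d0 @ grad_E)

# Dissipation: E_dot = (q_dot, grad_E)_{A_0} = q_dot . q <= 0.
E_dot = q_dot @ q
assert E_dot <= 1e-12
print("attention-weighted diffusion dissipates E:  dE/dt =", E_dot)
```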
Moreover, it corresponds to a generalized gradient flow, since\nĖ(x) = ( ẋ, ∇E(x)) = -(G(x)∇E(x), ∇E(x)) = -|∇E(x)|2\nG ≤ 0, due to the self-adjoint and positive semi-definite nature of G. Remark 4.4. The architecture in GRAND [49] is almost a gradient flow, however the pre-attention mechanism lacks the requisite symmetry to formally induce a valid inner product. Double bracket case: Another useful formulation for incomplete systems is the so-called doublebracket formalism. Consider solving ẋ = L∇E + L 2 ∇E, or\nq ṗ = 0 -d * 0 d 0 0 A -1 0 q A -1 1 p + -d * 0 d 0 0 0 -d 0 d * 0 A -1 0 q A -1 1 p = -∆ 0 A -1 0 q -d * 0 A -1 1 p d 0 A -1 0 q -d 0 d * 0 A -1 1 p\n. This provides a dissipative relationship which preserves the Casimirs of the Poisson bracket generated by L, since L∇C = 0 implies L 2 ∇C = 0. In particular, it follows that\nĖ(x) = ( ẋ, ∇E(x)) = L(x)∇E(x) + L 2 (x)∇E(x), ∇E(x) = 0 -|L(x)∇E(x)| 2 ≤ 0,\nsince L is skew-adjoint and therefore L 2 is self-adjoint. Remark 4.5. It is interesting to note that the matrix L is essentially a Dirac operator (square root of the Hodge Laplacian ∆ = (d + d * ) 2 ) restricted to cliques of degree at most 1. However, here L 2 = -∆, so that L is in some sense \"pure imaginary\".\nMetriplectic case: Metriplectic systems are expressible as ẋ = L∇E + M∇S where E, S are energy resp. entropy functions which satisfy the degeneracy conditions L∇S = M∇E = 0. One way of setting this up in the present case is to define the energy and entropy functions\nE(q, p) = f E (s(q)) + g E (s (d 0 d ⊺ 0 p)) , S(q, p) = g S (s (d ⊺ 1 d 1 p))\n, where s is sum aggregation over nodes resp. edges, f E : R n V → R acts on node features, and g E , g S : R n E → R act on edge features. Denoting the \"all ones\" vector (of variable length) by 1, it is shown in Appendix A.6 that the A-gradients of energy and entropy can be computed as\n∇E(x) = A -1 0 1 ⊗ ∇f E (h(q)) A -1 1 d 0 d ⊺ 0 1 ⊗ ∇g E (h (d 0 d ⊺ 0 p)) , ∇S(x) = 0 A -1 1 d ⊺ 1 d 1 1 ⊗ ∇g S (h (d ⊺ 1 d 1 p))\n.\nSimilarly, it is shown in Appendix A.6 that the degeneracy conditions L∇S = M∇E = 0 are satisfied by construction. Therefore, the governing dynamical system becomes\nq ṗ = L∇E + M∇S = -A -1 0 d ⊺ 0 d 0 d ⊺ 0 1 ⊗ ∇g E (s (d 0 d ⊺ 0 p)) d 0 A -1 0 1 ⊗ ∇f E (s(q)) + A 1 d * 1 d 1 d ⊺ 1 d 1 1 ⊗ ∇g S (s (d ⊺ 1 d 1 p))\n.\nWith this, it follows that the system obeys a version of the first and second laws of thermodynamics,\nĖ(x) = ( ẋ, ∇E(x)) = (L∇E(x), ∇E(x)) + (M∇S(x), ∇E(x)) = (∇S(x), M∇E(x)) = 0, Ṡ(x) = ( ẋ, ∇S(x)) = (L∇E(x), ∇S(x)) + (M∇S(x), ∇S(x)) = 0 + |∇S(x)| 2 M ≥ 0. Remark 4.6.\nAs seen in the increased complexity of this formulation, enforcing the degeneracy conditions necessary for metriplectic structure is nontrivial. This is accomplished presently via an application of the exact sequence property in Theorem 3.1, which we derive in Appendix A.6. Remark 4.7. It is worth mentioning that, similar to the other architectures presented in this Section, the metriplectic network proposed here exhibits linear O(N ) scaling in the graph size. This is in notable contrast to [29,31] which scale as O(N 3 )." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "This section reports results on experiments designed to probe the influence of bracket structure on trajectory prediction and nodal feature classification. Additional experimental details can be found in Appendix B. 
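Since the same fitting procedure is shared by every architecture below, it may help to sketch the generic NODE-style training loop assumed throughout: the chosen bracket supplies a vector field that is integrated with torchdiffeq and fit to trajectory snapshots. The function and variable names here are illustrative only; details such as the loss and solver vary by experiment as described in Appendix B.

```python
import torch
from torchdiffeq import odeint

class BracketField(torch.nn.Module):
    """Wraps a bracket vector field xdot = f(x), e.g. f(x) = L(x) grad E(x) or -G(x) grad E(x)."""
    def __init__(self, field: torch.nn.Module):
        super().__init__()
        self.field = field

    def forward(self, t, x):                     # torchdiffeq expects the signature f(t, x)
        return self.field(x)

def fit(field, x_true, t, epochs=1000, lr=1e-3):
    # x_true: (T, ...) ground-truth snapshots at times t, with x_true[0] the initial state
    model = BracketField(field)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        x_pred = odeint(model, x_true[0], t, method="euler")
        loss = ((x_pred - x_true) ** 2).mean()   # MSE; the pendulum experiment uses MAE instead
        loss.backward()
        opt.step()
    return model
```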
In each Table, orange indicates the best result by our models, and blue indicates the best of those compared. We consider both physical systems, where the role of structure preservation is explicit, as well as graph-analytic problems." }, { "figure_ref": [], "heading": "Damped double pendulum", "publication_ref": [ "b57" ], "table_ref": [ "tab_1" ], "text": "As a first experiment, consider applying one of these architectures to reproduce the trajectory of a double pendulum with a damping force proportional to the angular momenta of the pendulum masses (see Appendix B.1 for details). Since this system is metriplectic when expressed in position-momentum-entropy coordinates (c.f. [56]), it is useful to see if any of the brackets from Section 4 can adequately capture these dynamics without an entropic variable. The results of applying the architectures of Section 4 to reproduce a trajectory of five periods are displayed in Table 2, alongside comparisons with a black-box NODE network and a latent NODE with feature encoder/decoder. While each network is capable of producing a small mean absolute error, it is clear that the metriplectic and Hamiltonian networks produce the most accurate trajectories. It is remarkable both that the Hamiltonian bracket does so well here and that the gradient bracket does so poorly, being that the damped double pendulum system is quite dissipative. On the other hand, it is unlikely to be only the feature encoder/decoder leading to good performance here, as both the NODE and NODE+AE architectures perform worse on this task by about one order of magnitude." }, { "figure_ref": [], "heading": "MuJoCo Dynamics", "publication_ref": [ "b58", "b23" ], "table_ref": [ "tab_2" ], "text": "Next we test the proposed models on more complex physical systems that are generated by the Multi-Joint dynamics with Contact (MuJoCo) physics simulator [57]. We consider the modified versions of Open AI Gym environments [23]: HalfCheetah, Hopper, and Swimmer.\nWe represent an object in an environment as a fully-connected graph, where a node corresponds to a body part of the object and, thus, the nodal feature q i corresponds to a position of a body part or an angle of a joint. 3 As the edge features, a pair of nodal velocities p α = (v src(α) , v dst(α) ) are provided, where v src(α) and v dst(α) denote velocities of the source and destination nodes connected to the edge.\nSince the MuJoCo environments contain an actor applying controls, additional control input is accounted for with an additive forcing term which is parameterized by a multi-layer perceptron and introduced into the bracket-based dynamics models. See Appendix B.2 for additional experimental details. The problem therefore consists of finding an optimal control MLP, and we evaluate the improvement which comes from representing the physics surrogate with bracket dynamics over NODE.\nAll models are trained via minimizing the MSE between the predicted positions q and the ground truth positions q and are tested on an unseen test set. Table 3 reports the errors of network predictions on the test set measured in the relative ℓ 2 norm, ∥q -q∥ 2 /∥q∥ 2 ∥q∥ 2 . Similar to the double pendulum experiments, all models are able to produce accurate predictions with around or less than 10% errors. While the gradient bracket makes little to no improvements over NODEs, the Hamiltonian, double, and metriplectic brackets produce more accurate predictions. 
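For concreteness, the relative error reported in Table 3 is simply the ratio of ℓ2 norms taken over the whole test trajectory; a small sketch (the (time, body part, coordinate) layout is an assumption) is:

```python
import torch

def relative_l2_error(q_pred: torch.Tensor, q_true: torch.Tensor) -> float:
    """Relative error ||q_pred - q_true||_2 / ||q_true||_2 over flattened trajectories."""
    return (torch.linalg.norm(q_pred - q_true) / torch.linalg.norm(q_true)).item()

q_true = torch.randn(100, 7, 3)                   # e.g. 100 steps, 7 body parts, 3 coordinates
q_pred = q_true + 0.05 * torch.randn_like(q_true)
print(relative_l2_error(q_pred, q_true))          # roughly 0.05, i.e. a 5% error
```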
Interestingly, the Hamiltonian bracket performs the best in this case as well, meaning that any dissipation present is effectively compensated for by the autoencoder which transforms the features." }, { "figure_ref": [], "heading": "Node classification", "publication_ref": [ "b59", "b60", "b61", "b62", "b63", "b11", "b49", "b64", "b49", "b49" ], "table_ref": [ "tab_3", "tab_4" ], "text": "Moving beyond physics-based examples, it remains to see how bracket-based architectures perform on \"black-box\" node classification problems. Table 4 and Table 5 present results on common benchmark problems including the citation networks Cora [58], Citeseer [59], and Pubmed [60], as well as the coauthor graph, CoauthorCS [61], and the Amazon co-purchasing graphs, Computer and Photo [62]. For comparison, we report results on a standard GAT [12], a neural graph differential equation [49].\narchitecture (GDE) [63], and the nonlinear GRAND architecture (GRAND-nl) from [49] which is closest to ours. Since our experimental setting is similar to that of [49], the numbers reported for GAT, GDE, and GRAND-nl are taken directly from this paper. Note that, despite the similar O(N ) scaling in the metriplectic architecture, the high dimension of the node and edge features on the latter three datasets led to trainable E, S functions which exhausted the memory on our available machines, and therefore results are not reported for these cases. A full description of experimental details is provided in Appendix B.3.\nRemark 5.1. To highlight the effect of bracket structure on network performance, only minimal modifications are employed during network training. In particular, we do not include any additional regularization, positional encoding, graph rewiring, extraction of connected components, extra terms on the right-hand side, or early stopping. While it is likely that better classification performance could be achieved with some of these modifications included, it becomes very difficult to isolate the effect of structure-preservation. A complete list of tunable hyperparameters is given in Appendix B.3.\nThe results show different behavior produced by each bracket architecture. It is empirically clear that there is some value in full or partial reversibility, since the Hamiltonian and double bracket architectures both perform better than the corresponding gradient architecture on datasets such as Computer and Photo. Moreover, it appears that the partially reversible double bracket performs the best of the bracket architectures in every case, which is consistent with the idea that both reversible and irreversible dynamics are critical for capturing the behavior of general dynamical systems. Interestingly, the metriplectic bracket performs worse on these tasks by a large margin. We conjecture this architecture may be harder to train for larger problems despite its O(N ) complexity in the graph size, suggesting that more sophisticated training strategies may be required for large problems." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b49" ], "table_ref": [], "text": "This work presents a unified theoretical framework for analysis and construction of graph attention networks. The exact sequence property of graph derivatives and coercivity of Hodge Laplacians which follow from the theory allow the construction of four structure-preserving brackets, which we use to evaluate the role of irreversibility in both data-driven physics simulators and graph analytics problems. 
In all contexts, the pure diffusion bracket performed most poorly, with mixed results between purely reversible and partially dissipative brackets. [49].\nThe linear scaling achieved by the metriplectic brackets has a potential major impact for data-driven physics modeling. Metriplectic systems emerge naturally when coarse-graining multiscale systems. With increasing interest in using ML to construct digital twins, fast data-driven surrogates for complex multi-physics acting over multiple scales will become crucial. In this setting the stability encoded by metriplectic dynamics translates to robust surrogates, with linear complexity suggesting the possibility of scaling up to millions of degrees of freedom.\nLimitations: All analysis holds under the assumption of modified attention mechanisms which allow interpretation of GAT networks as diffusion processes; readers should take care that the analysis is for a non-standard attention. Secondly, for all brackets we did not introduce empirical modifications (e.g. regularization, forcing, etc) to optimize performance so that we could study the role of (ir)reversibility in isolation. With this in mind, one may be able to add \"tricks\" to e.g. obtain a diffusion architecture which outperforms those presented here. Finally, note that the use of a feature autoencoder in the bracket architectures means that structure is enforced in the transformed space. This allows for applicability to more general systems, and can be easily removed when appropriate features are known." }, { "figure_ref": [], "heading": "Broader impacts:", "publication_ref": [], "table_ref": [], "text": "The work performed here is strictly foundational mathematics and is intended to improve the performance of GNNs in the context of graph analysis and data-driven physics modeling. Subsequent application of the theory may have societal impact, but the current work anticipated to improve the performance of machine learning in graph settings only at a foundational level." }, { "figure_ref": [], "heading": "Glossary of Notation and Symbols", "publication_ref": [], "table_ref": [], "text": "The next list describes several symbols that will be later used within the body of the document \nL ⊺ = -L N (i), N (i) Neighbors of node i ∈ V, neighbors of node i ∈ V including i [S]\nIndicator function of the statement S δf, ∇f Adjoint of df with respect to ⟨•, •⟩, adjoint of df with respect to (•, •)\n∆ k Hodge Laplacian d k d * k + d * k d k δ ij\nKronecker delta ḟ Derivative of f with respect to time \n(•, •) k Learnable metric inner product on k-cliques with matrix representation A k ⟨•, •⟩ k Euclidean ℓ 2 inner product on k-cliques G, V, E Oriented graph, set of nodes, set of edges G k , Ω k Set of k-cliques,\nd ⊺ k , d * k Adjoint of d k with respect to ⟨•, •⟩ k , adjoint of d k with respect to (•, •) k" }, { "figure_ref": [], "heading": "A Mathematical foundations", "publication_ref": [], "table_ref": [], "text": "This Appendix provides the following: (1) an introduction to the ideas of graph exterior calculus, A.1, and bracket-based dynamical systems, A.2, necessary for understanding the results in the body, (2) additional explanation regarding adjoints with respect to generic inner products and associated computations, A.3, (3) a mechanism for higher-order attention expressed in terms of learnable inner products, A.4, (4) a discussion of GATs in the context of exterior calculus, A.5, and (5) proofs which are deferred from Section 4, A.6." 
}, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "A.1 Graph exterior calculus", "publication_ref": [ "b10", "b51", "b65", "b53", "b2", "b10" ], "table_ref": [], "text": "Here some basic notions from the graph exterior calculus are recalled. More details can be found in, e.g., [11,51,64]. As mentioned in Section 3, an oriented graph G = {V, E} carries sets of k-cliques, denoted G k , which are collections of ordered subgraphs generated by (k + 1) nodes. For example, the graph in Figure 3 contains six 0-cliques (nodes), six 1-cliques (edges), and one 2-clique. A notion of combinatorial derivative is then given by the signed incidence matrices d k : Ω k → Ω k+1 , operating on the space Ω k of differentiable functions on k-cliques, whose entries (d k ) ij are 1 or -1 if the j th k-clique is incident on the i th (k + 1)-clique, and zero otherwise. For the example in Figure 3, these are:\nd 0 =        -1 1 0 0 0 0 0 -1 1 0 0 0 0 -1 0 1 0 0 0 0 0 -1 1 0 0 1 0 0 -1 0 0 0 0 0 -1 1        , d 1 = (0 0 1 1 1 0) .\nRemark A.1. While the one-hop neighborhood of node i in G, denoted N (i), does not include node i itself, many machine learning algorithms employ the extended neighborhood\nN (i) = N (i) ∪ {i}.\nSince this is equivalent to considering the one-hop neighborhood of node i in the self-looped graph G, this modification does not change the analysis of functions on graphs.\nIt can be shown that the action of these matrices can be conveniently expressed in terms of totally antisymmetric functions f ∈ Ω k , via the expression\n(d k f ) (i 0 , i 1 , ..., i k+1 ) = k+1 j=0\n(-1) j f i 0 , ..., i j , ..., i k+1 ,\nwhere (i 0 , ..., i k+1 ) denotes a (k + 1)-clique of vertices v ∈ V. As convenient shorthand, we often write subscripts, e.g., (d k f ) i0i1...i k+1 , instead of explicit function arguments. Using [S] to denote the indicator function of the statement S, it is straightforward to check that d\n• d = 0, (d k d k-1 f ) i0,...,i k+1 = k+1 j=0 (-1) j (d k-1 f ) i0,..., ij ,...,i k+1 = k+1 j=0 k+1 l=0 [l < j] (-1) j+l f i0... i l ... ij ...i k+1 + k+1 j=0 k+1 l=0 [l > j] (-1) j+l-1 f i0... ij ... i l ...i k+1 = l<j (-1) j+l f i0... i l ... ij ...i k+1 - l<j (-1) j+l f i0... i l ... ij ...i k+1 = 0,\nsince (-1) j+l-1 = (-1) -1 (-1) j+l = (-1)(-1) j+l and the final sum follows from swapping the labels j, l. This shows that the k-cliques on G form a de Rham complex [53]: a collection of function spaces Ω k equipped with mappings d k satisfying Im d k-1 ⊂ Ker d k as shown in Figure 4. When\nΩ 0 Ω 1 Ω 2 • • • Ω K d0 d1 d2 d K-1\nFigure 4: Illustration of the de Rham complex on G induced by the combinatorial derivatives, where K > 0 is the maximal clique degree. K = 3, this is precisely the graph calculus analogue of the de Rham complex on R 3 formed by the Sobolev spaces H 1 , H(curl), H(div), L 2 which satisfies div\n• curl = curl • grad = 0.\nWhile the construction of the graph derivatives and their associated de Rham complex is purely topological, building elliptic differential operators such as the Laplacian relies on a dual de Rham complex, which is specified by an inner product on Ω k . In the case of ℓ 2 , this leads to dual derivatives which are the matrix transposes of the d k having the following explicit expression.\nProposition A.1. 
The dual derivatives d ⊺ k : Ω k+1 → Ω k adjoint to d k through the ℓ 2 inner product are given by\n(d ⊺ k f ) (i 0 , i 1 , ..., i k ) = 1 k + 2 i k+1 k+1 j=0 f (i 0 , ..., [i j , ..., i k+1 ]) ,\nwhere [i j , ..., i k+1 ] = i k+1 , i j , ..., i k indicates a cyclic permutation forward by one index.\nProof. This is a direct calculation using the representation of d k in terms of antisymmetric functions. More precisely, let an empty sum Σ denote summation over all unspecified indices. Then, for any\ng ∈ Ω k , ⟨d k f, g⟩ = i0...i k+1 ∈G k+1 (d k f ) i0...i k+1 g i0...,i k+1 = 1 (k + 2)!   k+1 j=0 (-1) j f i0... ij ...i k+1   g i0...i k+1 = 1 (k + 2)! f i0...i k   i k+1 k+1 j=0 (-1) j g i0...[ij ...i k+1 ]   = 1 k + 2 i0i1...i k ∈G k f i0...i k   i k+1 k+1 j=0 (-1) j g i0...[ij ...i k+1 ]   = i0i1...i k ∈G k f i0...i k (d ⊺ k g) i0...i k = ⟨f, d ⊺ k g⟩ ,\nwhich establishes the result.\nProposition A.1 is perhaps best illustrated with a concrete example. Consider the graph gradient, defined for edge α = (i, j) as\n(d 0 f ) α = (d 0 f ) ij = f j -f i .\nNotice that this object is antisymmetric with respect to edge orientation, and measures the outflow of information from source to target nodes. From this, it is easy to compute the ℓ 2 -adjoint of d 0 , known as the graph divergence, via\n⟨d 0 f, g⟩ = α=(i,j) (f j -f i ) g ij = i (j>i)∈N (i) g ij f j -g ij f i = 1 2 i j∈N (i) f i (g ji -g ij ) = ⟨f, d ⊺ 0 g⟩ ,\nwhere we have re-indexed under the double sum, used that i ∈ N (j) if and only if j ∈ N (i), and used that there are no self-edges in E. Therefore, it follows that the graph divergence at node i is given by\n(d ⊺ 0 g) i = α∋i g -α -g α = 1 2 j∈N (i) g ji -g ij ,\nwhich reduces to the common form (d ⊺ 0 g) i =j g ij if and only if the edge feature g ij is antisymmetric. Remark A.2. When the inner product on edges E is not L 2 , but defined in terms of a nonnegative, orientation-invariant, and (edge-wise) diagonal weight matrix W = (w ij ), a similar computation shows that the divergence becomes\n(d * 0 f ) i = 1 2 j∈N (i) w ij (f ji -f ij ) .\nThe more general case of arbitrary inner products on V, E is discussed in section A. 3. \nThe differential operators d ⊺ k induce a dual de Rham complex since d ⊺ k-1 d ⊺ k = (d k d k-1 ) ⊺ = 0, which enables both the construction of Laplace operators on k-cliques, ∆ k = d ⊺ k d k + d k-1 d ⊺ k-1 ,\nΩ k = Im d k-1 ⊕ Ker ∆ k ⊕ Im d ⊺ k .\nIn the case where the dual derivatives d * k are adjoint with respect to a learnable inner product which does not depend on graph features, the conclusion of Theorem A.3 continues to hold, leading to an interesting well-posedness result proved in [11] involving nonlinear perturbations of a Hodge-Laplace problem in mixed form.\nTheorem A.4. ([11, Theorem 3.6]) Suppose f k ∈ Ω k , and g (x; ξ) is a neural network with parameters ξ which is Lipschitz continuous and satisfies g(0) = 0. Then, the problem\nw k-1 = d * k-1 u k + ϵg d * k-1 u k ; ξ , f k = d k-1 w k-1 + d * k d k u k , has a unique solution on Ω k /Ker ∆ k .\nThis result shows that initial-value problems involving the Hodge-Laplacian are stable under nonlinear perturbations. Moreover, when ∆ 0 is the Hodge Laplacian on nodes, there is a useful connection between ∆ 0 and the degree and adjacency matrices of the graph G. Recall that the degree matrix D = (d ij ) is diagonal with entries d ii = j∈N (i) 1, while the adjacency matrix A = (a ij ) satisfies a ij = 1 when j ∈ N (i) and a ij = 0 otherwise. Proposition A.2. 
The combinatorial Laplacian on V, denoted\n∆ 0 = d ⊺ 0 d 0 , satisfies ∆ 0 = D -A.\nProof. Notice that\n(d ⊺ 0 d 0 ) ij = α∈E (d 0 ) αi (d 0 ) αj = [i = j] α∈E ((d 0 ) αi ) 2 + [i ̸ = j] α=(i,j) (d 0 ) αi (d 0 ) αj = [i = j] d ii -[i ̸ = j] a ij = d ij -a ij = D -A,\nwhere we used that D is diagonal, A is diagonal-free, and (d 0 ) αi (d 0 ) αj = -1 whenever α = (i, j) is an edge in E, since one of (d 0 ) αi , (d 0 ) αj is 1 and the other is -1." }, { "figure_ref": [], "heading": "A.2 Bracket-based dynamical systems", "publication_ref": [ "b50", "b66", "b67", "b68", "b66", "b29", "b69", "b31", "b68", "b29" ], "table_ref": [], "text": "Here we mention some additional facts regarding bracket-based dynamical systems. More information can be found in, e.g., [50,[65][66][67].\nAs mentioned before, the goal of bracket formalisms is to extend the Hamiltonian formalism to systems with dissipation. To understand where this originates, consider an action functional A(q) = b a L (q, q) dt on the space of curves q(t), defined in terms of a Lagrangian L on the tangent bundle to some Riemannian manifold. Using L q , L q to denote partial derivatives with respect to the subscripted variable, it is straightforward to show that, for any compactly supported variation δq of q, we have\ndA(q)δq = b a dL (q, q) δq = b a L q δq + L q δ q = b a (L q -∂ t L q ) δq,\nwhere the final equality follows from integration-by-parts and the fact that variational and temporal derivatives commute in this setting. It follows that A is stationary (i.e., dA = 0) for all variations only when ∂ t L q = L q . These are the classical Euler-Lagrange equations which are (under some regularity conditions) transformed to Hamiltonian form via a Legendre transformation, H(q, p) = sup q (⟨p, q⟩ -L(q, q)) , which defines the Hamiltonian functional H on phase space, and yields the conjugate momentum vector p = L q . Substituting L = ⟨p, q⟩ -H into the previously derived Euler-Lagrange equations leads immediately to Hamilton's equations for the state x = (q p) ⊺ ,\nẋ = q ṗ = 0 1 -1 0 H q H p = J∇H,\nwhich are an equivalent description of the system in question in terms of the anti-involution J and the functional gradient ∇H.\nAn advantage of the Hamiltonian description is its compact bracket-based formulation, ẋ = J∇H = {x, H}, which requires only the specification of an antisymmetric Poisson bracket {•, •} and a Hamiltonian functional H. Besides admitting a direct generalization to more complex systems such as Korteweg-de Vries or incompressible Euler, where the involved bracket is state-dependent, this formulation makes the energy conservation property of the system obvious. In particular, it follows immediately from the antisymmetry of {•, •} that Ḣ = ⟨ ẋ, ∇H⟩ = {H, H} = 0, while it is more difficult to see immediately that the Euler-Lagrange system obeys this same property. The utility and ease-of-use of bracket formulations is what inspired their extension to other systems of interest which do not conserve energy. On the opposite end of this spectrum are the generalized gradient flows, which can be written in terms of a bracket which is purely dissipative. An example of this is heat flow q = ∆q := -[q, D], which is the L 2 -gradient flow of Dirichlet energy D(q) = (1/2) b a |q ′ | 2 dt (c.f. Appendix A.3). In this case, the functional gradient ∇D = -∂ tt is the negative of the usual Laplace operator, so that the positive-definite bracket [•, •] is generated by the identity operator M = id. 
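The same picture holds verbatim on a graph, where the Dirichlet energy becomes $D(q) = \frac{1}{2}|d_0 q|^2 = \frac{1}{2}q^\intercal \Delta_0 q$ and its $\ell^2$-gradient flow is the graph heat equation $\dot q = -\Delta_0 q$. A minimal sketch on a cycle graph, using the identity $\Delta_0 = D - A$ from Proposition A.2 (graph size and step length are arbitrary choices), is:

```python
import numpy as np

n = 20
A = np.zeros((n, n))
for i in range(n):                                 # cycle graph: node i adjacent to i +/- 1
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
D = np.diag(A.sum(axis=1))
Delta0 = D - A                                     # combinatorial Laplacian, Proposition A.2

rng = np.random.default_rng(0)
q = rng.normal(size=n)
dirichlet = lambda v: 0.5 * v @ Delta0 @ v         # graph Dirichlet energy (1/2)|d0 v|^2

tau = 0.1                                          # stable: tau < 2 / lambda_max, lambda_max <= 4 here
for _ in range(5):
    print(round(float(dirichlet(q)), 4))           # monotonically decreasing along the flow
    q = q - tau * Delta0 @ q                       # forward Euler step of qdot = -Delta0 q
```

Note the sign convention: the Hodge Laplacian $\Delta_0 = d_0^\intercal d_0$ is positive semidefinite, so the heat flow carries a minus sign (cf. Remark A.13).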
It is interesting to note that the same system could be expressed using the usual kinetic energy E(q) = (1/2) b a |q| 2 dt instead, provided that the corresponding bracket is generated by M = -∆. This is a good illustration of the flexibility afforded by bracket-based dynamical systems.\nSince physical systems are not always purely reversible or irreversible, other useful bracket formalisms have been introduced to capture dynamics which are a mix of these two. The double bracket ẋ = {x, E} + {{x, E}} = L∇E + L 2 ∇E is a nice extension of the Hamiltonian bracket particularly because it is Casimir preserving, i.e., those quantities which annihilate the Poisson bracket {•, •} also annihilate the double bracket. This allows for the incorporation of dissipative phenomena into idealized Hamiltonian systems without affecting desirable properties such as mass conservation, and has been used to model, e.g., the Landau-Lifschitz dissipative mechanism, as well as a mechanism for fluids where energy decays but entrophy is preserved (see [65] for additional discussion). A complementary but alternative point of view is taken by the metriplectic bracket formalism, which requires that any dissipation generated by the system is accounted for within the system itself through the generation of entropy. In the metriplectic formalism, the equations of motion are ẋ = {x, E} + [x, S] = L∇E + M∇S, along with important and nontrivial compatibility conditions L∇S = M∇E = 0, also called degeneracy conditions, which ensure that the reversible and irreversible mechanisms do not cross-contaminate. As shown in the body of the paper, this guarantees that metriplectic systems obey a form of the first and second thermodynamical laws. Practically, the degeneracy conditions enforce a good deal of structure on the operators L, M which has been exploited to generate surrogate models [29,68,31]. In particular, it can be shown that the reversible and irreversible brackets can be parameterized in terms of a totally antisymmetric order-3 tensor ξ = (ξ ijk ) and a partially symmetric order-4 tensor ζ = (ζ ik,jl ) through the relations (Einstein summation assumed)\n{A, B} = ξ ijk ∂ i A ∂ j B ∂ k S, [A, B] = ζ ik,jl ∂ i A ∂ k E ∂ j B ∂ l E.\nMoreover, using the symmetries of ζ, it follows (see [67]) that this tensor decomposes into the product ζ ik,jl = Λ m ik D mn Λ n jl of a symmetric matrix D and an order-3 tensor Λ which is skew-symmetric in its lower indices. Thus, by applying symmetry relationships, it is easy to check that {•, S} = [•, E] = 0.\nRemark A.5. In [29], trainable 4-and 3-tensors ξ ijk and ζ ik,jl are constructed to achieve the degeneracy conditions, mandating a costly O(N 3 ) computational complexity. In the current work we overcome this by instead achieving degeneracy through the exact sequence property." }, { "figure_ref": [], "heading": "A.3 Adjoints and gradients", "publication_ref": [ "b70", "b71" ], "table_ref": [], "text": "Beyond the basic calculus operations discussed in section A.1 which depend only on graph topology, the network architectures discussed in the body also make extensive use of learnable metric information coming from the nodal features. To understand this, it is useful to recall some information about general inner products and the derivative operators that they induce. 
First, recall that the usual $\ell^2$ inner product on node features $a, b \in \mathbb{R}^{|\mathcal{V}|}$, $\langle a, b\rangle = a^\intercal b$, is (in this context) a discretization of the standard $L^2$ inner product $\int_{\mathcal{V}} ab\, d\mu$ which aggregates information from across the vertex set $\mathcal{V}$. While this construction is clearly dependent only on the graph structure (i.e., topology), any symmetric positive definite (SPD) matrix $A_0: \Omega_0 \to \Omega_0$ also defines an inner product on functions $a \in \Omega_0$ through the equality $(a, b)_0 := \langle a, A_0 b\rangle = a^\intercal A_0 b$, which gives a different way of measuring the distance between a and b. The advantage of this construction is that $A_0$ can be chosen in a way that incorporates geometric information which implicitly regularizes systems obeying a variational principle. This follows from the relation
$$dE(a)\,b = \langle \delta E(a), b\rangle = (\nabla E(a), b)_0 = \langle A_0 \nabla E(a), b\rangle,$$
where $\delta E$ denotes the $\ell^2$-gradient of E and $\nabla E$ denotes its $A_0$-gradient, i.e., its gradient with respect to the derivative operator induced by the inner product involving $A_0$. From this, it is clear that $\delta E = A_0 \nabla E$, so that the $A_0$-gradient is just an anisotropic rescaling of the $\ell^2$ version. The advantage of working with $\nabla$ over $\delta$ in the present case of graph networks is that $A_0$ can be learned based on the features of the graph. This means that learnable feature information (i.e., graph attention) can be directly incorporated into the differential operators governing our bracket-based dynamical systems by construction.

The prototypical example of where this technique is useful is seen in the gradient flow of Dirichlet energy. Recall that the Dirichlet energy of a differentiable function $u: \mathbb{R}^n \to \mathbb{R}$ is given by $D(u) = \frac{1}{2}\int |\nabla u|^2\, d\mu$, where $\nabla$ now denotes the usual $\ell^2$-gradient of the function u on $\mathbb{R}^n$. Using integration-by-parts, it is easy to see that $dD(u)v = -\int v\,\Delta u\, d\mu$ for any test function v with compact support, implying that the $L^2$-gradient of D is $-\Delta$ and $\dot{u} = \Delta u$ is the $L^2$-gradient flow of Dirichlet energy: the motion which decreases the quantity D(u) the fastest as measured by the $L^2$ norm. It can be shown that high-frequency modes decay quickly under this flow, while low-frequency information takes much longer to dissipate. On the other hand, we could alternatively run the $H^1$-gradient flow of D, which is the motion of fastest decrease with respect to the $H^1$ inner product $(u, v) = \int \langle \nabla u, \nabla v\rangle\, d\mu$. This motion is prescribed in terms of the $H^1$-gradient of D, which by the discussion above with $A_0 = -\Delta$ is easily seen to be the identity. This means that the $H^1$-gradient flow is given by $\dot{u} = -u$, which retains the minimizers of the $L^2$-flow but with quite different intermediate character, since it functions by simultaneously flattening all spatial frequencies. The process of preconditioning a gradient flow by matching derivatives is known as a Sobolev gradient method (c.f. [69]), and these methods often exhibit faster convergence and better numerical behavior than their $L^2$ counterparts [70].

Returning to the graph setting, our learnable matrices $A_k$ on k-cliques will lead to inner products $(\cdot, \cdot)_k$ on functions in $\Omega_k$, and this will induce dual derivatives as described in Appendix A.1. However, in this case we will not have $d_0^* = d_0^\intercal$, but instead the expression given by the following result: Proposition A.3. The $A_k$-adjoints $d_k^*$ to the graph derivative operators $d_k$ are given by $d_k^* = A_k^{-1} d_k^\intercal A_{k+1}$. Similarly, for any linear operator $B: \Omega_k \to \Omega_k$, the $A_k$-adjoint is $B^* = A_k^{-1} B^\intercal A_k$. Proof. Let q, p denote vectors of k-clique resp. (k+1)-clique features.
It follows that\n(d k q, p) k+1 = ⟨d k q, A k+1 p⟩ = ⟨q, d ⊺ k A k+1 p⟩ = ⟨q, A k d * k p⟩ = (q, d * k p) k . Therefore, we see that d ⊺ k A k+1 = A k d * k and hence d * k = A -1 k d ⊺ k A k+1 . Similarly, if q, q ′ denote vectors of k-clique features, it follows from the ℓ 2 -self-adjointness of A k that q, Bq ′ k = q, A k Bq ′ = ⟨B ⊺ A k q, q ′ ⟩ = A -1 k B ⊺ A k q, A k q ′ = (B * q, q ′ ) k , establishing that B * = A -1 k B ⊺ A k . Remark A.6.\nIt is common in graph theory to encounter the case where a i > 0 are nodal weights and w ij > 0 are edge weights. These are nothing more than the (diagonal) inner products A 0 , A 1 in disguise, and so Proposition A.3 immediately yields the familiar formula for the induced divergence\n(d * 0 p) i = 1 a i j:(i,j)∈E w ij (p ji -p ij ) .\nNote that all of these notions extend to the case of block inner products in the obvious way. For example, if q, p are node resp. edge features, it follows that A = diag (A 0 , A 1 ) is an inner product on node-edge feature pairs, and the adjoints of node-edge operators with respect to A are computed as according to Proposition A.3. Remark A.7. For convenience, this work restricts to diagonal matrices A 0 , A 1 . However, note that a matrix which is diagonal in \"edge space\" G 2 is generally full in a nodal representation. This is because an (undirected) edge is uniquely specified by the two nodes which it connects, meaning that a purely local quantity on edges is necessarily nonlocal on nodes." }, { "figure_ref": [], "heading": "A.4 Higher order attention", "publication_ref": [], "table_ref": [], "text": "As mentioned in the body, when f = exp and ã(q i , q j ) = (1/d) ⟨W K q i , W Q q j ⟩, defining the learnable inner products A 0 = (a 0,ii ) , A 1 = (a 1,ij ) as\na 0,ii = j∈N (i) f (ã (q i , q j )) , a 1,ij = f (ã (q i , q j )) ,\nrecovers scaled dot product attention as A -1 0 A 1 . Remark A.8. Technically, A 1 is an inner product only with respect to a predefined ordering of the edges α = (i, j), since we do not require A 1 be orientation-invariant. On the other hand, it is both unnecessary and distracting to enforce symmetry on A 1 in this context, since any necessary symmetrization will be handled automatically by the differential operator d * 0 .\nSimilarly, other common attention mechanisms are produced by modifying the pre-attention function ã. While A -1 0 A 1 never appears in the brackets of Section 4, letting α = (i, j) denote a global edge with endpoints i, j, it is straightforward to calculate the divergence of an antisymmetric edge feature p at node i,\n(d * 0 p) i = A -1 0 d ⊺ 0 A 1 p i = a -1 0,ii α (d ⊺ 0 ) iα (A 1 p) α = a -1 0,ii α∋i (A 1 p) -α -(A 1 p) α = - j∈N (i) a 1,ji + a i,ij a 0,ii p ij .\nThis shows that b(q i , q j ) = (a 1,ij + a 1,ji ) /a 0,ii appears under the divergence in d * 0 = A -1 0 d ⊺ 0 A 1 , which is the usual graph attention up to a symmetrization in A 1 . Remark A.9. While A 1 is diagonal on global edges α = (i, j), it appears sparse nondiagonal in its nodal representation. Similarly, any diagonal extension A 2 to 2-cliques will appear as a sparse 3-tensor A 2 = (a 2,ijk ) when specified by its nodes. This inspires a straightforward extension of graph attention to higher-order cliques. In particular, denote by K > 0 the highest degree of clique under consideration, and define\nA K-1 = (a K-1,i1i2...i K ) by a K-1,i1i2...i K = f (W (q i1 , q i2 , ..., q i K )) , where W ∈ R ⊗ K n V is a learnable K-tensor. 
Then, for any 0 ≤ k ≤ K -2 define A k = a k,i1i2...i k+1 by a k,i1i2...i k+1 = i K ,...,i K-k-1 a K-1,i1i2...i K .\nThis recovers the matrices A 0 , A 1 from before when K = 2, and otherwise extends the same core idea to higher-order cliques. It's attractive that the attention mechanism captured by d * k remains asymmetric, meaning that the attention of any one node to the others in a k-clique need not equal the attention of the others to that particular node. Remark A.10. A more obvious but less expressive option for higher-order attention is to let\na k,i1i2...i k+1 = a K-1,i1i2...i K i K ,...,i K-k-1 a K-1,i1i2...i K , for any 0 ≤ k ≤ K -2. However, application of the combinatorial codifferential d ⊺ k-1 appearing in d *\nk-1 will necessarily symmetrize this quantity, so that the asymmetry behind the attention mechanism is lost in this formulation.\nTo illustrate how this works more concretely, consider the extension K = 3 to 2-cliques, and let N (i, j) = N (i) ∩ N (j). We have the tensors A 2 = (a 2,ijk ), A 1 = (a 1,ij ), and A 0 = (a 0,i ) defined by\na 2,ijk = f (W (q i , q j , q k )) , a 1,ij = k∈N (i,j) a 2,ijk , a 0,i = j∈N (i) k∈N (i,j) a 2,ijk .\nThis provides a way for (features on) 3-node subgraphs of G to attend to each other, and can be similarly built-in to the differential operator\nd * 1 = A -1 1 d ⊺ 0 A 2 ." }, { "figure_ref": [], "heading": "A.5 Exterior calculus interpretation of GATs", "publication_ref": [ "b11", "b49" ], "table_ref": [], "text": "Let N (i) denote the one-hop neighborhood of node i, and let N (i) = N (i) ∪ {i}. Recall the standard (single-headed) graph attention network (GAT) described in [12], described layer-wise as\nq k+1 i = σ   j∈N (i) a q k i , q k j W k q k j   ,(1)\nwhere σ is an element-wise nonlinearity, W k is a layer-dependent embedding matrix, and a (q i , q j ) denotes the attention node i pays to node j. Traditionally, the attention mechanism is computed through a (q i , q j ) = Softmax j ã (q i , q j ) = e ã(qi,qj ) σ i ,\nwhere the pre-attention coefficients ã (q i , q j ) and nodal weights σ i are defined as\nã (q i , q j ) = LeakyReLU (a ⊺ (W ⊺ q i || W ⊺ q j )) , σ i = j∈N (i)\ne ã(qi,qj ) .\nHowever, the exponentials in the outer Softmax are often replaced with other nonlinear functions, e.g. Squareplus, and the pre-attention coefficients ã appear as variable (but learnable) functions of the nodal features. First, notice that (1) the attention coefficients a (q i , q j ) depend on the node features q and not simply the topology of the graph, and (2) the attention coefficients are not symmetric, reflecting the fact that the attention paid by node i to node j need not equal the attention paid by node j to node i. A direct consequence of this is that GATs are not purely diffusive under any circumstances, since it was shown in Appendix A.1 that the combinatorial divergence d ⊺ 0 will antisymmetrize the edge features it acts on. In particular, it is clear that the product a (q i , q j ) (q i -q j ) is asymmetric in i, j under the standard attention mechanism, since even the pre-attention coefficients ã (q i , q j ) are not symmetric, meaning that there will be two distinct terms after application of the divergence. More precisely, there is the following subtle result.\nProposition A.4. Let q ∈ R |V|×n V denote an array of nodal features. The expression j∈N (i) a (q i , q j ) (q i -q j ) , where a = A -1 0 A 1 is not the action of a Laplace operator whenever A 1 is not symmetric.\nProof. 
From Appendix A.3, we know that any Laplace operator on nodes is expressible as\nd * 0 d 0 = A -1 0 d ⊺ 0 A 1 d 0 for some positive definite A 0 , A 1 .\nSo, we compute the action of the Laplacian at node i,\n(∆ 0 q) i = (d * 0 d 0 q) i = A -1 0 d ⊺ 0 A 1 d 0 q i = a -1 0,ii α (d ⊺ 0 ) iα (A 1 d 0 q) α = a -1 0,ii α∋i (A 1 d 0 q) -α -(A 1 d 0 q) α = - 1 2 j∈N (i) a 1,ji + a i,ij a 0,ii (q j -q i ) , = j∈N (i)\na (q i , q j ) (q j -q i ) , which shows that a (q i , q j ) = (1/2) (a 1,ji + a 1,ij ) /a 0,ii must have symmetric numerator.\nWhile this result shows that GATs (and their derivatives, e.g., GRAND) are not purely diffusive, it also shows that it is possible to get close to GAT (at least syntactically) with a learnable diffusion mechanism. In fact, setting σ = W k = I in (1) yields precisely a single-step diffusion equation provided that a q k i , q k j is right-stochastic (i.e., j a (q i , q j ) 1 j = 1 i ) and built as dictated by Proposition A.4. Theorem A.11. The GAT layer (1) is a single-step diffusion equation provided that σ = W k = I, and the attention mechanism a (q i , q j ) = (1/2) (a 1,ji + a 1,ij ) /a 0,ii is right-stochastic.\nProof. First, notice that the Laplacian with respect to an edge set which contains self-loops is computable via\n(∆ 0 q) i = - j∈N (i) a (q i , q j ) (q j -q i ) = q i - j∈N (i) a (q i , q j ) q j .\nTherefore, taking a single step of heat flow q = -∆ 0 q with forward Euler discretization and time step τ = 1 is equivalent to\nq k+1 i = q k i -τ ∆ 0 q k i = j∈N (i) a q k i , q k j q k j ,\nwhich is just a modified and non-activated GAT layer with W k = I and attention mechanism a.\nRemark A.12. Since Softmax and its variants are right-stochastic, Theorem A.11 is what establishes equivalence between the non-divergence equation\nqi = j∈N (i)\na (q i , q j ) (q j -q i ) ,\nand the standard GAT layer seen in, e.g., [49], when a(q i , q j ) is the usual attention mechanism.\nRemark A.13. In the literature, there is an important (and often overlooked) distinction between the positive graph/Hodge Laplacian ∆ 0 and the negative \"geometer's Laplacian\" ∆ which is worth noting here. Particularly, we have from integration-by-parts that the gradient ∇ = d 0 is L 2 -adjoint to minus the divergence -∇• = d ⊺ 0 , so that the two Laplace operators ∆ 0 = d ⊺ 0 d 0 and ∆ = ∇ • ∇ differ by a sign. This is why the same ℓ 2 -gradient flow of Dirichlet energy can be equivalently expressed as q = ∆q = -∆ 0 q, but not by, e.g., q = ∆ 0 q. This shows that, while they are not equivalent, there is a close relationship between attention and diffusion mechanisms on graphs. The closest analogue to the standard attention expressible in this format is perhaps the choice a 1,ij = f (ã (q i , q j )), a 0,ii = j∈ N (i) a 1,ij , discussed in Section 4 and Appendix A.4, where f is any scalar-valued positive function. For example, when f (x) = e x , it follows that\n(∆ 0 q) i = - 1 2 j∈N (i)\ne ã(qi,qj ) + e ã(qj ,qi)\nσ i (q j -q i ) = - 1 2 j∈N (i)\na (q i , q j ) + e ã(qj ,qi) σ i (q j -q i ) , which leads to the standard GAT propagation mechanism plus an extra term arising from the fact that the attention a is not symmetric.\nRemark A.14. Practically, GATs and their variants typically make use of multi-head attention, defined in terms of an attention mechanism which is averaged over some number |h| of independent \"heads\",\na (q i , q j ) = 1 |h| h a h (q i , q j ) ,\nwhich are distinct only in their learnable parameters. 
While the results of this section were presented in terms of |h| = 1, the reader can check that multiple attention heads can be used in this framework provided it is the pre-attention ã that is averaged instead." }, { "figure_ref": [], "heading": "A.6 Bracket derivations and properties", "publication_ref": [ "b15", "b5" ], "table_ref": [], "text": "Here the architectures in the body are derived in greater detail. First, it will be shown that L * = -L, G * = G, and M * = M, as required for structure-preservation.\nProposition A.5. For L, G, M defined in Section 4, we have L * = -L, G * = G, and M * = M.\nProof. First, denoting A = diag (A 0 , A 1 ), it was shown in section A.3 that B * = A -1 B ⊺ A for any linear operator B of appropriate dimensions. So, applying this to L, it follows that\nL * = A -1 0 0 0 A -1 1 0 -d * 0 d 0 0 ⊺ A 0 0 0 A 1 = 0 A -1 0 d ⊺ 0 A 1 -A -1 1 (d * 0 ) ⊺ A 0 0 = 0 d * 0 -d 0 0 = -L.\nSimilarly, it follows that\nG * = A -1 0 0 0 A -1 1 d * 0 d 0 0 0 d * 1 d 1 ⊺ A 0 0 0 A 1 = A -1 0 d ⊺ 0 (d * 0 ) ⊺ A 0 0 0 A -1 1 d ⊺ 1 (d * 1 ) ⊺ A 1 = d * 0 d 0 0 0 d * 1 d 1 = G, M * = A -1 0 0 0 A -1 1 0 0 0 A 1 d * 1 d 1 A 1 ⊺ A 0 0 0 A 1 = 0 0 0 d ⊺ 1 (d * 1 ) ⊺ A 2 1 = 0 0 0 A 1 d * 1 d 1 A 1 = M,\nwhere the second-to-last equality used that A 1 A -1 1 = I. Remark A.15. Note that the choice of zero blocks in L, G is sufficient but not necessary for these adjointness relationships to hold. For example, one could alternatively choose the diagonal blocks of L to contain terms like B -B * for an appropriate message-passing network B.\nNext, we compute the gradients of energy and entropy with respect to (•, •). Proposition A.6. The A-gradient of the energy\nE(q, p) = 1 2 |q| 2 + |p| 2 = 1 2 i∈V |q i | 2 + 1 2 α∈E |p α | 2 , satisfies ∇E(q, p) = A -1 0 0 0 A -1 1 q p = A -1 0 q A -1 1 p\n.\nMoreover, given the energy and entropy defined as\nE(q, p) = f E (s(q)) + g E (s (d 0 d ⊺ 0 p)) , S(q, p) = g S (s (d ⊺ 1 d 1 p))\n, where f E : R n V → R acts on node features, g E , g S : R n E → R act on edge features, and s denotes sum aggregation over nodes or edges, the A-gradients are\n∇E(q, p) = A -1 0 1 ⊗ ∇f E (s(q)) A -1 1 d 0 d ⊺ 0 1 ⊗ ∇g E (s (d 0 d ⊺ 0 p)) , ∇S(q, p) = 0 A -1 1 d ⊺ 1 d 1 1 ⊗ ∇g S (s (d ⊺ 1 d 1 p)) ,\nProof. Since the theory of A-gradients in section A.3 establishes that ∇E = A -1 δE, it is only necessary to compute the L 2 -gradients. First, letting x = (q p) ⊺ , it follows for the first definition of energy that\ndE(x) = i∈V ⟨q i , dq i ⟩ + α∈E ⟨p α , dp α ⟩ = ⟨q, dq⟩ + ⟨p, dp⟩ = ⟨x, dx⟩ ,\nshowing that δE(q, p) = (q p) ⊺ , as desired. Moving to the metriplectic definitions, since each term of E, S has the same functional form, it suffices to compute the gradient of f (s (Bq)) for some function f : R n f → R and matrix B : R |V| → R |V| . To that end, adopting the Einstein summation convention where repeated indices appearing up-and-down in an expression are implicitly summed, if 1 ≤ a, b ≤ n f and 1 ≤ i, j ≤ |V|, we have d (s(q)) = i∈|V| dq i = i∈|V| δ j i dq j = 1 j dq a j e a = (1 ⊗ I) : dq = ∇ (s(q)) : dq, implying that ∇s(q) = 1 ⊗ I. Continuing, it follows that\nd (f • s • Bq) = f ′ (s (Bq)) a s ′ (Bq) a i B ij dq a j = f ′ (s (Bq)) a e a (B ⊺ ) ij 1 j dq a i = ⟨∇f (s (Bq)) ⊗ B ⊺ 1, dq⟩ = ⟨∇ (f • s • Bq) , dq⟩ , showing that ∇ (f • s • B)\ndecomposes into an outer product across modalities. 
Applying this formula to the each term of E, S then yields the L 2 -gradients,\nδE(q, p) = 1 ⊗ ∇f E (s(q)) d 0 d ⊺ 0 1 ⊗ ∇g E (s (d 0 d ⊺ 0 p)) , δS(q, p) = 0 d ⊺ 1 d 1 1 ⊗ ∇g S (s (d ⊺ 1 d 1 p))\n, from which the desired A-gradients follow directly.\nFinally, we can show that the degeneracy conditions for metriplectic structure are satisfied by the network in Section 4.\nTheorem A. 16. The degeneracy conditions L∇S = M∇E = 0 are satisfied by the metriplectic bracket in Section A.6.\nProof. This is a direct calculation using Theorem 3.1 and Proposition A. 6. In particular, it follows that\nL∇S = 0 -d * 0 d 0 0 0 A -1 1 d ⊺ 1 d 1 1 ⊗ ∇g S (s (d ⊺ 1 d 1 p)) = -A -1 0 (d 1 d 0 ) ⊺ d 1 1 ⊗ ∇g S (s (d ⊺ 1 d 1 p)) 0 = 0 0 , M∇E = 0 0 0 A 1 d * 1 d 1 A 1 A -1 0 1 ⊗ ∇f E (s(q)) A -1 1 d 0 d ⊺ 0 1 ⊗ ∇g E (s (d 0 d ⊺ 0 p)) = 0 A 1 d * 1 (d 1 d 0 )d ⊺ 0 1 ⊗ ∇g E (s (d 0 d ⊺ 0 p)) = 0 0 ,\nsince d 1 d 0 = 0 as a consequence of the graph calculus. These calculations establish the validity of the energy conservation and entropy generation properties seen previously in the manuscript body.\nRemark A.17. Clearly, this is not the only possible metriplectic formulation for GNNs. On the other hand, this choice is in some sense maximally general with respect to the chosen operators L, G, since only constants are in the kernel of d 0 (hence there is no reason to include a nodal term in S), and only elements in the image of d ⊺ 1 (which do not exist in our setting) are guaranteed to be in the kernel of d ⊺ 0 for any graph. Therefore, M is chosen to be essentially G without the ∆ 0 term, whose kernel is graph-dependent and hence difficult to design." }, { "figure_ref": [], "heading": "B Experimental details and more results", "publication_ref": [ "b72", "b15" ], "table_ref": [], "text": "This Appendix provides details regarding the experiments in Section 5, as well as any additional information necessary for reproducing them. We implement the proposed algorithms with PYTHON and PYTORCH [71] that supports CUDA. The experiments are conducted on systems that are equipped with NVIDIA RTX A100 and V100 GPUs. For NODEs capabilities, we use the TORCHDIFFEQ library [16]." }, { "figure_ref": [], "heading": "B.1 Damped double pendulum", "publication_ref": [ "b73", "b15", "b74" ], "table_ref": [], "text": "The governing equations for the damped double pendulum can be written in terms of four coupled first-order ODEs for the angles that the two pendula make with the vertical axis θ 1 , θ 2 and their associated angular momenta ω 1 , ω 2 (see [72]), θi = ω i ,\n1 ≤ i ≤ 2,(2)\nω1 = m 2 l 1 ω 2 1 sin (2∆θ) + 2m 2 l 2 ω 2 2 sin (∆θ) + 2gm 2 cos θ 2 sin ∆θ + 2gm 1 sin θ 1 + γ 1 -2l 1 m 1 + m 2 sin 2 ∆θ ,(3)\nω2 = m 2 l 2 ω 2 2 sin (2∆θ) + 2 (m 1 + m 2 ) l 1 ω 2 1 sin ∆θ + 2g (m 1 + m 2 ) cos θ 1 sin ∆θ + γ 2 2l 2 m 1 + m 2 sin 2 ∆θ ,(4)\nwhere m 1 , m 2 , l 1 , l 2 are the masses resp. lengths of the pendula, ∆θ = θ 1 -θ 2 is the (signed) difference in vertical angle, g is the acceleration due to gravity, and\nγ 1 = 2k 1 θ1 -2k 2 θ2 cos ∆θ. γ 2 = 2k 1 θ1 cos ∆θ - 2 (m 1 + m 2 ) m 2 k 2 θ2 , for damping constants k 1 , k 2 .\nDataset. A trajectory of the damped double pendulum by solving an initial value problem associated with the ODE 2. The initial condition used is (1.0, π/2, 0.0, 0.0), and the parameters are m 1 = m 2 = 1, g = 1, l 1 = 1, l 2 = 0.9, k 1 = k 2 = 0.1. For time integrator, we use the TorchDiffeq library [16] with Dormand-Prince 5 (DOPRI5) as the numerical solver. 
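A sketch of this data-generation step is given below, transcribing equations (2)-(4) with the parameter values from the preceding paragraph. The state ordering $(\theta_1, \theta_2, \omega_1, \omega_2)$ assumed for the stated initial condition, and the grouping of the denominators, are assumptions rather than a verbatim copy of the original script.

```python
import math
import torch
from torchdiffeq import odeint

m1 = m2 = 1.0; g = 1.0; l1 = 1.0; l2 = 0.9; k1 = k2 = 0.1

def pendulum(t, x):
    # state assumed ordered as (theta1, theta2, omega1, omega2); cf. equations (2)-(4)
    th1, th2, w1, w2 = x
    dth = th1 - th2
    g1 = 2 * k1 * w1 - 2 * k2 * w2 * torch.cos(dth)                 # damping term gamma_1
    g2 = 2 * k1 * w1 * torch.cos(dth) - 2 * (m1 + m2) / m2 * k2 * w2  # damping term gamma_2
    den = m1 + m2 * torch.sin(dth) ** 2
    dw1 = (m2 * l1 * w1**2 * torch.sin(2 * dth) + 2 * m2 * l2 * w2**2 * torch.sin(dth)
           + 2 * g * m2 * torch.cos(th2) * torch.sin(dth) + 2 * g * m1 * torch.sin(th1)
           + g1) / (-2 * l1 * den)
    dw2 = (m2 * l2 * w2**2 * torch.sin(2 * dth) + 2 * (m1 + m2) * l1 * w1**2 * torch.sin(dth)
           + 2 * g * (m1 + m2) * torch.cos(th1) * torch.sin(dth) + g2) / (2 * l2 * den)
    return torch.stack([w1, w2, dw1, dw2])

x0 = torch.tensor([1.0, math.pi / 2, 0.0, 0.0])
t = torch.linspace(0.0, 50.0, 500)
traj = odeint(pendulum, x0, t, method="dopri5")   # (500, 4) snapshots of (theta1, theta2, omega1, omega2)
```

The (x, y) observations actually used for training are then read off from `traj` via the relationships given next.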
The total simulation time is 50 (long enough for significant dissipation to occur), and solution snapshots are collected at 500 evenly-spaced temporal indices.\nTo simulate the practical case where only positional data for the system is available, the double pendulum solution is integrated to time T = 50 (long enough for significant dissipation to occur) and post-processed once the angles and angular momenta are determined from the equations above, yielding the (x, y)-coordinates of each pendulum mass at intervals of 0.1s. This is accomplished using the relationships\nx 1 = l 1 sin θ 1 y 1 = -l 1 cos θ 1 x 2 = x 1 + l 2 sin θ 2 = l 1 sin θ 1 + l 2 sin θ 2 y 2 = y 1 -l 2 cos θ 2 = -l 1 cos θ 1 -l 2 cos θ 2 .\nThe double pendulum is then treated as a fully connected three-node graph with positional coordinates q i = (x i , y i ) as nodal features, and relative velocities p α = (d 0 q) α as edge features. Note that the positional coordinates (x 0 , y 0 ) = (0, 0) of the anchor node are held constant during training. To allow for the necessary flexibility of coordinate changes, each architecture from Section 4 makes use of a message-passing feature encoder before time integration, acting on node features and edge features separately, with corresponding decoder returning the original features after time integration.\nTo elicit a fair comparison, both the NODE and NODE+AE architectures are chosen to contain comparable numbers of parameters to the bracket architectures (∼ 30k), and all networks are trained for 100,000 epochs. For each network, the configuration of weights producing the lowest overall error during training is used for prediction.\nHyperparameters. The networks are trained to reconstruct the node/edge features in mean absolute error (MAE) using the Adam optimizer [73]. The NODEs and metriplectic bracket use an initial learning rate of 10 -4 , while the other models use an initial learning rate of 10 -3 . The width of the hidden layers in the message passing encoder/decoder is 64, and the number of hidden features for nodes/edges is 32. The time integrator used is simple forward Euler.\nNetwork architectures. The message passing encoders/decoders are 3-layer MLPs mapping, in the node case, nodal features and their graph derivatives, and in the edge case, edge features and their graph coderivatives, to a hidden representation. For the bracket architectures, the attention mechanism used in the learnable coderivatives is scaled dot product. The metriplectic network uses 2-layer MLPs f E , g E , g S with scalar output and hidden width 64. For the basic NODE, node and edge features are concatenated, flattened, and passed through a 4-layer fully connected network of width 128 in each hidden layer, before being reshaped at the end. The NODE+AE architecture uses a 3-layer fully connected network which operates on the concatenated and flattened latent embedding of size 32 * 6 = 192, with constant width throughout all layers." }, { "figure_ref": [], "heading": "B.2 Mujoco", "publication_ref": [ "b32", "b75" ], "table_ref": [], "text": "We represent an object as a fully-connected graph, where a node corresponds to a body part of the object and, thus, the nodal feature corresponds to a position of a body part or joint. To learn the dynamics of an object, we again follow the encoder-decoder-type architecture considered in the double-pendulum experiments. 
First we employ a node-wise linear layer to embed the nodal feature into node-wise hidden representations (i.e., the nodal feature q i corresponds to a position of a body part or an angle of a joint.). As an alternative encoding scheme for the nodal feature, in addition to the position or the angle, nodal velocities are considered as additional nodal features, i.e., q i = (q i , v i ).\nThe experimental results of the alternative scheme is represented in the following section B.2.2.\nThe proposed dynamics models also require edge features (e.g., edge velocity), which are not presented in the dataset. Thus, to extract a hidden representation for an edge, we employ a linear layer, which takes velocities of the source and destination nodes of the edge as an input and outputs edge-wise hidden representations, i.e., the edge feature correspond to a pair of nodal velocities p α = (v src(α) , v dst(α) ), where v src(α) and v dst(α) denote velocities of the source and destination nodes connected to the edge.\nThe MuJoCo trajectories are generated in the presence of an actor applying controls. To handle the changes in dynamics due to the control input, we introduce an additive forcing term, parameterized by an MLP, to the dynamics models, which is a similar approach considered in dissipative SymODEN [32]. In dissipative SymODEN, the forcing term is designed to affect only the change of the generalized momenta (also known as the port-Hamiltonian dynamics [74]). As opposed to this approach, our proposed forcing term affects the evolution of both the generalized coordinates that are defined in the latent space. Once the latent states are computed at specified time indices, a node-wise linear decoder is applied to reconstruct the position of body parts of the object. Then the models are trained based on the data matching loss measured in mean-square errors between the reconstructed and the ground-truth positions." }, { "figure_ref": [], "heading": "B.2.1 Experiment details", "publication_ref": [ "b23", "b23", "b76", "b23" ], "table_ref": [], "text": "We largely follow the experimental settings considered in [23].\nDataset. As elaborated in [23], the standard Open AI Gym [75] environments preprocess observations in ad-hoc ways, e.g., Hopper clips the velocity observations to [-10, 10] d . Thus, the authors in [23] modified the environments to simply return the position and the velocity (q, v) as the observation and we use the same dataset, which is made publicly available by the authors. " }, { "figure_ref": [], "heading": "Hyperparameters.", "publication_ref": [ "b74" ], "table_ref": [], "text": "For training, we use the Adam optimizer [73] with the initial learning rate 5e-3 and weight decay 1e-4. With the batch size of 200 trajectories, we train the models for 256 epochs. We also employ a cosine annealing learning rate scheduler with the minimum learning rate 1e-6. For time integrator, we use the Torchdiffeq library with the Euler method.\nNetwork architectures. The encoder and decoder networks are parameterized as a linear layer and the dimension of the hidden representations is set to 80. For attention, the scaled dot-product attention is used with 8 heads and the embedding dimension is set to 16. The MLP for handling the forcing term consists of three fully-connected layers (i.e., input, output layers and one hidden layer with 128 neurons). The MLP used for parameterizing the \"black-box\" NODEs also consists of three fully-connected layers with 128 neurons in each layer." 
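To make the role of the additive forcing term concrete, the following sketch shows one way the control input can enter the latent bracket dynamics; `bracket`, `control`, the activation, and the layer sizes are illustrative assumptions rather than the exact code used for Table 3.

```python
import torch

class ForcedBracketDynamics(torch.nn.Module):
    """Latent bracket vector field plus an additive MLP forcing term driven by the controls.

    `bracket(x)` is assumed to return the chosen bracket vector field (e.g. L(x) grad E(x));
    `control(t)` is assumed to return the recorded control input at time t.
    """
    def __init__(self, bracket, control, latent_dim, control_dim, width=128):
        super().__init__()
        self.bracket, self.control = bracket, control
        self.force = torch.nn.Sequential(            # three fully-connected layers, cf. above
            torch.nn.Linear(latent_dim + control_dim, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, latent_dim),
        )

    def forward(self, t, x):                         # signature expected by torchdiffeq's odeint
        u = self.control(t)
        # the forcing acts on the full latent state (coordinates and momenta alike),
        # unlike dissipative SymODEN where it enters only the momentum equation
        return self.bracket(x) + self.force(torch.cat([x, u], dim=-1))
```

Since the forcing is additive, the bracket term and the control term can be reasoned about separately.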
}, { "figure_ref": [ "fig_9" ], "heading": "B.2.2 Additional results", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Figure 6 reports the loss trajectories for all considered dynamics models. For the given number of maximum epochs (i.e., 256), the Hamiltonian and double bracket models tend to reach much lower training losses (an order of magnitude smaller) errors than the NODE and Gradient models do. The metriplectic model produces smaller training losses compared to the NODE and gradient models after a certain number of epochs (e.g., 100 epochs for Hopper). In the next set of experiments, we provide not only positions/angles of body parts as nodal features, but velocities of the body parts as nodal features (i.e., q i = (q i , v i )). Table 6 reports the relative errors measured in L2-norm; again, the Hamiltonian, double bracket, and metriplectic outperform other dynamics models. In particular, the metriplectic bracket produces the most accurate predictions in the Hopper and Swimmer environments. " }, { "figure_ref": [], "heading": "B.3 Node classification", "publication_ref": [ "b62" ], "table_ref": [], "text": "To facilitate comparison with previous work, we follow the experimental methodology of [61]. " }, { "figure_ref": [], "heading": "B.3.1 Experiment details", "publication_ref": [ "b59", "b60", "b61", "b62", "b63", "b77", "b49" ], "table_ref": [ "tab_3", "tab_4" ], "text": "Datasets. We consider the three well-known citation networks, Cora [58], Citeseer [59], and Pubmed [60]; the proposed models are tested on the datasets with the original fixed Planetoid traing/test splits, as well as random train/test splits. In addition, we also consider the coauthor graph, CoauthorCS [61] and the Amazon co-purchasing graphs, Computer and Photo [62]. Hyperparameters The bracket architectures employed for this task are identical to those in Section 4 except for that the right-hand side of the Hamiltonian, gradient, and double bracket networks is scaled by a learnable parameter Sigmoid (α) > 0, and the matrix A 2 = I is used as the inner product on 3-cliques. It is easy to verify that this does not affect the structural properties or conservation character of the networks. Nodal features q i are specified by the datasets, and edge features p α = (d 0 q) α are taken as the combinatorial gradient of the nodal features. In order to determine good hyperparameter configurations for each bracket, a Bayesian search is conducted using Weights and Biases [76] for each bracket and each dataset using a random 80/10/10 train/valid/test split with random seed 123. The number of runs per bracket was 500 for CORA, CiteSeer, and PubMed, and 250 for CoauthorCS, Computer, and Photo. The hyperparameter configurations leading to the best validation accuracy are used when carrying out the experiments in Table 4 and Table 5.\nSpecifically, the hyperparameters that are optimized are as follows: initial learning rate (from 0.0005 to 0.05), number of training epochs (from 25 to 150), method of integration (rk4 or dopri5), integration time (from 1 to 5), latent dimension (from 10 to 150 in increments of 10), pre-attention mechanism ã (see below), positive function f (either exp or Squareplus), number of pre-attention heads (from 1 to 15, c.f. Remark A.14), attention embedding dimension (from 1× to 15× the number of heads), weight decay rate (from 0 to 0.05), dropout/input dropout rates (from 0 to 0.8), and the MLP activation function for the metriplectic bracket (either relu, tanh, or squareplus). 
The pre-attention is chosen from one of four choices, defined as follows:\nã(q_i, q_j) = (W_K q_i)^⊺ W_Q q_j / d (scaled dot product),\nã(q_i, q_j) = (W_K q_i)^⊺ W_Q q_j / (|W_K q_i| |W_Q q_j|) (cosine similarity),\nã(q_i, q_j) = (W_K q_i − \overline{W_K q_i})^⊺ (W_Q q_j − \overline{W_Q q_j}) / (|W_K q_i − \overline{W_K q_i}| |W_Q q_j − \overline{W_Q q_j}|) (Pearson correlation),\nã(q_i, q_j) = (σ_u σ_x)^2 exp(−|W_K u_i − W_Q u_j|^2 / (2ℓ_u^2)) exp(−|W_K x_i − W_Q x_j|^2 / (2ℓ_x^2)) (exponential kernel),\nwhere \overline{·} denotes the mean of the entries of its argument.\nNetwork architectures. The architectures used for this experiment follow that of GRAND [49], consisting of the learnable affine encoder/decoder networks ϕ, ψ and learnable bracket-based dynamics in the latent space. However, recall that the bracket-based dynamics require edge features, which are manufactured as p α = (d 0 q) α . In summary, the inference procedure is as follows:\nq(0) = ϕ(q) (nodal feature encoding),\np(0) = d_0 q(0) (edge feature manufacturing),\n(q(T), p(T)) = (q(0), p(0)) + ∫_0^T (q̇, ṗ) dt (latent dynamics),\nq = ψ(q(T)) (nodal feature decoding),\ny = c(q) (class prediction).\nTraining is accomplished using the standard cross entropy\nH(t, y) = Σ_{i=1}^{|V|} t_i^⊺ log y_i,\nwhere t_i is the one-hot truth vector corresponding to the i-th node. In the case of the metriplectic network, the networks f_E, g_E, g_S are 2-layer MLPs with hidden dimension equal to the latent feature dimension and output dimension 1."
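For concreteness, the four pre-attention scores above could be implemented as follows. This is a hedged sketch: the inputs are assumed to be the projected key/query vectors W_K q_i and W_Q q_j already gathered per edge, and multi-head handling is omitted.

```python
# Sketch of the four pre-attention choices; inputs k = W_K q_i, q = W_Q q_j per edge (shape: [E, d]).
import torch

def scaled_dot(k, q, d):
    return (k * q).sum(-1) / d

def cosine(k, q, eps=1e-8):
    return (k * q).sum(-1) / (k.norm(dim=-1) * q.norm(dim=-1) + eps)

def pearson(k, q, eps=1e-8):
    k = k - k.mean(dim=-1, keepdim=True)   # subtract per-vector means (the overbars above)
    q = q - q.mean(dim=-1, keepdim=True)
    return (k * q).sum(-1) / (k.norm(dim=-1) * q.norm(dim=-1) + eps)

def exp_kernel(ku, qu, kx, qx, sigma_u, sigma_x, ell_u, ell_x):
    # separate positional (u) and feature (x) parts, each with its own length scale
    du = (ku - qu).pow(2).sum(-1)
    dx = (kx - qx).pow(2).sum(-1)
    return (sigma_u * sigma_x) ** 2 * torch.exp(-du / (2 * ell_u ** 2)) \
                                    * torch.exp(-dx / (2 * ell_x ** 2))
```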
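The full inference procedure can likewise be sketched end-to-end. The snippet below is schematic and rests on several assumptions (a dense incidence matrix d0 supplied by the caller, an abstract bracket_rhs callable for the latent dynamics, and a linear classifier c); it is not the authors' released implementation.

```python
# Schematic GRAND-style pipeline: encode -> manufacture edge features -> integrate -> decode -> classify.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchdiffeq import odeint

class BracketNodeClassifier(nn.Module):
    def __init__(self, in_dim, latent, n_classes, d0, bracket_rhs, T=4.0, method="dopri5"):
        super().__init__()
        self.phi = nn.Linear(in_dim, latent)      # affine encoder phi
        self.psi = nn.Linear(latent, in_dim)      # affine decoder psi
        self.cls = nn.Linear(in_dim, n_classes)   # classifier c
        self.register_buffer("d0", d0)            # |E| x |V| incidence matrix (dense here)
        self.bracket_rhs = bracket_rhs            # learned (dq, dp) from the chosen bracket
        self.T, self.method = T, method

    def forward(self, q):
        q0 = self.phi(q)                          # nodal feature encoding
        p0 = self.d0 @ q0                         # edge feature manufacturing, p(0) = d0 q(0)
        t = torch.tensor([0.0, self.T], device=q.device)
        qT, _ = odeint(lambda _t, s: self.bracket_rhs(*s), (q0, p0), t, method=self.method)
        return self.cls(self.psi(qT[-1]))         # per-node class logits

# Cross-entropy training over the labelled nodes:
# loss = F.cross_entropy(model(features)[train_mask], labels[train_mask])
```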
}, { "figure_ref": [], "heading": "B.3.2 Additional depth study", "publication_ref": [], "table_ref": [], "text": "Here we report the results of a depth study on Cora with the Planetoid train/val/test split. Table 9 shows the train/test accuracy of the different bracket architectures under two different increased-depth conditions, labeled Task 1 and Task 2, respectively. Task 1 refers to fixing the integration step-size at ∆t = 1 and integrating to a variable final time T, while Task 2 fixes the final time T to the value identified by the hyperparameter search (see Table 8) and instead varies the step-size ∆t. Notice that both tasks involve repeatedly composing the trained network and hence simulate increasing depth, so that any negative effects of network construction such as oversmoothing, oversquashing, or vanishing/exploding gradients should appear in both cases. For more information, Table 10 provides a runtime comparison corresponding to the depth studies in Table 9.\nObserve that every bracket-based architecture exhibits very stable performance in Task 2, where the final time is held fixed while the depth is increased. This suggests that our proposed networks are dynamically stable and effectively mitigate negative effects such as oversmoothing that are brought on by repeated composition and often seen in more standard GNNs. Interestingly, despite success on Task 2, only the gradient and metriplectic architectures perfectly maintain or improve their performance during the more adversarial Task 1, where the final time is increased with a fixed step-size. This suggests that, without strong diffusion, the advection experienced during conservative dynamics has the potential to radically change label classification over time, as information is moved through the feature domain in a loss-less fashion.\nRemark B.1. It is interesting that the architecture most known for oversmoothing (i.e., gradient) exhibits the most improved classification performance with increasing depth on Task 1. This is perhaps due to the fact that the gradient system decouples over nodes and edges, while the others do not, meaning that the gradient network does not have the added challenge of learning a useful association between the manufactured edge feature information and the nodal labels. It remains to be seen if purely node-based bracket dynamics exhibit the same characteristics as the node-edge formulations presented here." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Github repository https://github.com/natrask/BracketGraphs." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "* K. Lee acknowledges the support from the U.S. National Science Foundation under grant CNS2210137. † Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. This article has been co-authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan https://www.energy.gov/downloads/doe-public-access-plan. The work of N. Trask and A. Gruber is supported by the U.S. Department of Energy, Office of Advanced Computing Research under the \"Scalable and Efficient Algorithms -Causal Reasoning, Operators and Graphs\" (SEA-CROGS) project, the DoE Early Career Research Program, and the John von Neumann fellowship at Sandia." }, { "figure_ref": [], "heading": "CORA networks", "publication_ref": [], "table_ref": [], "text": "" } ]
2023-12-21
[ { "authors": "Marco Gori; Gabriele Monfardini; Franco Scarselli", "journal": "IEEE", "ref_id": "b0", "title": "A new model for learning in graph domains", "year": "2005" }, { "authors": "Jie Zhou; Ganqu Cui; Shengding Hu; Zhengyan Zhang; Cheng Yang; Zhiyuan Liu; Lifeng Wang; Changcheng Li; Maosong Sun", "journal": "AI open", "ref_id": "b1", "title": "Graph neural networks: A review of methods and applications", "year": "2020" }, { "authors": "Will Hamilton; Zhitao Ying; Jure Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Inductive representation learning on large graphs", "year": "2017" }, { "authors": "Rex William L Hamilton; Jure Ying; Leskovec", "journal": "", "ref_id": "b3", "title": "Representation learning on graphs: Methods and applications", "year": "2017" }, { "authors": "Deli Chen; Yankai Lin; Wei Li; Peng Li; Jie Zhou; Xu Sun", "journal": "", "ref_id": "b4", "title": "Measuring and relieving the over-smoothing problem for graph neural networks from the topological view", "year": "2020" }, { "authors": "Joan Michael M Bronstein; Taco Bruna; Petar Cohen; Veličković", "journal": "", "ref_id": "b5", "title": "Geometric deep learning: Grids, groups, graphs, geodesics, and gauges", "year": "2021" }, { "authors": "Kaixiong Zhou; Xiao Huang; Yuening Li; Daochen Zha; Rui Chen; Xia Hu", "journal": "Advances in neural information processing systems", "ref_id": "b6", "title": "Towards deeper graph neural networks with differentiable group normalization", "year": "2020" }, { "authors": "Chen Cai; Yusu Wang", "journal": "", "ref_id": "b7", "title": "A note on over-smoothing for graph neural networks", "year": "2020" }, { "authors": "Yifei Wang; Yisen Wang; Jiansheng Yang; Zhouchen Lin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b8", "title": "Dissecting the diffusion process in linear graph convolutional networks", "year": "2021" }, { "authors": "Francesco Di; Giovanni ; James Rowbottom; Benjamin P Chamberlain; Thomas Markovich; Michael M Bronstein", "journal": "", "ref_id": "b9", "title": "Understanding convolution on graphs via energies", "year": "2023" }, { "authors": "Nathaniel Trask; Andy Huang; Xiaozhe Hu", "journal": "", "ref_id": "b10", "title": "Enforcing exact physics in scientific machine learning: a data-driven exterior calculus on graphs", "year": "2020" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Lio; Yoshua Bengio", "journal": "", "ref_id": "b11", "title": "Graph attention networks", "year": "2017" }, { "authors": "Khemraj Shukla; Mengjia Xu; Nathaniel Trask; George E Karniadakis", "journal": "Data-Centric Engineering", "ref_id": "b12", "title": "Scalable algorithms for physics-informed neural and graph networks", "year": "2022" }, { "authors": "Christopher Rackauckas; Yingbo Ma; Julius Martensen; Collin Warner; Kirill Zubov; Rohit Supekar; Dominic Skinner; Ali Ramadhan; Alan Edelman", "journal": "", "ref_id": "b13", "title": "Universal differential equations for scientific machine learning", "year": "2020" }, { "authors": "Joshua L Steven L Brunton; Nathan Proctor; Kutz", "journal": "Proceedings of the national academy of sciences", "ref_id": "b14", "title": "Discovering governing equations from data by sparse identification of nonlinear dynamical systems", "year": "2016" }, { "authors": "Yulia Ricky Tq Chen; Jesse Rubanova; David Bettencourt; Duvenaud", "journal": "", "ref_id": "b15", "title": "Neural ordinary differential equations", 
"year": "2018" }, { "authors": "Michael Poli; Stefano Massaroli; Atsushi Yamashita; Hajime Asama; Jinkyoo Park; Stefano Ermon", "journal": "", "ref_id": "b16", "title": "Torchdyn: Implicit models and neural numerical methods in pytorch", "year": "2020" }, { "authors": "Louis-Pascal Xhonneux; Meng Qu; Jian Tang", "journal": "PMLR", "ref_id": "b17", "title": "Continuous graph neural networks", "year": "2020" }, { "authors": "Fangda Gu; Heng Chang; Wenwu Zhu; Somayeh Sojoudi; Laurent El Ghaoui", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b18", "title": "Implicit graph neural networks", "year": "2020" }, { "authors": "Samuel Greydanus; Misko Dzamba; Jason Yosinski", "journal": "", "ref_id": "b19", "title": "Hamiltonian neural networks", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b20", "title": "", "year": "2019" }, { "authors": "Marc Finzi; Ke ; Alexander Wang; Andrew G Wilson", "journal": "Advances in neural information processing systems", "ref_id": "b21", "title": "Simplifying hamiltonian and lagrangian neural networks via explicit constraints", "year": "2020" }, { "authors": "Renyi Chen; Molei Tao", "journal": "PMLR", "ref_id": "b22", "title": "Data-driven prediction of general hamiltonian dynamics via learning exactlysymplectic maps", "year": "2021" }, { "authors": "Nate Gruver; Anton Marc; Finzi; Don Samuel; Andrew Stanton; Wilson Gordon", "journal": "", "ref_id": "b23", "title": "Deconstructing the inductive biases of hamiltonian neural networks", "year": "2020" }, { "authors": "Peter Toth; Danilo J Rezende; Andrew Jaegle; Sébastien Racanière; Aleksandar Botev; Irina Higgins", "journal": "", "ref_id": "b24", "title": "Hamiltonian generative networks", "year": "2019" }, { "authors": "Yaofeng Desmond Zhong; Biswadip Dey; Amit Chakraborty", "journal": "", "ref_id": "b25", "title": "Symplectic ODE-Net: Learning Hamiltonian dynamics with control", "year": "2019" }, { "authors": "Michael Lutter; Christian Ritter; Jan Peters", "journal": "", "ref_id": "b26", "title": "Deep lagrangian networks: Using physics as model prior for deep learning", "year": "2018" }, { "authors": "Miles Cranmer; Sam Greydanus; Stephan Hoyer; Peter Battaglia; David Spergel; Shirley Ho", "journal": "", "ref_id": "b27", "title": "Lagrangian neural networks", "year": "2020" }, { "authors": "Partha Guha", "journal": "Journal of Mathematical Analysis and Applications", "ref_id": "b28", "title": "Metriplectic structure, leibniz dynamics and dissipative systems", "year": "2007" }, { "authors": "Kookjin Lee; Nathaniel Trask; Panos Stinis", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Machine learning structure preserving brackets for forecasting irreversible processes", "year": "2021" }, { "authors": "Kookjin Lee; Nathaniel Trask; Panos Stinis", "journal": "PMLR", "ref_id": "b30", "title": "Structure-preserving sparse identification of nonlinear dynamics for data-driven modeling", "year": "2022" }, { "authors": "Zhen Zhang; Yeonjong Shin; George Em Karniadakis", "journal": "Philosophical Transactions of the Royal Society A", "ref_id": "b31", "title": "Gfinns: Generic formalism informed neural networks for deterministic and stochastic dynamical systems", "year": "2022" }, { "authors": "Yaofeng Desmond Zhong; Biswadip Dey; Amit Chakraborty", "journal": "", "ref_id": "b32", "title": "Dissipative symoden: Encoding hamiltonian dynamics with dissipation and control into deep learning", "year": "2020" }, { "authors": "A 
Shaan; Marios Desai; David Mattheakis; Pavlos Sondak; Stephen J Protopapas; Roberts", "journal": "Physical Review E", "ref_id": "b33", "title": "Porthamiltonian neural networks for learning explicit time-dependent dynamical systems", "year": "2021" }, { "authors": "Miroslav Grmela", "journal": "Journal of Physics Communications", "ref_id": "b34", "title": "Generic guide to the multiscale dynamics and thermodynamics", "year": "2018" }, { "authors": "Maziar Raissi; Paris Perdikaris; George E Karniadakis", "journal": "Journal of Computational physics", "ref_id": "b35", "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations", "year": "2019" }, { "authors": "Maziar Raissi; George Em Karniadakis", "journal": "Journal of Computational Physics", "ref_id": "b36", "title": "Hidden physics models: Machine learning of nonlinear partial differential equations", "year": "2018" }, { "authors": "Filippo Masi; Ioannis Stefanou", "journal": "Computer Methods in Applied Mechanics and Engineering", "ref_id": "b37", "title": "Multiscale modeling of inelastic materials with thermodynamics-based artificial neural networks (tann)", "year": "2022" }, { "authors": "G Ravi; Indu Patel; Nathaniel A Manickam; Mitchell A Trask; Myoungkyu Wood; Ignacio Lee; Eric C Tomas; Cyr", "journal": "Journal of Computational Physics", "ref_id": "b38", "title": "Thermodynamically consistent physics-informed neural networks for hyperbolic systems", "year": "2022" }, { "authors": "Quercus Hernández; Alberto Badías; David González; Francisco Chinesta; Elías Cueto", "journal": "Journal of Computational Physics", "ref_id": "b39", "title": "Structurepreserving neural networks", "year": "2021" }, { "authors": "Yibo Yang; Paris Perdikaris", "journal": "Journal of Computational Physics", "ref_id": "b40", "title": "Adversarial uncertainty quantification in physics-informed neural networks", "year": "2019" }, { "authors": "Dongkun Zhang; Lu Lu; Ling Guo; George Em Karniadakis", "journal": "Journal of Computational Physics", "ref_id": "b41", "title": "Quantifying total uncertainty in physicsinformed neural networks for solving forward and inverse stochastic problems", "year": "2019" }, { "authors": "Sifan Wang; Yujun Teng; Paris Perdikaris", "journal": "SIAM Journal on Scientific Computing", "ref_id": "b42", "title": "Understanding and mitigating gradient flow pathologies in physics-informed neural networks", "year": "2021" }, { "authors": "Sifan Wang; Xinling Yu; Paris Perdikaris", "journal": "Journal of Computational Physics", "ref_id": "b43", "title": "When and why pinns fail to train: A neural tangent kernel perspective", "year": "2022" }, { "authors": "Konstantin Rusch; Ben Chamberlain; James Rowbottom; Siddhartha Mishra; Michael Bronstein", "journal": "PMLR", "ref_id": "b44", "title": "Graph-coupled oscillator networks", "year": "2022" }, { "authors": "Jeongwhan Choi; Seoyoung Hong; Noseong Park; Sung-Bae Cho", "journal": "", "ref_id": "b45", "title": "Gread: Graph neural reactiondiffusion equations", "year": "2022" }, { "authors": "Alvaro Sanchez-Gonzalez; Victor Bapst; Kyle Cranmer; Peter Battaglia", "journal": "", "ref_id": "b46", "title": "Hamiltonian graph networks with ode integrators", "year": "2019" }, { "authors": "Suresh Bishnoi; Ravinder Bhattoo; Jayadeva Jayadeva; Sayan Ranu; Nm Anoop Krishnan", "journal": "", "ref_id": "b47", "title": "Enhancing the inductive biases of graph neural ode for modeling physical systems", 
"year": "2022" }, { "authors": "Quercus Hernandez; Alberto Badias; Francisco Chinesta; Elias Cueto", "journal": "IEEE Transactions on Artificial Intelligence", "ref_id": "b48", "title": "Thermodynamics-informed graph neural networks", "year": "2022" }, { "authors": "Ben Chamberlain; James Rowbottom; Maria I Gorinova; Michael Bronstein; Stefan Webb; Emanuele Rossi", "journal": "PMLR", "ref_id": "b49", "title": "Grand: Graph neural diffusion", "year": "2021" }, { "authors": " Pj Morrison", "journal": "Journal of Physics: Conference Series", "ref_id": "b50", "title": "Thoughts on brackets and dissipation: old and new", "year": "2009" }, { "authors": "Oliver Knill", "journal": "", "ref_id": "b51", "title": "The dirac operator of a graph", "year": "2013" }, { "authors": "Xiaoye Jiang; Lek-Heng Lim; Yuan Yao; Yinyu Ye", "journal": "Mathematical Programming", "ref_id": "b52", "title": "Statistical ranking and combinatorial hodge theory", "year": "2011" }, { "authors": "B Pavel; James M Bochev; Hyman", "journal": "Springer", "ref_id": "b53", "title": "Principles of mimetic discretizations of differential operators", "year": "2006" }, { "authors": "Arnold Douglas", "journal": "SIAM", "ref_id": "b54", "title": "Finite element exterior calculus", "year": "2018" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b55", "title": "Attention is all you need", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b56", "title": "", "year": "2017" }, { "authors": "Ignacio Romero", "journal": "International Journal for Numerical Methods in Engineering", "ref_id": "b57", "title": "Thermodynamically consistent time-stepping algorithms for non-linear thermomechanical systems", "year": "2009" }, { "authors": "Emanuel Todorov; Tom Erez; Yuval Tassa", "journal": "IEEE", "ref_id": "b58", "title": "Mujoco: A physics engine for model-based control", "year": "2012" }, { "authors": "Andrew Kachites Mccallum; Kamal Nigam; Jason Rennie; Kristie Seymore", "journal": "Information Retrieval", "ref_id": "b59", "title": "Automating the construction of internet portals with machine learning", "year": "2000" }, { "authors": "Prithviraj Sen; Galileo Namata; Mustafa Bilgic; Lise Getoor; Brian Galligher; Tina Eliassi-Rad", "journal": "AI magazine", "ref_id": "b60", "title": "Collective classification in network data", "year": "2008" }, { "authors": "Galileo Namata; Ben London; Lise Getoor; Bert Huang; Edu", "journal": "", "ref_id": "b61", "title": "Query-driven active surveying for collective classification", "year": "2012" }, { "authors": "Oleksandr Shchur; Maximilian Mumme; Aleksandar Bojchevski; Stephan Günnemann", "journal": "", "ref_id": "b62", "title": "Pitfalls of graph neural network evaluation", "year": "2018" }, { "authors": "Julian Mcauley; Christopher Targett; Qinfeng Shi; Anton Van Den; Hengel", "journal": "", "ref_id": "b63", "title": "Image-based recommendations on styles and substitutes", "year": "2015" }, { "authors": "Michael Poli; Stefano Massaroli; Junyoung Park; Atsushi Yamashita; Hajime Asama; Jinkyoo Park", "journal": "", "ref_id": "b64", "title": "Graph neural ordinary differential equations", "year": "2019" }, { "authors": "Xiaoye Jiang; Lek-Heng Lim; Yuan Yao; Yinyu Ye", "journal": "Mathematical Programming", "ref_id": "b65", "title": "Statistical ranking and combinatorial hodge theory", "year": "2010-11" }, { "authors": "Anthony Bloch; P S Krishnaprasad; Jerrold E 
Marsden; Tudor S Ratiu", "journal": "Communications in Mathematical Physics", "ref_id": "b66", "title": "The euler-poincaré equations and double bracket dissipation", "year": "1996" }, { "authors": "Darryl D Holm; Jerrold E Marsden; Tudor S Ratiu", "journal": "Advances in Mathematics", "ref_id": "b67", "title": "The euler-poincaré equations and semidirect products with applications to continuum theories", "year": "1998" }, { "authors": "Hans Christian; Oettinger ", "journal": "Physical Review E", "ref_id": "b68", "title": "Irreversible dynamics, onsager-casimir symmetry, and an application to turbulence", "year": "2014" }, { "authors": "Anthony Gruber; Max Gunzburger; Lili Ju; Zhu Wang", "journal": "Computer Methods in Applied Mechanics and Engineering", "ref_id": "b69", "title": "Energetically consistent model reduction for metriplectic systems", "year": "2023" }, { "authors": "Robert J Renka", "journal": "", "ref_id": "b70", "title": "A simple explanation of the sobolev gradient method", "year": "2006" }, { "authors": "Chris Yu; Henrik Schumacher; Keenan Crane", "journal": "ACM Trans. Graph", "ref_id": "b71", "title": "Repulsive curves", "year": "2021" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga", "journal": "Advances in neural information processing systems", "ref_id": "b72", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Joe Chen", "journal": "", "ref_id": "b73", "title": "Chaos from simplicity: an introduction to the double pendulum", "year": "2008" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b74", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Romeo Ortega; Arjan Van Der; Bernhard Schaft; Gerardo Maschke; Escobar", "journal": "Automatica", "ref_id": "b75", "title": "Interconnection and damping assignment passivity-based control of port-controlled hamiltonian systems", "year": "2002" }, { "authors": "Greg Brockman; Vicki Cheung; Ludwig Pettersson; Jonas Schneider; John Schulman; Jie Tang; Wojciech Zaremba", "journal": "", "ref_id": "b76", "title": "OpenAI Gym", "year": "2016" }, { "authors": "Lukas Biewald", "journal": "", "ref_id": "b77", "title": "Experiment tracking with weights and biases", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 273.98, 637.65, 171.53, 12.55 ], "formula_id": "formula_0", "formula_text": "∆ k = d * k d k + d k-1 d * k-1 follow (see e.g." }, { "formula_coordinates": [ 4, 336.77, 661.05, 108.2, 9.65 ], "formula_id": "formula_1", "formula_text": "d 1 • d 0 = curl • grad = 0." }, { "formula_coordinates": [ 4, 108, 699.68, 396, 24.68 ], "formula_id": "formula_2", "formula_text": "(v, w) = v ⊺ A k w for a machine-learnable A k , we obtain d * k = A -1 k d ⊺ k A k+1 (see Appendix A.3). This parameterization Ω 0 Ω 1 Ω 2 • • • Ω k Ω 0 Ω 1 Ω 2 • • • Ω k d 0 d ⊺ 0 d 1 d ⊺ 1 d 2 d ⊺ 2 d k-1 d ⊺ k-1 A 0 A 1 d * 0 A 2 d * 1 d * 2 d * k-1 A k" }, { "formula_coordinates": [ 5, 204.4, 189.38, 156.88, 12.83 ], "formula_id": "formula_3", "formula_text": "d i+1 • d i = d ⊺ i • d ⊺ i+1 = d * i • d * i+1 = 0." }, { "formula_coordinates": [ 5, 107.67, 257.03, 396.33, 60.29 ], "formula_id": "formula_4", "formula_text": "A k . Theorem 3.1. The dual derivatives d * k : Ω k+1 → Ω k adjoint to d k : Ω k → Ω k+1 with respect to the learnable inner products A k : Ω k → Ω k satisfy an exact sequence property. Proof. d * k-1 d * k = A -1 k-1 d ⊺ k-1 A k A -1 k d ⊺ k A k+1 = A -1 k-1 (d k d k-1 ) ⊺ A k+1 = 0." }, { "formula_coordinates": [ 5, 111.61, 469, 388.77, 23.11 ], "formula_id": "formula_5", "formula_text": "L = 0 -d * 0 d 0 0 , G = ∆ 0 0 0 ∆ 1 = d * 0 d 0 0 0 d * 1 d 1 + d 0 d * 0 , M = 0 0 0 A 1 d * 1 d 1 A 1 ." }, { "formula_coordinates": [ 5, 189.08, 698.01, 233.84, 26.8 ], "formula_id": "formula_6", "formula_text": "E(q, p) = 1 2 |q| 2 + |p| 2 = 1 2 i∈V |q i | 2 + 1 2 α∈E |p α | 2 ," }, { "formula_coordinates": [ 6, 191.57, 203.39, 228.86, 20.56 ], "formula_id": "formula_7", "formula_text": "a 0,ii = j∈N (i) f (ã (q i , q j )) , a 1,ij = f (ã (q i , q j )) ." }, { "formula_coordinates": [ 6, 206.44, 289.37, 140.95, 22.75 ], "formula_id": "formula_8", "formula_text": "(d * 0 p) i = A -1 0 d ⊺ 0 A 1 p i = j∈N (i)" }, { "formula_coordinates": [ 6, 190.42, 485.53, 228.54, 25.06 ], "formula_id": "formula_9", "formula_text": "q ṗ = 0 -d * 0 d 0 0 A -1 0 0 0 A -1 1 q p = -d * 0 A -1 1 p d 0 A -1 0 q" }, { "formula_coordinates": [ 6, 143.87, 543.51, 327.67, 11.26 ], "formula_id": "formula_10", "formula_text": "Ė(x) = ( ẋ, ∇E(x)) = (L(x)∇E(x), ∇E(x)) = -(∇E(x), L(x)∇E(x)) = 0," }, { "formula_coordinates": [ 6, 184.19, 607.65, 252.75, 25.06 ], "formula_id": "formula_11", "formula_text": "q ṗ = - ∆ 0 0 0 ∆ 1 A -1 0 0 0 A -1 1 q p = - ∆ 0 A -1 0 q ∆ 1 A -1 1 p ." 
}, { "formula_coordinates": [ 6, 163.84, 666.92, 263.12, 11.71 ], "formula_id": "formula_12", "formula_text": "Ė(x) = ( ẋ, ∇E(x)) = -(G(x)∇E(x), ∇E(x)) = -|∇E(x)|2" }, { "formula_coordinates": [ 7, 121.47, 102.31, 366.44, 25.06 ], "formula_id": "formula_13", "formula_text": "q ṗ = 0 -d * 0 d 0 0 A -1 0 q A -1 1 p + -d * 0 d 0 0 0 -d 0 d * 0 A -1 0 q A -1 1 p = -∆ 0 A -1 0 q -d * 0 A -1 1 p d 0 A -1 0 q -d 0 d * 0 A -1 1 p" }, { "formula_coordinates": [ 7, 122.56, 161.3, 370.3, 11.71 ], "formula_id": "formula_14", "formula_text": "Ė(x) = ( ẋ, ∇E(x)) = L(x)∇E(x) + L 2 (x)∇E(x), ∇E(x) = 0 -|L(x)∇E(x)| 2 ≤ 0," }, { "formula_coordinates": [ 7, 224.46, 277.55, 163.07, 27.73 ], "formula_id": "formula_15", "formula_text": "E(q, p) = f E (s(q)) + g E (s (d 0 d ⊺ 0 p)) , S(q, p) = g S (s (d ⊺ 1 d 1 p))" }, { "formula_coordinates": [ 7, 108, 349.99, 387.77, 25.06 ], "formula_id": "formula_16", "formula_text": "∇E(x) = A -1 0 1 ⊗ ∇f E (h(q)) A -1 1 d 0 d ⊺ 0 1 ⊗ ∇g E (h (d 0 d ⊺ 0 p)) , ∇S(x) = 0 A -1 1 d ⊺ 1 d 1 1 ⊗ ∇g S (h (d ⊺ 1 d 1 p))" }, { "formula_coordinates": [ 7, 128.35, 409.06, 352.67, 25.06 ], "formula_id": "formula_17", "formula_text": "q ṗ = L∇E + M∇S = -A -1 0 d ⊺ 0 d 0 d ⊺ 0 1 ⊗ ∇g E (s (d 0 d ⊺ 0 p)) d 0 A -1 0 1 ⊗ ∇f E (s(q)) + A 1 d * 1 d 1 d ⊺ 1 d 1 1 ⊗ ∇g S (s (d ⊺ 1 d 1 p))" }, { "formula_coordinates": [ 7, 108, 457.25, 390.94, 45.3 ], "formula_id": "formula_18", "formula_text": "Ė(x) = ( ẋ, ∇E(x)) = (L∇E(x), ∇E(x)) + (M∇S(x), ∇E(x)) = (∇S(x), M∇E(x)) = 0, Ṡ(x) = ( ẋ, ∇S(x)) = (L∇E(x), ∇S(x)) + (M∇S(x), ∇S(x)) = 0 + |∇S(x)| 2 M ≥ 0. Remark 4.6." }, { "formula_coordinates": [ 15, 108, 154.44, 298.7, 44.92 ], "formula_id": "formula_19", "formula_text": "L ⊺ = -L N (i), N (i) Neighbors of node i ∈ V, neighbors of node i ∈ V including i [S]" }, { "formula_coordinates": [ 15, 108, 223.1, 156.01, 28.25 ], "formula_id": "formula_20", "formula_text": "∆ k Hodge Laplacian d k d * k + d * k d k δ ij" }, { "formula_coordinates": [ 15, 108, 277.94, 332, 61.94 ], "formula_id": "formula_21", "formula_text": "(•, •) k Learnable metric inner product on k-cliques with matrix representation A k ⟨•, •⟩ k Euclidean ℓ 2 inner product on k-cliques G, V, E Oriented graph, set of nodes, set of edges G k , Ω k Set of k-cliques," }, { "formula_coordinates": [ 15, 108, 362.38, 315.01, 13.06 ], "formula_id": "formula_22", "formula_text": "d ⊺ k , d * k Adjoint of d k with respect to ⟨•, •⟩ k , adjoint of d k with respect to (•, •) k" }, { "formula_coordinates": [ 16, 160.05, 88.61, 291.9, 64.26 ], "formula_id": "formula_23", "formula_text": "d 0 =        -1 1 0 0 0 0 0 -1 1 0 0 0 0 -1 0 1 0 0 0 0 0 -1 1 0 0 1 0 0 -1 0 0 0 0 0 -1 1        , d 1 = (0 0 1 1 1 0) ." }, { "formula_coordinates": [ 16, 423.8, 171.47, 81.94, 8.74 ], "formula_id": "formula_24", "formula_text": "N (i) = N (i) ∪ {i}." }, { "formula_coordinates": [ 16, 189.17, 237.23, 116.78, 30.32 ], "formula_id": "formula_25", "formula_text": "(d k f ) (i 0 , i 1 , ..., i k+1 ) = k+1 j=0" }, { "formula_coordinates": [ 16, 167.81, 297.08, 275.38, 174.84 ], "formula_id": "formula_26", "formula_text": "• d = 0, (d k d k-1 f ) i0,...,i k+1 = k+1 j=0 (-1) j (d k-1 f ) i0,..., ij ,...,i k+1 = k+1 j=0 k+1 l=0 [l < j] (-1) j+l f i0... i l ... ij ...i k+1 + k+1 j=0 k+1 l=0 [l > j] (-1) j+l-1 f i0... ij ... i l ...i k+1 = l<j (-1) j+l f i0... i l ... ij ...i k+1 - l<j (-1) j+l f i0... i l ... 
ij ...i k+1 = 0," }, { "formula_coordinates": [ 16, 209.5, 522.42, 192.01, 16.88 ], "formula_id": "formula_27", "formula_text": "Ω 0 Ω 1 Ω 2 • • • Ω K d0 d1 d2 d K-1" }, { "formula_coordinates": [ 16, 352.78, 593.68, 103.31, 8.74 ], "formula_id": "formula_28", "formula_text": "• curl = curl • grad = 0." }, { "formula_coordinates": [ 16, 182.95, 677.98, 246.1, 30.99 ], "formula_id": "formula_29", "formula_text": "(d ⊺ k f ) (i 0 , i 1 , ..., i k ) = 1 k + 2 i k+1 k+1 j=0 f (i 0 , ..., [i j , ..., i k+1 ]) ," }, { "formula_coordinates": [ 17, 108, 96.98, 333.74, 186.48 ], "formula_id": "formula_30", "formula_text": "g ∈ Ω k , ⟨d k f, g⟩ = i0...i k+1 ∈G k+1 (d k f ) i0...i k+1 g i0...,i k+1 = 1 (k + 2)!   k+1 j=0 (-1) j f i0... ij ...i k+1   g i0...i k+1 = 1 (k + 2)! f i0...i k   i k+1 k+1 j=0 (-1) j g i0...[ij ...i k+1 ]   = 1 k + 2 i0i1...i k ∈G k f i0...i k   i k+1 k+1 j=0 (-1) j g i0...[ij ...i k+1 ]   = i0i1...i k ∈G k f i0...i k (d ⊺ k g) i0...i k = ⟨f, d ⊺ k g⟩ ," }, { "formula_coordinates": [ 17, 227.91, 321.92, 117.92, 11.15 ], "formula_id": "formula_31", "formula_text": "(d 0 f ) α = (d 0 f ) ij = f j -f i ." }, { "formula_coordinates": [ 17, 181.38, 362.1, 248.75, 52.78 ], "formula_id": "formula_32", "formula_text": "⟨d 0 f, g⟩ = α=(i,j) (f j -f i ) g ij = i (j>i)∈N (i) g ij f j -g ij f i = 1 2 i j∈N (i) f i (g ji -g ij ) = ⟨f, d ⊺ 0 g⟩ ," }, { "formula_coordinates": [ 17, 214.45, 452.89, 183.11, 27.27 ], "formula_id": "formula_33", "formula_text": "(d ⊺ 0 g) i = α∋i g -α -g α = 1 2 j∈N (i) g ji -g ij ," }, { "formula_coordinates": [ 17, 235.8, 545.89, 140.39, 27.27 ], "formula_id": "formula_34", "formula_text": "(d * 0 f ) i = 1 2 j∈N (i) w ij (f ji -f ij ) ." }, { "formula_coordinates": [ 17, 107.69, 595.96, 396.3, 25.9 ], "formula_id": "formula_35", "formula_text": "The differential operators d ⊺ k induce a dual de Rham complex since d ⊺ k-1 d ⊺ k = (d k d k-1 ) ⊺ = 0, which enables both the construction of Laplace operators on k-cliques, ∆ k = d ⊺ k d k + d k-1 d ⊺ k-1 ," }, { "formula_coordinates": [ 17, 235.48, 658.62, 141.04, 13.06 ], "formula_id": "formula_36", "formula_text": "Ω k = Im d k-1 ⊕ Ker ∆ k ⊕ Im d ⊺ k ." }, { "formula_coordinates": [ 18, 108, 100.68, 270.78, 42.47 ], "formula_id": "formula_37", "formula_text": "w k-1 = d * k-1 u k + ϵg d * k-1 u k ; ξ , f k = d k-1 w k-1 + d * k d k u k , has a unique solution on Ω k /Ker ∆ k ." }, { "formula_coordinates": [ 18, 359.39, 210.99, 143.1, 12.71 ], "formula_id": "formula_38", "formula_text": "∆ 0 = d ⊺ 0 d 0 , satisfies ∆ 0 = D -A." }, { "formula_coordinates": [ 18, 136.21, 251.28, 338.68, 39.35 ], "formula_id": "formula_39", "formula_text": "(d ⊺ 0 d 0 ) ij = α∈E (d 0 ) αi (d 0 ) αj = [i = j] α∈E ((d 0 ) αi ) 2 + [i ̸ = j] α=(i,j) (d 0 ) αi (d 0 ) αj = [i = j] d ii -[i ̸ = j] a ij = d ij -a ij = D -A," }, { "formula_coordinates": [ 18, 162.21, 443.43, 287.59, 26.29 ], "formula_id": "formula_40", "formula_text": "dA(q)δq = b a dL (q, q) δq = b a L q δq + L q δ q = b a (L q -∂ t L q ) δq," }, { "formula_coordinates": [ 18, 226.32, 586.18, 160.99, 20.56 ], "formula_id": "formula_41", "formula_text": "ẋ = q ṗ = 0 1 -1 0 H q H p = J∇H," }, { "formula_coordinates": [ 19, 234.97, 431.14, 142.07, 27.94 ], "formula_id": "formula_42", "formula_text": "{A, B} = ξ ijk ∂ i A ∂ j B ∂ k S, [A, B] = ζ ik,jl ∂ i A ∂ k E ∂ j B ∂ l E." 
}, { "formula_coordinates": [ 20, 108, 238.12, 104.85, 11.71 ], "formula_id": "formula_43", "formula_text": "D(u) = (1/2) |∇u| 2 dµ" }, { "formula_coordinates": [ 20, 107.57, 436.29, 396.43, 25.31 ], "formula_id": "formula_44", "formula_text": "* k = A -1 k d ⊺ k A k+1 ." }, { "formula_coordinates": [ 20, 318.26, 448.22, 176.62, 13.38 ], "formula_id": "formula_45", "formula_text": "Ω k → Ω k , the A k -adjoint B * = A -1 k B ⊺ A." }, { "formula_coordinates": [ 20, 107.69, 484.74, 396.31, 87.34 ], "formula_id": "formula_46", "formula_text": "(d k q, p) k+1 = ⟨d k q, A k+1 p⟩ = ⟨q, d ⊺ k A k+1 p⟩ = ⟨q, A k d * k p⟩ = (q, d * k p) k . Therefore, we see that d ⊺ k A k+1 = A k d * k and hence d * k = A -1 k d ⊺ k A k+1 . Similarly, if q, q ′ denote vectors of k-clique features, it follows from the ℓ 2 -self-adjointness of A k that q, Bq ′ k = q, A k Bq ′ = ⟨B ⊺ A k q, q ′ ⟩ = A -1 k B ⊺ A k q, A k q ′ = (B * q, q ′ ) k , establishing that B * = A -1 k B ⊺ A k . Remark A.6." }, { "formula_coordinates": [ 20, 229.49, 597, 153.02, 27.27 ], "formula_id": "formula_47", "formula_text": "(d * 0 p) i = 1 a i j:(i,j)∈E w ij (p ji -p ij ) ." }, { "formula_coordinates": [ 21, 191.57, 125.63, 228.86, 20.56 ], "formula_id": "formula_48", "formula_text": "a 0,ii = j∈N (i) f (ã (q i , q j )) , a 1,ij = f (ã (q i , q j )) ," }, { "formula_coordinates": [ 21, 167.49, 272.71, 277.02, 51.74 ], "formula_id": "formula_49", "formula_text": "(d * 0 p) i = A -1 0 d ⊺ 0 A 1 p i = a -1 0,ii α (d ⊺ 0 ) iα (A 1 p) α = a -1 0,ii α∋i (A 1 p) -α -(A 1 p) α = - j∈N (i) a 1,ji + a i,ij a 0,ii p ij ." }, { "formula_coordinates": [ 21, 107.64, 411.63, 397.52, 82.18 ], "formula_id": "formula_50", "formula_text": "A K-1 = (a K-1,i1i2...i K ) by a K-1,i1i2...i K = f (W (q i1 , q i2 , ..., q i K )) , where W ∈ R ⊗ K n V is a learnable K-tensor. Then, for any 0 ≤ k ≤ K -2 define A k = a k,i1i2...i k+1 by a k,i1i2...i k+1 = i K ,...,i K-k-1 a K-1,i1i2...i K ." }, { "formula_coordinates": [ 21, 108, 558.28, 396, 52.16 ], "formula_id": "formula_51", "formula_text": "a k,i1i2...i k+1 = a K-1,i1i2...i K i K ,...,i K-k-1 a K-1,i1i2...i K , for any 0 ≤ k ≤ K -2. However, application of the combinatorial codifferential d ⊺ k-1 appearing in d *" }, { "formula_coordinates": [ 21, 132.28, 673.03, 347.44, 20.56 ], "formula_id": "formula_52", "formula_text": "a 2,ijk = f (W (q i , q j , q k )) , a 1,ij = k∈N (i,j) a 2,ijk , a 0,i = j∈N (i) k∈N (i,j) a 2,ijk ." }, { "formula_coordinates": [ 21, 284.65, 710.97, 68.22, 13.03 ], "formula_id": "formula_53", "formula_text": "d * 1 = A -1 1 d ⊺ 0 A 2 ." }, { "formula_coordinates": [ 22, 223.86, 123.07, 280.81, 35.38 ], "formula_id": "formula_54", "formula_text": "q k+1 i = σ   j∈N (i) a q k i , q k j W k q k j   ,(1)" }, { "formula_coordinates": [ 22, 152.22, 251.17, 268.83, 23.01 ], "formula_id": "formula_55", "formula_text": "ã (q i , q j ) = LeakyReLU (a ⊺ (W ⊺ q i || W ⊺ q j )) , σ i = j∈N (i)" }, { "formula_coordinates": [ 22, 107.57, 484.52, 396.43, 24.61 ], "formula_id": "formula_56", "formula_text": "d * 0 d 0 = A -1 0 d ⊺ 0 A 1 d 0 for some positive definite A 0 , A 1 ." 
}, { "formula_coordinates": [ 22, 138.39, 526.95, 335.23, 81.9 ], "formula_id": "formula_57", "formula_text": "(∆ 0 q) i = (d * 0 d 0 q) i = A -1 0 d ⊺ 0 A 1 d 0 q i = a -1 0,ii α (d ⊺ 0 ) iα (A 1 d 0 q) α = a -1 0,ii α∋i (A 1 d 0 q) -α -(A 1 d 0 q) α = - 1 2 j∈N (i) a 1,ji + a i,ij a 0,ii (q j -q i ) , = j∈N (i)" }, { "formula_coordinates": [ 23, 168.31, 108.1, 275.38, 21.79 ], "formula_id": "formula_58", "formula_text": "(∆ 0 q) i = - j∈N (i) a (q i , q j ) (q j -q i ) = q i - j∈N (i) a (q i , q j ) q j ." }, { "formula_coordinates": [ 23, 206.43, 171.48, 199.15, 23.98 ], "formula_id": "formula_59", "formula_text": "q k+1 i = q k i -τ ∆ 0 q k i = j∈N (i) a q k i , q k j q k j ," }, { "formula_coordinates": [ 23, 240.48, 263.36, 46.69, 21.89 ], "formula_id": "formula_60", "formula_text": "qi = j∈N (i)" }, { "formula_coordinates": [ 23, 188.34, 452.37, 85.64, 27.27 ], "formula_id": "formula_61", "formula_text": "(∆ 0 q) i = - 1 2 j∈N (i)" }, { "formula_coordinates": [ 23, 220.99, 459.08, 179.31, 55.24 ], "formula_id": "formula_62", "formula_text": "σ i (q j -q i ) = - 1 2 j∈N (i)" }, { "formula_coordinates": [ 23, 240.16, 584.4, 131.67, 26.88 ], "formula_id": "formula_63", "formula_text": "a (q i , q j ) = 1 |h| h a h (q i , q j ) ," }, { "formula_coordinates": [ 24, 179.04, 100.24, 253.92, 54.25 ], "formula_id": "formula_64", "formula_text": "L * = A -1 0 0 0 A -1 1 0 -d * 0 d 0 0 ⊺ A 0 0 0 A 1 = 0 A -1 0 d ⊺ 0 A 1 -A -1 1 (d * 0 ) ⊺ A 0 0 = 0 d * 0 -d 0 0 = -L." }, { "formula_coordinates": [ 24, 160.82, 174.04, 290.37, 117.18 ], "formula_id": "formula_65", "formula_text": "G * = A -1 0 0 0 A -1 1 d * 0 d 0 0 0 d * 1 d 1 ⊺ A 0 0 0 A 1 = A -1 0 d ⊺ 0 (d * 0 ) ⊺ A 0 0 0 A -1 1 d ⊺ 1 (d * 1 ) ⊺ A 1 = d * 0 d 0 0 0 d * 1 d 1 = G, M * = A -1 0 0 0 A -1 1 0 0 0 A 1 d * 1 d 1 A 1 ⊺ A 0 0 0 A 1 = 0 0 0 d ⊺ 1 (d * 1 ) ⊺ A 2 1 = 0 0 0 A 1 d * 1 d 1 A 1 = M," }, { "formula_coordinates": [ 24, 108, 387.35, 314.92, 67.71 ], "formula_id": "formula_66", "formula_text": "E(q, p) = 1 2 |q| 2 + |p| 2 = 1 2 i∈V |q i | 2 + 1 2 α∈E |p α | 2 , satisfies ∇E(q, p) = A -1 0 0 0 A -1 1 q p = A -1 0 q A -1 1 p" }, { "formula_coordinates": [ 24, 224.46, 472.34, 163.07, 27.73 ], "formula_id": "formula_67", "formula_text": "E(q, p) = f E (s(q)) + g E (s (d 0 d ⊺ 0 p)) , S(q, p) = g S (s (d ⊺ 1 d 1 p))" }, { "formula_coordinates": [ 24, 108, 530.97, 409.01, 25.06 ], "formula_id": "formula_68", "formula_text": "∇E(q, p) = A -1 0 1 ⊗ ∇f E (s(q)) A -1 1 d 0 d ⊺ 0 1 ⊗ ∇g E (s (d 0 d ⊺ 0 p)) , ∇S(q, p) = 0 A -1 1 d ⊺ 1 d 1 1 ⊗ ∇g S (s (d ⊺ 1 d 1 p)) ," }, { "formula_coordinates": [ 24, 158.39, 610.47, 295.22, 20.06 ], "formula_id": "formula_69", "formula_text": "dE(x) = i∈V ⟨q i , dq i ⟩ + α∈E ⟨p α , dp α ⟩ = ⟨q, dq⟩ + ⟨p, dp⟩ = ⟨x, dx⟩ ," }, { "formula_coordinates": [ 25, 108, 88.53, 359.96, 41.5 ], "formula_id": "formula_70", "formula_text": "d (f • s • Bq) = f ′ (s (Bq)) a s ′ (Bq) a i B ij dq a j = f ′ (s (Bq)) a e a (B ⊺ ) ij 1 j dq a i = ⟨∇f (s (Bq)) ⊗ B ⊺ 1, dq⟩ = ⟨∇ (f • s • Bq) , dq⟩ , showing that ∇ (f • s • B)" }, { "formula_coordinates": [ 25, 124.35, 146.95, 351.54, 21.87 ], "formula_id": "formula_71", "formula_text": "δE(q, p) = 1 ⊗ ∇f E (s(q)) d 0 d ⊺ 0 1 ⊗ ∇g E (s (d 0 d ⊺ 0 p)) , δS(q, p) = 0 d ⊺ 1 d 1 1 ⊗ ∇g S (s (d ⊺ 1 d 1 p))" }, { "formula_coordinates": [ 25, 176.32, 276.86, 252.94, 110.66 ], "formula_id": "formula_72", "formula_text": "L∇S = 0 -d * 0 d 0 0 0 A -1 1 d ⊺ 1 d 1 1 ⊗ ∇g S (s (d ⊺ 1 d 1 p)) = -A -1 0 (d 1 d 0 ) ⊺ d 1 1 ⊗ ∇g S (s (d ⊺ 1 d 1 p)) 0 = 0 0 , M∇E = 0 0 0 A 1 d * 1 d 1 A 1 A -1 0 1 
⊗ ∇f E (s(q)) A -1 1 d 0 d ⊺ 0 1 ⊗ ∇g E (s (d 0 d ⊺ 0 p)) = 0 A 1 d * 1 (d 1 d 0 )d ⊺ 0 1 ⊗ ∇g E (s (d 0 d ⊺ 0 p)) = 0 0 ," }, { "formula_coordinates": [ 25, 176.94, 650.1, 327.73, 8.96 ], "formula_id": "formula_73", "formula_text": "1 ≤ i ≤ 2,(2)" }, { "formula_coordinates": [ 25, 121.01, 663.89, 383.66, 25.55 ], "formula_id": "formula_74", "formula_text": "ω1 = m 2 l 1 ω 2 1 sin (2∆θ) + 2m 2 l 2 ω 2 2 sin (∆θ) + 2gm 2 cos θ 2 sin ∆θ + 2gm 1 sin θ 1 + γ 1 -2l 1 m 1 + m 2 sin 2 ∆θ ,(3)" }, { "formula_coordinates": [ 25, 121.01, 693.81, 383.66, 25.55 ], "formula_id": "formula_75", "formula_text": "ω2 = m 2 l 2 ω 2 2 sin (2∆θ) + 2 (m 1 + m 2 ) l 1 ω 2 1 sin ∆θ + 2g (m 1 + m 2 ) cos θ 1 sin ∆θ + γ 2 2l 2 m 1 + m 2 sin 2 ∆θ ,(4)" }, { "formula_coordinates": [ 26, 108, 102.94, 280.87, 57.06 ], "formula_id": "formula_76", "formula_text": "γ 1 = 2k 1 θ1 -2k 2 θ2 cos ∆θ. γ 2 = 2k 1 θ1 cos ∆θ - 2 (m 1 + m 2 ) m 2 k 2 θ2 , for damping constants k 1 , k 2 ." }, { "formula_coordinates": [ 26, 215.65, 306.12, 180.7, 51.34 ], "formula_id": "formula_77", "formula_text": "x 1 = l 1 sin θ 1 y 1 = -l 1 cos θ 1 x 2 = x 1 + l 2 sin θ 2 = l 1 sin θ 1 + l 2 sin θ 2 y 2 = y 1 -l 2 cos θ 2 = -l 1 cos θ 1 -l 2 cos θ 2 ." }, { "formula_coordinates": [ 30, 189.35, 178.52, 244.3, 27.16 ], "formula_id": "formula_78", "formula_text": "2 exp - |W K u i -W Q u j | 2 2ℓ 2 u exp - |W K x i -W Q x j | 2 2ℓ2" }, { "formula_coordinates": [ 30, 255.65, 380.65, 100.71, 31.18 ], "formula_id": "formula_79", "formula_text": "H (t, y) = |V| i=1 t ⊺ i log y i ," } ]
Reversible and irreversible bracket-based dynamics for deep graph neural networks
Recent works have shown that physics-inspired architectures allow the training of deep graph neural networks (GNNs) without oversmoothing. The role of these physics is unclear, however, with successful examples of both reversible (e.g., Hamiltonian) and irreversible (e.g., diffusion) phenomena producing comparable results despite diametrically opposed mechanisms, and further complications arising due to empirical departures from mathematical theory. This work presents a series of novel GNN architectures based upon structure-preserving bracket-based dynamical systems, which are provably guaranteed to either conserve energy or generate positive dissipation with increasing depth. It is shown that the theoretically principled framework employed here allows for inherently explainable constructions, which contextualize departures from theory in current architectures and better elucidate the roles of reversibility and irreversibility in network performance.
Anthony Gruber; Kookjin Lee; Nathaniel Trask
[ { "figure_caption": "Gradient ẋ = -[x, E] M * = M incomplete totally dissipative Double Bracket ẋ = {x, E} + {{x, E}} L * = -L incomplete partially dissipative Metriplectic ẋ = {x, E} + [x, S] L * = -L, M * = M, complete partially dissipative L∇S = M∇E = 0", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: A diagrammatic illustration of the bracket-based architectures introduced in Section 4.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "vector space of real-valued functions on k-cliques d, d k Exterior derivative operator on functions, exterior derivative operator on k-cliques", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: A toy graph with six 0-cliques (nodes), six 1-cliques (edges), and one 2-clique.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "as well as the celebrated Hodge decomposition theorem, stated below. For a proof, see, e.g., [11, Theorem 3.3]. Theorem A.3. (Hodge Decomposition Theorem) The de Rham complexes formed by d k , d ⊺ k induce the following direct sum decomposition of the function space Ω k ,", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "intuitive fact: the Taylor series of a function does not change, regardless of the inner product on its domain. For any differentiable function(al) E : Ω 0 → R, using d to denote the exterior derivative, this means that the following equality holds dE(a)b := lim ε→0 E(a + ϵb) -E(a) ε = ⟨δE(a), b⟩ = (∇E(a), b) 0 ,", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: [Double pendulum] Trajectories of q and p: ground-truth (solid lines) and predictions of the metriplectic bracket model (dashed lines). The evolution of the energy E and the entropy S over the simulation time. Note that slight fluctuations appear in E due to the fact that forward Euler is not a symplectic integrator.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: [Mujoco] Train MSE over epoch for all considered dynamics models. For the nodal feature, only the position or the angle of the body part/joint is considered.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 77reports the loss trajectories for all considered models. Similar to the previous experiments with the position as the only nodal feature, the Hamiltonian and Double bracket produces the lower training losses than the NODE and Gradient models do. For the Hopper and Swimmer environments, however, among all considered models, the metriplectic model produces the lowest training MSEs after 256 training epochs.", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: [Mujoco] Train MSE over epoch for all considered dynamics models. 
For the nodal feature, along with the position or the angle of the body part/joint, the node velocities are also considered.", "figure_data": "", "figure_id": "fig_12", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Mean absolute errors (MAEs) of the network predictions in the damped double pendulum case, reported as avg±stdev over 5 runs.", "figure_data": "Double pendulum MAE qMAE pTotal MAENODE0.0240 ± 0.0150.0299 ± 0.00910.0269 ± 0.012NODE+AE0.0532 ± 0.0290.0671 ± 0.0430.0602 ± 0.035Hamiltonian0.00368 ± 0.00150.00402 ± 0.00150.00369 ± 0.0013Gradient0.00762 ± 0.00230.0339 ± 0.0120.0208 ± 0.0067Double Bracket0.00584 ± 0.00130.0183 ± 0.00710.0120 ± 0.0037Metriplectic0.00364 ± 0.00064 0.00553 ± 0.00029 0.00459 ± 0.00020", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Double Bracket 0.0621 ± 0.0096 0.0297 ± 0.0048 0.0128 ± 0.00070 Metriplectic 0.105 ± 0.0091 0.0398 ± 0.0057 0.0179 ± 0.00059 Relative error of network predictions for the MuJoCo environment on the test set, reported as avg±stdev over 4 runs. Hamiltonian 77.2 ± 0.7 73.0 ± 1.2 78.5 ± 0.3 Gradient 79.9 ± 0.7 71.8 ± 1.4 78.6 ± 0.7 Double Bracket 82.6 ± 0.9 74.2 ± 1.4 79.6 ± 0.6 Metriplectic 57.4 ± 1.0 60.5 ± 1.1 69.8 ± 0.7", "figure_data": "DatasetHalfCheetahHopperSwimmerNODE+AE0.106 ± 0.00110.0780 ± 0.0021 0.0297 ± 0.0036Hamiltonian0.0566 ± 0.0130.0279 ± 0.0019 0.0122 ± 0.00044Gradient0.105 ± 0.00760.0848 ± 0.0011 0.0290 ± 0.0011Planetoid splitsCORACiteSeerPubMedGAT82.8 ± 0.5 69.5 ± 0.9 79.0 ± 0.5GDE83.8 ± 0.5 72.5 ± 0.5 79.9 ± 0.3GRAND-nl83.6 ± 0.5 70.8 ± 1.1 79.7 ± 0.3", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Test accuracy and standard deviations (averaged over 20 randomly initialized runs) using the original Planetoid train-valid-test splits. Comparisons use the numbers reported in", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Test accuracy and standard deviations averaged over 20 runs with random 80/10/10 train/val/test splits. Comparisons use the numbers reported in", "figure_data": "Random splitsCORACiteSeerPubMedCoauthor CS ComputerPhotoGAT81.8 ± 1.3 71.4 ± 1.9 78.7 ± 2.3 90.5 ± 0.678.0 ± 19.0 85.7 ± 20.3GDE78.7 ± 2.2 71.8 ± 1.1 73.9 ± 3.7 91.6 ± 0.182.9 ± 0.692.4 ± 2.0GRAND-nl82.3 ± 1.6 70.9 ± 1.0 77.5 ± 1.8 92.4 ± 0.382.4 ± 2.192.4 ± 0.8Hamiltonian76.2 ± 2.1 72.2 ± 1.9 76.8 ± 1.1 92.0 ± 0.284.0 ± 1.091.8 ± 0.2Gradient81.3 ± 1.2 72.1 ± 1.7 77.2 ± 2.1 92.2 ± 0.378.1 ± 1.288.2 ± 0.6Double Bracket 83.0 ± 1.1 74.2 ± 2.5 78.2 ± 2.0 92.5 ± 0.284.8 ± 0.592.4 ± 0.3Metriplectic59.6 ± 2.0 63.1 ± 2.4 69.8 ± 2.1 ---", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "{{•, •}} (Irreversible) double bracket on functions with generator L 2", "figure_data": "[•, •]Degenerate (irreversible) metric bracket on functions with generator M ⊺ = M{•, •} Poisson (reversible) bracket on functions with generator", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The dataset consists of training and test data, which are constructed by randomly splitting the episodes in the replay buffer into training and test data. Training and test data consist of ∼40K and ∼300 or ∼ 85 trajectories, respectively. 
For both training and test data, we include the first 20 measurements (i.e., 19 transitions) in each trajectory.", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Relative errors of the network predictions of the MuJoCo environments on the test set, reported as avg±stdev over 4 runs.", "figure_data": "DatasetHalfCheetahHopperSwimmerNODE+AE0.0848 ± 0.00110.0421 ± 0.00410.0135 ± 0.00082Hamiltonian0.0403 ± 0.00520.0294 ± 0.00280.0120 ± 0.00022Gradient0.0846 ± 0.00358 0.0490 ± 0.00130.0158 ± 0.00030Double Bracket 0.0653 ± 0.0100.0274 ± 0.00090 0.0120 ± 0.00060Metriplectic0.0757 ± 0.00210.0269 ± 0.00035 0.0114 ± 0.00067", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Table 7 provides some basic statistics about each dataset. Dataset statistics.", "figure_data": "DatasetCora Citeseer PubMed CoauthorCS Computer PhotoClasses76315108Features 1433 37035006805767745Nodes2485 21201971718333133817487Edges5069 36794432481894245778119043", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[1][2][3][4]", "Explanation": "The cited works provide the foundational framework of GNNs, which the citing paper builds upon to develop a learning paradigm for treating unstructured data and extracting object-relation/causal relationships."}, {"Category": "Extension or Continuation", "Citation": "[5][6][7][8]", "Explanation": "The cited works highlight the well-known pathologies of GNNs, which the citing paper extends by proposing solutions to improve performance and stability in deep GNNs."}, {"Category": "Supporting Evidence", "Citation": "[9]", "Explanation": "The cited work presents a spectrum of (ir)reversibility in message passing processes, which the citing paper uses to discuss the role of (ir)reversibility in promoting stability and information retention in GNNs."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work provides a data-driven exterior calculus (DDEC) framework that the citing paper uses to re-interpret the message-passing and aggregation of graph attention networks as fluxes and conservation balances in physics simulators."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work on graph attention networks is used as a basis for the message-passing and aggregation techniques employed in the citing paper to analyze the dynamics of physical systems."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work on physics simulators provides a framework for analyzing the conservation and entropy production in dynamical systems, which the citing paper leverages to study the dynamics of physical systems."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work introduces the concept of graph attention, which the citing paper uses to recast graph attention as an inner-product on graph features and build geometric brackets for higher-order clique cochains."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work on model calibration provides a foundational method for fitting dynamics using neural networks, which the citing paper builds upon in their research on structure-preservation."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work on dictionary-based learning using SINDy is a methodological basis for the citing paper in learning dynamics on a graph with a modern NODE library."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work on neural ordinary differential equations (NODE) provides a method for learning dynamics that the citing paper uses to exploit the improved accuracy of high-order integrators in their research on structure-preservation."}, {"Category": "Data Source", "Citation": "[17][18][19]", "Explanation": "The cited works on high-order integrators are a data source for the citing paper in exploiting the improved accuracy of these methods in their research on learning dynamics on a graph."}, {"Category": "Methodological Basis", "Citation": "[20][21][22][23]", "Explanation": "The cited works on Hamiltonian neural networks provide a methodological basis for the citing paper in parameterizing reversible dynamics in the context of structure-preserving dense networks."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work on Hamiltonian generative networks is a methodological basis for the citing paper in learning structure-preserving 
dynamics in a dense network setting."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work on Hamiltonian with Control (SymODEN) provides a method for learning structure-preserving dynamics in a dense network setting that the citing paper builds upon."}, {"Category": "Methodological Basis", "Citation": "[26][27]", "Explanation": "The cited works on deep Lagrangian networks and Lagrangian neural networks provide methodological bases for the citing paper in learning structure-preserving dynamics in a dense network setting."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work provides a delicate degeneracy condition to preserve discrete notions of the first and second laws of thermodynamics, which the citing paper adopts in their research on structure-preserving extensions to dissipative systems."}, {"Category": "Extension or Continuation", "Citation": "[32]", "Explanation": "The cited work, Dissipative SymODEN, is an alternative dissipative framework that the citing paper extends to further explore the potential impact of metriplectic parameterizations in data-driven physics modeling."}, {"Category": "Data Source", "Citation": "[35]", "Explanation": "The cited work on physics-informed learning by penalty has been successfully applied to solve a range of PDEs, which the citing paper references to support the use of this technique in their own research."}, {"Category": "Supporting Evidence", "Citation": "[36][37][38]", "Explanation": "The cited works on data-driven models to complement first-principles simulators provide supporting evidence for the citing paper to focus on metriplectic parameterizations in data-driven physics modeling."}, {"Category": "Extension or Continuation", "Citation": "[39]", "Explanation": "The cited work on learning metriplectic dynamics is an extension of the research on physics-informed learning by penalty, which the citing paper references to further explore the use of this technique in data-driven physics modeling."}, {"Category": "Supporting Evidence", "Citation": "[40,41]", "Explanation": "The cited works on uncertainty quantification provide supporting evidence for the citing paper to focus on metriplectic parameterizations in data-driven physics modeling, as they are useful in this context."}, {"Category": "Methodological Basis", "Citation": "[42,43]", "Explanation": "The cited works provide a discussion on the issues of using penalization in multiobjective optimization, which the citing paper uses to contrast the advantages of structure-preserving architectures in improving long term stability and physical realizability."}, {"Category": "Extension or Continuation", "Citation": "[44,45]", "Explanation": "The cited works on using specific PDEs to combat oversmoothing and reaction-diffusion systems for structure-preservation are extended in the citing paper to develop Hamiltonian flows and metriplectic dynamics."}, {"Category": "Extension or Continuation", "Citation": "[46,47]", "Explanation": "The cited works on using Hamiltonian flows on graphs are further developed in the citing paper to focus on specific cases of graph learning and diffusive processes."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work on a penalty based formulation on graphs is extended in the citing paper to focus on a particular case of asymmetric attention in the graph learning process."}, {"Category": "Extension or Continuation", "Citation": "[49]", "Explanation": "The 
cited work on using a diffusive process for graph learning is further developed in the citing paper to analyze the asymmetry in the attention mechanism and its impact on the governing theory."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The cited work introduces the concept of bracket formulations, which the citing paper adopts in the design of neural architectures to preserve core mathematical properties in the context of neural differential equations."}, {"Category": "Methodological Basis", "Citation": "[51]", "Explanation": "The cited work introduces the concept of exterior derivative operators in the context of combinatorial Hodge theory, which the citing paper adopts to model the discrete de Rham complex on graphs."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work provides a definition of the dual derivatives d * k that the citing paper uses in the context of building discretizations of PDEs on G that preserve exactness properties."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work provides the adjoints of the signed incidence matrices as a method for obtaining d * k in the graph setting, which the citing paper adopts in their research to work with the modified inner product (v, w) = v \u22ba A k w for a machine-learnable A k ."}, {"Category": "Supporting Evidence", "Citation": "[11]", "Explanation": "The cited work provides a thorough review of DDEC, which is used as a foundational element in the citing paper to understand the concept of diffusion operators and the need for degeneracy conditions in metriplectic dynamics."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work provides a standard graph attention mechanism that the citing paper uses as a basis for the construction of the inner products A 0 , A 1 , A 2 on \u2126 k . 
This method is essential for the generation of the dual derivatives d * 0 , d * 1 , which are crucial for the overall bracket dynamics discussed in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[55]", "Explanation": "The cited work provides a pre-attention function for graph attention mechanism, which the citing paper adopts to represent the attention function a(q i , q j ) in a more efficient and effective way."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work provides a GAT layer that the citing paper adopts in their research to perform forward Euler discretization of a metric heat equation."}, {"Category": "Methodological Basis", "Citation": "[49]", "Explanation": "The cited work, GRAND, is used as a reference for the architecture in the citing paper, which is almost a gradient flow but lacks the requisite symmetry to formally induce a valid inner product."}, {"Category": "Methodological Basis", "Citation": "[29,31]", "Explanation": "The cited works scale as O(N 3 ) in contrast to the method proposed in the citing paper, which is a significant methodological basis for the research conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[56]", "Explanation": "The cited work provides the concept of metriplectic systems in position-momentum-entropy coordinates, which the citing paper uses to analyze the dynamics of a double pendulum with a damping force."}, {"Category": "Methodological Basis", "Citation": "[57]", "Explanation": "The cited work introduces the MuJoCo physics simulator, which the citing paper uses to generate more complex physical systems for testing the proposed models."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work provides the Open AI Gym environments that the citing paper modifies to create the MuJoCo environments used in the experiments."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work, GAT, serves as a methodological basis for the comparison in the citing paper by providing a standard benchmark for node classification problems."}, {"Category": "Extension or Continuation", "Citation": "[49]", "Explanation": "The cited work, neural graph differential equation architecture (GDE), is extended in the citing paper by comparing the performance of a new architecture in a similar experimental setting."}, {"Category": "Data Source", "Citation": "[58], [59], [60], [61], [62]", "Explanation": "The cited works provide the data sources for the benchmark problems used in the study of node classification in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[49]", "Explanation": "The cited work provides a theoretical framework for the analysis and construction of graph attention networks, which the citing paper adopts in their research to evaluate the role of irreversibility in data-driven physics simulators and graph analytics problems."}, {"Category": "Methodological Basis", "Citation": "[11,51,64]", "Explanation": "The cited works provide a detailed introduction to the graph exterior calculus, which the citing paper adopts in its research on oriented graphs and the associated concepts of k-cliques and combinatorial derivatives."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The cited work provides the concept of a de Rham complex and its properties, which the citing paper adopts to analyze the k-cliques on a given graph G."}, {"Category": "Methodological Basis", 
"Citation": "[11]", "Explanation": "The cited work provides a well-posedness result involving nonlinear perturbations of a Hodge-Laplace problem in mixed form, which the citing paper adopts in their research to study the stability of initial-value problems involving the Hodge-Laplace operator."}, {"Category": "Methodological Basis", "Citation": "[50,[65][66][67].", "Explanation": "The cited works provide a framework for understanding the goal of bracket formalisms in extending the Hamiltonian formalism to systems with dissipation, which the citing paper builds upon in its research on bracket-based dynamical systems."}, {"Category": "Methodological Basis", "Citation": "[65]", "Explanation": "The cited work provides a specific mechanism for modeling dissipative phenomena in Hamiltonian systems, which the citing paper adopts in their research to incorporate these phenomena into their system."}, {"Category": "Theoretical Foundation", "Citation": "[29]", "Explanation": "The cited work provides a theoretical basis for the generation of surrogate models in the context of metriplectic systems."}, {"Category": "Theoretical Foundation", "Citation": "[68]", "Explanation": "The cited work further builds upon the theoretical framework for generating surrogate models in the context of metriplectic systems."}, {"Category": "Theoretical Foundation", "Citation": "[31]", "Explanation": "The cited work extends the research on generating surrogate models in the context of metriplectic systems by exploring new dimensions or variables."}, {"Category": "Methodological Basis", "Citation": "[67]", "Explanation": "The cited work provides the necessary information and techniques for the decomposition of the order-4 tensor \u03b6, which the citing paper utilizes in their research to establish the symmetry relationships between the brackets."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work introduces a method for achieving degeneracy conditions with a computational complexity of O(N 3 ), which the citing paper adopts to improve the efficiency of their research."}, {"Category": "Methodological Basis", "Citation": "[69]", "Explanation": "The cited work introduces the concept of Sobolev gradient methods, which the citing paper adopts to pre-condition a gradient flow in the graph setting."}, {"Category": "Methodological Basis", "Citation": "[70]", "Explanation": "The cited work highlights the benefits of using Sobolev gradient methods in terms of faster convergence and better numerical behavior, which the citing paper leverages in its research on graph settings."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work describes the standard (single-headed) graph attention network (GAT), which the citing paper adopts in their research to compute the attention mechanism in a layer-wise manner."}, {"Category": "Methodological Basis", "Citation": "[49]", "Explanation": "The cited work provides the standard GAT layer that the citing paper adopts in their research, serving as a methodological basis for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[71]", "Explanation": "The cited work, PyTorch, is the framework used to implement the proposed algorithms in the citing paper, providing the necessary data and code for reproducing the experiments."}, {"Category": "Methodological Basis", "Citation": "[72]", "Explanation": "The cited work provides the governing equations for the damped double pendulum, which the citing 
paper adopts to model the system in their research."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work is the TorchDiffeq library, which the citing paper uses for the time integrator in the simulation of the double pendulum. The library is acknowledged for providing the necessary tools and methods for the integration process."}, {"Category": "Methodological Basis", "Citation": "[73]", "Explanation": "The cited work by Adam provides the optimization method used in the training of the node and edge features in the NODE and NODE+AE architectures, which is essential for the training process and the overall performance of the networks."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work introduces a forcing term in the dynamics models to handle changes in the system due to control inputs, which the citing paper adopts in their own research to model the changes in the system in the presence of an actor applying controls."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work provides the standard Open AI Gym environment settings and the modified dataset used in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[73]", "Explanation": "The cited work by Kingma and Ba introduces the Adam optimizer, which the citing paper adopts for training the models in their research on time integration and neural network architectures."}, {"Category": "Data Source", "Citation": "[61]", "Explanation": "The cited work provides the experimental methodology for comparison purposes, serving as a data source for the citing paper."}, {"Category": "Data Source", "Citation": "[58]", "Explanation": "The dataset Cora is used as a source of information for the study conducted in the citing paper on citation networks."}, {"Category": "Data Source", "Citation": "[59]", "Explanation": "The dataset Citeseer is also used as a source of information for the study on citation networks."}, {"Category": "Data Source", "Citation": "[60]", "Explanation": "The dataset Pubmed is used as a source of information for the study on citation networks."}, {"Category": "Data Source", "Citation": "[61]", "Explanation": "The dataset CoauthorCS is used as a source of information for the study on the coauthor graph."}, {"Category": "Data Source", "Citation": "[62]", "Explanation": "The dataset Amazon co-purchasing graphs, Computer and Photo is used as a source of information for the study on the co-purchasing graphs."}, {"Category": "Methodological Basis", "Citation": "[76]", "Explanation": "The cited work, Weights and Biases, is used as a tool to conduct a Bayesian search for hyperparameter configurations in the citing paper. The search is conducted to find the best validation accuracy for each bracket and dataset, which is then used in the experiments presented in Table 4 and Table 5."}, {"Category": "Methodological Basis", "Citation": "[49]", "Explanation": "The cited work, GRAND, provides the network architectures used in the citing paper for the learnable affine encoder/decoder networks and the learnable bracket-based dynamics in the latent space."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b19", "b8", "b15", "b14", "b2", "b25", "b21", "b11", "b27", "b23", "b10", "b23", "b16", "b9", "b18", "b17", "b3", "b7", "b1" ], "table_ref": [], "text": "Reinforcement Learning (RL) has achieved significant empirical success in the online setting, where the agent continuously interacts with the environment to collect data and improve its performance. However, online exploration is costly and risky in many applications, such as healthcare [5] and autonomous driving [20], in which case it is preferable to learn from a pre-collected observational dataset from doctors or human drivers using their own policies. Due to lack of on-policy interaction with the environment, offline RL faces the fundamental challenge of distribution shift [9]. A standard approach for handling distribution shift is importance sampling [16,15]. More sophisticated approaches have been proposed to alleviate the high variance of importance sampling [3,26]. Recent works [22,12,28] consider estimating the state marginal importance ratio, a more tractable problem.\nExisting work on offline RL requires the dataset to have sufficient coverage. A standard measure for coverage is the concentrability coefficient [24]: C π = max s,a d π (s,a) ρ(s,a) , which is the ratio between the stateaction occupancy measure of a policy π of interest and the (empirical) occupancy measure ρ of the behavior policy generating the offline dataset. However, this can be restrictive as the support of ρ must contain that of d π in order for C π to be finite. Earlier work such as the Fitted Q-iteration (FQI) algorithm [11] requires full coverage, i.e. C π < ∞ for all policies π. More recent works [24,17,10] requires a more relaxed, partial coverage condition C π * < ∞ with π * being optimal policy. Partial coverage is still a fairly strong requirement: the behavior policy must visit every state the optimal policy would visit, and take every action the optimal policy would take.\nIn this paper, we seek to relax the coverage condition for offline policy evaluation in settings where the Markov decision process (MDP) has a latent low-rank structure. Similarly to [19,18], we view the Q function as a matrix and exploit its low-rank structure to infer the entries that were not observed in the offline data. Unlike typical results from the low-rank matrix completion literature, our setting requires completing the matrix under non-uniform sampling, as in [4,8]; moreover, the error is evaluated under a different distribution or weighted norm, leading to the fundamental challenge of distribution shift. By leveraging techniques from weighted and non-uniform matrix completion, we develop a new offline policy evaluation algorithm, which alternates between Q iteration and matrix estimation. For both the infinite and finite sample settings, we show that the evaluation error can be bounded in terms of a novel discrepancy measure between the behavior and target policies. In contrast to the standard concentrability coefficient, our discrepancy measure may remain finite even when the behavior policy does not cover the support of the target policy. We present a concrete example where the concentrability coefficient is infinite but our method achieves a meaningful error bound. 
Building on the above evaluation algorithm, we further design an offline policy optimization algorithm with provable performance guarantees.\nThere are several challenges that arise when we borrow ideas from low-rank matrix estimation to offline RL. Classical matrix estimation results require two assumptions that are hard to satisfy in MDP. First, independence is assumed between the sampling process and the observation noise. This is clearly not true for MDPs, where observation noise is intertwined with the sampling. For example, if a state-action pair is sampled more frequently, the empirical observations (e.g., transition frequencies) are bound to be less noisy than others sampled less often. Second, sampling in matrix estimation typically requires each entry to have a non-zero probability of being observed. Sampling in MDPs is very different: only entries on the support of the sampling distribution, which is determined by the behavior policy, can be sampled; those off the support have a zero observation probability. We note that various recent works attempt to relax the aforementioned assumptions to make matrix estimation more applicable to real-world sequential decision-making problems. For example, the paper [2] allows for some dependence between the noise and sampling pattern, and the smallest sampling probability can be zero. Their algorithm, which involves finding a maximum biclique, works best with datasets with a specific restrictive structure, which is often not present in offline RL. Our goal in this paper is to derive a performance guarantee for a more general class of sampling patterns. " }, { "figure_ref": [], "heading": "Problem Setup", "publication_ref": [ "b0", "b24", "b5", "b26" ], "table_ref": [], "text": "→ ∆(A)} t∈[H] , the Q function Q π t : S × A → R is defined as Q π t (s, a) = E π [ H i=t r i (s i , a i )|s t = s, a t = a],\nand the total expected reward is\nJ π = E π [ H t=1 r t (s t , a t )|s 1 ∼ µ 1 ]. Let d π t : S × A → [0, 1]\ndenote the state-action occupancy measure at time t ∈ [H] under policy π.\nGiven a dataset generated by the behavior policy π β , our goal is to estimate J π θ for a target policy π θ . Our blanket assumption is that the MDP has the following low-rank structure, which implies that for any policy π, its Q function (viewed as an S-by-A matrix) is at most rank d.\nAssumption. For all t, r t ∈ [0, 1] S×A has rank at most d ′ = ⌊d/2⌋, and P t admits the decomposition\nP t (s ′ |s, a) = d ′ i=1 u t,i (s ′ , s)w t,i (a) or P t (s ′ |s, a) = d ′ i=1 u t,i (s)w t,i (s ′ , a), ∀s ′ , s, a.\nThe above low-rank model is different from the Low-rank MDP model considered in previous works [1,25]. In Low-rank MDPs, the transition kernel P is assumed to have a factorization of the form P (s ′ |s, a) = d ′ i=1 u i (s ′ )w i (s, a), where the factors u i (•) and w i (•, •) are unknown. Closely related is the Linear MDP model [6,27], where the feature maps w i (•, •) are known. In these models, the low-rank/linear structures are with respect to the relationship between the originating state-action pair (s, a) and the destination state s ′ ; they do not imply that Q function is low-rank when viewed as a matrix. In contrast, our model stipulates that the transition kernel can be factorized either between (i) a and (s, s ′ ) or (ii) s and (s ′ , a), both of which imply a low dimensional relationship between the current state s and the action a taken at that state, resulting in a low-rank Q function. 
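This claim is easy to verify numerically. The sketch below builds a small MDP satisfying the first factorization, P_t(s'|s,a) = sum_i u_{t,i}(s'|s) w_{t,i}(a), as a mixture of d' transition kernels with action-dependent mixture weights (the sizes, random factors, and rank-1 rewards are illustrative assumptions of the sketch), runs a backward Bellman recursion for an arbitrary policy, and checks that every Q_t has numerical rank at most d.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H, d_half = 6, 5, 4, 2                   # d' = 2, so rank(Q_t) should be at most d = 4

def random_stochastic(n, m):
    X = rng.random((n, m))
    return X / X.sum(axis=1, keepdims=True)

pi = [random_stochastic(S, A) for _ in range(H)]                  # an arbitrary policy
r = [np.outer(rng.random(S), rng.random(A)) for _ in range(H)]    # rank-1 rewards in [0, 1)

Q_next = np.zeros((S, A))                      # Q_{H+1} = 0
for t in reversed(range(H)):
    # P_t(s'|s,a) = sum_i u_{t,i}(s'|s) w_{t,i}(a): a mixture of d' kernels with
    # action-dependent mixture weights, matching the first factorization above.
    U_t = np.stack([random_stochastic(S, S) for _ in range(d_half)])   # u_{t,i}(s'|s)
    W_t = random_stochastic(A, d_half).T                               # w_{t,i}(a), sums to 1 over i
    P_t = np.einsum('ia,ist->sat', W_t, U_t)                           # P_t[s, a, s']
    V_next = (pi[t + 1] * Q_next).sum(axis=1) if t + 1 < H else np.zeros(S)
    Q_t = r[t] + P_t @ V_next                                          # Bellman backup
    print(f"t = {t + 1}: rank(Q_t) = {np.linalg.matrix_rank(Q_t)}  (d = {2 * d_half})")
    Q_next = Q_t
```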
A key consequence of the Q function being low-rank is that we can bypass modeling the environment and directly estimate the Q function by leveraging its low-rankness, resulting in a model-free method. On the other hand, most works in Low-rank MDPs consider model-based methods.\nNote that when the transition tensor is fully factorized:\nP t (s ′ |s, a) = d ′ i=1 u t,i (s ′ )v t,i (s)w t,i (a)\n, it satisfies both our assumption and the assumption of Low-rank MDPs." }, { "figure_ref": [], "heading": "Offline Dataset", "publication_ref": [], "table_ref": [], "text": "The offline dataset is denoted by\nD = {(s k t , a k t , r k t )} t∈[H],k∈[K]\n, which contains K independent trajectories generated from the behavior policy π β . We consider two settings: the infinite-sample setting with K → ∞ and the finite-sample setting with K < ∞; we describe these two settings in detail below. For simplicity, we assume the immediate rewards are observed without noise, which can be easily relaxed. The uncertainty in the system completely comes from the transition probability.\nIn the infinite-sample setting, we have partial but noiseless knowledge of the MDP on the support of the state-action occupancy measure induced by the behavior policy. In other words, for all state-action pairs that can be visited using the behavior policy, namely, (s, a) ∈ supp(d π β t ), we know the exact values of the transition probability P t (•|s, a). It is important to note that even in this idealized setting, off-policy evaluation is still non-trivial. When the behavior policy does not have full coverage, i.e., supp(d π β t ) = S × A, we do not have any information for the state-action pairs off the support and they must be estimated by leveraging the low-rank structure of the Q function. The distribution shift that arises in the infinite-sample setting can be attributed to the difference in support, which is precisely reflected in our proposed distribution discrepancy in Definition 1 and the corresponding error bound in Theorem 1.\nIn the finite-sample setting, we have a noisy and unbiased estimate P t (•|s, a) of the true transition probability P t (•|s, a) for (s, a) ∈ supp( d π β t ), where d π β t denotes the empirical data distribution of K independent samples from the true distribution d π β t . Since different estimates of the probability exhibit different levels of uncertainty, only considering the support is no longer sufficient. In particular, the finite-sample distribution shift depends not only on the difference in support, but also the difference in the specific distributions, which will be reflected in our proposed distribution discrepancy in Definition 2 and the subsequent error bound in Theorem 2." }, { "figure_ref": [], "heading": "Notation and Operator Discrepancy", "publication_ref": [ "b20", "b20", "b7" ], "table_ref": [], "text": "For a matrix M ∈ R n×m , let M * denote its nuclear norm (sum of singular values), M op its operator norm (maximum singular value), M ∞ = max i,j |M ij | its entrywise ℓ ∞ norm, and supp(M ) = {(i, j) : M ij = 0} its support. The max norm [21] of M is defined as M max = min U,V :X=UV ⊤ U 2→∞ V 2→∞ , where\n• 2→∞ denotes the maximum row ℓ 2 norm of a matrix. Both max norm and nuclear norm can be viewed as convex surrogates of the matrix rank [21]. 
For a rank-d matrix M , its max norm can be upper bounded as\nM max ≤ √ d M ∞ .(1)\nThe nuclear norm and the max norm satisfy:\n1 √ nm M * ≤ M max ≤ M * .(2)\nThe indicator matrix 1 M ∈ {0, 1} n×m is a binary matrix encoding the position of the support of M . The entrywise product between two matrices M and M ′ is denoted by M • M ′ . We propose a novel discrepancy measure defined below, and show that it can replace the role of the concentrability coefficient in our infinite-sample error bound under the low-rank assumption.\nDefinition 1 (Operator discrepancy). The operator discrepancy between two probability distributions p, q ∈ ∆(S × A) is defined as Dis(p, q) := min g -q op : g ∈ ∆(S × A), supp(g) ⊆ supp(p) .\n(3)\nNote that Dis(p, q) ≤ p -q op is always finite, and Dis(p, q) = 0 if and only if supp(q) ⊆ supp(p). To provide intuition for Dis(p, q), let the minimizer in (3) be g * . By generalized Hölder's inequality,\nE (s,a)∼g * M (s, a) -E (s,a)∼q M (s, a) = g * , M -q, M ≤ Dis(p, q) • M * .(4)\nIf the nonzero singular values of M are of the same scale, then the RHS of ( 4) is of order Dis(p, q) • rank(M ). Therefore, Dis(p, q) measures the distribution shift between p and q in terms of preserving the expectation of low-rank matrices. Compared to traditional measures such as the concentrability coefficient, the operator discrepancy takes into account the low-rank structure of the model and therefore can allow for a less restrictive coverage condition. Note that Dis(p, q) only depends on the support of p: if supp(p) = supp(p ′ ), then Dis(p, q) = Dis(p ′ , q) for all q. As mentioned before, the infinite-sample distribution shift depends only on the support of the behavior policy and the operator discrepancy reflects exactly that. Moreover, thanks to the minimization in the definition (3), Dis(p, q) can be significantly smaller than p -q op . For instance, if p is the uniform distribution on S × A, then g * = q and hence Dis(p, q) = 0 for all q.\nFor our finite-sample error bound, we consider a different notion of discrepancy. As we have a limited number of samples from the behavior policy, it is natural that the appropriate notion of discrepancy no longer depends only on the support of the distribution, but rather how closely the distributions match. In our analysis, the appropriate measure of closeness is given by the operator norm difference, which is also closely tied to Definition 1 when we restrict g to be equal to p.\nDefinition 2 (Empirical Operator Discrepancy). The empirical operator discrepancy between two probability distributions p, q ∈ ∆(S × A) is defined as\nDis(p, q) := p -q op .(5)\nWhen infinite samples are given from the behavior policy, the error bound for our proposed off-policy evaluation algorithm will be a function of Dis(d π β t , d π θ t ). The operator discrepancy only depends on the support of the behavior policy and not the exact distribution, which is expected under the infinite-sample setting. As such, the operator discrepancy highlights the inherent error induced by distribution shift. In the finite-sample setting, our error bound depends on the empirical operator discrepancy Dis(d π β t , d π θ t ), for which the exact distribution matters. This is expected since we are given observations with varying noise levels determined by the empirical data distribution.\nWe remark in passing that the inequality Dis(d\nπ β t , d π θ t ) ≤ Dis(d π β t , d π θ t )\nholds by definition. 
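A small NumPy sketch (with randomly generated, purely illustrative occupancy measures) makes these definitions concrete: it evaluates the empirical operator discrepancy of Definition 2, obtains an upper bound on the operator discrepancy of Definition 1 by plugging in one feasible g, and checks the Hölder-type inequality behind (4).

```python
import numpy as np

rng = np.random.default_rng(1)
S, A = 8, 6
op = lambda M: np.linalg.norm(M, ord=2)         # operator (spectral) norm
nuc = lambda M: np.linalg.norm(M, ord='nuc')    # nuclear norm

def random_occupancy(mask):
    g = rng.random((S, A)) * mask
    return g / g.sum()

supp_p = np.ones((S, A), dtype=bool)
supp_p[:, -2:] = False                          # behavior support misses two actions
p = random_occupancy(supp_p)                    # behavior occupancy, supported on supp_p
q = random_occupancy(np.ones((S, A), dtype=bool))   # target occupancy, full support

# Empirical operator discrepancy (Definition 2).
print("||p - q||_op =", op(p - q))

# Any g supported on supp(p) upper-bounds the minimum in (3); restricting q to supp(p)
# and renormalizing gives one such feasible point.
g = q * supp_p
g = g / g.sum()
print("Dis(p, q) <=", op(g - q))

# Holder-type intuition behind (4): expectations of a low-rank matrix under g and q
# differ by at most ||g - q||_op times the nuclear norm of the matrix.
M = rng.random((S, 2)) @ rng.random((2, A))     # a random rank-2 "Q-like" matrix
print(abs(np.sum(g * M) - np.sum(q * M)), "<=", op(g - q) * nuc(M))
```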
Also, the above discrepancy metrics share similarity with the parameter Λ in [8], which also measures the difference in two distributions in the operator norm." }, { "figure_ref": [], "heading": "Algorithm", "publication_ref": [ "b6", "b6" ], "table_ref": [], "text": "In this section, we present our algorithm for offline policy evaluation. The algorithm takes as input an offline dataset\nD = {(s k t , a k t , r k t )} t∈[H],k∈[K]\n, which contains K independent trajectories generated from the behavior policy π β . The algorithm also takes as input the target policy π θ , the initial state distribution µ 1 , weight matrices (ρ t ) t∈[H] , and a matrix estimation algorithm ME(•) which will be specified in (7) and ( 9) for the infinite-sample and finite-sample settings, respectively. The weight matrices {ρ t } are chosen by the user and primarily used as an input to ME(•). As a typical choice, in the infinite-sample setting we set ρ t to be the true state-action occupancy measure d π β t of the behavior policy; in the finite-sample setting we set ρ t to be the empirical measure d π β t . Our algorithm iterates backwards in the horizon from steps t = H to t = 1. For each step t, the algorithm has two parts. First, it applies Q-value iteration to empirically estimate the Q-values for state-action pairs in the support of d π β t . In particular, the data is used to construct unbiased empirical estimates of the transition kernel and occupancy measure of the behavior policy, denoted by P t and d π β t , respectively. Let B π θ t denote the target policy's empirical Bellman operator, which is given by\n( B π θ t f )(s, a) = r t (s, a) + s ′ ,a ′ P t (s ′ |s, a)π θ t (a ′ |s ′ )f (s ′ , a ′ )(6)\nfor all f : S × A → R. Note that we can evaluate ( B π t f )(s, a) only over (s, a) ∈ supp( d π β t ). With the given weight matrix ρ t , which is chosen such that supp(ρ t ) ⊆ supp( d π β t ), the in-support empirical estimate of the Q-value is computed via\nZ t (s, a) ← ( B π θ t Q π θ t+1 )(s, a), for (s, a) ∈ supp(ρ t ).\nSubsequently, to infer the Q values off support, the algorithm uses the low-rank matrix estimation subroutine, ME(•), which takes as input the weight matrix ρ t and the empirical estimates Z t . While ME(•) can be any off-the-shelf matrix estimation algorithm, for the purpose of the analysis, we will use the max norm minimization method due to its computational tractability and robustness under nonuniform sampling. Specifically, our ME(•) subroutines are specified in (7) and ( 9) in the next section, in which different constraints are used for the infinite-sample and finite-sample settings.\nThe pseudocode for our algorithm is given below. Our algorithm is computationally efficient and easy to implement.\nAlgorithm 1: Matrix Completion in Low-Rank Offline RL Data: dataset D, π θ , initial state distribution µ 1 , weight matrices (ρ t ) t∈[H] , and ME(•). Result: estimator J. 1 Q π θ H+1 (s, a) ← 0, ∀(s, a) ∈ S × A. 2 for t = H, H-1, . . . , 1 do 3 Q iteration: Z t (s, a) ← ( B π θ t Q π θ t+1 )(s, a)\n, for all (s, a) ∈ supp(ρ t ).\n4\nMatrix estimation:\nQ π θ t ← ME (ρ t , Z t ). 5 end 6 Output J ← s,a µ 1 (s)π θ 1 (a|s) Q π θ 1 (s, a)." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "We present evaluation error bounds under both the infinite-sample setting K → ∞ and the finite-sample setting K < ∞. Define the population Bellman operator B π θ t , which is given by equation ( 6) with P t replaced by P t . 
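Before proceeding, a minimal Python sketch of Algorithm 1 may help fix ideas. The array layout, function names, and the matrix-estimation stand-in (an iterative truncated-SVD imputation rather than the max-norm program analyzed below) are assumptions of this sketch, not part of the paper.

```python
import numpy as np

def me_impute(support, Z, rank, n_iters=200):
    """Illustrative stand-in for ME(.): fill the unobserved entries by iterating a
    truncated SVD.  (The analysis instead uses max-norm minimization; see (7) and (9).)"""
    M = np.where(support, Z, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        M = np.where(support, Z, low_rank)       # keep in-support values, impute the rest
    return M

def algorithm_1(P_hat, r, pi_theta, support, mu1, rank):
    """Sketch of Algorithm 1.  Array layout (an assumption of this sketch):
    P_hat[t][s, a, s'] estimated transitions (trusted only on support[t]),
    r[t][s, a] rewards, pi_theta[t][s, a] target policy,
    support[t] boolean mask of supp(rho_t), mu1[s] initial state distribution."""
    H = len(r)
    S, A = r[0].shape
    Q_hat = np.zeros((S, A))                     # Q_{H+1} = 0
    for t in reversed(range(H)):
        V_next = (pi_theta[t + 1] * Q_hat).sum(axis=1) if t + 1 < H else np.zeros(S)
        Z_t = r[t] + P_hat[t] @ V_next           # Q iteration, meaningful on support[t]
        Q_hat = me_impute(support[t], Z_t, rank) # matrix-estimation step
    return np.sum(mu1[:, None] * pi_theta[0] * Q_hat)   # sum_{s,a} mu1(s) pi_1(a|s) Q_1(s,a)
```

The only component the analysis is sensitive to is the ME(·) step; the subsections that follow instantiate it as max-norm minimization.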
Define the matrix\nY t ∈ R S×A via Y t (s, a) = (B π θ t Q π θ t+1 )(s, a), which is the population version of Z t computed in Algorithm 1." }, { "figure_ref": [], "heading": "Infinite-sample setting", "publication_ref": [ "b6" ], "table_ref": [], "text": "In the infinite-sample setting, we have d π β t (s, a) → d π β t (s, a) and P t (s, a) → P t (s, a) for all (s, a) ∈ supp(d π β t ). Consequently, both B π θ t and Z t converge to their population versions B π θ t and Y t , respectively. Note that the infinite samples does not imply complete knowledge of the MDP. Instead, we only know a subset of transition probabilities on the support of the state-action occupancy measure induced by the behavior policy. The matrix estimation subroutine is given by the following max norm minimization program with ρ t = d π β t and L t := H -t + 1:\nME(ρ t , Y t ) = argmin M∈R S×A M max s.t. 1 ρt • M = 1 ρt • Y t , M ∞ ≤ L t .(7)\nWe impose an entrywise equality constraint because the information on the support of ρ t is assumed to be noiseless and naturally we want the solution to exactly fit those entries. We have the following performance guarantee. The proof of Theorem 1 involves two steps. We first decompose the evaluation error as a summation of the matrix estimation accuracy from future timesteps and then bound the accuracy at each timestep by a standard application of Hölder's inequality. The complete proof is deferred to Appendix A.1.\nTheorem 1 (Infinite samples). In the infinite-sample setting, under Algorithm 1 with ρ t = d π β t and ME(•) being (7), the output estimator J satisfies\nJ -J π θ ≤ 2H √ dSA H t=1 Dis(d π β t , d π θ t ).(8)\nIn the above bound, note that the operator discrepancy only depends on the support of d π β t , not the specific distribution. This makes sense since the information of the data entirely depends on the support of \nd π β t ," }, { "figure_ref": [], "heading": "Finite-sample setting", "publication_ref": [ "b9", "b20", "b8", "b9" ], "table_ref": [], "text": "Next consider the setting with a finite dataset\nD = {(s k t , a k t , r k t )} t∈[H],k∈[K] . Let n t (s, a) := k∈[K] 1 (s k t ,a k t )=(s,a)\nbe the visitation count of each state-action pair. Accordingly, the empirical occupancy of π β is given by\nd π β t (s, a) = n t (s, a)/K. Let ρ t = d π β t\nin Algorithm 1. The matrix estimation subroutine ME(•) is given by the following max norm minimization program:\nME(ρ t , Z t ) = argmin M∈R S×A M max s.t. | ρ t , M -Z t | ≤ | ρ t , Z t -Y t | , M ∞ ≤ L t .(9)\nWe state the following guarantee. The proof of Theorem 2 proceeds as follows. We build upon the proof of Theorem 1 to get the first discrepancy term on the RHS of (10). The second error term comes from the finite-sample error in the system and is obtained by applying a generalization error guarantee from [21].\nThe complete proof is deferred to Appendix A.2.\nTheorem 2 (Finite samples). Consider the finite-sample setting under Algorithm 1 with ρ t = d π β t and ME(•) being (9). We assume 2 < K < SA. There exists an absolute constant C > 0 such that with probability at least 1 -δ, we have\nJ -J π θ ≤2H √ dSA t∈[H] Dis(d π β t , d π θ t ) + CH 2 d(S + A) log(HS/δ) K .(10)\nOn the RHS of (10), the first term quantifies the population-level distribution shift and the second term takes into account the statistical error. 
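For completeness, the max-norm program in (7) can be written as a small semidefinite program using the standard SDP characterization of the max norm from [21]. The sketch below assumes cvxpy with an SDP-capable solver is installed, and is practical only for modest S and A; it is an illustration of the program, not an optimized implementation. The finite-sample variant (9) differs only in replacing the entrywise equality by an aggregate data-fit constraint.

```python
import cvxpy as cp
import numpy as np

def me_max_norm(support, Y, L):
    """Sketch of program (7): minimize ||M||_max subject to matching Y on the support
    (a 0/1 array) and |M| <= L entrywise.  The max norm is encoded through its standard
    SDP characterization: ||M||_max <= R iff M is the off-diagonal block of some PSD
    matrix whose diagonal entries are all at most R."""
    S, A = Y.shape
    X = cp.Variable((S + A, S + A), symmetric=True)
    R = cp.Variable()
    M = X[:S, S:]                                  # the block playing the role of Q_t
    constraints = [
        X >> 0,                                    # [[W1, M], [M.T, W2]] is PSD
        cp.diag(X) <= R,
        cp.multiply(support, M) == cp.multiply(support, Y),   # 1_rho o M = 1_rho o Y
        cp.abs(M) <= L,                                        # ||M||_inf <= L_t
    ]
    cp.Problem(cp.Minimize(R), constraints).solve()
    return M.value
```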
The finite-sample distribution shift is reflected in the term Dis(d π β t , d π θ t ), which is always finite, given any behavior policy π β and target policy π θ , in contrast to the concentrability coefficient C π = max s,a d π (s,a) ρ(s,a) . Suppose that there exists some (s, a) such that ρ t (s, a) = 0 and d π θ t (s, a) > 0. Then, C π θ = ∞ whereas Dis(d π β t , d π θ t ) is finite and meaningful. We will subsequently present examples to showcase the effectiveness of our bound.\nAs a sanity check, let us consider the setting of evaluating the behavior policy, i.e. π θ = π β . Using our results, we obtain an error bound of\nJ -J π β H 2 d(S + A) log(HS/δ) K ,\nwith probability at least 1 -δ. This implies that for evaluating the behavior policy, our method requires a sample complexity of order H 4 d(S + A), which matches the standard linear dependence on the dimensions in low-rank matrix estimation." }, { "figure_ref": [], "heading": "Examples", "publication_ref": [], "table_ref": [], "text": "We present some concrete examples showcasing the effectiveness of our algorithm." }, { "figure_ref": [], "heading": "Policies with Disjoint Support under Uniform Transitions", "publication_ref": [ "b10" ], "table_ref": [], "text": "Assume S = A = n. Consider the simple setting where the transition is uniform over all state-action pairs. For each s and t, assume π θ t (•|s) selects an action uniformly at random amongst a subset of actions A θ t ⊆ A, where |A θ t | = m, and the subset A θ t is itself sampled uniformly at random amongst all subsets of size m. We assume π β t is generated from the same model independently, i.e. the behavior policy also randomizes uniformly amongst a subset of actions A β t , for a uniformly selected subset of actions. Note that the support of d π β t and d π θ t will be mostly disjoint since A θ t ∩A β t can be very small, making the concentrability coefficient infinite with high probability. Using Theorem 1, we derive the following infinite-sample error bound, the proof of which is deferred to Appendix A.3.\nCorollary 1. Under the aforementioned setting, there exists an absolute constant C > 0 such that when n ≥ C, with probability at least 1 -1 n , we have\nJ -J π θ ≤ CH 2 d log(nH) m .(11)\nIf m satisfies m H 2 d log(nH)\nǫ 2\nfor some ǫ > 0, then we have | J -J π θ | ≤ ǫH. It implies that even when m is logarithmic in n, we can still achieve a consistent error bound. For example, suppose m = n/2. In this setting, the behavior and target policies both randomize over half of the actions, but their actions may have little overlap. Our bound gives | J -J π θ | H 2 d log(nH) n , which can be vanishingly small when n is large. The bound (11) identifies the inherent difficulty of distribution shift, manifested as a quantity proportional to H 2 d m , ignoring the logarithmic factor. When d and H are fixed, we have a larger estimation error when m is small, which is expected since small m indicates little to no overlap between d π θ t and d π β t . For the finite-sample case, we apply Theorem 2 and get the following corollary, the proof of which is deferred to Appendix A.5.\nCorollary 2. Under the same setting as in Corollary 1, there exists an absolute constant C > 0 such that when n ≥ C, with probability at least 1 -1 n , we have\nJ -J π θ ≤ CH 2 d log(nH) m + dn log(nH) K .(12)\nInterestingly, the first term on the RHS of (12) will dominate if K nm, i.e. when there is at least a constant number of samples per state-action pair in the support of the behavior policy. 
This indicates that when K is sufficiently large or m is small enough, the population-level distribution shift will become the main source of estimation error." }, { "figure_ref": [], "heading": "Contextual Bandits", "publication_ref": [ "b6", "b8" ], "table_ref": [], "text": "In this section, we consider specializing our results to the problem of contextual bandits. Specifically, we consider H = 1. In this case, the states are the contexts and the agent acts based on the given contexts. Theorem 1 yields the following corollary. For notation simplicity, let\nd π θ ≡ d π θ 1 and d π β ≡ d π β 1 .\nCorollary 3 (Infinite samples with H = 1). In the infinite-sample setting, under Algorithm 1 with ρ = d π β and ME(•) being (7), the output estimator J satisfies\nJ -J π θ ≤ 2 √ dSA Dis(d π β , d π θ ).(13)\nThe following result deals with finite-sample setting, which is a direct corollary of Theorem 2.\nCorollary 4 (Finite samples with H = 1). Consider the finite-sample setting under Algorithm 1 with ρ = d π β and ME(•) being (9). There exists an absolute constant C > 0 such that with probability at least 1 -δ, we have\nJ -J π θ ≤2 √ dSA Dis(d π β , d π θ ) + C d(S + A) log(HS/δ) K . (14\n)\nWe now analyze the operator discrepancy between d π β and d π θ . Recall that the environment has a fixed initial state distribution, µ ∈ ∆(S). For all policy π, the state-action occupancy measure d π can be written as d π (s, a) = µ(s)π(a|s). If we view µ ∈ R S as a vector and d π , π ∈ R S×A as matrices, we can write d π = (µ1 ⊤ ) • π, where 1 ∈ R S is an all-one vector. Under this notation, we have\nd π θ -d π β op = (µ1 ⊤ ) • (π θ -π β ) op ≤ µ ∞ π θ -π β op ,\nsince µ1 ⊤ is a rank-1 matrix. Hence, inequality ( 14) can be written as\nJ -J π θ ≤2 √ dSA µ ∞ π θ -π β op + C d(S + A) log(HS/δ) K .\nFor the infinite-sample case, we consider an arbitrary policy π that satisfies supp(π) ⊆ supp(π β ). Consequently,\nwe have supp(d π ) ⊆ supp(d π β ) and Dis(d π β , d π θ ) ≤ d π -d π θ\nop as a result. With a slight abuse of notation, we denote the operator discrepancy between two policies as\nDis(π β , π θ ) = min π -π θ op : policy π, supp(π) ⊆ supp(π β ) .\nThus, the infinite-sample bound ( 13) can be further upper bounded by\nJ -J π θ ≤ 2 √ dSA µ ∞ Dis(π β , π θ ).\nBecause of the simplicity of contextual bandits, we are able to transform the discrepancy between distributions to the discrepancy between policies which is easier to directly evaluate." }, { "figure_ref": [], "heading": "Constrained Off-policy Improvement", "publication_ref": [ "b1", "b13", "b12", "b6", "b17", "b14", "b15" ], "table_ref": [], "text": "In this section, we build on our policy evaluation methods to design an offline policy optimization algorithm. Given a dataset D generated by a behavior policy π β , we use Algorithm 1 to obtain an value estimate J π for each policy π. We then optimize over a subclass of policies for which we can guarantee that the above estimate is reliable. Ideally, we could optimize over the following set of candidate policies Π B , for which the empirical operator discrepancy, as defined in (2), between the candidate and behavior policies is bounded above by some parameter B t ≥ 0 for all t ∈ [H],\nΠ B := π : Dis(d π β t , d π t ) ≤ B t , ∀t ∈ [H] .\nImportantly, when B t > 0, the set Π B contains policies with infinite concentrability coefficients, as demonstrated in the example from the last section. 
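The rank-one identity d^pi = (mu 1^T) o pi and the resulting bound ||d^{pi_theta} - d^{pi_beta}||_op <= ||mu||_inf ||pi_theta - pi_beta||_op are easy to check numerically; the sketch below uses randomly generated, purely illustrative context distributions and policies.

```python
import numpy as np

rng = np.random.default_rng(2)
S, A = 10, 7
op = lambda M: np.linalg.norm(M, ord=2)

mu = rng.random(S)
mu /= mu.sum()                                   # context (initial state) distribution

def random_policy():
    Pi = rng.random((S, A))
    return Pi / Pi.sum(axis=1, keepdims=True)

pi_beta, pi_theta = random_policy(), random_policy()
d_beta = mu[:, None] * pi_beta                   # d^pi = (mu 1^T) o pi
d_theta = mu[:, None] * pi_theta

lhs = op(d_theta - d_beta)
rhs = np.max(mu) * op(pi_theta - pi_beta)        # ||mu||_inf * ||pi_theta - pi_beta||_op
print(f"||d_theta - d_beta||_op = {lhs:.4f} <= {rhs:.4f}")
```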
With a bigger B t , the set Π B includes more policies, at a price of weaker evaluation guarantees for these policies. Policy constraint/penalty method is prevalent in offline learning to address distribution shift. Researchers have proposed a variety of measures to enforce the constraint. For instance, KL-divergence is a popular choice to make sure the learned policy is close to the behavior policy, as seen in [14,13]. The maximum mean discrepancy (MMD) also proves to be useful in practice [7]. However, both KL-divergence and MMD are sensitive to the support shift, whereas our operator discrepancy imposes a milder condition on the difference between the support.\nDetermining whether a policy π is in Π B is non-trivial as computing the empirical operator discrepancy requires knowledge of the transition dynamics. In practice, we can instead optimize over a smaller set of candidate policies Π B ⊆ Π B , for which determining Π B is feasible; we will subsequently illustrate a construction for Π B via an ε-net of the set specified in (18). Among all candidate policies in Π B , we maximize the estimated values obtained by Algorithm 1 to get\nπ = argmax π∈ ΠB J π . (15\n)\nWe present the following guarantee for π, the proof of which can be found in Appendix A.9.\nTheorem 3. Suppose π β ∈ Π B . We obtain π by solving (15). There exists an absolute constant C > 0 such that with probability at least 1 -δ, we have\nJ π ≥ J π -4H √ dSA t∈[H] B t -C d(S + A) log(| Π B |HS/δ) K , ∀π ∈ Π B .(16)\nThe above bound shows that we are able to find a policy π with a nearly optimal value, compared to other policies in Π B . How close π is to the optimal policy in Π B depends on how accurately we can evaluate all policies in Π B . According to Theorem 2, the estimations are accurate if t∈[H] B t is small (policies are close to behavior) and K is large (dataset is large), which is reflected in the bound (16). Similarly as before, the two error terms in ( 16) quantify the fundamental difficulty of distribution shift and finite-sample noise, respectively.\nOne way to find such a subset Π B is by constraining the policy directly. We do this with the help of the following lemma, which implies two policies that are close in terms of operator norm difference produce state-action occupancy measures close in empirical operator discrepancy. The proof of Lemma 1 can be found in Appendix A.8." }, { "figure_ref": [], "heading": "Lemma 1. For an arbitrary pair of policies", "publication_ref": [], "table_ref": [], "text": "π θ = {π θ t } t∈[H] , π β = {π β t } t∈[H] , we have Dis(d π θ t , d π β t ) ≤ t i=1 √ dS 2 A t-i π θ i -π β i op , ∀t ∈ [H].(17)\nFollowing Lemma 1, we can define Π B as a finite subset (e.g. an ε net) of the following set:\nπ : π θ t -π β t op ≤ B t √ dS 2 A t-H , ∀t ∈ [H] .(18)\nFor all π ∈ Π B , we have Dis(d\nπ θ t , d π β t ) ≤ B t for all t ∈ [H], indicating π ∈ Π B . The exponential factor √ dS 2 A\nt-H restricts the candidate policies to be exceedingly close to the behavior policy, especially at earlier stages. Intuitively, this makes sense since if the policies at early stages are too different, the resulting deviation will be amplified in later steps of the horizon, resulting in the exponentially growing multiplicative factor in the discrepancy bound." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose a novel algorithm for efficient offline evaluation when low-rank structure is present in the MDP. 
Our algorithm is a combination of Q iteration and low-rank matrix estimation, which is easy to implement. We show that the proposed operator discrepancy measure better captures the difficulty of policy evaluation in the offline setting, compared to the traditional concentrability coefficient. We also combine the evaluation algorithm with policy optimization and provide performance guarantee. We believe that this work is a first step in exploiting the benefit of low-rank structure in the Q function in offline RL. Future directions of interest include extending our results to the infinite-horizon setting with stationary policies, and understanding lower bounds for estimation that would provide insight on whether or not our estimation error bounds are optimal." }, { "figure_ref": [], "heading": "A Proofs", "publication_ref": [], "table_ref": [], "text": "Let µ π t : S → [0, 1] denote the state occupancy measure at time t ∈ [H] under policy π." }, { "figure_ref": [], "heading": "A.1 Proof of Theorem 1", "publication_ref": [], "table_ref": [], "text": "We present two lemmas before analyzing the evaluation error. The first one analyzes the error incurred at the matrix estimation step. The proof is deferred to Appendix A.6.\nLemma 2. For arbitrary real matrices A, B, P, W ∈ R m×n , we have\ni,j W ij (A ij -B ij ) ≤ i,j P ij (A ij -B ij ) + ( A * + B * ) P -W op .\nRemark. Under the matrix estimation framework, we can interpret matrix P as the sampling pattern and W as the weights for evaluation.\nNext, we introduce a lemma decomposing the evaluation error as a summation of the matrix estimation accuracy from future timesteps. The proof can be found in Appendix A.7." }, { "figure_ref": [], "heading": "Lemma 3. For the Q function and its estimator", "publication_ref": [ "b6" ], "table_ref": [], "text": "Q π θ t , Q π θ t ∈ R S×A , we have d π θ t , Q π θ t -Q π θ t = d π θ t , Q π θ t -Y t + d π θ t+1 , Q π θ t+1 -Q π θ t+1 , t ∈ [H],\nand consequently\nd π θ 1 , Q π θ 1 -Q π θ 1 = H t=1 d π θ t , Q π θ t -Y t .\nBased on Lemma 2 and 3, we derive the following error bound. For each t ∈ [H] and an arbitrary g t ∈ ∆(S × A) with supp(g t ) ⊆ supp(d π β t ), we have s,a\nd π θ t (s, a) Q π θ t (s, a) -Y t (s, a) ≤ s,a g t (s, a) Q π θ t (s, a) -Y t (s, a) + Y t * + Q π θ t * d π θ t -g t op = Y t * + Q π θ t * d π θ t -g t op ≤ √ SA Y t max + Q π θ t max d π θ t -g t op ≤2 √ SA Y t max d π θ t -g t op . ≤2H √ dSA d π θ t -g t op .\nwhere the first step follows from Lemma 2, the second equality follows from the constraints in (7) and the remaining steps follow from properties of the max norm (1) and (2). Then, combining with the decomposition in Lemma 3, we obtain the desired bound by minimizing over all such g t ." }, { "figure_ref": [], "heading": "A.2 Proof of Theorem 2", "publication_ref": [ "b20", "b18", "b20", "b19" ], "table_ref": [], "text": "Fix t ∈ [H]. The solution Q π θ t satisfies Q π θ t max ≤ Y t max ≤ √ dH(19)\nρ t , Q π θ t -Z t ≤ | ρ t , Z t -Y t |(20)\nWe apply Theorem 6 in [21] with loss function g(x; y) = x -y, target matrix Y = Y t , distribution P = d π β t . The discrepancy weighted by P corresponds to d π β t , Q π θ t -Y t and the empirical discrepancy is ρ t , Q π θ t -Y t . With probability at least 1 -δ 2H , we obtain\nd π β t , Q π θ t -Y t ≤ ρ t , Q π θ t -Y t + 17 dH 2 (S + A) + log(2H/δ) K ,(21)\nwhere we use (19). 
Hence, the estimation error can be upper bounded by\nJ -J π θ ≤ t∈[H] d π θ t , Q π θ t -Yt (i) ≤ t∈[H] d π β t , Q π θ t -Yt + 2 t∈[H] ( Yt * + Q π θ t * ) • d π θ t -d π β t op (ii) ≤ t∈[H] ρt, Q π θ t -Yt + 17H dH 2 (S + A) + log(2H/δ) K + 2H √ dSA t∈[H] d π θ t -d π β t op (iii) ≤ 2 t∈[H] ρt, Zt -Yt + 17H dH 2 (S + A) + log(2H/δ) K + 2H √ dSA t∈[H] d π θ t -d π β t op\n, where step (i) invokes Lemma 2; step (ii) follows from (21), and properties of the max norm ( 1) and (2); step (iii) uses the triangle inequality and (20). Applying Lemma 4 and the union bound, we conclude that\nJ -J π θ ≤ 2CH 2 S log(HS/δ) K + 17H dH 2 (S + A) + log(2H/δ) K + 2H √ dSA t∈[H] d π θ t -d π β t op ≤ 2H √ dSA t∈[H] d π θ t -d π β t op + C ′ H 2 d(S + A) log(HS/δ) K ≤ 2H √ dSA t∈[H] d π θ t -d π β t op + C ′ H 2 d(S + A) log(HS/δ) K ,\nwith probability at least 1 -δ.\nLemma 4. There exists an absolute constant C > 0 such that with probability at least 1 -δ, we have\n| ρ t , Z t -Y t | ≤ CH S log(HS/δ) K , ∀t ∈ [H].(22)\nProof.\nFix t ∈ [H]. Recall that Z t (s, a) -Y t (s, a) = s ′ ,a ′ ( P t (s ′ |s, a) -P t (s ′ |s, a))π θ t+1 (a ′ |s ′ ) Q π θ t+1 (s ′ , a ′ ).\nTo further analyze the above expression, we write down an explicit formula for the empirical transition probability: P t (s ′ |s, a) is obtained by\nP t (s ′ |s, a) = 1 n t (s, a) K k=1 1 {(s k t ,a k t ,s k t+1 )=(s,a,s ′ )} .\nUnder the above notations, we get\nZt(s, a) -Yt(s, a) = s ′ 1 nt(s, a) K k=1 1 {(s k t ,a k t ,s k t+1 )=(s,a,s ′ )} -Pt(s ′ |s, a) a ′ π θ t+1 (a ′ |s ′ ) Q π θ t+1 (s ′ , a ′ ).\nFor simplicity, define\nf t (s ′ ) = a ′ π θ t+1 (a ′ |s ′ ) Q π θ t+1 (s ′ , a ′ ). It is guaranteed that |f t (s ′ )| ≤ H.\nInvoke the identity ρ t (s, a) = nt(s,a) K and we obtain s,a ρ t (s, a) (Z t (s, a) -Y t (s, a))\n= 1 K s ′ f t (s ′ ) K k=1 s,a 1 {(s k t ,a k t )=(s,a)} 1 {s k t+1 =s ′ } -1 {(s k t ,a k t )=(s,a)} P t (s ′ |s, a) α .\nFor α, we first use the fact that f t (s ′ ) is bounded to derive that\n|α| ≤ H K s ′ K k=1 s,a 1 {(s k t ,a k t )=(s,a)} 1 {s k t+1 =s ′ } -1 {(s k t ,a k t )=(s,a)} P t (s ′ |s, a) . Let X s ′ k = s,a 1 {(s k t ,a k t )=(s,a)} 1 {s k t+1 =s ′ } -1 {(s k t ,a k t )=(s,a)} P t (s ′ |s, a) for all s ′ and k. If we fix s ′ , it is easy to see that {X s ′ k , k ∈ [K]} are independent. Even if X s ′\nk is defined as a sum, exactly one of the indicators can be non-zero. Hence, we get X s ′ k ≤ 2. However, this upper bound is not enough for a tight concentration. To resolve this, we calculate the variance of X s ′ k . Fix s ′ and k = 1. We rewrite X s ′ 1 as\nX s ′ 1 = s,a 1 {(s 1 t ,a 1 t )=(s,a)} 1 {s k t+1 =s ′ } -P t (s ′ |s, a) . Define F s,a = 1 {(s 1 t ,a1\nt )=(s,a)} , which follows a Bernoulli distribution with success probability d π β t (s, a). Define G s ′ = 1 {s k t+1 =s ′ } , which follows a Bernoulli distribution with success probability µ π β t+1 (s ′ ). We use the shorthand P s ′ |s,a = P t (s ′ |s, a). Under these notations, we get X s ′ 1 = s,a F s,a G s ′ -P s ′ |s,a . 
Next, we calculate the variance of X s ′ 1 as\nE X s ′ 1 2 = E   s,a F s,a G s ′ -P s ′ |s,a 2   (i) = s,a E F 2 s,a G s ′ -P s ′ |s,a 2 = s,a d π β t (s, a) µ π β t+1 (s ′ ) 1 -P s ′ |s,a 2 + 1 -µ π β t+1 (s ′ ) P 2 s ′ |s,a = s,a d π β t (s, a) µ π β t+1 (s ′ ) -2µ π β t+1 (s ′ )P s ′ |s,a + P 2 s ′ |s,a(ii)\n≤ µ π β t+1 (s ′ ) -2 µ π β t+1 (s ′ ) 2 + µ π β t+1 (s ′ ) = 2µ π β t+1 (s ′ ) 1 -µ π β t+1 (s ′ ) ≤ 2µ π β t+1 (s ′ ),\nwhere step (i) ignores all cross terms since F s,a F s,ā = 0 for (s, a) = (s, ā); step (ii) follows from P 2 s ′ |s,a ≤ P s ′ |s,a . Therefore, we can control the sampling error by applying Bernstein's inequality over the sums " }, { "figure_ref": [], "heading": "A.3 Proof of Corollary 1", "publication_ref": [], "table_ref": [], "text": "For simplicity, define b := m n . Since the transition is uniform, the state occupancy µ π t (•) is uniform under any policy π, i.e. µ π t (s) = 1 n . By the way the policies are generated,\nd π θ t (•, •) = µ π θ t (•)π θ t (•|•) ∈ R n×n\nis supported on mn entries whose locations are realization of random sampling, and on these entries d π θ t (s, a) = 1 mn . Specifically, all d π θ t (s, a) are i.i.d. Bernoulli random variables that take the value 1 with probability b. The behavior policy π β is generated independently via the same process. Let M := d π θ t -d π β t and we have\nM ij =      1 mn with probability b(1 -b) -1 mn with probability b(1 -b) 0 with probability 1 -2b(1 -b)(23)\nindependently across all entries (i, j). By matrix Bernstein inequality, we obtain the following result, the proof of which is deferred to Appendix A.4.\nLemma 5. There exists an absolute constant C > 0 such that when n ≥ C, with probability at least 1 -1 n , we have\nM op ≤ C log n n 2 m .(24)\nCombining Theorem 1, ( 24) and the union bound over all t ∈ [H], with probability at least 1 -1 n , we get\nJ -J π θ ≤ 2nH √ d H t=1 d π β t -d π θ t op H • nH √ d • log(nH) n 2 m = H 2 d log(nH) m ,\nwhere the first upper bound is obtained by plugging d π β t into the objective of (3)." }, { "figure_ref": [], "heading": "A.4 Proof of Lemma 5", "publication_ref": [ "b22" ], "table_ref": [], "text": "We apply matrix Berstein's inequality (Theorem 6.1.1 in [23]). Let\nS k = M ij e i e ⊤ j , for all k ∈ [n 2 ]. Since |M ij | ≤ 1 mn , we derive that S k op ≤ 1 mn . We calculate that k E[S k S ⊤ k ] = i,j E[M 2 ij ]e i e ⊤ i = 2b(1 -b) 1 m 2 n I n .\nAs a result, we have\nk E[S k S ⊤ k ] op = 2b(1 -b) 1 m 2 n = 2(n -m) n 3 m ≤ 2 n 2 m .\nBy symmetry, we also have Applying the union bound, we get\nk E[S ⊤ k S k ] op ≤ 2 n 2 m .\nJ -J π θ H 2 d log(nH) m + H 2 dn log(nH) K ,\nwith probability at least 1 -1 n ." }, { "figure_ref": [], "heading": "A.6 Proof of Lemma 2", "publication_ref": [], "table_ref": [], "text": "The proof uses the following result, which holds for any pairs of dual norms. In this paper, we only consider using • * and • op .\nLemma 6. For a real matrix M ∈ R m×n and two weight matrices P, W ∈ R m×n , we have that i,j\nP ij M ij - i,j W ij M ij ≤ M * P -W op . Proof. We can rewrite i,j P ij M ij -i,j W ij M ij as M, P -W ,\nwhere •, • denotes the trace inner product between matrices. Applying Hölder's inequality, we obtain\n| M, P -W | ≤ M * P -W op .\nSubstituing M ij = A ij -B ij in Lemma 6, we immediately obtain the desired results in Lemma 2." 
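Returning to the disjoint-support example of Appendix A.3, the scaling in Lemma 5 can also be checked by simulation. The sketch below (with hypothetical n and m) samples M entrywise according to (23) and compares its operator norm with the predicted sqrt(log n / (n^2 m)) scale, up to the absolute constant.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 400, 40
b = m / n

# Sample M = d^{pi_theta}_t - d^{pi_beta}_t entrywise according to (23).
u = rng.random((n, n))
M = np.zeros((n, n))
M[u < b * (1 - b)] = 1.0 / (m * n)
M[(u >= b * (1 - b)) & (u < 2 * b * (1 - b))] = -1.0 / (m * n)

print("||M||_op                          :", np.linalg.norm(M, ord=2))
print("predicted scale sqrt(log n/(n^2 m)):", np.sqrt(np.log(n) / (n ** 2 * m)))
```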
}, { "figure_ref": [], "heading": "A.7 Proof of Lemma 3", "publication_ref": [ "b24" ], "table_ref": [], "text": "Recall that\nQ π θ t (s, a) = B π θ t Q π θ t+1 (s, a),(25)\nY t (s, a) = B π θ t Q π θ t+1 (s, a).(26)\nFor each (s, a) ∈ S × A, we have \nQ π θ t (s, a) -Q π θ t (s, a) = Q π θ t (\n) Q π θ t+1 (s ′ , a ′ ) -Q π θ t+1 (s ′ , a ′ ) ,\nwhere the last step follows from equations ( 26) and (25). Multiplying both sides by d π θ t (s, a) and summing over (s, a), we obtain\nd π θ t , Q π θ t -Q π θ t = d π θ t , Q π θ t -Y t +\ns ′ ,a ′ s,a d π θ t (s, a)P t (s ′ |s, a)π θ t+1 (a ′ |s ′ )\n=d π θ t+1 (s ′ ,a ′ ) Q π θ t+1 (s ′ , a ′ ) -Q π θ t+1 (s ′ , a ′ ) = d π θ t , Q π θ t -Y t + d π θ t+1 , Q π θ t+1 -Q π θ t+1 ,\nthereby proving the first equation in the lemma. Continuing the above recursion yields the second equation." }, { "figure_ref": [], "heading": "A.8 Proof of Lemma 1", "publication_ref": [ "b26" ], "table_ref": [], "text": "The state marginal distribution at time t for an arbitrary policy π satisfies µ π t (s) = s ′ ,a ′ d π t-1 (s ′ , a ′ )P t-1 (s|s ′ , a ′ ). Accordingly, the state-action occupancy measure satisfies d π t (s, a) = s µ π t (s)π t (a|s). Equivalently, we write d π t = (µ π t 1 ⊤ ) • π, when we view d π t as a S-by-A matrix. Fix t ≥ 2, we have\nd π θ t -d π β t op = (µ π θ t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π β t op ≤ (µ π θ t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π θ t op + (µ π β t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π β t op ≤ √ dS 2 A d π β t-1 -d π θ t-1 op + π θ t -π β t op ,(27)\nwhere the first inequality uses the triangle inequality and the second one uses Lemma 7 and 8. We take the last display inequality (27) as it is and obtain:\nd π θ t -d π β t op ≤ t i=1 √ dS 2 A t-i π θ i -π β i op\n, for all t ∈ [H]. Finally, we prove the two lemmas used before.\nLemma 7. For all t ≥ 2, we have\n(µ π θ t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π θ t op ≤ √ dS 2 A d π β t-1 -d π θ t-1 op\n.\nProof. The matrix µ π θ t 1 ⊤ -µ π β t 1 ⊤ is at most rank 2. Hence, we have\n(µ π θ t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π θ t op = (µ π θ t 1 ⊤ -µ π β t 1 ⊤ ) • π θ t op ≤ √ 2 µ π θ t 1 ⊤ -µ π β t 1 ⊤ ∞ π θ t op ≤ √ 2S max s µ π θ t (s) -µ π β t (s) ,\nwhere in the last line we use the fact that π θ t op ≤ √ S since π θ t is a right stochastic matrix (i.e., the sum of each row is 1). For each state s, we have Combining pieces, we obtain the desired result.\nµ π θ t (\nLemma 8. For all t, we have\n(µ π β t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π β t op ≤ π θ t -π β t op\n.\nProof. We use the fact that the matrix µ π β t 1 ⊤ is rank 1 to deduce that\n(µ π β t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π β t op = (µ π β t 1 ⊤ ) • (π θ t -π β t ) op ≤ µ π β t ∞ π θ t -π β t op ≤ π θ t -π β t op .\nA.9 Proof of Theorem 3\nWe first apply Theorem 2. For π and an arbitrary policy π ∈ Π B , with probability at least 1 -δ, we have Then, we deduce that \nJ -J π ≤ 2H √ dSA\nJ π ≥ J π -2H √ dSA" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement: C. Yu is partially supported by NSF grants CCF-1948256 and CNS-1955997, AFOSR grant FA9550-23-1-0301, and by an Intel Rising Stars award. Y. Chen is partially supported by NSF grants CCF-1704828 and CCF-2047910." } ]
2023-05-24
[ { "authors": "Alekh Agarwal; Sham Kakade; Akshay Krishnamurthy; Wen Sun", "journal": "Advances in neural information processing systems", "ref_id": "b0", "title": "Flambe: Structural complexity and representation learning of low rank mdps", "year": "2020" }, { "authors": "Anish Agarwal; Munther Dahleh; Devavrat Shah; Dennis Shen", "journal": "", "ref_id": "b1", "title": "Causal matrix completion", "year": "2021" }, { "authors": "Mehrdad Farajtabar; Yinlam Chow; Mohammad Ghavamzadeh", "journal": "", "ref_id": "b2", "title": "More robust doubly robust offpolicy evaluation", "year": "2018" }, { "authors": "Simon Foucart; Deanna Needell; Reese Pathak; Yaniv Plan; Mary Wootters", "journal": "IEEE Transactions on Information Theory", "ref_id": "b3", "title": "Weighted matrix completion from non-random, non-uniform sampling patterns", "year": "2021-02" }, { "authors": "Omer Gottesman; Fredrik Johansson; Matthieu Komorowski; Aldo Faisal; David Sontag; Finale Doshi-Velez; Leo Anthony; Celi ", "journal": "Nature Medicine", "ref_id": "b4", "title": "Guidelines for reinforcement learning in healthcare", "year": "2019" }, { "authors": "Nan Jiang; Akshay Krishnamurthy; Alekh Agarwal; John Langford; Robert E Schapire", "journal": "PMLR", "ref_id": "b5", "title": "Contextual decision processes with low bellman rank are PAC-learnable", "year": "2017" }, { "authors": "Aviral Kumar; Justin Fu; Matthew Soh; George Tucker; Sergey Levine", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b6", "title": "Stabilizing off-policy qlearning via bootstrapping error reduction", "year": "2019" }, { "authors": "Troy Lee; Adi Shraibman", "journal": "", "ref_id": "b7", "title": "Matrix completion from any given set of observations", "year": "2013" }, { "authors": "Sergey Levine; Aviral Kumar; George Tucker; Justin Fu", "journal": "", "ref_id": "b8", "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "year": "2020" }, { "authors": "Yao Liu; Adith Swaminathan; Alekh Agarwal; Emma Brunskill", "journal": "", "ref_id": "b9", "title": "Provably good batch off-policy reinforcement learning without great exploration", "year": "2020" }, { "authors": "Rémi Munos; Csaba Szepesvári", "journal": "Journal of Machine Learning Research", "ref_id": "b10", "title": "Finite-time bounds for fitted value iteration", "year": "2008-06" }, { "authors": "Ofir Nachum; Yinlam Chow; Bo Dai; Lihong Li", "journal": "", "ref_id": "b11", "title": "Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections", "year": "2019" }, { "authors": "Ashvin Nair; Abhishek Gupta; Murtaza Dalal; Sergey Levine", "journal": "", "ref_id": "b12", "title": "Awac: Accelerating online reinforcement learning with offline datasets", "year": "2020" }, { "authors": "Xue Bin Peng; Aviral Kumar; Grace Zhang; Sergey Levine", "journal": "", "ref_id": "b13", "title": "Advantage-weighted regression: Simple and scalable off-policy reinforcement learning", "year": "2019" }, { "authors": "Doina Precup; Richard S Sutton; Sanjoy Dasgupta", "journal": "", "ref_id": "b14", "title": "Off-policy temporal difference learning with function approximation", "year": "2001" }, { "authors": "Doina Precup; Richard S Sutton; Satinder P Singh", "journal": "", "ref_id": "b15", "title": "Eligibility traces for off-policy policy evaluation", "year": "2000" }, { "authors": "Paria Rashidinejad; Banghua Zhu; Cong Ma; Jiantao Jiao; Stuart Russell", "journal": "", "ref_id": "b16", "title": "Bridging offline 
reinforcement learning and imitation learning: A tale of pessimism", "year": "2021" }, { "authors": "Tyler Sam; Yudong Chen; Christina Lee Yu", "journal": "ACM SIGMETRICS Performance Evaluation Review", "ref_id": "b17", "title": "Overcoming the long horizon barrier for sample-efficient reinforcement learning with latent low-rank structure", "year": "2023" }, { "authors": "Devavrat Shah; Dogyoon Song; Zhi Xu; Yuzhe Yang", "journal": "", "ref_id": "b18", "title": "Sample efficient reinforcement learning via low-rank matrix estimation", "year": "2020" }, { "authors": "Shai Shalev-Shwartz; Shaked Shammah; Amnon Shashua", "journal": "", "ref_id": "b19", "title": "Safe, multi-agent, reinforcement learning for autonomous driving", "year": "2016" }, { "authors": "Nathan Srebro; Adi Shraibman", "journal": "Springer", "ref_id": "b20", "title": "Rank, trace-norm and max-norm", "year": "2005" }, { "authors": "Richard S Sutton; A Rupam Mahmood; Martha White", "journal": "Journal of Machine Learning Research", "ref_id": "b21", "title": "An emphatic approach to the problem of off-policy temporal-difference learning", "year": "2016-01" }, { "authors": "Joel A Tropp", "journal": "Now Foundations and Trends", "ref_id": "b22", "title": "An Introduction to Matrix Concentration Inequalities", "year": "2015" }, { "authors": "Masatoshi Uehara; Wen Sun", "journal": "", "ref_id": "b23", "title": "Pessimistic model-based offline reinforcement learning under partial coverage", "year": "2022" }, { "authors": "Masatoshi Uehara; Xuezhou Zhang; Wen Sun", "journal": "", "ref_id": "b24", "title": "Representation learning for online and offline RL in low-rank MDPs", "year": "2021" }, { "authors": "Yu-Xiang Wang; Alekh Agarwal; Miroslav Dudík", "journal": "", "ref_id": "b25", "title": "Optimal and adaptive off-policy evaluation in contextual bandits", "year": "2017" }, { "authors": "Lin Yang; Mengdi Wang", "journal": "PMLR", "ref_id": "b26", "title": "Reinforcement learning in feature space: Matrix bandit, kernels, and regret bound", "year": "2020" }, { "authors": "Shangtong Zhang; Bo Liu; Shimon Whiteson", "journal": "", "ref_id": "b27", "title": "GradientDICE: Rethinking generalized offline estimation of stationary values", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 72, 449.6, 468, 31.33 ], "formula_id": "formula_0", "formula_text": "→ ∆(A)} t∈[H] , the Q function Q π t : S × A → R is defined as Q π t (s, a) = E π [ H i=t r i (s i , a i )|s t = s, a t = a]," }, { "formula_coordinates": [ 2, 82.32, 475.14, 252.72, 19.58 ], "formula_id": "formula_1", "formula_text": "J π = E π [ H t=1 r t (s t , a t )|s 1 ∼ µ 1 ]. Let d π t : S × A → [0, 1]" }, { "formula_coordinates": [ 2, 117.96, 565.7, 376.2, 20.94 ], "formula_id": "formula_2", "formula_text": "P t (s ′ |s, a) = d ′ i=1 u t,i (s ′ , s)w t,i (a) or P t (s ′ |s, a) = d ′ i=1 u t,i (s)w t,i (s ′ , a), ∀s ′ , s, a." }, { "formula_coordinates": [ 3, 322.68, 71.3, 165.76, 21.06 ], "formula_id": "formula_3", "formula_text": "P t (s ′ |s, a) = d ′ i=1 u t,i (s ′ )v t,i (s)w t,i (a)" }, { "formula_coordinates": [ 3, 221.52, 132.06, 123.09, 18.14 ], "formula_id": "formula_4", "formula_text": "D = {(s k t , a k t , r k t )} t∈[H],k∈[K]" }, { "formula_coordinates": [ 3, 261.96, 485.25, 279.16, 26.04 ], "formula_id": "formula_5", "formula_text": "M max ≤ √ d M ∞ .(1)" }, { "formula_coordinates": [ 3, 234.96, 531.45, 306.16, 23.76 ], "formula_id": "formula_6", "formula_text": "1 √ nm M * ≤ M max ≤ M * .(2)" }, { "formula_coordinates": [ 3, 133.92, 709.13, 407.32, 18.99 ], "formula_id": "formula_7", "formula_text": "E (s,a)∼g * M (s, a) -E (s,a)∼q M (s, a) = g * , M -q, M ≤ Dis(p, q) • M * .(4)" }, { "formula_coordinates": [ 4, 257.52, 282.44, 283.6, 17.41 ], "formula_id": "formula_8", "formula_text": "Dis(p, q) := p -q op .(5)" }, { "formula_coordinates": [ 4, 298.56, 387.15, 105.39, 19.85 ], "formula_id": "formula_9", "formula_text": "π β t , d π θ t ) ≤ Dis(d π β t , d π θ t )" }, { "formula_coordinates": [ 4, 106.2, 479.22, 121.05, 18.14 ], "formula_id": "formula_10", "formula_text": "D = {(s k t , a k t , r k t )} t∈[H],k∈[K]" }, { "formula_coordinates": [ 4, 181.92, 637.59, 359.2, 20.33 ], "formula_id": "formula_11", "formula_text": "( B π θ t f )(s, a) = r t (s, a) + s ′ ,a ′ P t (s ′ |s, a)π θ t (a ′ |s ′ )f (s ′ , a ′ )(6)" }, { "formula_coordinates": [ 4, 195.84, 709.35, 220.32, 20.21 ], "formula_id": "formula_12", "formula_text": "Z t (s, a) ← ( B π θ t Q π θ t+1 )(s, a), for (s, a) ∈ supp(ρ t )." }, { "formula_coordinates": [ 5, 77.52, 183.8, 388.45, 84.13 ], "formula_id": "formula_13", "formula_text": "Algorithm 1: Matrix Completion in Low-Rank Offline RL Data: dataset D, π θ , initial state distribution µ 1 , weight matrices (ρ t ) t∈[H] , and ME(•). Result: estimator J. 1 Q π θ H+1 (s, a) ← 0, ∀(s, a) ∈ S × A. 2 for t = H, H-1, . . . , 1 do 3 Q iteration: Z t (s, a) ← ( B π θ t Q π θ t+1 )(s, a)" }, { "formula_coordinates": [ 5, 77.52, 262.83, 189.49, 47.33 ], "formula_id": "formula_14", "formula_text": "Q π θ t ← ME (ρ t , Z t ). 5 end 6 Output J ← s,a µ 1 (s)π θ 1 (a|s) Q π θ 1 (s, a)." }, { "formula_coordinates": [ 5, 72, 390.75, 467.52, 24.53 ], "formula_id": "formula_15", "formula_text": "Y t ∈ R S×A via Y t (s, a) = (B π θ t Q π θ t+1 )(s, a), which is the population version of Z t computed in Algorithm 1." }, { "formula_coordinates": [ 5, 186.84, 533.85, 354.28, 39.96 ], "formula_id": "formula_16", "formula_text": "ME(ρ t , Y t ) = argmin M∈R S×A M max s.t. 
1 ρt • M = 1 ρt • Y t , M ∞ ≤ L t .(7)" }, { "formula_coordinates": [ 5, 217.92, 669.21, 323.32, 26.04 ], "formula_id": "formula_17", "formula_text": "J -J π θ ≤ 2H √ dSA H t=1 Dis(d π β t , d π θ t ).(8)" }, { "formula_coordinates": [ 6, 72, 71.91, 18.01, 14.18 ], "formula_id": "formula_18", "formula_text": "d π β t ," }, { "formula_coordinates": [ 6, 267.48, 158.82, 280.55, 18.14 ], "formula_id": "formula_19", "formula_text": "D = {(s k t , a k t , r k t )} t∈[H],k∈[K] . Let n t (s, a) := k∈[K] 1 (s k t ,a k t )=(s,a)" }, { "formula_coordinates": [ 6, 72, 184.47, 163.3, 14.18 ], "formula_id": "formula_20", "formula_text": "d π β t (s, a) = n t (s, a)/K. Let ρ t = d π β t" }, { "formula_coordinates": [ 6, 159.24, 220.41, 381.88, 39.84 ], "formula_id": "formula_21", "formula_text": "ME(ρ t , Z t ) = argmin M∈R S×A M max s.t. | ρ t , M -Z t | ≤ | ρ t , Z t -Y t | , M ∞ ≤ L t .(9)" }, { "formula_coordinates": [ 6, 155.28, 366.33, 385.96, 30.32 ], "formula_id": "formula_22", "formula_text": "J -J π θ ≤2H √ dSA t∈[H] Dis(d π β t , d π θ t ) + CH 2 d(S + A) log(HS/δ) K .(10)" }, { "formula_coordinates": [ 6, 224.16, 522.57, 167.04, 23.88 ], "formula_id": "formula_23", "formula_text": "J -J π β H 2 d(S + A) log(HS/δ) K ," }, { "formula_coordinates": [ 7, 242.52, 168.45, 298.72, 23.76 ], "formula_id": "formula_24", "formula_text": "J -J π θ ≤ CH 2 d log(nH) m .(11)" }, { "formula_coordinates": [ 7, 187.08, 210.16, 6.75, 6.73 ], "formula_id": "formula_25", "formula_text": "ǫ 2" }, { "formula_coordinates": [ 7, 196.44, 371.97, 344.8, 23.76 ], "formula_id": "formula_26", "formula_text": "J -J π θ ≤ CH 2 d log(nH) m + dn log(nH) K .(12)" }, { "formula_coordinates": [ 7, 373.08, 511.95, 111.85, 19.73 ], "formula_id": "formula_27", "formula_text": "d π θ ≡ d π θ 1 and d π β ≡ d π β 1 ." }, { "formula_coordinates": [ 7, 235.32, 564.45, 305.92, 26.04 ], "formula_id": "formula_28", "formula_text": "J -J π θ ≤ 2 √ dSA Dis(d π β , d π θ ).(13)" }, { "formula_coordinates": [ 7, 177.24, 659.73, 359.55, 26.04 ], "formula_id": "formula_29", "formula_text": "J -J π θ ≤2 √ dSA Dis(d π β , d π θ ) + C d(S + A) log(HS/δ) K . (14" }, { "formula_coordinates": [ 7, 536.79, 666.44, 4.45, 9.96 ], "formula_id": "formula_30", "formula_text": ")" }, { "formula_coordinates": [ 8, 225.12, 107.43, 166.75, 41.57 ], "formula_id": "formula_31", "formula_text": "d π θ -d π β op = (µ1 ⊤ ) • (π θ -π β ) op ≤ µ ∞ π θ -π β op ," }, { "formula_coordinates": [ 8, 165.36, 176.73, 284.52, 25.92 ], "formula_id": "formula_32", "formula_text": "J -J π θ ≤2 √ dSA µ ∞ π θ -π β op + C d(S + A) log(HS/δ) K ." }, { "formula_coordinates": [ 8, 71.64, 223.95, 264.85, 19.73 ], "formula_id": "formula_33", "formula_text": "we have supp(d π ) ⊆ supp(d π β ) and Dis(d π β , d π θ ) ≤ d π -d π θ" }, { "formula_coordinates": [ 8, 165.12, 264.9, 281.76, 18.62 ], "formula_id": "formula_34", "formula_text": "Dis(π β , π θ ) = min π -π θ op : policy π, supp(π) ⊆ supp(π β ) ." }, { "formula_coordinates": [ 8, 226.32, 305.01, 162.72, 26.04 ], "formula_id": "formula_35", "formula_text": "J -J π θ ≤ 2 √ dSA µ ∞ Dis(π β , π θ )." }, { "formula_coordinates": [ 8, 217.92, 481.59, 176.16, 20.33 ], "formula_id": "formula_36", "formula_text": "Π B := π : Dis(d π β t , d π t ) ≤ B t , ∀t ∈ [H] ." }, { "formula_coordinates": [ 8, 271.44, 681.06, 265.35, 21.24 ], "formula_id": "formula_37", "formula_text": "π = argmax π∈ ΠB J π . 
(15" }, { "formula_coordinates": [ 8, 536.79, 682.64, 4.45, 9.96 ], "formula_id": "formula_38", "formula_text": ")" }, { "formula_coordinates": [ 9, 145.56, 108.33, 395.68, 30.32 ], "formula_id": "formula_39", "formula_text": "J π ≥ J π -4H √ dSA t∈[H] B t -C d(S + A) log(| Π B |HS/δ) K , ∀π ∈ Π B .(16)" }, { "formula_coordinates": [ 9, 179.52, 281.82, 361.72, 52.19 ], "formula_id": "formula_40", "formula_text": "π θ = {π θ t } t∈[H] , π β = {π β t } t∈[H] , we have Dis(d π θ t , d π β t ) ≤ t i=1 √ dS 2 A t-i π θ i -π β i op , ∀t ∈ [H].(17)" }, { "formula_coordinates": [ 9, 206.04, 359.37, 335.2, 26.16 ], "formula_id": "formula_41", "formula_text": "π : π θ t -π β t op ≤ B t √ dS 2 A t-H , ∀t ∈ [H] .(18)" }, { "formula_coordinates": [ 9, 77.04, 393.27, 463.25, 26.44 ], "formula_id": "formula_42", "formula_text": "π θ t , d π β t ) ≤ B t for all t ∈ [H], indicating π ∈ Π B . The exponential factor √ dS 2 A" }, { "formula_coordinates": [ 12, 152.64, 207.81, 312.84, 20.72 ], "formula_id": "formula_43", "formula_text": "i,j W ij (A ij -B ij ) ≤ i,j P ij (A ij -B ij ) + ( A * + B * ) P -W op ." }, { "formula_coordinates": [ 12, 159, 299.19, 300, 44.57 ], "formula_id": "formula_44", "formula_text": "Q π θ t , Q π θ t ∈ R S×A , we have d π θ t , Q π θ t -Q π θ t = d π θ t , Q π θ t -Y t + d π θ t+1 , Q π θ t+1 -Q π θ t+1 , t ∈ [H]," }, { "formula_coordinates": [ 12, 221.16, 362.22, 175.8, 30.59 ], "formula_id": "formula_45", "formula_text": "d π θ 1 , Q π θ 1 -Q π θ 1 = H t=1 d π θ t , Q π θ t -Y t ." }, { "formula_coordinates": [ 12, 151.56, 442.47, 308.35, 156.05 ], "formula_id": "formula_46", "formula_text": "d π θ t (s, a) Q π θ t (s, a) -Y t (s, a) ≤ s,a g t (s, a) Q π θ t (s, a) -Y t (s, a) + Y t * + Q π θ t * d π θ t -g t op = Y t * + Q π θ t * d π θ t -g t op ≤ √ SA Y t max + Q π θ t max d π θ t -g t op ≤2 √ SA Y t max d π θ t -g t op . ≤2H √ dSA d π θ t -g t op ." }, { "formula_coordinates": [ 12, 72, 675.75, 469.24, 44.69 ], "formula_id": "formula_47", "formula_text": "Fix t ∈ [H]. The solution Q π θ t satisfies Q π θ t max ≤ Y t max ≤ √ dH(19)" }, { "formula_coordinates": [ 13, 237.72, 72.75, 303.52, 20.21 ], "formula_id": "formula_48", "formula_text": "ρ t , Q π θ t -Z t ≤ | ρ t , Z t -Y t |(20)" }, { "formula_coordinates": [ 13, 165, 160.13, 376.24, 24.4 ], "formula_id": "formula_49", "formula_text": "d π β t , Q π θ t -Y t ≤ ρ t , Q π θ t -Y t + 17 dH 2 (S + A) + log(2H/δ) K ,(21)" }, { "formula_coordinates": [ 13, 114.36, 208.47, 381.67, 115.85 ], "formula_id": "formula_50", "formula_text": "J -J π θ ≤ t∈[H] d π θ t , Q π θ t -Yt (i) ≤ t∈[H] d π β t , Q π θ t -Yt + 2 t∈[H] ( Yt * + Q π θ t * ) • d π θ t -d π β t op (ii) ≤ t∈[H] ρt, Q π θ t -Yt + 17H dH 2 (S + A) + log(2H/δ) K + 2H √ dSA t∈[H] d π θ t -d π β t op (iii) ≤ 2 t∈[H] ρt, Zt -Yt + 17H dH 2 (S + A) + log(2H/δ) K + 2H √ dSA t∈[H] d π θ t -d π β t op" }, { "formula_coordinates": [ 13, 90, 364.53, 434.83, 102.44 ], "formula_id": "formula_51", "formula_text": "J -J π θ ≤ 2CH 2 S log(HS/δ) K + 17H dH 2 (S + A) + log(2H/δ) K + 2H √ dSA t∈[H] d π θ t -d π β t op ≤ 2H √ dSA t∈[H] d π θ t -d π β t op + C ′ H 2 d(S + A) log(HS/δ) K ≤ 2H √ dSA t∈[H] d π θ t -d π β t op + C ′ H 2 d(S + A) log(HS/δ) K ," }, { "formula_coordinates": [ 13, 201.96, 519.33, 339.28, 23.76 ], "formula_id": "formula_52", "formula_text": "| ρ t , Z t -Y t | ≤ CH S log(HS/δ) K , ∀t ∈ [H].(22)" }, { "formula_coordinates": [ 13, 103.2, 551, 356.16, 44.01 ], "formula_id": "formula_53", "formula_text": "Fix t ∈ [H]. 
Recall that Z t (s, a) -Y t (s, a) = s ′ ,a ′ ( P t (s ′ |s, a) -P t (s ′ |s, a))π θ t+1 (a ′ |s ′ ) Q π θ t+1 (s ′ , a ′ )." }, { "formula_coordinates": [ 13, 206.16, 638.1, 199.68, 30.95 ], "formula_id": "formula_54", "formula_text": "P t (s ′ |s, a) = 1 n t (s, a) K k=1 1 {(s k t ,a k t ,s k t+1 )=(s,a,s ′ )} ." }, { "formula_coordinates": [ 13, 115.56, 694.67, 380.83, 25.53 ], "formula_id": "formula_55", "formula_text": "Zt(s, a) -Yt(s, a) = s ′ 1 nt(s, a) K k=1 1 {(s k t ,a k t ,s k t+1 )=(s,a,s ′ )} -Pt(s ′ |s, a) a ′ π θ t+1 (a ′ |s ′ ) Q π θ t+1 (s ′ , a ′ )." }, { "formula_coordinates": [ 14, 172.2, 71.91, 315.37, 19.73 ], "formula_id": "formula_56", "formula_text": "f t (s ′ ) = a ′ π θ t+1 (a ′ |s ′ ) Q π θ t+1 (s ′ , a ′ ). It is guaranteed that |f t (s ′ )| ≤ H." }, { "formula_coordinates": [ 14, 135.48, 134.94, 341.04, 45.11 ], "formula_id": "formula_57", "formula_text": "= 1 K s ′ f t (s ′ ) K k=1 s,a 1 {(s k t ,a k t )=(s,a)} 1 {s k t+1 =s ′ } -1 {(s k t ,a k t )=(s,a)} P t (s ′ |s, a) α ." }, { "formula_coordinates": [ 14, 72, 202.5, 468.46, 75.14 ], "formula_id": "formula_58", "formula_text": "|α| ≤ H K s ′ K k=1 s,a 1 {(s k t ,a k t )=(s,a)} 1 {s k t+1 =s ′ } -1 {(s k t ,a k t )=(s,a)} P t (s ′ |s, a) . Let X s ′ k = s,a 1 {(s k t ,a k t )=(s,a)} 1 {s k t+1 =s ′ } -1 {(s k t ,a k t )=(s,a)} P t (s ′ |s, a) for all s ′ and k. If we fix s ′ , it is easy to see that {X s ′ k , k ∈ [K]} are independent. Even if X s ′" }, { "formula_coordinates": [ 14, 72, 309.98, 346.2, 45.7 ], "formula_id": "formula_59", "formula_text": "X s ′ 1 = s,a 1 {(s 1 t ,a 1 t )=(s,a)} 1 {s k t+1 =s ′ } -P t (s ′ |s, a) . Define F s,a = 1 {(s 1 t ,a1" }, { "formula_coordinates": [ 14, 142.92, 398.75, 321, 144.07 ], "formula_id": "formula_60", "formula_text": "E X s ′ 1 2 = E   s,a F s,a G s ′ -P s ′ |s,a 2   (i) = s,a E F 2 s,a G s ′ -P s ′ |s,a 2 = s,a d π β t (s, a) µ π β t+1 (s ′ ) 1 -P s ′ |s,a 2 + 1 -µ π β t+1 (s ′ ) P 2 s ′ |s,a = s,a d π β t (s, a) µ π β t+1 (s ′ ) -2µ π β t+1 (s ′ )P s ′ |s,a + P 2 s ′ |s,a(ii)" }, { "formula_coordinates": [ 14, 196.8, 536.09, 169.2, 45.76 ], "formula_id": "formula_61", "formula_text": "≤ µ π β t+1 (s ′ ) -2 µ π β t+1 (s ′ ) 2 + µ π β t+1 (s ′ ) = 2µ π β t+1 (s ′ ) 1 -µ π β t+1 (s ′ ) ≤ 2µ π β t+1 (s ′ )," }, { "formula_coordinates": [ 15, 349.32, 104.67, 133.68, 19.73 ], "formula_id": "formula_62", "formula_text": "d π θ t (•, •) = µ π θ t (•)π θ t (•|•) ∈ R n×n" }, { "formula_coordinates": [ 15, 205.32, 163.79, 335.92, 55.41 ], "formula_id": "formula_63", "formula_text": "M ij =      1 mn with probability b(1 -b) -1 mn with probability b(1 -b) 0 with probability 1 -2b(1 -b)(23)" }, { "formula_coordinates": [ 15, 267.84, 279.69, 273.4, 23.76 ], "formula_id": "formula_64", "formula_text": "M op ≤ C log n n 2 m .(24)" }, { "formula_coordinates": [ 15, 225.36, 334.02, 163.99, 88.71 ], "formula_id": "formula_65", "formula_text": "J -J π θ ≤ 2nH √ d H t=1 d π β t -d π θ t op H • nH √ d • log(nH) n 2 m = H 2 d log(nH) m ," }, { "formula_coordinates": [ 15, 72, 479.21, 467.98, 88.71 ], "formula_id": "formula_66", "formula_text": "S k = M ij e i e ⊤ j , for all k ∈ [n 2 ]. Since |M ij | ≤ 1 mn , we derive that S k op ≤ 1 mn . We calculate that k E[S k S ⊤ k ] = i,j E[M 2 ij ]e i e ⊤ i = 2b(1 -b) 1 m 2 n I n ." }, { "formula_coordinates": [ 15, 197.16, 589.17, 228.24, 29.5 ], "formula_id": "formula_67", "formula_text": "k E[S k S ⊤ k ] op = 2b(1 -b) 1 m 2 n = 2(n -m) n 3 m ≤ 2 n 2 m ." 
}, { "formula_coordinates": [ 15, 209.76, 624.53, 91.93, 18.76 ], "formula_id": "formula_68", "formula_text": "k E[S ⊤ k S k ] op ≤ 2 n 2 m ." }, { "formula_coordinates": [ 16, 204.48, 217.77, 206.28, 23.76 ], "formula_id": "formula_69", "formula_text": "J -J π θ H 2 d log(nH) m + H 2 dn log(nH) K ," }, { "formula_coordinates": [ 16, 72, 364.05, 338.16, 72.72 ], "formula_id": "formula_70", "formula_text": "P ij M ij - i,j W ij M ij ≤ M * P -W op . Proof. We can rewrite i,j P ij M ij -i,j W ij M ij as M, P -W ," }, { "formula_coordinates": [ 16, 229.56, 463.65, 152.88, 17.04 ], "formula_id": "formula_71", "formula_text": "| M, P -W | ≤ M * P -W op ." }, { "formula_coordinates": [ 16, 240.96, 570.39, 300.28, 14.66 ], "formula_id": "formula_72", "formula_text": "Q π θ t (s, a) = B π θ t Q π θ t+1 (s, a),(25)" }, { "formula_coordinates": [ 16, 249, 592.35, 292.24, 14.66 ], "formula_id": "formula_73", "formula_text": "Y t (s, a) = B π θ t Q π θ t+1 (s, a).(26)" }, { "formula_coordinates": [ 16, 130.92, 641.31, 98.78, 33.62 ], "formula_id": "formula_74", "formula_text": "Q π θ t (s, a) -Q π θ t (s, a) = Q π θ t (" }, { "formula_coordinates": [ 16, 348.6, 682.23, 132.48, 20.21 ], "formula_id": "formula_75", "formula_text": ") Q π θ t+1 (s ′ , a ′ ) -Q π θ t+1 (s ′ , a ′ ) ," }, { "formula_coordinates": [ 17, 119.16, 108.27, 89.46, 42.17 ], "formula_id": "formula_76", "formula_text": "d π θ t , Q π θ t -Q π θ t = d π θ t , Q π θ t -Y t +" }, { "formula_coordinates": [ 17, 119.16, 130.23, 367.71, 68.93 ], "formula_id": "formula_77", "formula_text": "=d π θ t+1 (s ′ ,a ′ ) Q π θ t+1 (s ′ , a ′ ) -Q π θ t+1 (s ′ , a ′ ) = d π θ t , Q π θ t -Y t + d π θ t+1 , Q π θ t+1 -Q π θ t+1 ," }, { "formula_coordinates": [ 17, 123, 309.63, 418.24, 67.85 ], "formula_id": "formula_78", "formula_text": "d π θ t -d π β t op = (µ π θ t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π β t op ≤ (µ π θ t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π θ t op + (µ π β t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π β t op ≤ √ dS 2 A d π β t-1 -d π θ t-1 op + π θ t -π β t op ,(27)" }, { "formula_coordinates": [ 17, 207.48, 419.7, 197.59, 30.71 ], "formula_id": "formula_79", "formula_text": "d π θ t -d π β t op ≤ t i=1 √ dS 2 A t-i π θ i -π β i op" }, { "formula_coordinates": [ 17, 178.8, 495.93, 254.95, 26.28 ], "formula_id": "formula_80", "formula_text": "(µ π θ t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π θ t op ≤ √ dS 2 A d π β t-1 -d π θ t-1 op" }, { "formula_coordinates": [ 17, 165.24, 556.23, 286.51, 66.41 ], "formula_id": "formula_81", "formula_text": "(µ π θ t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π θ t op = (µ π θ t 1 ⊤ -µ π β t 1 ⊤ ) • π θ t op ≤ √ 2 µ π θ t 1 ⊤ -µ π β t 1 ⊤ ∞ π θ t op ≤ √ 2S max s µ π θ t (s) -µ π β t (s) ," }, { "formula_coordinates": [ 17, 169.2, 668.31, 19.61, 14.54 ], "formula_id": "formula_82", "formula_text": "µ π θ t (" }, { "formula_coordinates": [ 18, 202.92, 178.59, 206.83, 20.21 ], "formula_id": "formula_83", "formula_text": "(µ π β t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π β t op ≤ π θ t -π β t op" }, { "formula_coordinates": [ 18, 150.6, 232.83, 316.32, 44.09 ], "formula_id": "formula_84", "formula_text": "(µ π β t 1 ⊤ ) • π θ t -(µ π β t 1 ⊤ ) • π β t op = (µ π β t 1 ⊤ ) • (π θ t -π β t ) op ≤ µ π β t ∞ π θ t -π β t op ≤ π θ t -π β t op ." }, { "formula_coordinates": [ 18, 168.12, 357.09, 88.95, 26.04 ], "formula_id": "formula_85", "formula_text": "J -J π ≤ 2H √ dSA" }, { "formula_coordinates": [ 18, 162.96, 480.69, 90.99, 26.04 ], "formula_id": "formula_86", "formula_text": "J π ≥ J π -2H √ dSA" } ]
Matrix Estimation for Offline Reinforcement Learning with Low-Rank Structure
We consider offline Reinforcement Learning (RL), where the agent does not interact with the environment and must rely on offline data collected using a behavior policy. Previous works provide policy evaluation guarantees when the target policy to be evaluated is covered by the behavior policy, that is, state-action pairs visited by the target policy must also be visited by the behavior policy. We show that when the MDP has a latent low-rank structure, this coverage condition can be relaxed. Building on the connection to weighted matrix completion with non-uniform observations, we propose an offline policy evaluation algorithm that leverages the low-rank structure to estimate the values of uncovered state-action pairs. Our algorithm does not require a known feature representation, and our finite-sample error bound involves a novel discrepancy measure quantifying the discrepancy between the behavior and target policies in the spectral space. We provide concrete examples where our algorithm achieves accurate estimation while existing coverage conditions are not satisfied. Building on the above evaluation algorithm, we further design an offline policy optimization algorithm and provide non-asymptotic performance guarantees.
Xumei Xi; Christina Lee Yu; Yudong Chen
[ { "figure_caption": "2. 11MDP with Low-Rank Structure Consider an MDP M = (S, A, H, P, r, µ 1 ) with finite state space S, finite action space A, horizon H, transition kernel P = {P t } t∈[H] , bounded reward function r = {r t : S × A → [0, 1]} t∈[H] , and initial state distribution µ 1 ∈ ∆(S). Let S = |S| and A = |A|. For each policy π = {π t : S", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "t∈[H]B t + CH 2 d(S + A) log(| Π B |HS/δ) K J -J π ≤ 2H √ dSA t∈[H] B t + CH 2 d(S + A) log(| Π B |HS/δ) K .", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "t∈[H] B t -CH 2 d(S + A) log(| Π B |HS/δ) K ≥ J π -2H √ dSA t∈[H] B t -CH 2 d(S + A) log(| Π B |HS/δ) K ≥ J π -4H √ dSA t∈[H] B t -2CH 2 d(S + A) log(| Π B |HS/δ) K .", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "not the specific distribution. Once a state-action pair is supported, we know exactly what r t (s, a) and P t (•|s, a) are, and therefore it does not matter what the actual value of d π β t (s, a) is. When the support of d π β t is S × A for all t ∈ [H], it means the behavior policy is extremely exploratory and covers the whole state-action space. Consequently, we get a zero estimation bound in (8) because we know the MDP exactly.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Since generating d π β t is done independently from the offline data collection, we first condition on d π β", "figure_data": "A.5 Proof of Corollary 2d π θ t . d By Theorem 2, with probability at least 1 -1 2n , we havetandJ -J π θ ≤2nH√ dt∈[H]d π θ t -d π β top+ CH 2 dn log(nH) K.Invoking Lemma 5, with probability at least 1 -1 2n , we have d π θ t -d π β toplog(nH) n 2 mfor all t ∈ [H].Hence, we getP M op ≥ t ≤ 2n exp-t 2 /2 3mn mn 2 + t 2.Letting the RHS be upper bounded by 1 n yields the desired result.", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "where the last step invokes Hölder's inequality and the notation P t-1 (s|•, •) denotes a S × A matrix by fixing the next state in the transition probability. Note that by assumption, P t-1 (s|•, •) is at most rank d/2. Hence, we further deduce that P t-1 (s|•, •)", "figure_data": "", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work by [22] is referenced for its approach to estimating the state marginal importance ratio, which the citing paper adopts in their research to address the challenge of distribution shift in offline RL."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "The cited work by [12] is acknowledged for providing a dataset that the citing paper utilizes in their study of offline RL."}, {"Category": "Extension or Continuation", "Citation": "[28]", "Explanation": "The cited work by [28] is mentioned as a continuation of research in the area of offline RL, exploring new dimensions and variables in the field."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work introduces the Fitted Q-iteration (FQI) algorithm, which the citing paper adopts in their research on offline policy evaluation in settings with a latent low-rank structure."}, {"Category": "Extension or Continuation", "Citation": "[24,17,10]", "Explanation": "The cited works build upon the work of the cited work by requiring a more relaxed partial coverage condition for offline policy evaluation, which the citing paper further extends in their research on the same topic."}, {"Category": "Data Source", "Citation": "[19,18]", "Explanation": "The cited works are used as a reference in the citing paper for their view of the Q function as a matrix and the low-rank structure exploited in the research on offline policy evaluation in settings with a latent low-rank structure."}, {"Category": "Methodological Basis", "Citation": "[4,8]", "Explanation": "The cited works are mentioned in the context of non-uniform sampling in the offline data, which the citing paper uses in their research on completing the matrix under non-uniform sampling in settings with a latent low-rank structure."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work is mentioned in the context of the more relaxed partial coverage condition for offline policy evaluation, which the citing paper adopts in their research on the same topic."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work introduces a novel algorithm for matrix estimation that allows for some dependence between the noise and sampling pattern, which the citing paper builds upon to derive a more general performance guarantee for a wider class of sampling patterns in offline RL."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces the concept of low-rank MDPs, which the citing paper builds upon to develop a new model for transition kernels in the context of reinforcement learning."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work provides a specific model for low-rank MDPs that the citing paper adopts in their research on transition kernels in reinforcement learning."}, {"Category": "Data Source", "Citation": "[6]", "Explanation": "The cited work is a source of the Linear MDP model, which the citing paper uses as a reference for their study on feature maps in transition kernels for reinforcement learning."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work contributes to the Linear MDP model by providing insights on feature maps in transition kernels, which the citing paper leverages in their research on reinforcement learning."}, {"Category": "Methodological Basis", "Citation": 
"[21]", "Explanation": "The cited work provides the definitions of the max norm and nuclear norm, which the citing paper adopts in its research to analyze the relationship between the two norms and the matrix rank."}, {"Category": "Supporting Evidence", "Citation": "[8]", "Explanation": "The cited work in [8] is used to measure the difference in two distributions in the operator norm, which is similar to the discrepancy metrics used in the citing paper. This work provides a foundational method for understanding the difference in distributions and contributes to the research in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work provides a generalization error guarantee that is used in the proof of Theorem 2 in the citing paper, which is an important methodological basis for the study of finite-sample error in the system."}, {"Category": "Methodological Basis", "Citation": "[14,13]", "Explanation": "The cited works provide the use of KL-divergence as a method to ensure the learned policy is close to the behavior policy, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work introduces the maximum mean discrepancy (MMD) as a useful method in practice, which the citing paper builds upon in their research to address the support shift issue."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work provides a theorem that the citing paper uses in their research to establish a relationship between the solution and the target matrix, as well as a way to bound the estimation error."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work provides the matrix Berstein's inequality (Theorem 6.1.1) that the citing paper uses to derive the inequality S k op \u2264 1 mn and calculate the expectation E[S k S \u22a4 k ]."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b56", "b61", "b19", "b59", "b65", "b7" ], "table_ref": [], "text": "Morphological inflection is a task with widereaching applications in NLP, linguistics, and cognitive science. As the reverse of lemmatization, it is a critical part of natural language generation, particularly for languages with elaborate morphological systems (Bender, 2009;Oflazer and Saraçlar, 2018). Since morphological inflection is a particular type of well-defined regular string-to-string mapping problem (Roark and Sproat, 2007;Chandlee, 2017), it is also useful for testing the properties of different neural network architectures. Within cognitive science and linguistics, computational models of inflection have a long history in arbitrating between competing theories of morphological representation and acquisition (surveyed in Pinker and Ullman, 2002;Seidenberg and Plaut, 2014), and inflection is often a focus of computational typology (Bjerva and Augenstein, 2018;Elsner et al., 2019).\nHowever, despite the task's popularity, standard evaluation practices have significant weaknesses. We discuss three aspects of these practices which hamper investigators' ability to derive informative *Denotes equal contribution Figure 1: The four logically possible train-eval overlap types if evaluation data consists of (lemma, feature set) pairs: both, featsOnly, lemmaOnly, neither, as well as featsAttested= both ∪ featsOnly and featsNovel= lemmaOnly ∪ neither.\nconclusions. (1) Uniform sampling, which creates unnatural train-test splits, (2) Evaluation of single data splits, which yields unstable model rankings, and (3) uncontrolled overlaps between train and test data components, which obscure diagnostic information about systems' ability to perform morphological generalizations." }, { "figure_ref": [], "heading": "Practice 1: Uniform Sampling", "publication_ref": [ "b33", "b47", "b3", "b37", "b16", "b23", "b76" ], "table_ref": [], "text": "Training and evaluation sets have been (with some exceptions) sampled uniformly by type from a corpus such as those available in the UniMorph Database (Kirov et al., 2018;McCarthy et al., 2020;Batsuren et al., 2022). While practical to implement for corpora that lack frequency information, uniform sampling is also unrealistic because morphological forms exhibit a highly skewed Zipfian distribution in any large text (Lignos and Yang, 2018). Thus, uniform sampling creates an unnatural bias towards low-frequency types. Since high frequency is correlated with irregularity across many but not all languages (Bybee, 1991;Fratini et al., 2014;Wu et al., 2019), this creates a bias towards more regular and reliable training items.\nWe provide two alternatives for producing realistic or challenging data sets: (1) a frequencyweighted sampling strategy to achieve a more real-istic distribution of out-of-vocabulary (OOV) lemmas and inflectional categories and better match practical use-cases or input during child language acquisition, and (2) a sampling strategy that explicitly balances OOV lemmas and inflectional categories in order to directly evaluate models' generalization ability along these dimensions." 
}, { "figure_ref": [], "heading": "Practice 2: Single Data Splits", "publication_ref": [ "b22", "b48", "b71", "b58", "b35", "b8" ], "table_ref": [], "text": "The current practice in inflection evaluation, employed, for example, in the SIGMORPHON, CoNLL-SIGMORPHON and SIGMORPHON-UniMorph shared tasks in recent years (Cotterell et al., 2016(Cotterell et al., , 2017(Cotterell et al., , 2018;;McCarthy et al., 2019;Vylomova et al., 2020;Pimentel et al., 2021;Kodner et al., 2022), examines different models with one particular data set that is considered representative of the language or the inflection task at hand. This data set, and therefore all evaluation, usually consists of one pre-defined train-(dev-)test split.\nHowever, this method is problematic because it implicitly assumes that the results from a single split are informative and generalizable. In reality, this assumption is untenable, particularly when facing severe data limitation (Liu and Prud'hommeaux, 2022), as is the case for the majority of languages in the world (cf. Blasi et al., 2022): In UniMorph 4, for example, data set size varies significantly across languages, with the smallest, Manx (Celtic, IE), containing only one lemma with 14 inflected forms, and the largest, Czech (Slavic, IE) containing approximately 800,000 lemmas with 50.3 million forms. If the performance on a single split is not necessarily representative, then the original model ranking derived from the one particular data split might also not generalize well.\nThe concerns outlined above were demonstrated in Liu and Prud'hommeaux (2022), which investigated model generalizability in low-resource morphological segmentation. Using data from 11 languages, they provided evidence that: (1) there are major differences in the numerical performance and rankings of each evaluated model type when using different splits from the same data set, and (2) even within a single split, large performance variability can arise for each model type when it is trained using different random seeds. These findings illustrate that common methods of model evaluation can lead to largely coincidental conclusions. We extend this approach to morphological inflection by applying multiple data splits, and evaluating variability between splits." }, { "figure_ref": [], "heading": "Practice 3: Uncontrolled Overlaps", "publication_ref": [], "table_ref": [], "text": "The typical morphological inflection task paradigm presents (lemma, inflected form, feature set) triples during training and asks a system to predict inflected forms from (lemma, feature set) pairs during evaluation. Note that since the lemma and feature set can be combined independently, it is possible for either lemmas or feature sets that appeared during training to reappear during test without any individual triple violating train-on-test. Test pairs with OOV lemmas or feature sets require a system to generalize along different morphological dimensions. Performance is likely related to the relative rates of OOV lemmas and feature sets in the evaluation split, yet existing sampling strategies generally leave these variables uncontrolled.\nWe observe that uncontrolled OOV rates vary dramatically between different sampled data splits, and that uncontrolled sampling biases test sets towards \"easier\" items with in-vocabulary lemmas and feature sets. To remedy this, we argue that performance should be reported independently for items with each lemma/feature set overlap type regardless of sampling strategy. 
Furthermore, if a project's research goal is to evaluate the generalization ability of a model, lemma/feature set overlapaware sampling should be used to ensure that a sufficient number of test items of each overlap type are present." }, { "figure_ref": [], "heading": "Defining Overlap", "publication_ref": [ "b18", "b18", "b37", "b34", "b29", "b29" ], "table_ref": [], "text": "Morphological inflection requires generalization over two primary dimensions: to new lemmas (\"If I have witnessed the 2pl imperfective subjunctive with other verbs, how do I apply that to new verb X?\") and to new inflectional categories (\"If I have seen X inflected in several other categories, how do I create the 2pl imperfect subjunctive of X?\"). Because of the sparsity of morphological inflections in language use (Chan, 2008), both types of generalization are necessary during language acquisition as well as deployment of computational models.\nAs with many linguistic phenomena, the attestation of inflected forms follows an extremely sparse and skewed long-tailed distribution, as do attested lemmas ranked by the proportions of their potential paradigms that are actually attested (paradigm saturation; PS), and inflectional categories ranked by the number of lemmas with which they oc-cur (Chan, 2008). For example, the median PS for Spanish verbs in millions of tokens of childdirected speech is equivalent to two of its three dozen possible forms, and the 2nd person plural imperfect subjunctive only occurs with two lemmas (cf. Lignos and Yang, 2018;Kodner, 2022).\nGiven the importance of both types of generalization, it is necessary to evaluate both to assess the abilities of a morphological learning model. In the evaluation made popular by the SIGMORPHON shared tasks, models are asked to predict inflected forms given (lemma, feature set) pairs, where feature sets can be seen as corresponding to inflectional categories or paradigm cells. Generalization across lemmas is required when an evaluation pair contains a lemma that was out-of-vocabulary (OOV) in training, and generalization across categories is required when an evaluation pair contains a feature set that was OOV. In all, there are four logically possible licit types of evaluation pairs distinguished by their lemma and feature overlap with training. These are expressed visually in Figure 1 along with two types which are unions of the other types:\nboth Overlap: Both the lemma and feature set of an evaluation pair are attested in the training set (but not together in the same triple). lemmaOnly Overlap: An eval pair's lemma is attested in training, but its feature set is novel. featsOnly Overlap: An eval pair's feature set is attested in training, but its lemma is novel. neither Overlap: An evaluation pair is entirely unattested in training. Both its lemma and features are novel. featsAttested: An eval pair's feature set is attested in training (both ∪ featsOnly) featsNovel: An eval pair's feature set is novel (lemmaOnly ∪ neither)\nFor a concrete illustration, consider the training and evaluation sets provided in (1)-(2). Each evaluation pair exhibits a different kind of overlap.\n( Computational work in morphological inflection has generally ignored these dimensions of evaluation. In the shared task, the four overlap types were uncontrolled before 2021, which contains one partial evaluation on featsOnly ∪ neither items. But, recognition of the value of these overlap types has grown recently. Goldman et al. 
(2022) showed that four models consistently struggle to generalize across lemmas, concluding that test sets should avoid lemma overlap altogether. However, this proposal removes the option to contrast performance on seen and unseen lemmas. Furthermore, they did not control for or evaluate feature overlap, so both vs. lemmaOnly and featsOnly vs. neither also cannot be distinguished. (3) summarizes their partition scheme, which distinguishes two overlap types. We call these lemmaAttested (= both ∪ lemmaOnly) and lemmaNovel (= featsOnly ∪ neither).\n(3) Goldman et al. (2022) Partition Types e0: sit V;PST <--lemmaAttested e1: see V;NFIN <--lemmaAttested e2: eat V;PST <--lemmaNovel e3: run V;PRS;3;SG <--lemmaNovel\nThe 2022 SIGMORPHON-UniMorph shared task was the first to report results on all four overlap types (both, featsOnly, lemmaOnly, neither). Every system submitted to the shared task achieved much better performance on in-vocabulary feature sets (both and featsOnly) than OOV feature sets (lemmaOnly or neither). This discrepancy even held for languages for which a model should be able to generalize: highly regular agglutinative morphology for which this type of generalization is often transparent. On the other hand, lemma attestation produced a much smaller discrepancy. Following these observations, we focus our investigation on the four logical overlap types with extra emphasis on the featsAttested vs. featsNovel dichotomy. We address agglutinative languages specifically in Section 5.3" }, { "figure_ref": [], "heading": "Data Sources and Preparation", "publication_ref": [ "b3", "b44" ], "table_ref": [], "text": "We follow prior literature in providing training and evaluation data in UniMorph's format. Data sets were sampled from UniMorph 4 (Batsuren et al., 2022) and3 (McCarthy et al., 2020) 1 aug-mented with frequencies from running text corpora. When possible, frequencies were drawn from childdirected speech (CDS) corpora from the CHILDES database (MacWhinney, 2000), since one possible downstream application of the morphological inflection task is contribution to the computational cognitive science of language acquisition. CHILDES lemma and morphological annotations were converted into UniMorph format and intersected with UniMorph to create frequency lists.2 " }, { "figure_ref": [], "heading": "Languages", "publication_ref": [ "b42", "b4" ], "table_ref": [], "text": "Languages were prioritized for typological diversity and accessibility of text corpora. Quantitative summaries of our frequency+UniMorph data sets are provided in Appendix B.\nArabic (Semitic, AA): Modern Standard Arabic frequencies were drawn from the diacritized and morphologically annotated Penn Arabic Treebank (PATB; Maamouri et al., 2004) and intersected with UniMorph 4 ara ∪ ara_new. Diacritized text is a requirement because orthographic forms drawn from undiacritized text are massively morphologically ambiguous. The text in the CHILDES Arabic corpora is undiacritized and thus unusable.\nGerman (Germanic, IE): German was drawn from the Leo Corpus (Behrens, 2006), the only morphologically annotated German corpus in CHILDES, and intersected with UniMorph 3+4. Only nouns and verbs were extracted because annotation for adjectives is inconsistent.\nEnglish (Germanic, IE): English was included because it is heavily studied despite its relatively sparse morphology. Data was extracted from all morphologically annotated CHILDES English-NA corpora and intersected with UniMorph 3+4. 
3 Only nouns and verbs were extracted due to inconsistent adjective annotation in both data sources.\nSpanish (Romance, IE): Spanish exhibits a variety of fusional and agglutinative patterns. Data was extracted from all morphologically annotated Spanish CHILDES corpora intersected with Spanish UniMorph 3+4. Non-Spanish vocabulary was removed by intersecting with UniMorph. Only nouns and verbs were extracted.\nSwahili (Bantu, Niger-Congo): Swahili morphology is highly regular and agglutinative with very large paradigms. Frequencies were drawn from Swahili Wikipedia dump 20221201 accessed through Huggingface (Wikimedia, 2022) and intersected with UniMorph 4 swc ∪ swc.sm. In cases where mapping inflected forms to UniMorph creates ambiguity due to syncretism, frequency was divided evenly across each triple sharing the inflected form. This ensured that the frequencies of inflected forms remain consistent with Wikipedia. Intersecting with UniMorph removed the large amount of non-Swahili vocabulary in the Wikipedia text.\nTurkish (Turkic): Turkish is also highly regular and agglutinative with very large paradigms. Frequencies were drawn from Turkish Wikipedia dump 20221201 accessed through Huggingface, intersected with UniMorph 4, and processed identically to Swahili." }, { "figure_ref": [], "heading": "Data Splits", "publication_ref": [], "table_ref": [], "text": "We employed three distinct sampling strategies to generate small (400 items) and large (1600) training, small (100) and large (400) fine-tuning, development (500), and test (1000) sets for each language. 4 Small training and fine-tuning are subsets of large training and fine-tuning. Each splitting strategy was applied five times with unique random seeds to produce distinct data sets.\nUNIFORM: Raw UniMorph 3+4 corpora were partitioned uniformly at random. This approach is most similar to that employed by SIGMORPHON shared tasks, except for 2017 and 2022.\nWEIGHTED: Identical to UNIFORM except splits were partitioned at random weighted by frequency. Small training+fine-tuning were sampled first, then additional items were sampled to create large training+fine-tuning. Training and fine-tuning sets were then split uniformly at random. Dev+test was next sampled by weight and then were separated uniformly. This frequencyweighted sampling is reminiscent of the 2017 shared task: it strongly biases the small training set towards high-frequency items and dev+test towards low-frequency items. Since most UniMorph forms do not occur in our corpora due to morphological sparsity, most triples had zero weight and were never sampled.\nOVERLAPAWARE: Similar to the 2022 SIG-MORPHON shared task. It enforces a maximum proportion of featsAttested pairs in the test set relative to train+fine-tuning: as close to 50% as pos-sible without exceeding it. This ensures that there is ample representation of each overlap type in test. It is adversarial, since featsNovel pairs are expected to be more challenging than featsAttested pairs. This process also tends to increase the proportion of lemmaOnly items in the test set. Only items with non-zero frequency were sampled.\nUNIFORM produces a heavy bias towards lower frequency words. For all languages and splits, the median frequency of sampled items is actually zero: that is, the majority of sampled items were not attested in our corpora. This is a consequence of the extreme sparsity of morphological forms discussed in Section 2. 
As a consequence, overlap between splits from different seeds is orders of magnitude lower for UNIFORM than the other strategies. WEIGHTED achieves the expected high-frequency bias in training sets relative to test sets.\nTable 1 provides average means and standard deviations for the proportion of featsAttested and featsNovel in test sets relative to small and large train. OVERLAPAWARE consistently achieves a roughly 50-50 split with low variability across languages and seeds. The other strategies bias test sets heavily towards featsAttested with high variance across languages and seeds. " }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b75", "b73" ], "table_ref": [], "text": "One non-neural and three neural systems were evaluated. These were chosen based on their availability and performance in recent shared tasks:\nCHR-TRM (Wu et al., 2021) is a character-level transformer that was used as a baseline in 2021 and 2022. We used the hyper-parameters suggested by the original authors for small training conditions.\nCLUZH-GR and CLUZH-B4 (Wehrli et al., 2022) is a character-level transducer which substantially outperformed CHR-TRM in the 2022 shared task. The results submitted for the shared task are from an elaborate ensemble model optimized for each language. For this work, we evaluate two published variants with consistent hyper-parameters across languages, CLUZH-GR with greedy decoding and CLUZH-B4 with beam decoding, beam size = 4.\nNONNEUR (Cotterell et al., 2017) has been used as a baseline in SIGMORPHON shared tasks since 2017. It heuristically extracts transformations between lemmas and inflected forms and applies a majority classifier conditioned on the associated feature sets. NONNEUR was trained on combined training and fine-tuning sets so that each architecture was exposed to the same amount of data." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "This section presents our analyses of the results. All evaluations report exact match accuracy. Overall accuracy refers to average accuracy on an entire evaluation set. Average overall accuracy refers to the mean of overall accuracy over all five seeds. See Appendix C for full breakdowns by language and architecture." }, { "figure_ref": [], "heading": "Effect of Training Size", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We begin by comparing average overall accuracy for each training size. All reported analyses focus on test, but there were no observable qualitative differences in behavior between dev and test. We summarize the results in Table 2, broken down by overlap partition and sampling strategy. The large training size consistently leads to higher accuracies than small training. Across languages, the average accuracy score difference between the two training sizes is 9.52%. Taking Arabic as an illustrative example, the score difference between the two training sizes ranges from 1.74% to 19.32% depending on model type and splitting strategy, with an average of 12.05%. " }, { "figure_ref": [ "fig_0" ], "heading": "Effect of Sampling Strategy", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We next turn to measuring the effect of sampling strategy on overall accuracy. 
Figure 2 WEIGHTED sampling leads to a higher accuracy compared to the other two strategies for every model type other than CHR-TRM, where the result from UNIFORM sampling (71.90%) is again slightly higher than that of WEIGHTED (71.60%).\nWhen considering other languages, we also find some variation. WEIGHTED sampling also yields the highest average accuracy scores across model types for Arabic, German, Spanish, and Turkish for both training sizes, except for Spanish under the large training condition with CLUZH-GR, where UNIFORM leads. In contrast, UNIFORM consistently results in the highest average accuracy on English and Swahili for both training sizes.\nAcross languages, the average accuracy from WEIGHTED is the highest for both large (83.75%) and small (74.22%) training sizes, followed by UNIFORM (large: 79.20%, small: 66.16%). OVER-LAPAWARE always yields the lowest accuracy. These observations align with our expectations about the adversarial nature of OVERLAPAWARE, where challenging featsNovel (Table 2) constitutes a much larger proportion test set (Table 1)." }, { "figure_ref": [ "fig_1" ], "heading": "Effect of Overlap", "publication_ref": [ "b29" ], "table_ref": [], "text": "We now provide an analysis of accuracy scores by overlap partition. Figure 3 provides a visualization of accuracy by partition across seeds broken down by training size, language, model type. Using Arabic again as an illustration, the average accuracy across model types and sampling strategies for large training is much higher for featsAttested (77.70%) than for featsNovel (41.92%), somewhat higher accuracy is achieved for both (79.53%) than for featsOnly (77.28%), and higher accuracy is achieved for lemmaOnly (49.12%) than for neither (41.92%). This ranking is consistent across model types, sampling strategies, and training sizes. Scores from these two overlap partitions are also higher than those from lemmaOnly and neither.\nThese patterns hold across languages. Specifically, we observe two general tendencies. First, the accuracy averaged across model types and sampling strategies is always substantially higher for featsAttested than it is for featsNovel; the average accuracy difference between the two is 49.75% for the large training, and 48.02% for small training. This is reflected in a full breakdown by overlap type: higher accuracy is consistently achieved for both and featsOnly, than for neither and lemmaOnly. This large asymmetry corresponds to our expectations regarding the effect of feature overlap on performance.\nWe provide three sub-analyses to further investigate this asymmetry and compare it with the lemma-based division advocated for by (Goldman et al., 2022). First, we compute the average accuracy difference between lemmaAttested (both ∪ lemmaOnly) and lemmaNovel (featsOnly ∪ neither). The score difference between lemmaAttested and lemmaNovel is less than 2% averaged across languages for both training sizes, which is an order of magnitude smaller than the difference between featsAttested and featsNovel. This trend is consistent with the results of the 2022 SIGMORPHON shared task, which also found a much greater impact of feature set attestation than lemma attestation.\nSecond, we measure the correlation between the proportion of featsAttested items (number featsAttested items divided by the size of the dev or test set), and overall accuracy (average accuracy on an entire dev or test set), as well as between the proportion of lemmaAttested and overall accuracy. 
We used Spearman's ρ, which assesses if there is any monotonic (not necessarily linear) relationship between the two variables. 6 If ρ between an overlap type and overall accuracy is high, it would suggest that the distribution of overlaps is an important driver of performance. lemmaAttested shows little correlation (small: 0.01, large: -0.10). However, we find substantial positive correlations for featsAttested (small: 0.69, large: 0.68).\nThird, we compute the correlation between the accuracy score of individual partitions and the overall accuracy score on UNIFORM and WEIGHTED vs. on OVERLAPAWARE. This demonstrates to what extent evaluation results based on each overlap partition resemble those captured by the overall accuracy and how it differs when overlaps are controlled during sampling. If the correlation is small, it suggests that the performance on a particular overlap partition is largely independent of the others and should be evaluated independently.\nWhen overlaps are not explicitly controlled, correlations are particularly strong for featsAttested because this partition makes up a large majority of the test set (Table 3). These partitions are also the ones that tend to show the highest performance, which is then reflected in the overall accuracy. However, for OVERLAPAWARE, correlations are higher between overall accuracy and the challenging partitions: featsNovel, lemmaOnly, and neither. They are also higher not only for featsNovel, but also lemmaAttested, and lemmaNovel even though these overlaps were not explicitly controlled. This demonstrates that OVERLAPAWARE sampling better balances individual partitions in its overall accuracy scores and can be expected to produce a more challenging evaluation. However, all partitions should be evaluated regardless of sampling strategy. Up to this point, we have considered all languages in the analysis. However, whether or not it is reasonable to expect a system to achieve high accuracy on featsNovel items varies typologically. For languages with highly regular and agglutinative morphologies, such as Swahili and Turkish, each feature in a feature set roughly corresponds to a single affix in a certain order with a limited number of allomorphs. For these languages, this dimension of generalization should often be straightforward. For languages with mixed systems, like Spanish and Arabic, and languages with fusional systems like English, the individual members of a feature set often do not have direct bearing on the inflected form. For these languages, generalization to a novel feature set is sometimes impossible when it cannot be inferred from its component features. The same problem applies to lemmas with erratic stem changes or suppletion.\nThus, if a model type can generalize to novel feature sets, one would expect that the accuracy gap between featsAttested and featsNovel would be lower for Swahili and Turkish than for the other languages. However, the gaps for these are actually larger than for German or Arabic. One would also expect the correlation between the proportion of featsAttested in the data and overall accuracy to be lower for Swahili and Turkish, however this is not borne out either. These findings, provided in Table 4, reveal that current leading inflection models do not necessarily generalize well to novel feature sets even in precisely the cases where they should be able to." 
}, { "figure_ref": [], "heading": "Model Ranking", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze how performance varies across the four model types. We first compare Model rankings for individual languages are much more variable, especially for large training. There is not a single model ranking that holds for every language. While CLUZH-B4 yields the best performance for three languages (German, Spanish, and Turkish), CHR-TRM outperforms other model types for Arabic and Swahili, and NONNEUR leads to the highest accuracy for English. There is less variation in model rankings for small training; the same model ranking was observed for German, English, and Spanish (NONNEUR > CLUZH-B4 > CLUZH-GR > CHR-TRM). Notably, for each individual language, the model rankings were always inconsistent between the two training sizes.\nSeveral trends emerge in model rankings by overlap partition. First, the model rankings based on the overall accuracy do not hold for the overlap partitions except for Arabic and Swahili large training. Second, within each overlap partition, model rankings are more stable across languages for small train than large. Third, on average, CLUZH-B4 outperforms the other model types on partitions with feature overlap whereas CHR-TRM leads on partitions without feature overlap. These tendencies resonate with our proposal in Section 2: future models of morphological inflection should be evaluated based on alternative metrics in addition to overall accuracy. They also reveal difference generalization strengths across models.\nWhen comparing performance by sampling strategy, we found lower variability for each language. For example, with UNIFORM large training, two model rankings turn out to be the most frequent, each observed in two languages. Among the models, CLUZH-B4 and CHR-TRM achieve the best performance. For small training, one model ranking holds for three out of the six languages (CLUZH-B4 > CLUZH-GR > CHR-TRM > NONNEUR). Considering both training sizes, there are no noticeable differences in terms of the most frequent model ranking across the three sampling strategies. For UNIFORM and WEIGHTED, the neural systems are always ranked among the highest for both training sizes; yet for OVERLAPAWARE with small training, NONNEUR achieves the highest performance for German, English, and Spanish." }, { "figure_ref": [], "heading": "Variability across Random Seeds", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Analysis so far relies on accuracy scores averaged across random seeds. The final component of our analysis investigates how much variation arises due to random data sampling. Given the five random seeds for each combination of language, sampling strategy, overlap partition, and model type, we calculated the score range, which is the difference between the lowest and the highest overall accuracy, as well as the standard deviation of the accuracy scores across the seeds, which we refer to as random seed variability.\nWe first considered the score range for overall accuracy for each language. For large training, the mean score range spans from 4.41% for Arabic, to 8.38% for English; the mean random seed variability follows the same trend (1.73% to 3.54%). For every language, the score range and random seed variability for the large training size are consistently larger than those derived from small training. In both cases, score ranges are non-negligible. 
Next, for each language, we analyze the average score range for each sampling strategy and model type separately. Comparing results from the three sampling strategies in Table 5, OVERLAPAWARE sampling consistently yields the highest score range and random seed variability. This indicates that OVERLAPAWARE, despite exhibiting the least variability in overlap partition sizes, is also the most variable in terms of model performance. This likely suggests that it is not just feature set attestation in general, but exactly which feature sets happen to appear in train vs. test, that drives performance. Finally, when looking at results for each individual model type, CLUZH-GR demonstrates the most variable performance. Its average score range (9.47% for large training, 7.94% for small) and its average random seed variability (4.03% for large training, 3.31% for small) are the highest." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We investigated the roles that sampling strategy, random seeds, and overlap types play in evaluating and analyzing the results of morphological inflection tasks and conclude that common practices leave much to be desired. We argue for frequency-weighted splitting to achieve more realistic train-test distributions and for feature/lemma overlap-aware sampling to directly investigate the generalization abilities of different models. The high score range observed for overlap-aware sampling relative to other strategies suggests that which feature sets happen to appear in train vs. test plays a major role in the ability of a model to generalize, though future work would need to confirm this.\nRegardless of sampling strategy, evaluation items of each overlap type should be used in addition to an overall analysis. The evaluation in this work reveals that all model types under investigation struggle to generalize to unseen feature sets, even for languages where that should be possible, a fact that has been overlooked in prior studies. Finally, results drawn from one data split are unlikely to be representative, so multiple splits should be made with different random seeds and compared, particularly for shared tasks and leaderboards where final model rankings matter." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b35", "b29", "b29" ], "table_ref": [], "text": "Our suggested approaches have two primary practical limitations. First, WEIGHTED sampling is restricted to languages with available running text sources for extracting frequencies. A project on extremely low-resource languages (e.g., Liu et al., 2022) may be restricted to UNIFORM and OVERLAPAWARE sampling. Second, as the number of seeds increases, so do requirements for training time and/or computing power. A shared task, for example, might limit itself to only a few seeds in order to assure on-time submissions. Future work would benefit from a wider selection of model architectures, along with more sampling strategies, and of course a wider sample of typologically diverse languages.\nNotably, this work reproduces the effect observed in the SIGMORPHON 2022 shared task (Kodner et al., 2022), which found a substantial performance hit for featsNovel relative to featsAttested, but not for lemmaNovel relative to lemmaAttested. However, both this work and the shared task fail to replicate the effect observed in Goldman et al. (2022), which reports a 95% performance hit on lemmaNovel vs. lemmaAttested. 
This may have something to do with differences in splitting algorithms, unmeasured feature overlap in Goldman et al. (2022), or choice of model architectures." }, { "figure_ref": [], "heading": "B Splitting Strategy Data Summaries", "publication_ref": [], "table_ref": [ "tab_8", "tab_9", "tab_10", "tab_12" ], "text": "This appendix contains Tables 6-9. " }, { "figure_ref": [], "heading": "C Detailed Results", "publication_ref": [], "table_ref": [ "tab_13", "tab_14" ], "text": "This appendix contains Tables 10 and 11. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Charles Yang, Jeffrey Heinz, Mitch Marcus, and the audience at Stony Brook University AT-LaC for their helpful discussion. Experiments were performed on the SeaWulf HPC cluster maintained by RCC and the Institute for Advanced Computational Science (IACS) at Stony Brook University and made possible by National Science Foundation (NSF) grant No. 1531492. The second author gratefully acknowledges funding through the IACS Graduate Research Fellowship and the NSF Graduate Research Fellowship Program under NSF Grant No. 2234683." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [ "b42" ], "table_ref": [], "text": "To the best of our knowledge, all results published in this paper are accurate, and we have represented prior work fairly to the best of our abilities. All data sources are free and publicly available, except for the Penn Arabic Treebank (Maamouri et al., 2004), which is accessible through the LDC (https://catalog.ldc.upenn.edu/LDC2005T20). No sensitive data was used which could violate individuals' privacy or confidentiality. Authorship and acknowledgements fairly reflect contributions." } ]
2023-05-25
10.18653/v1/N18-1083
[ { "authors": "Javier Aguado-Orea; Julian M Pine", "journal": "PloS One", "ref_id": "b0", "title": "Comparing different models of the development of verb inflection in early child Spanish", "year": "2015" }, { "authors": "Janet Bang; Aparna Nadig", "journal": "Autism Research", "ref_id": "b1", "title": "Learning language in autism: Maternal linguistic input contributes to later vocabulary", "year": "2015" }, { "authors": "Elizabeth Bates; Inge Bretherton; Lynn Sebestyen Snyder", "journal": "Cambridge University Press", "ref_id": "b2", "title": "From first words to grammar: Individual differences and dissociable mechanisms", "year": "1991" }, { "authors": "Khuyagbaatar Batsuren; Omer Goldman; Salam Khalifa; Nizar Habash; Witold Kieraś; Gábor Bella; Brian Leonard; Garrett Nicolai; Kyle Gorman; Yustinus Ghanggo Ate; Maria Ryskina; Sabrina Mielke; Elena Budianskaya; Charbel El-Khaissi; Tiago Pimentel; Michael Gasser; William Abbott Lane; Mohit Raj; Matt Coler; Jaime Rafael Montoya Samame; Delio Siticonatzi Camaiteri; Esaú Zumaeta Rojas; Didier López Francis; Arturo Oncevay; Juan López Bautista; Gema ; Celeste Silva Villegas; Lucas Torroba Hennigen; Adam Ek; David Guriel; Peter Dirix; Jean-Philippe Bernardy; Andrey Scherbakov; Aziyana Bayyr-Ool; Antonios Anastasopoulos; Roberto Zariquiey; Karina Sheifer; Sofya Ganieva; Hilaria Cruz; Ritván Karahóǧa; Stella Markantonatou; George Pavlidis; Matvey Plugaryov; Elena Klyachko; Ali Salehi; Candy Angulo; Jatayu Baxi; Andrew Krizhanovsky; Natalia Krizhanovskaya; Elizabeth Salesky; Clara Vania; Sardana Ivanova; Jennifer White; Rowan Hall Maudslay; Josef Valvoda; Ran Zmigrod; Paula Czarnowska; Irene Nikkarinen; Aelita Salchak; Brijesh Bhatt; Christopher Straughn; Zoey Liu; Jonathan North Washington; Yuval Pinter; Duygu Ataman; Marcin Wolinski; Totok Suhardijanto; Anna Yablonskaya; Niklas Stoehr; Hossep Dolatian; Zahroh Nuriah; Shyam Ratan; Francis M Tyers; M Edoardo; Grant Ponti; Aryaman Aiton; Richard J Arora; Ritesh Hatcher; Jeremiah Kumar; Daria Young; Anastasia Rodionova; Taras Yemelina; Igor Andrushko; Polina Marchenko; Alexandra Mashkovtseva; Emily Serova; Maria Prud'hommeaux; Fausto Nepomniashchaya; Eleanor Giunchiglia; Mans Chodroff; Miikka Hulden; Silfverberg; D Arya; David Mc-Carthy; Ryan Yarowsky; Reut Cotterell; Ekaterina Tsarfaty; Vylomova", "journal": "European Language Resources Association", "ref_id": "b3", "title": "UniMorph 4.0: Universal Morphology", "year": "2022" }, { "authors": "Heike Behrens", "journal": "Language and cognitive processes", "ref_id": "b4", "title": "The input-output relationship in first language acquisition", "year": "2006" }, { "authors": "Emily M Bender", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Linguistically naïve != language independent: Why NLP needs linguistic typology", "year": "2009" }, { "authors": " Berl; B Balsamo; Xu; Moore; J A Sl Weinstein; Conry; Pearl; Sachs; C Cb Grandin; Frattali", "journal": "Neurology", "ref_id": "b6", "title": "Seizure focus affects regional language networks assessed by fMRI", "year": "2005" }, { "authors": "Johannes Bjerva; Isabelle Augenstein", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "From phonology to syntax: Unsupervised linguistic typology at different levels with language embeddings", "year": "2018" }, { "authors": "Damian Blasi; Antonios Anastasopoulos; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Systematic inequalities in language 
technology performance across the world's languages", "year": "2022" }, { "authors": "Lynn Bliss", "journal": "The journal of applied developmental psychology", "ref_id": "b9", "title": "The development of modals", "year": "1988" }, { "authors": "Lois Bloom; Lois Hood; Patsy Lightbown", "journal": "Cognitive psychology", "ref_id": "b10", "title": "Imitation in language development: If, when, and why", "year": "1974" }, { "authors": "Lois Masket Bloom", "journal": "", "ref_id": "b11", "title": "Language development: Form and function in emerging grammars", "year": "1970" }, { "authors": "John Neil Bohannon; Iii ; Angela Lynn; Marquis ", "journal": "Child Development", "ref_id": "b12", "title": "Children's control of adult speech", "year": "1977" }, { "authors": "Susan R Braunwald", "journal": "Word", "ref_id": "b13", "title": "Mother-child communication: the function of maternal-language input", "year": "1971" }, { "authors": "R Michael; Jeffrey Brent; Siskind Mark", "journal": "Cognition", "ref_id": "b14", "title": "The role of exposure to isolated words in early vocabulary development", "year": "2001" }, { "authors": "Roger Brown", "journal": "Harvard University Press", "ref_id": "b15", "title": "A first language: The early stages", "year": "1973" }, { "authors": "Joan L Bybee", "journal": "Crosscurrents in second language acquisition and linguistic theories", "ref_id": "b16", "title": "Natural morphology: The organization of paradigms and language acquisition", "year": "1991" }, { "authors": "Giuseppe Capelli; Victoria Marrero; María José; Albala ", "journal": "Procesamiento del Lenguaje Natural", "ref_id": "b17", "title": "Aplicación del sistema morfo a una muestra de lenguaje infantil", "year": "1994" }, { "authors": "Erwin Chan", "journal": "", "ref_id": "b18", "title": "Structures and distributions in morphological learning", "year": "2008" }, { "authors": "Jane Chandlee", "journal": "Morphology", "ref_id": "b19", "title": "Computational locality in morphological maps", "year": "2017" }, { "authors": "Eve ", "journal": "", "ref_id": "b20", "title": "", "year": "" }, { "authors": " Clark", "journal": "Springer", "ref_id": "b21", "title": "Awareness of language: Some evidence from what children say and do", "year": "1978" }, { "authors": "Ryan Cotterell; Christo Kirov; John Sylak-Glassman; Géraldine Walther; Ekaterina Vylomova; D Arya; Katharina Mc-Carthy; Sabrina J Kann; Garrett Mielke; Miikka Nicolai; David Silfverberg; Jason Yarowsky; Mans Eisner; Hulden", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "The CoNLL-SIGMORPHON 2018 shared task: Universal morphological reinflection", "year": "2018" }, { "authors": "Viviana Fratini; Joana Acha; Itziar Laka", "journal": "Corpus Linguistics and Linguistic Theory", "ref_id": "b23", "title": "Frequency and morphological irregularity are independent variables. 
Evidence from a corpus study of Spanish verbs", "year": "2014" }, { "authors": "Catherine Garvey; Robert Hogan", "journal": "Child Development", "ref_id": "b24", "title": "Social speech and social interaction: Egocentrism revisited", "year": "1973" }, { "authors": "C Virginia; Gathercole", "journal": "Journal of Child Language", "ref_id": "b25", "title": "The acquisition of the present perfect: Explaining differences in the speech of Scottish and American children", "year": "1986" }, { "authors": "John D Susan A Gelman; Karl S Coley; Erin Rosengren; Athina Hartman; Frank C Pappas; Keil", "journal": "", "ref_id": "b26", "title": "Beyond labeling: The role of maternal input in the acquisition of richly structured categories", "year": "1998" }, { "authors": "Ronald Bradley; Gillam ; Nils A Pearson", "journal": "Pro-ed", "ref_id": "b27", "title": "TNL: test of narrative language", "year": "2004" }, { "authors": "Jean Berko; Gleason ", "journal": "Elsevier", "ref_id": "b28", "title": "The acquisition of social speech routines and politeness formulas", "year": "1980" }, { "authors": "Omer Goldman; David Guriel; Reut Tsarfaty", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "un)solving morphological inflection: Lemma overlap artificially inflates models' performance", "year": "2022" }, { "authors": "S William; William C Hall; Tirre", "journal": "Center for the Study of Reading Technical Report", "ref_id": "b30", "title": "The communicative environment of young children: Social class, ethnic, and situational differences", "year": "1979" }, { "authors": "John Heilmann; Susan Ellis Weismer; Julia Evans; Christine Hollar", "journal": "American Journal of Speech-Language Patholog", "ref_id": "b31", "title": "Utility of the MacArthur-Bates Communicative Development Inventory in identifying language abilities of latetalking and typically developing toddlers", "year": "2005" }, { "authors": "Roy Patrick; Higginson ", "journal": "", "ref_id": "b32", "title": "Fixing: Assimilation in language acquisition", "year": "1985" }, { "authors": "Christo Kirov; Ryan Cotterell; John Sylak-Glassman; Géraldine Walther; Ekaterina Vylomova; Patrick Xia; Manaal Faruqui; Sabrina J Mielke; Arya Mccarthy; Sandra Kübler; David Yarowsky; Jason Eisner; Mans Hulden", "journal": "European Language Resources Association (ELRA", "ref_id": "b33", "title": "UniMorph 2.0: Universal Morphology", "year": "2018" }, { "authors": "Jordan Kodner", "journal": "Oxford University Press", "ref_id": "b34", "title": "Computational Models of Morphological Learning", "year": "2022" }, { "authors": "Jordan Kodner; Salam Khalifa; Khuyagbaatar Batsuren; Hossep Dolatian; Ryan Cotterell; Faruk Akkus; Antonios Anastasopoulos; Taras Andrushko; Aryaman Arora; Nona Atanalov; Gábor Bella; Elena Budianskaya; Yustinus Ghanggo Ate; Omer Goldman; David Guriel; Simon Guriel; Silvia Guriel-Agiashvili; Witold Kieraś; Andrew Krizhanovsky; Natalia Krizhanovsky; Igor Marchenko; Magdalena Markowska; Polina Mashkovtseva; Maria Nepomniashchaya; Daria Rodionova; Karina Scheifer; Alexandra Sorova; Anastasia Yemelina; Jeremiah Young; Ekaterina Vylomova", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "SIGMORPHON-UniMorph 2022 shared task 0: Generalization and typologically diverse morphological inflection", "year": "2022" }, { "authors": "Stan A Kuczaj; I I ", "journal": "Journal of verbal learning and verbal behavior", "ref_id": "b36", "title": "The acquisition of regular and irregular past tense 
forms", "year": "1977" }, { "authors": "Constantine Lignos; Charles Yang", "journal": "Cambridge handbook of morphology", "ref_id": "b37", "title": "Morphology and language acquisition", "year": "2018" }, { "authors": "Josetxu Linaza; María Eugenia Sebastián; Cristina Del Barrio", "journal": "Infancia y Aprendizaje", "ref_id": "b38", "title": "Lenguaje, comunicación y comprensión: Conferencia a nual de la sección de psicología del desarrollo de la british psychological society", "year": "1981" }, { "authors": "Zoey Liu; Emily Prud; ' Hommeaux", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b39", "title": "Datadriven model generalizability in crosslinguistic lowresource morphological segmentation", "year": "2022" }, { "authors": "Zoey Liu; Crystal Richardson; Richard Hatcher; Emily Prud; ' Hommeaux", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Not always about you: Prioritizing community needs when developing endangered language technology", "year": "2022" }, { "authors": "Susana López; Ornat ", "journal": "Contemporary perspectives on the acquisition of Spanish", "ref_id": "b41", "title": "What lies in between a pre-grammatical and a grammatical representation? Evidence on nominal and verbal form-function mappings in Spanish from 1; 7 to 2; 1", "year": "1997" }, { "authors": "Mohamed Maamouri; Ann Bies; Tim Buckwalter; Wigdan Mekki", "journal": "", "ref_id": "b42", "title": "The Penn Arabic Treebank: Building a large-scale annotated Arabic corpus", "year": "2004" }, { "authors": "Brian Macwhinney", "journal": "Journal of Speech, Language and Hearing Research", "ref_id": "b43", "title": "The CHILDES language project: Tools for analyzing talk", "year": "1991" }, { "authors": "Brian Macwhinney", "journal": "Psychology Press", "ref_id": "b44", "title": "The CHILDES Project: The Database", "year": "2000" }, { "authors": "Brian Macwhinney; Catherine Snow", "journal": "Journal of Child Language", "ref_id": "b45", "title": "The child language data exchange system", "year": "1985" }, { "authors": "María Del; Carmen Aguirre Martínez; Sonia Mariscal; Altares ", "journal": "Editorial UNED", "ref_id": "b46", "title": "Cómo adquieren los niños la gramática de su lengua: perspectivas teóricas", "year": "2005" }, { "authors": "D Arya; Christo Mccarthy; Matteo Kirov; Amrit Grella; Patrick Nidhi; Kyle Xia; Ekaterina Gorman; Sabrina J Vylomova; Garrett Mielke; Miikka Nicolai; Timofey Silfverberg; Nataly Arkhangelskiy; Andrew Krizhanovsky; Elena Krizhanovsky; Alexey Klyachko; John Sorokin; Valts Mansfield; Yuval Ernštreits; Cassandra L Pinter; Ryan Jacobs; Mans Cotterell; David Hulden; Yarowsky", "journal": "European Language Resources Association", "ref_id": "b47", "title": "UniMorph 3.0: Universal Morphology", "year": "2020" }, { "authors": "D Arya; Ekaterina Mccarthy; Shijie Vylomova; Chaitanya Wu; Lawrence Malaviya; Garrett Wolf-Sonkin; Christo Nicolai; Miikka Kirov; Sabrina J Silfverberg; Jeffrey Mielke; Ryan Heinz; Mans Cotterell; Hulden", "journal": "Association for Computational Linguistics", "ref_id": "b48", "title": "The SIGMORPHON 2019 shared task: Morphological analysis in context and crosslingual transfer for inflection", "year": "2019" }, { "authors": "Lorraine Mccune", "journal": "Developmental psychology", "ref_id": "b49", "title": "A normative study of representational play in the transition to language", "year": "1995" }, { "authors": "Rosa Montes", "journal": "", "ref_id": "b50", "title": "Secuencias de 
clarificación en conversaciones con niños (morphe 3-4)", "year": "1987" }, { "authors": "Colleen E Morisset; Kathryn E Barnard; Cathryn L Booth", "journal": "Developmental Psychology", "ref_id": "b51", "title": "Toddlers' language development: Sex differences within social risk", "year": "1995" }, { "authors": "Katherine Nelson", "journal": "Harvard University Press", "ref_id": "b52", "title": "Narratives from the crib", "year": "2006" }, { "authors": "Meredith L Rochelle S Newman; Nan Bernstein Rowe; Ratner", "journal": "Journal of child language", "ref_id": "b53", "title": "Input and uptake at 7 months predicts toddler vocabulary: The role of child-directed speech and infant processing skills in language development", "year": "2016" }, { "authors": "G Johanna; Ann E Nicholas; Geers", "journal": "Journal of Speech, Language, and Hearing Research", "ref_id": "b54", "title": "Communication of oral deaf and normally hearing children at 36 months of age", "year": "1997" }, { "authors": "Anat Ninio; Catherine E Snow; Barbara A Pan; Pamela R Rollins", "journal": "Journal of communication disorders", "ref_id": "b55", "title": "Classifying communicative acts in children's interactions", "year": "1994" }, { "authors": "Kemal Oflazer; Murat Saraçlar", "journal": "Springer", "ref_id": "b56", "title": "Turkish natural language processing", "year": "2018" }, { "authors": "Ann M Peters", "journal": "Text-Interdisciplinary Journal for the Study of Discourse", "ref_id": "b57", "title": "The role of imitation in the developing syntax of a blind child", "year": "1987" }, { "authors": "Tiago Pimentel; Maria Ryskina; Sabrina J Mielke; Shijie Wu; Eleanor Chodroff; Brian Leonard; Garrett Nicolai; Yustinus Ghanggo Ate; Salam Khalifa; Nizar Habash; Charbel El-Khaissi; Omer Goldman; Michael Gasser; William Lane; Matt Coler; Arturo Oncevay; Jaime Rafael Montoya Samame; Gema ; Celeste Silva Villegas; Adam Ek; Jean-Philippe Bernardy; Andrey Shcherbakov; Aziyana Bayyr-Ool; Karina Sheifer; Sofya Ganieva; Matvey Plugaryov; Elena Klyachko; Ali Salehi; Andrew Krizhanovsky; Natalia Krizhanovsky; Clara Vania; Sardana Ivanova; Aelita Salchak; Christopher Straughn; Zoey Liu; Jonathan North Washington; Duygu Ataman; Witold Kieraś; Marcin Woliński; Totok Suhardijanto; Niklas Stoehr; Zahroh Nuriah; Shyam Ratan; Francis M Tyers; M Edoardo; Grant Ponti; Richard J Aiton; Emily Hatcher; Ritesh Prud'hommeaux; Mans Kumar; Botond Hulden; Dorina Barta; Gábor Lakatos; Judit Szolnok; Mohit Ács; David Raj; Ryan Yarowsky; Ben Cotterell; Ekaterina Ambridge; Vylomova", "journal": "Association for Computational Linguistics", "ref_id": "b58", "title": "SIGMORPHON 2021 shared task on morphological reinflection: Generalization across languages", "year": "2021" }, { "authors": "Steven Pinker; Michael T Ullman", "journal": "Trends in Cognitive Sciences", "ref_id": "b59", "title": "The past and future of the past tense", "year": "2002" }, { "authors": " Remedi", "journal": "", "ref_id": "b60", "title": "Creación de corpus de datos sobre estudio longitudinal de adquisición de lenguaje de una niña de la región central de Argentina", "year": "2014" }, { "authors": "Brian Roark; Richard Sproat", "journal": "Oxford University Press", "ref_id": "b61", "title": "Computational approaches to morphology and syntax", "year": "2007" }, { "authors": "Pamela Rosenthal; Rollins ", "journal": "Applied Psycholinguistics", "ref_id": "b62", "title": "Caregivers' contingent comments to 9-month-old infants: Relationships with later language", "year": "2003" }, { 
"authors": "Jacqueline Sachs; Nelson", "journal": "Children's Language", "ref_id": "b63", "title": "Talking about the there and then: The emergence of displaced reference in parent-child discourse", "year": "1983" }, { "authors": "Sawyer Keith", "journal": "Psychology Press", "ref_id": "b64", "title": "Pretend play as improvisation: Conversation in the preschool classroom", "year": "2013" }, { "authors": "Mark S Seidenberg; D Plaut", "journal": "Cognitive Science", "ref_id": "b65", "title": "Quasiregularity and its discontents: The legacy of the past tense debate", "year": "2014" }, { "authors": "Melanie Soderstrom; Megan Blossom; Rina Foygel; James L Morgan", "journal": "Journal of Child Language", "ref_id": "b66", "title": "Acoustical cues and grammatical units in speech to two preverbal infants", "year": "2008" }, { "authors": "Richard A Sprott", "journal": "Discourse Processes", "ref_id": "b67", "title": "Children's use of discourse markers in disputes: Form-function relations and discourse in child language", "year": "1992" }, { "authors": "Patrick Suppes", "journal": "American Psychologist", "ref_id": "b68", "title": "The semantics of children's language", "year": "1974" }, { "authors": "Virginia Valian", "journal": "Cognition", "ref_id": "b69", "title": "Syntactic subjects in the early speech of American and Italian children", "year": "1991" }, { "authors": "Lori J Van Houton", "journal": "", "ref_id": "b70", "title": "The role of maternal input in the acquisition process: The communicative strategies of adolescent and older mothers with the language learning children", "year": "1986" }, { "authors": "Ekaterina Vylomova; Jennifer White; Elizabeth Salesky; Sabrina J Mielke; Shijie Wu; Maria Edoardo; Rowan Ponti; Ran Hall Maudslay; Josef Zmigrod; Svetlana Valvoda; Francis Toldova; Elena Tyers; Ilya Klyachko; Natalia Yegorov; Paula Krizhanovsky; Irene Czarnowska; Andrew Nikkarinen; Tiago Krizhanovsky; Lucas Pimentel; Christo Torroba Hennigen; Garrett Kirov; Adina Nicolai; Antonios Williams; Hilaria Anastasopoulos; Eleanor Cruz; Ryan Chodroff; Miikka Cotterell; Mans Silfverberg; Hulden", "journal": "Association for Computational Linguistics", "ref_id": "b71", "title": "SIGMORPHON 2020 shared task 0: Typologically diverse morphological inflection", "year": "2020" }, { "authors": "Amye Warren-Leubecker", "journal": "", "ref_id": "b72", "title": "Sex differences in speech to children", "year": "1982" }, { "authors": "Silvan Wehrli; Simon Clematide; Peter Makarov", "journal": "Association for Computational Linguistics", "ref_id": "b73", "title": "CLUZH at SIGMORPHON 2022 shared tasks on morpheme segmentation and inflection generation", "year": "2022" }, { "authors": "Aleksandra Richard M Weist; Karen Pawlak; Hoffman", "journal": "Linguistics", "ref_id": "b74", "title": "Finiteness systems and lexical aspect in child Polish and English", "year": "2009" }, { "authors": "Shijie Wu; Ryan Cotterell; Mans Hulden", "journal": "Association for Computational Linguistics", "ref_id": "b75", "title": "Applying the transformer to character-level transduction", "year": "2021" }, { "authors": "Shijie Wu; Ryan Cotterell; Timothy O' Donnell", "journal": "Association for Computational Linguistics", "ref_id": "b76", "title": "Morphological irregularity correlates with frequency", "year": "2019" }, { "authors": "Karina Hess Zimmermann", "journal": "Bloom", "ref_id": "b77", "title": "El desarrollo linguístico en los años escolares: análisis de narraciones infantiles", "year": "1970" }, { "authors": " Bloom", 
"journal": "GRERLI", "ref_id": "b78", "title": "A.2 Spanish The following CHILDES corpora were used to create the Spanish data set", "year": "1971" }, { "authors": "; Macwhinney; Hess (zimmermann", "journal": "", "ref_id": "b79", "title": "AguadoOrea/Pine", "year": "1981" }, { "authors": "Aguado-Orea ; Pine ", "journal": "MacWhinney", "ref_id": "b80", "title": "Ornat (López Ornat", "year": "1997" } ]
[]
Morphological Inflection: A Reality Check
Morphological inflection is a popular task in sub-word NLP with both practical and cognitive applications. For years now, state-of-the-art systems have reported high, but also highly variable, performance across data sets and languages. We investigate the causes of this high performance and high variability; we find several aspects of data set creation and evaluation which systematically inflate performance and obfuscate differences between languages. To improve generalizability and reliability of results, we propose new data sampling and evaluation strategies that better reflect likely use cases. Using these new strategies, we make new observations on the generalization abilities of current inflection systems.
Jordan Kodner; Sarah Payne; Salam Khalifa; Zoey Liu; Ryan Cotterell; Christo Kirov; John Sylak-Glassman; Géraldine Walther; Micha Elsner; Andrea D Sims; Alexander Erdmann; Antonio Hernandez; Evan Jaffe; Lifeng Jin
[ { "figure_caption": "Figure 2 :2Figure 2: Overall accuracy for each language/seed by training size, sampling strategy, and model type.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Accuracy on OVERLAPAWARE splits for each partition/seed by training size, language, and model type. featsAttested = both (green) and featsOnly (gold). featsNovel = lemmaOnly (violet) and neither (red).", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "5 ", "figure_data": "Test vs S Trainµ %featsAttestedσUNIFORM80.33%19.50%WEIGHTED90.4411.13OVERLAPAWARE48.810.98Test vs L Trainµ %featsAttestedσUNIFORM96.17%5.55%WEIGHTED95.367.28OVERLAPAWARE49.920.17Table 1:Language-by-language average meanpercentage and standard deviation for propor-tion of featsAttested attested in test relative tosmall and large training. %featsNovel= 100 -%featsAttested.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Overall accuracy across languages by overlap type in test.", "figure_data": "Test vs S TrainfeatsAttested featsNovelUNIFORM70.47%33.57%WEIGHTED79.2522.77OVERLAPAWARE79.6031.13Test vs L TrainfeatsAttested featsNovelUNIFORM80.00%55.57%WEIGHTED85.9423.74OVERLAPAWARE86.2235.51", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "CLUZH-B4 (68.58%) > CLUZH-GR (67.97%) > CHR-", "figure_data": "Train Language Avg. Score featsAttestedSizeStrategyDifference∼Accuracy ρSmallArabic33.00%0.57Swahili40.040.63German40.350.23Turkish41.960.83Spanish52.600.75English74.100.66LargeArabic35.79%0.44German36.190.73Swahili39.260.64Turkish52.140.59Spanish61.010.64English80.170.82Table 4: Avg. score difference between featsAttestedand featsNovel and correlation between propor-tion featsAttested and overall accuracy by lan-guage/training size, ranked by score difference.model performance based on the average overall ac-curacy. Averaged across the six languages, CLUZH-B4 ranks among the highest, while NONNEUR con-sistently achieves the lowest performance.", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Average score range and random seed variability across languages for each sampling strategy for both training sizes.", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Average training and test item mean corpus frequency (µµ) and median frequency (µM ).", "figure_data": "TrainTestArabicµµµMµµµMUNIFORM0.4600.470WEIGHTED57.531826.4412OVERLAPAWARE6.7226.462EnglishµµµMµµµMUNIFORM9.7101.240WEIGHTED1840.51362122.5567OVERLAPAWARE182.295163.225GermanµµµMµµµMUNIFORM0.1400.180WEIGHTED111.99209.565OVERLAPAWARE25.46230.422SpanishµµµMµµµMUNIFORM0.1200.130WEIGHTED119.152913.898OVERLAPAWARE25.50221.972SwahiliµµµMµµµMUNIFORM40.13038.380WEIGHTED518.95888.114OVERLAPAWARE130.003143.393TurkishµµµMµµµMUNIFORM26.63026.60WEIGHTED4854.131252588.76348OVERLAPAWARE436.4112397.9412", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Average Jaccard similarity quantifying overlap between large training samples (J LT rain ) across random seeds and similarity between test samples (J T est ) across seeds. 
J ∈ [0, 100] where 100 indicates that all UniMorph triples appear in all training sets", "figure_data": "Raw UniMorphUniMorph×Freq#L#F#T#L#F#TArabic128155678341131162830056035English399758117160938370616528German3941711359914144604410501Spanish656891751286348359211711337Swahili184257151491802253725Turkish3579883570420164924224332", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Type frequencies for lemmas (#L), feature sets(#F), and triples (#T) for each language data set. RawUniMorph (3+)4 and intersected with frequency.", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Language-by-language average mean percentage of each overlap type in test sets relative to small and large training. Standard deviations are (italicized). OVERLAPAWARE targets a featsAttested relative to large train as close to 50% as possible without exceeding it. %featsAttested = %both + %featsOnly and %featsNovel = %lemmaOnly + %neither.", "figure_data": "NONNEUR Test vs S Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM70.9266.7517.1619.1067.5016.9459.83WEIGHTED67.8677.938.1513.0774.989.9168.79OVERLAPAWARE66.4775.4317.7926.5573.3924.6348.30NONNEUR Test vs L Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM73.5966.0021.8525.7571.6631.7270.33WEIGHTED75.3583.628.069.1779.157.6176.1oOVERLAPAWARE74.5282.4918.5729.3177.8424.3351.03CHR-TRM Test vs S Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM70.0261.0558.6130.4867.7039.3665.33WEIGHTED79.1869.3643.6026.2075.0836.1572.27OVERLAPAWARE80.2872.4638.1530.8678.0635.9756.67CHR-TRM Test vs L Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM79.6076.6163.8539.9279.5155.7278.82WEIGHTED89.4285.4259.6237.8189.4852.6488.56OVERLAPAWARE89.7886.5645.6538.8789.8343.9266.85CLUZH-B4 Test vs S Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM77.0971.7557.1333.2273.8739.7270.29WEIGHTED78.3586.2226.1821.4083.6722.6378.09OVERLAPAWARE79.9784.8630.4332.0083.6632.1657.38CLUZH-B4 Test vs L Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM88.1479.8072.6647.3486.0269.8685.42WEIGHTED86.1490.3920.6320.9388.2217.7185.83OVERLAPAWARE88.3191.8135.3541.2089.7837.6863.70CLUZH-GR Test vs S Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM75.7270.7755.2731.8972.8338.2769.21WEIGHTED77.7985.9125.7521.2283.2822.3877.72OVERLAPAWARE79.7884.5029.9831.4983.2831.7857.00CLUZH-GR Test vs L Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM85.1575.8365.5443.4382.8365.0082.24WEIGHTED84.6589.1720.1717.1386.8917.0184.52OVERLAPAWARE85.7689.6433.9140.0487.4236.1261.74", "figure_id": "tab_12", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Average percent accuracy across seeds and models on the test set by architecture.", "figure_data": "Overall Test vs S Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM73.4467.5847.0528.6770.4733.5766.16WEIGHTED75.7979.8625.9220.4779.2522.7774.22OVERLAPAWARE76.6279.3129.0930.2279.6031.1354.84Overall Test vs L Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM81.6274.5655.9739.1180.0055.5779.20WEIGHTED83.8987.1527.1221.2685.9423.7483.75OVERLAPAWARE84.5987.6333.3737.3686.2235.5160.83Ara Test vs S 
Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM72.5267.8654.8450.5868.0650.8062.80WEIGHTED73.8273.1535.7923.9873.2426.5468.82OVERLAPAWARE63.7766.3333.4230.9766.1431.1147.81Ara Test vs L Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM83.6076.5262.5744.3177.6748.6276.76WEIGHTED79.9278.9538.2923.6779.3431.0477.76OVERLAPAWARE75.0776.3646.4945.9976.0946.0961.06Deu Test vs S Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM63.6160.00-28.2760.0628.2759.65WEIGHTED78.2276.7326.0616.4876.9120.1875.81OVERLAPAWARE73.9073.8838.9841.8074.1241.6057.84Deu Test vs L Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM75.3773.07-73.3373.1673.3373.14WEIGHTED85.3584.3725.000.0084.7214.5884.64OVERLAPAWARE81.2282.0040.0244.2581.8443.2462.54Eng Test vs S Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM97.2293.34-0.0093.350.0093.14WEIGHTED76.9088.43--87.20-87.20OVERLAPAWARE84.3088.5317.1019.1488.4518.9953.72Eng Test vs L Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM95.6696.49--96.48-96.48WEIGHTED84.2595.26--91.83-91.83OVERLAPAWARE89.9692.1117.8119.8091.9519.3255.63Spa Test vs S Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM75.0971.2446.8739.5871.3539.6767.67WEIGHTED65.9783.0310.028.3677.749.5972.22OVERLAPAWARE68.6084.409.9427.1479.9021.9250.35Spa Test vs L Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM84.0983.39--83.50-83.50WEIGHTED80.7392.1624.6038.8985.9424.7484.77OVERLAPAWARE82.5794.2016.0635.4287.9224.8356.37Swc Test vs S Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM89.6869.8963.6131.1487.0260.0882.22WEIGHTED80.4175.5629.4126.0479.2729.1262.79OVERLAPAWARE85.8378.3143.1631.0584.7941.7562.28Swc Test vs L Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM90.7458.5659.706.2589.2657.2788.01WEIGHTED82.3077.4040.7733.7581.8840.6673.36OVERLAPAWARE88.5388.4244.1143.2488.5644.0166.14Tur Test vs S Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM42.5143.1422.8522.4642.9922.6131.51WEIGHTED79.4682.2428.3227.5181.1528.4178.46OVERLAPAWARE83.3384.4231.9331.2384.1831.4357.03Tur Test vs L Trainboth%featsOnlylemmaOnlyneitherfeatsAttestedfeatsNoveloverallUNIFORM60.2459.3445.6532.5559.9443.0857.33WEIGHTED90.8094.756.9310.0091.917.7090.16OVERLAPAWARE90.2192.6735.7235.4490.9435.5963.23", "figure_id": "tab_13", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Language-by-language average percent accuracy across seeds and models on the test set. Dashes indicate overlap partitions with size zero.", "figure_data": "", "figure_id": "tab_14", "figure_label": "11", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "(Bender, 2009)", "Explanation": "The cited work by Bender (2009) provides foundational evidence on the importance of morphological inflection in natural language generation for languages with complex morphological systems."}, {"Category": "Extension or Continuation", "Citation": "(Oflazer and Sara\u00e7lar, 2018)", "Explanation": "The cited work by Oflazer and Sara\u00e7lar (2018) extends the research on morphological inflection by focusing on the specific application of natural language generation in languages with complex morphological systems."}, {"Category": "Methodological Basis", "Citation": "(Roark and Sproat, 2007)", "Explanation": "The cited work by Roark and Sproat (2007) provides a methodological basis for the string-to-string mapping problem in morphological inflection, which is a critical element in the study of neural network architectures."}, {"Category": "Supporting Evidence", "Citation": "(Chandlee, 2017)", "Explanation": "The cited work by Chandlee (2017) provides additional evidence on the use of morphological inflection in string-to-string mapping problems, which is a key aspect in testing the properties of neural network architectures."}, {"Category": "Extension or Continuation", "Citation": "(Pinker and Ullman, 2002)", "Explanation": "The cited work by Pinker and Ullman (2002) extends the research on computational models of inflection in the field of cognitive science and linguistics, focusing on the use of computational models to arbitrate between theories of morphological representation and acquisition."}, {"Category": "Extension or Continuation", "Citation": "(Seidenberg and Plaut, 2014)", "Explanation": "The cited work by Seidenberg and Plaut (2014) further extends the research on computational models of inflection, providing evidence on the use of computational models in the study of morphological representation and acquisition in languages."}, {"Category": "Extension or Continuation", "Citation": "(Bjerva and Augenstein, 2018)", "Explanation": "The cited work by Bjerva and Augenstein (2018) extends the research on computational typology by focusing on the use of computational models in the study of morphological inflection in different languages."}, {"Category": "Extension or Continuation", "Citation": "(Elsner et al., 2019)", "Explanation": "The cited work by Elsner et al. 
(2019) further extends the research on computational typology by exploring the use of computational models in the study of morphological inflection in different languages."}, {"Category": "Data Source", "Citation": "(Kirov et al., 2018)", "Explanation": "The cited work provides the UniMorph Database as a data source for training and evaluation sets in the citing paper."}, {"Category": "Data Source", "Citation": "(McCarthy et al., 2020)", "Explanation": "The cited work contributes the data from the UniMorph Database to the training and evaluation sets in the citing paper."}, {"Category": "Data Source", "Citation": "(Batsuren et al., 2022)", "Explanation": "The cited work provides the data from the UniMorph Database to the training and evaluation sets in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Lignos and Yang, 2018)", "Explanation": "The cited work provides evidence that morphological forms exhibit a highly skewed Zipfian distribution in any large text, which supports the claim in the citing paper that uniform sampling is unrealistic and creates an unnatural bias towards low-frequency types."}, {"Category": "Supporting Evidence", "Citation": "(Bybee, 1991)", "Explanation": "The cited work provides evidence that high frequency is correlated with irregularity across many but not all languages, which supports the claim in the citing paper that this bias towards more regular and reliable training items is a result of the uniform sampling method."}, {"Category": "Supporting Evidence", "Citation": "(Fratini et al., 2014)", "Explanation": "The cited work provides evidence that high frequency is correlated with irregularity across many but not all languages, which supports the claim in the citing paper that this bias towards more regular and reliable training items is a result of the uniform sampling method."}, {"Category": "Supporting Evidence", "Citation": "(Wu et al., 2019)", "Explanation": "The cited work provides evidence that high frequency is correlated with irregularity across many but not all languages, which supports the claim in the citing paper that this bias towards more regular and reliable training items is a result of the uniform sampling method."}, {"Category": "Data Source", "Citation": "(Cotterell et al., , 2016)", "Explanation": "The cited work provides the data set that is used in the evaluation of different models in the shared tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Cotterell et al., , 2017)", "Explanation": "The cited work also contributes to the data set used in the evaluation of different models in the shared tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Cotterell et al., , 2018)", "Explanation": "The cited work is another data set that is used in the evaluation of different models in the shared tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(McCarthy et al., 2019)", "Explanation": "The cited work provides the data set that is used in the evaluation of different models in the shared tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Vylomova et al., 2020)", "Explanation": "The cited work is another data set that is used in the evaluation of different models in the shared tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Pimentel et al., 2021)", "Explanation": "The cited work contributes to the data set used in the evaluation of different models in the shared tasks 
mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Kodner et al., 2022)", "Explanation": "The cited work is the data set that is used in the evaluation of different models in the shared tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Liu and Prud'hommeaux, 2022)", "Explanation": "The cited work highlights the data limitation that is a challenge in the evaluation of different models in the shared tasks mentioned in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Liu and Prud'hommeaux, 2022)", "Explanation": "The cited work by Liu and Prud'hommeaux (2022) provides evidence of the concerns raised in the citing paper regarding the generalizability of model performance in low-resource morphological segmentation. The citing paper extends this approach to morphological inflection by applying multiple data splits and evaluating variability between splits."}, {"Category": "Methodological Basis", "Citation": "(Goldman et al., 2022)", "Explanation": "The cited work by Goldman et al. (2022) provides a method for evaluating morphological inflection models that involves controlling for feature overlap and avoiding lemma overlap in test sets. The citing paper builds upon this method to further improve the evaluation of these models by also considering the value of feature overlap in the evaluation process."}, {"Category": "Methodological Basis", "Citation": "(3)", "Explanation": "The cited work by Goldman et al. (2022) provides a partition scheme for distinguishing two overlap types in a system, which the citing paper adopts to structure the analysis of the system performance in terms of in-vocabulary feature sets and OOV feature sets."}, {"Category": "Data Source", "Citation": "(MacWhinney, 2000)", "Explanation": "The cited work provides the child-directed speech (CDS) corpora used to create the frequency lists for the morphological inflection task in the citing paper."}, {"Category": "Data Source", "Citation": "(Maamouri et al., 2004)", "Explanation": "The cited work provides the diacritized and morphologically annotated Penn Arabic Treebank, the source of the Modern Standard Arabic frequencies used in the frequency+UniMorph data set in the citing paper."}, {"Category": "Data Source", "Citation": "(Behrens, 2006)", "Explanation": "The cited work by Behrens (2006) is the source of the morphologically annotated German data in the Leo Corpus, which was used in the creation of the frequency+UniMorph data set in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Wu et al., 2021)", "Explanation": "The cited work, CHR-TRM, is a character-level transformer that serves as a baseline in the evaluation of the non-neural and neural systems in the citing paper. 
The hyper-parameters suggested by the original authors for small training conditions are used in the evaluation process."}, {"Category": "Extension or Continuation", "Citation": "(Wehrli et al., 2022)", "Explanation": "The cited work, CLUZH-GR and CLUZH-B4, is a character-level transducer that outperformed CHR-TRM in the 2022 shared task. The citing paper evaluates two published variants of the transducer with consistent hyper-parameters across languages, building upon the research of the cited work."}, {"Category": "Data Source", "Citation": "(Cotterell et al., 2017)", "Explanation": "The cited work, NONNEUR, has been used as a baseline in SIGMORPHON shared tasks since 2017. The citing paper uses the baseline system in its evaluation of the non-neural and neural systems, acknowledging the data source for the evaluation process."}, {"Category": "Supporting Evidence", "Citation": "(Goldman et al., 2022)", "Explanation": "The cited work by Goldman et al. provides a division based on lemma-based analysis, which is used as a reference point in the citing paper to compare the results of the full breakdown by overlap type."}, {"Category": "Methodological Basis", "Citation": "(Kodner et al., 2022)", "Explanation": "The cited work provides the results of the SIGMORPHON 2022 shared task, which the citing paper uses to support the claim that there is a substantial performance hit for featsNovel relative to featsAttested."}, {"Category": "Data Source", "Citation": "(Goldman et al., 2022)", "Explanation": "The cited work reports a performance hit on lemmaNovel vs. lemmaAttested, which the citing paper uses as a data source to support the claim that the effect observed in Goldman et al. (2022) is not replicated in the current study."}, {"Category": "Methodological Basis", "Citation": "(Goldman et al., 2022)", "Explanation": "The cited work by Goldman et al. (2022) is used as a reference for the unmeasured feature overlap in the study conducted in the citing paper. The authors acknowledge the need to consider this factor in their research."}, {"Category": "Data Source", "Citation": "(Maamouri et al., 2004)", "Explanation": "The cited work provides the data source for the Penn Arabic Treebank, which is used in the study conducted in the citing paper."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b35", "b45", "b32", "b36", "b1", "b10", "b45", "b6", "b39", "b24", "b24", "b0", "b13", "b13", "b25", "b25" ], "table_ref": [], "text": "Graph neural networks (GNNs) have emerged as a popular tool for representation learning on graph-structured data [35]. To enhance the learning power of GNNs, many attempts have been made to study GNN propagation from different perspectives, such as optimization [45,32], statistical tests [36] and gradient flows [2,10]. In particular, treating GNN propagation as an optimization process allows one to impose different types of regularizers on the GNN output so that the variation of the node features, usually measured by the so-called Dirichlet energy, can be properly constrained [45,6]. The motivation for this regularization is the recently identified computational issue of GNNs on different types of graphs, namely homophilic and heterophilic graphs [39]. In the former, most nodes are connected to nodes with identical labels; in the latter, they are not [24]. Accordingly, an ideal GNN should produce smoother node features for homophilic graphs and more distinguishable node features when the input graph is heterophilic [24,1].\nGiven the above, the next challenge is to design a regularizer flexible enough to let a GNN fit both types of graphs. A recent study [13] proposed a new energy-based regularizer, namely a p-Laplacian based regularizer, for the optimization of GNNs, resulting in an iterative algorithm that approximates the so-called implicit layer induced from the solution of the regularized problem. To allow a more flexible design of the p-Laplacian GNN in [13], [25] further proposed the p-Laplacian based graph framelet GNN (pL-UFG), in which the p-Laplacian based regularization acts on multiscale GNNs (e.g., graph framelets). While remarkable learning accuracy has been observed empirically, the underlying properties of the models proposed in [25] are still unclear. In this paper, our primary focus is on pL-UFG (see Section 2 for the formulation). Our objective is to analyze pL-UFG from various perspectives, including the convergence of its implicit layer, the model's asymptotic energy behavior, the changes in the model's dynamics due to the implicit layer, and its relationship with existing diffusion models. To the best of our knowledge, these aspects have not been thoroughly explored in the context of p-Laplacian based GNNs, leaving notable knowledge gaps. Accordingly, we summarize our contributions as follows:\n• We rigorously prove the convergence of pL-UFG, providing insights into the asymptotic behavior of the model. This analysis addresses a crucial gap in the understanding of GNN models regularized with a p-Laplacian based energy regularizer.\n• We show that by assigning proper values to two key model parameters of pL-UFG (denoted as µ and p) based on our theoretical analysis, the (generalized) Dirichlet energy of the node features produced by pL-UFG never converges to 0; thus the inclusion of the implicit layer prevents the model (graph framelet) from the potential over-smoothing issue.\n• We demonstrate how the implicit layer in pL-UFG interacts with the energy dynamics of the graph framelet. 
Furthermore, we prove that pL-UFG can adapt to both homophily and heterophily graphs, enhancing its versatility and applicability.\n• We establish that the propagation mechanism within pL-UFG enables a generalized non-linear graph diffusion. The conclusions based on our analysis from different perspectives are unified at the end of the paper, suggesting a promising framework for evaluating other GNNs.\n• Based on our theoretical results, we propose two generalized pL-UFG models with controlled model dynamics, namely pL-UFG low-frequency dominant model (pL-UFG-LFD) and pL-UFG high frequency dominant model (pL-UFG-HFD). we further show that with controllable model dynamics, the computational cost of pL-UFG is largely reduced, making our proposed model capable of handling large-scale graph datasets.\n• We conduct extensive experiments to validate our theoretical claims. The empirical results not only confirm pL-UFG's capability to handle both homophily and heterophily graphs but also demonstrate that our proposed models achieve comparable or superior classification accuracy with reduced computational cost. These findings are consistent across commonly tested and large-scale graph datasets.\nThe remaining sections of this paper are structured as follows. Section 2 presents fundamental notations related to graphs, GNN models, graph framelets and pL-UFG. In Section 3, we conduct a theoretical analysis on pL-UFG, focusing on the aforementioned aspects. Specifically, Section 3.1 presents the convergence analysis, while Section 3.2 examines the behavior of the p-Laplacian based implicit layer through a generalized Dirichlet energy analysis. Furthermore, Section 3.3 demystifies the interaction between the implicit layer and graph framelets from an energy dynamic perspective. We provide our proposed models (pL-UFG-LFD and pL-UFG-HFD) in section 3.4. Lastly, in Section 3.5, we demonstrate that the iterative algorithm derived from the implicit layer is equivalent to a generalized non-linear diffusion process on the graph. Additionally, in Section 4 we further verify our theoretical claims by comprehensive numerical experiments. Lastly, in conclusion 5, we summarize the findings of this paper and provide suggestions for future research directions." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "In this section, we provide necessary notations and formulations utilized in this paper. We list the necessary notations with their meanings in the Table 1 below, although we will mention the meaning of them again when we first use them." }, { "figure_ref": [], "heading": "Table 1: Necessary notations", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Notations Brief Interpretation H(G)", "publication_ref": [ "b25" ], "table_ref": [], "text": "Heterophily index of a given graph G X Initial node feature matrix F (k) Feature representation on k-th layer of GNN model f i Individual row of F F i,:\nOne or more operation acts on each row of F D Graph degree matrix A Normalized adjacency matrix L Normalized Laplacian matrix W Graph weight matrix W Framelet decomposition matrix I Index set of all framelet decomposition matrices. W Learnable weight matrix in GNN models W,Ω, W Learnable weight matrices in defining generalized Dirichlet energy Y Feature propagation result for the pL-UFG defined in [25]. 
θ N-dimensional vector for diagonal scaling (diag(θ)) in framelet models.\nE P F (F)\nGeneralized Dirichlet energy for node feature induced from implicit layer\nE F r (F) Generalized framelet Dirichlet energy E total (F) Total generalized Dirichlet energy {λ i , u i } N i=1" }, { "figure_ref": [], "heading": "Eigen-pairs of L", "publication_ref": [], "table_ref": [], "text": "We also provide essential background information on the developmental history before the formulation of certain models, serving as a concise introduction to the related works." }, { "figure_ref": [], "heading": "Graph, Graph Convolution and Graph Consistency We denote a weighted graph as", "publication_ref": [ "b9" ], "table_ref": [], "text": "G = (V, E, W) with nodes set V = {v 1 , v 2 , • • • , v N } of total N nodes, edge set E ⊆ V × V and graph adjacency matrix W, where W = [w i,j ] ∈ R N ×N and w i,j = 1 if (v i , v j ) ∈ E,\nelse, w i,j = 0. The nodes feature matrix is X ∈ R N ×c for G with each row x i ∈ R c as the feature vector associated with node v i . For a matrix A, we denote its transpose as A ⊤ , and we use [N ] for set {1, 2, . . . , N }. Throughout this paper, we will only focus on the undirect graph and use matrix A and W interchangeably for graph adjacency matrix1 . The normalized graph Laplacian is defined as\nL = I -D -1 2 (W + I)D -1 2 , where D = diag(d 1,1 , . . . , d N,N\n) is a diagonal degree matrix with d i,i = N j=1 w i,j for i = 1, . . . , N , and I is the identity matrix. From the spectral graph theory [9], we have L ⪰ 0, i.e. L is a positive semi-definite (SPD) matrix. Let {λ i } N i=1 in decreasing order be all the eigenvalues of L, also known as graph spectra, and λ i ∈ [0, 2]. For any given graph, we let ρ L be the largest eigenvalue of L." }, { "figure_ref": [], "heading": "Lastly, for any vector", "publication_ref": [ "b21", "b2", "b44", "b14", "b25", "b27", "b18", "b11", "b40", "b42", "b46", "b38", "b38", "b38", "b38", "b4", "b40", "b41", "b38", "b26", "b6", "b19", "b40", "b6", "b12", "b15", "b20", "b29", "b14", "b43", "b25", "b43", "b43", "b43", "b14", "b25", "b14", "b43", "b14", "b2", "b15", "b25", "b17", "b25", "b25", "b18", "b25", "b25", "b18", "b19", "b19" ], "table_ref": [], "text": "x = [x 1 , ..., x c ] ∈ R c , ∥x∥ 2 = ( c i=1 x 2 i ) 1 2\nis the L 2 -norm of x, and similarly, for any matrix M = [m i,j ], denote by ∥M∥ := ∥M∥ F = ( i,j m 2 i,j )\n1 2 the matrix Frobenius norm. Graph convolution network (GCN) [21] produces a layer-wise (node feature) propagation rule based on the information from the normalized adjacency matrix as:\nF (k+1) = σ AF (k) W (k) ,(1)\nwhere F (k) is the embedded node feature, W (k) the weight matrix for channel mixing [3], and σ any activation function such as sigmoid. The superscript (k) indicates the quantity associated with layer k, and F (0) = X. We write A = D -1 2 (W + I)D -1 2 , the normalized adjacency matrix of G. It is easy to see that the operation conducted in GCN before activation can be interpreted as a localized filter by the graph Fourier transform, i.e., F (k+1) = U(I n -Λ)U ⊤ F (k) , where U, Λ are from the eigendecomposition L = UΛU ⊤ . In fact, UF is known as the Fourier transform of graph signals in F.\nOver the development of GNNs, most of GNNs are designed under the homophily assumption in which connected (neighbouring) nodes are more likely to share the same label. 
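Before turning to how GNNs behave under different label patterns, the propagation rule in (1) can be made concrete with a short sketch. The snippet below is only an illustration written in plain NumPy: it assumes a dense weight matrix, a connected graph (so every degree d_ii = Σ_j w_ij is positive), and ReLU as the activation σ; the function names are ours and not tied to any released implementation.

    import numpy as np

    def normalized_adjacency(W):
        # A = D^{-1/2} (W + I) D^{-1/2}, with degrees d_ii = sum_j w_ij as defined above
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        return D_inv_sqrt @ (W + np.eye(W.shape[0])) @ D_inv_sqrt

    def gcn_layer(A_hat, F, Wk):
        # one step of (1): F^{(k+1)} = sigma(A F^{(k)} W^{(k)}), here with sigma = ReLU
        return np.maximum(A_hat @ F @ Wk, 0.0)

Stacking such layers is all that standard GCN training does; the channel-mixing matrix Wk is the only learnable part of each layer.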
The recent work by [44] identifies that the general topology GNN fails to obtain outstanding results on the graphs with different class labels and dissimilar features in their connected nodes, such as the so-call heterophilic graphs. The definition of homophilic and heterophilic graphs are given by: Definition 1 (Homophily and Heterophily [14]). The homophily or heterophily of a network is used to define the relationship between labels of connected nodes. The level of homophily of a graph can be measured by\nH(G) = E v i ∈V [|{v j } j∈N i,y i =y i |/|N i |]\n, where |{v j } j∈N i,y i =y i | denotes the number of neighbours of v i ∈ V that share the same label as v i , i.e. y i = y j . H(G) → 1 corresponds to strong homophily while H(G) → 0 indicates strong heterophily. We say that a graph is a homophilic (heterophilic) graph if it has strong homophily (heterophily).\nGraph Framelet. As the main target for this paper to explore is pL-UFG defined in [25] in which p-Laplacian based implicit layer is combined with so-called graph framelet or framelets in short. Framelets are a type of wavelet frames arising from signal processing which can be extended for analysing graph signals. The first wavelet frame with a lifting scheme for graph analysis was presented in [27]. As computational power increased, [18] proposed a framework for wavelet transformation on graphs using Chebyshev polynomials for approximations. Later, [11] developed tight framelets on graphs by approximating smooth functions with filtered Chebyshev polynomials.\nFramelets have been applied to graph learning tasks with outstanding results, as demonstrated in [40]. They are capable of decomposing graph signals and re-aggregating them effectively, as shown in the study on graph noise reduction by [42] Combining framelets with singular value decomposition (SVD) has also made them applicable to directed graphs [46]. Recently, [38] suggested a simple method for building more versatile and stable framelet families, known as Quasi-Framelets. In this study, we will introduce graph framelets using the same architecture described in [38]. To begin, we define the filtering functions for Quasi-framelets. Definition 2. A set of R + 1 positive functions F = {g 0 (ξ), g 1 (ξ), ..., g R (ξ)} defined on the interval [0, π] is considered as (a set of) Quasi-Framelet scaling functions, if these functions adhere to the following identity condition:\ng 0 (ξ) 2 + g 1 (ξ) 2 + • • • + g R (ξ) 2 ≡ 1, ∀ξ ∈ [0, π].(2)\nThe identity condition (2) ensures a perfect reconstruction of a signal from its spectral space to the spatial space, see [38] for a proof. Particularly we are interested in the scaling function set in which g 0 descents from 1 to 0, i.e., g 0 (0) = 1 and g 0 (π) = 0 and g R ascends from 0 to 1, i.e., g R (0) = 0 and g R (π) = 1. 
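The identity condition (2) is easy to verify numerically for a concrete scaling set. The pair below is one standard Haar-type choice with R = 1, used here purely for illustration: g_0(ξ) = cos(ξ/2) descends from 1 to 0 on [0, π] while g_1(ξ) = sin(ξ/2) ascends from 0 to 1.

    import numpy as np

    g0 = lambda xi: np.cos(xi / 2.0)   # descends from 1 at xi = 0 to 0 at xi = pi
    g1 = lambda xi: np.sin(xi / 2.0)   # ascends from 0 at xi = 0 to 1 at xi = pi

    xi = np.linspace(0.0, np.pi, 200)
    assert np.allclose(g0(xi) ** 2 + g1(xi) ** 2, 1.0)   # the identity condition (2)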
The purpose of setting these conditions is for g 0 to regulate the highest frequency and for g R to control the lowest frequency, while the remaining functions govern the frequencies lying between them.\nWith a given set of framelet scaling functions, the so-called Quasi-Framelet signal transformation can be defined by the following transformation matrices:\nW 0,J = Ug 0 ( Λ 2 m+J ) • • • g 0 ( Λ 2 m )U ⊤ ,(3)\nW r,0 = Ug r ( Λ 2 m )U ⊤ , for r = 1, ..., R,(4)\nW r,ℓ = Ug r ( Λ 2 m+ℓ )g 0 ( Λ 2 m+ℓ-1 ) • • • g 0 ( Λ 2 m )U ⊤ ,(5)\nfor r = 1, ..., R, ℓ = 1, ..., J,\nwhere F is a given set of Quasi-Framelet functions satisfying (2) and J ≥ 0 is a given level on a graph G = (V, E) with normalized graph Laplacian L = U ⊤ ΛU. W 0,J is defined as the product of J + 1 Quasi-Framelet scaling functions g 0 applied to the Laplacian spectra Λ at different scales. defined as g r ( Λ 2 m ) applied to spectra Λ, where m is the coarsest scale level which is the smallest value satisfying 2 -m λ n ≤ π. For 1 ≤ r ≤ R and 1 ≤ ℓ ≤ J, W r,ℓ is defined as the product of J -ℓ + 1 Quasi-Framelet scaling functions g 0 and ℓ Quasi-Framelet scaling functions g r applied to spectra Λ.\nLet W = [W 0,J ; W 1,0 ; ...; W R,0 ] be the stacked matrix. It can be proven that W T W = I, see [38], which provides a signal decomposition and reconstruction process based on W. This is referred to as the graph Quasi-Framelet transformation.\nSince the computation of the Quasi-framelet transformation matrices requires the eigendecomposition of graph Laplacian, to reduce the computational cost, Chebyshev polynomials are used to approximate the Quasi-Framelet transformation matrices. The approximated transformation matrices are defined by replacing g r (ξ) in (3)- (5) with Chebyshev polynomials T r (ξ) of a fixed degree, which is typically set to 3. The Quasi-Framelet transformation matrices defined in (3) -( 5) can be approximated by,\nW 0,J ≈ T 0 ( 1 2 m+J L) • • • T 0 ( 1 2 m L),(6)\nW r,0 ≈ T r ( 1 2 m L), for r = 1, ..., R,\nW r,ℓ ≈ T r ( 1 2 m+ℓ L)T 0 ( 1 2 m+ℓ-1 L) • • • T 0 ( 1 2 m L),(7)\nfor r = 1, ..., R, ℓ = 1, ..., J.\nBased on the approximated Quasi-Framelet transformation defined above, two types of graph framelet convolutions have been developed recently:\n1. The Spectral Framelet Models [40,41,38,26]:\nF (k+1) = σ W ⊤ diag(θ)WF (k) := σ   (r,ℓ)∈I W ⊤ r,ℓ diag(θ r,ℓ )W r,ℓ F (k) W (k)   ,(9)\nwhere θ r,ℓ ∈ R N , W (k) are learnable matrices for channel/feature mixing, and I = {(r, j) : r = 1, ..., R, ℓ = 0, 1, ..., J} ∪ {(0, J)} is the index set for all framelet decomposition matrices.\n2. The Spatial Framelet Models [6]:\nF (k+1) = σ   W ⊤ 0,J AW 0,J F (k) W (k) 0,J + r,ℓ W ⊤ r,ℓ AW r,ℓ F (k) W (k) r,ℓ   .(10)\nThe spectral framelet models conduct framelet decomposition and reconstruction on the spectral domain of the graph. Clearly θ r,ℓ ∈ R N can be interpreted as the frequency filters, given that the framelet system provides a perfect reconstruction on the input graph signal (i.e., W ⊤ W = I). Instead of frequency domain filtering, the spatial framelet models implement the framelet-based propagation via spatial (graph adjacency) domain.\nThere is a major difference between two schemes. 
In the spectral framelet methods, the weight matrix W (k) is shared across different (filtered) frequency domains, while in the spatial framelet methods, an individual weight matrix W (k) r,ℓ is applied to each (filtered) spatial domain to produce the graph convolution.\nFinally, it is worth to noting that applying framelet/quasi-framelet transforms on graph signals can decomposes graph signals on different frequency domains for processing, e.g., the filtering used in the spectral framelet models and the spatial aggregating used in the spatial framelet models, thus the perfect reconstruction property guarantees less information loss in the signal processing pipeline. The learning advantage of graph framelet models has been proved via both theoretical and empirical studies [19,40,6].\nGeneralized p-Laplacian Regularized Framelet GCN. In this part, we provide several additional definitions to formulate the model (pL-UFG) that we are interested in analyzing.\nDefinition 3 (The p-Laplace Operator [12]). Let Ω ⊂ R d be a domain and u is a function defined on Ω. The p-Laplace operator ∆ over functions is defined as\n∆u := ∇ • (∥∇u∥ p-2 ∇u)\nwhere ∇ is the gradient operator and ∥ • ∥ is the Euclidean norm and p is a scalar satisfying 1 < p < +∞. The p-Laplace operator, is known as a quasi-linear elliptic partial differential operator.\nThere are a line of research on the properties of p-Laplacian in regarding to its uniqueness and existence [15], geometrical property [20] and boundary conditions on so-called p-Laplacian equation [29].\nThe concept of p-Laplace operator can be extended for discrete domains such as graph (nodes) based on the concepts of the so-called graph gradient and divergence, see below, one of the recent works [14] considers assigning an adjustable p-Laplacian regularizer to the (discrete) graph regularization problem that is conventionally treated as a way of producing GNN outcomes (i.e., Laplacian smoothing) [43]. In view of the fact that the classic graph Laplacian regularizer measures the graph signal energy along edges under L 2 metric, it would be beneficial if GNN training process can be regularized under L p metric in order to adapt to different graph inputs. Following these pioneer works, [25] further integrated graph framelet and a generalized p-Laplacian regularizer to develop the so-called generalized p-Laplacian regularized framelet model. It involves a regularization problem over the energy quadratic form induced from the graph p-Laplacian. To show this, we start by defining graph gradient as follows:\nTo introduce graph gradient and divergence, we define the following notation: Given a graph G = (V, E, W), let F V := {F|F : V → R d } be the space of the vector-valued functions defined on V and F E := {g|g : E → R d } be the vector-valued function space on edges, respectively.\nDefinition 4 (Graph Gradient [43]). For a given function F ∈ F V , its graph gradient is an operator ∇ W :F V → F E defined as for all (v i , v j ) ∈ E,\n(∇ W F)([i, j]) := w i,j d j,j f j - w i,j d i,i f i ,(11)\nwhere f i and f j are the signal vectors on nodes v i and v j , i.e., the rows of F.\nFor simplicity, we denote ∇ W F as ∇F as the graph gradient. The definition of (discrete) graph gradient is analogized from the notion of gradient from the continuous space. Similarly, we can further define the so-called graph divergence: Definition 5 (Graph Divergence [43]). The graph divergence is an operator div : F E → F V which is defined by the following way. 
For a given function g ∈ F E , the resulting div(g) ∈ F V satisfies the following condition, for any functions\nF ∈ F V , ⟨∇F, g⟩ = ⟨F, -div(g)⟩. (12\n)\nIt is easy to check that the graph divergence can be computed by: div(g\n)(i) = N j=1 w i,j d i,i (g[i, j] -g[j, i]).(13)\nWith the formulation of graph gradient and divergence we are ready to define the graph p-Laplacian operator and the corresponding p-Dirichlet form [43,14] that serves as the regularizer in the model developed in [25]. The graph p-Laplacian can be defined as follows:\nDefinition 6 (Graph p-Laplacian). Given a graph G = (V, E, W) and a multiple channel signal function F : V → R d , the graph p-Laplacian is an operator ∆ p : F V → F V , defined by:\n∆ p F := - 1 2 div(∥∇F∥ p-2 ∇F), for p ≥ 1.(14)\nwhere ∥ • ∥ p-2 is element-wise power over the node gradient ∇F.\nThe corresponding p-Dirichlet form can be denoted as:\nS p (F) = 1 2 (v i ,v j )∈E w i,j d j,j f j - w i,j d i,i f i p ,(15)\nwhere we adopt the definition of p-norm as [14]. It is not difficult to verify that once we set p = 2, we recover the graph Dirichlet energy [43] that is widely used to measure the difference between node features along the GNN propagation process.\nRemark 1 (Dirichlet Energy, Graph Homophily and Heterophily). Graph Dirichlet energy [14,3] has become a commonly applied measure of variation between node features via GNNs. It has been shown that once the graph is highly heterophilic where the connected nodes are not likely to share identical labels, one may prefer GNNs that exhibit nodes feature sharpening effect, thus increasing Dirichlet energy, such that the final classification output of the connected nodes from these GNNs tend to be different. Whereas, when the graph is highly homophilic, a smoothing effect (thus a decrease of Dirichlet energy) is preferred.\n[25] further generalized the p-Dirichlet form in (15) as:\nS p (F) = 1 2 (v i ,v j )∈E ∥∇ W F([i, j])∥ p = 1 2 v i ∈V      v j ∼v i ∥∇ W F([i, j])∥ p   1 p    p = 1 2 v i ∈V ∥∇ W F(v i )∥ p p ,(16)\nwhere v j ∼ v i stands for the node v j that is connected to node v i and\n∇ W F(v i ) = (∇ W F([i, j])) v j :(v i ,v j )∈E\nis the node gradient vector for each node v i and ∥ • ∥ p is the vector p-norm. Moreover, we can further generalize the regularizer S p (F) by considering any positive convex function ϕ as:\nS ϕ p (F) = 1 2 v i ∈V ϕ(∥∇ W F(v i )∥ p ).(17)\nThere are many choices of ϕ and p. When ϕ(ξ) = ξ p , we recover the p-Laplacian regularizer. Interestingly, by setting ϕ(ξ) = ξ 2 , we recover the so-called Tikhonov regularization which is frequently applied in image processing. When ϕ(ξ) = ξ, i.e. identity map written as id, and p = 1, S id 1 (F) becomes the classic total variation regularization. Last but not the least, ϕ(ξ) = r 2 log(1 + ξ 2 /r 2 ) gives nonlinear diffusion. We note that there are many other choices on the form of ϕ. In this paper we will only focus on those mentioned in [25] (i.e., the smooth ones). As a result, the flexible design of the energy regularizer in (17) provides different penalty strength in regularizing the node features propagated from GNNs.\nAccordingly, the regularization problem proposed in [25] is:\nF = arg min F S ϕ p (F) + µ∥F -W ⊤ diag(θ)WF (k) ∥ 2 F ,(18)\nwhere Y := W ⊤ diag(θ)WF (k) stands for the node feature generated by the spectral framelet models (9) without activation σ. This is the implicit layer proposed in [25]. 
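To make the regularizer concrete, the p-Dirichlet form in (15) can be evaluated directly from the definition of the graph gradient. The sketch below is an illustration only: it assumes a dense weight matrix, the degree convention d_ii = Σ_j w_ij, and a graph without isolated nodes; the double loop is written for clarity rather than efficiency.

    import numpy as np

    def p_dirichlet_energy(W, F, p):
        # S_p(F) in (15): half the sum over edges of ||grad_W F([i, j])||^p, where
        # grad_W F([i, j]) = sqrt(w_ij / d_jj) f_j - sqrt(w_ij / d_ii) f_i
        d = W.sum(axis=1)
        N = W.shape[0]
        energy = 0.0
        for i in range(N):
            for j in range(N):
                if W[i, j] > 0:
                    grad_ij = np.sqrt(W[i, j] / d[j]) * F[j] - np.sqrt(W[i, j] / d[i]) * F[i]
                    energy += np.linalg.norm(grad_ij) ** p
        return 0.5 * energy

Setting p = 2 recovers the classic Dirichlet energy, while smaller values of p penalize large feature jumps across edges less severely.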
As the optimization problem defined in (18) does not have a closed-form solution when p ̸ = 2, an iterative algorithm is developed in [25] to address this issue. The justification is summarized by the following proposition (Theorem 1 in [25]):\nProposition 1. For a given positive convex function ϕ(ξ), define\nM i,j = w i,j 2 ∥∇ W F([i, j])∥ p-2 • ϕ ′ (∥∇ W F(v i )∥ p ) ∥∇ W F(v i )∥ p-1 p + ϕ ′ (∥∇ W F(v j )∥ p ) ∥∇ W F(v j )∥ p-1 p , α ii =1/   v j ∼v i M i,j d i,i + 2µ   , β ii = 2µα ii ,\nand denote the matrices\nM = [M i,j ], α = diag(α 11 , ..., α N N ) and β = diag(β 11 , ..., β N N ).\nThen problem ( 18) can be solved by the following message passing process\nF (k+1) = α (k) D -1/2 M (k) D -1/2 F (k) + β (k) Y,(19)\nwith an initial value, e.g., F (0) = 0 or Y. Note, k denotes the discrete time index (iteration).\nIn this paper, we call model (18) together with its iteration algorithm (19) the pL-UFG model. Due to the extensive analysis conducted on the graph framelet's properties, our subsequent analysis will primarily concentrate on the iterative scheme presented in (19). However, we will also unveil the interaction between this implicit layer and the framelet in the following sections." }, { "figure_ref": [], "heading": "Theoretical Analysis of the pL-UFG", "publication_ref": [ "b19", "b19", "b22" ], "table_ref": [], "text": "In this section, we show detailed analysis on the convergence (Section 3.1) and energy behavior (Section 3.2) of the iterative algorithm in solving the implicit layer presented in (19). In addition, we will also present some results regarding to the interaction between the implicit layer and graph framelet in Section 3.3 via the energy dynamic aspect based on the conclusion from Section 3.2. Lastly in Section 3.5, we will verify that the iterative algorithm induced from the p-Laplacian implicit layer admits a generalized non-linear diffusion process, thereby connecting the discrete iterative algorithm to the differential equations on graph.\nFirst, we consider the form of matrix M in (19). Write\nζ ϕ i,j (F) = 1 2 ϕ ′ (∥∇ W F (k+1) (v i )∥ p ) ∥∇ W F (k+1) (v i )∥ p-1 p + ϕ ′ (∥∇ W F (k+1) (v j )∥ p ) ∥∇ W F (k+1) (v j )∥ p-1 p .(20)\nM i,j can be simplified as\nM i,j = ζ ϕ i,j (F)w i,j ∥∇ W F([i, j])∥ p-2 . (21\n)\nζ ϕ i,j (F) is bounded as shown in the following lemma. Lemma 1. Assume\nϕ ′ (ξ) ξ p-1 ≤ C,(22)\nfor a suitable constant C. We have\n|ζ ϕ i,j (F)| ≤ C.\nThe proof is trivial thus we omit it here. In the sequel, we use\nζ i,j (F) for ζ ϕ i,j (F) instead. Remark 2.\nIt is reasonable for assuming condition in (22) in Lemma 1 so that ζ i,j (F) is bounded. For example, one can easily verify that when ϕ(ξ) = ξ p , ζ i,j (F) is bounded for all p. In particular, when p = 2, i.e., ϕ(ξ) = ξ 2 , we have ϕ ′ (ξ)\nξ p-1 = 2ξ ξ p-1 = 2 ξ 2 ξ p , thus ζ i,j (F) is bounded for all 0 < p ≤ 2. Furthermore, when ϕ(ξ) = ξ, then ϕ ′ (ξ) ξ p-1 = ξ ξ p-1 , indicating ζ i,j (F) is bounded for all 0 < p ≤ 1. In addition, when ϕ(ξ) = ξ 2 + ϵ 2 -ϵ, we have ϕ ′ (ξ) ξ p-1 = (ξ 2 +ϵ 2 ) 1/2 ξ ξ p-1 ≤ C ξ ξ p-1 . Therefore ζ i,j (F) is bounded for all 0 < p ≤ 2. Lastly, when ϕ(ξ) = r 2 log(1 + ξ 2 r 2 ), the result of ϕ ′ (ξ) ξ p-1 yields r 2 1 1+ ξ 2 r 2 • 2 r 2 ξ ξ p-1 ≤ 2 ξ ξ p-1 . Hence ζ i,j (F) remain bounded for all 0 < p ≤ 2.\nIn summary, for all forms of ϕ we included in the model, ζ i,j (F) is bounded.\nThe boundedness of ζ i,j (F) from Lemma 1 is useful in the following convergence analysis." 
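To make the message passing in (19) concrete, the sketch below implements a single iteration for the special case ϕ(ξ) = ξ^p, for which the bracket in the definition of M reduces to 2p (i.e. ζ_{i,j}(F) = p, consistent with Remark 2). It assumes a dense weight matrix and positive degrees; the small eps is a purely numerical guard against zero edge gradients when p < 2 and is not part of the analysis.

    import numpy as np

    def pL_implicit_step(W, F, Y, mu, p, eps=1e-8):
        # one step of (19): F <- alpha D^{-1/2} M D^{-1/2} F + beta Y, with phi(xi) = xi^p
        N = W.shape[0]
        d = W.sum(axis=1)
        M = np.zeros((N, N))
        for i in range(N):
            for j in range(N):
                if W[i, j] > 0:
                    g = np.sqrt(W[i, j] / d[j]) * F[j] - np.sqrt(W[i, j] / d[i]) * F[i]
                    M[i, j] = p * W[i, j] * (np.linalg.norm(g) + eps) ** (p - 2)
        alpha = 1.0 / (M.sum(axis=1) / d + 2.0 * mu)      # alpha_ii in Proposition 1
        beta = 2.0 * mu * alpha                           # beta_ii = 2 mu alpha_ii
        d_inv_sqrt = 1.0 / np.sqrt(d)
        prop = (d_inv_sqrt[:, None] * M * d_inv_sqrt[None, :]) @ F
        return alpha[:, None] * prop + beta[:, None] * Y

In practice the step is applied repeatedly, starting from F^{(0)} = 0 or Y, until the change between consecutive iterates is small.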
}, { "figure_ref": [], "heading": "Convergence Analysis of pL-UFG", "publication_ref": [ "b19", "b13", "b19", "b13", "b13", "b13", "b19", "b20", "b25", "b27", "b23", "b29", "b15", "b21", "b35", "b3", "b10" ], "table_ref": [], "text": "We show the iterative algorithm presented in (19) will converge with a suitable choice of µ. We further note that although the format of Theorem 1 is similar to Theorem 2 in [13], our message passing scheme presented in (19) is different compared to the one defined in [13] via the forms of M, α and β. In fact, the model defined in [13] can be considered as a special case where ϕ(ξ) = ξ p . As a generalization of the model proposed in [13], we provide a uniform convergence analysis for the pL-UFG.\nTheorem 1 (Weak Convergence of the Proposed Model). Given a graph G(V, E, W) with node features X, if α (k) , β (k) , M (k) and F (k) are updated according to (19), then there exist some real positive value µ, which depends on the input graph (G, X) and the quantity of p, updated in each iteration, such that:\nL ϕ p (F (k+1) ) ≤ L ϕ p (F (k) ),\nwhere\nL ϕ p (F) := S ϕ p (F) + µ∥F -Y∥ 2 F . Proof. First, write M (k) i,j = w i,j 2 ∇ W F (k) ([i, j]) p-2 • ϕ ′ (∥∇ W F (k) (v i )∥ p ) ∥∇ W F (k) (v i )∥ p-1 p + ϕ ′ (∥∇ W F (k) (v j )∥ p ) ∥∇ W F (k) (v j )∥ p-1 p . (23\n)\nThe derivative of the regularization problem defined in ( 18) is:\n∂L ϕ p (F) ∂F i,: F (k) =2µ(F (k) i,: -Y i,: ) + v j ∼v i M (k) ij 1 d ii w ij ∇ W F (k) ([j, i]) =2µ(F (k) i,: -Y i,: ) + v j ∼v i M (k) ij 1 d ii F (k) i,:\n-\n1 d ii d jj F (k) j,: =(2µ + v j ∼v i M (k) ij /d ii )F (k) i,: -2µY i,: - v j ∼v i M (k) ij d ii d jj F (k) j,: = 1 α (k) ii F (k) i,: - 1 α (k) ii   β (k) ii Y i,: + α (k) ii v j ∼v i M (k) ij d ii d jj F (k) j,:  (24)\nThus, according the update rule of F (k+1) in ( 19), we have\n∂L ϕ p (F) ∂F i,: F (k) = F (k) i,: -F (k+1) i,: α (k) ii . (25\n)\nFor our purpose, we denote the partial derivative at F ( * ) of the objective function with respect to the node feature F i,; as\n∂L ϕ p (F ( * ) i,: ) := ∂L ϕ p (F) ∂F i,: F ( * )(26)\nFor all i, j ∈ [N ], let v ∈ R 1×c be a disturbance acting on node i. 
Define the following:\nN (k) i,j = W i,j W i,j D i,i F (k) i,: - W i,j D j,j F (k) j,: p-2 N ′(k) i,j = W i,j W i,j D i,i (F (k) i,: + v) - W i,j D j,j F (k) j,: p-2 M (k) i,j = N (k) ij ζ i,j (F (k) ), M ′(k) i,j = N ′(k) ij ζ i,j (F (k) + v) α ′(k) ii = 1/   v j ∼v i M ′(k) i,j D i,i + 2µ   , β ′(k) ii = 2µα ′(k) ii F ′(k+1) i,: = α ′(k) i,i v j ∼v i M ′(k) i,j D i,i D j,j F (k) j,: + β ′(k) Y i,: ,(27)\nwhere ζ ij (F) is defined as (20) and\nF (k) + v means that v only applies to the i-th of F (k) 2 .\n2 With slightly abuse of notation, we denote N ′ as the matrix after assigning the disturbance v to the matrix N .\nSimilar to (25), we compute\n∂L ϕ p (F (k) i,: + v) = 1 α ′(k) i,i (F (k) i,: + v) -F ′(k+1) i,:\n.\nHence from both ( 25) and ( 28) we will have\n∂L ϕ p (F (k) i,: + v) -∂L ϕ p (F (k) i,: ) = 1 α ′(k) i,i (F (k) i,: + v) -F ′(k+1) i,:\n-1\nα (k) i,i F (k) i,: -F (k+1) i,: ≤ 1 α ′(k) i,i ∥v∥ + 1 α ′(k) i,i F (k) i,: -F ′(k+1) i,:\n-\n1 α (k) i,i F (k) i,: -F (k+1) i,: = 1 α ′(k) i,i ∥v∥ + 1 α ′(k) i,i - 1 α (k) i,i F (k) i,: - 1 α ′(k) i,i F ′(k+1) i,: + 1 α (k) i,i F (k+1) i,: = 1 α ′(k) i,i ∥v∥ + v j ∼v i M ′(k) i,j D i,i - M (k) i,j D i,i F (k) i,: - v j ∼v i M ′(k) i,j D i,i D j,j F ′(k) j,: + M (k) i,j D i,i D j,j F (k) j,: =   v j ∼v i M (k) i,j D i,i + 2µ   ∥v∥ + v j ∼v i M ′(k) i,j -M (k) i,j D i,i ∥v∥ + v j ∼v i M ′(k) i,j -M (k) i,j D i,i F (k) i,: - v j ∼v i M ′(k) i,j -M (k) i,j D i,i D j,j F (k) j,: .\nNote that in (27), ∥ • ∥ p-2 is the matrix L 2 norm raised to power p -2, that is ∥X∥ p-2 = ( i,j x 2 i,j )\n1 2 p-2\n. It is known that the matrix L 2 norm as a function is Lipschitz [23], so is its exponential to p -2. Furthermore, it is easy to verify that ∥N ′ -N∥ ≤ c∥v∥ due to the property of N and N ′ . Hence, according to Lemma 1, the following holds\n|M ′(k) i,j -M (k) i,j | ≤ C|N ′(k) i,j -N (k) i,j | ≤ C ′ ∥v∥.\nCombining all the above, we have\n∂L ϕ p (F (k) i,: + v) -∂L ϕ p (F (k) i,: ) ≤   v j ∼v i M (k) i,j D i,i + 2µ + o(G, v, X, p)   ∥v∥,(29)\nwhere o(G, v, X, p) is bounded. It is worth noting that the quantity of o(G, v, X, p) is bounded by\nv j ∼v i M ′(k) i,j -M (k) i,j D i,i ∥v∥ + v j ∼v i M ′(k) i,j -M (k) i,j D i,i F (k) i,: - v j ∼v i M ′(k) i,j -M (k) i,j D i,i D j,j F (k) j,: . Let o = o(G, v, X, p), γ = {γ 1 , ...γ N } ⊤ ,\nand η ∈ R N ×c . By the Taylor expansion theorem we have:\nL ϕ p (F (k) i,: + γ i η i,: ) = L ϕ p (F (k) i,: ) + γ i 1 0 ⟨∂L ϕ p (F (k) i,: + ϵγ i η i,: ), η i,: ⟩dϵ ∀i = L ϕ p (F (k) i,: ) + ⟨∂L ϕ p (F (k) i,: ), η i,: ⟩ + γ i 1 0 ∂L ϕ p F (k) i,: + ϵγ i η i,: -∂L ϕ p F (k) i,:\n, η i,: dϵ\n≤ L ϕ p (F (k) i,: ) + ⟨∂L ϕ p (F (k) i,: ), η i,: ⟩γ i + γ i 1 0 ∂L ϕ p F (k) i,: + ϵγ i η i,: -∂L ϕ p F (k) i,: η i,: dϵ ≤ L ϕ p (F (k) i,: ) + ⟨∂L ϕ p (F (k) i,: ), η i,: ⟩γ i + 1 α (k) i,i + o γ 2 i ∥η i,: ∥ 2\nwhere the last inequality comes from (29).\nTaking\nγ i = α (k)\nii and η i,: = -∂L ϕ p (F\ni,: ) in the above inequality gives\nL ϕ p F (k) i,: -α (k) ii ∂L ϕ p (F (k) i,: ) ≤L ϕ p (F (k) i,: ) -α (k) ii ∂L ϕ p (F (k) i,: ), ∂L ϕ p (F (k) i,: ) + 1 2 1 α (k) i,i + o α 2(k) i,i ∥∂L ϕ p (F (k) i,: )∥ 2 =L ϕ p (F(k)\ni,: ) -\n1 2 α (k) i,i 1 -α (k) i,i o ∥∂L ϕ p (F (k) i,: )∥ 2 . (30\n)\nGiven that o is bounded, if we choose a large µ, e.g., 2µ > o, we will have\n1 -α (k) i,i o = 1 - o v j ∼v i M (k) i,j D i,i + 2µ > 0.\nThus the second term in (30) is positive. 
Hence we have\nL ϕ p (F(k+1) i,:\n)\n:= L ϕ p F (k) i,: -α (k) ii ∂L ϕ p (F (k) i,: ) ≤ L ϕ p (F (k) i,: ).\nThis completes the proof.\nTheorem 1 shows that with an appropriately chosen value of µ, the iteration scheme for the implicit layer ( 18) is guaranteed to coverage. This inspires us to explore further on the variation of the node feature produced from implicit layer asymptotically. Recall that to measure the difference between node features, one common choice is to analyze its Dirichlet energy, which is initially considered in the setting p = 2 in (15). It is known that the Dirichlet energy of the node feature tend to approach to 0 after sufficiently large number of iterations in many GNN models [21,35,4,10], known as over-smoothing problem. However, as we will show in the next section, by taking large µ or small p, the iteration from the implicit layer will always lift up the Dirichlet energy of the node features, and over-smoothing issue can be resolved completely in pL-UFG." }, { "figure_ref": [], "heading": "Energy Behavior of the pL-UFG", "publication_ref": [ "b2", "b21", "b36", "b31", "b2", "b19", "b33", "b18", "b35", "b37", "b38", "b25", "b38", "b38" ], "table_ref": [], "text": "In this section, we show the energy behavior of the p-Laplacian based implicit layer. Specifically, we are interested in analyzing the property of the generalized Dirichlet energy defined in [3].We start by denoting generalized graph convolution as follows:\nF (k+τ ) = F (k) + τ σ -F (k) Ω (k) + AF (k) W (k) -F (0) W (k) ,(31)\nwhere Ω (k) , W (k) and W (k) ∈ R c×c act on each node feature vector independently and perform channel mixing. When τ = 1, and Ω (k) = W (k) = 0, it returns to GCN [21]. Additionally, by setting Ω (k) ̸ = 0, we have the anisotropic instance of GraphSAGE [36]. To quantify the quality of the node features generated by (31), specifically, [3] considered a new class of energy as defined below,\nE(F) = 1 2 N i=1 ⟨f i , Ωf i ⟩ - 1 2 N i,j=1 A i,j ⟨f i , Wf j ⟩ + φ (0) (F, F (0) ),(32)\nin which φ (0) (F, F (0) ) serves a function of that induces the source term from F or F (0) . It is worth noting that by setting Ω = W = I c and φ (0) = 0, we recover the classic Dirichlet energy when\nsetting p = 2 in (15) that is, E(F) = 1 2 (v i ,v j )∈E w i,j d j,j f j - w i,j d i,i f i 2\n. Additionally, when we set\nφ (0) (F, F (0) ) = i ⟨f i , Wf(0)\ni ⟩, (32) can be rewritten as:\nE(F) = vec(F), 1 2 (Ω ⊗ I N -W ⊗ A)vec(F) + ( W ⊗ I N )vec(F (0) ) .(33)\nRecall that (19) produces the node feature F (k+1) according to the edge diffusion αD -1/2 MD -1/2 on F (k) and the scaled source term 2µαF (0) where F (0) can be set to Y. To be specific, in (33), we set Ω = W = W = I c and replace the edge diffusion A with αD -1/2 MD -1/2 and set the identity matrix I N in the residual term to be the diagonal matrix 2µα. Finally we propose Definition 7 (The Generalized Dirichlet Energy). Based on the previous notation setting, the generalized Dirichlet energy for the node features F (k+1) in ( 19) is:\nE P F (F (k+1) ) = vec(F (k+1) ), 1 2 I c ⊗ I N -I c ⊗ α (k+1) D -1/2 M (k+1) D -1/2 vec(F (k+1) ) + (I c ⊗ 2µα (k+1) )vec(F (0) ) ,(34)\nwhere the superscript \" P F \" is short for p-Laplacian based framelet models.\nIt is worth noting that the generalized Dirichlet energy defined in ( 34) is dynamic along the iterative layers due to the non-linear nature of the implicit layer defined in (18). 
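Since (34) only involves Kronecker products with I_c, it can be evaluated without forming vec(F^{(k+1)}) explicitly: under the column-stacking convention, (I_c ⊗ B)vec(F) = vec(BF), so E^{PF} reduces to a Frobenius inner product. The sketch below assumes that the per-iteration quantities α^{(k+1)}, M^{(k+1)} and the degrees have already been computed (for instance along the lines of the iteration sketch given earlier); it is an illustration of the definition, not part of the model itself.

    import numpy as np

    def generalized_dirichlet_energy(F, F0, M, alpha, d, mu):
        # E^{PF}(F) in (34), written with matrices instead of vec(.):
        # <F, 0.5 (F - alpha D^{-1/2} M D^{-1/2} F) + 2 mu alpha F0>_Frobenius
        d_inv_sqrt = 1.0 / np.sqrt(d)
        B_F = alpha[:, None] * ((d_inv_sqrt[:, None] * M * d_inv_sqrt[None, :]) @ F)
        return float(np.sum(F * (0.5 * (F - B_F) + 2.0 * mu * alpha[:, None] * F0)))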
We are now able to analyze the energy (E P F (F)) behavior of the pL-UFG, concluded as the following proposition.\nProposition 2 (Energy Behavior). Assume G is connected, unweighted and undirected. There exists sufficiently large value of µ or small value of p such that E P F (F) will stay away above 0 at each iterative layer k and increases with the increase of µ or the decrease of p.\nProof. We start with the definition of the generalized Dirichlet energy above, we can re-write E P F (F (k+1) ) in the following inner product between vec(F (k+1) ) and vec(F (0) ), based on M, α, β and the iterative scheme defined in ( 19):\nE P F (F (k+1) ) = vec(F (k+1) ), 1 2 I c ⊗ I N -I c ⊗ α (k+1) D -1/2 M (k+1) D -1/2 vec(F (k+1) ) + (I c ⊗ 2µα (k+1) )vec(F (0) ) = vec(F (k+1) ), vec(F (k+1) ) - 1 2 vec(F (k+1) ), I c ⊗ α (k+1) D -1/2 M (k+1) D -1/2 vec(F (k+1) ) -(I c ⊗ 4µα (k+1) )vec(F (0) ) .(35)\nBased on the form of (35), it is straightforward to see that to let E P F (F (k+1) ) > 0 and further increase with the desired quantities of µ and p, it is sufficient to require 3 :\nI c ⊗ α (k+1) D -1/2 M (k+1) D -1/2 vec(F (k+1) ) -(I c ⊗ 2µα (k+1) )vec(F (0) ) < 0. (36\n)\nTo explicitly show how the quantities of µ and p affect the term in (36), we start with the case when k = 0. When k = 0, (36) becomes:\nI c ⊗ α (1) D -1/2 M (1) D -1/2 vec(F (1) ) -(I c ⊗ 2µα (1) )vec(F (0) ) =I c ⊗ α (1) D -1/2 M (1) D -1/2 vec α (0) D -1/2 M (0) D -1/2 F (0) + 2µα (0) F (0) -(I c ⊗ 2µα (1) )vec(F (0) ), =I c ⊗ α (1) D -1/2 M (1) D -1/2 I c ⊗ α (0) D -1/2 M (0) D -1/2 + 2µα (0) vec(F (0) ) -(I c ⊗ 2µα (1) )vec(F (0) ), =I c ⊗ 1 s=0 α (s) D -1/2 M (s) D -1/2 + α (1) D -1/2 M (1) D -1/2 2µα (0) -2µα (1) vec(F (0) ). (37\n)\nWe note that, in (37),\n1 s=0 α (s) D -1/2 M (s) D -1/2 + α (1) D -1/2 M (1) D -1/2 2µα (0) -2µα (1) can\n3 Strictly speaking, one shall further require all elements in F (k+1) larger than or equal to 0. As this can be achieved by assigned a non-linear activation function (i.e., ReLU) to the framelet, we omit it here in our main analysis.\nbe computed as:\n1 s=0 α (s) i,i d -1/2 i,i M (s) i,j d -1/2 j,j + α (1) i,i d -1/2 i,i M (1) i,j d -1/2 j,j 2µα (0) i,i -2µα (1) i,i = 1 s=0     1/   v j ∼v i M (s) i,j d i,i + 2µ     ∇ W F (s) ([i, j]) p-2 d i,i d j,j   +     1/   v j ∼v i M (1) i,j d i,i + 2µ     ∇ W F (1) ([i, j]) p-2 d i,i d j,j   2µ/   v j ∼v i M (0) i,j d i,i + 2µ       -   2µ/   v j ∼v i M (1) i,j d i,i + 2µ     , =     ∇ W F (0) ([i, j]) p-2 v j ∼v i M (0) i,j d i,i + 2µ • d i,i d j,j         ∇ W F (1) ([i, j]) p-2 v j ∼v i M (1) i,j d i,i + 2µ • d i,i d j,j     +     ∇ W F (1) ([i, j]) p-2 v j ∼v i M (1) i,j d i,i + 2µ • d i,i d j,j • 2µ/   v j ∼v i M (0) i,j d i,i + 2µ       -   2µ/   v j ∼v i M (1) i,j d i,i + 2µ     .(38)\nNow we see that by assigning a sufficient large of µ or small value of p, we can see terms like\n∥∇W F (1) ([i,j])∥ p-2 v j ∼v i M (1) i,j d i,i +2µ • √ d i,i d j,j\nin (38) are getting smaller Additionally, we have both 2µ/\nv j ∼v i M (0) i,j d i,i + 2µ and 2µ/ v j ∼v i M (1) i,j d i,i + 2µ ≈ 1.\nTherefore, the summation result of (38) tends to be negative. 
Based on ( 35), E P F (F (k+1) ) will stay above 0.\nFor the case that k ≥ 1, by taking into the iterative algorithm ( 19), (37) becomes:\nI c ⊗ k+1 s=0 α (s) D -1/2 M (s) D -1/2 + k+1 s=0 k+1 l=k-s α (l) D -1/2 M (l) D -1/2 2µα (l-1) -2µα (k+1) vec(F (0) ).\nApplying the same reasoning as before, it is not hard to verify that with sufficient large of µ and small of p, the term\nk+1 s=0 α (s) D -1/2 M (s) D -1/2 + k+1 s=0 k+1 l=k-s α (l) D -1/2 M (l) D -1/2 2µα (l-1) -2µα (k+1)\nin the above equation tend to be negative, yielding a positive E P F (F (k+1) ). Asymptotically, we have:\nE P F (F (k+1) )≈ vec(F (k+1) ), vec(F (k+1) ) + vec(F (k+1) ), 1 2 I c ⊗ 4µα (k+1) + I N vec(F (0) ) .(39)\nThis shows that the energy increases along with the magnitude of µ, and it is not hard to express (39) as the similar form of (38) and verify that the energy decreases with the quantity of p. This completes the proof.\nRemark 3. Proposition 2 shows that, for any of our framelet convolution models, the p-Laplacian based implicit layer will not generate identical node feature across graph nodes, and thus the so-called over-smoothing issue will not appear asymptotically. Furthermore, it is worth noting that the result from Proposition 2 provides the theoretical justification of the empirical observations in [25], where a large value of µ or small value of p is suitable for fitting heterophily datasets which commonly require the output of GNN to have higher Dirichlet energy.\nRemark 4 (Regarding to the quantity of p). The conclusion of Proposition 2 is under sufficient large of µ or small of p. However, it is well-known that the quantity of p cannot be set as arbitrary and in fact it is necessary to have p ≥ 1 so that the iteration for the solution of the optimization problem defined in (18) can converge. Therefore, it is not hard to see that the effect of p is weaker than µ in terms of analyzing the asymptotic behavior of the model (i.e., via (38)). Without loss of generality, in the sequel, when we analyze the property of the model with conditions on µ and p, we mainly target on the effect from µ and one can check from (38) µ and p are with opposite effect on the model." }, { "figure_ref": [], "heading": "Interaction with Framelet Energy Dynamic", "publication_ref": [ "b25", "b10", "b15", "b9", "b33", "b31", "b22", "b19", "b10", "b10", "b19" ], "table_ref": [], "text": "To analyze the interaction between the energy dynamic of framelet convolution defined in ( 9) and the p-Laplacian based implicit propagation [25], We first briefly review some recent work on the energy dynamic of the GNNs. In [10], the propagation of GNNs was considered as the gradient flow of the Dirichlet energy that can be formulated as:\nE(F) = 1 2 N i=1 N j=1 w i,j d j,j f j - w i,j d i,i f i 2 ,(40)\nand similarly by setting the power from 2 to p, we recover the p-Dirichlet form presented in (15). The gradient flow of the Dirichlet energy yields the so-called graph heat equation [9] as Ḟ(k) = -∇E(F (k) ) = -LF (k) . Its Euler discretization leads to the propagation of linear GCN models [33,31]. 
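A short numerical illustration of this gradient-flow view is given below: discretizing Ḟ = -LF with explicit Euler steps on a small ring graph drives the Dirichlet energy towards 0. For the snippet, the classic normalized Laplacian without self-loops, L = I - D^{-1/2}WD^{-1/2}, is used so that L is positive semi-definite and the energy in (40) equals tr(F^T L F); the step size is kept below 2/ρ_L so the discretization is stable. The example is only meant to visualize the smoothing effect discussed next.

    import numpy as np

    N, c, tau = 6, 3, 0.5
    W = np.zeros((N, N))
    W[np.arange(N), (np.arange(N) + 1) % N] = 1.0      # ring graph: connected, unweighted
    W = np.maximum(W, W.T)
    d = W.sum(axis=1)
    L = np.eye(N) - np.diag(d ** -0.5) @ W @ np.diag(d ** -0.5)
    F = np.random.default_rng(0).standard_normal((N, c))
    for k in range(15):
        print(k, round(float(np.trace(F.T @ L @ F)), 4))   # Dirichlet energy, decays towards 0
        F = F - tau * (L @ F)                              # explicit Euler step of dF/dt = -L F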
The process is called Laplacian smoothing [22] and it converges to the kernel of L, i.e., ker( L) as k -→ ∞, resulting in non-separation of nodes with same degrees, known as the over-smoothing issue.\nFollowing this observation, the work [19,10] also show even with the help of the non-linear activation function and the weight matrix via classic GCN ((1)), the process described is still dominated by the low frequency (small Laplacian eigenvalues) of the graph, hence eventually converging to the kernel of L, for almost every initialization. To quantify such behavior, [10,19] consider a general dynamic as Ḟ(k) = GNN θ (F (k) , k), with GNN θ (•) as an arbitrary graph neural network function, and also characterizes its behavior by low/high-frequency-dominance (L/HFD).\nDefinition 8 ([10]). Ḟ(k) = GNN θ (F (k) , k) is Low-Frequency-Dominant (LFD) if E F (k) /∥F (k) ∥ - → 0 as k - → ∞, and is High-Frequency-Dominant (HFD) if E F (k) /∥F (k) ∥ - → ρ L /2 as t - → ∞." }, { "figure_ref": [], "heading": "Lemma 2 ([10]", "publication_ref": [ "b10", "b19", "b19", "b10", "b19", "b38", "b9", "b39", "b41", "b38" ], "table_ref": [], "text": "). A GNN model is LFD (resp. HFD) if and only if for each t j -→ ∞, there exists a sub-sequence indexed by\nk j l - → ∞ and F ∞ such that F(k j l )/∥F(k j l )∥ -→ F ∞ and LF ∞ = 0 (resp. LF ∞ = ρ L F ∞ ).\nRemark 5 (LFD, HFD and graph homophily). Based on Definition 8 and Lemma 2, for a given GNN model, if G is homophilic, i.e., adjacency nodes are more likely to share the same label, one may prefer for the model to induce a LFD dynamic in order to fit the characteristic of G. On the other hand, if G is heterophilic, the model is expected to induce a HFD dynamic, so that even in the adjacent nodes, their predicted labels still tend to be different. Thus, ideally, a model should be flexible enough to accommodate both LFD and HFD dynamics.\nGeneralized from the energy dynamic framework provided in [10], [19] developed a framelet Dirichlet energy and analyzed the energy behavior of both spectral (( 9)) and spatial framelet ((10)) convolutions. Specifically, let\nE F r (F) = 1 2 Tr (W r,ℓ F) ⊤ W r,ℓ FΩ r,ℓ - 1 2 Tr (W r,ℓ F) ⊤ diag(θ) r,ℓ W r,ℓ F W\nfor all (r, ℓ) ∈ I. The generated framelet energy is given by:\nE F r (F) = E F r 0,J (F) + r,ℓ E F r r,ℓ (F) = 1 2 (r,ℓ)∈I vec(F), Ω r,ℓ ⊗ W ⊤ r,ℓ W r,ℓ -W ⊗ W ⊤ r,ℓ diag(θ) r,ℓ W r,ℓ vec(F) ,(41)\nwhere the superscript \" F r \" stands for the framelet convolution. This definition is based on the fact that the total Dirichlet energy is conserved under framelet decomposition [19,10]. By analyzing the gradient flow of the framelet energy4 defined above, [19] concluded the energy dynamic of framelet as:\nProposition 3 ([19]\n). The spectral graph framelet convolution (9) with Haar-type filter (i.e. R = 1 in the case of scaling function set) can induce both LFD and HFD dynamics. Specifically, let θ 0,ℓ = 1 N and θ r,ℓ = θ1 N for r = 1, ..., L, ℓ = 1, ..., J where 1 N is a size N vector of all 1s. When θ ∈ [0, 1), the spectral framelet convolution is LFD and when θ > 1, the spectral framelet convolution is HFD.\nIt is worth noting that there are many other settings rather than θ 0,ℓ = 1 N and θ r,ℓ = θ1 N , i.e. adjusting θ, for inducing LFD/HFD from framelet. However, in this paper, we only consider the conditions described in Proposition 3. To properly compare the energy dynamics between the framelet models, we present the following definition.\nDefinition 9 (Stronger/Weaker Dynamic). 
Let Q θ be the family of framelet models with the settings described in Proposition 3 and choice of θ. We say that one framelet model Q θ 1 is with a stronger LFD than another framelet model Q θ 2 if θ 1 < θ 2 , and weaker otherwise. Similarly, we say Q θ 1 is with a stronger HFD than Q θ 2 if θ 1 > θ 2 , and weaker otherwise5 . Remark 6. Similar reasoning of Proposition 3 can be easily generalized to other commonly used framelet types such as Linear, Sigmoid and Entropy [38].\nBefore we present our conclusion, we note that to evaluate the changes of (framelet) energy behavior from the impact of implicit layer, one shall also define a layer-wised framelet energy such as E P F (F (k+1) ) by only considering the energy from one step of propagation of graph framelet. With all these settings, we summarize the interaction between framelet and p-Laplacian based implicit propagation as: Lemma 3 (Stronger HFD). Based on the condition described in Proposition 3, when framelet is HFD, with sufficient large value of µ or small of p, the p-Laplacian implicit propagation further amplify the energy E F r (F) in (41) of the node feature (i.e., Y in (18)) produced from the framelets, and thus achieving a higher HFD dynamic than original framelet in (9).\nProof. Recall that by setting sufficient large of µ or small of p, E P F (F (k+1) ) in (39) has the form\nE P F (F (k+1) ) ≈ vec(F (k+1) ), vec(F (k+1) ) + vec(F (k+1) ), 1 2 I c ⊗ 4µα (k+1) + I N vec(F (0) ) .\nSimilarly, when framelet is HFD, with θ 0,ℓ = 1 N , θ r,ℓ = θ1 N and θ > 1, the Dirichlet energy (of F (k+1) ) ( 41) can be rewritten as:\nE F r (F (k+1) ) = 1 2 (r,ℓ)∈I vec(F (k+1) ), Ω r,ℓ ⊗ W ⊤ r,ℓ W r,ℓ -W ⊗ W ⊤ r,ℓ diag(θ) r,ℓ W r,ℓ vec(F (k+1) ) , = 1 2 (r,ℓ)∈I vec(F (k+1) ), W ⊗ W ⊤ r,ℓ W r,ℓ -W ⊤ r,ℓ diag(θ) r,ℓ W r,ℓ vec(F (k+1) ) ,(42)\nwhere the last equality is achieved by letting Ω = W, meaning that no external force6 exist within the space that contains the node features. We note that it is reasonable to have such assumption in order to explicitly analyze the energy changes in (41) via the changes of θ. Now we take the Haar framelet with ℓ = 1 as an example, meaning there will be only one high-pass and low-pass frequency domain in the framelet model. Specifically, the R.H.S of ( 42) can be further rewritten as:\n1 2 (r,ℓ)∈I vec(F (k+1) ), W ⊗ W ⊤ r,ℓ W r,ℓ -W ⊤ r,ℓ diag(θ) r,ℓ W r,ℓ vec(F (k+1) ) ≈ 1 2 vec(F (k+1) ), W ⊗ I N -W ⊤ 1,1 diag(θ) 1,1 W 1,1 vec(F (k+1) ) . (43\n)\nThe inclusion of W ⊤ 1,1 diag(θ) 1,1 W 1,1 is based on the form of Haar type framelet with one scale. In addition, the approximation in ( 43) is due to the outcome of HFD 7 . Now we combine the framelet energy in the above equation (( 43)) with the energy induced from p-Laplacian based implicit propagation (( 39)). Denote the total energy induced from framelet and implicit layer as:\nE (total) (F (k+1) ) = vec(F (k+1) ), vec(F (k+1) ) (44) + 1 2 vec(F (k+1) ), W ⊗ I N -W ⊤ 1,1 diag(θ) 1,1 W 1,1 vec(F (k+1) ) + I c ⊗ 4µα (k+1) + I N vec(F (0) ) .\nIt is not hard to check that E (total) (F (k+1) ) is larger than E F r (F (k+1) ) (the framelet energy under HFD). Hence we have verified that the implicit layer further amplifies the Dirichlet energy. Moreover, one can approximate this stronger dynamic by re-parameterizing E (total) (F (k+1) ) via assigning a higher quantity of θ ′ > θ > 0 and excluding the residual term. 
Hence, the inclusion of the implicit layer induces a higher HFD dynamic to framelet, and that completes the proof.\nCorollary 1 (Escape from Over-smoothing). With the same conditions in Proposition 3, when framelet is LFD, the implicit layer (with sufficient large µ or small p) ensures the Dirichlet energy of node features does not converge to 0, thus preventing the model from the over-smoothing issue.\nProof. The proof can be done by combining Proposition 3 and Proposition 2, with the former illustrates that when model is HFD, there will be no over-smoothing problem, and the latter shows that even when the model is LFD, the Dirichlet energy of the node features will not converge to 0. Accordingly, pL-UFG is capable of escaping from over-smoothing issue.\nRemark 7 (Stronger LFD). Based on the condition described in Proposition 3, when framelet is LFD, with sufficient small of µ or larger of p, it is not hard to verify that according to (38), the p-Laplacian implicit propagation further shrink the Dirichlet energy of the node feature produced from framelet, and thus achieving a stronger LFD dynamic.\nRemark 8. In Proposition 2 we showed that the Dirichlet energy of the node features produced from the implicit layer will not coverage to zero, indicating the robustness of the implicit layer in regarding to the over-smoothing issue. Additionally, we further verified in Lemma 3 that when graph framelet is with a monotonic dynamic (e.g., L/HFD), the inclusion of the implicit layer can even amplify the dynamic of framelet by a proper setting of µ and p. Our conclusion explicitly suggests the effectiveness on incorporating p-Laplacian based implicit propagation to multiscale GNNs which is with flexible control of model dynamics." }, { "figure_ref": [], "heading": "Proposed Model with Controlled Dynamics", "publication_ref": [ "b19", "b18" ], "table_ref": [], "text": "Based on the aforementioned conclusions regarding energy behavior and the interaction between the implicit layer and framelet's energy dynamics, it becomes evident that irrespective of the homophily index of any given graph input, one can readily apply the condition of θ(s) in Proposition 3 to facilitate the adaptation of the pL-UFG model to the input graph by simply adjusting the quantities of µ and p. This adjustment significantly reduces the training cost of the graph framelet. For instance, consider the case of employing a Haar type frame with ℓ = 1, where we have only one low-pass and one high-pass domain. In this scenario, the trainable matrices for this model are θ 0,1 , θ 1,1 , and W.\nBased on our conclusions, we can manually set both θ 0,1 and θ 1,1 to our requested quantities, thereby inducing either LFD or HFD. Consequently, the only remaining training cost is associated with W, leading to large reduction on the overall training cost while preserving model's capability of handling both types of graphs. Accordingly, we proposed two additional pL-UFG variants with controlled model dynamics, namely pL-UFG-LFD and pL-UFG-HFD. More explicitly, the propagation of graph framelet with controlled dynamic takes the form as:\nF (k+1) =σ W ⊤ 0,1 diag(1 N )W 0,1 F (k) W + W ⊤ 1,1 diag(θ1 N )W 1,1 F (k) W ,\nafter which the output node features will be propagated through the iterative layers in defined in (19) for the implicit layer (18) for certain layers, and the resulting node feature will be forwarded to the next graph framelet convolution and implicit layer propagation. 
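A minimal sketch of one such layer with a Haar-type frame and a single level is given below. The filter on the low-pass part is fixed to 1_N and the filter on the high-pass part to θ1_N as in Proposition 3, so only the channel-mixing matrix Wmix is trainable; choosing θ < 1 steers the layer towards LFD and θ > 1 towards HFD. The dilation of the scaling functions is simplified (the spectrum of L already lies in [0, 2] ⊂ [0, π]) and the eigendecomposition is used instead of the Chebyshev approximation, so this is an illustration rather than the exact implementation.

    import numpy as np

    def controlled_framelet_layer(L, F, Wmix, theta):
        # F^{(k+1)} = sigma( W_low^T diag(1_N) W_low F Wmix + W_high^T diag(theta 1_N) W_high F Wmix )
        lam, U = np.linalg.eigh(L)                        # L = U diag(lam) U^T, lam in [0, 2]
        W_low = U @ np.diag(np.cos(lam / 2.0)) @ U.T      # Haar-type low-pass g0
        W_high = U @ np.diag(np.sin(lam / 2.0)) @ U.T     # Haar-type high-pass g1
        out = W_low.T @ W_low @ F @ Wmix + theta * (W_high.T @ W_high @ F @ Wmix)
        return np.maximum(out, 0.0)                       # sigma = ReLU

The output of this layer is then passed through the iterative p-Laplacian layers of (19) before the next framelet convolution, as described above.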
We note that to properly represent the Dirichlet energy of node features, we borrow the concept of electronic orbital energy levels in Figure . 1. The shaded outermost electrons correspond to higher energy levels, which can be analogously interpreted as higher variations in node features. Conversely, the closer the electrons are to the nucleus, the lower their energy levels, indicating lower variations in node features." }, { "figure_ref": [], "heading": "Equivalence to Non-Linear Diffusion", "publication_ref": [ "b4", "b28", "b4", "b3", "b28", "b14", "b11", "b45", "b46", "b7", "b7", "b39" ], "table_ref": [], "text": "Diffusion on graph has gained its popularity recently [5,28] by providing a framework (i.e., PDE) to understand the GNNs architecture and as a principled way to develop a broad class of new methods.\nTo the best of our knowledge, although the GNNs based on linear diffusion on graph [5,4,28] have been intensively explored, models built from non-linear graph diffusion have not attracted much attention in general. In this section, we aim to verify that the iteration (19) admits a scaled nonlinear diffusion with a source term. To see this, recall that p-Laplacian operator defined in (14) has the form:\n∆ p F := - 1 2 div(∥∇F∥ p-2 ∇F), for p ≥ 1.(45)\nPlugging in the definition of graph gradient and divergence defined in (11) and ( 13) into the above equation, one can compactly write out the form of p-Laplacian as:\n(∆ p F)(i) = v j ∼v i w i,j d i,i ∥∇ W F([i, j])∥ p-2 w i,j d i,i f i - w i,j d j,j f j .(46)\nFurthermore, if we treat the iteration equation ( 19) as a diffusion process, its forward Euler scheme has the form:\nF (k+1) -F (k) τ = α (k) D -1/2 M (k) D -1/2 F (k) -F (k) + β (k) Y, = α (k) D -1/2 M (k) D -1/2 -I F (k) + β (k) Y.(47)\nWe set τ = 1 for the rest of analysis for the convenience reasons. With all these setups, we summarize our results in the following:\nLemma 4 (Non-Linear Diffusion). Assuming G is connected, the forward Euler scheme presented in (47) admits a generalized non-linear diffusion on the graph. Specifically, we have:\nα (k) D -1/2 M (k) D -1/2 -I F (k) + β (k) Y = α div(∥∇F (k) ∥ p-2 ∇F (k) ) + 2µα (k) DF (k) + 2µα (k) F (0) .(48)\nProof. The proof can be done by verification. We can explicitly write out the computation on the i-th row of the left hand side of (48) as: First let us denote the rows of F (k) as f (k) (i)'s.\nv j ∼v i α (k) i,i d -1/2 ii M (k) i,j d -1/2 jj f (k) (j) -f (k) (i) + β (k) i,i Y (i) =α (k) i,i   v j ∼v i M ij √ d ii d jj f (k) (j) - 1 α (k) i,i f (k) (i)   + 2µα (k) i,i f (0) (i) =α (k) i,i   v j ∼v i M ij √ d ii d jj f (k) (j) - v j ∼v i M ij d ii + 2µ f (k) (i)   + 2µα (k) i,i f (0) (i) =α (k) i,i   v j ∼v i w i,j d i,i ∥∇ W F([i, j])∥ p-2 w i,j d j,j f (k) j - w i,j d i,i f (k) i + 2µ v j ∼v i f (k) i   + 2µα (k) i,i f (0) (i) =α (k) i,i ((∆ p F)(i)) + 2µα (k) i,i d ii f (k) i + 2µα (k) i,i f (0) i (49)\nWhen i takes from 1 to N , it gives (48) according to (45) and (46).Thus we complete the proof.\nBased on the conclusion of Lemma 4, it is clear that the propagation via p-Laplacian implicit layer admits a scaled non-linear diffusion with two source terms. We note that the form of our non-linear diffusion coincidences to the one developed in [7]. However, in [7] the linear operator is assigned via the calculation of graph Laplacian whereas in our model, the transformation acts over the whole p-Laplacian. 
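For completeness, the p-Laplace operator in (46) can be evaluated directly from the graph gradient. The sketch below assumes a dense weight matrix and no isolated nodes; the small eps is only a numerical guard against zero edge gradients when p < 2 and does not appear in the definition.

    import numpy as np

    def graph_p_laplacian(W, F, p, eps=1e-8):
        # (Delta_p F)(i) as written in (46), computed row by row
        d = W.sum(axis=1)
        N = W.shape[0]
        out = np.zeros_like(F)
        for i in range(N):
            for j in range(N):
                if W[i, j] > 0:
                    grad_ij = np.sqrt(W[i, j] / d[j]) * F[j] - np.sqrt(W[i, j] / d[i]) * F[i]
                    out[i] += (np.sqrt(W[i, j] / d[i])
                               * (np.linalg.norm(grad_ij) + eps) ** (p - 2)
                               * (np.sqrt(W[i, j] / d[i]) * F[i] - np.sqrt(W[i, j] / d[j]) * F[j]))
        return out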
Finally, it is worth noting that the conclusion in Lemma 4 can be transferred to the implicit schemes 8 . We omit it here. Remark 9. With sufficiently large µ or small p, one can check that the strength of the diffusion, i.e. div(∥∇F (k) ∥ p-2 ∇F (k) ), is diluted. Once two source terms 2µα (k) DF (k) + 2µα (k) F (0) dominant the whole process, the generated node features approach to DF (k) + F (0) , which suggests a framelet together with two source terms. The first term can be treated as the degree normalization of the node features from the last layer and the second term simply maintains the initial feature embedding. Therefore, the energy of the remaining node features in this case is just with the form presented in (39), suggesting a preservation of node feature variations. Furthermore, this observation suggests our conclusion on the energy behavior of pL-UFG (Proposition 2); the interaction within pL-UFG described in Lemma 3 and Corollary 1 and lastly, the conclusion from Lemma 4 can be unified and eventually forms a well defined framework in assessing and understanding the property of pL-UFG." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [ "b25", "b38" ], "table_ref": [], "text": "Experiment outlines In this section, we present comprehensive experimental results on the claims that we made from the theoretical aspects of our model. All experiments were conducted in PyTorch on NVIDIA Tesla V100 GPU with 5,120 CUDA cores and 16GB HBM2 mounted on an HPC cluster. In addition, for the sake of convenience, we listed the summary of each experimental section as follows:\n• In Section 4.1, we show how a sufficient large/small µ can affect model's performance on heterophilic/homophilic graphs, and the results are almost invariant to the choice of p.\n• In Section 4.2 we show some tests regarding to the results (i.e., Remark 7 and Lemma 3) of model's dynamics. Specifically, we verified the conclusions of stronger LFD and HFD in Section 3.3 with controlled model dynamics (quantity of θ ) of framelet to illustrate how the p-Laplacian based implicit layer interact with framelet model.\n• In Section 4.3 we test the performances of pL-UFG-LFD and pL-UFG-HFD via real-world graph benchmarks versus various baseline models. Furthermore, as these two controllable pL-UFG models largely reduced the computational cost (as we claimed in Section 3.4), we show pL-UFG-LFD and pL-UFG-HFD can even handle the large-scale graph datasets and achieve remarkable learning accuracies.\nHyper-parameter tuning We applied exactly same hyper-parameter tunning strategy as [25] to make a fair comparsion. In terms of the settings for graph framelets, the framelet type is fixed as Haar ( [38]) and the level J is set to 1. The dilation scale s ∈ {1, 1.5, 2, 3, 6}, and for n, the degree of Chebyshev polynomial approximation is drawn from {2, 3, 7}. It is worth noting that in graph framelets, the Chebyshev polynomial is utilized for approximating the spectral filtering of the Laplacian eigenvalues. Thus, a d-degree polynomial approximates d-hop neighbouring information of each node of the graph. Therefore, when the input graph is heterophilic, one may prefer to have a relatively larger d as node labels tend to be different between directly connected (1-hop) nodes." 
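For reference, the framelet-related search space described above can be written down as a plain grid; the dictionary below only records the values stated in the text, and the key names are ours.

    framelet_param_grid = {
        "framelet_type": ["Haar"],        # fixed
        "level_J": [1],                   # fixed
        "dilation_s": [1, 1.5, 2, 3, 6],
        "chebyshev_degree_n": [2, 3, 7],  # larger degrees aggregate information from more hops
    }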
}, { "figure_ref": [], "heading": "Synthetic Experiment on Variation of µ", "publication_ref": [ "b25" ], "table_ref": [], "text": "Setup In this section, we show how a sufficiently large/small of µ can affect model's performance on hetero/homophilic graphs. In order to make a fair comparison, all the parameters of pL-UFG followed the settings included in [25]. For this test, we selected two datasets: Cora (heterophilic index: 0.825, 2708 nodes and 5278 edges) and Wisconsin (heterophilic index: 0.15, 499 nodes and 1703 edges) from https://www.pyg.org/. We assigned the quantity of p = {1, 1.5, 2, 2.5} combined with a set of µ = {0.1, 0.5, 1, 5, 10, 20, 30, 50, 70}. The number of epochs was set to 200 and the test accuracy (in %) is obtained as the average test accuracy of 10 runs." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_2", "fig_2" ], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [], "text": "The experimental results are presented in Figure 2. When examining the results obtained through the homophily graph (Figure 2a), it is apparent that all variants of pL-UFGs achieved the best performance when µ = 0.1, which is the minimum value of µ. As the value of µ increased, the learning accuracy decreased. This suggests that a larger sharpening effect was induced by the model, as stated in Remark 7 and Proposition 2, causing pL-UFGs to incorporate higher amounts of Dirichlet energy into their generated node features. Consequently, pL-UFGs are better suited for adapting to heterophily graphs. This observation is further supported by the results in Figure 2b, where all pL-UFG variants achieved their optimal performance with a sufficiently large µ when the input graph is heterophilic.\nAdditional interesting observation on the above result is despite the fact that all model variants demonstrated superior learning outcomes on both homophilic and heterophilic graphs when assigned sufficiently large or small values of µ, it can be observed that when the quantity of p is small, pL-UFG requires a smaller value of µ to fit the heterophily graph (blue line in Fig. 2b). On the other hand, when the models have relatively large value of p (i.e., p = 2.5), it is obvious that these models yielded the most robust results when there is an increase of µ (red line in Fig. 2a). These phenomena further support the notion that p and µ exhibit opposite effects on the model's energy behavior as well as its adaptation to homophily and heterophily graphs." }, { "figure_ref": [], "heading": "Synthetic Experiment on Testing of Model's Dynamics", "publication_ref": [ "b25" ], "table_ref": [], "text": "Now, we take one step ahead. Based on Lemma 3 and Remark 7, with the settings of θ provided in Proposition 3, the inclusion of p-Laplacian based implicit layer can further enhance framelet's LFD and HFD dynamics. This suggests that one can control the entries of θ based on the conditions provided in Proposition 3 and by only changing the quantity of µ and p to test model's adaption power on both homophily and heterophily graphs. Therefore, in this section, we show how a (dynamic) controlled framelet model can be further enhanced by the assistant from the p-Laplacian regularizer. Similarly, we applied the same setting to the experiments in [25]." }, { "figure_ref": [ "fig_3" ], "heading": "Setup and Results", "publication_ref": [], "table_ref": [], "text": "To verify the claims on in Lemma 3 and Remark 7, we deployed the same settings mentioned in Proposition 3. 
Specifically, we utilized the Haar frame with ℓ = 1 and set θ 0,1 = I N and θ 1,1 = θI N . For the heterophilic graph (Wisconsin), θ = 2, and for the homophilic graph (Cora), θ = 0.2. The results of the experiment are presented in Figure 3. Similar to the results observed in Section 4.1, when a relatively large value of µ is assigned, the model's capability of adapting to homophily/heterophily graphs decreases/increases. This directly verifies that the p-Laplacian based implicit layer interacts with, and through the values of p and µ further enhances, the (controlled) dynamics of the framelet in terms of adaptation. " }, { "figure_ref": [], "heading": "Real-world Node Classification and Scalability", "publication_ref": [ "b25", "b17", "b21", "b34", "b30", "b37", "b16", "b8", "b14", "b41", "b25", "b25", "b25", "b41" ], "table_ref": [ "tab_0", "tab_1", "tab_2", "tab_1" ], "text": "The previous synthetic numerical results show the predictable performance of both pL-UFG-LFD and pL-UFG-HFD. In this section, we present the learning accuracy of our proposed models on real-world homophily and heterophily graphs. Similarly, we deployed the same experimental settings as [25]. In addition, to verify the claim in Section 3.4, we tested our proposed models on a large-scale graph dataset (ogbn-arxiv) to show their scalability, which is rarely explored. We include the summary statistics of the datasets in Table 2. All datasets are split according to [17].\nFor the settings of µ, p and θ within pL-UFG-LFD and pL-UFG-HFD, we assigned µ = {0.1, 0.5, 1, 2.0}, p = {1, 1.5, 2, 2.5} and θ = {0.2, 0.5, 0.8} for pL-UFG-LFD in order to fit the homophily graphs, and for pL-UFG-HFD we assigned µ = {10, 20, 30}, p = {1, 1.5, 2, 2.5} and θ = {5, 7.5, 10} for the heterophily graphs. The learning accuracies are presented in Tables 3 and 4. Furthermore, rather than only reporting the average accuracy and the related standard deviation, to further verify the significance of the improvement we also computed the 95% confidence interval under the t-distribution for the highest learning accuracy among the baselines, and we mark our model's learning accuracy with * if it falls outside this confidence interval.\nWe include a brief introduction to the baseline models used in this experiment:\n• MLP: Standard feedforward multi-layer perceptron.\n• GCN [21]: GCN is the first of its kind to implement a linear approximation to spectral graph convolutions.\n• SGC [34]: SGC reduces GCNs' complexity by removing nonlinearities and collapsing weight matrices between consecutive layers.
It thus serves as a powerful and efficient GNN baseline.\n• GAT [30]: GAT generates an attention coefficient matrix, computed at each layer from a node-feature based attention mechanism and multiplied element-wise with the graph adjacency matrix, so that node features are propagated according to their relative importance.\n• JKNet [37]: JKNet offers the capability to adaptively exploit diverse neighbourhood ranges, facilitating enhanced structure-aware representations for individual nodes.\n• APPNP [16]: APPNP leverages personalized PageRank to disentangle the neural network from the propagation scheme.\n• GPRGNN [8]: The GPRGNN architecture dynamically learns Generalized PageRank (GPR) weights to optimize the extraction of node features and topological information from a graph, irrespective of the level of homophily present.\n• p-GNN [14]: p-GNN is a p-Laplacian based graph neural network model that incorporates a message-passing mechanism derived from a discrete regularization framework. To make a fair comparison, we test the p-GNN model with different values of p.\n• UFG [41]: UFG, a class of GNNs built upon framelet transforms, utilizes framelet decomposition to effectively decompose graph features into low-pass and high-pass spectra.\n• pL-UFG [25]: pL-UFG employs a p-Laplacian based implicit layer to enhance the adaptability of multi-scale graph convolution networks (i.e., UFG) to filter-based domains, effectively improving the model's adaptation to both homophily and heterophily graphs. Furthermore, as two types of pL-UFG models are proposed in [25], we test both pL-UFG variants as baseline models. For more details, including the precise formulation of the model, please refer to [25].\nDiscussion on the Results, Scalability and Computational Complexity From both Tables 3 and 4, it is clear that our proposed models (pL-UFG-LFD and pL-UFG-HFD) produce state-of-the-art learning accuracy compared to various baseline models. For the datasets (i.e., Pubmed and Squirrel) on which pL-UFG-LFD and pL-UFG-HFD are not the best, one can observe that they still achieve nearly identical learning outcomes to the best pL-UFG results. This suggests that, even with controlled framelet dynamics, by adjusting the values of µ and p our proposed models are still able to generate state-of-the-art learning results while largely reducing the computational complexity compared to pL-UFG and UFG. This observation directly verifies Lemma 3 and Remark 7. In addition, due to the reduction of computational cost, our dynamics-controlled models (pL-UFG-LFD and pL-UFG-HFD) show a strong capability of handling the large-scale graph dataset; scalability is a challenging issue for some GNNs, especially multi-scale graph convolutions such as framelets [41], without additional data pre-processing steps. Accordingly, one can check that pL-UFG-LFD outperforms all included baselines on the Arxiv dataset. Lastly, one can also find that most of the improvements of our models' learning accuracy over the baselines are significant." }, { "figure_ref": [], "heading": "Limitation of the Proposed Models and Future Studies", "publication_ref": [], "table_ref": [], "text": "First, we note that our analysis of the convergence, the energy dynamics and the diffusion equivalence of our proposed model can be applied, or partially applied, to most existing GNNs.
Based on what we have claimed regarding the theoretical perspective of pL-UFG, although we assessed the model's properties from different perspectives, all theoretical analyses eventually arrive at the same conclusion (i.e., the asymptotic behavior of pL-UFG). Therefore, it would be beneficial to deploy our analysis framework on other well-known GNNs. Since the main purpose of this paper is to re-assess the properties of pL-UFG, we leave this to future work.\nIn addition, to induce LFD/HFD in pL-UFG, we set the value of θ as a constant according to Proposition 3; however, due to the large variety of real-world graphs, it is challenging to determine the most suitable θ when it is fixed as a constant. This suggests that the exploration of controlling the model's dynamics via the selection of θ is still preliminary. Moreover, based on Definition 1, the homophily index of a graph is a summary statistic over all nodes. However, even in a highly homophilic graph, there are still some nodes whose neighbours have different labels. This suggests that the index only captures the global rather than the local labelling information of the graph. Accordingly, assigning a constant θ to induce LFD/HFD might not equip pL-UFG with enough power to capture detailed labelling information of the graph. Therefore, another future research direction is to explore the design of θ via the local labelling information of the graph. Finally, we note that another consequence of setting θ 0,1 and θ 1,1 as constants is that it narrows the model's parameter space, as one can check that the only learnable matrix left in the explicit part of pL-UFG (9) is W. Accordingly, the narrowed parameter space might push the solution of the model optimization away from the previously desired solution, causing a potential increase in learning variance." }, { "figure_ref": [], "heading": "Concluding Remarks", "publication_ref": [], "table_ref": [], "text": "In this work, we performed a theoretical analysis of pL-UFG. Specifically, we verified that, by choosing suitable quantities of the model parameters (µ and p), the implicit propagation induced from the p-Laplacian is capable of amplifying or shrinking the Dirichlet energy of the node features produced from the framelet. Consequently, such manipulation of the energy results in a stronger energy dynamic of the framelet, thereby enhancing the model's adaptation power on both homophilic and heterophilic graphs. We further explicitly presented the proof of the convergence of pL-UFG, which, to the best of our knowledge, fills a knowledge gap at least in the field of p-Laplacian based multi-scale GNNs. Moreover, we showed the equivalence between pL-UFG and non-linear graph diffusion, indicating that pL-UFG can be trained via various training schemes. Finally, it should be noted that, for simplicity of the analysis, we have made several assumptions and only focused on Haar type frames; this suffices for the scope of this work. However, it would be interesting to consider more complex energy dynamics by reasonably dropping some of the assumptions or by considering other types of frames; we leave this to future work." } ]
2023-09-19
[ { "authors": "Wendong Bi; Lun Du; Qiang Fu; Yanlin Wang; Shi Han; Dongmei Zhang", "journal": "", "ref_id": "b0", "title": "Make heterophily graphs better fit gnn: A graph rewiring approach", "year": "2022" }, { "authors": "Cristian Bodnar; Francesco Di Giovanni; Benjamin Paul Chamberlain; Pietro Liò; Michael M Bronstein", "journal": "", "ref_id": "b1", "title": "Neural sheaf diffusion: A topological perspective on heterophily and oversmoothing in gnns", "year": "2022" }, { "authors": "Joan Michael M Bronstein; Taco Bruna; Petar Cohen; Veličković", "journal": "", "ref_id": "b2", "title": "Geometric deep learning: Grids, groups, graphs, geodesics, and gauges", "year": "2021" }, { "authors": "Ben Chamberlain; James Rowbottom; Maria I Gorinova; Michael Bronstein; Stefan Webb; Emanuele Rossi", "journal": "PMLR", "ref_id": "b3", "title": "Grand: Graph neural diffusion", "year": "2021" }, { "authors": "Benjamin Chamberlain; James Rowbottom; Davide Eynard; Francesco Di Giovanni; Xiaowen Dong; Michael Bronstein", "journal": "", "ref_id": "b4", "title": "Beltrami flow and neural diffusion on graphs", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b5", "title": "", "year": "2021" }, { "authors": "Jialin Chen; Yuelin Wang; Cristian Bodnar; Pietro Liò; Yu Guang; Wang ", "journal": "", "ref_id": "b6", "title": "Dirichlet energy enhancement of graph neural networks by framelet augmentation", "year": "2022" }, { "authors": "Qi Chen; Yifei Wang; Yisen Wang; Jiansheng Yang; Zhouchen Lin", "journal": "PMLR", "ref_id": "b7", "title": "Optimization-induced graph implicit nonlinear diffusion", "year": "2022" }, { "authors": "Eli Chien; Jianhao Peng; Pan Li; Olgica Milenkovic", "journal": "", "ref_id": "b8", "title": "Adaptive universal generalized pagerank graph neural network", "year": "2021" }, { "authors": "Fan Rk; Chung ", "journal": "American Mathematical Soc", "ref_id": "b9", "title": "Spectral graph theory", "year": "1997" }, { "authors": "Francesco Di; Giovanni ; James Rowbottom; Thomas Benjamin P Chamberlain; Michael M Markovich; Bronstein", "journal": "", "ref_id": "b10", "title": "Graph neural networks as gradient flows", "year": "2022" }, { "authors": "Bin Dong", "journal": "Applied and Computational Harmonic Analysis", "ref_id": "b11", "title": "Sparse representation on graphs by tight wavelet frames and applications", "year": "2017" }, { "authors": "Pavel Drábek; I Stanislav; Pohozaev", "journal": "Proceedings of the Royal Society of Edinburgh Section A: Mathematics", "ref_id": "b12", "title": "Positive solutions for the p-laplacian: application of the fibrering method", "year": "1997" }, { "authors": "Guoji Fu; Peilin Zhao; Yatao Bian", "journal": "PMLR", "ref_id": "b13", "title": "p-laplacian based graph neural networks", "year": "2022" }, { "authors": "Guoji Fu; Peilin Zhao; Yatao Bian", "journal": "PMLR", "ref_id": "b14", "title": "p-Laplacian based graph neural networks", "year": "2022" }, { "authors": "García Jp; I Azorero; Alonso Peral", "journal": "Communications in Partial Differential Equations", "ref_id": "b15", "title": "Existence and nonuniqueness for the p-laplacian", "year": "1987" }, { "authors": "Johannes Gasteiger; Aleksandar Bojchevski; Stephan Günnemann", "journal": "", "ref_id": "b16", "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "year": "2019" }, { "authors": "Will Hamilton; Zhitao Ying; Jure Leskovec", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Inductive 
representation learning on large graphs", "year": "2017" }, { "authors": "Pierre David K Hammond; Rémi Vandergheynst; Gribonval", "journal": "Applied and Computational Harmonic Analysis", "ref_id": "b18", "title": "Wavelets on graphs via spectral graph theory", "year": "2011" }, { "authors": "Andi Han; Dai Shi; Zhiqi Shao; Junbin Gao", "journal": "", "ref_id": "b19", "title": "Generalized energy and gradient flow via graph framelets", "year": "2022" }, { "authors": "Bernd Kawohl; Jiri Horak", "journal": "", "ref_id": "b20", "title": "On the geometry of the p-laplacian operator", "year": "2016" }, { "authors": "N Thomas; Max Kipf; Welling", "journal": "", "ref_id": "b21", "title": "Semi-supervised classification with graph convolutional networks", "year": "2016" }, { "authors": "Qimai Li; Zhichao Han; Xiao-Ming Wu", "journal": "", "ref_id": "b22", "title": "Deeper insights into graph convolutional networks for semi-supervised learning", "year": "2018" }, { "authors": "Remigijus Paulavičius; Julius Žilinskas", "journal": "Technological and Economic Development of Economy", "ref_id": "b23", "title": "Analysis of different norms and corresponding lipschitz constants for global optimization", "year": "2006" }, { "authors": "Hongbin Pei; Bingzhe Wei; Kevin Chen-Chuan; Yu Chang; Bo Lei; Yang", "journal": "", "ref_id": "b24", "title": "Geom-GCN: Geometric graph convolutional networks", "year": "2019" }, { "authors": "Zhiqi Shao; Andi Han; Dai Shi; Andrey Vasnev; Junbin Gao", "journal": "", "ref_id": "b25", "title": "Generalized Laplacian regularized framelet gcns", "year": "2022" }, { "authors": "Dai Shi; Yi Guo; Zhiqi Shao; Junbin Gao", "journal": "", "ref_id": "b26", "title": "How curvature enhance the adaptation power of framelet gcns", "year": "2023" }, { "authors": "Wim Sweldens", "journal": "SIAM Journal on Mathematical Analysis", "ref_id": "b27", "title": "The lifting scheme: A construction of second generation wavelets", "year": "1998" }, { "authors": "Matthew Thorpe; Tan Minh Nguyen; Hedi Xia; Thomas Strohmer; Andrea Bertozzi; Stanley Osher; Bao Wang", "journal": "", "ref_id": "b28", "title": "GRAND++: Graph neural diffusion with a source term", "year": "2022" }, { "authors": "César Torres", "journal": "", "ref_id": "b29", "title": "Boundary value problem with fractional p-laplacian operator", "year": "2014" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Liò; Yoshua Bengio", "journal": "", "ref_id": "b30", "title": "Graph attention networks", "year": "2018" }, { "authors": "Yifei Wang; Yisen Wang; Jiansheng Yang; Zhouchen Lin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Dissecting the diffusion process in linear graph convolutional networks", "year": "2021" }, { "authors": "Quanmin Wei; Jinyan Wang; Jun Hu; Xianxian Li; Tong Yi", "journal": "Neural Computing and Applications", "ref_id": "b32", "title": "Ogt: optimize graph then training gnns for node classification", "year": "2022" }, { "authors": "Felix Wu; Amauri Souza; Tianyi Zhang; Christopher Fifty; Tao Yu; Kilian Weinberger", "journal": "PMLR", "ref_id": "b33", "title": "Simplifying graph convolutional networks", "year": "2019" }, { "authors": "Felix Wu; Tianyi Zhang; Amauri Holanda De; Christopher Souza; Tao Fifty; Kilian Q Yu; Weinberger", "journal": "", "ref_id": "b34", "title": "Simplifying graph convolutional networks", "year": "2019" }, { "authors": "Zonghan Wu; Shirui Pan; Fengwen Chen; Guodong Long; Chengqi Zhang; S Yu; Philip ", 
"journal": "IEEE transactions on Neural Networks and Learning Systems", "ref_id": "b35", "title": "A comprehensive survey on graph neural networks", "year": "2020" }, { "authors": "Keyulu Xu; Weihua Hu; Jure Leskovec; Stefanie Jegelka", "journal": "", "ref_id": "b36", "title": "How powerful are graph neural networks", "year": "2019" }, { "authors": "Keyulu Xu; Chengtao Li; Yonglong Tian; Tomohiro Sonobe; Ken-Ichi Kawarabayashi; Stefanie Jegelka", "journal": "", "ref_id": "b37", "title": "Representation learning on graphs with jumping knowledge networks", "year": "2018" }, { "authors": "Mengxi Yang; Xuebin Zheng; Jie Yin; Junbin Gao", "journal": "", "ref_id": "b38", "title": "Quasi-framelets: Another improvement to graph neural networks", "year": "2022" }, { "authors": "Xin Zheng; Yixin Liu; Shirui Pan; Miao Zhang; Di Jin; Philip S Yu", "journal": "", "ref_id": "b39", "title": "Graph neural networks for graphs with heterophily: A survey", "year": "2022" }, { "authors": "Xuebin Zheng; Bingxin Zhou; Junbin Gao; Yuguang Wang; Pietro Lió; Ming Li; Guido Montufar", "journal": "PMLR", "ref_id": "b40", "title": "How framelets enhance graph neural networks", "year": "2021" }, { "authors": "Xuebin Zheng; Bingxin Zhou; Yu Guang Wang; Xiaosheng Zhuang", "journal": "Journal of Machine Learning Research", "ref_id": "b41", "title": "Decimated framelet system on graphs and fast g-framelet transforms", "year": "2022" }, { "authors": "Bingxin Zhou; Ruikun Li; Xuebin Zheng; Yu Guang Wang; Junbin Gao", "journal": "IEEE Transactions on Artificial Intelligence", "ref_id": "b42", "title": "Graph denoising with framelet regularizer", "year": "2021" }, { "authors": "Dengyong Zhou; Bernhard Schölkopf", "journal": "Springer", "ref_id": "b43", "title": "Regularization on discrete spaces", "year": "2005" }, { "authors": "Jiong Zhu; Yujun Yan; Lingxiao Zhao; Mark Heimann; Leman Akoglu; Danai Koutra", "journal": "", "ref_id": "b44", "title": "Beyond homophily in graph neural networks: Current limitations and effective designs", "year": "2020" }, { "authors": "Meiqi Zhu; Xiao Wang; Chuan Shi; Houye Ji; Peng Cui", "journal": "", "ref_id": "b45", "title": "Interpreting and unifying graph neural networks with an optimization framework", "year": "2021" }, { "authors": "Chunya Zou; Andi Han; Lequan Lin; Junbin Gao", "journal": "MLP", "ref_id": "b46", "title": "A simple yet effective SVD-GCN for directed graphs", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 107.84, 539.21, 38.2, 11.52 ], "formula_id": "formula_0", "formula_text": "E P F (F)" }, { "formula_coordinates": [ 3, 101.91, 552.75, 325.4, 42.29 ], "formula_id": "formula_1", "formula_text": "E F r (F) Generalized framelet Dirichlet energy E total (F) Total generalized Dirichlet energy {λ i , u i } N i=1" }, { "formula_coordinates": [ 3, 70.72, 655.32, 469.28, 37.73 ], "formula_id": "formula_2", "formula_text": "G = (V, E, W) with nodes set V = {v 1 , v 2 , • • • , v N } of total N nodes, edge set E ⊆ V × V and graph adjacency matrix W, where W = [w i,j ] ∈ R N ×N and w i,j = 1 if (v i , v j ) ∈ E," }, { "formula_coordinates": [ 4, 71.61, 111.97, 469.9, 28.42 ], "formula_id": "formula_3", "formula_text": "L = I -D -1 2 (W + I)D -1 2 , where D = diag(d 1,1 , . . . , d N,N" }, { "formula_coordinates": [ 4, 179.22, 186.66, 190.6, 16.41 ], "formula_id": "formula_4", "formula_text": "x = [x 1 , ..., x c ] ∈ R c , ∥x∥ 2 = ( c i=1 x 2 i ) 1 2" }, { "formula_coordinates": [ 4, 246.72, 252.51, 294.55, 12.68 ], "formula_id": "formula_5", "formula_text": "F (k+1) = σ AF (k) W (k) ,(1)" }, { "formula_coordinates": [ 4, 170.7, 475, 164.62, 13.42 ], "formula_id": "formula_6", "formula_text": "H(G) = E v i ∈V [|{v j } j∈N i,y i =y i |/|N i |]" }, { "formula_coordinates": [ 5, 193.59, 201.33, 347.68, 13.18 ], "formula_id": "formula_7", "formula_text": "g 0 (ξ) 2 + g 1 (ξ) 2 + • • • + g R (ξ) 2 ≡ 1, ∀ξ ∈ [0, π].(2)" }, { "formula_coordinates": [ 5, 197.47, 344.71, 343.8, 24.46 ], "formula_id": "formula_8", "formula_text": "W 0,J = Ug 0 ( Λ 2 m+J ) • • • g 0 ( Λ 2 m )U ⊤ ,(3)" }, { "formula_coordinates": [ 5, 199.26, 371.04, 342.01, 24.46 ], "formula_id": "formula_9", "formula_text": "W r,0 = Ug r ( Λ 2 m )U ⊤ , for r = 1, ..., R,(4)" }, { "formula_coordinates": [ 5, 200.03, 397.37, 341.24, 24.47 ], "formula_id": "formula_10", "formula_text": "W r,ℓ = Ug r ( Λ 2 m+ℓ )g 0 ( Λ 2 m+ℓ-1 ) • • • g 0 ( Λ 2 m )U ⊤ ,(5)" }, { "formula_coordinates": [ 6, 198.24, 96.61, 343.03, 24.43 ], "formula_id": "formula_11", "formula_text": "W 0,J ≈ T 0 ( 1 2 m+J L) • • • T 0 ( 1 2 m L),(6)" }, { "formula_coordinates": [ 6, 200.8, 129.14, 340.47, 43.66 ], "formula_id": "formula_12", "formula_text": "W r,ℓ ≈ T r ( 1 2 m+ℓ L)T 0 ( 1 2 m+ℓ-1 L) • • • T 0 ( 1 2 m L),(7)" }, { "formula_coordinates": [ 6, 140.34, 259.41, 400.93, 37.71 ], "formula_id": "formula_14", "formula_text": "F (k+1) = σ W ⊤ diag(θ)WF (k) := σ   (r,ℓ)∈I W ⊤ r,ℓ diag(θ r,ℓ )W r,ℓ F (k) W (k)   ,(9)" }, { "formula_coordinates": [ 6, 167.38, 372.36, 373.89, 37.26 ], "formula_id": "formula_15", "formula_text": "F (k+1) = σ   W ⊤ 0,J AW 0,J F (k) W (k) 0,J + r,ℓ W ⊤ r,ℓ AW r,ℓ F (k) W (k) r,ℓ   .(10)" }, { "formula_coordinates": [ 7, 248.24, 109.39, 115.53, 12.06 ], "formula_id": "formula_16", "formula_text": "∆u := ∇ • (∥∇u∥ p-2 ∇u)" }, { "formula_coordinates": [ 7, 218.89, 455.67, 322.39, 25.56 ], "formula_id": "formula_17", "formula_text": "(∇ W F)([i, j]) := w i,j d j,j f j - w i,j d i,i f i ,(11)" }, { "formula_coordinates": [ 7, 249.13, 587.99, 287.32, 33.74 ], "formula_id": "formula_18", "formula_text": "F ∈ F V , ⟨∇F, g⟩ = ⟨F, -div(g)⟩. 
(12" }, { "formula_coordinates": [ 7, 536.45, 610.81, 4.82, 10.91 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 7, 238.55, 655.3, 302.72, 33.71 ], "formula_id": "formula_20", "formula_text": ")(i) = N j=1 w i,j d i,i (g[i, j] -g[j, i]).(13)" }, { "formula_coordinates": [ 8, 205.01, 159.37, 336.27, 24.43 ], "formula_id": "formula_21", "formula_text": "∆ p F := - 1 2 div(∥∇F∥ p-2 ∇F), for p ≥ 1.(14)" }, { "formula_coordinates": [ 8, 205.61, 235.04, 335.66, 34.28 ], "formula_id": "formula_22", "formula_text": "S p (F) = 1 2 (v i ,v j )∈E w i,j d j,j f j - w i,j d i,i f i p ,(15)" }, { "formula_coordinates": [ 8, 163.21, 455.29, 378.06, 78.32 ], "formula_id": "formula_23", "formula_text": "S p (F) = 1 2 (v i ,v j )∈E ∥∇ W F([i, j])∥ p = 1 2 v i ∈V      v j ∼v i ∥∇ W F([i, j])∥ p   1 p    p = 1 2 v i ∈V ∥∇ W F(v i )∥ p p ,(16)" }, { "formula_coordinates": [ 8, 384.47, 550, 166, 13.08 ], "formula_id": "formula_24", "formula_text": "∇ W F(v i ) = (∇ W F([i, j])) v j :(v i ,v j )∈E" }, { "formula_coordinates": [ 8, 230.16, 599.73, 311.11, 30.47 ], "formula_id": "formula_25", "formula_text": "S ϕ p (F) = 1 2 v i ∈V ϕ(∥∇ W F(v i )∥ p ).(17)" }, { "formula_coordinates": [ 9, 188.44, 149.49, 352.83, 20.9 ], "formula_id": "formula_26", "formula_text": "F = arg min F S ϕ p (F) + µ∥F -W ⊤ diag(θ)WF (k) ∥ 2 F ,(18)" }, { "formula_coordinates": [ 9, 137.89, 279.18, 336.23, 71.75 ], "formula_id": "formula_27", "formula_text": "M i,j = w i,j 2 ∥∇ W F([i, j])∥ p-2 • ϕ ′ (∥∇ W F(v i )∥ p ) ∥∇ W F(v i )∥ p-1 p + ϕ ′ (∥∇ W F(v j )∥ p ) ∥∇ W F(v j )∥ p-1 p , α ii =1/   v j ∼v i M i,j d i,i + 2µ   , β ii = 2µα ii ," }, { "formula_coordinates": [ 9, 193.43, 363.31, 314.46, 10.75 ], "formula_id": "formula_28", "formula_text": "M = [M i,j ], α = diag(α 11 , ..., α N N ) and β = diag(β 11 , ..., β N N )." }, { "formula_coordinates": [ 9, 199.78, 396.72, 341.49, 12.68 ], "formula_id": "formula_29", "formula_text": "F (k+1) = α (k) D -1/2 M (k) D -1/2 F (k) + β (k) Y,(19)" }, { "formula_coordinates": [ 9, 160.39, 659.63, 380.89, 29.32 ], "formula_id": "formula_30", "formula_text": "ζ ϕ i,j (F) = 1 2 ϕ ′ (∥∇ W F (k+1) (v i )∥ p ) ∥∇ W F (k+1) (v i )∥ p-1 p + ϕ ′ (∥∇ W F (k+1) (v j )∥ p ) ∥∇ W F (k+1) (v j )∥ p-1 p .(20)" }, { "formula_coordinates": [ 10, 220.49, 97.03, 315.96, 15.55 ], "formula_id": "formula_31", "formula_text": "M i,j = ζ ϕ i,j (F)w i,j ∥∇ W F([i, j])∥ p-2 . (21" }, { "formula_coordinates": [ 10, 536.45, 99.6, 4.82, 10.91 ], "formula_id": "formula_32", "formula_text": ")" }, { "formula_coordinates": [ 10, 281.7, 165.53, 259.57, 26.38 ], "formula_id": "formula_33", "formula_text": "ϕ ′ (ξ) ξ p-1 ≤ C,(22)" }, { "formula_coordinates": [ 10, 241.73, 202.8, 63.27, 15.55 ], "formula_id": "formula_34", "formula_text": "|ζ ϕ i,j (F)| ≤ C." }, { "formula_coordinates": [ 10, 72, 227.44, 438.89, 34.41 ], "formula_id": "formula_35", "formula_text": "ζ i,j (F) for ζ ϕ i,j (F) instead. Remark 2." }, { "formula_coordinates": [ 10, 72, 277.24, 470.12, 101.05 ], "formula_id": "formula_36", "formula_text": "ξ p-1 = 2ξ ξ p-1 = 2 ξ 2 ξ p , thus ζ i,j (F) is bounded for all 0 < p ≤ 2. Furthermore, when ϕ(ξ) = ξ, then ϕ ′ (ξ) ξ p-1 = ξ ξ p-1 , indicating ζ i,j (F) is bounded for all 0 < p ≤ 1. In addition, when ϕ(ξ) = ξ 2 + ϵ 2 -ϵ, we have ϕ ′ (ξ) ξ p-1 = (ξ 2 +ϵ 2 ) 1/2 ξ ξ p-1 ≤ C ξ ξ p-1 . Therefore ζ i,j (F) is bounded for all 0 < p ≤ 2. Lastly, when ϕ(ξ) = r 2 log(1 + ξ 2 r 2 ), the result of ϕ ′ (ξ) ξ p-1 yields r 2 1 1+ ξ 2 r 2 • 2 r 2 ξ ξ p-1 ≤ 2 ξ ξ p-1 . 
Hence ζ i,j (F) remain bounded for all 0 < p ≤ 2." }, { "formula_coordinates": [ 10, 250.66, 591, 110.68, 14.19 ], "formula_id": "formula_37", "formula_text": "L ϕ p (F (k+1) ) ≤ L ϕ p (F (k) )," }, { "formula_coordinates": [ 10, 72, 612.18, 464.45, 76.77 ], "formula_id": "formula_38", "formula_text": "L ϕ p (F) := S ϕ p (F) + µ∥F -Y∥ 2 F . Proof. First, write M (k) i,j = w i,j 2 ∇ W F (k) ([i, j]) p-2 • ϕ ′ (∥∇ W F (k) (v i )∥ p ) ∥∇ W F (k) (v i )∥ p-1 p + ϕ ′ (∥∇ W F (k) (v j )∥ p ) ∥∇ W F (k) (v j )∥ p-1 p . (23" }, { "formula_coordinates": [ 10, 536.45, 668.29, 4.82, 10.91 ], "formula_id": "formula_39", "formula_text": ")" }, { "formula_coordinates": [ 11, 144.23, 109.42, 307.02, 72.14 ], "formula_id": "formula_40", "formula_text": "∂L ϕ p (F) ∂F i,: F (k) =2µ(F (k) i,: -Y i,: ) + v j ∼v i M (k) ij 1 d ii w ij ∇ W F (k) ([j, i]) =2µ(F (k) i,: -Y i,: ) + v j ∼v i M (k) ij 1 d ii F (k) i,:" }, { "formula_coordinates": [ 11, 204.88, 151.81, 336.4, 113.31 ], "formula_id": "formula_41", "formula_text": "1 d ii d jj F (k) j,: =(2µ + v j ∼v i M (k) ij /d ii )F (k) i,: -2µY i,: - v j ∼v i M (k) ij d ii d jj F (k) j,: = 1 α (k) ii F (k) i,: - 1 α (k) ii   β (k) ii Y i,: + α (k) ii v j ∼v i M (k) ij d ii d jj F (k) j,:  (24)" }, { "formula_coordinates": [ 11, 236.73, 303.81, 299.72, 35.94 ], "formula_id": "formula_42", "formula_text": "∂L ϕ p (F) ∂F i,: F (k) = F (k) i,: -F (k+1) i,: α (k) ii . (25" }, { "formula_coordinates": [ 11, 536.45, 315.88, 4.82, 10.91 ], "formula_id": "formula_43", "formula_text": ")" }, { "formula_coordinates": [ 11, 243.76, 389.74, 297.51, 33.81 ], "formula_id": "formula_44", "formula_text": "∂L ϕ p (F ( * ) i,: ) := ∂L ϕ p (F) ∂F i,: F ( * )(26)" }, { "formula_coordinates": [ 11, 184.51, 457.43, 356.76, 178.87 ], "formula_id": "formula_45", "formula_text": "N (k) i,j = W i,j W i,j D i,i F (k) i,: - W i,j D j,j F (k) j,: p-2 N ′(k) i,j = W i,j W i,j D i,i (F (k) i,: + v) - W i,j D j,j F (k) j,: p-2 M (k) i,j = N (k) ij ζ i,j (F (k) ), M ′(k) i,j = N ′(k) ij ζ i,j (F (k) + v) α ′(k) ii = 1/   v j ∼v i M ′(k) i,j D i,i + 2µ   , β ′(k) ii = 2µα ′(k) ii F ′(k+1) i,: = α ′(k) i,i v j ∼v i M ′(k) i,j D i,i D j,j F (k) j,: + β ′(k) Y i,: ,(27)" }, { "formula_coordinates": [ 11, 240.57, 649.84, 267, 12.18 ], "formula_id": "formula_46", "formula_text": "F (k) + v means that v only applies to the i-th of F (k) 2 ." 
}, { "formula_coordinates": [ 12, 196.21, 97.22, 207.73, 30.16 ], "formula_id": "formula_47", "formula_text": "∂L ϕ p (F (k) i,: + v) = 1 α ′(k) i,i (F (k) i,: + v) -F ′(k+1) i,:" }, { "formula_coordinates": [ 12, 104.79, 166.46, 280.72, 30.16 ], "formula_id": "formula_49", "formula_text": "∂L ϕ p (F (k) i,: + v) -∂L ϕ p (F (k) i,: ) = 1 α ′(k) i,i (F (k) i,: + v) -F ′(k+1) i,:" }, { "formula_coordinates": [ 12, 88.42, 170.1, 409.74, 64.93 ], "formula_id": "formula_50", "formula_text": "α (k) i,i F (k) i,: -F (k+1) i,: ≤ 1 α ′(k) i,i ∥v∥ + 1 α ′(k) i,i F (k) i,: -F ′(k+1) i,:" }, { "formula_coordinates": [ 12, 88.42, 204.87, 428.6, 196.36 ], "formula_id": "formula_51", "formula_text": "1 α (k) i,i F (k) i,: -F (k+1) i,: = 1 α ′(k) i,i ∥v∥ + 1 α ′(k) i,i - 1 α (k) i,i F (k) i,: - 1 α ′(k) i,i F ′(k+1) i,: + 1 α (k) i,i F (k+1) i,: = 1 α ′(k) i,i ∥v∥ + v j ∼v i M ′(k) i,j D i,i - M (k) i,j D i,i F (k) i,: - v j ∼v i M ′(k) i,j D i,i D j,j F ′(k) j,: + M (k) i,j D i,i D j,j F (k) j,: =   v j ∼v i M (k) i,j D i,i + 2µ   ∥v∥ + v j ∼v i M ′(k) i,j -M (k) i,j D i,i ∥v∥ + v j ∼v i M ′(k) i,j -M (k) i,j D i,i F (k) i,: - v j ∼v i M ′(k) i,j -M (k) i,j D i,i D j,j F (k) j,: ." }, { "formula_coordinates": [ 12, 127, 427.69, 26.95, 14.82 ], "formula_id": "formula_52", "formula_text": "1 2 p-2" }, { "formula_coordinates": [ 12, 204.28, 487.16, 203.44, 16 ], "formula_id": "formula_53", "formula_text": "|M ′(k) i,j -M (k) i,j | ≤ C|N ′(k) i,j -N (k) i,j | ≤ C ′ ∥v∥." }, { "formula_coordinates": [ 12, 140.88, 536.93, 400.39, 37.28 ], "formula_id": "formula_54", "formula_text": "∂L ϕ p (F (k) i,: + v) -∂L ϕ p (F (k) i,: ) ≤   v j ∼v i M (k) i,j D i,i + 2µ + o(G, v, X, p)   ∥v∥,(29)" }, { "formula_coordinates": [ 12, 88.94, 613.14, 429.01, 61.85 ], "formula_id": "formula_55", "formula_text": "v j ∼v i M ′(k) i,j -M (k) i,j D i,i ∥v∥ + v j ∼v i M ′(k) i,j -M (k) i,j D i,i F (k) i,: - v j ∼v i M ′(k) i,j -M (k) i,j D i,i D j,j F (k) j,: . Let o = o(G, v, X, p), γ = {γ 1 , ...γ N } ⊤ ," }, { "formula_coordinates": [ 13, 100.21, 94.07, 342.67, 59.28 ], "formula_id": "formula_56", "formula_text": "L ϕ p (F (k) i,: + γ i η i,: ) = L ϕ p (F (k) i,: ) + γ i 1 0 ⟨∂L ϕ p (F (k) i,: + ϵγ i η i,: ), η i,: ⟩dϵ ∀i = L ϕ p (F (k) i,: ) + ⟨∂L ϕ p (F (k) i,: ), η i,: ⟩ + γ i 1 0 ∂L ϕ p F (k) i,: + ϵγ i η i,: -∂L ϕ p F (k) i,:" }, { "formula_coordinates": [ 13, 103.24, 155.49, 408.55, 65.44 ], "formula_id": "formula_57", "formula_text": "≤ L ϕ p (F (k) i,: ) + ⟨∂L ϕ p (F (k) i,: ), η i,: ⟩γ i + γ i 1 0 ∂L ϕ p F (k) i,: + ϵγ i η i,: -∂L ϕ p F (k) i,: η i,: dϵ ≤ L ϕ p (F (k) i,: ) + ⟨∂L ϕ p (F (k) i,: ), η i,: ⟩γ i + 1 α (k) i,i + o γ 2 i ∥η i,: ∥ 2" }, { "formula_coordinates": [ 13, 124.71, 244.78, 41.8, 14.38 ], "formula_id": "formula_58", "formula_text": "γ i = α (k)" }, { "formula_coordinates": [ 13, 123.64, 273.55, 364.23, 79.33 ], "formula_id": "formula_60", "formula_text": "L ϕ p F (k) i,: -α (k) ii ∂L ϕ p (F (k) i,: ) ≤L ϕ p (F (k) i,: ) -α (k) ii ∂L ϕ p (F (k) i,: ), ∂L ϕ p (F (k) i,: ) + 1 2 1 α (k) i,i + o α 2(k) i,i ∥∂L ϕ p (F (k) i,: )∥ 2 =L ϕ p (F(k)" }, { "formula_coordinates": [ 13, 187.79, 333.81, 348.66, 24.43 ], "formula_id": "formula_61", "formula_text": "1 2 α (k) i,i 1 -α (k) i,i o ∥∂L ϕ p (F (k) i,: )∥ 2 . (30" }, { "formula_coordinates": [ 13, 536.45, 340.46, 4.82, 10.91 ], "formula_id": "formula_62", "formula_text": ")" }, { "formula_coordinates": [ 13, 213.83, 388.03, 184.35, 35.68 ], "formula_id": "formula_63", "formula_text": "1 -α (k) i,i o = 1 - o v j ∼v i M (k) i,j D i,i + 2µ > 0." 
}, { "formula_coordinates": [ 13, 178.32, 458.81, 47.22, 16 ], "formula_id": "formula_64", "formula_text": "L ϕ p (F(k+1) i,:" }, { "formula_coordinates": [ 13, 233.32, 458.81, 200.36, 16 ], "formula_id": "formula_65", "formula_text": ":= L ϕ p F (k) i,: -α (k) ii ∂L ϕ p (F (k) i,: ) ≤ L ϕ p (F (k) i,: )." }, { "formula_coordinates": [ 14, 163.91, 99.63, 377.36, 12.67 ], "formula_id": "formula_66", "formula_text": "F (k+τ ) = F (k) + τ σ -F (k) Ω (k) + AF (k) W (k) -F (0) W (k) ,(31)" }, { "formula_coordinates": [ 14, 164.78, 196.02, 376.49, 33.71 ], "formula_id": "formula_67", "formula_text": "E(F) = 1 2 N i=1 ⟨f i , Ωf i ⟩ - 1 2 N i,j=1 A i,j ⟨f i , Wf j ⟩ + φ (0) (F, F (0) ),(32)" }, { "formula_coordinates": [ 14, 72, 270.22, 336.82, 20.97 ], "formula_id": "formula_68", "formula_text": "setting p = 2 in (15) that is, E(F) = 1 2 (v i ,v j )∈E w i,j d j,j f j - w i,j d i,i f i 2" }, { "formula_coordinates": [ 14, 72, 294.39, 133.63, 15.96 ], "formula_id": "formula_69", "formula_text": "φ (0) (F, F (0) ) = i ⟨f i , Wf(0)" }, { "formula_coordinates": [ 14, 139.45, 321.78, 401.82, 24.43 ], "formula_id": "formula_70", "formula_text": "E(F) = vec(F), 1 2 (Ω ⊗ I N -W ⊗ A)vec(F) + ( W ⊗ I N )vec(F (0) ) .(33)" }, { "formula_coordinates": [ 14, 81.54, 464, 459.73, 46.44 ], "formula_id": "formula_71", "formula_text": "E P F (F (k+1) ) = vec(F (k+1) ), 1 2 I c ⊗ I N -I c ⊗ α (k+1) D -1/2 M (k+1) D -1/2 vec(F (k+1) ) + (I c ⊗ 2µα (k+1) )vec(F (0) ) ,(34)" }, { "formula_coordinates": [ 15, 83.08, 100.24, 458.19, 92.8 ], "formula_id": "formula_72", "formula_text": "E P F (F (k+1) ) = vec(F (k+1) ), 1 2 I c ⊗ I N -I c ⊗ α (k+1) D -1/2 M (k+1) D -1/2 vec(F (k+1) ) + (I c ⊗ 2µα (k+1) )vec(F (0) ) = vec(F (k+1) ), vec(F (k+1) ) - 1 2 vec(F (k+1) ), I c ⊗ α (k+1) D -1/2 M (k+1) D -1/2 vec(F (k+1) ) -(I c ⊗ 4µα (k+1) )vec(F (0) ) .(35)" }, { "formula_coordinates": [ 15, 121.98, 249.06, 414.47, 13.13 ], "formula_id": "formula_73", "formula_text": "I c ⊗ α (k+1) D -1/2 M (k+1) D -1/2 vec(F (k+1) ) -(I c ⊗ 2µα (k+1) )vec(F (0) ) < 0. (36" }, { "formula_coordinates": [ 15, 536.45, 250.83, 4.82, 10.91 ], "formula_id": "formula_74", "formula_text": ")" }, { "formula_coordinates": [ 15, 72, 317.06, 508.47, 102.63 ], "formula_id": "formula_75", "formula_text": "I c ⊗ α (1) D -1/2 M (1) D -1/2 vec(F (1) ) -(I c ⊗ 2µα (1) )vec(F (0) ) =I c ⊗ α (1) D -1/2 M (1) D -1/2 vec α (0) D -1/2 M (0) D -1/2 F (0) + 2µα (0) F (0) -(I c ⊗ 2µα (1) )vec(F (0) ), =I c ⊗ α (1) D -1/2 M (1) D -1/2 I c ⊗ α (0) D -1/2 M (0) D -1/2 + 2µα (0) vec(F (0) ) -(I c ⊗ 2µα (1) )vec(F (0) ), =I c ⊗ 1 s=0 α (s) D -1/2 M (s) D -1/2 + α (1) D -1/2 M (1) D -1/2 2µα (0) -2µα (1) vec(F (0) ). 
(37" }, { "formula_coordinates": [ 15, 536.45, 397.01, 4.82, 10.91 ], "formula_id": "formula_76", "formula_text": ")" }, { "formula_coordinates": [ 15, 194.32, 433.27, 345.69, 15.24 ], "formula_id": "formula_77", "formula_text": "1 s=0 α (s) D -1/2 M (s) D -1/2 + α (1) D -1/2 M (1) D -1/2 2µα (0) -2µα (1) can" }, { "formula_coordinates": [ 16, 86.97, 94.59, 454.3, 292.06 ], "formula_id": "formula_78", "formula_text": "1 s=0 α (s) i,i d -1/2 i,i M (s) i,j d -1/2 j,j + α (1) i,i d -1/2 i,i M (1) i,j d -1/2 j,j 2µα (0) i,i -2µα (1) i,i = 1 s=0     1/   v j ∼v i M (s) i,j d i,i + 2µ     ∇ W F (s) ([i, j]) p-2 d i,i d j,j   +     1/   v j ∼v i M (1) i,j d i,i + 2µ     ∇ W F (1) ([i, j]) p-2 d i,i d j,j   2µ/   v j ∼v i M (0) i,j d i,i + 2µ       -   2µ/   v j ∼v i M (1) i,j d i,i + 2µ     , =     ∇ W F (0) ([i, j]) p-2 v j ∼v i M (0) i,j d i,i + 2µ • d i,i d j,j         ∇ W F (1) ([i, j]) p-2 v j ∼v i M (1) i,j d i,i + 2µ • d i,i d j,j     +     ∇ W F (1) ([i, j]) p-2 v j ∼v i M (1) i,j d i,i + 2µ • d i,i d j,j • 2µ/   v j ∼v i M (0) i,j d i,i + 2µ       -   2µ/   v j ∼v i M (1) i,j d i,i + 2µ     .(38)" }, { "formula_coordinates": [ 16, 88.84, 407.34, 101.3, 34.82 ], "formula_id": "formula_79", "formula_text": "∥∇W F (1) ([i,j])∥ p-2 v j ∼v i M (1) i,j d i,i +2µ • √ d i,i d j,j" }, { "formula_coordinates": [ 16, 72, 407.74, 476.32, 61.25 ], "formula_id": "formula_80", "formula_text": "v j ∼v i M (0) i,j d i,i + 2µ and 2µ/ v j ∼v i M (1) i,j d i,i + 2µ ≈ 1." }, { "formula_coordinates": [ 16, 72, 510.32, 502.59, 33.98 ], "formula_id": "formula_81", "formula_text": "I c ⊗ k+1 s=0 α (s) D -1/2 M (s) D -1/2 + k+1 s=0 k+1 l=k-s α (l) D -1/2 M (l) D -1/2 2µα (l-1) -2µα (k+1) vec(F (0) )." }, { "formula_coordinates": [ 16, 152.33, 566.11, 418.17, 15.24 ], "formula_id": "formula_82", "formula_text": "k+1 s=0 α (s) D -1/2 M (s) D -1/2 + k+1 s=0 k+1 l=k-s α (l) D -1/2 M (l) D -1/2 2µα (l-1) -2µα (k+1)" }, { "formula_coordinates": [ 16, 74.78, 607.04, 466.49, 38.4 ], "formula_id": "formula_83", "formula_text": "E P F (F (k+1) )≈ vec(F (k+1) ), vec(F (k+1) ) + vec(F (k+1) ), 1 2 I c ⊗ 4µα (k+1) + I N vec(F (0) ) .(39)" }, { "formula_coordinates": [ 17, 208.05, 369.18, 333.22, 33.71 ], "formula_id": "formula_84", "formula_text": "E(F) = 1 2 N i=1 N j=1 w i,j d j,j f j - w i,j d i,i f i 2 ,(40)" }, { "formula_coordinates": [ 17, 72, 580.62, 468, 30.57 ], "formula_id": "formula_85", "formula_text": "Definition 8 ([10]). Ḟ(k) = GNN θ (F (k) , k) is Low-Frequency-Dominant (LFD) if E F (k) /∥F (k) ∥ - → 0 as k - → ∞, and is High-Frequency-Dominant (HFD) if E F (k) /∥F (k) ∥ - → ρ L /2 as t - → ∞." }, { "formula_coordinates": [ 17, 72, 633.57, 469.66, 27.95 ], "formula_id": "formula_86", "formula_text": "k j l - → ∞ and F ∞ such that F(k j l )/∥F(k j l )∥ -→ F ∞ and LF ∞ = 0 (resp. LF ∞ = ρ L F ∞ )." 
}, { "formula_coordinates": [ 18, 134.39, 186.88, 338.05, 24.43 ], "formula_id": "formula_87", "formula_text": "E F r (F) = 1 2 Tr (W r,ℓ F) ⊤ W r,ℓ FΩ r,ℓ - 1 2 Tr (W r,ℓ F) ⊤ diag(θ) r,ℓ W r,ℓ F W" }, { "formula_coordinates": [ 18, 112.78, 245.52, 428.49, 59.68 ], "formula_id": "formula_88", "formula_text": "E F r (F) = E F r 0,J (F) + r,ℓ E F r r,ℓ (F) = 1 2 (r,ℓ)∈I vec(F), Ω r,ℓ ⊗ W ⊤ r,ℓ W r,ℓ -W ⊗ W ⊤ r,ℓ diag(θ) r,ℓ W r,ℓ vec(F) ,(41)" }, { "formula_coordinates": [ 18, 72, 383.07, 94.93, 10.91 ], "formula_id": "formula_89", "formula_text": "Proposition 3 ([19]" }, { "formula_coordinates": [ 19, 72.66, 238.02, 466.68, 24.43 ], "formula_id": "formula_90", "formula_text": "E P F (F (k+1) ) ≈ vec(F (k+1) ), vec(F (k+1) ) + vec(F (k+1) ), 1 2 I c ⊗ 4µα (k+1) + I N vec(F (0) ) ." }, { "formula_coordinates": [ 19, 79.34, 310.79, 461.93, 65.45 ], "formula_id": "formula_91", "formula_text": "E F r (F (k+1) ) = 1 2 (r,ℓ)∈I vec(F (k+1) ), Ω r,ℓ ⊗ W ⊤ r,ℓ W r,ℓ -W ⊗ W ⊤ r,ℓ diag(θ) r,ℓ W r,ℓ vec(F (k+1) ) , = 1 2 (r,ℓ)∈I vec(F (k+1) ), W ⊗ W ⊤ r,ℓ W r,ℓ -W ⊤ r,ℓ diag(θ) r,ℓ W r,ℓ vec(F (k+1) ) ,(42)" }, { "formula_coordinates": [ 19, 127.72, 468.79, 408.73, 59.71 ], "formula_id": "formula_92", "formula_text": "1 2 (r,ℓ)∈I vec(F (k+1) ), W ⊗ W ⊤ r,ℓ W r,ℓ -W ⊤ r,ℓ diag(θ) r,ℓ W r,ℓ vec(F (k+1) ) ≈ 1 2 vec(F (k+1) ), W ⊗ I N -W ⊤ 1,1 diag(θ) 1,1 W 1,1 vec(F (k+1) ) . (43" }, { "formula_coordinates": [ 19, 536.45, 510.72, 4.82, 10.91 ], "formula_id": "formula_93", "formula_text": ")" }, { "formula_coordinates": [ 19, 72, 606.41, 499.56, 45.03 ], "formula_id": "formula_94", "formula_text": "E (total) (F (k+1) ) = vec(F (k+1) ), vec(F (k+1) ) (44) + 1 2 vec(F (k+1) ), W ⊗ I N -W ⊤ 1,1 diag(θ) 1,1 W 1,1 vec(F (k+1) ) + I c ⊗ 4µα (k+1) + I N vec(F (0) ) ." }, { "formula_coordinates": [ 20, 142.58, 641.86, 326.84, 14.19 ], "formula_id": "formula_95", "formula_text": "F (k+1) =σ W ⊤ 0,1 diag(1 N )W 0,1 F (k) W + W ⊤ 1,1 diag(θ1 N )W 1,1 F (k) W ," }, { "formula_coordinates": [ 22, 205.01, 94.49, 336.27, 24.43 ], "formula_id": "formula_96", "formula_text": "∆ p F := - 1 2 div(∥∇F∥ p-2 ∇F), for p ≥ 1.(45)" }, { "formula_coordinates": [ 22, 152.18, 165.49, 389.09, 29.81 ], "formula_id": "formula_97", "formula_text": "(∆ p F)(i) = v j ∼v i w i,j d i,i ∥∇ W F([i, j])∥ p-2 w i,j d i,i f i - w i,j d j,j f j .(46)" }, { "formula_coordinates": [ 22, 166.84, 242.57, 374.43, 43.52 ], "formula_id": "formula_98", "formula_text": "F (k+1) -F (k) τ = α (k) D -1/2 M (k) D -1/2 F (k) -F (k) + β (k) Y, = α (k) D -1/2 M (k) D -1/2 -I F (k) + β (k) Y.(47)" }, { "formula_coordinates": [ 22, 78.52, 376.82, 478.11, 30.25 ], "formula_id": "formula_99", "formula_text": "α (k) D -1/2 M (k) D -1/2 -I F (k) + β (k) Y = α div(∥∇F (k) ∥ p-2 ∇F (k) ) + 2µα (k) DF (k) + 2µα (k) F (0) .(48)" }, { "formula_coordinates": [ 22, 116.23, 471.01, 425.04, 198.02 ], "formula_id": "formula_100", "formula_text": "v j ∼v i α (k) i,i d -1/2 ii M (k) i,j d -1/2 jj f (k) (j) -f (k) (i) + β (k) i,i Y (i) =α (k) i,i   v j ∼v i M ij √ d ii d jj f (k) (j) - 1 α (k) i,i f (k) (i)   + 2µα (k) i,i f (0) (i) =α (k) i,i   v j ∼v i M ij √ d ii d jj f (k) (j) - v j ∼v i M ij d ii + 2µ f (k) (i)   + 2µα (k) i,i f (0) (i) =α (k) i,i   v j ∼v i w i,j d i,i ∥∇ W F([i, j])∥ p-2 w i,j d j,j f (k) j - w i,j d i,i f (k) i + 2µ v j ∼v i f (k) i   + 2µα (k) i,i f (0) (i) =α (k) i,i ((∆ p F)(i)) + 2µα (k) i,i d ii f (k) i + 2µα (k) i,i f (0) i (49)" } ]
Revisiting Generalized p-Laplacian Regularized Framelet GCNs: Convergence, Energy Dynamic and Training with Non-Linear Diffusion
This paper presents a comprehensive theoretical analysis of the graph p-Laplacian regularized framelet network (pL-UFG) to establish a solid understanding of its properties. We conduct a convergence analysis on pL-UFG, addressing the gap in the understanding of its asymptotic behaviors. Further by investigating the generalized Dirichlet energy of pL-UFG, we demonstrate that the Dirichlet energy remains non-zero throughout convergence, ensuring the avoidance of over-smoothing issues. Additionally, we elucidate the energy dynamic perspective, highlighting the synergistic relationship between the implicit layer in pL-UFG and graph framelets. This synergy enhances the model's adaptability to both homophilic and heterophilic data. Notably, we reveal that pL-UFG can be interpreted as a generalized non-linear diffusion process, thereby bridging the gap between pL-UFG and differential equations on the graph. Importantly, these multifaceted analyses lead to unified conclusions that offer novel insights for understanding and implementing pL-UFG, as well as other graph neural network (GNN) models. Finally, based on our dynamic analysis, we propose two novel pL-UFG models with manually controlled energy dynamics. We demonstrate empirically and theoretically that our proposed models not only inherit the advantages of pL-UFG but also significantly reduce computational costs for training on large-scale graph datasets.
Dai Shi; Zhiqi Shao; Yi Guo; Qibin Zhao; Junbin Gao
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the working flow of pL-UFG-LFD and pL-UFG-HFD under the Haar type frame with ℓ = 1. The input graph features are first decomposed onto two frequency domains and further filtered by the diagonal matrix θ 0,1 and θ 1,1 . With controlled model dynamics from Proposition 3 i.e., θ 0,1 = 1 N and θ 1,1 = θθ 0,1 , framelet can induce both LFD and HFD dynamics resulting as different level of Dirichlet energy of the produced node features. It is straightforward to check that when framelet is LFD, the level of node Dirichlet energy is less than its HFD counterpart.The generated node features from graph framelet is then inputted into p-Laplacian (with graph gradient as one component) based implicit layer. Based on our conclusions in Lemma 3 and Remark 7 with small/large quantity of p and large/small quantity of µ, the model's (framelet) dynamics are further strengthened resulting even smaller/higher energy levels.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "(a) Accuracy on Cora via different combinations of µ and p. (b) Accuracy on Wisconsin via different combinations of µ and p.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Performance of pL-UFG with various combinations of the values of µ and p.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Average Accuracy(%) with Changing µ and p under (manually fixed) LFD/HFD framelet models. All framelet model in Fig. 3a are LFD dynamic with θ 0,1 = I N , θ 1,1 = θ1 N , θ = 0.2. On Fig. 3b, all framelet models are HFD with θ 0,1 = I N , θ 1,1 = θ1 N , θ = 2.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Statistics of the datasets, H(G) represent the level of homophily of overall benchmark datasets.", "figure_data": "Datasets Class Feature NodeEdge H(G)Cora7143327085278 0.825CiteSeer6370333274552 0.717PubMed350019717 44324 0.792Computers 1076713381 245778 0.802Photo87457487 119043 0.849CS156805 18333 81894 0.832Physics58415 34493 247962 0.915Arxiv23128 169343 1166243 0.681Chameleon 52325227731371 0.247Squirrel520895201 198353 0.216Actor5932760026659 0.221Wisconsin52514991703 0.150Texas517031832790.097Cornell517031832770.386", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Test accuracy (%) on homophilic graphs, the top two learning accuracies are highlighted in red and blue. The term OOM means out of memory. 
04±1.11 68.99±0.48 82.03±0.24 71.89±5.36 86.11±1.35 93.50±0.24 94.56±0.11 55.50±0.78 GCN 84.72±0.38 75.04±1.46 83.19±0.13 78.82±1.87 90.00±1.49 93.00±0.12 95.55±0.09 70.07±0.79 SGC 83.79±0.37 73.52±0.89 75.92±0.26 77.56±0.88 86.44±0.35 92.18±0.22 94.99±0.13 71.01±0.30 GAT 84.37±1.13 74.80±1.00 83.92±0.28 78.68±2.09 89.63±1.75 92.57 ±0.14 95.13±0.15 OOM JKNet 83.69±0.71 74.49±0.74 82.59±0.54 69.32±3.94 86.12±1.12 91.11±0.22 94.45±0.33 OOM APPNP 83.69±0.71 75.84±0.64 80.42±0.29 73.73±2.49 87.03±0.95 91.52±0.14 94.71±0.11 OOM GPRGNN 83.79±0.93 75.94±0.65 82.32±0.25 74.26±2.94 88.69±1.32 91.89 ±0.08 94.85±0.23 OOM UFG 80.64±0.74 73.30±0.19 81.52±0.80 66.39±6.09 86.60±4.69 95.27±0.04 95.77±0.04 71.08±0.49 PGNN 1.0 84.21±0.91 75.38±0.82 84.34±0.33 81.22±2.62 87.64±5.05 94.88±0.12 96.15±0.12 OOM PGNN 1.5 84.42±0.71 75.44±0.98 84.48±0.21 82.68±1.15 91.83±0.77 94.13±0.08 96.14±0.08 OOM PGNN 2.0 84.74±0.67 75.62±1.07 84.25 ±0.35 83.40±0.68 91.71±0.93 94.28±0.10 96.03±0.07 OOM PGNN 2.5 84.48±0.77 75.22±0.73 83.94±0.47 82.91±1.34 91.41±0.66 93.40±0.07 95.75±0.05 OOM pL-UFG1 1.0 84.54±0.62 75.88±0.60 85.56±0.18 82.07±2.78 85.57±19.92 95.03±0.22 96.19±0.06 70.28±9.13 pL-UFG1 1.5 84.96±0.38 76.04±0.85 85.59±0.18 85.04±1.06 92.92±0.37 95.03±0.22 96.27±0.06 71.25±8.37 pL-UFG1 2.0 85.20±0.42 76.12±0.82 85.59±0.17 85.26±1.15 92.65±0.65 94.77±0.27 96.04±0.07 OOM pL-UFG1 2.5 85.30±0.60 76.11±0.82 85.54±0.18 85.18±0.88 91.49±1.29 94.86±0.14 95.96±0.11 OOM pL-UFG2 1.0 84.42±0.32 74.79± 0.62 85.45±0.18 84.88±0.84 85.30±19.50 95.03±0.19 96.06±0.11 71.01±7.28 pL-UFG2 1.5 85.60±0.36 75.61±0.60 85.59±0.18 84.55±1.57 93.00±0.61 95.03±0.19 96.14±0.09 71.21±6.19 pL-UFG2 2.0 85.20±0.42 76.12±0.82 85.59±0.17 85.27±1.15 92.50±0.40 94.77±0.27 96.05±0.07 OOM pL-UFG-LFD 85.64±1.36 77.39 * ±1.59 85.08±1.33 85.36 * ±1.39 93.17 * ±1.30 96.13 * ±1.08 96.49 * ±1.04 71.96±1.25", "figure_data": "MethodCoraCiteSeerPubMed ComputersPhotosCSPhysicsArxivMLP66.", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Test accuracy (%) on heterophilic graphs. the top two learning accuracies are highlighted in red and blue.", "figure_data": "MethodChameleon SquirrelActorWisconsinTexasCornell", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[2,10]", "Explanation": "The cited work on gradient flow is adopted in the citing paper to enhance the learning power of GNNs by considering the propagation of GNNs via different aspects."}, {"Category": "Extension or Continuation", "Citation": "[24]", "Explanation": "The cited work on the computational issue of GNNs on different types of graphs is extended in the citing paper to address the need for a GNN to produce smoother node features for homophily graph and more distinguishable node features for heterophilic graph."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work proposes a new energy-based regularizer for GNN optimization, which the citing paper adopts in their research to improve the model design and performance."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work further builds upon the p-Laplacian based regularizer by proposing pL-UFG to assign the regularization to multiscale GNNs. The citing paper incorporates this method in their research to enhance the flexibility and performance of the model design."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work [25] defines the generalized Dirichlet energy for node features, which the citing paper adopts in their research to calculate the total generalized Dirichlet energy for node features."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work on spectral graph theory provides the necessary theoretical framework and methods for analyzing the eigenvalues of the graph Laplacian matrix in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work on GCN provides the basis for the node feature propagation rule used in the citing paper to model the information flow in a graph."}, {"Category": "Methodological Basis", "Citation": "[44]", "Explanation": "The cited work by [44] identifies the limitations of GNNs in handling heterophilic graphs, which the citing paper uses to guide the design of GNNs that can better handle these types of graphs."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work introduces the concept of p-Laplacian based implicit layer and graph framelets, which the citing paper adopts in their research to explore pL-UFG and analyse graph signals."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work introduced the first wavelet frame with a lifting scheme for graph analysis, which the citing paper adopts as a basis for their research on graph analysis."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work proposed a framework for wavelet transformation on graphs using Chebyshev polynomials for approximations, which the citing paper builds upon in their study of graph analysis."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work developed tight framelets on graphs by approximating smooth functions with filtered Chebyshev polynomials, which the citing paper utilizes in their research on graph analysis."}, {"Category": "Methodological Basis", "Citation": "[40]", "Explanation": "The cited work applied framelets to graph learning tasks with outstanding results, which the citing paper references to support the effectiveness of framelets in graph analysis."}, {"Category": "Methodological Basis", "Citation": "[42]", "Explanation": 
"The cited work studied the application of framelets to graph noise reduction, which the citing paper cites to highlight the re-aggregation capabilities of framelets in graph analysis."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work combined framelets with singular value decomposition (SVD) to make them applicable to directed graphs, which the citing paper references to demonstrate the versatility and stability of framelet families in graph analysis."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work proposed a simple method for building more versatile and stable framelet families, known as Quasi-Framelets, which the citing paper builds upon in their study of graph analysis."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work provides a proof of the identity condition in the context of perfect signal reconstruction, which the citing paper adopts in its research on scaling function sets."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work provides the basis for the signal decomposition and reconstruction process used in the citing paper, as it proves the relationship between the stacked matrix and the identity matrix."}, {"Category": "Methodological Basis", "Citation": "[40,41,38,26]", "Explanation": "The cited works provide the framework and techniques for the channel/feature mixing in the spectral framelet models, which the citing paper adopts in its research."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work provides the framework and techniques for the spatial framelet models, which the citing paper adopts in its research to perform framelet decomposition and feature mixing."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The p-Laplace operator is defined in the cited work and is used in the model formulation of the pL-UFG model in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work introduces the concept of a p-Laplace operator for discrete domains, which the citing paper adopts in the context of graph regularization in GNN training."}, {"Category": "Extension or Continuation", "Citation": "[25]", "Explanation": "The cited work builds upon the concept of a p-Laplace regularizer for GNN training by integrating it with graph framelet to develop a generalized p-Laplace regularized framelet model."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work defines the graph gradient operator, which the citing paper adopts in their research to compute the gradient of vector-valued functions on a graph."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work provides a definition of the graph divergence operator, which the citing paper adopts in the formulation of the graph p-Laplacian operator and the corresponding p-Dirichlet form as a regularizer in the model development."}, {"Category": "Methodological Basis", "Citation": "(14)", "Explanation": "The cited work introduces the p-Laplacian operator, which the citing paper adopts in the definition of the graph p-Laplacian operator."}, {"Category": "Data Source", "Citation": "(15)", "Explanation": "The cited work provides the definition of the p-Dirichlet form, which the citing paper utilizes in the measurement of node features in the GNN propagation process."}, {"Category": "Supporting Evidence", 
"Citation": "[14,3]", "Explanation": "The cited work on graph Dirichlet energy has been shown to be a commonly applied measure of variation between node features via GNNs, providing foundational data and theories for the citing paper to build upon in its research on the effects of GNNs on node features."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work generalizes the p-Dirichlet form in (15) to a more complex form in (16), which the citing paper adopts in their research to calculate the node gradient vector for each node in the graph."}, {"Category": "Methodological Basis", "Citation": "(25)", "Explanation": "The cited work provides a range of positive convex functions and p values that the citing paper can use to generalize the regularizer S p (F), which is a key methodological element in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work provides the iterative algorithm used in the citing paper to solve the optimization problem defined in (18) for the energy regularizer in (17), which is a key methodological element in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(19)", "Explanation": "The cited work provides the iterative algorithm that the citing paper adopts in its research to ensure convergence in the pL-UFG model."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work provides the necessary information on the matrix L 2 norm and its properties, which the citing paper uses to establish the relationship between the matrix L 2 norm and the exponential function in the context of the research conducted."}, {"Category": "Methodological Basis", "Citation": "[21,35,4,10]", "Explanation": "The cited works provide a known issue of the Dirichlet energy of node features approaching zero in GNN models, which the citing paper addresses by introducing a new method in pL-UFG to resolve the over-smoothing problem."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work provides the definition of the generalized Dirichlet energy, which the citing paper uses in the analysis of the energy behavior of the p-Laplacian based implicit layer."}, {"Category": "Methodological Basis", "Citation": "(31)", "Explanation": "The cited work provides a new class of energy (E(F)) that the citing paper adopts in the generation of node features (f i ) in the context of graph neural networks."}, {"Category": "Supporting Evidence", "Citation": "[25]", "Explanation": "The cited work [25] provides empirical observations that support the theoretical justifications presented in the citing paper regarding the use of large \u00b5 or small p values in GNN models for fitting heterophily datasets."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work introduces the p-Laplacian based implicit propagation method, which the citing paper adopts in the analysis of the energy dynamic of framelet convolution."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work introduces the concept of Laplacian smoothing, which the citing paper adopts in the process of training a graph neural network."}, {"Category": "Methodological Basis", "Citation": "[19,10]", "Explanation": "The cited works show the limitations of classic GCN in the context of graph neural networks, which the citing paper uses to inform the design of a new method 
for training such networks."}, {"Category": "Data Source", "Citation": "[10,19]", "Explanation": "The cited works provide a general dynamic model for graph neural networks, which the citing paper utilizes in the process of characterizing the behavior of such networks."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work provides a generalized energy dynamic framework that the citing paper adopts in the development of a framelet Dirichlet energy and analysis of energy behavior in spectral and spatial framelet convolutions."}, {"Category": "Supporting Evidence", "Citation": "[19]", "Explanation": "The cited work analyzed the energy behavior of framelet convolutions, providing foundational evidence for the citing paper to further develop the framelet energy and its behavior in the context of framelet convolutions."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work by [19] provides the basis for the analysis of the framelet energy dynamic in the citing paper, which is used to understand the energy conservation under framelet decomposition."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work by [10] is referenced in the definition of the total Dirichlet energy, which is a key element in the analysis of framelet energy dynamic in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[19]", "Explanation": "The cited work by [19] provides a conclusion on the energy dynamic of framelet, which serves as supporting evidence for the analysis conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work provides a generalizable approach to defining framelet energy behavior for different types of framelet models, which the citing paper builds upon in its own research on evaluating the impact of implicit layer on framelet energy behavior."}, {"Category": "Methodological Basis", "Citation": "[5,28]", "Explanation": "The cited works provide a framework and a principled way to develop new methods for diffusion on graph, which the citing paper adopts in its research."}, {"Category": "Methodological Basis", "Citation": "(49)", "Explanation": "The cited work provides the equation (49) for the calculation of the p-Laplacian implicit layer, which the citing paper adopts in their research to model the non-linear diffusion process."}, {"Category": "Methodological Basis", "Citation": "(45)", "Explanation": "The cited work provides the equation (45) for the calculation of the p-Laplacian implicit layer, which the citing paper uses to derive the form of the non-linear diffusion."}, {"Category": "Methodological Basis", "Citation": "(46)", "Explanation": "The cited work provides the equation (46) for the calculation of the p-Laplacian implicit layer, which the citing paper uses to complete the proof of the non-linear diffusion form."}, {"Category": "Data Source", "Citation": "(48)", "Explanation": "The cited work provides the equation (48) for the calculation of the p-Laplacian implicit layer, which the citing paper uses to complete the proof of the non-linear diffusion form."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work provides a hyper-parameter tuning strategy that the citing paper adopts to ensure a fair comparison in the research conducted."}, {"Category": "Data Source", "Citation": "[38]", "Explanation": "The cited work is referenced for the use of the framelet type in the 
research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work is mentioned for the use of a specific setting for graph framelets in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work is referenced for the use of a specific level J in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work is mentioned for the use of a specific dilation scale s in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work is referenced for the use of a specific degree of Chebyshev polynomial approximation in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work is mentioned for the use of a specific input graph setting in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work provides the settings and parameters for pL-UFG, which the citing paper adopts in their test setup to ensure a fair comparison in the study of hetero/homophilic graphs."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work provides a set of experiments that the citing paper uses to test the adaption power of the framelet model on both homophily and heterophily graphs by controlling the entries of \u03b8 based on the conditions provided in Proposition 3 and changing the quantity of \u00b5 and p."}, {"Category": "Data Source", "Citation": "[25]", "Explanation": "The cited work provides the experimental setting and data for the real-world homophily and heterophily graphs used in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[17]", "Explanation": "The cited work is referenced to show the data splitting method used in the real-world homophily and heterophily graphs in the citing paper."}, {"Category": "Methodological Basis", "Citation": "ogbn-arxiv", "Explanation": "The large-scale graph dataset (ogbn-arxiv) is used in the proposed model to verify the claim in Remark 3.4, demonstrating the scalability of the model in a real-world setting."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work introduces the GCN model, which the citing paper adopts as a baseline for implementing linear approximation to spectral graph convolutions in their research."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work presents the SGC model, which the citing paper uses to reduce the complexity of GCNs by removing nonlinearities and collapsing weight matrices between layers in their research."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work introduces the GAT model, which the citing paper uses to generate attention coefficient matrices and element-wisely multiply them on the graph adjacency matrix based on node features in their research."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work presents the JKNet model, which the citing paper adopts to offer the capability to adaptively exploit diverse neighbourhood ranges and enhance structure-aware representation for individual nodes in their research."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The 
cited work introduces the APPNP model, which the citing paper uses to leverage personalized PageRank to disentangle the neural network from the propagation scheme and merge GNN functionality in their research."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work presents the GPRGNN model, which the citing paper adopts to dynamically learn General Pagerank (GPR) weights to optimize the extraction of node features and topological information from a graph, regardless of the level of homophily present in their research."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The p-GNN model is used as a basis for testing the p-Laplacian based graph neural network model in the citing paper, indicating a methodological connection."}, {"Category": "Extension or Continuation", "Citation": "[41]", "Explanation": "The UFG model is cited to highlight a class of GNNs that utilizes framelet transforms to effectively merge graph features into low-pass and high-pass spectra, suggesting an extension of the research in the cited work."}, {"Category": "Extension or Continuation", "Citation": "[25]", "Explanation": "The pL-UFG model is mentioned to employ a p-Laplacian based implicit layer to enhance the adaptability of multi-scale graph convolution networks, indicating a continuation of the research in the cited work."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work introduces the concept of framelets, which the citing paper adopts in the design of their GNN model to address the issue of scalability in large-scale graph datasets."}]
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b16", "b2", "b11", "b14", "b24", "b8", "b19", "b15", "b21", "b0", "b7" ], "table_ref": [], "text": "For the task of classification, the training and testing sets are generally assumed to be independently and identically distributed (IID), where the examples are drawn from the same distribution over covariates and labels. However, this assumption tends to be violated in the real-world. Dataset shift [17] describes the phenomenon in which the training and testing sets come from different distributions. One scenario that can cause dataset shift is sample selection bias, where an example is non-uniformly chosen from the population to be part of the training process. This type of bias can ultimately cause a set of training examples to be partially observed, where any of the covariates or label of an example is missing, or even completely unobserved. As a result, the performance of classifiers that are trained using a set subject to sample selection bias will be degraded. Most works have proposed solutions to problems dealing with missing-at-random (MAR) bias [3], [12], [15], [25], where the non-inclusion of training samples is assumed to be independent from the label given the observed variables in the training set. However, these proposed solutions cannot properly account for the missing not at random (MNAR) setting, where the non-inclusion of training samples is assumed to not be independent from the label given the observed variables in the training set.\nIn this paper, we focus on MNAR sample selection bias on the labels. As an example of this type of bias, Figure 1 shows the predicted achievement of an admitted freshman student based on their high school GPA and SAT score. The filled red and blue circles represent students with observed label values of achievement while the non-filled circles represent students with missing label values of achievement. There is some unknown mechanism about missing labels. For example, achievement values of students who have not declared their majors are not collected. When the undeclared students are omitted from the training, a biased model is produced and could be very different from the ground truth model that would have been trained had achievement of all students been collected.\nOur goal is to leverage the observed feature information of those records with missing labels in the training such that the trained model would be close to the ground truth model.\nOne classic method proposed to account for MNAR sample selection bias on the labels is Heckman's two-step method [9]. Heckman's method utilizes inverse Mills ratio to model the impact of the selection process on the prediction. With the prediction and selection modeled as linear equations, this method constructs an unbiased model by first estimating the inverse Mills ratio (IMR) using the selection features and then incorporating it into the prediction equation. Due to its short computation time, Heckman's method has been a popular choice for solving linear regression under MNAR sample selection bias. However, applying Heckman's method to correct the same problem in the classification context is difficult. 
This is because the assumptions made for the use of the IMR may not be present in classifiers, causing them to perform inconsistently [20].\nTo address this challenge, researchers turned their attention back to modeling the joint likelihood of prediction and selection [16,22], which was a popular method before Heckman's method was proposed [1]. The joint likelihood considers the relationship between the noise terms used to model prediction and selection. In this work, we specifically examine the task of computing this likelihood using Greene's method [8]. As a general framework for non-linear regression models under MNAR sample selection bias, Greene's method was one of the first methods to approximate the joint likelihood in order to reduce computational complexity. We find that although the method provides a first-order optimization process to produce an optimal solution, the minimization of its loss function over the biased training set does not take samples with missing labels into account. Thus, Greene's formulation alone cannot be used as an objective function when attempting to learn a classifier robust to MNAR sample selection bias." }, { "figure_ref": [], "heading": "Problem Definition", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We give a summary of important symbols used throughout the paper in Table 1. Formally, let X be the feature space and Y be the binary target attribute. We first consider the training set D 𝑡𝑟 = {𝑡 𝑖 } 𝑛 𝑖=1 of 𝑛 samples that are originally sampled from the population to be modeled yet biased under MNAR sample selection bias. Each sample 𝑡 𝑖 is defined as:\n𝑡 𝑖 = (𝒙 𝑖 , 𝑦 𝑖 , 𝑠 𝑖 = 1) 1 ≤ 𝑖 ≤ 𝑚 (𝒙 𝑖 , 𝑠 𝑖 = 0) 𝑚 + 1 ≤ 𝑖 ≤ 𝑛(1)\nwhere the binary variable We then have the MNAR assumption, where MAR is violated: " }, { "figure_ref": [], "heading": "Contributions", "publication_ref": [], "table_ref": [], "text": "The contributions of our work are as follows. First, we propose Bi-asCorr, a framework for learning robust classification under MNAR sample selection bias, specifically in the case where the labels of some training samples are missing non-randomly. BiasCorr improves Greene's method by modifying the biased training dataset to assign a pseudolabel and an estimated soft selection value to the samples that have missing labels. These assignments are obtained by training two separate classifiers, one to predict pseudolabels by training on the set of fully observed training samples and another to predict the ground-truth selection value of a training sample. Second, we justify the improvement of BiasCorr over Greene's method by theoretically analyzing the bias of the loss functions estimated in BiasCorr and Greene's method. We provide theoretical guarantee to show that based on the level of missingness in the training set, the bias of BiasCorr is lower than that of Greene's method. Third, we extend BiasCorr to the classic problem of learning robust classification given a set of labeled training samples that are biased due to a hidden non-random selection process and an unbiased set of unlabeled samples. For the extension, we augment the training set with samples from the unbiased set, where the augmented samples are chosen by comparing the empirical frequencies of the biased training set and a set of samples randomly drawn from the unbiased set. Fourth, we conduct experiments on the real-world datasets and report the performance of our algorithms against state-of-the-art robust classifiers that were proposed under sample selection bias." 
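To make the structure of Eq. (1) concrete, the following is a minimal synthetic sketch of a training set whose labels are missing not at random: the selection indicator is driven by a latent index whose noise is correlated with the outcome noise, so missingness depends on the label even after conditioning on the observed covariates. All dimensions, coefficients, and the correlation value below are illustrative assumptions, not quantities taken from the paper.

```python
# Minimal MNAR sketch of the training set in Eq. (1): the first group keeps its labels
# (s_i = 1), the rest has labels withheld not at random (s_i = 0). Coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 3
X = rng.normal(size=(n, d))                      # covariates shared by selection and prediction
beta = np.array([1.0, -2.0, 0.5])                # outcome coefficients (assumed)
gamma = np.array([0.8, 0.0, 1.5])                # selection coefficients (assumed)
rho, sigma = 0.6, 1.0                            # noise correlation and outcome-noise scale

# Bivariate-normal noise: u_s enters selection, u_p enters the outcome, correlated through rho.
cov = [[1.0, rho * sigma], [rho * sigma, sigma ** 2]]
u_s, u_p = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

s = (X @ gamma + u_s > 0).astype(int)            # latent index > 0  ->  label observed
p_y = 1.0 / (1.0 + np.exp(-(X @ beta + u_p)))    # logistic outcome model with noise
y = rng.binomial(1, p_y)

D_s = [(X[i], y[i], 1) for i in range(n) if s[i] == 1]   # fully observed samples
D_u = [(X[i], 0) for i in range(n) if s[i] == 0]         # label missing, only (x_i, s_i = 0) kept
print(len(D_s), "labeled,", len(D_u), "label-missing")
```

Dropping D_u and fitting a classifier on D_s alone reproduces the biased behavior that the proposed framework is designed to correct.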
}, { "figure_ref": [], "heading": "RELATED WORK 2.1 Regression under Sample Selection Bias", "publication_ref": [ "b0", "b8", "b19", "b18", "b17", "b4" ], "table_ref": [], "text": "Within the field of statistics, loss functions designed to solve the problem of modeling under MNAR bias have generally been categorized as full information maximum likelihood (FIML) estimators. FIML estimators estimate the coefficients of the prediction model while simultaneously accounting for the observation of labels for each sample. Each FIML estimator is centered around two linear equations: one that models the non-random selection of some data instance based on selection attributes and another that models the prediction of a value for a given data instance. The relationship between the two equations lies in the noise terms, which are assumed to be positively correlated in a bivariate normal distribution [1].\nHeckman's two-step method [9] addresses the issue of sample selection bias within the context of linear regression when the dependent variable of a dataset has values that are MNAR. Unlike FIML estimators, this method requires estimating the inverse Mills ratio using the coefficients of the selection equation. The IMR is included as part of a new noise term for the prediction equation, namely the nonzero conditional noise expectation, in order to account for the bivariate normal relationship between the noise terms. Despite its popularity, Heckman's method has some key limitations when applied to non-linear regression models. One limitation of the method, noted by [20], relates to the nonzero conditional noise expectation term. For non-linear models, this term does not contain the IMR. For example, in the context of sample selection bias for Poisson regression, [19] derived a term for the nonzero conditional noise expectation that included a nonlinear function similar in nature to the IMR, yet the IMR itself was not included. Moreover, the IMR may be incorrectly specified given the collinearity between the coefficients of the selection and prediction equations [18]. In the area of fair machine learning, [5] formulated a fair regression model under the assumption that a subset of training outcomes are MNAR. The model, which has a closed-form solution under some fairness metric, adopts Heckman's method as part of its framework to account for the sample selection bias. Unlike these approaches, where the dependent variable is assumed to be continuous, our approach handles sample selection bias where the dependent variable is categorical. As closed-form solutions do not exist for likelihood equations maximized for logistic regression models, we depend on iterative optimization techniques in order to learn a classifier under MNAR sample selection bias." }, { "figure_ref": [], "heading": "Classification under Sample Selection Bias", "publication_ref": [ "b1", "b24", "b9", "b14", "b1", "b23", "b13", "b13", "b23", "b22", "b12", "b10" ], "table_ref": [], "text": "Most research works in the area of learning under sample selection bias fall in the category of MAR bias. 
To address MAR sample selection bias, importance weighting techniques are often used, where training data instances are reweighted based on the ratio between the densities of the testing and training distributions [2,25].\nWith the possibility of these techniques resulting in inaccurate estimates due to the influence of data instances with large importance weights on the reweighted loss, other researchers incorporated ideas of minimax estimation to formulate models that are robust to MAR sample selection bias [10,15]. These models consider a worst-case shift between the training and testing distributions to adversarially minimize the reweighted loss. Approaches that handle MAR bias generally assume a labeled training set of biased samples and an unlabeled testing set of unbiased samples [2]. As we address MNAR bias, we differ from these assumptions. In our study, we assume that the testing set cannot be accessed during training and that the training set contains a mixture of labeled and unlabeled examples given that the labels are non-randomly selected.\nIn recommender learning, [24] proposed the joint learning of an imputation model and a prediction model to estimate the performance of rating prediction given MNAR ratings. [14] adopted two propensity scoring methods into its loss function to handle bias of MNAR implicit feedback, where user feedback for unclicked data can be negative or positive-yet-unobserved. While the approaches in [14,24] also use separate propensity estimation models to predict the observation of a label, they consider matrix factorization as the prediction model, which is not for binary classification on tabular data.\nThe problem we define in this work is related to semi-supervised learning [23] where a training sample is treated differently based on whether the sample has a label or not. For labeled samples, the algorithm uses traditional supervision to update the model weights while for unlabeled samples, the algorithm often minimizes the difference in predictions between other similar training samples. In general, semi-supervised learning algorithms do not account for the missing data mechanism when comparing with similar samples. However, in our work, we model the missing data mechanism as we compare similar samples. One popular technique used in this setting is pseudolabel generation [13], where pseudolabels are made available to unlabeled samples based on predictions made by the model early in its training. This technique has been used to address MNAR labels. [11] employed class-aware propensity score and imputation strategies using pseudolabels to develop a semisupervised learning model that is doubly robust against MNAR data. This approach computes the probability of label missingness for a training sample in terms of a class prior. On the other hand, our approach does not require a class prior to compute the probability of label missingness for a training sample." }, { "figure_ref": [], "heading": "GREENE'S METHOD REVISITED", "publication_ref": [ "b7" ], "table_ref": [], "text": "Greene's method [8] is an FIML estimator that accounts for the impact of non-random sample selection bias on the label by considering the relationship between the noise terms in the prediction and selection equations. Unlike Heckman's method, Greene's method estimates the variance and correlation coefficient of the noise terms as the likelihood of the model is maximized. 
This is important as the noise term of the prediction equation may have different distributions for various non-linear models (e.g., the noise term of the prediction equation for negative binomial regression has a log gamma distribution). As a result, Greene's method can be extended to non-linear regression cases." }, { "figure_ref": [], "heading": "Sample Selection Model", "publication_ref": [], "table_ref": [], "text": "For any\n(𝒙 𝑖 , 𝑦 𝑖 ) ∈ X × Y, the selection equation of the 𝑖th sample is 𝑧 𝑖 = 𝜸𝒙 (𝑠 ) 𝑖 + 𝑢 (𝑠 )\n𝑖 , where 𝜸 is the set of regression coefficients for selection, 𝒙 (𝑠 ) 𝑖 is the set of features for sample selection, and 𝑢 (𝑠 ) 𝑖 ∼ N (0, 1) is the noise term for the selection equation. The selection value of the 𝑖th sample 𝑠 𝑖 is defined as:\n𝑠 𝑖 = 1 𝑧 𝑖 > 0 0 𝑧 𝑖 ≤ 0(2)\nThe prediction equation\n𝑓 (𝑦 𝑖 |𝒙 (𝑝 )\n𝑖 , 𝜖 𝑖 ) of the 𝑖th sample is based on logistic regression with\n𝑓 (𝑦 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖 ) = exp(𝜷𝒙 (𝑝 ) 𝑖 + 𝜎𝜖 𝑖 ) 1 + exp(𝜷𝒙 (𝑝 ) 𝑖 + 𝜎𝜖 𝑖 ) (3)\nwhere 𝜷 is the set of regression coefficients for prediction, 𝒙 (𝑝 ) 𝑖 is the set of features for prediction, and 𝜎𝜖 𝑖 is the noise term for the prediction equation, with 𝜎 as the standard deviation of the term and 𝜖 𝑖 ∼ N (0, 1) as a random variable. We express 𝜎𝜖 𝑖 as 𝑢 and 𝑣 𝑖 ∼ N (0, 1) is a random variable independent to 𝜖 𝑖 .\n(𝑝 ) 𝑖 , where 𝑢 (𝑝 ) 𝑖 ∼ N (0, 𝜎 2 )." }, { "figure_ref": [], "heading": "The noise terms 𝑢", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Loss Function", "publication_ref": [ "b10", "b11", "b20", "b6" ], "table_ref": [], "text": "Based on the above sample selection model, the loss function\nL = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 , 𝑠 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 )(4)\nover D 𝑡𝑟 is then derived, which depends on the joint density function\n𝑓 (𝑦 𝑖 , 𝑠 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 ). The first step is to consider 𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 )\n𝑖 ), which is expressed as\n𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 ) = ∫ ∞ -∞ 𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) • 𝑓 (𝜖 𝑖 )𝑑𝜖 𝑖 (5)\nBoth 𝑦 𝑖 and 𝑠 𝑖 are independent when conditioned on 𝜖 𝑖 . Thus,\n𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖 ) • 𝑃 (𝑠 𝑖 = 1|𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) (6) Because 𝑢 (𝑠 )\n𝑖 and 𝑢 (𝑝 ) 𝑖 are bivariate normal, we have\n𝑃 (𝑠 𝑖 = 1|𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = Φ 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2 (7)\nwhere Φ(•) is the standard normal cumulative distribution function.\nSince 𝜖 𝑖 ∼ N (0, 1), 𝑓 (𝜖 𝑖 ) is 𝜙 (𝜖 𝑖 ), where 𝜙 (•) is the standard normal density function. Thus,\n𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 ) = ∫ ∞ -∞ 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖 ) • Φ 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2 • 𝜙 (𝜖 𝑖 )𝑑𝜖 𝑖 (8)\nThe next step is to consider 𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 0|𝒙\n(𝑝 ) 𝑖 , 𝒙 (𝑠 )\n𝑖 ). The term 𝑠 𝑖 = 0 implies that the 𝑦 𝑖 are missing. We have\n𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 0|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = 𝑃 (𝑠 𝑖 = 0|𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) (9)\nwhere\n𝑃 (𝑠 𝑖 = 0|𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = Φ - 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2(10)\nSo\n𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 0|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 ) = ∫ ∞ -∞ Φ - 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2 • 𝜙 (𝜖 𝑖 )𝑑𝜖 𝑖(11)\nTherefore, combining Eq. ( 8) and Eq. (11), (12) where\n𝑓 (𝑦 𝑖 , 𝑠 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 ) = ∫ ∞ -∞ [(1 -𝑠 𝑖 ) + 𝑠 𝑖 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖 )] • 𝑃 (𝑠 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) • 𝜙 (𝜖 𝑖 )𝑑𝜖 𝑖\n𝑃 (𝑠 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = Φ (2𝑠 𝑖 -1) 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2(13)\nThus the negative log likelihood function L over 𝑛 training data samples is\nL = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log ∫ ∞ -∞ [(1 -𝑠 𝑖 ) + 𝑠 𝑖 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖 )] • 𝑃 (𝑠 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 )𝜙 (𝜖 𝑖 )𝑑𝜖 𝑖(14)\nL needs to be minimized with respect to 𝜷, 𝜸, 𝜎, and 𝜌. 
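As a side illustration of the joint density in Eq. (12), the sketch below evaluates the one-dimensional integral over the noise variable by quadrature for a single sample. It is only meant to make the integrand concrete; the parameter values are placeholders, and the paper itself approximates this quantity by simulation, as described next.

```python
# Quadrature sketch of the integrand in Eq. (12) for one sample. Illustrative only:
# the paper approximates this integral by Monte Carlo draws rather than quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def f_y(y, xp, beta, sigma, eps):
    """Logistic outcome model of Eq. (3) with noise term sigma * eps."""
    p1 = 1.0 / (1.0 + np.exp(-(xp @ beta + sigma * eps)))
    return p1 if y == 1 else 1.0 - p1

def p_s(s, xs, gamma, rho, eps):
    """Selection probability of Eq. (13): Phi((2s - 1)(gamma.x + rho*eps) / sqrt(1 - rho^2))."""
    return norm.cdf((2 * s - 1) * (xs @ gamma + rho * eps) / np.sqrt(1.0 - rho ** 2))

def joint_density(y, s, xp, xs, beta, gamma, sigma, rho):
    """f(y_i, s_i | x_i) from Eq. (12), integrating the standard normal eps numerically."""
    integrand = lambda e: ((1 - s) + s * f_y(y, xp, beta, sigma, e)) \
                          * p_s(s, xs, gamma, rho, e) * norm.pdf(e)
    val, _ = quad(integrand, -8.0, 8.0)   # the N(0,1) mass outside [-8, 8] is negligible
    return val

xp = xs = np.array([0.3, -1.2])
print(joint_density(1, 1, xp, xs, beta=np.array([1.0, 0.5]),
                    gamma=np.array([0.7, -0.2]), sigma=1.0, rho=0.6))
```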
Given that the computation of Eq. ( 14) is intractable, the simulation approach from [21] is used to minimize an approximate form of L, denoted as L.\nL = - 1 𝑛 𝑛 ∑︁ 𝑖=1 l𝑖 (15\n)\nwhere\nl𝑖 = log 1 𝑅 𝑅 ∑︁ 𝑟 =1 [ (1 -𝑠 𝑖 ) + 𝑠 𝑖 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 ) ] • 𝑃 (𝑠 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖𝑟 )(16)\nThis approach involves taking 𝑅 random draws 𝜖 𝑖𝑟 from the standard normal population for the 𝑖th sample. As long as 𝑅 is greater than √ 𝑛, then asymptotically L = L. A proof of this claim is provided in [7]." }, { "figure_ref": [], "heading": "Optimization", "publication_ref": [], "table_ref": [], "text": "Iterative first-order optimization techniques such as stochastic gradient descent can be used to solve Eq. ( 15) and obtain an optimal parameter 𝜷 * for the classifier ℎ. We note that the gradient of Eq. ( 16) with respect to 𝜷 for the 𝑖th training sample is expressed as\n∇ 𝜷 l𝑖 = 1 l𝑖 1 𝑅 𝑅 ∑︁ 𝑟 =1 𝑠 𝑖 • 𝑃 (𝑠 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖𝑟 ) • 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 ) • 𝒙 (𝑝 ) 𝑖 • 𝜕𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 ) 𝜕𝜷 (17)\nWe also apply the first-order optimization techniques to compute the other optimal parameters in Eq. ( 15), namely 𝜸 * , 𝜎 * , and 𝜌 * . " }, { "figure_ref": [], "heading": "ROBUST CLASSIFICATION UNDER MNAR SAMPLE SELECTION BIAS", "publication_ref": [], "table_ref": [], "text": "Despite Greene's method incorporating a sample selection model towards fitting logistic regression, the task of training a robust classifier ℎ over D 𝑡𝑟 under MNAR sample selection bias cannot be accomplished using this method. We specifically note a key issue in the optimization process. For any sample in the training set such that 𝑠 𝑖 = 0, the value of Eq. ( 17) is 0, meaning that ∇ 𝜷 l𝑖 would account for only samples such that 𝑦 𝑖 is observed. Thus, using a first-order optimization technique to solve Eq. ( 15) does not result in an iterative solution 𝜷 * such that the classifier ℎ(𝒙 (𝑝 )\n𝑖 ; 𝜷 * ) is robust against MNAR sample selection bias on the label.\nHowever, learning a robust classifier under MNAR sample selection bias can still be achieved by making improvements to Greene's method. First, we can refine the selection value of each sample in D 𝑢 to have a soft value in order to include information regarding the losses of samples in D 𝑢 when optimizing the classifier. While making the refinement, we still assume that each sample in D 𝑠 is assigned 𝑠 𝑖 = 1. Second, we can impute the missing labels in D 𝑢 with pseudolabels to further improve Greene's method." }, { "figure_ref": [ "fig_2" ], "heading": "BiasCorr", "publication_ref": [], "table_ref": [], "text": "To ensure that we learn classifiers that are robust to MNAR sample selection bias, we introduce BiasCorr, a framework that addresses the challenge of training a classifier using Greene's method. In BiasCorr, we ensure that the losses of samples with missing labels are included in the optimization process. Using this framework, we train ℎ(𝒙 (𝑝 ) 𝑖 ; 𝜷) to minimize L′ , which is an enhanced version of Eq. ( 15), over a modified training set 𝐷 ′ 𝑡𝑟 . We make these modifications while conforming to the original MNAR conditions on the label. Figure 2 to minimize the equation\nL′ = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log 1 𝑅 𝑅 ∑︁ 𝑟 =1 [(1-𝑠 ′ 𝑖 )+𝑠 ′ 𝑖 𝑓 (𝑦 ′ 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 )] •𝑃 (𝑠 ′ 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖𝑟 ) (18) over D ′ 𝑡𝑟 = D 𝑠 ∪ D ′ 𝑢 ,\nwhere\n𝑠 ′ 𝑖 = 1 𝑡 𝑖 ∈ D 𝑠 s 𝑡 𝑖 ∈ D 𝑢(19)\nand\n𝑦 ′ 𝑖 = 𝑦 𝑖 𝑡 𝑖 ∈ D 𝑠 ỹ𝑖 𝑡 𝑖 ∈ D 𝑢(20)\nTo estimate the soft selection value s, we start by computing the probability 𝑝 𝑖 for all samples in D 𝑢 . 
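The following is a minimal PyTorch sketch of the simulated negative log-likelihood in Eqs. (15)-(16); feeding it soft selection values and pseudolabels for the rows of D_u turns the same code into the modified objective of Eq. (18). It is an illustrative rendering under stated assumptions, not the authors' released implementation, and beta, gamma, sigma, rho are assumed to be learnable tensors.

```python
# Sketch of the simulated loss of Eqs. (15)-(16); with s' in {1, s_tilde} and pseudolabels
# for the unlabeled rows it becomes Eq. (18). Not the authors' code.
import torch

def simulated_nll(Xp, Xs, y, s, beta, gamma, sigma, rho, R=200):
    """Xp: (n, dp) prediction features, Xs: (n, ds) selection features,
    y: (n,) labels (pseudolabels where missing), s: (n,) selection values in [0, 1]."""
    n = Xp.shape[0]
    eps = torch.randn(n, R)                                     # R random draws per sample
    p1 = torch.sigmoid((Xp @ beta).unsqueeze(1) + sigma * eps)  # f(y = 1 | x, eps), Eq. (3)
    f_y = torch.where(y.unsqueeze(1) == 1, p1, 1.0 - p1)        # f(y_i | x, eps)
    z = ((2 * s - 1).unsqueeze(1) * ((Xs @ gamma).unsqueeze(1) + rho * eps)
         / (1.0 - rho ** 2) ** 0.5)
    p_sel = torch.distributions.Normal(0.0, 1.0).cdf(z)         # Eq. (13) / Eq. (21)
    lik = (((1 - s).unsqueeze(1) + s.unsqueeze(1) * f_y) * p_sel).mean(dim=1)
    return -torch.log(lik + 1e-12).mean()
```

In BiasCorr, s would be 1 for the samples in D_s and the estimated soft value for the samples with missing labels, and y would carry the pseudolabels predicted by g_y, as in Eqs. (19)-(20).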
While there is no difference in formulating the loss for each sample in D 𝑠 when comparing Eq. ( 18) to Eq. ( 15), we alter the calculation of the loss for each sample in D 𝑢 in the following ways. First, based on the new binary assignment of 𝑠 ′ 𝑖 , we compute\n𝑃 (𝑠 ′ 𝑖 = s |𝒙 (𝑠 )\n𝑖 , 𝜖 𝑖𝑟 ), which is expressed as\n𝑃 (𝑠 𝑖 = s |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = Φ (2s -1) 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2(21)\nCompared to Eq. ( 10), the quantity\n𝜸𝒙 (𝑠 ) 𝑖 +𝜌𝜖 𝑖𝑟 √ 1-𝜌 2 is multiplied by 2s - 1.\nAs this adjusted calculation changes the optimization of the selection coefficients 𝜸 , we see that the value of 𝑃 (𝑠 ′ 𝑖 = 1|𝒙\n(𝑠 )\n𝑖 , 𝜖 𝑖𝑟 ), which is computed using Eq. ( 7), is greater than 𝑃 (𝑠 𝑖 = 1|𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖𝑟 ) after L′ is minimized. As the formulation for computing the loss of each sample in D 𝑠 is kept fixed, this means that the training of ℎ using L is improved as L′ is expected to converge to a value less than L.\nSecond, in Eq. ( 18), we have the term\n[(1 -s) + s 𝑓 ( ỹ𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 )] which is multiplied by 𝑃 (𝑠 ′ 𝑖 = s |𝒙 (𝑠 )\n𝑖 , 𝜖 𝑖𝑟 ). This term can be interpreted as a weight that puts more emphasis on the label information of samples in D 𝑢 as the value of s increases. Nevertheless, as we incorporate s as a selection value assignment, it is impossible to compute 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 ) for samples in D 𝑢 because of their missing labels due to MNAR bias. This is why we consider the prediction of pseudolabels ỹ𝑖 when computing the loss for each sample in D ′ 𝑢 . The pseudocode for BiasCorr is provided in Algorithm 1. In line 4, we first train 𝑔 𝑠 on D 𝑡𝑟 to predict the original ground-truth selection value 𝑠 𝑖 . In line 5, we train another binary classifier 𝑔 𝑦 (𝒙 𝑡𝑟 we obtain a solution 𝜷 * after minimizing Eq. ( 18) such that ℎ(𝒙 (𝑝 ) 𝑖 ; 𝜷 * ) is robust against non-random sample selection bias on the label.\nThe computational complexity of Algorithm 1 trivially depends on the complexity of training 𝑔 𝑠 , 𝑔 𝑦 , and ℎ to convergence. Similar to the training of ℎ using Eq. ( 15), the complexity of training ℎ by minimizing Eq. ( 18) is 𝑂 (𝑇𝑛), where 𝑇 is the number of training iterations for ℎ.\nWe further note that the types of models used to train 𝑔 𝑠 and 𝑔 𝑦 are listed as inputs to Algorithm 1. In our work, we experiment training 𝑔 𝑠 using the probit and logistic regression models. Compared to logistic regression models, which is based on the sigmoid function, probit models use the normal cumulative distribution function to model binary classification. For 𝑔 𝑦 , we consider logistic regression and multi-layer perceptron." }, { "figure_ref": [], "heading": "Bias Analysis Regarding Loss Function", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze the bias of the loss function estimator for both Greene's method and BiasCorr algorithm. We compare the two biases and show that our BiasCorr algorithm further reduces the bias for classification performance estimation given the ratio of the unlabeled training set is larger than a threshold. We first define the optimized negative log-likelihood loss function where the training data D 𝑡𝑟 is fully observed:\nL * = - 1 |D 𝑡𝑟 | ∑︁ 𝑖 ∈ D 𝑡𝑟 log 𝑃 (𝑦 𝑖 |𝒙 𝑖 ) = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) (22)\nwhere 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) takes the form of logistic regression in our paper. The bias of an arbitrary loss function estimator L is defined as: 𝑖 ), the bias of the loss function estimator for Greene's method shown in Eq. 
(15) is stated below. For an arbitrary loss-function estimator $\mathcal{L}$, the bias is defined as $\mathrm{Bias}(\mathcal{L}) = |\mathcal{L}^* - \mathbb{E}_{\mathcal{D}_{tr}}[\mathcal{L}]|$; for Greene's estimator this gives
$$\mathrm{Bias}(\hat{\mathcal{L}}) = \left| \frac{1}{n}\sum_{i=1}^{n} \log \frac{f(y_i \mid \boldsymbol{x}^{(p)}_i)}{\hat{p}(s_i) + p(s_i)\,\hat{p}(s_i)\big(\hat{f}(y_i \mid \boldsymbol{x}^{(p)}_i) - 1\big)} \right| \qquad (23)$$
Lemma 2 (Bias of BiasCorr estimator). Given the definitions of $s'_i$, $\tilde{s}$, and $y'_i$ in Section 4.1, the bias of the loss-function estimator of the BiasCorr algorithm shown in Eq. (18) is
$$\mathrm{Bias}(\hat{\mathcal{L}}') = \left| \frac{1}{n}\sum_{i=1}^{n} \log \frac{f(y_i \mid \boldsymbol{x}^{(p)}_i)}{\hat{p}(s'_i) + \big(p(s_i) + \tilde{s}\eta\big)\,\hat{p}(s'_i)\big(\hat{f}(y'_i \mid \boldsymbol{x}^{(p)}_i) - 1\big)} \right| \qquad (24)$$
Note that both $\mathrm{Bias}(\hat{\mathcal{L}})$ and $\mathrm{Bias}(\hat{\mathcal{L}}')$ depend on the estimated selection model $\hat{p}(\cdot)$ and the estimated prediction model $\hat{f}(\cdot)$ for the samples in $\mathcal{D}_u$. Due to the fundamental difference between the selection model and the prediction model, it is very challenging to derive an unbiased estimator of the loss function based on Greene's method. However, by applying the modification introduced in the BiasCorr algorithm, we are able to further reduce the bias of the loss-function estimator on classification tasks under an assumption on the ratio $\eta$. Our main theorem, Theorem 4.1, compares the biases of the two methods. To obtain the result in Theorem 4.1, we consider the difference between the two biases and analyze its terms after subtracting $\mathrm{Bias}(\hat{\mathcal{L}}')$ from $\mathrm{Bias}(\hat{\mathcal{L}})$. We first decompose the difference and derive the following inequality:
$$\mathrm{Bias}(\hat{\mathcal{L}}) - \mathrm{Bias}(\hat{\mathcal{L}}') \;\ge\; \tilde{s}\eta \cdot \Bigg[ \underbrace{1 - \tilde{s}\eta - \frac{1}{n}\sum_{i=1}^{n} 2p(s_i)}_{\text{term 1}} + \underbrace{\frac{1}{n}\sum_{i=1}^{n} \hat{f}(y_i \mid \boldsymbol{x}^{(p)}_i)\big(2p(s_i) + \tilde{s}\eta\big)}_{\text{term 2}} \Bigg] \qquad (25)$$
According to Eq. (25), if both term 1 and term 2 are greater than 0, the BiasCorr estimator is guaranteed to achieve lower bias than Greene's estimator. Since both $\hat{f}(y_i \mid \boldsymbol{x}^{(p)}_i)$ and $p(s_i)$ lie in $(0, 1)$ for each tuple $i$, term 2 remains positive after summation and averaging over all training tuples. Our theoretical analysis also shows that, to guarantee the positivity of term 1, the proportion of unlabeled training data $\eta$ needs to be larger than $1/(2\tilde{s})$. Notice that the condition $\eta \le 1/(2\tilde{s})$ does not necessarily imply that $\mathrm{Bias}(\hat{\mathcal{L}}')$ is larger than $\mathrm{Bias}(\hat{\mathcal{L}})$: we still need to compare the magnitudes of term 1 and term 2, and the value of term 2 depends heavily on the estimated selection and outcome models. Finally, combining these analysis results leads to the conclusion in Theorem 4.1. For the proof details of Lemma 1, Lemma 2, and Theorem 4.1, please refer to Appendix C." }, { "figure_ref": [], "heading": "Extending BiasCorr", "publication_ref": [ "b2", "b2" ], "table_ref": [], "text": "Most algorithms that have been proposed to learn classification under sample selection bias are trained under the assumption that the training set contains labeled samples $\mathcal{D}_s = \{(\boldsymbol{x}_i, y_i)\}_{i=1}^{m}$ that come from a biased source distribution. Additionally, they assume that there exists a set of testing samples $\mathcal{D}_N = \{\boldsymbol{x}_i\}_{i=1}^{N}$ drawn from an unbiased target distribution. We propose an extension of BiasCorr, BiasCorr*, for this setting. We do so by augmenting the original set of labeled training samples using the set of unlabeled samples from the target distribution. Specifically, given the two sets $\mathcal{D}_s$ and $\mathcal{D}_N$, we construct an augmented training set $\mathcal{D}_{aug} = \mathcal{D}_s \cup \mathcal{D}_u$ of $n$ samples, where $\mathcal{D}_u$ is drawn from $\mathcal{D}_N$. We note that choosing $n$ as the size of $\mathcal{D}_{aug}$ is significant in determining the performance of estimating the selection probability and the efficiency of BiasCorr*. First, the following lemma from [3] shows the error of using $a_t/b_t$ as an estimate of the selection probability $P(s_i = 1 \mid t)$. Lemma 3. [3] Let $\delta > 0$, let $a'$ be the number of distinct samples in $\mathcal{D}_s$, and let $p_0 = \min_{t \in \mathcal{D}_{aug}} P(t) \neq 0$. 
Then, with probability at least 1 -𝛿, the following inequality holds for all distinct 𝑡 ∈ D 𝑠 :\nD 𝑎𝑢𝑔 = D 𝑠 ∪ D 𝑢 of 𝑛\n𝑃 (𝑠 𝑖 = 1|𝑡) - 𝑎 𝑡 𝑏 𝑡 ≤ √︄ log 2𝑎 ′ + log 1 𝛿 𝑝 0 𝑛(26)\nHere we see that for a given number of distinct samples in D 𝑠 , the error of estimating 𝑃 (𝑠 𝑖 = 1|𝑡) depends on the value of 𝑝 0 𝑛, which equals the number of occurrences of the least frequent sample in D 𝑎𝑢𝑔 . This value is dependent on the set D 𝑢 , which may include samples 𝑡 that are not in D 𝑠 . Second, the computational complexity of generating D 𝑎𝑢𝑔 in lines is bounded by 𝑛, where in the worst case D 𝑛 has 𝑛 distinct samples and the last 𝑛 -𝑚 samples in D 𝑛 are added to D 𝑢 . " }, { "figure_ref": [], "heading": "EXPERIMENTS 5.1 Datasets", "publication_ref": [ "b5" ], "table_ref": [ "tab_9" ], "text": "We evaluate the performance of our proposed algorithms on the Adult, German, and Drug datasets [6] using the set of prediction and selection features listed in Table 7 in Appendix A. The Adult dataset contains 45,222 samples that were collected from the 1994 Census database. For each sample, we predict whether the type of workclass is government or private. The German dataset, which consists of attributes such as credit history, age, duration in months, and checking account status, contains 1,000 samples of individuals that have either good or bad credit risk. The task is to predict the level of credit risk for each person. The Drug dataset has 1,885 samples of respondents with known attributes such as gender, education, and ethnicity. Based on these attributes, we predict how likely it is for a respondent to use benzos." }, { "figure_ref": [], "heading": "Experiments on BiasCorr", "publication_ref": [], "table_ref": [ "tab_5", "tab_5", "tab_6", "tab_5" ], "text": "We choose 70% of samples in each dataset to generate the original training set D 𝑡𝑟 . We work with two different bias scenarios in our experiments: one where the condition of the missingness ratio 𝜂 listed in Theorem 4.1 is satisfied and another where the condition is not met. To create the sample selection bias on D 𝑡𝑟 for the Adult dataset, we select a training sample to have an observed label if the years of education is more than 12. As a result, out of 31,655 training samples, the set D 𝑢 contains 23,664 samples (𝜂 = 0.7476).\nFor the German dataset, we select a training sample to be fully observed if the person has been employed for more than 1 year, resulting in D 𝑢 having 162 out of 700 training samples (𝜂 = 0.2314).\nFor the Drug dataset, we create the sample selection bias scenario for the training set by selecting individuals whose Oscore is at most 43. As a result, 860 out of 1,319 samples are in D 𝑢 (𝜂 = 0.6520).\nBaselines and Reproducibility. We compare BiasCorr to the following baselines: (a) logistic regression without sample selection bias (NoBias), which is trained using D 𝑡𝑟 = D 𝑠 ∪ D 𝑢 where all samples in D 𝑡𝑟 are fully observed, (b) logistic regression with sample selection bias (SSBias), which is trained using D 𝑠 , and (c) logistic regression with sample selection bias correction based on Greene's method, which is trained using the set D 𝑠 ∪ D 𝑢 where all samples in D 𝑢 have non-randomly missing labels. Our models and all baselines are implemented using Pytorch. Details regarding implementation and the hyperparameters used for BiasCorr are given in Appendix B. Our source code can be downloaded using the link https://tinyurl.com/4kvux87n. Results. Table 2 shows the training/testing accuracy of each model. 
We report average accuracies and their standard deviations over 5 runs. We first see that while the change in training accuracies is different for each dataset when comparing NoBias and SSBias, NoBias outperforms SSBias by 24.13%, 3.34%, and 2.30% when considering the testing accuracy for the Adult, German, and Drug datasets, respectively. This shows that the utility of the logistic regression model is reduced when trained on D 𝑠 . We also see that Greene's method does not outperform SSBias by much when evaluated on the testing set. For instance, when looking at the results for the Adult dataset in Table 2, the testing accuracy of Greene's method is 0.55% higher than SSBias while the testing accuracy of NoBias is 24.13% higher. This demonstrates that a classifier is not robust to MNAR sample selection bias when learning to optimize Eq. ( 15). More importantly, we observe that BiasCorr, under all 4 combinations of settings for 𝑔 𝑠 and 𝑔 𝑦 , outperforms SSBias and Greene's method. Using the German dataset as an example, BiasCorr(LR, MLP) has the lowest test accuracy out of the four BiasCorr settings after training on the dataset. Despite this, BiasCorr(LR, MLP) outperforms SSBias by 1.67% on the testing set. This difference is higher than the 0.34% margin when comparing Greene's method to SSBias.\nWe also examine the values of 𝜂 and 1/(2s) in Table 3 based on this experiment. Using 𝑔 𝑠 on probit as an example, we see that the value of 1/(2s) is 0.5868 for the Adult dataset. We also observe that, as shown in Table 2, BiasCorr(probit, LR) and BiasCorr(probit, MLP) outperform Greene's method by 7.16% and 22.79%, respectively. As 𝜂 > 1/(2-s) for the Adult dataset, the result validates our " }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we have proposed a framework, BiasCorr, to learn a classifier that is robust against sample selection bias on the training dataset that causes a subset of training samples to have nonrandomly missing labels. As a significant improvement to a formulation previously proposed to model MNAR sample selection bias, BiasCorr trains a robust classifier after learning separate classifiers to predict pseudolabels and estimate a soft selection value assignment for these samples. Theoretical analysis on the bias of BiasCorr provides a guarantee for this improvement based on the level of missingness in the training set. Experimental results on two real-world demonstrate not only the robustness of classifiers under this framework, but also their better performance than baselines. In the future, we plan to extend this framework to learn more complex non-linear regression models such as kernel ridge regression. " }, { "figure_ref": [], "heading": "A SELECTION AND PREDICTION FEATURES", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B IMPLEMENTATION DETAILS", "publication_ref": [], "table_ref": [], "text": "We pre-process the Adult dataset by making the marital status attribute binary (married or not-married), adjusting the country attribute to where countries represented by at most 150 records are considered in the Other category, and deleting the final weight and race attributes. 
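A pandas sketch of this pre-processing step is given below. The column names ("marital-status", "native-country", "fnlwgt", "race") follow the common UCI Adult header and are assumptions; they should be adjusted to the files actually used.

```python
# Illustrative pre-processing of the Adult dataset as described above; column names assumed.
import pandas as pd

def preprocess_adult(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Binarize marital status: any "Married-*" value -> "married", everything else -> "not-married".
    married = df["marital-status"].str.strip().str.startswith("Married")
    df["marital-status"] = married.map({True: "married", False: "not-married"})
    # Group countries represented by at most 150 records into an "Other" category.
    counts = df["native-country"].value_counts()
    rare = counts[counts <= 150].index
    df["native-country"] = df["native-country"].replace(list(rare), "Other")
    # Delete the final weight and race attributes.
    return df.drop(columns=["fnlwgt", "race"])
```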
Rather than keeping the number of training epochs fixed, Bias-Corr, which is optimized using stochastic gradient descent with a learning rate of 0.01 and a weight decay of 1 × 10 -4 , keeps training until the percent change in loss is less than 0.025% for the Adult dataset and 0.05% for the German and Drug datasets. The prediction and selection coefficients 𝜷 and 𝜸 are initialized to zero while 𝜎 and 𝜌 are initialized to 0.01. The number of random draws 𝑅 is set to 200. All baselines are implemented using Pytorch." }, { "figure_ref": [], "heading": "C PROOF DETAILS OF BIAS ANALYSIS IN SECTION 4.2 C.1 Proof of Lemma 1", "publication_ref": [], "table_ref": [], "text": "Following the definition, the bias of estimator L from Greene's method is:\nBias( L) = |L * -E D 𝑡𝑟 [ L]|\nThe loss function L * on all training samples is defined as:\nL * = - 1 |D 𝑡𝑟 | ∑︁ 𝑖 ∈ D 𝑡𝑟 log 𝑃 (𝑦 𝑖 |𝒙 𝑖 ) = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) (27)\nwhere 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) takes the form of logistic regression. By plugging in the expression of L * and L we have: 𝑖 ) respectively, the log-likelihood function for each tuple 𝑖 in Eq. ( 29) can be further rewritten as:\nBias( L) = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -E D 𝑡𝑟 - 1 𝑛 𝑛 ∑︁ 𝑖=1 l𝑖(\nl𝑖 = log (1 -𝑠 𝑖 ) • p (𝑠 𝑖 ) + 𝑠 𝑖 • f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) • p (𝑠 𝑖 )\nAfter taking the expectation over all the training samples, the latter term in Eq. ( 28) could be expressed as:\nE D 𝑡𝑟 - 1 𝑛 𝑛 ∑︁ 𝑖=1 l𝑖 = - 1 𝑛 𝑛 ∑︁ 𝑖=1 E D 𝑡𝑟 log (1 -𝑠 𝑖 ) • p (𝑠 𝑖 ) + 𝑠 𝑖 • f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) • p (𝑠 𝑖 ) = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log (1 -𝑝 (𝑠 𝑖 )) • p (𝑠 𝑖 ) + 𝑝 (𝑠 𝑖 ) • f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) • p (𝑠 𝑖 ) = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log p (𝑠 𝑖 ) + 𝑝 (𝑠 𝑖 ) p (𝑠 𝑖 )( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1)(30)\nFinally, plugging the results above into Eq. (28) leads to the bias of L:\nBias( L) = 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) p (𝑠 𝑖 ) + 𝑝 (𝑠 𝑖 ) p (𝑠 𝑖 )( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1) (31)" }, { "figure_ref": [], "heading": "C.2 Proof of Lemma 2", "publication_ref": [], "table_ref": [], "text": "Similar to the proof in Section C.1 we derive the bias of the loss function for BiasCorr Algorithm. We plug in the expression of L * and L′ and obtain the following equation: \nBias( L′ ) = -" }, { "figure_ref": [], "heading": "C.3 Proof of Theorem 4.1", "publication_ref": [], "table_ref": [], "text": "To fairly compare the biases from two loss function estimators, we assume that p (•) accurately estimate the true selection model 𝑝 (𝑠 𝑖 ), that is, p (𝑠 𝑖 ) = 𝑝 (𝑠 𝑖 ) for both Greene's method and BiasCorr algorithm. We also assume that f (𝑦 𝑖 |𝒙 " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "theoretical comparison of BiasCorr and Greene's method. For the German and Drug datasets, we see that the value of 1/(2s) is not less than 𝜂. However, BiasCorr still outperforms Greene's method across all 4 combinations of settings for 𝑔 𝑠 and 𝑔 𝑦 . This shows that our BiasCorr algorithm, which improves Greene's method by incorporating pseudolabel generation and a soft selection assignment on samples in D 𝑢 , produces a more robust classifier against MNAR sample selection bias. Execution Time. We also report the execution times of training ℎ using Greene's method and BiasCorr in Table 4, where the experiments were conducted on the Dell XPS 8950 9020 with a Nvidia GeForce RTX 3080 Ti. We see that BiasCorr trains slower than Greene's method for the Adult dataset while BiasCorr has a slightly faster execution time than Greene's method for the German and Drug datasets. 
For instance, BiasCorr(LR, LR) executes 6.09 seconds slower than Greene's method for the Adult dataset and 0.19 and 0.86 seconds faster than Greene's method for the German and Drug datasets, respectively. Sensitivity Analysis. We further conduct an empirical evaluation of the performance of BiasCorr when considering different assignments for the soft selection value s on samples in D ′ 𝑢 , up to s = 0.5. In this particular study, we run Algorithm 1 except we ignore the training of 𝑔 𝑠 using the probit or logit model. Figure 3 shows the results of this experiment over the Adult and German datasets. We see that when training 𝑔 𝑦 under both logistic regression and an MLP, the performance of BiasCorr peaks within the range of the estimates we obtain by computing the average of predictions given by 𝑔 𝑠 on samples in D 𝑢 .\nWe also evaluate how modifying 𝜂 on the training set affects the performance of BiasCorr on the testing set. We look at the values of 0.5, 0.6, and 0.7 for 𝜂. We train our method using the Drug dataset for this experiment. Using testing accuracy and F1 score as evaluation metrics and logistic regression to train 𝑔 𝑦 , we report the results of this sensitivity analysis over the Drug dataset in Table 5. We first see that the 𝜂 > 1/(2s) condition is satisfied when 𝜂 = 0.7. For 𝜂 = 0.7, BiasCorr(probit, LR) and BiasCorr(LR, LR) outperform SSBias and Greene's method based on test accuracy and F1 score, as indicated in Table 5. For the other two values of 𝜂, where the condition is not satisfied, BiasCorr(probit, LR) and BiasCorr(LR, LR) still outperform SSBias and Greene's method." }, { "figure_ref": [], "heading": "Experiments on BiasCorr *", "publication_ref": [ "b3" ], "table_ref": [], "text": "For the biased training set of labeled samples, we use the same set D 𝑠 that was used in the experiments on BiasCorr and leave the rest of the samples unlabeled as part of the set D 𝑁 . We fix the number of samples 𝑛 drawn from D 𝑁 to be the number of samples that is obtained after splitting each dataset. Baselines. We compare BiasCorr * to the following baselines that were proposed to learn classification under MAR sample selection bias where samples from the unbiased target distribution are unlabeled: (a) a robust non-fair version of RFLearn 1 [4], which considers the empirical frequencies of each record in D 𝑠 and the unlabeled testing set to estimate the true probability of selection, and (b) the" } ]
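As a closing illustration of the optimization setup in Appendix B, the sketch below runs stochastic gradient descent on the simulated loss (the simulated_nll sketch given earlier, not the authors' code) with the stated learning rate, weight decay, number of random draws, and the percent-change stopping rule; how the tensors of pseudolabels and soft selection values are built is assumed to follow Algorithm 1.

```python
# Illustrative training loop: SGD on the simulated loss with the Appendix B settings
# (lr 0.01, weight decay 1e-4, R = 200) and a percent-change-in-loss stopping rule.
import torch

def train_biascorr(Xp, Xs, y_prime, s_prime, tol=0.00025, max_iters=10000):
    dp, ds = Xp.shape[1], Xs.shape[1]
    beta = torch.zeros(dp, requires_grad=True)      # prediction coefficients, init at zero
    gamma = torch.zeros(ds, requires_grad=True)     # selection coefficients, init at zero
    sigma = torch.tensor(0.01, requires_grad=True)  # noise scale, init at 0.01
    rho = torch.tensor(0.01, requires_grad=True)    # noise correlation, init at 0.01
    opt = torch.optim.SGD([beta, gamma, sigma, rho], lr=0.01, weight_decay=1e-4)

    prev = None
    for _ in range(max_iters):
        opt.zero_grad()
        loss = simulated_nll(Xp, Xs, y_prime, s_prime, beta, gamma, sigma, rho, R=200)
        loss.backward()
        opt.step()
        cur = loss.item()
        if prev is not None and abs(prev - cur) / abs(prev) < tol:  # e.g. 0.025% for Adult
            break
        prev = cur
    return beta.detach(), gamma.detach(), sigma.item(), rho.item()
```

Applying weight decay to sigma and rho as well is a simplification of this sketch; in practice one might exclude them from regularization and constrain rho to lie in (-1, 1).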
2023-05-25
10.1109/BigData55660.2022.10021107
[ { "authors": "Takeshi Amemiya", "journal": "Harvard University Press", "ref_id": "b0", "title": "Advanced Econometrics", "year": "1985" }, { "authors": "Steffen Bickel; Michael Brückner; Tobias Scheffer", "journal": "", "ref_id": "b1", "title": "Discriminative learning for differing training and test distributions", "year": "2007" }, { "authors": "Corinna Cortes; Mehryar Mohri; Michael Riley; Afshin Rostamizadeh", "journal": "ALT", "ref_id": "b2", "title": "Sample Selection Bias Correction Theory", "year": "2008" }, { "authors": "Wei Du; Xintao Wu", "journal": "", "ref_id": "b3", "title": "Fair and robust classification under sample selection bias", "year": "2021" }, { "authors": "Wei Du; Xintao Wu; Hanghang Tong", "journal": "", "ref_id": "b4", "title": "Fair Regression under Sample Selection Bias", "year": "2022" }, { "authors": "Dheeru Dua; Casey Graff", "journal": "", "ref_id": "b5", "title": "UCI Machine Learning Repository", "year": "2017" }, { "authors": "Lee Lung", "journal": "Econometric Theory", "ref_id": "b6", "title": "Asymptotic Bias in Simulated Maximum Likelihood Estimation of Discrete Choice Models", "year": "1995" }, { "authors": "William Greene", "journal": "Working Papers", "ref_id": "b7", "title": "A General Approach to Incorporating Selectivity in a Model", "year": "2006" }, { "authors": "James J Heckman", "journal": "Econometrica: Journal of the econometric society", "ref_id": "b8", "title": "Sample selection bias as a specification error", "year": "1979" }, { "authors": "Weihua Hu; Gang Niu; Issei Sato; Masashi Sugiyama", "journal": "", "ref_id": "b9", "title": "Does distributionally robust supervised learning give robust classifiers?", "year": "2018" }, { "authors": "Xinting Hu; Yulei Niu; Chunyan Miao; Xian-Sheng Hua; Hanwang Zhang", "journal": "", "ref_id": "b10", "title": "On non-random missing labels in semi-supervised learning", "year": "2022" }, { "authors": "Jiayuan Huang; Alexander J Smola; Arthur Gretton; Karsten M Borgwardt; Bernhard Schölkopf", "journal": "", "ref_id": "b11", "title": "Correcting Sample Selection Bias by Unlabeled Data", "year": "2006" }, { "authors": "Dong-Hyun Lee", "journal": "", "ref_id": "b12", "title": "Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks", "year": "2013" }, { "authors": "Jae-Woong Lee; Seongmin Park; Jongwuk Lee", "journal": "", "ref_id": "b13", "title": "Dual unbiased recommender learning for implicit feedback", "year": "2021" }, { "authors": "Anqi Liu; Brian Ziebart", "journal": "", "ref_id": "b14", "title": "Robust classification under sample selection bias", "year": "2014" }, { "authors": "Alfonso Miranda; Sophia Rabe-Hesketh", "journal": "The Stata Journal", "ref_id": "b15", "title": "Maximum Likelihood Estimation of Endogenous Switching and Sample Selection Models for Binary, Ordinal, and Count Variables", "year": "2006" }, { "authors": "G Jose; Troy Moreno-Torres; Rocío Raeder; Nitesh V Alaiz-Rodríguez; Francisco Chawla; Herrera", "journal": "Pattern recognition", "ref_id": "b16", "title": "A unifying view on dataset shift in classification", "year": "2012" }, { "authors": "Patrick Puhani", "journal": "Journal of economic surveys", "ref_id": "b17", "title": "The Heckman correction for sample selection and its critique", "year": "2000" }, { "authors": "Joseph V Terza", "journal": "Journal of Econometrics", "ref_id": "b18", "title": "Estimating count data models with endogenous switching: Sample selection and endogenous treatment effects", "year": "1998" }, { 
"authors": "Joseph V Terza", "journal": "Econometric Reviews", "ref_id": "b19", "title": "Parametric nonlinear regression with endogenous switching", "year": "2009" }, { "authors": "Kenneth E Train", "journal": "Cambridge University Press", "ref_id": "b20", "title": "Discrete choice methods with simulation", "year": "2009" }, { "authors": "Jenny Wu; Tucker ", "journal": "Journal of Accounting Literature", "ref_id": "b21", "title": "Selection bias and econometric remedies in accounting and finance research", "year": "2010" }, { "authors": "E Jesper; Van Engelen; H Holger; Hoos", "journal": "Machine Learning", "ref_id": "b22", "title": "A survey on semi-supervised learning", "year": "2020" }, { "authors": "Xiaojie Wang; Rui Zhang; Yu Sun; Jianzhong Qi", "journal": "PMLR", "ref_id": "b23", "title": "Doubly robust joint learning for recommendation on data missing not at random", "year": "2019" }, { "authors": "Bianca Zadrozny", "journal": "", "ref_id": "b24", "title": "Learning and evaluating classifiers under sample selection bias", "year": "2004" } ]
[ { "formula_coordinates": [ 2, 108.12, 543.97, 186.47, 21.08 ], "formula_id": "formula_0", "formula_text": "𝑡 𝑖 = (𝒙 𝑖 , 𝑦 𝑖 , 𝑠 𝑖 = 1) 1 ≤ 𝑖 ≤ 𝑚 (𝒙 𝑖 , 𝑠 𝑖 = 0) 𝑚 + 1 ≤ 𝑖 ≤ 𝑛(1)" }, { "formula_coordinates": [ 4, 53.8, 101.01, 240.25, 22.57 ], "formula_id": "formula_1", "formula_text": "(𝒙 𝑖 , 𝑦 𝑖 ) ∈ X × Y, the selection equation of the 𝑖th sample is 𝑧 𝑖 = 𝜸𝒙 (𝑠 ) 𝑖 + 𝑢 (𝑠 )" }, { "formula_coordinates": [ 4, 142.17, 171.05, 152.41, 19.19 ], "formula_id": "formula_2", "formula_text": "𝑠 𝑖 = 1 𝑧 𝑖 > 0 0 𝑧 𝑖 ≤ 0(2)" }, { "formula_coordinates": [ 4, 152.78, 200.13, 33.72, 11.43 ], "formula_id": "formula_3", "formula_text": "𝑓 (𝑦 𝑖 |𝒙 (𝑝 )" }, { "formula_coordinates": [ 4, 98.18, 230.73, 196.4, 30.25 ], "formula_id": "formula_4", "formula_text": "𝑓 (𝑦 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖 ) = exp(𝜷𝒙 (𝑝 ) 𝑖 + 𝜎𝜖 𝑖 ) 1 + exp(𝜷𝒙 (𝑝 ) 𝑖 + 𝜎𝜖 𝑖 ) (3)" }, { "formula_coordinates": [ 4, 53.47, 303.5, 241.56, 27.73 ], "formula_id": "formula_5", "formula_text": "(𝑝 ) 𝑖 , where 𝑢 (𝑝 ) 𝑖 ∼ N (0, 𝜎 2 )." }, { "formula_coordinates": [ 4, 107.57, 428.37, 187.02, 24.75 ], "formula_id": "formula_6", "formula_text": "L = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 , 𝑠 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 )(4)" }, { "formula_coordinates": [ 4, 53.59, 470.77, 240.45, 27.73 ], "formula_id": "formula_7", "formula_text": "𝑓 (𝑦 𝑖 , 𝑠 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 ). The first step is to consider 𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 )" }, { "formula_coordinates": [ 4, 58.18, 503.84, 236.34, 20.82 ], "formula_id": "formula_8", "formula_text": "𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 ) = ∫ ∞ -∞ 𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) • 𝑓 (𝜖 𝑖 )𝑑𝜖 𝑖 (5)" }, { "formula_coordinates": [ 4, 53.8, 545.85, 240.73, 28.46 ], "formula_id": "formula_9", "formula_text": "𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖 ) • 𝑃 (𝑠 𝑖 = 1|𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) (6) Because 𝑢 (𝑠 )" }, { "formula_coordinates": [ 4, 108.28, 585.07, 186.3, 26.92 ], "formula_id": "formula_10", "formula_text": "𝑃 (𝑠 𝑖 = 1|𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = Φ 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2 (7)" }, { "formula_coordinates": [ 4, 64.21, 661.5, 230.37, 44.45 ], "formula_id": "formula_11", "formula_text": "𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 1|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 ) = ∫ ∞ -∞ 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖 ) • Φ 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2 • 𝜙 (𝜖 𝑖 )𝑑𝜖 𝑖 (8)" }, { "formula_coordinates": [ 4, 464.66, 83.69, 29.7, 13.39 ], "formula_id": "formula_12", "formula_text": "(𝑝 ) 𝑖 , 𝒙 (𝑠 )" }, { "formula_coordinates": [ 4, 357.32, 112.21, 201.42, 13.39 ], "formula_id": "formula_13", "formula_text": "𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 0|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = 𝑃 (𝑠 𝑖 = 0|𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) (9)" }, { "formula_coordinates": [ 4, 369.59, 139.31, 189.15, 26.92 ], "formula_id": "formula_14", "formula_text": "𝑃 (𝑠 𝑖 = 0|𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = Φ - 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2(10)" }, { "formula_coordinates": [ 4, 333.07, 182.16, 225.61, 23.42 ], "formula_id": "formula_15", "formula_text": "𝑓 (𝑦 𝑖 , 𝑠 𝑖 = 0|𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 ) = ∫ ∞ -∞ Φ - 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2 • 𝜙 (𝜖 𝑖 )𝑑𝜖 𝑖(11)" }, { "formula_coordinates": [ 4, 328.37, 228.56, 205.99, 39.28 ], "formula_id": "formula_16", "formula_text": "𝑓 (𝑦 𝑖 , 𝑠 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝒙 (𝑠 ) 𝑖 ) = ∫ ∞ -∞ [(1 -𝑠 𝑖 ) + 𝑠 𝑖 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖 )] • 𝑃 (𝑠 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) • 𝜙 (𝜖 𝑖 )𝑑𝜖 𝑖" }, { "formula_coordinates": [ 4, 359.11, 280.04, 199.63, 26.92 ], "formula_id": "formula_17", "formula_text": "𝑃 (𝑠 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = Φ (2𝑠 𝑖 -1) 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2(13)" }, { "formula_coordinates": [ 4, 328.1, 339.78, 230.64, 46.28 ], "formula_id": "formula_18", "formula_text": "L = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log ∫ ∞ -∞ [(1 -𝑠 𝑖 ) + 𝑠 𝑖 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖 )] • 𝑃 (𝑠 𝑖 |𝒙 
(𝑠 ) 𝑖 , 𝜖 𝑖 )𝜙 (𝜖 𝑖 )𝑑𝜖 𝑖(14)" }, { "formula_coordinates": [ 4, 416.55, 439.35, 138.77, 24.75 ], "formula_id": "formula_19", "formula_text": "L = - 1 𝑛 𝑛 ∑︁ 𝑖=1 l𝑖 (15" }, { "formula_coordinates": [ 4, 555.32, 449.41, 3.42, 4.09 ], "formula_id": "formula_20", "formula_text": ")" }, { "formula_coordinates": [ 4, 330.29, 481.93, 228.39, 22.72 ], "formula_id": "formula_21", "formula_text": "l𝑖 = log 1 𝑅 𝑅 ∑︁ 𝑟 =1 [ (1 -𝑠 𝑖 ) + 𝑠 𝑖 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 ) ] • 𝑃 (𝑠 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖𝑟 )(16)" }, { "formula_coordinates": [ 4, 328.01, 629.78, 230.73, 53.01 ], "formula_id": "formula_22", "formula_text": "∇ 𝜷 l𝑖 = 1 l𝑖 1 𝑅 𝑅 ∑︁ 𝑟 =1 𝑠 𝑖 • 𝑃 (𝑠 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖𝑟 ) • 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 ) • 𝒙 (𝑝 ) 𝑖 • 𝜕𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 ) 𝜕𝜷 (17)" }, { "formula_coordinates": [ 5, 317.96, 624.16, 240.78, 46.76 ], "formula_id": "formula_23", "formula_text": "L′ = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log 1 𝑅 𝑅 ∑︁ 𝑟 =1 [(1-𝑠 ′ 𝑖 )+𝑠 ′ 𝑖 𝑓 (𝑦 ′ 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 )] •𝑃 (𝑠 ′ 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖𝑟 ) (18) over D ′ 𝑡𝑟 = D 𝑠 ∪ D ′ 𝑢 ," }, { "formula_coordinates": [ 5, 405.11, 687.56, 153.63, 20.91 ], "formula_id": "formula_24", "formula_text": "𝑠 ′ 𝑖 = 1 𝑡 𝑖 ∈ D 𝑠 s 𝑡 𝑖 ∈ D 𝑢(19)" }, { "formula_coordinates": [ 6, 138.72, 98.98, 155.87, 20.91 ], "formula_id": "formula_25", "formula_text": "𝑦 ′ 𝑖 = 𝑦 𝑖 𝑡 𝑖 ∈ D 𝑠 ỹ𝑖 𝑡 𝑖 ∈ D 𝑢(20)" }, { "formula_coordinates": [ 6, 53.66, 266.44, 46.32, 13.03 ], "formula_id": "formula_26", "formula_text": "𝑃 (𝑠 ′ 𝑖 = s |𝒙 (𝑠 )" }, { "formula_coordinates": [ 6, 94.93, 285.38, 199.65, 23.42 ], "formula_id": "formula_27", "formula_text": "𝑃 (𝑠 𝑖 = s |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) = Φ (2s -1) 𝜸𝒙 (𝑠 ) 𝑖 + 𝜌𝜖 𝑖 √︁ 1 -𝜌 2(21)" }, { "formula_coordinates": [ 6, 53.59, 316, 240.45, 28.12 ], "formula_id": "formula_28", "formula_text": "𝜸𝒙 (𝑠 ) 𝑖 +𝜌𝜖 𝑖𝑟 √ 1-𝜌 2 is multiplied by 2s - 1." }, { "formula_coordinates": [ 6, 53.47, 419.17, 240.04, 27.37 ], "formula_id": "formula_29", "formula_text": "[(1 -s) + s 𝑓 ( ỹ𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 )] which is multiplied by 𝑃 (𝑠 ′ 𝑖 = s |𝒙 (𝑠 )" }, { "formula_coordinates": [ 6, 325.32, 270.82, 233.42, 26.26 ], "formula_id": "formula_30", "formula_text": "L * = - 1 |D 𝑡𝑟 | ∑︁ 𝑖 ∈ D 𝑡𝑟 log 𝑃 (𝑦 𝑖 |𝒙 𝑖 ) = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) (22)" }, { "formula_coordinates": [ 6, 388.81, 333.58, 95.92, 11.74 ], "formula_id": "formula_31", "formula_text": "Bias(L) = L * -E D 𝑡𝑟 [L]" }, { "formula_coordinates": [ 6, 331.23, 520.01, 227.45, 26.02 ], "formula_id": "formula_32", "formula_text": "Bias( L ) = 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) p (𝑠 𝑖 ) + 𝑝 (𝑠 𝑖 ) p (𝑠 𝑖 ) ( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1) (23)" }, { "formula_coordinates": [ 6, 323.86, 600.1, 234.82, 32.62 ], "formula_id": "formula_33", "formula_text": "Bias( L′ ) = 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) p (𝑠 ′ 𝑖 ) + (𝑝 (𝑠 𝑖 ) + s𝜂 ) p (𝑠 ′ 𝑖 ) ( f (𝑦 ′ 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1)(24)" }, { "formula_coordinates": [ 6, 462.63, 654.86, 2.99, 7.7 ], "formula_id": "formula_34", "formula_text": ")" }, { "formula_coordinates": [ 7, 61.69, 345.37, 232.9, 64.88 ], "formula_id": "formula_35", "formula_text": "Bias( L) -Bias( L′ ) ≥ s𝜂 • 1 -s𝜂 - 1 𝑛 𝑛 ∑︁ 𝑖=1 2𝑝 (𝑠 𝑖 ) term 1 + 1 𝑛 𝑛 ∑︁ 𝑖=1 f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 )(2𝑝 (𝑠 𝑖 ) + s𝜂) term 2(25)" }, { "formula_coordinates": [ 7, 231.43, 89.58, 99.93, 619.36 ], "formula_id": "formula_36", "formula_text": "D 𝑎𝑢𝑔 = D 𝑠 ∪ D 𝑢 of 𝑛" }, { "formula_coordinates": [ 7, 371.32, 563.51, 187.42, 25.81 ], "formula_id": "formula_37", "formula_text": "𝑃 (𝑠 𝑖 = 1|𝑡) - 𝑎 𝑡 𝑏 𝑡 ≤ √︄ log 2𝑎 ′ + log 1 𝛿 𝑝 0 𝑛(26)" }, { "formula_coordinates": [ 11, 124.22, 654.64, 99.04, 11.74 ], "formula_id": "formula_38", "formula_text": "Bias( L) = |L * -E D 𝑡𝑟 [ L]|" }, 
{ "formula_coordinates": [ 11, 61.16, 684.21, 233.42, 26.26 ], "formula_id": "formula_39", "formula_text": "L * = - 1 |D 𝑡𝑟 | ∑︁ 𝑖 ∈ D 𝑡𝑟 log 𝑃 (𝑦 𝑖 |𝒙 𝑖 ) = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) (27)" }, { "formula_coordinates": [ 11, 327.18, 128.95, 221.29, 24.75 ], "formula_id": "formula_40", "formula_text": "Bias( L) = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -E D 𝑡𝑟 - 1 𝑛 𝑛 ∑︁ 𝑖=1 l𝑖(" }, { "formula_coordinates": [ 11, 351.88, 286.36, 168.17, 13.39 ], "formula_id": "formula_41", "formula_text": "l𝑖 = log (1 -𝑠 𝑖 ) • p (𝑠 𝑖 ) + 𝑠 𝑖 • f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) • p (𝑠 𝑖 )" }, { "formula_coordinates": [ 11, 326.46, 353.47, 232.28, 123.67 ], "formula_id": "formula_42", "formula_text": "E D 𝑡𝑟 - 1 𝑛 𝑛 ∑︁ 𝑖=1 l𝑖 = - 1 𝑛 𝑛 ∑︁ 𝑖=1 E D 𝑡𝑟 log (1 -𝑠 𝑖 ) • p (𝑠 𝑖 ) + 𝑠 𝑖 • f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) • p (𝑠 𝑖 ) = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log (1 -𝑝 (𝑠 𝑖 )) • p (𝑠 𝑖 ) + 𝑝 (𝑠 𝑖 ) • f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) • p (𝑠 𝑖 ) = - 1 𝑛 𝑛 ∑︁ 𝑖=1 log p (𝑠 𝑖 ) + 𝑝 (𝑠 𝑖 ) p (𝑠 𝑖 )( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1)(30)" }, { "formula_coordinates": [ 11, 325.27, 520.86, 233.47, 30.25 ], "formula_id": "formula_43", "formula_text": "Bias( L) = 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) p (𝑠 𝑖 ) + 𝑝 (𝑠 𝑖 ) p (𝑠 𝑖 )( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1) (31)" }, { "formula_coordinates": [ 11, 325.32, 637.63, 52.73, 8.44 ], "formula_id": "formula_44", "formula_text": "Bias( L′ ) = -" } ]
A Robust Classifier under Missing-Not-At-Random Sample Selection Bias
The shift between the training and testing distributions is commonly due to sample selection bias, a type of bias caused by nonrandom sampling of examples to be included in the training set. Although many approaches have been proposed to learn a classifier under sample selection bias, few address the case where a subset of labels in the training set is missing-not-at-random (MNAR) as a result of the selection process. In statistics, Greene's method formulates this type of sample selection with logistic regression as the prediction model. However, we find that simply integrating this method into a robust classification framework is not effective for this bias setting. In this paper, we propose BiasCorr, an algorithm that improves on Greene's method by modifying the original training set so that a classifier can learn under MNAR sample selection bias. We provide a theoretical guarantee for the improvement of BiasCorr over Greene's method by analyzing its bias. Experimental results on real-world datasets demonstrate that BiasCorr produces robust classifiers and can be extended to outperform state-of-the-art classifiers that have been proposed to train under sample selection bias.
Huy Mai; Wen Huang; Wei Du; Xintao Wu
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the effect of MNAR sample selection bias on a classifier's predictions. Solid (dashed) line represents the decision boundary of the classifier trained on the biased (unbiased and fully observed) set.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "1 -1are assumed to be bivariate normal, i.e. 𝑢 (𝑠 ) 𝑖 = 𝜌𝜖 𝑖 + √︁ 𝜌 2 𝑣 𝑖 , where 𝜌 is the correlation coefficient between 𝑢", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Process of producing D ′ 𝑡𝑟 using BiasCorr. The boxes outlined in red indicate the parts of D 𝑡𝑟 used to train 𝑔 𝑠 and 𝑔 𝑦 .", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "11 :𝑢 15 :1115gives an illustration of the process to obtain D ′ 𝑡𝑟 . Using the same assumptions as Greene's method on the training set D 𝑡𝑟 , BiasCorr assigns both an estimated soft selection value s and a pseudolabel ỹ to each sample in D 𝑢 , resulting in ℎ training Algorithm 1 BiasCorr(𝑔 𝑠 , 𝑔 𝑦 )Input: Original training set D 𝑡𝑟 = {(𝒙 𝑖 , 𝑦 𝑖 , 𝑠 𝑖 = 1)} 𝑚 𝑖=1 ∪ {(𝒙 𝑖 , 𝑠 𝑖 = 0)} 𝑛 𝑖=𝑚+1 , 𝑔 𝑠 model, 𝑔 𝑦 model Output: 𝜷 * 1: D 𝑠 ← {(𝒙 𝑖 , 𝑦 𝑖 , 𝑠 𝑖 = 1)} 𝑚 𝑖=1 2: D 𝑢 ← {(𝒙 𝑖 , 𝑠 𝑖 = 0)} 𝑛𝑖=𝑚+1 3: D ′ 𝑢 ← ∅ 4: Train classifier 𝑔 𝑠 (𝒙 (𝑠 ) 𝑖 ; 𝜻 ) on D 𝑡𝑟 to predict 𝑠 𝑖 5: Train classifier 𝑔 𝑦 (𝒙 (𝑝 ) 𝑖 ; 𝜽 ) on D 𝑠 to predict 𝑦 𝑖 6: for 𝑡 𝑖 in D 𝑢 do ← 1[𝑔 𝑦 (𝒙 (𝑝 ) 𝑖 ; 𝜽 ) > 0.5] 9: end for 10: s ← 1 for 𝑖 ∈ {𝑚 + 1, . . . , 𝑛} do 12: D ′ 𝑢 ← D ′ 𝑢 ∪ {(𝒙 𝑖 , 𝑦 ′ 𝑖 = ỹ𝑖 , 𝑠 ′ 𝑖 = s)} 13: end for 14: D ′ 𝑡𝑟 ← D 𝑠 ∪ D ′ Train ℎ(𝒙 (𝑝 ) 𝑖 ; 𝜷) to minimize L′ using D ′ 𝑡𝑟 and obtain 𝜷 *", "figure_data": "", "figure_id": "fig_3", "figure_label": "1115", "figure_type": "figure" }, { "figure_caption": "𝑖of predicting 𝑠 𝑖 = 1 for all samples in D 𝑢 . This is based on our observation that the value of 𝑃 (𝑠 𝑖 = 1|𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖 ) is not always equal to 0 for some tuple in D 𝑢 , where the ground truth selection value is 𝑠 𝑖 = 0. In our framework, we train a separate binary classifier 𝑔 𝑠 (𝒙 (𝑠 ) 𝑖 ; 𝜻 ) on D 𝑡𝑟 to predict 𝑠 𝑖 and obtain 𝑝 (𝑠 ) 𝑖 based on predictions using D 𝑢 . We then get a fixed soft selection value by taking the average value of 𝑝 (𝑠 )", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "𝑖 ; 𝜽 ) with parameters 𝜽 on D 𝑠 to predict the ground-truth label 𝑦 𝑖 . To add samples to D ′ 𝑢 , in line 7, we evaluate 𝑔 𝑠 and obtain the probability 𝑝 (𝑠 ) 𝑖 for each sample in D 𝑢 . In line 8, we use the prediction from the evaluation of 𝑔 𝑦 on each sample in D 𝑢 to obtain a pseudolabel ỹ𝑖 . In line 10, we compute the average s of 𝑝 (𝑠 ) 𝑖 of each sample in D 𝑢 . In line 12, we add each tuple (𝒙 𝑖 , ỹ𝑖 , s) to D ′ 𝑢 , where each 𝒙 𝑖 is taken from D 𝑢 . In line 15, using D ′", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Followingprevious definitions we have |D 𝑡𝑟 | = 𝑛, |D 𝑠 | = 𝑚, we further define the missing ratio of the unlabeled training samples |D 𝑢 |/|D 𝑡𝑟 | as 𝜂 = 1 -𝑚 𝑛 . Denoting 𝑝 (𝑠 𝑖 ) as the ground truth selection probability for each tuple 𝑖 based on its selection features 𝒙 (𝑠 ) 𝑖 , and the expectation of the estimated selection model 𝑃 (𝑠 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖𝑟 ) and prediction model 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 ) over 𝑅 random draws on the error terms as p (𝑠 𝑖 ) and f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) respectively. 
We next formally derive the bias of the loss function estimators from Greene's method and BiasCorr algorithm in the following two lemmas: Lemma 1 (Bias of Greene's method estimator). Given the estimated selection model p (𝑠 𝑖 ) and outcome model f (𝑦 𝑖 |𝒙 (𝑝 )", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Theorem 4 . 1 .41Given a training dataset with labeled and unlabeled tuples D 𝑡𝑟 = D 𝑠 ∪ D 𝑢 , suppose f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) takes the form of logistic regression and there is no bias caused by the estimated selection model for both Greene's method and BiasCorr. If the ratio of the unlabeled training data 𝜂 is larger than 1/(2s), we have Bias( L′ ) < Bias( L)", "figure_data": "", "figure_id": "fig_7", "figure_label": "41", "figure_type": "figure" }, { "figure_caption": "[( 1 -log ( 1 -11𝑠 ′ 𝑖 ) + 𝑠 ′ 𝑖 𝑓 (𝑦 ′ 𝑖 |𝒙 (𝑝 ) 𝑖 , 𝜖 𝑖𝑟 )] • 𝑃 (𝑠 ′ 𝑖 |𝒙 (𝑠 ) 𝑖 , 𝜖 𝑖𝑟 ) (33)Using the similar derivation procedure in Section C.1, the latter term in Eq. (32) could be further calculated as :E D 𝑡𝑟 -𝑝 (𝑠 𝑖 ) -s𝜂) • p (𝑠 ′ 𝑖 )+ (𝑝 (𝑠 𝑖 ) + s𝜂) • f (𝑦 ′ 𝑖 |𝒙 𝑠 ′ 𝑖 ) + (𝑝 (𝑠 𝑖 ) + s𝜂) p (𝑠 ′ 𝑖 )( f (𝑦 ′𝑖 |𝒙 bias regarding to the loss function of BiasCorr algorithm is listed as follows:Bias( L′ ) = 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) p (𝑠 ′ 𝑖 ) + (𝑝 (𝑠 𝑖 ) + s𝜂) p (𝑠 ′ 𝑖 )( f (𝑦 ′ 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1)(35)From the result above we can see even the estimated selection variable model and outcome model are accurate, that is, p (𝑠 𝑖 ) = 𝑝 (𝑠 𝑖 ) and f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) = 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ), the biases of both loss function estimators L and L′ are still non-zero.", "figure_data": "", "figure_id": "fig_8", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "loglog 1 +𝑖=1 2 f12i.e. the outcome models for Greene's method and BiasCorr are exactly the same. The two biases thus reduce to the following form:Bias( L) = 1 𝑛 𝑛 ∑︁ 𝑖=1 log 𝑓 (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) 𝑝 (𝑠 𝑖 ) + 𝑝 (𝑠 𝑖 ) 2 ( f (𝑦 𝑖 |𝒙 𝑖 ) + s𝜂 + (𝑝 (𝑠 𝑖 ) + s𝜂) 2 ( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1)(37) We first define the difference between the denominators inside the log-likelihood function for each tuple 𝑖 as follows:DIFF(i) = 𝑝 (𝑠 𝑖 ) + s𝜂 + (𝑝 (𝑠 𝑖 ) + s𝜂) 2 ( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1) -𝑝 (𝑠 𝑖 ) + 𝑝 (𝑠 𝑖 ) 2 ( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1) =s𝜂 + ( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1) [(𝑝 (𝑠 𝑖 ) + s𝜂) 2 -𝑝 (𝑠 𝑖 ) 2 ] =s𝜂 + ( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1)(2𝑝 (𝑠 𝑖 ) + s𝜂) • s𝜂 =s𝜂 • [1 + ( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1)(2𝑝 (𝑠 𝑖 ) + s𝜂)] (38)Since L * denotes the minimized loss function with the maximized likelihood function, we have:Bias( L) -Bias( L′ ) = E D 𝑡𝑟 [ L] -L * -E D 𝑡𝑟 [ L′ ] -L * = E D 𝑡𝑟 -𝑝 (𝑠 𝑖 ) + s𝜂 + (𝑝 (𝑠 𝑖 ) + s𝜂) 2 ( f (𝑦 𝑖 |𝒙 𝑝 (𝑠 𝑖 ) + 𝑝 (𝑠 𝑖 ) 2 ( f (𝑦 𝑖 |𝒙 ( f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) -1)(2𝑝 (𝑠 𝑖 ) + s𝜂) 1)(2𝑝 (𝑠 𝑖 ) + s𝜂) =s𝜂 • 1 -s𝜂 + 1 𝑛 𝑛 ∑︁ (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 )𝑝 (𝑠 𝑖 ) -2𝑝 (𝑠 𝑖 ) + f (𝑦 𝑖 |𝒙 (𝑝 ) 𝑖 ) • s𝜂 =s𝜂 • 1 -s𝜂both f (𝑦 𝑖 |𝒙 (𝑝 )𝑖 ) and 𝑝 (𝑠 𝑖 ) lie in (0, 1) for each tuple 𝑖, term 2 is still a positive value after summation and averaging over all the 𝑛 training tuples. For term 1, notice that 1 𝑛 𝑛 𝑖=1 𝑝 (𝑠 𝑖 ) = 𝑚 𝑛 since 𝑝 (•) represents the ground truth selection model. 
In order to ensure term 1 is larger than 0 we need to satisfy:1 -s𝜂 -2(1 -𝜂) > 0(41) Solving the Equation above leads to: the ratio 𝜂 = |D 𝑢 |/|D 𝑡𝑟 | = 1 -𝑚/𝑛 is larger than 1/(2s), we obtain the result Bias( L) -Bias( L′ ) > 0, and our BiasCorr method achieves lower bias compared to Greene's method.", "figure_data": "", "figure_id": "fig_9", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "𝑠 𝑖 indicates whether or not 𝑦 𝑖 is observed for a training sample. Let D 𝑠 denote the set containing the first 𝑚 training samples where each sample is fully observed and D 𝑢 be the set that contains the remaining 𝑛 -𝑚 training samples with unobserved labels. We consider the following definitions to formally describe the MNAR sample selection bias scenario on D 𝑡𝑟 . We start with the MAR assumption: Definition 1.1. MAR Sample Selection: Missing-at-random occurs for a sample 𝑡 𝑖 if 𝑠 𝑖 depends on 𝒙 𝑖 but is independent of 𝑦 𝑖 given 𝑥 𝑖 , i.e. 𝑃 (𝑠 𝑖 |𝒙 𝑖 , 𝑦 𝑖 ) = 𝑃 (𝑠 𝑖 |𝒙 𝑖 ) .", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Symbols. Definition 1.2. MNAR Sample Selection: Missing-not-at-random occurs for a sample 𝑡 𝑖 if 𝑠 𝑖 is not independent of 𝑦 𝑖 given 𝑥 𝑖 , i.e. 𝑃 (𝑠 𝑖 |𝒙 𝑖 , 𝑦 𝑖 ) ≠ 𝑃 (𝑠 𝑖 |𝒙 𝑖 ) . This means that 𝑠 𝑖 may depend on 𝒙 𝑖 and 𝑦 𝑖 . For Greene's method, the selection mechanism is expressed in terms of a set of selection features to model the missingness of 𝑦 𝑖 for a training sample. These selection features are observed for all training samples. Thus the following assumptions are additionally made in this work: (i) Given a set of selection features 𝒙", "figure_data": "NotationDescriptionℎrobust classifierD 𝑠 (D 𝑢 )labeled (unlabeled) samples in training set D 𝑡𝑟𝑚 (𝑛)size of D 𝑠 ( D 𝑡𝑟 )𝜂missingness ratio in D 𝑡𝑟𝒙 𝑖set of all features for 𝑖th training sample𝑦 𝑖label of 𝑖th training sample𝑠 𝑖selection value of 𝑖th training sample𝒙(𝑠 ) 𝑖(𝒙(𝑝 ) 𝑖 )selection (prediction) features𝜷set of prediction coefficientsLsimulated negative log likelihood (Greene's)L′simulated negative log likelihood (BiasCorr)𝑅number of random drawssestimated soft selection value𝑔 𝑠 (𝑔 𝑦 )predictor for selection (pseudolabel)(𝑠 )𝑖", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "𝒙 𝑖 , 𝑃 (𝑠 𝑖 |𝒙 𝑖 , 𝑦 𝑖 ) is approximated by computing 𝑃 (𝑠 𝑖 |𝒙", "figure_data": "(𝑠 ) 𝑖 ).(ii) The set of selection features includes every prediction fea-ture, i.e. 𝒙(𝑠 ) 𝑖⊃ 𝒙(𝑝 ) 𝑖 .Problem Statement. Given a set of prediction features 𝒙(𝑝 ) 𝑖⊆ 𝒙 𝑖 ,we seek to train a binary classifier ℎ(𝒙(𝑝 )", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "are non-zero even if the estimated selection variable model and outcome model are accurate, that is, p (𝑠 𝑖 ) = 𝑝 (𝑠 𝑖 ) and f (𝑦 𝑖 |𝒙 𝑖 ) • p (𝑠 𝑖 ) to estimate the likelihood function 𝑓 (𝑦 𝑖 , 𝑠 𝑖 |𝒙 𝑖 ) for the samples in D 𝑠 and p (𝑠 𝑖 ) to estimate 𝑓 (𝑦 𝑖 , 𝑠 𝑖 |𝒙", "figure_data": "(𝑝 ) 𝑖 , 𝒙(𝑠 )(𝑝 ) 𝑖 , 𝒙(𝑠 )(𝑝 ) 𝑖 ) = 𝑓 (𝑦 𝑖 |𝒙(𝑝 ) 𝑖 ). Accord-ing to the design of the log-likelihood loss function in Eq. (14),Greene's method uses f (𝑦 𝑖 |𝒙(𝑝 )", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "samples, where D 𝑢 contains samples that are uniformly drawn from D 𝑁 and 𝑛 > 𝑚. 
Algorithm 2 BiasCorr * (𝑔 𝑠 , 𝑔 𝑦 ) Input: Labeled training set D 𝑠 = {(𝒙 𝑖 , 𝑦 𝑖 )} 𝑚 𝑖=1 , unlabeled testing set D 𝑁 = {(𝒙 𝑖 )} 𝑁 𝑖=1 , 𝑔 𝑠 model, 𝑔 𝑦 model Output: 𝜷 * 1: D 𝑛 ← 𝑛 randomly drawn samples from D 𝑁 2: D 𝑢 ← ∅ 3: for each distinct sample 𝑡 in D 𝑛 do Draw 𝑏 𝑡 -𝑎 𝑡 samples from D 𝑡 𝑛 and add samples to D 𝑢 Assign 𝑠 𝑖 = 1 (0) to all 𝑡 𝑖 ∈ D 𝑠 (D 𝑢 ) 14: D 𝑎𝑢𝑔 ← D 𝑠 ∪ D 𝑢 15: 𝜷 * ← BiasCorr(D 𝑎𝑢𝑔 , 𝑔 𝑠 , 𝑔 𝑦 ) Algorithm 2 gives the pseudocode of BiasCorr * . To obtain D 𝑎𝑢𝑔 , we first randomly draw 𝑛 samples uniformly from D 𝑁 in line 1, where 𝑛 > 𝑚. Let D 𝑛 denote this set of 𝑛 samples 1 . To construct D 𝑢 , we compare the empirical frequencies of D 𝑠 and D 𝑛 in lines 4-8, which follows a similar procedure as [3]. For a distinct sample 𝑡, let D 𝑡 𝑠 be a subset of D 𝑠 that contains all instances of 𝑡 and 𝑎 𝑡 = |D 𝑡 𝑠 |. We similarly define D 𝑡 𝑛 and 𝑏 𝑡 for D 𝑛 . Until D 𝑢 contains 𝑛 -𝑚 samples, we add 𝑏 𝑡 -𝑎 𝑡 random samples from D 𝑡 𝑛 to D 𝑢 for each 𝑡 such that 𝑏 𝑡 > 𝑎 𝑡 .", "figure_data": "4:𝑎 𝑡 ← |D 𝑡 𝑠 |5:𝑏 𝑡 ← |D 𝑡 𝑛 |6:if 𝑏 𝑡 > 𝑎 𝑡 then7:8:end if9:if |D 𝑢 | = 𝑛 -𝑚 then10:break11:end if12: end for13:", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Performance of baselines compared to BiasCorr. Highest test accuracies among SSBias, Greene's method, and the four BiasCorr settings are in bold.", "figure_data": "MethodsAdult Train Acc. (%) Test Acc. (%) Train Acc. (%) Test Acc. (%) Train Acc. (%) Test Acc. (%) German DrugNoBias86.57 ± 0.0086.57 ± 0.0073.29 ± 0.0072.67 ± 0.0069.83 ± 0.0069.08 ± 0.00SSBias77.56 ± 0.0062.44 ± 0.0075.28 ± 0.0069.33 ± 0.0077.78 ± 0.0066.78 ± 0.00Greene's method62.94 ± 0.0762.89 ± 0.0972.77 ± 0.4769.67 ± 0.3068.89 ± 0.2766.71 ± 0.33BiasCorr (probit, LR)86.84 ± 0.0270.05 ± 0.0479.97 ± 0.1471.60 ± 0.1387.93 ± 0.0769.22 ± 0.02BiasCorr (LR, LR)87.36 ± 0.0469.84 ± 0.0480.11 ± 0.2571.07 ± 0.1388.89 ± 0.0967.81 ± 0.17BiasCorr (probit, MLP)94.08 ± 0.0285.68 ± 0.0179.69 ± 0.4071.27 ± 0.2586.19 ± 0.0667.39 ± 0.09BiasCorr (LR, MLP)93.45 ± 0.0185.79 ± 0.0279.69 ± 0.5071.00 ± 0.2185.97 ± 0.1367.77 ± 0.14", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Empirical missingness ratio comparison.", "figure_data": "Dataset𝜂1/(2 -s ) (probit for 𝑔 𝑠 ) 1/(2 -s ) (logit for 𝑔 𝑠 )Adult0.74760.58680.5738German 0.23140.63450.6233Drug0.65200.71590.5976", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Execution times (in seconds).", "figure_data": "MethodAdult German DrugGreene's method93.532.063.14BiasCorr (probit, LR)94.591.842.85BiasCorr (LR, LR)99.621.872.28BiasCorr (probit, MLP) 112.171.862.69BiasCorr (LR, MLP)112.901.872.26", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance of baselines compared to BiasCorr * . As shown in Table6, BiasCorr * , under all combinations of settings for 𝑔 𝑠 and 𝑔 𝑦 , outperforms the baselines when trained on the Adult, German, and Drug datasets. For instance, BiasCorr * (probit, MLP) outperforms RBA by 16.09% for the Adult dataset in terms of testing accuracy. For the German dataset, BiasCorr * (probit, MLP) has a testing accuracy that is 2.74% higher than RBA. Moreover, the testing accuracy for BiasCorr * (probit, LR) is 3.14% higher than RBA for the German dataset. 
These results suggest that BiasCorr * can outperform other classifiers trained under sample selection bias regardless of the type of model chosen for 𝑔 𝑠 and 𝑔 𝑦 or the proportion of unbiased, unlabeled samples in D 𝑎𝑢𝑔 .", "figure_data": "MethodsAdult Train Acc. (%) Test Acc. (%) Train Acc. (%) Test Acc. (%) Train Acc. (%) Test Acc. (%) German DrugRFLearn 178.04 ± 0.0069.68 ± 0.0076.02 ± 0.0069.67 ± 0.0075.82 ± 0.0065.02 ± 0.00RBA77.69 ± 0.0069.59 ± 0.0075.84 ± 0.0067.33 ± 0.0075.82 ± 0.0065.55 ± 0.00BiasCorr * (probit, LR)87.10 ± 0.0269.84 ± 0.0780.57 ± 0.0970.47 ± 0.3487.70 ± 0.1368.52 ± 0.14BiasCorr * (LR, LR)87.37 ± 0.0369.75 ± 0.0280.57 ± 0.1670.67 ± 0.3087.98 ± 0.1168.34 ± 0.21BiasCorr * (probit, MLP)94.00 ± 0.3585.75 ± 0.0179.66 ± 0.2170.07 ± 0.1387.20 ± 0.1068.23 ± 0.13BiasCorr * (LR, MLP)93.78 ± 0.3685.62 ± 0.0279.40 ± 0.1769.87 ± 0.1687.40 ± 0.1567.99 ± 0.26Robust Bias Aware (RBA) classifier [15], which uses minimax esti-mation to learn against a worst-case conditional label distribution.Results.", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Features used for selection/prediction. Prediction features are in italic font while selection features are in either italic or regular font.", "figure_data": "Dataset FeaturesAdultAge, Target, Education-Num,Capital Gain, Capital Loss, Hours per week,Country_Canada, Occupation_Adm-clerical,Occupation_Armed-Forces, Occupation_Sales,Occupation_Craft-repair, Occupation_Other-service,Occupation_Prof-specialty, Occupation_Tech-support,Occupation_Exec-managerial,Occupation_Farming-fishing,Occupation_Protective-serv,Occupation_Machine-op-inspct,Occupation_Priv-house-serv,Occupation_Handlers-cleaners,Occupation_Transport-moving, Marital Status,Relationship_Other-relative, Relationship_Husband,Relationship_Wife, Relationship_Not-in-family,Relationship_Own-child, Relationship_UnmarriedGerman status checking, duration, credit history,credit amount, savings account, telephone,last employment, age, personal status and sex,last residence, property, existing credits,other plans, liable, foreign worker", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "𝑠 𝑖 ) + 𝑠 𝑖 𝑓 (𝑦 𝑖 |𝒙 𝑖 , 𝜖 𝑖𝑟 ) over 𝑅 random draws on the error terms as p (𝑠 𝑖 ), f (𝑦 𝑖 |𝒙", "figure_data": "28)where𝑅l𝑖 = log1 𝑅𝑟 =1 ∑︁[ (1 -(𝑝 ) 𝑖 , 𝜖 𝑖𝑟 ) ] • 𝑃 (𝑠 𝑖 |𝒙(𝑠 ) 𝑖 , 𝜖 𝑖𝑟 )(29)Denoting the expectation of selection model 𝑃 (𝑠 𝑖 |𝒙(𝑠 ) 𝑖 , 𝜖 𝑖𝑟 ) andprediction model 𝑓 (𝑦 𝑖 |𝒙(𝑝 )(𝑝 )", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" } ]
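Algorithm 1 (BiasCorr), reproduced in the figure data above, amounts to a small amount of glue code around two auxiliary classifiers: g_s is trained on the full training set to predict selection, g_y is trained on the labeled subset to predict labels, every unlabeled sample then receives a pseudolabel, and all unlabeled samples share one soft selection value (the mean predicted selection probability over D_u). Below is a minimal scikit-learn sketch of that training-set modification, assuming logistic regression for both g_s and g_y (one of the settings reported in the tables); names and defaults are illustrative rather than the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def biascorr_augment(Xs, Xp, y, s):
    """Sketch of Algorithm 1 (BiasCorr): build the modified training set D'_tr.

    Xs / Xp: selection / prediction features for all n training samples,
    y: labels (only rows with s == 1 are observed; others may hold placeholders),
    s: binary selection indicators.
    """
    s = np.asarray(s)
    y = np.asarray(y).copy()
    labeled = s == 1

    # g_s: predict selection from the selection features (Algorithm 1, line 4)
    g_s = LogisticRegression(max_iter=1000).fit(Xs, s)
    # g_y: predict the label from prediction features on the labeled subset (line 5)
    g_y = LogisticRegression(max_iter=1000).fit(Xp[labeled], y[labeled])

    # Pseudolabels for the unlabeled rows (lines 6-9)
    y_prime = y
    y_prime[~labeled] = (g_y.predict_proba(Xp[~labeled])[:, 1] > 0.5).astype(int)

    # Shared soft selection value: mean predicted selection probability over D_u (line 10)
    s_bar = g_s.predict_proba(Xs[~labeled])[:, 1].mean()
    s_prime = np.where(labeled, 1.0, s_bar)

    return y_prime, s_prime, s_bar
```

The returned pseudolabels and soft selection values define D'_tr; the robust classifier h is then trained by minimizing the simulated likelihood L' of Eq. (18) over D'_tr, with the soft value entering the selection probability as in Eq. (21).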
[{"Category": "Supporting Evidence", "Citation": "[17]", "Explanation": "The cited work by [17] provides a description of the phenomenon of dataset shift, which the citing paper uses to explain the real-world scenario of training and testing sets coming from different distributions."}, {"Category": "Data Source", "Citation": "[3], [12], [15], [25]", "Explanation": "The cited works by [3], [12], [15], and [25] are acknowledged for their contributions to the field of sample selection bias in training sets. The citing paper uses the findings from these works to address the issue of missing data in training sets and improve the performance of classifiers in the real world."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work by Heckman provides a two-step method to account for sample selection bias in the training set, which the citing paper adopts to address the issue of missing not at random in the training set."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work provides insights on the limitations of using Heckman's method in the classification context, which the citing paper uses to guide the development of a new method for solving the same problem."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work provides the basis for the assumption of a bivariate normal distribution in the design of loss functions to model under MNAR bias in the field of statistics."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The citing paper extends the work of Heckman by applying the two-step method to address sample selection bias in linear regression models with MNAR values in the dependent variable."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work by [19] provides a method for deriving a term for the nonzero conditional noise expectation in Poisson regression, which the citing paper adopts in its own research to account for sample selection bias in the context of non-linear models."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work by [18] highlights the issue of collinearity between the coefficients of the selection and prediction equations, which the citing paper acknowledges as a limitation in the specification of the IMR."}, {"Category": "Supporting Evidence", "Citation": "[5]", "Explanation": "The cited work by [5] formulates a fair regression model under the assumption of MNAR training outcomes, which the citing paper uses to support its approach of handling sample selection bias in the context of fair machine learning with a categorical dependent variable."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work provides the foundational idea of importance weighting techniques for addressing MAR sample selection bias, which the citing paper adopts in their study of MNAR bias."}, {"Category": "Extension or Continuation", "Citation": "[24]", "Explanation": "The cited work in recommender learning extends the research on rating prediction by proposing a method for joint learning of an imputation model and a prediction model to address MNAR bias in ratings."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work introduces two propensity scoring methods that the citing paper adopts in its loss function to address the bias of MNAR implicit feedback in a tabular data setting."}, {"Category": "Extension or 
Continuation", "Citation": "[23]", "Explanation": "The problem defined in the citing paper is related to semi-supervised learning, which the cited work has explored in the context of training samples with and without labels."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work introduces the technique of pseudolabel generation, which the citing paper utilizes to address the issue of missing data mechanism in a semi-supervised learning setting."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work introduces a class-aware propensity score and imputation strategies for semi-supervised learning, which the citing paper adopts in their research to develop a doubly robust model against MNAR data."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "Greene's method is cited as a methodological basis for the FIML estimator in the citing paper, as it provides a way to account for non-random sample selection bias in the label by considering the relationship between the noise terms in the prediction and selection equations."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work provides a simulation approach for minimizing an approximate form of L, which the citing paper adopts in their research to address the challenge of intractability in the optimization problem."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work provides a lemma that serves as a methodological basis for estimating the selection probability in the citing paper."}, {"Category": "Data Source", "Citation": "[6]", "Explanation": "The cited work provides the Adult, German, and Drug datasets used in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work RFLearn 1 is used as a robust non-fair version in the baselines for learning classification under MAR sample selection bias, which the citing paper adopts to estimate the true probability of selection in the training and testing sets."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b44", "b3", "b43", "b9", "b0", "b21", "b20", "b18", "b47", "b14", "b28", "b39", "b40", "b41", "b27", "b32", "b29", "b22" ], "table_ref": [], "text": "In the latest National Standards of the People's Republic of China about Chinese coded character set (GB18030-2022), 87,887 Chinese character categories are included. To create a high-performance handwritten Chinese character recognition (HCCR) system that supports the full character set using traditional approaches, a large number of training samples with various writing styles would be collected for each character category. However, only about 4,000 categories are commonly used in daily life. It is therefore both time-consuming and expensive to collect representative handwritten samples for the remaining 95% rarely-used ones. These categories are often of complicated structures, existing in personal names, addresses, ancient books, historic documents and scientific publications. An HCCR system supporting the full-set of these categories with high accuracy will be beneficial to improve user experience, protect cultural heritages and promote academic exchanges.\nLots of research efforts have been made to build an HCCR system with only real training samples from commonly used characters. A Chinese character consists of radicals/strokes with specific spatial relationships, which are shared across all characters. Rather than encoding each character category as a single one-hot vector, [45,4,44,10] encode it as a sequence of radicals/strokes and spatial relationships to achieve zero-shot recognition goal. In [1,22,21,19], font-rendered glyph images are leveraged to provide reference representations for unseen character categories. There are also some efforts to synthesize handwritten samples for unseen categories. For example, [48] synthesizes unseen character samples with a radical composition network and combines them with real samples to train an HCCR system. However, its recognition accuracy is relatively poor.\nWe propose to solve this problem by synthesizing diverse and high-quality training samples for unseen character categories with denoising diffusion probabilistic models (DDPMs) [38,15]. Diffusion models have been shown to outperform other generation techniques in terms of diversity and quality [29,9,40,41,42], due to their powerful modeling capacity of high-dimensional distributions. This also offers a zero-shot generation capability. For example, in diffusion-based text-to-image generation [28,33,36], with all object types and spatial relationships existed in training samples, diffusion models are capable of generating photo-realistic images of in-existence object combinations and layouts. As mentioned above, Chinese characters can be treated as combinations of different radicals/strokes with specific layouts. We can leverage DDPM to achieve the goal of zero-shot handwritten Chinese character image generation.\nIn this paper, we design a glyph conditional DDPM (GC-DDPM), which concatenates a font-rendered character glyph image with the original input of U-Net used in [9], to guide the model in constructing mappings between fontrendered and handwritten strokes/radicals. To the best of our knowledge, we are the first to apply DDPMs to zero-shot handwritten Chinese character generation. 
Unlike other image-to-image diffusion model frameworks (e.g., [35,43,30]), which aim at synthesizing images in the target domain while faithfully preserving the content representations, our goal is to learn mappings from rendered printed radicals/strokes to the handwritten ones.\nExperimental results on CASIA-HWDB [23] dataset with 3,755 character categories show that the HCCR systems trained with DDPM-synthesized samples outperform other synthetic data based solutions and perform similarly with the one trained with real samples in terms of recognition accuracy. We also visualize the generation effect of both in and out of 3,755 character categories, which indicates that our method has the potential to be extended to a larger vocabulary.\nThe remainder of the paper is organized as follows. In Sec. 2, we briefly review related works. In Sec. 3, we describe our GC-DDPM design along with sampling methods. Our approach is evaluated and compared with prior arts in Sec. 4. We discuss limitations of our approach and future work in Sec. 5, and conclude the paper in Sec. 6." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b49", "b52", "b51", "b19", "b44", "b3", "b43", "b9", "b4", "b0", "b21", "b20", "b16", "b18", "b47", "b50", "b10", "b53", "b46", "b24", "b17", "b30", "b23", "b14", "b14", "b41", "b32", "b33", "b1", "b26" ], "table_ref": [], "text": "Zero-shot HCCR Conventional HCCR systems [7, 50,53,6,52,20], although achieving superior recognition accuracy, can only recognize character categories that are observed in the training set. Zero-shot HCCR aims to recognize handwritten characters that are never observed. Most of the previous zero-shot HCCR systems can be divided into two categories: structure-based and structure-free methods. In structure-based methods, a Chinese character is represented as a sequence of composing radicals [45,4,44,10] or strokes [5]. Although the character is never observed, the composing radicals, strokes and their spatial relationships have been observed in the training set. Therefore, structure-based methods are able to predict the radical or stroke sequences of unseen Chinese characters and achieve zero-shot recognition. However, in these methods, the radical or stroke sequence representations of Chinese characters require lots of language-specific domain knowledge. In structure-free method, [1,22,21,17] leverage information from the corresponding Chinese character glyph images. Zero-shot HCCR is achieved by choosing the Chinese character whose glyph features are closest to that of the handwritten ones in terms of visual representations. In [19], the radical information is also used to extract the visual representations of glyph images.\nZero-shot Data Synthesis for HCCR Besides designing zero-shot recognition systems, there are some studies to directly synthesize handwritten training samples for unseen categories. [48] investigates a radical composition network to generate unseen Chinese characters by integrating radicals and their spatial relationships. Although the generated handwritten Chinese characters can increase the recognition rate of unseen handwritten characters, the overall recognition performance is relatively poor. 
In this work, we propose to use a more powerful diffusion model to generate unseen handwritten Chinese characters given corresponding glyph images.\nZero-shot Chinese Font Generation Zero-shot Chinese font generation aims to generate font glyph for unseen Chinese characters based on some seen character / font glyph pairs. In [51,11,54,47,25], the image-to-image translation framework is used to achieve this goal. Works in [18,31,24] also leverage the information of composing components, radicals, strokes for better generalization. In this paper, we focus on zero-shot handwritten Chinese character generation with DDPM and we can easily adapt this method to zero-shot Chinese font generation task. Diffusion Model DDPM [38,15] has become extremely popular in computer vision and achieves superior performance in image generation tasks. DDPM uses two parameterized Markov chains and variational inference method to reconstruct the data distribution. DDPMs have demonstrated their powerful capabilities to generate high-quality and high-diversity images [15,9,42]. It is shown in [33] that DDPM can perform a great effect on combination of concepts, which can integrate multiple elements. Diffusion models are also applied to other tasks [49,8], including high-resolution generation [34], image inpainting [43], natural language processing [2] and so on. Besides, [27] introduces DDPM to solve the problem of online English handwriting generation. In this work, we propose to leverage DDPM for zero-shot handwritten Chinese character generation and to synthesize training data for unseen Chinese characters to build HCCR systems.\n3 Our Approach" }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [ "b14", "b45", "b28", "b28" ], "table_ref": [], "text": "Fig. 2: The Markov chain of forward (reverse) diffusion process of generating a handwritten Chinese character sample by slowly adding (removing) noise. Adapted from [15]. Diffusion model is a new paradigm of data generation. It defines a Markov chain of diffusion steps to slowly add random noise to data and then learn to reverse the diffusion process to construct desired data samples from the noise [46].\nAs shown in Fig. 2, in our handwritten Chinese character generation scenario, we first sample a character image from the real distribution x 0 ∼ q(x). Then, in forward diffusion process, small amounts of Gaussian noise are added to the sample in steps according to Eq. (1),\nq(x t |x t-1 ) = N (x t ; 1 -β t x t-1 , β t I)(1)\nx t = √ α t x t-1 + √ 1 -α t ϵ t\nwhere α t = 1 -β t and ϵ t ∼ N (0, I), producing a sequence of noisy samples. The step sizes are controlled by a variance schedule {β t ∈ (0, 1)} T t=1 . As t becomes larger, the image gradually loses its distinguishable features. When t → ∞ , x t becomes a sample of an isotropic Gaussian distribution.\nIf we can reverse the above process and sample from q(x t-1 |x t ), we will be able to recreate the true sample from a Gaussian noise x T ∼ N (0, I). If β t is small enough, q(x t-1 |x t ) will also be a Gaussian. So we can approximate it with a parameterized model, as shown in Eq. ( 2)\np θ (x t-1 |x t ) = N (x t-1 ; µ θ (x t , t), Σ θ (x t , t)) .\n(\n) Since q(x t-1 |x t , x 0 ) is tractable, q(x t-1 |x t , x 0 ) = N (x t-1 ; μ(x t , x 0 ), βt I)2\nwhere ᾱt = t s=1 α s , and\nμ(x t , x 0 ) = 1 √ α t (x t - 1 -α t √ 1 -ᾱt ϵ t ) (4) βt = 1 -ᾱt-1 1 -ᾱt • β t .(5)\nSo we can train a neural network to approximate ϵ t and the predicted value is denoted as ϵ θ (x t ). 
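Because the forward process of Eq. (1) can be unrolled, a noisy sample x_t can be drawn directly from x_0 using the cumulative product ᾱ_t, which is what makes training on randomly chosen timesteps cheap. A minimal NumPy sketch is shown below; the linear-schedule endpoints are the common DDPM defaults and are an assumption here, since the text only states that a 1,000-step linear schedule is used.

```python
import numpy as np

# Linear noise schedule with T = 1000 steps; the endpoint values are standard
# DDPM defaults and are assumed, not taken from the text.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas                    # alpha_t = 1 - beta_t
alpha_bar = np.cumprod(alphas)          # alpha_bar_t = prod_{s <= t} alpha_s

def forward_diffusion(x0, t, rng=None):
    """Sample x_t ~ q(x_t | x_0), obtained by unrolling Eq. (1) for t steps."""
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps
```

The returned pair (x_t, ε) is the training signal for the network: given x_t (and, in the glyph-conditional model described in the next subsection, the rendered glyph), it regresses the noise ε via ε_θ(x_t).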
It has been verified that instead of directly setting Σ θ (x t , t) as βt , setting it as a learnable interpolation between βt , β t in log domain will yield better log-likelihood [29]:\nΣ θ (x t , t) = exp(ν θ (x t ) log β t + (1 -ν θ (x t )) log βt ) .(6)\nIn this paper, we will train a U-Net to predict ϵ θ (x t ) and ν θ (x t ) with the same hybrid loss as in [29]." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Glyph Conditional U-Net Architecture", "publication_ref": [ "b31", "b11", "b28", "b27" ], "table_ref": [], "text": "As shown in Fig. 1, the U-Net architecture we used is borrowed from [9]. With 128 × 128 image input, there are 5 resolution stages in encoder and decoder respectively, and each stage consists of 2 BigGAN residual blocks (ResBlock) [3].\nIn addition, BigGAN ResBlocks are also used for downsampling and upsampling activations. We also follow [9] to use multi-head attention at 32 × 32, 16 × 16 and 8 × 8 resolutions. Timestep t will first be mapped to sinusoidal embedding and then processed by a 2-layer feed-forward network (FFN). This processed embedding will then be fed to each convolution layer in U-Net through a featurewise linear modulation (FiLM) operator [32].\nTo control the style and content of generated character images, writer information [12] and character category information are also fed to the model. Given a writer w, which is actually the class index of all writer IDs, it will be mapped to a learnable embedding, followed by L2-normalization (denoted as z), which is injected to U-Net together with the timestep embedding [29] as shown in Fig. 1.\nIf we inject character category information in the same way as writer, the model will not be able to generate samples for unseen categories because their embeddings are not optimized at all. In this paper, we propose to leverage printed images rendered by font \"kai\" to provide character category information. We denote this glyph image as g. There are several ways to inject g to the model. For example, it can be encoded as a feature vector by a CNN/ViT and fed to U-Net in FiLM way, or encoded as feature sequences and fed to attention layers of U-Net serving as external keys and values [28]. In this paper, we simply inject g as model's input by concatenating it with x t and leave other ways as future work. We call our approach as Glyph Conditional DDPM (GC-DDPM).\nBy conditioning model output on glyph image, we expect the model can learn the implicit mapping rules between printed stroke combinations and their handwritten counterparts. Then we can input font-rendered glyph images of unseen characters to the well-trained GC-DDPM and get their handwritten samples of high quality and diversity." }, { "figure_ref": [], "heading": "Multi-conditional Classifier-free Diffusion Guidance", "publication_ref": [ "b15", "b15" ], "table_ref": [], "text": "Classifier-free guidance [16] has been proven effective for improving generation quality on different tasks. In this paper, we are also curious about its effects on HCCR system trained with synthetic samples.\nThere are 2 conditions, glyph g and writer w, in our model. We assume that given x t , g and w are independent. 
So we have\np θ (x t-1 |x t , g, w) ∝ p θ (x t-1 |x t )p θ (g|x t )p θ (w|x t ) .(7)\nFollowing the previous practice in [16], we assume that there is an implicit classifier (ic),\np ic (g, w|x t ) ∝ p(x t |g) p(x t ) γ • p(x t |w) p(x t ) η .(8)\nThen we have\n∇ xt log p ic (g, w|x t ) ∝ γϵ(x t , g) + ηϵ(x t , w) -(γ + η)ϵ(x t ) .(9)\nSo we can perform sampling with the score formulation εθ (x t , g, w) = ϵ θ (x t , g, w) + γϵ θ (x t , g, ∅)\n+ ηϵ θ (x t , ∅, w) -(γ + η)ϵ θ (x t , ∅, ∅) . (10\n)\nWe call γ, η as content and writer guidance scales respectively. When g = ∅, an empty glyph image will be fed to U-Net and when w = ∅, a special embedding will be used. During training, we set g and w to ∅ with probability 10% independently to get partial/unconditional models." }, { "figure_ref": [], "heading": "Writer Interpolation", "publication_ref": [ "b32" ], "table_ref": [], "text": "Besides generating unseen characters, our model is also able to generate unseen styles by injecting interpolation between different writer embeddings as new writer embedding. Given two normalized writer embeddings z i and z j , we use spherical interpolation [33] to get a new embedding z with L2-norm being 1, as in Eq. ( 11):\nz = z i cos λπ 2 + z j sin λπ 2 , λ ∈ [0, 1] . (11\n)" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b22" ], "table_ref": [], "text": "We conduct our experiments on CASIA-HWDB [23] dataset. The detailed experimental setup is comprehensively explained in Sec. 4.1. Experiments on Writer Independent (WI) and Writer Dependent (WD) GC-DDPMs are conducted in Sec. 4.2 and Sec. 4.3, respectively. We further use synthesized samples to augment the training set of HCCR in Sec. 4.4. Finally, we compare our approach with prior arts in Sec. 4.5." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b22", "b0", "b44", "b25", "b38", "b6", "b13", "b12", "b17", "b30", "b23", "b50", "b10", "b53", "b46", "b24" ], "table_ref": [], "text": "Dataset: The CASIA-HWDB dataset is a large-scale offline Chinese handwritten character database including HWDB1.0, 1.1 and 1.2. We use the HWDB1.0 and 1.1 in experiments, where the former contains 3,866 Chinese character categories written by 420 writers, and the latter contains 3,755 categories written by another 300 writers. We follow the official partition of training and testing sets as in [23], where the training set is written by 576 writers.\nVocabulary partition: We use the 3,755 categories that cover the standard GB2312-80 level-1 Chinese set in experiments. We denote the set of 3,755 categories as S 3,755 . Following the setup in [1,45], we select the first 2,000 categories in GB2312-80 set as seen categories (denoted as S 2,000 ), and the remaining 1,755 categories as unseen categories (denoted as S 1,755 ). The diffusion models are trained on training samples of S 2,000 and used to generate handwritten Chinese character samples of S 1,755 to evaluate the performance of zero-shot training data generation for HCCR.\nDDPM settings: Our DDPM implementation is based on [9]. We use the \"kai\" as our font library to render printed character images. We conduct experiments on both WI and WD GC-DDPMs. In WI GC-DDPM training, we disable writer embeddings and randomly set content condition g as ∅ with Fig. 3: Synthetic handwritten Chinese character samples and corresponding glyphs, with stroke numbers increasing from left to right. probability 10%. 
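Dropping the content condition (and, for the WD model, the writer condition as noted next) to ∅ during training is what makes the guidance rule of Eq. (10) usable at sampling time, because a single network then provides the conditional, partially conditional, and unconditional noise predictions. Before continuing with the training details, the following minimal sketch summarizes Eq. (10) and the spherical writer interpolation of Eq. (11); `eps_model` is an assumed callable wrapping the trained U-Net, with `None` standing for the empty condition ∅.

```python
import numpy as np

def guided_eps(eps_model, x_t, glyph, writer, gamma, eta):
    """Multi-conditional classifier-free guidance, Eq. (10)."""
    e_gw = eps_model(x_t, glyph, writer)   # fully conditional
    e_g = eps_model(x_t, glyph, None)      # glyph condition only
    e_w = eps_model(x_t, None, writer)     # writer condition only
    e_0 = eps_model(x_t, None, None)       # unconditional
    return e_gw + gamma * e_g + eta * e_w - (gamma + eta) * e_0

def slerp_writer(z_i, z_j, lam):
    """Spherical interpolation of two L2-normalized writer embeddings, Eq. (11)."""
    return z_i * np.cos(lam * np.pi / 2.0) + z_j * np.sin(lam * np.pi / 2.0)
```

Setting lam to 0 or 1 recovers the two original writers, while intermediate values produce new writing styles; gamma and eta correspond to the content and writer guidance scales swept in the experiments.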
And in WD GC-DDPM, writer condition w is also randomly set to ∅ with probability 10%. Flip and mirror augmentations are used during training. We set batch size as 256, image size as 128×128, and we use AdamW optimizer [26] with learning rate 1.0e-4. Diffusion step number is set to 1,000 with a linear noise schedule. GC-DDPMs are trained for about 200K steps using a machine with 8 Nvidia V100 GPUs, which takes about 5 days. During sampling, we use the denoising diffusion implicit model (DDIM) [39] sampling method with 50 steps. It takes 62 hours to sample 3,755 characters written by 576 writers, which are about 2.2M samples, with the same 8 Nvidia V100 GPUs.\nEvaluation metrics: We evaluate the quality of synthetic samples in three aspects. First, Inception score (IS) [37] and Frechet Inception Distance (FID) [14] are used to evaluate the diversity and distribution similarity of synthetic samples compared with real ones. Second, since samples are synthesized by conditioning on glyph image, the synthetic samples should be consistent with the category of conditioned glyph. Therefore, we introduce a new metric called correctness score (CS). For each synthetic sample, the category of conditioned glyph is used as ground truth, and CS is calculated as the recognition accuracy of synthetic samples using an HCCR model trained with real data, which achieves 97.3% recognition accuracy in real data testing set. Finally, as the purpose of diffusion model here is to generate training data for unseen categories, we also train HCCR models with synthetic samples and evaluate recognition accuracy on the real testing set of unseen categories. Our HCCR model adopts ResNet-18 [13] architecture and is trained with standard SGD optimizer. No data augmentation is applied during HCCR model training. It is noted that starting from different random noise, it is almost impossible to generate exact same handwritten samples even for same conditional character glyphs. So it is not appropriate to adopt pixel-level metrics to evaluate generative effect as [18,31,24,51,11,54,47,25] do." }, { "figure_ref": [ "fig_3", "fig_3", "fig_2", "fig_3", "fig_3", "fig_3" ], "heading": "WI GC-DDPM Results", "publication_ref": [ "b15" ], "table_ref": [], "text": "We first conduct experiments on WI GC-DDPM. It is shown in [16] that the classifier guidance scale is able to attain a trade-off between quality and diversity. In order to evaluate the behavior of different content guidance scale γ's, we choose different γ's and generate samples to compute FID, ID and CS. Here we synthesize 50K samples of S 2,000 , and the HCCR model used to measure CS is trained using real samples of S 3,755 . γ ∈ {0.0, 1.0, 2.0, 3.0 , 4.0} are used and the comparison results are summarized in Tab. 1. We can find that, as γ increases, the IS decreases, the FID increases and the CS achieves close to 100% accuracy. This indicates that with a larger γ, the diversity of synthetic samples is decreasing. This behavior is also observed in Fig. 5a where we visualize multiple sampled results of the character class in S 2,000 using different γ's. The generated samples are less diverse, less cursive and easier to recognize when conditioned on stronger content guidance. According to FID and examples in Fig. 5, the distribution of synthetic samples with γ = 0 is closer to that of real samples. When γ = 0, CS achieves 94.7%. In Fig. 4, we show synthetic cases that the trained HCCR model fails to recognize. 
Failure cases include (a) samples that are unreadable, and (b) samples that are closer to another easily confused Chinese character. They are caused by alignment failures between printed and synthetic strokes, and can be eliminated by improving glyph conditioning method. We leave it as future work.\nThen, we evaluate the quality of WI GC-DDPM for zero-shot generation of HCCR training data. We use the trained WI GC-DDPM to synthesize 576 samples for each category in S 1,755 . Then, the synthetic samples are used along with real samples of categories in S 2,000 to train an HCCR model that supports 3,755 categories. We calculate its recognition accuracy on the testing set of category S 1,755 , which is denoted as Acc 1,755 . Different γ's are tried, and the results are shown in Tab. 2. In Fig. 5b, we visualize synthetic samples of one category in S 1,755 . The best Acc 1,755 is achieved when γ = 0. Although synthetic samples with higher γ are less cursive, they achieve much lower Acc 1,755 . This is because the lack of diversity makes it difficult to cover the wide distribution of handwritten Chinese character image space. 意 Fig. 6: Generated handwritten Chinese character samples with different content and writer guidance scales, where the character is from the class of S 1,755 . Samples are generated with the same random seed and initial noise. Clearly, by learning the mapping of radicals and spatial relationship between Chinese printed and handwritten strokes, the diffusion model is capable of zeroshot generation of unseen Chinese character categories. Moreover, a high accuracy of 93.0% is achieved on S 1,755 by only leveraging the synthetic samples. In Figs. 5c and5d, we further show the synthetic samples of a Chinese character category that does not belong to S 3,755 . The excellent generation effect implies that our method has the potential to be extended to a larger vocabulary." }, { "figure_ref": [ "fig_3", "fig_5", "fig_5", "fig_6" ], "heading": "WD GC-DDPM Results", "publication_ref": [], "table_ref": [], "text": "Although WI GC-DDPM can generate desired handwritten characters, we cannot control their writing styles. In this part, we conduct experiments on WD GC-DDPM, which introduces writer information as an additional condition.\nFig. 6 shows the visualization results of sampling with different content guidance scale γ's and writer guidance scale η's. It shows that with larger γ, the synthetic samples become less cursive and more similar to the corresponding printed image. This behavior is consistent with that of the WI GC-DDPM in Fig. 5. We also find that with large η, the generated sample becomes inconsistent with the conditioned printed image. Since writer information is injected to GC-DDPM in FiLM way, a large guidance scale will cause the mean and variance shift of μθ (x t , g, w) and Σθ (x t , g, w) which hinders the subsequent denoising, leading to over-saturated images with over-smoothed textures [43].\nIn Fig. 7b, we show several synthetic text line images conditioned on a fixed writer embedding with our WD GC-DDPM. Writing styles of these samples are consistent and quite similar to real samples written by the same writer as shown in Fig. 7a. These results verify the writing style controllability of our model.\nThen, we compare the quality of synthetic samples when used as training data for HCCR. For a fair comparison, we also generate 576 samples for each category in S 1,755 , one image for each writer. Recognition performances are shown in Tab. 3. 
To improve sampling efficiency and ensure training data diversity, the writer guidance scale of 0 is applied. Compared with using samples synthesized with WI GC-DDPM as HCCR training set, the accuracy on the testing set Another capability of WD GC-DDPM is that it can interpolate between different writer embeddings and generate samples of new styles. We choose 2 writers and try different interpolation factor λ's and visualize the synthetic samples in Fig. 8. We find that as λ increases from 0 to 1, the style of synthetic samples gradually shifts from one writing style to another. We also observe that with the same λ, the synthetic samples of different Chinese characters share similar writing style as expected. Finally, we use writer style interpolation to generate the training data of S 1,755 for HCCR, and again 576 samples are generated for each category. For each image, we randomly select 2 writers for interpolation. We simply use an interpolation factor of 0.5. Results are summarized in Tab. 3. We observe a slight improvement in FID score and a 1% absolute recognition accuracy improvement on S 1,755 , which further verifies the superiority of our WD GC-DDPM." }, { "figure_ref": [], "heading": "Data-augmented HCCR Results", "publication_ref": [], "table_ref": [], "text": "We also use GC-DDPMs trained on S 2,000 , to synthesize samples for all categories in S 3,755 , and combine them with real samples to build HCCR systems. 3 settings are tried: WI, WD and WD w/ interpolation. And 576 samples for each category are synthesized in each setting. Tab. 4 summarizes the results. Best accuracies are achieved with samples synthesized by WD w/ interpolation, which is consistent with Tab. 3. The HCCR models trained with only synthetic samples perform slightly worse than the one trained with only real samples. Combining synthetic and real training samples only performs 0.0%˜0.1% better than real samples. These results demonstrate the distribution modeling capacity of GC-DDPMs." }, { "figure_ref": [], "heading": "Comparison with Prior Arts", "publication_ref": [ "b3", "b20", "b21", "b49", "b20", "b21", "b47", "b47" ], "table_ref": [], "text": "Finally, we compare our method with prior arts. We first compare our method with prior zero-shot HCCR systems. To be consistent with prior works in [4,21,22], we randomly choose 1,000 classes in S 1,755 as unseen classes and use ICDAR2013 [50] benchmark dataset for testing. Results are shown in Tab. 5. Here we only list the results from prior arts using 2,000 seen character classes. It is noted that the 2,000/1,000 seen/unseen character class split for training and testing is not exactly the same. So the results are not directly comparable. The results in Tab. 5 show that our methods achieve the same level recognition accuracy compared with previous state-of-the-art zero-shot HCCR systems. Moreover, our approach directly uses a standard CNN to predict supported categories, which is much simpler compared with the systems in [21,22].\nWe also compare our approach with [48], which also leverages a generation model to synthesize training samples for unseen classes. We follow the same experimental setups in [48] and use HWDB1. [48] achieves a 46.1% accuracy by adding more than 9.6M generated samples.\nOur approach achieves a 98.6% accuracy by only adding about 1.9M synthetic samples (576 samples for each unseen category). We also train a classifier using all real samples in HWDB1.2 training set (240 samples for each category). 
The classifier achieves a 97.9% accuracy, which is slightly worse than ours due to less diverse training samples. These results verify the zero-shot generation capability of our methods again. It is easy to extend to larger vocabularies, which makes it possible to build a high-quality HCCR system for 87,887 categories." }, { "figure_ref": [ "fig_9" ], "heading": "Limitations and Future Work", "publication_ref": [], "table_ref": [], "text": "Although GC-DDPM-synthesized images are quite helpful for building a highquality HCCR system, there are still some failure cases. The blur and dislocation phenomena in these samples reveal that there exist better ways to inject glyph information. It is also possible to encode radical/stroke sequences with spatial relationships as the condition of DDPM. We will investigate these methods and report the results elsewhere.\nAnother limitation of our approach is the long training time of DDPMs. We will try to reduce the number of character categories and sample numbers per category to find a better trade-off between synthesis quality and training cost.\nJapanese and Korean characters share most strokes with Chinese, so we also try to synthesize handwritten Japanese and Korean samples with our Chinesetrained DDPM. As Fig. 9 shows, except for some circle and curve strokes, the results are quite reasonable. As future work, we will combine handwritten samples of CJK languages to build a new DDPM, which is expected to synthesize samples for each language with higher diversity and quality." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose WI and WD GC-DDPM solutions to achieve zero-shot training data generation for HCCR. Experimental results have verified their effectiveness in terms of generation quality, diversity and HCCR accuracies of unseen categories. WD performs slightly better than WI due to its better distribution modeling capability and writing style controllability. These solutions can be easily extended to larger vocabularies and other languages, and provide a feasible way to build an HCCR system supporting 87,887 categories with high recognition accuracy." } ]
2023-05-25
[ { "authors": "X Ao; X Y Zhang; H M Yang; F Yin; C L Liu", "journal": "", "ref_id": "b0", "title": "Cross-modal prototype learning for zero-shot handwriting recognition", "year": "2019" }, { "authors": "J Austin; D D Johnson; J Ho; D Tarlow; R Van Den Berg", "journal": "NeurIPS", "ref_id": "b1", "title": "Structured denoising diffusion models in discrete state-spaces", "year": "2021" }, { "authors": "A Brock; J Donahue; K Simonyan", "journal": "ICLR", "ref_id": "b2", "title": "Large scale GAN training for high fidelity natural image synthesis", "year": "2019" }, { "authors": "Z Cao; J Lu; S Cui; C Zhang", "journal": "Pattern Recognition", "ref_id": "b3", "title": "Zero-shot handwritten Chinese character recognition with hierarchical decomposition embedding", "year": "2020" }, { "authors": "J Chen; B Li; X Xue", "journal": "", "ref_id": "b4", "title": "Zero-shot Chinese character recognition with stroke-level decomposition", "year": "2021" }, { "authors": "L Chen; S Wang; W Fan; J Sun; S Naoi", "journal": "", "ref_id": "b5", "title": "Beyond human recognition: A CNN-based framework for handwritten character recognition", "year": "2015" }, { "authors": "D Cireşan; U Meier", "journal": "", "ref_id": "b6", "title": "Multi-column deep neural networks for offline handwritten Chinese character classification", "year": "2015" }, { "authors": "F A Croitoru; V Hondru; R T Ionescu; M Shah", "journal": "", "ref_id": "b7", "title": "Diffusion models in vision: A survey", "year": "2022" }, { "authors": "P Dhariwal; A Nichol", "journal": "NeurIPS", "ref_id": "b8", "title": "Diffusion models beat GANs on image synthesis", "year": "2021" }, { "authors": "X Diao; D Shi; H Tang; L Wu; Y Li; H Xu", "journal": "", "ref_id": "b9", "title": "REZCR: A zero-shot character recognition method via radical extraction", "year": "2022" }, { "authors": "Y Gao; Y Guo; Z Lian; Y Tang; J Xiao", "journal": "ACM TOG", "ref_id": "b10", "title": "Artistic glyph image synthesis via one-stage few-shot learning", "year": "2019" }, { "authors": "A Graves", "journal": "", "ref_id": "b11", "title": "Generating sequences with recurrent neural networks", "year": "2013" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b12", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "M Heusel; H Ramsauer; T Unterthiner; B Nessler; S Hochreiter", "journal": "NeurIPS", "ref_id": "b13", "title": "GANs trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "NeurIPS", "ref_id": "b14", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "J Ho; T Salimans", "journal": "", "ref_id": "b15", "title": "Classifier-free diffusion guidance", "year": "2021" }, { "authors": "G Huang; X Luo; S Wang; T Gu; K Su", "journal": "Pattern Recognition", "ref_id": "b16", "title": "Hippocampus-heuristic character recognition network for zero-shot learning in Chinese character recognition", "year": "2022" }, { "authors": "Y Huang; M He; L Jin; Y Wang", "journal": "", "ref_id": "b17", "title": "RD-GAN: few/zero-shot Chinese character style transfer via radical decomposition and rendering", "year": "2020" }, { "authors": "Y Huang; L Jin; D Peng", "journal": "", "ref_id": "b18", "title": "Zero-shot Chinese text recognition via matching class embedding", "year": "2021" }, { "authors": "Z Li; N Teng; M Jin; H Lu", "journal": "Int. J. Document Anal. 
Recog", "ref_id": "b19", "title": "Building efficient CNN architecture for offline handwritten Chinese character recognition", "year": "2018" }, { "authors": "C Liu; C Yang; H B Qin; X Zhu; C L Liu; X C Yin", "journal": "Pattern Recognition", "ref_id": "b20", "title": "Towards open-set text recognition via label-to-prototype learning", "year": "2022" }, { "authors": "C Liu; C Yang; X C Yin", "journal": "", "ref_id": "b21", "title": "Open-set text recognition via character-context decoupling", "year": "2022" }, { "authors": "C L Liu; F Yin; D H Wang; Q F Wang", "journal": "", "ref_id": "b22", "title": "CASIA online and offline Chinese handwriting databases", "year": "2011" }, { "authors": "W Liu; F Liu; F Ding; Q He; Z Yi", "journal": "", "ref_id": "b23", "title": "XMP-Font: Self-supervised cross-modality pre-training for few-shot font generation", "year": "2022" }, { "authors": "Y Liu; Z Lian", "journal": "", "ref_id": "b24", "title": "FontTransformer: Few-shot high-resolution Chinese glyph image synthesis via stacked Transformers", "year": "2022" }, { "authors": "I Loshchilov; F Hutter", "journal": "ICLR", "ref_id": "b25", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "T Luhman; E Luhman", "journal": "", "ref_id": "b26", "title": "Diffusion models for handwriting generation", "year": "2020" }, { "authors": "A Nichol; P Dhariwal; A Ramesh; P Shyam; P Mishkin; B Mcgrew; I Sutskever; M Chen", "journal": "", "ref_id": "b27", "title": "GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2022" }, { "authors": "A Q Nichol; P Dhariwal", "journal": "", "ref_id": "b28", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "Y Pang; J Lin; T Qin; Z Chen", "journal": "IEEE Transactions on Multimedia", "ref_id": "b29", "title": "Image-to-image translation: Methods and applications", "year": "2021" }, { "authors": "S Park; S Chun; J Cha; B Lee; H Shim", "journal": "", "ref_id": "b30", "title": "Few-shot font generation with localized style representations and factorization", "year": "2021" }, { "authors": "E Perez; F Strub; H De Vries; V Dumoulin; A Courville", "journal": "", "ref_id": "b31", "title": "FiLM: Visual reasoning with a general conditioning layer", "year": "2018" }, { "authors": "A Ramesh; P Dhariwal; A Nichol; C Chu; M Chen", "journal": "", "ref_id": "b32", "title": "Hierarchical textconditional image generation with CLIP latents", "year": "2022" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b33", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "C Saharia; W Chan; H Chang; C Lee; J Ho; T Salimans; D Fleet; M Norouzi", "journal": "", "ref_id": "b34", "title": "Palette: Image-to-image diffusion models", "year": "2022" }, { "authors": "C Saharia; W Chan; S Saxena; L Li; J Whang; E Denton; S K S Ghasemipour; B K Ayan; S S Mahdavi; R G Lopes", "journal": "", "ref_id": "b35", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "T Salimans; I Goodfellow; W Zaremba; V Cheung; A Radford; X Chen; X Chen", "journal": "NeurIPS", "ref_id": "b36", "title": "Improved techniques for training GANs", "year": "2016" }, { "authors": "J Sohl-Dickstein; E Weiss; N Maheswaranathan; S Ganguli", "journal": "", "ref_id": "b37", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, 
{ "authors": "J Song; C Meng; S Ermon", "journal": "ICLR", "ref_id": "b38", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Y Song; S Ermon", "journal": "NeurIPS", "ref_id": "b39", "title": "Generative modeling by estimating gradients of the data distribution", "year": "2019" }, { "authors": "Y Song; S Ermon", "journal": "NeurIPS", "ref_id": "b40", "title": "Improved techniques for training score-based generative models", "year": "2020" }, { "authors": "Y Song; J Sohl-Dickstein; D P Kingma; A Kumar; S Ermon; B Poole", "journal": "ICLR", "ref_id": "b41", "title": "Scorebased generative modeling through stochastic differential equations", "year": "2021" }, { "authors": "T Wang; T Zhang; B Zhang; H Ouyang; D Chen; Q Chen; F Wen", "journal": "", "ref_id": "b42", "title": "Pretraining is all you need for image-to-image translation", "year": "2022" }, { "authors": "T Wang; Z Xie; Z Li; L Jin; X Chen", "journal": "Pattern Recognition Letters", "ref_id": "b43", "title": "Radical aggregation network for fewshot offline handwritten Chinese character recognition", "year": "2019" }, { "authors": "W Wang; J Zhang; J Du; Z R Wang; Y Zhu", "journal": "ICFHR", "ref_id": "b44", "title": "DenseRAN for offline handwritten Chinese character recognition", "year": "2018" }, { "authors": "L Weng", "journal": "", "ref_id": "b45", "title": "What are diffusion models?", "year": "2021-07" }, { "authors": "Y Xie; X Chen; L Sun; Y Lu", "journal": "", "ref_id": "b46", "title": "DG-Font: Deformable generative networks for unsupervised font generation", "year": "2021" }, { "authors": "M Xue; J Du; J Zhang; Z R Wang; B Wang; B Ren", "journal": "", "ref_id": "b47", "title": "Radical composition network for Chinese character generation", "year": "2021" }, { "authors": "L Yang; Z Zhang; Y Song; S Hong; R Xu; Y Zhao; Y Shao; W Zhang; B Cui; M H Yang", "journal": "", "ref_id": "b48", "title": "Diffusion models: A comprehensive survey of methods and applications", "year": "2022" }, { "authors": "F Yin; Q F Wang; X Y Zhang; C L Liu", "journal": "", "ref_id": "b49", "title": "ICDAR 2013 Chinese handwriting recognition competition", "year": "2013" }, { "authors": "Y Zhang; Y Zhang; W Cai", "journal": "", "ref_id": "b50", "title": "Separating style and content for generalized style transfer", "year": "2018" }, { "authors": "Z Zhong; X Y Zhang; F Yin; C L Liu", "journal": "", "ref_id": "b51", "title": "Handwritten Chinese character recognition with spatial Transformer and deep residual networks", "year": "2016" }, { "authors": "Z Zhong; L Jin; Z Xie", "journal": "", "ref_id": "b52", "title": "High performance offline handwritten Chinese character recognition using GoogLeNet and directional feature maps", "year": "2015" }, { "authors": "A Zhu; X Lu; X Bai; S Uchida; B K Iwana; S Xiong", "journal": "IEEE Transactions on Image Processing", "ref_id": "b53", "title": "Few-shot text style transfer via deep feature similarity", "year": "2020" } ]
[ { "formula_coordinates": [ 5, 226.07, 177.85, 254.52, 9.68 ], "formula_id": "formula_0", "formula_text": "q(x t |x t-1 ) = N (x t ; 1 -β t x t-1 , β t I)(1)" }, { "formula_coordinates": [ 5, 226.07, 186.25, 116.12, 17.63 ], "formula_id": "formula_1", "formula_text": "x t = √ α t x t-1 + √ 1 -α t ϵ t" }, { "formula_coordinates": [ 5, 211.89, 322.57, 191.57, 9.68 ], "formula_id": "formula_2", "formula_text": "p θ (x t-1 |x t ) = N (x t-1 ; µ θ (x t , t), Σ θ (x t , t)) ." }, { "formula_coordinates": [ 5, 134.77, 322.6, 345.83, 53.03 ], "formula_id": "formula_3", "formula_text": ") Since q(x t-1 |x t , x 0 ) is tractable, q(x t-1 |x t , x 0 ) = N (x t-1 ; μ(x t , x 0 ), βt I)2" }, { "formula_coordinates": [ 5, 234.37, 407.81, 246.22, 49.43 ], "formula_id": "formula_5", "formula_text": "μ(x t , x 0 ) = 1 √ α t (x t - 1 -α t √ 1 -ᾱt ϵ t ) (4) βt = 1 -ᾱt-1 1 -ᾱt • β t .(5)" }, { "formula_coordinates": [ 5, 195.03, 522.57, 285.56, 12.28 ], "formula_id": "formula_6", "formula_text": "Σ θ (x t , t) = exp(ν θ (x t ) log β t + (1 -ν θ (x t )) log βt ) .(6)" }, { "formula_coordinates": [ 6, 199.34, 505.72, 281.25, 9.68 ], "formula_id": "formula_7", "formula_text": "p θ (x t-1 |x t , g, w) ∝ p θ (x t-1 |x t )p θ (g|x t )p θ (w|x t ) .(7)" }, { "formula_coordinates": [ 6, 218.35, 552.14, 262.24, 26.43 ], "formula_id": "formula_8", "formula_text": "p ic (g, w|x t ) ∝ p(x t |g) p(x t ) γ • p(x t |w) p(x t ) η .(8)" }, { "formula_coordinates": [ 6, 178.47, 605.33, 302.13, 9.68 ], "formula_id": "formula_9", "formula_text": "∇ xt log p ic (g, w|x t ) ∝ γϵ(x t , g) + ηϵ(x t , w) -(γ + η)ϵ(x t ) .(9)" }, { "formula_coordinates": [ 6, 254.25, 650.14, 221.92, 17.02 ], "formula_id": "formula_10", "formula_text": "+ ηϵ θ (x t , ∅, w) -(γ + η)ϵ θ (x t , ∅, ∅) . (10" }, { "formula_coordinates": [ 6, 476.16, 650.14, 4.43, 8.74 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 7, 224.29, 274.88, 251.88, 22.31 ], "formula_id": "formula_12", "formula_text": "z = z i cos λπ 2 + z j sin λπ 2 , λ ∈ [0, 1] . (11" }, { "formula_coordinates": [ 7, 476.16, 281.62, 4.43, 8.74 ], "formula_id": "formula_13", "formula_text": ")" } ]
Zero-shot Generation of Training Data with Denoising Diffusion Probabilistic Model for Handwritten Chinese Character Recognition
There are more than 80,000 character categories in Chinese, but most of them are rarely used. To build a high-performance handwritten Chinese character recognition (HCCR) system supporting the full character set with a traditional approach, many training samples need to be collected for each character category, which is both time-consuming and expensive. In this paper, we propose a novel approach that transforms Chinese character glyph images generated from font libraries into handwritten ones with a denoising diffusion probabilistic model (DDPM). Trained on handwritten samples of a small character set, the DDPM is capable of mapping printed strokes to handwritten ones, which makes it possible to generate photo-realistic handwritten samples of unseen character categories in diverse styles. Combining DDPM-synthesized samples of unseen categories with real samples of other categories, we can build an HCCR system that supports the full character set. Experimental results on the CASIA-HWDB dataset with 3,755 character categories show that HCCR systems trained with synthetic samples perform similarly to the one trained with real samples in terms of recognition accuracy. The proposed method has the potential to address HCCR with a larger vocabulary.
Dongnan Gui; Kai Chen; Haisong Ding; Qiang Huo
[ { "figure_caption": "Fig. 1 :1Fig. 1: Architecture of glyph conditional U-Net, which is adapted from the model used in [9]. We concatenate font \"kai\" rendered character image with original input to provide glyph guidance during generation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "γ 0.0 1.0 2.0 3.0 4.0 Acc1,755 (%) 93.0 88.6 91.7 63.7 33.2 (a) Failure samples that do not look like any Chinese characters. (b) (top) Glyph condition images; (middle) Synthetic samples; (bottom) Most similar characters.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Synthetic samples that are wrongly recognized by real data trained HCCR model when γ = 0.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: Multiple synthetic handwritten Chinese character samples with different content guidance scale, where (a), (b) and (c) are characters from classes of S 2,000 , S 1,755 , and out of S 3,755 Chinese character sets. Samples in each line use the same random seed and initial noise. Samples across lines use different random seeds to visualize diversity.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "(a) Real text line from [23]. (b) Synthetic samples arranged as a text line.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 7 :7Fig. 7: Comparisons of real text line images in HWDB2.1 and generated samples arranged in a text line, where we replace the characters from real data with the generated characters. Samples in different lines of (a) and (b) are selected and generated conditioning on the same writer 1001.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 :8Fig. 8: Interpolation of handwritten Chinese character samples, where the top, middle, bottom lines are characters from classes of S 2,000 , S 1,755 , and out of S 3,755 Chinese character sets. We choose writer 1061 (left) and writer 1057 (right) for interpolation and interpolation factors are shown at the top of images. Standard glyph images of font \"kai\" are shown on the left. Samples in each line use the same random seed and initial noise.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "0 and 1.1 as training set, which contains 3,755 categories, to train GC-DDPMs. Unseen 3,319 categories in HWDB1.2 testing set are used as testing set. Results are shown in Tab. 6.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 9 :9Fig. 
9: Synthetic samples of Japanese and Korean characters and standard glyph images in font \"SourceHans\".", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Comparisons of generation quality using different content guidance scale γ's in terms of IS, FID, and CS.", "figure_data": "γ IS FID CS (%)0.0 2.62 8.07 94.71.0 2.51 10.97 99.82.0 2.46 18.03 99.93.0 2.44 24.34 99.94.0 2.39 28.69 99.9", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparisons of generation quality using different content guidance scale γ's in terms of recognition accuracy on testing set of classes in S 1,755 using generated samples as training set.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparisons of generation quality between WI and WD DDPMs in terms of IS, FID, CS (%) and the recognition accuracy (%) on the testing set of class S 1,755 using generated samples as training set.", "figure_data": "ModelIS FID CS Acc1,755WI2.62 8.07 94.7 93.0WD2.49 6.34 94.8 93.7WD w/ interpolation 2.53 6.26 95.0 94.7", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparisons of recognition accuracy (%) on test sets of S 2,000 and S 1,755 using real and/or synthetic samples as HCCR training set. of S 1,755 is improved from 93.0% to 93.7%. When GC-DDPM is trained without conditioning on writer embedding, it may generate similar samples from different initial noise. Whereas in WD GC-DDPM, by conditioning on different writer embeddings, the model will generate samples with different writing styles. Therefore, the diversity of synthetic samples will be improved. To verify this, we compare the quality of synthetic samples in terms of IS and FID. As shown in Tab. 3, the FID improves from 8.07 to 6.34. The results demonstrate the superiority of WD GC-DDPM in zero-shot training data generation of unseen Chinese character categories.", "figure_data": "Training setAccuracy on testing setRealSyntheticAcc2,000Acc1,755✓/97.397.2/WI96.396.0/WD96.496.1/ WD w/ interpolation 96.596.1✓WI97.397.3✓WD97.497.3✓ WD w/ interpolation 97.497.3", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparisons of unseen character categories' recognition accuracy (%) between our method and prior zero-shot HCCR systems. Works with * also use samples from HWDB1.2 for training, while † means online trajectory information is also used.", "figure_data": "MethodAccuracyCM † [1]86.7DenseRan [45]19.5FewRan * [44]70.6HCCR * [4]73.4OSOCR * [21]84.3OSCCD * [22]95.6WI GC-DDPM96.4WD GC-DDPM96.8WD GC-DDPM w/ interpolation 96.9", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparisons of unseen character categories' recognition accuracy (%) on CASIA1.2 testing set.", "figure_data": "MethodsAccuracyRCN [48]46.1WI GC-DDPM98.6WD GC-DDPM98.6ResNet-18 trained with real data 97.9", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[45,4,44,10]", "Explanation": "The cited works provide a method of encoding character categories as sequences of radicals/strokes and spatial relationships to achieve zero-shot recognition in the citing paper."}, {"Category": "Data Source", "Citation": "[1,22,21,19]", "Explanation": "The cited works leverage font-rendered glyph images as reference representations for unseen character categories in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work on synthesizing unseen character samples with a radical composition network is extended in the citing paper to improve the recognition accuracy of HCCR systems."}, {"Category": "Methodological Basis", "Citation": "[38,15]", "Explanation": "The cited works on denoising diffusion probabilistic models (DDPMs) are used in the citing paper to synthesize diverse and high-quality training samples for unseen character categories."}, {"Category": "Methodological Basis", "Citation": "[28,33,36]", "Explanation": "The cited works provide a diffusion-based text-to-image generation method that the citing paper adopts to achieve the goal of zero-shot handwritten Chinese character image generation."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The cited work is a U-Net used in the context of zero-shot handwritten Chinese character generation. The citing paper builds upon this work by designing a glyph conditional DDPM that leverages the U-Net to guide the model in constructing mappings between font-rendered and handwritten strokes/radicals."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work provides the dataset and evaluation metrics used in the citing paper to assess the performance of the HCCR systems trained with DDPM-synthesized samples."}, {"Category": "Supporting Evidence", "Citation": "[7, 50,53,6,52,20]", "Explanation": "The cited works are conventional HCCR systems that have achieved superior recognition accuracy, which serves as a foundational basis for the citing paper to build upon in their research on zero-shot HCCR."}, {"Category": "Extension or Continuation", "Citation": "[45,4,44,10]", "Explanation": "The cited works focus on representing Chinese characters as a sequence of composing radicals, which the citing paper extends by exploring the use of this method in zero-shot HCCR research."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The cited work represents Chinese characters using strokes, which the citing paper further extends by applying this method in the context of zero-shot HCCR research."}, {"Category": "Data Source", "Citation": "[1,22,21,17]", "Explanation": "The cited works leverage information from the corresponding Chinese character glyph images to achieve zero-shot HCCR, which the citing paper uses as a data source for their research on the same topic."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work provides a method for extracting visual representations of glyph images, which the citing paper adopts in their research on zero-shot recognition systems."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work investigates a radical composition network for generating unseen Chinese characters, which the citing paper extends by using a more powerful diffusion model to achieve the same goal."}, {"Category": "Extension or Continuation", 
"Citation": "[51,11,54,47,25]", "Explanation": "The cited works use the image-to-image translation framework for zero-shot Chinese font generation, which the citing paper extends by focusing on zero-shot handwritten Chinese character generation with DDPM."}, {"Category": "Extension or Continuation", "Citation": "[18,31,24]", "Explanation": "The cited works leverage the information of composing components, radicals, and strokes for zero-shot Chinese font generation, which the citing paper extends by focusing on zero-shot handwritten Chinese character generation with DDPM."}, {"Category": "Supporting Evidence", "Citation": "[38,15]", "Explanation": "The cited works on diffusion model DDPM have become popular in computer vision and achieved superior performance in image generation tasks, providing a strong basis for the citing paper to leverage DDPM for zero-shot handwritten Chinese character generation."}, {"Category": "Methodological Basis", "Citation": "[15,9,42]", "Explanation": "The cited works on DDPM have demonstrated their powerful capabilities in generating high-quality and high-diversity images, which the citing paper leverages to conduct research on zero-shot handwritten Chinese character generation."}, {"Category": "Data Source", "Citation": "[33]", "Explanation": "The cited work on DDPM is shown to have a great effect on combination of concepts, which the citing paper uses to integrate multiple elements in the process of zero-shot handwritten Chinese character generation."}, {"Category": "Extension or Continuation", "Citation": "[27]", "Explanation": "The cited work on DDPM is applied to online English handwriting generation, which the citing paper extends to the context of zero-shot handwritten Chinese character generation."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work provides a Markov chain of forward and reverse diffusion processes for generating handwritten Chinese character samples, which the citing paper adopts in their own research to generate samples in a similar manner."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work provides a method of setting the log-likelihood interpolation between \u03b2t and \u03b2 t in the log domain, which the citing paper adopts to improve the log-likelihood performance in their research."}, {"Category": "Methodological Basis", "Citation": "(6)", "Explanation": "The cited work [29] provides a hybrid loss function that the citing paper adopts in training a U-Net to predict \u03f5 \u03b8 (x t ) and \u03bd \u03b8 (x t ) in their research."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work provides the U-Net architecture that the citing paper uses as a basis for their research on generating character images."}, {"Category": "Extension or Continuation", "Citation": "[12]", "Explanation": "The cited work provides information on writer IDs that the citing paper uses to control the style and content of the generated character images in their research."}, {"Category": "Data Source", "Citation": "[29]", "Explanation": "The cited work provides the timestep embedding that the citing paper uses in their research on generating character images through the U-Net architecture."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work provides a method of encoding glyph images as feature sequences and feeding them to attention layers in U-Net, which the citing paper adopts in their approach of injecting 
glyph images as model input in the context of GC-DDPM."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work on classifier-free guidance has been adopted in the citing paper to improve the generation quality of HCCR system trained with synthetic samples by introducing the concept of independent conditions and using a classifier to model the relationship between the conditions and the system output."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work provides a method for spherical interpolation, which the citing paper adopts to generate new writer embeddings in a given range of values."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work, CASIA-HWDB dataset, serves as the data source for the experiments conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work provides the official partition of training and testing sets for the CASIA-HWDB dataset used in the citing paper."}, {"Category": "Data Source", "Citation": "[1,45]", "Explanation": "The cited works provide the selection of seen and unseen categories in the Chinese set used in the experiments conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work provides the implementation of DDPM, which the citing paper uses as a basis for their experiments on generating handwritten Chinese character samples."}, {"Category": "Methodological Basis", "Citation": "[39]", "Explanation": "The cited work introduces the DDIM sampling method, which the citing paper adopts for sampling synthetic samples in their research."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work provides the architecture of the HCCR model used in the citing paper, which is based on the ResNet-18 model from the cited work."}, {"Category": "Supporting Evidence", "Citation": "[16]", "Explanation": "The cited work provides the concept of classifier guidance scale, which is used in the citing paper to achieve a trade-off between quality and diversity in the generation of samples."}, {"Category": "Supporting Evidence", "Citation": "[21,22]", "Explanation": "The cited works provide a consistent method for zero-shot HCCR system evaluation, which the citing paper adopts to ensure a fair comparison with prior arts."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work leverages a generation model to synthesize training samples for unseen classes, which the citing paper builds upon to compare their approach with a similar method."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work by [48] provides a baseline for zero-shot generation in the field of HCCR systems. The citing paper extends this work by improving the accuracy of the system and reducing the number of generated samples required to achieve the same level of performance."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b22", "b6", "b29", "b20", "b3", "b14", "b19", "b1", "b20", "b22" ], "table_ref": [], "text": "Grammatical Error Correction (GEC) systems aim to detect and correct grammatical errors in a given sentence and thus provide useful information for second-language learners. There are two lines of work for building GEC systems. Sequenceto-sequence methods (Rothe et al., 2021;Flachs et al., 2021;Zhang et al., 2022) take an erroneous sentence as input and generate an error-free sentence autoregressively. Sequence labeling methods (Omelianchuk et al., 2020;Tarnavskyi et al., 2022a) transform the target into a sequence of text-editing actions and use the sequence labeling scheme to predict those actions.\nWith advances in large pre-trained models (Devlin et al., 2018;Lewis et al., 2019) and availability of high-quality GEC corpora (Ng et al., 2014;Bryant et al., 2019), academic GEC systems (Omelianchuk et al., 2020;Rothe et al., 2021) have achieved promising results on benchmarks and serve as strong backbones for modern writing * Work was done during the internship at Tencent AI lab. † Corresponding authors.\nAs a result, I enjoy study accounting.\nAs a result, I enjoy studying accounting." }, { "figure_ref": [], "heading": "GEC systems", "publication_ref": [], "table_ref": [], "text": "correct grammatical errors without giving specific reasons Explainable-GEC system corrects grammatical errors with explanation" }, { "figure_ref": [ "fig_0" ], "heading": "Input", "publication_ref": [ "b23", "b5", "b2", "b20", "b12", "b1" ], "table_ref": [], "text": "Change 'study' to 'studying', because after 'enjoy' it should follow a \"gerund\".\nAs a result, I enjoy studying accounting. assistance applications (e.g., Google Docs1 , Grammarly2 , and Effidit (Shi et al., 2023) 3 ). Although these academic methods provide high-quality writing suggestions, they rarely offer explanations with specific clues for corrections. Providing a grammaraware explanation and evidence words to support the correction is important in second-language education scenarios (Ellis et al., 2006), where language learners need to \"know why\" than merely \"know how\". As a commercial system, Grammarly does provide evidence words, but in very limited cases, and the technical details are still a black box for the research community. Though some existing work has attempted to enhance the explainability of GEC's corrections (Bryant et al., 2017;Omelianchuk et al., 2020;Kaneko et al., 2022), they do not provide intrasentence hints (i.e., evidence words in the sentence). To fill this gap, we build a dataset named EXPlainble grammatical Error CorrecTion (EXPECT) on the standard GEC benchmark (W&I+LOCNESS (Bryant et al., 2019)), yielding 21,017 instances with explanations in total. As shown in Figure 1, given a sentence pair consisting of an erroneous sentence and its corrected counterpart, our explainable annotation includes:" }, { "figure_ref": [], "heading": "\"Gerund\" Error", "publication_ref": [ "b10" ], "table_ref": [], "text": "1) Evidence words in the erroneous sentence.\nError tracing could be rather obscure for second-language beginners. For example, given an erroneous sentence, \"As a result, I enjoy study accounting.\" where \"study\" should be corrected to \"studying\", a beginning learner might mistakenly attribute \"studying\" to \"accounting\" because they both have an \"ing\" suffix. However, the correct attribution should be \"enjoy\". 
Such incorrect judgment may lead the language learner to draw wrong conclusions (e.g., that a verb needs to have an \"ing\" suffix if a subsequent verb does), which significantly disturbs the learning procedure. To remedy this, EXPECT provides annotated evidence words, which enable training models to automatically assist second-language learners in finding error clues.\n2) Error types of the grammatical errors, which range over 15 well-defined categories designed by consulting the pragmatic error taxonomy of Skehan et al. (1998) and Gui (2004). Language learning consists of both abstract grammar rules and specific language-use examples.\nA model trained with EXPECT bridges the gap between the two parts: such a model can produce proper error types, automatically helping language learners infer abstract grammar rules from specific errors in an inductive reasoning manner. Further, it allows learners to compare specific errors within the same category and those of different categories, benefiting the learner's inductive and deductive linguistic reasoning abilities.\nTo establish baseline performances for explainable GEC on EXPECT, we explore generation-based, labeling-based, and interaction-based methods. Note that syntactic knowledge also plays a crucial role in the human correction of grammatical errors. For example, the evidence words of subject-verb agreement errors can be more accurately identified with the help of dependency parsing. Motivated by these observations, we further inject the syntactic knowledge produced by an external dependency parser into the explainable GEC model.\nExperiments show that the interaction-based method with prior syntactic knowledge achieves the best performance (F 0.5 =70.77). We conduct detailed analysis to provide insights into developing and evaluating an explainable GEC system. Human evaluations suggest that the explainable GEC systems trained on EXPECT can help second-language learners understand the corrections better. We will release EXPECT (e.g., baseline code, model, and human annotations) on https://github.com/lorafei/Explainable_GEC." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b22", "b19", "b1", "b21", "b24", "b0", "b20", "b2", "b20", "b12", "b13", "b16", "b18", "b11", "b17" ], "table_ref": [], "text": "Some work formulates GEC as a sequence-to-sequence problem. Among them, transformer-based GEC models (Rothe et al., 2021) have attained state-of-the-art performance on several benchmark datasets (Ng et al., 2014;Bryant et al., 2019) using large PLMs (Raffel et al., 2020) and synthetic data (Stahlberg and Kumar, 2021). To avoid the low-efficiency problem of seq2seq decoding, some work (Awasthi et al., 2019;Omelianchuk et al., 2020;Tarnavskyi et al., 2022b) formats GEC as a sequence labeling problem and achieves competitive performance. Both lines of work focus on improving the correction performance and decoding speed but cannot provide users with further suggestions.\nSeveral methods have been proposed to provide explanations for GEC systems. ERRANT (Bryant et al., 2017) designs a rule-based framework as an external function to classify the error type information given a correction. GECToR (Omelianchuk et al., 2020) pre-defines g-transformation tags (e.g., transforming singular nouns to plurals) and uses a sequence labeling model to predict these tags directly as explanations. 
Example-based GEC (Kaneko et al., 2022) adopts the k-Nearest-Neighbor method (Khandelwal et al., 2019) for GEC, which can retrieve examples to improve interpretability. Despite their success, their explanations are restricted by pre-defined grammar rules or unsupervised retrieval. They may not generalize well to real-life scenarios due to the limited coverage of the widely varying errors made by writers. In contrast, our annotated instances are randomly sampled from real-life human-written corpora without restriction, thus providing much larger coverage. Nagata (2019); Nagata et al. (2020); Hanawa et al. (2021), and Nagata et al. (2021) propose a feedback comment generation task and release two corresponding datasets, which, to our knowledge, are the only two publicly available and large-scale datasets focusing on GEC explanations. The task aims to generate a fluent comment describing the erroneous sentence's grammatical error. While this task integrates GEC and Explainable-GEC into one text generation task, we only focus on Explainable-GEC and formulate it as a labeling task, which is easier and avoids the high computational cost of seq2seq decoding. Furthermore, the evaluation of feedback comment generation mainly relies on human annotators to check whether the error types are correctly identified and whether the grammatical error correction is proper in the generated text, which is time-consuming and susceptible to the variations resulting from subjective human judgment. In contrast, our token classification task can be easily and fairly evaluated by automatic metrics (e.g., F-score), favoring future research in this direction." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "To facilitate more explainable and instructive grammatical error correction, we propose EXPECT, an English grammatical error explanation dataset annotated with 15 grammatical error types and corresponding evidence words." }, { "figure_ref": [], "heading": "Data Source", "publication_ref": [ "b1", "b22", "b20" ], "table_ref": [], "text": "We annotate EXPECT based on W&I+LOCNESS (Bryant et al., 2019), which comprises 3,700 essays written by international language learners and native-speaking undergraduates and corrected by English teachers. We first select all sentences with errors from the essays. For a sentence with n errors, we repeat the sentence n times and only keep a single unique error in each sentence. Then, we randomly sample and annotate 15,187 instances as our training set. We do the same for the entire W&I+LOCNESS dev set and split it evenly into test and development sets.\nIn order to better align with real-world application scenarios, we have additionally annotated 1,001 samples based on the outputs of conventional GEC models. We randomly sampled the outputs of T5-large (Rothe et al., 2021) and GECToR-Roberta (Omelianchuk et al., 2020) on the W&I+LOCNESS test set. We also report whether the corrections of the GEC model were correct." }, { "figure_ref": [ "fig_1" ], "heading": "Error Type Definition", "publication_ref": [ "b10" ], "table_ref": [], "text": "Following the cognitive model of second language acquisition (Skehan et al., 1998;Gui, 2004), we design error types across three cognitive levels as follows:\nSingle-word level errors are at the first and lowest cognitive level. These mistakes usually include misuse of spelling, contraction, and orthography, which are often due to misremembering. 
Since there is no clear evidence for those errors, we classify them into the type others.\nInter-word level errors are at the second cognitive level and usually stem from a wrong understanding of the target language. Most error types with clear evidence lie at this level because it represents the interaction between words. This level can be further split into two linguistic categories, a syntax class and a morphology class: (1) From the view of syntax, we have seven error types, including infinitives, gerund, participles, subject-verb agreement, auxiliary verb, pronoun, and noun possessive.\n(2) From the view of morphology, we have five error types, including collocation, preposition, word class confusion, number, and transitive verbs.\nDiscourse level errors are at the highest cognitive level, which requires a full understanding of the context. These errors include punctuation, determiner, verb tense, word order, and sentence structure. Since punctuation, word order, and sentence structure errors have no clear evidence words, we also classify them into the type others.\nThe complete list of error types and corresponding evidence words is shown in Figure 2. The definition of each category is given in Appendix A.1." }, { "figure_ref": [], "heading": "Annotation Procedure", "publication_ref": [], "table_ref": [], "text": "Our annotators are L2-speakers who hold degrees in English and linguistics, demonstrating their proficiency and expertise in English. The data are grouped into batches of 100 samples, each containing an erroneous sentence and its correction. The annotators are first trained on labeled batches until their F 1 scores are comparable to those of the main author. After that, annotators are asked to classify the type of the correction and highlight evidence words that support this correction on the unlabeled batches. The evidence should be informative enough to support the underlying grammar of the correction and, at the same time, complete enough to include all possible evidence words. For each completed batch, an experienced inspector re-checks 10% of the batch to ensure the annotation quality. According to the inspection results, if the F 1 score for the annotation is lower than 90%, the batch is rejected and assigned to another annotator." }, { "figure_ref": [], "heading": "Data Statistics", "publication_ref": [], "table_ref": [], "text": "The detailed statistics of EXPECT are listed in Table 1." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b7" ], "table_ref": [ "tab_2" ], "text": "We treat our task as a token classification task.\nThus, we employ token-level (precision, recall, F 1 , and F 0.5 ) and sentence-level (exact match, label accuracy) evaluation metrics. Specifically, the exact match requires identical error types and evidence words between label and prediction, and the label accuracy measures the classification performance on error types. To explore which automatic metric is more in line with human evaluation, we compute the Pearson correlation (Freedman et al., 2007) between automatic metrics and human judgment.\nAs shown in Table 2, F 0.5 achieves the highest correlation. Precision is also more correlated with human judgment than recall. The reason may be that finding the precise evidence words is more instructive than extracting all evidence words for explainable GEC."
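To make the token-level metrics above concrete, the following short Python sketch shows one way to compute precision, recall, and F0.5 over predicted versus gold evidence-word index sets. It is an illustrative implementation written for this record (the function name `token_prf` is ours), not the official evaluation script.

```python
def token_prf(gold: set, pred: set, beta: float = 0.5):
    """Token-level precision/recall/F_beta for evidence-word extraction.

    gold/pred are sets of token indices marked as evidence words.
    F0.5 (beta=0.5) weights precision more heavily than recall.
    """
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    b2 = beta ** 2
    f_beta = (1 + b2) * precision * recall / (b2 * precision + recall)
    return precision, recall, f_beta

# Example: gold evidence at position {4}, model predicts {4, 7}.
print(token_prf({4}, {4, 7}))  # precision 0.5, recall 1.0, F0.5 ≈ 0.556
```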
}, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "In this section, we define the task of explainable GEC in Section 4.1, and then introduce the labeling-based baseline method in Section 4.2 and the interaction-based method in Section 4.3." }, { "figure_ref": [ "fig_2" ], "heading": "Task Formulation", "publication_ref": [], "table_ref": [], "text": "The task input is a pair of sentences, including an erroneous sentence X = {x_1, x_2, ..., x_n} and the corresponding corrected sentence Y = {y_1, y_2, ..., y_m}. The two sentences usually overlap to a large extent. The difference between the two sentences is defined as a span edit {(s_x, s_y)}. The task of explainable GEC is to find the grammar evidence span E_x within X and predict the corresponding error type class c. Taking Figure 3 as an example, s_x = \"are\", s_y = \"is\", and the evidence span E_x = \"Evidence words\"." }, { "figure_ref": [ "fig_2" ], "heading": "Labeling-based Method", "publication_ref": [ "b9" ], "table_ref": [], "text": "We adopt the labeling-based method for explainable GEC.\nInput. We concatenate the erroneous sentence X and the corresponding error-free sentence Y, formed as [CLS] X [SEP] Y [SEP].\nCorrection Embedding. To enhance the positional information of the correction, we adopt a correction embedding e_c to encode the position of the correction words in sentences X and Y. We further add e_c to the embeddings of the BERT-based structure as follows:\ne = e_t + e_p + e_c (1)\nwhere e_t is the token embedding and e_p is the position embedding.\nSyntactic Embedding. There is a strong relation between evidence words and syntax, as shown in Section 5.3. Hence, we inject prior syntactic information into the model. First, given the corrected sentence Y and its span edit (s_x, s_y), we parse sentence Y with an off-the-shelf dependency parser from the AllenNLP library (Gardner et al., 2018).\nFor each word in s_y, we extract its first-order and second-order dependent words in the dependency parse tree. For example, as shown in Figure 3, for the correction word s_y = \"are\", the first-order dependent word is \"important\", and the second-order dependent words are \"words\" and \"for\"; they are marked separately. By combining the first-order and second-order words of all correction edits, we construct the syntactic vector d_Y ∈ R^m for sentence Y. Dependency parsing is originally designed for grammatical sentences.\nTo acquire the syntactic vector of the erroneous sentence X, we use word alignment to map the syntactic-order information from the corrected sentence to the erroneous sentence, yielding d_X ∈ R^n. We then convert [d_X, d_Y] into a syntactic embedding e_s and add it to the original word embedding:\ne = e_t + e_p + e_c + e_s (2)\nEncoder. We adopt a pre-trained language model (e.g., BERT) as an encoder to encode the input e, yielding a sequence of hidden representations H.\nLabel Classifier. The hidden representation H is fed into a classifier to predict the label of each word. The classifier is composed of a linear classification layer with a softmax activation function:\nl̂_i = softmax(W h_i + b), (3)\nwhere l̂_i is the predicted label for the i-th word, and W and b are the parameters of the softmax layer.\nTraining. The model is optimized with the log-likelihood loss. 
For each sentence, the training objective is to minimize the cross-entropy between the predicted label l̂_i and the gold label l_i (Eq. (4))." }, { "figure_ref": [ "fig_3" ], "heading": "Interaction-based Method", "publication_ref": [ "b4" ], "table_ref": [], "text": "Although labeling-based methods model the paired sentences with a joint encoder, they still predict two separate outputs independently. The dependencies between the erroneous sentence and the corrected sentence are not explicitly modeled. Intuitively, the alignment between the erroneous sentence and the corrected sentence can be highly informative. We propose an interactive matrix to jointly model the alignment and the evidence span. In particular, we adopt a bi-affine classifier to model the multiplicative interactions between the erroneous sentence and the corrected sentence. Assume that the hidden representations of the erroneous sentence and the corrected sentence are H_e and H_c, respectively. We first use two separate feed-forward networks to map the hidden representations into an erroneous query representation and a corrected key representation:\nH_q = W_q H_e + b_e, H_k = W_k H_c + b_c (5)\nThen a bi-affine attention (Dozat and Manning, 2016) is adopted to model the interaction between H_q and H_k:\nM̂ = softmax(H_q U H_k + b_U), (6)\nwhere U ∈ R^{|H|×|H|×|L|}, and |H| and |L| indicate the hidden size and the size of the label set.\nTraining. Similar to the labeling-based method, the training objective is to minimize the cross-entropy between M̂ and M given a labeled gold-standard sentence:\nL = -Σ_i^m Σ_j^n log M̂_ij. (7)\nSyntactic Interactive Matrix. Similar to the syntactic embedding, we use a syntactic interactive matrix to better merge the syntactic knowledge into the model. We construct the syntactic interactive matrix D_syn in the same way as the syntactic embedding above, except that we use an interactive matrix rather than a flat embedding. Figure 4 shows an example of a syntactic matrix, where the row of the correction index in the erroneous sentence is filled with the syntactic vector of the corrected sentence, whereas the column of the correction index in the corrected sentence is filled with the erroneous sentence's syntactic vector. Then a two-layer MLP is used to map D_syn to H_syn:\nH_syn = W_2^syn ReLU(W_1^syn D_syn + b_1^syn) + b_2^syn (8)\nH_syn is then used as an auxiliary term to calculate the interaction matrix M̂. Eq. (6) is reformulated as:\nM̂ = softmax(H_q U H_k + H_syn + b_U). (9)\n5 Experiments" }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [], "table_ref": [], "text": "Human performance is reported. We employ three NLP researchers to label the test set and report the average score as human performance.\nGeneration-based method frames the task in a text generation format. It utilizes a pre-trained generation model to predict the type of error and generate a corrected sentence with the evidence words highlighted by special tokens.\nLabeling-based (error only) method uses only the erroneous sentence as input and predicts the explanation directly.\nLabeling-based (correction only) method uses only the corrected sentence as input and predicts the explanation directly.\nLabeling-based (with appendix) method uses only the erroneous sentence or the corrected sentence and appends the correction words at the end of the sentence.\nLabeling-based (error and correction) method concatenates the erroneous and corrected sentences as described in Section 4.2. "
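As a concrete illustration of the bi-affine interaction described in Section 4.3 above, the PyTorch-style sketch below scores every (erroneous-token, corrected-token) pair with a label-wise bilinear term plus an optional additive syntactic bias, mirroring Eqs. (5)-(9). It is a minimal re-implementation written for this record under assumed tensor shapes, not the authors' released code; all names (e.g., `BiaffineInteraction`) are ours.

```python
import torch
import torch.nn as nn

class BiaffineInteraction(nn.Module):
    """Label-wise bi-affine scorer over pairs of erroneous/corrected token states."""
    def __init__(self, hidden: int, num_labels: int):
        super().__init__()
        self.q_proj = nn.Linear(hidden, hidden)   # maps H_e -> H_q, cf. Eq. (5)
        self.k_proj = nn.Linear(hidden, hidden)   # maps H_c -> H_k
        self.U = nn.Parameter(torch.empty(num_labels, hidden, hidden))
        self.bias = nn.Parameter(torch.zeros(num_labels))
        nn.init.xavier_uniform_(self.U)

    def forward(self, h_err, h_cor, syn_bias=None):
        # h_err: (B, n, hidden), h_cor: (B, m, hidden)
        hq = self.q_proj(h_err)                   # (B, n, hidden)
        hk = self.k_proj(h_cor)                   # (B, m, hidden)
        # scores[b, l, i, j] = hq[b, i] @ U[l] @ hk[b, j], cf. Eq. (6)
        scores = torch.einsum("bih,lhg,bjg->blij", hq, self.U, hk)
        scores = scores + self.bias.view(1, -1, 1, 1)
        if syn_bias is not None:                  # additive syntactic term, cf. Eq. (9)
            scores = scores + syn_bias
        # normalize over the label dimension to get per-pair label probabilities
        return scores.softmax(dim=1)

# Example usage with random hidden states (batch=2, n=7 erroneous tokens, m=8 corrected tokens).
scorer = BiaffineInteraction(hidden=16, num_labels=5)
probs = scorer(torch.randn(2, 7, 16), torch.randn(2, 8, 16))
print(probs.shape)  # torch.Size([2, 5, 7, 8])
```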
" }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "The model performance under different settings are shown in Table 3.\nWe evaluate the model performance across a variety of settings, including generation-based, labeling-based, and interaction-based, as well as syntactic-based and non-syntactic-based. First, we find that generation-based methods do not outperform labeling-based methods and suffer from poor inference efficiency due to auto-regressive decoding. In addition, interaction-based methods exhibit higher precision but lower recall compared to labeling-based methods. This is likely due to the interaction between two sentences helping the model identify more evidence words. Based on labeling-based methods, adding syntactic information has a marginal 0.28 F 0.5 point increase, while for interaction-based methods, the performance increases by 1.94 F 0.5 point. This suggests that syntactic information can generally provide an indication for identifying evidence words. And the interaction matrix better incorporates syntactic information into the model. Particularly, we found correction embeddings are pretty important for this task. With correction embeddings, the performance increases by 2.64 F 0.5 points on Dev set and 1.16 points on Test set. Finally, interaction-based methods with syntactic knowledge achieve the best performance when measured by precision, F 0.5 , exact match, and accuracy." }, { "figure_ref": [ "fig_4" ], "heading": "Impact of Syntactic Knowledge", "publication_ref": [ "b25" ], "table_ref": [], "text": "To further explore the role of syntactic knowledge in boosting the explainable GEC performance, we first analyze the relation between evidence words and correction words' adjacent nodes in the dependency parsing tree. As shown in of instances have at least one evidence word within correction words' first-order nodes, and 27.02% of instances' all evidence words stay within secondorder nodes. We can infer that syntactic knowledge can in a way narrow the search space of extracting evidence words.\nModel Performance across Syntactic Distance.\nWe compare F 0.5 scores for instances whose evidence words are in and out of the 1st and 2nd dependent orders in Figure 5. The overall performance decreases when evidence words are outside the 2nd dependent order, indicating that the model has trouble in handling complex syntactic structure. But after injecting the syntactic knowledge, the performance increases in all sections, suggesting the effectiveness of syntactic representation. Benefit of Syntactic Representation. We report F 0.5 scores on specific error types before and after injecting syntactic information into the models in Figure 6. Dependency parsing is a common tool to detect SVA (Sun et al., 2007). The performance on SVA indeed increases with the syntax. We also find four other error types which are closely associated with syntactic information, including auxiliary verb, collocation, POS confusion and number, whose performance increases significantly for both the labeling-based method and the interactionbased method." }, { "figure_ref": [], "heading": "Impact of Sentence Length", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Table 5 illustrates the model performance across different lengths of erroneous sentences. As the sentence length increases, the performance of all methods decreases significantly, which is consistent with human intuition. 
Longer sentences may contain more complex syntactic and semantic structures, which are challenging for models to capture." }, { "figure_ref": [], "heading": "Result on Real-world GEC System", "publication_ref": [], "table_ref": [], "text": "We employ the gold correction as the input during both the training phase and the inference phase. However, in a practical scenario, this input would be replaced with the output of a GEC system. To evaluate the performance of the explainable system equipped with real-world GEC systems, we use the interaction-based method with syntactic knowledge trained on EXPECT and directly test it on the samples annotated from the outputs of GEC models on the W&I+LOCNESS test set. The F 0.5 scores obtained are 57.43 for T5-large outputs and 60.10 for GECToR-Roberta outputs, which significantly underperform the 68.39 obtained with gold corrections. This may be caused by the training-inference gap mentioned above and by error propagation from the GEC system." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [ "b20" ], "table_ref": [], "text": "To assess the effectiveness of explainable GEC in helping second-language learners understand corrections, we randomly sample 500 instances with gold GEC corrections and 501 outputs decoded by an off-the-shelf GEC system, GECToR (Omelianchuk et al., 2020), and predict their evidence words and error types using the interaction-based model with syntactic knowledge. We recruit 5 second-language learners as annotators to evaluate whether the predicted explanation is helpful in understanding the GEC corrections. The results show that 84.0 and 82.4 percent of the model predictions for the gold GEC corrections and the GECToR outputs, respectively, have explanations, and that 87.9 and 84.5 percent of these explanations, respectively, are helpful for a language learner to understand the correction and correct the sentence. This shows that an explainable GEC system trained on EXPECT can be used as a post-processing module for current GEC systems." }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "Case Study", "publication_ref": [ "b15" ], "table_ref": [], "text": "We identify two phenomena from our syntactic and non-syntactic labeling-based models:\nDistant Words Identification. The non-syntactic model makes errors because it does not incorporate explicit syntactic modeling, particularly in long and complex sentences where it is difficult to identify distant evidence words. As shown in the first case of Figure 7, the non-syntactic model fails to consider evidence words, such as \"apply\", that are located far away from the correction. However, the syntactic model is able to identify the evidence word \"apply\".\nDependency Parsing Errors. Some evidence word identification errors stem from misleading parsing results in long sentences (Ma et al., 2018). As shown in the second case of Figure 7, the model with syntactic knowledge actually uses an inaccurate parse tree (in the green box) from the off-the-shelf parser, which results in identifying the redundant word \"off\"." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduce EXPECT, an explainable dataset for grammatical error correction, which contains 21,017 instances with evidence words and error categorization annotations. We implement several models and perform a detailed analysis to understand the dataset better. Experiments show that injecting syntactic knowledge can help models boost their performance. 
Human evaluation verifies that the explanations provided by the proposed explainable GEC systems are effective in helping second-language learners understand the corrections. We hope that EXPECT facilitates future research on building explainable GEC systems." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The limitations of our work can be viewed from two perspectives. Firstly, we have not thoroughly investigated seq2seq architectures for explainable GEC. Secondly, the current input of the explainable system is the gold correction during training, whereas, in practical applications, the input would be the output of a GEC system. We have not yet explored methods to bridge this gap." }, { "figure_ref": [], "heading": "Ethics Consideration", "publication_ref": [], "table_ref": [], "text": "We annotate the proposed dataset based on W&I+LOCNESS, which has no copyright constraints for academic use. For human annotation (Section 3.3 and Section 5.6), we recruit our annotators from the linguistics departments of local universities through public advertisement with a specified pay rate. All of our annotators are senior undergraduate students or graduate students in linguistics majors who took this annotation as a part-time job.\nWe pay them 60 CNY an hour. The local minimum salary in 2022 is 25.3 CNY per hour for part-time jobs. The annotation does not involve any personally sensitive information; the annotators are only required to label factual information (i.e., evidence words inside the sentence)." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Grammatical Error Categories", "publication_ref": [], "table_ref": [], "text": "The definition of each grammatical error category in EXPECT is as follows:\n• Infinitives: errors such as a missing to before certain verbs for to-infinitives, or an unnecessary to after modal verbs for zero-infinitives.\n• Gerund: misuse of the verb form that should act as a noun in a sentence.\n• Participles: confusion with ordinary verb forms such as the present simple, past simple, or present continuous, and other participle-related situations.\n• Subject-verb agreement (SVA): the verb does not agree with the number of the subject.\n• Auxiliary verb: misuse of main auxiliary verbs such as do and have, or modal auxiliary verbs such as could, may, should, etc.\n• Verb tense: incongruities in verb tenses, such as an erroneous tense shift in a compound sentence.\n• Pronoun-antecedent agreement (PAA): pronouns do not agree in number, person, or gender with their antecedents.\n• Possessive: misuse of possessive adjectives and possessive nouns.\n• Collocation: atypical word combinations that are grammatically acceptable but not common.\n• Preposition: misuse of prepositional words.\n• POS confusion: confusion in part of speech, such as noun/adjective confusion (e.g., difficulty, difficult), adjective/adverb confusion (e.g., ready, readily), etc.\n• Article: incorrect use of articles.\n• Number: confusion between the singular and plural forms of nouns.\n• Transition: extra preposition after transitive verbs and missing preposition after intransitive verbs." }, { "figure_ref": [], "heading": "A.2 Implementation Details", "publication_ref": [ "b28" ], "table_ref": [], "text": "We employ the pre-trained BERT-large-cased model from HuggingFace's Transformers library (Wolf et al., 2020) as our encoder, which consists of 24 Transformer layers and 16 attention heads with 1024 hidden dimensions.
We set the dimension of the correction embeddings and syntactic embeddings to 1024, the same as in BERT. We set the learning rate to 1e-5 and the batch size to 32 for non-interactive matrix models, and to 5e-5 and 16 for interactive matrix models." } ]
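The implementation details above (a BERT-large-cased encoder, 1024-dimensional correction embeddings, and a per-token softmax head) can be summarized in a short sketch. The snippet below is a hypothetical illustration rather than the authors' released code: the class name, the size of the tag set, and the exact way the correction mask is injected are assumptions made for clarity.

```python
# Hypothetical sketch of a labeling-based evidence extractor with correction embeddings.
import torch.nn as nn
from transformers import BertModel

class EvidenceTagger(nn.Module):
    def __init__(self, num_tags: int = 3, hidden: int = 1024):
        super().__init__()
        # 24-layer, 16-head, 1024-dim encoder, as stated in the implementation details
        self.encoder = BertModel.from_pretrained("bert-large-cased")
        # 0 = ordinary token, 1 = token inside the correction span
        self.correction_emb = nn.Embedding(2, hidden)
        self.classifier = nn.Linear(hidden, num_tags)  # per-token logits, i.e. softmax(W h_i + b)

    def forward(self, input_ids, attention_mask, correction_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        h = h + self.correction_emb(correction_mask)  # mark which tokens were corrected
        return self.classifier(h)
```

Under the reported setup, such a model would be trained with a learning rate of 1e-5 and a batch size of 32 (5e-5 and 16 for the interactive-matrix variant).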
2023-06-10
10.18653/v1/2022.acl-long.266
[ { "authors": "Abhijeet Awasthi; Sunita Sarawagi; Rasna Goyal; Sabyasachi Ghosh; Vihari Piratla", "journal": "", "ref_id": "b0", "title": "Parallel iterative edit models for local sequence transduction", "year": "2019" }, { "authors": "Christopher Bryant; Mariano Felice; Øistein E Andersen; Ted Briscoe", "journal": "", "ref_id": "b1", "title": "The bea-2019 shared task on grammatical error correction", "year": "2019" }, { "authors": "Christopher Bryant; Mariano Felice; Edward Briscoe", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Automatic annotation and evaluation of error types for grammatical error correction", "year": "2017" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b3", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Timothy Dozat; Christopher D Manning", "journal": "", "ref_id": "b4", "title": "Deep biaffine attention for neural dependency parsing", "year": "2016" }, { "authors": "Rod Ellis; Shawn Loewen; Rosemary Erlam", "journal": "Studies in second language acquisition", "ref_id": "b5", "title": "Implicit and explicit corrective feedback and the acquisition of l2 grammar", "year": "2006" }, { "authors": "Simon Flachs; Felix Stahlberg; Shankar Kumar", "journal": "", "ref_id": "b6", "title": "Data strategies for low-resource grammatical error correction", "year": "2021" }, { "authors": "David Freedman; Robert Pisani; Roger Purves", "journal": "", "ref_id": "b7", "title": "Statistics", "year": "2007" }, { "authors": "R Pisani; Purves", "journal": "WW Norton & Company", "ref_id": "b8", "title": "", "year": "" }, { "authors": "Matt Gardner; Joel Grus; Mark Neumann; Oyvind Tafjord; Pradeep Dasigi; Nelson Liu; Matthew Peters; Michael Schmitz; Luke Zettlemoyer", "journal": "", "ref_id": "b9", "title": "Allennlp: A deep semantic natural language processing platform", "year": "2018" }, { "authors": "Shichun Gui", "journal": "Modern Foreign Languages(Quarterly)", "ref_id": "b10", "title": "A cognitive model of corpusbased analysis of chinese learners' errors of english", "year": "2004" }, { "authors": "Kazuaki Hanawa; Ryo Nagata; Kentaro Inui", "journal": "", "ref_id": "b11", "title": "Exploring methods for generating feedback comments for writing learning", "year": "2021" }, { "authors": "Masahiro Kaneko; Sho Takase; Ayana Niwa; Naoaki Okazaki", "journal": "", "ref_id": "b12", "title": "Interpretability for language learners using example-based grammatical error correction", "year": "2022" }, { "authors": "Urvashi Khandelwal; Omer Levy; Dan Jurafsky; Luke Zettlemoyer; Mike Lewis", "journal": "", "ref_id": "b13", "title": "Generalization through memorization: Nearest neighbor language models", "year": "2019" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Ves Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b14", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2019" }, { "authors": "Xuezhe Ma; Zecong Hu; Jingzhou Liu; Nanyun Peng; Graham Neubig; Eduard Hovy", "journal": "", "ref_id": "b15", "title": "Stackpointer networks for dependency parsing", "year": "2018" }, { "authors": "Ryo Nagata", "journal": "", "ref_id": "b16", "title": "Toward a task of feedback comment generation for writing learning", "year": "2019" }, { "authors": "Ryo Nagata; Masato Hagiwara; Kazuaki 
Hanawa; Masato Mita; Artem Chernodub; Olena Nahorna", "journal": "", "ref_id": "b17", "title": "Shared task on feedback comment generation for language learners", "year": "2021" }, { "authors": "Ryo Nagata; Kentaro Inui; Shin'ichiro Ishikawa", "journal": "", "ref_id": "b18", "title": "Creating corpora for research in feedback comment generation", "year": "2020" }, { "authors": "Tou Hwee; Ng; Mei Siew; Ted Wu; Christian Briscoe; Raymond Hendy Hadiwinoto; Christopher Susanto; Bryant", "journal": "", "ref_id": "b19", "title": "The conll-2014 shared task on grammatical error correction", "year": "2014" }, { "authors": "Kostiantyn Omelianchuk; Vitaliy Atrasevych; Artem Chernodub; Oleksandr Skurzhanskyi", "journal": "", "ref_id": "b20", "title": "Gector-grammatical error correction: tag, not rewrite", "year": "2020" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b21", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Sascha Rothe; Jonathan Mallinson; Eric Malmi; Sebastian Krause; Aliaksei Severyn", "journal": "", "ref_id": "b22", "title": "A simple recipe for multilingual grammatical error correction", "year": "2021" }, { "authors": "Shuming Shi; Enbo Zhao; Bi Wei; Cai Deng; Leyang Cui; Xinting Huang; Haiyun Jiang; Duyu Tang; Kaiqiang Song; Wang Longyue; Chengyan Huang; Guoping Huang; Yan Wang; Li Piji", "journal": "Oxford University Press", "ref_id": "b23", "title": "Effidit: An assistant for improving writing efficiency", "year": "1998" }, { "authors": "Felix Stahlberg; Shankar Kumar", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Synthetic data generation for grammatical error correction with tagged corruption models", "year": "2021" }, { "authors": "Guihua Sun; Gao Cong; Xiaohua Liu; Chin-Yew Lin; Ming Zhou", "journal": "", "ref_id": "b25", "title": "Mining sequential patterns and tree patterns to detect erroneous sentences", "year": "2007" }, { "authors": "Maksym Tarnavskyi; Artem Chernodub; Kostiantyn Omelianchuk", "journal": "", "ref_id": "b26", "title": "Ensembling and knowledge distilling of large sequence taggers for grammatical error correction", "year": "2022" }, { "authors": "Maksym Tarnavskyi; Artem Chernodub; Kostiantyn Omelianchuk", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Ensembling and knowledge distilling of large sequence taggers for grammatical error correction", "year": "2022" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b28", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Yue Zhang; Bo Zhang; Zhenghua Li; Zuyi Bao; Chen Li; Min Zhang", "journal": "", "ref_id": "b29", "title": "Syngec: Syntax-enhanced grammatical error correction with a tailored gecoriented parser", "year": "2022" } ]
[ { "formula_coordinates": [ 5, 116.91, 547.71, 103.2, 8.76 ], "formula_id": "formula_0", "formula_text": "[CLS]X[SEP]Y [SEP]." }, { "formula_coordinates": [ 5, 358.21, 676.81, 166.93, 13.51 ], "formula_id": "formula_1", "formula_text": "li = softmax(Wh i + b),(3)" }, { "formula_coordinates": [ 6, 136.9, 629.3, 152.97, 30.15 ], "formula_id": "formula_2", "formula_text": "H q = W q H e + b e H k = W k H c + b c (5)" }, { "formula_coordinates": [ 6, 109.44, 722.66, 180.43, 12.48 ], "formula_id": "formula_3", "formula_text": "M = sof tmax(H q UH k + b U ),(6)" }, { "formula_coordinates": [ 6, 363.12, 129.95, 157.78, 33.71 ], "formula_id": "formula_4", "formula_text": "L = - m i n j log Mij . (7" }, { "formula_coordinates": [ 6, 520.9, 141.93, 4.24, 9.46 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 6, 317.06, 365.57, 207.95, 11.77 ], "formula_id": "formula_6", "formula_text": "H syn = W syn 2 RELU(W syn 1 D syn + b syn 1 ) + b syn 2 (8)" }, { "formula_coordinates": [ 6, 339.3, 419.07, 182.22, 10.33 ], "formula_id": "formula_7", "formula_text": "M = sof tmax(H q UH k + H syn + b U ). (9" }, { "formula_coordinates": [ 6, 521.53, 421.62, 3.48, 7.77 ], "formula_id": "formula_8", "formula_text": ")" } ]
Enhancing Grammatical Error Correction Systems with Explanations
Grammatical error correction (GEC) systems improve written communication by detecting and correcting language mistakes. To help language learners better understand why a GEC system makes a certain correction, the causes of errors (evidence words) and the corresponding error types are two key factors. To enhance GEC systems with explanations, we introduce EXPECT, a large dataset annotated with evidence words and grammatical error types. We propose several baselines and analyses to understand this task. Furthermore, human evaluation verifies that our explainable GEC system's explanations can assist second-language learners in determining whether to accept a correction suggestion and in understanding the associated grammar rule.
Yuejiao Fei; Leyang Cui; Sen Yang; Wai Lam; Zhenzhong Lan; Shuming Shi
[ { "figure_caption": "Figure 1 :1Figure 1: Comparison between explainable GEC and conventional GEC systems.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Examples of each error type and corresponding evidence words in EXPECT. Blue text indicates the correction, while red text indicates the evidence words. SVA means subject-verb agreement, PAA means pronounantecedent agreement, POS confusion means part-of-speech confusion.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: An illustration of labeling-based methods with syntax for solving explainable GEC. On the right is the dependency parsing tree of the corrected sentence, where the correction word are is marked in red, and 1st and 2nd-order nodes are marked with red circles.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Syntactic Interactive Matrix.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: F 0.5 score comparison of evidence words in first and second order nodes.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Undertaking a scholarship and admission to one of the universities I have selected above will provide me with the opportunity to apply the knowledge gained at high school [into->in] a business setting. Gold: preposition error, [apply, a business setting] Labeling-based: preposition error, [a business setting] Labeling-based + syntax: preposition error, [apply, a business setting] Gold: gerund error, [end up] Labeling-based: gerund error, [end up] Labeling-based + syntax: gerund error, [off, end up] order On the other hand, many teens who take a year off end up [to spend->spending] it in the wrong way .", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Case study. The first case shows the identification problem for distant evidence words. The second case shows the error caused by wrong dependency parsing results.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Data StatisticsTrainDevTestOutputsNumber of sentences15,1872,4132,4161001Number of words435,503 70,111 70,61927,262Avg. w.p.s28.6829.0629.2327.23With evidence rate74.1559.1059.7772.73Total evidence words29,1874,2804,3401736Avg. evidence w.p.s2.593.003.012.38", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Take the train set for example, the average number of words per sentence is 28.68, and 74.15% of the entire dataset has explainable evidence. 
Among all sentences with evidence words, the average number of words per evidence is 2.59.", "figure_data": "Precision RecallF 1F 0.5 Exact Match0.4690.410 0.463 0.4710.342", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Pearson correlation between human judgment and different automatic evaluation metrics.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": " 73.27 Correction+Appendix 64.85 55.74 59.95 62.80 50.00 74.36 61.86 54.45 57.92 60.22 47.66 72.98 Error+Correction 67.82 57.51 62.24 65.47 50.60 72.42 68.91 57.94 62.95 66.39 59.19 77.31 Error+Correction+CE 69.76 62.20 65.77 68.11 54.09 75.65 69.44 60.93 64.91 67.55 61.39 79.14 Error+Correction+CE+Syntax 70.06 62.44 66.03 68.39 55.21 76.57 68.23 61.23 64.54 66.71 61.26 78.93 Interaction-based Error+Correction+CE 71.63 59.54 65.03 68.83 63.04 80.05 68.47 59.14 63.46 66.38 66.28 81.17 Error+Correction+CE+Syntax 74.77 58.31 65.52 70.77 64.58 81.34 73.05 56.45 63.69 68.99 67.81 81.79 Model performance on EXPECT. EM means Exact Match, CE means correction embeddings.", "figure_data": "DevTestMethodsPRF 1F 0.5EMAccPRF 1F 0.5EMAccHuman------77.50 75.98 76.73 77.19 69.00 87.00Generation-based", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "46.71%", "figure_data": "Count RatioExist evidence word in 1st7,094 46.71Exist evidence word in 2st7,723 50.85All evidence words in 1st2,528 16.65All evidence words in 2st4,103 27.02", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Statistics of training set evidence words within first-order and second-order nodes.", "figure_data": "76Labeling-basedLabeling-based+SyntaxInteraction-basedInteraction-based+Syntax73706764In 1st orderIn 2nd orderOutside 2nd order表格 5-1BaselineSyntacticInteractive Matrix SyntacticembeddingInteractive MatrixIn 1st order nodes70.7271.5070.7873.20In 2nd order nodes70.7470.9570.4372.22Outside 2nd order nodes66.0266.3067.4369.49", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Model performance F 0.5 scores across sentence length.", "figure_data": "", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "(Rothe et al., 2021)", "Explanation": "The cited work by Rothe et al. provides a sequence-to-sequence method for building GEC systems, which the citing paper adopts to detect and correct grammatical errors in a given sentence."}, {"Category": "Supporting Evidence", "Citation": "(Flachs et al., 2021)", "Explanation": "The cited work by Flachs et al. also contributes a sequence-to-sequence method for building GEC systems, which the citing paper may have considered in their research."}, {"Category": "Supporting Evidence", "Citation": "(Zhang et al., 2022)", "Explanation": "The cited work by Zhang et al. provides another sequence-to-sequence method for building GEC systems, which the citing paper may have used in their study to detect and correct grammatical errors."}, {"Category": "Supporting Evidence", "Citation": "(Omelianchuk et al., 2020)", "Explanation": "The cited work by Omelianchuk et al. presents a sequence labeling method for building GEC systems, which the citing paper may have considered in their research to transform the target into a sequence of text-editing actions."}, {"Category": "Supporting Evidence", "Citation": "(Tarnavskyi et al., 2022a)", "Explanation": "The cited work by Tarnavskyi et al. also contributes a sequence labeling method for building GEC systems, which the citing paper may have used in their study to transform the target into a sequence of text-editing actions."}, {"Category": "Supporting Evidence", "Citation": "(Devlin et al., 2018)", "Explanation": "The cited work by Devlin et al. (2018) is a foundational study in the field of large pre-trained models, which the citing paper leverages to underpin its research on GEC systems."}, {"Category": "Supporting Evidence", "Citation": "(Lewis et al., 2019)", "Explanation": "The cited work by Lewis et al. (2019) is another key study in the area of large pre-trained models, providing valuable insights and methodologies that the citing paper builds upon in its research."}, {"Category": "Data Source", "Citation": "(Ng et al., 2014)", "Explanation": "The cited work by Ng et al. (2014) is a data source for the GEC corpora used in the citing paper, acknowledging the origin of the data and its importance in the study conducted."}, {"Category": "Data Source", "Citation": "(Bryant et al., 2019)", "Explanation": "The cited work by Bryant et al. (2019) is another data source for the GEC corpora utilized in the research of the citing paper, highlighting the reliance on external data in the study conducted."}, {"Category": "Methodological Basis", "Citation": "(Omelianchuk et al., 2020)", "Explanation": "The cited work by Omelianchuk et al. (2020) provides a methodological basis for the academic GEC systems discussed in the citing paper, serving as a foundational study for the research conducted."}, {"Category": "Methodological Basis", "Citation": "(Rothe et al., 2021)", "Explanation": "The cited work by Rothe et al. (2021) is another methodological study that the citing paper builds upon in its research on academic GEC systems, contributing to the development of the field."}, {"Category": "Supporting Evidence", "Citation": "(Shi et al., 2023)", "Explanation": "The cited work by Shi et al. 
(2023) is mentioned as a commercial system that provides evidence words in limited cases, which is a key element in the citing paper's discussion on the importance of providing grammar-aware explanations and evidence words in second-language education scenarios."}, {"Category": "Data Source", "Citation": "(Bryant et al., 2019)", "Explanation": "The cited work serves as the standard GEC benchmark for the construction of the EXPECT dataset in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Skehan et al., 1998)", "Explanation": "The cited work by Skehan et al. provides a well-defined set of error categories that the citing paper uses to classify the error types in the grammatical errors."}, {"Category": "Supporting Evidence", "Citation": "(Gui, 2004)", "Explanation": "The cited work by Gui provides additional insights on the error types in language learning, which the citing paper uses to further refine the classification of error types in the grammatical errors."}, {"Category": "Methodological Basis", "Citation": "(Rothe et al., 2021)", "Explanation": "The cited work by Rothe et al. (2021) has attained state-of-the-art performance in GEC using large PLMs and synthetic data, which the citing paper builds upon to improve the correction performance and decoding speed in GEC systems."}, {"Category": "Extension or Continuation", "Citation": "(Stahlberg and Kumar, 2021)", "Explanation": "The cited work by Stahlberg and Kumar (2021) has used large PLMs and synthetic data to improve the performance of GEC systems, which the citing paper extends by further improving the correction performance and decoding speed in GEC systems."}, {"Category": "Data Source", "Citation": "(Raffel et al., 2020)", "Explanation": "The cited work by Raffel et al. (2020) has introduced large PLMs that the citing paper utilizes in GEC systems to improve the correction performance and decoding speed."}, {"Category": "Data Source", "Citation": "(Ng et al., 2014;Bryant et al., 2019)", "Explanation": "The cited works by Ng et al. (2014) and Bryant et al. (2019) have used benchmark datasets in GEC systems, which the citing paper uses to evaluate the performance of the GEC systems in terms of correction performance and decoding speed."}, {"Category": "Methodological Basis", "Citation": "(Awasthi et al., 2019;Omelianchuk et al., 2020;Tarnavskyi et al., 2022b)", "Explanation": "The cited works by Awasthi et al. (2019), Omelianchuk et al. (2020), and Tarnavskyi et al. (2022b) have formulated GEC as a sequence labeling problem, which the citing paper adopts to improve the correction performance and decoding speed in GEC systems."}, {"Category": "Supporting Evidence", "Citation": "(Bryant et al., 2017)", "Explanation": "The cited work by Bryant et al. (2017) has designed a rule-based framework to classify the error type information in GEC systems, which provides supporting evidence for the citing paper to further improve the correction performance and decoding speed in GEC systems."}, {"Category": "Data Source", "Citation": "(Kaneko et al., 2022)", "Explanation": "The cited work by Kaneko et al. provides the k-Nearest-Neighbor method for GEC, which the citing paper adopts to improve interpretability in GEC."}, {"Category": "Extension or Continuation", "Citation": "(Hanawa et al., 2021)", "Explanation": "The cited work by Hanawa et al. 
proposes a feedback comment generation task and releases a dataset for GEC explanations, which the citing paper extends by providing a larger and more diverse dataset for GEC research."}, {"Category": "Data Source", "Citation": "(Bryant et al., 2019)", "Explanation": "The cited work provides the W&I+LOCNESS dataset, which the citing paper uses to select and annotate sentences with errors for training and testing the GEC model."}, {"Category": "Extension or Continuation", "Citation": "(Rothe et al., 2021)", "Explanation": "The cited work provides the T5-large model, which the citing paper uses to sample output for additional annotation and testing of the GEC model."}, {"Category": "Extension or Continuation", "Citation": "(Omelianchuk et al., 2020)", "Explanation": "The cited work provides the GECToR-Roberta model, which the citing paper uses to sample output for additional annotation and testing of the GEC model."}, {"Category": "Methodological Basis", "Citation": "(Skehan et al., 1998)", "Explanation": "The cited work provides a cognitive model of second language acquisition that serves as the basis for the design of error types in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Gui, 2004)", "Explanation": "The cited work also contributes to the design of error types in the citing paper by providing a specific model of second language acquisition."}, {"Category": "Supporting Evidence", "Citation": "(Skehan et al., 1998)", "Explanation": "The cited work provides evidence for the existence of error types in the first and lowest cognitive level, which is used to support the classification of single-word level errors in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Gui, 2004)", "Explanation": "The cited work also provides evidence for the existence of error types in the second cognitive level, which is used to support the classification of inter-word level errors in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Skehan et al., 1998)", "Explanation": "The citing paper extends the research of the cited work by further exploring the design of error types in the first and lowest cognitive level."}, {"Category": "Extension or Continuation", "Citation": "(Gui, 2004)", "Explanation": "The citing paper also extends the research of the cited work by exploring the design of error types in the second cognitive level."}, {"Category": "Methodological Basis", "Citation": "(Freedman et al., 2007)", "Explanation": "The cited work by Freedman et al. introduces the concept of Pearson correlation, which the citing paper adopts to measure the correlation between automatic metrics and human judgment in the context of GEC evaluation."}, {"Category": "Methodological Basis", "Citation": "(Gardner et al., 2018)", "Explanation": "The cited work provides the off-the-shell dependency parser used in the syntactic embedding process in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Sun et al., 2007)", "Explanation": "The cited work by Sun et al. (2007) is used to detect SVA in the citing paper, which indicates a methodological basis for the study conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Omelianchuk et al., 2020)", "Explanation": "The cited work by Omelianchuk et al. (2020) serves as a reference for the GEC system used in the citing paper, which is the GECTOR system. 
The citing paper further extends the research by using the GECTOR system as a post-processing module for the current GEC system."}, {"Category": "Methodological Basis", "Citation": "(Ma et al., 2018)", "Explanation": "The cited work by Ma et al. (2018) provides the off-the-shelf parser used in the syntactic-based model for identifying evidence words in long sentences, which serves as a methodological basis for the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Wolf et al., 2020)", "Explanation": "The cited work provides the pre-trained BERT-large-cased model used in the citing paper as the encoder for their research on non-interactive and interactive matrix models."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6" ], "table_ref": [], "text": "In the past decade, the development of information technology has led to a shift in the main carrier of information from text to images and then to videos. Moreover, with the rise of User-generated Content (UGC), the producer of information has shifted from Occupationally-generated Content (OGC) to UGC. As a result, a large number of videos have emerged on social media platforms and have been widely shared, leading to the increasingly important and challenging problems of video copyright protection. The video copy detection task can always be divided into two parts: the descriptor task, which is used to recall similar videos, and the matching task, which is used to locate the copied segment. In this report, we summarize our work on the Matching Track of the Meta AI Video Similarity Challenge.\nFor the matching task, two important problems arise. The first is deciding which feature to use: the embedding or the similarity matrix. The second is determining how to match the video copy segments. We first consider the feature. It is reasonable choice to share features among the matching task and the descriptor task, as two task are highly correlated. Extracting features independently for each task would require double the computing resources, so sharing features can reduce the computation cost. As to the model input, the advantage of using embedding for the matching task is that it contains more information and can be used for further tuning [3] . However, the drawback is that the matching model must be changed when the descriptor model is changed, as two different embedding models may have little correlation with each other. While the similarity matrix is more robust as changes of embedding model does not change the characteristic of similarity matrix, and even the use of different embedding models can expand the limited annotations. Finally, we choose the similarity matrix as model input.\nAs to the video copy segments, we found several academic approaches. First is the Temporal Network(TN) [7], a graph-based method takes matched frames as nodes and similarities between frames as weights of links to construct a network. This is also the baseline, but we found this method is hard to modify and optimize. Similarity Pattern Detection (SPD) [5] adopts detection method to direct output the result. We attempted to use this method, but encountered difficulties in optimizing the model with limited annotations. We also explored TransVCL [3], but ultimately decided to against it. Because this method relies on frame embedding as input, which does not align with our project's objectives. Most importantly, we discovered that the pri- mary challenge here is not simply outputting the results in an end-to-end way, but rather obtaining a cleaner matching relationship in comparison to the raw similarity matrix input. To address this, our SAM was specifically designed to take a similarity matrix as input and output a score matrix with the same resolution, with significantly improved matching relationships." }, { "figure_ref": [ "fig_1" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we will introduce the whole pipeline we used to develop our result. As shown in figure 2. Our pipeline include the preprocessing, embedding extraction, similarity matrix filtering, and the SAM model processing." 
}, { "figure_ref": [ "fig_2", "fig_0" ], "heading": "Preprocessing", "publication_ref": [ "b0" ], "table_ref": [], "text": "Video frames were extracted at one frame per second, but many videos contained multiple scenes in one frame or extra edges. Canny [1] edge detection and frame pixel standard deviation feature was used to address this issue. First, we average the edge detection results from multiple frames to get more robust edges. Then we use frame pixel standard deviation feature to find potential background. The whole processing is done recursively by: 1) A split images function divides a video into segments based on: a) the vertical or horizontal edges that extend across the frame; b) a low pixel standard deviation zone that split video vertically or horizontally. 2) A edge erase function that remove low variant parts of videos. With a video input, it stops when the processing parts doesn't change in size or has too low resolution. Figure 3 shows the feature we used in preprocessing. Figure 1. reveals the typical edits in stacked frames and extra edges. " }, { "figure_ref": [], "heading": "Embedding Model", "publication_ref": [], "table_ref": [], "text": "As previously mentioned, our method utilizes the embedding model from the Descriptor track, using the similarity matrix as the input for matching task. This allows our model to be resilient to changes in the embedding, and multiple embeddings of the same (query, reference) pair can be utilized as a form of data augmentation.\nTo maximize efficiency and recall, a recall process is employed to identify potential copied video pairs. This process only considers the similarity of descriptor features and utilizes a low recall threshold, resulting in a high number of potential matches being identified but is not copied at all. We utilized a classification process that takes a similarity matrix as input and outputs the probability that a video pair is duplicated. Our chosen classification model is the MobileNet-v3 [4], pre-trained by Image-net. This approach successfully removed 95% of the recalled samples without impacting overall performance." }, { "figure_ref": [ "fig_3" ], "heading": "Similarity Alignment Model", "publication_ref": [ "b0", "b0" ], "table_ref": [], "text": "We choose the similarity matrix as model input. The query/reference longer than 128 seconds is truncated, while the query/reference video shorter than 128 are padding to 128 with zero embedding. So our SAM model takes (128, 128) resolution similarity matrix as input. For query videos that has been splited into multiple frames in preprocessing. We choose the segment with max top matching similarity as target pair. Other segments with lower matching similarity are discarded.\nTo build a model that output matching relationship, one possible approach is to use models such as key point detection or semantic segmentation. The key idea here is that the model should both learn global information which helps to recognize the real matching parts and the local information for percise detection. So we choose the high resolution network(HRnet-w18) [6] as our backbone. There are two major changes to the model: 1) Our model outputs the same resolution feature maps as the input. We do it by setting the first two convolution stride to (1,1). 2) The target output has been changed to a heat-map generated by annotations to accurately reflect the real matching relationship. Figure 4. shows some model detection result and the post-processing result." 
}, { "figure_ref": [], "heading": "Postprocessing", "publication_ref": [], "table_ref": [], "text": "The SAM model outputs a matching relationship matrix, but post-processing is required before submitting the final result. This involves: 1) using a filter threshold to remove false positive matches, 2) identifying multiple detections with the Connected Components algorithm, and 3) detecting the matching relation with RANSAC [2] regression, which is effective for detecting linear relationships in video copies. The final submitted score s is an ensemble of SAM score predictions: c = max(1/coef, coef), s = mean(score) - α * std(score) - |c - 1| / 10, where coef is the slope of the RANSAC regression, score is the list of matched scores for points that are first filtered by the score threshold t and then by the RANSAC inliers, and α is the weight of the score-variance penalty. For the final submission, we ensemble results with parameters (t, α) equal to (0.35, 0.5), (0.1, 1.25), and (0.001, 2)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "The SAM model was trained using 1 A100 GPU. After applying the classification filter, there were 10,000 pairs of samples, and their labels were generated through annotations. The resolution of the similarity matrix used for training the SAM model is 128x128, and the batch size is 64. As mentioned earlier, four embedding models from the descriptor track were used to generate the similarity matrices. Therefore, the training sample size for each round is approximately 20,000 pairs. It takes around 3 hours to finish training.\nTo generate the final submission, the two cross-validation models were ensembled by averaging their predicted scores. As for the different embedding models, SAM is directly evaluated on the similarity matrix of their PCA-ensembled features. On the Matching Track, we achieved first place in both phases, with a 0.108 µAP / 0.144 µAP absolute improvement over the second-place competitor in Phase 1 / Phase 2." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This report introduces SAM for video copy segment matching. By modifying the structure and training target of a high-resolution network, our model takes a similarity matrix as input and outputs a high-quality video segment matching score. Based on this model, we achieved 1st rank on the VSC 2022 Matching Track." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "-Submission/tree/main/VSC22-Matching-Track-1st." } ]
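The RANSAC-based segment scoring described in the Postprocessing section can be written down compactly. The following is a minimal sketch under stated assumptions: the data layout, the Connected-Components grouping step, and edge-case handling are simplified and are not taken from the released code.

```python
# Minimal sketch of the RANSAC-based segment scoring described above.
import numpy as np
from sklearn.linear_model import RANSACRegressor

def segment_score(q_times, r_times, scores, t=0.35, alpha=0.5):
    """q_times/r_times: matched query/reference timestamps; scores: SAM matching scores."""
    keep = scores >= t                                   # 1) drop low-confidence matches
    if keep.sum() < 2:
        return 0.0
    q, r, s = q_times[keep], r_times[keep], scores[keep]
    ransac = RANSACRegressor().fit(q.reshape(-1, 1), r)  # 3) fit a linear copy relation
    s = s[ransac.inlier_mask_]                           # keep RANSAC inlier points only
    coef = float(ransac.estimator_.coef_[0])
    c = max(coef, 1.0 / coef) if coef > 0 else float("inf")
    # final score: mean - alpha * std - slope penalty, as in the report
    return float(s.mean() - alpha * s.std() - abs(c - 1.0) / 10.0)
```

Each of the three submitted parameter settings, (t, α) = (0.35, 0.5), (0.1, 1.25), and (0.001, 2), would yield one such score per candidate segment, and these are then ensembled for the final submission.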
2023-05-25
[ { "authors": "John Canny", "journal": "IEEE Transactions on pattern analysis and machine intelligence", "ref_id": "b0", "title": "A computational approach to edge detection", "year": "1986" }, { "authors": "A Martin; Robert C Fischler; Bolles", "journal": "Communications of the ACM", "ref_id": "b1", "title": "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography", "year": "1981" }, { "authors": "Sifeng He; Yue He; Minlong Lu; Chen Jiang; Xudong Yang; Feng Qian; Xiaobo Zhang; Lei Yang; Jiandong Zhang", "journal": "", "ref_id": "b2", "title": "Transvcl: Attention-enhanced video copy localization network with flexible supervision", "year": "2022" }, { "authors": "Andrew Howard; Mark Sandler; Grace Chu; Liang-Chieh Chen; Bo Chen; Mingxing Tan; Weijun Wang; Yukun Zhu; Ruoming Pang; Vijay Vasudevan", "journal": "", "ref_id": "b3", "title": "Searching for mobilenetv3", "year": "2019" }, { "authors": "Chen Jiang; Kaiming Huang; Sifeng He; Xudong Yang; Wei Zhang; Xiaobo Zhang; Yuan Cheng; Lei Yang; Qing Wang; Furong Xu", "journal": "", "ref_id": "b4", "title": "Learning segment similarity and alignment in large-scale content based video retrieval", "year": "2021" }, { "authors": "Ke Sun; Bin Xiao; Dong Liu; Jingdong Wang", "journal": "", "ref_id": "b5", "title": "Deep highresolution representation learning for human pose estimation", "year": "2019" }, { "authors": "Hung-Khoon Tan; Chong-Wah Ngo; Richard Hong; Tat-Seng Chua", "journal": "", "ref_id": "b6", "title": "Scalable detection of partial near-duplicate videos by visual-temporal consistency", "year": "2009" } ]
[]
A Similarity Alignment Model for Video Copy Segment Matching
Copy detection has been a crucial problem for social media platforms. Meta AI held the Video Similarity Challenge at CVPR 2023 to push the technology forward. In this report, we share our winning solution for the Matching Track. We propose a Similarity Alignment Model (SAM) for video copy segment matching. SAM exhibits superior performance compared to other competitors, with a 0.108 / 0.144 absolute µAP improvement over the second-place competitor in Phase 1 / Phase 2. Code is available at https://github.com/FeipengMa6/VSC22-Submission/tree/main/VSC22-Matching-Track-1st
Zhenhua Liu; Feipeng Ma; Tianyi Wang; Fengyun Rao
[ { "figure_caption": "Figure 1 .1Figure 1. Typical edits in stacked frames and extra edges. A plenty of video contains 2-4 scenes, as showed in the first two columns.And there is also extra edges showed in the last column.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Our Pipeline: Preprocess input video by extracting frames, splitting scenes, and removing edges. Use embedding model(s) to generate embeddings. Generate similarity matrix for (query, reference) videos, filter negative recalls using a small classification model. Use SAM model to output noiseless matching score matrix.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. The frame processing details. a is one frame of a query video. b is the Canny [1] edge result, which is noisy to locate all edges of the stacked video. c is the average Canny result of multiple frames, the edge is more clear than single frame. d is standard deviation of frame pixels values. Our frame preprocessing is base on feature c and d", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. The input similarity matrix is compared with the SAM output matching score. The first row is the input similarity matrix, the second row is the SAM matching score. The first three column are sample successfully matched, the last two column are samples that not recognize copy result.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "[2] regression, which is effective for detecting linear relationships in video copies. The final submit score s is an ensemble of SAM score predictions: c = max(1/coef, coef ) s = mean(score) -α * std(score) -abs(c -1)/10", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Leaderboard results on Matching Track.", "figure_data": "User or teamsPhase 1 µAP Phase 2 µAPdo something more(Ours)0.92900.9153CompetitionSecond0.82060.7711cvl-matching0.77270.7036Zihao0.5687-", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
[{"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work provides the information on the use of embedding for the matching task and the need for model changes when descriptor models are changed."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work introduces the Similarity Pattern Detection (SPD) method for direct output of result, which the citing paper attempts to use but encounters difficulties in optimizing the model with limited annotations."}, {"Category": "Extension or Continuation", "Citation": "[7]", "Explanation": "The cited work presents the Temporal Network (TN) method for constructing a graph-based network with matched frames as nodes and similarities between frames as weights of links. The citing paper builds upon this method by exploring the possibility of modifying and optimizing it for the video copy segments."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work by Canny provides a method for edge detection and frame pixel standard deviation feature that the citing paper adopts in their research to address the issue of multiple scenes in one frame or extra edges in video frames."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work, HRnet-w18, is used as the backbone for the model in the citing paper, providing a method for detecting key points and semantic segmentation in the model output."}]
[ { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b4", "b13", "b9", "b14", "b8", "b15", "b16", "b17", "b18", "b19", "b20", "b4", "b5", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b30", "b31", "b4", "b5" ], "table_ref": [], "text": "A TTENTION mechanisms [1], [2], [3] play an essential role in Natural Language Processing (NLP) and have been shown to be effective in various text classification tasks, such as sentiment analysis [4], [5], [6], document classification [7] and natural language inference [8]. They achieve significant performance gains, and can be used to provide insights into the inner workings of the model. Generally, the attention learning procedure is conditioned on access to large amounts of training data without additional supervision information.\nAlthough the current attention mechanisms have achieved remarkable performance, several problems remain unsolved. First, learning a good attention distribution without spurious correlations for neural networks requires large volumes of informative labeled data [9], [10]. As described in the work of Wallace et al. [11], after inserting 50 poison examples with the name \"James Bond\" into its training set, a sentiment model will frequently predict a positive whenever the input contains this name, even though there is no correlation between the name and the prediction. Second, attention mechanisms are prone to focus on high-frequency words with sentiment polarities and assign relatively high weights to them [12], [13], [5], while the higher frequency does not imply greater importance.\nEspecially when there's an adversative relation in a text, some high-frequency words with strong sentiment valence need to be selectively ignored based on the context of the whole text. In these cases, these words will mislead the model because the important words don't get enough attention. The sentences in Figure 1 illustrate this problem. In most training sentences, as shown in the first four rows, \"better\" and \"free\" appear with positive sentiment, which makes the attention mechanism accustomed to attaching great importance to them and relating them to positive predictions. However, the two words are used ironically in the fifth sentence, and the model pays the most attention to them while the critical word -\"leave\" -is not attended to, resulting in an incorrect prediction. Based on these observations, there's reason to believe that the attention mechanisms could be improved for text classification.\nTo tackle this problem the most direct solution is to add human supervision collected by manual annotation [14], [10], [15] or special instruments [9], [16], [17], [18] (e.g., eyetracking), to provide an inductive bias for attention. These approaches are costly, the labeling is entirely subjective, and there is often high variance between annotators. In particular, Sen et al. [19] point out that there is a huge difference between machine and human attention and it is difficult to map human attention to machine attention.\nAnother flexible solution is to measure attribution scores, i.e., how much each token in a text contributes to the final prediction, to approximate an importance distribution as an attention supervision signal [20], [21], [5], [6]. 
Generally, the attribution scores are obtained by masking each token one by one to generate counterfactual examples, reflecting the difference in the softmax probability of the model after masking each token. These approaches have little or no additional annotation overhead and augment supervision information from the training corpus to refine the attention distribution. Despite their success, masking schemes can give rise to an out-of-distribution (OOD) problem [22], [23], [24]. That is, the generated counterfactuals deviate from the training data distribution of the target model, resulting in an overestimation of the contribution of unimportant tokens. The OOD problem induced by existing masking schemes makes it difficult to identify whether high-scoring tokens contribute significantly to the prediction. Furthermore, most of them are limited to generating uniform attention weights for the selected important words. Obviously, the contribution of different important words to the model should also be different according to the context, e.g., the word leave should have a higher attention weight than better and free for the fifth sentence in Figure 1.\nSome efforts reveal that the output of neural networks can be theoretically guaranteed to be invariant for a certain magnitude of input perturbations through establishing the concept of maximum safety radius [25], [26] or minimum disturbance rejection [27]. In simple terms, these approaches evaluate the minimum distance of the nearest perturbed text in the Fig. 1. The attention visualization for five sentences. The \"A/B\" style tags before each row mean the model's prediction is A and the label is B. The first four sentences are selected from training sets as representatives containing high-frequency words -\"better\" (yellow box) and \"free\" (green box). The last sentence including both of the two words is selected from testing sets, typically showing that the distribution of attention weights when some words in the sentence appear frequently in the corpus but are unimportant to the current prediction.\nembedding space that is classified differently from the original text. Inspired by this work, we propose a novel perturbationbased self-supervised attention learning method without any additional annotation overhead for text classification. Specifically, we design an attention supervision mining mechanism called Word-based Concurrent Perturbation (WBCP), which effectively calculates an explainable word-level importance distribution for the input text. Concretely, WBCP tries to concurrently add as much noise as possible to perturb each word embedding of the input, while ensuring that the semantics of input and the classification outcome is not changed. Under this condition, the words that tolerate more noise are less important and the ones sensitive to noise deserve more attention. We can use the permissible perturbation amplitude as a measure of the importance of a word, where small amplitude indicates that minor perturbations of that word can have a significant influence on the semantic understanding of input text and easily lead to prediction error.\nAccording to the inverse distribution of perturbation amplitude, we can get sample-specific attention supervision information. Later, we use this supervision information to refine the attention distribution of the target model and iteratively update it. Notably, our method is model-agnostic and can be applied to any attention-based neural network. 
It generates attention supervision signals in a self-supervised manner to improve text classification performance without any manual labeling and incorporates Perturbation-based Self-supervised Attention (PBSA) to avoid the OOD problem caused by the masking scheme. In addition, it can also generate special attention supervision weights adaptively for each sample based on the perturbation amplitude, rather than allocate them uniformly.\nIn summary, the contributions of this paper are as follows:\n(1) Through analysis of current methods, we point out the disadvantages and drawbacks of current attention mechanisms for text classification.\n(2) We propose a simple yet effective approach to automati-cally mine the attribution scores for the input text, and use it as supervision information to guide the learning of attention weights of target models.\n(3) We apply our approach to various text classification tasks, including sentence classification, document categorization, and aspect-level sentiment analysis. Extensive experiments and visualization analysis show the effectiveness of the proposed method in improving both model prediction accuracy and robustness.\n(4) Theoretically, our algorithm can be applied to the models with attention mechanisms, but it is impossible to compare with all of them. Considering this, we conduct our experiments on several typical baselines (LSTM, BERT [28], DEBERTA [29], ELECTRA [30], Memory Net [31], etc.) to justify the effectiveness of our method. Notably, we also compared our algorithm with other advanced attention selfsupervised methods (PGAS [32], AWAS [5], SANA [6])." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [ "b13", "b14", "b15", "b16", "b17", "b13", "b9", "b14", "b8", "b15", "b16", "b17", "b32", "b18", "b33", "b34", "b35", "b36", "b35", "b36", "b37", "b19", "b20", "b4", "b5", "b31", "b4", "b5" ], "table_ref": [], "text": "Work related to our method can be categorized into three types: Introducing human attention; using external resources or tools; and using self-supervision.\nIntroducing human attention Adding human supervision to attention has been shown to effectively alleviate attention bias and improve model prediction accuracy on a range of tasks [14], [15], [16], [17], [18]. In general, the annotators need to explicitly highlight the important words or rationales [14], [10], [15] for the given sample. Obviously, the annotation is very labor-intensive and expensive in real-world scenarios, so an alternative is to use implicit signals such as eye gaze [9], [16], [17], [18]. For these methods, it is expected that the model can generate similar attention to human supervision. However, human recognition and model reasoning processes may be inconsistent [33], and aligning the two is challenging [19].\nUsing external resources or tools With the development of NLP, many corpora and tools, such as Dependency Tree and Synonym Dictionary, are created to obtain a deeper understanding of words and sentences. Therefore, some methods [34], [35], [36], [37] that generate attention supervision information according to existing corpora and tools emerge. For example, Nguyen et al. [36] introduce attention supervision information based on important words selected by semantic word lists and dependency trees. Similarly, Zhao et al. [37] first train the model on the document-level sentiment classification and then transfer the attention knowledge to a fine-grained one for aspect-level sentiment classification. And Hu et al. 
[38] introduce the tree structure's representation into attention computations. However, these methods still rely on annotations based on parsers or external resources, and the performance depends heavily on the quality of the parser.\nSelf-supervised attention learning Currently, selfsupervised attention learning frameworks [20], [21], [5], [6], [32] have become the mainstream method because they do not require additional annotation overhead. They usually mask or erase each token one by one and quantify the difference in predictions of the model after masking each token, to approximate an importance distribution as attention supervision information. For example, Tang et al. [5] divide the words in sentences into the active set and the misleading set by progressively masking each word with respect to the maximum attention weight, and augment them to make the model focus on the active context words. Similarly, Choi et al. [6] adopt the masking method to find the unimportant words and gradually reduce their weights. These methods use a self-supervised paradigm to mine important words, which can greatly reduce the annotation cost and improve the robustness of the model. Nevertheless, the masking scheme they follow has an OOD problem. The counterfactuals generated by the mask operation deviate from the original training set distribution, which easily leads to the over-evaluation of unimportant words. In addition, the above methods usually assign the same weight to the extracted important words, but in our opinion, different words should have different contributions to the classification." }, { "figure_ref": [], "heading": "III. PROPOSED METHOD", "publication_ref": [], "table_ref": [], "text": "In this section, we propose a Perturbation-based Selfsupervised Attention (PBSA) mechanism to enhance the attention learning process and provide a good inductive bias. We first design a Word-based Concurrent Perturbation (WBCP) to automatically mine the attribution score for each word and use this as a measure of its degree of importance. Then we use the measure mentioned above to compute a word-level importance distribution as supervision information. Finally, we describe how to use the supervision information to refine the attention mechanism of the target model, improving the accuracy and robustness of text classification tasks." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "A. Word-based Concurrent Perturbation", "publication_ref": [ "b38" ], "table_ref": [], "text": "The basic assumption of our design is based on the following fact: under the premise of trying not to change the semantics of the input text, unimportant words can withstand more changes than more significant ones. Specifically, a little noise on keywords can lead to dramatic changes in the final results, while greater noise on the unimportant ones won't easily lead to changes in results. Therefore, we can estimate the importance distribution of the words according to the maximum amount of noise they can tolerate. To be specific, we try to concurrently add as much noise as possible to perturb each word embedding without changing the latent representations (e.g., the hidden states for classification) of the text and the prediction result. The above process can be optimized according to the maximum entropy principle.\nGiven a sentence consisting of n words s = {w 1 , w 2 , ..., w m }, we map each word into its embedding vector X = {x 1 , x 2 , ..., x n }. 
In practice, WBCP (Word-based Concurrent Perturbation) operates on the embedding of each token in X rather than on each word in s. Intuitively, one word can be tokenized into several parts, and different parts have different influences on the representation. Considering this, in our experiments the perturbation is added to each token produced by the tokenizer, so each token has its own σ i (maximum safety radius). For ease of explanation, Figure 2 and Section III-A take the traditional setting where m = n (each word has exactly one embedding, e.g., word2vec or GloVe) as an example. We assume that the noise on word embeddings follows a Gaussian distribution $\epsilon_i \sim \mathcal{N}(0, \Sigma_i = \sigma_i^2 I)$ and let $\hat{x}_i = x_i + \epsilon_i$ denote the input perturbed by noise $\epsilon_i$. We use $h, y$ and $\hat{h}, \hat{y}$ to denote the hidden state used for classification and the prediction result of the pre-trained model without and with noise, respectively. Then the loss function of WBCP can be written as follows:\n$$\mathcal{L}_{WBCP} = \|\hat{h} - h\|_2^2 + \|\hat{y} - y\|_2^2 - \lambda \sum_{i=1}^{n} H(\epsilon_i)\big|_{\epsilon_i \sim \mathcal{N}(0,\,\Sigma_i=\sigma_i^2 I)}, \quad (1)$$\nwhere λ is a hyperparameter that balances the strength of the noise.\nThe first and second terms of Eq. (1) minimize the squared L2 distance between the two hidden states and between the two predictions, respectively, to quantify the change of information [39]. The first term preserves the latent representation so that the text semantics are not modified, and the second term prevents excessive perturbations from causing the model to mispredict. The last term maximizes the entropy $H(\epsilon_i)\big|_{\epsilon_i \sim \mathcal{N}(0,\,\Sigma_i=\sigma_i^2 I)}$ to encourage adding as much noise as possible to each word embedding. The maximum entropy of the Gaussian distribution simplifies as follows (up to constant terms):\n$$\mathrm{Maximize}\big(H(\epsilon_i)\big) = \mathrm{Maximize}\Big(-\int p(\epsilon_i)\ln p(\epsilon_i)\,d\epsilon_i\Big) = \mathrm{Maximize}\Big(\tfrac{1}{2}\big(\ln(2\pi\sigma_i^2)+1\big)\Big) = \mathrm{Maximize}\big(\log \sigma_i\big)$$\nDropping the constant terms, the final objective function can be rewritten as Eq. (2):\n$$\mathcal{L}_{WBCP} = \|\hat{h} - h\|_2^2 + \|\hat{y} - y\|_2^2 - \lambda \sum_{i=1}^{n} \log \sigma_i \quad (2)$$\nThe illustration of WBCP is given in Figure 2. After fixing the parameters of the pre-trained model, the only learnable parameters σ = {σ 1 , σ 2 , σ 3 , σ 4 } can be regarded as perturbation radii, which are positively associated with the perturbation amplitude. Specifically, the larger the σ i learned by WBCP, the larger ϵ i tends to be, the more noise is added to x i , and the less important the word is. As shown in the figure, σ 2 > σ 1 > σ 4 > σ 3 . According to this analysis, w 2 (a) is the least important word and w 3 (nice) is the most significant one, since x 2 can tolerate the most noise while x 3 can hardly withstand any perturbation.\nDuring the training stage of WBCP, σ is first initialized from a normal distribution and then normalized by the standard deviation of the sentence embeddings before generating noise. We set the number of epochs to 500 for most datasets; most perturbation models converge within 200 steps, but we choose more epochs since the time cost is acceptable. IMDB is an exception because of its large training and test sets, so we set its epochs to 300. As for the optimizer, we use AdamW with a learning rate of 0.01." }, { "figure_ref": [], "heading": "B. Attention supervision", "publication_ref": [ "b4", "b5" ], "table_ref": [], "text": "We obtain the perturbation magnitudes σ i by optimizing Eq. (2) with the parameters of the pre-trained model kept fixed; a minimal sketch of this optimization is given below.
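To make the optimization of Eq. (2) concrete, the following is a minimal PyTorch-style sketch of the WBCP step for a single sentence. It is an illustrative reconstruction rather than the authors' released code: `encoder` and `classifier` are assumed names for frozen modules of the pre-trained model that return the classification hidden state h and the prediction y, and details such as the σ initialization and normalization described above are omitted.

```python
import torch

def wbcp_sigma(encoder, classifier, embeddings, lam=0.1, steps=200, lr=0.01):
    """Estimate per-token perturbation radii sigma for one sentence (a sketch).

    encoder:    frozen module mapping token embeddings -> hidden state h used for classification
    classifier: frozen module mapping h -> prediction y
    embeddings: tensor of shape [seq_len, emb_dim], the clean token embeddings x
    """
    # Freeze the pre-trained model; only sigma (via log_sigma) is learnable.
    for p in list(encoder.parameters()) + list(classifier.parameters()):
        p.requires_grad_(False)

    with torch.no_grad():
        h = encoder(embeddings)   # clean hidden state
        y = classifier(h)         # clean prediction

    # Parameterize sigma through log_sigma so that sigma stays positive.
    log_sigma = torch.zeros(embeddings.size(0), requires_grad=True)
    opt = torch.optim.AdamW([log_sigma], lr=lr)

    for _ in range(steps):
        sigma = log_sigma.exp()                                  # [seq_len]
        noise = torch.randn_like(embeddings) * sigma.unsqueeze(-1)
        h_hat = encoder(embeddings + noise)                      # perturbed hidden state
        y_hat = classifier(h_hat)                                # perturbed prediction

        # Eq. (2): keep representation and prediction close, push sigma as large as possible.
        loss = ((h_hat - h) ** 2).sum() + ((y_hat - y) ** 2).sum() - lam * log_sigma.sum()

        opt.zero_grad()
        loss.backward()
        opt.step()

    return log_sigma.exp().detach()  # larger sigma_i -> token i tolerates more noise -> less important
```

One σ is learned per token through `log_sigma`, which keeps σ positive; the returned values are then converted into the supervision distribution as described next.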
If a word embedding x i can tolerate more noise without impacting the semantics of the input text, σ i will be larger, which means the word x i is less important. Conversely, a small σ i indicates that slight perturbations of the word embedding x i will lead to semantic drift and may affect the classification result. We can therefore use the perturbation magnitude to compute a word-level importance distribution as attention supervision information, as shown below:\n$$\alpha'_i = 1 - \frac{\sigma_i}{\max_j \{\sigma_j\}}, \qquad \hat{\alpha} = \mathrm{Softmax}(\alpha') \quad (3)$$\nIt is worth noting that our method generates sample-specific attention supervision, where the weight of each word is quantified according to the perturbation magnitude, instead of using the same importance weight for all words [5], [6]. Also, the quantification occurs in the embedding space rather than replacing the token with a predefined value, thus avoiding the OOD problem caused by masking schemes." }, { "figure_ref": [], "heading": "C. Perturbation-based Self-supervised Attention", "publication_ref": [], "table_ref": [], "text": "We do not use α̂ to generate a new attention distribution that replaces the original one α. Rather, we use it as a supervision target for the attention weights. We want the attention supervision to make the model notice more of the words that influence the output. In this way, some highly important but low-frequency context words that would normally be ignored can be discovered by attention learning. In this section, we describe how to exploit the supervision information α̂ to guide the learning of the model's attention strengths.\nOur method is shown in Algorithm 1. We first pre-train an attention-based model f (•, θ) on the classification dataset D. We then fix the model parameters θ and minimize the WBCP objective in Eq. (2) to obtain the perturbation amplitudes σ for each sample, which are used to compute the attention supervision α̂ via Eq. (3). We then re-train the model, using α̂ to guide the attention distribution α produced by the model. The above process can be iterated T times to capture the importance distribution more accurately. The training objective with attention supervision α̂ is defined as follows:\n$$\mathcal{L}_{cls} = \frac{1}{M}\sum_{m=1}^{M}\Big[-\hat{y}_m \log y_m + \gamma\,\mathrm{KL}(\hat{\alpha}_m \,\|\, \alpha_m)\Big], \quad (4)$$\nwhere M is the number of samples, γ is a hyperparameter that controls the strength of attention supervision, and ŷ m and y m are the ground-truth label and predicted output for the m-th sample, respectively. The first term is the cross-entropy loss for classification, and the second term is the Kullback-Leibler divergence between the attention distribution α m produced by the model and the attention supervision information α̂ m for the m-th sample. It is worth noting that our method requires extra computation, but the time cost is usually acceptable because nearly the entire process is parallelizable. The analysis is given in Appendix A." }, { "figure_ref": [], "heading": "IV. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "We evaluate PBSA on several text classification tasks, including sentence classification, document categorization, and aspect-level sentiment analysis. Experimental results demonstrate that PBSA consistently enhances the performance and robustness of various attention-based baselines, and outperforms some strong models following self-supervised attention learning.
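Before presenting the results, the short sketch below recaps how Eqs. (3) and (4) fit together in code. It is a hedged illustration, not the official implementation: `sigma` is assumed to be the per-token output of WBCP (padding positions should be masked out in practice), `model_attention` is the attention distribution produced by the target model, and `logits`/`labels` are the usual classification tensors.

```python
import torch
import torch.nn.functional as F

def supervision_from_sigma(sigma):
    """Eq. (3): turn per-token perturbation radii into a target attention distribution.

    sigma: [batch, seq_len] perturbation magnitudes from WBCP (padding not handled here).
    """
    alpha_prime = 1.0 - sigma / sigma.max(dim=-1, keepdim=True).values  # [batch, seq_len]
    return F.softmax(alpha_prime, dim=-1)                               # \hat{alpha}

def pbsa_loss(logits, labels, model_attention, alpha_hat, gamma=1.0, eps=1e-12):
    """Eq. (4): cross-entropy plus gamma * KL(\hat{alpha} || alpha), averaged over the batch."""
    ce = F.cross_entropy(logits, labels)
    # KL(\hat{alpha} || alpha) = sum \hat{alpha} * (log \hat{alpha} - log alpha), per sample
    kl = (alpha_hat * (torch.log(alpha_hat + eps) - torch.log(model_attention + eps))).sum(-1).mean()
    return ce + gamma * kl
```

During re-training, α̂ is treated as a fixed target (no gradient flows into it), and the pre-train / perturb / re-train cycle can be repeated T times as in Algorithm 1.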
Furthermore, a visualization analysis confirms that our model is capable of generating high-quality attention for target tasks.\nWe aim to answer the following questions:\nRQ1: Does PBSA improve model accuracy? RQ2: Is PBSA more effective than other approaches? RQ3: How do hyperparameters affect the results? RQ4: How does PBSA work?" }, { "figure_ref": [], "heading": "A. Datasets and Baselines", "publication_ref": [ "b48" ], "table_ref": [ "tab_0", "tab_0" ], "text": "The statistics of widely-studied datasets used by different tasks are listed in Table I. These datasets come from different topics, such as movie reviews, customer reviews, social reviews, and question type. In particular, since there is no standard partition of MR, CR, SUBJ, and MPQA, we follow the data splitting protocol, 7:1:2 for them to get the training, validation, and test sets. For the aspect-level tasks, we remove the instances with conflict sentiment labels in Laptop and Restaurant as implemented in [49].\nAs for hyperparameters, we use a grid search to find the optimal value of γ and T for each dataset, from the sets γ ∈ {0.05, 0.1, 1.0, 2.0, 10, 100} and T ∈ {1, 2, 3, 4}. We use the Adam optimizer with learning rate 0.001 and the batch size is set to 64.\nWe use Att-BiLSTM, Memory Network, BERT, DEBERTA, ELECTRA, Att-BERT, BERTABSA, Att-BERTABSA as baselines and explain the details about them in Appendix B.\nThe setup of hyperparameters for Att-BiLSTM and Memory Net are listed in Table II. To make a fair compare with other algorithms, we set our hyperparameters the same as theirs." }, { "figure_ref": [], "heading": "B. RQ1: Sentence-level and Document-level Classification", "publication_ref": [ "b51", "b27", "b28", "b29", "b31", "b40", "b41", "b42", "b43", "b44", "b45", "b46", "b46", "b47" ], "table_ref": [ "tab_2", "tab_6" ], "text": "To verify that PBSA can improve the performance of the attention-based model, in this section, we use the classic Att-BiLSTM [52] and the pre-trained models BERT [28], DEBERTA [29], and ELECTRA [30] as the baselines. It is worth noting that Transformers use multiple-layer and multiplehead attention, so selecting the suitable head as the supervised target is difficult [32]. Hence, how to effectively combine its multiple-layer and multiple-head attention with our method is an exciting and valuable question.\nThe previous researchers have yet to find a good way to apply their methods to Transformers, and we have made some explorations in this field, which is also one of our innovations. We explore two simple strategies to combine our approach with Transformers, 1) We first add a scaled dot-product attention layer to the output of BERT to derive a fixed-sized sentence representation for classification, and we call this model Att-BERT for short. 2) We also try a simple but effective way to combine the internal multi-head attention in Transformers with our method. Specifically, we average the multi-head attention of all the layers and compress the attention matrix to a vector to be guided by our mechanism.\nTable III reports the experimental results on the seven datasets of sentence classification and document categorization. We observe that our method consistently helps improve the accuracy of the baseline on all the datasets. The average accuracy of our approach on the five baselines across seven datasets are 83.65, 90.86, 92.55, 92.43, and 94.06, an improvement of 1.44%, 0.45%, 0.83%, 0.66%, and 0.66% over the baselines (82. 21, 90.41, 91.71, 91.82, and 93.44). 
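As an illustration of the second strategy described above (averaging the multi-head attention of all layers and compressing the attention matrix to a token-level vector that α̂ can supervise), the sketch below assumes a Hugging Face-style model run with `output_attentions=True`; averaging over the query dimension is our assumption for the "compress to a vector" step, since that detail is not spelled out here.

```python
import torch

def compressed_attention(attentions, attention_mask):
    """Compress a Transformer's attention into one distribution per token (a sketch).

    attentions:     tuple of [batch, heads, seq, seq] tensors, one per layer
                    (e.g., the `attentions` field returned with output_attentions=True)
    attention_mask: [batch, seq] with 1 for real tokens and 0 for padding
    """
    stacked = torch.stack(attentions, dim=0)   # [layers, batch, heads, seq, seq]
    avg = stacked.mean(dim=(0, 2))             # average over layers and heads -> [batch, seq, seq]
    vec = avg.mean(dim=1)                      # average over query positions  -> [batch, seq]
    vec = vec * attention_mask                 # drop padding positions
    return vec / vec.sum(dim=-1, keepdim=True).clamp_min(1e-12)
```

The resulting per-token vector can then be plugged into the KL term of Eq. (4) in place of a single attention head.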
The results demonstrate that our approach delivers significant performance improvements over the baselines. They also indicate that, without any supervision information, current models limit the potential of attention mechanisms. In contrast, PBSA can mine potentially important words and then guide the model's attention mechanism toward a good inductive bias.\nHowever, we find that the improvements on pre-trained models are relatively marginal compared with smaller models such as Att-BiLSTM. This phenomenon indicates that pre-training on large corpora relieves the attention bias to some extent, which is further verified in Section IV-D. Moreover, we find that the size of the pre-trained model also impacts the performance of PBSA. We conduct experiments on BERT-small and ELECTRA-small (shown in Table VII), and PBSA gains greater improvements under the same settings. To sum up, attention bias is more likely to appear in smaller models and on smaller-scale datasets, and the performance gain of PBSA is more significant in these scenarios." }, { "figure_ref": [], "heading": "C. RQ1: Aspect-level Sentiment Analysis", "publication_ref": [ "b30", "b49", "b52", "b31" ], "table_ref": [ "tab_4" ], "text": "To further verify the effectiveness of our approach, we apply PBSA to MN [31], [50], BERTABSA [53], and Att-BERTABSA [32]. Both BERTABSA and Att-BERTABSA are typical and simple ways to apply BERT to aspect-level classification tasks. The difference is that BERTABSA directly uses the hidden states of the aspect words for classification, while Att-BERTABSA adds an attention layer to the output of BERT. To show that our method truly improves the results, we only use the most critical parts of each model, without any other tricks or mechanisms (e.g., the gating mechanism). We conduct experiments on three benchmark datasets of aspect-based sentiment analysis, and PBSA outperforms all the baselines on all datasets in both accuracy and Macro-F1. As shown in Table V, compared with other tasks, PBSA achieves a more significant improvement on these small-scale datasets, indicating that the original attention lacks a good inductive bias due to limited labeled data. With the help of PBSA, the robustness of the model can be improved effectively." }, { "figure_ref": [ "fig_3" ], "heading": "D. RQ1: Performances under Different Sample Ratios", "publication_ref": [], "table_ref": [], "text": "To verify the performance of our approach on low-resource tasks, we conduct experiments with different values of the sample ratio. We draw sample sets from the original datasets with sample ratio ∈ {0.001, 0.005, 0.01, 0.05, 0.1}, and measure the accuracy of BERT and BERT+PBSA on these sample sets.\nAs shown in Figure 4, the performances of BERT and BERT+PBSA follow the same trend: as the accuracy of BERT increases, the accuracy of BERT+PBSA increases, and vice versa. As explained in Section III-C, the attention supervision information is obtained from the pre-trained model, whose performance has a direct influence on the quality of the attention supervision. The improvement is more prominent when the ratio is in the middle range (sample ratio ∈ (0.005, 0.05)).
As listed above, when the ratio is small, the pre-trained model has a bad performance, which results in meaningless attention supervision information and further limits the performance of PBSA. As the value of the sample ratio increases, the original model performs better, and the quality of attention supervision information is enhanced, and then PBSA improves the model even more. However, the improvement is not without limitation.\nAs the value of the sample ratio exceeds a certain value, the phenomenon of attention bias is no longer evident, and the improvement reduces. It may be because BERT is pre-trained on a large-scale corpus, and when we fine-tune it, its attention fits well on these 'larger-scale' sample sets, which makes the original model has scant room for improvement.\nTo sum up, the distribution of the attention parameters is not stable enough when the data is limited or the model size is small, which can be refined by PBSA. And the performance and lifting area of PBSA are closely related to the performance of the baseline." }, { "figure_ref": [], "heading": "E. RQ2: Comparison with other methods", "publication_ref": [ "b5", "b4", "b31" ], "table_ref": [ "tab_0", "tab_4", "tab_4" ], "text": "On the tasks listed above, we compare our method with other advanced self-supervised attention learning approaches. SANA [6] generates counterfactuals by a masking scheme and measures the difference in the softmax probability of the model between the counterfactual and original sample as an indicator of important words. AWAS [5] and PGAS [32] progressively mask the word with the largest attention weight or partial gradient. Most of these works don't publish their critical code and do their experiment only on certain specific tasks, so we directly compare our algorithm with their best results published on different tasks respectively. To make a fair comparison, we use the same word embedding and the same settings of hidden size to reproduce their baselines, which is listed in Table II.\nOn the document-level and sentence-level tasks (Table IV), PBSA is superior to SANA by 1.11% and 1.37%, which verifies that the word-based concurrent perturbation can mine the importance distribution of words more accurately than the masking scheme. On the aspect-level task (Table VI), compared with AWAS and PGAS, our method improves the model more. As we mentioned in the Introduction (Section I), our method can generate word-specific attention supervision while others treat the important words equally without discrimination. We speculate that this may be one of the main reasons for our improvement." }, { "figure_ref": [], "heading": "F. RQ2: Comparison with human intuition methods", "publication_ref": [ "b50" ], "table_ref": [ "tab_2" ], "text": "From the aspect of human intuition, the gradient-based methods and leave-one-out methods are usually used to improve the interpretability of model. The current self-supervised attention learning methods are mostly based on word masking, which can be seen as a variation of leave-one-out methods. We also try to use the gradient-based method [51] to generate supervision information. As shown in Table III and Table V, the gradient-based method does badly on most of the datasets, especially on aspect-level datasets. These results demonstrate that although the gradient-based method can improve the interpretability of the model, it does not necessarily improve the performance. However, our method enhances interpretability while also improving its performance." 
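For reference, the gradient-based alternative compared above can be sketched as a standard gradient-times-input saliency score. This is a generic illustration in the spirit of [51], not the exact configuration used in the experiments, and the specific score (|gradient · embedding| per token) is an assumption.

```python
import torch
import torch.nn.functional as F

def gradient_supervision(model, embeddings, label):
    """One common gradient-based importance score per token (a sketch).

    model:      callable mapping token embeddings [1, seq, dim] -> logits [1, num_classes]
    embeddings: [1, seq, dim] token embeddings for a single sentence
    label:      tensor([gold_class])
    """
    embeddings = embeddings.clone().detach().requires_grad_(True)
    logits = model(embeddings)
    loss = F.cross_entropy(logits, label)
    grad, = torch.autograd.grad(loss, embeddings)           # [1, seq, dim]
    scores = (grad * embeddings).sum(-1).abs().squeeze(0)    # gradient x input, per token
    return F.softmax(scores, dim=-1)                         # candidate supervision distribution
```

As Tables III and V indicate, supervision of this form did not consistently help, whereas the perturbation-based signal did.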
}, { "figure_ref": [ "fig_4" ], "heading": "G. RQ3: Hyperparameter sensitivity", "publication_ref": [], "table_ref": [], "text": "As shown in Figure 5, our method achieves the best results on REST and TWITTER when T = 2 and T = 1 respectively. When the increase of T , the performance increases initially and then decreases due to over-fitting. The performance of models won't change sharply with the increase of T once they achieve the best results. In practice, we find that one iteration has achieved promising results. The hyperparmeter λ controls the perturbation degree of WBCP, when λ is too large, it will deteriorate performance due to injecting too much noise. In all of our experiments, we set λ as 0.1. The hyperparmeter γ controls the strength of attention supervision, when γ is too large, it easily leads to overly penalize the alignment between the model attention and perturbation attention, which may hurt the model's internal reasoning process.\nCompared with γ, λ has less effect on results when the value of which changes slightly, but we cannot remove n i=1 log(-σ i ) from our loss function. Otherwise, the model will try not to add any noise to x without the term, which makes PBSA get a meaningless supervision distribution that varies dramatically for the same sentence each time (the distribution is supposed to be essentially unchanged for the same sentence). On the other hand, results are more sensitive to γ, which determines if the models can reach the peak of the results." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_2", "fig_2" ], "heading": "H. RQ4: Visualization analysis", "publication_ref": [ "b4" ], "table_ref": [], "text": "In this section, we select several attention visualizations on SST2 test set to explain how PBSA works. As shown in Figure 3, we see that PBSA makes the model pay more attention to important but low-frequency words, reduces the focus on high-frequency words that do not affect the results, increases the difference in weight between words with conflicting meanings, and increases sensitivity to adversative relations in sentences.\na) Pay more attention to important but low-frequency words: Some words do have important effects on the results, but if they do not appear frequently enough then the traditional attention mechanism may not pay enough attention to them. As shown in Figure 3-(1), the word drowsy has an important influence on the emotional polarity of the film. However, it is a low-frequency word in the corpus, which makes the attention mechanisms do not allocate enough weights to it, resulting in a classification error. After being trained by PBSA, the model can assign enough weights to drowsy, which changes the result from false to correct.\nb) Reduce the focus on high-frequency words that do not affect the results: In baseline, some high-frequency words which do not contain any emotional polarity usually get high weights, while some important words that should have been focused on are ignored. As Figure 3-(2) shows, romantic and doesn't are words with strong emotional polarity. However, the baseline assigns greater weights to other high-frequency words (e.g., between) with no emotional polarity, and thus ignores the words romantic and doesn't which results in misclassification. After being trained by PBSA, the model reduces the focus on between and the weights allocated to the significant words increase correspondingly, which turns the result. 
c) Increase the difference in weight between words with conflicting meanings: As shown in Figure 3-(3), the baseline focuses on too many words: horror, revenge, perfect, relentless, torture, and so on. All of these words may be important, but their meanings conflict with one another, which interferes with the classification task. The model is confused because it does not know how to make a prediction from so many emotional words. After being trained with PBSA, the differences among the weights of the emotional words become larger, which leads to the correct result. It should be noted that the entropy of the attention distribution may not decrease, because PBSA keeps attention on the important words while diluting the distribution over the other words.\nd) Be more sensitive to adversative relations in sentences: If a sentence contains adversative conjunctions (e.g., but, however, and so on), it is likely to express two opposite emotions before and after the conjunction. In such cases, the model needs to be sensitive to the change of emotional polarity within the sentence. From this perspective, the model should also assign higher weights to these adversative conjunctions. Judging from our results, the original attention mechanism unfortunately tends to ignore such conjunctions because they appear to have no direct effect on the output. As Figure 3-(4) and Figure 3-(5) show, the baseline ignores the word but and makes errors. After being trained with PBSA, the model pays more attention to but, so that the emotions both before and after the adversative conjunction are taken into consideration." }, { "figure_ref": [], "heading": "V. CONCLUSIONS AND FUTURE WORK", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel self-supervised attention learning method based on word-based concurrent perturbation. The algorithm adds as much noise as possible to each word in a sentence, under the constraint of keeping the semantics unchanged, to mine supervision information that guides attention learning. Our experiments demonstrate that our method achieves significant performance improvements over the baselines on several text classification tasks. Moreover, we use several visualization samples to interpret how our method guides the internal reasoning process of models.\nIt is worth noting that we combine our method with Transformers, which most previous attention-guiding methods do not attempt. Our strategies may not be the best ways to apply the algorithm to Transformers, but they still demonstrate the effectiveness of the proposed method. We will try to find more appropriate and effective strategies and incorporate our algorithm into other NLP tasks in the future. " }, { "figure_ref": [], "heading": "ANALYSIS OF THE EXTRA COMPUTATIONS", "publication_ref": [], "table_ref": [], "text": "The extra computation mainly comes from the process of generating the supervision information (pre-training and re-training are the same as standard training). The extra time required depends on the size of the model, the number of samples, and the number of epochs used to train the perturbation model. It is acceptable for most datasets because the whole process is parallelizable. All the sub-perturbation models have independent samples and training processes; they only share the same pre-trained model, whose parameters are fixed during the generating process. Therefore, the whole process can be handled concurrently given enough GPU resources.\nFor SST2, TREC, MR, CR, SUBJ, and MPQA, the generating process (batch size = 64) can be finished on 2 * GTX 3090 GPUs in less than 15 min. 
Some small datasets (e.g. SST2, TREC and CR) only need 8 min to generate supervison information. However, as for IMDB, the number of samples is enormous, and their average length is too long. Therefore, we must use several GPUs (2 * GTX 3090 and 4 * GTX 1080ti) to simultaneously deal with each part of the dataset to finish the task in a limited time." }, { "figure_ref": [], "heading": "APPENDIX B DETAILS OF BASELINES", "publication_ref": [], "table_ref": [], "text": "The details of our baselines are listed below." }, { "figure_ref": [], "heading": "A. Att-BiLSTM", "publication_ref": [ "b53" ], "table_ref": [], "text": "Figure 6 shows the structure of Att-BiLSTM. Att-BiLSTM first map each word into pre-trained skip-gram [54] word embedding and then utilize 1-layered BiLSTM with a scale-dot attention mechanism to get sentence-level hidden states which are finally used for classification." }, { "figure_ref": [], "heading": "B. Memory Network", "publication_ref": [], "table_ref": [], "text": "Figure 7 shows the structure of MN. Memory Network uses an iteratively updated vector A (initialized as the aspect embedding) and the context embedding to generate the attention distribution, which is then used to select the important information from the context embedding and iteratively update the vector A." }, { "figure_ref": [], "heading": "C. Att-BERT", "publication_ref": [], "table_ref": [], "text": "Figure 8 shows the structure of Att-BERT. We add a scaledot attention layer to the output of the BERT and use the output of the attention layer to classify." }, { "figure_ref": [], "heading": "D. BERTABSA", "publication_ref": [], "table_ref": [], "text": "Figure 9 shows the structure of BERTABSA. We input the whole sentence to get the context representation of the aspect words, which is directly used for classification. To verify that our method truly improves the results, we delete the gating mechanism and use bert-base-uncased instead of bert-largeuncased. " }, { "figure_ref": [], "heading": "E. Att-BERTABSA", "publication_ref": [], "table_ref": [], "text": "Figure 10 shows the structure of Att-BERTABSA. Its structure is similar to Att-BERT, for adding a scale-dot attention layer after the output of BERT. However, different from Att-BERT, the hidden states of context words and aspect words are regarded as Q and K respectively and fed into the attention layer separately. To verify the effectiveness of our method, we make the same modifications on the Att-BERTABSA." } ]
2023-05-25
[ { "authors": "D Bahdanau; K Cho; Y Bengio", "journal": "", "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "year": "2014" }, { "authors": "M.-T Luong; H Pham; C D Manning", "journal": "", "ref_id": "b1", "title": "Effective approaches to attention-based neural machine translation", "year": "2015" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; L U Kaiser; I Polosukhin", "journal": "", "ref_id": "b2", "title": "Attention is all you need", "year": "2017" }, { "authors": "Z Lin; M Feng; C N D Santos; M Yu; B Xiang; B Zhou; Y Bengio", "journal": "", "ref_id": "b3", "title": "A structured self-attentive sentence embedding", "year": "2017" }, { "authors": "J Tang; Z Lu; J Su; Y Ge; L Song; L Sun; J Luo", "journal": "", "ref_id": "b4", "title": "Progressive self-supervised attention learning for aspect-level sentiment analysis", "year": "2019" }, { "authors": "S Choi; H Park; J Yeo; S.-W Hwang", "journal": "", "ref_id": "b5", "title": "Less is more: Attention supervision with counterfactuals for text classification", "year": "2020" }, { "authors": "Z Yang; D Yang; C Dyer; X He; A Smola; E Hovy", "journal": "", "ref_id": "b6", "title": "Hierarchical attention networks for document classification", "year": "2016" }, { "authors": "Q Chen; X Zhu; Z.-H Ling; S Wei; H Jiang; D Inkpen", "journal": "", "ref_id": "b7", "title": "Enhanced lstm for natural language inference", "year": "2017" }, { "authors": "M Barrett; J Bingel; N Hollenstein; M Rei; A Søgaard", "journal": "", "ref_id": "b8", "title": "Sequence classification with human attention", "year": "2018" }, { "authors": "Y Bao; S Chang; M Yu; R Barzilay", "journal": "", "ref_id": "b9", "title": "Deriving machine attention from human rationales", "year": "2018" }, { "authors": "E Wallace; T Zhao; S Feng; S Singh", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Concealed data poisoning attacks on NLP models", "year": "2021-06" }, { "authors": "H Xu; S Li; R Hu; S Li; S Gao", "journal": "", "ref_id": "b11", "title": "From random to supervised: A novel dropout mechanism integrated with global information", "year": "2018" }, { "authors": "X Li; L Bing; W Lam; B Shi", "journal": "", "ref_id": "b12", "title": "Transformation networks for targetoriented sentiment classification", "year": "2018" }, { "authors": "Y Zhang; I Marshall; B C Wallace", "journal": "", "ref_id": "b13", "title": "Rationale-augmented convolutional neural networks for text classification", "year": "2016" }, { "authors": "O.-M Camburu; T Rocktäschel; T Lukasiewicz; P Blunsom", "journal": "", "ref_id": "b14", "title": "esnli: natural language inference with natural language explanations", "year": "2018" }, { "authors": "E Sood; S Tannert; P Mueller; A Bulling", "journal": "", "ref_id": "b15", "title": "Improving natural language processing tasks with human gaze-guided neural attention", "year": "2020" }, { "authors": "E Sood; S Tannert; D Frassinelli; A Bulling; N T Vu", "journal": "", "ref_id": "b16", "title": "Interpreting attention models with human visual attention in machine reading comprehension", "year": "2020" }, { "authors": "J Malmaud; R Levy; Y Berzak", "journal": "", "ref_id": "b17", "title": "Bridging information-seeking human gaze and machine reading comprehension", "year": "2020" }, { "authors": "C Sen; T Hartvigsen; B Yin; X Kong; E Rundensteiner", "journal": "", "ref_id": "b18", "title": "Human attention maps for text classification: Do humans and 
neural networks focus on the same words?", "year": "2020" }, { "authors": "J Li; W Monroe; D Jurafsky", "journal": "", "ref_id": "b19", "title": "Understanding neural networks through representation erasure", "year": "2016" }, { "authors": "S Choi; H Park; S.-W Hwang", "journal": "IEEE", "ref_id": "b20", "title": "Counterfactual attention supervision", "year": "2019" }, { "authors": "D Hendrycks; K Gimpel", "journal": "", "ref_id": "b21", "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "year": "2016" }, { "authors": "C.-H Chang; E Creager; A Goldenberg; D Duvenaud", "journal": "", "ref_id": "b22", "title": "Explaining image classifiers by counterfactual generation", "year": "2018" }, { "authors": "J Yi; E Kim; S Kim; S Yoon", "journal": "", "ref_id": "b23", "title": "Information-theoretic visual explanation for black-box classifiers", "year": "2020" }, { "authors": "M Wu; M Wicker; W Ruan; X Huang; M Kwiatkowska", "journal": "Theoretical Computer Science", "ref_id": "b24", "title": "A gamebased approximate verification of deep neural networks with provable guarantees", "year": "2020" }, { "authors": "E La Malfa; M Wu; L Laurenti; B Wang; A Hartshorn; M Kwiatkowska", "journal": "", "ref_id": "b25", "title": "Assessing robustness of text classification through maximal safe radius computation", "year": "2020" }, { "authors": "T.-W Weng; H Zhang; P.-Y Chen; J Yi; D Su; Y Gao; C.-J Hsieh; L Daniel", "journal": "", "ref_id": "b26", "title": "Evaluating the robustness of neural networks: An extreme value theory approach", "year": "2018" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b27", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "P He; X Liu; J Gao; W Chen", "journal": "", "ref_id": "b28", "title": "Deberta: Decoding-enhanced bert with disentangled attention", "year": "2020" }, { "authors": "K Clark; M.-T Luong; Q V Le; C D Manning", "journal": "", "ref_id": "b29", "title": "Electra: Pretraining text encoders as discriminators rather than generators", "year": "2020" }, { "authors": "D Tang; B Qin; T Liu", "journal": "", "ref_id": "b30", "title": "Aspect level sentiment classification with deep memory network", "year": "2016" }, { "authors": "J Su; J Tang; H Jiang; Z Lu; Y Ge; L Song; D Xiong; L Sun; J Luo", "journal": "Artificial Intelligence", "ref_id": "b31", "title": "Enhanced aspect-based sentiment analysis models with progressive self-supervised attention learning", "year": "2021" }, { "authors": "A Jacovi; Y Goldberg", "journal": "", "ref_id": "b32", "title": "Towards faithfully interpretable nlp systems: How should we define and evaluate faithfulness?", "year": "2020" }, { "authors": "H Kamigaito; K Hayashi; T Hirao; H Takamura; M Okumura; M Nagata", "journal": "", "ref_id": "b33", "title": "Supervised attention for sequence-to-sequence constituency parsing", "year": "2017" }, { "authors": "Y Zou; T Gui; Q Zhang; X.-J Huang", "journal": "", "ref_id": "b34", "title": "A lexicon-based supervised attention model for neural sentiment analysis", "year": "2018" }, { "authors": "M Nguyen; T H Nguyen", "journal": "", "ref_id": "b35", "title": "Who is killed by police: Introducing supervised attention for hierarchical lstms", "year": "2018" }, { "authors": "F Zhao; Z Wu; X Dai", "journal": "", "ref_id": "b36", "title": "Attention transfer network for aspectlevel sentiment classification", "year": "2020" }, { "authors": "X 
Hu; X Kong; K Tu", "journal": "", "ref_id": "b37", "title": "A multi-grained self-interpretable symbolicneural model for single/multi-labeled text classification", "year": "2023" }, { "authors": "S Jain; B C Wallace", "journal": "", "ref_id": "b38", "title": "Attention is not explanation", "year": "2019" }, { "authors": "R Socher; A Perelygin; J Wu; J Chuang; C D Manning; A Y Ng; C Potts", "journal": "", "ref_id": "b39", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "year": "2013" }, { "authors": "X Li; D Roth", "journal": "", "ref_id": "b40", "title": "Learning question classifiers", "year": "2002" }, { "authors": "B Pang; L Lee", "journal": "", "ref_id": "b41", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "year": "2005" }, { "authors": "M Hu; B Liu", "journal": "", "ref_id": "b42", "title": "Mining and summarizing customer reviews", "year": "2004" }, { "authors": "B Pang; L Lee", "journal": "", "ref_id": "b43", "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "year": "2004" }, { "authors": "J Wiebe; T Wilson; C Cardie", "journal": "Language resources and evaluation", "ref_id": "b44", "title": "Annotating expressions of opinions and emotions in language", "year": "2005" }, { "authors": "A Maas; R E Daly; P T Pham; D Huang; A Y Ng; C Potts", "journal": "", "ref_id": "b45", "title": "Learning word vectors for sentiment analysis", "year": "2011" }, { "authors": "M Pontiki; H Papageorgiou; D Galanis; I Androutsopoulos; J Pavlopoulos; S Manandhar", "journal": "SemEval", "ref_id": "b46", "title": "Semeval-2014 task 4: Aspect based sentiment analysis", "year": "2014" }, { "authors": "L Dong; F Wei; C Tan; D Tang; M Zhou; K Xu", "journal": "", "ref_id": "b47", "title": "Adaptive recursive neural network for target-dependent twitter sentiment classification", "year": "2014" }, { "authors": "C Peng; Z Sun; L Bing; Y Wei", "journal": "", "ref_id": "b48", "title": "Recurrent attention network on memory for aspect sentiment analysis", "year": "2017" }, { "authors": "S Wang; S Mazumder; B Liu; M Zhou; Y Chang", "journal": "", "ref_id": "b49", "title": "Target-sensitive memory networks for aspect sentiment classification", "year": "2018" }, { "authors": "S Serrano; N A Smith", "journal": "", "ref_id": "b50", "title": "Is attention interpretable?", "year": "2019" }, { "authors": "P Zhou; W Shi; J Tian; Z Qi; B Li; H Hao; B Xu", "journal": "", "ref_id": "b51", "title": "Attention-based bidirectional long short-term memory networks for relation classification", "year": "2016" }, { "authors": "J Dai; H Yan; T Sun; P Liu; X Qiu", "journal": "CoRR", "ref_id": "b52", "title": "Does syntax matter? A strong baseline for aspect-based sentiment analysis with roberta", "year": "2021" }, { "authors": "T Mikolov; I Sutskever; K Chen; G Corrado; J Dean", "journal": "", "ref_id": "b53", "title": "Distributed representations of words and phrases and their compositionality", "year": "2013" } ]
[ { "formula_coordinates": [ 4, 103.83, 324.15, 196.86, 33.78 ], "formula_id": "formula_0", "formula_text": "L W BCP = || h -h|| 2 2 + || y -y|| 2 2 -λ n i=1 H(ϵ i )| ϵi∼N (0,Σi=σ 2 i I) ,(1)" }, { "formula_coordinates": [ 4, 88.6, 513.16, 171.78, 95.69 ], "formula_id": "formula_1", "formula_text": "M aximize(H(ϵ i )) = M aximize(-p(ϵ i ) ln p(ϵ i )dϵ i ) = M aximize( 1 2 (ln(2πσ i 2 ) + 1)) = M aximize(ln 2( 1 2 log(2πe) + log σ i )) = M aximize(log σ i )" }, { "formula_coordinates": [ 4, 56.11, 641.66, 244.58, 20.09 ], "formula_id": "formula_2", "formula_text": "L W BCP = || h -h|| 2 2 + || y -y|| 2 2 + λ n i=1 log(-σ i ) (2)" }, { "formula_coordinates": [ 4, 395.64, 361, 168.06, 36.74 ], "formula_id": "formula_3", "formula_text": "α ′ i = 1 - σ i max j {σ j } α = Softmax(α ′ )(3)" }, { "formula_coordinates": [ 5, 78.54, 251.37, 222.15, 22.31 ], "formula_id": "formula_4", "formula_text": "L cls = 1 M M m=1 ŷm log y m + γKL( α m ||α m ),(4)" } ]
Perturbation-based Self-supervised Attention for Attention Bias in Text Classification
In text classification, the traditional attention mechanisms usually focus too much on frequent words, and need extensive labeled data in order to learn. This paper proposes a perturbation-based self-supervised attention approach to guide attention learning without any annotation overhead. Specifically, we add as much noise as possible to all the words in the sentence without changing their semantics and predictions. We hypothesize that words that tolerate more noise are less significant, and we can use this information to refine the attention distribution. Experimental results on three text classification tasks show that our approach can significantly improve the performance of current attention-based models, and is more effective than existing selfsupervised methods. We also provide a visualization analysis to verify the effectiveness of our approach.
Huawen Feng; Zhenxi Lin; Qianli Ma
[ { "figure_caption": "Fig. 2 .2Fig. 2. The diagram of WBCP. The left part of the figure corresponds to the last term of Eq. (2), which illustrates the process of adding noise that follows a Gaussian distribution to each word. The right part of the figure corresponds to the first two terms of Eq. (2), indicating the constraint of trying to not change the semantics and predictions after the noise is introduced.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 :1Perturbation-based self-supervised attention Input: training dataset D, attention-based model f (•, θ), the number of iterations T . Pre-train model f (•, θ) on D and update θ using Adam. for t = 1, ...T do Fix θ, and minimize WBCP objective function by Eq. (2) using Adam. Obtain the perturbation amplitude σ for each sample in D. Calculate the attention supervision α by Eq. (3) for each sample in D. Re-train model on D with the attention supervision α by Eq. (", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The visualization result of several samples on SST2 test set.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. The chart of the fluctuations of accuracy when we change the value of the sample ratio. Each triangle point and circular point corresponds to the accuracy of BERT and BERT+PBSA under the current sample ratio, respectively.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. The chart of the fluctuations of Macro-F1 when we change the values of hyperparameters.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. The illustration of Att-BERT.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. The illustration of BERTABSA.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .10Fig. 10. 
The illustration of Att-BERTABSA.", "figure_data": "", "figure_id": "fig_7", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "DATASET STATISTICS.", "figure_data": "TaskDatasetClassAvgLenTrainTestSST2 [40]2196,9201821TREC [41]6105,452500Sentence ClassificationMR [42] CR [43]2 219 1910,662 3,775--SUBJ [44]22310,000-MPQA [45]2310,606-Document CategorizationIMDB [46]228025,00025,000REST [47]3163,5911,121Aspect-based Sentiment AnalyisLAPTOP [47]3172,292639TWITTER [48]3196,248692", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "PERFORMANCE OF PBSA ON THE DOCUMENT-LEVEL AND SENTENCE-LEVEL CLASSIFICATION.", "figure_data": "ModelIMDBSST2TRECMRCRSUBJ MPQAAverageAtt-BiLSTM87.2183.4290.6077.04 76.8289.8270.5982.20Att-BiLSTM+PBSA89.1485.7292.2079.05 77.6490.5371.3183.65Att-BERT(base)92.5391.4396.6079.26 89.0694.3089.6990.41Att-BERT(base)+PBSA92.6191.9397.2079.97 89.3894.7690.2190.86BERT(base)92.9291.7196.6085.47 89.4296.3089.5991.71BERT(base)+PBSA93.4892.2097.8086.08 90.2197.5090.5792.55DEBERTA(base)91.1492.6996.2086.64 91.0195.3089.7491.82DEBERTA(base)+PBSA91.6893.0296.8087.18 91.8095.7090.8292.43ELECTRA(base)93.4894.6796.889.2792.296.7590.993.44ELECTRA(base)+PBSA93.8795.4397.4089.88 92.9997.3091.5594.06", "figure_id": "tab_2", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "PERFORMANCE OF PBSA ON THE ASPECT-LEVEL CLASSIFICATION.", "figure_data": "ModelsRESTLAPTOPTWITTERAccuracyMacro-F1AccuracyMacro-F1AccuracyMacro-F1MN [50]77.3265.8868.9063.2867.7866.18MN (Ours)79.8965.8972.6861.9768.3466.23+PBSA83.9870.8475.7567.2172.1069.64BERTABSA79.8071.3779.3875.6976.0174.52+PBSA79.8971.5979.5175.8776.1174.69Att-BERTABSA83.2975.8777.9875.0273.9971.23+PBSA83.4176.7078.6575.5374.4572.88", "figure_id": "tab_4", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "PERFORMANCE OF PBSA ON SMALL-SIZE PRE-TRAINED MODELS.", "figure_data": "ModelIMDBSST2TRECMRCRSUBJ MPQAAverageBERT(small)90.8188.8595.6081.1681.6194.4587.2388.53BERT(small)+PBSA91.7390.1797.4082.3383.0796.2088.5489.92ELECTRA(small)92.3791.2196.0083.6087.3095.7088.9790.74ELECTRA(small)+PBSA93.3592.2096.4084.7288.6296.6590.6291.79", "figure_id": "tab_6", "figure_label": "VII", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[1], [2], [3]", "Explanation": "The cited works introduce attention mechanisms that are essential in NLP tasks and are adopted in the citing paper to improve text classification performance."}, {"Category": "Supporting Evidence", "Citation": "[4], [5], [6]", "Explanation": "The cited works demonstrate the effectiveness of attention mechanisms in sentiment analysis tasks, providing foundational evidence for the citing paper to further explore the use of attention in text classification."}, {"Category": "Supporting Evidence", "Citation": "[7]", "Explanation": "The cited work shows the application of attention mechanisms in document classification tasks, adding to the body of evidence supporting the use of attention in text classification."}, {"Category": "Supporting Evidence", "Citation": "[8]", "Explanation": "The cited work highlights the use of attention mechanisms in natural language inference tasks, further demonstrating the versatility of attention in text classification."}, {"Category": "Supporting Evidence", "Citation": "[9], [10]", "Explanation": "The cited works discuss the challenges of learning good attention distributions without spurious correlations, providing insights for the citing paper to address these issues in text classification."}, {"Category": "Supporting Evidence", "Citation": "[11]", "Explanation": "The cited work by Wallace et al. highlights the vulnerability of sentiment models to poison examples, providing a cautionary tale for the citing paper to consider in text classification."}, {"Category": "Supporting Evidence", "Citation": "[12], [13], [5]", "Explanation": "The cited works show the tendency of attention mechanisms to focus on high-frequency words with sentiment polarities, which the citing paper addresses in its research on text classification."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work provides a method of manual annotation for human supervision, which the citing paper adopts to improve the attention mechanism in text classification."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work also contributes a method of manual annotation for human supervision, which the citing paper uses to improve the attention mechanism in text classification."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work provides another method of manual annotation for human supervision, which the citing paper incorporates to enhance the attention mechanism in text classification."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work presents a special instrument for human supervision, which the citing paper utilizes to improve the attention mechanism in text classification."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work also contributes a special instrument for human supervision, which the citing paper adopts to enhance the attention mechanism in text classification."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work provides another special instrument for human supervision, which the citing paper incorporates to improve the attention mechanism in text classification."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work presents a final special instrument for human supervision, which the citing paper adopts to enhance the attention mechanism in text 
classification."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work by Sen et al. highlights the differences between human and machine attention, which the citing paper acknowledges and uses to inform their own research on attention supervision signals."}, {"Category": "Extension or Continuation", "Citation": "[20], [21], [5], [6]", "Explanation": "The cited works on attribution scores are extended in the citing paper to use the information from the training corpus to refine attention distribution in a flexible and cost-effective manner."}, {"Category": "Data Source", "Citation": "[22], [23], [24]", "Explanation": "The cited works on the out-of-distribution (OOD) problem in counterfactual examples are acknowledged as a potential issue in the citing paper, which may impact the accuracy of the attention supervision signal."}, {"Category": "Supporting Evidence", "Citation": "[25], [26]", "Explanation": "The cited works establish the concept of maximum safety radius, which is used in the citing paper to evaluate the minimum distance of the nearest perturbed text in the context of input perturbations."}, {"Category": "Supporting Evidence", "Citation": "[27]", "Explanation": "The cited work introduces the concept of minimum disturbance rejection, which is utilized in the citing paper to evaluate the minimum distance of the nearest perturbed text in the context of input perturbations."}, {"Category": "Extension or Continuation", "Citation": "[28]", "Explanation": "The cited work, BERT, is a baseline for the experiments conducted in the citing paper to justify the effectiveness of the method proposed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[29]", "Explanation": "The cited work, DEBERTA, is a baseline for the experiments conducted in the citing paper to justify the effectiveness of the method proposed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[30]", "Explanation": "The cited work, ELECTRA, is a baseline for the experiments conducted in the citing paper to justify the effectiveness of the method proposed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[31]", "Explanation": "The cited work, Memory Net, is a baseline for the experiments conducted in the citing paper to justify the effectiveness of the method proposed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[32]", "Explanation": "The cited work, PGAS, is a method for attention self-supervision that is compared with the method proposed in the citing paper to evaluate its effectiveness."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The cited work, AWAS, is a method for attention self-supervision that is compared with the method proposed in the citing paper to evaluate its effectiveness."}, {"Category": "Extension or Continuation", "Citation": "[6]", "Explanation": "The cited work, SANA, is a method for attention self-supervision that is compared with the method proposed in the citing paper to evaluate its effectiveness."}, {"Category": "Supporting Evidence", "Citation": "[14], [15], [16], [17], [18]", "Explanation": "The cited works demonstrate the effectiveness of human supervision in alleviating attention bias and improving model prediction accuracy on various tasks, providing a strong basis for the citing paper to further explore the use of human attention in attention bias mitigation."}, {"Category": "Extension or Continuation", 
"Citation": "[9], [16], [17], [18]", "Explanation": "The cited works focus on using implicit signals such as eye gaze to add human supervision to attention, which the citing paper extends by exploring the use of other implicit signals in this area."}, {"Category": "Supporting Evidence", "Citation": "[33]", "Explanation": "The cited work highlights the potential inconsistency between human recognition and model reasoning processes, which the citing paper addresses by proposing a method to align the two."}, {"Category": "Supporting Evidence", "Citation": "[19]", "Explanation": "The cited work challenges the alignment of human recognition and model reasoning processes, providing a basis for the citing paper to address this issue in their method."}, {"Category": "Methodological Basis", "Citation": "[20], [21], [5], [6], [32]", "Explanation": "The cited works on self-supervised attention learning frameworks provide a mainstream method for attention learning in the citing paper, which the author adopts to conduct their own research on attention supervision information."}, {"Category": "Supporting Evidence", "Citation": "[5]", "Explanation": "The cited work by Choi et al. [6] adopts the masking method to find the unimportant words and gradually reduce their weights, which is a self-supervised paradigm that can reduce the annotation cost and improve the robustness of the model."}, {"Category": "Extension or Continuation", "Citation": "[6]", "Explanation": "The citing paper builds upon the work by Choi et al. [6] to further explore the use of the masking method in finding important words and reducing the weights of unimportant words, with a focus on improving the robustness of the model."}, {"Category": "Methodological Basis", "Citation": "[39]", "Explanation": "The cited work provides a method for quantifying the change of information in the text by minimizing the L2-normalized euclidean distance between the two hidden states and the two predictions, which the citing paper adopts in the loss function of WBCP."}, {"Category": "Methodological Basis", "Citation": "[5], [6]", "Explanation": "The cited works provide a method for generating sample-specific attention supervision by quantifying the perturbation magnitude in the embedding space, which the citing paper adopts in their research to improve the accuracy of word-level importance distribution."}, {"Category": "Data Source", "Citation": "[49]", "Explanation": "The cited work is used to provide a data splitting protocol for the task of aspect-level sentiment analysis, which the citing paper follows in the data preparation process."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work Att-BiLSTM is used as a baseline to compare the performance of the attention-based model in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[28]", "Explanation": "The pre-trained model BERT is used as a baseline to explore the performance of the attention-based model in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[29]", "Explanation": "The pre-trained model DEBERTA is used as a baseline to compare the performance of the attention-based model in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[30]", "Explanation": "The pre-trained model ELECTRA is used as a baseline to compare the performance of the attention-based model in the citing paper."}, {"Category": "Data Source", "Citation": "[32]", "Explanation": "The cited work provides insights on the 
challenge of selecting the suitable head in multi-layer and multi-head attention in Transformers, which the citing paper uses to guide its research on combining the method with Transformers."}, {"Category": "Supporting Evidence", "Citation": "[41]", "Explanation": "The cited work provides a dataset of 150 examples for the task of MR, which is used in the experiments conducted in the citing paper to assess the performance of PBSA."}, {"Category": "Supporting Evidence", "Citation": "[42]", "Explanation": "The cited work provides a dataset of 150 examples for the task of CR, which is used in the experiments conducted in the citing paper to assess the performance of PBSA."}, {"Category": "Supporting Evidence", "Citation": "[43]", "Explanation": "The cited work provides a dataset of 150 examples for the task of SUBJ, which is used in the experiments conducted in the citing paper to assess the performance of PBSA."}, {"Category": "Supporting Evidence", "Citation": "[44]", "Explanation": "The cited work provides a dataset of 150 examples for the task of MPQA, which is used in the experiments conducted in the citing paper to assess the performance of PBSA."}, {"Category": "Supporting Evidence", "Citation": "[45]", "Explanation": "The cited work provides a dataset of 100 examples for the task of document categorization, which is used in the experiments conducted in the citing paper to assess the performance of PBSA."}, {"Category": "Supporting Evidence", "Citation": "[46]", "Explanation": "The cited work provides a dataset of 300 examples for the task of document categorization, which is used in the experiments conducted in the citing paper to assess the performance of PBSA."}, {"Category": "Supporting Evidence", "Citation": "[47]", "Explanation": "The cited work provides two datasets of 300 examples each for the task of aspect-based sentiment analysis, which are used in the experiments conducted in the citing paper to assess the performance of PBSA."}, {"Category": "Supporting Evidence", "Citation": "[48]", "Explanation": "The cited work provides a dataset of 300 examples for the task of aspect-based sentiment analysis on the Twitter platform, which is used in the experiments conducted in the citing paper to assess the performance of PBSA."}, {"Category": "Methodological Basis", "Citation": "[31], [50], [53], [32]", "Explanation": "The cited works provide the base models (MN, BERTABSA, and Att-BERTABSA) for the citing paper to build upon in the aspect-level classification task. The cited works serve as the methodological basis for the experiments conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[6]", "Explanation": "The cited work, SANA, provides a masking scheme for generating counterfactuals and measuring the difference in softmax probability between the counterfactual and original sample. This method serves as a foundational approach for measuring the importance of words in self-supervised attention learning."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The cited work, AWAS, and PGAS both use progressive masking to measure the attention weight or partial gradient of words. 
The citing paper extends this approach by proposing a new word-based concurrent perturbation method to more accurately mine the importance distribution of words in self-supervised attention learning."}, {"Category": "Methodological Basis", "Citation": "[51]", "Explanation": "The cited work introduces the gradient-based method for generating supervision information, which the citing paper adopts in their research to improve the interpretability of the model but does not necessarily improve its performance."}, {"Category": "Methodological Basis", "Citation": "[54]", "Explanation": "The cited work provides the pre-trained word embedding model that the citing paper utilizes in the process of mapping words into embeddings for the sentence-level hidden state computation in the Att-BiLSTM model."}]
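For readers working through the citation notes above: the counterfactual-masking idea attributed to SANA (and extended by the progressive-masking variants AWAS and PGAS) can be sketched in a few lines. This is a minimal illustration only — the toy classifier, the mask token id, and the one-token-at-a-time loop are assumptions, not the cited authors' implementations.

```python
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    """Stand-in text classifier (hypothetical), included only to make the sketch runnable."""
    def __init__(self, vocab_size=1000, dim=32, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, token_ids):  # (1, L) token ids -> (1, num_classes) logits
        return self.fc(self.emb(token_ids).mean(dim=1))

def word_importance_by_masking(model, token_ids, mask_id, target_class):
    """Score each token by how much masking it lowers the predicted probability of the
    target class -- the counterfactual-masking measure summarized for SANA above."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(token_ids), dim=-1)[0, target_class]
        scores = []
        for i in range(token_ids.shape[1]):
            masked = token_ids.clone()
            masked[0, i] = mask_id          # replace one word with the mask token
            prob = torch.softmax(model(masked), dim=-1)[0, target_class]
            scores.append((base - prob).item())  # large drop -> important word
    return scores

model = ToyClassifier()
tokens = torch.randint(1, 1000, (1, 8))
print(word_importance_by_masking(model, tokens, mask_id=0, target_class=1))
```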
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b8", "b9", "b14", "b34", "b55", "b12", "b37", "b52", "b15", "b20", "b22", "b45", "b48", "b13", "b33", "b44", "b47", "b49", "b7" ], "table_ref": [], "text": "Visual object tracking is a fundamental task in computer vision, and deep learning-based methods [7, 9,10,15,35,56] have dominated this field. Limited by the conventional sensor, most existing approaches are designed and evaluated on benchmarks [13,24,38,53] with a low frame rate of approximately 30 frames per second (FPS). However, the value of a higher frame rate tracking in the real world has been proved [16,[21][22][23]. For example, the shuttlecock can reach speeds of up to 493km/h, and analyzing its position is essential for athletes to learn how to improve their skills [46]. Utilizing professional high-speed cameras is one strategy ⋆ Xin Yang ([email protected]) is the corresponding author. for high frame rate tracking, but these cameras are inaccessible to casual users. Consumer devices with cameras, such as smartphones, have made attempts to integrate sensors with similar functionalities into their systems. However, these sensors still suffer from large memory requirements and high power consumption [49].\nAs bio-inspired sensors, event-based cameras measure light intensity changes and output asynchronous events to represent visual information. Compared with conventional frame-based sensors, event-based cameras offer a high measurement rate (up to 1MHz), high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) [14]. These unique properties offer great potential for higher frame rate tracking in challenging conditions. Nevertheless, event-based cameras cannot measure fine-grained texture information like conventional cameras, thus inhibiting tracking performance. Therefore, in this paper, we exploit to integrate the valuable information from event-based modality with that of framebased modality for high frame rate single object tracking under various challenging conditions.\nTo attain our objective, two challenges require to be addressed: (i) The measurement rate of event-based cameras is much higher than that of conventional cameras. Hence for high frame rate tracking, low-frequency frames must be aligned with high-frequency events to disambiguate target locations. Although recent works [34,45,48,50] have proposed various alignment strategies across multiple frames for video-related tasks, they are specifically designed for conventional frames of the same modality at different moments. Thus, applying these approaches directly to our cross-modality alignment does not offer an effective solution. (ii) Effectively fusing complementary information between modalities and preventing interference from noise is another challenge. Recently, Zhang et al. [61] proposed a cross-domain attention scheme to fuse visual cues from frame and event modalities for improving the single object tracking performance under different degraded conditions. However, the tracking frequency is bounded by the conventional frame rate since they ignore the rich temporal information recorded in the event modality.\nTo tackle the above challenges, we propose a novel endto-end framework to effectively combine complementary information from two modalities at different measurement rates for high frame rate tracking, dubbed AFNet, which consists of two key components for alignment and fusion, respectively. 
Specifically, (i) we first propose an eventguided cross-modality alignment (ECA) module to simultaneously accomplish cross-style alignment and cross-framerate alignment. Cross-style alignment is enforced by matching feature statistics between conventional frame modality and events augmented by a well-designed attention scheme; Cross-frame-rate alignment is based on deformable convolution [8] to facilitate alignment without explicit motion estimation or image warping operation by implicitly focusing on motion cues. (ii) A cross-correlation fusion (CF) module is further presented to combine complementary information by learning a dynamic filter from one modality that contributes to the feature expression of another modality, thereby emphasizing valuable information and suppressing interference. Extensive experiments on different eventbased tracking datasets validate the effectiveness of the proposed approach (see Figure 1 as an example).\nIn summary, we make the following contributions:\n• Our AFNet is, to our knowledge, the first to combine the rich textural clues of frames with the high temporal resolution offered by events for high frame rate object tracking.\n• We design a novel event-guided alignment framework that performs cross-modality and cross-frame-rate alignment simultaneously, as well as a cross-correlation fusion architecture that complements the two modalities.\n• Through extensive experiments, we show that the proposed approach outperforms state-of-the-art trackers in various challenging conditions." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Visual Object Tracking", "publication_ref": [ "b0", "b3", "b17", "b32", "b54", "b63", "b27", "b40", "b29", "b51", "b4", "b24", "b58", "b30", "b11", "b28", "b57", "b35", "b36", "b41", "b59" ], "table_ref": [], "text": "Visual object tracking based on the conventional frame has undergone astonishing progress in recent years, which can be generally divided into two categories, i.e., correlation filter (CF) trackers [1,4,18,33], and deep trackers [2, 26,39,55,64,65]. CF trackers learn a filter corresponding to the object of interest in the first frame, and this filter is used to locate the target in subsequent frames. While mainstream deep trackers estimate a general similarity map by cross-correlation between template and search images. However, limited by sensors and benchmarks, those methods are mainly applied to low frame rate (30FPS) tracking.\nThe high temporal resolution of event cameras allows tracking targets at a higher frame rate. Compared with conventional frame-based tracking, a few attempts have been made at event-based tracking, which can be generally classified into cluster-based and learning-based methods. Litzenberger et al. [28] assigned each new event to a cluster based on distance criteria, which is continuously updated for locating the target. Linares et al. [27] used software to initialize the size and location of clusters, then proposed an FPGA-based framework for tracking. Piatkowska et al. [41] extended the clustering method by a stochastic prediction of the objects' states to locate multiple persons. However, these methods involve handcrafted strategies and only apply in simple situations. Based on the powerful representation ability of deep learning [30,52], Chen et al. [5,6] designed two different event representation algorithms based on Time Surface [25] for target location regression. Zhang et al. 
[59] combined Swin-Transformer [31] and spiking neural network [12,29,58] to extract spatial and temporal features for improving event-based tracking performance. However, these event-based trackers often fail to locate targets accurately when events are too sparse or insufficient.\nTo combine benefits from frame and event modalities, [61] employed attention schemes [36,37,42,43,60] to balance the contributions of the two modalities. This work is most closely related to ours, but it does not exploit the high measurement rate of event-based cameras to accomplish a higher frame rate tracking, thus the tracking frequency is constrained by the frame rate in the frame modality. In contrast, our approach attains high frame rate tracking under various challenging conditions by aligning and fusing frame and event modalities with different measurement rates." }, { "figure_ref": [], "heading": "Alignment between Multiple Frames", "publication_ref": [ "b47", "b49", "b10", "b62", "b43", "b46", "b53", "b10", "b44", "b47", "b7" ], "table_ref": [], "text": "Alignment across multiple frames in the same sequence is essential to exploit the temporal information for videorelated tasks, such as video super-resolution [48,50] and compressed video quality enhancement [11,63]. A line of works [44,47,54] performs alignment by estimating the op-tical flow field between the reference and its neighbouring frames. In another line of works [11,45,48], implicit motion compensation is accomplished by deformable convolution. Deformable convolution was first proposed in [8], which improves the ability of convolutional layers to model geometric transformations by learning additional offsets. Although the deformable convolution has shown superiority in alignment on the conventional frame domain, aligning on the frame and event modalities brings unique challenges caused by the different styles. In this paper, we propose a novel alignment strategy to simultaneously achieve crossmodality and cross-frame-rate alignment." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Events Representation", "publication_ref": [], "table_ref": [], "text": "Event-based cameras asynchronously capture log intensity change for each pixel. An event will be triggered when:\nL(x, y, t) -L(x, y, t -∆t) ≥ pC,(1)\nwhere C denotes a certain contrast threshold; p is the polarity which means the sign of bright change, with +1 and -1 representing the positive and negative events, respectively. ∆t is the time since the last event at location (x, y) ⊤ . Suppose two sequential conventional frames F i and F i+1 are captured at times i and i + 1, respectively.\nE i→i+1 = {[x k , y k , t k , p k ]} N -1 k=0 contains N events triggered during the interval [i, i + 1].\nOur goal is to achieve high frame rate tracking by aligning and fusing conventional frame F i and E i→t at any time t ∈ [i, i + 1]. The apart in time between dual-modality inputs depends on their frame rates. Specifically, t -i = n γe , where n is an integer in [1, γe γ f ]; γ e and γ f denote the frame rates of event and frame modalities, respectively. Following [61], we represent events E i→t as:\ng(x, y) = ⌊ p k × δ(x -x k , y -y k , t -t k ) + 1 2 × 255⌋,(2\n) where g(x, y) denotes the pixel value of aggregated events at (x, y) ⊤ ; δ is the Dirac delta function. In this way, the asynchronous event stream E i→t is accumulated to a 2D event frame, denoted E t ." 
}, { "figure_ref": [ "fig_1" ], "heading": "Network Overview", "publication_ref": [ "b2" ], "table_ref": [], "text": "Following DiMP [3], as illustrated in Figure 2 (a), the overall architecture of our proposed AFNet contains three components: the feature extractor (i.e., backbone, ECA, and CF), the target classifier, and the bbox regressor. The feature extractors of the template branch and the search branch share the same architecture. Each branch receives an RGB image F i and aggregated events E t at different times as inputs, and corresponding features F f and F e can be extracted by the backbone network. ECA and CF are two key components of our method. The goal of ECA is to address the misalignment between the conventional frames and aggregated event frames at different moments. While CF aims to combine the strengths of both modalities by complementing one modality with information from another. Both target classifier and bbox regressor receive the fused features from feature extractors. Given a template set of fused features and corresponding target boxes, the model predictor generates the weights of the target classifier. Applying these weights to the features collected from search branch predicts the target confidence scores. The bbox regressor estimates the IoU of the groundtruth and the predicted bounding box." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Event-guided Cross-modality Alignment", "publication_ref": [ "b18", "b49", "b7" ], "table_ref": [], "text": "The ECA module is proposed to align conventional frames to the reference aggregated events at the feature level. The key to ECA is designed based on the following challenges: (i) Cross-style alignment is a challenge. Frames and events are recorded by different sensors and thus have different styles, making alignment challenging. (ii) Crossframe-rate alignment is another challenge. The frame rate of aggregated event frames is far higher than that of conventional images, resulting in target location ambiguity that confuses the tracker's predictions. As shown in Figure 2 (b), ECA contains three modules: Motion Aware (MA), Style Transformer (ST), and Deformable Alignment (DA). MA. Since event-based cameras respond to changes in light intensity, they provide natural motion cues that can effectively facilitate multi-modality alignment. We thus first enhance the valuable motion information of event modality by visual attention mechanisms. As shown in Figure 2 (b), given event modality features F e ∈ R C×H×W , we design spatial and channel attention schemes to emphasize the meaningful moving cues while suppressing noise,\nF c e = σ(ψ 1 (ψ 1 (R (C,1,1) (F s e ))))F e ,(3)\nF s e = R (1,C,HW ) (F e ) × R (1,HW,1) (S(ψ 1 (F e ))),(4)\nwhere F s e and F c e are event features enhanced in the spatial and channel dimensions, respectively. ψ k denotes the convolution operation where kernel size is k × k; S and σ denote the softmax and the sigmoid function, respectively; R(•) is a reshape function with a target shape (•). ST. ST is responsible for combining the content of conventional frames and the style of events to meet the first challenge. Specifically, F c e is employed to guide the frame features F f to focus on the motion cues that aid in alignment,\nF m f = σ(F c e )F f + F f ,(5)\nwhere F m f denotes frame features fused with moving information provided by events. 
Then, we adopt the adaptive instance normalization (AdaIN) [19] to adjust the mean and variance of the content input (i.e., frame features) to match those of the style input (i.e., event features). Formally,\nF st f = AdaIN(F m f , F c e ) = σ(F c e ) F m f -µ(F m f ) σ(F m f ) + µ(F c e ),(6)\nwhere F st f denotes the output of our ST module, which combines the content of frame modality and the style of event modality. µ and σ are the mean and standard deviation, computed independently across batch size and spatial dimensions for each feature channel. DA. To address the second challenge, inspired by [50], we propose the DA module to adaptively align the conventional frames and aggregated events at different frame rates without explicit motion estimation and image warping operations. As shown in Figure 2 (b), DA first predict the offsets O of the convolution kernels according to F c e and F st f ,\nO = ψ 3 (ψ 1 ([F c e , F st f ])),(7)\nwhere [•] denotes channel-wise concatenation. The learnable offsets will implicitly focus on motion cues and explore similar features across modalities for alignment. With O and F f , the aligned feature F da f of the conventional frame can be computed by the deformable convolution D [8],\nF da f = D(F f , O).(8)" }, { "figure_ref": [ "fig_1" ], "heading": "Cross-correlation Fusion", "publication_ref": [ "b2" ], "table_ref": [], "text": "Our CF is proposed to robustly fuse frame and event correlations by adaptively learning a dynamic filter from one modality that contributes to the feature expression of another modality. Simply fusing frame and event modalities ignores circumstance in which one of the modalities does not provide meaningful information. In an HDR scene, for instance, the frame modality will provide no useful information, yet the event modality still exhibits strong cues. Conversely, in the absence of motion, event-based cameras cannot successfully record target-related information, while conventional frames can still deliver rich texture features. Therefore, we propose a cross-correlation scheme to complement one domain with information from another domain as shown in Figure 2 (c). Specifically, given the aligned frame feature F da f and enhanced event feature F c e , the proposed CF first adaptively estimates a dynamic filter of high-level contextual information from one modality. Then, this dynamic filter serves to enhance the features of another modality. Formally,\nF e f = F ⊛ K e + F, F = ϑ(ψ 3 (F da f )), K e = ψ 3 (A(F c e )),(9)\nwhere F e f denotes the enhanced feature of the frame modality based on the dynamic filter K e from event modality; ⊛ is the depthwise convolution; A denotes the adaptive average pooling; ϑ is the Batch Normalization (BN) followed by a ReLU activation function. Similarly, we can extract the complementary feature F f e of the event modality based on the dynamic filter K f from frame modality. Finally, F e f and F f e are concatenated to build the fused feature F f u ,\nF f u = ψ 1 ([ψ 1 (F e f ), ψ 1 (F f e )]),(10)\nF f u will be fed into the classifier and regressor to locate the target. The classifier adopts an effective model initializer and a steepest descent based optimizer to predict the score map. The regressor employs the overlap maximization strategy for the task of accurate bounding box estimation. We refer to [3] for details." 
}, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b16", "b2", "b39", "b31" ], "table_ref": [], "text": "We adopt the pretrained ResNet18 [17] as the backbone to extract frame and event features. Following [3,61], the loss function is defined as:\nL = βL cls + L bb ,(11)\nwhere L cls is the target classification loss which includes a hinge function to equally focus on both positive and negative samples. L bb is the bounding box regressor loss which estimates MSE between the predicted IoU and the groundtruth. β is set to 100. We implemented our approach in PyTorch [40] and trained our network for 100 epochs with a batch size of 32 using Adam optimizer with the default parameters. We set the initial learning rate of the feature extraction network, the classifier, and the regressor to 2e-4, 1e-3, 1e-3, respectively. The learning rate is adjusted by the CosineAnneal-ingLR strategy [32]. Our network is run on a single Nvidia RTX3090 GPU with 24G memory." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b50", "b58" ], "table_ref": [], "text": "We evaluate our AFNet on two event-frame-based datasets: FE240hz [61] and VisEvent [51]. The FE240hz dataset has annotation frequencies as high as 240 Hz and consists of more than 143K images and corresponding recorded events. With this dataset, our method can accomplish a high frame rate tracking of 240Hz. Compared with FE240hz, VisEvent provides a low annotation frequency, about 25Hz. However, it contains various rigid and nonrigid targets both indoors and outdoors. Following [59], there are 205 sequences for training and 172 for testing." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Comparison with State-of-the-art Trackers", "publication_ref": [ "b8", "b2", "b9", "b55", "b34", "b56", "b61", "b2", "b8", "b9", "b34", "b55", "b56", "b61", "b50" ], "table_ref": [ "tab_3" ], "text": "To demonstrate the effectiveness of our method, we compare AFNet with the nine state-of-the-art trackers. Specifically, ATOM [9], DiMP [3], PrDiMP [10], STARKs [56], TransT [7] and ToMP [35] are conventional frame-based trackers. For a fair comparison, we extend them to multimodality trackers via the following two fusion strategies: (i) Early Fusion (EF), we first add the aggregated events and corresponding frame as unified data, and then feed it into trackers; (ii) Middle Fusion (MF), we first use the backbone of these trackers to extract the frame and event features separately before feeding the sum of these features into the regressor. We also compared three original multimodality methods: DeT [57], HMFT [62], and FENet [61] are frame-depth, frame-thermal, and frame-event trackers, respectively. All approaches are re-trained and tested on the FE240hz and VisEvent datasets. Following [61], we use RSR and RPR to evaluate all trackers. RSR and RPR focus on the overlap and center distance between the ground truth and the predicted bounding box, respectively.\nFigure 3 (a) shows the overall evaluation results on the FE240hz [61] dataset, which demonstrates the proposed AFNet offers state-of-the-art high frame rate tracking performance and outperforms other compared approaches in terms of both precision and success rate. In particular, our proposed AFNet achieves an 87.0% overall RPR and 58.4% RSR, outperforming the runner-up by 2.7% and 2.8%, respectively. 
We further validate the robustness of our AFNet under five common challenging scenarios: high dynamic range (HDR), low-light (LL), fast motion (FM), no motion (NM), and severe background motion (SBM). Among them, the first three conditions present challenges for tracking in the conventional frame modality, while the last two scenarios provide difficulties for the event modality. As shown in Table 1, AFNet surpasses the other approaches in all five conditions, which validates the effectiveness of our proposed approach on high frame rate object tracking. The extended multi-modal methods [3,7,9,10,35,56] lack a well-designed fusion module, preventing them from efficiently combining the complementary information of the two domains. Meanwhile, the original multi-modality trackers DeT [57], HMFT [62], and FENet [61] do not address the misalignment between frame and event data at different measurement rates, causing ambiguity when locating targets. Figure 4 further qualitatively shows the effectiveness of our AFNet in different challenging conditions. Even though the VisEvent dataset [51] provides only low frame rate annotations, it contains various non-rigid targets that are absent from the FE240hz dataset. Thus, we also compare our AFNet against other state-of-the-art methods on VisEvent. As shown in Figure 3 (b), our AFNet obtains 44.5% and 59.3% in terms of RSR and RPR, respectively, surpassing all previous methods. Table 2 reports the evaluation of various trackers on rigid and non-rigid targets, showing that AFNet outperforms other competing trackers on these two attributes, except the RSR on rigid targets. These results validate that our proposed multi-modality approach remains effective for low frame rate frame-event tracking." }, { "figure_ref": [ "fig_3" ], "heading": "Ablation Study", "publication_ref": [ "b19", "b56" ], "table_ref": [ "tab_4", "tab_4", "tab_6", "tab_7" ], "text": "Impact of Input Modalities. To validate the effectiveness of fusing frame and event modalities, we design comparative experiments based only on a single modality: (i) tracking with low frame rate conventional frames, then linearly interpolating the results to 240Hz; (ii) tracking with aggregated events of 240Hz. As shown in rows A and B of Table 3, when using only the frame or the event modality as input, the performance of the tracker is 16.2%/26.9% and 43.6%/66.9% at PSR/PPR, respectively. These results are significantly worse than our AFNet, which demonstrates the necessity of multi-modality fusion for high frame rate tracking. Influence of Event-guided Cross-modality Alignment (ECA). Our proposed ECA module has two key components: style transformer (ST) and deformable alignment (DA). We thus conduct the following experiments to validate the effectiveness of ECA: (i) without ECA; inside ECA, (ii) without ST (ECA w/o ST); and (iii) without DA (ECA w/o DA). We retrain these three modified models, and the corresponding results are shown in rows C-E of Table 3, respectively. We can see that the proposed ECA module and its components all contribute to the tracking performance of AFNet. When the ST is removed, the PSR and PPR drop significantly by 2.6% and 4.0%, respectively. This illustrates that combining the frame modality's content with the event modality's style plays a key role in multi-modality alignment. The performance drops by 2.9%/4.2% at PSR/PPR when the DA is removed. This drop demonstrates that cross-frame-rate alignment between conventional frames and events indeed decreases target location ambiguity and enhances the discrimination ability of our tracker. 
To further verify cross-modality and cross-framerate alignment capabilities of ECA, we visualize the feature heatmaps of the frame modality prior to and following ECA. As shown in Figure 5, the first example shows a target that is moving upwards. We can see that the frame features shift the attention to the location of the aggregated events by our ECA. The second illustration shows that frame features suffered from the HDR scenario. With our ECA, target lo- cation ambiguity is eliminated. The aligned frame features will be fused with event features to improve the discriminative ability of our tracker further.\nInfluence of Cross-correlation Fusion (CF). We assess the influence of our CF module by replacing it with a concatenation operation in our AFNet. As shown in the row F of Table 3 , the performance drops on PSR and PPR by 2.5% and 3.3% illustrate that a well-designed multi-modality fusion strategy is essential. We further validate the impact of cross-correlation between two modalities by removing the dynamic filter. The results in the row G of Event Representation. We provide ablation on the way events are converted to frames from two perspectives: (i)\nThe frame rate of accumulated event frames. We conduct experiments with different event frame rates on the FE240hz dataset. The results in Table 4 indicate that AFNet performs the best at all six event frame rates. (ii) The starting point of accumulation. We report the performance of accumulating events since the last event frame (a) and since the last intensity frame (b), see Table 5. The results of (a) on the top-3 methods are clearly lower than (b). This is because the accumulation method (a) leads to too sparse event frames, while (b) provides more motion cues for tracking.\nHigh Frame Rate Tracking Based on Interpolation. One question in our mind is whether interpolation on results or conventional frames still yields satisfactory high frame rate tracking results. To answer this question, we conduct two interpolation strategies: (i) We first aggregate events at the frame rate of conventional frames. Then, these aggregated events and frames are utilized for training and testing trackers to predict low frame rate results, which are further linearly interpolated to generate high frame rate bounding boxes. (ii) We employ the video interpolation approach Su-perSloMo [20] on conventional frames to predict high frame rate sequences for evaluation. Take note that the input of the event branch of all multi-modality trackers is replaced with interpolated frames. As shown in Figure 6, the results of interpolating on low frame rate results and on conventional frames are both noticeably inferior to using high frame rate aggregated events. These results demonstrate that designing multi-modality alignment and fusion networks to fully exploit the high temporal resolution of events for achieving high frame rate tracking is a feasible and significant manner.\n(a) since last event frame (b) since last intensity frame DeT [57] " }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "Ideally, the tracking frame rate of our AFNet can reach the measurement rate of an event-based camera. Constrained by the existing annotated rates, we verify the effectiveness of our proposed AFNet on FE240hz at 240Hz and VisEvent at 25Hz. Our current focus is on exploiting multi-modality alignment and fusion schemes for effective and robust high frame rate tracking in various challenging conditions. 
However, we have not developed a lightweight network or a simple regression mechanism to speed up the evaluation of our approach. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a multi-modality architecture for high frame rate single object tracking, which comprises two key components: an event-guided cross-modality alignment (ECA) module and a cross-correlation fusion (CF) module. The newly designed ECA scheme effectively establishes cross-modality and cross-frame-rate alignment between conventional frames and aggregated events at the feature level. After alignment, the CF module focuses on fusing the advantages of both modalities by complementing one modality with information from the other. Extensive experiments and ablation studies demonstrate the effectiveness and robustness of our AFNet in various challenging conditions. The proposed AFNet is the first in a line of work that jointly exploits frame and event modalities for high frame rate object tracking. " } ]
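To make the Events Representation step (Eq. 1-2) in the sections above concrete, here is a minimal sketch of accumulating an asynchronous event slice E_{i→t} into a single 2D event frame. The function name, the DAVIS346-like resolution in the example, and the clipping used when several events land on the same pixel are assumptions for illustration; the record only specifies the per-event (p + 1)/2 × 255 scaling.

```python
import numpy as np

def accumulate_events(events, height, width):
    """Aggregate an asynchronous event stream into one 2D event frame.

    `events` is an (N, 4) array of [x, y, t, p] rows with polarity p in {-1, +1},
    mirroring the E_{i->t} slice described in the Events Representation section.
    Multiple events at the same pixel are summed and then clipped -- an assumption,
    since Eq. (2) is written per event.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, _, p in events:
        frame[int(y), int(x)] += p
    # Map the signed accumulation into [0, 255], following the (p + 1) / 2 * 255 scaling of Eq. (2).
    frame = np.clip(frame, -1.0, 1.0)
    frame = np.floor((frame + 1.0) / 2.0 * 255.0)
    return frame.astype(np.uint8)

# Toy usage with three events on a DAVIS346-like sensor (resolution is an assumption).
toy_events = np.array([
    [10, 20, 0.001, +1],
    [10, 20, 0.002, +1],
    [55, 40, 0.003, -1],
], dtype=np.float32)
event_frame = accumulate_events(toy_events, height=260, width=346)
print(event_frame.shape, event_frame[20, 10], event_frame[40, 55])  # positive -> 255, negative -> 0, empty -> 127
```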
2023-05-25
[ { "authors": "Luca Bertinetto; Jack Valmadre; Stuart Golodetz; Ondrej Miksik; Philip Hs Torr", "journal": "", "ref_id": "b0", "title": "Staple: Complementary learners for real-time tracking", "year": "2016" }, { "authors": "Luca Bertinetto; Jack Valmadre; Joao F Henriques; Andrea Vedaldi; Philip Hs Torr", "journal": "", "ref_id": "b1", "title": "Fully-convolutional siamese networks for object tracking", "year": "2016" }, { "authors": "Goutam Bhat; Martin Danelljan; Luc Van Gool; Radu Timofte", "journal": "", "ref_id": "b2", "title": "Learning discriminative model prediction for tracking", "year": "2019" }, { "authors": "Ross David S Bolme; Bruce A Beveridge; Yui Man Draper; Lui", "journal": "", "ref_id": "b3", "title": "Visual object tracking using adaptive correlation filters", "year": "2010" }, { "authors": "Haosheng Chen; David Suter; Qiangqiang Wu; Hanzi Wang", "journal": "", "ref_id": "b4", "title": "End-to-end learning of object motion estimation from retinal events for event-based object tracking", "year": "2020" }, { "authors": "Haosheng Chen; Qiangqiang Wu; Yanjie Liang; Xinbo Gao; Hanzi Wang", "journal": "", "ref_id": "b5", "title": "Asynchronous tracking-by-detection on adaptive time surfaces for event-based object tracking", "year": "2019" }, { "authors": "Xin Chen; Bin Yan; Jiawen Zhu; Dong Wang; Xiaoyun Yang; Huchuan Lu", "journal": "", "ref_id": "b6", "title": "Transformer tracking", "year": "2021" }, { "authors": "Jifeng Dai; Haozhi Qi; Yuwen Xiong; Yi Li; Guodong Zhang; Han Hu; Yichen Wei", "journal": "", "ref_id": "b7", "title": "Deformable convolutional networks", "year": "2017" }, { "authors": "Martin Danelljan; Goutam Bhat; Fahad Shahbaz Khan; Michael Felsberg", "journal": "", "ref_id": "b8", "title": "Atom: Accurate tracking by overlap maximization", "year": "2019" }, { "authors": "Martin Danelljan; Luc Van Gool; Radu Timofte", "journal": "", "ref_id": "b9", "title": "Probabilistic regression for visual tracking", "year": "2020" }, { "authors": "Jianing Deng; Li Wang; Shiliang Pu; Cheng Zhuo", "journal": "", "ref_id": "b10", "title": "Spatio-temporal deformable convolution for compressed video quality enhancement", "year": "2020" }, { "authors": "Jianchuan Ding; Bo Dong; Felix Heide; Yufei Ding; Yunduo Zhou; Baocai Yin; Xin Yang", "journal": "NeurIPS", "ref_id": "b11", "title": "Biologically inspired dynamic thresholds for spiking neural networks", "year": "2022" }, { "authors": "Liting Heng Fan; Fan Lin; Peng Yang; Ge Chu; Sijia Deng; Hexin Yu; Yong Bai; Chunyuan Xu; Haibin Liao; Ling", "journal": "", "ref_id": "b12", "title": "Lasot: A high-quality benchmark for large-scale single object tracking", "year": "2019" }, { "authors": "Guillermo Gallego; Tobi Delbrück; Garrick Orchard; Chiara Bartolozzi; Brian Taba; Andrea Censi; Stefan Leutenegger; Andrew J Davison; Jörg Conradt; Kostas Daniilidis", "journal": "TPAMI", "ref_id": "b13", "title": "Event-based vision: A survey", "year": "2020" }, { "authors": "Shenyuan Gao; Chunluan Zhou; Chao Ma; Xinggang Wang; Junsong Yuan", "journal": "", "ref_id": "b14", "title": "Aiatrack: Attention in attention for transformer visual tracking", "year": "2022" }, { "authors": "Ankur Handa; Richard A Newcombe; Adrien Angeli; Andrew J Davison", "journal": "", "ref_id": "b15", "title": "Real-time camera tracking: When is high frame-rate best?", "year": "2012" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b16", "title": "Deep residual learning for image recognition", "year": "2016" }, { 
"authors": "Joao F Henriques; Rui Caseiro; Pedro Martins; Jorge Batista", "journal": "", "ref_id": "b17", "title": "Exploiting the circulant structure of tracking-bydetection with kernels", "year": "2012" }, { "authors": "Xun Huang; Serge Belongie", "journal": "", "ref_id": "b18", "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "year": "2017" }, { "authors": "Huaizu Jiang; Deqing Sun; Varun Jampani; Ming-Hsuan Yang; Erik Learned-Miller; Jan Kautz", "journal": "", "ref_id": "b19", "title": "Super slomo: High quality estimation of multiple intermediate frames for video interpolation", "year": "2018" }, { "authors": "Kiani Hamed; Ashton Galoogahi; Chen Fagg; Deva Huang; Simon Ramanan; Lucey", "journal": "", "ref_id": "b20", "title": "Need for speed: A benchmark for higher frame rate object tracking", "year": "2017" }, { "authors": "Hanme Kim; Stefan Leutenegger; Andrew J Davison", "journal": "", "ref_id": "b21", "title": "Real-time 3d reconstruction and 6-dof tracking with an event camera", "year": "2016" }, { "authors": "Adarsh Kowdle; Christoph Rhemann; Sean Fanello; Andrea Tagliasacchi; Jonathan Taylor; Philip Davidson; Mingsong Dou; Kaiwen Guo; Cem Keskin; Sameh Khamis", "journal": "TOG", "ref_id": "b22", "title": "The need 4 speed in real-time dense visual tracking", "year": "2018" }, { "authors": "Kristan Matej", "journal": "ICCVW", "ref_id": "b23", "title": "The visual object tracking vot2017 challenge results", "year": "2017" }, { "authors": "Xavier Lagorce; Garrick Orchard; Francesco Galluppi; Bertram E Shi; Ryad B Benosman", "journal": "TPAMI", "ref_id": "b24", "title": "Hots: a hierarchy of event-based time-surfaces for pattern recognition", "year": "2016" }, { "authors": "Bo Li; Junjie Yan; Wei Wu; Zheng Zhu; Xiaolin Hu", "journal": "", "ref_id": "b25", "title": "High performance visual tracking with siamese region proposal network", "year": "2018" }, { "authors": "Alejandro Linares-Barranco; Francisco Gómez-Rodríguez; Vicente Villanueva; Luca Longinotti; Tobi Delbrück", "journal": "", "ref_id": "b26", "title": "A usb3. 
0 fpga event-based filtering and tracking framework for dynamic vision sensors", "year": "2015" }, { "authors": "Martin Litzenberger; Christoph Posch; D Bauer; Ahmed Nabil Belbachir; P Schon; B Kohn; H Garn", "journal": "DSPW & SPEW", "ref_id": "b27", "title": "Embedded vision system for real-time object tracking using an asynchronous transient vision sensor", "year": "2006" }, { "authors": "Qianhui Liu; Dong Xing; Huajin Tang; De Ma; Gang Pan", "journal": "", "ref_id": "b28", "title": "Event-based action recognition using motion information and spiking neural networks", "year": "2021" }, { "authors": "Yuanyuan Liu; Chengjiang Long; Zhaoxuan Zhang; Bokai Liu; Qiang Zhang; Baocai Yin; Xin Yang", "journal": "IEEE Transactions on Visualization and Computer Graphics", "ref_id": "b29", "title": "Explore contextual information for 3d scene graph generation", "year": "2022" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b30", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b31", "title": "Sgdr: Stochastic gradient descent with warm restarts", "year": "2016" }, { "authors": "Chao Ma; Jia-Bin Huang; Xiaokang Yang; Ming-Hsuan Yang", "journal": "", "ref_id": "b32", "title": "Hierarchical convolutional features for visual tracking", "year": "2015" }, { "authors": "Juan Marín-Vega; Michael Sloth; Peter Schneider-Kamp; Richard Röttger", "journal": "", "ref_id": "b33", "title": "Drhdr: A dual branch residual network for multi-bracket high dynamic range imaging", "year": "2022" }, { "authors": "Christoph Mayer; Martin Danelljan; Goutam Bhat; Matthieu Paul; Danda Pani Paudel; Fisher Yu; Luc Van Gool", "journal": "", "ref_id": "b34", "title": "Transforming model prediction for tracking", "year": "2022" }, { "authors": "Haiyang Mei; Bo Dong; Wen Dong; Jiaxi Yang; Seung-Hwan Baek; Felix Heide; Pieter Peers; Xiaopeng Wei; Xin Yang", "journal": "", "ref_id": "b35", "title": "Glass segmentation using intensity and spectral polarization cues", "year": "2022" }, { "authors": "Haiyang Mei; Ge-Peng Ji; Ziqi Wei; Xin Yang; Xiaopeng Wei; Deng-Ping Fan", "journal": "", "ref_id": "b36", "title": "Camouflaged object segmentation with distraction mining", "year": "2021" }, { "authors": "Matthias Mueller; Neil Smith; Bernard Ghanem", "journal": "", "ref_id": "b37", "title": "A benchmark and simulator for uav tracking", "year": "2016" }, { "authors": "Hyeonseob Nam; Bohyung Han", "journal": "", "ref_id": "b38", "title": "Learning multi-domain convolutional neural networks for visual tracking", "year": "2016" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga", "journal": "NeurIPS", "ref_id": "b39", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Ewa Pi Ątkowska; Ahmed Nabil Belbachir; Stephan Schraml; Margrit Gelautz", "journal": "CVPRW", "ref_id": "b40", "title": "Spatiotemporal multiple persons tracking using dynamic vision sensor", "year": "2012" }, { "authors": "Yu Qiao; Yuhao Liu; Xin Yang; Dongsheng Zhou; Mingliang Xu; Qiang Zhang; Xiaopeng Wei", "journal": "", "ref_id": "b41", "title": "Attention-guided hierarchical structure aggregation for image matting", "year": "2020" }, { "authors": "Yu Qiao; Jincheng Zhu; Chengjiang Long; Zeyao 
Zhang; Yuxin Wang; Zhenjun Du; Xin Yang", "journal": "", "ref_id": "b42", "title": "Cpral: Collaborative panoptic-regional active learning for semantic segmentation", "year": "2022" }, { "authors": "S M Mehdi; Raviteja Sajjadi; Matthew Vemulapalli; Brown", "journal": "", "ref_id": "b43", "title": "Frame-recurrent video super-resolution", "year": "2018" }, { "authors": "Zhihao Shi; Xiaohong Liu; Kangdi Shi; Linhui Dai; Jun Chen", "journal": "TMM", "ref_id": "b44", "title": "Video frame interpolation via generalized deformable convolution", "year": "2021" }, { "authors": "Hubert Shum; Takaaki Komura", "journal": "", "ref_id": "b45", "title": "Tracking the translational and rotational movement of the ball using high-speed camera movies", "year": "2005" }, { "authors": "Xin Tao; Hongyun Gao; Renjie Liao; Jue Wang; Jiaya Jia", "journal": "", "ref_id": "b46", "title": "Detail-revealing deep video super-resolution", "year": "2017" }, { "authors": "Yapeng Tian; Yulun Zhang; Yun Fu; Chenliang Xu", "journal": "", "ref_id": "b47", "title": "Tdan: Temporally-deformable alignment network for video super-resolution", "year": "2020" }, { "authors": "Stepan Tulyakov; Daniel Gehrig; Stamatios Georgoulis; Julius Erbach; Mathias Gehrig; Yuanyou Li; Davide Scaramuzza", "journal": "", "ref_id": "b48", "title": "Time lens: Event-based video frame interpolation", "year": "2021" }, { "authors": "Xintao Wang; Kelvin Ck Chan; Ke Yu; Chao Dong; Chen Change Loy", "journal": "CVPRW", "ref_id": "b49", "title": "Edvr: Video restoration with enhanced deformable convolutional networks", "year": "2019" }, { "authors": "Xiao Wang; Jianing Li; Lin Zhu; Zhipeng Zhang; Zhe Chen; Xin Li; Yaowei Wang; Yonghong Tian; Feng Wu", "journal": "", "ref_id": "b50", "title": "Visevent: Reliable object tracking via collaboration of frame and event flows", "year": "2021" }, { "authors": "Yang Wang; Bo Dong; Ke Xu; Haiyin Piao; Yufei Ding; Baocai Yin; Xin Yang", "journal": "ACM Transactions on Multimedia Computing, Communications and Applications", "ref_id": "b51", "title": "A geometrical approach to evaluate the adversarial robustness of deep neural networks", "year": "2023" }, { "authors": "Yi Wu; Jongwoo Lim; Ming-Hsuan Yang", "journal": "TPAMI", "ref_id": "b52", "title": "Object tracking benchmark", "year": "2015" }, { "authors": "Tianfan Xue; Baian Chen; Jiajun Wu; Donglai Wei; William T Freeman", "journal": "IJCV", "ref_id": "b53", "title": "Video enhancement with task-oriented flow", "year": "2019" }, { "authors": "Bin Yan; Yi Jiang; Peize Sun; Dong Wang; Zehuan Yuan; Ping Luo; Huchuan Lu", "journal": "", "ref_id": "b54", "title": "Towards grand unification of object tracking", "year": "2022" }, { "authors": "Bin Yan; Houwen Peng; Jianlong Fu; Dong Wang; Huchuan Lu", "journal": "ICCV", "ref_id": "b55", "title": "Learning spatio-temporal transformer for visual tracking", "year": "2021" }, { "authors": "Song Yan; Jinyu Yang; Jani Kapyla; Feng Zheng; Ales Leonardis; Joni-Kristian Kamarainen", "journal": "", "ref_id": "b56", "title": "Depthtrack: Unveiling the power of rgbd tracking", "year": "2021" }, { "authors": "Haiwei Zhang; Jiqing Zhang; Bo Dong; Pieter Peers; Wenwei Wu; Xiaopeng Wei; Felix Heide; Xin Yang", "journal": "", "ref_id": "b57", "title": "In the blink of an eye: Event-based emotion recognition", "year": "2023" }, { "authors": "Jiqing Zhang; Bo Dong; Haiwei Zhang; Jianchuan Ding; Felix Heide; Baocai Yin; Xin Yang", "journal": "", "ref_id": "b58", "title": "Spiking transformers for event-based single object tracking", 
"year": "2022" }, { "authors": "Jiqing Zhang; Chengjiang Long; Yuxin Wang; Haiyin Piao; Haiyang Mei; Xin Yang; Baocai Yin", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b59", "title": "A two-stage attentive network for single image super-resolution", "year": "2021" }, { "authors": "Jiqing Zhang; Xin Yang; Yingkai Fu; Xiaopeng Wei; Baocai Yin; Bo Dong", "journal": "", "ref_id": "b60", "title": "Object tracking by jointly exploiting frame and event domain", "year": "2021" }, { "authors": "Pengyu Zhang; Jie Zhao; Dong Wang; Huchuan Lu; Xiang Ruan", "journal": "", "ref_id": "b61", "title": "Visible-thermal uav tracking: A large-scale benchmark and new baseline", "year": "2022" }, { "authors": "He Zheng; Xin Li; Fanglong Liu; Lielin Jiang; Qi Zhang; Fu Li; Qingqing Dang; Dongliang He", "journal": "", "ref_id": "b62", "title": "Adaptive spatialtemporal fusion of multi-objective networks for compressed video perceptual enhancement", "year": "2021" }, { "authors": "Xingyi Zhou; Tianwei Yin; Vladlen Koltun; Philipp Krähenbühl", "journal": "", "ref_id": "b63", "title": "Global tracking transformers", "year": "2022" }, { "authors": "Zikun Zhou; Jianqiu Chen; Wenjie Pei; Kaige Mao; Hongpeng Wang; Zhenyu He", "journal": "", "ref_id": "b64", "title": "Global tracking via ensemble of local trackers", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 98.02, 306.51, 188.34, 8.96 ], "formula_id": "formula_0", "formula_text": "L(x, y, t) -L(x, y, t -∆t) ≥ pC,(1)" }, { "formula_coordinates": [ 3, 50.11, 390.43, 236.25, 34.01 ], "formula_id": "formula_1", "formula_text": "E i→i+1 = {[x k , y k , t k , p k ]} N -1 k=0 contains N events triggered during the interval [i, i + 1]." }, { "formula_coordinates": [ 3, 55.79, 510.54, 226.7, 30.28 ], "formula_id": "formula_2", "formula_text": "g(x, y) = ⌊ p k × δ(x -x k , y -y k , t -t k ) + 1 2 × 255⌋,(2" }, { "formula_coordinates": [ 3, 322.29, 502.34, 222.83, 12.69 ], "formula_id": "formula_3", "formula_text": "F c e = σ(ψ 1 (ψ 1 (R (C,1,1) (F s e ))))F e ,(3)" }, { "formula_coordinates": [ 3, 322.09, 519.25, 223.03, 12.69 ], "formula_id": "formula_4", "formula_text": "F s e = R (1,C,HW ) (F e ) × R (1,HW,1) (S(ψ 1 (F e ))),(4)" }, { "formula_coordinates": [ 3, 380.8, 657.11, 164.31, 12.69 ], "formula_id": "formula_5", "formula_text": "F m f = σ(F c e )F f + F f ,(5)" }, { "formula_coordinates": [ 4, 82.74, 415.86, 203.63, 45.31 ], "formula_id": "formula_6", "formula_text": "F st f = AdaIN(F m f , F c e ) = σ(F c e ) F m f -µ(F m f ) σ(F m f ) + µ(F c e ),(6)" }, { "formula_coordinates": [ 4, 113.98, 612.89, 172.38, 12.69 ], "formula_id": "formula_7", "formula_text": "O = ψ 3 (ψ 1 ([F c e , F st f ])),(7)" }, { "formula_coordinates": [ 4, 126.89, 702.2, 159.47, 12.69 ], "formula_id": "formula_8", "formula_text": "F da f = D(F f , O).(8)" }, { "formula_coordinates": [ 4, 386.67, 637.09, 158.45, 45.63 ], "formula_id": "formula_9", "formula_text": "F e f = F ⊛ K e + F, F = ϑ(ψ 3 (F da f )), K e = ψ 3 (A(F c e )),(9)" }, { "formula_coordinates": [ 5, 106.52, 153.53, 179.85, 12.69 ], "formula_id": "formula_10", "formula_text": "F f u = ψ 1 ([ψ 1 (F e f ), ψ 1 (F f e )]),(10)" }, { "formula_coordinates": [ 5, 131.93, 313.95, 154.43, 9.65 ], "formula_id": "formula_11", "formula_text": "L = βL cls + L bb ,(11)" } ]
Frame-Event Alignment and Fusion Network for High Frame Rate Tracking
Most existing RGB-based trackers target low frame rate benchmarks of around 30 frames per second. This setting restricts the tracker's functionality in the real world, especially for fast motion. Event-based cameras, as bio-inspired sensors, provide considerable potential for high frame rate tracking due to their high temporal resolution, yet they cannot offer fine-grained texture information like conventional cameras. This unique complementarity motivates us to combine conventional frames and events for high frame rate object tracking under various challenging conditions. In this paper, we propose an end-to-end network consisting of multi-modality alignment and fusion modules to effectively combine meaningful information from both modalities at different measurement rates. The alignment module is responsible for cross-style and cross-frame-rate alignment between frame and event modalities under the guidance of the moving cues furnished by events, while the fusion module emphasizes valuable features and suppresses noise through the mutual complementarity of the two modalities. Extensive experiments show that the proposed approach outperforms state-of-the-art trackers by a significant margin in high frame rate tracking. On the FE240hz dataset, our approach achieves high frame rate tracking up to 240Hz.
Jiqing Zhang; Yuanchen Wang; Wenxi Liu; Meng Li; Jinpeng Bai; Baocai Yin; Xin Yang
[ { "figure_caption": "Figure 1 .1Figure 1. A comparison of our AFNet with SOTA trackers. All competing trackers locate the target at time t + ∆t with conventional frames at time t and aggregated events at time t + ∆t as inputs. Our method achieves high frame rate tracking up to 240Hz on the FE240hz dataset. The two examples also show the complementary benefits of both modalities. (a) The event modality does not suffer from HDR, but the frame does; (b) The frame modality provides rich texture information, while the events are sparse.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. (a) Overview of our AFNet; (b) two key components in the event-guided cross-modality alignment (ECA) module: style transformer (ST) and deformable alignment (DA); and (c) the cross-correlation fusion (CF) module.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Results on FE240hz [61] and VisEvent [51] datasets.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visualization of features from the frame modality before (i.e., F f ) and after (i.e., F da f ) alignment by our ECA. F S i and E S t are the frame modality input and event modality input of the search branch, respectively. F f u denotes the final fused feature.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Acknowledgements. This work was supported in part by National Key Research and Development Program of China (2022ZD0210500), the National Natural Science Foundation of China under Grant 61972067/U21A20491/U1908214, the HiSilicon(Shanghai) Technologies Co.,Ltd (No. TC20210510004), and the Distinguished Young Scholars Funding of Dalian (No. 2022RJ01).", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "we can see that AFNet surpasses other approaches in all conditions. 
These results validate the effectiveness of our proposed approach on high frame rate object track-", "figure_data": "MethodsFusion TypeHDR RSR RPR RSR RPR RSR RPR RSR RPR RSR RPR RSR RPR LL FM NM SBM AllATOM [9]EF MF26.6 42.6 44.6 67.2 56.4 83.2 46.7 78.7 28.9 41.9 41.1 62.2 29.1 48.4 52.7 78.1 45.4 68.9 60.8 92.4 40.1 60.3 45.2 68.6DiMP [3]EF MF38.5 61.0 58.1 86.8 53.7 90.4 50.1 86.5 47.8 74.8 48.2 77.2 39.1 61.4 55.4 83.6 59.4 93.0 42.6 76.6 50.4 78.7 48.2 76.1PrDiMP [10]EF MF22.3 32.7 64.0 92.2 53.1 85.0 56.9 91.8 35.0 52.9 46.9 71.8 39.3 64.1 63.0 89.3 60.4 95.7 55.0 92.2 47.9 73.8 51.2 78.3STARKs [56]EF MF42.2 73.1 55.0 90.5 41.6 75.1 26.4 53.0 51.9 84.5 45.5 78.8 44.1 75.7 54.8 90.0 40.7 73.1 25.5 50.5 53.2 85.2 46.2 79.4TransT [7]EF MF47.4 74.2 58.8 84.7 64.4 95.3 43.9 70.5 54.7 84.0 51.8 79.1 49.5 74.7 49.1 73.7 57.4 87.1 28.6 49.3 54.7 83.7 49.3 76.2ToMP [35]EF MF32.0 50.6 61.8 89.5 56.3 79.5 31.1 47.7 43.0 60.9 52.0 76.0 47.7 76.8 56.6 86.4 61.8 94.4 44.8 84.8 55.5 87.3 52.3 83.1DeT [57]-52.5 78.8 57.3 86.7 65.9 96.0 58.2 95.4 56.4 82.5 54.2 81.2HMFT [62]-40.2 67.7 51.4 86.7 52.6 87.7 46.9 82.5 54.9 90.3 49.1 84.6FENet [61]-53.1 83.5 58.2 83.9 62.5 94.7 47.2 72.4 57.8 88.5 55.6 84.3AFNet (Ours)-55.5 84.9 64.7 93.8 66.3 96.4 62.0 98.8 60.1 90.3 58.4 87.0", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Attribute-based RSR/RPR scores(%) on FE240hz [61] dataset against state-of-the-art trackers.", "figure_data": "MethodsFusion Rigid Non-Rigid Type RSR RPR RSR RPR RSR RPR AllATOM [9]EF 45.2 58.1 22.4 30.6 36.8 48.0 MF 47.9 61.1 20.7 27.8 37.9 48.9DiMP [3]EF 49.3 63.6 25.4 36.8 40.5 53.8 MF 50.1 65.5 27.8 39.5 41.9 56.0PrDiMP [10]EF 46.5 65.3 31.0 45.8 40.8 58.2 MF 47.2 60.9 23.6 33.1 38.5 50.7STARKs [56]EF 50.0 63.7 26.7 37.2 41.5 54.0 MF 50.1 64.0 27.4 38.3 41.8 54.6TransT [7]EF 43.1 59.6 25.4 38.5 36.6 51.9 MF 43.9 63.6 26.7 40.3 37.6 55.1ToMP [35]EF 45.2 57.3 20.2 27.7 36.0 46.4 MF 46.7 59.5 23.0 31.8 38.1 49.4DeT [57]-48.9 62.8 33.3 45.5 43.2 56.6HMFT [62]-50.0 64.0 27.2 39.7 41.6 55.1FENet [61]-51.0 65.9 32.3 46.7 44.2 58.9AFNet (Ours)-50.8 66.1 33.4 47.6 44.5 59.3", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "State-of-the-art comparison of rigid and non-rigid targets on the VisEvent[51] dataset.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Figure 4. Qualitative comparison of AFNet against SOTA trackers on the FE240hz dataset [61] under five challenging conditions. All trackers locate the target at time t + ∆t with conventional frames at time t and aggregated events at time t + ∆t as inputs. Ablation study results.", "figure_data": "ToMP-MFDeTHMFTFENetAFNetGTModelsRSR↑ OP 0.50 ↑ OP 0.75 ↑ RPR↑A. Frame Only16.215.83.426.9B. Event Only43.653.418.666.9C. w/o ECA55.169.329.182.4D. ECA w/o ST 55.869.831.283.0E. ECA w/o DA 55.570.030.982.8F. w/o CF55.969.231.583.7G. CF w/o K56.269.531.684.3H. 
Ours58.473.532.687.0", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Event Frame Rate (Hz) 40 80 120 160 200 240DeT [57]49.7 52.2 51.3 53.5 54.3 54.2FENet [61]52.4 54.4 55.6 52.8 54.7 55.6AFNeT56.1 56.5 58.0 57.4 57.9 58.4", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "RSR of various event frame rates on the top-3 trackers.", "figure_data": "", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison of start times for event accumulation.", "figure_data": "FENet [61] AFNet DeT [57] FENet [61] AFNetRSR 49.851.854.954.255.658.4RPR 75.578.483.281.284.387.0", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Figure6. Comparison of whether to interpolate on the top-7 trackers. The blue denotes linearly interpolated performance on low frame rate tracking results; The green is tracking results on high frame rate conventional frames interpolated by SuperSloMo[20]; While red represents the results of utilizing aggregated events that have a higher frame rate than conventional frames. network or a simple regression mechanism to speed up the evaluation of our approach. As shown in Table6, we report the RPR and RSR with respect to the evaluation speed of the four multi-modality approaches on the FE240hz [61] dataset. We can see that, at nearly equal assessment speeds, our AFNet offers the best tracking accuracy. Comparison of accuracy and efficiency of multi-modality approaches on the FE240hz [61] dataset.", "figure_data": "18 1 . 28 4 . 68 4 . 38 7 . 0P P R4 7 . 44 6 . 24 7 . 84 9 . 64 7 . 94 9 . 95 0 . 6", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" } ]
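The RSR/RPR scores tabulated in the figure data above are overlap- and center-distance-based measures. The record does not spell out the exact protocol, so the sketch below assumes the common single-object-tracking convention — success as the area under the overlap-threshold curve and precision as the fraction of frames with center error below 20 pixels; both the threshold sweep and the 20-pixel cutoff are assumptions.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def success_and_precision(pred_boxes, gt_boxes, dist_thresh=20.0):
    """Success: mean of the overlap-threshold curve (AUC-style).
    Precision: fraction of frames whose center error is below `dist_thresh` pixels."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    centers_p = np.array([[b[0] + b[2] / 2, b[1] + b[3] / 2] for b in pred_boxes])
    centers_g = np.array([[b[0] + b[2] / 2, b[1] + b[3] / 2] for b in gt_boxes])
    dists = np.linalg.norm(centers_p - centers_g, axis=1)
    thresholds = np.linspace(0, 1, 21)
    success = np.mean([np.mean(overlaps > t) for t in thresholds])
    precision = np.mean(dists <= dist_thresh)
    return success, precision

# Toy usage on two frames of predicted vs. ground-truth boxes.
pred = [[10, 10, 40, 40], [12, 12, 40, 40]]
gt = [[11, 11, 40, 40], [30, 30, 40, 40]]
print(success_and_precision(pred, gt))
```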
[{"Category": "Methodological Basis", "Citation": "[7, 9,10,15,35,56]", "Explanation": "The cited works provide deep learning-based methods that the citing paper adopts to design and evaluate visual object tracking on benchmarks with a low frame rate of approximately 30 frames per second."}, {"Category": "Data Source", "Citation": "[13,24,38,53]", "Explanation": "The cited works are benchmarks that the citing paper utilizes in its research to evaluate the performance of the deep learning-based methods in visual object tracking."}, {"Category": "Extension or Continuation", "Citation": "[16,[21][22][23]", "Explanation": "The cited works demonstrate the value of high frame rate tracking in the real world, which the citing paper extends by exploring the use of professional high-speed cameras for this purpose."}, {"Category": "Methodological Basis", "Citation": "[49]", "Explanation": "The cited work provides a strategy for high frame rate tracking using professional high-speed cameras, which the citing paper adopts in its research to analyze the position of objects in real-world scenarios."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work highlights the need for analyzing the position of objects in real-world scenarios, which the citing paper addresses by utilizing professional high-speed cameras in its research."}, {"Category": "Data Source", "Citation": "[49]", "Explanation": "The cited work is a consumer device with a camera that the citing paper mentions as a strategy for high frame rate tracking in the real world."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work provides a detailed description of the unique properties of event-based cameras, which serves as the basis for the discussion of the potential benefits of integrating event-based modality with frame-based modality for high frame rate tracking in challenging conditions."}, {"Category": "Methodological Basis", "Citation": "[61]", "Explanation": "The cited work by Zhang et al. introduces a cross-domain attention scheme for fusing visual cues from different modalities, which the citing paper builds upon to improve the single object tracking performance under degraded conditions."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work by Litzenberger et al. provides a cluster-based method for assigning events to clusters based on distance criteria, which the citing paper adopts in their own research for tracking targets."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work by Linares et al. proposes an FPGA-based framework for tracking that the citing paper may have adopted in their research to improve the speed and efficiency of tracking targets."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work on Time Surface is used as a basis for the design of two different event representation algorithms in the citing paper by Chen et al."}, {"Category": "Extension or Continuation", "Citation": "[59]", "Explanation": "The cited work on combining Swin-Transformer and spiking neural network is extended in the citing paper by Zhang et al. to improve event-based tracking performance."}, {"Category": "Data Source", "Citation": "[31]", "Explanation": "The cited work on Swin-Transformer is used as a data source in the cited work by Zhang et al. 
to extract spatial and temporal features for event-based tracking."}, {"Category": "Data Source", "Citation": "[12,29,58]", "Explanation": "The cited works on spiking neural network are used as data sources in the cited work by Zhang et al. to improve event-based tracking performance."}, {"Category": "Extension or Continuation", "Citation": "[61]", "Explanation": "The cited work on attention schemes is extended in the citing paper to balance the contributions of frame and event modalities in target location regression."}, {"Category": "Methodological Basis", "Citation": "[44,47,54]", "Explanation": "The cited works provide a method for performing alignment by estimating the optical flow field between reference and neighboring frames, which the citing paper adopts in their research on video-related tasks."}, {"Category": "Methodological Basis", "Citation": "[11,45,48]", "Explanation": "The cited works introduce a method of implicit motion compensation through deformable convolution, which the citing paper utilizes in their research on video-related tasks."}, {"Category": "Data Source", "Citation": "[48,50]", "Explanation": "The cited works provide data on video super-resolution tasks, which the citing paper uses in their research on video-related tasks."}, {"Category": "Data Source", "Citation": "[11,63]", "Explanation": "The cited works offer data on compressed video quality enhancement tasks, which the citing paper utilizes in their research on video-related tasks."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work by DiMP is used as a basis for the overall architecture of the proposed AFNet, which includes the feature extractor, target classifier, and bbox regressor components."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work on adaptive instance normalization (AdaIN) is adopted in the ST module of the citing paper to adjust the mean and variance of the content input to match those of the style input, contributing to the alignment of frame and event features."}, {"Category": "Methodological Basis", "Citation": "(D [8])", "Explanation": "The cited work introduces the concept of deformable convolution, which the citing paper adopts to align features between the conventional frame and the deformable one in the process of feature extraction."}, {"Category": "Methodological Basis", "Citation": "(3)", "Explanation": "The cited work provides a model initializer and a steepest descent based optimizer for the classifier, which the citing paper adopts in its own research to predict the score map."}, {"Category": "Methodological Basis", "Citation": "(3)", "Explanation": "The regressor in the cited work employs the overlap maximization strategy for accurate bounding box estimation, which the citing paper also adopts in its research."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work, ResNet18, serves as the backbone for feature extraction in the citing paper, providing a methodological basis for the research conducted."}, {"Category": "Data Source", "Citation": "[3,61]", "Explanation": "The loss function defined in the cited works is adopted in the citing paper, indicating a reliance on external data and pre-existing models for the study conducted."}, {"Category": "Supporting Evidence", "Citation": "[32]", "Explanation": "The use of the CosineAnnealingLR strategy in the citing paper is supported by the cited work, providing evidence for the effectiveness of this method in 
adjusting learning rates."}, {"Category": "Data Source", "Citation": "[61]", "Explanation": "The cited work provides the FE240hz dataset, which the citing paper uses for training and evaluation of their event-frame-based tracking method."}, {"Category": "Data Source", "Citation": "[51]", "Explanation": "The cited work provides the VisEvent dataset, which the citing paper uses for training and testing of their event-frame-based tracking method."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The cited work, ATOM, is used as a baseline for comparison in the citing paper to demonstrate the effectiveness of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[3]", "Explanation": "The cited work, DiMP, is used as a baseline for comparison in the citing paper to demonstrate the effectiveness of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[10]", "Explanation": "The cited work, PrDiMP, is used as a baseline for comparison in the citing paper to demonstrate the effectiveness of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[56]", "Explanation": "The cited work, STARKs, is used as a baseline for comparison in the citing paper to demonstrate the effectiveness of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[7]", "Explanation": "The cited work, TransT, is used as a baseline for comparison in the citing paper to demonstrate the effectiveness of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[35]", "Explanation": "The cited work, ToMP, is used as a baseline for comparison in the citing paper to demonstrate the effectiveness of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[57]", "Explanation": "The cited work, DeT, is used as a frame-depth tracker in the citing paper to compare the performance of the proposed method in frame-depth tracking."}, {"Category": "Extension or Continuation", "Citation": "[62]", "Explanation": "The cited work, HMFT, is used as a frame-depth tracker in the citing paper to compare the performance of the proposed method in frame-depth tracking."}, {"Category": "Extension or Continuation", "Citation": "[61]", "Explanation": "The cited work, FENet, is used as a frame-event tracker in the citing paper to compare the performance of the proposed method in frame-event tracking."}, {"Category": "Supporting Evidence", "Citation": "[61]", "Explanation": "The FE240hz dataset is used as a benchmark to evaluate the performance of the proposed AFNet in high frame rate tracking, providing a basis for comparison and validation of the method."}, {"Category": "Supporting Evidence", "Citation": "[57]", "Explanation": "The cited work, DeT, is used as a reference to show that the citing paper is building upon the research in the field of multi-modality tracking by addressing the issue of misalignment between frame and event data at different measurement rates."}, {"Category": "Supporting Evidence", "Citation": "[62]", "Explanation": "The cited work, HMFT, is used to highlight the research on multi-modality tracking and the issue of misalignment between frame and event data at different measurement rates, which the citing paper aims to address."}, {"Category": "Supporting Evidence", "Citation": "[61]", "Explanation": "The cited work, FENet, is used to further support the research on multi-modality tracking and the issue of misalignment between frame and event data at different 
measurement rates, as the citing paper aims to address this problem."}, {"Category": "Supporting Evidence", "Citation": "[51]", "Explanation": "The cited work, VisEvent dataset, is used to provide a new challenge for multi-modality tracking by introducing non-rigid targets that are absent from the FE240hz dataset. This helps the citing paper to compare the effectiveness of their proposed method against other state-of-the-art methods on this new dataset."}, {"Category": "Methodological Basis", "Citation": "(a)", "Explanation": "The cited work provides the accumulation method for event frame generation, which the citing paper adopts in their research to generate sparse event frames."}, {"Category": "Methodological Basis", "Citation": "(b)", "Explanation": "The cited work provides a method for event frame generation that results in more motion cues for tracking, which the citing paper adopts in their research to improve tracking results."}, {"Category": "Extension or Continuation", "Citation": "[20]", "Explanation": "The cited work, Su-perSloMo, is used in the citing paper to conduct video interpolation on conventional frames to predict high frame rate sequences for evaluation, extending the research on high frame rate tracking."}, {"Category": "Methodological Basis", "Citation": "[57]", "Explanation": "The cited work, DeT, is a method for designing multi-modality alignment and fusion networks to fully exploit the high temporal resolution of events for achieving high frame rate tracking, which the citing paper adopts in their research."}]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b46", "b75", "b51", "b37", "b34", "b5", "b4", "b34", "b35" ], "table_ref": [], "text": "Three dimensional object detection is a critical task in many real-world applications, such as self-driving and robot navigation. Early methods [47,76,52] typically rely on LiDAR sensors because they can produce sparse yet accurate 3D point measurements. In contrast, cameras provide dense texture features but lack 3D information. Recently, monocular-based methods [38,50,35,46] for 3D detection, also known as monocular 3D detection, have gained significant attention from both industry and academia due to their cost-effectiveness and deployment-friendly nature.\nRecovering accurate 3D information from a single RGB image poses a challenge. While previous researches have employed geometry constraints [42, 25,6,35] and dense depth estimates [64,36,50] to facilitate 3D reasoning, they often overlook the importance of discriminative and informative 3D features in 3D space, which are essential for effective 3D detection. They mainly focus on improving features in 2D space, with little attention paid to better feature encoding and representation in the frustum and 3D space. Towards this goal, in this paper we propose to learn occupancy in frustum and 3D space, to obtain more discriminative and informative 3D features/representations for monocular 3D detection. Specifically, we employ synchronized raw sparse LiDAR point clouds to generate voxel-based occupancy labels in frustum and 3D space during the training stage. Concerning the sparsity of LiDAR points, we define three occupancy statuses: free, occupied, and unknown. Based on this, we voxelize the 3D space and use ray tracing on each LiDAR point to obtain occupancy labels. With the occupancy labels, we can enforce explicit 3D supervision on intermediate 3D features. It allows the network to learn voxelized occupancy for current 3D space, which enhances the original 3D features. This process is also performed in the frustum space, enabling a more fine-grained manner in extracting three-dimensional features for near objects due to the perspective nature of camera imagery. Overall, we call the proposed occupancy learning method OccupancyM3D, and illustrate the framework overview in Figure 1.\nTo demonstrate the effectiveness of our method, we con-duct experiments on the competitive KITTI and Waymo open datasets. As a result, the proposed method achieves state-of-the-art results with a significant margin over other methods. Our contributions can be summarized as follows:\n• We emphasize the importance of feature encoding and representation in the frustum and 3D space for monocular 3D detection, and we propose to learn occupancy in both space.\n• We propose a method to generate occupancy labels using synchronized raw sparse LiDAR points and introduce corresponding occupancy losses, enabling the network to learn voxelized occupancy in both frustum and 3D space. This occupancy learning process facilitates the extraction of discriminative and informative 3D features in the network.\n• Experiments demonstrate the superiority of the proposed method. Evaluated on challenging KITTI and Waymo open datasets, our method achieves new stateof-the-art (SOTA) results and outperforms other methods by a significant margin." 
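To make the label-generation idea sketched above concrete, the following is a minimal NumPy illustration of how a free/occupied/unknown voxel grid could be built from one sparse LiDAR sweep. Everything here (function and variable names, the fixed-step ray marching, the default step size) is our own illustrative assumption, not the authors' released implementation; the formal label definitions appear in Section 3.2.2.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1  # the three space statuses

def occupancy_labels_3d(points, origin, grid_min, voxel_size, grid_shape, step=0.05):
    """Voxel occupancy labels from one sparse LiDAR sweep.

    points: (N, 3) LiDAR points; origin: (3,) sensor/camera center in the same frame.
    Voxels containing a point become OCCUPIED, voxels crossed by the ray from a
    point back to the origin become FREE, and everything else stays UNKNOWN.
    """
    labels = np.full(grid_shape, UNKNOWN, dtype=np.int8)
    shape = np.array(grid_shape)

    def to_voxel(xyz):
        return np.floor((xyz - grid_min) / voxel_size).astype(int)

    # 1) Occupied voxels: voxelize the raw points.
    idx = to_voxel(points)
    inside = np.all((idx >= 0) & (idx < shape), axis=1)
    pts, idx = points[inside], idx[inside]
    labels[idx[:, 0], idx[:, 1], idx[:, 2]] = OCCUPIED

    # 2) Free voxels: march from each point toward the origin with a fixed step.
    for p in pts:
        ray = origin - p
        dist = np.linalg.norm(ray)
        direction = ray / (dist + 1e-9)
        samples = p + direction * step * np.arange(1, int(dist / step))[:, None]
        s_idx = to_voxel(samples)
        ok = np.all((s_idx >= 0) & (s_idx < shape), axis=1)
        s_idx = s_idx[ok]
        # Never overwrite voxels already marked occupied by another point.
        s_idx = s_idx[labels[s_idx[:, 0], s_idx[:, 1], s_idx[:, 2]] != OCCUPIED]
        labels[s_idx[:, 0], s_idx[:, 1], s_idx[:, 2]] = FREE
    return labels
```

A production version would replace the fixed-step marching with an exact voxel-traversal routine so that no crossed voxel is skipped, but the per-point logic mirrors the ray-tracing idea described above.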
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "LiDAR Based 3D Object Detection", "publication_ref": [ "b64", "b17", "b73", "b47", "b48", "b46", "b55", "b75", "b73", "b12", "b0", "b14", "b6", "b42" ], "table_ref": [], "text": "LiDAR-based methods [65,55,24,69,18,74,66] currently dominate 3D object detection accuracy because of their precise depth measurements. Due to the unordered nature of point clouds, LiDAR-based methods are required to organize the input data. There are four main streams based on the input data representation: point-based, voxelbased, range-view-based, and hybrid-based. PointNet families [48,49] are effective methods for feature extraction from raw point clouds, allowing point-based methods [47,54,56,68] to directly perform 3D detection. Voxelbased methods [76,67,70,74,13] organize point clouds into voxel grids, making them compatible with regular convolutional neural networks. Range-view-based methods [1,15,4] convert point clouds into range-view to accommodate the LiDAR scanning mode. Hybrid-based methods [53,69,7,43] use a combination of different representations to leverage their individual strengths. There is still a significant performance gap between monocular and LiDARbased methods, which encourages researchers to advance monocular 3D detection." }, { "figure_ref": [], "heading": "Monocular 3D Object Detection", "publication_ref": [ "b38", "b4", "b8", "b8", "b36", "b35", "b60", "b10", "b5" ], "table_ref": [], "text": "Significant progress has been made in advancing monocular 3D detection in recent years. The ill-posed problem of recovering instance level 3D information from a single image is challenging and important, attracting many researches. This is also the core sub-problem in monocular 3D detection. Early works [8,42] resort to using scene priors and geometric projections to resolve objects' 3D locations. More recent monocular methods [2, 39,25,34,73,27,29] employ more geometry constraints and extra priors like CAD models to achieve this goal. AutoShape [34] incorporates shape-aware 2D/3D constraints into the 3D detection framework by learning distinguished 2D and 3D keypoints. MonoJSG [29] reformulates the instance depth estimation as a progressive refinement problem and propose a joint semantic and geometric cost volume to model the depth error. As RGB images lack explicit depth information, many works rely on dense depth estimates. Some methods [64,37,36] directly convert depth map to pseudo LiDAR or 3D coordinate patches, and some works [50] use depth distributions to lift 2D image features to 3D space. Therefore, previous well-designed LiDAR 3D detectors can be easily employed on such explicit 3D features. Other researches [14, 61,44,12,11,46] also take advantage of depth maps or LiDAR point clouds as guidance for feature extraction and auxiliary information. While previous works have leveraged geometry constraints and dense depth estimates, they have not fully explored feature encoding and representation in the frustum and 3D space. To address this, our proposed method focuses on learning occupancy for monocular 3D detection." }, { "figure_ref": [], "heading": "3D Scene Representations", "publication_ref": [ "b71", "b50", "b31", "b7", "b25" ], "table_ref": [], "text": "Recent researches [40,41,72] rapidly advance implicit representations. Implicit representations have the advantage of arbitrary-resolution on modeling the 3D scene. 
This nature is beneficial for fine-grained tasks such as 3D reconstruction and semantic segmentation. Different from them, monocular 3D detection is an instance level task, and we explore explicit occupancy learning using fixed-sized voxels. Implicit occupancy representations for this task can be explored in future works, which is an interesting and promising topic. Additionally, many bird's-eye-view (BEV) based works [51,50,19,32,28,26] have been proposed recently. These works commonly employ BEV representations and obtain great success, especially for multi-camera BEV detection. The most related work to ours is CaDDN [50]. We follow its architecture design except for the proposed occupancy learning module, and we replace its 2D backbone with lightweight DLA34 [71]. It should be noted that our work focuses on the monocular setting, and extending the method to the multi-camera setup is a potential avenue for future researches." }, { "figure_ref": [], "heading": "OccupancyM3D", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminary and Overview", "publication_ref": [], "table_ref": [], "text": "Task Definition. We first describe the preliminary of this task and the method. At inference, monocular 3D detec-Figure 2. Network overview. Compared to previous works, our method employs two newly-proposed components for learning occupancy in frustum and 3D space. All network blocks in the proposed parts consist of vanilla 3D convolutions. Please refer to Section 3.1 for detailed feature passing description. For occupancy in frustum and 3D space and their network blocks, please see Section 3.2.1; For occupancy label generation, please see Section 3.2.2; For occupancy losses, please see Section 3.2.3; Best viewed in color with zoom in. tion takes only a single RGB image and outputs interested amodal 3D bounding boxes in the current scene. At the training stage, our method requires RGB images, 3D box labels annotated on LiDAR points and synchronized LiDAR data. It is worth noting that the system has been calibrated, and the camera intrinsics and extrinsics between the camera and LiDAR are available.\nNetwork Overview. We present the network overview of our method in Figure 2. First, a single RGB image is fed into the DLA34 [71] backbone network to extract features. Then, we use these features to produce categorical depth distributions [50], which lifts 2D features to frustum space. After that, we employ the depth predictions and backbone features to generate frustum features. They are used for learning occupancy in frustum space, and then are transformed to voxelized 3D features using grid-sampling. Such voxelized 3D features are employed to study occupancy in 3D space. Occupancy learning in both frustum and 3D spaces can produce reasonable occupancy estimates that enhance the original features. The final enhanced voxelized 3D features are passed to the detection module to obtain final 3D detection results.\nAt the training stage, occupancy estimates are supervised by the generated occupancy labels in frustum and 3D space, respectively, using the proposed occupancy losses. We detail the occupancy learning in following sections." }, { "figure_ref": [], "heading": "Occupancy Learning", "publication_ref": [], "table_ref": [], "text": "We consider a frustum voxel or regular 3D voxel to be occupied if it contains part of an object. 
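To keep the data flow of the overview in mind while reading this section, here is a compact PyTorch-style sketch of the recurring pattern: refine the features with a few 3D convolutions, predict a single-channel occupancy map, squash it with a sigmoid, and re-weight the features element-wise. Layer names, channel sizes, and the toy tensor shapes below are illustrative assumptions on our part (the released architecture differs in detail); the precise formulation follows in Eqs. (1)-(4).

```python
import torch
import torch.nn as nn

class OccupancyGate(nn.Module):
    """Predict occupancy from a feature volume and use it to gate the features."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Feature refinement: two 3D convs + ReLU (analogous to f1 in Eq. (1);
        # the 3D-space counterpart f3 uses an hourglass-like design in the paper).
        self.refine = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Single-channel occupancy head (analogous to f2 / f4 in the text).
        self.occ_head = nn.Conv3d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor):
        # feat: (B, C, D1, D2, D3) -- a frustum (W_F, H_F, D) or regular 3D (X, Y, Z) volume.
        feat = self.refine(feat)
        occ = torch.sigmoid(self.occ_head(feat))  # per-voxel occupancy in [0, 1]
        return occ * feat, occ                    # gated features, occupancy estimate

# Toy usage: gate a small frustum volume; after grid-sampling, the same pattern is
# applied again to the regular 3D volume.
gate = OccupancyGate(channels=32)
frustum_feat = torch.randn(1, 32, 40, 12, 80)     # made-up (B, C, W_F, H_F, D)
gated_frustum, o_fru = gate(frustum_feat)
```

The same pattern is applied twice, once on the frustum volume and once on the regular 3D volume, and the two occupancy outputs are the quantities supervised by the occupancy losses.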
We denote the resulting voxel states as frustum occupancy and 3D occupancy, respectively.\nIn this section, we introduce occupancy learning for monocular 3D detection. It is organized into four parts: occupancy in frustum/3D space, occupancy labels, occupancy losses, and occupancy and depth." }, { "figure_ref": [], "heading": "Occupancy in Frustum Space and 3D Space", "publication_ref": [], "table_ref": [], "text": "After extracting backbone features, we employ a depth head to obtain dense categorical depth [50]. To save GPU memory, we use a convolution layer to reduce the number of feature channels, and the resulting feature is lifted to a frustum feature $\mathrm{Fru}_1 \in \mathbb{R}^{W_F \times H_F \times D \times C}$ with the assistance of depth estimates. Then we extract the frustum feature $\mathrm{Fru}_2 \in \mathbb{R}^{W_F \times H_F \times D \times C}$ as follows:\n$\mathrm{Fru}_2 = f_1(\mathrm{Fru}_1) \quad (1)$\nwhere $f_1$ denotes two 3D convolutions, each followed by a ReLU activation function. Then we use a 3D convolution layer $f_2$ and a sigmoid function to obtain the frustum occupancy $O_{\mathrm{fru}} \in \mathbb{R}^{W_F \times H_F \times D \times 1}$, which is supervised by the corresponding labels $O^{*}_{\mathrm{fru}}$ as described in Section 3.2.2 and Section 3.2.3:\n$O_{\mathrm{fru}} = \mathrm{Sigmoid}(f_2(\mathrm{Fru}_2)) \quad (2)$\nThe frustum occupancy indicates the feature density in the frustum space and thus can inherently be employed to weight the original frustum features, yielding the enhanced frustum feature $\mathrm{Fru}_3 \in \mathbb{R}^{W_F \times H_F \times D \times C}$:\n$\mathrm{Fru}_3 = O_{\mathrm{fru}} \odot \mathrm{Fru}_2 \quad (3)$\nwhere $\odot$ denotes the Hadamard product (element-wise multiplication). The resulting frustum feature $\mathrm{Fru}_3$ is transformed to a regular voxelized feature $V_1 \in \mathbb{R}^{X \times Y \times Z \times C}$ via grid-sampling [50]. The occupancy learning process is then repeated in the regular 3D space:\n$V_2 = f_3(V_1); \quad O_{3d} = \mathrm{Sigmoid}(f_4(V_2)); \quad V_3 = O_{3d} \odot V_2 \quad (4)$\nTo better encode 3D features in the regular 3D space, we use a 3D hourglass-like design [5] in $f_3$, and $f_4$ is a 3D convolution. Finally, we obtain the more informative 3D voxel feature $V_3 \in \mathbb{R}^{X \times Y \times Z \times C}$ for the detection module. Please refer to the supplementary material for the detailed network architecture and its ablations. What is the rationale behind learning occupancy in both frustum and 3D space? Occupancy learning in both frustum and 3D space is beneficial because the two spaces have different natures. Frustum space has a resolution that depends on the camera intrinsics and the downsample factor of the backbone network, while voxelized 3D space has a resolution that is decided by the pre-defined voxel size and detection range. Frustum voxels are irregular and vary in size based on the distance to the camera, which results in fine-grained voxels for objects that are closer and coarse-grained voxels for objects that are far away. In contrast, regular 3D voxels have the same size throughout the 3D space. On the other hand, frustum space better fits camera imagery, but objects in frustum space cannot precisely represent the real 3D geometry, so feature extraction and occupancy in frustum space are distorted for objects/scenes. Therefore, occupancy learning in both frustum and 3D space is complementary, and results in more informative representations and features." }, { "figure_ref": [ "fig_1" ], "heading": "Occupancy Labels", "publication_ref": [], "table_ref": [], "text": "Given a set of sparse LiDAR points $P \in \mathbb{R}^{N \times 3}$, where $N$ is the number of points and 3 is the coordinate dimension $(X, Y, Z)$, we generate the corresponding occupancy labels. The process is illustrated in Figure 3 and is operated on every LiDAR point. 
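As an informal preview of the frustum-space labels (using the free = 0 / occupied = 1 / unknown = -1 encoding introduced just below), one plausible NumPy version of the per-point procedure in Figure 3 is sketched here. The projection, downsampling, and tie-handling details are simplified assumptions rather than the authors' exact code; the formal definitions follow.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frustum_occupancy_labels(uv, depth_bin, feat_w, feat_h, num_bins, stride=4):
    """Frustum occupancy labels from LiDAR points projected onto the image plane.

    uv:        (N, 2) pixel coordinates of the projected LiDAR points.
    depth_bin: (N,)   categorical depth-bin index of each point in [0, num_bins).
    stride:    downsampling factor from image resolution to feature resolution.
    """
    # Category-depth index map at feature resolution; -1 marks cells without points.
    # (When several points fall into one cell, the last one written wins here;
    #  a careful implementation would keep the nearest point instead.)
    ind = np.full((feat_h, feat_w), -1, dtype=np.int64)
    cols = (uv[:, 0] / stride).astype(int)
    rows = (uv[:, 1] / stride).astype(int)
    ok = (cols >= 0) & (cols < feat_w) & (rows >= 0) & (rows < feat_h)
    ind[rows[ok], cols[ok]] = depth_bin[ok]

    # Along each viewing ray: FREE in front of the hit bin, OCCUPIED at the hit bin,
    # UNKNOWN behind it and on columns with no LiDAR return.
    labels = np.full((feat_h, feat_w, num_bins), UNKNOWN, dtype=np.int8)
    d = np.arange(num_bins)[None, None, :]     # (1, 1, D)
    hit = ind[:, :, None]                      # (H, W, 1)
    valid = hit > -1
    labels[valid & (d < hit)] = FREE
    labels[valid & (d == hit)] = OCCUPIED
    return labels
```

This mirrors Eq. (5) below: occupied at the observed depth bin, free in front of it, and unknown behind it or where no LiDAR return exists.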
More formally, we first define three space statuses and represent them with numbers: free: 0, occupied: 1, unknown: -1. We then describe the occupancy label generation process in the frustum and 3D space, respectively.\nOccupancy label in frustum space: Let us denote the frustum occupancy label as $O^{*}_{\mathrm{fru}} \in \mathbb{R}^{W_F \times H_F \times D}$, where $W_F$ and $H_F$ are the feature resolution and $D$ is the number of depth categories. We first project LiDAR points onto the image plane to form a categorical depth index map. Each valid projected point has a categorical depth index, while positions with no projected LiDAR point are given a negative index of -1. This index map is then downsampled to fit the feature resolution, resulting in $\mathrm{Ind} \in \mathbb{R}^{W_F \times H_F}$. Benefiting from the nature of camera projection, we can easily distinguish the space statuses as follows:\n$O^{*}_{\mathrm{fru},\,i,j,d} = \begin{cases} 1 & \text{if } \mathrm{Ind}_{i,j} > -1 \text{ and } d = \mathrm{Ind}_{i,j}, \\ 0 & \text{if } \mathrm{Ind}_{i,j} > -1 \text{ and } d < \mathrm{Ind}_{i,j}, \\ -1 & \text{otherwise}, \end{cases} \quad (5)$\nwhere $i, j, d$ range over $W_F, H_F, D$, respectively. Note that we do not consider unknown voxels in either the occupancy labels or the occupancy losses. We use the known voxels, i.e., the free and occupied voxels, to perform occupancy learning.\nOccupancy label in 3D space: We denote $O^{*}_{3d} \in \mathbb{R}^{X \times Y \times Z}$ as the 3D occupancy label, where $X, Y, Z$ are determined by the pre-defined voxel size and detection range. We voxelize LiDAR points within the grid and set the voxels containing points to 1 and those without points to -1. In this way, occupied voxels are easily obtained. To obtain the free voxels, we utilize ray tracing from each LiDAR point to the camera, where intersected voxels are set as free, i.e., filled with 0. We summarize the occupancy label in 3D space as follows:\n$O^{*}_{3d,\,x,y,z} = \begin{cases} 1 & \text{if } \mathrm{Vol}_{3d,\,x,y,z} > 0, \\ 0 & \text{if } R(O^{*}_{3d,\,x,y,z}) \cap \mathrm{Ray}_{\mathrm{point} \to \mathrm{cam}} \neq \varnothing, \\ -1 & \text{otherwise}, \end{cases} \quad (6)$\nwhere $x, y, z$ range over $X, Y, Z$. In this equation, $\mathrm{Vol}_{3d}$ denotes the voxelized grid, and $\mathrm{Vol}_{3d,\,x,y,z} > 0$ when the voxel is occupied by LiDAR points. $R(\cdot)$ refers to the voxel range, and $R(O^{*}_{3d,\,x,y,z}) \cap \mathrm{Ray}_{\mathrm{point} \to \mathrm{cam}} \neq \varnothing$ denotes that the voxel at index $x, y, z$ intersects a ray from a LiDAR point to the camera. In this way, 3D occupancy labels are generated.\nWhen generating voxel-based occupancy labels, there is a quantization error that arises due to the discretization process. A smaller voxel size results in lower quantization error, providing more fine-grained and accurate information. However, it requires more computation and GPU memory resources." }, { "figure_ref": [], "heading": "Occupancy Losses", "publication_ref": [ "b29" ], "table_ref": [], "text": "We use the generated occupancy labels $O^{*}_{\mathrm{fru}}$ and $O^{*}_{3d}$ to supervise the predicted occupancy $O_{\mathrm{fru}}$ and $O_{3d}$, respectively. We regard occupancy prediction as a simple classification problem and use the focal loss [30] as the classification loss. Only valid voxels, i.e., free and occupied voxels, contribute to the loss, and unknown voxels are ignored. We first obtain valid masks $M_{\mathrm{fru}} \in \mathbb{R}^{W_F \times H_F \times D}$ and $M_{3d} \in \mathbb{R}^{X \times Y \times Z}$, where $M_{\mathrm{fru}}$ is true if $O^{*}_{\mathrm{fru}} > -1$ and false otherwise; $M_{3d}$ is obtained in the same way.\nTherefore, the occupancy loss in frustum space is:\n$L_{\mathrm{fru}} = \mathrm{FL}(O^{*}_{\mathrm{fru}}[M_{\mathrm{fru}}],\, O_{\mathrm{fru}}[M_{\mathrm{fru}}]) \quad (7)$\nwhere $\mathrm{FL}(\cdot)$ refers to the focal loss. Similarly, we obtain the 3D occupancy loss:\n$L_{3d} = \mathrm{FL}(O^{*}_{3d}[M_{3d}],\, O_{3d}[M_{3d}]) \quad (8)$\nThe final occupancy loss is their sum:\n$L_{\mathrm{occupancy}} = L_{\mathrm{fru}} + L_{3d} \quad (9)$\nThe occupancy loss allows the network to learn informative and discriminative features and representations, thus benefiting downstream tasks. 
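Turning Eqs. (7)-(9) into code is mostly a matter of masking out the unknown voxels before applying the focal loss. The sketch below uses our own small binary-focal-loss helper with the usual alpha/gamma form and mean reduction; the helper, tensor names, and reduction choice are illustrative assumptions, not the authors' training code.

```python
import torch

def binary_focal_loss(p, target, alpha=0.25, gamma=2.0, eps=1e-6):
    """Binary focal loss on probabilities p in (0, 1) with hard 0/1 targets."""
    p = p.clamp(eps, 1.0 - eps)
    pt = torch.where(target > 0.5, p, 1.0 - p)
    alpha_t = torch.where(target > 0.5,
                          torch.full_like(p, alpha),
                          torch.full_like(p, 1.0 - alpha))
    return (-alpha_t * (1.0 - pt) ** gamma * pt.log()).mean()

def occupancy_loss(o_fru, o_fru_gt, o_3d, o_3d_gt):
    """L_occupancy = L_fru + L_3d, computed only on known (free/occupied) voxels.

    o_fru, o_3d:       predicted occupancy probabilities (after the sigmoid),
                       shaped like their labels.
    o_fru_gt, o_3d_gt: labels with values in {1: occupied, 0: free, -1: unknown}.
    """
    m_fru = o_fru_gt > -1                                    # valid mask M_fru
    m_3d = o_3d_gt > -1                                      # valid mask M_3d
    l_fru = binary_focal_loss(o_fru[m_fru], o_fru_gt[m_fru].float())
    l_3d = binary_focal_loss(o_3d[m_3d], o_3d_gt[m_3d].float())
    return l_fru + l_3d
```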
Therefore, the final loss of the network is:\nL = L org + λL occupancy(10)\nwhere L org denotes the original detection and depth losses in CaDDN [50] and λ is the occupancy loss weighting factor, which is set to 1 by default." }, { "figure_ref": [], "heading": "Occupancy and Depth", "publication_ref": [], "table_ref": [], "text": "Occupancy has some similarities with 2D depth map, especially the frustum occupancy. They both can represent object geometry surface in the space. However, depth map is two-dimensional while occupancy is three-dimensional.\nOccupancy is beyond the depth and can base on it. It is able to express dense features of objects but not only the surface. For unknown space due to occlusion, the occupancy can infer reasonable results. Moreover, learning occupancy in frustum and 3D space allows the network to study more informative features under a higher dimension compared to 2D space. Occupancy and depth are not mutually exclusive representations. In fact, they complement each other in the 3D object detection task. Without depth, the network has to deal with a large search space, making it challenging to learn reasonable occupancy features. Incorporating depth estimation provides the network with a good starting point and facilitates learning the occupancy features. Therefore, it is recommended to utilize both depth and occupancy information to achieve better representations and features for monocular 3D detection." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b44", "b34" ], "table_ref": [], "text": "We employ PyTorch [45] for implementation. The network is trained on 4 NVIDIA 3080Ti (12G) GPUs, with a total batch size of 8 for 80 epochs. We use Adam [21] optimizer with initial learning rate 0.001 and employ the one-cycle learning rate policy [59]. We use pre-trained DLA34 [71] backbone from [44]. We employ flip and crop data augmentation [35]. For KITTI [16], we fix the input image to 1280 × 384, detection range [2, 46.8] × [-30.08, 30.08] × [-3.0, 1.0](meter) for x, y, z axes under the LiDAR coordinate system, respectively. We use voxel size [0.16, 0.16, 0.16](meter). For Waymo [60], we downsample the input RGB image from 1920 × 1280 to 960 × 640 to meet GPU memory. We use detection range [2, 59.6] × [-25.6, 25.6] × [-2.0, 2.0](meter) for x, y, z axes due to the larger depth domain on Waymo. We use voxel size [0.16, 0.16, 0.16](meter). Due to the space limitation, more network details and experimental results are provided in the supplementary material." }, { "figure_ref": [], "heading": "Datasets and Metrics", "publication_ref": [], "table_ref": [], "text": "Following the fashion in previous works, we conduct , 46], we use the front camera of the multi-camera rig and provide performance comparison on val set for the vehicle category. To make fair comparisons, we use one third samples of training sequences to train the network due to the large-scale and high frame rate of this dataset. Waymo divides objects to LEVEL 1 and LEVEL 2 according to the LiDAR point number within objects. For metrics, we employ the official mAP and mAPH under LEVEL 1 and LEVEL 2." }, { "figure_ref": [], "heading": "Results on KITTI and Waymo Datasets", "publication_ref": [ "b5", "b62", "b5" ], "table_ref": [ "tab_0", "tab_2", "tab_1" ], "text": "We provide the performance comparisons on KITTI and WaymoOD. 
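As a quick back-of-the-envelope check on the implementation settings quoted above (our own arithmetic, not a number reported by the paper), the KITTI detection range and the 0.16 m voxels imply a regular 3D grid of roughly 280 x 376 x 25 voxels:

```python
# Voxel-grid resolution implied by the KITTI settings above (x, y, z in the LiDAR frame).
voxel = 0.16
ranges = {"x": (2.0, 46.8), "y": (-30.08, 30.08), "z": (-3.0, 1.0)}
dims = {axis: round((hi - lo) / voxel) for axis, (lo, hi) in ranges.items()}
print(dims)  # {'x': 280, 'y': 376, 'z': 25}
```

Halving the voxel size would multiply the voxel count by eight, which makes concrete the accuracy/computation trade-off discussed in the Occupancy Labels and Limitation sections.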
Table 1 shows the results for the Car category on the KITTI test set. Our method outperforms other methods, including video-based methods, by a large margin. For example, the proposed method exceeds CaDDN [50] under all metrics, e.g., 25.55/17.02/14.79 vs. 19.17/13.41/11.46 AP 3D . Our method outperforms DID-M3D [46] by a margin of 2.43/1.42/1.54 AP BEV . When compared to the recent video-based method DfM [63], OccupancyM3D also shows better performance, e.g., 24.18 vs. 22.89 AP BEV under the moderate setting. In Table 3, we provide comparisons on other categories, namely Pedestrian and Cyclist. The results demonstrate the superiority of our method on different categories. To sum up, our method achieves new state-of-the-art results on the KITTI test set for monocular 3D detection.\nWe also evaluate our method on the Waymo open dataset (WaymoOD) [60] and obtain promising results. As shown in Table 2, our method surpasses other methods by a significant margin. For example, under the LEVEL 1 setting, OccupancyM3D outperforms CaDDN [50] by 5.58/5.54 mAP/mAPH (10.61/10.53 vs. 5.03/4.99) and 11.55/11.35 mAP/mAPH (28.99/28.66 vs. 17.54/17.31) with IoU 0.7 and 0.5 criterions, respectively. Compared to DID-M3D [46], under IoU criterion 0.5, our method outperforms it by 8.33/8.19 mAP/mAPH (28.99/28.66 vs. 20.66/20.47) and 7.84/7.71 mAP/mAPH (27.21/26.90 vs. 19.37/19.19) with LEVEL 1 and 2 settings, respectively. This success can be attributed to the fact that occupancy learning benefits from the diverse scenes present in large datasets. In other words, large datasets especially favor the proposed occupancy learning method. Interestingly, concerning objects within [50m, ∞], our method performs worse than DID-M3D [46]. This is because our method is voxel-based, which has a detection range limitation ([2, 59.6] meters in our method). By contrast, DID-M3D is a perspective-based method, indicating that it does not have this limitation and can detect more faraway objects. We encourage future works to address this range limitation in our method.\nFigure 4. Qualitative results of occupancy predictions and 3D detections on KITTI val set. In 3D detections, Red boxes are our results and Green boxes denote ground-truths. The LiDAR point clouds in 3D detections are used only for visualization. We can see that the proposed method generates reasonable occupancy predictions for the current scene, which benefits the downstream monocular 3D detection task. However, our method may fail to estimate heavily occluded objects (see the right objects of the bottom picture). More qualitative results and discussions are provided in the supplementary material. Best viewed in color with zoom in." }, { "figure_ref": [], "heading": "Ablations", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Following common practice in previous works, we perform ablations on the KITTI val set to validate the effectiveness of each component. We compare the performance on the Car category under the IoU criterion of 0.7. Due to the space limitation, we provide only the main ablation in the main text; more detailed ablations are provided in the supplementary material.\nWe provide the main ablation in Table 4. It can easily be seen that occupancy learning significantly benefits the final detection performance. When enforcing occupancy learning in frustum space, the detection AP 3D increases from 21.04/17.05/15.01 to 24.69/17.79/15.16 (Exp. (a)→(b)). On the other hand, when enforcing occupancy learning in 3D space, the detection AP 3D is boosted to 24.64/18.88/16.38 (Exp. (c)). Finally, the model obtains 5.83/2.91/2.14 AP 3D gains (Exp. (a)→(d)) by employing occupancy learning in both frustum and 3D space. This main ablation demonstrates the effectiveness of our method."
}, { "figure_ref": [], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "We present qualitative results of occupancy predictions and 3D detections in Figure 4. Our method can predict reasonable occupancy for the current scene, especially for foreground objects. This indicates the potential of occupancy learning in downstream tasks. Nevertheless, we can see that the occupancy estimates are not very accurate for heavily occluded objects (see right objects of the bottom picture), which leaves room for improvement in future works. Concerning the space limitation, more qualitative results are included in the supplementary material, with providing more detailed discussions." }, { "figure_ref": [], "heading": "Limitation and Future Work", "publication_ref": [], "table_ref": [], "text": "One significant drawback of this work is the voxel size limitation. Large voxels in explicit voxel-based representation can reduce computation overhead and GPU memory, but at the cost of failing to precisely describe the 3D geometry of the scene due to quantization errors. Conversely, smaller voxel sizes are able to express fine-grained 3D geometry but come at the significant expense of increased computation overhead and GPU memory usage. On the other hand, the voxel-based method has limited detection ranges. This work mainly focuses on occupancy learning in the monocular 3D detection task, and the exploration of its application in more downstream tasks such as multi-camera detection and segmentation is less explored. We believe that it is an interesting and promising topic and encourage future works to alleviate the above limitations to advance the selfdriving community." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose to learn occupancy for monocular 3D detection, to obtain more discriminative and informative 3D features. To perform occupancy learning, we design occupancy labels by using synchronized raw sparse LiDAR point clouds and introduce corresponding occupancy losses. Ablations verify the effectiveness of each proposed component. To the best of our knowledge, this is the first work that introduces occupancy learning to monocular 3D detection. We conduct experiments on the challenging KITTI and Waymo open datasets. The results demonstrate that the proposed method achieves new state-of-the-art results and outperforms other methods by a large margin." } ]
2023-05-25
[ { "authors": "Alex Bewley; Pei Sun; Thomas Mensink; Dragomir Anguelov; Cristian Sminchisescu", "journal": "", "ref_id": "b0", "title": "Range conditioned dilated convolutions for scale invariant 3d object detection", "year": "2020" }, { "authors": "Garrick Brazil; Xiaoming Liu", "journal": "", "ref_id": "b1", "title": "M3d-rpn: Monocular 3d region proposal network for object detection", "year": "2019" }, { "authors": "Garrick Brazil; Gerard Pons-Moll; Xiaoming Liu; Bernt Schiele", "journal": "Springer", "ref_id": "b2", "title": "Kinematic 3d object detection in monocular video", "year": "2020" }, { "authors": "Yuning Chai; Pei Sun; Jiquan Ngiam; Weiyue Wang; Benjamin Caine; Vijay Vasudevan; Xiao Zhang; Dragomir Anguelov", "journal": "", "ref_id": "b3", "title": "To the point: Efficient 3d object detection in the range image with graph convolution kernels", "year": "2021" }, { "authors": "Jia-Ren Chang; Yong-Sheng Chen", "journal": "", "ref_id": "b4", "title": "Pyramid stereo matching network", "year": "2018" }, { "authors": "Hansheng Chen; Yuyao Huang; Wei Tian; Zhong Gao; Lu Xiong", "journal": "", "ref_id": "b5", "title": "Monorun: Monocular 3d object detection by reconstruction and uncertainty propagation", "year": "2021" }, { "authors": "Qi Chen; Lin Sun; Ernest Cheung; Alan L Yuille", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b6", "title": "Every view counts: Cross-view consistency in 3d object detection with hybrid-cylindrical-spherical voxelization", "year": "2020" }, { "authors": "Xiaozhi Chen; Kaustav Kundu; Ziyu Zhang; Huimin Ma; Sanja Fidler; Raquel Urtasun", "journal": "", "ref_id": "b7", "title": "Monocular 3d object detection for autonomous driving", "year": "2016" }, { "authors": "Xiaozhi Chen; Kaustav Kundu; Yukun Zhu; Huimin Ma; Sanja Fidler; Raquel Urtasun", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b8", "title": "3d object proposals using stereo imagery for accurate object class detection", "year": "2017" }, { "authors": "Yongjian Chen; Lei Tai; Kai Sun; Mingyang Li", "journal": "", "ref_id": "b9", "title": "Monopair: Monocular 3d object detection using pairwise spatial relationships", "year": "2020" }, { "authors": "Yi-Nan Chen; Hang Dai; Yong Ding", "journal": "", "ref_id": "b10", "title": "Pseudo-stereo for monocular 3d object detection in autonomous driving", "year": "2022" }, { "authors": "Zhiyu Chong; Xinzhu Ma; Hong Zhang; Yuxin Yue; Haojie Li; Zhihui Wang; Wanli Ouyang", "journal": "", "ref_id": "b11", "title": "Monodistill: Learning spatial features for monocular 3d object detection", "year": "2022" }, { "authors": "Jiajun Deng; Shaoshuai Shi; Peiwei Li; Wengang Zhou; Yanyong Zhang; Houqiang Li", "journal": "", "ref_id": "b12", "title": "Voxel r-cnn: Towards high performance voxel-based 3d object detection", "year": "2021" }, { "authors": "Mingyu Ding; Yuqi Huo; Hongwei Yi; Zhe Wang; Jianping Shi; Zhiwu Lu; Ping Luo", "journal": "", "ref_id": "b13", "title": "Learning depth-guided convolutions for monocular 3d object detection", "year": "2020" }, { "authors": "Lue Fan; Xuan Xiong; Feng Wang; Naiyan Wang; Zhaoxiang Zhang", "journal": "", "ref_id": "b14", "title": "Rangedet: In defense of range view for lidar-based 3d object detection", "year": "2021" }, { "authors": "Andreas Geiger; Philip Lenz; Raquel Urtasun", "journal": "IEEE", "ref_id": "b15", "title": "Are we ready for autonomous driving? 
the kitti vision benchmark suite", "year": "2012" }, { "authors": "Jiaqi Gu; Bojian Wu; Lubin Fan; Jianqiang Huang; Shen Cao; Zhiyu Xiang; Xian-Sheng Hua", "journal": "", "ref_id": "b16", "title": "Homography loss for monocular 3d object detection", "year": "2022" }, { "authors": "Chenhang He; Hui Zeng; Jianqiang Huang; Xian-Sheng Hua; Lei Zhang", "journal": "", "ref_id": "b17", "title": "Structure aware single-stage 3d object detection from point cloud", "year": "2020" }, { "authors": "Junjie Huang; Guan Huang; Zheng Zhu; Dalong Du", "journal": "", "ref_id": "b18", "title": "Bevdet: High-performance multi-camera 3d object detection in bird-eye-view", "year": "2021" }, { "authors": "Kuan-Chih Huang; Tsung-Han Wu; Hung-Ting Su; Winston H Hsu", "journal": "", "ref_id": "b19", "title": "Monodtr: Monocular 3d object detection with depth-aware transformer", "year": "2022" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b20", "title": "Adam: A method for stochastic optimization", "year": "" }, { "authors": "Abhinav Kumar; Garrick Brazil; Enrique Corona; Armin Parchami; Xiaoming Liu", "journal": "", "ref_id": "b21", "title": "Deviant: Depth equivariant network for monocular 3d object detection", "year": "2022" }, { "authors": "Abhinav Kumar; Garrick Brazil; Xiaoming Liu", "journal": "", "ref_id": "b22", "title": "Groomed-nms: Grouped mathematically differentiable nms for monocular 3d object detection", "year": "2021" }, { "authors": "Alex H Lang; Sourabh Vora; Holger Caesar; Lubing Zhou; Jiong Yang; Oscar Beijbom", "journal": "", "ref_id": "b23", "title": "PointPillars: Fast encoders for object detection from point clouds", "year": "2019" }, { "authors": "Peixuan Li; Huaici Zhao; Pengfei Liu; Feidao Cao", "journal": "", "ref_id": "b24", "title": "Rtm3d: Real-time monocular 3d detection from object keypoints for autonomous driving", "year": "2020" }, { "authors": "Yinhao Li; Zheng Ge; Guanyi Yu; Jinrong Yang; Zengran Wang; Yukang Shi; Jianjian Sun; Zeming Li", "journal": "", "ref_id": "b25", "title": "Bevdepth: Acquisition of reliable depth for multi-view 3d object detection", "year": "2022" }, { "authors": "Zhuoling Li; Zhan Qu; Yang Zhou; Jianzhuang Liu; Haoqian Wang; Lihui Jiang", "journal": "", "ref_id": "b26", "title": "Diversity matters: Fully exploiting depth clues for reliable monocular 3d object detection", "year": "2022" }, { "authors": "Zhiqi Li; Wenhai Wang; Hongyang Li; Enze Xie; Chonghao Sima; Tong Lu; Yu Qiao; Jifeng Dai", "journal": "Springer", "ref_id": "b27", "title": "Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers", "year": "2022" }, { "authors": "Qing Lian; Peiliang Li; Xiaozhi Chen", "journal": "", "ref_id": "b28", "title": "Monojsg: Joint semantic and geometric cost volume for monocular 3d object detection", "year": "2022" }, { "authors": "Tsung-Yi Lin; Priya Goyal; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b29", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Xianpeng Liu; Nan Xue; Tianfu Wu", "journal": "", "ref_id": "b30", "title": "Learning auxiliary monocular contexts helps monocular 3d object detection", "year": "2021" }, { "authors": "Yingfei Liu; Tiancai Wang; Xiangyu Zhang; Jian Sun", "journal": "Springer", "ref_id": "b31", "title": "Petr: Position embedding transformation for multi-view 3d object detection", "year": "2022" }, { "authors": "Yuxuan Liu; Yuan Yixuan; Ming Liu", "journal": "IEEE Robotics and Automation 
Letters", "ref_id": "b32", "title": "Ground-aware monocular 3d object detection for autonomous driving", "year": "2021" }, { "authors": "Zongdai Liu; Dingfu Zhou; Feixiang Lu; Jin Fang; Liangjun Zhang", "journal": "", "ref_id": "b33", "title": "Autoshape: Real-time shape-aware monocular 3d object detection", "year": "2021" }, { "authors": "Yan Lu; Xinzhu Ma; Lei Yang; Tianzhu Zhang; Yating Liu; Qi Chu; Junjie Yan; Wanli Ouyang", "journal": "", "ref_id": "b34", "title": "Geometry uncertainty projection network for monocular 3d object detection", "year": "2021" }, { "authors": "Xinzhu Ma; Shinan Liu; Zhiyi Xia; Hongwen Zhang; Xingyu Zeng; Wanli Ouyang", "journal": "", "ref_id": "b35", "title": "Rethinking pseudo-lidar representation", "year": "2020" }, { "authors": "Xinzhu Ma; Zhihui Wang; Haojie Li; Pengbo Zhang; Wanli Ouyang; Xin Fan", "journal": "", "ref_id": "b36", "title": "Accurate monocular 3d object detection via color-embedded 3d reconstruction for autonomous driving", "year": "2019" }, { "authors": "Xinzhu Ma; Yinmin Zhang; Dan Xu; Dongzhan Zhou; Shuai Yi; Haojie Li; Wanli Ouyang", "journal": "", "ref_id": "b37", "title": "Delving into localization errors for monocular 3d object detection", "year": "2021" }, { "authors": "Fabian Manhardt; Wadim Kehl; Adrien Gaidon", "journal": "", "ref_id": "b38", "title": "Roi-10d: Monocular lifting of 2d detection to 6d pose and metric shape", "year": "2019" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b39", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b40", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "Arsalan Mousavian; Dragomir Anguelov; John Flynn; Jana Kosecka", "journal": "", "ref_id": "b41", "title": "3d bounding box estimation using deep learning and geometry", "year": "2017" }, { "authors": "Jongyoun Noh; Sanghoon Lee; Bumsub Ham", "journal": "", "ref_id": "b42", "title": "Hvpr: Hybrid voxel-point representation for single-stage 3d object detection", "year": "2021" }, { "authors": "Dennis Park; Rares Ambrus; Vitor Guizilini; Jie Li; Adrien Gaidon", "journal": "", "ref_id": "b43", "title": "Is pseudo-lidar needed for monocular 3d object detection", "year": "2021" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga", "journal": "", "ref_id": "b44", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Liang Peng; Xiaopei Wu; Zheng Yang; Haifeng Liu; Deng Cai", "journal": "", "ref_id": "b45", "title": "Did-m3d: Decoupling instance depth for monocular 3d object detection", "year": "2022" }, { "authors": "Wei Charles R Qi; Chenxia Liu; Hao Wu; Leonidas J Su; Guibas", "journal": "", "ref_id": "b46", "title": "Frustum pointnets for 3d object detection from rgbd data", "year": "2018" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b47", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "Li Charles R Qi; Hao Yi; Leonidas J Su; Guibas", "journal": "", "ref_id": "b48", "title": "Point-net++: Deep 
hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Cody Reading; Ali Harakeh; Julia Chae; Steven L Waslander", "journal": "", "ref_id": "b49", "title": "Categorical depth distribution network for monocular 3d object detection", "year": "2007" }, { "authors": "Thomas Roddick; Alex Kendall; Roberto Cipolla", "journal": "", "ref_id": "b50", "title": "Orthographic feature transform for monocular 3d object detection", "year": "2018" }, { "authors": "Shaoshuai Shi; Chaoxu Guo; Li Jiang; Zhe Wang; Jianping Shi; Xiaogang Wang; Hongsheng Li", "journal": "", "ref_id": "b51", "title": "PV-RCNN: Pointvoxel feature set abstraction for 3D object detection", "year": "2020" }, { "authors": "Shaoshuai Shi; Chaoxu Guo; Li Jiang; Zhe Wang; Jianping Shi; Xiaogang Wang; Hongsheng Li", "journal": "", "ref_id": "b52", "title": "Pv-rcnn: Pointvoxel feature set abstraction for 3d object detection", "year": "2020" }, { "authors": "Shaoshuai Shi; Xiaogang Wang; Hongsheng Li", "journal": "", "ref_id": "b53", "title": "Pointrcnn: 3d object proposal generation and detection from point cloud", "year": "2019" }, { "authors": "Shaoshuai Shi; Zhe Wang; Jianping Shi; Xiaogang Wang; Hongsheng Li", "journal": "", "ref_id": "b54", "title": "From points to parts: 3d object detection from point cloud with part-aware and part-aggregation network", "year": "2019" }, { "authors": "Weijing Shi; Raj Rajkumar", "journal": "", "ref_id": "b55", "title": "Point-gnn: Graph neural network for 3d object detection in a point cloud", "year": "2020" }, { "authors": "Qi Shi; Xiaozhi Ye; Chuangrong Chen; Zhixiang Chen; Tae-Kyun Chen; Kim", "journal": "", "ref_id": "b56", "title": "Geometry-based distance decomposition for monocular 3d object detection", "year": "2021" }, { "authors": "Andrea Simonelli; Samuel Rota Bulo; Lorenzo Porzi; Manuel López-Antequera; Peter Kontschieder", "journal": "", "ref_id": "b57", "title": "Disentangling monocular 3d object detection", "year": "2019" }, { "authors": "N Leslie; Smith", "journal": "", "ref_id": "b58", "title": "A disciplined approach to neural network hyper-parameters: Part 1-learning rate, batch size, momentum, and weight decay", "year": "2018" }, { "authors": "Pei Sun; Henrik Kretzschmar; Xerxes Dotiwalla; Aurelien Chouard; Vijaysai Patnaik; Paul Tsui; James Guo; Yin Zhou; Yuning Chai; Benjamin Caine", "journal": "", "ref_id": "b59", "title": "Scalability in perception for autonomous driving: Waymo open dataset", "year": "2020" }, { "authors": "Li Wang; Liang Du; Xiaoqing Ye; Yanwei Fu; Guodong Guo; Xiangyang Xue; Jianfeng Feng; Li Zhang", "journal": "", "ref_id": "b60", "title": "Depthconditioned dynamic message propagation for monocular 3d object detection", "year": "2021" }, { "authors": "Li Wang; Li Zhang; Yi Zhu; Zhi Zhang; Tong He; Mu Li; Xiangyang Xue", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b61", "title": "Progressive coordinate transforms for monocular 3d object detection", "year": "2021" }, { "authors": "Tai Wang; Jiangmiao Pang; Dahua Lin", "journal": "", "ref_id": "b62", "title": "Monocular 3d object detection with depth from motion", "year": "2022" }, { "authors": "Yan Wang; Wei-Lun Chao; Divyansh Garg; Bharath Hariharan; Mark Campbell; Kilian Q Weinberger", "journal": "", "ref_id": "b63", "title": "Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving", "year": "2019" }, { "authors": "Hai Wu; Chenglu Wen; Wei Li; Xin Li; Ruigang Yang; Cheng Wang", 
"journal": "", "ref_id": "b64", "title": "Transformation-equivariant 3d object detection for autonomous driving", "year": "2022" }, { "authors": "Xiaopei Wu; Liang Peng; Honghui Yang; Liang Xie; Chenxi Huang; Chengqi Deng; Haifeng Liu; Deng Cai", "journal": "", "ref_id": "b65", "title": "Sparse fuse dense: Towards high quality 3d detection with depth completion", "year": "2022" }, { "authors": "Yan Yan; Yuxing Mao; Bo Li", "journal": "Sensors", "ref_id": "b66", "title": "SECOND: Sparsely embedded convolutional detection", "year": "2018" }, { "authors": "Zetong Yang; Yanan Sun; Shu Liu; Jiaya Jia", "journal": "", "ref_id": "b67", "title": "3DSSD: Point-based 3D single stage object detector", "year": "2020" }, { "authors": "Zetong Yang; Yanan Sun; Shu Liu; Xiaoyong Shen; Jiaya Jia", "journal": "", "ref_id": "b68", "title": "STD: Sparse-to-dense 3D object detector for point cloud", "year": "2019" }, { "authors": "Xingyi Tianwei Yin; Philipp Zhou; Krahenbuhl", "journal": "", "ref_id": "b69", "title": "Centerbased 3d object detection and tracking", "year": "2021-06" }, { "authors": "Fisher Yu; Dequan Wang; Evan Shelhamer; Trevor Darrell", "journal": "", "ref_id": "b70", "title": "Deep layer aggregation", "year": "2018" }, { "authors": "Kai Zhang; Gernot Riegler; Noah Snavely; Vladlen Koltun", "journal": "", "ref_id": "b71", "title": "Nerf++: Analyzing and improving neural radiance fields", "year": "2020" }, { "authors": "Yunpeng Zhang; Jiwen Lu; Jie Zhou", "journal": "", "ref_id": "b72", "title": "Objects are different: Flexible monocular 3d object detection", "year": "2021" }, { "authors": "Weiliang Wu Zheng; Li Tang; Chi-Wing Jiang; Fu", "journal": "", "ref_id": "b73", "title": "Sessd: Self-ensembling single-stage object detector from point cloud", "year": "2021" }, { "authors": "Yunsong Zhou; Yuan He; Hongzi Zhu; Cheng Wang; Hongyang Li; Qinhong Jiang", "journal": "", "ref_id": "b74", "title": "Monocular 3d object detection: An extrinsic parameter free approach", "year": "2021" }, { "authors": "Yin Zhou; Oncel Tuzel", "journal": "", "ref_id": "b75", "title": "Voxelnet: End-to-end learning for point cloud based 3d object detection", "year": "2018" } ]
[ { "formula_coordinates": [ 3, 389.45, 612.28, 155.66, 9.68 ], "formula_id": "formula_0", "formula_text": "Fru 2 = f 1 (Fru 1 )(1)" }, { "formula_coordinates": [ 3, 369, 704.17, 176.11, 9.68 ], "formula_id": "formula_1", "formula_text": "O fru = Sigmoid(f 2 (Fru 2 ))(2)" }, { "formula_coordinates": [ 4, 122.85, 133.43, 163.52, 9.68 ], "formula_id": "formula_2", "formula_text": "Fru 3 = O fru ⊙ Fru 2(3)" }, { "formula_coordinates": [ 4, 50.11, 226.13, 238.39, 20.94 ], "formula_id": "formula_3", "formula_text": "V 2 = f 3 (V 1 ); O 3d = Sigmoid(f 4 (V 2 )); V 3 = O 3d ⊙V 2(" }, { "formula_coordinates": [ 4, 308.86, 487.83, 230.35, 52.08 ], "formula_id": "formula_4", "formula_text": "O * frui,j,d =          1 if Ind i,j > -1 and d = Ind i,j , 0 if Ind i,j > -1 and d < Ind i,j , -1 otherwise." }, { "formula_coordinates": [ 5, 50.11, 94.57, 231.14, 52.62 ], "formula_id": "formula_5", "formula_text": "O * 3dx,y,z =          1 if Vol 3dx,y,z > 0, 0 if R(O * 3dx,y,z ) ∩ Ray point→cam -1 otherwise." }, { "formula_coordinates": [ 5, 50.11, 416.31, 236.25, 23.18 ], "formula_id": "formula_6", "formula_text": "M fru ∈ R W F ×H F ×D and M 3d ∈ R X×Y ×Z . M fru = true if O *" }, { "formula_coordinates": [ 5, 89.77, 476.33, 196.6, 12.69 ], "formula_id": "formula_7", "formula_text": "L f ru = FL(O * fru [M fru ], O fru [M fru ])(7)" }, { "formula_coordinates": [ 5, 96.34, 535.77, 190.02, 12.69 ], "formula_id": "formula_8", "formula_text": "L 3d = FL(O * 3d [M 3d ], O 3d [M 3d ])(8)" }, { "formula_coordinates": [ 5, 115.26, 585.32, 171.1, 9.65 ], "formula_id": "formula_9", "formula_text": "L occupancy = L f ru + L 3d(9)" }, { "formula_coordinates": [ 5, 117.32, 659.62, 169.04, 9.65 ], "formula_id": "formula_10", "formula_text": "L = L org + λL occupancy(10)" } ]
Learning Occupancy for Monocular 3D Object Detection
Monocular 3D detection is a challenging task due to the lack of accurate 3D information. Existing approaches typically rely on geometry constraints and dense depth estimates to facilitate the learning, but often fail to fully exploit the benefits of three-dimensional feature extraction in frustum and 3D space. In this paper, we propose OccupancyM3D, a method of learning occupancy for monocular 3D detection. It directly learns occupancy in frustum and 3D space, leading to more discriminative and informative 3D features and representations. Specifically, by using synchronized raw sparse LiDAR point clouds, we define the space status and generate voxel-based occupancy labels. We formulate occupancy prediction as a simple classification problem and design associated occupancy losses. The resulting occupancy estimates are employed to enhance the original frustum/3D features. As a result, experiments on the KITTI and Waymo open datasets demonstrate that the proposed method achieves a new state of the art and surpasses other methods by a significant margin. Code and pretrained models will be available at https://github.com/SPengLiang/OccupancyM3D.
Liang Peng; Junkai Xu; Haoran Cheng; Zheng Yang; Xiaopei Wu; Wei Qian; Wenxiao Wang; Boxi Wu; Deng Cai
[ { "figure_caption": "Figure 1 .1Figure 1. Overall design. We introduce occupancy learning for monocular 3D detection. Best viewed in color.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Occupancy label generation in frustum and 3D space. Best viewed in color with zoom in.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "experiments on competitive KITTI and Waymo open datasets. KITTI: KITTI [16] is a widely employed benchmark for autonomous driving. KITTI3D object dataset consists of 7,481 training samples and 7,518 testing samples, where labels on test set keep secret and the final performance is evaluated on KITTI official website. To conduct ablations, the Comparisons on KITTI test set for Car category. The red refers to the highest result and blue is the second-highest result. Our method outperforms other methods including monocular and video-based methods. training samples are further divided into a train set and a val set [9]. They individually contain 3,512 and 3,769 samples, respectively. KITTI has three categories: Car, Pedestrian, and Cyclist. According to difficulties (2D box height, occlusion and truncation levels), KITTI divides objects into Easy, Moderate, and Hard. Following common practice [58, 50, 34], we use AP BEV |R40 and AP 3D |R40 under IoU threshold of 0.7 to evaluate the performance.", "figure_data": "ApproachesVenueInputAPBEV (IoU=0.7)|R 40 Easy Moderate HardAP3D (IoU=0.7)|R 40 Easy Moderate HardKinematic3D [3]ECCV20Video 26.6917.5213.10 19.0712.729.17DfM [63]ECCV22Video 31.7122.8919.97 22.9416.8214.65AM3D [37]ICCV19Image 25.0317.3214.91 16.5010.749.52M3D-RPN [2]ICCV19Image 21.0213.6710.23 14.769.717.42MonoPair [10]CVPR20Image 19.2814.8312.89 13.049.998.65D4LCN [14]CVPR20Image 22.5116.0212.55 16.6511.729.51PatchNet [36]ECCV20Image 22.9716.8614.97 15.6811.1210.17RTM3D [25]ECCV20Image 19.1714.2011.99 14.4110.348.77Ground-Aware [33]RAL21Image 29.8117.9813.08 21.6513.259.91Monodle [38]CVPR21Image 24.7918.8916.00 17.2312.2610.29DDMP-3D [61]CVPR21Image 28.0817.8913.44 19.7112.789.80GrooMeD-NMS [23]CVPR21Image 26.1918.2714.05 18.1012.329.65MonoRUn [6]CVPR21Image 27.9417.3415.24 19.6512.3010.58MonoEF [75]CVPR21Image 29.0319.7017.26 21.2913.8711.71MonoFlex [73]CVPR21Image 28.2319.7516.89 19.9413.8912.07CaDDN [50]CVPR21Image 27.9418.9117.19 19.1713.4111.46MonoRCNN [57]ICCV21Image 25.4818.1114.10 18.3612.6510.03GUP Net [35]ICCV21Image 30.2921.1918.20 22.2615.0213.12AutoShape [34]ICCV21Image 30.6620.0815.95 22.4714.1711.36PCT [62]NeurIPS21 Image 29.6519.0315.92 21.0013.3711.31MonoCon [31]AAAI22Image 31.1222.1019.00 22.5016.4613.95HomoLoss [17]CVPR22Image 29.6020.6817.81 21.7514.9413.07MonoDTR [20]CVPR22Image 28.5920.3817.14 21.9915.3912.73MonoJSG [29]CVPR22Image 32.5921.2618.18 24.6916.1413.64DEVIANT [22]ECCV22Image 29.6520.4417.43 21.8814.4611.89DID-M3D [46]ECCV22Image 32.9522.7619.83 24.4016.2913.75OccupancyM3D-Image 35.3824.1821.37 25.5517.0214.79Waymo: Waymo open dataset (WaymoOD) [60] is a largemodern dataset for autonomous driving. It has 798 se-quences for training and 202 sequences for validation. 
Fol-lowing previous works [50", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on WaymoOD val set for Vehicle category. The red refers to the highest result and blue is the second-highest result. Our method outperforms other methods by significant margins on most metrics. Note that our method has the detection range limitation of [2, 59.6](meters), while the perspective-view based method DID-M3D[46] does not have this shortcoming. Thus our method performs worse for objects within [50m, ∞] under IoU=0.5 criterion.", "figure_data": "MethodsVenue3D mAP / mAPH (IoU = 0.7) Overall 0 -30m 30 -50m 50m -∞ Overall 3D mAP / mAPH (IoU = 0.5) 0 -30m 30 -50m 50m -∞Comparison on LEVEL 1PatchNet[36]ECCV20 0.39/0.37 1.67/1.63 0.13/0.12 0.03/0.03 2.92/2.74 10.03/9.75 1.09/0.96 0.23/0.18CaDDN [50]CVPR21 5.03/4.99 14.54/14.43 1.47/1.45 0.10/0.10 17.54/17.31 45.00/44.46 9.24/9.11 0.64/0.62PCT [62]NeurIPS21 0.89/0.88 3.18/3.15 0.27/0.27 0.07/0.07 4.20/4.15 14.70/14.54 1.78/1.75 0.39/0.39MonoJSG [29]CVPR22 0.97/0.95 4.65/4.59 0.55/0.53 0.10/0.09 5.65/5.47 20.86/20.26 3.91/3.79 0.97/0.92DEVIANT [22]ECCV22 2.69/2.67 6.95/6.90 0.99/0.98 0.02/0.02 10.98/10.89 26.85/26.64 5.13/5.08 0.18/0.18DID-M3D [46]ECCV22-/--/--/--/-20.66/20.47 40.92/40.60 15.63/15.48 5.35/5.24OccupancyM3D-10.61/10.53 29.18/28.96 4.49/4.46 0.41/0.40 28.99/28.66 61.24/60.63 23.25/23.00 3.65/3.59Comparison on LEVEL 2PatchNet[36]ECCV20 0.38/0.36 1.67/1.63 0.13/0.11 0.03/0.03 2.42/2.28 10.01/9.73 1.07/0.94 0.22/0.16CaDDN [50]CVPR21 4.49/4.45 14.50/14.38 1.42/1.41 0.09/0.09 16.51/16.28 44.87/44.33 8.99/8.86 0.58/0.55PCT [62]NeurIPS21 0.66/0.66 3.18/3.15 0.27/0.26 0.07/0.07 4.03/3.99 14.67/14.51 1.74/1.71 0.36/0.35MonoJSG [29]CVPR22 0.91/0.89 4.64/4.65 0.55/0.53 0.09/0.09 5.34/5.17 20.79/20.19 3.79/3.67 0.85/0.82DEVIANT [22]ECCV22 2.52/2.50 6.93/6.87 0.95/0.94 0.02/0.02 10.29/10.20 26.75/26.54 4.95/4.90 0.16/0.16DID-M3D[46]ECCV22-/--/--/--/-19.37/19.19 40.77/40.46 15.18/15.04 4.69/4.59OccupancyM3D-10.02/9.94 28.38/28.17 4.38/4.34 0.36/0.36 27.21/26.90 61.09/60.49 22.59/22.34 3.18/3.13ApproachesVenueInputPedestrian APBEV /AP3D(IoU=0.5)|R 40 Cyclist APBEV /AP3D(IoU=0.5)|R 40 Easy Moderate Hard Easy Moderate HardDfM [63]ECCV22 Video-/13.70-/8.71-/7.32-/8.98-/5.75-/4.88Monodle [38]CVPR21 Image10.73/9.646.96/6.556.20/5.445.34/4.59 3.28/2.662.83/2.45DDMP-3D [61]CVPR21 Image5.53/4.934.02/3.553.36/3.014.92/4.18 3.14/2.502.44/2.32MonoRUn [6]CVPR21 Image 11.70/10.887.59/6.786.34/5.831.14/1.01 0.73/0.610.66/0.48MonoEF [75]CVPR21 Image4.61/4.273.05/2.792.85/2.212.36/1.80 1.18/0.921.15/0.71MonoFlex [73]CVPR21 Image10.36/9.437.36/6.316.29/5.264.41/4.17 2.67/2.352.50/2.04CaDDN [50]CVPR21 Image 14.72/12.879.41/8.148.17/6.769.67/7.00 5.38/3.414.75/3.30GUP Net [35]ICCV21Image 15.62/14.95 10.37/9.768.79/8.416.94/5.58 3.85/3.213.64/2.66AutoShape [34]ICCV21Image-/5.46-/3.74-/3.03-/5.99-/3.06-/2.70MonoCon [31]AAAI22Image-/13.10-/8.41-/6.94-/2.80-/1.92-/1.55HomoLoss [17]CVPR22 Image 13.26/11.878.81/7.667.41/6.826.81/5.48 4.09/3.503.78/2.99MonoJSG [29]CVPR22 Image-/11.02-/7.49-/6.41-/5.45-/3.21-/2.57DEVIANT [22]ECCV22 Image 14.49/13.439.77/8.658.28/7.696.42/5.05 3.97/3.133.51/2.59OccupancyM3D-Image 16.54/14.68 10.65/9.159.16/7.808.58/7.37 4.35/3.563.55/2.84", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparisons on KITTI test set for Pedestrian and Cyclist categories. The red refers to the highest result and blue is the secondhighest result. 
Our method achieves new state-of-the-art results.", "figure_data": "OccupancyM3D outperforms CaDDN [50] by 5.58/5.54mAP/mAPH (10.61/10.53 vs. 5.03/4.99) and 11.55/11.35mAP/mAPH (28.99/28.66 vs. 17.54/17.31) with IoU 0.7and 0.5 criterions, respectively. Compared to DID-M3D[46], under IoU criterion 0.5, our method outperforms itby 8.33/8.19 mAP/mAPH (28.99/28.66 vs. 20.66/20.47)and 7.84/7.71 mAP/mAPH (27.21/26.90 vs. 19.37/19.19)", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "E.OL-FS OL-3DSAP BEV /AP 3D (IoU=0.7)|R 40 Easy Moderate Hard(a)30.32/21.04 24.58/17.05 22.02/15.01(b)✓35.46/24.69 25.46/17.79 22.96/15.16(c)✓33.15/24.64 25.45/18.88 22.68/16.38(d)✓✓35.72/26.87 26.60/19.96 23.68/17.15", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[42]", "Explanation": "The cited work introduces geometry constraints that the citing paper adopts to facilitate 3D reasoning in monocular 3D detection."}, {"Category": "Data Source", "Citation": "[64]", "Explanation": "The cited work provides dense depth estimates that the citing paper utilizes in their research on monocular 3D detection."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work employs geometry constraints that the citing paper adopts to improve features in 2D space for monocular 3D detection."}, {"Category": "Data Source", "Citation": "[50]", "Explanation": "The cited work provides dense depth estimates that the citing paper utilizes in their research on monocular 3D detection."}, {"Category": "Methodological Basis", "Citation": "[48,49]", "Explanation": "The cited works of PointNet families are effective methods for feature extraction from raw point clouds, which the citing paper adopts in their research to perform 3D detection."}, {"Category": "Data Source", "Citation": "[47,54,56,68]", "Explanation": "The cited works of point-based methods are a source of data for the citing paper, as they are used to directly perform 3D detection on raw point clouds."}, {"Category": "Data Source", "Citation": "[76,67,70,74,13]", "Explanation": "The cited works of voxel-based methods are a source of data for the citing paper, as they organize point clouds into voxel grids for compatibility with regular convolutional neural networks."}, {"Category": "Data Source", "Citation": "[1,15,4]", "Explanation": "The cited works of range-view-based methods are a source of data for the citing paper, as they convert point clouds into range-view to accommodate the LiDAR scanning mode."}, {"Category": "Data Source", "Citation": "[53,69,7,43]", "Explanation": "The cited works of hybrid-based methods are a source of data for the citing paper, as they use a combination of different representations to leverage the strengths of each method."}, {"Category": "Methodological Basis", "Citation": "[8,42]", "Explanation": "The cited works are used as a basis for early works in monocular 3D detection that resort to using scene priors and geometric projections to recover instance level 3D information from a single image."}, {"Category": "Extension or Continuation", "Citation": "[2, 39,25,34,73,27,29]", "Explanation": "The cited works are further extended in more recent monocular methods that employ geometry constraints and extra priors to achieve instance depth estimation in a progressive refinement problem."}, {"Category": "Supporting Evidence", "Citation": "[34]", "Explanation": "The cited work of AutoShape incorporates shape-aware 2D/3D constraints into the 3D detection framework by learning distinguished 2D and 3D keypoints, providing evidence for the use of geometry constraints in monocular 3D detection."}, {"Category": "Supporting Evidence", "Citation": "[29]", "Explanation": "The cited work of MonoJSG reformulates the instance depth estimation as a progressive refinement problem and proposes a joint semantic and geometric cost volume to model the depth error, further supporting the use of geometry constraints in monocular 3D detection."}, {"Category": "Methodological Basis", "Citation": "[64,37,36]", "Explanation": "The cited works provide direct methods for converting depth maps to pseudo LiDAR or 3D coordinate patches, which the citing paper adopts in its research to lift 2D image features to 3D space."}, {"Category": 
"Methodological Basis", "Citation": "[50]", "Explanation": "The cited work uses depth distributions to lift 2D image features to 3D space, which the citing paper leverages in its research to further explore feature extraction and representation in the frustum and 3D space."}, {"Category": "Data Source", "Citation": "[14, 61,44,12,11,46]", "Explanation": "The cited works provide useful data or information in the form of depth maps or LiDAR point clouds, which the citing paper utilizes in its research to enhance feature extraction and auxiliary information."}, {"Category": "Methodological Basis", "Citation": "[40,41,72]", "Explanation": "The cited works provide a foundation for the development of implicit representations, which the citing paper builds upon to model 3D scenes in a more fine-grained manner for tasks such as 3D reconstruction and semantic segmentation."}, {"Category": "Extension or Continuation", "Citation": "CaDDN [50]", "Explanation": "The cited work serves as a reference for the architecture design in the citing paper, with the exception of the proposed occupancy learning module and the replacement of the 2D backbone with DLA34 [71]. The citing paper extends the method to the monocular setting, which is a potential avenue for future research."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work introduces the concept of focal loss, which the citing paper adopts in the context of occupancy prediction to improve the classification accuracy of the predicted occupancy labels."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The cited work, CaDDN, provides the original detection and depth losses that the citing paper adopts in its research to calculate the final loss of the network."}, {"Category": "Methodological Basis", "Citation": "[45]", "Explanation": "The cited work, PyTorch, is used for implementation in the citing paper, indicating a methodological basis for the research conducted."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work, KITTI, is used as a data source for training the network in the citing paper, providing a foundational element for the study conducted."}, {"Category": "Data Source", "Citation": "[60]", "Explanation": "The cited work, Waymo, is also used as a data source for training the network in the citing paper, providing another foundational element for the study conducted."}, {"Category": "Supporting Evidence", "Citation": "[50]", "Explanation": "The cited work, CaDDN, provides a method for 3D object detection that the citing paper outperforms, indicating that the proposed method in the citing paper is more effective."}, {"Category": "Extension or Continuation", "Citation": "[46]", "Explanation": "The cited work, DID-M3D, is extended in the citing paper by outperforming it in terms of AP BEV, showing a new state-of-the-art result in the field of monocular 3D detection."}, {"Category": "Extension or Continuation", "Citation": "[63]", "Explanation": "The cited work, DfM, is further extended in the citing paper by outperforming it in terms of AP BEV under the moderate setting, indicating a new state-of-the-art result in the field of video-based monocular 3D detection."}, {"Category": "Supporting Evidence", "Citation": "[60]", "Explanation": "The Waymo open dataset is cited to provide a large dataset that benefits the proposed occupancy learning method, as it contains diverse scenes that are useful for the method."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b34", "b8", "b25", "b0", "b24", "b42", "b51", "b48", "b15", "b6", "b31", "b43", "b12", "b33", "b28", "b24", "b41" ], "table_ref": [], "text": "Recent advances in large language models (LLMs) have exhibited remarkable abilities in language comprehension, text generation, question answering, dialogue, reasoning, and can even exhibit human-level performance on various benchmarks (Ouyang et al., 2022;Chowdhery et al., 2022;OpenAI, 2023;Google, 2023). Since LLMs have been trained on extensive and diverse text corpora, they have captured a broad range of commonsense understanding about the world, enabling them to adeptly handle a multitude of various and complex scenarios. Therefore, recently, researchers have proposed to integrate LLMs in embodied decision making (Huang et al., 2022a;Li et al., 2022;Ahn et al., 2022;Huang et al., 2022b;Singh et al., 2022a;Yao et al., 2022;Wang et al., 2023;Driess et al., 2023;Carta et al., 2023), either using LLMs to do task planning or directly using LLM as an agent.\nAlthough language agents are capable of making sound decisions when provided with ample information, they face challenges when encountering situations with limited or insufficient information Figure 1: Asking Before Action (ABA) allows the agent to efficiently query for pertinent information from external sources via natural language and subsequently execute actions based on the acquired responses. Imagine the room owner instructs the robot to \"put a statue on the dresser\" (in the purple box), however, the robot lacks precise knowledge of the statue's location. To accomplish the task, the robot must first gather the necessary information (in this case, the statue's location), then take further actions as shown in the gray box. In the process of information gathering, classical methods (in red dashed box) search for the statute through onerous trial and error, which is both inefficient and demanding especially in complex environments. In contrast, our proposed method (in green dashed box) empowers the robot to directly ask for the location in natural language and proceed directly to the statute according to the answer, which significantly improves both efficiency and success rates. such as unfamiliar environments. In such cases, the agents may struggle to make informed decisions. To illustrate this, let's consider the scenario depicted in Figure 1, where a robot is deployed in an unfamiliar house with the task to put a statue on the dresser. However, the robot lacks prior information about the house like the statue's location. Consequently, the robot decides to systematically search for every possible position in order, as shown in the red dashed box in Figure 1. Even though the robot finally manages to find the statue, this decision-making process is notably inefficient, not to mention the possibility of suboptimal searching behavior which may lead to failure in finding the statue.\nOn the other hand, when we humans encounter such scenarios, we tend to adopt a different approach. Rather than onerous trial and error, it is natural for us to actively query external information from our peers or other information sources to accelerate information gathering and guide decision making. Imagine you are invited to your friend's house and your friend asks you to help move a statue. 
As shown in Figure 1, instead of opening each and every cabinet to check whether there is a statue, you would likely opt to directly ask \"where is the statue?\", then directly go to the specific location after you got the answer (as shown in green dashed box).\nBuilding upon this intuition, we focus on a novel setting where the agent can actively query for additional pertinent information from external sources using natural language during their interactions within environments. Though some existing works have explored scenarios involving human-in-theloop interactions to provide additional information, our setting is stands apart from these previous ones. A majority of works (Nguyen and Daumé III, 2019;Nguyen et al., 2019;Singh et al., 2022b;Da Silva et al., 2020) ask humans for oracle actions or action descriptions, Nguyen et al. (2022) ask for information about current states and (sub-)goals. Liu et al. (2022) asks three-word-templated questions to accelerate training, while Huang et al. (2022b) ask for scene, task, or preferences descriptions. In contrast to existing works, we concentrate on designing a generic mechanism to gather information through natural language, which imposes fewer restrictions and aligns more closely with human decision-making processes.\nIn this paper, we aim to investigate the feasibility of designing an agent that is able to automatically ask proper questions in unfamiliar environments via natural language. Two questions linger in our mind: Firstly, can the agent ask various questions to gather a variety of necessary information while filtering out the irrelevant one? Furthermore, can the agent remember and reuse the previously acquired information, thereby avoiding asking for the same information in later-on tasks? The main paper solves these questions in the following organization:\n• We introduce Contextual MDP with Human / External Information Sources in the Loop, a novel theoretical formulation that is able to formalize the scenarios where the agent can actively query to efficiently gather information via language (Section 3.1). • We propose Asking Before Action (ABA), an efficient method that is able to accelerate information gathering by allowing the agent to actively query for pertinent information in natural language while interacting with the environments. ABA can learn to ask proper questions even only with a modest modification of existing agents by providing in-context examples. To further improve the performance, we propose to use imitation learning to enable asking diverse yet pertinent questions as well as remembering and reusing the acquired information (Section 3.2). • Experiments on a series of tasks in ALFWorld (Shridhar et al., 2021) and its variants empirically demonstrate that our method is capable of asking proper questions and acting upon the answers. Our method demonstrates more than 40% improvement in ALFWorld tasks success rate and achieves remarkable performance on tasks that can hardly be completed using previous methods on ALFWorld variants (Section 4)." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b16", "b53", "b2", "b54" ], "table_ref": [], "text": "To effectively portray the obstacles that arise when deploying an agent to unfamiliar environments, we formulate the embodied decision making problem as Contextual Markov Decision Processes (Contextual MDPs) (Hallak et al., 2015).\nDefinition 2.1 Contextual MDP is a tuple (S, A, C, M(c)).\nHere S and A stand for state space and action space respectively. 
C is the context space. M is a function mapping a context c ∈ C to a specific T-horizon MDP M(c) = (S, A, p(·|s, a, c), r(s, a, c)). Here, {M(c), ∀c ∈ C} represents a family of MDPs characterized by a shared state space S and action space A, but with transition functions p(·|s, a, c) and reward functions r(s, a, c) that depend on c. The goal of the agent is to learn a policy π that maximizes the cumulative reward on the target environment(s). Denoting C′ ⊂ C as the context set used in evaluation, we optimize
$$J(\pi) = \mathbb{E}_{c' \in C',\, s_0,\, p,\, \pi}\Big[\sum_{t=0}^{T} r(s_t, a_t, c')\Big] \quad (1)$$
Note that the context c varies across environments and oftentimes remains unknown. Optionally, the agent is additionally provided with a task instruction i, which is usually a concise language description of the goal and provides extra information about the context c. As shown in Definition 2.1, when deployed to a new environment, understanding the context c becomes a prerequisite for comprehending the transition and reward functions and ultimately achieving success. In light of this, one common approach is to gather information about the context c through trial-and-error interactions with the environment, i.e., to infer the context from history, $\hat{c} = f_\theta(s_1, a_1, r_1, \cdots, s_t)$ (or $\hat{c} = f_\theta(i, s_1, a_1, r_1, \cdots, s_t)$ if i is provided), while trying to solve the task. Here t ∈ {1, 2, · · · , T} and f_θ is some learnable encoder of c as in (Zintgraf et al., 2020). Consider an example setting where a robot is tasked with delivering food to a bedroom within an unfamiliar house, where c represents the house layout, including the precise locations of the food and the bedroom. To complete the delivery, the robot must embark on a journey of exploration, wandering around to discover both the food and the bedroom.
However, efficiently gathering information in various unknown environments with different contexts c can be challenging. Aside from limited generalization capability (Beck et al., 2023), existing methods often rely on dense rewards and sufficiently small state spaces (Zintgraf et al., 2021), which may lead to catastrophic failure in embodied decision making, where environments often lack carefully crafted dense reward functions and state spaces are large.
We argue that this is not, at least not always, how we humans deal with unfamiliar environments. Instead of trying to explore everything on our own, we usually turn to another human, perhaps a more experienced senior, and ask for helpful information. This behavior significantly alleviates the exploration burden in many situations. The above intuition urges us to reconsider embodied decision making in unfamiliar evaluation environments: what if the agent does not need to figure out everything by itself? Though some prior works have studied scenarios with a human in the loop (refer to Section 5 for a detailed survey), as far as we know, we are the first to deal with the setting of enabling information gathering for embodied decision making with LLM agents." },
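As a concrete reading of the evaluation objective in Eq. (1), the sketch below estimates J(π) by rolling a policy out in environments instantiated from contexts in C′ and averaging episodic returns; the context itself stays hidden from the policy. The `make_env` and `policy` callables and the `reset()`/`step()` interface are assumptions for illustration, not part of the paper.

```python
def evaluate_policy(policy, make_env, eval_contexts, horizon):
    """Monte-Carlo estimate of J(pi) in Eq. (1): average return of `policy`
    over T-horizon MDPs M(c') built from the evaluation context set C'."""
    returns = []
    for c in eval_contexts:
        env = make_env(c)             # instantiate M(c'); the agent never sees c'
        obs = env.reset()
        history, total = [obs], 0.0
        for _ in range(horizon):      # T-horizon rollout
            action = policy(history)  # the policy conditions only on its own history
            obs, reward, done = env.step(action)
            history += [action, obs]
            total += reward
            if done:
                break
        returns.append(total)
    return sum(returns) / max(len(returns), 1)
```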
}, { "figure_ref": [], "heading": "Contextual MDP with Human / External Information Source in the Loop", "publication_ref": [], "table_ref": [], "text": "To incorporate humans (or other external knowledge sources) in the loop of decision making, the key difference is that the agent is able to interact with humans directly to efficiently gather information:\nDefinition 3.1 Contextual MDP with Human / External information source in the loop based on (S U , A U , C, H(c), M(c)). Here S U , A U are the augmented state space and action space: S U = S ∪ L ans , A U = A ∪ L ask , where L ask and L ans include all possible questions and answers in natural language. H(c) maps context c ∈ C to H c , which is a model of human (or other external information source) in context c that can map any questions to information.\nM(c) = (S U , A U , H c , p U (•|s, a, c, H c ), r(s, a, c), γ) Like Contextual MDP, M(c) = (S U , A U , H c , p U (•|s, a, c, H c ), r(s, a, c), γ\n) has a transition function and a reward function parameterized by c. However, the state space S U and the action space A U are augmented with answers and questions respectively, and the transition function p U (•|s, a, c, H c ) can be factorized as:\np U (s ′ |s, a, c, H c ) = p(s ′ |s, a, c) • 1 a∈A + p(H c (a) = s ′ ) • 1 a∈L ask (2)\nWith the augmented action space, the agent can now query to gather information while interacting with the environments. For instance, by simply asking \"where is the kitchen?\", the agent can omit tens of steps of exploration to find the food. However, several challenges exist in this process. Firstly, when deployed to unfamiliar environments, it is important for the agent to identify the key information that is pertinent while filtering out the task-irrelevant ones. Secondly, it would be icing on the cake if the agent can choose to ask only when it cannot reason the answers from historical information.\nTo solve these challenges, we propose Asking Before Action (ABA), an effective method for the language agent to cleverly gather necessary information." }, { "figure_ref": [], "heading": "Asking Before Action", "publication_ref": [ "b21", "b15", "b0", "b42", "b34", "b9" ], "table_ref": [], "text": "In this paper, we focus on the setting where the task instruction i is provided. Therefore, the agent will integrate i and the historical observations and actions (s\n1 , a 1 , • • • , s t ) by concatenation to get τ t = concat(i, s 1 , a 1 , • • • , s t )\n, which is used as the input to the policy a t ∼ π(τ t ).\nTo efficiently phrase the questions and comprehend the answers, we use pretrained LLMs as the initialization of the agent's policy. Therefore without loss of generality, in the following of this paper we assume that both the states and the actions are in the form of natural language. However, our method can be easily extended to visual settings with multimodal LLMs such as (Huang et al., 2023;Driess et al., 2023), or be combined with pretrained low-level policies as in Ahn et al. (2022); Singh et al. (2022a) to solve complex robot control tasks.\nWhile notable progress has been made in instruction-following LLMs (Ouyang et al., 2022;Chung et al., 2022), relying solely on the zero-shot deployment of an LLM agent based on task instruction i falls short of meeting the desired outcomes. To this end, we describe two methods in the following sections to further improve the performance. In Section 3.2.1, we introduce a simple yet effective method to improve policy learning via few-shot in-context examples. 
{ "figure_ref": [], "heading": "Asking Before Action", "publication_ref": [ "b21", "b15", "b0", "b42", "b34", "b9" ], "table_ref": [], "text": "In this paper, we focus on the setting where the task instruction i is provided. The agent integrates i and the historical observations and actions (s_1, a_1, · · · , s_t) by concatenation to get τ_t = concat(i, s_1, a_1, · · · , s_t), which is used as the input to the policy a_t ∼ π(τ_t).
To efficiently phrase questions and comprehend answers, we use pretrained LLMs as the initialization of the agent's policy. Therefore, without loss of generality, in the rest of this paper we assume that both states and actions are in the form of natural language. However, our method can easily be extended to visual settings with multimodal LLMs such as (Huang et al., 2023;Driess et al., 2023), or combined with pretrained low-level policies as in Ahn et al. (2022); Singh et al. (2022a) to solve complex robot control tasks.
While notable progress has been made in instruction-following LLMs (Ouyang et al., 2022;Chung et al., 2022), relying solely on zero-shot deployment of an LLM agent based on the task instruction i falls short of the desired outcomes. To this end, we describe two methods in the following sections to further improve the performance. In Section 3.2.1, we introduce a simple yet effective method to improve policy learning via few-shot in-context examples. In Section 3.2.2, to further improve the performance, we propose model finetuning with expert demonstration data." }, { "figure_ref": [], "heading": "Asking Before Action via In-context Examples", "publication_ref": [ "b3", "b50", "b1", "b13", "b42", "b51", "b0" ], "table_ref": [], "text": "In-context learning (Brown et al., 2020) allows LLMs to learn new tasks simply by prepending several input-output examples before the inputs, without optimizing any model parameters. Its efficiency and generalization ability have attracted a lot of attention (Xie et al., 2021;Akyürek et al., 2022;Dai et al., 2022). Recent works on embodied planning (Huang et al., 2022a;Singh et al., 2022a) or embodied decision making (Yao et al., 2022) with LLMs also leverage in-context learning to learn the policy. Therefore, an intuitive and natural approach is to provide the agent with examples that demonstrate asking appropriate questions at appropriate times, and then generalize to new tasks via in-context learning.
Instead of directly using the history of the current task τ_t = concat(i, s_1, a_1, · · · , s_t) as input, we provide the agent with K human-annotated trajectories τ^k = concat(i^k, s^k_1, a^k_1, · · · , s^k_T, a^k_T), where k ∈ {1, 2, · · · , K}. Each τ^k contains proper asking actions as well as actions interacting with the environment, and i^k is sampled randomly for different k. With K examples, for the current task, the agent selects actions according to
$$a_t \sim \pi_{LLM}(\cdot \mid \tau^1, \cdots, \tau^K, \tau_t) \quad (3)$$
Instead of directly letting the LLM agent generate the final action, we follow Ahn et al. (2022) and directly score the conditional probability of each action a ∈ A by
$$a_t = \arg\max_{a \in A} \prod_{i=1}^{|a|} \pi_{LLM}(e_i \mid \tau^1, \cdots, \tau^K, \tau_t, e_{1:i-1}) \quad (4)$$
where e_i is the i-th token of the action and |a| is the number of tokens of the encoded action a. However, in our paper the action space is augmented with L_ask, and therefore A_U is infinite. Thus, we propose to first augment the action space with one special action \"ask\", then score and select. If the action \"ask\" is selected, the agent keeps generating the corresponding question via the LLM until the stop token." },
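A minimal sketch of the selection rule in Eqs. (3)-(4) together with the special "ask" action, assuming a HuggingFace-style causal LM and tokenizer (e.g., a Vicuna-7B checkpoint); the exact prompt format, scoring details, and stopping criteria of the authors' implementation may differ. The names `score_action`, `select_action`, and the `ask:` prefix are illustrative.

```python
import torch


@torch.no_grad()
def score_action(model, tokenizer, prompt, action):
    """Sum of token log-probabilities of `action` given `prompt`, as in Eq. (4)."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    action_ids = tokenizer(" " + action, add_special_tokens=False,
                           return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, action_ids], dim=1)
    logprobs = model(input_ids).logits.log_softmax(dim=-1)
    start = prompt_ids.shape[1]
    # Logits at position j predict token j + 1, hence the shift by one.
    token_logps = logprobs[0, start - 1:-1, :].gather(
        1, action_ids[0].unsqueeze(1)).squeeze(1)
    return token_logps.sum().item()


@torch.no_grad()
def select_action(model, tokenizer, history, admissible_actions):
    """Score admissible actions plus a special 'ask' action (Eqs. (3)-(4));
    if 'ask' wins, let the LLM generate the question itself in free form."""
    candidates = list(admissible_actions) + ["ask:"]
    scores = [score_action(model, tokenizer, history, a) for a in candidates]
    best = candidates[max(range(len(scores)), key=scores.__getitem__)]
    if best != "ask:":
        return best
    inputs = tokenizer(history + " ask:", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=32)
    question = tokenizer.decode(out[0, inputs.input_ids.shape[1]:],
                                skip_special_tokens=True)
    return "ask: " + question.strip()
```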
{ "figure_ref": [], "heading": "Asking Before Action with Imitation Learning", "publication_ref": [ "b36" ], "table_ref": [], "text": "In experiments, we find that when the task horizon is relatively short, in-context learning is sufficient. However, when the horizon is relatively long, purely in-context learning might fall short, for the following reasons. First, due to the token limitations of LLMs, providing even two examples can result in truncation at evaluation time. Therefore, sometimes we can only use one example, which limits diversity and hampers the performance of in-context learning. Furthermore, as the task horizon increases, the policy usually becomes more complex. This complexity exacerbates the need for examples, which may further prevent the agent from learning a good policy, especially when the number of examples is limited. Taking these two factors into consideration, a more careful treatment is needed for the agent to learn a robust policy.
To this end, we propose to further finetune the model via imitation learning. We collect a dataset of N trajectories using an expert policy, $D = \{(\tau^i_t, a^i_t, n^i_t)_{t=0}^{T_i}\}_{i=0}^{N}$, where each trajectory consists of input-output pairs for T_i timesteps and n^i_t is a mask variable. To alleviate the distribution shift problem (Ross et al., 2011), we intentionally corrupt the expert policy by randomly injecting noisy actions with probability p and mark these as noise by setting n^i_t = 1 in the dataset. The policy is then trained to maximize the probability of actions across trajectories via a cross-entropy loss in which all noisy actions are ignored:
$$\mathcal{L} = -\sum_{i=0}^{N}\sum_{t=0}^{T_i} \log \pi_{LLM}(a^i_t \mid \tau^i_t)\cdot \mathbb{1}_{n^i_t = 0} \quad (5)$$" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b41", "b7", "b41", "b51", "b7", "b51", "b0" ], "table_ref": [], "text": "In this section, we empirically evaluate ABA on a series of decision making tasks in ALFWorld (Shridhar et al., 2021) and its variants. In Section 4.1, we assess the effectiveness of ABA on the standard ALFWorld benchmark; in Section 4.2, we evaluate it on two modified variants. To minimize human involvement, during the evaluation phase we implement the model of the human (or other information sources) H via another language model which is instructed to respond to questions based on the provided information. To incorporate prior knowledge about the current room, we extract the information about the object placement from the simulator and transform it into a descriptive paragraph. This paragraph is then fed to H. Whenever the agent poses a question, H is tasked with providing an answer based on the paragraph. To further improve the answer accuracy, H is prompted with several question-answer examples. We use Vicuna-7B (Chiang et al., 2023) to implement H. For more details, please refer to Appendix A. It is worth noting that this design allows for the straightforward replacement of the current language model with a human or alternative information sources for more appropriate answers.
As for baselines, we use:
• BUTLER (Shridhar et al., 2021), an imitation learning-based method without an LLM. It trains independent models using a substantial dataset of 10^5 expert trajectories for each task.
• ReAct (Yao et al., 2022), an LLM-based method that synergizes reasoning and acting to take actions.
As for the implementation, we use Vicuna-7B (Chiang et al., 2023) as the language model for both ReAct and our method, and we incorporate the reasoning process when making decisions (Yao et al., 2022;Ahn et al., 2022). For a fair comparison, we use the same scoring method to select actions for both our method and ReAct. In this section, we present the results for ABA with human-annotated in-context examples. For our method and ReAct, we use K in-context examples with K = 2.
The results show that our method substantially outperforms ReAct when both are implemented with Vicuna-7B (Chiang et al., 2023) (ReAct reaches a success rate of only 6%). We hypothesize that the limited model size hampers the reasoning ability, and it is likely that our method would yield even better results with larger models. For example trajectories and qualitative analysis, please refer to Appendix B." }, { "figure_ref": [], "heading": "Modified ALFWorld: Multiround ALFWorld and ALFWorld with ambiguous tasks", "publication_ref": [ "b51" ], "table_ref": [], "text": "To further assess the capabilities of ABA in terms of asking questions to gather diverse information and remembering queried or known information to avoid repetitive questioning, we expand the ALFWorld environment with two additional variants: ALFWorld with ambiguous tasks and multiround ALFWorld.
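As a concrete illustration of the evaluation setup described in Section 4.1 (and detailed in Appendix A), the sketch below shows how an answerer H can be built: ground-truth object placements from the simulator are flattened into an "A is in B." paragraph and prepended to the agent's question before querying an instruction-tuned language model. The prompt wording and the `llm` callable are simplified assumptions; the paper additionally prompts H with a few question-answer examples.

```python
def placements_to_paragraph(placements):
    """Turn a {object: receptacle} mapping from the simulator into the
    descriptive 'A is in B.' paragraph that is fed to the answerer H."""
    return " ".join(f"{obj} is in {recep}." for obj, recep in placements.items())


def answer_question(llm, placements, question):
    """Answerer H: instruct a language model (any callable str -> str,
    e.g. a wrapper around Vicuna-7B) to answer based only on the paragraph."""
    prompt = ("Read the following paragraph and answer questions:\n"
              f"{placements_to_paragraph(placements)}\n"
              f"The question is: {question}\n"
              "Answer:")
    return llm(prompt)


# Usage sketch with hypothetical placements and llm:
# placements = {"mug 1": "diningtable 1", "mug 2": "diningtable 1"}
# answer_question(llm, placements, "Where is the mug?")
```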
In this section, we compare two variants of our methods, namely ABA-IC and ABA-IL, with ReAct (Yao et al., 2022). ABA-IC refers to ABA via in-context examples as elaborated in Section 3.2.1, while ABA-IL refers to ABA with imitation learning as elaborated in Section 3.2.2. For ABA-IC and ReAct, we utilize human-annotated examples, while for ABA-IL, we manually design an expert policy to collect data. Additional details can be found in Appendix C. In the following, we will detail the modified environments and present the experiment results:\nALFWorld with ambiguous tasks In this setting, we manually adjusted the task descriptions and reward functions to introduce ambiguity. Instead of providing precise task descriptions, we deliberately left some aspects open-ended, thereby necessitating the agent to gather additional information for successful completion. For instance, in ALFWorld, the task \"put a mug on the shelf\" is typically considered accomplished as long as any mug is placed on the shelf (there might be multiple mugs in the room). But in this modified setting, the task is only deemed completed when a specific mug is put on the shelf. To complete this task, one can either enumerate all possibilities accordingly until the correct one is identified or directly ask for further clarification about the task.\nFor ABA-IC and ReAct, we use K = 2 in-context examples. For ABA-IL, we collect a dataset of 1500 trajectories. The results are shown in Figure 4.2. Both ABA-IC and ABA-IL consistently exhibit superior performance compared to ReAct, while the baseline fails to complete the task in many scenarios. In Appendix D, we provide example trajectories that demonstrate the effectiveness of ABA-IC and ABA-IL in asking pertinent questions to gather necessary information, while ReAct struggles to conduct consistent exploration. This again highlights the significance of asking: actively questioning for necessary information does not only improve efficiency but also improves success rate, as it proves challenging to gather the necessary information in various complex environments solely using one policy. Furthermore, ABA-IL slightly outperforms ABA-IC. For ID tasks average success rate, ABA-IL achieves 54% while ABA-IC achieves 45%, and for OOD tasks, ABA-IL achieves 43% while ABA-IC achieves 37%, which proves that, compared learning via in-context examples, imitation learning can further improve the performance. Multiround ALFWorld To further test whether the agent is able to remember the previously known information and avoid asking repeatedly, we introduce multiround ALFWorld. In previous experiments, the episode ends as long as the current task is completed. subsequently, in the next episode, the environment will reset to another room with a different layout. In Multiround ALFWorld, after one task is completed, we randomly sample a new task for the agent to undertake within the same room for multiple rounds. This adjustment enables the agent to familiarize itself with the object placement and provides an opportunity to test its capability to remember and refrain from repetitive questioning. For instance, suppose the agent has previously visited the sidetable to complete a previous task and happened to see there is a mug, or the agent has previously ask about the location of the mug, when the agent is tasked to bring a mug, it can directly go to the location without the need for further inquiries. 
In this environment, instead of measuring the success rate as in previous experiments, we assign a reward r = 1 upon the completion of each task and measure the total reward after T steps." }, { "figure_ref": [], "heading": "Pick", "publication_ref": [], "table_ref": [], "text": "For ReAct and ABA-IC, we use K = 1 in-context example due to the longer trajectory length and token limitation. For ABA-IL, we have a dataset of 500 trajectories. For all experiments, we set T = 50. The agent first queries itself whether it has seen a certain object, and asks only when the answer is negative. For more details, please refer to Appendix E. As for the qualitative results, we show the example trajectories in Appendix F, which demonstrates that the agent is capable of recalling previously acquired information and leveraging it in the following tasks. As for the quantitative results, in Figure 4.2, ReAct achieves less than 0.1 rewards in almost all the scenarios. In sharp contrast with that, ABA-IC achieves an average of 0.98 for ID tasks and 0.76 for OOD tasks, while ABA-IL achieves 2.4 and 2.1 respectively. These results indicate that our approach is particularly effective in handling complex tasks.\nMoreover, the deterioration in ReAct's performance compared with previous experiments aligns with our analysis in Section 3.2.2, which suggests that longer in-context examples and smaller K can hinder its effectiveness. While ABA-IC partially overcomes this limitation through a relatively clear policy mapping, we show that ABA-IL can further improve the ABA-IC's performance by around 2X. These findings provide additional evidence for the effectiveness of our proposed methods.\n5 Related Works" }, { "figure_ref": [], "heading": "Language Agent", "publication_ref": [ "b35", "b14", "b3", "b52", "b10", "b37", "b4", "b20", "b38", "b50", "b3", "b49", "b30", "b5", "b17", "b20", "b26", "b6", "b24", "b27", "b48", "b25", "b42", "b6", "b4", "b46", "b47" ], "table_ref": [], "text": "Natural language modeling pre-trained on large-scale unstructured text corpus has seen tremendous success in a variety of applications, including downstream NLP tasks (Radford et al., 2019;Devlin et al., 2018;Brown et al., 2020), logic reasoning (Zhao et al., 2023;Cobbe et al., 2021;Shen et al., 2021), and human-AI coordination (Bubeck et al., 2023;Hu and Sadigh, 2023). The rich information contained in LLMs as an implicit knowledge base also catalyzes the research on in-context learning (Shin et al., 2022;Xie et al., 2021) and prompting (Brown et al., 2020;Wei et al., 2022) that prepend instructions and a few examples to the input of LLMs. However, the time and memory complexity for encoding the prompt is quadratic in the length of the interaction history, such as all the previous trajectories in embodied decision-making, which can increase the burden of the self-attention mechanism and even exceed the token limitations of LLMs. Despite the techniques introduced to address this issue (Mu et al., 2023;Bulatov et al., 2023), the proposed ABA-IL is inspired by the recent studies on fine-tuning LLMs (Houlsby et al., 2019;Hu and Sadigh, 2023;Lialin et al., 2023), especially those that leverage decision-making signals to train language agents that satisfy certain goals (Carta et al., 2023;Snell et al., 2022a,b).\nLLMs have also shown great potential for task planning (Huang et al., 2022b;Lin et al., 2023;Huang et al., 2022a;Wang et al., 2023;Li et al., 2022;Singh et al., 2022a;Carta et al., 2023). 
However, recent criticisms are made on the planning abilities of LLMs (Bubeck et al., 2023;Valmeekam et al., 2022Valmeekam et al., , 2023)). They show that LLMs can get stuck in long-horizon decision-making tasks and the resulting search procedure often degrades to exhaustive search over the large state and action spaces. While pure LLM planning remains a highly challenging open problem, in this work, we investigate the capacity of LLM agents to actively gather information with humans in the loop." }, { "figure_ref": [], "heading": "Embodied Decision Making with Human-in-the-Loop", "publication_ref": [ "b31", "b43", "b12", "b12", "b43", "b33", "b31", "b24", "b28" ], "table_ref": [], "text": "Some existing works have also studied the scenarios with human-in-the-loop. They query humans for extra information to guide decision making. A majority of works (Nguyen and Daumé III, 2019;Nguyen et al., 2019;Singh et al., 2022b;Da Silva et al., 2020) Existing works include human-in-the-loop of decision making, either (1) directly asking for numerical vectors like actions/states (Da Silva et al., 2020;Singh et al., 2022b;Nguyen et al., 2022) or (2) querying humans to give exhaustive instruction and learn to convert them to actions (Nguyen and Daumé III, 2019;Nguyen et al., 2019). However, in our setting, we only put a minimal burden on humans and ask them for natural language information which is more natural and more straightforward than providing detailed action instructions for humans. Instead of considering human feedback as the scene (or task, preference) descriptor in the decision making pipeline (Huang et al., 2022b), we formally formulate the setting as Contextual MDP with Human / External Information Sources in the Loop, which elaborate the effects of asking via context c and allow the agent to query a broader range of information to gather information. Finally, unlike Liu et al. (2022), we focus on zero-shot adaptation setting and propose more natural end-to-end methods to circumvent the needs of template and similarity designing." }, { "figure_ref": [], "heading": "Conclusion and Discussion", "publication_ref": [ "b15" ], "table_ref": [], "text": "In this paper, we focus on the setting where the agent can actively query for additional pertinent information from external sources using natural language while interacting in the environments. To formalize this problem, we propose Contextual MDP with Human / External Information Sources in the Loop. Then, we propose Asking Before Action (ABA), a method that empowers the agent to ask various questions to gather diverse information and filter out irrelevant ones. ABA is also able to remember and reuse the acquired information in subsequent tasks, thus avoiding redundant queries.\nIn a series of experiments on ALFWorld and its variants, we show qualitatively that ABA is able to propose appropriate questions that satisfy our expectations and make informed decisions based on the answers. Furthermore, the quantitative experiments indicate that ABA consistently outperforms the baselines and achieves a remarkable performance on tasks that are challenging for existing methods. Though currently our method is confined to the language environment, it can readily be extended to incorporate image inputs via multimodal language model (Driess et al., 2023) or tackle control tasks via low-level policies. We believe that this exciting and promising direction has the potential to significantly expand the capabilities and performance of embodied agents." 
}, { "figure_ref": [], "heading": "A Design and Details about Human Model", "publication_ref": [ "b7" ], "table_ref": [], "text": "In this section, we describe the design and details about the human model (or other information sources) H. To minimize human involvement, during the evaluation phase, we implement H via another language model which is instructed to respond to questions based on the provided information.\nTo incorporate prior knowledge about the current room, we extract the information about the object placement from the simulator and transform it into a descriptive paragraph. This paragraph is then fed to H. Specifically, we use Vicuna-7B (Chiang et al., 2023) to implement H. Using a pretrained LLM as H allows for answering questions in free-form language based on the information provided, which acts just like humans.\nTo better demonstrate, we provide an example for the ALFWorld experiment in Section 4.1. Other experiments in Section 4.2 are similar. In ALFWorld, the context c mainly refers to the initial mappings of the object placement. For different rooms, the initial mappings are, therefore, different.\nWe slightly abuse the notations about c here since the agent may replace the objects. Under this mapping, we can directly get the ground truth object locations from the simulator, which are unobservable to the agent. Then, we use a rule-based conversion to convert that list to a string of \"A is in B\", where A refers to the object, while B refers to the place containing the object.\nHere is an example. After converting, we derive a descriptive paragraph like:\nbowl 2 is in diningtable 2. saltshaker 2 is in sidetable 1. spatula 1 is in countertop 1. pot 1 is in stoveburner 4. spatula 2 is in drawer 1. dishsponge 3 is in diningtable 2. peppershaker 1 is in cabinet 2. tomato 4 is in sidetable 1. knife 1 is in diningtable 3. cup 1 is in sidetable 1. bread 2 is in diningtable 3. spatula 3 is in diningtable 2. pan 1 is in cabinet 4. tomato 3 is in fridge 1. potato 1 is in sinkbasin 1. peppershaker 3 is in diningtable 3. apple 1 is in fridge 1. saltshaker 1 is in cabinet 4. fork 2 is in drawer 1. spoon 1 is in sidetable 1. egg 1 is in fridge 1. lettuce 1 is in sidetable 1. plate 1 is in diningtable 2.\nWhenever the agent poses a question, H is tasked with providing an answer based on this paragraph. For instance, the agent may learn to ask:\nWhere can I find the dishsponge? Then, in this example, the input to H will be (1) an instruction that tells the model to provide the answers (in gray); (2) a descriptive paragraph (in black); (3) the question proposed by the agent (in blue)." }, { "figure_ref": [], "heading": "Read the following paragraph and answer questions", "publication_ref": [ "b7", "b7" ], "table_ref": [], "text": ": bowl 2 is in diningtable 2. saltshaker 2 is in sidetable 1. spatula 1 is in countertop 1. pot 1 is in stoveburner 4. spatula 2 is in drawer 1. dishsponge 3 is in diningtable 2. peppershaker 1 is in cabinet 2. tomato 4 is in sidetable 1. knife 1 is in diningtable 3. cup 1 is in sidetable 1. bread 2 is in diningtable 3. spatula 3 is in diningtable 2. pan 1 is in cabinet 4. tomato 3 is in fridge 1. potato 1 is in sinkbasin 1. peppershaker 3 is in diningtable 3. apple 1 is in fridge 1. saltshaker 1 is in cabinet 4. fork 2 is in drawer 1. spoon 1 is in sidetable 1. egg 1 is in fridge 1. lettuce 1 is in sidetable 1. plate 1 is in diningtable 2.\nThe questions is: Where can I find the dishsponge?\nThen, the pretrained LLM (e.g., Vicuna-7B Chiang et al. 
(2023) in our case), H will provide the answers since it can follow the instructions. In our case, the answer is: dishsponge 1 is in garbagecan 1, dishsponge 2 is in drawer 3, dishsponge 3 is in drawer 4.\nTo enhance the performance and improve the accuracy, we provide a few in-context examples and use the prompting format as in Vicuna Chiang et al. (2023). Please refer to Appendix G for details.\nIt's worth noting that, though we use a language model to simulate humans in our evaluation phase, this design allows for the straightforward replacement of the current language model with human or alternative information sources for more appropriate answers. The performance of our method will be further improved if H can provide more accurate and meaningful answers." }, { "figure_ref": [], "heading": "B Examples and Qualitative Analysis for ALFWorld Experiments", "publication_ref": [ "b51" ], "table_ref": [], "text": "This section provides some examples and qualitative analysis for ALFWorld experiments in Section 4.1.\nFirst, we provide an episode trajectory of our method (ABA-IC), which is randomly sampled from the evaluation set:\nObs 1:\nYou are in the middle of a room. Looking quickly around you, you see a bed 1, a diningtable 1, a drawer 4, a drawer 3, a drawer 2, a drawer 1, a garbagecan 1, a sidetable 2, and a sidetable 1. Your task is to: put a mug in sidetable.\nAct 1: think: To solve the task, I need to find and take a mug, then put it in sidetable. But where is the mug? Let me ask that person.\nObs 2: OK. Act 2: ask: Where is the mug?\nObs 3: mug 1 is in diningtable 1, mug 3 is in diningtable 1, mug 2 is in diningtable 1.\nAct 3: think: We can go to diningtable 1 and take the mug 1, then put it in sidetable.\nObs 4: OK. Act 4: go to diningtable 1 Obs 5: On the diningtable 1, you see a creditcard 3, a creditcard 2, a keychain 3, a keychain 2, a mug 3, a mug 2, a mug 1, a pen 2, a pen 1, a pencil 3, and a pencil 1.\nAct 5: think: Now I find the mug 1. Next, I need to take it, then put it in sidetable.\nObs 6: Ok. Act 6: take mug 1 from diningtable 1 Obs 7: You pick up the mug 1 from the diningtable 1. Act 7: think: Now I take a mug 1. Next, I need to put it in sidetable.\nObs 8: Ok. Act 8: go to sidetable 1 Obs 9: On the sidetable 1, you see a keychain 1. Act 9: put mug 1 in/on sidetable 1 Obs 10: You put the mug 1 in/on the sidetable 1.\nIt can be concluded that, for our method, the agent can identify the correct context that is related to the task (i.e., the location of the mug) and then summarize the question and ask a pertinent question (i.e., \"Where is the mug?\"). After that, the agent can integrate the information in the answer (i.e., notice that mug 1 is in diningtable 1) and act according to that information (i.e., go to diningtable 1, take the mug, and place it in the specific location).\nFor comparison, we provide the trajectory of the same episode of ReAct (Yao et al., 2022):\nObs 1: You are in the middle of a room. Looking quickly around you, you see a bed 1, a diningtable 1, a drawer 4, a drawer 3, a drawer 2, a drawer 1, a garbagecan 1, a sidetable 2, and a sidetable 1. Your task is to: put a mug in sidetable.\nAct 1: think: To solve the task, I need to find and take a mug, then put it in sidetable.\nObs 2: OK. Act 2: think: First I need to find a mug. A mug is more likely to appear in drawer (1-4), sidetable (1-2). 
I can check one by one, starting with drawer 1.\nObs Obs 51: OK.\nIt can be concluded that the agent failed to provide the right plan at first (i.e., Action 1,2), since none of these locations has a mug. Moreover, even with the plans, it failed to yield coherent and correct actions in later steps. For instance, in Action 5, 8, 10, 12, etc., the agent is constantly confused by hallucination (it believes that \"Now I find a mug\" while there wasn't a mug actually). It also got stuck in loops (it repeatedly visited drawer 1,2,3,4, sidetable 1,2, and bed 1). These together prevent the agent from finishing the task.\nTo further understand the experiment results, we provide more examples for our method and ReAct.\nFor our method:\nObs 1: You are in the middle of a room. Looking quickly around you, you see a bathtubbasin 1, a countertop 1, a drawer 4, a drawer 3, a drawer 2, a drawer 1, a dresser 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a shelf 2, a shelf 1, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1. Your task is to: put some spraybottle on toilet.\nAct 1: think: To solve the task, I need to find and take a spraybottle, then put it on the toilet. But where is the spraybottle? Let me ask that person.\nObs 2: OK. Act 2: ask: Where is the spraybottle? Obs 3: spraybottle 3 is in countertop 1, spraybottle 4 is in dresser 1, spraybottle 2 is in shelf 1.\nAct 3: think: We can go to countertop 1 and take the spraybottle 3, then put it on the toilet.\nObs 4: OK. Act 4: go to countertop 1 Obs 5: On the countertop 1, you see a mirror 1, a soapbar 1, and a spraybottle 1.\nAct 5: think: Now I find the spraybottle 1. Next, I need to take it, then put it on the toilet.\nObs 6: OK. Act 6: take spraybottle 1 from countertop 1 Obs 7: You pick up the spraybottle 1 from the countertop 1.\nAct 7: think: Now I take a spraybottle 1. Next, I need to put it on the toilet.\nObs 8: OK. Act 8: go to toilet 1 Obs 9: On the toilet 1, you see a candle 1, a toiletpaper 2, and a toiletpaper 1.\nAct 9: think: Now I put the spraybottle 1 on the toilet.\nObs 10: OK. Act 10: put spraybottle 1 in/on toilet 1 Obs 11: You put the spraybottle 1 in/on the toilet 1.\nFor ReAct:\nObs 1: You are in the middle of a room. Looking quickly around you, you see a bathtubbasin 1, a countertop 1, a drawer 4, a drawer 3, a drawer 2, a drawer 1, a dresser 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a shelf 2, a shelf 1, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1. Your task is to: put some spraybottle on toilet.\nAct 1: think: To solve the task, I need to find and take a spraybottle, then put it on toilet.\nObs 2: OK. Act 2: think: First I need to find a spraybottle. A spraybottle is more likely to appear in drawer (1-4), dresser (1), sinkbasin (1), toilet (1), and towelholder (1). I can check one by one, starting with drawer 1.\nObs 3: OK. From the above examples, our method can ask proper questions and act accordingly. Though H provides information with slight error (e.g., it mentions that spraybottle 3 is in countertop 1, but only spraybottle 1 is in countertop 1), the agent is robust to such error and successfully adjust its behavior after observing the objects on countertop 1 (i.e., action 5, 6, 7).\nAs for ReAct, it successfully visited four drawers and finally found the spraybottle at dresser 1. However, first, it failed to list every possible container for the spraybottle (i.e., action 2, it omits countertop, shelf, etc.). 
In the reasoning step, we observe an interesting pattern (i.e., in Action 5, 8, 11, 14): \"Now I find a spraybottle (not). Next, I need to take it\", which seems inconsistent (though it does not affect the next step). Moreover, though the agent finally finds the spraybottle and completes the task successfully, it is inefficient and slow to search every possible location: ReAct takes 20 steps.\nIn comparison, our method only takes 10 steps to finish the task.\nFour above examples demonstrate that, first, it is challenging to learn a information-gathering policy especially in unfamiliar environments, due to the complexity of the environment. Moreover, even if the agent manage to follow this policy, the information-gathering phase can be inefficient, which needs to exhaustively search every possible position. In contrast, our method succeeds in proposing proper questions and then acting accordingly, which improve the success rate as well as the efficiency. This proves our method's efficacy." }, { "figure_ref": [], "heading": "C Details about the Data Collection and Environment Variants", "publication_ref": [ "b51", "b19" ], "table_ref": [], "text": "In this section, we provide details about how the data is collected and training as mentioned in Section 4.2.\nAs for in-context examples used in ABA-IC, we manually interact with the environment and try to finish the tasks. We ask questions related to the tasks, and answer the questions ourselves by checking the ground truth states in the simulator. Beside the questions, we also add reasoning steps as in Yao et al. (2022) and select actions according to the information we have. Once completing the task, we take down all the actions and observations and use them as in-context examples.\nAs for ABA-IL, we design a rule-based policy according to the PDDL planning trajectories provided along with the environment. Specifically, we integrate the PDDL trajectories and the ground truth states within the simulator to find out what we should do to finish the tasks. Then, we extract the ground truth placements of the necessary objects from the simulator, and we write template-based questions to query this information and provide corresponding answers as observations. We also write chain-of-thought reasoning steps. As mentioned in Section 3.2.2, we manually inject noises by randomly inserting noisy actions at probability p = 0.2. These noisy actions are randomly sampled from the action space. The planning trajectories are also modified accordingly to correctly finish the task. Finally, we organize these questions, reasoning, and interactive actions to get a list of actions.\nWhen the actions belong to asking for reasoning, we use provided answers or simply \"OK.\" as observations. When the actions aim to interact with the environment, we use environmental feedback as the observations. As for Ambiguous ALFWorld, we use K = 2 for ABA-IC and collect 1500 trajectories for ABA-IL. As for Multiround ALFWorld, we use K = 1 for ABA-IC and collect 500 trajectories for ABA-IL. As for training, to ease the computational burden, we use LoRA (Hu et al., 2021) with r = 16 and a learning rate of 1e -5. We train the model with 3000 steps for Ambiguous ALFWorld and 6000 steps for Multiround ALFWorld." 
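The data collection and finetuning recipe in this appendix can be summarized in a short sketch: expert actions are corrupted with probability p = 0.2 and flagged as noise so that the loss in Eq. (5) ignores them, and the language model is finetuned with LoRA (rank 16, learning rate 1e-5) via the HuggingFace peft library. The target modules, LoRA alpha/dropout, and the per-sample loss loop are illustrative assumptions, and the context/action token split shown here is approximate.

```python
import random
import torch
from peft import LoraConfig, get_peft_model


def corrupt_trajectory(expert_steps, action_space, p=0.2):
    """Replace each expert action with a random action with probability p,
    setting the noise mask n_t = 1 so Eq. (5) skips it during training."""
    out = []
    for context, action in expert_steps:                 # (tau_t, a_t) pairs
        if random.random() < p:
            out.append((context, random.choice(action_space), 1))
        else:
            out.append((context, action, 0))
    return out


def masked_il_loss(model, tokenizer, batch):
    """Cross-entropy over non-noisy action tokens only, as in Eq. (5)."""
    losses = []
    for context, action, noise in batch:
        if noise:                                        # ignore injected noisy actions
            continue
        ids = tokenizer(context + " " + action, return_tensors="pt").input_ids
        ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
        labels = ids.clone()
        labels[:, :ctx_len] = -100                       # score only the action tokens
        losses.append(model(input_ids=ids, labels=labels).loss)
    return torch.stack(losses).mean()


def build_lora_agent(base_model):
    """Wrap a loaded causal LM (e.g. Vicuna-7B) with LoRA adapters (r = 16)."""
    config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                        target_modules=["q_proj", "v_proj"],
                        task_type="CAUSAL_LM")
    model = get_peft_model(base_model, config)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    return model, optimizer
```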
}, { "figure_ref": [], "heading": "D Examples and Qualitative Analysis for ALFWorld with Ambiguous Tasks", "publication_ref": [], "table_ref": [], "text": "This section provides some examples and qualitative analysis for ALFWorld with Ambiguous Tasks experiments in Section 4.2.\nFirst, we provide an episode trajectory of ReAct, which is randomly sampled from the evaluation set:\nObs 1: You are in the middle of a room. Looking quickly around you, you see a cabinet 10, a cabinet 9, a cabinet 8, a cabinet 7, a cabinet 6, a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a coffeemachine 1, a countertop 2, a countertop 1, a diningtable 2, a diningtable 1, a drawer 3, a drawer 2, a drawer 1, a fridge 1, a garbagecan 1, a microwave 1, a sinkbasin 1, a stoveburner 4, a stoveburner 3, a stoveburner 2, a stoveburner 1, and a toaster 1. Your task is to: put a hot egg in diningtable.\nAct 1: think: To finish the task, I need to find an egg, then heat it with microwave, then put it in/on diningtable.\nObs 2: OK. Act 2: think: First I need to find an egg. An egg is more likely to appear in stoveburner 2, countertop 1-3, diningtable 1, and cabinet 9-10. But I have checked stoveburner 2 and countertop 1-3. I can check the remaining places one by one, starting with cabinet 9.\nObs 3: OK. Act 3: go to cabinet 9 Obs 4: The cabinet 9 is closed. Act 4: think: An egg does not appear in cabinet 9.\nObs 5: OK. Act 5: go to cabinet 10 Obs 6: The cabinet 10 is closed. Act 6: think: An egg does not appear in cabinet 10. Obs 7: OK. Act 7: go to diningtable 1 Obs 8: On the diningtable 1, you see a bread 2, a bread 1, a butterknife 2, a dishsponge 1, a fork 1, a potato 1, and a spatula 1.\nAct 8: think: I have checked all the remaining places, and I have not found an egg. The task is not finished.\nObs 9: OK. Act 9: think: I have not found an egg. This is not a failure, just an unexpected result.\nObs 10: OK. Act 10: think: What should I do now? The task is not finished.\nObs 11: OK. Act 11: think: I could try to find an egg again.\nObs 12: OK. Act 12: go to stoveburner 1 Obs 13: On the stoveburner 1, you see nothing.\nAct 13: go to countertop 1 Obs 14: On the countertop 1, you see a butterknife 1, and a mug 1.\nAct 14: think: It is possible that an egg is hidden under one of these objects.\nObs 15: OK." }, { "figure_ref": [], "heading": "Obs 1:", "publication_ref": [], "table_ref": [], "text": "You are in the middle of a room. Looking quickly around you, you see a cabinet 10, a cabinet 9, a cabinet 8, a cabinet 7, a cabinet 6, a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a coffeemachine 1, a countertop 2, a countertop 1, a diningtable 2, a diningtable 1, a drawer 3, a drawer 2, a drawer 1, a fridge 1, a garbagecan 1, a microwave 1, a sinkbasin 1, a stoveburner 4, a stoveburner 3, a stoveburner 2, a stoveburner 1, and a toaster 1. Your task is to: put a hot mug in cabinet.\nAct 1: think: To solve the task, I need to find and take a mug, then heat it with microwave, then put it in cabinet. But where is the mug? Let me ask that person.\nObs 2: OK. Act 2: ask: Where is the mug? Obs 3: mug 1 is in countertop 1, mug 2 is in microwave 1. Act 3: think: There are multiple mug. I need to ask which mug should be taken.\nObs 4: OK. Act 4: ask: Which mug do you prefer? Obs 5: I mean mug 1. Act 5: think: Now I understand the task. I can go to countertop 1 and take the mug 1, then heat it with microwave, then put it in cabinet.\nObs 6: OK. 
Act 6: go to countertop 1 Obs 7: On the countertop 1, you see a butterknife 1, and a mug 1.
Act 7: take mug 1 from countertop 1 Obs 8: You pick up the mug 1 from the countertop 1. Act 8: think: Now I take a mug 1. Next, I need to go to microwave 1 and heat mug 1, then put it in cabinet.
Obs 9: OK. Act 9: go to microwave 1 Obs 10: The microwave 1 is closed.
Act 10: heat mug 1 with microwave 1 Obs 11: You heat the mug 1 using the microwave 1.
Act 11: think: Now I heat mug 1. Next, I need to put it in cabinet 1.
Obs 12: OK. Act 12: go to cabinet 1 Obs 13: On the cabinet 1, you see nothing.
Act 13: put mug 1 in/on cabinet 1
The above trajectory shows that, for our method (ABA-IL), the agent is able to identify the correct object related to the task (i.e., the mug) and propose a proper question (i.e., Action 2) to ask for the position of the mug, which is also shown in Appendix B. Moreover, when it realized that there were multiple mugs in the room, it further asked for clarification about the task (i.e., Action 4), since in Ambiguous ALFWorld only the specified object is desirable. This demonstrates that it can ask diverse questions to query various kinds of information. Finally, the agent is able to integrate the acquired information into its subsequent decision making: it knows which object is needed, heads for the correct mug, heats it, and places it on the target location. In contrast to the ReAct trajectory, where the agent got stuck before finding the mug, the efficiency shown in this qualitative example further demonstrates our method's efficacy.
As a supplement, we show a typical failure case of our method:
Obs 1: You are in the middle of a room. Looking quickly around you, you see a cabinet 27, a cabinet 26, a cabinet 25, a cabinet 24, a cabinet 23, a cabinet 22, a cabinet 21, a cabinet 20, a cabinet 19, a cabinet 18, a cabinet 17, a cabinet 16, a cabinet 15, a cabinet 14, a cabinet 13, a cabinet 12, a cabinet 11, a cabinet 10, a cabinet 9, a cabinet 8, a cabinet 7, a cabinet 6, a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a coffeemachine 1, a countertop 2, a countertop 1, a diningtable 1, a drawer 12, a drawer 11, a drawer 10, a drawer 9, a drawer 8, a drawer 7, a drawer 6, a drawer 5, a drawer 4, a drawer 3, a drawer 2, a drawer 1, a fridge 1, a garbagecan 1, a microwave 1, a sinkbasin 1, a stoveburner 4, a stoveburner 3, a stoveburner 2, a stoveburner 1, and a toaster 1. Your task is to: clean some spatula and put it in drawer.
Act 1: think: To solve the task, I need to find and take a spatula, then clean it with sinkbasin, then put it in drawer. But where is the spatula? Let me ask that person.
Obs 2: OK. Act 2: ask: Where is the spatula? Obs 3: spatula 1 is in countertop 2, spatula 2 is in sinkbasin 1.
Act 3: think: There are multiple spatula. I need to ask which spatula should be taken.
Obs 4: OK. Act 4: ask: Which spatula do you prefer? Obs 5: I mean, it depends on personal preference. Act 5: think: Now I understand the task better. I need to ask which spatula should be taken, then find and take it, then clean it with sinkbasin, then put it in drawer.
Obs 6: OK. Act 6: ask: Which spatula do you prefer? Obs 7: I am not sure. Could you remind me the information about each spatula?
Act 7: think: There are multiple spatula. Let me ask which spatula should be taken first.
Obs 8: OK. Act 8: ask: Which spatula do you prefer? Obs 9: I am not sure. Could you remind me the information about each spatula?
Act 9: think: Now I understand the task better.
I need to ask which spatula should be taken first, then find and take it, then clean it with sinkbasin, then put it in drawer.
Obs 10: OK. Act 10: ask: Which spatula do you prefer? Obs 11: I am not sure. Could you remind me the information about each spatula?
Act 11: think: Now I understand the task better. I need to ask which spatula should be taken first, then find and take it, then clean it with sinkbasin, then put it in drawer.
Obs 12: OK. Act 12: ask: Which spatula do you prefer? Obs 13: ...
Though H is instructed with the goal, namely which spatula it is looking for (see Appendix G for more details), it fails to provide the correct answer (e.g., Obs 5, 7, 9, and 11). Therefore, we expect that providing a more accurate H would further improve the performance of our method." }, { "figure_ref": [], "heading": "E Details about Multiround ALFWorld", "publication_ref": [], "table_ref": [], "text": "In this section, we provide more details about data collection in Multiround ALFWorld. In Multiround ALFWorld, the main challenge is proposing the right questions. Specifically, the agent needs to avoid repeatedly asking by identifying whether the information to query has already been collected. This requires special treatment of the data, and we implement it explicitly for clarity. In our case, the agent first asks itself whether it has seen a specific object before asking a question. Only when the answer is negative will it continue to ask. Otherwise, it may directly act based on its memory.
For ABA-IC, we provide the agent with manually labeled trajectories, in which we manually identify whether the agent needs to ask according to previous interactions, and only ask for more information if needed. As for ABA-IL, we integrate this part into the reasoning step. To be specific, the reasoning includes an explicit query about the target object. When the agent has never seen a particular object, the reasoning step looks like:
think: To solve the task, I need to find and take a mug, then put it in sidetable. First I need to find the locations of mug. ### query: mug > I have never seen mug before.
In the above example, the target object is the mug (i.e., \"query: mug\"), and the agent believes it has never seen the mug before (i.e., \"I have never seen mug before.\").
On the other hand, if the agent has already seen the object (e.g., it has visited diningtable 1 and seen pencil 1 and pencil 3 there), the query and the answer look like: think: To solve the task, I need to find and take a pencil, then put it in garbagecan. First I need to find the locations of pencil. ### query: pencil > pencil 3 is in diningtable 1, pencil 1 is in diningtable 1.
After querying itself about the target object, the agent acts according to the answer. If the agent believes it has never seen the object, it will probably ask for more information. Otherwise, it will directly make decisions based on the information.
For ABA-IL, we split the reasoning into two steps to make this clearer. In the first step, we identify the target object. In the second step, we identify whether and where we have seen this object before. These two steps form two pairs of input/output training samples in the dataset. During training, we use teacher forcing. We further augment the dataset by inserting several new queries asking for another object randomly sampled from the object list of the environment."
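As a rough illustration of this self-query step, the sketch below composes the "### query: <object> > ..." line from a simple memory of previously observed objects. The dictionary-based memory layout and the function name are assumptions for illustration rather than the exact implementation.

```python
def build_query_step(target_object, memory):
    """Compose the self-query line used in the Multiround ALFWorld reasoning.

    `memory` is assumed to map seen object instances (e.g. "pencil 1") to
    the receptacle where they were last observed (e.g. "diningtable 1").
    """
    seen = {
        name: place
        for name, place in memory.items()
        if name.rsplit(" ", 1)[0] == target_object
    }
    if not seen:
        answer = f"I have never seen {target_object} before."
    else:
        answer = ", ".join(f"{n} is in {p}" for n, p in seen.items()) + "."
    return f"### query: {target_object} > {answer}"


# Example usage with a hypothetical memory of previously visited receptacles.
memory = {"pencil 1": "diningtable 1", "pencil 3": "diningtable 1"}
print(build_query_step("pencil", memory))
# -> ### query: pencil > pencil 1 is in diningtable 1, pencil 3 is in diningtable 1.
print(build_query_step("mug", memory))
# -> ### query: mug > I have never seen mug before.
```

If the answer indicates the object has never been seen, the agent typically follows up with an "ask" action; otherwise it acts directly on the recalled locations, which is what allows it to avoid repeated questions.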
}, { "figure_ref": [], "heading": "F Examples and Qualitative Analysis for Multiround ALFWorld", "publication_ref": [], "table_ref": [], "text": "In this section, we provide some examples and qualitative analysis for the Multiround ALFWorld experiments in Section 4.2.
We first provide a trajectory for ReAct, which is randomly sampled from the evaluation set:
Obs 1:
You are in the middle of a room. Looking quickly around you, you see a bed 1, a diningtable 1, a drawer 4, a drawer 3, a drawer 2, a drawer 1, a garbagecan 1, a sidetable 2, and a sidetable 1. Your task is to: put a cellphone in diningtable.
Act 1: think: To solve the task, I need to find and take a cellphone, then put it in diningtable.
Obs 2: OK. Act 2: think: First I need to find a cellphone. A cellphone is more likely to appear in drawer 1-4. But there is no cellphone on drawer 1-2. I can check one by one, starting with drawer 3.
Obs 3: OK. Act 3: go to drawer 3 Obs 4: On the drawer 3, you see a cellphone 2. Act 4: take cellphone 2 from drawer 3 Obs 5: You pick up the cellphone 2 from the drawer 3. Act 5: think: Now I take a cellphone 2. Next, I need to put it in diningtable 1.
Obs 49: OK. Act 49: go to garbagecan 1 Obs 50: On the garbagecan 1, you see nothing. Act 50:
think: I have finished to put all the objects in their places.
We show that, in the above example, although the agent successfully located the cellphone after opening drawer 3, the initial planning (i.e., Action 2) seemed to contain a hallucination (\"There is no cellphone on drawer 1-2\"), which makes the result a bit of a coincidence. After finishing the first task, the agent again makes some erroneous claims that seem like hallucinations (i.e., Action 9, \"there is no pen on drawer 2\"). Then, the agent searched the remaining locations, mistook the \"pencil\" for the \"pen\" (i.e., Action 11), and got stuck after wrongly placing the pencil on the target location.
In comparison, we provide an episode trajectory of our method:
Obs 1:
You are in the middle of a room. Looking quickly around you, you see a bathtubbasin 1, a countertop 1, a drawer 4, a drawer 3, a drawer 2, a drawer 1, a dresser 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a shelf 2, a shelf 1, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1. Your task is to: put a toiletpaper in drawer.
Act 1: think: To solve the task, I need to find and take a toiletpaper, then put it in drawer. First I need to find the locations of toiletpaper. ### query: toiletpaper > I have never seen toiletpaper before.
Obs 2: OK. Act 2: think: I cannot locate toiletpaper, I need to ask the owner of this room.
Obs 3: OK. Act 3: ask: Where is the toiletpaper? Obs 4: toiletpaper 1 is in toilet 1, toiletpaper 2 is in drawer 2.
Act 4: think: I can go to toilet 1 and take the toiletpaper, then put it in drawer.
Obs 5: OK. Act 5: go to toilet 1 Obs 6: On the toilet 1, you see a candle 1, a toiletpaper 2, and a toiletpaper 1.
Act 6: take toiletpaper 1 from toilet 1 Obs 7: You pick up the toiletpaper 1 from the toilet 1. Act 7: think: Now I take a toiletpaper 1. Next, I need to put it in drawer.
Obs 8: OK. Act 10: put toiletpaper 1 in/on drawer 1 Obs 11: You put the toiletpaper 1 in/on the drawer 1. Your next task is to: put some candle on shelf.
Act 11: think: To solve the task, I need to find and take a candle, then put it in shelf. First I need to find the locations of candle. ### query: candle > candle 1 is in toilet 1.
Obs 12: OK.
Act 12: think: I can go to toilet 1 and take the candle, then put it in shelf.\nObs 13: OK.\nObs 34: You put the soapbottle 3 in/on the toilet 1. Your next task is to: put some cloth on countertop.\nAct 34: think: To solve the task, I need to find and take a cloth, then put it in countertop. First I need to find the locations of cloth. ### query: cloth > I have never seen cloth before.\nObs Act 49: put toiletpaper 1 in/on shelf 1\nIn the above example, the agent can correctly reason whether the information has been collected before. For instance, at the beginning (i.e., Action 1), the agent knows that it has never seen the toiletpaper before, so it decides to ask for more information (i.e., Action 2,3). Moreover, when the information has been collected, it can omit the asking. Since it has seen the candle before (i.e., in Obs 6), when it is tasked to replace the candle, it recalls this information (i.e., Action 11) and directly acts upon it. Action 18, 25, 34, and 43 also successfully identify whether the required information has been collected. The above examples demonstrate that our method can ask proper questions and can avoid repeatedly asking for acquired information. With this ability, it is able to solve more challenging tasks and can achieve better performance." }, { "figure_ref": [], "heading": "G Examples provided for Human Model", "publication_ref": [ "b7" ], "table_ref": [], "text": "In this section, we provide more details about the human model examples. As mentioned in Appendix A, we use Vicuna prompts (Chiang et al., 2023) to help organize these examples (i.e., \"A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\"). For ALFWorld experiments in Section 4.1, and the multiround ALFWorld experiments in Section 4.2, the in-context examples are:\nA chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions. ### Human: Read the following paragraph and answer questions: dishsponge 2 is in drawer 3. spatula 1 is in diningtable 1. spoon 1 is in diningtable 1. cup 1 is in fridge 1. dishsponge 1 is in garbagecan 1. butterknife 2 is in diningtable 1. fork 3 is in diningtable 1. saltshaker 1 is in diningtable 1. pot 2 is in stoveburner 3. lettuce 2 is in diningtable 1. tomato 2 is in countertop 2. spatula 2 is in diningtable 1. bowl 3 is in cabinet 16. egg 2 is in countertop 1. bowl 2 is in cabinet 6. fork 1 is in countertop 2. pan 1 is in fridge 1. cup 2 is in cabinet 16. papertowelroll 1 is in diningtable 1. butterknife 3 is in drawer 5. soapbottle 1 is in cabinet 9. apple 1 is in diningtable 1. kettle 2 is in cabinet 12. knife 1 is in countertop 2. cup 3 is in microwave 1. butterknife 1 is in drawer 3. tomato 1 is in sinkbasin 1. peppershaker 1 is in countertop 2. potato 1 is in fridge 1. bread 2 is in diningtable 1. pot 1 is in cabinet 10. dishsponge 3 is in drawer 4. soapbottle 2 is in countertop 1. kettle 1 is in countertop 2. houseplant 1 is in diningtable 1. pot 3 is in stoveburner 4. fork 2 is in drawer 2. mug 1 is in sinkbasin 1. lettuce 1 is in countertop 2. bread 1 is in countertop 2. peppershaker 2 is in countertop 2. plate 1 is in countertop 2. potato 2 is in sinkbasin 1. egg 1 is in countertop 2. bowl 1 is in cabinet 1. peppershaker 3 is in countertop 2. The questions is: Where can I find the dishsponge? 
### Assistant: dishsponge 1 is in garbagecan 1, dishsponge 2 is in drawer 3, dishsponge 3 is in drawer 4. ### Human: Read the following paragraph and answer questions:\nplate 1 is in cabinet 4. soapbottle 1 is in shelf 2. spoon 2 is in diningtable 1. egg 1 is in sinkbasin 1. knife 3 is in diningtable 1. bowl 1 is in diningtable 1. butterknife 2 is in countertop 1. spatula 3 is in diningtable 1. apple 2 is in countertop 1. tomato 1 is in garbagecan 1. pan 1 is in stoveburner 2. mug 2 is in cabinet 7. lettuce 1 is in fridge 1. tomato 2 is in sinkbasin 1. bread 1 is in diningtable 1. ladle 2 is in countertop 1. fork 1 is in countertop 1. saltshaker 1 is in countertop 1. soapbottle 3 is in diningtable 1. spoon 1 is in diningtable 1. butterknife 1 is in countertop 1. spatula 2 is in diningtable 1. ladle 1 is in countertop 1. spatula 1 is in diningtable 1. plate 3 is in diningtable 1. peppershaker 3 is in cabinet 13. cup 1 is in diningtable 1. dishsponge 1 is in cabinet 7. winebottle 1 is in countertop 1. butterknife 3 is in countertop 1. peppershaker 1 is in countertop 1. potato 2\nis in fridge 1. peppershaker 2 is in diningtable 1. mug 3 is in diningtable 1. apple 1 is in microwave 1. potato 1 is in garbagecan 1. mug 1 is in cabinet 5. pot 1 is in diningtable 1. saltshaker 3 is in diningtable 1. knife 1 is in countertop 1. pan 1 is in stoveburner 1. knife 2 is in diningtable 1. saltshaker 2 is in countertop 1. soapbottle 2 is in diningtable 1. The questions is: Where is the soapbottle? ### Assistant: soapbottle 1 is in shelf 2, soapbottle 2 is in diningtable 1, soapbottle 3 is in diningtable 1. ### Human: Read the following paragraph and answer questions: remotecontrol 1 is in coffeetable 1. pillow 1 is in sofa 1. book 1 is in dresser 1. creditcard 2 is in coffeetable 1. pen 1 is in coffeetable 1. pen 2 is in sidetable 2. statue 2 is in sidetable 2. creditcard 1 is in dresser 1. cellphone 1 is in dresser 1. creditcard 3 is in armchair 1. pencil 1 is in coffeetable 1. tissuebox 2 is in coffeetable 1. tissuebox 3 is in sidetable 2. houseplant 1 is in sidetable 1. remotecontrol 2 is in armchair 1. book 2 is in coffeetable 1. tissuebox 1 is in coffeetable 1. laptop 1 is in sofa 1. television 1 is in dresser 1. statue 1 is in coffeetable 1. box 1 is in coffeetable 1. The questions is: Where can I find the pen? ### Assistant: pen 1 is in coffeetable 1, pen 2 is in sidetable 2. ### Human: Read the following paragraph and answer questions: bowl 2 is in diningtable 2. potato 2 is in sidetable 1. tomato 1 is in sidetable 1. bread 1 is in diningtable 3. fork 1 is in sidetable 1. dishsponge 1 is in countertop 1. peppershaker 2 is in cabinet 2. lettuce 2 is in garbagecan 1. spoon 2 is in drawer 1. bowl 1 is in cabinet 1. cup 2 is in diningtable 2. soapbottle 2 is in diningtable 2. egg 2 is in sidetable 1. mug 2 is in sinkbasin 1. mug 1 is in coffeemachine 1. plate 2 is in diningtable 3. tomato 2 is in sinkbasin 1. spoon 3 is in drawer 1. saltshaker 2 is in sidetable 1. spatula 1 is in countertop 1. pot 1 is in stoveburner 4. spatula 2 is in drawer 1. dishsponge 3 is in diningtable 2. peppershaker 1 is in cabinet 2. tomato 4 is in sidetable 1. knife 1 is in diningtable 3. cup 1 is in sidetable 1. bread 2 is in diningtable 3. spatula 3 is in diningtable 2. pan 1 is in cabinet 4. tomato 3 is in fridge 1. potato 1 is in sinkbasin 1. peppershaker 3 is in diningtable 3. apple 1 is in fridge 1. saltshaker 1 is in cabinet 4. butterknife 2 is in countertop 1. soapbottle 1 is in cabinet 4. fork 2 is in drawer 1. 
spoon 1 is in sidetable 1. egg 1 is in fridge 1. lettuce 1 is in sidetable 1. plate 1 is in diningtable 2. egg 3 is in diningtable 2. dishsponge 2 is in diningtable 3. The questions is: Where is the apple? ### Assistant: apple 1 is in fridge 1.\nFor ambiguous AFLWorld experiments in Section 4.2, besides the basic information about the object placements, we further assign the specific object the task is looking for. The in-context examples are:\nA chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions. ### Human: Read the following paragraph and answer questions: dishsponge 2 is in drawer 3. spatula 1 is in diningtable 1. spoon 1 is in diningtable 1. cup 1 is in fridge 1. dishsponge 1 is in garbagecan 1. saltshaker 1 is in diningtable 1. fork 1 is in countertop 2. pan 1 is in fridge 1. soapbottle 1 is in cabinet 9. apple 1 is in diningtable 1. knife 1 is in countertop 2. butterknife 1 is in drawer 3. tomato 1 is in sinkbasin 1. peppershaker 1 is in countertop 2. potato 1 is in fridge 1. dishsponge 3 is in drawer 4. kettle 1 is in countertop 2. houseplant 1 is in diningtable 1. mug 1 is in sinkbasin 1. You are looking for dishsponge 1. The questions is: Where can I find the dishsponge? ### Assistant: dishsponge 1 is in garbagecan 1, dishsponge 2 is in drawer 3, dishsponge 3 is in drawer 4. ### Human: Another question is: Which dishsponge do you prefer? ### Assistant: I mean dishsponge 1. ### Human: Read the following paragraph and answer questions: plate 1 is in cabinet 4. soapbottle 1 is in shelf 2. tomato 1 is in garbagecan 1. pan 1 is in stoveburner 2. mug 2 is in cabinet 7. lettuce 1 is in fridge 1. ladle 2 is in countertop 1. soapbottle 3 is in diningtable 1. butterknife 1 is in countertop 1. spatula 2 is in diningtable 1. ladle 1 is in countertop 1. cup 1 is in diningtable 1. winebottle 1 is in countertop 1. peppershaker 1 is in countertop 1. potato 2 is in fridge 1. peppershaker 2 is in diningtable 1. mug 3 is in diningtable 1. apple 1 is in microwave 1. mug 1 is in cabinet 5. saltshaker 2 is in countertop 1. soapbottle 2 is in diningtable 1. You are looking for soapbottle 1, soapbottle 2. The questions is: Where can I find the soapbottle? ### Assistant: soapbottle 1 is in shelf 2, soapbottle 2 is in diningtable 1, soapbottle 3 is in diningtable 1. ### Human: Another question is: Which soapbottle do you prefer? ### Assistant: I mean soapbottle 1 and soapbottle 2. ### Human: Read the following paragraph and answer questions: remotecontrol 1 is in coffeetable 1. pillow 1 is in sofa 1. book 1 is in dresser 1. creditcard 2 is in coffeetable 1. pen 1 is in coffeetable 1. pen 2 is in sidetable 2. statue 2 is in sidetable 2. creditcard 1 is in dresser 1. cellphone 1 is in dresser 1. creditcard 3 is in armchair 1. pencil 1 is in coffeetable 1. tissuebox 2 is in coffeetable 1. tissuebox 3 is in sidetable 2. houseplant 1 is in sidetable 1. remotecontrol 2 is in armchair 1. book 2 is in coffeetable 1. tissuebox 1 is in coffeetable 1. laptop 1 is in sofa 1. television 1 is in dresser 1. statue 1 is in coffeetable 1. box 1 is in coffeetable 1. You are looking for pen 2. The questions is: Where can I find the pen? ### Assistant: pen 1 is in coffeetable 1, pen 2 is in sidetable 2. ### Human: Another question is: Which pen do you prefer? ### Assistant: I mean pen 2. ### Human: Read the following paragraph and answer questions: bowl 2 is in diningtable 2. saltshaker 2 is in sidetable 1. 
spatula 1 is in countertop 1. pot 1 is in stoveburner 4. spatula 2 is in drawer 1. dishsponge 3 is in diningtable 2. peppershaker 1 is in cabinet 2. tomato 4 is in sidetable 1. knife 1 is in diningtable 3. cup 1 is in sidetable 1. bread 2 is in diningtable 3. spatula 3 is in diningtable 2. pan 1 is in cabinet 4. tomato 3 is in fridge 1. potato 1 is in sinkbasin 1. peppershaker 3 is in diningtable 3. apple 1 is in fridge 1. saltshaker 1 is in cabinet 4. fork 2 is in drawer 1. spoon 1 is in sidetable 1. egg 1 is in fridge 1. lettuce 1 is in sidetable 1. plate 1 is in diningtable 2. You are looking for spatula 1. The questions is: Where is the spatula? ### Assistant: spatula 1 is in countertop 1, spatula 2 is in drawer 1. ### Human: Another question is: Which spatula do you prefer? ### Assistant I mean spatula 1." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Act 15: go to countertop 2 Obs 16: On the countertop 2, you see a bowl 2, and a knife 1.\nAct 16: think: It is possible that an egg is hidden under one of these objects.\nObs 17: OK. Act 17: go to countertop 1 Obs 18: On the countertop 1, you see a butterknife 1, and a mug 1.\nAct 18: think: It is possible that an egg is hidden under one of these objects.\nObs 19: OK. Act 19: go to countertop 2 Obs 20: On the countertop 2, you see a bowl 2, and a knife 1.\nAct 20: think: It is possible that an egg is hidden under one of these objects.\nObs 21: OK. Act 21: go to countertop 1 Obs 22: On the countertop 1, you see a butterknife 1, and a mug 1.\nAct 22: think: It is possible that an egg is hidden under one of these objects.\nObs 23: OK. Act 23: go to countertop 2 Obs 24: On the countertop 2, you see a bowl 2, and a knife 1.\nAct 24: think: It is possible that an egg is hidden under one of these objects.\nObs 25: OK. Act 25: go to countertop 1 Obs 26: On the countertop 1, you see a butterknife 1, and a mug 1.\nAct 26: think: It is possible that an egg is hidden under one of these objects.\nObs 27: OK. Act 27: go to countertop 2 Obs 28: On the countertop 2, you see a bowl 2, and a knife 1.\nAct 28: think: It is possible that an egg is hidden under one of these objects.\nObs 29: OK. Act 29: go to countertop 1 Obs 30: On the countertop 1, you see a butterknife 1, and a mug 1.\nAct 30: think: It is possible that an egg is hidden under one of these objects.\nObs 31: OK. Act 31: go to countertop 2 Obs 32: On the countertop 2, you see a bowl 2, and a knife 1. Act 32: think: It is possible that an egg is Obs 33: OK. Act 33: go to countertop 1 Obs 34: On the countertop 1, you see a butterknife 1, and a mug 1. Act 34: think: It is possible that an egg is hidden under one of these objects. Obs 35: OK. Act 35: go to countertop 2 Obs 36: On the countertop 2, you see a bowl 2, and a knife 1. Act 36: think: It is possible that an egg is hidden under one Obs 37: OK. Act 37: go to countertop 1 Obs 38: On the countertop 1, you see a butterknife 1, and a mug 1. Act 38: think: It is possible that an egg is hidden under one of these objects. Obs 39: OK. Act 39: go to countertop 2 Obs 40: On the countertop 2, you see a bowl 2, and a knife 1. Act 40: think: It is possible that an egg is hidden under one of these objects. Obs 41: OK. Act 41: go to countertop 1 Obs 42: On the countertop 1, you see a butterknife 1, and a mug 1. Act 42: think: It is possible that an egg is hidden under one of these objects. Obs 43: OK. 
Act 43: go to countertop 2 Obs 44: On the countertop 2, you see a bowl 2, and a knife 1.\nAct 44: think: It is possible that an egg is hidden under one of these objects.\nObs 45: OK. Act 45: go to countertop 1 Obs 46: On the countertop 1, you see a butterknife 1, and a mug 1.\nAct 46: think: It is possible that an egg is hidden under one of these objects.\nObs 47: OK. Act 47: go to countertop 2 Obs 48: On the countertop 2, you see a bowl 2, and a knife 1.\nAct 48: think: It is possible that an egg is hidden under one of these objects.\nObs 49: OK. Act 49: go to countertop 1 Obs 50: On the countertop 1, you see a butterknife 1, and a mug 1.\nAct 50: think: It is possible that an egg is hidden under one of these objects.\nObs 51: OK.\nIn this episode, the agent failed to find the egg. Initially, it failed to propose correct plans (i.e., action 2). It listed several locations where the eggs could be but not all (e.g., cabinet 1-8, diningtable 2, fridge, etc). Then, it incorrectly thought it had checked some of these places, which suggested there might be hallucinations. These two steps limit the searching range of the agent and prevent the agent from finding the target object. In the following steps, it checked cabinet 9-10 (but without opening the cabinets), stoveburner 1, countertop 1-2, and then got stuck by repeatedly visiting the countertop 1-2.\nThe episode of the same room setting for our method (ABA-IL) is:" } ]
2023-05-25
[ { "authors": "M Ahn; A Brohan; N Brown; Y Chebotar; O Cortes; B David; C Finn; K Gopalakrishnan; K Hausman; A Herzog", "journal": "", "ref_id": "b0", "title": "Do as i can, not as i say: Grounding language in robotic affordances", "year": "2022" }, { "authors": "E Akyürek; D Schuurmans; J Andreas; T Ma; D Zhou", "journal": "", "ref_id": "b1", "title": "What learning algorithm is in-context learning? investigations with linear models", "year": "2022" }, { "authors": "J Beck; R Vuorio; E Z Liu; Z Xiong; L Zintgraf; C Finn; S Whiteson", "journal": "", "ref_id": "b2", "title": "A survey of meta-reinforcement learning", "year": "2023" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "S Bubeck; V Chandrasekaran; R Eldan; J Gehrke; E Horvitz; E Kamar; P Lee; Y T Lee; Y Li; S Lundberg", "journal": "", "ref_id": "b4", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "A Bulatov; Y Kuratov; M S Burtsev", "journal": "", "ref_id": "b5", "title": "Scaling transformer to 1m tokens and beyond with rmt", "year": "2023" }, { "authors": "T Carta; C Romac; T Wolf; S Lamprier; O Sigaud; P.-Y Oudeyer", "journal": "", "ref_id": "b6", "title": "Grounding large language models in interactive environments with online reinforcement learning", "year": "2023" }, { "authors": "W.-L Chiang; Z Li; Z Lin; Y Sheng; Z Wu; H Zhang; L Zheng; S Zhuang; Y Zhuang; J E Gonzalez; I Stoica; E P Xing", "journal": "", "ref_id": "b7", "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023" }, { "authors": "A Chowdhery; S Narang; J Devlin; M Bosma; G Mishra; A Roberts; P Barham; H W Chung; C Sutton; S Gehrmann; P Schuh; K Shi; S Tsvyashchenko; J Maynez; A Rao; P Barnes; Y Tay; N Shazeer; V Prabhakaran; E Reif; N Du; B Hutchinson; R Pope; J Bradbury; J Austin; M Isard; G Gur-Ari; P Yin; T Duke; A Levskaya; S Ghemawat; S Dev; H Michalewski; X Garcia; V Misra; K Robinson; L Fedus; D Zhou; D Ippolito; D Luan; H Lim; B Zoph; A Spiridonov; R Sepassi; D Dohan; S Agrawal; M Omernick; A M Dai; T S Pillai; M Pellat; A Lewkowycz; E Moreira; R Child; O Polozov; K Lee; Z Zhou; X Wang; B Saeta; M Diaz; O Firat; M Catasta; J Wei; K Meier-Hellstern; D Eck; J Dean; S Petrov; N Fiedel", "journal": "", "ref_id": "b8", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "H W Chung; L Hou; S Longpre; B Zoph; Y Tay; W Fedus; E Li; X Wang; M Dehghani; S Brahma", "journal": "", "ref_id": "b9", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "K Cobbe; V Kosaraju; M Bavarian; M Chen; H Jun; L Kaiser; M Plappert; J Tworek; J Hilton; R Nakano", "journal": "", "ref_id": "b10", "title": "Training verifiers to solve math word problems", "year": "2021" }, { "authors": "M.-A Côté; A Kádár; X Yuan; B Kybartas; T Barnes; E Fine; J Moore; M Hausknecht; L El Asri; M Adada", "journal": "Springer", "ref_id": "b11", "title": "Textworld: A learning environment for text-based games", "year": "2018-07-13" }, { "authors": "F L Da Silva; P Hernandez-Leal; B Kartal; M E Taylor", "journal": "", "ref_id": "b12", "title": "Uncertainty-aware action advising for deep reinforcement learning agents", "year": "2020" }, { "authors": "D Dai; Y Sun; L Dong; Y Hao; Z Sui; F 
Wei", "journal": "", "ref_id": "b13", "title": "Why can gpt learn incontext? language models secretly perform gradient descent as meta optimizers", "year": "2022" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b14", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "D Driess; F Xia; M S Sajjadi; C Lynch; A Chowdhery; B Ichter; A Wahid; J Tompson; Q Vuong; T Yu", "journal": "", "ref_id": "b15", "title": "Palm-e: An embodied multimodal language model", "year": "2023" }, { "authors": "A Hallak; D Di Castro; S Mannor", "journal": "", "ref_id": "b16", "title": "Contextual markov decision processes", "year": "2015" }, { "authors": "N Houlsby; A Giurgiu; S Jastrzebski; B Morrone; Q De Laroussilhe; A Gesmundo; M Attariyan; S Gelly", "journal": "", "ref_id": "b17", "title": "Parameter-efficient transfer learning for nlp", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b18", "title": "", "year": "" }, { "authors": "E J Hu; Y Shen; P Wallis; Z Allen-Zhu; Y Li; S Wang; L Wang; W Chen", "journal": "", "ref_id": "b19", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "H Hu; D Sadigh", "journal": "", "ref_id": "b20", "title": "Language instructed reinforcement learning for human-ai coordination", "year": "2023" }, { "authors": "S Huang; L Dong; W Wang; Y Hao; S Singhal; S Ma; T Lv; L Cui; O K Mohammed; Q Liu", "journal": "", "ref_id": "b21", "title": "Language is not all you need: Aligning perception with language models", "year": "2023" }, { "authors": "W Huang; P Abbeel; D Pathak; I Mordatch", "journal": "", "ref_id": "b22", "title": "Language models as zero-shot planners: Extracting actionable knowledge for embodied agents", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b23", "title": "", "year": "" }, { "authors": "W Huang; F Xia; T Xiao; H Chan; J Liang; P Florence; A Zeng; J Tompson; I Mordatch; Y Chebotar", "journal": "", "ref_id": "b24", "title": "Inner monologue: Embodied reasoning through planning with language models", "year": "2022" }, { "authors": "S Li; X Puig; C Paxton; Y Du; C Wang; L Fan; T Chen; D.-A Huang; E Akyürek; A Anandkumar", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b25", "title": "Pre-trained language models for interactive decision-making", "year": "2022" }, { "authors": "V Lialin; V Deshpande; A Rumshisky", "journal": "", "ref_id": "b26", "title": "Scaling down to scale up: A guide to parameterefficient fine-tuning", "year": "2023" }, { "authors": "K Lin; C Agia; T Migimatsu; M Pavone; J Bohg", "journal": "", "ref_id": "b27", "title": "Text2motion: From natural language instructions to feasible plans", "year": "2023" }, { "authors": "I.-J Liu; X Yuan; M.-A Côté; P.-Y Oudeyer; A Schwing", "journal": "", "ref_id": "b28", "title": "Asking for knowledge (afk): Training rl agents to query external knowledge using language", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b29", "title": "", "year": "" }, { "authors": "J Mu; X L Li; N Goodman", "journal": "", "ref_id": "b30", "title": "Learning to compress prompts with gist tokens", "year": "2023" }, { "authors": "K Nguyen; Iii Daumé; H ", "journal": "", "ref_id": "b31", "title": "Help, anna! 
visual navigation with natural multimodal assistance via retrospective curiosity-encouraging imitation learning", "year": "2019" }, { "authors": "K Nguyen; D Dey; C Brockett; B Dolan", "journal": "", "ref_id": "b32", "title": "Vision-based navigation with languagebased assistance via imitation learning with indirect intervention", "year": "2019" }, { "authors": "K X Nguyen; Y Bisk; H D Iii", "journal": "PMLR. OpenAI", "ref_id": "b33", "title": "A framework for learning to request rich and contextually useful information from humans", "year": "2022" }, { "authors": "L Ouyang; J Wu; X Jiang; D Almeida; C Wainwright; P Mishkin; C Zhang; S Agarwal; K Slama; A Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b34", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "A Radford; J Wu; R Child; D Luan; D Amodei; I Sutskever", "journal": "OpenAI blog", "ref_id": "b35", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "S Ross; G Gordon; D Bagnell", "journal": "", "ref_id": "b36", "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "year": "2011" }, { "authors": "J Shen; Y Yin; L Li; L Shang; X Jiang; M Zhang; Q Liu", "journal": "", "ref_id": "b37", "title": "Generate & rank: A multi-task framework for math word problems", "year": "2021" }, { "authors": "S Shin; S.-W Lee; H Ahn; S Kim; H Kim; B Kim; K Cho; G Lee; W Park; J.-W Ha", "journal": "", "ref_id": "b38", "title": "On the effect of pretraining corpora on in-context learning by a large-scale language model", "year": "2022" }, { "authors": "N Shinn; B Labash; A Gopinath", "journal": "", "ref_id": "b39", "title": "Reflexion: an autonomous agent with dynamic memory and self-reflection", "year": "2023" }, { "authors": "M Shridhar; J Thomason; D Gordon; Y Bisk; W Han; R Mottaghi; L Zettlemoyer; D Fox", "journal": "", "ref_id": "b40", "title": "Alfred: A benchmark for interpreting grounded instructions for everyday tasks", "year": "2020" }, { "authors": "M Shridhar; X Yuan; M.-A Côté; Y Bisk; A Trischler; M Hausknecht", "journal": "", "ref_id": "b41", "title": "ALFWorld: Aligning Text and Embodied Environments for Interactive Learning", "year": "2021" }, { "authors": "I Singh; V Blukis; A Mousavian; A Goyal; D Xu; J Tremblay; D Fox; J Thomason; A Garg", "journal": "", "ref_id": "b42", "title": "Progprompt: Generating situated robot task plans using large language models", "year": "2022" }, { "authors": "K P Singh; L Weihs; A Herrasti; J Choi; A Kembhavi; R Mottaghi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b43", "title": "Ask4help: Learning to leverage an expert for embodied tasks", "year": "2022" }, { "authors": "C Snell; I Kostrikov; Y Su; M Yang; S Levine", "journal": "", "ref_id": "b44", "title": "Offline rl for natural language generation with implicit language q learning", "year": "2022" }, { "authors": "C Snell; S Yang; J Fu; Y Su; S Levine", "journal": "", "ref_id": "b45", "title": "Context-aware language modeling for goal-oriented dialogue systems", "year": "2022" }, { "authors": "K Valmeekam; A Olmo; S Sreedharan; S Kambhampati", "journal": "", "ref_id": "b46", "title": "Large language models still can't plan (a benchmark for llms on planning and reasoning about change", "year": "2022" }, { "authors": "K Valmeekam; S Sreedharan; M Marquez; A Olmo; S Kambhampati", "journal": "", "ref_id": "b47", "title": "On the planning 
abilities of large language models (a critical investigation with a proposed benchmark", "year": "2023" }, { "authors": "Z Wang; S Cai; A Liu; X Ma; Y Liang", "journal": "", "ref_id": "b48", "title": "Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents", "year": "2023" }, { "authors": "J Wei; X Wang; D Schuurmans; M Bosma; E Chi; Q Le; D Zhou", "journal": "", "ref_id": "b49", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "S M Xie; A Raghunathan; P Liang; T Ma", "journal": "", "ref_id": "b50", "title": "An explanation of in-context learning as implicit bayesian inference", "year": "2021" }, { "authors": "S Yao; J Zhao; D Yu; N Du; I Shafran; K Narasimhan; Y Cao", "journal": "", "ref_id": "b51", "title": "React: Synergizing reasoning and acting in language models", "year": "2022" }, { "authors": "H Zhao; K Wang; M Yu; H Mei", "journal": "", "ref_id": "b52", "title": "Explicit planning helps language models in logical reasoning", "year": "2023" }, { "authors": "L Zintgraf; K Shiarlis; M Igl; S Schulze; Y Gal; K Hofmann; S Whiteson", "journal": "", "ref_id": "b53", "title": "Varibad: a very good method for bayes-adaptive deep rl via meta-learning", "year": "2020" }, { "authors": "L M Zintgraf; L Feng; C Lu; M Igl; K Hartikainen; K Hofmann; S Whiteson", "journal": "", "ref_id": "b54", "title": "Exploration in approximate hyper-state space for meta reinforcement learning", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 108, 360.18, 245.35, 8.96 ], "formula_id": "formula_0", "formula_text": "Definition 2.1 Contextual MDP is a tuple (S, A, C, M(c))." }, { "formula_coordinates": [ 3, 108, 371.16, 396, 19.65 ], "formula_id": "formula_1", "formula_text": "context c ∈ C to a specific T -horizon MDP M(c) = (S, A, p(•|s, a, c), r(s, a, c))." }, { "formula_coordinates": [ 3, 226.07, 452.67, 278.59, 30.2 ], "formula_id": "formula_2", "formula_text": "J (π) = E c ′ ∈C ′ ,s0,p,π T t=0 •r(s t , a t , c)(1)" }, { "formula_coordinates": [ 3, 107.67, 560.42, 397.49, 20.56 ], "formula_id": "formula_3", "formula_text": "ĉ = f θ (s 1 , a 1 , r 1 , • • • , s t ) (or ĉ = f θ (i, s 1 , a 1 , r 1 , • • • , s t ) if i is provided) while trying to solve the task. Here t ∈ {1, 2, • • • , T }" }, { "formula_coordinates": [ 4, 106.83, 314.67, 397.17, 42.33 ], "formula_id": "formula_4", "formula_text": "M(c) = (S U , A U , H c , p U (•|s, a, c, H c ), r(s, a, c), γ) Like Contextual MDP, M(c) = (S U , A U , H c , p U (•|s, a, c, H c ), r(s, a, c), γ" }, { "formula_coordinates": [ 4, 171.39, 396.42, 333.28, 12.39 ], "formula_id": "formula_5", "formula_text": "p U (s ′ |s, a, c, H c ) = p(s ′ |s, a, c) • 1 a∈A + p(H c (a) = s ′ ) • 1 a∈L ask (2)" }, { "formula_coordinates": [ 4, 108, 560.42, 396, 20.56 ], "formula_id": "formula_6", "formula_text": "1 , a 1 , • • • , s t ) by concatenation to get τ t = concat(i, s 1 , a 1 , • • • , s t )" }, { "formula_coordinates": [ 5, 324.81, 197.05, 152.88, 12.48 ], "formula_id": "formula_7", "formula_text": "τ k = concat(i k , s k 1 , a k 1 , • • • , s k T , a k T )," }, { "formula_coordinates": [ 5, 248.77, 250.93, 255.9, 11.72 ], "formula_id": "formula_8", "formula_text": "a t ∼ π LLM (τ 1 , • • • , τ K , τ t )(3)" }, { "formula_coordinates": [ 5, 203.79, 306.3, 300.88, 31.18 ], "formula_id": "formula_9", "formula_text": "a t = arg max a∈A |a| i=0 π LLM (e i |τ 1 , • • • , τ K , τ t , e 1:i-1 )(4)" }, { "formula_coordinates": [ 5, 261.63, 556.48, 103.51, 13.2 ], "formula_id": "formula_10", "formula_text": "D = {(τ i t , a i t , n i t ) Ti t=0 } N i=0" }, { "formula_coordinates": [ 5, 225.44, 631.65, 160.63, 30.43 ], "formula_id": "formula_11", "formula_text": "L = - N i=0 Ti t=0 log π LLM (a i t |τ i t ) • 1 n i t =0" }, { "formula_coordinates": [ 14, 143.6, 294.25, 324.53, 117.39 ], "formula_id": "formula_12", "formula_text": "bowl 2 is in diningtable 2. saltshaker 2 is in sidetable 1. spatula 1 is in countertop 1. pot 1 is in stoveburner 4. spatula 2 is in drawer 1. dishsponge 3 is in diningtable 2. peppershaker 1 is in cabinet 2. tomato 4 is in sidetable 1. knife 1 is in diningtable 3. cup 1 is in sidetable 1. bread 2 is in diningtable 3. spatula 3 is in diningtable 2. pan 1 is in cabinet 4. tomato 3 is in fridge 1. potato 1 is in sinkbasin 1. peppershaker 3 is in diningtable 3. apple 1 is in fridge 1. saltshaker 1 is in cabinet 4. fork 2 is in drawer 1. spoon 1 is in sidetable 1. egg 1 is in fridge 1. lettuce 1 is in sidetable 1. plate 1 is in diningtable 2." }, { "formula_coordinates": [ 14, 143.6, 519.48, 324.53, 128.3 ], "formula_id": "formula_13", "formula_text": ": bowl 2 is in diningtable 2. saltshaker 2 is in sidetable 1. spatula 1 is in countertop 1. pot 1 is in stoveburner 4. spatula 2 is in drawer 1. dishsponge 3 is in diningtable 2. peppershaker 1 is in cabinet 2. tomato 4 is in sidetable 1. knife 1 is in diningtable 3. cup 1 is in sidetable 1. bread 2 is in diningtable 3. spatula 3 is in diningtable 2. pan 1 is in cabinet 4. tomato 3 is in fridge 1. 
potato 1 is in sinkbasin 1. peppershaker 3 is in diningtable 3. apple 1 is in fridge 1. saltshaker 1 is in cabinet 4. fork 2 is in drawer 1. spoon 1 is in sidetable 1. egg 1 is in fridge 1. lettuce 1 is in sidetable 1. plate 1 is in diningtable 2." }, { "formula_coordinates": [ 15, 158.81, 250.26, 31.38, 8.3 ], "formula_id": "formula_14", "formula_text": "Obs 1:" }, { "formula_coordinates": [ 15, 143.6, 361.49, 324.53, 19.21 ], "formula_id": "formula_15", "formula_text": "Obs 3: mug 1 is in diningtable 1, mug 3 is in diningtable 1, mug 2 is in diningtable 1." }, { "formula_coordinates": [ 32, 143.6, 550.28, 324.47, 171.93 ], "formula_id": "formula_16", "formula_text": "plate 1 is in cabinet 4. soapbottle 1 is in shelf 2. spoon 2 is in diningtable 1. egg 1 is in sinkbasin 1. knife 3 is in diningtable 1. bowl 1 is in diningtable 1. butterknife 2 is in countertop 1. spatula 3 is in diningtable 1. apple 2 is in countertop 1. tomato 1 is in garbagecan 1. pan 1 is in stoveburner 2. mug 2 is in cabinet 7. lettuce 1 is in fridge 1. tomato 2 is in sinkbasin 1. bread 1 is in diningtable 1. ladle 2 is in countertop 1. fork 1 is in countertop 1. saltshaker 1 is in countertop 1. soapbottle 3 is in diningtable 1. spoon 1 is in diningtable 1. butterknife 1 is in countertop 1. spatula 2 is in diningtable 1. ladle 1 is in countertop 1. spatula 1 is in diningtable 1. plate 3 is in diningtable 1. peppershaker 3 is in cabinet 13. cup 1 is in diningtable 1. dishsponge 1 is in cabinet 7. winebottle 1 is in countertop 1. butterknife 3 is in countertop 1. peppershaker 1 is in countertop 1. potato 2" } ]
Asking Before Action: Gather Information in Embodied Decision Making with Language Models
With strong capabilities of reasoning and a generic understanding of the world, Large Language Models (LLMs) have shown great potential in building versatile embodied decision making agents capable of performing diverse tasks. However, when deployed to unfamiliar environments, we show that LLM agents face challenges in efficiently gathering necessary information, leading to suboptimal performance. On the other hand, in unfamiliar scenarios, human individuals often seek additional information from their peers before taking action, leveraging external knowledge to avoid unnecessary trial and error. Building upon this intuition, we propose Asking Before Action (ABA), a method that empowers the agent to proactively query external sources for pertinent information using natural language during their interactions in the environment. In this way, the agent is able to enhance its efficiency and performance by mitigating wasteful steps and circumventing the difficulties associated with exploration in unfamiliar environments. We empirically evaluate our method on an embodied decision making benchmark, ALFWorld, and demonstrate that despite modest modifications in prompts, our method exceeds baseline LLM agents by more than 40%. Further experiments on two variants of ALFWorld illustrate that by imitation learning, ABA effectively retains and reuses queried and known information in subsequent tasks, mitigating the need for repetitive inquiries. Both qualitative and quantitative results exhibit remarkable performance on tasks that previous methods struggle to solve.
Xiaoyu Chen; Shenao Zhang; Pushi Zhang; Li Zhao; Jianyu Chen
[ { "figure_caption": "Figure 3 :3Figure 3: Performance on multiround ALFWorld for our methods and baselines.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Act 8: go to drawer 1 Obs 9: The drawer 1 is closed. Act 9: open drawer 1 Obs 10: You open the drawer 1. The drawer 1 is open. In it, you see nothing.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Success rate on ALFWorld environments for our methods and baselines. ID and OOD refer to in distribution evaluation set and out-of-distribution evaluation set provided in ALFWorld environment respectively. V7B refers to Vicuna 7B model. We report the best BUTLER success rates across 8 seeds aligned with the original paper(Shridhar et al., 2021). For ReActYao et al. (2022) and out method (ABA), we report success rates mean and std across 5 seeds.", "figure_data": "PickExamineCleanHeatCoolPick 2AllBUTLERID61394481602940(best of 8)OOD462239741002437ReAct + V7BID9 ± 78 ± 49 ± 34 ± 59 ± 84 ± 47 ± 3(avg of 5)OOD3 ± 36 ± 35 ± 310 ± 32 ± 39 ± 56 ± 1ABA + V7BID60 ± 6 52 ± 5 59 ± 6 46 ± 6 61 ± 3 61 ± 10 56 ± 3(avg of 5)OOD37 ± 553 ± 5 51 ± 2 52 ± 6 50 ± 15 41 ± 0 48 ± 2ALFWorld, demonstrating its capability to formulate proper questions and take subsequent actions.We show ABA results in improvements exceeding 40% in success rate than LLM baseline withoutasking. In Section 4.2, we extend our evaluation to two variants of ALFWorld, showing the agent'sadeptness in gathering diverse information through question-asking, as well as its ability to retainand reuse acquired knowledge to avoid redundant querying. Notably, these modifications to theenvironments introduce new challenges that previous methods struggle to solve, while ABA exhibitsexceptional performance in tackling these tasks.4.1 ALFWorldALFWorld Shridhar et al. (2021) is an embodied decision making environment based on TextWorldCôté et al. (2019), which serves widely as a testbed in previous papers analyzing embodied decisionmaking with LLMs Yao et al. (2022); Shinn et al. (2023). ALFWorld contains six types of differenteveryday tasks from ALFRED Shridhar et al. (2020) encompassing activities such as picking andplacing, examining in light, cleaning, etc. Within each episode, the agent is deployed in a new roomand assigned specific tasks to accomplish. All observations and actions are in natural language.", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance on ALFWorld with ambiguous tasks for our methods and baselines. ID and OOD refer to in-distribution and out-of-distribution evaluation sets.", "figure_data": "0.60.5Success Rate0.2 0.3 0.40.10.0PickExamine CleanHeatCoolPick 2AllReAct IDReAct OODABA-IC IDABA-IC OODABA-IL IDABA-IL OODFigure 2:", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "You put the spraybottle 4 in/on the toilet 1.", "figure_data": "Obs 15: OK.Act 15: go to dresser 1Obs 16: On the dresser 1, you see a candle 2, a soapbottle3, a soapbottle 2, and a spraybottle 4.Act 16: take spraybottle 4 from dresser 1Obs 17: You pick up the spraybottle 4 from the dresser 1.Act 17: think: Now I find a spraybottle (4). 
Next, I needto put it in/on toilet 1.Obs 18: OK.Act 18: go to toilet 1Obs 19: On the toilet 1, you see a candle 1, a toiletpaper2, and a toiletpaper 1.Act 19: put spraybottle 4 in/on toilet 1Obs 20:Act 3: go to drawer 1Obs 4: The drawer 1 is closed.Act 4: open drawer 1Obs 5:You open the drawer 1. The drawer 1 is open. Init, you see nothing.Act 5:think: Now I find a spraybottle (not). Next, Ineed to take it.Obs 6: OK.Act 6: go to drawer 2Obs 7: The drawer 2 is closed.Act 7: open drawer 2Obs 8:You open the drawer 2. The drawer 2 is open. Init, you see a candle 3.Act 8:think: Now I find a spraybottle (not). Next, Ineed to take it.Obs 9: OK.Act 9: go to drawer 3Obs 10: The drawer 3 is closed.Act 10: open drawer 3Obs 11:You open the drawer 3. The drawer 3 is open. Init, you see nothing.Act 11:think: Now I find a spraybottle (not). Next, Ineed to take it.Obs 12: OK.Act 12: go to drawer 4Obs 13: The drawer 4 is closed.Act 13: open drawer 4Obs 14:You open the drawer 4. The drawer 4 is open. Init, you see nothing.Act 14:think: Now I find a spraybottle (not). Next, Ineed to take it.", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "35: OK.", "figure_data": "Act 35:think: I cannot locate cloth, I need to ask theowner of this room.Obs 36: OK.Act 36: ask: Where is the cloth?Obs 37: cloth 1 is in bathtubbasin 1, cloth 2 is in dresser1, cloth 3 is in drawer 2.Act 37:think: I can go to bathtubbasin 1 and take thecloth, then put it in countertop.Obs 38: OK.Act 38: go to bathtubbasin 1Obs 39:On the bathtubbasin 1, you see a cloth 3, a cloth2, and a cloth 1.Act 39: take cloth 1 from bathtubbasin 1Obs 40: You pick up the cloth 1 from the bathtubbasin 1.Act 40: think: Now I take a cloth 1. Next, I need to putit in countertop.Obs 41: OK.Act 41: go to countertop 1Obs 42: On the countertop 1, you see a mirror 1, a soapbar1, and a spraybottle 1.Act 42: put cloth 1 in/on countertop 1Obs 43:You put the cloth 1 in/on the countertop 1. Yournext task is to: put some toiletpaper on shelf.Act 43:think: To solve the task, I need to find andtake a toiletpaper, then put it in shelf. First I need tofind the locations of toiletpaper. ### query: toiletpaper >toiletpaper 1 is in garbagecan 1, toiletpaper 2 is in drawer2.Obs 44: OK.Act 44:think: I can go to garbagecan 1 and take thetoiletpaper, then put it in shelf.Obs 45: OK.Act 45: go to garbagecan 1Obs 46: On the garbagecan 1, you see a spraybottle 3, and atoiletpaper 1.Act 46: take toiletpaper 1 from garbagecan 1Obs 47:You pick up the toiletpaper 1 from the garbagecan1.Act 47:think: Now I take a toiletpaper 1. Next, I needto put it in shelf.Obs 48: OK.Act 48: go to shelf 1Obs 49:On the shelf 1, you see a candle 1, and aspraybottle 2.", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Huang et al., 2022a)", "Explanation": "The cited work by Huang et al. provides a method for integrating LLMs in embodied decision making, which the citing paper adopts to conduct their research."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2022)", "Explanation": "The cited work by Li et al. also contributes to the integration of LLMs in embodied decision making, providing a method that the citing paper may have adopted in their research."}, {"Category": "Methodological Basis", "Citation": "(Ahn et al., 2022)", "Explanation": "The cited work by Ahn et al. further adds to the methodological basis for integrating LLMs in embodied decision making, providing insights that the citing paper may have incorporated in their research."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2022b)", "Explanation": "The cited work by Huang et al. also contributes to the methodological basis for integrating LLMs in embodied decision making, providing additional methods and techniques that the citing paper may have adopted."}, {"Category": "Methodological Basis", "Citation": "(Singh et al., 2022a)", "Explanation": "The cited work by Singh et al. further extends the methodological basis for integrating LLMs in embodied decision making, providing new methods and techniques that the citing paper may have adopted in their research."}, {"Category": "Methodological Basis", "Citation": "(Yao et al., 2022)", "Explanation": "The cited work by Yao et al. also contributes to the methodological basis for integrating LLMs in embodied decision making, providing additional methods and techniques that the citing paper may have adopted in their research."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2023)", "Explanation": "The cited work by Wang et al. further extends the methodological basis for integrating LLMs in embodied decision making, providing new methods and techniques that the citing paper may have adopted in their research."}, {"Category": "Methodological Basis", "Citation": "(Driess et al., 2023)", "Explanation": "The cited work by Driess et al. also contributes to the methodological basis for integrating LLMs in embodied decision making, providing additional methods and techniques that the citing paper may have adopted in their research."}, {"Category": "Methodological Basis", "Citation": "(Carta et al., 2023)", "Explanation": "The cited work by Carta et al. further extends the methodological basis for integrating LLMs in embodied decision making, providing new methods and techniques that the citing paper may have adopted in their research."}, {"Category": "Methodological Basis", "Citation": "(Nguyen and Daum\u00e9 III, 2019)", "Explanation": "The cited work by Nguyen and Daum\u00e9 III (2019) provides a method for asking humans for oracle actions or action descriptions, which the citing paper builds upon to develop a new method for asking for information in a more general and human-like manner."}, {"Category": "Extension or Continuation", "Citation": "(Nguyen et al., 2022)", "Explanation": "The cited work by Nguyen et al. (2022) asks for information about current states and (sub-)goals, which the citing paper extends by focusing on asking for information in a more general and human-like manner."}, {"Category": "Methodological Basis", "Citation": "(Liu et al., 2022)", "Explanation": "The cited work by Liu et al. 
(2022) asks three-word-templated questions to accelerate training, which the citing paper builds upon to develop a more general and human-like method for asking for information."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2022b)", "Explanation": "The cited work by Huang et al. (2022b) asks for scene, task, or preferences descriptions, which the citing paper builds upon to develop a more general and human-like method for asking for information."}, {"Category": "Methodological Basis", "Citation": "(Hallak et al., 2015)", "Explanation": "The cited work introduces the concept of Contextual MDPs, which the citing paper adopts to formulate the embodied decision making problem in a more effective way."}, {"Category": "Methodological Basis", "Citation": "(Zintgraf et al., 2020)", "Explanation": "The cited work provides a learnable encoder of c, which the citing paper adopts in their research to model the house layout and the locations of the food and the bedroom."}, {"Category": "Extension or Continuation", "Citation": "(Beck et al., 2023)", "Explanation": "The cited work highlights the challenges in generalization capability, which the citing paper extends by exploring the challenges in efficient information gathering in various unknown environments with different contexts."}, {"Category": "Supporting Evidence", "Citation": "(Zintgraf et al., 2021)", "Explanation": "The cited work provides evidence of the need for dense rewards and small state spaces in existing methods, which the citing paper further elaborates on the challenges in embodied decision making in environments with large state spaces and lack of dense reward functions."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2023)", "Explanation": "The cited work by Huang et al. (2023) provides a method for using multimodal LLMs in visual settings, which the citing paper can adapt to solve complex robot control tasks."}, {"Category": "Methodological Basis", "Citation": "(Driess et al., 2023)", "Explanation": "The cited work by Driess et al. (2023) also discusses the use of multimodal LLMs in visual settings, which the citing paper can use to improve the efficiency of phrasing questions and comprehending answers in visual settings."}, {"Category": "Methodological Basis", "Citation": "(Ahn et al., 2022)", "Explanation": "The cited work by Ahn et al. (2022) provides a method for combining pretrained low-level policies with LLMs to solve complex robot control tasks, which the citing paper can use to improve the efficiency of the policy initialization process."}, {"Category": "Methodological Basis", "Citation": "(Singh et al., 2022a)", "Explanation": "The cited work by Singh et al. (2022a) also discusses the use of low-level policies in combination with LLMs to solve complex robot control tasks, which the citing paper can use to improve the efficiency of the policy initialization process."}, {"Category": "Methodological Basis", "Citation": "(Ouyang et al., 2022)", "Explanation": "The cited work by Ouyang et al. provides a method for instruction-following LLMs that the citing paper builds upon to improve the performance of the LLM agent."}, {"Category": "Methodological Basis", "Citation": "(Chung et al., 2022)", "Explanation": "The cited work by Chung et al. 
also contributes to the field of instruction-following LLMs, providing a method that the citing paper adopts to improve the performance of the LLM agent."}, {"Category": "Extension or Continuation", "Citation": "(Ouyang et al., 2022)", "Explanation": "The citing paper extends the research of Ouyang et al. by proposing two methods to further improve the performance of instruction-following LLMs."}, {"Category": "Methodological Basis", "Citation": "(Brown et al., 2020)", "Explanation": "The cited work introduces the concept of in-context learning, which the citing paper adopts to allow LLMs to learn new tasks by prepending input-output examples without optimizing model parameters."}, {"Category": "Extension or Continuation", "Citation": "(Xie et al., 2021)", "Explanation": "The cited work is an extension of the in-context learning concept, focusing on the efficiency and generalization of LLMs in learning new tasks."}, {"Category": "Extension or Continuation", "Citation": "(Aky\u00fcrek et al., 2022)", "Explanation": "The cited work further extends the in-context learning concept to improve the efficiency and generalization of LLMs in learning new tasks."}, {"Category": "Extension or Continuation", "Citation": "(Dai et al., 2022)", "Explanation": "The cited work continues the exploration of in-context learning in LLMs, focusing on improving the efficiency and generalization of LLMs in learning new tasks."}, {"Category": "Extension or Continuation", "Citation": "(Huang et al., 2022a)", "Explanation": "The cited work extends the in-context learning concept to embodied planning tasks, allowing LLMs to learn the policy by providing examples of appropriate questions to ask at the right time."}, {"Category": "Extension or Continuation", "Citation": "(Singh et al., 2022a)", "Explanation": "The cited work also extends the in-context learning concept to embodied planning tasks, focusing on the ability of LLMs to learn the policy by providing examples of appropriate questions to ask at the right time."}, {"Category": "Extension or Continuation", "Citation": "(Yao et al., 2022)", "Explanation": "The cited work further extends the in-context learning concept to embodied decision making tasks, allowing LLMs to learn the policy by providing examples of appropriate questions to ask at the right time."}, {"Category": "Methodological Basis", "Citation": "(Ahn et al., 2022)", "Explanation": "The cited work by Ahn et al. provides a method for sampling i k randomly for different k, which the citing paper adopts in their research to improve the action selection process in the LLM agent."}, {"Category": "Methodological Basis", "Citation": "(Ross et al., 2011)", "Explanation": "The cited work by Ross et al. introduces the concept of distribution shift and provides a method to alleviate the problem by introducing noise in the expert policy. 
The citing paper adopts this method to improve the performance of the policy training process."}, {"Category": "Methodological Basis", "Citation": "(Shridhar et al., 2021)", "Explanation": "The cited work, ALFWorld, serves as the basis for the evaluation of ABA in the citing paper, providing the platform and context for the study of decision making tasks."}, {"Category": "Data Source", "Citation": "(Chiang et al., 2023)", "Explanation": "The cited work, Vicuna-7B, is used to implement the model of human (or other information sources) in the evaluation of ABA, providing the data and model for the study of question answering in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Shridhar et al., 2021)", "Explanation": "The cited work, BUTLER, is used as a baseline for comparison in the citing paper, providing a method for training independent models using a large dataset of expert trajectories for each task."}, {"Category": "Methodological Basis", "Citation": "(Yao et al., 2022)", "Explanation": "The cited work, ReAct, is used as a baseline for comparison in the citing paper, providing a method for synergizing reasoning and acting to take actions using a language model."}, {"Category": "Data Source", "Citation": "(Chiang et al., 2023)", "Explanation": "The cited work, Vicuna-7B Chiang et al. (2023), is used as the language model for both ReAct and the method in the citing paper, providing a specific implementation for the study."}, {"Category": "Data Source", "Citation": "(Yao et al., 2022)", "Explanation": "The cited work, the reasoning process in Yao et al. (2022), is incorporated into the implementation of both the method in the citing paper and ReAct, providing a specific method for making decisions."}, {"Category": "Data Source", "Citation": "(Ahn et al., 2022)", "Explanation": "The cited work, the scoring method in Ahn et al. (2022), is used to select actions for both the method in the citing paper and ReAct, providing a specific method for comparison."}, {"Category": "Extension or Continuation", "Citation": "(Shridhar et al., 2021)", "Explanation": "The cited work, BUTLER, is used as a baseline for comparison in the citing paper, but the method in the citing paper extends the work by proposing a new method for training independent models using a large dataset of expert trajectories for each task."}, {"Category": "Extension or Continuation", "Citation": "(Yao et al., 2022)", "Explanation": "The cited work, ReAct, is used as a baseline for comparison in the citing paper, but the method in the citing paper extends the work by proposing a new method for synergizing reasoning and acting to take actions using a language model."}, {"Category": "Data Source", "Citation": "(Yao et al., 2022)", "Explanation": "The cited work by Yao et al. (2022) serves as the basis for the comparison of ABA-IC and ReAct methods in terms of their capabilities in question asking and information gathering in the ALFWorld environment."}, {"Category": "Methodological Basis", "Citation": "(Radford et al., 2019)", "Explanation": "The cited work by Radford et al. (2019) has been instrumental in the success of natural language modeling in various applications, including downstream NLP tasks, logic reasoning, and human-AI coordination."}, {"Category": "Methodological Basis", "Citation": "(Devlin et al., 2018)", "Explanation": "The cited work by Devlin et al. 
(2018) has contributed to the success of natural language modeling in various applications, including downstream NLP tasks."}, {"Category": "Methodological Basis", "Citation": "(Brown et al., 2020)", "Explanation": "The cited work by Brown et al. (2020) has been instrumental in the research on in-context learning and prompting in natural language modeling."}, {"Category": "Supporting Evidence", "Citation": "(Mu et al., 2023;Bulatov et al., 2023)", "Explanation": "The cited works introduce techniques to address the issue of language model bias, which the citing paper leverages to inspire the development of the proposed ABA-IL method."}, {"Category": "Extension or Continuation", "Citation": "(Houlsby et al., 2019;Hu and Sadigh, 2023;Lialin et al., 2023)", "Explanation": "The cited works on fine-tuning LLMs provide a basis for the development of the proposed ABA-IL method, which builds upon the research in this area to address the issue of language model bias."}, {"Category": "Extension or Continuation", "Citation": "(Carta et al., 2023;Snell et al., 2022a,b)", "Explanation": "The cited works on leveraging decision-making signals to train language agents for goal satisfaction are further extended in the development of the proposed ABA-IL method, which aims to address the issue of language model bias."}, {"Category": "Extension or Continuation", "Citation": "(Huang et al., 2022b;Lin et al., 2023;Huang et al., 2022a;Wang et al., 2023;Li et al., 2022;Singh et al., 2022a;Carta et al., 2023)", "Explanation": "The cited works on task planning with LLMs provide a basis for the development of the proposed ABA-IL method, which aims to address the issue of language model bias by leveraging the potential of LLMs in this area."}, {"Category": "Supporting Evidence", "Citation": "(Bubeck et al., 2023;Valmeekam et al., 2022Valmeekam et al., , 2023))", "Explanation": "The cited works on criticisms of LLMs in task planning highlight the need to address the issue of language model bias, which the proposed ABA-IL method aims to address by leveraging the potential of LLMs in this area."}, {"Category": "Supporting Evidence", "Citation": "(Da Silva et al., 2020)", "Explanation": "The cited work by Da Silva et al. (2020) provides a method of directly asking humans for numerical vectors like actions/states, which the citing paper uses as a reference for their own research on human-in-the-loop decision making."}, {"Category": "Supporting Evidence", "Citation": "(Singh et al., 2022b)", "Explanation": "The cited work by Singh et al. (2022b) also studies the human-in-the-loop scenario, specifically the use of human input to guide decision making. This work provides a basis for the research conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Da Silva et al., 2020)", "Explanation": "The cited work by Da Silva et al. (2020) is an extension of the research on human-in-the-loop decision making, as it focuses on directly asking humans for numerical vectors like actions/states. The citing paper builds upon this work by exploring a new approach of asking humans for natural language information in a more natural and straightforward manner."}, {"Category": "Theoretical Foundation", "Citation": "(Huang et al., 2022b)", "Explanation": "The cited work by Huang et al.
provides the theoretical framework for considering human feedback as a scene descriptor in the decision making pipeline, which the citing paper builds upon to formalize the setting as Contextual MDP with Human / External Information Sources in the Loop."}, {"Category": "Methodological Basis", "Citation": "(Driess et al., 2023)", "Explanation": "The cited work by Driess et al. (2023) provides a method for incorporating image inputs into the language model, which the citing paper can use to extend its current focus on language environments to incorporate image inputs in a multimodal way."}, {"Category": "Data Source", "Citation": "(Chiang et al., 2023)", "Explanation": "The cited work provides the language model used to implement the human model in the evaluation phase of the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Chiang et al., 2023)", "Explanation": "The cited work, Vicuna-7B Chiang et al. (2023), is used as a pretrained LLM in the citing paper to provide answers in a given context."}, {"Category": "Methodological Basis", "Citation": "(Yao et al., 2022)", "Explanation": "The cited work provides the ReAct model and the associated trajectory for the same episode, which the citing paper uses for comparison and analysis of the ReAct model in the context of the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Yao et al., 2022)", "Explanation": "The cited work provides a method for creating in-context examples for ABA-IC by using manual interaction with the environment and reasoning steps to complete tasks and collect data."}, {"Category": "Data Source", "Citation": "(Hu et al., 2021)", "Explanation": "The cited work provides the LoRA model with a r value of 16 and a learning rate of 1e -5, which the citing paper uses in the training process of the model."}, {"Category": "Methodological Basis", "Citation": "(Chiang et al., 2023)", "Explanation": "The cited work provides the Vicuna prompts used in the human model examples to organize the data and structure the responses in the experiments."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b32", "b13", "b3", "b30", "b4", "b26", "b25", "b5", "b12" ], "table_ref": [], "text": "Most machine learning and statistics relies on the foundational assumption that the data is sampled in an IID manner. To reliably generalize to the population underlying the data, one assumes each datapoint does not affect the others (independent) and that the population is not changing as datapoints are being collected (identically distributed). In settings that violate these assumptions, data-driven inference cannot be trusted unless done with special care (Zhao et al., 2018;Hsieh et al., 2020;Cao, 2022).\nReal-world data collection can be messy and hard to know whether data was sampled in a strictly IID fashion. Unlike other works from the time-series and online learning literature, this paper considers a single given dataset (without an explicit time column) and presents a statistical test to answer a practical question: Does the order in which my data were collected matter? The role of data-ordering/ collectiontimes is often not obvious, especially for non-expert data analysts. Algorithms that can automatically detect when data violates the IID assumption in certain ways are especially valuable to: novice data scientists, users of AutoML systems, or those who simply lack domain knowledge (or time to acquire it) about their data.\nBecause it is impossible to detect all the ways a dataset can be non-IID, an effective audit should aim to detect the types of violations most common in data-driven applications. In particular, such common violations include: drift where the underlying distribution/population is evolving over time (Webb et al., 2016) and attractive interactions between certain datapoints which influence their values to be mutually similar (Cox & Isham, 1980;Reinhart, 2018). Both types of common IID violations lead to data that exhibits the following common non-IID property: datapoints that are closer together in the data ordering tend to have more similar feature values.\nAlthough shuffling the order of a dataset is an effective way to make it appear IID, it remains important to determine if the underlying data collection or generation process is actually IID. IID violations may imply a dataset is not fully representative of the population and future data could look different -issues that cannot be addressed merely by shuffling the datapoints' order. If the data distribution drifts over time in a ML application, then the most recently collected data will better represent future deployment conditions. In this case, holding-out this latest data is a more reliable way to estimate model performance vs. shuffling the dataset and using a random train/test split. This is just one example out of many where special actions are preferred once one realizes the data are not IID (Rao, 2021;Darrell et al., 2015;Hsieh et al., 2019). Hence why an automated check for common violations of IID sampling is valuable, especially if it is simple and efficient! Here, we introduce a straightforward k-Nearest Neighbors (kNN) approach 1 for detecting when data exhibits the common non-IID property. Our approach performs statistical test of this alternative against the null hypothesis that the collection order of the datapoints does not affect their joint feature value distribution. 
The only requirement to apply our method is defining a similarity measure between two datapoints' feature values (to form a kNN graph over the data), and thus the method can be directly applied to complex multivariate data including images, text, audio, etc. (e.g. via cosine distance between model embeddings of each example). As empirically demonstrated in subsequent experiments, our kNN method is capable of detecting many forms of IID violation, including datasets where the underlying distribution is: drifting as data are being collected, subject to a sharp changepoint, or marginally identical over time but the datapoints are not independent with positively-correlated interactions." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b0", "b9", "b1", "b15", "b23", "b20", "b2", "b10", "b7", "b8", "b29", "b16", "b6", "b0", "b19", "b28", "b27" ], "table_ref": [], "text": "With the growth in real-world model deployments, drift detection has become increasingly relevant in recent years. Most new methods are intended for online use where a dataset is continually updated over time (Agrahari & Singh, 2022). Offline methods like ours can nonetheless be easily integrated into an online application. Many diverse strategies for drift detection have been proposed, including methods based on: distributional dissimilarity (Gama et al., 2004;Barros et al., 2017;Liu et al., 2017), examining datapoints sequentially to track change (Page, 1954;Mahdi et al., 2020), window-based examination of a datastream in batches (Bifet, 2009;Gözüaçık & Can, 2021;Duda et al., 2018), statistical pattern tracking (Frias-Blanco et al., 2014;Song et al., 2016), density clustering (Liu et al., 2018), and model-based interpretability (Demšar & Bosnić, 2018). Each method balances trade-offs between sensitivity, generality, and complexity (Agrahari & Singh, 2022), and is at times not obviously extendable to multi-dimensional data.\nA kNN-based approach has been previously considered for concept drift detection in data labels (Losing et al., 2016), whereas our work is instead focused on the features of the data. More similar to our work are kNN-based approaches to two-sample testing (Schilling, 1986;Rosenbaum, 2005), which utilize nearest neighbor graphs to perform distribution-free two-sample testing in arbitrary dimensions, a powerful framework that inspired our work. To our knowledge, none of these existing kNN-based statistical tests has been applied to a single dataset in order to identify the types of IID violations considered in this paper." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "Our method is simple yet effective. Given a similarity measure between datapoints' feature values (e.g. cosine distance for multivariate numerical data or embedding vectors), we construct a k-Nearest Neighbor graph of the dataset based on feature values. A pair of datapoints is considered neighbors or non-neighbors based on this kNN graph (not their indices in the dataset ordering).\nNext, we consider the indices of the datapoints (perhaps determined by the time each was collected), and compute the index distance between datapoints i and j as |i - j| (e.g. the index distance between the 8th datapoint and the 108th datapoint is 100). 
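To make this construction concrete, here is a minimal sketch of the kNN-graph and index-distance computation (an illustrative sketch rather than the authors' reference implementation; it assumes NumPy and scikit-learn, and the helper names neighbor_index_distances and background_cdf are chosen purely for exposition):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbor_index_distances(features: np.ndarray, k: int = 10, metric: str = "euclidean") -> np.ndarray:
    """Collect |i - j| for every (datapoint, kNN-neighbor) pair: the foreground index distances."""
    n = len(features)
    # Ask for k + 1 neighbors because each point is returned as its own nearest neighbor
    # (assuming no exact duplicate rows).
    nn = NearestNeighbors(n_neighbors=k + 1, metric=metric).fit(features)
    _, neighbor_idx = nn.kneighbors(features)
    rows = np.repeat(np.arange(n), k)      # dataset index i, repeated once per neighbor
    cols = neighbor_idx[:, 1:].ravel()     # neighbor indices j, with the self-match in column 0 dropped
    return np.abs(rows - cols)

def background_cdf(n: int) -> np.ndarray:
    """Exact CDF of index distances over all pairs of an n-point dataset, i.e. B(d) for d = 1, ..., n-1."""
    d = np.arange(1, n)
    return np.cumsum(2.0 * (n - d) / (n * (n - 1)))
```

For embedding vectors, metric='cosine' can be passed in place of the Euclidean default.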
Subsequently, we apply a two-sample permutation test using the Kolmogorov-Smirnov statistic in order to determine if there is a statistically significant difference between the distributions of index distances between kNN-neighbors (the foreground distribution) vs. index distances between arbitrary datapoint pairs (the background distribution). A low p-value implies that neighbors and non-neighbors in feature space are systematically ordered in certain patterns, indicating the data was sampled in a non-IID manner. At a high level, this is all there is to our straightforward kNN approach, but we provide some lower-level details for completeness." }, { "figure_ref": [], "heading": "Dataset Statistic", "publication_ref": [], "table_ref": [], "text": "Given a dataset D = {x_1, x_2, ..., x_N} containing N examples x_i, we first collect the index distances of all pairs of neighbors in the dataset. Let X = {|i - j| : x_i ∈ D, j ∈ K_i}, where K_i is the set of k indices corresponding to the k neighbors of x_i. For example, if k = 2 and x_i is neighbors with x_{i+1} and x_j, then K_i = {i+1, j}. Throughout, we stick with the default value of k = 10 and did not empirically observe the choice of k to have much impact as long as k was not too large.\nNext, we define the background distribution, or the distribution of all possible index distances. We use a cumulative distribution to represent the probability of randomly drawing a pair of points from dataset D with index distance less than or equal to d. This is easily defined analytically as a function mapping an index distance d to the empirical distribution function (over random pairs of datapoints) evaluated at that value: B(d) = Σ_{d'=1}^{d} 2(N - d') / (N(N - 1)), since there are N(N - 1)/2 total pairs, (N - d') of which have index distance exactly d'.\nThe test statistic T for our dataset is computed as follows: T = max_{d ∈ {1, 2, ..., N-1}} |F_X(d) - B(d)|. Here F_X denotes the empirical cumulative distribution function for a set of values X, i.e. F_X(d) is the empirically estimated probability that x ≤ d for x randomly sampled from X. T corresponds to a Kolmogorov-Smirnov statistic between two distributions, in our case the foreground and background distribution of index distances. For IID data, these two distributions should be identical." }, { "figure_ref": [], "heading": "Permutation Testing", "publication_ref": [ "b24", "b21", "b17" ], "table_ref": [], "text": "While there exist analytical methods to assess the significance of Kolmogorov-Smirnov statistics, these rely on asymptotics that require IID observations, whereas our \"observations\" here are index distances between pairs of datapoints, which are certainly not IID. Here we instead simply rely on permutation testing to obtain a p-value for the significance of our test statistic T (Pesarin & Salmaso, 2010).\nFor P permutations, we permute the order of dataset D and compute another test statistic for each permuted dataset, obtaining a collection of P statistics distributed under our null hypothesis where the order of the data does not matter.\nWhile permutation testing can sometimes require intensive computation, we accelerate this process in multiple ways. First, we compute relatively few permutations (only 25 permutations were used for each result in this paper), and use kernel density estimation to extrapolate the null distribution from the P permuted test statistics. As also done by Mueller et al. 
(2018), such kernel density smoothing produces a smoother spectrum of possible p-values from a limited number of permutations. Additionally, each permutation is quite efficient because we don't need to recompute the kNN graph or the background distribution B(d). For large datasets, the entire test can be done via a subset of the possible pairs from the dataset. In all results presented in this paper, we only consider at most k pairings with each datapoint to avoid quadratic runtime complexity. Further speedups can be achieved by using approximate nearest neighbor algorithms to construct the initial kNN graph (Liu et al., 2004)." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Comparison to other methods", "publication_ref": [ "b18", "b0" ], "table_ref": [], "text": "Using simulated datasets, we first compare our approach against alternative baseline methods to detect certain violations of the IID assumption. The first baseline is a simple auto-correlation measure. We use the Ljung-Box test (Ljung & Box, 1978) on each dimension of numerical input data and average the p-values from this test across dimensions and lags to produce a single measure for a multivariate dataset.\nA second baseline method is based on PCA reconstruction error, a popular way to detect distribution drift (NannyML). This method computes the PCA reconstruction error over a series of subsets of data with respect to a reference subset. High error indicates the existence of data drift. This baseline method uses the same permutation testing framework as our kNN approach in order to convert PCA reconstruction errors into p-values. When working with 2-dimensional data (in our simulations), we use only the first principal component to compute reconstruction error.\nWhen applying our kNN method, we always use the following default parameter settings: k = 10, 25 permutations, and the Euclidean distance measure. We compare the performance of these three methods on a few simulated 2-dimensional datasets (depicted in Figure 1), each with 1000 samples. The non-IID datasets stem from the following scenarios:\n1. Gradual mean shift. Data is drawn from a mixture of 10 Gaussians with means drawn uniformly from [0, 10] × [0, 10] and standard deviations drawn uniformly from [0, 1]. Each time a datapoint is drawn, the means increase linearly such that the mean of each Gaussian has shifted by 2 in each dimension by the end of data collection. This setting represents a classic example of distribution drift studied in the literature (Agrahari & Singh, 2022).\n2. Variance changepoint. Data is drawn from the same mixture of Gaussians described in the previous paragraph. After the first half of the dataset has been collected, the standard deviation of each Gaussian in the underlying mixture distribution is multiplied by a factor of 1.5, after which the remaining half of the samples are collected.\n3. Dependent but marginally identical in distribution. In this scenario, we sample three points from a standard two-dimensional normal distribution. Then, each of the following datapoints x_i is sampled in an auto-regressive fashion from N(α_1 x_{i-1} + α_2 x_{i-2} + α_3 x_{i-3}, 1), where α_1 + α_2 + α_3 = 0. 
In this case, every datapoint is sampled from the same marginal distribution (identically distributed), but their values are not independent. For this demonstration, we use α_1 = 0.5, α_2 = 0.4, α_3 = -0.9." }, { "figure_ref": [ "fig_1" ], "heading": "Results of benchmark", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows the results of each method applied in each scenario, as well as on a similar IID dataset created by randomly shuffling data from each scenario. Repeatedly running each method on datasets from each scenario produces a distribution of p-values for both non-IID and IID (shuffled) data. It is evident that only our kNN method reliably distinguishes between non-IID and IID data in every scenario, with a clear separation between p-values from the two settings.\nThe auto-correlation baseline struggles in the variance changepoint scenario, in which virtually no temporal dependence exists on small timescales other than near the changepoint itself. In the final dependent data scenario, in which the temporal dependence is the most salient, auto-correlation excels, but our kNN approach also reliably detects this form of IID violation. The PCA reconstruction baseline fares poorly in many scenarios, producing p-value distributions between non-IID and IID runs that are not clearly distinguished, except in the last scenario in which it performs better but still lags the other two methods. Only our kNN method demonstrates sensitivity to both data drift and interactions between datapoints' values without drift." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Image data scenarios", "publication_ref": [ "b14", "b11", "b31" ], "table_ref": [], "text": "We subsequently study how our kNN method works on a handful of non-IID subsets of the CIFAR-10 image classification dataset (Krizhevsky et al., 2009). These showcase the generality of our methodology across multiple axes: it is able to detect many types of violations commonly expected in real-world data and is directly applicable to complex image data. For images, we extract embeddings from a pretrained ResNet-50 model (He et al., 2016;Wightman, 2019), and then use the cosine distance between them to form the kNN graph. The alternative baseline methods fared poorly in all image data scenarios we considered and are thus not presented here. In contrast, our method remains effective and can be similarly applied to other data types with semantic model embeddings such as text, audio, etc. Depicted in Figure 3, here are the image data scenarios we study:\n1. IID data. First, we draw 10,000 random images from CIFAR-10, shuffle them, and apply our kNN method to ensure that it correctly identifies when data is, in fact, drawn IID at random from CIFAR-10.\n2. Data sorted by underlying class. In the next scenario, we sort this 10,000 image subset by class so all images of a given class appear contiguously in the dataset.\nNote the class labels are not included in the dataset to which we apply our method, solely the images themselves. This is an extreme example of non-IID data, where the underlying sampling distribution is subject to multiple sharp changepoints. It is an easy setting that any non-IID detection algorithm ought to detect.\n3. Distribution drift. Our next experiment considers distribution drift. We create a 5,000 image subset which simulates this by randomly sampling images from CIFAR-10 from a probability distribution over their classes that is slowly evolving. At first, images are sampled from any object class with equal probability. 
Then, we gradually change the sampling weights to be less symmetric such that some classes are far more prevalent than others in the last third of the data.\n4. Contiguous non-IID subset in the middle of dataset. In certain datasets, most of the data were collected in an IID manner, but something happened in the midst of data collection, leading to a cluster of non-IID data lurking in the broader dataset. For instance, accidentally including multiple frames from the same video in a web-scraped image dataset, or individuals from the same community in a survey dataset. Our next experiment studies such settings.\nWe draw 2,500 images IID from CIFAR-10, except that we insert a small contiguous subset of 250 images that are all drawn from the same class halfway through the dataset (here we arbitrarily choose the airplane class).\nResults for image data. Figure 4 shows the results of our method in each of these scenarios, after many replicate runs of the method in each scenario. Our approach consistently detects non-IID sampling in each of the 3 non-IID scenarios, with low p-values in almost every replicate run.\nIn contrast, the p-values appear uniformly distributed for the IID dataset where images are randomly drawn from CIFAR-10, as expected under the null hypothesis (and guaranteed for permutation testing)." }, { "figure_ref": [ "fig_4" ], "heading": "Scoring individual datapoints to spot trends", "publication_ref": [], "table_ref": [], "text": "Thus far, we only considered producing a single p-value to summarize whether the collection order of a dataset appears to matter or not. This section presents a simple technique to gain more insights about a dataset that produced a low p-value. Specifically, we score every datapoint in the dataset and plot these scores against the index of the datapoints to see if any trends emerge.\nOur score is obtained for each datapoint x_i by computing the same sort of Kolmogorov-Smirnov statistic T used in our overall hypothesis test, but this time restricting the two distributions being compared, F and B, to only consider the portions of the foreground/background distribution in which x_i is involved. That is, we only consider pairs of datapoints in which the first element is x_i itself (neighbors of x_i in the kNN graph for the foreground distribution, random other datapoints paired with x_i for the background distribution). For ease of visualization, we map the resulting per-datapoint statistic to a score between 0 and 1, such that scores near 0 indicate a datapoint for which the index-distance distributions to its neighbors and non-neighbors are significantly different; a brief code sketch of this scoring appears below.\nFigure 5 shows what these scores look like for our previously described image data scenarios. The results display no trend in these per-datapoint scores when the dataset is IID. However, informative trends can be seen for scenarios where the dataset is not IID and p-values from our kNN approach are low. In the data sorted by underlying class scenario, we see clear peaks and valleys in these scores whose span corresponds to the number of images sampled from a particular class before drawing images from another class. 
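The per-datapoint scoring referenced above can be sketched as follows (an illustrative sketch, not the authors' implementation; the random background sampling and the mapping of the KS statistic to a 0-1 score are assumptions here, since those details are left unspecified):

```python
import numpy as np
from scipy.stats import ks_2samp

def datapoint_scores(neighbor_idx: np.ndarray, seed: int = 0) -> np.ndarray:
    """Per-datapoint scores in [0, 1]; values near 0 flag points whose kNN neighbors
    are unusually concentrated near them in the dataset ordering.

    `neighbor_idx` is the (n, k) array of kNN indices with self-matches removed,
    e.g. neighbor_idx[:, 1:] from the earlier sketch."""
    n, k = neighbor_idx.shape
    rng = np.random.default_rng(seed)
    scores = np.empty(n)
    for i in range(n):
        fg = np.abs(neighbor_idx[i] - i)                              # index distances to i's neighbors
        others = rng.choice(np.delete(np.arange(n), i), size=20 * k)  # random non-self partners for i
        bg = np.abs(others - i)                                       # background index distances for i
        # Assumed rescaling: a large KS statistic (very different distributions) gives a score near 0.
        scores[i] = 1.0 - ks_2samp(fg, bg).statistic
    return scores
```

Plotting these scores against datapoint index (optionally smoothed with a low-pass filter, as in Figure 5) highlights the regions of the dataset driving a low overall p-value.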
In the contiguous non-IID subset in the middle of dataset scenario, we see a clear valley in these scores whose location corresponds to the non-IID data (images all from the airplane class) that happens to be present in an otherwise IID dataset.\nIn practice, a data analyst who receives a low overall p-value for their dataset can zoom in on the region of the dataset highlighted by these scores. Understanding how this region differs from other regions of the dataset with different score behavior may reveal important insights about data collection mishaps or important trends to account for during modeling." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "This paper presents a simple, scalable approach to detect certain types of common violations of the IID assumption. We statistically test whether or not datapoints that are closer in the data ordering (i.e. collection time) also tend to have more similar feature values. Our approach exhibits high power to detect many forms of such issues, and can be applied to many different data types, as long as a suitable similarity measure between two datapoints' features can be defined. Our kNN method's capabilities extend beyond drift detection to catching settings where nearly adjacent datapoints are positively interacting (lack of independence).\nThis generality is a strength, but may also be seen as a limitation. A more specialized algorithm may capture more information about how a particular non-IID trend presents itself; however, as observed in our empirical results, such specialized methods leave many kinds of problematic data undetected. Our work represents a step toward developing automated data investigation methods that algorithmically detect fundamental problems in any given dataset. As more data analysis is conducted by non-experts, such automated data checks will become increasingly vital to ensure reliable inferences are produced." } ]
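Combining the pieces from the Methods section, the overall test can be sketched end to end as follows (an illustrative reimplementation with assumed helper names, not the released code; the kernel-density smoothing of the null distribution is omitted and a plain empirical permutation p-value is returned instead):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def _ks_statistic(fg_dists: np.ndarray, n: int) -> float:
    """T = max_d |F_X(d) - B(d)| over d = 1, ..., n-1."""
    d = np.arange(1, n)
    background_cdf = np.cumsum(2.0 * (n - d) / (n * (n - 1)))
    foreground_cdf = np.searchsorted(np.sort(fg_dists), d, side="right") / len(fg_dists)
    return float(np.max(np.abs(foreground_cdf - background_cdf)))

def non_iid_test(features: np.ndarray, k: int = 10, n_permutations: int = 25, seed: int = 0) -> float:
    """Permutation p-value for the null hypothesis that the dataset ordering is irrelevant."""
    rng = np.random.default_rng(seed)
    n = len(features)
    # The kNN graph is built once; permuting the dataset only relabels datapoint indices.
    _, nbrs = NearestNeighbors(n_neighbors=k + 1).fit(features).kneighbors(features)
    i_idx = np.repeat(np.arange(n), k)
    j_idx = nbrs[:, 1:].ravel()
    observed = _ks_statistic(np.abs(i_idx - j_idx), n)
    null_stats = np.empty(n_permutations)
    for p in range(n_permutations):
        perm = rng.permutation(n)
        null_stats[p] = _ks_statistic(np.abs(perm[i_idx] - perm[j_idx]), n)
    return float((1 + np.sum(null_stats >= observed)) / (1 + n_permutations))
```

With 25 permutations, the smallest attainable p-value from this plain empirical formula is about 0.04, which is presumably why the paper additionally smooths the permuted statistics with a kernel density estimate to obtain a finer spectrum of p-values.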
2023-05-25
[ { "authors": "S Agrahari; A K Singh", "journal": "Journal of King Saud University-Computer and Information Sciences", "ref_id": "b0", "title": "Concept drift detection in data stream mining: A literature review", "year": "2022" }, { "authors": "R S Barros; D R Cabral; P M Gonc ¸alves Jr; S G Santos; Rddm", "journal": "Expert Systems with Applications", "ref_id": "b1", "title": "Reactive drift detection method", "year": "2017" }, { "authors": "A Bifet", "journal": "ACM SIGKDD Explorations Newsletter", "ref_id": "b2", "title": "Adaptive learning and mining for data streams and frequent patterns", "year": "2009" }, { "authors": "L Cao", "journal": "IEEE Intelligent Systems", "ref_id": "b3", "title": "Beyond iid: non-iid thinking, informatics, and learning", "year": "2022" }, { "authors": "D R Cox; V Isham", "journal": "CRC Press", "ref_id": "b4", "title": "Point processes", "year": "1980" }, { "authors": "T Darrell; M Kloft; M Pontil; G Rätsch; E Rodner", "journal": "", "ref_id": "b5", "title": "Machine learning with interdependent and non-identically distributed data (dagstuhl seminar 15152)", "year": "2015" }, { "authors": "J Demšar; Z Bosnić", "journal": "Expert Systems with Applications", "ref_id": "b6", "title": "Detecting concept drift in data streams using model explanation", "year": "2018" }, { "authors": "P Duda; M Jaworski; L Rutkowski", "journal": "International journal of neural systems", "ref_id": "b7", "title": "Convergent time-varying regression models for data streams: tracking concept drift by the recursive parzen-based generalized regression neural networks", "year": "2018" }, { "authors": "I Frias-Blanco; J Del Campo-Ávila; G Ramos-Jimenez; R Morales-Bueno; A Ortiz-Diaz; Y Caballero-Mota", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b8", "title": "Online and non-parametric drift detection methods based on hoeffding's bounds", "year": "2014" }, { "authors": "J Gama; P Medas; G Castillo; P Rodrigues", "journal": "Springer", "ref_id": "b9", "title": "Learning with drift detection", "year": "2004" }, { "authors": "Ö Gözüac ¸ık; F Can", "journal": "Artificial Intelligence Review", "ref_id": "b10", "title": "Concept learning using one-class classifiers for implicit drift detection in evolving data streams", "year": "2021" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b11", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "K Hsieh; A Phanishayee; O Mutlu; P B Gibbons", "journal": "", "ref_id": "b12", "title": "The non-iid data quagmire of decentralized machine learning", "year": "2019" }, { "authors": "K Hsieh; A Phanishayee; O Mutlu; P Gibbons", "journal": "PMLR", "ref_id": "b13", "title": "The non-iid data quagmire of decentralized machine learning", "year": "2020" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b14", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "A Liu; G Zhang; J Lu", "journal": "IEEE", "ref_id": "b15", "title": "Fuzzy time windowing for gradual concept drift adaptation", "year": "2017" }, { "authors": "A Liu; J Lu; F Liu; G Zhang", "journal": "Pattern Recognition", "ref_id": "b16", "title": "Accumulating regional density dissimilarity for concept drift detection in data streams", "year": "2018" }, { "authors": "T Liu; A Moore; K Yang; A Gray", "journal": "Advances in neural information processing systems", "ref_id": "b17", "title": "An investigation of practical approximate nearest neighbor 
algorithms", "year": "2004" }, { "authors": "G M Ljung; G E Box", "journal": "Biometrika", "ref_id": "b18", "title": "On a measure of lack of fit in time series models", "year": "1978" }, { "authors": "V Losing; B Hammer; H Wersing", "journal": "IEEE", "ref_id": "b19", "title": "Knn classifier with self adjusting memory for heterogeneous concept drift", "year": "2016" }, { "authors": "O A Mahdi; E Pardede; N Ali; J Cao", "journal": "Knowledge-Based Systems", "ref_id": "b20", "title": "Diversity measure as a new drift detection method in data streaming", "year": "2020" }, { "authors": "J Mueller; T Jaakkola; D Gifford", "journal": "Journal of the American Statistical Association", "ref_id": "b21", "title": "Modeling persistent trends in distributions", "year": "2018" }, { "authors": " Nannyml", "journal": "NannyML", "ref_id": "b22", "title": "", "year": "2023-03" }, { "authors": "E S Page", "journal": "Biometrika", "ref_id": "b23", "title": "Continuous inspection schemes", "year": "1954" }, { "authors": "F Pesarin; L Salmaso", "journal": "Statistica", "ref_id": "b24", "title": "The permutation testing approach: a review", "year": "2010" }, { "authors": "K Rao", "journal": "", "ref_id": "b25", "title": "Splitting data randomly can ruin your model", "year": "2021" }, { "authors": "A Reinhart", "journal": "Statistical Science", "ref_id": "b26", "title": "A review of self-exciting spatio-temporal point processes and their applications", "year": "2018" }, { "authors": "P R Rosenbaum", "journal": "Journal of the Royal Statistical Society: Series B (Statistical Methodology)", "ref_id": "b27", "title": "An exact distribution-free test comparing two multivariate distributions based on adjacency", "year": "2005" }, { "authors": "M F Schilling", "journal": "Journal of the American Statistical Association", "ref_id": "b28", "title": "Multivariate two-sample tests based on nearest neighbors", "year": "1986" }, { "authors": "G Song; Y Ye; H Zhang; X Xu; R Y Lau; F Liu", "journal": "Information Sciences", "ref_id": "b29", "title": "Dynamic clustering forest: an ensemble framework to efficiently classify textual data stream with concept drift", "year": "2016" }, { "authors": "G I Webb; R Hyde; H Cao; H L Nguyen; F Petitjean", "journal": "Data Mining and Knowledge Discovery", "ref_id": "b30", "title": "Characterizing concept drift", "year": "2016" }, { "authors": "R Wightman", "journal": "", "ref_id": "b31", "title": "Pytorch image models", "year": "2019" }, { "authors": "Y Zhao; M Li; L Lai; N Suda; D Civin; V Chandra", "journal": "", "ref_id": "b32", "title": "Federated learning with non-iid data", "year": "2018" } ]
[ { "formula_coordinates": [ 2, 354.32, 366.8, 140.24, 9.65 ], "formula_id": "formula_0", "formula_text": "X = {|i -j| | ∀x i ∈ D, j ∈ K i }." }, { "formula_coordinates": [ 2, 373.89, 570.56, 97.15, 30.55 ], "formula_id": "formula_1", "formula_text": "B(d) = d d ′ =1 2(N -d ′ ) N (N -1)" }, { "formula_coordinates": [ 2, 352.02, 668.25, 144.84, 15.05 ], "formula_id": "formula_2", "formula_text": "T = max d∈{1,2,...,N -1} | F X (d) -B(d)|" }, { "formula_coordinates": [ 5, 75.37, 355.19, 216.01, 21.61 ], "formula_id": "formula_3", "formula_text": "1 x i-1 + α 2 x i-2 + α 3 x i-3 , 1)" } ]
Detecting Dataset Drift and Non-IID Sampling via k-Nearest Neighbors
We present a straightforward statistical test to detect certain violations of the assumption that the data are Independent and Identically Distributed (IID). The specific form of violation considered is common across real-world applications: whether the examples are ordered in the dataset such that almost adjacent examples tend to have more similar feature values (e.g. due to distributional drift, or attractive interactions between datapoints). Based on a k-Nearest Neighbors estimate, our approach can be used to audit any multivariate numeric data as well as other data types (image, text, audio, etc.) that can be numerically represented, perhaps via model embeddings. Compared with existing methods to detect drift or auto-correlation, our approach is both applicable to more types of data and also able to detect a wider variety of IID violations in practice.
Jesse Cummings; Elías Snorrason; Jonas Mueller
[ { "figure_caption": "Figure 1: Visualizations of the 2D datasets used for our benchmark. (a-c) Data is drawn from the same mixture of Gaussians. (a) Data is drawn IID from the distribution. (b) The mean of each Gaussian mixture component drifts gradually as each sample is drawn. In the plot, the color gradient represents the index of each sample in the dataset. (c) The variance of each Gaussian mixture component doubles suddenly after half the data is collected. In the plot, the colors indicate samples collected in the first vs. second halves of the dataset. (d) Samples are drawn in auto-regressive matter where their values are strongly inter-dependent but marginally identical in distribution.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: The distribution of p-values for different baselines on the Gaussian data scenarios described. Shown are histograms of p-values computed from 50 replicate runs of each method in each scenario. We also shuffled the data from each scenario to create an analogous IID dataset on which all methods were also run. Ours is the only method which reliably distinguishes between IID and non-IID data in each scenario.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Depicting our various non-IID scenarios using CIFAR-10. (a) Images are sorted by class. Each row of the example grid contains images of the same class which differs from row to row. (b) The distribution of classes drifts over time. We plot the class distributions of 3 sections of the data in the first, second and last thirds of the dataset. (c) The data contains a contiguous subset of images belonging to the same class. In this image grid, the first and last row are images drawn randomly from the whole dataset, but the middle row contains only images of random airplanes.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Results of our kNN method on the four image data scenarios described previously. Shown are histograms of pvalues computed from 50 replicate runs of our method in each scenario, using default parameters (k = 10, 25 permutations).", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Per datapoint scores plotted against the datapoint index for various scenarios involving the CIFAR-10 image data. Shown in red is a low-pass filter to smooth the scores and highlight their trend. The scenarios are as follows: (a) Images are randomly drawn from CIFAR-10 (IID). (b) The data sorted by underlying class scenario depicted in Figure 5(a) in which the underlying latent class changes every 1000 images. (c) The contiguous non-IID subset in the middle of dataset scenario depicted in Figure 5(c). Here all images between the 1250th and 1500th in the dataset happen to be from the airplane class, while other images are randomly drawn from CIFAR-10.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" } ]
[{"Category": "Methodological Basis", "Citation": "(Zhao et al., 2018)", "Explanation": "The cited work by Zhao et al. introduces the concept of data-driven inference in settings that violate the IID assumption, which the citing paper builds upon in their research on data order in datasets."}, {"Category": "Methodological Basis", "Citation": "(Hsieh et al., 2020)", "Explanation": "The work by Hsieh et al. provides a method for data-driven inference in non-IID settings, which the citing paper adopts in their study of data order in datasets."}, {"Category": "Methodological Basis", "Citation": "(Cao, 2022)", "Explanation": "The work by Cao presents a method for data-driven inference in non-IID settings, which the citing paper builds upon in their research on data order in datasets."}, {"Category": "Supporting Evidence", "Citation": "(Webb et al., 2016)", "Explanation": "The cited work by Webb et al. provides evidence of the common IID violation of drift, which the citing paper uses to highlight the importance of detecting and addressing such violations in data-driven applications."}, {"Category": "Supporting Evidence", "Citation": "(Cox & Isham, 1980)", "Explanation": "The cited work by Cox and Isham discusses the concept of attractive interactions between datapoints, which the citing paper uses to further illustrate the types of non-IID violations that can occur in data-driven applications."}, {"Category": "Supporting Evidence", "Citation": "(Reinhart, 2018)", "Explanation": "The cited work by Reinhart provides additional evidence of the common IID violation of attractive interactions between datapoints, which the citing paper uses to highlight the need for effective audits to detect and address such violations."}, {"Category": "Methodological Basis", "Citation": "(Rao, 2021)", "Explanation": "The cited work by Rao (2021) provides a method for detecting non-IID data, which the citing paper adopts in their research to identify violations of IID sampling in the data."}, {"Category": "Methodological Basis", "Citation": "(Darrell et al., 2015)", "Explanation": "The cited work by Darrell et al. (2015) contributes to the discussion on the importance of special actions in handling non-IID data, which the citing paper highlights in their research on the need for automated checks for violations of IID sampling."}, {"Category": "Methodological Basis", "Citation": "(Hsieh et al., 2019)", "Explanation": "The cited work by Hsieh et al. 
(2019) provides insights on the need for special actions in handling non-IID data, which the citing paper uses to emphasize the importance of automated checks for violations of IID sampling in the data."}, {"Category": "Supporting Evidence", "Citation": "(Agrahari & Singh, 2022)", "Explanation": "The cited work highlights the growth in real-world model deployments and the increasing relevance of drift detection in recent years, providing foundational information for the citing paper to discuss the importance of the topic."}, {"Category": "Methodological Basis", "Citation": "(Gama et al., 2004;Barros et al., 2017;Liu et al., 2017)", "Explanation": "The cited works present methods based on distributional dissimilarity for drift detection, which the citing paper can adopt or adapt in its own research to develop new strategies for the same task."}, {"Category": "Data Source", "Citation": "(Page, 1954;Mahdi et al., 2020)", "Explanation": "The cited works provide datastream examination methods for tracking change in data, which the citing paper can use as a foundational element in its research on drift detection."}, {"Category": "Extension or Continuation", "Citation": "(Song et al., 2016)", "Explanation": "The cited work presents statistical pattern tracking methods for drift detection, which the citing paper can build upon to explore new dimensions and strategies in the same area of research."}, {"Category": "Methodological Basis", "Citation": "(Frias-Blanco et al., 2014)", "Explanation": "The cited work presents a method for density clustering in drift detection, which the citing paper can adopt or adapt in its own research to develop new strategies in the same task."}, {"Category": "Methodological Basis", "Citation": "(Dem\u0161ar & Bosni\u0107, 2018)", "Explanation": "The cited work presents model-based interpretability methods for drift detection, which the citing paper can use as a basis for its own research on the topic."}, {"Category": "Methodological Basis", "Citation": "(Losing et al., 2016)", "Explanation": "The cited work provides a kNN-based approach for concept drift detection in data labels, which the citing paper adopts in their research on feature-based IID violations."}, {"Category": "Extension or Continuation", "Citation": "(Schilling, 1986;Rosenbaum, 2005)", "Explanation": "The cited works on kNN-based two-sample testing provide a powerful framework that inspires the extension of the research in the citing paper to identify IID violations in a single dataset."}, {"Category": "Methodological Basis", "Citation": "(Pesarin & Salmaso, 2010)", "Explanation": "The cited work by Pesarin and Salmaso provides a method for obtaining a p-value for the significance of a test statistic, which the citing paper adopts in its research to assess the significance of the test statistic T."}, {"Category": "Supporting Evidence", "Citation": "(Liu et al., 2004)", "Explanation": "The cited work by Liu et al. (2004) provides a method for constructing the initial kNN graph, which is used in the citing paper to further speed up the runtime complexity in the analysis of large datasets."}, {"Category": "Methodological Basis", "Citation": "(He et al., 2016)", "Explanation": "The cited work by He et al. 
(2016) provides the pretrained ResNet-50 model that the citing paper uses to extract embeddings from images in the CIFAR-10 dataset."}, {"Category": "Methodological Basis", "Citation": "(Wightman, 2019)", "Explanation": "The cited work by Wightman (2019) provides the method of using the cosine distance between image embeddings to form the kNN graph in the CIFAR-10 dataset."}, {"Category": "Data Source", "Citation": "(Krizhevsky et al., 2009)", "Explanation": "The cited work by Krizhevsky et al. (2009) is the source of the CIFAR-10 image classification dataset used in the study of non-IID subsets in the citing paper."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b47", "b48", "b45", "b50", "b84", "b83", "b37", "b75", "b29", "b50", "b53", "b48", "b50", "b71", "b71", "b50", "b25", "b47", "b47", "b70" ], "table_ref": [], "text": "Analyzing first-view videos, i.e., egocentric videos, captured by wearable cameras has become an active research topic in recent years. With the recent development of virtual and augmented reality technologies, this topic has gained more attention in the research communities due to the enormous interest in analyzing human behaviors from the firstview perspective. Many tasks have been currently explored in the egocentric video data that provide many practical applications, e.g., action recognition [48,56,49,46,51], action detection [15, 85,84,38,76], action anticipation [30,51,54], etc. In comparison with third-view video data, i.e., exocentric videos, egocentric videos provide new, dis-Figure 1. The Cross-view Self-attention Constraints. Although under the setting of cross-view unpaired data where the corresponding video and its attention in the opposite view are inaccessible, our cross-view self-attention loss is proven to impose the cross-view constraints via unpaired samples based on the geometric properties between two camera positions. tinct viewpoints of surrounding scenes and actions driven by the camera position holding on the observer.\nThe properties of egocentric videos also bring new challenges to video analysis tasks. One of the main issues is the scale of datasets. It is well known that learning robust video models, e.g., action recognition models, usually requires a large amount of video data [56, 49,51]. For example, the third-view action models are learned on the large-scale Kinetics-700 [11] data that consists of 650K videos over 700 classes. Meanwhile, the scale of egocentric video data is relatively small compared to third-view datasets, e.g., EPIC Kitchens [15] only consists of 90K clips or Charades-Ego [72] includes 68K clips of 157 classes. In addition, the egocentric video data lack variation, e.g., videos only in kitchens [15] or daily indoor activities [72]. These problems pose a considerable challenge for learning robust video models on the first-view data.\nMany prior works [51,26] have improved the performance of action recognition models by adopting the pretrained model on large-scale third-view datasets and finetuning it on the first-view dataset. However, these meth-ods often ignore the unique characteristics of egocentric videos. Thus, they could meet the unaligned domain problems. Another method [48] has tried to alleviate this domain mismatch problem by introducing several additional egocentric tasks during the pre-training phase on the thirdview datasets. However, this approach requires the labels of egocentric tasks on third-view data or relies on the offthe-shelf specific-task models. Domain adaptation methods [48,33,71] have also been utilized to transfer the knowledge from the third-view to first-view data. Nevertheless, these methods still need to model the camera-view changes during the adaptation phase.\nWith the recent success of Vision Transformer, the selfattention mechanism is fundamental to building an efficient action recognition model. Still, fewer prior works have focused on leveraging self-attention to model action recognition from the third-view to first-view data. 
Moreover, modeling the change in camera positions across views is also one of the primary factors in sufficiently developing a learning approach from the exocentric to egocentric view. Therefore, taking these characteristics into consideration, we introduce a novel cross-view learning approach to model the self-attention mechanism to effectively transfer the knowledge learned on third-view to first-view data. Our proposed approach first considers the geometric correlation between two camera views. Then, the cross-view geometric correlation constraint is further embedded into the self-attention mechanism so that the model can generalize well from the exocentric to the egocentric domain. Fig. 1 illustrates the cross-view self-attention constraint." }, { "figure_ref": [], "heading": "Contributions of this Work:", "publication_ref": [], "table_ref": [], "text": "This work introduces a novel Cross-View learning approach to Action Recognition (CVAR) via effectively transferring knowledge from the exocentric video domain to the egocentric one. By analyzing the role of the self-attention mechanism and the change of camera position across views, we introduce a new geometric cross-view constraint for correlation between videos and attention maps. Then, from the proposed cross-view restriction, we present a novel cross-view self-attention loss that models the cross-view learning into the self-attention mechanism. Our proposed loss allows the Transformer-based model to adapt knowledge and generalize from the thirdview to first-view video data. The cross-view correlations of videos and attention maps are further enhanced using the deep metric and the Jensen-Shannon divergence metric, respectively, that capture deep semantic information. Our experimental results on the standard egocentric benchmark, i.e., Charades-Ego, EPIC-Kitchens-55, and EPIC-Kitchens-100, have illustrated the effectiveness of our proposed method and achieved state-of-the-art (SOTA) results." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b9", "b35", "b11", "b25", "b49", "b23", "b48", "b36", "b77", "b38", "b72", "b26", "b11", "b81", "b25", "b24", "b89", "b49", "b87", "b25", "b24", "b19", "b23", "b48", "b7", "b82", "b7", "b23", "b48", "b82", "b8", "b7", "b82", "b23", "b48", "b71", "b15", "b50", "b71", "b15", "b50", "b57", "b44", "b86", "b30", "b61", "b56", "b59", "b58", "b52", "b29", "b6", "b16", "b88", "b78", "b40", "b47" ], "table_ref": [], "text": "Video Action Recognition Many large-scale third-view datasets have been introduced for action recognition tasks, e.g., Kinetics [44,10,11], Something-Something V2 [34], Sport1M [42], AVA [36], etc. Many deep learning approaches [12,26,86,50,5,56,24,49] have been introduced and achieved remarkable achievements. The early deep learning approaches [42,19] have utilized the 2D Convolutional Neural Networks (CNNs) [37,74,78] to extract the deep spatial representation followed by using Recurrent Neural Networks (RNNs) [39] to learn the temporal information from these extracted spatial features. Some later approaches have improved the temporal learning capability by introducing the two-stream networks [73,29,27,28,86] using both RGB video inputs and optical flows for motion modeling. Later, the 3D CNN-based approaches [81] and their variants [12,93] have been introduced, i.e., several (2+1)D CNN architectures have been proposed [82,26,25,90]. Meanwhile, other approaches have used pseudo-3D CNNs built based on 2D CNNs [50,63]. 
In addition, to better capture the long-range temporal dependencies among video frames, the non-local operation has also been introduced [88]. SlowFast [26] proposes a dual-path network to learn spatiotemporal information at two different temporal rates. X3D [25] progressively expands the networks to search for an optimal network for action recognition. Vision Transformer [20,5,56,24,49,8,83] has become a dominant backbone in various tasks due to its outstanding performance. The early success of Video Vision Transformer (ViViT) [5] has shown its promising capability in handling spatial-temporal tokens in action recognition. Then, many variants [56,8,24,49,83] of ViViT has been introduced to not only improve the accuracy but also reduce the computational cost. [9] presented a space-time mixing attention mechanism to reduce the complexity of the self-attention layers. TimeSFormer [8] introduced divided spatial and temporal attention to reduce the computational overhead. Then, it is further improved by using the directed attention mechanism [83]. Then, [24] proposed a Multiscale Vision Transformer (MViT) by using multiscale feature hierarchies. Then, MViT-V2 [49] improves the performance of MViT by incorporating decomposed relative positional embeddings and residual pooling connections. Swin Video Transformer [56] has achieved state-of-the-art performance in action recognition by using shifted windows to limit the self-attention computational cost to local windows and also allow learning attention across windows. Egocentric Video Analysis Apart from third-view videos, egocentric videos provide distinguished viewpoints that pose several challenges in action recognition. Many datasets have been introduced to support the egocentric video analysis tasks, e.g., Charades-Ego [72], EPIC Kitchens [16,15], Ego4D [35], EgoClip [51], HOI4D [55].\nThese datasets provide several standard egocentric benchmarks, e.g., action recognition [72,16,35], action anticipation [35,15], action detection [15], video-text retrieval [35,51]. Many methods have been proposed for egocentric action recognition, including Multi-stream Networks [58,47,45,87], RNNs [32,31,77], 3D CNNs [62,57], Graph Neural Networks [60]. Despite the difference in network designs, these prior works are usually pre-trained on the large-scale third-view datasets before fine-tuning them on the first-view dataset. However, there is a significant difference between the first-view and third-view datasets. Thus, a direct fine-tuning approach without consideration of modeling view changes could result in less efficiency. Many methods have improved the performance of the action recognition models by using additional egocentric cues or tasks, including gaze and motor attention [59,47,53], object detection [30,7,17,89], hand interactions [79,67,41]. Ego-Exo [48] presented an approach by introducing the auxiliary egocentric tasks i.e., ego-score, object-score, and interaction map predictions, into the pre-training phase on the third-view dataset. However, these methods usually require the labels of auxiliary egocentric tasks on the thirdview datasets or rely on pseudo labels produced by the offthe-shelf pre-trained models on egocentric tasks." 
}, { "figure_ref": [], "heading": "Cross-view Video Learning", "publication_ref": [ "b93", "b79", "b69", "b68", "b13", "b65", "b12", "b74", "b70", "b67", "b90", "b0", "b91", "b21", "b63", "b64", "b51" ], "table_ref": [], "text": "The cross-view learning approaches have been exploited and proposed for several tasks, e.g., geo-localization [94,80,70,69], semantic segmentation [14,66,18]. Meanwhile, in video understanding tasks, several prior methods have alleviated the cross-view gap between exocentric and egocentric domains by using domain adaptation [33,13], learning viewpoint-invariant [75,71,2,68], or learning joint embedding [91,1,3,4,92]. Other works utilized generative models to synthesize the other viewpoints from a given image/video [22,64,65,52]. However, these methods often require either a pair of data of both first and third views to learn the joint embedding or a share label domain when using domain adaptation." }, { "figure_ref": [], "heading": "Cross-view Learning in Action Recognition", "publication_ref": [ "b0", "b47", "b70", "b25", "b70", "b47", "b25", "b47", "b48", "b25", "b47" ], "table_ref": [], "text": "Let x exo ∈ R T ×H×W ×3 be a third-view (exocentric) video and y exo ∈ Y exo be its corresponding ground-truth class, Y exo is the set of classes in the exocentric dataset. Similarly, x ego ∈ R T ×H×W ×3 be a first-view (egocentric) video and y ego ∈ Y ego be its corresponding ground-truth class, Y exo is the set of classes in the egocentric dataset. Let F : R T ×H×W ×3 → R D be the backbone network that maps a video into the deep representation, C exo and C ego are the classifier of exocentric and egocentric videos that predict the class probability from the deep representation. Then, the basic learning approach to learning the action model from the exocentric to the egocentric view can be formulated as a supervised objective, as in Eqn. (1).\narg min θ F ,θ Cexo ,θ Cego [Ex exo,yego Lce(Cexo(F (xexo)), yexo) +Ex ego,yego Lce(Cego(F (xego)), yego)](1)\nwhere θ F , θ Cexo , θ Cego are the network parameters, L ce is the supervised loss (i.e., cross-entropy loss). Several prior approaches [48,71] have adopted this learning approach to learn a cross-view action recognition model. Then, other prior methods have further improved the performance of models by using a large pretrained model [26,56], domain adaptation [33], learning a joint embedding between two views [71], learning auxiliary egocentric tasks [48].\nAlthough these prior approaches [26,48,56,49] showed their potential in improving performance, they have not effectively addressed the problem of cross-view learning. In particular, domain adaptation methods [33] are often employed in the context of environment changes (e.g., simulation to real data), and the camera views are assumed on the same position (either third view or first view). However, there is a huge difference in videos between the third view and the first view. Thus, domain adaptation is considered less effective in the cross-view setting. Meanwhile, fine-tuning the first-view action model on the large pretrained models [26,56] usually relies on the deep representation learned from the large-scale third-view data. However, these deep representations do not have any guarantee mechanism well generalized in the first-view video. Also, learning the join embedding or auxiliary egocentric tasks [48] suffer a similar problem due to their design of learning approaches without the consideration of camera changes. 
In addition, it requires a pair of views of video data during training. Therefore, to effectively learn the cross-view action recognition model, the learning approach should consider the following properties: (1) the geometric correlation between the third view and the first view has to be considered during the learning process, (2) the mechanism that guarantees the knowledge learned is well generalized from the third view to the first view." }, { "figure_ref": [], "heading": "Cross-view Geometric Correlation in Attentions", "publication_ref": [ "b19", "b48", "b5" ], "table_ref": [], "text": "With the success of Vision Transformer in action recognition [20,49,56], the self-attention mechanism is the key to learning the robust action recognition models. Therefore, in our work, we propose explicitly modeling crossview learning in action recognition models through the self-attention mechanism. First, we revise the geometric correlation of the exocentric and egocentric views in obtaining the videos. Let us assume that xego is the corresponding egocentric video of the exocentric video x exo , and K exo , [R exo , t exo ] and K ego , [R ego , t ego ] are the camera (intrinsic and extrinsic) parameters of third and first views, respectively. Then, the procedure of obtaining the videos can be formed as a rendering function as in Eqn. (2).\nxexo = R(Kexo, [Rexo, texo]) xego = R(Kego, [Rego, tego]) (2)\nwhere R is a rendering function that obtains the video with the given corresponding camera matrix and position. In Eqn.\n(2), the rendering function R remains the same across views as x exo and xego are the pair video of the same scene. Moreover, as the camera parameters are represented by matrices, there exist linear transformations of the cameras between two views defined as in Eqn.\n(3).\nKego = T K × Kexo [Rego, tego] = T Rt × [Rexo, texo](3)\nRemark 1: Cross-view Geometric Transformation From Eqn.\n(2) and Eqn.\n(3), we have observed that there exists a geometric transformation T of videos (images) between two camera views as follows:\nxego = T (xexo; T K , T Rt )(4)\nIn our proposed method, we consider the action recognition backbone model F designed as a Transformer with self-attention layers. Given a video, the input of the Transformer is represented by N + 1 tokens, including N = It should be noted the video and its attention map could be considered as a pixel-wised correspondence. Even though the patch size is greater than 1 (K, P > 1), a single value in the attention map always corresponds to a group of pixels in its patch. Therefore, without a lack of generality, with the changes of cameras from the exocentric view to the eccentric view, we argue that the focuses of the model (the attention maps) also change correspondingly to the transitions of the videos across views because both videos are representing the same action scene from different camera views. As a result, the transformation between two attention maps, i.e., a exo and āego , can also be represented by a transformation T ′ w.r.t. the camera transformation matrices T K and T Rt . Remark 2: Cross-view Equivalent Transformation of Videos and Attentions We argue that the transformations T and T ′ remain similar (T ≡ T ′ ) as they are both the transformation interpolation based on the camera transformation matrices T K and T Rt . 
Hence, the transformation T could be theoretically adopted to the attention transformation.\nāego = T ′ (aexo; T K , T Rt ) ≡ T (aexo; T K , T Rt )(5)\nFollowed by the above remarks, we further consider the cross-view correlation between the videos and the attention maps. Let D x (x exo , xego ) and D a (a exo , āego ) be the metrics measure the cross-view correlation in videos (x exo , xego ) and attention maps (a exo , āego ), respectively.\nFrom Remark 1 and Remark 2, we have observed that the transformation of both video and attention from the exocentric view to the egocentric view is represented by the shared transformation T and the camera transformation matrices T K , T Rt . In other words, the cross-view relation between D x (x exo , xego ) and D a (a exo , āego ) relies on the shared transformation T (•, T K , T Rt ) and the difference between x exo and a exo . Therefore, we argue that the cross-view video correlation D x (x exo , xego ) is theoretically proportional to the cross-view attention correlation D a (a exo , āego ). In addition, the transformations between the two cameras are linear, as indicated in Eqn. (3). Thus, in our work, the proportion between D x (x exo , xego ) and D a (a exo , āego ) can be theorized as a linear relation and modeled by a linear scale α as in Eqn. (6).\nDx(xexo, xego) ∝ Da(aexo, āego) ⇔ Dx(xexo, xego) = αDa(aexo, āego)(6)" }, { "figure_ref": [ "fig_2" ], "heading": "Unpaired Cross-View Self-Attention Loss", "publication_ref": [ "b5", "b6", "b5", "b6", "b7", "b12", "b7", "b7", "b60" ], "table_ref": [], "text": "Eqn. ( 6) defines a condition that explicitly models the self-attention correlation based on the geometric transformation across views. Thus, to efficiently learn the action recognition model from the exocentric to the egocentric view, Eqn (1) can be optimized w.r.t the condition in Eqn. (6) and presented as in Eqn. (7). \nHence, optimizing Eqn. ( 7) can be solved by considering the cross-view constraint as a regularizer during training, i.e., ||D x (x exo , xego ) -αD a (a exo , āego )|| 2 2 . However, it is noted that it requires a pair of third-view and first-view videos during training. Meanwhile, in practice, the video data of these two views are often recorded independently. Thus, optimizing Eqn. ( 7) by imposing the constraint of Eqn. (6) on pair data remains an ill-posed problem. Instead of solving Eqn. (7) on pair data, let us consider all cross-view unpaired samples (x exo , x ego ). In addition, we assume that the cross-view correlation of videos D x and attention maps D a is bounded by a certain threshold β, i.e., ∀x exp , x ego : D x (x exo , x ego ) ≤ β and ∀a exp , a ego : D a (a exo , a ego ) ≤ β. This assumption implies that the distribution shifts (i.e., the changes of views) from the exocentric to the egocentric view are bounded to ensure that the model is able to generalize its capability across views. Hence, our Cross-view Self-Attention Loss on unpaired data can be formulated as in Eqn. (8).\nL self = Ex exo,xego λ||Dx(xexo, xego) -αDa(aexo, aego)|| 2 2 (8)\nwhere λ is the hyper-parameter controlling the relative importance of L self . Intuitively, even though the pair samples between exocentric and egocentric views are inaccessible, the cross-view constraints between videos and attention maps can still be imposed by modeling the topological constraint among unpaired samples. Furthermore, under our cross-view distribution shift assumption, our loss in Eqn. (8) can be proved as an upper bound of the constrain Eqn. 
(6) on pair samples as follows:\nDx(xexo,xego) -αDa(aexo, āego) ≤ Dx(xexo, xego) -αDa(aexo, aego) + (1 + α)β(9)\nEqn. ( 13) can be proved using the triangle inequality property of D x and D a . It is detailed in the supplementary. As shown in Eqn. (13), as\nD x (x exo , x ego ) - αD a (a exo , a ego ) + (1 + α)β is the upper bound of D x (x exo , xego ) -αD a (a exo , āego ), minimizing ||D x (x exo , x ego ) -αD a (a exo , a ego )|| 2 2 also imposes the constraint of ||D x (x exo , xego )-αD a (a exo , āego )|| 2\n2 . Noted that α and β are constant numbers, which can be excluded during training. Therefore, the constraints of cross-view correlation on pair samples in Eqn. ( 7) is guaranteed when optimizing L self defined in Eqn. (8). More importantly, our proposed cross-view self-attention loss does NOT require the pair data between exocentric and egocentric views during training. Fig. 2 illustrates our proposed cross-view learning framework. Cross-view Topological Preserving Property: The proposed loss defined in Eqn. (8) to impose the cross-view correlation over all unpaired samples is a special case of the Gromov-Wasserstein [61] distance between the video and the attention map distributions where the association matrix has been pre-defined. As a result, our loss inherits these Gromov-Wasserstein properties to preserve the topological distributions between the video and attention space. Particularly, the cross-view topological structures of video distributions are preserved in cross-view attention distributions." }, { "figure_ref": [], "heading": "The Choices of Correlation Metrics", "publication_ref": [ "b7", "b9", "b20" ], "table_ref": [], "text": "As shown in Eqn. (8), the choice of correlation metric D x and D a is one of the primary factors directly influencing the performance of the action recognition models. The direct metrics, i.e., ℓ 2 , could be straightforwardly adopted for the correlation metric D x and D a . However, this direct approach is ineffective because the deep semantic information of videos is not well modeled in the direct Euclidean metric ℓ s . To overcome this limitation, we first propose to design D x as the correlation metric on the deep latent spaces that can be defined as in Eqn. (10).\nDx(xexo, xego) = D G x (xexo, xego) = ||G(xexo) -G(xego)|| 2 2 (10\n)\nwhere G : R T ×H×W ×3 → R K be the deep network trained on the large-scale dataset. Intuitively, measuring the correlation between two videos provides a higher level of semantic information since the deep representation extracted by the large pre-trained model G capture more contextual information about the videos [40,21].\nAs D a measures the correlation between two attention maps where each of that is in the form of the probability distribution, D a should be defined as the statistical distance to measure the correlation between two probabilistic attention maps comprehensively. Thus, we propose to design D a as the Jensen-Shannon divergence defined as in Eqn. (11). " }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "This section first briefly presents the datasets and the implementation details in our experiments. Then, we analyze the effectiveness of the approach in ablative experiments followed by comparing results with prior methods on the standard benchmarks of first-view action recognition." 
}, { "figure_ref": [], "heading": "Datasets and Implementation Details", "publication_ref": [ "b25", "b7", "b42", "b19", "b22", "b48", "b25", "b9", "b7", "b47" ], "table_ref": [], "text": "Following the common practice in action recognition [26,8,56], Kinetics has been used as the third-view dataset in our experiment due to its large scale and diverse actions. To evaluate the effectiveness of our approach, we use EPIC-Kitchens and Charades-Ego as our first-view datasets. These two datasets are currently known as large-scale and challenging benchmarks in egocentric action recognition. Kinetics-400 [43] [20] for our Transformer backbone. Our model is implemented in Python using the PyTorch and PySlowFast [23] frameworks. The input video of our network consists of T = 16 frames sampled at the frame rate of 1/4 and the input resolution of each video frame is H × W = 224 × 224. Each video is tokenized by the non-overlapping patch size of K × P × P = 2 × 16 × 16. Each token is projected by an embedding where the dimension length of the embedding is set to 768. There are 12 Transformer layers in our model and the number of heads in each self-attention layer is set to 8. The entire framework is optimized by the Stochastic Gradient Descent algorithm, where our models are trained for 50 epochs. The cosine learning policy is utilized in our training, where the base learning rate is set to 0.00125. Similar to [49,26], to increase the diversity of training data, we also apply several augmentation methods during training. All of our models are trained on the four 40GB-VRAM A100 GPUs, and the batch size in each GPU is set to 4. Swin-B [56] pre-trained on the Kinetics-400 dataset has been adopted for our network G in Eqn. (10). Since we do not want the gradients produced by the supervised loss L ce being suppressed by the cross-view loss L self , the hyper-parameter λ is set to 5.10 -3 . In our evaluation, following prior works [8,48], each input is sampled in the middle of the video. The final result is obtained by averaging the prediction scores of three spatial crops, i.e., top-left, center, and bottom-right, from the video input. " }, { "figure_ref": [ "fig_5" ], "heading": "Ablation Studies", "publication_ref": [ "b9", "b70", "b12", "b12", "b25", "b5", "b47", "b47", "b47", "b47", "b47", "b9" ], "table_ref": [ "tab_0", "tab_1", "tab_2" ], "text": "Our ablative experiments report the results of our CVAR method with different settings trained on the Kinetics-400 → Charades-Ego and Kinetics-400 → EPIC-Kitchens-55 benchmarks. For fair comparisons, all the models are trained with the same learning configuration. Effectiveness of the scale α We study the effectiveness of the linear scale α to the performance of the model. In this experiment, the metrics defined in Eqn. (10) and Eqn. (11) have been adopted to D x and D a . The value α ranges from 0.0 to 2.0. When α = 0.0, it is equivalent to ViT simultaneously trained on both third-view and first-view datasets. As shown in Table 1, the mAP performance on the Charades-Ego benchmark is consistently improved when the value of α increases from 0.1 to 0.75 and achieves the best performance at the value of α = 0.75 and the mAP performance is 31.95%. Similarly, on the EPIC-Kitchen-55 benchmarks, the Top 1 and Top 5 accuracy is gradually improved w.r.t the increasing of α and reaches the maximum performance when the value of α is 1.50 in which the Top 1 accuracy on EPIC Verb and EPIC Noun are 73.52% and 68.19%. 
Then, the performance on both benchmarks inclines to steadily decrease when the value of α keeps increasing over the optimal point. Indeed, the variation in the video space is typically higher than in the attention maps due to the higher complexity of video data where the video data contains much more information, e.g., objects, humans, and interactions, etc.; meanwhile, the attention maps represent the focus of the models w.r.t model decisions. Thus, if the value of α is small, it could not represent the correct proportion of changes between videos and attention maps. Meanwhile, the higher value of α inclines to exaggerate the model focuses, i.e., attention maps, that results in the performance. [71] 20.00 SSDA [13] 23.10 I3D [13] 25.80 DANN [33] 23.62 SlowFast [26] 25.93 Frozen [6] 28.80 MViT-V2 25.65 Swin-B [56] 28.77 Ego-Exo + ResNet-50 [48] 26.23 Ego-Exo + SlowFast R50 [48] 28.04 Ego-Exo* + ResNet-50 [48] 27.47 Ego-Exo* + SlowFast R50 [48] 29.19 Ego-Exo* + SlowFast R101 [48] Effectiveness of the metrics This experiment studies the effectiveness of correlation metrics to the performance of the action recognition models on first-view videos. The optimal value of the linear α in the previous ablation study has been adopted in this experiment. For each metric correlation, we study its effect by comparing the performance of action recognition models using our metric in Eqn. (10) and Eqn. (11) against the Euclidean distance ℓ 2 . As our results in Table 2, by measuring the correlation of videos on the deep latent spaces, i.e., D G x , the performance of the action recognition model has been improved, e.g., 28.77% to 31.95% (results using D JS a ). This improvement is gained thanks to the deep semantic representation extracted by deep network G. Besides, the probability metric used to measure the correlation between attention maps, i.e., D JS a , has illustrated its significant role. For example, the performance of the model has been promoted by +2.84% from 29.11% (using ℓ 2 ) to 31.95% (using D JS a ). As the attention map is in the form of the probability distribution, using the Jensen-Shannon divergence as the correlation metric provides the informative difference of the model's focus over the videos. Meanwhile, ℓ 2 tends to rely on the difference of the magnitude of the attention, which provides less correlation information between two attentions. studies the effectiveness of imposing the cross-view loss into attention maps of the Transformer layers. In this experiment, we adopt the optimal setting of the linear scale (α) and correlation metrics (D x , D a ) in the previous ablation studies. We consider four groups of Transformer layers where each group consists of three consecutive layers, i.e., Layer 1-3, Layer 4-6, Layer 7-9, and Layer 10-12. As experimental results in Table 3, the later Transformer layers of our model play an important role than the initial ones. In particular, when imposing the cross-view loss on only the first three Transformer layers, the performance of Charades-Ego has achieved 25.65% and the Top 1 accuracy of verb and noun predictions in EPIC-Kitchens-55 is 60.47% and 57.85%. Meanwhile, enforcing the cross-view self-attention loss into all attention layers brings better performance and achieves the best performance, i.e., the mAP of 31.95% on Charades-Ego and Top 1 accuracy of 73.52% and 68.19% on EPIC-Kitchens-55. Fig. 3 visualizes the attention maps of the model predictions." 
}, { "figure_ref": [], "heading": "Effectiveness of Transformer Layers This experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Comparisons with State-of-the-Art Results", "publication_ref": [ "b70", "b12", "b12", "b25", "b5", "b48", "b47", "b19", "b5", "b23", "b12", "b47", "b12", "b25", "b25", "b48", "b47", "b25", "b48", "b47", "b44", "b49", "b25", "b48", "b47", "b48", "b19" ], "table_ref": [ "tab_3", "tab_3", "tab_4", "tab_5" ], "text": "Kinetics-400 → Charades-Ego Table 4 presents results of our CVAR compared to prior methods, i.e., ActorObserver-Net [71], SSDA [13], I3D [13], DANN [33], SlowFast [26], Frozen [6], MViT-V2 [49], Swin-B [56], and Ego-Exo [48], on the Charades-Ego benchmark. Our results in Table 4 have gained SOTA performance where our mAP accuracy in our approach has achieved 31.95%. Compared to direct training approaches [20,6,56,24,13], our method achieves better performance than other methods by a large margin, e.g., higher than Swin-B [56] by 3.18%. In comparison with the prior pre-training approach using additional egocentric tasks, our result is higher than Ego-Exo [48] by +1.82%. Meanwhile, compared with domain adaptation approaches [33,13], our methods outperform DANN by +8.33%. Kinetics-400 → EPIC-Kitchens-55 Table 5 presents the results of our approach compared to prior methods, i.e., ResNet-50 [26], DANN [33], SlowFast [26], MViT-V2 [49], Swin-B [56], and Ego-Exo [48], on the EPIC-Kitchens-55 benchmark. Our proposed CVAR has gained the SOTA performance where our Top 1 accuracy on EPIC Verb and EPIC Noun of our approach has achieved 73.52% and 68.19%, respectively. Our proposed approach outperforms the traditional direct training approaches [26,49,56] by a large margin. In addition, our result is higher than the pre-training approach using additional egocentric tasks, i.e., Ego-Exo [48], by +6.48% and +18.4% on Top 1 accuracy of verb and noun predictions. Our method also outperforms the domain adaptation approach [33]. Kinetics-400 → EPIC-Kitchens-100 Table 6 compares our results with TSN [86], TRN [93], TBN [45], TSM [50], SlowFast [26], MViT-V2 [49], Ego-Exo using SlowFast-R50 [48], and Swin-B [56] on the EPIC-Kitchens-100 benchmark. Overall, our proposed CVAR has achieved the SOTA performance where the Top 1 accuracy of verb, noun, and action predictions are 69.37%, 61.03%, and 46.15%, respectively. Also, CVAR has gained competitive performance on the sets of unseen participants and tail classes. Compared to prior direct training methods [56,49,20], out method outperforms these approaches by a notable margin, i.e. higher than Swin-B by +1.44% and +2.34% on Top 1 Accuracy of Verb and Noun predictions in overall. Also, our results outperform Ego-Exo in not only overall accuracy but also in unseen participants and tail classes." }, { "figure_ref": [], "heading": "Conclusions and Limitations", "publication_ref": [ "b12", "b7" ], "table_ref": [], "text": "Conclusions This paper presents a novel approach for cross-view learning in action recognition (CVAR). By using our proposed cross-view self-attention loss, our approach has effectively transferred the knowledge learned from the exocentric to the egocentric view. Moreover, our approach does not require pairs of videos across views which increases the flexibility of our learning approaches in practice. Experimental results on standard egocentric action recognition benchmarks, have shown our SOTA performance. 
Particularly, our method outperforms the prior direct training, pre-training, and domain adaptation methods. Limitation of Linear Relation Modeling the relation in Eqn. (6) by the linear scale α could bring some potential limitations, as the cross-view correlation of videos and attention maps could be a non-linear relation and may depend on the individual video and its corresponding attention map. Our future work will consider modeling this relation with a deep network to gain further improvement. Limitation of Bounded Distribution Shifts Although this assumption allows us to establish the bounded constraint as in Eqn. (13) and further derive our loss in Eqn. (8), it also has potential limitations. If the changes across views of videos (attention maps) are significantly large, the bounded constraint in Eqn. (13) may not be tight. Thus, the models may not generalize well under large distribution shifts." }, { "figure_ref": [], "heading": "Proof of Eqn (9)", "publication_ref": [], "table_ref": [], "text": "D x and D a are the metrics that measure the correlation of videos and attention maps, respectively; therefore, for all x ego and a ego , these metrics have to satisfy the triangle inequality as follows: D x (x exo , x ego ) + D x (x ego , x̄ego ) ≥ D x (x exo , x̄ego ) and D a (a exo , a ego ) + D a (a ego , āego ) ≥ D a (a exo , āego ).\nIn addition, under our cross-view distribution shift assumption, the metrics D x and D a are bounded by a threshold β, i.e., D x (x exo , x ego ) ≤ β and D a (a exo , a ego ) ≤ β. As a result, the cross-view self-attention constraint can be further extended as follows: \nD x (x exo , x̄ego ) -αD a (a exo , āego ) ≤ [D x (x exo , x ego ) + D x (x ego , x̄ego )] -α[D a (a exo , a ego ) -D a (a ego , āego )] ≤ D x (x exo , x ego ) -αD a (a exo , a ego ) + (1 + α)β, where the last step applies the bounds D x (x ego , x̄ego ) ≤ β and D a (a ego , āego ) ≤ β, which is exactly the upper bound stated in Eqn. (9)." } ]
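The cross-view self-attention loss described in the sections above (Eqn. (8), combining the deep-feature video metric of Eqn. (10) with the Jensen-Shannon attention metric of Eqn. (11)) can be summarized in a minimal PyTorch sketch. The tensor layouts, the helper names `js_divergence` and `cross_view_self_attention_loss`, and the batch-wise pairing of unpaired exocentric/egocentric clips are illustrative assumptions rather than the authors' released implementation; `frozen_g` stands in for the Kinetics-pretrained network G mentioned in the implementation details.

```python
import torch


def js_divergence(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Jensen-Shannon divergence between batches of attention distributions.

    p, q: (B, N) tensors, each row summing to 1 over the N patch tokens.
    Returns a (B,) tensor of per-sample divergences.
    """
    m = 0.5 * (p + q)
    kl_pm = torch.sum(p * (torch.log(p + eps) - torch.log(m + eps)), dim=-1)
    kl_qm = torch.sum(q * (torch.log(q + eps) - torch.log(m + eps)), dim=-1)
    return 0.5 * (kl_pm + kl_qm)


def cross_view_self_attention_loss(
    x_exo: torch.Tensor,        # unpaired third-view clips, e.g. (B, 3, T, H, W)
    x_ego: torch.Tensor,        # unpaired first-view clips, same layout
    attn_exo: torch.Tensor,     # (B, N) exocentric attention over patch tokens
    attn_ego: torch.Tensor,     # (B, N) egocentric attention over patch tokens
    frozen_g: torch.nn.Module,  # pretrained feature extractor G, kept frozen
    alpha: float = 0.75,        # linear scale between D_x and D_a
    lam: float = 5e-3,          # weight of the loss relative to the CE terms
) -> torch.Tensor:
    # D_x: squared L2 distance between deep representations of the two clips (Eqn. 10).
    # G only supplies the video-correlation target, so no gradient flows through it.
    with torch.no_grad():
        g_exo = frozen_g(x_exo)
        g_ego = frozen_g(x_ego)
    d_x = torch.sum((g_exo - g_ego) ** 2, dim=-1)   # (B,)

    # D_a: Jensen-Shannon divergence between the attention maps (Eqn. 11).
    d_a = js_divergence(attn_exo, attn_ego)         # (B,)

    # L_self = lambda * || D_x - alpha * D_a ||_2^2, averaged over unpaired samples (Eqn. 8).
    return lam * torch.mean((d_x - alpha * d_a) ** 2)
```

Because D_x is computed under `no_grad`, gradients from this term reach the model only through the attention maps, which matches the intent of shaping the self-attention mechanism rather than the frozen feature extractor.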
2023-05-25
[ { "authors": "Shervin Ardeshir; Ali Borji", "journal": "", "ref_id": "b0", "title": "Ego2top: Matching viewers in egocentric and top-view videos", "year": "2016" }, { "authors": "Shervin Ardeshir; Ali Borji", "journal": "Computer Vision and Image Understanding", "ref_id": "b1", "title": "An exocentric look at egocentric actions and vice versa", "year": "2018" }, { "authors": "Shervin Ardeshir; Ali Borji", "journal": "", "ref_id": "b2", "title": "Integrating egocentric videos in top-view surveillance videos: Joint identification and temporal alignment", "year": "2018" }, { "authors": "Shervin Ardeshir; Sandesh Sharma; Ali Broji", "journal": "", "ref_id": "b3", "title": "Egoreid: Cross-view self-identification and human re-identification in egocentric and surveillance videos", "year": "2016" }, { "authors": "Anurag Arnab; Mostafa Dehghani; Georg Heigold; Chen Sun; Mario Lučić; Cordelia Schmid", "journal": "", "ref_id": "b4", "title": "Vivit: A video vision transformer", "year": "2021" }, { "authors": "Max Bain; Arsha Nagrani; Gül Varol; Andrew Zisserman", "journal": "", "ref_id": "b5", "title": "Frozen in time: A joint video and image encoder for end-to-end retrieval", "year": "2021" }, { "authors": "Fabien Baradel; Natalia Neverova; Christian Wolf; Julien Mille; Greg Mori", "journal": "", "ref_id": "b6", "title": "Object level visual reasoning in videos", "year": "2018" }, { "authors": "Gedas Bertasius; Heng Wang; Lorenzo Torresani", "journal": "", "ref_id": "b7", "title": "Is space-time attention all you need for video understanding", "year": "2021-07" }, { "authors": "Adrian Bulat; Juan-Manuel Perez-Rua; Swathikiran Sudhakaran; Brais Martinez; Georgios Tzimiropoulos", "journal": "", "ref_id": "b8", "title": "Space-time mixing attention for video transformer", "year": "2021" }, { "authors": "João Carreira; Eric Noland; Andras Banki-Horvath; Chloe Hillier; Andrew Zisserman", "journal": "", "ref_id": "b9", "title": "A short note about kinetics-600", "year": "2018" }, { "authors": "João Carreira; Eric Noland; Chloe Hillier; Andrew Zisserman", "journal": "", "ref_id": "b10", "title": "A short note on the kinetics-700 human action dataset", "year": "2019" }, { "authors": "João Carreira; Andrew Zisserman", "journal": "IEEE", "ref_id": "b11", "title": "Quo vadis, action recognition? 
A new model and the kinetics dataset", "year": "2017" }, { "authors": "Jinwoo Choi; Gaurav Sharma; Manmohan Chandraker; Jia-Bin Huang", "journal": "", "ref_id": "b12", "title": "Unsupervised and semi-supervised domain adaptation for action recognition from drones", "year": "2020" }, { "authors": "Benjamin Coors; Alexandru Paul Condurache; Andreas Geiger", "journal": "", "ref_id": "b13", "title": "Nova: Learning to see in novel viewpoints and domains", "year": "2019" }, { "authors": "Dima Damen; Hazel Doughty; Giovanni Maria Farinella; Antonino Furnari; Jian Ma; Evangelos Kazakos; Davide Moltisanti; Jonathan Munro; Toby Perrett; Will Price; Michael Wray", "journal": "", "ref_id": "b14", "title": "Rescaling egocentric vision", "year": "2020" }, { "authors": "Dima Damen; Hazel Doughty; Giovanni Maria Farinella; Sanja Fidler; Antonino Furnari; Evangelos Kazakos; Davide Moltisanti; Jonathan Munro; Toby Perrett; Will Price", "journal": "", "ref_id": "b15", "title": "Scaling egocentric vision: The epic-kitchens dataset", "year": "2018" }, { "authors": "Eadom Dessalene; Michael Maynord; Chinmaya Devaraj; Cornelia Fermuller; Yiannis Aloimonos", "journal": "", "ref_id": "b16", "title": "Egocentric object manipulation graphs", "year": "2020" }, { "authors": "Daniele Di; Mauro ; Antonino Furnari; Giuseppe Patanè; Sebastiano Battiato; Giovanni Maria Farinella", "journal": "Pattern Recognition Letters", "ref_id": "b17", "title": "Sceneadapt: Scene-based domain adaptation for semantic segmentation using adversarial learning", "year": "2020" }, { "authors": "Jeff Donahue; Lisa Anne Hendricks; Sergio Guadarrama; Marcus Rohrbach; Subhashini Venugopalan; Trevor Darrell; Kate Saenko", "journal": "IEEE", "ref_id": "b18", "title": "Long-term recurrent convolutional networks for visual recognition and description", "year": "2015" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b19", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Chi Nhan Duong; Thanh-Dat Truong; Khoa Luu; Gia Kha; Hung Quach; Kaushik Bui; Roy", "journal": "", "ref_id": "b20", "title": "Vec2face: Unveil human faces from their blackbox features in face recognition", "year": "2020" }, { "authors": "Mohamed Elfeki; Krishna Regmi; Shervin Ardeshir; Ali Borji", "journal": "", "ref_id": "b21", "title": "From third person to first person: Dataset and baselines for synthesis and retrieval", "year": "" }, { "authors": "Yanghao Haoqi Fan; Bo Li; Wan-Yen Xiong; Christoph Lo; Feichtenhofer", "journal": "", "ref_id": "b22", "title": "Pyslowfast", "year": "2020" }, { "authors": "Bo Haoqi Fan; Karttikeya Xiong; Yanghao Mangalam; Zhicheng Li; Jitendra Yan; Christoph Malik; Feichtenhofer", "journal": "", "ref_id": "b23", "title": "Multiscale vision transformers", "year": "2021" }, { "authors": "Christoph Feichtenhofer", "journal": "", "ref_id": "b24", "title": "X3d: Expanding architectures for efficient video recognition", "year": "2020" }, { "authors": "Christoph Feichtenhofer; Haoqi Fan; Jitendra Malik; Kaiming He", "journal": "IEEE", "ref_id": "b25", "title": "Slowfast networks for video recognition", "year": "2008" }, { "authors": "Christoph Feichtenhofer; Axel Pinz; Richard P Wildes", "journal": "NeurIPS", "ref_id": "b26", "title": "Spatiotemporal residual networks for video action 
recognition", "year": "2016" }, { "authors": "Christoph Feichtenhofer; Axel Pinz; Richard P Wildes", "journal": "IEEE", "ref_id": "b27", "title": "Spatiotemporal multiplier networks for video action recognition", "year": "2017" }, { "authors": "Christoph Feichtenhofer; Axel Pinz; Andrew Zisserman", "journal": "IEEE", "ref_id": "b28", "title": "Convolutional two-stream network fusion for video action recognition", "year": "2016" }, { "authors": "A Furnari; S Battiato; K Grauman; G Maria Farinella", "journal": "Journal of Visual Communication and Image Representation", "ref_id": "b29", "title": "Next-active-object prediction from egocentric videos", "year": "2017" }, { "authors": "Antonino Furnari; Giovanni Farinella", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b30", "title": "Rolling-unrolling lstms for action anticipation from first-person video", "year": "2020" }, { "authors": "Antonino Furnari; Giovanni Maria Farinella", "journal": "", "ref_id": "b31", "title": "What would you expect? anticipating egocentric actions with rollingunrolling lstms and modality attention", "year": "2019" }, { "authors": "Yaroslav Ganin; Evgeniya Ustinova; Hana Ajakan; Pascal Germain; Hugo Larochelle; Mario Franc ¸ois Laviolette; Victor Marchand; Lempitsky", "journal": "The Journal of Machine Learning Research", "ref_id": "b32", "title": "Domain-adversarial training of neural networks", "year": "2016" }, { "authors": "Raghav Goyal; Samira Ebrahimi Kahou; Vincent Michalski; Joanna Materzyńska; Susanne Westphal; Heuna Kim; Valentin Haenel; Ingo Fruend; Peter Yianilos; Moritz Mueller-Freitag; Florian Hoppe; Christian Thurau; Ingo Bax; Roland Memisevic", "journal": "", "ref_id": "b33", "title": "The \"something something\" video database for learning and evaluating visual common sense", "year": "2017" }, { "authors": "Kristen Grauman; Andrew Westbury; Eugene Byrne; Zachary Chavis; Antonino Furnari; Rohit Girdhar; Jackson Hamburger; Hao Jiang; Miao Liu; Xingyu Liu; Miguel Martin; Tushar Nagarajan; Ilija Radosavovic; Santhosh Kumar Ramakrishnan; Fiona Ryan; Jayant Sharma; Michael Wray; Mengmeng Xu; Eric Zhongcong Xu; Chen Zhao; Siddhant Bansal; Dhruv Batra; Vincent Cartillier; Sean Crane; Tien Do; Morrie Doulaty; Akshay Erapalli; Christoph Feichtenhofer; Adriano Fragomeni; Qichen Fu; Christian Fuegen; Abrham Gebreselasie; Cristina Gonzalez; James Hillis; Xuhua Huang; Yifei Huang; Wenqi Jia; Weslie Khoo; Jachym Kolar; Satwik Kottur; Anurag Kumar; Federico Landini; Chao Li; Yanghao Li; Zhenqiang Li; Karttikeya Mangalam; Raghava Modhugu; Jonathan Munro; Tullie Murrell; Takumi Nishiyasu; Will Price; Paola Ruiz Puentes; Merey Ramazanova; Leda Sari; Kiran Somasundaram; Audrey Southerland; Yusuke Sugano; Ruijie Tao; Minh Vo; Yuchen Wang; Xindi Wu; Takuma Yagi; Yunyi Zhu; Pablo Arbelaez; David Crandall; Dima Damen; Giovanni Maria Farinella; Bernard Ghanem; Vamsi Krishna Ithapu; C V Jawahar; Hanbyul Joo; Kris Kitani; Haizhou Li; Richard Newcombe; Aude Oliva; Hyun Soo Park; James M Rehg; Yoichi Sato; Jianbo Shi; Mike Zheng Shou; Antonio Torralba; Lorenzo Torresani; Mingfei Yan; Jitendra Malik", "journal": "", "ref_id": "b34", "title": "Ego4d: Around the World in 3,000 Hours of Egocentric Video", "year": "2022" }, { "authors": "Chunhui Gu; Chen Sun; David A Ross; Carl Vondrick; Caroline Pantofaru; Yeqing Li; Sudheendra Vijayanarasimhan; George Toderici; Susanna Ricco; Rahul Sukthankar; Cordelia Schmid; Jitendra Malik", "journal": "IEEE", "ref_id": "b35", "title": "AVA: A video 
dataset of spatio-temporally localized atomic visual actions", "year": "2018" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "IEEE", "ref_id": "b36", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Roei Herzig; Elad Ben-Avraham; Karttikeya Mangalam; Amir Bar; Gal Chechik; Anna Rohrbach; Trevor Darrell; Amir Globerson", "journal": "", "ref_id": "b37", "title": "Object-region video transformers", "year": "2022-06" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural Comput", "ref_id": "b38", "title": "Long short-term memory", "year": "1997" }, { "authors": "Justin Johnson; Alexandre Alahi; Li Fei-Fei", "journal": "Springer", "ref_id": "b39", "title": "Perceptual losses for real-time style transfer and super-resolution", "year": "2016" }, { "authors": "Georgios Kapidis; Ronald Poppe; Elsbeth Van Dam; Lucas Noldus; Remco Veltkamp", "journal": "Smart-World/SCALCOM/UIC/ATC/CBDCom/IOP/SCI)", "ref_id": "b40", "title": "Egocentric hand track and object-based human action recognition", "year": "2019" }, { "authors": "Andrej Karpathy; George Toderici; Sanketh Shetty; Thomas Leung; Rahul Sukthankar; Fei-Fei Li", "journal": "IEEE", "ref_id": "b41", "title": "Large-scale video classification with convolutional neural networks", "year": "2014" }, { "authors": "Will Kay; Joao Carreira; Karen Simonyan; Brian Zhang; Chloe Hillier; Sudheendra Vijayanarasimhan; Fabio Viola; Tim Green; Trevor Back; Paul Natsev", "journal": "", "ref_id": "b42", "title": "The kinetics human action video dataset", "year": "2017" }, { "authors": "Will Kay; João Carreira; Karen Simonyan; Brian Zhang; Chloe Hillier; Sudheendra Vijayanarasimhan; Fabio Viola; Tim Green; Trevor Back; Paul Natsev; Mustafa Suleyman; Andrew Zisserman", "journal": "", "ref_id": "b43", "title": "The kinetics human action video dataset", "year": "2017" }, { "authors": "Evangelos Kazakos; Arsha Nagrani; Andrew Zisserman; Dima Damen", "journal": "", "ref_id": "b44", "title": "Epic-fusion: Audio-visual temporal binding for egocentric action recognition", "year": "2019" }, { "authors": "Haoxin Li; Wei-Shi Zheng; Jianguo Zhang; Haifeng Hu; Jiwen Lu; Jian-Huang Lai", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b45", "title": "Egocentric action recognition by automatic relation modeling", "year": "2023" }, { "authors": "Yin Li; Miao Liu; James M Rehg", "journal": "", "ref_id": "b46", "title": "In the eye of beholder: Joint learning of gaze and actions in first person video", "year": "2018" }, { "authors": "Yanghao Li; Tushar Nagarajan; Bo Xiong; Kristen Grauman", "journal": "", "ref_id": "b47", "title": "Ego-exo: Transferring visual representations from third-person to first-person videos", "year": "2021" }, { "authors": "Yanghao Li; Chao-Yuan Wu; Haoqi Fan; Karttikeya Mangalam; Bo Xiong; Jitendra Malik; Christoph Feichtenhofer", "journal": "", "ref_id": "b48", "title": "Mvitv2: Improved multiscale vision transformers for classification and detection", "year": "2022" }, { "authors": "Ji Lin; Chuang Gan; Song Han", "journal": "", "ref_id": "b49", "title": "Tsm: Temporal shift module for efficient video understanding", "year": "2019" }, { "authors": "Kevin Qinghong; Lin ; Alex Jinpeng Wang; Mattia Soldan; Michael Wray; Rui Yan; Eric Zhongcong Xu; Difei Gao; Rongcheng Tu; Wenzhe Zhao; Weijie Kong", "journal": "", "ref_id": "b50", "title": "Egocentric video-language pretraining", "year": "2022" }, { "authors": "Gaowen Liu; Hao Tang; 
Hugo Latapie; Yan Yan", "journal": "", "ref_id": "b51", "title": "Exocentric to egocentric image generation via parallel generative adversarial network", "year": "2020" }, { "authors": "Miao Liu; Siyu Tang; Yin Li; James Rehg", "journal": "", "ref_id": "b52", "title": "Forecasting human object interaction: Joint prediction of motor attention and egocentric activity", "year": "2019" }, { "authors": "Tianshan Liu; Kin-Man Lam", "journal": "", "ref_id": "b53", "title": "A hybrid egocentric activity anticipation framework via memory-augmented recurrent and one-shot representation forecasting", "year": "2022-06" }, { "authors": "Yunze Liu; Yun Liu; Che Jiang; Kangbo Lyu; Weikang Wan; Hao Shen; Boqiang Liang; Zhoujie Fu; He Wang; Li Yi", "journal": "", "ref_id": "b54", "title": "Hoi4d: A 4d egocentric dataset for category-level humanobject interaction", "year": "2022-06" }, { "authors": "Ze Liu; Jia Ning; Yue Cao; Yixuan Wei; Zheng Zhang; Stephen Lin; Han Hu", "journal": "", "ref_id": "b55", "title": "Video swin transformer", "year": "2021" }, { "authors": "Minlong Lu; Danping Liao; Ze-Nian Li", "journal": "", "ref_id": "b56", "title": "Learning spatiotemporal attention for egocentric action recognition", "year": "2019" }, { "authors": "Minghuang Ma; Haoqi Fan; Kris M Kitani", "journal": "", "ref_id": "b57", "title": "Going deeper into first-person activity recognition", "year": "2016" }, { "authors": "Stefan Mathe; Cristian Sminchisescu", "journal": "", "ref_id": "b58", "title": "Dynamic eye movement datasets and learnt saliency models for visual action recognition", "year": "2012" }, { "authors": "Tushar Nagarajan; Yanghao Li; Christoph Feichtenhofer; Kristen Grauman", "journal": "", "ref_id": "b59", "title": "Ego-topo: Environment affordances from egocentric video", "year": "2020" }, { "authors": "Gabriel Peyré; Marco Cuturi", "journal": "", "ref_id": "b60", "title": "Computational optimal transport", "year": "2018" }, { "authors": "Fiora Pirri; Lorenzo Mauro; Edoardo Alati; Valsamis Ntouskos; Mahdieh Izadpanahkakhk; Elham Omrani", "journal": "", "ref_id": "b61", "title": "Anticipation and next action forecasting in video: an end-toend model with memory", "year": "2019" }, { "authors": "Zhaofan Qiu; Ting Yao; Tao Mei", "journal": "IEEE", "ref_id": "b62", "title": "Learning spatiotemporal representation with pseudo-3d residual networks", "year": "2017" }, { "authors": "Krishna Regmi; Ali Borji", "journal": "", "ref_id": "b63", "title": "Cross-view image synthesis using conditional gans", "year": "2018" }, { "authors": "Krishna Regmi; Mubarak Shah", "journal": "", "ref_id": "b64", "title": "Bridging the domain gap for ground-to-aerial image matching", "year": "2019" }, { "authors": "Yanchao Hanxiang Ren; He Yang; Bokui Wang; Qingnan Shen; Fan; C Karen Youyi Zheng; Leonidas Liu; Guibas", "journal": "", "ref_id": "b65", "title": "Adela: Automatic dense labeling with attention for viewpoint shift in semantic segmentation", "year": "2022" }, { "authors": "Dandan Shan; Jiaqi Geng; Michelle Shu; David F Fouhey", "journal": "", "ref_id": "b66", "title": "Understanding human hands in contact at internet scale", "year": "2020" }, { "authors": "Jinghuan Shang; Srijan Das; Michael S Ryoo", "journal": "", "ref_id": "b67", "title": "Learning viewpoint-agnostic visual representations by recovering tokens in 3d space", "year": "2022" }, { "authors": "Yujiao Shi; Liu Liu; Xin Yu; Hongdong Li", "journal": "", "ref_id": "b68", "title": "Spatialaware feature aggregation for image based cross-view 
geolocalization", "year": "" }, { "authors": "Yujiao Shi; Xin Yu; Dylan Campbell; Hongdong Li", "journal": "", "ref_id": "b69", "title": "Where am i looking at? joint location and orientation estimation by cross-view matching", "year": "2020" }, { "authors": "Abhinav Gunnar A Sigurdsson; Cordelia Gupta; Ali Schmid; Karteek Farhadi; Alahari", "journal": "", "ref_id": "b70", "title": "Actor and observer: Joint modeling of first and third-person videos", "year": "2018" }, { "authors": "Abhinav Gunnar A Sigurdsson; Cordelia Gupta; Ali Schmid; Karteek Farhadi; Alahari", "journal": "", "ref_id": "b71", "title": "Charades-ego: A large-scale dataset of paired third and first person videos", "year": "2018" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "NeurIPS", "ref_id": "b72", "title": "Two-stream convolutional networks for action recognition in videos", "year": "2014" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "ICLR", "ref_id": "b73", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2015" }, { "authors": "Bilge Soran; Ali Farhadi; Linda Shapiro", "journal": "", "ref_id": "b74", "title": "Action recognition in the presence of one egocentric and multiple static cameras", "year": "2014" }, { "authors": "Tomáš Souček; Jean-Baptiste Alayrac; Antoine Miech; Ivan Laptev; Josef Sivic", "journal": "", "ref_id": "b75", "title": "Look for the change: Learning object states and state-modifying actions from untrimmed web videos", "year": "2022-06" }, { "authors": "Swathikiran Sudhakaran; Sergio Escalera; Oswald Lanz", "journal": "", "ref_id": "b76", "title": "Lsta: Long short-term attention for egocentric action recognition", "year": "2019" }, { "authors": "Christian Szegedy; Vincent Vanhoucke; Sergey Ioffe; Jonathon Shlens; Zbigniew Wojna", "journal": "IEEE", "ref_id": "b77", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "Federica Bugra Tekin; Marc Bogo; Pollefeys", "journal": "", "ref_id": "b78", "title": "H+ o: Unified egocentric recognition of 3d hand-object poses and interactions", "year": "2019" }, { "authors": "Aysim Toker; Qunjie Zhou; Maxim Maximov; Laura Leal-Taixe", "journal": "", "ref_id": "b79", "title": "Coming down to earth: Satellite-to-street view synthesis for geo-localization", "year": "2021-06" }, { "authors": "Du Tran; Lubomir D Bourdev; Rob Fergus; Lorenzo Torresani; Manohar Paluri", "journal": "IEEE", "ref_id": "b80", "title": "Learning spatiotemporal features with 3d convolutional networks", "year": "2015" }, { "authors": "Du Tran; Heng Wang; Lorenzo Torresani; Jamie Ray; Yann Lecun; Manohar Paluri", "journal": "IEEE", "ref_id": "b81", "title": "A closer look at spatiotemporal convolutions for action recognition", "year": "2018" }, { "authors": "Thanh-Dat Truong; Quoc-Huy Bui; Chi Nhan Duong; Han-Seok Seo; Son Lam Phung; Xin Li; Khoa Luu", "journal": "", "ref_id": "b82", "title": "Direcformer: A directed attention in transformer approach to robust action recognition", "year": "2022" }, { "authors": "Huiyu Wang; Mitesh Kumar Singh; Lorenzo Torresani", "journal": "", "ref_id": "b83", "title": "Ego-only: Egocentric action detection without exocentric pretraining", "year": "2023" }, { "authors": "Limin Wang; Yuanjun Xiong; Dahua Lin; Luc Van Gool", "journal": "IEEE", "ref_id": "b84", "title": "Untrimmednets for weakly supervised action recognition and detection", "year": "2017" }, { "authors": "Limin Wang; Yuanjun Xiong; Zhe Wang; Yu Qiao; Dahua Lin; Xiaoou 
Tang; Luc Van Gool", "journal": "Springer", "ref_id": "b85", "title": "Temporal segment networks: Towards good practices for deep action recognition", "year": "2016" }, { "authors": "Weiyao Wang; Du Tran; Matt Feiszli", "journal": "", "ref_id": "b86", "title": "What makes training multi-modal classification networks hard", "year": "2020" }, { "authors": "Xiaolong Wang; Ross B Girshick; Abhinav Gupta; Kaiming He", "journal": "IEEE", "ref_id": "b87", "title": "Non-local neural networks", "year": "2018" }, { "authors": "Xiaohan Wang; Linchao Zhu; Yu Wu; Yi Yang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b88", "title": "Symbiotic attention for egocentric action recognition with objectcentric alignment", "year": "2020" }, { "authors": "Saining Xie; Chen Sun; Jonathan Huang; Zhuowen Tu; Kevin Murphy", "journal": "Springer", "ref_id": "b89", "title": "Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification", "year": "2018" }, { "authors": "Mingze Xu; Chenyou Fan; Yuchen Wang; Michael S Ryoo; David J Crandall", "journal": "", "ref_id": "b90", "title": "Joint person segmentation and identification in synchronized first-and third-person videos", "year": "2018" }, { "authors": "Liang Yang; Hao Jiang; Jizhong Xiao; Zhouyuan Huo", "journal": "", "ref_id": "b91", "title": "Ego-downward and ambient video based person location association", "year": "2018" }, { "authors": "Bolei Zhou; Alex Andonian; Aude Oliva; Antonio Torralba", "journal": "Springer", "ref_id": "b92", "title": "Temporal relational reasoning in videos", "year": "2018" }, { "authors": "Sijie Zhu; Mubarak Shah; Chen Chen", "journal": "", "ref_id": "b93", "title": "Transgeo: Transformer is all you need for cross-view image geo-localization", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 316.38, 96.35, 228.74, 30.72 ], "formula_id": "formula_0", "formula_text": "arg min θ F ,θ Cexo ,θ Cego [Ex exo,yego Lce(Cexo(F (xexo)), yexo) +Ex ego,yego Lce(Cego(F (xego)), yego)](1)" }, { "formula_coordinates": [ 4, 111.24, 106.31, 175.12, 21.84 ], "formula_id": "formula_1", "formula_text": "xexo = R(Kexo, [Rexo, texo]) xego = R(Kego, [Rego, tego]) (2)" }, { "formula_coordinates": [ 4, 103.29, 229.36, 183.07, 22.27 ], "formula_id": "formula_2", "formula_text": "Kego = T K × Kexo [Rego, tego] = T Rt × [Rexo, texo](3)" }, { "formula_coordinates": [ 4, 117.97, 316.8, 168.39, 8.32 ], "formula_id": "formula_3", "formula_text": "xego = T (xexo; T K , T Rt )(4)" }, { "formula_coordinates": [ 4, 75.22, 700.79, 211.14, 10.56 ], "formula_id": "formula_4", "formula_text": "āego = T ′ (aexo; T K , T Rt ) ≡ T (aexo; T K , T Rt )(5)" }, { "formula_coordinates": [ 4, 354.88, 323.61, 190.23, 21.84 ], "formula_id": "formula_5", "formula_text": "Dx(xexo, xego) ∝ Da(aexo, āego) ⇔ Dx(xexo, xego) = αDa(aexo, āego)(6)" }, { "formula_coordinates": [ 5, 56.32, 319.35, 230.05, 22.28 ], "formula_id": "formula_7", "formula_text": "L self = Ex exo,xego λ||Dx(xexo, xego) -αDa(aexo, aego)|| 2 2 (8)" }, { "formula_coordinates": [ 5, 67.31, 470.56, 219.06, 17.1 ], "formula_id": "formula_8", "formula_text": "Dx(xexo,xego) -αDa(aexo, āego) ≤ Dx(xexo, xego) -αDa(aexo, aego) + (1 + α)β(9)" }, { "formula_coordinates": [ 5, 50.11, 524.51, 236.25, 57.5 ], "formula_id": "formula_9", "formula_text": "D x (x exo , x ego ) - αD a (a exo , a ego ) + (1 + α)β is the upper bound of D x (x exo , xego ) -αD a (a exo , āego ), minimizing ||D x (x exo , x ego ) -αD a (a exo , a ego )|| 2 2 also imposes the constraint of ||D x (x exo , xego )-αD a (a exo , āego )|| 2" }, { "formula_coordinates": [ 5, 311.68, 555.53, 230.11, 21.28 ], "formula_id": "formula_10", "formula_text": "Dx(xexo, xego) = D G x (xexo, xego) = ||G(xexo) -G(xego)|| 2 2 (10" }, { "formula_coordinates": [ 5, 541.38, 569.04, 3.73, 7.77 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 13, 53.91, 267.96, 17.66, 9.65 ], "formula_id": "formula_13", "formula_text": "D x (" } ]
Cross-view Action Recognition Understanding From Exocentric to Egocentric Perspective
Understanding action recognition in egocentric videos has emerged as a vital research topic with numerous practical applications. Given the limited scale of egocentric data collection, learning robust deep learning-based action recognition models remains difficult. Transferring knowledge learned from large-scale exocentric data to egocentric data is challenging due to the difference in videos across views. Our work introduces a novel cross-view learning approach to action recognition (CVAR) that effectively transfers knowledge from the exocentric to the egocentric view. First, we introduce a novel geometric-based constraint into the self-attention mechanism in Transformer, based on analyzing the camera positions between two views. Then, we propose a new cross-view self-attention loss learned on unpaired cross-view data to enforce the self-attention mechanism to learn to transfer knowledge across views. Finally, to further improve the performance of our cross-view learning approach, we present metrics to effectively measure the correlations in videos and attention maps. Experimental results on standard egocentric action recognition benchmarks, i.e., Charades-Ego, EPIC-Kitchens-55, and EPIC-Kitchens-100, have shown our approach's effectiveness and state-of-the-art performance.
Thanh-Dat Truong; Khoa Luu
[ { "figure_caption": "Pnon-overlapped patches (K × P × P is the patch size of the token) of a video and a single classification token. Let a exo , āego ∈ R T K × H P × W P be an attention map of the video frames w.r.t the classification token extracted from the network F on the inputs x exo and xexo , respectively. The attention maps a exo and āego represent the focus of the model on the video over time w.r.t to the model predictions.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "arg min θ F ,θ Cexo ,θ Cego [Ex exo,yego Lce(Cexo(F (xexo)), yexo) +Ex ego,yego Lce(Cego(F (xego)), yego)] s.t. Dx(xexo, xego) = αDa(aexo, āego)", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. The Proposed Framework. The input videos xexo and xego are first forwarded to Transformer F followed by the corresponding classifiers Cexo and Cego, respectively. Then, the supervised cross-entropy loss Lce is applied to the predictions produced by the model. Meanwhile, the attention maps of video inputs, i.e, aexo and aego, are extracted and imposed by the cross-view self-attention loss L self .", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Da(aexo, aego) = D JS a (aexo, aego) aexo||aego) + DKL(aego||aexo))(11)where D KL is the Kullback-Leibler divergence. To satisfy the cross-view distribution shift assumption aforementioned, the correlation metrics D x and D a are constrained by the threshold β, i.e., D x (x exo , x ego ) = min D G x (x exo , x ego ), β and D a (a exo , a ego ) = min D JS a (a exo , a ego ), β . In our experiments, the value of β is set to 200.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "is a large-scale third-view action recognition dataset including 300K videos of 400 classes of human actions. The dataset is licensed by Google Inc. under a Creative Commons Attribution 4.0 International License. Charades-Ego [72] is a first-view action recognition dataset that consists of 157 action classes with 68K clips. The license of Charades-Ego is registered for academic purposes by the Allen Institute for Artificial Intelligence. EPIC-Kitchens-55 [16] is a large-scale multi-task egocentric dataset of daily activities in kitchens. The action recognition task includes 55 hours of 39K clips and is annotated by interactions between 352 nouns and 125 verbs. EPIC-Kitchens-100 [15] is an larger version of the EPIC-Kitchens-55 where it is extended to 100 hours of 90k action clips. Each single action segment is annotated by an action of 97 verbs and 300 nouns. The EPIC Kitchens dataset was published under the Creative Commons Attribution-NonCommerial 4.0 International License. Evaluation Metrics Our experiments follow the standard benchmarks of the Charades-Ego and EPIC-Kitchens for action recognition. We report the mean average precision (mAP) in the Charades-Ego [72] experiments and Top 1 and Top 5 accuracy of verb, noun, and action predictions of the validation set in EPIC-Kitchens [16, 15] experiments.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. 
Attention Visualization of Model Prediction (verb and noun) on EPIC Kitchen Videos.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Effectiveness of the Scale α in the Linear Relation to the Charades-Ego (E-Ego) and EPIC-Kitchen-55 (EPIC) Action Recognition Benchmarks. Implementation In our work, we adopt the design of the Vision Transformation Base model (ViT-B)", "figure_data": "αC-Ego mAP Top 1 Top 5 Top 1 Top 5 EPIC Verb EPIC Noun0.00 20.70 41.94 67.31 43.19 60.140.25 25.09 55.96 89.37 55.96 80.650.50 28.97 58.84 87.24 54.75 75.270.75 31.95 60.80 89.62 57.42 77.771.00 30.68 68.97 89.53 44.87 70.981.50 29.51 73.52 92.22 68.19 84.932.00 27.80 69.60 92.54 61.60 81.22", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Effectiveness of the Choices of Correlation Metrics to the Charades-Ego (E-Ego) and EPIC-Kitchen-55 (EPIC) Action Recognition Benchmarks.", "figure_data": "D xD aC-Ego EPIC Verb EPIC Nounℓ 2 D G x ℓ 2 D JS amAP Top 1 Top 5 Top 1 Top 5✓✓27.80 60.97 89.95 58.05 78.07✓✓ 28.77 61.13 90.16 58.05 78.40✓ ✓29.11 63.13 90.12 59.68 80.03✓✓ 31.95 73.52 92.22 68.19 84.93", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Effectiveness of the Transformer Layers to the Charades-Ego (E-Ego) and EPIC-Kitchen-55 (EPIC) Action Recognition Benchmarks.", "figure_data": "Transformer Layers C-Ego EPIC Verb EPIC Noun1-3 4-6 7-9 10-12 mAP Top 1 Top 5 Top 1 Top 5✓25.65 60.47 90.26 57.85 78.58✓ ✓28.19 68.46 91.08 66.54 83.36✓ ✓ ✓30.60 69.27 92.58 68.09 85.02✓ ✓ ✓✓31.95 73.52 92.22 68.19 84.93", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparisons on Charades-Ego.", "figure_data": "MethodmAPActorObserverNet", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparisons on EPIC-Kitchen-55.", "figure_data": "MethodEPIC verbs EPIC nouns Top 1 Top 5 Top 1 Top 5ResNet-50 [26]61.19 87.49 46.18 69.72MViT-V2 [49]55.17 89.87 56.59 79.40Swin-B [56]56.40 85.84 47.68 71.02DANN [33]61.27 87.49 45.93 68.73Joint-Embed [71]61.26 87.17 46.55 68.97Ego-Exo + ResNet-50 [48] 62.83 87.63 48.15 70.28Ego-Exo + SlowFast [48]65.97 88.91 49.42 72.35Ego-Exo* + ResNet-50 [48] 64.26 88.45 48.39 70.68Ego-Exo* + SlowFast [48] 66.43 89.16 49.79 71.60CVAR (Ours)73.52 92.22 68.19 84.93", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparisons to Prior Methods on the EPIC-Kitchen-100 Action Recognition Benchmark. 
Accuracy Verb Noun Action Verb Noun Action Verb Noun Action Verb Noun Action TSN [86] 60.18 46.03 33.19 89.59 72.90 55.13 47.42 38.03 23.47 30.45 19.37 13.88 TRN [93] 65.88 45.43 35.34 90.42 71.88 56.74 55.96 37.75 27.70 34.66 17.58 14.07 TBN [45] 66.00 47.23 36.72 90.46 73.76 57.66 59.44 38.22 29.48 39.09 24.84 19.13 TSM [50] 67.86 49.01 38.27 90.98 74.97 60.41 58.69 39.62 29.48 36.59 23.37 17.62 SlowFast [26] 65.56 50.02 38.54 90.00 75.62 58.60 56.43 41.50 29.67 36.19 23.26 18.81 MViT-V2 [49] 67.13 60.89 45.79 91.13 83.93 66.83 57.75 50.52 34.84 40.85 38.47 25.35 Ego-Exo [48] 66.61 59.51 44.89 91.13 82.03 65.05 56.57 48.87 33.71 40.91 38.26 25.23 Swin-B [56] 67.93 58.69 46.05 90.96 83.77 65.23 58.69 50.89 35.02 41.08 37.21 25.41 CVAR (Ours) 69.37 61.03 46.15 91.51 81.03 67.05 59.91 48.36 35.12 41.93 38.58 25.99", "figure_data": "OverallUnseen ParticipantsTail ClassesMethodTop-1 AccuracyTop-5 AccuracyTop-1 AccuracyTop-1", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "x exo , xego ) -αD a (a exo , āego )≤ D x (x exo , x ego ) + D x (x ego , xego ) -αD a (a exo , āego ) ≤ D x (x exo , x ego ) + β -α(D a (a exo , a ego ) + β) + αβ ≤ D x (x exo , x ego ) -αD a (a exo , a ego ) + (1 + α)β(13)", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[48,56,49,46,51]", "Explanation": "The cited works provide a basis for action recognition tasks in egocentric video data, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "[15, 85,84,38,76]", "Explanation": "The cited works offer a foundation for action detection tasks in egocentric video data, which the citing paper leverages in their research."}, {"Category": "Methodological Basis", "Citation": "[30,51,54]", "Explanation": "The cited works contribute to the field of action anticipation in egocentric video data, which the citing paper builds upon in their study."}, {"Category": "Supporting Evidence", "Citation": "[11]", "Explanation": "The cited work, Kinetics-700 dataset, is used as a benchmark for learning robust action recognition models, which the citing paper builds upon to improve the performance of action recognition models on the first-view data."}, {"Category": "Data Source", "Citation": "[15]", "Explanation": "The cited work, EPIC Kitchens dataset, is mentioned as a source of egocentric video data for training action recognition models, which the citing paper uses to address the scale and variation issues in egocentric video data."}, {"Category": "Data Source", "Citation": "[72]", "Explanation": "The cited work, Charades-Ego dataset, is mentioned as a source of egocentric video data for training action recognition models, which the citing paper uses to address the scale and variation issues in egocentric video data."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work, a method for improving action recognition models by finetuning pre-trained models on first-view data, is extended in the citing paper to address the unaligned domain problems in egocentric video data."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The cited work introduces several additional egocentric tasks during the pre-training phase on the third-view datasets, which the citing paper adopts to alleviate the domain mismatch problem."}, {"Category": "Extension or Continuation", "Citation": "[33]", "Explanation": "The cited work on domain adaptation methods has been utilized in the citing paper to transfer knowledge from the third-view to first-view data."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work on domain adaptation methods has also been utilized in the citing paper to transfer knowledge from the third-view to first-view data."}, {"Category": "Extension or Continuation", "Citation": "[71]", "Explanation": "The cited work on domain adaptation methods has been utilized in the citing paper to transfer knowledge from the third-view to first-view data."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work on egocentric tasks has been utilized in the citing paper to alleviate the domain mismatch problem by introducing several additional egocentric tasks during the pre-training phase on the third-view datasets."}, {"Category": "Methodological Basis", "Citation": "[44,10,11]", "Explanation": "The cited works provide the datasets (Kinetics) that the citing paper uses for action recognition tasks."}, {"Category": "Data Source", "Citation": "[34]", "Explanation": "The cited work introduces the Something-Something V2 dataset that the citing paper utilizes in its research."}, {"Category": "Data Source", "Citation": "[42]", "Explanation": "The cited work 
provides the Sport1M dataset that the citing paper uses for action recognition tasks."}, {"Category": "Data Source", "Citation": "[36]", "Explanation": "The cited work introduces the AVA dataset that the citing paper utilizes in its research."}, {"Category": "Methodological Basis", "Citation": "[12,26,86,50,5,56,24,49]", "Explanation": "The cited works present various deep learning approaches that the citing paper adopts or adapts for action recognition tasks."}, {"Category": "Methodological Basis", "Citation": "[42,19]", "Explanation": "The cited works introduce early deep learning approaches that the citing paper uses to extract spatial features and learn temporal information from video inputs."}, {"Category": "Methodological Basis", "Citation": "[73,29,27,28,86]", "Explanation": "The cited works present two-stream networks that the citing paper utilizes to improve the temporal learning capability in action recognition tasks."}, {"Category": "Methodological Basis", "Citation": "[81]", "Explanation": "The cited work introduces 3D CNN-based approaches that the citing paper adopts for action recognition tasks."}, {"Category": "Methodological Basis", "Citation": "[12,93]", "Explanation": "The cited works present variants of 3D CNN architectures that the citing paper uses in action recognition tasks."}, {"Category": "Methodological Basis", "Citation": "[82,26,25,90]", "Explanation": "The cited works introduce (2+1)D CNN architectures that the citing paper adopts in action recognition tasks."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work introduces a dual-path network to learn spatiotemporal information at two different temporal rates, which the citing paper adopts in their research to improve the action recognition process."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work proposes a progressive expansion of networks to search for an optimal network for action recognition, which the citing paper uses to enhance the action recognition process."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work on Video Vision Transformer (ViViT) has shown its promising capability in handling spatial-temporal tokens in action recognition, which the citing paper leverages in their research to improve action recognition performance."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work on TimeSFormer introduces divided spatial and temporal attention to reduce computational overhead, which the citing paper adopts to improve the action recognition process."}, {"Category": "Methodological Basis", "Citation": "[83]", "Explanation": "The cited work on the directed attention mechanism improves the performance of action recognition by using a space-time mixing attention mechanism, which the citing paper references in their research to enhance action recognition."}, {"Category": "Supporting Evidence", "Citation": "[24]", "Explanation": "The cited work by [24] proposed a Multiscale Vision Transformer (MViT) that uses multiscale feature hierarchies, which the citing paper leverages in their research to improve the performance of MViT."}, {"Category": "Methodological Basis", "Citation": "[49]", "Explanation": "The cited work by [49] improved the performance of MViT by incorporating decomposed relative positional embeddings and residual pooling connections, which the citing paper adopts in their research to further enhance the performance of MViT."}, {"Category": 
"Extension or Continuation", "Citation": "[56]", "Explanation": "The cited work by [56] proposed the Swin Video Transformer that achieved state-of-the-art performance in action recognition by using shifted windows to limit the self-attention computational cost to local windows and also allow learning attention across windows. The citing paper extends the research by building upon the methods and techniques introduced in the cited work."}, {"Category": "Data Source", "Citation": "[72,16,35,51]", "Explanation": "The cited works by [72,16,35,51] introduced several datasets for egocentric video analysis tasks, which the citing paper utilizes in their research to support the analysis of egocentric videos."}, {"Category": "Methodological Basis", "Citation": "[58,47,45,87]", "Explanation": "The cited works provide a multi-stream network design that the citing paper adopts in its research for egocentric action recognition."}, {"Category": "Extension or Continuation", "Citation": "[59,47,53]", "Explanation": "The cited works introduce additional egocentric cues or tasks such as gaze and motor attention, which the citing paper further builds upon in its research for action recognition."}, {"Category": "Data Source", "Citation": "[30,7,17,89]", "Explanation": "The cited works provide the data source for object detection tasks, which the citing paper utilizes in its research for action recognition."}, {"Category": "Extension or Continuation", "Citation": "[79,67,41]", "Explanation": "The cited works present methods for hand interactions in action recognition, which the citing paper further extends in its research by considering the use of additional egocentric cues or tasks."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The cited work introduces the approach of pre-training on the third-view dataset with auxiliary egocentric tasks, which the citing paper adopts in its research for action recognition."}, {"Category": "Methodological Basis", "Citation": "[94,80,70,69]", "Explanation": "The cited works provide a basis for the use of cross-view learning approaches in geo-localization tasks."}, {"Category": "Methodological Basis", "Citation": "[14,66,18]", "Explanation": "The cited works offer a methodological basis for the use of cross-view learning approaches in semantic segmentation tasks."}, {"Category": "Methodological Basis", "Citation": "[33,13]", "Explanation": "The cited works provide a basis for the use of domain adaptation in video understanding tasks to alleviate the cross-view gap between exocentric and egocentric domains."}, {"Category": "Methodological Basis", "Citation": "[75,71,2,68]", "Explanation": "The cited works offer a methodological basis for learning viewpoint-invariant methods in video understanding tasks."}, {"Category": "Methodological Basis", "Citation": "[91,1,3,4,92]", "Explanation": "The cited works provide a basis for the use of joint embedding learning in video understanding tasks to alleviate the cross-view gap between exocentric and egocentric domains."}, {"Category": "Extension or Continuation", "Citation": "[22,64,65,52]", "Explanation": "The cited works extend the use of generative models to synthesize other viewpoints from given images or videos in video understanding tasks."}, {"Category": "Methodological Basis", "Citation": "[48,71]", "Explanation": "The cited works provide a learning approach for cross-view action recognition, which the citing paper adopts to learn a model for recognizing actions in cross-view videos."}, {"Category": 
"Extension or Continuation", "Citation": "[26,56]", "Explanation": "The cited works are further improved by the citing paper to enhance the performance of cross-view action recognition models."}, {"Category": "Extension or Continuation", "Citation": "[33]", "Explanation": "The cited work is used in the context of environment changes to address the problem of cross-view learning in the context of different camera views."}, {"Category": "Extension or Continuation", "Citation": "[49]", "Explanation": "The cited work is used to improve the performance of cross-view action recognition models in a way that effectively addresses the problem of cross-view learning."}, {"Category": "Methodological Basis", "Citation": "[26,56]", "Explanation": "The cited works provide a large pretrained model that the citing paper adopts for fine-tuning the first-view action model in the cross-view setting."}, {"Category": "Data Source", "Citation": "[48]", "Explanation": "The cited work is used to acknowledge the origin of the join embedding and auxiliary egocentric tasks that the citing paper utilizes in learning the cross-view action recognition model."}, {"Category": "Methodological Basis", "Citation": "[20,49,56]", "Explanation": "The cited works are the key to learning robust action recognition models in the citing paper, as they are the success of Vision Transformer in action recognition."}, {"Category": "Methodological Basis", "Citation": "[61]", "Explanation": "The cited work on Gromov-Wasserstein distance is used as a basis for the proposed loss in the citing paper, which is a special case of the distance between video and attention map distributions with a pre-defined association matrix."}, {"Category": "Methodological Basis", "Citation": "(10)", "Explanation": "The proposed correlation metric D x is designed based on the deep latent spaces defined in Eqn. 
(10), which is a method adopted from the cited work to model the deep semantic information of videos in a more effective way."}, {"Category": "Data Source", "Citation": "[26,8,56]", "Explanation": "The cited works are used as a benchmark dataset in the experiment to evaluate the effectiveness of the approach in action recognition."}, {"Category": "Extension or Continuation", "Citation": "[43] [20]", "Explanation": "The cited works are used as a third-view dataset in the experiment to demonstrate the effectiveness of the approach in action recognition."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work is used as the framework for implementing the model in the experiment, providing a methodological basis for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[49,26]", "Explanation": "The cited works provide the training methods and augmentation techniques that the citing paper adopts in its research to increase the diversity of training data and improve the model performance."}, {"Category": "Supporting Evidence", "Citation": "[56]", "Explanation": "The cited work provides the pre-trained model Swin-B that the citing paper uses in its network G to improve the performance of the model in the video input process."}, {"Category": "Data Source", "Citation": "[8,48]", "Explanation": "The cited works are the source of the video input used in the evaluation of the citing paper, providing the basis for the final result obtained by averaging the prediction scores of the spatial crops from the video input."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work provides the method of using SSDA and I3D for action recognition, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work introduces the DANN method for action recognition, which the citing paper uses in their research."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work presents the SlowFast method for action recognition, which the citing paper employs in their research."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The cited work introduces the Ego-Exo method for action recognition, which the citing paper uses in their research to study the effectiveness of correlation metrics in action recognition performance."}, {"Category": "Supporting Evidence", "Citation": "[71]", "Explanation": "The cited work, ActorObserver-Net, is used as a baseline to compare the performance of the citing paper in the Charades-Ego benchmark."}, {"Category": "Extension or Continuation", "Citation": "[13]", "Explanation": "The cited work, SSDA and I3D, is used in the citing paper to show the performance of the proposed method in the Charades-Ego benchmark and to compare it with other methods."}, {"Category": "Supporting Evidence", "Citation": "[33]", "Explanation": "The cited work, DANN, is used in the citing paper to show the performance of the proposed method in the Charades-Ego benchmark and to compare it with other methods."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work, SlowFast, is used in the citing paper to show the performance of the proposed method in the Charades-Ego benchmark and to compare it with other methods."}, {"Category": "Extension or Continuation", "Citation": "[49]", "Explanation": "The cited work, MViT-V2, is used in the citing 
paper to show the performance of the proposed method in the Charades-Ego benchmark and to compare it with other methods."}, {"Category": "Extension or Continuation", "Citation": "[56]", "Explanation": "The cited work, Swin-B, is used in the citing paper to show the performance of the proposed method in the Charades-Ego benchmark and to compare it with other methods."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work, Ego-Exo, is used in the citing paper to show the performance of the proposed method in the Charades-Ego benchmark and to compare it with other methods."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work, ResNet-50, is used as a baseline in the comparison with the proposed approach in the citing paper. The results presented in Table 5 show the performance of the approach compared to the baseline, indicating an extension or continuation of the research in the field of image recognition."}, {"Category": "Extension or Continuation", "Citation": "[33]", "Explanation": "The cited work, DANN, is also used as a baseline in the comparison with the proposed approach in the citing paper. The results presented in Table 5 show the performance of the approach compared to the baseline, indicating an extension or continuation of the research in the field of image recognition."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work, SlowFast, is used as a baseline in the comparison with the proposed approach in the citing paper. The results presented in Table 5 show the performance of the approach compared to the baseline, indicating an extension or continuation of the research in the field of image recognition."}, {"Category": "Extension or Continuation", "Citation": "[49]", "Explanation": "The cited work, MViT-V2, is used as a baseline in the comparison with the proposed approach in the citing paper. The results presented in Table 5 show the performance of the approach compared to the baseline, indicating an extension or continuation of the research in the field of image recognition."}, {"Category": "Extension or Continuation", "Citation": "[56]", "Explanation": "The cited work, Swin-B, is used as a baseline in the comparison with the proposed approach in the citing paper. The results presented in Table 5 show the performance of the approach compared to the baseline, indicating an extension or continuation of the research in the field of image recognition."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work, Ego-Exo, is used as a baseline in the comparison with the proposed approach in the citing paper. The results presented in Table 5 show the performance of the approach compared to the baseline, indicating an extension or continuation of the research in the field of image recognition."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work, SlowFast, is used as a baseline for comparison in the citing paper, which further extends the research by comparing the performance of the proposed CVAR method with the SlowFast model on the EPIC-Kitchens-100 benchmark."}, {"Category": "Methodological Basis", "Citation": "[45]", "Explanation": "The cited work, TBN, provides a method for training a model on a small number of classes and then finetuning it on a large number of classes. 
This method is adopted in the citing paper to train a model on a small number of classes and then finetune it on a large number of classes in the EPIC-Kitchens-100 benchmark."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work, Ego-Exo using SlowFast-R50, is used as a baseline for comparison in the citing paper, which further extends the research by comparing the performance of the proposed CVAR method with the Ego-Exo model using SlowFast-R50 on the EPIC-Kitchens-100 benchmark."}, {"Category": "Data Source", "Citation": "[49]", "Explanation": "The cited work, MViT-V2, is a pre-trained model that is used as a data source in the citing paper to train a model on a small number of classes and then finetune it on a large number of classes in the EPIC-Kitchens-100 benchmark."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The cited work, TSM, provides a method for training a model on a small number of classes and then finetuning it on a large number of classes. This method is adopted in the citing paper to train a model on a small number of classes and then finetune it on a large number of classes in the EPIC-Kitchens-100 benchmark."}, {"Category": "Methodological Basis", "Citation": "[56]", "Explanation": "The cited work, Swin-B, provides a method for training a model on a small number of classes and then finetuning it on a large number of classes. This method is adopted in the citing paper to train a model on a small number of classes and then finetune it on a large number of classes in the EPIC-Kitchens-100 benchmark."}, {"Category": "Extension or Continuation", "Citation": "[86]", "Explanation": "The cited work, TSN, is used as a baseline for comparison in the citing paper, which further extends the research by comparing the performance of the proposed CVAR method with the TSN model on the EPIC-Kitchens-100 benchmark."}, {"Category": "Extension or Continuation", "Citation": "[93]", "Explanation": "The cited work, TRN, is used as a baseline for comparison in the citing paper, which further extends the research by comparing the performance of the proposed CVAR method with the TRN model on the EPIC-Kitchens-100 benchmark."}]
[ { "figure_ref": [ "fig_0" ], "heading": "Introductions", "publication_ref": [ "b5", "b7", "b45", "b8", "b39", "b42", "b43", "b0", "b18", "b12", "b46", "b12", "b46", "b0", "b33", "b14", "b38", "b12", "b46", "b12", "b46", "b3", "b2", "b48", "b0", "b18", "b19", "b11", "b44", "b19", "b0", "b18", "b47", "b0" ], "table_ref": [], "text": "In the ADE20K 50-50 (3 steps) benchmark, the distribution of classes in the majority group in the early step dominates the ones in the minority groups in the later steps. The distributions of classes gradually decrease over steps.\nConvolutional Neural Networks (CNNs) [5,7] and Transformers [45,8] have been introduced to approach semantic segmentation tasks where the models learn from the large-scale data having known classes at once. These segmentation models learned on large-scale data may perform poorly as they may counter the new objects or new environments. To bridge this gap, several approaches have been proposed to adapt the model to the new data. Domain Adaptation is one of the common approaches [39,42,43,1,18] that adaptively transfer the segmentation model into the deployed environment. However, domain adaptation cannot handle when new objects appear due to its close-set setup. Also, this approach requires access to both original and new training data. In practice, the segmentation models should be capable of learning new classes continually without re-training from the previous data. This paradigm is defined as Continual Semantic Segmentation [12,46]. The current continual semantic segmentation approaches [12,46] concentrate on addressing two main challenges, i.e., (1) catastrophic forgetting [33,14,38] and (2) background shift [12,46]. The former problem indicates the forgetting issue of the model about the previous knowledge when learning on new training data [12,46,4]. Meanwhile, the latter problem refers to the classes of previous or future data that have collapsed into a background class [3]. However, another critical problem that needs attention is the fairness issue in semantic segmentation.\nFairness in semantic segmentation refers to the problem of the model where it behaves unfairly between classes in the dataset due to the class distribution, i.e., the model tends to produce prediction bias toward a specific group of classes occupying a large region on an image or frequently existing in the dataset (as shown in Fig. 1). The unfair predictions could result in severe problems, especially in human-related applications that could influence human safety. Moreover, the fairness problem could even be well observed in the context of continual learning when the model encounters new classes without accessing previous training data. The prior work of continual learning in image classification [48] has also considered this problem. Several prior studies [1,18,19] in semantic segmentation have tried to reduce the effect of the class imbalance by introducing the weighted cross entropy [11,44,19], focal loss [1], over-sampling techniques [18,47,1]. However, the fairness problem in continual semantic segmentation has yet to be well-defined and directly addressed. There should be more attention on addressing the fairness issue in continual semantic segmentation. Therefore, this work aims to address the fairness problem in continual semantic segmentation caused by the imbalanced class distribution defined based on the number of pixels of each class in the dataset." 
}, { "figure_ref": [], "heading": "Contributions of this Work:", "publication_ref": [ "b0", "b1" ], "table_ref": [], "text": "This work presents a novel Fairness Continual Learning (FairCL) approach to semantic scene segmentation. Our contributions can be summarized as follows. First, under the perspective of fairness learning, the new metric is formulated to measure the fairness of the model via the error rate among classes. Then, the metric is further derived into the three main objectives, i.e., (1) the Task-specific Objective that handles the catastrophic forgetting problem, (2) the Fairness Objective that maintains the fairness of predictions produced by the model based on the class distribution, and (3) the Conditional Structural Constraint that imposes the consistency of the segmentation predictions. Second, to sufficiently model the continual learning problem, the novel Prototypical Contrastive Clustering loss is presented to address the catastrophic forgetting and the background shifting problems. Moreover, the proposed Prototypical Contrastive Clustering loss has been proven to be a generalized paradigm of knowledge distillation approaches commonly adopted in continual learning. Last, sufficient ablation studies have shown the effectiveness of our method in continual learning and promoted the fairness of the model. The empirical comparison with prior methods has shown the State-of-the-Art (SOTA) results on the standard benchmarks." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b22", "b17", "b27", "b28", "b24", "b2", "b12", "b41", "b34", "b35", "b46", "b3", "b30", "b29", "b32", "b44", "b23", "b9", "b37", "b40" ], "table_ref": [], "text": "Continual Semantic Segmentation While there is enormous progress in the continual learning problem, most approaches focus on image classification. Existing continual learning methods [22,17] have been extended for continual semantic segmentation in medical images [27,28] and general datasets [24]. Cermelli et al. [3] addressed a background shifting problem in the continual semantic segmentation. Douillard et al. [12] introduced a multi-scale spatial distillation scheme that preserves long-and short-range spatial relationships at the feature level. Volpi et al. [41] presented a continual learning framework where the model is sequentially trained on multiple labeled data domains. Rostami et al. [34] proposed a continual learning framework under the unsupervised domain adaptation setting using data rehearsal. Saporta et al. [35] introduced a multi-head knowledge distillation framework for the continual unsupervised domain. Zhang et al. [46] presented a representation compensation module to decouple the representation learning of both old and new knowledge. Cha et al. [4] suggested finding the unknown class from the background to distinguish the representations of the potential future classes. Qiu et al. [30] proposed a self-attention transferring method to capture both within-class and between-class knowledge. Phan et al. [29] introduced a class-similarity knowledgedistillation method to revise the old classes more likely to be forgotten and better learn new classes related to the previous classes.\nClass Imbalance and Fairness Approaches in Semantic Segmentation Jiawei et al. [32] presented a balanced Softmax loss that reduces the distribution shift of labels and alleviates the long-tail issue. Wang et al. 
[44] proposed a Seesaw loss that reweights the contributions of gradients produced by positive and negative instances of a class by using two regularizers, i.e., mitigation and compensation.\nLiu et al. [23] proposed an algorithm that handles imbalanced classification, few-shot learning, and open-set recognition using dynamic meta-embedding. Chu et al. [9] proposed a stochastic training scheme for semantic segmentation, which improves the learning of debiased and disentangled representations. Szabo et al. [37] proposed tilted cross-entropy loss to reduce the performance differences, which promotes fairness among the target classes. Truong et al. [40] introduced a fairness domain adaptation approach to semantic segmentation that maintains the fairness of the predictions produced by the segmentation model on the target domain." }, { "figure_ref": [ "fig_0" ], "heading": "The Proposed Fairness Continual Learning Approach", "publication_ref": [ "b12", "b46", "b3" ], "table_ref": [], "text": "Let F parameterized by θ be the deep semantic segmentation model that maps an input image x ∈ X to the segmentation map y ∈ Y, y = F(x, θ). Continual Semantic Segmentation (CSS) aims to learn a model in T steps. In each step, the segmentation model F encounters a dataset D t = {x t , ŷt } where x t is the input image and ŷt is the ground-truth segmentation at time t ∈ [1..T ]. The current ground-truth segmentation map ŷt only contains the labels of the current classes C t and all the class labels of prevision steps, C 1..t-1 , or the future steps, C t+1..T are collapsed into a background class or ignored. Formally, learning the semantic segmentation at time step t can be formulated as follows:\nθ * t = arg min θ t E x t ,ŷ t ∈D t L F(x t , θt), ŷt(1)\nwhere θ * t is the parameters of F at time step t, L is the objective learning of the continual learning task. In CSS learning, at the current time step t, the segmentation model F is expected to not only predict all the classes C 1..t-1 learned in the previous steps but also predict the current new classes C t . Three significant challenges have been identified in this learning setting and should be addressed.\n• Background Shift At time step t, the labels of previous and future steps have been ignored.\nThus, the pixels of these classes are ambiguous, which means these could contain either the class of previous or future steps. During learning C t , the model could consider these classes as negative samples. As a result, the model tends to learn non-discriminative features for these pixels, leading to difficulty learning new classes or forgetting the old ones.\n• Catastrophic Forgetting cause the model may partially or completely forget the knowledge of classes C 1..t-1 when learning the new classes C t . This problem could be caused by the background shift and the learning mechanism. Since classes in C 1..t-1 are considered as the background class at time step t, the model tends to update the knowledge of the new classes while the predictions of classes incline to be suppressed.\n• Fairness While the prior approaches [12,46,4] focus on addressing the two above challenges, the fairness issue has received less attention and has not been well addressed yet. However, fairness is one of the most important criteria as it guarantees the model behaves fairly among not only classes in C t but also classes in C 1..t that have been learned. 
The fairness in CSS is typically caused by the imbalance distribution between classes as several classes occupy the larger portion or exist more frequently than other classes (Fig. 1). In addition, the appearance of training samples at the training step t also exaggerates the bias due to the new classes since the classes in the previous training steps have collapsed.\nTo address these challenges in CSS, we first reconsider the semantic segmentation problem from the fairness viewpoint followed by introducing a novel approach to alleviate the fairness issue based on the ideal class distribution. Then, we introduce a novel Prototypical Contrastive Clustering loss to model background classes and catastrophic forgetting." }, { "figure_ref": [], "heading": "Fairness Learning Approach to Continual Semantic Segmentation", "publication_ref": [ "b0", "b2", "b0", "b3", "b5" ], "table_ref": [], "text": "Maintaining fairness is one of the most important factors in learning CSS as it reduces the bias of the model toward a certain group of classes. Formally, under the fairness constraint in semantic segmentation, the error rate of each class should be equally treated by the segmentation model. Given classes c a and c b in the set of classes C 1..T , this constraint can be formulated as follows:\nmax ca,c b ∈C 1..T E x,ŷ∈D i,j L (yi,j, ŷi,j = ca) -E x,ŷ i,j L (yi,j, ŷi,j = c b ) ≤ ϵ(2)\nwhere y i,j and ŷi,j is the prediction and ground truth at the pixel location (i, j), respectively; the loss function L measures the error rates of predictions. Intuitively, Eqn. (2) aims to maintain the differences in the error rates between classes lower than a small threshold ϵ to guarantee fairness among classes. For simplicity, this constraint can be considered as an additional loss while optimizing Eqn. (1). However, this approach could not guarantee the model being fair due to Eqn. (3).\nmax ca ,c b ∈C 1..T E x,ŷ∈D i,j L (yi,j , ŷi,j = ca) -E x,ŷ i,j L (yi,j , ŷi,j = c b ) ≤ ca,c b ∈C 1..T E x,ŷ∈D i,j L (yi,j , ŷi,j = ca) -E x,ŷ i,j L (yi,j , ŷi,j = c b ) ≤ 2 C 1..T [E x,ŷ∈D L (F (x, θ), ŷ)](3)\nAs shown in the above inequality, the constraint of Eqn. ( 2) has been bounded by the loss function in Eqn. (1). Although minimizing Eqn. ( 1) could also impose the constraint in Eqn. ( 2), the fairness issue still remains unsolved due to the imbalanced class distributions indicated through Eqn. (4).\nθ * = arg min θ L(y, ŷ)p(y)p(ŷ)dydŷ = arg min θ i,j L(yi,j , ŷi,j )p(yi,j )p(y \\(i,j) |yi,j )p(ŷ)dydŷ(4)\nwhere p(•) is the data distribution, y \\(i,j) is the predicted segmentation map without the pixel at position (i, j), p(y k ) is the class distribution of pixels, and p(y \\(i,j) |y (i,j) ) is the conditional structural constraint of the predicted segmentation y \\(i,j) conditioned on the prediction y i,j . Practically, the class distributions of pixels p(y i,j ) suffer imbalanced issues, where several classes in the majority group significantly dominate other classes in the minority group. Then, learning by gradient descent method, the model could potentially bias towards the class in the majority group because the produced gradients of classes in the majority group tend to prevail over the ones in the minority group. Formally, considering the two classes c a and c b of the dataset where their distributions are skewed, i.e., p(c a ) < p(c b ), the gradients produced are favored towards class c b as shown in Eqn. 
(5).\n∂ i,j L(yi,j , ŷi,j )p(yi,j = ca)p(y \\(i,j) |yi,j )p(ŷ)dydŷ ∂y (ca ) < ∂ N k=1 L(yi,j , ŷi,j )p(yi,j = c b )q(y \\(i,j) |yi,j )p(ŷ k )dydŷ ∂y (c b )(5)\nwhere || • || is the magnitude of gradients, y (ca) and y (c b ) are the predictions of classes c a and c b ." }, { "figure_ref": [], "heading": "Learning Fairness from Ideally Fair Distribution", "publication_ref": [ "b6", "b7" ], "table_ref": [], "text": "To address this problem, we first assume that there exits an ideal distribution q(•) where the class distributions q(y i,j ) are equally distributed. Under this assumption, the model learned is expected to behave fairly among classes as there is no bias toward any groups of classes. It should be noted that our assumption is used to derive our learning objective and is going to be relaxed later. In other words, the ideal data is not required at training. Then, our learning objective in Eqn. (4) could be rewritten as follows:\nθ * = arg min θ E x∼p(y),ŷ∼p(ŷ) i,j L(yi,j, ŷi,j) q(yi,j)q(y \\(i,j) |yi,j) p(yi,j)p(y \\(i,j) |yi,j)(6)\nThe fraction between the ideal distribution q(•) and the actual data distribution p(•) is the residual learning objective for the model to achieve the desired fairness goal. Let us further derive Eqn. ( 6) by taking the log as follows:\nθ * = arg min θ E x∼p(x),ŷ∼p(ŷ) L(y, ŷ) + 1 N i,j log q(yi,j) p(yi,j) + log q(y \\(i,j) |yi,j) p(y \\(i,j) |yi,j)(7)\nThe detailed derivation of Eqn. (6) and Eqn. (7) will be available in the supplementary. As shown in the Eqn. (7), there are three learning objectives as follows:\n• The Continual Learning Objective The first term, i.e., L(y, ŷ), represents the task-specific loss which is the continual learning objective. This objective aims to address the catastrophic forgetting and background shift problems. To achieve this desired goal, we introduce a novel Prototypical Contrastive Clustering Loss that will be discussed in Sec. 3.2. • The Fairness Objective The second term, i.e., L class = log q(yi,j ) p(yi,j ) , maintains the fairness in the predictions produced by the model. This objective penalizes the prediction of classes forcing the model to behave fairly based on the class distribution. Under the ideal distribution assumption where the model is expected to perform fairly, the q(y i,j ) will be considered as a uniform distribution U, i.e., q(y i,j ) ∼ U(C) (C is the number of classes).\n• The Conditional Structural Consistency Objective The third term, i.e., L cons = log q(y \\(i,j) |yi,j ) p(y \\(i,j) |yi,j ) , regularizes the structural consistency of the prediction. This objective acts as a metric to constrain the structure of the predicted segmentation under the ideal distribution assumption. To model this conditional structure consistency, we introduce a conditional structural constraint based on the Markovian assumption discussed in Sec. 3.4." }, { "figure_ref": [ "fig_1" ], "heading": "Prototypical Contrastive Clustering Loss for Handling Unknown Classes", "publication_ref": [ "b20", "b20", "b1", "b21", "b20", "b20", "b16" ], "table_ref": [], "text": "A successful continual learning approach should be able to model background classes without explicit supervision and confront the forgetting problem of previously learned classes when the labels of the task are provided to update the knowledge of the model [20]. A straightforward adoption of Softmax could not be enough to handle. Indeed, the unlabeled pixels will be ignored during training. 
Thus, it results in these unannotated pixels could be treated as negative samples. Consequently, the segmentation model tends to produce indiscriminative features for these unknown classes that limit the capability of learning new classes in future tasks or recognizing the classes learned previously.\nTo address this limitation, in addition to the Softmax loss, we introduce a novel Prototypical Contrastive Clustering Loss. In particular, the semantic segmentation pixel belonging to each class can be represented in latent space. Inspired by [20,2,21], the features representing the classes can be separated by defining it as a contrastive clustering problem [20] where features of the same class would be pulled closer while features of different classes would be pushed far away. In addition, the deep representations of unknown classes will be grouped into the same cluster of unknown classes to produce discriminative features against other classes.\nFormally, for each class c ∈ C 1..t , it is represented by a prototypical vector p c . In addition, the additional prototypical vector p 0 represents a cluster of unknown classes. Let f t i,j be a feature representation of pixel at location (i, j) of the input x t . Then, the Prototypical Contrastive Clustering Loss L cluster can be defined via a distance D as follows:\nL cluster (x t , F, θ t ) = i,j c D(f t i,j , p c )(8)\nD(f t i,j , p c ) = ℓ(f t i,j , p c ) If ŷt i,j = c max{0, ∆ -ℓ(f t i,j , p c } otherwise (9\n)\nwhere ℓ is a distance metric, ∆ is a margin between the feature vectors of different classes, and ŷt i,j is the label of pixel (i, j). Minimizing this loss separates the classes represented in the latent space. For step t > 1, ŷt i,j of an unknown-class pixel will utilize a pseudo label where its assigned label is computed based on the closest cluster. In addition, since the prototypical vectors of classes c ∈ C 1..t-1 have been well learned to represent for classes, these vectors p c (where c ∈ C 1..t-1 ) will be frozen at step t to maintain its learned knowledge of classes C 1..t-1 .\nThe set of prototypical vectors at current step t, i.e., p 0 and p c where c ∈ C t are updated gradually with respect to the growth of feature vectors. In particular, the prototypical vector p c will be periodically updated (after every M iterations) with momentum η based on the set of features f i,j of class c. Following common practices [20,16], to effectively support the updating step and memory efficiency, for each class c, we only maintain a set of features S c with a fixed length of L. Algorithm 1 in the supplementary illustrates an overview of computing the prototypical contrastive clustering loss while updating the class prototypes. Fig. 2 illustrates our proposed FairCL framework." }, { "figure_ref": [], "heading": "Prototypical Constrative Clustering Loss to Catastrophic Forgetting", "publication_ref": [ "b36", "b2", "b12", "b46", "b13" ], "table_ref": [], "text": "Knowledge Distillation is a common continual learning approach [36,3,12,46] where the knowledge of the previous model will be distilled into the current model. This mechanism prevents the segmentation model from diverging knowledge learned previously and avoiding the catastrophic forgetting problem. This continual learning paradigm has been widely adopted due to its efficiency in computation. In addition, this approach also does not require data rehearsal, i.e., storing the data samples of previous tasks. 
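For concreteness, a minimal PyTorch-style sketch of the Prototypical Contrastive Clustering loss of Eqns. (8)-(9) is shown below before it is related to knowledge distillation. The function name, tensor shapes, Euclidean distance, and mean reduction are illustrative assumptions rather than the authors' implementation; unlabeled pixels at step t > 1 are assumed to have already been assigned the pseudo label of their closest prototype, as described above.

```python
import torch
import torch.nn.functional as F

def prototypical_contrastive_clustering_loss(features, labels, prototypes, delta=1.0):
    """Sketch of Eqns. (8)-(9): pull features toward their class prototype,
    push them at least `delta` away from every other prototype.

    features:   (N, D) pixel features f_{i,j}, flattened over the image.
    labels:     (N,) class index per pixel (0 = unknown cluster; pseudo labels
                are assumed to be filled in beforehand for unlabeled pixels).
    prototypes: (C, D) prototypical vectors p_c, index 0 being the unknown cluster.
    """
    # l(f, p_c): pairwise Euclidean distances between features and prototypes.
    dists = torch.cdist(features, prototypes)                         # (N, C)

    # Boolean mask selecting, for each pixel, the prototype of its own label.
    same = F.one_hot(labels, num_classes=prototypes.size(0)).bool()   # (N, C)

    # Pull term: distance to the prototype of the pixel's (pseudo-)label.
    pull = dists[same]

    # Push term: hinge max{0, delta - l(f, p_c)} for all other prototypes.
    push = torch.clamp(delta - dists[~same], min=0.0)

    # Mean reduction is used here instead of the plain sum for scale stability.
    return pull.mean() + push.mean()
```

In this sketch the prototypes are treated as fixed inputs; their momentum update over training iterations is sketched separately after the description of Algorithm 1.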
In this paper, we demonstrate that our Prototypical Constrative Clustering approach is a comprehensive upper limit of the Knowledge Distillation approach. In particular, the common knowledge distillation approach can be formulated as follows:\nL distill (x t , F, θt, θt-1) = D(f t-1 , f t )(10)\nwhere f t and f t-1 are the features of the input x t produced by the segmentation model at step t and step t -1, respectively; and the distance metric D measure the knowledge gap between f t and f t-1 .\nProposition 1: The Prototypical Constrative Clustering Approach is the generalized upper bound of the Knowledge Distillation Approach.\nL distill (x t , F, θt, θt-1) = O L cluster (x t , F, θt)(11)\nProof: Without lack of generality, we assume that D is a metric that measures distance between features. Given a set of fixed prototypical vectors p c , we consider the following triangle inequality of the metric D as follows:\n∀c ∈ {0} ∪ C 1..t : D(f t , f t-1 ) ≤ D(f t , pc) + D(pc, f t-1 ) ⇔ D(f t , f t-1 ) ≤ 1 |C| c D(f t , pc) + D(pc, f t-1 ) (12\n)\nwhere |C| is the number of prototypical vectors. The prototypical vectors p c and the feature vectors f t-1 produced by the segmentation model at step t -1 are considered as constant features as the model in the previous model t -1 has been fixed at step t. Thus, the distance D(p c , f t-1 ) could be considered as constant number. Then, Eqn. ( 12) can be further derived as follows:\nD(f t , f t-1 ) = O 1 |C| c D(f t , pc) + D(pc, f t-1 ) = O c D(f t , pc) ⇒ L distill (x t , F, θt, θt-1) = O L cluster (x t , F, θt)(13)\nIntuitively, under this upper bound in Eqn. (13), by only optimizing our prototypical contrastive clustering loss, the knowledge distillation constraint has also been implicitly imposed. Beyond the property of generalized upper bound stated in Proposition 1, our approach offers other benefits over the knowledge distillation approach. In particular, our approach is computationally efficient, where our method only requires a single forward pass of the segmentation model. Meanwhile, the knowledge distillation demands two forward passes for both the current and previous models, which also requires additional computational memory for the previous model. Moreover, our approach provides a better representation of each class c through the prototypical vector p c . This mechanism helps to effectively maintain the knowledge of classes learned previously while allowing the model to update the new knowledge without rehearsing the old data." }, { "figure_ref": [], "heading": "Learning Conditional Structural Consistency", "publication_ref": [ "b14", "b5", "b39" ], "table_ref": [], "text": "The conditional structural constraint plays an important role as it will ensure the consistency of the predicted segmentation map. However, modeling the conditional structural constraint log q(y \\(i,j) |yi,j ) p(y \\(i,j) |yi,j )\nin Eqn. ( 7) is a quite challenging problem due to two factors, i.e., (1) the unknown ideal conditional distribution q(y \\(i,j) |y i,j ), and the complexity of the distribution q(y \\(i,j) |y i,j ). To address the first limitation of unknown ideal distribution, let us consider the following tight bound as follows:\nE x∼p(x) log q(y \\(i,j) |yi,j) p(y \\(i,j) |yi,j) ≤ -E x∼p(x) log p(y \\(i,j) |yi,j)(14)\nThe inequality in Eqn. ( 14) always hold with respect to any form ideal distribution q(•). 
Thus, optimizing the negative log-likelihood of log p(y \\(i,j) |y i,j ) could also regularize the conditional structural constraint due to the upper bound of Eqn. (14). More importantly, the requirement of ideal data distribution during training has also been relaxed. However, up to this point, the second limitation of modeling the complex distribution q(y \\(i,j) |y i,j ) has still not been solved.\nTo address this problem, we adopt the Markovian assumption [5,39] to model conditional structural consistency. In particular, we propose a simple yet effective approach to impose the consistency of the segmentation map through the prediction at location (i, j) and predictions of its neighbor pixels. Formally, the conditional structure consistency can be formed via the Gaussian kernel as follows:\n-log p(y \\(i,j) |yi,j) ∝\n(i ′ ,j ′ )∈N (i,j) exp - ||x i,j t -x t i ′ ,j ′ || 2 2 2σ 2 1 - ||y i,j t -y t i ′ ,j ′ || 2 2 2σ 2 2 (15\n)\nwhere N (i, j) is the set of neighbor pixels of (i, j), {σ 1 , σ 2 } are the scale hyper-parameters of the Gaussian kernels. The conditional structural consistency loss defined in Eqn. ( 15) enhance the smoothness and maintain the consistency of the predicted segmentation map by imposing similar predictions of neighbor pixels with similar contextual colors." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we first describe the datasets and metrics used in our experiments. Then, we present the ablation studies to illustrate the effectiveness of our proposed method. Finally, we compare our approach with prior CSS methods to demonstrate our SOTA performance." }, { "figure_ref": [], "heading": "Datasets and Evaluation Protocols", "publication_ref": [ "b49", "b10", "b13", "b6", "b45", "b12" ], "table_ref": [], "text": "Datasets ADE20K [49] is a semantic segmentation dataset that consists of more than 20K scene images of 150 semantic categories. Each image has been densely annotated with pix-level objects and objects parts labels. Cityscapes [10] is a real-world autonomous driving dataset collected in European. This dataset includes 3, 975 urban images with high-quality, dense labels of 30 semantic categories. PASCAL VOC [13] is a common dataset that consists of more than 10K images of 20 classes.\nImplementation Two segmentation network architectures are used in our experiments, i.e., (1) DeepLab-V3 [6] with the ResNet-101 backbone, and (2) SegFormer [45] with MiT-B3 backbone. Further details of our implementation will be available in our supplementary.\nEvaluation Protocols: Following [12], we focus on the overlapped CSS evaluation. Our proposed method is evaluated on several settings for each dataset, i.e., ADA20K 100-50 (2 steps), ADA20K The mIoU is computed after the last step for the classes learned from the first step, the later continual classes, and all classes. The mIoU for the initial classes shows the robustness of the model to catastrophic forgetting, while the metric for the later classes reflects the ability to learn new classes. To measure the fairness of the model, we also report the standard deviation (STD) of IoUs over all classes." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b5", "b45", "b45" ], "table_ref": [ "tab_0", "tab_0" ], "text": "Our ablative experiments study the effectiveness of our proposed FairCL approach on the performance of the CSS model and fairness improvement on the ADE20K 100-50 benchmark (Table 1). 
Effectiveness of the Network Backbone: Table 1 illustrates the results of our approach using DeepLab-V3 [5] with the ResNet-101 backbone and SegFormer [45] with a Transformer backbone, i.e., MiT-B3 [45]. As shown in our results, segmentation models using the more powerful Transformer backbone outperform the models using the ResNet backbone. The capability of learning new classes is improved notably, i.e., the mIoU of classes 101-150 in the full configuration increases from 19.86% to 25.46%, while the model remains robust to catastrophic forgetting, with the mIoU of classes 0-100 increasing from 41.96% to 43.56%. Additionally, fairness between classes is promoted, as the standard deviation of the IoU over classes is reduced from 21.67% to 21.10%." }, { "figure_ref": [], "heading": "Effectiveness of the Prototypical Contrastive Clustering Loss", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We evaluate the impact of the Prototypical Contrastive Clustering loss (L_cluster) on improving performance in the continual learning problem compared to the fine-tuning approach. As shown in Table 1, the clustering loss yields significant improvements in robustness to catastrophic forgetting compared to using only the Softmax loss. In particular, the mIoU of classes 0-100 for the DeepLab-V3 and SegFormer backbones is improved by +41.63% and +43.30% respectively, which increases the overall mIoU by +26.45% and +28.44% and the average mIoU between classes by +13.17% and +14.03%.\nAlthough the STD of IoUs slightly increases in this setting, the main target of our L_cluster is to address the catastrophic forgetting and background shift problems in CSS, as illustrated by the significant improvement in mIoU." }, { "figure_ref": [], "heading": "Effectiveness of the Fairness Treatment Loss", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "As reported in Table 1, the fairness treatment from the class distribution loss L_class significantly improves the overall performance and the per-class accuracy. In detail, the STD of the per-class IoUs is reduced by 0.96% and 0.46% for the two backbones, while the mIoU is improved from 32.97% to 34.40% and from 36.18% to 36.78%, respectively. These results show that our approach promotes the fairness of the model." }, { "figure_ref": [ "fig_3" ], "heading": "Effectiveness of the Conditional Structural Consistency", "publication_ref": [ "b15", "b12", "b12", "b46" ], "table_ref": [ "tab_0", "tab_1", "tab_3", "tab_5" ], "text": "The full configuration in Table 1 shows the experimental results of our model using the conditional structural constraint loss L_cons. As illustrated in our results, the conditional structural constraint demonstrates effective improvement.\nCityscapes: As shown in Table 2, our FairCL outperforms previous SOTA methods evaluated on the Cityscapes benchmarks. In particular, in the 11-5 task, our method using the ResNet and Transformer backbones achieves mIoUs of 66.96% and 67.85% respectively, which is better than prior methods. Meanwhile, the results for the 11-1 task are 66.61% and 67.09% w.r.t. the ResNet and Transformer backbones. For the 1-1 task, the mIoU of our method is 49.22% and 55.68%.\nADE20K: Table 3 presents our experimental results using the ResNet and Transformer backbones compared to prior SOTA approaches. Our proposed approach achieves SOTA performance and outperforms prior methods. In particular, our approach achieves a final mIoU of 36.99% for ResNet and 37.56% for Transformer in the 100-50 task.
For the 50-50 task, the model reaches a final mIoU of 34.55% and 35.15% for the ResNet and Transformer backbones, respectively, while the result of the prior method [15] is 33.50%. Meanwhile, the overall results of our method for the 100-10 task are 34.65% and 35.49%, outperforming previous approaches. Fig. 3 visualizes the qualitative results of our method compared to PLOP [12]. Initially, the ground truth contains the class "car" in the first step and the class "minibike" in the third step. Then, in the fourth step, the class "bicycle" is included. As a result, PLOP [12] partly forgets the "minibike" information when learning the class "bicycle", while our method consistently maintains the information of "minibike" and predicts the segmentation correctly.\nPascal VOC: As shown in Table 4, the proposed method outperforms the prior approaches evaluated on the Pascal VOC 2012 dataset. In detail, our method achieves an overall mIoU of 61.5% in the 15-1 task, while the result of the previous method [46] is 59.4%. Meanwhile, the mIoU in the 10-1 task is 36.6%, which is better than the prior methods." }, { "figure_ref": [], "heading": "Conclusions and Discussions", "publication_ref": [], "table_ref": [], "text": "Conclusions: This paper introduces a novel fairness continual learning approach to semantic segmentation by analyzing the effect of the class distribution. Furthermore, a new learning paradigm of continual learning, i.e., the Prototypical Contrastive Clustering loss, is proposed to sufficiently address the catastrophic forgetting and background shift problems. The experimental results on the three CSS benchmarks, i.e., ADE20K, Cityscapes, and Pascal VOC, have shown our SOTA performance.\nLimitations: Our paper has chosen specific configurations of network backbones and hyper-parameters to support our hypothesis. However, other aspects of learning have not been fully investigated, e.g., the learning hyper-parameters, the selected neighbor pixels, or other forms of L_cons.\nBroader Impacts: This work studies the problem of fairness in continual learning, which is a step toward fairness awareness in continual semantic segmentation. Our contributions emphasize the importance of fairness in continual semantic segmentation learning and provide a solution to address the fairness concern, which increases the robustness and trustworthiness of the segmentation model.\nThe following derivation continues the proof of Eqn. (6) given in the supplementary:\n$\theta^* = \arg\min_{\theta} \mathbb{E}_{x \sim p(x), \hat{y} \sim p(\hat{y})} \Big[\log \mathcal{L}(y, \hat{y}) + \frac{1}{N}\sum_{i,j} \log \frac{q(y_{i,j})\, q(y_{\setminus(i,j)} | y_{i,j})}{p(y_{i,j})\, p(y_{\setminus(i,j)} | y_{i,j})}\Big] = \arg\min_{\theta} \mathbb{E}_{x \sim p(x), \hat{y} \sim p(\hat{y})} \Big[\log \mathcal{L}(y, \hat{y}) + \frac{1}{N}\sum_{i,j} \Big(\log \frac{q(y_{i,j})}{p(y_{i,j})} + \log \frac{q(y_{\setminus(i,j)} | y_{i,j})}{p(y_{\setminus(i,j)} | y_{i,j})}\Big)\Big]$\nwhere $N$ is the total number of pixels. In addition, minimizing $\log \mathcal{L}(y, \hat{y})$ is equivalent to minimizing $\mathcal{L}(y, \hat{y})$. Therefore, the formula can be further derived as follows:\n$\theta^* = \arg\min_{\theta} \mathbb{E}_{x \sim p(x), \hat{y} \sim p(\hat{y})} \Big[\mathcal{L}(y, \hat{y}) + \frac{1}{N}\sum_{i,j} \Big(\log \frac{q(y_{i,j})}{p(y_{i,j})} + \log \frac{q(y_{\setminus(i,j)} | y_{i,j})}{p(y_{\setminus(i,j)} | y_{i,j})}\Big)\Big]$" }, { "figure_ref": [], "heading": "Prototypical Contrastive Clustering Algorithm", "publication_ref": [ "b20", "b16" ], "table_ref": [], "text": "Inspired by [20,16], we develop an algorithm to compute the Prototypical Contrastive Clustering loss and update the prototypical vectors. Algorithm 1 illustrates this procedure."
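As a complement to Algorithm 1, the snippet below is a minimal, hedged sketch of how the Prototypical Contrastive Clustering loss of Eqns. (8)-(9) and the momentum prototype update could be written in PyTorch. The function names and the flattening of pixel features into an (N, D) matrix are illustrative assumptions; the margin and momentum defaults follow the values reported in the implementation details (∆ = 10, η = 0.99), and averaging the pull/push terms instead of summing them is a normalization choice, not necessarily the authors' exact formulation.

```python
import torch

def prototypical_contrastive_clustering_loss(feats, labels, prototypes, margin=10.0):
    """Sketch of L_cluster (cf. Eqns. (8)-(9)).

    feats: (N, D) pixel features, labels: (N,) class indices (0 = background /
    unknown cluster), prototypes: (K, D) one prototypical vector per class.
    Features are pulled toward their own class prototype and pushed at least
    `margin` away from every other prototype.
    """
    dist = torch.cdist(feats, prototypes)              # (N, K) Euclidean distances
    own = torch.zeros_like(dist, dtype=torch.bool)
    own[torch.arange(feats.size(0)), labels] = True
    pull = dist[own]                                   # attract to own prototype
    push = torch.clamp(margin - dist[~own], min=0.0)   # repel the other prototypes
    return pull.mean() + push.mean()


@torch.no_grad()
def update_prototypes(prototypes, feats, labels, eta=0.99):
    """Momentum update of the prototypical vectors (cf. Algorithm 1)."""
    for c in labels.unique():
        class_mean = feats[labels == c].mean(dim=0)
        prototypes[c] = eta * prototypes[c] + (1.0 - eta) * class_mean
    return prototypes
```

A typical training step would flatten the decoder features and their (pseudo-)labels, call the loss, and refresh the prototypes every M iterations as described in Algorithm 1.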
}, { "figure_ref": [], "heading": "Implementation", "publication_ref": [ "b5", "b45", "b20", "b16", "b5", "b39" ], "table_ref": [], "text": "Two segmentation network architectures have been used in our experiments, i.e., (1) DeepLab-V3 [5] with the ResNet-101 backbone, and (2) SegFormer [45] with the MiT-B3 backbone. Our framework is implemented in PyTorch and trained on four 40GB-VRAM NVIDIA A100 GPUs. The model is optimized by the SGD optimizer with momentum 0.9, weight decay 10^-4, and a batch size of 6 per GPU. The learning rate is set individually for each step and dataset. In particular, the learning rates for the initial step and the continual steps on the ADE20K dataset are 10^-2 and 10^-3 respectively, while the learning rates for the Cityscapes experiments are 2 × 10^-2 and 2 × 10^-3. The feature vectors from the last layer of the decoder are used for the prototypical clustering loss. (Algorithm 1, steps 4-12: 4: for each c ∈ {0} ∪ C^t; 5: p_c ← E_{f∈S_c}[f]; 6: L_cluster ← compute the Prototypical Contrastive Clustering loss based on Eqn. (9); 7: else if i > M then; 8: if i mod M == 0 then; 9: for each c ∈ {0} ∪ C^t; 10: p_c ← η·p_c + (1 − η)·E_{f∈S_c}[f]; 11: L_cluster ← compute the Prototypical Contrastive Clustering loss based on Eqn. (9); 12: return L_cluster.) For each class, the number of feature vectors stored in each set S_c for computing the prototypes is 500. Following common practice in contrastive learning [20,16], we adopt the Euclidean distance for ℓ in the Prototypical Contrastive Clustering loss L_cluster, and the margin ∆ between features of different classes is set to 10. The momentum η used to update the prototypical vectors is set to 0.99. Following [5,39], in the conditional structural consistency loss, the neighboring pixels are taken within a window of size 3 × 3." }, { "figure_ref": [], "heading": "Additional Experiments 10.1 Performance Improvement of Major and Minor Groups", "publication_ref": [], "table_ref": [ "tab_7", "tab_8" ], "text": "To illustrate the performance improvement of our proposed method on major and minor classes, we include the results of the mIoU (all) and the STD among the IoUs of the major group and the minor group on the ADE20K 100-50 (Table 5) and Cityscapes 11-5 (Table 6) benchmarks. As shown in the tables below, our proposed approach improves the performance of both the major and minor groups. Thus, these results illustrate that the improvement in mIoU also comes from the minority classes. Our approach enhances the mIoU and reduces the STD in both major and minor classes, thus improving the fairness of the model predictions." }, { "figure_ref": [], "heading": "Backbone", "publication_ref": [], "table_ref": [], "text": "Similarly, to illustrate the effectiveness and robustness of our method in the non-incremental setting, we report our results after the first learning step on the ADE20K 100-50 (Table 7) and Cityscapes 11-5 (Table 8) benchmarks. Our proposed fairness approach has also contributed to the performance improvement of both major and minor classes in non-incremental settings. The comparison of major and minor groups in the first step is illustrated below." }, { "figure_ref": [], "heading": "Memory Efficiency", "publication_ref": [ "b12" ], "table_ref": [], "text": "We would like to highlight that storing prototypical vectors requires significantly less memory than keeping an additional teacher model as used in distillation approaches [12].
For example, in the ADE20K benchmark, storing DeepLab-V3 (151 classes) requires 58.664M parameters, while storing 152 prototypical 2048-D vectors (including the unknown cluster) only uses 0.311M parameters. In addition, computing our loss with the stored prototypes is cheaper than the extra forward pass of the entire network used in distillation. Therefore, in terms of computational cost and memory, our approach remains more efficient than knowledge distillation approaches." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgment: This work is partly supported by NSF Data Science, Data Analytics that are Robust and Trusted (DART), and a Googler Initiated Research Grant. We also thank Utsav Prabhu for invaluable discussions and suggestions and acknowledge the Arkansas High-Performance Computing Center for providing GPUs." }, { "figure_ref": [], "heading": "Supplementary", "publication_ref": [ "b6", "b16" ], "table_ref": [], "text": "6 Proof of Eqn. (6):\n$\theta^* = \arg\min_{\theta} \int \mathcal{L}(y, \hat{y})\, q(y)\, q(\hat{y})\, dy\, d\hat{y} = \arg\min_{\theta} \int \mathcal{L}(y, \hat{y})\, \frac{q(y)}{p(y)} \frac{q(\hat{y})}{p(\hat{y})}\, p(y)\, p(\hat{y})\, dy\, d\hat{y} \quad (16)$\nIt should be noted that the fraction $\frac{q(\hat{y})}{p(\hat{y})}$ can be considered a constant, as $q(\hat{y})$ and $p(\hat{y})$ are distributed over the ground-truth segmentation. Thus, it can be ignored during the optimization process. Then, the formula can be further derived as follows (the remaining steps are reproduced after the Broader Impacts paragraph above and lead to Eqn. (7) of the main paper).\nThe goal of conditional structural consistency is to reduce the prediction gap among neighboring pixels, thus enhancing the smoothness of the predictions. In addition, it helps to increase fairness among classes, because this loss cleans up the spurious or ambiguous predictions produced by the major classes around minor classes. Therefore, the loss alleviates the dominance of the major groups and improves the accuracy of the minor groups, so that fairness is further improved (as illustrated in Table 1 in the main paper)." }, { "figure_ref": [], "heading": "The Choice of Margin ∆", "publication_ref": [], "table_ref": [], "text": "We also perform an additional ablation study on the ADE20K (100-50) benchmark to investigate the impact of ∆. As shown in Table 9, the choice of ∆ does not significantly influence the results, with only a minor performance variation. We have observed that the performance improvement of our approach on Pascal VOC is less significant than on ADE20K and Cityscapes because of the minor bias in Pascal VOC. In particular, the data distributions of these datasets are visualized in Figure 4. We calculated the entropy of each data distribution to illustrate the balance level of the datasets (the higher the entropy, the more balanced the dataset, as the data distribution tends to be more uniform). Based on the data distributions and entropy values, we observe that the data distributions of ADE20K and Cityscapes suffer from more bias than Pascal VOC, since the entropy value of Pascal VOC (H = 0.81) is higher than those of ADE20K (H = 0.69) and Cityscapes (H = 0.62). Thus, ADE20K and Cityscapes suffer from more severe fairness issues than Pascal VOC. Our approach aims to improve the fairness of the model; therefore, on the more severely biased datasets (ADE20K and Cityscapes), our approach brings more significant improvements." } ]
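For reference, the entropy used above to quantify how balanced a dataset's class-pixel distribution is can be computed as in the small sketch below. Normalizing by the logarithm of the number of classes (so that a perfectly uniform distribution gives 1) is an assumption for illustration; the exact normalization behind the reported H values is not specified in the text.

```python
import numpy as np

def class_distribution_entropy(pixel_counts):
    """Normalized entropy of a per-class pixel-count distribution.

    Values near 1 indicate a balanced (near-uniform) dataset; lower values
    indicate stronger class imbalance.
    """
    p = np.asarray(pixel_counts, dtype=np.float64)
    p = p / p.sum()          # convert counts to a probability distribution
    p = p[p > 0]             # ignore empty classes to avoid log(0)
    entropy = -(p * np.log(p)).sum()
    return entropy / np.log(len(pixel_counts))
```

For example, class_distribution_entropy([10, 10, 10]) returns 1.0, while a heavily skewed distribution such as [1000, 5, 5] yields a much smaller value.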
[ { "authors": "N Araslanov; S Roth", "journal": "", "ref_id": "b0", "title": "Self-supervised augmentation consistency for adapting semantic segmentation", "year": "2021" }, { "authors": "J Cen; P Yun; J Cai; M Y Wang; M Liu", "journal": "", "ref_id": "b1", "title": "Deep metric learning for open world semantic segmentation", "year": "2021" }, { "authors": "F Cermelli; M Mancini; S Rota; E Bulò; B Ricci; Caputo", "journal": "", "ref_id": "b2", "title": "Modeling the background for incremental learning in semantic segmentation", "year": "2020" }, { "authors": "S Cha; Y Kim; T Yoo; Moon", "journal": "", "ref_id": "b3", "title": "Ssul: Semantic segmentation with unknown label for exemplar-based class-incremental learning", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b4", "title": "", "year": "2021" }, { "authors": "L.-C Chen; G Papandreou; I Kokkinos; K Murphy; A L Yuille", "journal": "TPAMI", "ref_id": "b5", "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs", "year": "2018" }, { "authors": "L.-C Chen; G Papandreou; F Schroff; H Adam", "journal": "", "ref_id": "b6", "title": "Rethinking atrous convolution for semantic image segmentation", "year": "2017" }, { "authors": "L.-C Chen; Y Zhu; G Papandreou; F Schroff; H Adam", "journal": "", "ref_id": "b7", "title": "Encoder-decoder with atrous separable convolution for semantic image segmentation", "year": "2018" }, { "authors": "R Chen; Y Rong; S Guo; J Han; F Sun; T Xu; W Huang", "journal": "CoRR", "ref_id": "b8", "title": "Smoothing matters: Momentum transformer for domain adaptive semantic segmentation", "year": "2022" }, { "authors": "S Chu; D Kim; B Han", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b9", "title": "Learning debiased and disentangled representations for semantic segmentation", "year": "2021" }, { "authors": "M Cordts; M Omran; S Ramos; T Rehfeld; M Enzweiler; R Benenson; U Franke; S Roth; B Schiele", "journal": "", "ref_id": "b10", "title": "The Cityscapes dataset for semantic urban scene understanding", "year": "2016" }, { "authors": "Y Cui; M Jia; T.-Y Lin; Y Song; S Belongie", "journal": "", "ref_id": "b11", "title": "Class-balanced loss based on effective number of samples", "year": "2019-06" }, { "authors": "A Douillard; Y Chen; A Dapogny; M Cord", "journal": "", "ref_id": "b12", "title": "Plop: Learning without forgetting for continual semantic segmentation", "year": "2021" }, { "authors": "M Everingham; S M A Eslami; L Van Gool; C K I Williams; J Winn; A Zisserman", "journal": "IJCV", "ref_id": "b13", "title": "The pascal visual object classes challenge: A retrospective", "year": "2015" }, { "authors": "R French", "journal": "Trends in Cognitive Sciences", "ref_id": "b14", "title": "Catastrophic forgetting in connectionist networks", "year": "1999" }, { "authors": "D Goswami; R Schuster; J Van De Weijer; D Stricker", "journal": "", "ref_id": "b15", "title": "Attribution-aware weight transfer: A warmstart initialization for class-incremental semantic segmentation", "year": "2023" }, { "authors": "K He; H Fan; Y Wu; S Xie; R Girshick", "journal": "", "ref_id": "b16", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "S Hou; X Pan; C C Loy; Z Wang; D Lin", "journal": "", "ref_id": "b17", "title": "Learning a unified classifier incrementally via rebalancing", "year": "2019" }, { "authors": "L Hoyer; D Dai; L Van Gool", "journal": 
"", "ref_id": "b18", "title": "DAFormer: Improving network architectures and training strategies for domain-adaptive semantic segmentation", "year": "2022" }, { "authors": "T.-I Hsieh; E Robb; H.-T Chen; J.-B Huang", "journal": "", "ref_id": "b19", "title": "Droploss for long-tail instance segmentation", "year": "2021" }, { "authors": "K Joseph; S Khan; F S Khan; V N Balasubramanian", "journal": "", "ref_id": "b20", "title": "Towards open world object detection", "year": "2021" }, { "authors": "J Li; Q Dong", "journal": "", "ref_id": "b21", "title": "Open-set semantic segmentation for point clouds via adversarial prototype framework", "year": "2023" }, { "authors": "Z Li; D Hoiem", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b22", "title": "Learning without forgetting", "year": "2017" }, { "authors": "Z Liu; Z Miao; X Zhan; J Wang; B Gong; S X Yu", "journal": "", "ref_id": "b23", "title": "Large-scale long-tailed recognition in an open world", "year": "2019" }, { "authors": "U Michieli; P Zanuttigh", "journal": "", "ref_id": "b24", "title": "Incremental learning techniques for semantic segmentation", "year": "2019" }, { "authors": "U Michieli; P Zanuttigh", "journal": "", "ref_id": "b25", "title": "Incremental learning techniques for semantic segmentation", "year": "2019" }, { "authors": "U Michieli; P Zanuttigh", "journal": "", "ref_id": "b26", "title": "Continual semantic segmentation via repulsion-attraction of sparse and disentangled latent representations", "year": "2021" }, { "authors": "F Ozdemir; P Fuernstahl; O Goksel", "journal": "Springer", "ref_id": "b27", "title": "Learn the new, keep the old: Extending pretrained models with new anatomy and images", "year": "2018" }, { "authors": "F Ozdemir; O Goksel", "journal": "International journal of computer assisted radiology and surgery", "ref_id": "b28", "title": "Extending pretrained segmentation networks with additional anatomical structures", "year": "2019" }, { "authors": "M H Phan; T.-A Ta; S L Phung; L Tran-Thanh; A Bouzerdoum", "journal": "", "ref_id": "b29", "title": "Class similarity weighted knowledge distillation for continual semantic segmentation", "year": "2022-06" }, { "authors": "Y Qiu; Y Shen; Z Sun; Y Zheng; X Chang; W Zheng; R Wang", "journal": "Pattern Recognition", "ref_id": "b30", "title": "Sats: Self-attention transfer for continual semantic segmentation", "year": "2023" }, { "authors": "S.-A Rebuffi; A Kolesnikov; G Sperl; C H Lampert", "journal": "", "ref_id": "b31", "title": "icarl: Incremental classifier and representation learning", "year": "2017" }, { "authors": "J Ren; C Yu; S Sheng; X Ma; H Zhao; S Yi; H Li", "journal": "", "ref_id": "b32", "title": "Balanced meta-softmax for long-tailed visual recognition", "year": "2020" }, { "authors": "A Robins", "journal": "Connection Science", "ref_id": "b33", "title": "Catastrophic forgetting, rehearsal and pseudorehearsal", "year": "1995" }, { "authors": "M Rostami", "journal": "", "ref_id": "b34", "title": "Lifelong domain adaptation via consolidated internal distribution", "year": "2021" }, { "authors": "A Saporta; A Douillard; T.-H Vu; P Pérez; M Cord", "journal": "", "ref_id": "b35", "title": "Multi-head distillation for continual unsupervised domain adaptation in semantic segmentation", "year": "2022" }, { "authors": "K Shmelkov; C Schmid; K Alahari", "journal": "", "ref_id": "b36", "title": "Incremental learning of object detectors without catastrophic forgetting", "year": "2017" }, { "authors": "A Szabó; H 
Jamali-Rad; S.-D Mannava", "journal": "", "ref_id": "b37", "title": "Tilted cross-entropy (tce): Promoting fairness in semantic segmentation", "year": "2021" }, { "authors": "S Thrun", "journal": "", "ref_id": "b38", "title": "Lifelong learning algorithms", "year": "1998" }, { "authors": "T.-D Truong; C N Duong; N Le; S L Phung; C Rainwater; K Luu", "journal": "", "ref_id": "b39", "title": "Bimal: Bijective maximum likelihood approach to domain adaptation in semantic scene segmentation", "year": "2021" }, { "authors": "T.-D Truong; N Le; B Raj; J Cothren; K Luu", "journal": "", "ref_id": "b40", "title": "Fredom: Fairness domain adaptation approach to semantic scene understanding", "year": "2023" }, { "authors": "R Volpi; D Larlus; G Rogez", "journal": "", "ref_id": "b41", "title": "Continual adaptation of visual representations via domain randomization and meta-learning", "year": "2021" }, { "authors": "T.-H Vu; H Jain; M Bucher; M Cord; P Pérez", "journal": "", "ref_id": "b42", "title": "Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation", "year": "2019" }, { "authors": "T.-H Vu; H Jain; M Bucher; M Cord; P Pérez", "journal": "", "ref_id": "b43", "title": "Dada: Depth-aware domain adaptation in semantic segmentation", "year": "2019" }, { "authors": "J Wang; W Zhang; Y Zang; Y Cao; J Pang; T Gong; K Chen; Z Liu; C C Loy; D Lin", "journal": "", "ref_id": "b44", "title": "Seesaw loss for long-tailed instance segmentation", "year": "2021" }, { "authors": "E Xie; W Wang; Z Yu; A Anandkumar; J M Alvarez; P Luo", "journal": "", "ref_id": "b45", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "C.-B Zhang; J.-W Xiao; X Liu; Y.-C Chen; M.-M Cheng", "journal": "", "ref_id": "b46", "title": "Representation compensation networks for continual semantic segmentation", "year": "2022" }, { "authors": "P Zhang; B Zhang; T Zhang; D Chen; Y Wang; F Wen", "journal": "", "ref_id": "b47", "title": "Prototypical pseudo label denoising and target structure learning for domain adaptive semantic segmentation", "year": "2021" }, { "authors": "B Zhao; X Xiao; G Gan; B Zhang; S.-T Xia", "journal": "", "ref_id": "b48", "title": "Maintaining discrimination and fairness in class incremental learning", "year": "2020" }, { "authors": "B Zhou; H Zhao; X Puig; S Fidler; A Barriuso; A Torralba", "journal": "", "ref_id": "b49", "title": "Scene parsing through ade20k dataset", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 228.06, 275.25, 276.54, 16.01 ], "formula_id": "formula_0", "formula_text": "θ * t = arg min θ t E x t ,ŷ t ∈D t L F(x t , θt), ŷt(1)" }, { "formula_coordinates": [ 3, 162.89, 706.31, 341.71, 17.67 ], "formula_id": "formula_1", "formula_text": "max ca,c b ∈C 1..T E x,ŷ∈D i,j L (yi,j, ŷi,j = ca) -E x,ŷ i,j L (yi,j, ŷi,j = c b ) ≤ ϵ(2)" }, { "formula_coordinates": [ 4, 116.01, 145.08, 388.45, 46.43 ], "formula_id": "formula_2", "formula_text": "max ca ,c b ∈C 1..T E x,ŷ∈D i,j L (yi,j , ŷi,j = ca) -E x,ŷ i,j L (yi,j , ŷi,j = c b ) ≤ ca,c b ∈C 1..T E x,ŷ∈D i,j L (yi,j , ŷi,j = ca) -E x,ŷ i,j L (yi,j , ŷi,j = c b ) ≤ 2 C 1..T [E x,ŷ∈D L (F (x, θ), ŷ)](3)" }, { "formula_coordinates": [ 4, 128.28, 248.63, 376.19, 16.19 ], "formula_id": "formula_3", "formula_text": "θ * = arg min θ L(y, ŷ)p(y)p(ŷ)dydŷ = arg min θ i,j L(yi,j , ŷi,j )p(yi,j )p(y \\(i,j) |yi,j )p(ŷ)dydŷ(4)" }, { "formula_coordinates": [ 4, 190.34, 383.12, 314.13, 45.1 ], "formula_id": "formula_4", "formula_text": "∂ i,j L(yi,j , ŷi,j )p(yi,j = ca)p(y \\(i,j) |yi,j )p(ŷ)dydŷ ∂y (ca ) < ∂ N k=1 L(yi,j , ŷi,j )p(yi,j = c b )q(y \\(i,j) |yi,j )p(ŷ k )dydŷ ∂y (c b )(5)" }, { "formula_coordinates": [ 4, 175.7, 541.13, 328.9, 24.13 ], "formula_id": "formula_5", "formula_text": "θ * = arg min θ E x∼p(y),ŷ∼p(ŷ) i,j L(yi,j, ŷi,j) q(yi,j)q(y \\(i,j) |yi,j) p(yi,j)p(y \\(i,j) |yi,j)(6)" }, { "formula_coordinates": [ 4, 130.87, 616.39, 373.73, 24.13 ], "formula_id": "formula_6", "formula_text": "θ * = arg min θ E x∼p(x),ŷ∼p(ŷ) L(y, ŷ) + 1 N i,j log q(yi,j) p(yi,j) + log q(y \\(i,j) |yi,j) p(y \\(i,j) |yi,j)(7)" }, { "formula_coordinates": [ 5, 184.9, 594.82, 319.77, 21.98 ], "formula_id": "formula_7", "formula_text": "L cluster (x t , F, θ t ) = i,j c D(f t i,j , p c )(8)" }, { "formula_coordinates": [ 5, 216.61, 622.57, 284.19, 25.44 ], "formula_id": "formula_8", "formula_text": "D(f t i,j , p c ) = ℓ(f t i,j , p c ) If ŷt i,j = c max{0, ∆ -ℓ(f t i,j , p c } otherwise (9" }, { "formula_coordinates": [ 5, 500.8, 630.9, 3.87, 8.64 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 6, 234.33, 277.67, 270.27, 10.61 ], "formula_id": "formula_10", "formula_text": "L distill (x t , F, θt, θt-1) = D(f t-1 , f t )(10)" }, { "formula_coordinates": [ 6, 212.19, 350.12, 292.41, 10.61 ], "formula_id": "formula_11", "formula_text": "L distill (x t , F, θt, θt-1) = O L cluster (x t , F, θt)(11)" }, { "formula_coordinates": [ 6, 174.45, 404.41, 326.42, 38.46 ], "formula_id": "formula_12", "formula_text": "∀c ∈ {0} ∪ C 1..t : D(f t , f t-1 ) ≤ D(f t , pc) + D(pc, f t-1 ) ⇔ D(f t , f t-1 ) ≤ 1 |C| c D(f t , pc) + D(pc, f t-1 ) (12" }, { "formula_coordinates": [ 6, 500.87, 420.08, 3.73, 7.77 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 6, 123.01, 503.13, 381.59, 37.47 ], "formula_id": "formula_14", "formula_text": "D(f t , f t-1 ) = O 1 |C| c D(f t , pc) + D(pc, f t-1 ) = O c D(f t , pc) ⇒ L distill (x t , F, θt, θt-1) = O L cluster (x t , F, θt)(13)" }, { "formula_coordinates": [ 7, 191.01, 114.82, 313.59, 21.22 ], "formula_id": "formula_15", "formula_text": "E x∼p(x) log q(y \\(i,j) |yi,j) p(y \\(i,j) |yi,j) ≤ -E x∼p(x) log p(y \\(i,j) |yi,j)(14)" }, { "formula_coordinates": [ 7, 240.1, 255.77, 260.77, 27.32 ], "formula_id": "formula_16", "formula_text": "(i ′ ,j ′ )∈N (i,j) exp - ||x i,j t -x t i ′ ,j ′ || 2 2 2σ 2 1 - ||y i,j t -y t i ′ ,j ′ || 2 2 2σ 2 2 (15" }, { "formula_coordinates": [ 7, 500.87, 265.18, 3.73, 7.77 ], "formula_id": "formula_17", "formula_text": 
")" }, { "formula_coordinates": [ 13, 135.6, 455.8, 183.77, 23.47 ], "formula_id": "formula_18", "formula_text": "= arg min θ E x∼p(x),ŷ∼p(ŷ) log L(y, ŷ) + 1 N i,j" }, { "formula_coordinates": [ 13, 130.87, 521.8, 181.77, 23.47 ], "formula_id": "formula_19", "formula_text": "θ * = arg min θ E x∼p(x),ŷ∼p(ŷ) L(y, ŷ) + 1 N i,j" } ]
Fairness Continual Learning Approach to Semantic Scene Understanding in Open-World Environments
Continual semantic segmentation aims to learn new classes while maintaining the information from the previous classes. Although prior studies have shown impressive progress in recent years, the fairness concern in continual semantic segmentation needs to be better addressed. Meanwhile, fairness is one of the most vital factors in deploying deep learning models, especially in human-related or safety-critical applications. In this paper, we present a novel Fairness Continual Learning approach to the semantic segmentation problem. In particular, under the fairness objective, a new fairness continual learning framework is proposed based on class distributions. Then, a novel Prototypical Contrastive Clustering loss is proposed to sufficiently address the significant challenges in continual learning, i.e., catastrophic forgetting and background shift. Our proposed loss has also been proven to be a generalized form of the knowledge distillation paradigm commonly used in continual learning. Moreover, the proposed Conditional Structural Consistency loss further regularizes the structural constraint of the predicted segmentation. Our proposed approach has achieved State-of-the-Art performance on three standard scene understanding benchmarks, i.e., ADE20K, Cityscapes, and Pascal VOC, and promotes the fairness of the segmentation model.
Thanh-Dat Truong; Hoang-Quan Nguyen; Bhiksha Raj; Khoa Luu
[ { "figure_caption": "Figure 1 :1Figure 1: The Class Distribution of ADE20K. In the ADE20K 50-50 (3 steps) benchmark, the distribution of classes in the majority group in the early step dominates the ones in the minority groups in the later steps. The distributions of classes gradually decrease over steps.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The Proposed Fairness Continual Learning Framework. The predicted segmentation maps are imposed the cross-entropy loss, the Prototypical Contrastive Clustering loss (L cluster ), the Fairness Loss from Class Distribution (L class ), and the Conditional Structural Consistency loss (L cons )", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "in our results, the conditional structure constraint demonstrates effective improvement. Indeed, it promotes the accuracy of the initial classes and the novel classes when the mIoU has been increased from 43.35% to 43.56% and from 23.50% to 25.46% respectively with the Transformer backbone. The fairness of classes is also improved as the standard deviation of the IoU of classes 0-100 and classes 101-150 is reduced from 19.03% to 18.71% and from 20.75% to 19.99%.4.3 Comparison with State-of-the-Art Methods", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Qualitative results of Our Approach and PLOP [12] on ADE20K 100-10.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Data Distribution of ADE20K, Cityscapes, and Pascal VOC. The data distributions of ADE20K and Cityscapes suffer a more severe imbalance compared to Pascal VOC.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Pascal VOC 15-1 (3 steps), and Pascal VOC 10-1 (11 steps). 
The mean Intersection over Union (mIoU) metric is used in our experiments.", "figure_data": "Backbone L cluster L class L cons0-100 mIoU STD mIoU STD mIoU STD mIoU STD 100-150 all avg0.08 0.84 19.52 20.18 6.52 13.14 24.41 13.14DeepLab-V3✓ ✓✓41.71 19.90 15.33 21.96 32.97 23.03 37.58 23.03 42.25 19.31 18.55 20.52 34.40 22.07 38.35 22.07✓✓✓43.40 19.08 24.04 19.12 36.99 21.67 40.45 21.670.10 0.84 23.18 19.83 7.74 15.74 25.82 15.74SegFormer✓ ✓✓43.40 19.35 21.60 22.06 36.18 22.32 39.85 22.32 43.35 19.03 23.50 20.75 36.78 21.86 40.34 21.86✓✓✓43.56 18.71 25.46 19.99 37.56 21.10 40.73 21.10", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Final mIoU (%) for Continual Semantic Segmentation on Cityscapes.", "figure_data": "Method11-5 3 steps 11 steps 21 steps 11-1 1-1Joint79.3079.3079.30LWF-MC [31]58.9056.9231.24ILT [25]59.1457.7530.11MiB [3]61.5160.0242.15PLOP [12]63.5162.0545.24RCIL [46]64.3063.0048.90FairCL + DeepLab-V3 66.9666.6149.22FairCL + SegFormer67.8567.0955.68", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "V3 43.40 24.04 36.99 40.45 49.65 26.84 34.55 41.68 41.73 20.36 34.65 39.01 FairCL + SegFormer 43.56 25.46 37.56 40.73 49.62 27.78 35.15 42.25 42.21 21.91 35.49 39.36", "figure_data": "Method100-50 (2 steps) 0-100 101-150 all50-50 (3 steps) avg 0-50 51-150 all100-10 (6 steps) avg 0-100 101-150 allavgJoint44.30 28.20 38.90-51.10 32.80 38.90-44.30 28.20 38.90-ILT [25]18.29 14.40 17.00 29.42 3.53 12.85 9.70 30.12 0.113.061.09 12.56MiB [3]40.52 17.17 32.79 37.31 45.57 21.01 29.31 38.98 38.21 11.12 29.24 35.12PLOP [12]41.87 14.89 32.94 37.39 48.83 20.99 30.40 39.42 40.48 13.61 31.59 36.64RCIL [46]42.30 18.80 34.50 38.48 48.30 25.00 32.50-39.30 17.60 32.10-MiB + AWT [15]40.90 24.70 35.60-46.60 26.85 33.50-39.10 21.28 33.20-SSUL [4]41.28 18.02 33.58-48.38 20.15 29.56-40.20 18.75 33.10-SATS [30]--------41.42 19.09 34.18-FairCL + DeepLab-", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Continual Semantic Segmentation results on ADE20k in Mean IoU (%).", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The mIoU (%) of CSS on Pascal VOC.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Algorithm 1: Prototypical Constrative Clustering Loss Input: Current iteration i of step t; A set of prototypical vectors {pc} |C 1..t | c=0 ; A set of features fi,j; Momentum parameter: η; A set of storing features {Sc} |C 1..t | c=0 1: Initialize pc where c ∈ C t in the first iteration. 2: L cluster ← 0 3: if i == M then 4:", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "cluster L class L cons ADE20K 150-50 Benchmark Backbone L cluster L class L cons", "figure_data": "Major GroupMinor GroupmIoU STD mIoU STD✓48.78 18.12 25.13 21.12DeepLab-V3✓✓48.89 17.87 27.24 20.76✓✓✓50.11 17.46 30.52 20.43Major GroupMinor GroupmIoU STD mIoU STD✓87.44 9.25 53.37 16.72DeepLab-V3✓✓88.29 8.85 55.72 13.39✓✓✓89.20 8.41 56.70 11.96", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": " ", "figure_data": "", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "[12]", "Explanation": "The cited work by [12] is used to define the paradigm of Continual Semantic Segmentation, which the citing paper builds upon in its research on semantic segmentation tasks."}, {"Category": "Methodological Basis", "Citation": "[5,7]", "Explanation": "The cited works by [5,7] provide the basis for the use of Convolutional Neural Networks (CNNs) in the citing paper for approaching semantic segmentation tasks."}, {"Category": "Methodological Basis", "Citation": "[45,8]", "Explanation": "The cited works by [45,8] are referenced for the use of Transformers in the citing paper for semantic segmentation tasks."}, {"Category": "Data Source", "Citation": "[39,42,43,1,18]", "Explanation": "The cited works by [39,42,43,1,18] are acknowledged for their contributions to the field of domain adaptation in semantic segmentation tasks, which the citing paper builds upon in its research."}, {"Category": "Extension or Continuation", "Citation": "[12,46]", "Explanation": "The cited works by [12,46] are used to extend the research on Continual Semantic Segmentation, exploring new dimensions and variables in the field of semantic segmentation tasks."}, {"Category": "Supporting Evidence", "Citation": "[12,46]", "Explanation": "The cited works focus on addressing the challenges of catastrophic forgetting and background shift in semantic segmentation, which the citing paper also aims to address in its research on continual learning in the same domain."}, {"Category": "Data Source", "Citation": "[33,14,38]", "Explanation": "The cited works provide the data and methods used to study the problem of catastrophic forgetting in semantic segmentation, which the citing paper builds upon in its research on continual learning in the same domain."}, {"Category": "Extension or Continuation", "Citation": "[12,46]", "Explanation": "The cited works on background shift in semantic segmentation are extended in the citing paper to address the problem of fairness in semantic segmentation, which is a new dimension in the research of continual learning in the same domain."}, {"Category": "Extension or Continuation", "Citation": "[12,46]", "Explanation": "The cited works on the class distribution in semantic segmentation are extended in the citing paper to consider the problem of class distribution in the context of continual learning in the same domain."}, {"Category": "Extension or Continuation", "Citation": "[3]", "Explanation": "The cited work on the problem of background shift in semantic segmentation is extended in the citing paper to consider the problem of class distribution in the context of continual learning in the same domain."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work on fairness in image classification is extended in the citing paper to consider the problem of fairness in semantic segmentation in the context of continual learning in the same domain."}, {"Category": "Methodological Basis", "Citation": "[1,18,19]", "Explanation": "The cited works have tried to reduce the effect of class imbalance in semantic segmentation by introducing weighted cross entropy and over-sampling techniques, which the citing paper adopts to address the fairness problem in continual semantic segmentation."}, {"Category": "Supporting Evidence", "Citation": "[11,44,19]", "Explanation": "The cited works have provided foundational data and techniques for addressing the class imbalance in semantic segmentation, which the citing 
paper uses to support its research on fairness in continual semantic segmentation."}, {"Category": "Extension or Continuation", "Citation": "[18,47,1]", "Explanation": "The cited works have explored over-sampling techniques in semantic segmentation, which the citing paper extends by applying them to address the fairness problem in continual semantic segmentation."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work provides a foundational method for addressing the continual learning problem in image classification, which the citing paper extends to the field of medical images and general datasets."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work provides a method for extending existing continual learning methods to the field of medical images and general datasets, which the citing paper further builds upon."}, {"Category": "Extension or Continuation", "Citation": "[27]", "Explanation": "The cited work addresses a background shifting problem in continual semantic segmentation, which the citing paper extends by applying the method to medical images."}, {"Category": "Extension or Continuation", "Citation": "[28]", "Explanation": "The cited work extends the field of continual semantic segmentation to medical images, which the citing paper further expands upon by applying the method to general datasets."}, {"Category": "Extension or Continuation", "Citation": "[24]", "Explanation": "The cited work extends the field of continual semantic segmentation to general datasets, which the citing paper further expands upon by applying the method to a wider range of data."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work addresses a background shifting problem in continual semantic segmentation, which the citing paper builds upon by applying the method to medical images and general datasets."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work introduces a multi-scale spatial distillation scheme for preserving spatial relationships in feature level, which the citing paper adopts for continual semantic segmentation."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work presents a continual learning framework for training on multiple labeled data domains, which the citing paper uses as a basis for its own research."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work proposes a continual learning framework under the unsupervised domain adaptation setting using data rehearsal, which the citing paper builds upon for its own research."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work introduces a multi-head knowledge distillation framework for continual unsupervised domain adaptation, which the citing paper adopts for its own research."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work presents a representation compensation module to decouple representation learning in old and new knowledge, which the citing paper uses as a basis for its own research."}, {"Category": "Extension or Continuation", "Citation": "[4]", "Explanation": "The cited work suggests finding unknown classes from the background to distinguish representations of potential future classes, which the citing paper further extends by applying the method to a wider range of data."}, {"Category": "Extension or Continuation", "Citation": 
"[46]", "Explanation": "The cited work presents a representation compensation module to decouple representation learning in old and new knowledge, which the citing paper further extends by applying the method to a wider range of data."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work proposes a self-attention transferring method that the citing paper adopts to capture both within-class and between-class knowledge in their research."}, {"Category": "Extension or Continuation", "Citation": "[29]", "Explanation": "The cited work introduces a class-similarity knowledge-distillation method that the citing paper further explores to revise old classes and better learn new classes related to previous classes in their research."}, {"Category": "Supporting Evidence", "Citation": "[32]", "Explanation": "The cited work presents a balanced Softmax loss that the citing paper uses to reduce the distribution shift of labels and alleviate the long-tail issue in their research."}, {"Category": "Supporting Evidence", "Citation": "[44]", "Explanation": "The cited work proposes a Seesaw loss that the citing paper adopts to reweight the contributions of gradients produced by positive and negative instances of a class using two regularizers in their research."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work presents an algorithm that the citing paper uses to handle imbalanced classification, few-shot learning, and open-set recognition using dynamic meta-embedding in their research."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work proposes a stochastic training scheme for semantic segmentation that the citing paper adopts to improve the learning of debiased and disentangled representations in their research."}, {"Category": "Supporting Evidence", "Citation": "[37]", "Explanation": "The cited work presents a tilted cross-entropy loss that the citing paper uses to reduce performance differences and promote fairness among target classes in their research."}, {"Category": "Supporting Evidence", "Citation": "[38]", "Explanation": "The cited work presents a new loss function that the citing paper adopts to address the class imbalance issue in their research."}, {"Category": "Methodological Basis", "Citation": "[39]", "Explanation": "The cited work proposes a new loss function that the citing paper uses to address the class imbalance issue in their research."}, {"Category": "Supporting Evidence", "Citation": "[40]", "Explanation": "The cited work presents a new loss function that the citing paper adopts to address the class imbalance issue in their research."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work proposes a new loss function that the citing paper uses to address the class imbalance issue in their research."}, {"Category": "Methodological Basis", "Citation": "[42]", "Explanation": "The cited work presents a new loss function that the citing paper uses to address the class imbalance issue in their research."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work proposes a new loss function that the citing paper uses to address the class imbalance issue in their research."}, {"Category": "Methodological Basis", "Citation": "[40]", "Explanation": "The cited work introduces a fairness domain adaptation approach that the citing paper adopts to ensure the fairness of predictions in semantic segmentation on the target 
domain."}, {"Category": "Methodological Basis", "Citation": "[12,46,4]", "Explanation": "The cited works focus on addressing the two challenges in the prior approaches, which the citing paper adopts to build upon and further improve upon the research in the field of semantic segmentation."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work provides a method of using Softmax loss to model background classes without explicit supervision, which the citing paper adopts in their approach to address the limitations of unlabeled pixels during training."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work provides a method of maintaining a set of features for class c that is used in the updating step of the prototypical vectors in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work provides a method of supporting the updating step of the class prototypes in the citing paper by maintaining a set of features for each class."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work provides a specific approach for knowledge distillation in continual learning, which the citing paper adopts in their research to prevent the segmentation model from forgetting knowledge learned in previous tasks."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work contributes to the citing paper by providing a specific approach for knowledge distillation in continual learning, which the citing paper adopts in their research to improve the performance of the segmentation model."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work provides a specific approach for knowledge distillation in continual learning, which the citing paper adopts in their research to improve the performance of the segmentation model in a more efficient way."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work contributes to the citing paper by providing a specific approach for knowledge distillation in continual learning, which the citing paper adopts in their research to improve the performance of the segmentation model in a more efficient and effective manner."}, {"Category": "Methodological Basis", "Citation": "[5,39]", "Explanation": "The cited works provide a Markovian assumption for modeling conditional structural consistency, which the citing paper adopts to impose consistency in the segmentation map through the prediction at location (i, j) and its neighbor pixels."}, {"Category": "Data Source", "Citation": "[49]", "Explanation": "The dataset ADE20K is used as a source of data for the research conducted in the citing paper, providing a large collection of scene images for semantic segmentation analysis."}, {"Category": "Data Source", "Citation": "[10]", "Explanation": "The Cityscapes dataset is used as a data source for the research on autonomous driving, providing a high-quality, dense set of images for analysis."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The PASCAL VOC dataset is referenced as a common data source for the research on image segmentation, providing a large collection of images for analysis."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The DeepLab-V3 network architecture is used as a methodological basis in the experiments conducted in the citing paper, providing a specific segmentation network to 
be used in the research."}, {"Category": "Methodological Basis", "Citation": "[45]", "Explanation": "The SegFormer network architecture is used as a methodological basis in the experiments conducted in the citing paper, providing a specific segmentation network to be used in the research."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The DeepLab-V3 [5] with the Resnet101 backbone is used as a network backbone in the ablative experiments, providing a methodological basis for the study of the performance of the CSS model and fairness improvement on the ADE20K benchmark."}, {"Category": "Methodological Basis", "Citation": "[45]", "Explanation": "The SegFormer [45] with a Transformer backbone is also used in the ablative experiments, providing an alternative methodological basis for the study of the performance of the CSS model and fairness improvement on the ADE20K benchmark."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The results in Table 1 show the performance of segmentation models using a more powerful backbone, i.e., Transformer, outperforming the models using the Resnet backbone. This extension or continuation of the study suggests the potential of using a more powerful backbone for further improvement in the performance of the CSS model and fairness improvement on the ADE20K benchmark."}, {"Category": "Extension or Continuation", "Citation": "[45]", "Explanation": "The results in Table 1 also show the improvement in the capability of learning new classes when using a Transformer backbone, i.e., MiT-B3 [45], in the full configuration. This extension or continuation of the study suggests the potential of using a more powerful backbone for further improvement in the performance of the CSS model and fairness improvement on the ADE20K benchmark."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work is referenced to compare the results of the citing paper with the previous method, which is used to establish a baseline for the performance evaluation."}, {"Category": "Extension or Continuation", "Citation": "[12]", "Explanation": "The cited work is compared to the proposed method in terms of qualitative results, showing the performance improvement of the citing paper in terms of maintaining information and predicting segmentation correctly."}, {"Category": "Data Source", "Citation": "Pascal VOC", "Explanation": "The Pascal VOC dataset is referenced to evaluate the performance of the proposed method on the dataset, highlighting the reliance on external data for the study conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[46]", "Explanation": "The cited work by [46] provides a result of 59.4% in the 15-1 task, which serves as a benchmark for the citing paper to compare and improve upon in terms of overall mIoU performance."}, {"Category": "Methodological Basis", "Citation": "[20,16]", "Explanation": "The cited works provide the basis for the development of the algorithm to compute the Prototypical Contrastive Clustering loss and update prototypical vectors in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work, DeepLab-V3, serves as the basis for the segmentation network architecture used in the citing paper, providing a specific model and implementation details for the research."}, {"Category": "Data Source", "Citation": "[45]", "Explanation": "The cited work, SegFormer, is referenced to acknowledge 
the use of the MiT-B3 backbone in the segmentation network architecture, likely to provide a specific model or implementation for the research."}, {"Category": "Methodological Basis", "Explanation": "The cited work is used to implement the framework in PyTorch and train the model on four A100 GPUs, providing a specific method and technical details for the research."}, {"Category": "Methodological Basis", "Explanation": "The cited work is used to optimize the model with the SGD optimizer and set the learning rate for each step and dataset, providing a specific method and parameters for the research."}, {"Category": "Methodological Basis", "Explanation": "The cited work is used to implement the feature vectors from the last layer of the decoder for the prototypical clustering loss, providing a specific method and implementation for the research."}, {"Category": "Methodological Basis", "Citation": "[20,16]", "Explanation": "The cited works provide a common practice in contrastive learning for computing the prototypes in the Prototypical Contrastive Clustering loss, which the citing paper adopts in their research."}, {"Category": "Data Source", "Citation": "[5,39]", "Explanation": "The cited works provide the number of neighbor pixels within a window size of 3 \u00d7 3 for the conditional structural consistency loss, which the citing paper utilizes in their study."}, {"Category": "Supporting Evidence", "Citation": "[12]", "Explanation": "The cited work provides a comparison of the memory and computational requirements of storing prototypical vectors versus using a teacher model in distillation approaches, which supports the claim that the approach used in the citing paper is more efficient in terms of memory and computation."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b24", "b62", "b45", "b16", "b13", "b75", "b79", "b31", "b76", "b8", "b18", "b10" ], "table_ref": [], "text": "With an increasing number of videos appearing online, video understanding has become a prominent research topic in computer vision. Temporal action localization (TAL), which aims to temporally locate and recognize human actions with a set of categories in a video clip, is a challenging yet fundamental task in this area, owing to its various applications such as sports highlighting, human action analysis and security monitoring [25,63,46,17,14].\nWe have recently witnessed significant progress in TAL, where most methods can be mainly divided into two parts: What helps to recognize this action?\nWhat helps to find the boundary?\nFigure 1. The motivation of our method. We show the action instance of clothes drying and depict the possible importance of each frame to recognizing the action category and locating action boundaries. Each frame's importance is different.\n1) Two-stage approaches [76,86] tackle this task accompanied by the generation of class-agnostic action proposals and then perform classification and proposal boundaries refinement in proposal-level; 2) One-stage approaches [80,73,32] simultaneously recognize and localize action instances in a single-shot manner. Typical methods [77,29] of this type predict categories as well as locate corresponding temporal boundaries in frame-level, achieving stronger TAL results currently. In training, they classify every frame as one action category or background and regress the boundaries of frames inside ground-truth action segments. However, these works treat each frame within action segments equally in training, leading to sub-optimal performance. When humans intend to locate action instances, the discrepant information of each frame is referred to. For the instance of action: clothes drying, as depicted in Fig 1, frames in the purple box promote recognizing clothes drying most as they describe the intrinsic sub-action: hang clothes on the hanger. Analogously, frames in red and gray boxes depict take out clothes from laundry basket and lift laundry basket, which are more informative to locate precise start and end time respectively. In a word, each frame's contribution is quite different, due to intrinsic patterns of actions, as well as existing transitional or blurred frames.\nCan we discover informative frames for classifying and localizing respectively? To this end, we first introduce a concept -Action Sensitivity, to measure the frame's importance. It is disentangled into two parts: action sensitivity to classification sub-task and action sensitivity to localization sub-task. For one sub-task, the higher action sensitivity each frame has, the more important it will be for this sub-task. With this concept, intuitively, more attention should be paid to action sensitive frames in training.\nTherefore in this paper, we propose a lightweight Action Sensitivity Evaluator (ASE) for each sub-task to better exploit frame-level information. Essentially, for a specific sub-task, ASE learns the action sensitivity of each frame from two perspectives: class-level and instance-level. The class-level perspective is to model the coarse action sensitivity distribution of each action category and is achieved by incorporating gaussian weights. The instance-level perspective is complementary to class-level modeling and is supervised in a prediction-aware manner. 
Then the training weights of each frame are dynamically adjusted depending on their action sensitivity, making it more reasonable and effective for model training.\nWith the proposed ASE, we build our novel Action Sensitivity Learning framework dubbed ASL to tackle temporal action localization task (TAL) effectively. Moreover, to furthermore enhance the features and improve the discrimination between actions and backgrounds, we design a novel Action Sensitive Contrastive Loss (ASCL) based on ASE. It is implemented by elaborately generating various types of action-related and action-irrelevant features and performing contrasting between them, which brings multiple merits for TAL.\nBy conducting extensive experiments on 6 datasets and detailed ablation studies, we demonstrate ASL is able to classify and localize action instances better. In a nutshell, our main contributions can be summarized as follows:\n• We propose a novel framework with an Action Sensitivity Evaluator component to boost training, by discovering action sensitive frames to specific sub-tasks, which is modeled from class level and instance level.\n• We design an Action Sensitive Contrastive Loss to do feature enhancement and to increase the discrimination between actions and backgrounds.\n• We verify ASL on various action localization datasets of multiple types: i) densely-labeled (i.e., Multi-Thumos [75] and Charades [53]). ii) egocentric (Ego4d-Moment Queries v1.0 [19] and Epic-Kitchens 100 [11]). iii) nearly single-labeled (Thumos14 [57] and ActivityNet1.3 [2]), and achieve superior results." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b30", "b32", "b57", "b64", "b69", "b48", "b75", "b75", "b43", "b48", "b80", "b31", "b79", "b38", "b8", "b76", "b70", "b37", "b36", "b60", "b9", "b3", "b58", "b2", "b57", "b58", "b37", "b38", "b3", "b3", "b38", "b3", "b38", "b34", "b33", "b59", "b77", "b83", "b84", "b51", "b12", "b59", "b73", "b15", "b7", "b6", "b5", "b19", "b21", "b40", "b35", "b66", "b22", "b8" ], "table_ref": [], "text": "Temporal Action Localization. Temporal action localization is a long-standing research topic. Contemporary approaches mostly fall into two categories, i.e. two-stage and one-stage paradigms. Previous two-stage methods usually focused on action proposal generation [31,33,56,58,65]. Others have integrated action proposal, calibrated backbone, classification and boundary regression or refinement modules into one single model [51,70,49,82]. Recent efforts have investigated the proposal relations [76,86,66], utilized graph modeling [73,76], or designed fine-grained temporal representation [44,55]. One-stage approaches usually perform frame-level or segment-level classification and directly localization or merging segments [49,81,32]. [80,39] process the video with the assistance of pre-defined anchors or learned proposals, while others utilize existing information and are totally anchor-free [29,77,79]. Currently, some works introduce pretrain-finetune to TAL task [71,72] or attempt to train the model in an efficient end-to-end manner [38,7,37]. Others focused on denselylabeled setting [61,10,9,24,59,8]. With the success of DETR [3] in object detection, query-based methods have also been proposed [48,58,59,38]. Our method falls into the one-stage TAL paradigm and performs frame-level classification and localization. 
Notably, [43,39] incorporate Gaussian kernels to improve receptive fields and optimize the temporal scale of action proposals, [24] use fixed gaussian-like weights to fuse the coarse and fine stage. We also utilize gaussian weights as one part of ASE, but it differs in that: i) Our gaussian-like weights in ASE serve as modeling class-level action sensitivity and to boost effective training, while [24,43,39] use it only to better encode the videos. ii) Our learned gaussian weights describe frames' contributions to each sub-task and can be easily visualized, whereas the semantic meaning of gaussian weights in [24,43,39] is unclear. iii) Our gaussian-like weights are totally learnable, category-aware and disentangled to different sub-tasks.\nOne-stage Object Detection. Analogous to TAL task, the object detection task shares a few similarities. As a counterpart in object detection, the one-stage paradigm has surged recently. Some works remain anchor-based [35], while others are anchor-free, utilizing a feature pyramid network [34,60] and improved label-assign strategies [78,84,85,52]. Moreover, some works define key points in different ways (e.g. corner [26], center [13,60] or learned points [74]). These methods bring some inspiration to design a better TAL framework. Some methods [16,28,27] aim to tackle the misalignment between classification and localization. But i) we mainly focus on the discrepant information of frames. ii) Misalignment of two sub-tasks (i.e., classification and localization) is only the second issue and we alleviate it by a novel contrastive loss which differs from these works.\nContrastive Learning. Contrastive learning [6,20,22] is an unsupervised learning objective that aims to bring similar examples closer together in feature space while pushing dissimilar examples apart. NCE [21] and Info-NCE [41] are two typical methods that mine data features by distinguishing between data and noise or negative samples. Info-NCE-based contrastive learning has been used in methods of different tasks, such as [68, 36,67] in cross-modality retrieval and [23,42] in unsupervised learning. In TAL, [29] leverages ranking loss to boost discrimination between foreground and background while [48] contrasts different actions with a global representation of action segments. But we design a new contrastive loss both across different types of actions and between actions and backgrounds. Moreover, compared to [50] which also contrasts between actions and backgrounds, our proposed contrastive loss contrasts more between i)same and different action classes, ii)sensitive frames of localization and classification to mitigate the misalignment of sub-tasks. Details will be discussed in 3.3." }, { "figure_ref": [], "heading": "Method Problem Formulation. The task of temporal action localization (TAL) is to predict a set of action instances", "publication_ref": [], "table_ref": [], "text": "{(t s m , t e m , c m )} M m=1\n, given a video clip, where M is the number of predicted action instances, t s m ,t e m ,c m are the start, end timestamp and action category of the m-th predicted action instance. ASL is built on an anchor-free representation that classifies each frame as one action category or background, as well as regresses the distances from this frame to the start time and end time.\nOverview. The overall architecture of ASL is shown in Fig 2 . ASL is composed of four parts: video feature extractor, feature encoder, action sensitivity evaluator, and two sub-task heads. 
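To make the anchor-free representation above concrete, decoding a model output back into action instances is essentially one subtraction and one addition per frame. The sketch below is a minimal, hypothetical illustration in PyTorch; the function name, tensor layout and the stride argument are our assumptions, not part of ASL.

import torch

def decode_frame_predictions(offsets: torch.Tensor, cls_scores: torch.Tensor, stride: float = 1.0):
    # offsets: (T, 2) non-negative distances from each frame to the action start/end
    # cls_scores: (T, C) per-frame class scores; the max score doubles as the confidence
    t = torch.arange(offsets.shape[0], dtype=offsets.dtype) * stride
    starts = t - offsets[:, 0]
    ends = t + offsets[:, 1]
    scores, labels = cls_scores.max(dim=-1)
    # each frame yields a candidate (t_s, t_e, c); overlapping candidates are merged later by SoftNMS
    return torch.stack([starts, ends], dim=-1), labels, scores

With this output format in mind, the four components are described in turn below.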
Concretely, given a video clip, we first extract the video feature using a pre-trained 3D-CNN model. Then we apply a feature encoder with a pyramid network to better represent the temporal features at multiple levels. We propose an action sensitivity evaluator module to assess the action sensitivity of frames to a specific sub-task. The pyramid features, combined with the frames' action sensitivity, are further processed by the sub-task heads to generate predictions. We now describe the details of ASL." }, { "figure_ref": [], "heading": "Feature Encoder", "publication_ref": [ "b76", "b8" ], "table_ref": [], "text": "Following the success of [77,29], ASL utilizes a Transformer encoder and a feature pyramid network to encode feature sequences into a multiscale representation. To enhance features, in the Transformer encoder we design a new attention mechanism that runs temporal attention and channel attention in parallel and then fuses the two outputs.\nFor temporal attention, performed along the temporal dimension, the input features generate query, key and value tensors (Q_t, K_t, V_t) \in \mathbb{R}^{T \times D}, where T is the number of frames and D is the embedding dimension, and the output is calculated as:\nf'_{ta} = \mathrm{softmax}\big(Q_t K_t^{\top} / \sqrt{D}\big) V_t \quad (1)\nFor channel attention, conducted along the channel dimension, the input features generate query, key and value tensors (Q_d, K_d, V_d) \in \mathbb{R}^{D \times T}, where D is the number of channels. The output is calculated as:\nf'_{ca} = \mathrm{softmax}\big(Q_d K_d^{\top} / \sqrt{T}\big) V_d \quad (2)\nThe two outputs are then fused with a coefficient \theta: f' = (1 - \theta) f'_{ta} + \theta {f'_{ca}}^{\top}. The result is processed by layer normalization and a feedforward network to obtain the encoded video representation f \in \mathbb{R}^{T \times D}." }, { "figure_ref": [], "heading": "Action Sensitivity Evaluator", "publication_ref": [ "b76", "b59", "b34", "b82" ], "table_ref": [], "text": "As discussed in Section 1, not all frames inside ground-truth segments contribute equally to a given sub-task (i.e., localization or classification). We therefore design an Action Sensitivity Evaluator (ASE) module, the core idea of which is to determine the sub-task-specific action sensitivity of each frame and help the model pay more attention to those valuable frames. Besides, this module is lightweight, leading to efficient and effective training.\nDecoupling to two levels. Digging into action instances, a key observation is that actions of a particular category often share a similar pattern, yet they appear slightly different in diverse scenarios or under different behavior agents. For example, action instances of the category wash vegetables inherently contain the sub-actions turn the tap on, take vegetables, wash, and turn the tap off, where frames depicting washing are more sensitive to classification, while frames depicting turning the tap on and turning the tap off are more sensitive to localization. However, the respective duration or proportion of these sub-actions depends on the scene and context of each action instance, which makes the sensitive frames slightly different across instances. This motivates us to decouple the action sensitivity of every frame into class-level and instance-level modeling and then recombine these two parts.\nDisentangling to two sub-tasks. Here the sub-tasks are classification and localization. Intuitively, action sensitivity for classification needs to be modeled, since the frames sensitive to classification are not easily determined. Action sensitivity modeling for localization is also necessary. 
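As a concrete reference for the Feature Encoder above, the following is a minimal PyTorch sketch of the parallel temporal/channel attention of Eqs. (1)-(2) and the theta-weighted fusion. It is single-head and single-level; the class name, the shared QKV projections and the feedforward width are our own illustrative choices rather than the paper's exact implementation.

import math
import torch
import torch.nn as nn

class DualAttentionBlock(nn.Module):
    # Parallel temporal attention (tokens = frames) and channel attention (tokens = channels),
    # fused with a coefficient theta as in f' = (1 - theta) * f_ta + theta * f_ca^T.
    def __init__(self, dim: int, theta: float = 0.2):
        super().__init__()
        self.theta = theta
        self.qkv_t = nn.Linear(dim, 3 * dim)
        self.qkv_c = nn.Linear(dim, 3 * dim)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        T, D = x.shape                                              # x: (T, D) features of one clip
        q, k, v = self.qkv_t(x).chunk(3, dim=-1)                    # temporal attention, Eq. (1)
        f_ta = torch.softmax(q @ k.t() / math.sqrt(D), dim=-1) @ v  # (T, D)
        qc, kc, vc = (z.t() for z in self.qkv_c(x).chunk(3, dim=-1))        # (D, T) each
        f_ca = torch.softmax(qc @ kc.t() / math.sqrt(T), dim=-1) @ vc       # channel attention, Eq. (2)
        fused = (1 - self.theta) * f_ta + self.theta * f_ca.t()             # (T, D)
        h = self.norm1(x + fused)                                   # residual + layer normalization
        return self.norm2(h + self.ffn(h))                          # feedforward network

The default theta of 0.2 matches the value reported in the Implementation Details.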
Though the boundaries of action segments are already defined, the sensitive frames are not necessarily at the start or the end of an action, since i) action boundaries are often unclear, and ii) each frame of the sub-actions around boundaries also carries different semantics. Therefore, action sensitivity modeling should be disentangled for the two sub-tasks (i.e., classification and localization) respectively.\nFormally, for a given ground-truth G = \{t_s, t_e, c\}, indicating the start time, end time and category of one action, we denote N_f as the number of frames within this action and N_c as the number of pre-defined action categories. Our goal is to model the class-level action sensitivity p (disentangled into p^{cls}, p^{loc} for classification and localization respectively) and the instance-level action sensitivity q (disentangled into q^{cls}, q^{loc}). We now delve into the details of action sensitivity learning.\nClass-level Modeling. Class-level sensitivity poses a fundamental prior for action sensitivity learning. Two key observations are that: i) video frames are often consecutive, and ii) there often exist keyframes that have a peak value of sensitivity among all frames. We therefore incorporate Gaussian-like weights with learnable parameters \mu, \sigma \in \mathbb{R}^{N_c} to model the class-level action sensitivity p.\nFor the classification sub-task, we model the corresponding action sensitivity p^{cls}_i of the i-th frame as:\np^{cls}_i = \exp\{-\frac{(d(i) - \mu_c)^2}{2\sigma_c^2}\} \quad (3)\nwhere d(i) is the distance from the i-th frame to the central frame of the ground-truth segment, normalized by N_f. In this case, d(i) \in [-0.5, 0.5]: when i = 1 (i.e., the start frame), d(i) = -0.5, and when i = N_f (i.e., the end frame), d(i) = 0.5. The learnable parameters \mu_c, \sigma_c denote the mean and variance of category c's action sensitivity distribution.\nFor the localization sub-task, different frames are sensitive to locating the start time and the end time, so the action sensitivity p^{loc} is the combination of two parts. We explicitly allocate one Gaussian-like weight, p^{sot}, to model start-time locating sensitivity and another, p^{eot}, to model end-time locating sensitivity. p^{loc} is calculated as:\np^{loc}_i = \underbrace{\exp\{-\frac{(d(i) - \mu_{c,1})^2}{2\sigma_{c,1}}\}}_{p^{sot}_i} + \underbrace{\exp\{-\frac{(d(i) - \mu_{c,2})^2}{2\sigma_{c,2}}\}}_{p^{eot}_i} \quad (4)\nIn this way, the class-level action sensitivities p^{cls}, p^{loc} \in \mathbb{R}^{N_f \times N_c} of all categories are learned along with model training. In addition, the initialization of \mu_c and \sigma_c matters, as there exists prior knowledge [77,60] for the different sub-tasks. For the classification sub-task, near-center frames are more sensitive, so we initialize \mu_c as 0. For the localization sub-task, near-start and near-end frames are more sensitive, so we initialize \mu_1 as -0.5 and \mu_2 as 0.5. All \sigma are initialized as 1.\nInstance-level Modeling. Intuitively, a Gaussian can only give a single peak, and thus class-level action sensitivity learning may not discover all sensitive frames. To this end, we introduce instance-level modeling, which is complementary and aims to capture additional important frames that have not been discovered by class-level modeling.\nIn instance-level modeling, more information about the frame context of each instance is exploited: we obtain the instance-level action sensitivity q \in \mathbb{R}^{N_f} using an instance-level evaluator that operates directly on each frame, composed of a 1D temporal convolution network to better encode temporal context, a fully connected layer and a Sigmoid activation function. 
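Before turning to how the instance-level evaluator is supervised, here is a small sketch of the class-level Gaussians of Eqs. (3)-(4), using the initialization stated above (mu_c = 0 for classification, mu_{c,1} = -0.5 and mu_{c,2} = 0.5 for localization, sigma = 1). The equations are followed as printed, and the paper additionally clamps sigma during training (see Implementation Details); the module layout and names are our illustration, not the released code.

import torch
import torch.nn as nn

class ClassLevelSensitivity(nn.Module):
    # One learnable Gaussian per class for classification (Eq. (3)) and two per class
    # (start of action / end of action) for localization (Eq. (4)).
    def __init__(self, num_classes: int):
        super().__init__()
        self.mu_cls = nn.Parameter(torch.zeros(num_classes))
        self.sigma_cls = nn.Parameter(torch.ones(num_classes))
        self.mu_sot = nn.Parameter(torch.full((num_classes,), -0.5))
        self.sigma_sot = nn.Parameter(torch.ones(num_classes))
        self.mu_eot = nn.Parameter(torch.full((num_classes,), 0.5))
        self.sigma_eot = nn.Parameter(torch.ones(num_classes))

    def forward(self, n_frames: int, c: int):
        # d(i): normalized signed distance of each in-segment frame to the segment center
        d = torch.linspace(-0.5, 0.5, n_frames)
        p_cls = torch.exp(-(d - self.mu_cls[c]) ** 2 / (2 * self.sigma_cls[c] ** 2))   # Eq. (3)
        p_sot = torch.exp(-(d - self.mu_sot[c]) ** 2 / (2 * self.sigma_sot[c]))        # Eq. (4), start term
        p_eot = torch.exp(-(d - self.mu_eot[c]) ** 2 / (2 * self.sigma_eot[c]))        # Eq. (4), end term
        return p_cls, p_sot + p_eot   # p^cls and p^loc for the frames of one ground-truth action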
We denote \Phi^{cls} and \Phi^{loc} as the two sub-task-specific instance-level evaluators; q^{cls} and q^{loc} are then computed as:\nq^{cls}_i = \Phi^{cls}(f_i), \quad q^{loc}_i = \Phi^{loc}(f_i) \quad (5)\nUnlike class-level modeling, which contains some prior knowledge, the instance-level sensitivity q is hard to learn in an unsupervised manner. Intuitively, at the instance level a sensitive frame is one that leads to good predictions. Hence we utilize the quality \{Q_i\}_{i=1}^{N_f} of each frame's prediction to supervise the learning of q. For localization, a higher tIoU indicates a higher degree of overlap between two segments, so the tIoU between the predicted segment and the ground-truth segment can measure the quality of the prediction. For classification, the probability of the ground-truth category can serve as the quality of the prediction. Therefore, the qualities Q^{cls} and Q^{loc} are defined as:\nQ^{cls}_i = \varphi(s_i[c]), \quad Q^{loc}_i = \mathrm{tIoU}(\Delta_i, \Delta) \quad (6)\nwhere s denotes the classification logits, \Delta_i is the predicted segment (t^s, t^e) of the i-th frame, \Delta is the corresponding ground-truth segment, and \varphi(\cdot) is the Sigmoid function. We use an MSE loss to supervise the calculation of q. For q^{cls}, the optimization objective is given in Eq. (7); q^{loc} is optimized in a similar way.\nL_s = \mathrm{MSE}(q^{cls}, Q^{cls}) \quad (7)\nOptimization with Action Sensitivity. In this way, combining the class level and the instance level, we obtain the final action sensitivity h(c) \in \mathbb{R}^{N_f} (disentangled into the classification and localization sub-tasks: h(c) \rightarrow \{h^{cls}(c), h^{loc}(c)\}) for the ground-truth G = \{t_s, t_e, c\}:\nh^{cls}(c) = p^{cls}\,\mathbb{1}[c] + q^{cls}, \quad h^{loc}(c) = p^{loc}\,\mathbb{1}[c] + q^{loc} \quad (8)\nwhere \mathbb{1}[c] \in \mathbb{R}^{N_c} denotes the one-hot vector of c. The action sensitivity h is further used in training. For the classification sub-task, we use a focal loss [35] to classify each frame, combined with the classification action sensitivity h^{cls}:\nL_{cls} = \frac{1}{N_{pos}} \sum_i \big(\mathbb{1}_{in_i}\, h^{cls}_i(c_i)\, L^{focal}_i + \mathbb{1}_{bg_i}\, L^{focal}_i\big) \quad (9)\nwhere \mathbb{1}_{in_i} and \mathbb{1}_{bg_i} are indicators denoting whether the i-th frame is within a ground-truth action or is background, N_{pos} is the number of frames within action segments, and c_i denotes the action category of the i-th frame.\nFor the localization sub-task, we use a DIoU loss [83], applied to frames within any ground-truth action instance, to regress offsets from the current frame to the boundaries, combined with the localization action sensitivity h^{loc}:\nL_{loc} = \frac{1}{N_{pos}} \sum_i \mathbb{1}_{in_i}\, h^{loc}_i(c_i)\, L^{DIoU}_i \quad (10)" }, { "figure_ref": [ "fig_2" ], "heading": "Action Sensitive Contrastive Loss", "publication_ref": [], "table_ref": [], "text": "With ASE, each frame is now equipped with an action sensitivity, and the frames valuable to each sub-task can be discovered. We further boost training from the perspective of feature enhancement. Delving into the feature representation, three shortcomings may hinder performance: i) classification-sensitive and localization-sensitive frames are quite different, resulting in a misalignment of the two sub-tasks; ii) features of actions from different categories are not very discriminable; iii) features inside actions and outside boundaries are not yet well distinguished.\nTherefore, on the basis of ASE, we propose an Action Sensitive Contrastive Loss (ASCL) to tackle the above issues. Specifically, for a given video feature \{f_t\}_{t=1}^{T} and a ground-truth action instance G = \{t_s, t_e, c\}, we generate two action-related features and one action-irrelevant feature. First, to generate more valuable action-related features, we aim to find the frames sensitive to each sub-task. 
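Before describing how the sensitive features for ASCL are built, note that the reweighting of Eqs. (8)-(10) above reduces to a few tensor operations once the per-frame focal and DIoU losses are available. The sketch below assumes those per-frame losses and the in-action/background masks have already been computed; the function name and argument layout are ours, not the authors' implementation.

import torch

def sensitivity_weighted_losses(p_cls, p_loc, q_cls, q_loc, focal, diou, in_mask, bg_mask):
    # p_cls/p_loc: class-level sensitivity already indexed by the ground-truth class, shape (T,)
    # q_cls/q_loc: instance-level sensitivity, shape (T,)
    # focal/diou:  per-frame focal and DIoU loss values, shape (T,)
    # in_mask/bg_mask: float masks marking in-action and background frames, shape (T,)
    h_cls = p_cls + q_cls                                                  # Eq. (8)
    h_loc = p_loc + q_loc
    n_pos = in_mask.sum().clamp(min=1.0)
    loss_cls = (in_mask * h_cls * focal + bg_mask * focal).sum() / n_pos   # Eq. (9)
    loss_loc = (in_mask * h_loc * diou).sum() / n_pos                      # Eq. (10)
    return loss_cls, loss_loc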
Since ASCL contrasts action instances of different classes, where class-level discrimination matters most, we utilize the class-level sensitivity p to parse the sensitive frame ranges T^{cls} for classification and T^{loc} for localization. Given a ground-truth category c, we obtain the most sensitive frames a_{cls}, a_{sot}, a_{eot} for classification, start-time localization and end-time localization, respectively. Take a_{eot} as an example:\na_{eot} = \arg\max_i \big(p^{eot}_i \mathbb{1}[c]\big) \quad (11)\na_{cls} and a_{sot} are obtained in a similar way. Then, centered on a and extending forward and backward with a range of \delta N_f, where \delta is the sampling length ratio, we get the sensitive frame ranges T^{cls} for classification and T^{loc} for localization (T^{cls} and T^{loc} are limited to the inside of the action instance). Furthermore, we utilize the class-level sensitivity to compute the sensitive features f^{cls} for classification and f^{loc} for localization:\nf^{cls} = \frac{1}{T} \sum_t p^{cls}_t \mathbb{1}[c]\, f_t, \; t \in T^{cls}; \qquad f^{loc} = \frac{1}{T} \sum_t p^{loc}_t \mathbb{1}[c]\, f_t, \; t \in T^{loc} \quad (12)\nSecondly, we aim to simultaneously discriminate actions and backgrounds better. Consequently, we generate boundary-related background features f^{bg}:\nf^{bg} = \frac{1}{T} \sum_t f_t, \; t \in [t_s - \delta N_f, t_s] \cup [t_e, t_e + \delta N_f] \quad (13)\nThe learning objective of ASCL is based on a contrastive loss. As Figure 2 shows, the positive samples P are constructed from f^{cls} and f^{loc} of action instances of the same category, while the negative samples N come from: i) f^{cls} and f^{loc} of action instances of different categories, and ii) all background features f^{bg}. ASCL is computed for each batch B with N samples:\nL_{ASCL} = \frac{1}{N} \sum_{B} -\log \frac{\sum_{f_x \in P} \sum_{f^*} \mathrm{sim}(f^*, f_x)}{\sum_{f_x \in P} \sum_{f^*} \mathrm{sim}(f^*, f_x) + \sum_{f_x \in N} \sum_{f^*} \mathrm{sim}(f^*, f_x)} \quad (14)\nOptimizing ASCL helps to tackle the issues above: i) it alleviates the misalignment of the two sub-tasks by pulling the features of their respective sensitive frames closer; ii) it discriminates actions and backgrounds better by pushing action features of the same category together and those of different categories apart, while also pushing actions and backgrounds apart. Thus ASCL enhances the feature representation and further boosts training." }, { "figure_ref": [], "heading": "Training and Inference", "publication_ref": [ "b0" ], "table_ref": [], "text": "Training. In the training process, our final loss function is designed as:\nL = L_{cls} + L_{loc} + L_s + \lambda L_{ASCL} \quad (15)\nwhere L_{cls}, L_{loc} and L_s are discussed in Eq. (9), Eq. (10) and Eq. (7), and \lambda denotes the weight of the Action Sensitive Contrastive Loss.\nInference. At inference time, our model outputs predictions (t^s, t^e, c) for every frame across all pyramid levels, where t^s, t^e denote the start and end time of the action and c denotes the predicted action category; c also serves as the action confidence score. SoftNMS [1] is then applied to these results to suppress redundant predictions." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Evaluation Metric", "publication_ref": [ "b18", "b10" ], "table_ref": [ "tab_0" ], "text": "Datasets. To validate the efficacy of the proposed ASL, extensive experiments are conducted on 6 datasets of 3 types: i) densely-labeled: MultiThumos [75] and Charades [53]; ii) densely-labeled and egocentric: Ego4D-Moment Queries v1.0 [19] and Epic-Kitchens 100 [11]; iii) single-labeled: Thumos14 [57] and ActivityNet1.3 [2].\nMultiThumos is a densely labeled dataset including 413 sports videos of 65 classes. 
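Returning to the ASCL objective of Eq. (14) above, the stand-in below computes an InfoNCE-style contrast for a single anchor (an f^{cls} or f^{loc} of one instance) against its positives (sensitive features of the same class) and negatives (sensitive features of other classes plus background features f^{bg}); in practice it would be averaged over the batch. The exponentiated cosine similarity and the temperature tau are our simplification of sim(., .), not necessarily the paper's exact form.

import torch
import torch.nn.functional as F

def ascl_single_anchor(anchor, positives, negatives, tau: float = 0.1):
    # anchor: (D,); positives: (P, D); negatives: (Q, D)
    anchor = F.normalize(anchor, dim=-1)
    pos = torch.exp(F.normalize(positives, dim=-1) @ anchor / tau).sum()
    neg = torch.exp(F.normalize(negatives, dim=-1) @ anchor / tau).sum()
    return -torch.log(pos / (pos + neg + 1e-8))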
Charades is a large multi-label dataset containing 9848 videos of 157 action classes. These two datasets are both densely labeled and hence have multiple action instances in each video clip, where different actions may occur concurrently.\nEgo4D-Moment Queries v1.0 (Ego4D-MQ1.0 for short) is a large-scale egocentric benchmark with 2,488 video clips and 22.2K action instances from 110 pre-defined action categories, which is densely labeled and composed of long clips. EPIC-Kitchens 100 is a large egocentric action dataset containing 100 hours of videos from 700 sessions capturing cooking activities in different kitchens. These two datasets are both large, egocentric and densely labeled.\nThumos14 is composed of 200 validation videos and 212 testing videos from 20 action classes while ActivityNet has 19,994 videos with 200 action classes. These two datasets are singly labeled and thus most of video clips in them have one action instance in each video clip.\nEvaluation Metric. Since ASL focuses on action detection, we take mean Average Precision (mAP) at certain tIoU thresholds as the evaluation metric. For all six datasets, we also report average mAP over several tIoU thresholds as the main metric. The tIoU thresholds are set consistent with the official setup or previous methods, which is detailed in the caption of Table 1, 2, 3, 4." }, { "figure_ref": [], "heading": "Implementation Details.", "publication_ref": [ "b29", "b11", "b0", "b58", "b76", "b59", "b33" ], "table_ref": [], "text": "We follow the practice of using off-the-shelf preextracted features as input, specifically I3D [4] RGB features for MultiThumos, Charades, Thumos14 and Activ-ityNet , EgoVLP [30], Slowfast [15] and Omnivore [18] features for Ego4D-MQ1.0, Slowfast features [15,12] for Epic-Kitchens 100.\nWe train our model with a batch size of 2, 16, 2, 2 for 60, 30, 15, 25 epochs on MultiThumos, Charades, Ego4D-MQ1.0 and Epic-Kitchens 100 respectively, where the learning rate is set to 2e -4 . On ActivityNet and Thumos, we train our model with the batch size of 16, 2, the learning rate of 1e -3 , 1e -4 for 15, 30 epochs. We set λ as 0.3 and θ as 0.2.\nIn post-processing, we apply softNMS [1] to suppress redundant predictions. For fair comparison, we keep 200, 100, and 2000, 2000 predictions on Thumos14, Activi-tyNet, Ego4D-MQ1.0 and Epic-Kitchens 100 respectively. As on MultiThumos and Charades, considering that Point-TAD [59] splits a video clip into more than 4 parts and generates 48 predictions for each part, we keep 200 predictions on these two datasets.\nIn the training process, we clamp σ with a threshold (set as 5.0) to ensure σ won't be very large and thus prevent very small p cls , p loc , which may cause trivial solution to minimize the loss. Moreover, We tackle the issue of overlapped actions following [77,60]: i)use multi-scale mechanism [34] to assign actions with different duration to different feature levels. ii)If a frame, even with multi-scale used, is still assigned to more than one ground-truth action, we choose the action with the shortest duration as its groundtruth target and model its action sensitivity based on this ground-truth." 
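As a reference for the post-processing step, the following is a generic Gaussian-decay Soft-NMS [1] over 1D temporal segments. The score threshold, decay sigma and top-k cap here are illustrative defaults (the top-k would be 200 or 2000 depending on the dataset, as noted above), not the authors' tuned values.

import numpy as np

def soft_nms_1d(segs, scores, sigma=0.5, score_thresh=0.001, top_k=200):
    # segs: (N, 2) array of (start, end) times; scores: (N,) confidences
    segs = np.asarray(segs, dtype=float).copy()
    scores = np.asarray(scores, dtype=float).copy()
    keep_segs, keep_scores = [], []
    while scores.size > 0 and len(keep_segs) < top_k:
        i = int(np.argmax(scores))
        if scores[i] < score_thresh:
            break
        best = segs[i]
        keep_segs.append(best)
        keep_scores.append(scores[i])
        segs, scores = np.delete(segs, i, axis=0), np.delete(scores, i)
        if scores.size == 0:
            break
        # temporal IoU between the kept segment and the remaining candidates
        inter = np.maximum(0.0, np.minimum(segs[:, 1], best[1]) - np.maximum(segs[:, 0], best[0]))
        union = (segs[:, 1] - segs[:, 0]) + (best[1] - best[0]) - inter
        tiou = inter / np.maximum(union, 1e-8)
        scores = scores * np.exp(-(tiou ** 2) / sigma)   # decay scores of overlapping segments
    return np.array(keep_segs), np.array(keep_scores)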
}, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b9", "b3", "b60", "b58", "b76", "b29", "b39", "b4", "b4", "b76", "b30", "b11", "b32", "b75", "b57", "b8", "b76", "b76", "b79", "b68" ], "table_ref": [ "tab_0", "tab_2", "tab_3", "tab_0" ], "text": "MultiThumos and Charades: We compare ASL with state-of-the-art methods under detection-mAP on these two densely-labeled TAL benchmarks. PDAN [10], coarsefine [24], MLAD [61], MS-TCT[9] are based on framelevel representation, while PointTAD [59] are query-based. As shown in Table 1, ASL reaches the highest mAP over all tIoU thresholds, outperforming the previous best method(i.e. PointTAD) by 2.0% absolute increase of average mAP on MultiThumos and 3.3% on Charades. Notably, PointTAD is further trained in an end-to-end manner with strong image augmentation while ASL is feature-based, indicating that ASL performs more accurate TAL with more efficiency on densely-labeled datasets. Ego4D-MQ1.0 and Epic-Kitchens 100: These two datasets are both challenging as they are large-scale, egocentric, densely labeled and composed of longer clips. Table 2 reports the results on Ego4D-MQ1.0. The state-ofthe-art methods are all based on Actionformer [77] and perform frame-level recognition and localization with strong features. Using the same feature EgoVLP [30], ASL surpasses the current best entry [40]. Using the combined EgoVLP, slowfast[15] and omnivore[18] features, ASL gains 2.06% improvement of average mAP on Val set and 2.21% on Test set. Moreover, ASL performs better than [5] which uses a stronger but not open-sourced In-ternVideo [5] feature. Meanwhile, on Epic-Kitchens 100 as table 3 shows, ASL outperforms the strong performance of Actionformer [77], BMN [31] and G-TAD [73] with the same Slowfast feature [15,12]. The above results demonstrate the advantage of ASL on the challenging, egocentric and densely labeled benchmark.\nThumos14 and ActivityNet1.3: These two datasets are popular and nearly single-labeled, with approximately one action instance in each clip. Table 4 compares the results of ASL with various state-of-the-art methods (e.g., two-stage methods: BSN [33], G-TAD[73], P-GCN [76], RTDNet [58], one-stage methods: AFSD [29], SSN[82], Actionformer [77].). On Thumos14, across all tIoU thresholds, ASL achieves the best and gains 1.1% improvement of average mAP (67.9% v.s. 66.8%). On ActivityNet, ASL also outperforms previous methods of [email protected] and average mAP, though the gap is slight. One possible reason is that due to the success of action recognition on ActivityNet, we follow the common practice [77,80,86] to fuse external video-level classification scores [69]. In this case, classlevel sensitivity will not play an important role in training. Another reason may be that since each video in Ac-tivityNet is nearly single-labeled, our proposed ASCL will be short of positive and negative samples, leading to a nonsignificant increase compared to improvements on densely labeled datasets as Table 1, 2." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_4", "tab_5", "tab_6", "tab_4" ], "text": "To further verify the efficacy of our contributions, we analyze main components of ASL on MultiThumos.\nAction Sensitive Evaluator. Our proposed ASE can be divided into class-level and instance-level modeling. we first investigate the effect of these parts. In Table 5, baseline 1 denotes using our proposed framework without ASE and ASCL. 
After being equipped with class-level modeling, it boosts the performance by 1.1% of average mAP (baseline 2 v.s. baseline 1). When further adding instance-level bias, it gains 0.5% absolute increase (baseline 6 v.s. baseline 2). And our ASE contributes a total improvement of 1.6% on average mAP (baseline 7 v.s. baseline 1). It is obvious that action sensitivity modeling from both class-level and instance-level is beneficial to TAL task. Gaussian Weights. Then we analyze the effect of learnable gaussian weights in class-level action sensitivity learning. Table 6 demonstrates that compared to baseline 1 which does not use any gaussian weights to learn action sensitivity, fixed gaussian weights with prior knowledge do bring benefits (baseline 2,3 v.s. baseline 1). Meanwhile, learnable gaussian weights are more favored (baseline 4 v.s. baseline 3, baseline 7 v.s. baseline 6). Moreover, learnable gaussian weights for both two sub-tasks achieve the best results.\nWe further study the number of Gaussians used in classification and localization sub-task. As shown in Table 7, using two Gaussians for localization and one Gaussian for classification achieves the best results. It is probably because on the one hand, using two Gaussians for localization explicitly allocates one for modeling start time and one for modeling end time. On the other hand, more Gaussian weights may be a burden for training, leading to inferior performance. Action Sensitive Contrastive Loss. Moreover, we delve into our proposed ASCL. As shown in Table 5, ASCL improves around 0.6% of average mAP on the basis of classlevel prior (baseline 5 v.s. baseline 2) and 0.5% on the basis of ASE (baseline 7 v.s. baseline 6). Baseline 4, where using ASCL alone denotes sampling near the center frame to form f cls and f loc directly, also gains an improvement of 0.3% compared to the vanilla framework (baseline 4 v.s. baseline 1). This indicates the effectiveness of contrast between actions and backgrounds. When performing ASCL based on ASE, it will facilitate the final performance more because it can alleviate the misalignment as discussed in 3.3.\nFinally we discussed the hyperparameters in ASCL. " }, { "figure_ref": [], "heading": "Qualitative Experiment", "publication_ref": [], "table_ref": [], "text": "To better illustrate the effectiveness of ASL, we visualize some qualitative results of Ego4D-MQ1.0 benchmark in Fig 4 . We show that i) frames depicting action's main subaction (i.e., hang clothes on the hanger, water run through hands) are of higher action sensitivity for classification. ii) Frames depicting near-start and near-end sub-action (i.e, turn the tap on, lift laundry basket, e.t.c.) are of higher ac- tion sensitivity for localization. Moreover, action sensitivity of frames is not continuous, as our proposed instance-level action sensitivity is discrete partly because blurred or transitional frames exist in video clips." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce an Action Sensitivity Learning framework (ASL) for temporal action localization (TAL). ASL models action sensitivity of each frame and dynamically change their weights in training. Together with the proposed Action Sensitive Contrastive Loss (ASCL) to further enhance features and alleviate misalignment, ASL is able to recognize and localize action instances effectively. For accurate TAL, fine-grained information should be considered (e.g. frame-level information). 
We believe that ASL is a step further in this direction. In the future, efforts could be paid to more complicated sensitivity modeling. Besides, ASL could also be redesigned as a plug-and-play component that will be beneficial to various TAL methods." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements: This work is supported by the Fundamental Research Funds for the Central Universities (No.226-2023-00048) and Major Program of the National Natural Science Foundation of China (T2293720/T2293723)" } ]
2023-09-13
[ { "authors": "Navaneeth Bodla; Bharat Singh; Rama Chellappa; Larry S Davis", "journal": "", "ref_id": "b0", "title": "Soft-nms-improving object detection with one line of code", "year": "2017" }, { "authors": "Fabian Caba Heilbron; Victor Escorcia; Bernard Ghanem; Juan Carlos Niebles", "journal": "", "ref_id": "b1", "title": "Activitynet: A large-scale video benchmark for human activity understanding", "year": "2015" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b2", "title": "End-toend object detection with transformers", "year": "2020" }, { "authors": "Joao Carreira; Andrew Zisserman", "journal": "", "ref_id": "b3", "title": "Quo vadis, action recognition? a new model and the kinetics dataset", "year": "2017" }, { "authors": "Guo Chen; Sen Xing; Zhe Chen; Yi Wang; Kunchang Li; Yizhuo Li; Yi Liu; Jiahao Wang; Yin-Dong Zheng; Bingkun Huang; Zhiyu Zhao; Junting Pan; Yifei Huang; Zun Wang; Jiashuo Yu; Yinan He; Hongjie Zhang; Tong Lu; Yali Wang; Limin Wang; Yu Qiao", "journal": "", "ref_id": "b4", "title": "Internvideo-ego4d: A pack of champion solutions to ego4d challenges", "year": "2022" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "PMLR", "ref_id": "b5", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Gedas Cheng; & Feng; Bertasius", "journal": "Springer", "ref_id": "b6", "title": "Tallformer: Temporal action localization with a long-memory transformer", "year": "2022" }, { "authors": "Rui Dai; Srijan Das; Francois Bremond", "journal": "", "ref_id": "b7", "title": "Ctrn: Classtemporal relational network for action detection", "year": "2021" }, { "authors": "Rui Dai; Srijan Das; Kumara Kahatapitiya; Michael S Ryoo; Franc; Brémond", "journal": "", "ref_id": "b8", "title": "Ms-tct: multi-scale temporal convtransformer for action detection", "year": "2022" }, { "authors": "Rui Dai; Srijan Das; Luca Minciullo; Lorenzo Garattoni; Gianpiero Francesca; Franc; Bremond", "journal": "", "ref_id": "b9", "title": "Pdan: Pyramid dilated attention network for action detection", "year": "2021" }, { "authors": "Dima Damen; Hazel Doughty; Giovanni Maria Farinella; Sanja Fidler; Antonino Furnari; Evangelos Kazakos; Davide Moltisanti; Jonathan Munro; Toby Perrett; Will Price; Michael Wray", "journal": "", "ref_id": "b10", "title": "Scaling egocentric vision: The epickitchens dataset", "year": "2018" }, { "authors": "Dima Damen; Hazel Doughty; Giovanni Maria Farinella; Antonino Furnari; Jian Ma; Evangelos Kazakos; Davide Moltisanti; Jonathan Munro; Toby Perrett; Will Price; Michael Wray", "journal": "International Journal of Computer Vision (IJCV)", "ref_id": "b11", "title": "Rescaling egocentric vision: Collection, pipeline and challenges for epic-kitchens-100", "year": "2022" }, { "authors": "Kaiwen Duan; Song Bai; Lingxi Xie; Honggang Qi; Qingming Huang; Qi Tian", "journal": "", "ref_id": "b12", "title": "Centernet: Keypoint triplets for object detection", "year": "2019" }, { "authors": "Lijie Fan; Wenbing Huang; Chuang Gan; Stefano Ermon; Boqing Gong; Junzhou Huang", "journal": "", "ref_id": "b13", "title": "End-to-end learning of motion representation for video understanding", "year": "2018" }, { "authors": "Christoph Feichtenhofer; Haoqi Fan; Jitendra Malik; Kaiming He", "journal": "", "ref_id": "b14", "title": "Slowfast networks for video recognition", "year": "2019-10" }, { 
"authors": "Chengjian Feng; Yujie Zhong; Yu Gao; Matthew R Scott; Weilin Huang", "journal": "IEEE Computer Society", "ref_id": "b15", "title": "Tood: Task-aligned one-stage object detection", "year": "2021" }, { "authors": "Chuang Gan; Naiyan Wang; Yi Yang; Dit-Yan Yeung; Alex G Hauptmann", "journal": "", "ref_id": "b16", "title": "Devnet: A deep event network for multimedia event detection and evidence recounting", "year": "2015" }, { "authors": "Rohit Girdhar; Mannat Singh; Nikhila Ravi; Laurens Van Der Maaten; Armand Joulin; Ishan Misra", "journal": "", "ref_id": "b17", "title": "Omnivore: A Single Model for Many Visual Modalities", "year": "2022" }, { "authors": "Kristen Grauman; Andrew Westbury; Eugene Byrne; Zachary Chavis; Antonino Furnari; Rohit Girdhar; Jackson Hamburger; Hao Jiang; Miao Liu; Xingyu Liu", "journal": "", "ref_id": "b18", "title": "Ego4d: Around the world in 3,000 hours of egocentric video", "year": "2022" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altché; Corentin Tallec; Pierre Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Avila Pires; Zhaohan Guo; Mohammad Gheshlaghi Azar", "journal": "Advances in neural information processing systems", "ref_id": "b19", "title": "Bootstrap your own latent-a new approach to self-supervised learning", "year": "2020" }, { "authors": "Michael Gutmann; Aapo Hyvärinen", "journal": "", "ref_id": "b20", "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "year": "2010" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b21", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b22", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Kumara Kahatapitiya; Michael S Ryoo", "journal": "", "ref_id": "b23", "title": "Coarse-fine networks for temporal activity detection in videos", "year": "2021" }, { "authors": "Dahun Kim; Donghyeon Cho; In So Kweon", "journal": "", "ref_id": "b24", "title": "Selfsupervised video representation learning with space-time cubic puzzles", "year": "2019" }, { "authors": "Hei Law; Jia Deng", "journal": "", "ref_id": "b25", "title": "Cornernet: Detecting objects as paired keypoints", "year": "2018" }, { "authors": "Xiang Li; Wenhai Wang; Xiaolin Hu; Jun Li; Jinhui Tang; Jian Yang", "journal": "", "ref_id": "b26", "title": "Generalized focal loss v2: Learning reliable localization quality estimation for dense object detection", "year": "2021" }, { "authors": "Xiang Li; Wenhai Wang; Lijun Wu; Shuo Chen; Xiaolin Hu; Jun Li; Jinhui Tang; Jian Yang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection", "year": "2020" }, { "authors": "Chuming Lin; Chengming Xu; Donghao Luo; Yabiao Wang; Ying Tai; Chengjie Wang; Jilin Li; Feiyue Huang; Yanwei Fu", "journal": "", "ref_id": "b28", "title": "Learning salient boundary feature for anchorfree temporal action localization", "year": "2007" }, { "authors": "Kevin Qinghong; Lin ; Alex Jinpeng Wang; Mattia Soldan; Michael Wray; Rui Yan; Eric Zhongcong Xu; Difei Gao; Rongcheng Tu; Wenzhe Zhao; Weijie Kong", "journal": "", "ref_id": "b29", "title": "Egocentric video-language pretraining", "year": "2022" }, { "authors": 
"Tianwei Lin; Xiao Liu; Xin Li; Errui Ding; Shilei Wen", "journal": "", "ref_id": "b30", "title": "Bmn: Boundary-matching network for temporal action proposal generation", "year": "2019" }, { "authors": "Tianwei Lin; Xu Zhao; Zheng Shou", "journal": "", "ref_id": "b31", "title": "Single shot temporal action detection", "year": "2017" }, { "authors": "Tianwei Lin; Xu Zhao; Haisheng Su; Chongjing Wang; Ming Yang", "journal": "", "ref_id": "b32", "title": "Bsn: Boundary sensitive network for temporal action proposal generation", "year": "2018" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b33", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Tsung-Yi Lin; Priya Goyal; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b34", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Naiyuan Liu; Xiaohan Wang; Xiaobo Li; Yi Yang; Yueting Zhuang", "journal": "", "ref_id": "b35", "title": "Reler@zju-alibaba submission to the ego4d natural language queries challenge", "year": "2022" }, { "authors": "Shuming Liu; Mengmeng Xu; Chen Zhao; Xu Zhao; Bernard Ghanem", "journal": "", "ref_id": "b36", "title": "Etad: Training action detection end to end on a laptop", "year": "2022" }, { "authors": "Xiaolong Liu; Qimeng Wang; Yao Hu; Xu Tang; Shiwei Zhang; Song Bai; Xiang Bai", "journal": "IEEE Transactions on Image Processing", "ref_id": "b37", "title": "End-to-end temporal action detection with transformer", "year": "2022" }, { "authors": "Fuchen Long; Ting Yao; Zhaofan Qiu; Xinmei Tian; Jiebo Luo; Tao Mei", "journal": "", "ref_id": "b38", "title": "Gaussian temporal awareness networks for action localization", "year": "2019" }, { "authors": "Fangzhou Mu; Sicheng Mo; Gillian Wang; Yin Li", "journal": "", "ref_id": "b39", "title": "Where a strong backbone meets strong features -actionformer for ego4d moment queries challenge", "year": "2022" }, { "authors": "Aaron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b40", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Tian Pan; Yibing Song; Tianyu Yang; Wenhao Jiang; Wei Liu", "journal": "", "ref_id": "b41", "title": "Videomoco: Contrastive video representation learning with temporally adversarial examples", "year": "2021" }, { "authors": "A J Piergiovanni; Michael Ryoo", "journal": "PMLR", "ref_id": "b42", "title": "Temporal gaussian mixture layer for videos", "year": "2019" }, { "authors": "Zhiwu Qing; Haisheng Su; Weihao Gan; Dongliang Wang; Wei Wu; Xiang Wang; Yu Qiao; Junjie Yan; Changxin Gao; Nong Sang", "journal": "", "ref_id": "b43", "title": "Temporal context aggregation network for temporal action proposal refinement", "year": "2021" }, { "authors": "Zhaofan Qiu; Ting Yao; Tao Mei", "journal": "", "ref_id": "b44", "title": "Learning spatiotemporal representation with pseudo-3d residual networks", "year": "2017" }, { "authors": "Zhaofan Qiu; Ting Yao; Chong-Wah Ngo; Xinmei Tian; Tao Mei", "journal": "", "ref_id": "b45", "title": "Learning spatio-temporal representation with local and global diffusion", "year": "2019" }, { "authors": "Jiayi Shao; Xiaohan Wang; Yi Yang", "journal": "", "ref_id": "b46", "title": "Reler@zju submission to the ego4d moment queries challenge", "year": "2022" }, { "authors": "Dingfeng Shi; Yujie Zhong; Qiong Cao; Jing Zhang; Lin Ma; Jia Li; Dacheng Tao", "journal": "", "ref_id": "b47", 
"title": "React: Temporal action detection with relational queries", "year": "2022" }, { "authors": "Zheng Shou; Jonathan Chan; Alireza Zareian; Kazuyuki Miyazawa; Shih-Fu Chang", "journal": "", "ref_id": "b48", "title": "Cdc: Convolutional-deconvolutional networks for precise temporal action localization in untrimmed videos", "year": "2017" }, { "authors": "Zheng Shou; Hang Gao; Lei Zhang; Kazuyuki Miyazawa; Shih-Fu Chang", "journal": "", "ref_id": "b49", "title": "Autoloc: Weakly-supervised temporal action localization in untrimmed videos", "year": "2018" }, { "authors": "Zheng Shou; Dongang Wang; Shih-Fu Chang", "journal": "", "ref_id": "b50", "title": "Temporal action localization in untrimmed videos via multi-stage cnns", "year": "2016" }, { "authors": "Abhinav Shrivastava; Abhinav Gupta; Ross Girshick", "journal": "", "ref_id": "b51", "title": "Training region-based object detectors with online hard example mining", "year": "2016" }, { "authors": "Gül Gunnar A Sigurdsson; Xiaolong Varol; Ali Wang; Ivan Farhadi; Abhinav Laptev; Gupta", "journal": "Springer", "ref_id": "b52", "title": "Hollywood in homes: Crowdsourcing data collection for activity understanding", "year": "2016" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "", "ref_id": "b53", "title": "Two-stream convolutional networks for action recognition in videos", "year": "2014" }, { "authors": "Deepak Sridhar; Niamul Quader; Srikanth Muralidharan; Yaoxin Li; Peng Dai; Juwei Lu", "journal": "", "ref_id": "b54", "title": "Class semanticsbased attention for action detection", "year": "2021" }, { "authors": "Haisheng Su; Weihao Gan; Wei Wu; Yu Qiao; Junjie Yan", "journal": "", "ref_id": "b55", "title": "Bsn++: Complementary boundary regressor with scalebalanced relation modeling for temporal action proposal generation", "year": "2021" }, { "authors": "Yu-Gang Jiang&jingen; Liu&a Roshan; Zamir&george Toderici&ivan; Laptev&mubarak Shah&; Rahul Sukthankar", "journal": "", "ref_id": "b56", "title": "Thumos challenge: Action recognition with a large number of classes", "year": "2014" }, { "authors": "Jing Tan; Jiaqi Tang; Limin Wang; Gangshan Wu", "journal": "", "ref_id": "b57", "title": "Relaxed transformer decoders for direct action proposal generation", "year": "2021-10" }, { "authors": "Jing Tan; Xiaotong Zhao; Xintian Shi; Bin Kang; Limin Wang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b58", "title": "Pointtad: Multi-label temporal action detection with learnable query points", "year": "" }, { "authors": "Chunhua Zhi Tian; Hao Shen; Tong Chen; He", "journal": "", "ref_id": "b59", "title": "Fcos: Fully convolutional one-stage object detection", "year": "2019" }, { "authors": "Praveen Tirupattur; Kevin Duarte; Yogesh S Rawat; Mubarak Shah", "journal": "", "ref_id": "b60", "title": "Modeling multi-label action dependencies for temporal action localization", "year": "2021" }, { "authors": "Zhan Tong; Yibing Song; Jue Wang; Limin Wang", "journal": "", "ref_id": "b61", "title": "VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training", "year": "2022" }, { "authors": "Heng Wang; Dan Oneata; Jakob Verbeek; Cordelia Schmid", "journal": "International journal of computer vision", "ref_id": "b62", "title": "A robust and efficient video representation for action recognition", "year": "2016" }, { "authors": "Limin Wang; Yuanjun Xiong; Zhe Wang; Yu Qiao; Dahua Lin; Xiaoou Tang; Luc Van Gool", "journal": "Springer", "ref_id": "b63", "title": "Temporal 
segment networks: Towards good practices for deep action recognition", "year": "2016" }, { "authors": "Qiang Wang; Yanhao Zhang; Yun Zheng; Pan Pan", "journal": "", "ref_id": "b64", "title": "Rcl: Recurrent continuous localization for temporal action detection", "year": "2022" }, { "authors": "Xiang Wang; Zhiwu Qing; Ziyuan Huang; Yutong Feng; Shiwei Zhang; Jianwen Jiang; Mingqian Tang; Changxin Gao; Nong Sang", "journal": "", "ref_id": "b65", "title": "Proposal relation network for temporal action detection", "year": "2021" }, { "authors": "Xiaohan Wang; Linchao Zhu; Yi Yang", "journal": "", "ref_id": "b66", "title": "T2vlad: Globallocal sequence alignment for text-video retrieval", "year": "2021-06" }, { "authors": "Xiaohan Wang; Linchao Zhu; Zhedong Zheng; Mingliang Xu; Yi Yang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b67", "title": "Align and tell: Boosting text-video retrieval with local alignment and fine-grained supervision", "year": "2022" }, { "authors": "Yuanjun Xiong; Limin Wang; Zhe Wang; Bowen Zhang; Hang Song; Wei Li; Dahua Lin; Yu Qiao; Luc Van Gool; Xiaoou Tang", "journal": "", "ref_id": "b68", "title": "Cuhk & ethz & siat submission to activitynet challenge", "year": "2016" }, { "authors": "Huijuan Xu; Abir Das; Kate Saenko", "journal": "", "ref_id": "b69", "title": "R-c3d: Region convolutional 3d network for temporal activity detection", "year": "2017" }, { "authors": "Mengmeng Xu; Juan-Manuel Pérez-Rúa; Victor Escorcia; Brais Martinez; Xiatian Zhu; Li Zhang; Bernard Ghanem; Tao Xiang", "journal": "", "ref_id": "b70", "title": "Boundary-sensitive pre-training for temporal localization in videos", "year": "2021" }, { "authors": "Mengmeng Xu; Juan Manuel Perez Rua; Xiatian Zhu; Bernard Ghanem; Brais Martinez", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b71", "title": "Low-fidelity video encoder optimization for temporal action localization", "year": "2021" }, { "authors": "Mengmeng Xu; Chen Zhao; David S Rojas; Ali Thabet; Bernard Ghanem", "journal": "", "ref_id": "b72", "title": "G-tad: Sub-graph localization for temporal action detection", "year": "2020-06" }, { "authors": "Ze Yang; Shaohui Liu; Han Hu; Liwei Wang; Stephen Lin", "journal": "", "ref_id": "b73", "title": "Reppoints: Point set representation for object detection", "year": "2019" }, { "authors": "Serena Yeung; Olga Russakovsky; Ning Jin; Mykhaylo Andriluka; Greg Mori; Li Fei-Fei", "journal": "International Journal of Computer Vision", "ref_id": "b74", "title": "Every moment counts: Dense detailed labeling of actions in complex videos", "year": "2018" }, { "authors": "Runhao Zeng; Wenbing Huang; Mingkui Tan; Yu Rong; Peilin Zhao; Junzhou Huang; Chuang Gan", "journal": "", "ref_id": "b75", "title": "Graph convolutional networks for temporal action localization", "year": "2019" }, { "authors": "Chen-Lin Zhang; Jianxin Wu; Yin Li", "journal": "", "ref_id": "b76", "title": "Actionformer: Localizing moments of actions with transformers", "year": "2008" }, { "authors": "Shifeng Zhang; Cheng Chi; Yongqiang Yao; Zhen Lei; Stan Z Li", "journal": "", "ref_id": "b77", "title": "Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection", "year": "2020" }, { "authors": "Chen Zhao; Merey Ramazanova; Mengmeng Xu; Bernard Ghanem", "journal": "", "ref_id": "b78", "title": "Segtad: Precise temporal action detection via semantic segmentation", "year": "2022" }, { "authors": "Chen Zhao; Ali Thabet; Bernard Ghanem", "journal": "", 
"ref_id": "b79", "title": "Video selfstitching graph network for temporal action localization", "year": "2021" }, { "authors": "Peisen Zhao; Lingxi Xie; Chen Ju; Ya Zhang; Yanfeng Wang; Qi Tian", "journal": "Springer", "ref_id": "b80", "title": "Bottom-up temporal action localization with mutual regularization", "year": "2020" }, { "authors": "Yue Zhao; Yuanjun Xiong; Limin Wang; Zhirong Wu; Xiaoou Tang; Dahua Lin", "journal": "", "ref_id": "b81", "title": "Temporal action detection with structured segment networks", "year": "2017" }, { "authors": "Zhaohui Zheng; Ping Wang; Wei Liu; Jinze Li; Rongguang Ye; Dongwei Ren", "journal": "", "ref_id": "b82", "title": "Distance-iou loss: Faster and better learning for bounding box regression", "year": "2020" }, { "authors": "Benjin Zhu; Jianfeng Wang; Zhengkai Jiang; Fuhang Zong; Songtao Liu; Zeming Li; Jian Sun", "journal": "", "ref_id": "b83", "title": "Autoassign: Differentiable label assignment for dense object detection", "year": "2020" }, { "authors": "Chenchen Zhu; Yihui He; Marios Savvides", "journal": "", "ref_id": "b84", "title": "Feature selective anchor-free module for single-shot object detection", "year": "2019" }, { "authors": "Zixin Zhu; Wei Tang; Le Wang; Nanning Zheng; G Hua", "journal": "", "ref_id": "b85", "title": "Enriching local and global contexts for temporal action localization", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 50.11, 582.88, 77.93, 12.2 ], "formula_id": "formula_0", "formula_text": "{(t s m , t e m , c m )} M m=1" }, { "formula_coordinates": [ 3, 379.81, 573.81, 165.3, 22.86 ], "formula_id": "formula_1", "formula_text": "f ′ ta = softmax( QtK T t √ D )Vt(1)" }, { "formula_coordinates": [ 3, 329.1, 628.03, 100.06, 11.23 ], "formula_id": "formula_2", "formula_text": "(Q d , K d , V d ) ∈ R D×T ," }, { "formula_coordinates": [ 3, 378.24, 660.23, 166.87, 22.86 ], "formula_id": "formula_3", "formula_text": "f ′ ca = softmax( Q d K T d √ T )V d(2)" }, { "formula_coordinates": [ 3, 308.86, 700.98, 98.41, 13.99 ], "formula_id": "formula_4", "formula_text": "f ′ = (1 -θ)f ′ ta + θf ′ T ca ." }, { "formula_coordinates": [ 4, 373.7, 129.8, 171.41, 22.52 ], "formula_id": "formula_5", "formula_text": "p cls i = exp{- (d(i) -µc) 2 2σ 2 c }(3)" }, { "formula_coordinates": [ 4, 317.38, 311.24, 227.73, 39.06 ], "formula_id": "formula_6", "formula_text": "p loc i = exp{- (d(i) -µc,1) 2 2σc,1 } p sot i + exp{- (d(i) -µc,2) 2 2σc,2 } p eot i(4)" }, { "formula_coordinates": [ 4, 401.11, 655.23, 144.01, 26.86 ], "formula_id": "formula_7", "formula_text": "q cls i = Φ cls (fi) q loc i = Φ loc (fi)(5)" }, { "formula_coordinates": [ 5, 135.61, 202.29, 150.75, 26.86 ], "formula_id": "formula_8", "formula_text": "Qcls i = φ(si[c)]) Qloc i = tIoU(∆i, ∆)(6)" }, { "formula_coordinates": [ 5, 126.49, 316.02, 159.88, 10.33 ], "formula_id": "formula_9", "formula_text": "Ls = MSE(q cls , Qcls )(7)" }, { "formula_coordinates": [ 5, 124.64, 403.92, 161.72, 26.88 ], "formula_id": "formula_10", "formula_text": "h cls (c) = p cls 1[c] + q cls h loc (c) = p loc 1[c] + q loc (8)" }, { "formula_coordinates": [ 5, 71.9, 495.56, 214.46, 23.47 ], "formula_id": "formula_11", "formula_text": "L cls = 1 Npos i (1in i h cls i (ci)L focal i + 1 bg i L focal i ) (9)" }, { "formula_coordinates": [ 5, 96.41, 629.32, 189.95, 23.47 ], "formula_id": "formula_12", "formula_text": "L loc = 1 Npos i (1in i h loc i (ci)L DIoU i )(10)" }, { "formula_coordinates": [ 5, 378.84, 340.97, 166.27, 16.75 ], "formula_id": "formula_13", "formula_text": "aeot = arg max i (p eot i 1[c])(11)" }, { "formula_coordinates": [ 5, 359.65, 455.05, 185.46, 50.35 ], "formula_id": "formula_14", "formula_text": "         f cls = 1 T t p cls t 1[c]ft, t ∈ T cls f loc = 1 T t p loc t 1[c]ft, t ∈ T loc(12)" }, { "formula_coordinates": [ 5, 323.41, 548.26, 221.7, 23.44 ], "formula_id": "formula_15", "formula_text": "f bg = 1 T t ft, t ∈ [ ts -δN f , ts ] ∪ [ te , te + δN f ](13)" }, { "formula_coordinates": [ 5, 308.96, 665.56, 236.16, 47.38 ], "formula_id": "formula_16", "formula_text": "L ASCL = 1 N B -log fx∈P f * sim(f * , fx) fx∈P f * sim(f * , fx) + fx∈N f * sim(f * , fx)(14)" }, { "formula_coordinates": [ 6, 105.67, 233.31, 176.96, 9.3 ], "formula_id": "formula_17", "formula_text": "L = L cls + L loc + Ls + λL ASCL (15" }, { "formula_coordinates": [ 6, 282.63, 233.6, 3.73, 7.77 ], "formula_id": "formula_18", "formula_text": ")" } ]
Action Sensitivity Learning for Temporal Action Localization
Temporal action localization (TAL), which involves recognizing and locating action instances, is a challenging task in video understanding. Most existing approaches directly predict action classes and regress offsets to boundaries, while overlooking the discrepant importance of each frame. In this paper, we propose an Action Sensitivity Learning framework (ASL) to tackle this task, which aims to assess the value of each frame and then leverage the generated action sensitivity to recalibrate the training procedure. We first introduce a lightweight Action Sensitivity Evaluator to learn the action sensitivity at the class level and instance level, respectively. The outputs of the two branches are combined to reweight the gradient of the two sub-tasks. Moreover, based on the action sensitivity of each frame, we design an Action Sensitive Contrastive Loss to enhance features, where the action-aware frames are sampled as positive pairs to push away the action-irrelevant frames. The extensive studies on various action localization benchmarks (i.e., MultiThumos, Charades, Ego4D-Moment Queries v1.0, Epic-Kitchens 100, Thumos14 and ActivityNet1.3) show that ASL surpasses the state-of-the-art in terms of average mAP under multiple types of scenarios, e.g., single-labeled, densely-labeled and egocentric.
Jiayi Shao; Xiaohan Wang; Ruijie Quan; Junjun Zheng; Jiang Yang; Yi Yang
[ { "figure_caption": "Action Instance: clothes drying take out clothes from laundry basket hang clothes on the hanger take out laundry basket", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "! 𝑓%'$ ! 𝑓() ! 𝑓$%& \" 𝑓%'$ \" 𝑓() \" 𝑓$%& * 𝑓%'$ * 𝑓() *", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure2. The overview of ASL. Given a video clip, we first leverage a pre-trained 3D-CNN to extract the video feature and then utilize a Transformer Encoder to encode feature. We then use ground-truth location sampling to sample all ground-truth segments and feed these into Action Sensitivity Evaluator. In this module, we model sub-task-specific action sensitivity of each frame from class level and instancelevel. The former is learned by incorporating learnable gaussian-like weights and the latter is learned with an instance-level evaluator. Then each frame's weight in training is adjusted based on action sensitivity. Moreover, we propose an Action Sensitive Contrastive Loss to better enhance the feature and alleviate misalignment problems.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "#cls and #loc denote the number of Gaussian weights used in classification and localization sub-task. shared indicates two sub-tasks share one Gaussian weights. Values of ASCL loss weight λ Values of average mAP (%) (a) Ablation of λ Values of sample length ratio δ Values of average mAP (%) (b) Ablation of δ", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Ablation of hyperparameters in ASCL.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig 3 (3a) shows the performance curve of average mAP corresponding to ASCL weight λ. Average mAP on MultiThumos generally improves when λ increases and slightly drop as λ reaches 0.4. Fig 3(b) reports the average mAP to different sampling length ratios δ. When δ equals 0.2, our method achieves the best. In this case, we set λ to 0.3 and δ to 0.2.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Visualization of (Top) the frame sensitivity to sub-tasks of Action: hang clothes to dry and (bottom) Action: wash hands. Please zoom in for the best view.", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Results on MultiThumos and Charades. We report detection-mAP at different tIoU thresholds. Average mAP in [0.1:0.1:0.9] is reported on MultiThumos and Chrades. Best results are in bold. ‡ indicates results trained with stronger image augmentation[59,38]. I3D denotes using I3D [4] features and E2E indicates results trained in an end-to-end manner.", "figure_data": "ModelModality Feature0.2MultiThumos 0.5 0.7Avg.0.2Charades 0.5 0.7 Avg.PDAN [10]RGBI3D---17.3---8.5Coarse-Fine [24] RGBI3D-------6.1MLAD [61]RGBI3D---14.2----MS-TCT [9]RGBI3D---16.3---7.9PointTAD [59]RGBI3D-E2E 36.8 23.3 11.0 21.7 15.9 12.6 8.5 11.3PointTAD ‡ [59]RGBI3D-E2E 39.7 24.9 12.0 23.5 17.5 13.5 9.1 12.1ASLRGBI3D42.4 27.8 13.7 25.5 24.5 16.5 9.4 15.4", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on Ego4D-Moment Queries v1.0. We report mAP at different tIoU thresholds. 
Average mAP in [0.1:0.1:0.5] is reported on Ego4D-Moment Queries. Best results are in bold. EgoVLP, SF and OF denote EgoVLP[30], Slowfast[15] and Omnivore [18] features. InterVideo[5] denotes features extracted from VideoMAE-L[62] and fine-tuned on Ego4D-Moment Queries.", "figure_data": "Method/EntryFeature0.1mAP at IoUs, Val set 0.3 0.5Avg.mAP at IoUs, Test set Avg.VSGN [80]SF9.105.763.416.035.68VSGN [30]EgoVLP16.63 11.456.5711.3910.33ReLER [47]SF+OV22.75 17.61 13.43 17.9417.67Actionformer [40] EgoVLP26.84 20.57 14.54 20.60-Actionformer [40] EgoVLP+SF+OV 28.26 21.88 16.28 22.0921.76Actionformer [5]InternVideo---23.2923.59ASLEgoVLP29.45 23.03 16.08 22.8322.25ASLEgoVLP+SF+OV 30.50 24.39 17.45 24.1523.97", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results on EPIC-Kitchens 100 val set.", "figure_data": "We report mAPat different tIoU thresholds and average mAP in [0.1:0.1:0.5]. Allmethods use the same SlowFast [15, 12] features.Sub-Task Method0.1 0.3 0.5 AvgBMN [31]10.8 8.4 5.6 8.4VerbG-TAD [73] Actionformer [77] 26.6 24.2 19.1 23.5 12.1 9.4 6.5 9.4ASL27.9 25.5 19.8 24.6BMN [31]10.3 6.2 3.4 6.5NounG-TAD [73] Actionformer [77] 25.2 22.7 17.0 21.9 11.0 8.6 5.4 8.4ASL26.0 23.4 17.7 22.6", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results on Thumos14 and ActivityNet1.3. We report mAP at different tIoU thresholds. Average mAP in [0.3:0.1:0.7] is reported on THUMOS14 and [0.5:0.05:0.95] on ActivityNet1.3. The best results are in bold.", "figure_data": "ModelFeature0.30.4Thumos14 0.5 0.60.7Avg.0.5ActivityNet1.3 0.75 0.95 Avg.BSN [33]TSN [64] 53.5 45.0 36.9 28.4 20.0 36.8 46.5 30.08.030.0BMN [31]TSN [64] 56.0 47.4 38.8 29.7 20.5 38.5 50.1 34.88.333.9G-TAD [73]TSN [64] 54.5 47.6 40.3 30.8 23.4 39.3 50.4 34.69.034.1P-GCN [76]I3D [4]63.6 57.8 49.1---48.3 33.23.331.1TCANet [44]TSN [64] 60.6 53.2 44.6 36.8 26.7 44.3 52.3 36.76.935.5ContextLoc [86]I3D [4]68.3 63.8 54.3 41.8 26.2 50.9 56.0 35.23.634.2VSGN [80]TSN [64] 66.7 60.4 52.4 41.0 30.4 50.2 52.4 36.08.435.1RTD-Net [58]I3D [4]68.3 62.3 51.9 38.8 23.7 47.2 30.78.630.8SSN [82]TS [54]51.0 41.0 29.8---43.2 28.75.628.3GTAN [39]P3D [45] 57.8 47.2 38.8---52.6 34.18.934.3AFSD [29]I3D [4]67.3 62.4 55.5 43.7 31.1 52.0 52.4 35.36.534.4React [48]I3D [4]69.2 65.0 57.1 47.8 35.6 55.0 49.6 33.08.632.6TadTR [38]I3D [4]62.4 57.4 49.2 37.8 26.3 46.6 49.1 32.68.532.3Actionformer [77] I3D [4]82.1 77.8 71.0 59.4 43.9 66.8 54.2 36.97.636.0ASLI3D [4]83.1 79.0 71.7 59.7 45.8 67.9 54.1 37.48.036.2", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation studies of components. ASE: Action Sensitivity Evaluator. class.: class-level modeling. inst.: instance-level modeling. ASCL: Action Sensitive Contrastive Loss.", "figure_data": "ComponentsmAP at different tIoUs#ASE class. inst.ASCL0.20.50.7Avg.139.6 25.9 11.6 23.42✓41.0 26.5 12.9 24.53✓40.5 26.2 12.0 23.94✓40.2 26.1 11.8 23.75✓✓41.9 27.0 13.6 25.16✓✓41.8 27.2 13.3 25.07✓✓✓42.4 27.8 13.7 25.5", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation studies of Gaussians weights. cls and loc denotes classification and localization sub-task. For gaussian weights in class-level action sensitivity learning, learnable/fixed denotes parameters learnable/not learnable. 
None denotes not using gaussian weights.", "figure_data": "#cls.loc.0.10.30.5Avg.1NoneNone40.9 26.3 12.3 24.22None40.9 26.5 12.4 24.43fixedfixed41.0 26.6 12.7 24.64learnable 41.7 26.8 13.0 24.95None41.9 27.1 13.0 24.96learnablefixed42.0 26.9 13.4 25.17learnable 42.427.8 13.7 25.5", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Ablation studies of number of Gaussians weights.", "figure_data": "", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" } ]
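The Figure 2 caption above describes ASL's class-level action sensitivity as learnable Gaussian-like weights over the frames of a sampled ground-truth segment, and Tables 6-7 ablate how those weights are configured for the classification and localization sub-tasks. The sketch below is a minimal, hypothetical rendering of that idea in PyTorch: the per-class center/width parameterization, the normalization step, and all names are assumptions made here for illustration, not the authors' released implementation.

```python
import math

import torch
import torch.nn as nn


class ClassLevelGaussianWeights(nn.Module):
    """Hypothetical class-level, sub-task-specific frame weights (one module per sub-task).

    For each action class c we keep a learnable center mu_c (relative position inside a
    sampled ground-truth segment, in [0, 1]) and a learnable width sigma_c. Frames close
    to mu_c receive larger training weights for this sub-task.
    """

    def __init__(self, num_classes: int, init_mu: float = 0.5, init_sigma: float = 0.3):
        super().__init__()
        self.mu = nn.Parameter(torch.full((num_classes,), init_mu))
        self.log_sigma = nn.Parameter(torch.full((num_classes,), math.log(init_sigma)))

    def forward(self, class_ids: torch.Tensor, num_frames: int) -> torch.Tensor:
        """class_ids: (B,) ground-truth class of each sampled segment.
        Returns per-frame weights of shape (B, num_frames), normalized to mean 1."""
        t = torch.linspace(0.0, 1.0, num_frames, device=class_ids.device)   # (T,)
        mu = self.mu[class_ids].unsqueeze(1)                                 # (B, 1)
        sigma = self.log_sigma[class_ids].exp().unsqueeze(1)                 # (B, 1)
        w = torch.exp(-0.5 * ((t.unsqueeze(0) - mu) / sigma) ** 2)           # (B, T)
        return w / (w.mean(dim=1, keepdim=True) + 1e-8)


# Example: weights for two sampled segments of 16 frames each.
weights_cls = ClassLevelGaussianWeights(num_classes=65)
w = weights_cls(torch.tensor([3, 10]), num_frames=16)
print(w.shape)  # torch.Size([2, 16])
```

Under this reading, a frame's weight simply rescales its per-frame loss for the given sub-task, which is consistent with the caption's statement that "each frame's weight in training is adjusted based on action sensitivity."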
[{"Category": "Supporting Evidence", "Citation": "[25]", "Explanation": "The cited work highlights the various applications of TAL in sports highlighting, human action analysis, and security monitoring, providing a context for the research conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[63]", "Explanation": "The cited work further emphasizes the importance of TAL in video understanding, which aligns with the research focus of the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[46]", "Explanation": "The cited work provides additional evidence of the challenges and importance of TAL in video understanding, which the citing paper aims to address."}, {"Category": "Supporting Evidence", "Citation": "[17]", "Explanation": "The cited work further highlights the relevance of TAL in video understanding, providing a context for the research conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[14]", "Explanation": "The cited work provides a final piece of evidence of the relevance of TAL in video understanding, which the citing paper aims to address."}, {"Category": "Methodological Basis", "Citation": "[77,29]", "Explanation": "The cited works provide a typical method for predicting action categories and locating temporal boundaries in frame-level, which the citing paper adopts in its research to achieve stronger TAL results."}, {"Category": "Methodological Basis", "Citation": "[75]", "Explanation": "The cited work, Multi-Thumos, provides a dataset that the citing paper uses to conduct experiments and verify the effectiveness of the proposed Action Sensitive Contrastive Loss (ASCL). The dataset serves as a benchmark for action localization and is essential for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[19]", "Explanation": "The cited work, Ego4d-Moment Queries v1.0, is a dataset that the citing paper uses in its research. The dataset is a source of data for the study conducted in the citing paper and contributes to the analysis and results presented."}, {"Category": "Extension or Continuation", "Citation": "[11]", "Explanation": "The cited work, Epic-Kitchens 100, is a dataset that the citing paper extends the research to by conducting experiments and verifying the effectiveness of the proposed Action Sensitive Contrastive Loss (ASCL). The dataset serves as a new benchmark for action localization and contributes to the study conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[57]", "Explanation": "The cited work, Thumos14, is used as a reference to support the claim that the cited work achieves superior results in nearly single-labeled datasets."}, {"Category": "Supporting Evidence", "Citation": "[2]", "Explanation": "The cited work, ActivityNet1.3, is also used as a reference to support the claim that the cited work achieves superior results in nearly single-labeled datasets."}, {"Category": "Methodological Basis", "Citation": "[31,33,56,58,65]", "Explanation": "The cited works are mentioned in the context of action proposal generation, indicating that the citing paper may adopt or adapt the methods and techniques used in these works to generate action proposals."}, {"Category": "Extension or Continuation", "Citation": "[51,70,49,82]", "Explanation": "The cited works are mentioned in the context of integrating action proposal, calibrated backbone, classification, and boundary regression or refinement modules into a single model. 
The citing paper may build upon this approach by exploring new dimensions or variables in the integration process."}, {"Category": "Extension or Continuation", "Citation": "[76,86,66]", "Explanation": "The cited works are mentioned in the context of investigating proposal relations, utilizing graph modeling, or designing fine-grained temporal representation. The citing paper may extend the research in these areas by exploring new aspects or contexts in the investigation of proposal relations or the utilization of graph modeling and fine-grained temporal representation."}, {"Category": "Extension or Continuation", "Citation": "[73,76]", "Explanation": "The cited works are mentioned in the context of utilizing graph modeling in the context of temporal action localization research. The citing paper may build upon this approach by exploring new dimensions or variables in the utilization of graph modeling in the context of temporal action localization research."}, {"Category": "Extension or Continuation", "Citation": "[44,55]", "Explanation": "The cited works are mentioned in the context of designing fine-grained temporal representation in the context of temporal action localization research. The citing paper may build upon this approach by exploring new dimensions or variables in the design of fine-grained temporal representation in the context of temporal action localization research."}, {"Category": "Methodological Basis", "Citation": "[49,81,32]", "Explanation": "The cited works are mentioned in the context of performing frame-level or segment-level classification and directly localizing or merging segments in the context of temporal action localization research. The citing paper may adopt or adapt the methods and techniques used in these works in the process of frame-level or segment-level classification and direct localization or merging of segments in the context of temporal action localization research."}, {"Category": "Data Source", "Citation": "[80,39]", "Explanation": "The cited works are mentioned in the context of processing the video with the assistance of pre-defined anchors or learned proposals. The citing paper may utilize the data or pre-defined anchors and proposals from these works in the processing of the video in the context of temporal action localization research."}, {"Category": "Methodological Basis", "Citation": "[29,77,79]", "Explanation": "The cited works are mentioned in the context of utilizing existing information and being totally anchor-free in the context of temporal action localization research. The citing paper may adopt or adapt the methods and techniques used in these works in the utilization of existing information and the absence of anchors in the context of temporal action localization research."}, {"Category": "Methodological Basis", "Citation": "[71,72]", "Explanation": "The cited works are mentioned in the context of introducing pre-train-finetune to the temporal action localization task. The citing paper may adopt or adapt the pre-train-finetune approach in the context of the temporal action localization task."}, {"Category": "Methodological Basis", "Citation": "[38,7,37]", "Explanation": "The cited works are mentioned in the context of training the model in an efficient end-to-end manner in the context of temporal action localization research. 
The citing paper may adopt or adapt the methods and techniques used in these works in the training of the model in an efficient end-to-end manner in the context of temporal action localization research."}, {"Category": "Extension or Continuation", "Citation": "[61,10,9,24,59,8]", "Explanation": "The cited works are mentioned in the context of densely-labeled setting in the context of temporal action localization research. The citing paper may build upon the research in this area by exploring new dimensions or variables in the densely-labeled setting in the context of temporal action localization research."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work, DETR, serves as a methodological basis for the citing paper as it has been successful in object detection and is used as a reference for the one-stage TAL paradigm."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work is used in the citing paper to incorporate fixed gaussian-like weights to fuse the coarse and fine stage in the one-stage TAL paradigm."}, {"Category": "Methodological Basis", "Citation": "[39]", "Explanation": "The cited work is used in the citing paper to improve receptive fields and optimize the temporal scale of action proposals in the one-stage TAL paradigm."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work is used in the citing paper to incorporate Gaussian kernels to improve receptive fields and optimize the temporal scale of action proposals in the one-stage TAL paradigm."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work is used in the citing paper to use fixed gaussian-like weights to better encode videos in the one-stage TAL paradigm."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work is used in the citing paper as a methodological basis for the one-stage TAL paradigm in object detection."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work introduces the NCE method for data feature mining, which the citing paper adopts to improve the contrastive learning objective in their research."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work presents the Info-NCE method for data feature mining, which the citing paper incorporates into their research to enhance the contrastive learning objective."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work by [29] leverages ranking loss to boost discrimination between foreground and background, which the citing paper adopts in their research to improve the performance of their model."}, {"Category": "Extension or Continuation", "Citation": "[48]", "Explanation": "The cited work by [48] contrasts different actions with a global representation of action segments, which the citing paper extends by introducing a new contrastive loss to contrast between i)same and different action classes, ii)sensitive frames of localization and classification to mitigate the misalignment of sub-tasks."}, {"Category": "Data Source", "Citation": "[50]", "Explanation": "The cited work by [50] contrasts between actions and backgrounds, which the citing paper uses as a reference in their research to contrast more between i)same and different action classes, ii)sensitive frames of localization and classification to mitigate the misalignment of sub-tasks."}, {"Category": "Methodological Basis", 
"Citation": "[77,29]", "Explanation": "The cited works are used as a basis for the design of a new attention mechanism in the Transformer encoder, which is employed in the ASL model to enhance features."}, {"Category": "Data Source", "Citation": "[77,29]", "Explanation": "The cited works are cited to acknowledge the origin of the feature sequence data and the multiscale representation used in the ASL model."}, {"Category": "Methodological Basis", "Citation": "[77]", "Explanation": "The cited work provides the initialization values for the class-level action sensitivity parameters in the model training process, which the citing paper adopts to ensure accurate results in the sub-tasks."}, {"Category": "Methodological Basis", "Citation": "[60]", "Explanation": "The cited work provides the initialization values for the class-level action sensitivity parameters in the model training process, which the citing paper adopts to ensure accurate results in the sub-tasks."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work introduces the concept of focal loss, which the citing paper adopts in the classification sub-task to improve the accuracy of action classification."}, {"Category": "Methodological Basis", "Citation": "[83]", "Explanation": "The cited work introduces the DIoU loss function, which the citing paper adopts in their research to regress offsets from current frames to boundaries in the localization sub-task."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces the concept of SoftNMS, which the citing paper adopts in the inference process to suppress redundant predictions in action category predictions."}, {"Category": "Data Source", "Citation": "[75]", "Explanation": "The cited work, MultiThumos, is a dataset that the citing paper uses to conduct experiments and validate the efficacy of the proposed ASL method."}, {"Category": "Data Source", "Citation": "[53]", "Explanation": "The cited work, Charades, is another dataset that the citing paper uses in their experiments to evaluate the performance of the proposed ASL method."}, {"Category": "Data Source", "Citation": "[19]", "Explanation": "The cited work, Ego4D-Moment Queries v1.0, is a large-scale egocentric benchmark dataset that the citing paper utilizes in their experiments to test the effectiveness of the proposed ASL method."}, {"Category": "Data Source", "Citation": "[11]", "Explanation": "The cited work, Epic-Kitchens 100, is a dataset that the citing paper uses in their experiments to assess the performance of the proposed ASL method."}, {"Category": "Data Source", "Citation": "[57]", "Explanation": "The cited work, Thumos14, is a single-labeled dataset that the citing paper uses in their experiments to evaluate the effectiveness of the proposed ASL method."}, {"Category": "Data Source", "Citation": "[2]", "Explanation": "The cited work, ActivityNet1.3, is a single-labeled dataset that the citing paper uses in their experiments to test the performance of the proposed ASL method."}, {"Category": "Data Source", "Citation": "[4]", "Explanation": "The cited work, I3D, is a pre-extracted feature used as input for the MultiThumos dataset in the citing paper."}, {"Category": "Data Source", "Citation": "[30]", "Explanation": "The cited work, EgoVLP, is a pre-extracted feature used as input for the Ego4D-MQ1.0 dataset in the citing paper."}, {"Category": "Data Source", "Citation": "[15]", "Explanation": "The cited work, Slowfast, is a pre-extracted 
feature used as input for the Epic-Kitchens 100 dataset in the citing paper."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "The cited work, Slowfast, is a pre-extracted feature used as input for the Epic-Kitchens 100 dataset in the citing paper."}, {"Category": "Data Source", "Citation": "[18]", "Explanation": "The cited work, Omnivore, is a pre-extracted feature used as input for the Ego4D-MQ1.0 dataset in the citing paper."}, {"Category": "Data Source", "Citation": "[59]", "Explanation": "The cited work is the source of the video clip splitting and prediction generation method used in the citing paper to train the model on MultiThumos and Charades datasets."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work provides the multi-scale mechanism used in the training process to assign actions of different duration to different feature levels in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[77,60]", "Explanation": "The cited works are extensions of the action sensitivity modeling approach in the citing paper, as they address the issue of action overlap by choosing the action with the shortest duration as the ground-truth target and model its action sensitivity based on that ground-truth."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work, PDAN, serves as a methodological basis for the citing paper in terms of frame-level representation in TAL."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work, coarsefine, provides a methodological basis for the citing paper in terms of frame-level representation in TAL."}, {"Category": "Methodological Basis", "Citation": "[61]", "Explanation": "The cited work, MLAD, serves as a methodological basis for the citing paper in terms of frame-level representation in TAL."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work, MS-TCT, provides a methodological basis for the citing paper in terms of frame-level representation in TAL."}, {"Category": "Methodological Basis", "Citation": "[59]", "Explanation": "The cited work, PointTAD, serves as a methodological basis for the citing paper in terms of query-based representation in TAL."}, {"Category": "Supporting Evidence", "Citation": "[77]", "Explanation": "The cited work, Actionformer, serves as a state-of-the-art method for frame-level recognition and localization, providing a strong feature for the citing paper to build upon in their research."}, {"Category": "Extension or Continuation", "Citation": "[30]", "Explanation": "The cited work, EgoVLP, is used in the citing paper to surpass the current best entry in terms of performance, indicating an extension or continuation of the research in the field of action recognition and localization."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work, slowfast, is used in the citing paper to improve the performance of action recognition and localization by combining it with other features, such as omnivore and EgoVLP."}, {"Category": "Extension or Continuation", "Citation": "[18]", "Explanation": "The cited work, omnivore, is used in the citing paper to further enhance the performance of action recognition and localization by combining it with other features, such as slowfast and EgoVLP."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The cited work, In-ternVideo, is used in the citing paper to 
achieve better performance in action recognition and localization, indicating an extension or continuation of the research in the field."}, {"Category": "Extension or Continuation", "Citation": "[73]", "Explanation": "The cited work, G-TAD, is used in the citing paper to perform better in action recognition and localization on the challenging, egocentric and densely labeled benchmark, demonstrating an extension or continuation of the research in the field."}, {"Category": "Extension or Continuation", "Citation": "[12]", "Explanation": "The cited work, Slowfast, is used in the citing paper to perform better in action recognition and localization on the popular and nearly single-labeled datasets, such as Thumos14 and ActivityNet1.3, indicating an extension or continuation of the research in the field."}, {"Category": "Supporting Evidence", "Citation": "[33]", "Explanation": "The cited work, BSN, is used as a benchmark to compare the results of ASL in Table 4, providing a reference point for the performance of the two-stage method."}, {"Category": "Supporting Evidence", "Citation": "[73]", "Explanation": "The cited work, G-TAD, is also used as a benchmark in Table 4 to compare the results of ASL, providing another reference point for the performance of the two-stage method."}, {"Category": "Supporting Evidence", "Citation": "[76]", "Explanation": "The cited work, P-GCN, is used in Table 4 to compare the results of ASL, providing another benchmark for the performance of the two-stage method."}, {"Category": "Supporting Evidence", "Citation": "[58]", "Explanation": "The cited work, RTDNet, is used in Table 4 to compare the results of ASL, providing another benchmark for the performance of the two-stage method."}, {"Category": "Supporting Evidence", "Citation": "[29]", "Explanation": "The cited work, AFSD, is used in Table 4 to compare the results of ASL, providing another benchmark for the performance of the one-stage method."}, {"Category": "Supporting Evidence", "Citation": "[82]", "Explanation": "The cited work, SSN, is used in Table 4 to compare the results of ASL, providing another benchmark for the performance of the one-stage method."}, {"Category": "Supporting Evidence", "Citation": "[77]", "Explanation": "The cited work, Actionformer, is used in Table 4 to compare the results of ASL, providing another benchmark for the performance of the one-stage method."}]
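Several of the citation notes above state that the classification sub-task adopts a focal loss [35], the localization sub-task a DIoU-style regression loss [83], and that SoftNMS [1] suppresses redundant predictions at inference. For reference, the snippet below is a generic sigmoid focal loss as commonly defined in the detection literature; it is a textbook formulation, not the exact loss used in the cited papers.

```python
import torch
import torch.nn.functional as F


def sigmoid_focal_loss(logits: torch.Tensor, targets: torch.Tensor,
                       alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Generic sigmoid focal loss for per-frame, per-class action scores.

    logits, targets: tensors of shape (N, C), with targets holding 0/1 labels.
    The (1 - p_t)^gamma term down-weights easy examples so training focuses on hard ones.
    """
    prob = logits.sigmoid()
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = prob * targets + (1.0 - prob) * (1.0 - targets)   # probability of the true label
    loss = ce * (1.0 - p_t) ** gamma
    if alpha >= 0:
        loss = (alpha * targets + (1.0 - alpha) * (1.0 - targets)) * loss
    return loss.mean()


# Example: 8 frames, 20 action classes.
scores = torch.randn(8, 20)
labels = torch.randint(0, 2, (8, 20)).float()
print(sigmoid_focal_loss(scores, labels))
```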
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b8", "b11", "b12", "b13", "b14", "b15" ], "table_ref": [ "tab_1", "tab_1" ], "text": "Federated Learning (FL) is an emerging paradigm that can preserve data privacy while training machine learning models. In FL [1], a parameter server (PS), e.g., a cloud server, is deployed to coordinate the training process over multiple global iterations. In each global iteration, the PS communicates with participating FL clients owning original private data for exchanging model parameters. A significant challenge in FL lies in the statistical heterogeneity of data owned by different clients (i.e., non-IID data) because the data generated by different clients obeys distinct distributions [1]. Consequently, a single global model trained by FL fails to fit heterogeneous data on individual clients very well. In extreme cases, the data heterogeneity can severely lower model utility, slow down the FL convergence, and even make FL diverge [2].\nTo tackle the challenge of non-IID data on FL clients, personalized FL (pFL) is proposed with the principle to customize models for individual clients. Until now, similarity-based aggregation and model decoupling are two most widely studied approaches to achieve pFL. The principle of the former approach is to aggregate clients of a similar data distribution so that a personalized model can be produced to fit local data [3,4,5,6,7]. However, original data is invisible in FL making similarity estimation difficult. Most existing works require FL clients to expose additional information such as statistical knowledge of data label distribution for similarity estimation [8,9]. Whereas, exposing additional information may incur heavy communication/computation overhead and make clients suffer from privacy leakage [10,11]. For example, FedAP [9] required FL clients to expose statistical knowledge of their private data to estimate client similarity. The Wasserstein distance of normal distributions generated by running statistics of two arbitrary clients' local batch normalization layers is used to measure similarity, which is then used to guide model aggregation under severe non-IID data settings. To avoid exposing additional information to the federated learning (FL) server, FL clients in FedFomo [12] receive additional models belonging to their neighboring clients from the PS to gather knowledge and guide personalized aggregation. However, this can significantly aggravate the communication burden of FL clients. Another approach is to decouple a neural network (NN) model into a feature extractor and a classifier. Previous studies [13,14] suggest that the final fully connected layer in CNN models, such as [15,16], should be included in the classifier part, while other layers should be included in the feature extractor part. The classifier is mainly updated by local training to achieve personalized performance, while the feature extractor is trained across all FL clients to fully utilize all data in the system. Different from existing works, we propose a novel pFedSim (pFL with model similarity) algorithm by combining similarity-based aggregation and model decoupling methods. More specifically, pFedSim decouples a neural network model as a personalized feature extractor and a classifier. Client similarity is measured by the distance of classifiers, and personalized feature extractors are obtained by aggregating over similar clients. 
Considering that model parameters are randomly initialized, we design pFedSim with two phases: generalization and personalization. In the generalization phase, traditional FL algorithms, e.g., FedAvg, is executed for model training. The personalization phase is an iterative learning process with two distinct operations in each global iteration, which includes: 1) Refining feature extractors and classifiers by local training. 2) Similarity (measured based on the classifier distance) based feature extractor aggregation to fully utilize data across similar clients. Compared with existing model decoupling methods, pFedSim can significantly improve model accuracy by personalizing the feature extractor part. Compared with existing similarity-based methods, pFedSim can more accurately capture client similarity based on the classifier distance. Meanwhile, pFedSim averts heavy communication/computation overhead and privacy leakage risks because no additional information other than model parameters is exposed. Our empirical study by using real datasets, i.e., CIFAR-10, CINIC-10, Tiny-ImageNet and EMNIST, demonstrates that pFedSim significantly outperforms the state-of-the-art baselines under various non-IID data settings. The experiment results demonstrate that pFedSim can improve model accuracy by 2.96% on average.\nTo have a holistic overview of the advantages of pFedSim, we qualitatively compare it with the state-of-the-art baselines in Table 1 from four perspectives: communication load, privacy leakage risk, computation load and requirement for external data (which is usually unavailable in practice). Through the comparison, it is worth noting that pFedSim is the only one without any shortcomings listed in Table 1 because our design is only based on exposed model parameters." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b0", "b2", "b5", "b6", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b17", "b28", "b16", "b29", "b30", "b8", "b11", "b8", "b12", "b31", "b13", "b32", "b33", "b34", "b33" ], "table_ref": [], "text": "In this section, we overview prior related works and discuss our contribution in comparison to them.\nTraditional FL Traditional FL aims to train a single global model to fit data distributed on all clients. The first traditional model training algorithm in FL is FedAvg [1], which improves training efficiency by updating the model locally over multiple rounds to reduce the communication frequency. However, FedAvg fails to consider the non-IID property of data in FL, and thus its performance is inferior under heterogeneous data settings. To address the challenge of data heterogeneity, various variants of FedAvg were proposed. For examples, [3] and [6] introduced a proximal term to clients' optimization objectives so as to normalize their local model parameters and prevent over-divergence from the global model. [7] added the momentum in model aggregation to mitigate harmful oscillations, which can achieve a faster convergence rate, and hence improve model utility. [19] and [20] changed the empirical loss function, such as cross-entropy loss, to advanced loss functions so as to improve model learning performance.\n[5] combined model-based contrastive learning with FL, bridging the gap between representations produced by the global model and the current local model to limit parameter divergence. 
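To make the proximal-term idea mentioned above concrete, the following sketch augments a client's task loss with a FedProx-style penalty (mu/2)·||θ − θ_global||², which discourages the local model from drifting away from the global snapshot received at the start of the round. The function and variable names are illustrative and are not taken from any of the cited implementations.

```python
import torch


def fedprox_local_loss(model: torch.nn.Module,
                       global_params: list,
                       task_loss: torch.Tensor,
                       mu: float = 0.01) -> torch.Tensor:
    """FedProx-style local objective: task loss + (mu / 2) * ||theta - theta_global||^2.

    `global_params` is a snapshot of the global model taken when the round started;
    the proximal term keeps the local parameters from drifting too far from it.
    """
    prox = torch.zeros((), device=task_loss.device)
    for p, g in zip(model.parameters(), global_params):
        prox = prox + ((p - g.detach()) ** 2).sum()
    return task_loss + 0.5 * mu * prox


# Illustrative usage inside a client's local training step:
#   snapshot = [p.detach().clone() for p in model.parameters()]
#   loss = fedprox_local_loss(model, snapshot, criterion(model(x), y), mu=0.01)
#   loss.backward()
```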
Despite the progress made by these works in making a single global model more robust in non-IID data scenarios, they overlooked the difference between the global optimization objective and individual clients' diverse local objectives. Thus, their performance is still unsatisfactory.\nPersonalized FL Later on, more sophisticated personalized FL was proposed that can optimize the model of FL for each individual client through collaborating with other clients of a similar data distribution. pFL is an effective approach to overcoming the data heterogeneity issue in FL. Existing works have proposed different ways to achieve pFL. [21,22,23] were designed based on the similarity between the optimization objectives of various clients in FL, and used meta-learning approaches [24,25] for rapid adaptation of the global model to the local data distribution after fine-tuning at the client side. [26] combined multi-task learning with FL that considers each client as a learning task in order to transfer federation knowledge from the task relationship matrix. [27,28] showed the effectiveness to produce exclusive model parameters and aggregation weights for individual clients by making use of hypernetwork. [18,29,17] combined the knowledge distillation with FL, which leverage the global knowledge to enhance the generalization of each client model. Nevertheless, these works require the availability of a public dataset which is not always valid in FL scenarios. [30,31] mixed the global model with local models to produce a personalized model for each client.\nTransmitting additional client information and model decoupling are two commonly used method to boost the performance of pFL. FedAP [9] pre-trained models on a portion of the training data, and transmitted additional data features to the PS, which is not easy for implementation in practice. FedFomo [12] set a similarity matrix to store all client models on the server like [9], but offloaded the model aggregation to the client side. Based on the similarity matrix, multiple models are distributed to each single client, which will be evaluated by each client based on a local validation set to determine their weights for subsequent model aggregation. However, distributing multiple models to each client can considerably surge the communication cost. On the contrary, model decoupling achieves pFL with a much lower cost. Decoupling the feature extractor and classifier from the FL model can achieve personalization, which has been explored by [13,32,14,33,34]. [35] has demonstrated that FL model performance will deteriorate if the classifier is involved into global model aggregation, given non-IID data. Therefore, to achieve personalization, the classifier should be retained locally without aggregating with classifiers of other clients. [34] designed dual classifiers for pFL, i.e., the personal one and the generic one, for personalization and generalization purposes, respectively. We design pFedSim by combining similarity based pFL and model decoupling based pFL to exert their strengths without incurring their weaknesses. On the one hand, we advance model decoupling by personalize feature extractors based on classifier-based similarity. On the other hand, pFedSim outperforms existing similarity based pFL methods because pFedSim clients never exposes overhead information except model parameters for similarity estimation." 
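For readers unfamiliar with the decoupling convention used throughout this paper (the final fully connected layer is the classifier φ, everything before it is the feature extractor ω), the sketch below splits a small LeNet-style CNN accordingly so that the two parameter groups can be treated differently during aggregation. The layer sizes assume 32×32 RGB inputs such as CIFAR-10 and are placeholders rather than the exact architecture listed in the appendix.

```python
import torch
import torch.nn as nn


class DecoupledCNN(nn.Module):
    """A small LeNet-style CNN split into a feature extractor (omega) and a classifier (phi),
    following the decoupling theta = omega . phi; sizes assume 32x32 RGB inputs."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # omega: every layer except the final fully connected one; aggregated across clients.
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
        )
        # phi: the final fully connected layer; kept local for personalization.
        self.classifier = nn.Linear(84, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.feature_extractor(x))


model = DecoupledCNN()
omega_params = dict(model.feature_extractor.named_parameters())  # shared / aggregated
phi_params = dict(model.classifier.named_parameters())           # kept personal
print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
```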
}, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [ "b0", "b12", "b13", "b14", "b15" ], "table_ref": [], "text": "We introduce preliminary knowledge of FL and pFL in this section. Here 0 ≤ r ≤ 1 is the participating ratio. Clients in S t will receive the latest model parameters θ t from the PS, and then they perform local training for E ∈ N + epochs. The objective of FL is defined by min θ y))] represents the empirical loss function with model θ on client i and ℓ is the loss defined by a single sample. After conducting local training, clients send their updated models back to the PS. The PS aggregates these local models to generate a new global model, which is then used for the next round. The pseudo code of the FL algorithm, i.e., FedAvg [1], is presented in Algorithm 1, where SGD i (•) is the local optimizer to minimize f i (θ) on the i-th client based on stochastic gradient descent (SGD).\nFL Procedure Let C = {1, . . . ,\n1 n n i=1 f i (θ). Here, f i (θ) = E (x,y)∼Di [ℓ(θ; (x,\nSince the data distribution is non-IID in FL, implying that each D i can be drawn from a different distribution. As a result, training a uniform model θ cannot well fit heterogeneous data distributions on all clients. pFL resorts to learning personalized models denoted by θ 1 , θ 2 , . . . , θ n for n clients.\npFL by Model Decoupling Model decoupling is one of the most advanced techniques to achieve pFL by decoupling a complex neural network (NN) model as a feature extractor and a classifier. Formally, the model θ is decoupled as θ = ω • φ, where ω is the feature extractor and φ is the classifier.\nAccording to previous study [13,14], the final fully connected layer in CNN (convolutional neural network) models such as [15,16] should be included in the classifier part φ that can well capture local data patterns. The generalization capability of CNNs is captured by other layers which should be included in the feature extractor part ω.\nThe objective of pFL via model decoupling is more flexible, which can be formally expressed by min ω\n1 n n i=1 min φ1,...,φn f i (ω, φ i ). Here θ i = ω • φ i .\nThe inner min optimization is conducted on individual clients by tuning classifiers. Whereas, the outer min optimization is dependent on the PS by aggregating multiple feature extractors. Note that the ω part is identical for all clients and personalization is only reflected by the classifier part φ i ." }, { "figure_ref": [], "heading": "pFL Algorithm Design", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct a measurement study to show the effectiveness to measure client similarity by the distance of classifiers. Then, we elaborate the design of the pFedSim algorithm." }, { "figure_ref": [ "fig_0", "fig_0", "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "Measurement Study", "publication_ref": [ "b35", "b6", "b14", "b36", "b37", "b11", "b8", "b37", "b11" ], "table_ref": [ "tab_5" ], "text": "In FL, the challenge in identifying similar clients lies in the fact that data is privately owned by FL clients, which is unavailable to the PS. Through a measurement study, we unveil that the distance between classifiers, i.e., φ i , is the most effective metric to estimate client similarity.\nOur measurement study consists of three steps: 1) Classifiers can be used to precisely discriminate models trained by non-IID data. 2) The similarity between classifiers is highly correlated with the similarity of data distributions. 
3) The classifier-based similarity is more effective than other metrics used in existing works. Step 1. we conduct a toy experiment by using the CIFAR-10 dataset [36] with labels 0 to 9, which is randomly partitioned into 10 subsets according to the Dirichlet distribution (a classical distribution to generate non-IID data) with α = 0.1 [7]. Each subset is used to train an independent LeNet5 [15] model (with five layers in total) for 20 global iterations. Centered-Kernel-Alignment (CKA) proposed by [37] is used as the metric to evaluate the similarity of outputs of the same layer from two models, given the same input data. The score of CKA similarity is from 0 (totally different) to 1 (identical). We use the CKA metric to compute the similarity for each layer between any pair of models in Figure 1. It is interesting to note that CKA comparison of the last layer, i.e., the classifier, can precisely discriminate models trained on different clients. For the comparison of Layer 0, it is almost impossible to discriminate different models. The comparison of Layer 3 is better for discriminating models, but still worse than that of Layer 4.\nStep 2. Although Figure 1 indicates that models can be effectively distinguished through classifiers, it is still opaque how the classifier can guide us to identify similar clients. Thus, we further conduct a toy experiment to evaluate the correlation between classifier similarity and data distribution similarity. We define the following distance between two datasets to measure data similarity between two clients.\ndist(D i , D j ) = 1 - {(x, y)|y ∈ Y i ∩ Y j } (x,y)∼Di∪Dj |D i ∪ D j | .(1)\nIntuitively speaking, dist(D i , D j ) measures the fraction of data samples belonging to labels commonly owned by clients i and j.\nTo visualize the distance between different clients, we employ a label non-IID setting by manually distributing data samples of CIFAR-10 according to their labels to 4 exclusive subsets denoted by {D i , D i ′ , D j , D k }. D i and D i ′ contain data samples with labels {0 ∼ 4}. D j and D k contain data samples with labels {2 ∼ 6} and {5 ∼ 9}, respectively. Based on Eq. ( 1), it is easy to verify that:\ndist(D i , D i ′ ) < dist(D i , D j ) < dist(D j , D k ) < dist(D i , D k ).\nWe define the data similarity as\n1 -dist(D i , D i ′ ), and thus sim(D i , D i ′ ) > sim(D i , D j ) > sim(D j , D k ) > sim(D i , D k ).\nFor a particular dataset, e.g., D i , a model θ i is trained independently. By decoupling θ i , let φ i denote its classifier. Each classifier can be further decomposed into a collection of decision boundaries, denoted by φ i = {φ i,c } c∈Y , where φ i,c represents the c-th class decision boundary in the i-th client's classifier. We define the similarity between two classifiers as the average of similarities between their decision boundaries as follows:\nsim(φ i , φ j ) = 1 |Y| c∈Y sim vec (φ i,c , φ j,c ),(2)\nwhere sim vec (•) is a similarity metric (e.g., cosine similarity in our experiment) for measuring the distance between two vectors. We randomly initialize four LeNet5 models {θ i , θ i ′ , θ j , θ k } under the same random seed and distribute them to corresponding datasets. Then, SGD is conducted to independently train those LeNet5 models for 20 iterations. 
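Before turning to the results of this toy experiment, the snippet below shows one minimal way to compute the classifier similarity of Eq. (2): each row of a linear classifier's weight matrix, with its bias appended, is treated as the decision boundary of one class, and the score is the average cosine similarity over classes. Appending the bias is an implementation choice made here for illustration and is not necessarily how the measurement above was carried out.

```python
import torch
import torch.nn.functional as F


def classifier_similarity(phi_i: torch.nn.Linear, phi_j: torch.nn.Linear) -> float:
    """Average per-class cosine similarity between two linear classifiers (Eq. 2).

    Row c of the weight matrix, concatenated with its bias term, is taken as the
    decision boundary phi_{i,c} of class c."""
    wi = torch.cat([phi_i.weight, phi_i.bias.unsqueeze(1)], dim=1)   # (|Y|, d + 1)
    wj = torch.cat([phi_j.weight, phi_j.bias.unsqueeze(1)], dim=1)
    per_class = F.cosine_similarity(wi, wj, dim=1)                   # (|Y|,)
    return per_class.mean().item()


# Example: two randomly initialized 10-class classifiers over 84-dimensional features.
torch.manual_seed(0)
phi_a, phi_b = torch.nn.Linear(84, 10), torch.nn.Linear(84, 10)
print(round(classifier_similarity(phi_a, phi_b), 4))
```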
Because of non-IID data among 4 subsets, two models will diverge after 20 iterations if their data distance is large.\nThe experiment results are shown in Figure 2, which show similarity scores between any two classifiers together with similarity scores between decision boundaries for each class. From the experiment results, we can observe that\nsim(φ i , φ i ′ ) > sim(φ i , φ j ) > sim(φ j , φ k ) > sim(φ i , φ k ). Recall that sim(D i , D i ′ ) > sim(D i , D j ) > sim(D j , D k ) > sim(D i , D k ).\nThis result indicates that the classifier similarity is strongly correlated with the data similarity.\nWe zoom in to investigate the similarity of two classifier decision boundaries for each class. From Figure 2, we can see that the decision boundary similarity is high only if the class label is commonly owned (or missed) from two datasets. Otherwise, the similarity score is very low if the class is only owned by a single dataset. The comparison of decision boundary similarities further verifies the effectiveness to estimate data similarity by using classifier similarity because the label distribution can heavily affect the decision boundary similarity, and hence the classifier similarity.\nMetric (i, i ′ ) (i, j) (j, k) (i, k) Data Similarity 1 0.53 0.32 0 W DB [9]\n1.1 × 10 -3 9 × 10 -4 1.5 × 10 -3 1.3 × 10 -3 M DB [38] 0.24 0.11 0.13 0.08 LDB [12] 8.6 × 10 -6 -3.7 × 10 -4 -3. Step 3. To verify that classifier similarity is the most effective metric for estimating data similarity, we conduct the same experiment as Figure 2 by using metrics proposed in related works.\nMore specifically, we implement Wasserstein distance based similarity (W DB) [9], model difference based similarity (M DB) [38] and evaluation loss based similarity (LDB) [12] based on models we have obtained in Figure 2. W DB is based on distance between outputs of batch norm layers from different models, LDB is based on the difference between empirical loss evaluated on a dataset, and M DB calculates the difference between models before and after model training. We compare data similarity, W DB, M DB, LDB and our classifier similarity (CS) in Table 2, from which we can observe that CS is the best one to estimate data similarity because its values are highly correlated with data similarity.\nThrough above three steps, we have demonstrated the effectiveness to estimate data similarity by using classifier similarity. Besides, classifiers are part of model parameters exposed by clients, implying that no additional information is required. Based on our measurement findings, we design pFedSim since the next subsection." }, { "figure_ref": [], "heading": "pFedSim Design", "publication_ref": [], "table_ref": [], "text": "Inspired by our measurement study, we propose the pFedSim algorithm that performs personalized model aggregation based on the similarity between classifiers. Because the initial global model is randomly generated by the PS (without factoring in local datasets), it is difficult to estimate data similarity based on initialized model parameters. Thereby, we design pFedSim with two phases:\n1. Generalization Phase: It is also called the warm-up phase. In this phase, traditional FL such as FedAvg is conducted to obtain a global model with a relatively effective feature extractor and a classifier. We summarize the workflow of pFedSim and present an illustration in Appendix A.2 to facilitate understanding, which consisting of 4 steps:\n1. The PS distributes the latest models to clients.\n2. 
Participating clients train models on their respective private datasets.\n3. Participating clients send their updated models back to the PS. Personalization Phase Server executes: Decouples ω Tg-1 and φ Tg-1 from θ Tg-1 for all client i ∈ C in parallel do Set ω\nTg-1 i\n← ω Tg-1" }, { "figure_ref": [], "heading": "Set φ", "publication_ref": [ "b14", "b15", "b8", "b11" ], "table_ref": [], "text": "Tg-1 i ← φ Tg-1 for t = T g , . . . , T do m ← max(r \n(ω t i , φ t i ) ← SGD i (ω t i , φ t i ; D i ) return (ω t i , φ t i )\nHow to Compute Similarity We compute the similarity between classifiers by modifying the cosine similarity2 , which is symmetric, i.e., sim(φ i , φ j ) = sim(φ j , φ i ), with a low computation cost. The similarity of two classifiers is computed by:\nΦ ij = - 1 |Y| c∈Y log 1 -max 0, φ i,c • φ j,c ∥φ i,c ∥∥φ j,c ∥ + ϵ ,(3)\nwhere ϵ is always set as a small positive number (e.g., 10 -8 in our experiments) to avoid yielding extreme values. In our computation, the cosine similarity is further adjusted by two operations: 1) If the cosine similarity of two classifiers is negative, it makes no sense for these two clients to collaborate with each other. Thus, the final similarity is lower bounded by 0. 2) We utilize the negative logarithm function to further adjust the cosine similarity so that clients can be easily discriminated. This operation is similar to the softmax operation in the model output layer in CNNs [15,16].\nSimilarity-Based Feature Extractor Aggregation In the personalization phase, at the beginning of each communication round, the PS will aggregate an exclusive feature extractor for the i-th client in S t according to the similarity matrix Φ ∈ S n×n . The initial value of Φ is an identity matrix. A larger value of an entry, e.g., Φ ij ∈ [0, 1], implies that clients i and j are more similar to each other. The update of the personalized model for the i-th client in the personalization phase is:\nθ t i = φ t i • ω t i = ω t i = 1 j∈C Φij j∈C Φ ij ω t-1 j , φ t i = φ t-1 i .(4)\nNote that personalization is guaranteed because: 1) φ i is only updated locally without aggregating with others. 2) Φ ij (computed based on φ i and φ j ) is incorporated for personalizing the aggregation of feature extractors.\nWe summarize pFedSim in Algorithm 2. Our method pFedSim does not depend on exchanging of additional information rather than model parameters. Therefore, the advantage of pFedSim includes a lower cost and a higher privacy protection level, compared with other baselines such as [9,12]." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "In this section, we report our experiment results conducted with real datasets to demonstrate the superb performance of pFedSim." }, { "figure_ref": [ "fig_4" ], "heading": "Experiment Setup", "publication_ref": [ "b39", "b40", "b6", "b0", "b2", "b5", "b16", "b12", "b13", "b41", "b20", "b11", "b8", "b14", "b15", "b8", "b8" ], "table_ref": [], "text": "Datasets We evaluate pFedSim on four standard image datasets, which are CIFAR-10 [36], CINIC-10 [39], EMNIST [40] and Tiny-ImageNet [41]. We split each dataset into 100 subsets (i.e., n = 100) according to the Dirichlet distribution Dir(α) with α ∈ {0.1, 0.5} to simulate scenarios with non-IID data [7]. The non-IID degree is determined by α. 
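A minimal sketch of the Dir(α) label-based split described here: for every class, a Dirichlet draw over the n clients decides what fraction of that class each client receives, so a smaller α concentrates a class on fewer clients and yields a more severe non-IID partition. This follows the commonly used recipe from the partitioning literature and is shown only for illustration.

```python
import numpy as np


def dirichlet_partition(labels: np.ndarray, n_clients: int, alpha: float, seed: int = 0):
    """Split sample indices into n_clients subsets with a per-class Dir(alpha) draw.

    Smaller alpha concentrates each class on fewer clients, i.e. a more non-IID split."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx_c = np.flatnonzero(labels == c)
        rng.shuffle(idx_c)
        proportions = rng.dirichlet(alpha * np.ones(n_clients))      # class share per client
        cuts = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for client_id, part in enumerate(np.split(idx_c, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices


# Example: 50k samples over 10 classes split across 100 clients with alpha = 0.1.
toy_labels = np.random.default_rng(0).integers(0, 10, size=50_000)
parts = dirichlet_partition(toy_labels, n_clients=100, alpha=0.1)
print(len(parts), sum(len(p) for p in parts))  # 100 50000
```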
When α is very small, e.g., α = 0.1, the data non-IID degree is more significant implying that the data owned by a particular client cannot cover all classes, i.e., |Y i | ≤ |Y|, where Y i is the label space of data distributed on the i-th client. Figure 3 visualizes an example of the data distribution partitioned according to the Dir(α) distribution. The x-axis represents the client ID while the y-axis represents the class ID. The circle size represents the number of samples of a particular class allocated to a client. As we can see, the non-IID degree is more significant when α is smaller. More dataset details are presented in Appendix A.3. Baselines We compare pFedSim with both FL and state-of-theart pFL baselines. In order to create a baseline for evaluating the and personalization performance, respectively, we implement FedAvg [1] and Local-training-only in our experiments. Additionally, we implement the following baselines for comparison: 1) FedProx [3] is a regular FL method that adds a proximal term to the loss function as a regularization method to prevent the local model from drifting away from the global model; 2) FedDyn [6] is a regular FL method that utilizes a dynamic regularizer for each client to align the client local model with the global model; 3) FedGen [17] is a regular FL method that trains a feature generator to generate virtual features for improving local training; 4) FedPer [13] is a pFL method that preserves the classifier of each client model locally to achieve personalization; 5) FedRep [14] is a pFL method that preserves the classifier locally, and trains the classifier and the feature extractor sequentially; 6) FedBN [42] is a pFL method that preserves batch normalization layers in the model locally to stabilize training; 7) Per-FedAvg [21] is a pFL method that combines first-order meta-learning for quick model adaptation after local fine-tuning; 8) FedFomo [12] is a pFL method that personalizes model aggregation based on the loss difference between client models; 9) FedAP [9] is a pFL method using the Wasserstein distance as the metric to measure the client-wised similarity, which is then used to optimize model aggregation. Implementation We implemented LeNet5 [15] for all methods to conduct performance evaluation on CIFAR-10, CINIC-10 and EMNIST. MobileNetV2 [16] is implemented to classify Tiny-ImageNet images. According to the setup in [9], we evenly split the dataset of each client into the training and test sets with no intersection, i.e.,\n|D train i | = |D test i |.\nOn the client side, we adopt SGD as the local optimizer. The local learning rate is set to 0.01 for all experiments like [9]. All experiments shared the same join ratio r = 0.1, communication round T = 200, local epoch E = 5, and batch size 32. For pFedSim, we set the generalization ratio ρ = 0.5 (i.e., T g = 100, T p = 100). We list the used model architectures and full hyperparameter settings among aforementioned methods in Appendix A.4, A.5 respectively." }, { "figure_ref": [], "heading": "Experiment Results", "publication_ref": [], "table_ref": [], "text": "We evaluate pFedSim from three perspectives: model accuracy, effect of the hyperparameter ρ and overhead. The experiment results demonstrate that: 1) pFedSim achieves the highest model accuracy in non-IID scenarios; 2) The accuracy performance of pFedSim is not sensitive to the hyperparameter. 3) Low overhead is the advantage of pFedSim. 
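To make concrete what these evaluations are measuring, the sketch below summarizes the server-side personalization step of pFedSim described earlier: the adjusted cosine similarity of Eq. (3) fills the matrix Φ, and Eq. (4) aggregates feature extractors with the corresponding row of Φ while classifiers stay local. The state_dict-style bookkeeping is a simplification for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

EPS = 1e-8


def similarity_entry(phi_i: torch.Tensor, phi_j: torch.Tensor) -> float:
    """Eq. (3): adjusted cosine similarity between two clients' classifiers.

    phi_i, phi_j: (|Y|, d) matrices whose rows are per-class decision boundaries."""
    cos = F.cosine_similarity(phi_i, phi_j, dim=1).clamp(min=0.0)    # negative values -> 0
    return (-torch.log(1.0 - cos + EPS)).mean().item()               # -log sharpens the score


def aggregate_feature_extractor(client_id: int, similarity_row: torch.Tensor,
                                extractors: list) -> dict:
    """Eq. (4): similarity-weighted average of feature extractors for one client.

    similarity_row: row Phi[i, :] over all clients; extractors: one state_dict of
    feature-extractor parameters per client. Classifiers are kept local and untouched."""
    weights = similarity_row / similarity_row.sum()
    aggregated = {}
    for name in extractors[client_id]:
        stacked = torch.stack([sd[name].float() for sd in extractors])   # (n, ...)
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))               # broadcastable weights
        aggregated[name] = (w * stacked).sum(dim=0)
    return aggregated
```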
The extra experiments such as the effect of hyperparameters and overhead evaluation are presented in Appendix B." }, { "figure_ref": [], "heading": "Comparing Classification Accuracy", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "We compare the average model accuracy performance of pFedSim with other baselines in Table 3. Based on the experiment results, we can observe that: 1) Our proposed algorithm, pFedSim, significantly outperforms other methods achieving the highest model accuracy in all cases. 2) When α = 0.1 indicating a significant non-IID degree of data distribution, the performance of pFL algorithms is better because of their capability to handle non-IID data. In particular, the improvement of pFedSim is more significant when α = 0.1. In contrast, FL methods such as FedAvg deteriorate considerably when α is small. 3) The f-FedAP algorithm also includes generalization and personalization phases. Its model aggregation is based on the similarity of the output of batch norm layers extracted from clients, which may increase the data privacy leakage risk. Moreover, its model accuracy performance is worse than ours indicating that the classifier distance is more effective for estimating client similarity. 4) Classifying the Tiny-ImageNet dataset under our experimental setting is a challenging task because each client only owns 550 training samples on average, but needs to solve a 200-class classification task. In this extreme case, it is worth noting that pFedSim remarkably improves model accuracy performance by almost 10% at most." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Data heterogeneity is one of the biggest challenges hampering the development of FL with high model utility. Despite that significant efforts have been made to solve this challenge through pFL, an efficient, secured and accurate pFL algorithm is still absent. In this paper, we conduct a measure study to show the effectiveness to identify similar clients based on the classifier-based distance. Accordingly, we propose a novel pFL algorithm called pFedSim that only leverages the classifierbased similarity to conduct personalized model aggregation, without exposing additional information or incurring extra communication overhead. Through extensive experiments on four real image datasets, we demonstrate that pFedSim outperforms other FL methods by improving model accuracy 2.96% on average compared with state-of-the-art baselines. Besides, the pFedSim algorithm is of enormous practical value because its overhead cost is low and its performance insensitive to tuning the hyperparameter." }, { "figure_ref": [], "heading": "A Experiment Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Computing Configuration", "publication_ref": [ "b42", "b39", "b40", "b37" ], "table_ref": [], "text": "We implement our experiments using PyTorch [43], and run all experiments on Ubuntu 18.04.5 LTS with the configuration of Intel(R) Xeon(R) Gold 6226R [email protected] and NVIDIA GeForce RTX 3090 to build up the entire workflow. 270,000 10 2700 (1962) 2700 (843) EMNIST [40] 805,263 62 8142 (2545) 8142 (782) Tiny-ImageNet [41] 110,000 200 1100 (118) 1100 (38) Table 4: Statistical information of used datasets and their distribution information on clients." 
}, { "figure_ref": [], "heading": "A.2 pFedSim Workflow Illustration", "publication_ref": [], "table_ref": [], "text": "We list the basic information of datasets we used for our experiments in Table 4. To make a fair comparison between different methods, we partition each dataset into 100 subsets (i.e., 100 clients) and the data distributions are fixed in our experiments when comparing different methods." }, { "figure_ref": [], "heading": "A.4 Model Architecture", "publication_ref": [ "b14", "b15", "b16", "b11", "b44", "b41", "b41" ], "table_ref": [ "tab_1", "tab_11" ], "text": "We list the detailed architecture of LeNet5 [15] and MobileNetV2 [16] in this section in Table 5 and Table 9, respectively, which are the model backbone we used in our experiments. We list layers with phase (i.e., ρ = 0.9) would result in insufficient personalization and ultimately degradation of the final performance. Thus ρ should be set within a proper range. Although it is not easy to determine exactly the optimal value of ρ, which is related to the trained model, we can draw the following observations from our experiments. For large models (e.g., MobileNetV2), it is better to set a smaller ρ, implying that the personalization of large models consumes more communication rounds. For small models (e.g., LeNet5), it is better to set a relatively large ρ because personalization can be completed faster. More importantly, the accuracy performance is very close to the highest performance and stable when ρ is in [0. We compare computation and communication overhead between pFedSim with three most representative baselines, i.e., FedAvg, FedFomo and FedGen, in Table 1 to demonstrate the superiority of pFedSim. We split datasets according to Dir(0.1) and follow the experiment setting T = 20, r = 0.1 (i.e., |S| = 10), E = 5. The computation overhead is based on the measured actual running time of each method. The communication overhead is based on the complexity of exchanging model parameters via communications, which is related to model size. To explicitly compare communication overhead, let ν denote the number of parameters of a neural network model, and then ν m simply represent the number of parameters of the model trained by FL. FedGen [17] is a special method requiring an additional generator model for generating virtual features to assist in training, which has a relatively simple architecture (e.g., multi-layer perceptron). Let ν g denote the size of the generator. Finally, we show the comparison results in Table 7.\nComputation Overhead We quantize the computation overhead by the average running time of the local training procedure on clients, which is denoted by\nξ p = T t=0 i∈S t LocalTrainingTime i T |S t ||D train i | .\nWe record local training time in 5 repetitions and report mean and standard deviation. As we can see, ξ p of pFedSim is very close to FedAvg, indicating that pFedSim will not involve extra computation overhead compared to the most basic FL method; FedFomo [12] splits a validation set from the training set additionally for evaluating the performance of models transmitted from the PS that submitted from M neighboring clients. The evaluation result is then used to guide the personalized model aggregation at the client side. Therefore, it is necessary to consider both training and validation time costs for FedFomo. For simplicity, we simply compare the average processing time cost per sample by each method.\nCommunication Overhead Let ξ m denote the communication overhead. 
Because the real communication time is influenced by realistic network conditions, we compare the communication complexity of different methods so that the comparison is fair and avoids the randomness introduced by networks. To run the most fundamental FedAvg method, each client only needs to transmit ν_m parameters, implying that its communication complexity is O(ν_m). Notably, the communication complexity of pFedSim is also O(ν_m) because pFedSim does not need to transmit any additional information from clients to the PS. In contrast, both FedFomo and FedGen consume heavier communication overhead. For FedFomo, the PS needs to additionally transmit the M models of M neighboring clients to each client, and consequently its communication overhead is extremely heavy at O(M ν_m). For FedGen, each client needs to transmit an additional generator on top of the ν_m model parameters, and hence its communication overhead is O(ν_m + ν_g). So far, we have evaluated all these methods only under the label non-IID setting. To comprehensively demonstrate the superiority of pFedSim, we additionally train and evaluate the aforementioned methods on DomainNet [45], in which the non-IID nature of the data can be divided into two categories: label non-IID and feature non-IID. The former indicates that each FL client's private data does not cover all labels, which is the basis for the data partition method used in Section 5.1. The latter is introduced by [42] and is the intrinsic nature of DomainNet. In Figure 5a, we show image samples from DomainNet that belong to the same class to vividly illustrate the feature non-IID nature. Dataset Description DomainNet contains natural images from 6 different domains: Clipart, Infograph, Painting, Quickdraw, Real and Sketch. All domains contain data belonging to the same label space, i.e., 345 classes in total, but the images look quite different from each other and have various sizes. DomainNet is widely used to evaluate the capability of domain generalization. In FL, the shift between domains can be considered as the feature non-IID nature [42]." }, { "figure_ref": [], "heading": "B.3 Evaluation over DomainNet", "publication_ref": [ "b15" ], "table_ref": [ "tab_13" ], "text": "Experimental Settings Due to the constraints of computational resources, we only sample 20 of DomainNet's 345 classes, with 30% of the data from each class, for this experiment, and uniformly rescale the images to 64 × 64. We list the number of samples from each domain in Table 5b. Each domain (6 domains in total) is partitioned into 20 FL clients (i.e., 120 FL clients in total). Each FL client contains data from only one domain. The data of FL clients belonging to the same domain are label IID but feature non-IID. We train and evaluate all the aforementioned methods using MobileNetV2 [16] under the same experiment setting. We report the average accuracy results over 5 random seeds in Table 8." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b12" ], "table_ref": [ "tab_13", "tab_8" ], "text": "From Table 8, we can observe that: 1) In the feature non-IID scenario, traditional FL methods generally perform better than most pFL methods, except pFedSim, which is different from most cases in Table 3. It shows that the label non-IID nature can degrade the performance of traditional FL methods more severely than the feature non-IID nature in FL, which is highly correlated with the classifier part. 
Due to the label non-IID nature, classifier parameters shift drastically and thus significantly weaken the global classifier, resulting in performance deterioration of a single global model. Under this setting, pFL methods (e.g., FedPer [13]) that personalize classifiers perform better than traditional FL methods. However, the feature non-IID nature does not involve label non-IID. Thus, we infer that the gain from personalized classifiers is small. As a result, traditional FL baselines generally outperform pFL baselines that rely primarily on personalized classifiers. 2) Our pFedSim method can guarantee performance superiority because it relies not only on personalization of the classifier, but also on warming up (the generalization phase) and on classifier-similarity-based feature extractor aggregation. The results show that our method pFedSim outperforms the baselines under both the label and feature non-IID settings, indicating the robustness of our method." }, { "figure_ref": [], "heading": "C Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.1 Broader Impact", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a novel pFL method that uses model classifiers to evaluate client similarity. The primary area studied in this paper (personalized FL) has a significant societal impact, since FL solutions have been and are being deployed in many industrial systems. Our work contributes to data privacy protection and improves the utility of the personalized model without revealing additional information other than model parameters. Thus, the impact of this work lies primarily in improving the utility of the personalized model while preserving data privacy, and no further broader impact discussion is applicable to this work." }, { "figure_ref": [], "heading": "C.2 Limitations", "publication_ref": [], "table_ref": [], "text": "We summarize the limitations of pFedSim as follows:\n• pFedSim relies heavily on the classifier part of a neural network, so pFedSim is not suitable for solving problems using models without a classifier, e.g., image segmentation and content generation.\n• pFedSim is designed to solve problems in data non-IID scenarios. When the degree of data non-IID is not significant, the improvement of pFedSim may be slight, which is the common dilemma confronted by most pFL methods.\n• pFedSim requires the cloud server to store the personalized models belonging to all FL clients, which requires additional storage resources on the cloud server side." }, { "figure_ref": [], "heading": "Component Layer", "publication_ref": [ "b43" ], "table_ref": [], "text": "Table 5: Detailed architecture of LeNet5. Feature Extractor: Conv2D(in=3, out=6, kernel=5, stride=1, pad=0), Conv2D(in=3, out=16, kernel=5, stride=1, pad=0), MaxPool2D(kernel=2, stride=2), Flatten(), FC(out=120), FC(out=84). Classifier: FC(out=num_classes)." }, { "figure_ref": [], "heading": "A.5 Full Hyperparameter settings", "publication_ref": [ "b2", "b5", "b16", "b20", "b13", "b11", "b8", "b41" ], "table_ref": [], "text": "We list the hyperparameter settings of all the aforementioned methods here. 
Most hyperparameter settings of the baselines are consistent with the values in their own papers.\n• FedProx [3] We set µ = 1.\n• FedDyn [6] We set α = 0.01.\n• FedGen [17] We set the hidden dimension of the generator to 32 and the noise dimension to 32; the input/output channels and latent dimension are adapted to the datasets and model backbone; the number of iterations for training the generator in each communication round is 5; the batch size of generated virtual representations is 32; α_generative is set to 10 and decays with a factor of 0.98 at the end of each communication round.\n• Per-FedAvg [21] We set α = 0.01 and β = 0.001.\n• FedRep [14] We set the number of epochs for training the feature extractor part to 1.\n• FedFomo [12] We set M = 5 and the validation set ratio to 0.2.\n• FedAP [9] We implement the version called f-FedAP, which performs FedBN [42] for warming up without relying on pre-training, with a generalization ratio of 0.5 and model momentum µ = 0.5.\n• pFedSim (Ours) We set the generalization ratio ρ = 0.5." }, { "figure_ref": [], "heading": "B Supplementary Experiments", "publication_ref": [], "table_ref": [], "text": "We present more experimental results in this supplementary document, which are conducted to evaluate pFedSim more thoroughly." }, { "figure_ref": [], "heading": "B.1 Role of Generalization Phase", "publication_ref": [], "table_ref": [], "text": "In this experiment, we vary ρ from 0 (without the generalization phase) to 1, and then compare the average model accuracy of pFedSim. When ρ = 1, pFedSim degenerates to FedAvg, and when ρ = 0, pFedSim is not warmed up. We consider these two cases as control cases to fully demonstrate the importance of the generalization phase. Note that the performance with ρ = 1 is always the worst in all cases because FedAvg is incapable of handling severe data non-IID situations, while the model performance with ρ = 0 is always worse than the other cases with ρ ∈ (0, 1) in all experiment settings because the trained model lacks generalization capability. However, an oversized generalization phase (i.e., ρ = 0.9) would result in insufficient personalization and ultimately degradation of the final performance. Thus ρ should be set within a proper range. Although it is not easy to determine exactly the optimal value of ρ, which is related to the trained model, we can draw the following observations from our experiments. For large models (e.g., MobileNetV2), it is better to set a smaller ρ, implying that the personalization of large models consumes more communication rounds. For small models (e.g., LeNet5), it is better to set a relatively large ρ because personalization can be completed faster. More importantly, the accuracy performance is very close to the highest performance and stable when ρ is in [0.3, 0.7] for all cases, making setting ρ easy in practice." } ]
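The feature-extractor/classifier decoupling assumed throughout the appendix (cf. the LeNet5 layout in Table 5) can be sketched in PyTorch as below. This is an illustrative reconstruction only: the class name is made up, and the activations, pooling and exact channel sizes only loosely follow Table 5 rather than reproducing the authors' implementation.

```python
import torch
import torch.nn as nn

class DecoupledLeNet5(nn.Module):
    def __init__(self, in_channels=3, num_classes=10):
        super().__init__()
        # Everything except the last fully connected layer is treated as the feature extractor.
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(in_channels, 6, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Flatten(),
            nn.LazyLinear(120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
        )
        # The final FC layer is the classifier that stays local and drives similarity estimation.
        self.classifier = nn.Linear(84, num_classes)

    def forward(self, x):
        return self.classifier(self.feature_extractor(x))

if __name__ == "__main__":
    model = DecoupledLeNet5()
    logits = model(torch.randn(4, 3, 32, 32))  # CIFAR/CINIC-sized toy input
    print(logits.shape)                        # torch.Size([4, 10])
```

Splitting the module this way makes it easy to aggregate only `feature_extractor` parameters across clients while keeping each `classifier` local.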
2023-05-25
[ { "authors": "B Mcmahan; E Moore; D Ramage; S Hampson; B A Arcas", "journal": "PMLR", "ref_id": "b0", "title": "Communication-efficient learning of deep networks from decentralized data", "year": "2017" }, { "authors": "Y Zhao", "journal": "", "ref_id": "b1", "title": "Federated learning with non-iid data", "year": "2018" }, { "authors": "T Li", "journal": "", "ref_id": "b2", "title": "Federated optimization in heterogeneous networks", "year": "2020" }, { "authors": "S P Karimireddy", "journal": "PMLR", "ref_id": "b3", "title": "Scaffold: Stochastic controlled averaging for federated learning", "year": "2020" }, { "authors": "Q Li; B He; D Song", "journal": "", "ref_id": "b4", "title": "Model-contrastive federated learning", "year": "2021" }, { "authors": "D A E Acar", "journal": "", "ref_id": "b5", "title": "Federated learning based on dynamic regularization", "year": "2021" }, { "authors": "T.-M H Hsu; H Qi; M Brown", "journal": "", "ref_id": "b6", "title": "Measuring the effects of non-identical data distribution for federated visual classification", "year": "2019" }, { "authors": "Y Tan", "journal": "", "ref_id": "b7", "title": "Fedproto: Federated prototype learning across heterogeneous clients", "year": "2022" }, { "authors": "W Lu", "journal": "IEEE Transactions on Big Data", "ref_id": "b8", "title": "Personalized federated learning with adaptive batchnorm for healthcare", "year": "2022" }, { "authors": "L Melis; C Song; E De Cristofaro; V Shmatikov", "journal": "IEEE", "ref_id": "b9", "title": "Exploiting unintended feature leakage in collaborative learning", "year": "2019" }, { "authors": "L Zhu; Z Liu; S Han", "journal": "Advances in neural information processing systems", "ref_id": "b10", "title": "Deep leakage from gradients", "year": "2019" }, { "authors": "M Zhang; K Sapra; S Fidler; S Yeung; J M Alvarez", "journal": "", "ref_id": "b11", "title": "Personalized federated learning with first order model optimization", "year": "2020" }, { "authors": "M G Arivazhagan; V Aggarwal; A K Singh; S Choudhary", "journal": "", "ref_id": "b12", "title": "Federated learning with personalization layers", "year": "2019" }, { "authors": "L Collins; H Hassani; A Mokhtari; S Shakkottai", "journal": "PMLR", "ref_id": "b13", "title": "Exploiting shared representations for personalized federated learning", "year": "2021" }, { "authors": "Y Lecun; L Bottou; Y Bengio; P Haffner", "journal": "", "ref_id": "b14", "title": "Gradient-based learning applied to document recognition", "year": "1998" }, { "authors": "M Sandler; A Howard; M Zhu; A Zhmoginov; L.-C Chen", "journal": "", "ref_id": "b15", "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "year": "2018" }, { "authors": "Z Zhu; J Hong; J Zhou", "journal": "PMLR", "ref_id": "b16", "title": "Data-free knowledge distillation for heterogeneous federated learning", "year": "2021" }, { "authors": "D Li; J Wang", "journal": "", "ref_id": "b17", "title": "Fedmd: Heterogenous federated learning via model distillation", "year": "2019" }, { "authors": "J Zhang", "journal": "PMLR", "ref_id": "b18", "title": "Federated learning with label distribution skew via logits calibration", "year": "2022" }, { "authors": "M Mendieta", "journal": "", "ref_id": "b19", "title": "Local learning matters: Rethinking data heterogeneity in federated learning", "year": "2022" }, { "authors": "A Fallah; A Mokhtari; A Ozdaglar", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b20", "title": "Personalized federated learning with 
theoretical guarantees: A model-agnostic meta-learning approach", "year": "2020" }, { "authors": "Y Jiang; J Konečnỳ; K Rush; S Kannan", "journal": "", "ref_id": "b21", "title": "Improving federated learning personalization via model agnostic meta learning", "year": "2019" }, { "authors": "T Li; S Hu; A Beirami; V Smith", "journal": "PMLR", "ref_id": "b22", "title": "Ditto: Fair and robust federated learning through personalization", "year": "2021" }, { "authors": "C Finn; P Abbeel; S Levine", "journal": "PMLR", "ref_id": "b23", "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "year": "2017" }, { "authors": "A Nichol; J Achiam; J Schulman", "journal": "", "ref_id": "b24", "title": "On first-order meta-learning algorithms", "year": "2018" }, { "authors": "V Smith; C.-K Chiang; M Sanjabi; A S Talwalkar", "journal": "Advances in neural information processing systems", "ref_id": "b25", "title": "Federated multi-task learning", "year": "2017" }, { "authors": "A Shamsian; A Navon; E Fetaya; G Chechik", "journal": "PMLR", "ref_id": "b26", "title": "Personalized federated learning using hypernetworks", "year": "2021" }, { "authors": "X Ma; J Zhang; S Guo; W Xu", "journal": "", "ref_id": "b27", "title": "Layer-wised model aggregation for personalized federated learning", "year": "2022-06" }, { "authors": "J Zhang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Parameterized knowledge transfer for personalized federated learning", "year": "2021" }, { "authors": "Y Deng; M M Kamani; M Mahdavi", "journal": "", "ref_id": "b29", "title": "Adaptive personalized federated learning", "year": "2020" }, { "authors": "F Hanzely; P Richtárik", "journal": "", "ref_id": "b30", "title": "Federated learning of a mixture of global and local models", "year": "2020" }, { "authors": "K Singhal", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Federated reconstruction: Partially local federated learning", "year": "2021" }, { "authors": "J Oh; S Kim; S.-Y Yun", "journal": "", "ref_id": "b32", "title": "Fedbabu: Towards enhanced representation for federated image classification", "year": "2021" }, { "authors": "H.-Y Chen; W.-L Chao", "journal": "", "ref_id": "b33", "title": "On bridging generic and personalized federated learning for image classification", "year": "2021" }, { "authors": "M Luo", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b34", "title": "No fear of heterogeneity: Classifier calibration for federated learning with non-iid data", "year": "2021" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b35", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "S Kornblith; M Norouzi; H Lee; G Hinton", "journal": "PMLR", "ref_id": "b36", "title": "Similarity of neural network representations revisited", "year": "2019" }, { "authors": "F Sattler; K.-R Müller; W Samek", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b37", "title": "Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints", "year": "2020" }, { "authors": "L N Darlow; E J Crowley; A Antoniou; A J Storkey", "journal": "", "ref_id": "b38", "title": "Cinic-10 is not imagenet or cifar-10", "year": "2018" }, { "authors": "G Cohen; S Afshar; J Tapson; A Van Schaik", "journal": "", "ref_id": "b39", "title": "Emnist: Extending mnist to handwritten letters", "year": 
"2017" }, { "authors": "Y Le; X S Yang", "journal": "", "ref_id": "b40", "title": "Tiny imagenet visual recognition challenge", "year": "2015" }, { "authors": "X Li; M Jiang; X Zhang; M Kamp; Q Dou", "journal": "", "ref_id": "b41", "title": "Fedbn: Federated learning on non-iid features via local batch normalization", "year": "2021" }, { "authors": "A Paszke", "journal": "Advances in neural information processing systems", "ref_id": "b42", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "T ", "journal": "", "ref_id": "b43", "title": "Torchvision: Pytorch's computer vision library", "year": "2016" }, { "authors": "X Peng", "journal": "", "ref_id": "b44", "title": "Moment matching for multi-source domain adaptation", "year": "2019" }, { "authors": "", "journal": "", "ref_id": "b45", "title": "Component Layer Repetition Feature Extractor Conv2D(in=3, out=32, kernel=3", "year": "" }, { "authors": "", "journal": "Conv", "ref_id": "b46", "title": "", "year": null }, { "authors": "", "journal": "", "ref_id": "b47", "title": "Conv2D(in=24, out=144", "year": "" }, { "authors": "", "journal": "", "ref_id": "b48", "title": "Conv2D(in=24, out=144, kernel=1", "year": "" }, { "authors": "", "journal": "", "ref_id": "b49", "title": "Conv2D(in=24, out=144", "year": "" }, { "authors": "", "journal": "", "ref_id": "b50", "title": "Conv2D(in=32, out=192", "year": "" }, { "authors": "", "journal": "", "ref_id": "b51", "title": "Conv2D(in=64, out=384", "year": "" }, { "authors": "", "journal": "", "ref_id": "b52", "title": "Conv2D(in=64, out=384", "year": "" }, { "authors": "", "journal": "", "ref_id": "b53", "title": "Conv2D(in=96, out=576", "year": "" }, { "authors": "", "journal": "", "ref_id": "b54", "title": "Conv2D(in=96, out=576", "year": "" }, { "authors": "", "journal": "Conv2D", "ref_id": "b55", "title": "", "year": "" }, { "authors": "", "journal": "", "ref_id": "b56", "title": "Conv2D(in=160", "year": null }, { "authors": "", "journal": "Conv", "ref_id": "b57", "title": "", "year": null }, { "authors": "", "journal": "", "ref_id": "b58", "title": "Conv2D(in=160", "year": "" }, { "authors": "", "journal": "Conv", "ref_id": "b59", "title": "", "year": null }, { "authors": "", "journal": "Conv2D", "ref_id": "b60", "title": "", "year": "" } ]
[ { "formula_coordinates": [ 3, 108, 690.46, 125.83, 17.29 ], "formula_id": "formula_0", "formula_text": "FL Procedure Let C = {1, . . . ," }, { "formula_coordinates": [ 4, 144.32, 328.37, 192.42, 14.56 ], "formula_id": "formula_1", "formula_text": "1 n n i=1 f i (θ). Here, f i (θ) = E (x,y)∼Di [ℓ(θ; (x," }, { "formula_coordinates": [ 4, 133.23, 535.74, 199.8, 19.34 ], "formula_id": "formula_2", "formula_text": "1 n n i=1 min φ1,...,φn f i (ω, φ i ). Here θ i = ω • φ i ." }, { "formula_coordinates": [ 5, 191.94, 412.67, 312.73, 34.48 ], "formula_id": "formula_3", "formula_text": "dist(D i , D j ) = 1 - {(x, y)|y ∈ Y i ∩ Y j } (x,y)∼Di∪Dj |D i ∪ D j | .(1)" }, { "formula_coordinates": [ 5, 178.75, 522.28, 254.51, 17.29 ], "formula_id": "formula_4", "formula_text": "dist(D i , D i ′ ) < dist(D i , D j ) < dist(D j , D k ) < dist(D i , D k )." }, { "formula_coordinates": [ 5, 108, 538.28, 396, 28.19 ], "formula_id": "formula_5", "formula_text": "1 -dist(D i , D i ′ ), and thus sim(D i , D i ′ ) > sim(D i , D j ) > sim(D j , D k ) > sim(D i , D k )." }, { "formula_coordinates": [ 5, 219.87, 608.39, 284.8, 29.94 ], "formula_id": "formula_6", "formula_text": "sim(φ i , φ j ) = 1 |Y| c∈Y sim vec (φ i,c , φ j,c ),(2)" }, { "formula_coordinates": [ 6, 108, 261.9, 320.51, 33.74 ], "formula_id": "formula_7", "formula_text": "sim(φ i , φ i ′ ) > sim(φ i , φ j ) > sim(φ j , φ k ) > sim(φ i , φ k ). Recall that sim(D i , D i ′ ) > sim(D i , D j ) > sim(D j , D k ) > sim(D i , D k )." }, { "formula_coordinates": [ 6, 310.89, 380.44, 178.94, 21.94 ], "formula_id": "formula_8", "formula_text": "Metric (i, i ′ ) (i, j) (j, k) (i, k) Data Similarity 1 0.53 0.32 0 W DB [9]" }, { "formula_coordinates": [ 7, 127.93, 557.27, 131.51, 23.23 ], "formula_id": "formula_9", "formula_text": "(ω t i , φ t i ) ← SGD i (ω t i , φ t i ; D i ) return (ω t i , φ t i )" }, { "formula_coordinates": [ 7, 191.16, 634.78, 313.51, 30.86 ], "formula_id": "formula_10", "formula_text": "Φ ij = - 1 |Y| c∈Y log 1 -max 0, φ i,c • φ j,c ∥φ i,c ∥∥φ j,c ∥ + ϵ ,(3)" }, { "formula_coordinates": [ 8, 201.89, 174.91, 302.78, 28.96 ], "formula_id": "formula_11", "formula_text": "θ t i = φ t i • ω t i = ω t i = 1 j∈C Φij j∈C Φ ij ω t-1 j , φ t i = φ t-1 i .(4)" }, { "formula_coordinates": [ 9, 243.22, 262.01, 76.32, 17.94 ], "formula_id": "formula_12", "formula_text": "|D train i | = |D test i |." }, { "formula_coordinates": [ 14, 235.21, 596.9, 141.58, 33.73 ], "formula_id": "formula_13", "formula_text": "ξ p = T t=0 i∈S t LocalTrainingTime i T |S t ||D train i | ." } ]
pFedSim: Similarity-Aware Model Aggregation Towards Personalized Federated Learning
The federated learning (FL) paradigm has emerged to preserve data privacy during model training by only exposing clients' model parameters rather than the original data. One of the biggest challenges in FL lies in the non-IID (not identically and independently distributed) data (a.k.a. data heterogeneity) distributed on clients. To address this challenge, various personalized FL (pFL) methods have been proposed, such as similarity-based aggregation and model decoupling. The former aggregates models from clients with similar data distributions. The latter decouples a neural network (NN) model into a feature extractor and a classifier, and personalization is captured by the classifiers, which are obtained by local training. To advance pFL, we propose a novel pFedSim (pFL based on model similarity) algorithm in this work by combining these two kinds of methods. More specifically, we decouple an NN model into a personalized feature extractor, obtained by aggregating models from similar clients, and a classifier, which is obtained by local training and used to estimate client similarity. Compared with the state-of-the-art baselines, the advantages of pFedSim include: 1) significantly improved model accuracy; 2) low communication and computation overhead; 3) a low risk of privacy leakage; 4) no requirement for any external public information. To demonstrate the superiority of pFedSim, extensive experiments are conducted on real datasets. The results validate the strong performance of our algorithm, which significantly outperforms the baselines under various heterogeneous data settings.
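A minimal sketch of the classifier-similarity-driven aggregation described in this abstract is given below. It follows the general form of the paper's per-class cosine similarity measure, but the tensor shapes, helper names and aggregation details are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def classifier_similarity(classifiers: list, eps: float = 1e-8) -> torch.Tensor:
    """classifiers[i]: (num_classes, feat_dim) weight matrix of client i's classifier."""
    n = len(classifiers)
    sim = torch.zeros(n, n)
    for i in range(n):
        for j in range(n):
            # Average per-class cosine similarity between the two classifiers, clamped at 0.
            cos = F.cosine_similarity(classifiers[i], classifiers[j], dim=1).clamp(min=0.0)
            sim[i, j] = -torch.log(1.0 - cos + eps).mean()
    return sim

def aggregate_feature_extractors(extractors: list, sim_row: torch.Tensor) -> dict:
    """Weighted average of state_dicts, weights given by one row of the similarity matrix."""
    weights = sim_row / sim_row.sum()
    agg = {k: torch.zeros_like(v) for k, v in extractors[0].items()}
    for w, state in zip(weights, extractors):
        for k, v in state.items():
            agg[k] += w * v
    return agg

if __name__ == "__main__":
    torch.manual_seed(0)
    clfs = [torch.randn(10, 84) for _ in range(4)]           # 4 clients, 10 classes, 84-dim features
    exts = [{"fc.weight": torch.randn(84, 400)} for _ in range(4)]
    S = classifier_similarity(clfs)
    personalized_0 = aggregate_feature_extractors(exts, S[0])
    print(S.shape, personalized_0["fc.weight"].shape)
```

Only classifier weights are compared, so no raw data or feature statistics leave the clients beyond the model parameters they already upload.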
Jiahao Tan; Yipeng Zhou; Gang Liu; Jessie Hui Wang; Shui Yu
[ { "figure_caption": "Figure 1 :1Figure 1: The comparison of CKA similarities of different layers when training 10 LeNet5 models.Step 1. we conduct a toy experiment by using the CIFAR-10 dataset[36] with labels 0 to 9, which is randomly partitioned into 10 subsets according to the Dirichlet distribution (a classical distribution to generate non-IID data) with α = 0.1[7]. Each subset is used to train an independent LeNet5[15] model (with five layers in total) for 20 global iterations. Centered-Kernel-Alignment (CKA) proposed by[37] is used as the metric to evaluate the similarity of outputs of the same layer from two models, given the same input data. The score of CKA similarity is from 0 (totally different) to 1 (identical). We use the CKA metric to compute the similarity for each layer between any pair of models in Figure1. It is interesting to note that CKA comparison of the last layer, i.e., the classifier, can precisely discriminate models trained on different clients. For the comparison of Layer 0, it is almost impossible to discriminate different models. The comparison of Layer 3 is better for discriminating models, but still worse than that of Layer 4.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The cosine similarities of classifiers trained through different CIFAR-10 subsets.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "2 .2Personalization Phase In this phase, we personalize feature extractors by adjusting aggregation weights based on classifier similarity. Meanwhile, classifiers are only updated by local training to better fit local data.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "4 .4In the generalization phase, the PS aggregates all received models to produce a new global model. In the personalization phase, the PS obtains the classifier similarity matrix based on uploaded model parameters. Only feature extractors similar to each other are aggregated. Then, the PS go back to Step 1 to start a new communication round. Note that the PS produces a single global model in the generalization phase. However, in the personalization phase, the PS produces a personalized model for each client based on the similarity matrix denoted by Φ. The final output of pFedSim are personalized client models {θ 0 , . . . , θ n }. Algorithm 2 pFedSim Parameters: Join ratio r, number of local epoch E, number of communication round T , generalization ratio ρ, similarity matrix Φ ∈ S n×n Initialize θ 0 , Φ Compute T g ← ⌊ρT ⌋; T p ← T -T g Generalization Phase Perform regular FL e.g., Algorithm 1 in T g communication rounds to obtain θ Tg-1", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Data distribution of CIFAR-10 images allocated to clients with different α.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The execution workflow of pFedSim. The pFedSim method is implemented by conducting the following steps for multiple rounds. 1) The PS distributes the latest models to participating clients at the beginning of each round; 2) Each client updates the received model by local training with their private dataset; 3) Each client sends their trained model parameters back to the PS. 
4) In the generalization phase, the PS will aggregate all received models from clients and output a new single model as the target for the next communication round. In the personalization phase, the PS will aggregate an exclusive model for each client participating in the next communication round based on the similarity matrix Φ. The Φ will be updated by calculating the cosine similarity of pairwise classifiers from models submitted by participating clients.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(a) Example to show the feature non-IID with Airplane images of DomainNet from different domains. Number of data sampled from each domain in DomainNet.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FedFomo[12] FedAP[9] FedGen[17] FedMD[18] pFedSim (Ours)", "figure_data": "Heavy Communication Load✓✓✓High Privacy Risk✓✓Need Public Data✓Heavy Computation Load✓✓", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of shortcomings between pFedSim and different pFL algorithms.", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "n} denote the set of clients in FL with total n clients. The i-th client owns the private dataset D i ∈ X i × Y i , where X i and Y i represent the feature and label space of client i, respectively. FL is conducted by multiple rounds of global iterations, a.k.a., communication rounds.The PS is responsible for model initialization, client selection and model aggregation in each round. Let θ denote model parameters to be learned via FL. At the beginning of communication round t, a subset of clients S t with size |S t | = max(r • n, 1) will be randomly selected to participate in FL.", "figure_data": "Algorithm 1 FedAvg [1]Parameters: Participating ratio r, number of local epochs E, number of communication rounds TInitialized model θ 0Server Execute:for t = 1, . . . , T dom ← max(r • n, 1); S t ← (random set of m clients) for each client i ∈ S t in parallel do θ t i ) i ← ClientUpdate(i, θ t-1 θ t+1 ← i∈S t |Di| |D| θ t i , where |D| = i∈S t |D i |ClientUpdate (i, θ t ) : // run on client iSet θ t i ← θ t for each local epoch doθ t i ← SGD i (θ t i ; D i ) return θ t i", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of different client similarity metrics.", "figure_data": "", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Average model accuracy over datasets with format mean(std). Bold, underline mean the best, second-best results respectively.", "figure_data": "", "figure_id": "tab_8", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Average model accuracy of pFedSim with format mean(std) and different ρ.", "figure_data": "", "figure_id": "tab_10", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "3, 0.7] for all cases, making setting ρ easy in practice. 
Overheads comparison.", "figure_data": "B.2 Overhead Comparisonξ p (×10 -4 s) with format mean(std)ξ m#DatasetCIFAR-10CINIC-10Tiny-ImageNetEMNIST-FedAvg [1] FedFomo [12] FedGen [17] pFedSim (Ours) 8.69 (0.17) 8.68 (0.16) 11.28 (0.21) 11.45 (0.27) 9.59 (0.38) 10.56 (0.13) 11.18 (0.49) 9.56 (0.48)57.60 (1.65) 73.40 (4.15) 61.73 (2.67) 57.64 (1.80)7.85 (0.42) 9.98 (0.37) 10.07 (0.46) O(ν m + ν g ) O(ν m ) O(M ν m ) 7.89 (0.37) O(ν m )", "figure_id": "tab_11", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Average Accuracy over DomainNet with format mean(std). Bold and underline stand for the best and second-best results. The average results is the weighted average of accuracies of all domains.", "figure_data": "", "figure_id": "tab_13", "figure_label": "8", "figure_type": "table" } ]
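The per-sample computation overhead ξ_p reported in the overhead comparison above can be measured with a timing loop of roughly the following shape. This is an illustrative sketch: the function name, toy model and the exact normalization are assumptions, not the paper's benchmarking code.

```python
import time
import torch
import torch.nn as nn

def per_sample_training_time(model, loader, epochs=5, lr=0.01, device="cpu"):
    """Time local SGD training and normalize by the number of processed samples."""
    model = model.to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    n_samples = 0
    start = time.perf_counter()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            n_samples += x.size(0)
    return (time.perf_counter() - start) / max(n_samples, 1)

if __name__ == "__main__":
    from torch.utils.data import DataLoader, TensorDataset
    data = TensorDataset(torch.randn(256, 32), torch.randint(0, 10, (256,)))
    xi_p = per_sample_training_time(nn.Linear(32, 10), DataLoader(data, batch_size=32))
    print(f"xi_p ~ {xi_p * 1e4:.2f} x 10^-4 s/sample")
```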
[{"Category": "Supporting Evidence", "Citation": "[1]", "Explanation": "The cited work provides the foundational concept of FL and the role of a parameter server in coordinating the training process over multiple global iterations."}, {"Category": "Supporting Evidence", "Citation": "[2]", "Explanation": "The cited work highlights the challenge of data heterogeneity in FL and the impact on model utility, convergence, and even divergence."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces the concept of pFL to address the challenge of non-IID data in FL and the need for customized models for individual clients."}, {"Category": "Extension or Continuation", "Citation": "[1]", "Explanation": "The citing paper builds upon the concept of pFL by exploring similarity-based aggregation and model decoupling as approaches to achieve pFL."}, {"Category": "Methodological Basis", "Citation": "[3,4,5,6,7]", "Explanation": "The cited works provide a method of aggregating clients of a similar data distribution to produce a personalized model that fits local data, which the citing paper adopts in its research."}, {"Category": "Extension or Continuation", "Citation": "[8,9]", "Explanation": "The cited works require FL clients to expose additional information for similarity estimation, which the citing paper further explores to improve the FL process."}, {"Category": "Extension or Continuation", "Citation": "[10,11]", "Explanation": "The cited works highlight the privacy concerns associated with exposing additional information in FL, which the citing paper extends by providing solutions to address these issues."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work in FedFomo provides a method of receiving additional models from neighboring clients to gather knowledge and guide personalized aggregation, which the citing paper adopts in its research to improve the FL process."}, {"Category": "Methodological Basis", "Citation": "[13,14]", "Explanation": "The cited works suggest that the final fully connected layer in CNN models should be included in the classifier part, which the citing paper adopts in their research to improve the performance of the classifier."}, {"Category": "Supporting Evidence", "Citation": "[15,16]", "Explanation": "The cited works provide the neural network models that the citing paper uses in their research, which helps to establish the basis for the proposed pFedSim algorithm."}, {"Category": "Extension or Continuation", "Citation": "pFL with model similarity", "Explanation": "The citing paper builds upon the concept of pFL with model similarity by proposing a novel algorithm that combines similarity-based aggregation and model decoupling methods to improve personalized performance in FL training."}, {"Category": "Methodological Basis", "Citation": "[21,22,23]", "Explanation": "The cited works are used as a basis for the design of pFL approaches that are based on the similarity between the optimization objectives of different clients in FL."}, {"Category": "Extension or Continuation", "Citation": "[24,25]", "Explanation": "The cited works are extended by using meta-learning approaches for rapid adaptation of the global model to the local data distribution in pFL."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work is extended by combining multi-task learning with FL to consider each client as a learning task in pFL and transfer federation knowledge from the 
task relationship matrix."}, {"Category": "Extension or Continuation", "Citation": "[27,28]", "Explanation": "The cited works are extended by using hypernetwork to produce exclusive model parameters and aggregation weights for individual clients in pFL."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work FedAP pre-trained models on a portion of the training data, which the citing paper adopts in their research to enhance the performance of pFL."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work FedFomo set a similarity matrix to store all client models on the server, which the citing paper utilizes in their research to improve the performance of pFL by offloading the model aggregation to the client side."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work demonstrates that including the classifier in the global model aggregation can lead to performance deterioration in non-IID data scenarios. This finding provides a methodological basis for the design of pFL methods that focus on decoupling the feature extractor and classifier to achieve personalization."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work, FedAvg, is the FL algorithm that the citing paper adopts and uses for local training in the FL procedure."}, {"Category": "Methodological Basis", "Citation": "[13,14]", "Explanation": "The cited works provide the basis for the use of model decoupling in pFL by including the final fully connected layer in the classifier part of the model."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work provides the CIFAR-10 dataset and the LeNet5 model architecture, which the citing paper uses to train models and conduct experiments in the toy experiment."}, {"Category": "Supporting Evidence", "Citation": "[37]", "Explanation": "The cited work by [37] is used as a metric to evaluate the similarity of outputs of the same layer from two models, which is essential in the analysis conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(DB [9])", "Explanation": "The cited work provides a method for estimating data similarity using decision boundary similarity, which the citing paper adopts in its research to compare the effectiveness of different metrics for data similarity estimation."}, {"Category": "Methodological Basis", "Citation": "(M DB [38])", "Explanation": "The cited work introduces a method for estimating data similarity using a modified version of decision boundary similarity, which the citing paper uses to further compare the effectiveness of different metrics for data similarity estimation."}, {"Category": "Methodological Basis", "Citation": "(LDB [12])", "Explanation": "The cited work presents a method for estimating data similarity using a new metric called LDB, which the citing paper uses to compare the effectiveness of different metrics for data similarity estimation."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work introduces the concept of Wasserstein distance based similarity (W DB), which the citing paper adopts in their implementation of data similarity measurement."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work presents the model difference based similarity (M DB) method, which the citing paper uses in their data similarity measurement implementation."}, {"Category": 
"Methodological Basis", "Citation": "[12]", "Explanation": "The cited work introduces the evaluation loss based similarity (LDB) method, which the citing paper employs in their data similarity measurement implementation."}, {"Category": "Methodological Basis", "Citation": "[15,16]", "Explanation": "The cited works in CNNs are used as a reference for the similarity adjustment operation in the citing paper, which is similar to the softmax operation in the model output layer."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work [9] provides a method for personalizing the aggregation of feature extractors, which the citing paper pFedSim adopts in their research to improve the personalization of the model."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "The cited work [12] is used as a baseline for comparison in the study conducted in pFedSim, highlighting the reliance on external data and pre-existing models in the research."}, {"Category": "Data Source", "Citation": "[36]", "Explanation": "The cited work, CIFAR-10, is a standard image dataset that the citing paper uses to evaluate pFedSim in a real-world scenario."}, {"Category": "Data Source", "Citation": "[39]", "Explanation": "The cited work, CINIC-10, is another standard image dataset that the citing paper uses to evaluate pFedSim in a real-world scenario."}, {"Category": "Data Source", "Citation": "[40]", "Explanation": "The cited work, EMNIST, is a standard image dataset that the citing paper uses to evaluate pFedSim in a real-world scenario."}, {"Category": "Data Source", "Citation": "[41]", "Explanation": "The cited work, Tiny-ImageNet, is a standard image dataset that the citing paper uses to evaluate pFedSim in a real-world scenario."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work, FedAvg, serves as a baseline method for evaluating the performance of pFedSim in the context of FL and pFL baselines."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work, FedProx, is a regular FL method that adds a proximal term to the loss function as a regularization method to prevent the local model from drifting away from the global model, which serves as a baseline for evaluating the performance of pFedSim in the context of FL baselines."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work, FedDyn, is a regular FL method that utilizes a dynamic regularizer for each client to align the client local model with the global model, which serves as a baseline for evaluating the performance of pFedSim in the context of FL baselines."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work, FedGen, is a regular FL method that trains a feature generator to generate virtual features for improving local training, which serves as a baseline for evaluating the performance of pFedSim in the context of FL baselines."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work, FedPer, is a pFL method that preserves the classifier of each client model locally to achieve personalization, which serves as a baseline for evaluating the performance of pFedSim in the context of pFL baselines."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work, FedRep, is a pFL method that preserves the classifier locally, and trains the classifier and the feature extractor sequentially, which serves as a baseline for 
evaluating the performance of pFedSim in the context of pFL baselines."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work, LeNet5, is implemented in the citing paper to conduct performance evaluation on CIFAR-10, CINIC-10, and EMNIST datasets. The implementation serves as a methodological basis for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work, MobileNetV2, is implemented in the citing paper to classify Tiny-ImageNet images. The use of this data source provides a specific dataset for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[40]", "Explanation": "The cited work, EMNIST, is used as a dataset in the experiments conducted in the citing paper to gather data and perform analysis."}, {"Category": "Data Source", "Citation": "[40]", "Explanation": "The cited work, EMNIST, is used as a dataset in the experiments conducted in the citing paper to gather data and perform analysis."}, {"Category": "Data Source", "Citation": "[41]", "Explanation": "The cited work, Tiny-ImageNet, is used as a dataset in the experiments conducted in the citing paper to gather data and perform analysis."}, {"Category": "Data Source", "Citation": "[41]", "Explanation": "The cited work, Tiny-ImageNet, is used as a dataset in the experiments conducted in the citing paper to gather data and perform analysis."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work, LeNet5, provides the model backbone used in the experiments conducted in the citing paper, serving as the methodological basis for the research."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work, MobileNetV2, is the model backbone used in the experiments, indicating a reliance on external data or pre-existing models for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work FedGen is used as a method for generating virtual features in the citing paper pFedSim, which is employed to assist in training a neural network model."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work, FedFomo, is used as a reference for the comparison of communication overhead in the citing paper. The comparison is based on the average processing time cost per sample for each method, which is a methodological basis for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[45]", "Explanation": "The cited work, DomainNet, is used as a dataset to evaluate the performance of pFedSim and other FL methods in a non-IID setting. The non-IID nature of the data is divided into two categories, label non-IID and feature non-IID, which is the basis for the data partition method used in Section 5.1."}, {"Category": "Data Source", "Citation": "[42]", "Explanation": "The cited work introduces the concept of feature non-IID nature in DomainNet, which the citing paper uses to describe the nature of the dataset in terms of the shift between domains."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work, MobileNetV2, is the model used for training and evaluation in the citing paper. 
The citation is used to acknowledge the origin of the model and the way it is utilized in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work, pFedSim, is a pFL method that personalizes classifiers to address the issue of performance degradation in traditional FL methods due to the label non-IID nature. The citing paper adopts this method to improve performance in the feature non-IID scenario by personalizing classifiers and leveraging other techniques such as warming up and feature extractor aggregation."}, {"Category": "Methodological Basis", "Citation": "[44]", "Explanation": "The cited work provides the built-in API in TorchVision for MobileNetV2, which the citing paper uses in its research to implement the model and access its default pretrained model weights."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work FedProx is the basis for the hyperparameter setting of \u00b5 in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work FedDyn is the basis for the hyperparameter setting of \u03b1 in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work FedGen is the basis for the hyperparameter settings of the hidden dimension, noise dimension, input/output channels, latent dimension, number of iterations, batch size, and \u03b1 values in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work Per-FedAvg is the basis for the hyperparameter settings of \u03b1 and \u03b2 in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work FedRep is the basis for the hyperparameter setting of the epoch for training the feature extractor part in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work FedFomo is the basis for the hyperparameter settings of M and the ratio of validation set in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work FedAP is used as a methodological basis for implementing the f-FedAP method in the citing paper, which involves performing FedBN without pre-training and with a generalization ratio of 0.5 and model momentum of 0.5."}, {"Category": "Methodological Basis", "Citation": "[42]", "Explanation": "The cited work FedBN is used as a methodological basis for the f-FedAP method in the citing paper, as it is employed for warming up without pre-training and with a generalization ratio of 0.5 and model momentum of 0.5."}, {"Category": "Extension or Continuation", "Citation": "(Ours)", "Explanation": "The cited work pFedSim in the citing paper is an extension or continuation of the f-FedAP method, as it sets the generalization ratio to 0.5."}]
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "International Conference on Multimedia (MM '23), June 03-05, 2018, Woodstock, NY . ACM, New York, NY, USA, 10 pages. https://doi.org/XXXXXXX. XXXXXXX" }, { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b27", "b51", "b54", "b15", "b32", "b38", "b60", "b24", "b44", "b56", "b1", "b20", "b46", "b47", "b7", "b52", "b8", "b16", "b33", "b9", "b45", "b58", "b0", "b17", "b25", "b39", "b42", "b50", "b3", "b34", "b10", "b14", "b23", "b30", "b49", "b55", "b13", "b21", "b28", "b30", "b57", "b10", "b14", "b30", "b37", "b53", "b43", "b22", "b28" ], "table_ref": [], "text": "Semantic segmentation [28,52,55], which is regarded as pixel-wise dense classification tasks to clarify each part of an image based on what category or object it belongs to, have achieved great advances with the development of deep learning networks and high-quality collected data. Meanwhile, the significant performance improvement of segmentation models also boost its application to satisfy various real-world demands [16,33,39,61], e.g., self-driving systems, virtual reality, etc. Whereas, as various adversarial attacks and variations have been explored, the vulnerability of deep neural networks to these adversarial examples [25,45,57] also have attracted much attention. By introducing imperceptible adversarial perturbation to the input of semantic segmentation model, the segmentation results [2,21,47,48] could be badly corrupted and results in serious safety issues. Moreover, since most existing methods [8,53] developed their approaches relying on assumption of degradation-free scenarios [9], the performance of segmentation model has no guarantee under bad imaging conditions or adverse weather such as rain and fogs.\nFrom the perspective of improve robustness against artificially generated adversarial perturbation, a series of attack and defense methods have been developed. As a special case of the classification tasks, the results of segmentation model will also be degraded by the classical adversarial attacks for image classification tasks, such as FGSM [17], PGD [34] and their variations [10,46,59]. Besides, another line of work [1,18] has explored the difference between semantic segmentation and image classification to design taskspecific segmentation attack methods and generate more effective adversarial examples. As one of most effective defense strategy, Adversarial Training (AT) [26,40,43] addresses the vulnerability of segmentation model by incorporating the adversarial example during the training process, which can be further formulated as the minimax optimization problem. In addition to few AT based methods [51] for semantic segmentation, several works also apply teacher-student structure [4] and multitask learning [35] to improve the robustness of segmentation model.\nTo improve the limited performance of semantic segmentation under extreme weather, recently proposed methods have explored various techniques for low-level tasks. Take the rainy weather as an example, Single Image Deraining (SID) [11,15] aims to remove the degradation noise from the input rainy images and retains as much context details as possible, which could be embedded as the low-level preprocessing procedure to benefit the downstream segmentation tasks. 
In comparison with the optimization based methods [24,31,50,56], varieties of deep learning based methods [14,22,29] explore different network structures to obtain significant performance based on massive training data. Besides, several methods [31,58] have also incorporated the high-level semantic knowledge as efficient feedback to facilitate the deraining process.\nWhereas, the above methods essentially focus on eliminating a specific influence factor such as bad weather or adversarial attacks to enhance the adaptability or robustness of segmentation model in real-world applications, while implying no uniform understanding of these degradation factors which influence the performance of high-level tasks. To be general, the environmental degradation phenomenon and artificially introduced adversarial perturbation share similar principles for semantic segmentation tasks, and could be regarded as some specific form of degradation factors added to the input of the segmentation input. From this new perspective, we make our attempt to design a novel framework to jointly handle both types of degradation factors without introducing additional network parameters or task-specific loss functions.\nFirstly, we introduce the Naive Adversarial Training (NAT) framework, which improves the robustness of segmentation model based on AT while handling the rain streaks by embedding extra image deraining module. Whereas, separately removing the rain steaks and defending adversarial perturbation will deteriorate the derain model and introduce residual perturbation to the output of the derain model, which finally affects the downstream tasks. Inspired by the idea which designs specialized transformation module concatenated to the original classification model to defend adversarial examples, we here propose to transfer the robustness of segmentation model to the derain model, and design the Preprocessing Enhanced Adversarial Robust Learning (PEARL) framework to simultaneously deal with both adversarial perturbation and rain streaks. Moreover, as opposed to the Negative Adversarial Attack (NAA), we propose the Auxiliary Mirror Attack (AMA) to introduce \"positive\" information prior of the adversarial attack to the supervised training of derain model, which enhances the defense capability of derain model and improves the segmentation results eventually. Experimentally, we conducted extensive experiments and ablation studies to demonstrate the performance improvement of both derain and segmentation results with quantitative and visualization results. Moreover, we also verify the generalization performance of our framework across different datasets.\nThe main contributions of this paper are summarized as follows.\n• To the best of our knowledge, we make the first attempt delving into downstream semantic segmentation tasks influenced by both natural degradation factor (e.g., rain streaks) and artificially generated degradation factors (e.g., adversarial attacks), and significantly improve the downstream task performance under bad weather while retaining the robustness against adversarial attacks. [11,15] has been well developed to deal with different rain streaks and improve the downstream tasks for practical applications. Typically, Li et.al. [31] proposed RESCAN to incorporate dilated convolutional neural networks and recurrent neural networks to remove rain streaks in multiple stages. Ren et.al. 
[38] constructs a better and simpler baseline deraining network, called PReNet, which provides consistent improvements of the architecture and loss functions. Zamir et.al. [54] introduces a multi-stage architecture called MPRNet to progressively learn restoration functions for degraded inputs and balances the competing goals of spatial details and high-level contextualized information in image restoration tasks. Recently, Valanarasu et. al. [44] proposed transformer-based model with a single encoder and a decoder that can restore an image degraded by any weather condition. Besides, a line of works [23,29] also explore the high-level semantic information, such as the detection and segmentation results, to guide the optimization of deraining process. Note that our propose training framework of image deraining implies no explicit requirements of the network structure or design of loss functions, which makes it capable to incorporate recent-proposed methods to obtain higher performance." }, { "figure_ref": [], "heading": "Adversarial Attacks and Defenses", "publication_ref": [ "b1", "b47", "b35", "b2", "b16", "b33", "b11", "b58", "b26", "b4", "b0", "b17", "b25", "b42", "b50", "b36", "b40" ], "table_ref": [], "text": "Adversarial Attacks. It has been investigated [2,48] that the segmentation model also shows vulnerability to these artificially introduced adversarial examples. Generally speaking, the adversarial attacks include two categories, i.e., black-box attacks [36] and white-box [3] attacks. Here we focus on the gradient-based whitebox attack which is capable to access full knowledge of the model under attack (known as target model), and generated imperceptible perturbations by computing the gradient of target model. Several commonly used adversarial attack methods include FGSM [17] and PGD [34], which generated single-step and multi-step perturbation for the input image. Based on two basic attacks, different attack methods have been explored by introducing practical techniques [12,59]. kurakin et.al. [27] proposed BIM attack and demonstrates that machine learning systems are vulnerable to adversarial examples even in physical world scenarios. Carlini et.al. [5] challenges the effectiveness of defensive distillation and introduces the optimization based attack method denoted as CW. In addition to the above general-purpose attacks, several works [1,18] also conduct impressive investigation on the robustness of segmentation and introduce effective improvements of the PGD attack, which also shows its necessity of training robust segmentation model for better defense against the adversarial degradation factors.\nAdversarial Defense. Generally speaking, Adversarial Training (AT) [26,43] trains the model to defend the adversarial example by minimizing the attack objective, which also make it more timeconsuming due to generation of adversarial example and tasks more epochs to converge. Whereas, few works have explored the effectiveness of AT on the segmentation model tasks. Practically, by setting additional branches in the target model during training and dividing pixels into multiple branches, Xu et.al. [51] proposed DDC-AT for improving the robustness of deep neural networks on semantic segmentation tasks. In addition, another branch of defense methods have investigated different transformations, such as image compression and pixel deflection [37,41], embedded to preprocess the input, thus remove the adversarial perturbation. 
However, there has been a lack of research in recent years continuing to investigate this direction. Instead of directly using AT to handle different degradation factors, we here employ the preprocessing-based idea to construct a robust learning process with an embedded derain model, which is supposed to jointly handle both rain streaks and adversarial attacks. Besides, our framework retains more flexibility to be further improved with task-specific model designs and additional loss functions." }, { "figure_ref": [], "heading": "PROPOSED METHOD", "publication_ref": [], "table_ref": [], "text": "In this section, we first provide a simplified problem definition of the different degradation attack factors and AT to derive the Naive Adversarial Training (NAT) framework for improving the robustness of the image segmentation model. With an analysis of the limitations of NAT, we further propose our Preprocessing Enhanced Adversarial Robust Learning (PEARL) framework with the designed Auxiliary Mirror Attack (AMA) generator, which simultaneously removes the rain streaks and improves the robustness against downstream adversarial attacks." }, { "figure_ref": [ "fig_1", "fig_3" ], "heading": "Naive Adversarial Training Framework", "publication_ref": [], "table_ref": [], "text": "Against Degradation Attacks. Typically speaking, an adversarial attack is supposed to deteriorate the output label of the segmentation model by introducing a visually imperceptible perturbation 𝜹 to the input image, which can be formulated as
𝜹 = arg max 𝜹,∥𝜹 ∥ 𝑝 ≤𝜖 L 𝑎𝑡𝑘 (S(C + 𝜹 |𝝎), Y), (1)
where 𝜹 is usually bounded within an 𝜖-toleration ℓ 𝑝 -norm ball, 𝜹 ∈ [0, 1], and L 𝑎𝑡𝑘 is the adversarial loss that measures the distance between the prediction on the degraded example and the ground truth. Typically, we could adopt the same form as L seg to define the adversarial loss function. Based on the above formulation, when we apply the 𝐾-step PGD method to generate the adversarial example, the perturbation at the 𝑘-th step can be denoted as
𝜹 𝑘+1 ← Π 𝜖 (𝜹 𝑘 + 𝛼 • 𝑠𝑔𝑛(∇ 𝜹 L 𝑎𝑡𝑘 (S(C + 𝜹 𝑘 |𝝎), Y))),(2)
where 𝑘 = 0, . . . , 𝐾 -1. Whereas, we consider a more general setting where the manually designed adversarial attack is essentially regarded as one specific form of degradation attack factors. In this case, we are allowed to consider various degradation factors, such as the inevitable noise caused by extreme weather, e.g., rain and fog, which already exist in the original input C. Here we consider the degradation factor to be rain streaks, and denote the degraded rainy image as I when rain streaks exist in the input. To alleviate the negative impact of both degradation attacks, i.e., the rain streaks and the adversarial perturbation, we first propose the Naive AT (NAT) framework, which is illustrated in Fig. 2(a). It first embeds a pretrained derain model (denoted as F (•|𝜽 ), where 𝜽 denotes the parameters of the derain model) to remove the rain streaks, and retains the robust segmentation model to handle the adversarial attacks for downstream tasks. However, since the derain model encompasses little prior of the adversarial distribution, the perturbations added to the rainy image may seriously deteriorate the deraining results. In Fig. 3, we analyze the heatmaps of deraining results processed with the pretrained derain model and the one trained with our proposed framework.
As can be observed, when the perturbation generated to attack the segmentation model exists in the rainy image I, it also severely degrades the derain result and leaves an imperceptible disturbance in the output, which is a mix of multiple degradation factors. Consequently, the residual perturbation and rain streaks left in the derained output F (I + 𝜹 𝑛 |𝜽 ) also decrease the ability of the robust segmentation model to defend against the adversarial attacks, which increases the difficulty of the AT based framework. As one of the most significant contributions, we fully explore the potential capability of the embedded derain model, and design the following Preprocessing Enhanced Adversarial Robust Learning (PEARL) framework to effectively defend against both manually designed attacks and natural degradation attacks." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Preprocessing Enhanced Adversarial Robust Learning (PEARL) Framework", "publication_ref": [ "b18", "b31" ], "table_ref": [], "text": "In general, the decomposition mapping function of the derain model could be rationally formulated as I = C + R, where C and R represent the clean background and rain layers extracted from the degraded input. According to the above formulation of adversarial attacks, the degraded input with the adversarial perturbation is denoted as I + 𝜹 𝑛 . In this case, the introduced adversarial perturbation may be regarded, to some extent, as a certain noise added to the clean image. Typically, the denoising task aims to learn a mapping from the noisy input image to the clean image C. From this perspective, the rain streaks and the adversarial perturbation can both be treated as noise, which could be further removed by an embedded denoiser. Inspired by the preprocessing based methods [19,32] which remove the adversarial noise by designing specific transformation modules, we here make our attempt to transfer the robustness of the segmentation model against adversarial attacks to the embedded derain model. In this case, we no longer follow the AT based formulation in Eq. (3) to implement the NAT framework in Fig. 2(a), and instead introduce the following Preprocessing Enhanced Adversarial Robust Learning (PEARL) framework:
min 𝜽 L 𝑑𝑒 𝑓 (F (I + 𝜹 𝑛 |𝜽 ), C) 𝑠.𝑡 . 𝜹 ∈ argmax 𝜹,∥𝜹 ∥ 𝑝 ≤𝜖 L 𝑎𝑡𝑘 (S(I + 𝜹 𝑛 |𝝎), Y),(4)
where L 𝑑𝑒 𝑓 could be specified as the objective function of the derain model. Besides, we can introduce additional regularization terms to L 𝑑𝑒 𝑓 as task priors of the downstream segmentation task based on the output of the derain model. As described in Fig. 2(b) and Eq. (4), the degraded example is still generated by adding adversarial perturbation to the rainy image, while we replace the outer minimization over S(•|𝝎) in Eq. (3) with training the derain model to learn the following decomposition mapping function:
I + 𝜹 𝑛 → C + (R + 𝜹 𝑛 ),(5)
where the estimate of C is obtained by minimizing L 𝑑𝑒 𝑓 between the derained output and C. Practically, we simply restore the segmentation weights pretrained on the clean images, and optimize the negative attack generator and the derain model in an alternating manner. The derain model trained with the PEARL framework is supposed to jointly remove the rain streaks and the adversarial noise, thus making the derain results, i.e., F (I + 𝜹 𝑛 |𝜽 ), closer to the clean image. Consequently, the preprocessed results weaken the negative influence of both degradation attack factors, which also enhances the downstream segmentation tasks to a great extent.
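To make the alternating optimization behind Eq. (2) and Eq. (4) concrete, the following sketch shows one PEARL training step in PyTorch-style code: the inner loop runs a K-step sign-gradient (BIM/PGD-style) attack against the frozen pretrained segmentation model on the rainy input, and the outer step updates only the derain model so that its output on the attacked rainy image approaches the clean image. This is a minimal illustration under assumed interfaces (derain_model, seg_model, an ℓ∞ budget, and an L1 deraining loss), not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def pearl_step(derain_model, seg_model, rainy, clean, seg_label,
               optimizer, eps=8/255, alpha=2/255, steps=5):
    """One PEARL update: inner maximization (attack on the frozen segmentation
    model) followed by outer minimization (derain model update)."""
    seg_model.eval()                                   # segmentation weights stay fixed

    # Inner loop: K-step sign-gradient attack on the rainy input (cf. Eq. (2)).
    delta = torch.empty_like(rainy).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        atk_loss = F.cross_entropy(seg_model(rainy + delta), seg_label)  # L_atk
        grad = torch.autograd.grad(atk_loss, delta)[0]
        delta = (delta.detach() + alpha * grad.sign()).clamp(-eps, eps)
        delta = (rainy + delta).clamp(0, 1) - rainy    # keep the attacked image in [0, 1]

    # Outer step: train the derain model to map I + delta back to the clean image (cf. Eq. (4)).
    derained = derain_model(rainy + delta)
    def_loss = F.l1_loss(derained, clean)              # L_def with the clean target C
    optimizer.zero_grad()
    def_loss.backward()
    optimizer.step()
    return def_loss.item()
```

In this sketch the optimizer holds only the derain model's parameters, so the frozen segmentation model is never updated by the outer step.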
In the next subsection, to make the utmost of the adversarial noise generated by the inner maximization, we introduce another auxiliary mirror attack to mimic the deterioration process of the adversarial attack, and incorporate the generated positive perturbation to facilitate the noise decomposition of the derain model." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "Auxiliary Mirror Attack (AMA)", "publication_ref": [ "b29", "b41" ], "table_ref": [], "text": "In this subsection, we propose another enhancement technique to assist the optimization of the derain model and further improve the performance of downstream segmentation tasks. Based on the previous definition of G 𝑛 , the generated perturbation 𝜹 𝑛 = G 𝑛 (L 𝑎𝑡𝑘 ( Ỹ, Y)|𝜹) is added to the input of the derain model to introduce the degradation attack, as illustrated in Fig. 2(b). By minimizing the outer objective L 𝑑𝑒 𝑓 in Eq. (4), we inject the noise distribution of the adversarial attack into the derain model such that F (•|𝜽 ) generalizes to this decomposition mapping task and minimizes the distance between the derained output and the clean image C. However, it has been investigated [30] that, due to limited hardware support and the influence of inevitable adverse shooting conditions, the given ground truth may also contain unpredictable biases, which misguide the derain task and even the downstream tasks. The above phenomenon enlightens us to rethink the clean supervision data and refine it with the proposed auxiliary mirror attack.
Specifically, inspired by the idea [42] which establishes the correlation between restoration and object detection tasks by generating pseudo ground truth for the upstream restoration task, we design an Auxiliary Mirror Attack (AMA) generator, denoted as G 𝑚 (•|𝜹), to generate the mirror attack of the NAA, aiming to minimize the attack objective L 𝑎𝑡𝑘 . In contrast to the negative impact of 𝜹 𝑛 generated by G 𝑛 (•|𝜹), G 𝑚 (•|𝜹) is supposed to dynamically adjust the derain targets with the attack prior of the inner maximization objective, adding a \"positive\" perturbation to the clean image. Then the objective of the PEARL framework with AMA can be further reformulated as:
min 𝜽 L 𝑑𝑒 𝑓 (F (I + 𝜹 𝑛 |𝜽 ), C + 𝜹 𝑚 ) 𝑠.𝑡 . 𝜹 ∈ argmax 𝜹,∥𝜹 ∥ 𝑝 ≤𝜖 L 𝑎𝑡𝑘 (S(I + 𝜹 𝑛 |𝝎), Y),(6)
where 𝜹 𝑚 = G 𝑚 (L 𝑎𝑡𝑘 (S(I + 𝜹 𝑛 |𝝎), Y)|𝜹). We describe the whole pipeline of our PEARL framework with AMA in Fig. 2(b). To some degree, PEARL trains the derain model to decompose the attacked rainy input so as to defend against the adversarial attack, while AMA moves one step further by interpolating the mirror attack into the ground truth; consequently, the decomposition mapping in Eq. (5) turns into the following one:
I + 𝜹 𝑛 → C + (R + 𝜹 𝑛 ) ⇒ I + 𝜹 𝑛 → (C + 𝜹 𝑚 ) + (R + 𝜹 𝑛 ) ⇒ I + 𝜹 𝑛 → (C + R) + (𝜹 𝑛 + 𝜹 𝑚 ).(7)
It can be observed that the generated 𝜹 𝑚 added to C serves as a distribution prior of 𝜹 𝑛 , which facilitates the robust learning of the derain model in Eq. (6) based on its original decomposition function. Meanwhile, since AMA introduces the information of the adversarial attack on the segmentation model into the ground truth of the derain model, the output results will consistently benefit the segmentation tasks to some extent.
In comparison with the NAT framework in Fig. 2 (a), which forms two cycles by alternately optimizing the attack generator and the segmentation model with L 𝑎𝑡𝑘 and L 𝑑𝑒 𝑓 , our complete pipeline with both the PEARL framework and the AMA generator creates a new cycle by introducing AMA into the optimization of the deraining model in Fig. 2 (c). Besides, we also analyze the difference in training strategies between the NAT framework and our PEARL with AMA to help understand how to update the attack and defense modules in Fig. 2 (d).
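As a rough sketch of how the mirror attack of Eq. (6) can be produced, the snippet below reuses the same attack loss as the NAA but takes a single descending sign-gradient step, and adds the resulting positive perturbation to the clean image to form the refined supervision C + 𝜹 𝑚 . The single-step form and the step size alpha_m are illustrative assumptions rather than the exact AMA configuration.

```python
import torch
import torch.nn.functional as F

def ama_target(seg_model, rainy_adv, clean, seg_label, alpha_m=2/255):
    """Build the AMA-refined supervision C + delta_m for the derain model.

    rainy_adv is the rainy image carrying the NAA perturbation (I + delta_n);
    the mirror step descends the same attack loss, so delta_m points in the
    direction that helps, rather than hurts, the segmentation model."""
    x = rainy_adv.clone().detach().requires_grad_(True)
    atk_loss = F.cross_entropy(seg_model(x), seg_label)   # same L_atk as the NAA
    grad = torch.autograd.grad(atk_loss, x)[0]
    delta_m = -alpha_m * grad.sign()                       # mirror of the ascent step
    return (clean + delta_m).clamp(0, 1).detach()          # refined target C + delta_m
```

With this helper, the outer deraining loss in the previous sketch would simply compare the derained output against ama_target(...) instead of the raw clean image.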
In the next section, we will demonstrate the significant performance improvement and generalization ability of this new framework with different quantitative and qualitative metrics on deraining and segmentation tasks." }, { "figure_ref": [], "heading": "EXPERIMENTS 4.1 Experimental Settings", "publication_ref": [ "b8", "b12", "b59", "b6", "b19", "b43", "b53", "b37", "b30" ], "table_ref": [], "text": "Dataset and Model. We implement our experiments based on two popular semantic segmentation datasets, Cityscapes [9] and PASCAL VOC 2012 [13]. In the following, we train the model on the training set of Cityscapes, while both datasets are used for testing to verify the performance improvement and generalization ability of the proposed framework. We employ two widely used models, i.e., PSPNet [60] and DeepLabv3 [7], for the downstream segmentation task. ResNet50 [20] is adopted as the backbone feature extractor, and we follow the default model configuration for training and testing. As for the derain models, we implement four mainstream deraining models, TransWeather [44], MPRNet [54], PReNet [38] and RESCAN [31], to verify the consistent performance of the PEARL framework and its insensitivity to the architecture of the derain model.
Degradation factors and Metrics. For the natural degradation factor (i.e., rain streaks), we manually synthesize rain streaks based on the original Cityscapes and VOC datasets (the initial PSNR and SSIM are 17.45 / 0.5566). For the artificially generated degradation factor (i.e., adversarial attacks), we use the BIM attack for training, while BIM, PGD and CW are used for testing the defense performance (𝜖 = 4/255, 8/255). As for the metrics, two types of pixel-wise accuracy (overall accuracy allAcc and mean of class-wise accuracy mAcc) and the mean of class-wise Intersection over Union (mIoU) are used to evaluate the segmentation performance, which also reflects the robustness against different degradation factors. In addition, PSNR and SSIM are used for the low-level restoration tasks. More implementation details can be found in the supplementary materials." }, { "figure_ref": [ "fig_4", "fig_5", "fig_1", "fig_6" ], "heading": "Experimental Results", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We first evaluate the performance of different strategies when both rain streaks and adversarial perturbation exist in the segmentation input. Generally speaking, we consider several basic strategies and our proposed framework to address this challenging task. We use Seg, Robust Seg and Derain + Seg to represent the basic model trained with clean images and the two models that handle only adversarial examples or only rain streaks, respectively. Meanwhile, we test the performance of our proposed NAT framework, PEARL framework and PEARL with the AMA generator (denoted as +AMA).
In Tab. 1, we consider BIM (𝐾 = 3, 5, 10), PGD and C&W attacks constrained by the ℓ inf norm, together with the rain streaks, to attack the segmentation task on the synthesized Cityscapes dataset. As can be observed, both degradation factors can incur a sharp decline in the performance of the downstream segmentation task. When the attack intensity is weak (𝜖 = 4/255; the results can be found in the supplementary material), embedding the pretrained derain model may help protect the segmentation task to some extent. However, once the attack intensity increases to 8/255, the deraining model with little attack prior is also affected by the perturbation, which causes a serious performance decrease.
Besides, as the adversarial robustness of the segmentation model improves, the perturbation generated by the same attack method also becomes stronger, which is reflected in the decline of PSNR. In contrast, the three proposed solutions, which take both factors into account, significantly promote the defense capability of the segmentation task. Among these three solutions, the PEARL framework (with AMA) gains much more improvement on both deraining and segmentation metrics. Under a relatively weak attack (𝜖 = 4/255), the effectiveness of AMA cannot be clearly verified. As the intensity of the adversarial attack increases (𝜖 = 8/255), with a slight trade-off on deraining performance (a 0.1 decrease of PSNR), AMA enables better performance of the downstream segmentation task on both the mIoU and allAcc metrics. It is also worth noting that for unseen attacks (PGD and CW), the PEARL framework together with AMA assistance still maintains a stable defense effect.
Furthermore, we also show the visualization results in Fig. 4 to demonstrate that our PEARL framework helps obtain higher-quality derained images and effectively facilitates the downstream segmentation task in defending against the two degradation factors, which leads to better segmentation results. From the processed heat maps in the third row, it can be clearly seen that the output of the deraining model trained by PEARL leaves much less noise than the other solutions, which also demonstrates the effectiveness of the PEARL framework in obtaining derained images with better visual effects. Then we adopt four state-of-the-art deraining methods to verify the insensitivity of the PEARL framework to the architecture of the derain model, and the results are shown in Fig. 5. We train these four models with the same strategy as PEARL with AMA in Fig. 2 (d). It can be seen that our framework can not only improve the PSNR of these methods under different intensities of attack factors, but also improve the downstream segmentation task by a large margin.
Finally, we fix the trained deraining model and replace the PSPNet model with DeepLabv3 to show the generalization performance across downstream architectures, with the results reported in Table 2 and Figure 6, respectively. It can be seen that, when facing a new downstream architecture, except for the unknown attack (CW), the deraining model trained by the PEARL framework with the AMA assistant can achieve a defense effect very close to that with the original PSPNet model." }, { "figure_ref": [ "fig_7" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "In essence, the motivation of the PEARL framework together with AMA is to protect the downstream segmentation task from the impact of both the natural degradation factor and the artificially generated degradation factor. Here we conduct ablation experiments to analyze the practical effect of our framework in defending against these degradation factors separately. Specifically, we first validate the deraining model trained by our framework on images with only rain streaks in Tab. 3. It can be observed that the trade-off between accuracy on clean data and the robustness to defend against adversarial attacks also influences the performance of the derain model trained by PEARL and PEARL with AMA. When only rain streaks exist in the input, the model trained by our framework obtains worse segmentation performance on these images without adversarial perturbation. But we are also pleasantly surprised to find that the deraining performance is further improved as an extra bonus, leading to better visualization results.
As shown in Fig. 7, we can conclude that our framework indeed enables the deraining model to largely eliminate adversarial perturbation under different attack intensities, and AMA further improves the segmentation results and the quality of the restored images when rain no longer exists." }, { "figure_ref": [], "heading": "Extension", "publication_ref": [], "table_ref": [ "tab_1", "tab_4" ], "text": "Last but not least, we also validate the generalization performance of the proposed framework across different datasets. Specifically, we transfer the deraining model in Table 1 (trained on PSPNet and Cityscapes by the PEARL framework) directly to the PASCAL VOC dataset (the segmentation model trained on PASCAL VOC is also employed), and report the results in Table 4. It can be seen that even in the face of an unseen input data distribution and a new downstream segmentation model, the PEARL framework, with the assistance of AMA, can still obtain significant performance in comparison with NAT." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we have addressed the robustness of semantic segmentation tasks in a general application scenario where the input image is affected by both natural degradation factors (i.e., rain streaks) and artificially generated degradation factors (i.e., adversarial attacks). Based on a unified understanding of the above degradation factors and an analysis of the proposed NAT framework, we introduced the PEARL framework, which leverages adversarial robustness by transferring it to the derain model to simultaneously eliminate the influence of both rain streaks and adversarial perturbation. Moreover, we introduced the AMA generator to the PEARL framework, which provides a positive information prior for the defense update, as opposed to the NAA generator. We have shown the significant performance improvement of the PEARL framework for handling both types of degradation factors based on different derain and segmentation models. Furthermore, we have verified the generalization performance of the PEARL framework with AMA across different datasets." } ]
2023-05-25
[ { "authors": "Shashank Agnihotri; Margret Keuper", "journal": "", "ref_id": "b0", "title": "CosPGD: a unified white-box adversarial attack for pixel-wise prediction tasks", "year": "2023" }, { "authors": "Anurag Arnab; Ondrej Miksik; Philip Hs Torr", "journal": "", "ref_id": "b1", "title": "On the robustness of semantic segmentation models to adversarial attacks", "year": "2018" }, { "authors": "Anish Athalye; Nicholas Carlini", "journal": "", "ref_id": "b2", "title": "On the robustness of the cvpr 2018 white-box adversarial example defenses", "year": "2018" }, { "authors": "Andreas Bar; Fabian Huger; Peter Schlicht; Tim Fingscheidt", "journal": "", "ref_id": "b3", "title": "On the robustness of redundant teacher-student frameworks for semantic segmentation", "year": "2019" }, { "authors": "Nicholas Carlini; David Wagner", "journal": "Ieee", "ref_id": "b4", "title": "Towards evaluating the robustness of neural networks", "year": "2017" }, { "authors": "Liang-Chieh Chen; George Papandreou; Iasonas Kokkinos; Kevin Murphy; Alan L Yuille", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b5", "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "year": "2017" }, { "authors": "Liang-Chieh Chen; George Papandreou; Florian Schroff; Hartwig Adam", "journal": "", "ref_id": "b6", "title": "Rethinking atrous convolution for semantic image segmentation", "year": "2017" }, { "authors": "Liang-Chieh Chen; Yukun Zhu; George Papandreou; Florian Schroff; Hartwig Adam", "journal": "", "ref_id": "b7", "title": "Encoder-decoder with atrous separable convolution for semantic image segmentation", "year": "2018" }, { "authors": "Marius Cordts; Mohamed Omran; Sebastian Ramos; Timo Rehfeld; Markus Enzweiler; Rodrigo Benenson; Uwe Franke; Stefan Roth; Bernt Schiele", "journal": "", "ref_id": "b8", "title": "The cityscapes dataset for semantic urban scene understanding", "year": "2016" }, { "authors": "Francesco Croce; Matthias Hein", "journal": "PMLR", "ref_id": "b9", "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "year": "2020" }, { "authors": "Sen Deng; Mingqiang Wei; Jun Wang; Yidan Feng; Luming Liang; Haoran Xie; Fu ; Lee Wang; Meng Wang", "journal": "", "ref_id": "b10", "title": "Detail-recovery image deraining via context aggregation networks", "year": "2020" }, { "authors": "Yinpeng Dong; Fangzhou Liao; Tianyu Pang; Hang Su; Jun Zhu; Xiaolin Hu; Jianguo Li", "journal": "", "ref_id": "b11", "title": "Boosting adversarial attacks with momentum", "year": "2018" }, { "authors": "Mark Everingham; Luc Van Gool; K I Christopher; John Williams; Andrew Winn; Zisserman", "journal": "International journal of computer vision", "ref_id": "b12", "title": "The pascal visual object classes (voc) challenge", "year": "2010" }, { "authors": "Xueyang Fu; Jiabin Huang; Delu Zeng; Yue Huang; Xinghao Ding; John Paisley", "journal": "", "ref_id": "b13", "title": "Removing rain from single images via a deep detail network", "year": "2017" }, { "authors": "Xueyang Fu; Borong Liang; Yue Huang; Xinghao Ding; John Paisley", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b14", "title": "Lightweight pyramid networks for image deraining", "year": "2019" }, { "authors": "Santiago González Izard; Ramiro Sánchez Torres; Oscar Alonso Plaza; Juan Antonio; Juanes Mendez; Francisco José García-Peñalvo", "journal": "Sensors", "ref_id": 
"b15", "title": "Nextmed: automatic imaging segmentation, 3D reconstruction, and 3D model visualization platform using augmented and virtual reality", "year": "2020" }, { "authors": "Ian J Goodfellow; Jonathon Shlens; Christian Szegedy", "journal": "", "ref_id": "b16", "title": "Explaining and harnessing adversarial examples", "year": "2014" }, { "authors": "Jindong Gu; Hengshuang Zhao; Philip Hs Volker Tresp; Torr", "journal": "Springer", "ref_id": "b17", "title": "SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness", "year": "2022-10-23" }, { "authors": "Shixiang Gu; Luca Rigazio", "journal": "", "ref_id": "b18", "title": "Towards deep neural network architectures robust to adversarial examples", "year": "2014" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b19", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Jan Hendrik Metzen; Chaithanya Mummadi; Thomas Kumar; Volker Brox; Fischer", "journal": "", "ref_id": "b20", "title": "Universal adversarial perturbations against semantic image segmentation", "year": "2017" }, { "authors": "Xiaowei Hu; Chi-Wing Fu; Lei Zhu; Pheng-Ann Heng", "journal": "", "ref_id": "b21", "title": "Depthattentional features for single-image rain removal", "year": "2019" }, { "authors": "Kui Jiang; Zhongyuan Wang; Peng Yi; Chen Chen; Baojin Huang; Yimin Luo; Jiayi Ma; Junjun Jiang", "journal": "", "ref_id": "b22", "title": "Multi-scale progressive fusion network for single image deraining", "year": "2020" }, { "authors": "Li-Wei Kang; Chia-Wen Lin; Yu-Hsiang Fu", "journal": "IEEE transactions on image processing", "ref_id": "b23", "title": "Automatic single-imagebased rain streaks removal via image decomposition", "year": "2011" }, { "authors": "Taeheon Kim; Youngjoon Yu; Yong Man Ro", "journal": "", "ref_id": "b24", "title": "Defending Physical Adversarial Attack on Object Detection via Adversarial Patch-Feature Energy", "year": "1905" }, { "authors": "Alexey Kurakin; Ian Goodfellow; Samy Bengio", "journal": "", "ref_id": "b25", "title": "Adversarial machine learning at scale", "year": "2016" }, { "authors": "Alexey Kurakin; Ian J Goodfellow; Samy Bengio", "journal": "", "ref_id": "b26", "title": "Adversarial examples in the physical world", "year": "2018" }, { "authors": "Guanbin Li; Yuan Xie; Liang Lin; Yizhou Yu", "journal": "", "ref_id": "b27", "title": "Instance-level salient object segmentation", "year": "2017" }, { "authors": "Ruoteng Li; Loong-Fah Cheong; Robby T Tan", "journal": "", "ref_id": "b28", "title": "Heavy rain image restoration: Integrating physics model and conditional adversarial learning", "year": "2019" }, { "authors": "Siyuan Li; Iago Breno Araujo; Wenqi Ren; Zhangyang Wang; Eric K Tokuda; Roberto Hirata; Junior ; Roberto Cesar-Junior; Jiawan Zhang; Xiaojie Guo; Xiaochun Cao", "journal": "", "ref_id": "b29", "title": "Single image deraining: A comprehensive benchmark analysis", "year": "2019" }, { "authors": "Xia Li; Jianlong Wu; Zhouchen Lin; Hong Liu; Hongbin Zha", "journal": "", "ref_id": "b30", "title": "Recurrent squeeze-and-excitation context aggregation net for single image deraining", "year": "2018" }, { "authors": "Fangzhou Liao; Ming Liang; Yinpeng Dong; Tianyu Pang; Xiaolin Hu; Jun Zhu", "journal": "", "ref_id": "b31", "title": "Defense against adversarial attacks using high-level representation guided denoiser", "year": "2018" }, { "authors": "Xiaofeng Liu; Yuzhuo Han; Song Bai; Yi Ge; 
Tianxing Wang; Xu Han; Site Li; Jane You; Jun Lu", "journal": "", "ref_id": "b32", "title": "Importance-aware semantic segmentation in self-driving with discrete wasserstein training", "year": "2020" }, { "authors": "Aleksander Madry; Aleksandar Makelov; Ludwig Schmidt; Dimitris Tsipras; Adrian Vladu", "journal": "", "ref_id": "b33", "title": "Towards deep learning models resistant to adversarial attacks", "year": "2017" }, { "authors": "Chengzhi Mao; Amogh Gupta; Vikram Nitin; Baishakhi Ray; Shuran Song; Junfeng Yang; Carl Vondrick", "journal": "Springer", "ref_id": "b34", "title": "Multitask learning strengthens adversarial robustness", "year": "2020-08-23" }, { "authors": "Nicolas Papernot; Patrick Mcdaniel; Ian Goodfellow; Somesh Jha; Z Berkay Celik; Ananthram Swami", "journal": "", "ref_id": "b35", "title": "Practical black-box attacks against machine learning", "year": "2017" }, { "authors": "Aaditya Prakash; Nick Moran; Solomon Garber; Antonella Dilillo; James Storer", "journal": "", "ref_id": "b36", "title": "Deflecting adversarial attacks with pixel deflection", "year": "2018" }, { "authors": "Wangmeng Dongwei Ren; Qinghua Zuo; Pengfei Hu; Deyu Zhu; Meng", "journal": "", "ref_id": "b37", "title": "Progressive image deraining networks: A better and simpler baseline", "year": "2019" }, { "authors": "Abhinav Sagar; Rajkumar Soundrapandiyan", "journal": "", "ref_id": "b38", "title": "Semantic segmentation with multi scale spatial attention for self driving cars", "year": "2021" }, { "authors": "Chuanbiao Song; Kun He; Liwei Wang; John E Hopcroft", "journal": "", "ref_id": "b39", "title": "Improving the generalization of adversarial training with domain adaptation", "year": "2018" }, { "authors": "Yang Song; Taesup Kim; Sebastian Nowozin; Stefano Ermon; Nate Kushman", "journal": "", "ref_id": "b40", "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "year": "2017" }, { "authors": "Shangquan Sun; Wenqi Ren; Tao Wang; Xiaochun Cao", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b41", "title": "Rethinking Image Restoration for Object Detection", "year": "2022" }, { "authors": "Florian Tramèr; Alexey Kurakin; Nicolas Papernot; Ian Goodfellow; Dan Boneh; Patrick Mcdaniel", "journal": "", "ref_id": "b42", "title": "Ensemble adversarial training: Attacks and defenses", "year": "2017" }, { "authors": "Jeya Maria; Jose Valanarasu; Rajeev Yasarla; M Vishal; Patel", "journal": "", "ref_id": "b43", "title": "Transweather: Transformer-based restoration of images degraded by adverse weather conditions", "year": "2022" }, { "authors": "Yuxuan Wang; Jiakai Wang; Zixin Yin; Ruihao Gong; Jingyi Wang; Aishan Liu; Xianglong Liu", "journal": "", "ref_id": "b44", "title": "Generating transferable adversarial examples against vision transformers", "year": "2022" }, { "authors": "Eric Wong; Leslie Rice; J Zico Kolter", "journal": "", "ref_id": "b45", "title": "Fast is better than free: Revisiting adversarial training", "year": "2020" }, { "authors": "Chaowei Xiao; Ruizhi Deng; Bo Li; Fisher Yu; Mingyan Liu; Dawn Song", "journal": "", "ref_id": "b46", "title": "Characterizing adversarial examples based on spatial consistency information for semantic segmentation", "year": "2018" }, { "authors": "Cihang Xie; Jianyu Wang; Zhishuai Zhang; Yuyin Zhou; Lingxi Xie; Alan Yuille", "journal": "", "ref_id": "b47", "title": "Adversarial examples for semantic segmentation and object detection", "year": "2017" }, { "authors": "Enze Xie; 
Wenhai Wang; Zhiding Yu; Anima Anandkumar; Jose M Alvarez; Ping Luo", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b48", "title": "SegFormer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Ke Xu; Xin Tian; Xin Yang; Baocai Yin; Rynson Wh Lau", "journal": "IEEE Transactions on Image Processing", "ref_id": "b49", "title": "Intensityaware single-image deraining with semantic and color regularization", "year": "2021" }, { "authors": "Xiaogang Xu; Hengshuang Zhao; Jiaya Jia", "journal": "", "ref_id": "b50", "title": "Dynamic divide-andconquer adversarial training for robust semantic segmentation", "year": "2021" }, { "authors": "Maoke Yang; Kun Yu; Chi Zhang; Zhiwei Li; Kuiyuan Yang", "journal": "", "ref_id": "b51", "title": "Denseaspp for semantic segmentation in street scenes", "year": "2018" }, { "authors": "Changqian Yu; Jingbo Wang; Changxin Gao; Gang Yu; Chunhua Shen; Nong Sang", "journal": "", "ref_id": "b52", "title": "Context prior for scene segmentation", "year": "2020" }, { "authors": "Aditya Syed Waqas Zamir; Salman Arora; Munawar Khan; Fahad Hayat; Ming-Hsuan Shahbaz Khan; Ling Yang; Shao", "journal": "", "ref_id": "b53", "title": "Multi-stage progressive image restoration", "year": "2021" }, { "authors": "Hang Zhang; Kristin Dana; Jianping Shi; Zhongyue Zhang; Xiaogang Wang; Ambrish Tyagi; Amit Agrawal", "journal": "", "ref_id": "b54", "title": "Context encoding for semantic segmentation", "year": "2018" }, { "authors": "He Zhang; M Vishal; Patel", "journal": "", "ref_id": "b55", "title": "Density-aware single image de-raining using a multi-stream dense network", "year": "2018" }, { "authors": "Jiaming Zhang; Qi Yi; Jitao Sang", "journal": "", "ref_id": "b56", "title": "Towards Adversarial Attack on Vision-Language Pre-training Models", "year": "2022" }, { "authors": "Kaihao Zhang; Wenhan Luo; Wenqi Ren; Jingwen Wang; Fang Zhao; Lin Ma; Hongdong Li", "journal": "Springer", "ref_id": "b57", "title": "Beyond monocular deraining: Stereo image deraining via semantic understanding", "year": "2020-08-23" }, { "authors": "Yihua Zhang; Guanhua Zhang; Prashant Khanduri; Mingyi Hong; Shiyu Chang; Sijia Liu", "journal": "PMLR", "ref_id": "b58", "title": "Revisiting and advancing fast adversarial training through the lens of bi-level optimization", "year": "2022" }, { "authors": "Hengshuang Zhao; Jianping Shi; Xiaojuan Qi; Xiaogang Wang; Jiaya Jia", "journal": "", "ref_id": "b59", "title": "Pyramid scene parsing network", "year": "2017" }, { "authors": "Quan Zhou; Yu Wang; Yawen Fan; Xiaofu Wu; Suofei Zhang; Bin Kang; Longin ; Jan Latecki", "journal": "applied soft computing", "ref_id": "b60", "title": "AGLNet: Towards real-time semantic segmentation of self-driving images via attention-guided lightweight network", "year": "2007-03-12" } ]
[ { "formula_coordinates": [ 4, 109.96, 440.35, 181.46, 16.82 ], "formula_id": "formula_0", "formula_text": "𝜹 = arg max 𝜹,∥𝜹 ∥ 𝑝 ≤𝜖 L 𝑎𝑡𝑘 (S(C + 𝜹 |𝝎), Y), (1" }, { "formula_coordinates": [ 4, 291.41, 440.83, 3.17, 7.94 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 4, 76.16, 543.11, 218.42, 10.56 ], "formula_id": "formula_2", "formula_text": "𝜹 𝑘+1 ← Π 𝜖 (𝜹 𝑘 + 𝛼 • 𝑠𝑔𝑛(∇ 𝜹 L 𝑎𝑡𝑘 (S(C + 𝜹 𝑘 |𝝎)), Y)),(2)" }, { "formula_coordinates": [ 5, 102.2, 623.43, 192.39, 34.34 ], "formula_id": "formula_3", "formula_text": "min 𝜽 L 𝑑𝑒 𝑓 (F (I + 𝜹 𝑛 |𝜽 ), C) 𝑠.𝑡 .𝜹 ∈ argmax 𝜹,∥𝜹 ∥ 𝑝 ≤𝜖 L 𝑎𝑡𝑘 (S(I + 𝜹 𝑛 |𝝎), Y),(4)" }, { "formula_coordinates": [ 5, 395.53, 134.44, 163.21, 9.97 ], "formula_id": "formula_4", "formula_text": "I + 𝜹 𝑛 → C + (R + 𝜹 𝑛 ),(5)" }, { "formula_coordinates": [ 5, 471.29, 371.76, 86.46, 9.97 ], "formula_id": "formula_5", "formula_text": "𝜹 𝑛 = G 𝑛 (L 𝑎𝑡𝑘 ( Ỹ, Y)|𝜹)" }, { "formula_coordinates": [ 5, 366.35, 637.64, 192.39, 34.34 ], "formula_id": "formula_6", "formula_text": "𝜽 L 𝑑𝑒 𝑓 (F (I + 𝜹 𝑛 |𝜽 ), C + 𝜹 𝑚 ) 𝑠.𝑡 .𝜹 ∈ argmax 𝜹,∥𝜹 ∥ 𝑝 ≤𝜖 L 𝑎𝑡𝑘 (S(I + 𝜹 𝑛 |𝝎), Y),(6)" }, { "formula_coordinates": [ 6, 63.97, 330.93, 230.62, 23.83 ], "formula_id": "formula_7", "formula_text": "I + 𝜹 𝑛 → C + (R + 𝜹 𝑛 ) ⇒ I + 𝜹 𝑛 → (C + 𝜹 𝑚 ) + (R + 𝜹 𝑛 ) ⇒ I + 𝜹 𝑛 → (C + R) + (𝜹 𝑛 + 𝜹 𝑚 ).(7)" } ]
PEARL: Preprocessing Enhanced Adversarial Robust Learning of Image Deraining for Semantic Segmentation
i Derain + Seg (f)-i NAT (g)-i PEARL (Ours) (h)-i +AMA (Ours) Figure 1: The visualization results of image deraining and semantic segmentation tasks among the baseline (Derain + Seg) and our proposed NAT framework, PEARL framework and PEARL with AMA generator (denoted as +AMA) with the influence of both degradation factors, i.e., rain streaks and PGD attacks. It can be obviously seen that our proposed framework obtains derained images with higher quality which also leads to more accurate segmentation labels.
Xianghao Jiao; Yaohua Liu; Xinjia Gao; Xinyuan Chu; Xin Fan; Risheng Liu
[ { "figure_caption": "In this work, we consider an image segmentation model S(•|𝝎) parameterized by 𝝎. Given a training dataset D tr with labeled data pairs, the segmentation output can be represented as Ỹ = S(C|𝝎), where C denotes the input image, and Ỹ denotes the output label of segmentation. Therefore, this downstream task aims to optimize the following objective: min 𝝎 L seg ( Ỹ, Y), where Y denotes the groundtruth label of segmentation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The first three subfigures illustrate the Naive Adversarial Training (NAT) training framework for handling rain streaks and adversarial attacks for image segmentation model, our Preprocessing Enhanced Adversarial Robust Learning (PEARL) framework and its whole pipeline with proposed Auxiliary Mirror Attack (AMA) technique. The last subfigure describes the training strategy for NAT, PEARL and PEARL with AMA. We use gray, green and purple lines to denote the optimization cycle of attack, defense and additional flow introduced by the AMA module.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": ", 𝐾 -1, 𝛼 is the step size for perturbation generation, Π 𝜖 (•) and 𝑠𝑔𝑛(•) denotes the projection operation and elementwise 𝑠𝑖𝑔𝑛 operation, respectively. The initial perturbation 𝜹 0 is sampled from uniform distribution 𝑈 (-𝜖, 𝜖). In the following, we use 𝜹 𝑛 to represent the adversarial attack 𝜹 𝐾 generated by a specific Negative Adversarial Attack (NAA) generatior denoted as 𝜹 𝑛 = G 𝑛 (L 𝑎𝑡𝑘 (S(C + 𝜹 |𝝎), Y)|𝜹), (e.g., PGD), in order to distinguish them from the auxiliary mirror attacks we introduced later. As it is mentioned above, AT have been extensively investigated to defend the adversarial attacks by solving the following minimax optimization problem minmize 𝝎 E (C,Y) ∈ D tr maximize 𝜹,∥𝜹 ∥ 𝑝 ≤𝜖 L 𝑎𝑡𝑘 (S(C + 𝜹 𝑛 |𝝎), Y) . (3) By alternatively optimizing 𝝎 and generating new perturbation 𝜹 𝑛 with G 𝑛 (•|𝜹), the robustness of segmentation against adversarial samples generated by different types of NAAs can be consistently improved. The objective of adversarial defense for the segmentation model is denoted as L 𝑑𝑒 𝑓 , which is usually defined as the same form of L 𝑎𝑡𝑘 .", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: We compare the processed heat maps of pretrained derain model and our proposed framework to show the difference between derain results and groundtruth with both rain streaks and BIM attack (𝜖 = 4/255).", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison of the deraining and segmentation results among different methods on synthesized Cityscapes dataset. 
The second to fourth rows of images represent the deraining results, heat map of the difference between the derained image and clean image, and the segmentation labels.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: We illustrate the performance improvement of our PEARL Framework on PSNR and SSIM based on different attack intensities (𝐾 = 5, 7, 9) and derain models, including RESCAN, PReNet, MPRNet and Transweather.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Illustrating the mIoU and allACC of different classes for NAT, PEARL and Pearl with AMA based on DeepLabv3.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Illustrating the evaluation results of Derain + Seg, PEARL and PEARL with AMA under different attack intensities of BIM (𝐾 = 3, 5, 7).", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Evaluation results with both natural and artificial degradation factors on synthesized Cityscapes dataset. Adversarial attack is generated by BIM (𝐾 = 3, 5, 10), PGD10 and CW, respectively. We report the defense results with perturbation value 𝜖 = 8/255, and more results for the perturbation 𝜖 = 4/255 can be found in the supplementary materials. .36 17.44 2.46 27.61 17.39 2.41 27.40 17.39 2.42 27.87 17.39 1.80 21.74 17.41 Robust Seg 2.16 38.32 17.37 2.08 38.02 17.28 2.06 37.93 17.23 2.05 37.91 17.22 2.09 38.04 17.29 Derain + Seg 9.31 38.28 29.46 3.79 20.02 28.83 1.90 13.56 28.24 1.92 12.84 28.25 3.12 15.13 28.83 NAT 38.39 85.03 29.78 34.31 82.00 28.80 31.37 79.24 28.08 31.31 79.10 28.08 34.57 82.12 28.68 PEARL(Ours) 47.81 88.80 32.62 44.70 87.10 32.31 41.03 83.86 31.86 41.69 84.44 31.88 46.16 87.12 32.30 +AMA(Ours) 48.55 88.81 32.56 46.14 87.55 32.21 43.75 85.95 31.74 44.60 86.34 31.77 47.73 87.84 32.20 the clean image, thus defend the adversarial attack generated by G 𝑛 (•|𝜹), while AMA moves one more step to interpolate the mirror attack of G 𝑛 (•|𝜹) to the ground truth. Consequently, by minimizing L 𝑑𝑒 𝑓 ( C, C + 𝜹 𝑚 ), our proposed framework with AMA turns the decomposition mapping in Eq. (", "figure_data": "MethodsRain+BIM3 mIoU allAcc PSNR mIoU allAcc PSNR mIoU allAcc PSNR mIoU allAcc PSNR mIoU allAcc PSNR Rain+BIM5 Rain+BIM10 Rain+PGD10 Rain+CWSeg2.81 31", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Reporting the defense performance of NAT, PEARL, and PEARL with AMA on the synthesized Cityscapes dataset. BIM Rain + PGD Rain + CW mIoU PSNR mIoU PSNR mIoU PSNR mIoU PSNR NAT 51.94 31.38 33.58 28.03 43.92 30.15 30.89 27.59 PEARL 58.39 33.08 43.70 30.16 52.35 32.77 38.02 31.90 +AMA 57.86 33.12 52.51 32.77 53.68 32.74 41.24 31.76", "figure_data": "MethodsRainRain +", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results of different metrics with single degradation factor, i.e. rain streak.", "figure_data": "Methods mIoU mAcc allAcc PSNR SSIMDerain+Seg 37.52 39.64 97.00 31.41 90.87PEARL31.96 36.37 96.01 33.13 92.69+AMA31.79 36.10 96.53 33.06 93.05", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results of the defense performance on PASCAL VOC dataset. The derain model is the same as the one used in Table 1, while the segmentation model was replaced. 
BIM Rain + PGD Rain + CW mIoU Acc mIoU Acc mIoU Acc mIoU Acc NAT 53.05 79.71 36.59 66.62 36.28 66.03 36.28 65.91 PEARL 58.41 72.84 43.16 73.50 43.32 73.89 43.41 73.86 +AMA 58.38 72.89 43.54 73.95 43.88 74.42 43.87 74.22", "figure_data": "MethodsRainRain +", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "[25,45,57]", "Explanation": "The cited works on adversarial examples have attracted attention and contributed to the understanding of the vulnerability of deep neural networks to these examples, which is important for the citing paper in understanding the need for improved robustness against adversarial perturbation."}, {"Category": "Extension or Continuation", "Citation": "[2,21,47,48]", "Explanation": "The cited works on segmentation results with adversarial perturbation have expanded the research on the effects of these perturbations on segmentation models, providing a basis for the citing paper to explore the impact of these perturbations on real-world applications."}, {"Category": "Data Source", "Citation": "[8,53]", "Explanation": "The cited works on attack and defense methods for improving robustness against adversarial perturbation have provided a foundation for the citing paper to develop new approaches in this area, by leveraging the data and methods from these works."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work introduces the FGSM attack method, which the citing paper adopts in their research to evaluate the vulnerability of segmentation models to adversarial attacks."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work presents the PGD attack method, which the citing paper uses to further explore the effectiveness of adversarial attacks in degrading the performance of segmentation models."}, {"Category": "Methodological Basis", "Citation": "[1,18]", "Explanation": "The cited works have explored the differences between semantic segmentation and image classification to design task-specific segmentation attack methods, which the citing paper references to develop more effective adversarial examples for segmentation models."}, {"Category": "Supporting Evidence", "Citation": "[26,40,43]", "Explanation": "The cited works have proposed the Adversarial Training (AT) method as a defense strategy for improving the robustness of segmentation models, which the citing paper uses to address the vulnerability of segmentation models to adversarial examples."}, {"Category": "Methodological Basis", "Citation": "[51]", "Explanation": "The cited work has applied the Adversarial Training (AT) method to semantic segmentation, which the citing paper references to further explore the performance of AT in improving the robustness of segmentation models."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work has explored the use of teacher-student structure in improving the robustness of segmentation models, which the citing paper references to further develop methods for improving the performance of segmentation models under extreme weather conditions."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work has applied multitask learning to improve the robustness of segmentation models, which the citing paper references to develop methods for improving the performance of segmentation models under extreme weather conditions."}, {"Category": "Methodological Basis", "Citation": "[11,15]", "Explanation": "The cited works on single image deraining provide a method for removing degradation noise from input images, which the citing paper adopts as a low-level preprocessing procedure to improve the performance of downstream segmentation tasks."}, {"Category": "Extension or Continuation", "Citation": 
"[24,31,50,56]", "Explanation": "The cited works on optimization based methods explore different network structures to improve performance in deraining tasks, which the citing paper builds upon to develop new methods for enhancing the adaptability and robustness of segmentation models in real-world applications."}, {"Category": "Data Source", "Citation": "[14,22,29]", "Explanation": "The cited works on deep learning based methods provide specific network structures and training data to improve performance in deraining tasks, which the citing paper utilizes as a reference for developing new methods and training data in the field of deraining."}, {"Category": "Supporting Evidence", "Citation": "[31,58]", "Explanation": "The cited works on incorporating high-level semantic knowledge in deraining tasks provide efficient feedback to facilitate the deraining process, which the citing paper uses as evidence to support the development of new methods that incorporate semantic knowledge in the deraining process."}, {"Category": "Methodological Basis", "Citation": "[11,15]", "Explanation": "The cited works have been well developed to deal with rain streaks and improve downstream tasks, which the citing paper adopts in their research to improve performance in bad weather conditions."}, {"Category": "Extension or Continuation", "Citation": "[31,38]", "Explanation": "The cited works have proposed RESCAN and PReNet to address rain streaks in multiple stages and provide a better baseline for deraining networks, which the citing paper builds upon to further improve the performance of their framework."}, {"Category": "Supporting Evidence", "Citation": "[54]", "Explanation": "The cited work introduces a multi-stage architecture that can balance the competing goals of spatial details and high-level contextualized information in image restoration tasks, which the citing paper leverages to improve the performance of image deraining."}, {"Category": "Extension or Continuation", "Citation": "[44]", "Explanation": "The cited work proposes a transformer-based model with a single encoder and a decoder for image restoration under any weather condition, which the citing paper extends to further improve the performance of image deraining."}, {"Category": "Extension or Continuation", "Citation": "[23,29]", "Explanation": "The cited works explore the use of high-level semantic information, such as detection and segmentation results, to guide the optimization of the deraining process, which the citing paper builds upon to improve the performance of image deraining."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work introduces the FGSM attack method, which the citing paper adopts in their research to generate single-step perturbations for input images."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work introduces the PGD attack method, which the citing paper uses to generate multi-step perturbations for input images in their research."}, {"Category": "Extension or Continuation", "Citation": "[27]", "Explanation": "The cited work introduces the BIM attack method, which the citing paper builds upon to demonstrate the vulnerability of machine learning systems to adversarial examples in physical world scenarios."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work challenges the effectiveness of defensive distillation and introduces the CW attack method, which the citing paper adopts in their research to 
optimize the generation of adversarial examples."}, {"Category": "Methodological Basis", "Citation": "[1,18]", "Explanation": "The cited works have conducted impressive investigation on the robustness of segmentation and introduced effective improvements of the PGD attack, which the citing paper adopts in their research to improve the robustness of segmentation model."}, {"Category": "Extension or Continuation", "Citation": "[26,43]", "Explanation": "The cited works have introduced Adversarial Training (AT) as a general defense method for improving the robustness of the model to adversarial examples, which the citing paper extends by exploring the effectiveness of AT on segmentation model tasks."}, {"Category": "Extension or Continuation", "Citation": "[51]", "Explanation": "The cited work by Xu et.al. has proposed DDC-AT for improving the robustness of deep neural networks on semantic segmentation tasks, which the citing paper extends by exploring the effectiveness of DDC-AT in the context of segmentation model tasks."}, {"Category": "Data Source", "Citation": "[37,41]", "Explanation": "The cited works have investigated different transformations such as image compression and pixel deflection to preprocess the input and remove adversarial perturbations, which the citing paper utilizes as a data source for their research in the field of image processing and segmentation."}, {"Category": "Methodological Basis", "Citation": "[19,32]", "Explanation": "The cited works provide a specific transformation module for removing adversarial noise, which the citing paper adopts in their research to design a transformation module for the same purpose."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work by [30] has been investigated to show the limitations of ground truth data in derain tasks, which has led the citing paper to rethink the supervised clean data and refine them with the proposed auxiliary mirror attack."}, {"Category": "Methodological Basis", "Citation": "[42]", "Explanation": "The cited work establishes the correlation between restoration and objective detection tasks, which the citing paper builds upon to design the Auxiliary Mirror Attack (AMA) generator to generate mirror attacks of the negative attack prior."}, {"Category": "Data Source", "Citation": "[9]", "Explanation": "The cited work, Cityscapes, serves as the training dataset for the model in the citing paper, providing the necessary data for model training and performance evaluation."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work, PASCAL VOC 2012, is also used in the citing paper for testing purposes, demonstrating the generalization ability of the proposed framework."}, {"Category": "Methodological Basis", "Citation": "[60]", "Explanation": "The cited work, PSPNet, is employed as a model for the downstream segmentation task in the citing paper, providing a method for image segmentation that the citing paper builds upon."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work, DeepLabv3, is also used as a model for the downstream segmentation task in the citing paper, further demonstrating the versatility of the method employed in the citing paper."}, {"Category": "Data Source", "Citation": "[20]", "Explanation": "The cited work, ResNet50, is used as a backbone feature extractor in the citing paper, providing a specific model configuration for training and testing."}, {"Category": "Extension or Continuation", "Citation": 
"[44], [54], [38], [31]", "Explanation": "The cited works, TransWeather, MPRNet, PReNet, and RESCAN, are all derain models that are implemented in the citing paper to verify the performance of the proposed framework and its insensitivity to the architecture of derain models."}]
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b6", "b5", "b7", "b9", "b1", "b10", "b14", "b15", "b16", "b17", "b10", "b10", "b10" ], "table_ref": [], "text": "Recent studies show notable differences between autonomous driving systems and human drivers in decisionmaking, raising concerns about the reliability of such systems [1], [2]. Human drivers largely depend on visual assessments to react to road situations, particularly in hazardous scenarios. Recent research suggests that eye-tracking data from human drivers can accurately predict attention, hinting at the potential to enhance autonomous driving decisionmaking [3]- [7]. A lightweight human attention model emerges as a promising tool to bridge this gap. Deployed in selfdriving vehicles, this model could provide real-time visual cues of autonomous decisions, enhancing user trust [6], [8]- [10]. Additionally, coupled with eye-tracking devices, it could help in online training setups, making self-driving models more robust and human-centric [2]. Hence, developing such a Fig. 1: Camera images from BDD-A (first column), Gaze map from BDD-A (second column), Cleansed gaze map from BDD-A (third column).\nHowever, despite the availability of extensive gaze data and the development of gaze prediction models [11]- [15], recent research findings suggest that the approach of using eye-tracking to capture drivers' attention poses certain challenges [16], [17]. Specifically, it has been observed that human attention can be easily diverted towards irrelevant items during the driving process [18], and such unrelated gaze data (Challenge 1) can have a detrimental effect on the training of gaze prediction models. Figure 1 indicates a comparison between the original gaze map in BDD-A [11] (Column 2) and its cleansed version (Column 3). We can see that, although attempts have been made to collect the average gaze from multiple drivers [11], unrelated gaze (e.g., building, tree and poles) still widely exists in the BDD-A dataset compared to our cleansed version [11].\nFurthermore, the gaze prediction models shall have good generalizability when applied to diverse driving scenarios. However, developing an effective gaze prediction model for a new environment often mandates the collection of extensive driving gaze data, which can be costly and time-consuming. Thus, another major challenge lies in the development of a more generalizable gaze prediction model that can perform well across diverse scenarios (Challenge 2). Moreover, for effective in-vehicle ADS decision-making assistance and to provide passengers with visual cues through gaze prediction, it is essential that the model is lightweight to ensure efficient computation on the resource-constrained devices deployed inside the vehicles (Challenge 3).\nTo address the aforementioned challenges, we propose a filtering process aimed at removing unrelated gaze data presented in current datasets, thus allowing the model to focus specifically on driving-related objects during the training process. Additionally, we design a convolutional self-attention model which boosts generalizability by evaluating the interrelations among tokens (regions) in the input images, and trims down model complexity through an efficient token convolution with a smaller kernel size and fewer channels. 
In a nutshell, our major contributions are threefold:
• We present an adaptive cleansing method that employs the object's label and embedding information to generate masked inputs, facilitating the refinement of driving gaze datasets. Our cleansing method, when applied to datasets, allows driving gaze prediction models to attain outstanding results.
• We propose a novel and light-weight convolutional self-attention gaze prediction model that exhibits enhanced generalizability compared to existing gaze prediction models by employing tokenization and self-attention mechanisms.
• Our extensive experiments reveal that our cleansing method improves model performance by as much as 8.75%, while our attention model increases generalizability by up to 12.13% over existing state-of-the-art models. Moreover, our model's parameters make up a mere 1.8% of those in state-of-the-art models.
The structure of this paper is outlined as follows. Section II provides a comprehensive review of the literature related to our investigation. The creation of our dataset is elaborated in Section III, while Section IV describes our modelling approach. The experimental design is highlighted in Section V, and Section VI reports the outcomes of our experiments, which build on the techniques detailed in Sections III and IV. Lastly, potential vulnerabilities of our suggested approaches are discussed in Section VII." }, { "figure_ref": [], "heading": "II. RELATED WORK A. Driving Gaze Datasets", "publication_ref": [ "b18", "b10", "b11", "b19", "b11", "b15", "b17", "b10", "b16" ], "table_ref": [], "text": "Prior research has presented multiple driving gaze datasets. Dr(eye)VE [19] consists of videos collected from real-world driving scenarios, though with a smaller number of road users. BDD-A [11] primarily focused on critical driving situations and employed in-lab methods to collect averaged gaze data from 20 drivers; the data was gathered in San Francisco. Similarly, DADA-2000 [12] was generated using in-lab methods and comprises video clips depicting 54 different types of accidents across various environments. The dataset was collected in China, where cultural and road conditions differ from previous datasets. Additionally, CDNN [20] proposed a dataset that is collected in a similar way to DADA-2000 [12]. To counter the noise collected from eye-tracking equipment [16], [18], BDD-A [11] used multiple drivers' averaged attention data as the ground truth in their work. However, as shown in Figure 1, some ground truth gaze still focuses on unrelated objects such as the sky or trees. In another study, SAGE [17] improved label accuracy by adding objects' semantic segmentation to the ground truth. It used Mask-RCNN to obtain the semantic segmentation of objects such as vehicles and pedestrians. Nonetheless, the resulting gaze map differs considerably from human gaze maps. In comparison, our cleansing method leverages the existing objects' labels and embedding information in the given dataset to extract the bounding boxes of those semantically relevant objects. By masking all the other pixels, we are able to create a more accurate and focused gaze map. The experiments demonstrate that our gaze map successfully focuses on specific objects that are highly relevant to driving scenarios." }, { "figure_ref": [], "heading": "B. 
Gaze Prediction Models", "publication_ref": [ "b10", "b14", "b20", "b14", "b19", "b21" ], "table_ref": [], "text": "Previous research primarily focuses on designing gaze prediction models using pre-trained CNN backbones, such as AlexNet [11], YoloV5 [15], or VGG [21]. This approach can be seen as mapping the extracted features to groundtruth gaze. Additionally, a grid-based gaze prediction method has been proposed, which divides the input into grids and predicts gaze for each grid [15]. These methods have achieved decent quantitative results. Another approach, CDNN [20], provides a gaze predictor without a backbone but requires downsizing the original inputs. A major issue in these prior works is their heavy reliance on CNNs, which have limitations in capturing the relationships among different subfields in the input images. The gaze prediction task actually requires capturing these relationships effectively. Furthermore, previous models often require image pre-processing, either through feature extraction or downsizing. Previous research [22] also explored inverse reinforcement learning in gaze prediction. However, their method requires a range of prior labels involving relative distance, driving tasks, vehicle state, and road information, which can be challenging to obtain, especially across diverse datasets. Unlike the previous approach, ours provides a resource-efficient model. It avoids heavy reliance on convolution filters and pre-trained backbones. Instead, it uses a tokenization method to process the original input, utilizing self-attention to capture region relationships. Our model exhibits strong generalizability across various datasets." }, { "figure_ref": [], "heading": "C. Model Compression", "publication_ref": [ "b22", "b23" ], "table_ref": [], "text": "High-performance deep learning models, characterized by their extensive parameters, consume considerable computational resources. To fit them for mobile deployment, downsizing is essential. While recent studies [23], [24] suggest that model compression can reduce computational demands, it often results in diminished model performance. In our approach, we reduce the model's complexity and inference speed on resource-constrained devices with no compromise in accuracy." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "III. DATASET CLEANSING", "publication_ref": [ "b24" ], "table_ref": [], "text": "Figure 2 outlines our proposed pipeline for cleansing existing driving gaze datasets. As depicted in Figure 2 (a), we first use a pre-trained YOLOv5 [25] object detector to obtain the bounding box information from the original inputs. The detail of this part is given in Section III-A. Subsequently, using this detection model, we can generate masked inputs (Figure 2 (b)) and masked gaze maps (Figure 2 (c)) by applying bounding box-specified masks to the original images and gaze maps.The detail of this part is presented in Section III-B." }, { "figure_ref": [ "fig_0" ], "heading": "A. Unrelated Gaze", "publication_ref": [ "b15", "b17", "b25", "b25", "b24" ], "table_ref": [], "text": "The driving gaze datasets collected from human drivers contain human bias [16], [18]. Currently, the major method to collect human attention is to use eye-tracking equipment, therefore drivers' attention can be drawn by unrelated objects, which could hinder the effectiveness of the collected data for training the model. Here, we define the unrelated gaze as the collected attention that is not directly related to the driving activity. 
For instance, if a driver's attention is not focused on an object or a focused object is not related to driving, then this gaze will be regarded as an unrelated gaze. By investigating current large-scale driving datasets such as BDD-100K [26], we observe that these datasets only provide annotations for specific objects occurring in driving scenes. For example, BDD-100K [26] provides object labels for 10 objects for driving-related tasks including pedestrian, rider, car, truck, bus, train, motorcycle, bicycle, traffic light, and traffic sign. It is natural to think that such objects are thought as important objects in driving scenes, which need to be carefully observed by drivers. Therefore, in this work, we propose to cleanse gaze maps by only keeping gazes on such important objects. To do this, we fine-tune an object detection model YOLOv5 [25] and use it to extract the bounding box information of those objects. With the bounding box information, we can remove the unrelated gaze from the original gaze map to obtain the cleaned gaze map (Figure 2 (a))." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "B. Masked Input and Ground Truth", "publication_ref": [ "b26" ], "table_ref": [], "text": "Several feature extraction approaches, whether based on CNN or VIT [27], extract features from entire input images without weighing the importance of image segments based on semantic meanings. In gaze prediction, some segments bear greater relevance than others. For instance, objects like vehicles and pedestrians offer crucial driving information compared to less relevant elements like the sky or trees. With prior objects' labels and embedding knowledge, we can consider non-critical objects as noise within the current context. As a result, we utilize the bounding box information to identify and select critical objects while masking out all other pixels (Figure 2 (b)) in our cleansing method.\nApart from the label and embedding information, the spatial location of objects is also crucial in the original inputs. Our cleansing method intends to retain as much location information as possible while removing unnecessary objects. In addition to modifying the input, we also process the ground truth gaze map to eliminate labeling noise (e.g., gaze focused on the sky or trees). To accomplish this, we apply the bounding box technique to the original gaze map, selecting only the gaze information located inside the bounding box for those semantically relevant objects (Figure 2 (c))." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "IV. CUEING MODEL", "publication_ref": [], "table_ref": [], "text": "The overall architecture of our proposed model is shown in Figure 3. To design a light-weight gaze prediction model, the first step is to conduct tokenization and positional encoding on original inputs, and then downsample the ground truth gaze map (Figure 3 (a)). After that, to further extract information from these tokens, we stack up these tokens and apply token convolutional layers (Figure 3 (b)) and the transformer encoders on them (Figure 3 (c)). Finally, we calculate the loss and upsample the output of the linear layer to visualize the gaze map (Figure 3 (d))." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "A. Tokenization", "publication_ref": [ "b14" ], "table_ref": [], "text": "Grid-based attention prediction yields satisfactory results in the gaze prediction task, as noted in [15]. 
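Before turning to the model details, the masking step of Section III-B can be illustrated with a minimal sketch. It assumes that bounding boxes of driving-relevant objects are already available (in our pipeline they come from the fine-tuned YOLOv5 detector) and uses an assumed (x1, y1, x2, y2) box format; the function names are illustrative, not taken from our released code.

```python
# Minimal sketch of the Section III cleansing step: keep only pixels inside the
# bounding boxes of driving-relevant objects, in both the camera image and the
# gaze map. Box format (x1, y1, x2, y2) is an assumption for this example.
import numpy as np

def mask_by_boxes(image: np.ndarray, gaze_map: np.ndarray, boxes):
    """image: (H, W, 3) uint8, gaze_map: (H, W) float, boxes: list of (x1, y1, x2, y2)."""
    keep = np.zeros(image.shape[:2], dtype=bool)
    for x1, y1, x2, y2 in boxes:
        keep[y1:y2, x1:x2] = True                          # pixels of relevant objects
    masked_image = np.where(keep[..., None], image, 0)     # masked input (Fig. 2 (b))
    masked_gaze = np.where(keep, gaze_map, 0.0)            # cleansed gaze map (Fig. 2 (c))
    return masked_image, masked_gaze

# Toy usage with random data standing in for a BDD-A frame and its gaze map.
img = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
gaze = np.random.rand(720, 1280).astype(np.float32)
masked_img, cleansed_gaze = mask_by_boxes(img, gaze, [(100, 300, 400, 600), (900, 350, 1100, 500)])
```

The same keep-mask is applied to both the frame and its gaze map, which is what keeps the cleansed gaze aligned with the masked input while preserving the spatial location of the retained objects.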
However, the approach of [15] uses a 1 × 1 kernel size in the convolutional layer to map the extracted features to a gaze value array. This does not fully harness the benefits of the \"grid-based\" approach. In our model, we directly manipulate the original image in a \"token-to-point\" approach. We first tokenize original inputs from driving scenarios X ∈ R 3×H×W to X ∈ R T ×3×H ′ ×W ′ , where H and W refer to the height and width of the original input and T is the number of tokens, which is 256 by default; it can be adjusted to any value that is both a power of two and a perfect square. After conducting the experiments, we determined that 256 strikes a favourable balance between performance and computational efficiency. Since there is the same number of tokens along the rows and the columns, we use a sliding window of size (H/√T, W/√T) to move through the input from the left-top to the right-bottom without any overlap to form tokens. The sliding window is a filter with all parameters set to 1 and is not trainable. We call this operation tokenization. Different from convolution, we calculate the dot product between the filter and the token at each movement but with no sum. The process is illustrated in Figure 3 (a).
Notably, this operation does not need any trainable parameters or floating-point operations. This contributes to making our model lightweight. Our tokenization is driven by two primary motivations. First, we aim to decrease the Giga Floating Point Operations Per Second (GFLOPs) in the subsequent convolution process. Second, we intend to map a sequence of tokens to a set of points. These points depict the intensity of the gaze likely associated with each respective token. To match the second motivation, we downsample the ground truth gaze map into points using a similar idea to tokenization. The difference is that, after receiving the tokens from the ground truth gaze map, we calculate an average gaze value (scaled to between 0 and 1) for each token in the ground truth as the 'point'. The ground truth gaze map is then downsampled from G ∈ R 3×Hgt×Wgt to y ∈ R T ×1 , with the number of points equal to the number of tokens. The default number of points is 256. The downsampled ground truth will be used to calculate the loss (Figure 3 (d)). To get ready for the forthcoming token convolution and attention blocks, we implement the positional encoding for those tokens." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "B. Positional Encoding", "publication_ref": [], "table_ref": [], "text": "Due to the tokenization and the following token convolution and attention blocks, we assign each token a specific position in the original input (Figure 3 (a)). To obtain accurate positional information on tokens, we prefer the location information to act as feature-like information in the original input, so that it can be further processed as a feature by our token convolutional layers in the next stage.
Here we employ a specifically designed absolute positional encoding and assign a two-dimensional coordinate to each token. These coordinates are spread evenly between -1 and 1 across the rows and columns. For example, if we tokenize an input into 3 × 3 tokens, then the first token has coordinates (-1, -1), while the coordinates to its right will be (0, -1) and the coordinates below it will be (-1, 0) (Figure 4). Hence, we can ensure that each token in the original input has a unique coordinate. 
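To make the tokenization, ground-truth downsampling, and coordinate assignment just described concrete, the following is a minimal PyTorch sketch. It assumes T = 256 (a 16 × 16 grid) and that H and W are divisible by √T; because the sliding window is an all-ones, non-trainable filter, tokenization reduces to a reshape into non-overlapping patches. Function names are illustrative, not from our released code.

```python
# Sketch of tokenization, ground-truth downsampling, and the per-token
# coordinate grid. Assumes T = 256 and H, W divisible by sqrt(T).
import torch

T = 256
side = int(T ** 0.5)  # 16 tokens per row/column

def tokenize(x: torch.Tensor) -> torch.Tensor:
    """x: (3, H, W) -> tokens: (T, 3, H // side, W // side)."""
    c, h, w = x.shape
    hp, wp = h // side, w // side
    patches = x.unfold(1, hp, hp).unfold(2, wp, wp)      # (3, side, side, hp, wp)
    return patches.permute(1, 2, 0, 3, 4).reshape(T, c, hp, wp)

def downsample_gaze(g: torch.Tensor) -> torch.Tensor:
    """Ground-truth gaze map (H, W) in [0, 1] -> (T, 1) of per-token mean gaze."""
    h, w = g.shape
    points = g.unfold(0, h // side, h // side).unfold(1, w // side, w // side)
    return points.mean(dim=(-1, -2)).reshape(T, 1)

def coordinate_grid() -> torch.Tensor:
    """(2, side, side) per-token coordinates, evenly spread in [-1, 1]."""
    coords = torch.linspace(-1.0, 1.0, side)
    yy, xx = torch.meshgrid(coords, coords, indexing="ij")
    return torch.stack([xx, yy])

x = torch.rand(3, 720, 1280)
tokens = tokenize(x)                          # (256, 3, 45, 80)
y = downsample_gaze(torch.rand(720, 1280))    # (256, 1), used later as the loss target
pos = coordinate_grid()                       # (2, 16, 16)
```

How the two-dimensional coordinates are projected and added to the token features (via the small convolutional layer described next) is left out of this sketch for brevity.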
To give the positional information the same dimension as the features of the tokenized input, we use a convolutional layer to map all two-dimensional encoded coordinates from P ∈ R 2×H×W to P ∈ R 1×H×W , and then perform an element-wise addition between the positional encoding and those tokens as X = P ⊕ X, where X is the tokenized input. " }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "C. Token Convolution", "publication_ref": [ "b19", "b27" ], "table_ref": [], "text": "To perform token convolution, we stack all the tokens from a single input into a batch, which is indicated as the first step of (b) in Figure 3. The operation is called unfold. Therefore, the dimension in this stage transforms from R T ×3×H ′ ×W ′ to R 3×H ′ ×W ′ with T channels. This is essentially an image-to-column transformation instead of a dimensionality reduction. Then, we use two convolutional layers with a kernel size of 3×3 and 16 kernels to downsize each token (Figure 3 (b)); the output feature map for each token is v ∈ R 16×Hout×Wout . The optimal number of channels was determined through our experiments. Our findings indicate that utilizing a larger channel size (256 or 512) does not yield significant performance improvements. However, it substantially escalates the demand for computational resources. The advantages of token convolution include reduced GFLOPs. This reduction is achieved as the dimensions of each token, both in terms of width and height, are scaled down by a factor of 1/√T from the original input dimensions. According to the GFLOPs required in the convolution operation, the GFLOPs needed for token convolution are merely 1/T of those required for standard convolution [20].
After token convolution, we prepare the stacked tokens for attention blocks. Given our interest in inter-token relationships, we need to perform self-attention between these tokens. We then reshape the convolved token column back to the shape before the unfold operation, and this is the operation we call fold in Figure 3 (b). In this step, the dimension of the feature map v for each token will be laid out flat and the resulting feature map will be R T ×16×Hout×Wout .
To enable self-attention blocks to process our tokens, we must reduce the dimensionality of the token's feature map. We first merge the height and width dimensions of the token's feature map. To accommodate our interest in inter-token relationships, while also maintaining computational efficiency, we prioritize retaining the merged height and width dimension and processing the channel dimension before the attention block. Here we apply channel attention [28] to the output of the convolutional layer to assign each channel a trainable weight, i.e., v = v ⊗ w, where w is the weight of the channels. By applying the weights to the channels and then averaging over the channel dimension, the dimension of the token's feature is reduced from R T ×16×Hout×Wout to R T ×Hout×Wout ." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "D. Attention Between Tokens", "publication_ref": [ "b28" ], "table_ref": [], "text": "Under our settings, the relationship between tokens is important. Ideally, after the attention layers, pixels that could generate gaze should become similar. Therefore, we use transformer encoders [29] for our model. Specifically, we use the symmetric self-attention layer to find the attention scores between tokens, since we want to capture the relationship between all the tokens (Figure 3 (c)). 
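A rough sketch of how the stacked tokens flow through the token convolution, channel attention, and the transformer encoders described above is given below. The layer sizes, the squeeze-style channel attention, and the placement of the final linear head are simplifications and assumptions made for illustration rather than the exact released architecture; the loss at the end is the binary cross-entropy of Eq. (1) below, included only to make the snippet self-contained.

```python
# Sketch of the per-frame forward pass: token convolution over the stacked
# tokens, channel attention, self-attention between tokens, and a linear head
# that outputs one gaze intensity per token. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CueingSketch(nn.Module):
    def __init__(self, t: int = 256, hp: int = 45, wp: int = 80, channels: int = 16):
        super().__init__()
        self.t = t
        # Token convolution: two 3x3 layers with 16 kernels, applied to the stacked tokens.
        self.conv = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        with torch.no_grad():
            out = self.conv(torch.zeros(1, 3, hp, wp))
        d_model = out.shape[-2] * out.shape[-1]            # merged H_out x W_out per token
        # Channel attention: one trainable weight per channel (CBAM-style squeeze).
        self.channel_weight = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )
        # Self-attention between the T tokens.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)                  # one gaze intensity per token

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        """tokens: (T, 3, hp, wp) -> per-token gaze logits of shape (T, 1)."""
        feats = self.conv(tokens)                          # (T, 16, H_out, W_out)
        w = self.channel_weight(feats)[..., None, None]    # (T, 16, 1, 1)
        feats = (feats * w).mean(dim=1)                    # weight, then average the channels
        feats = feats.flatten(1).unsqueeze(0)              # (1, T, H_out * W_out)
        feats = self.encoder(feats).squeeze(0)             # attention between tokens
        return self.head(feats)                            # (T, 1)

model = CueingSketch()
logits = model(torch.rand(256, 3, 45, 80))                 # tokens from a 720x1280 frame
loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.rand(256, 1))
```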
We validate the importance of the attention blocks in our ablation study (Section VI-C). After the transformer encoders, in order to allow the output to be processed by the linear layer, we use a spatial mean in each token to reduce the dimension from R T ×Hout×Wout to R T ×1 . We use the output from the linear layer to calculate the loss, and the loss function we use is the binary cross-entropy loss. Equation 1 gives the loss function, where y is the downsampled ground truth and ŷ is the output of the linear layer.
L(ŷ, y) = -\frac{1}{T} \sum_{i=1}^{T} \left[ y_i \cdot \log(ŷ_i) + (1 - y_i) \cdot \log(1 - ŷ_i) \right] (1)
Finally, to visualize the gaze map, we use interpolation and Gaussian smoothing to upsample the output from the linear layer to form the final gaze map (Figure 3 (d))." }, { "figure_ref": [], "heading": "V. EXPERIMENTAL DESIGN", "publication_ref": [], "table_ref": [], "text": "We propose four research questions (RQs) and conduct corresponding experiments to evaluate the effectiveness of our proposed model and cleansed datasets:
• RQ1: Can our cleansed datasets produce more reasonable human gaze than the baseline dataset?
• RQ2: To what extent do cleansed datasets enhance the performance of human gaze predictions when compared to using baseline (uncleansed) datasets?
• RQ3: How does the generalizability of our human attention model compare to existing gaze prediction models across diverse datasets?
• RQ4: Can our human attention model be efficiently deployed on resource-constrained devices, like mobile devices, without compromising on performance?" }, { "figure_ref": [], "heading": "A. Experimental Setup", "publication_ref": [ "b10", "b11", "b10", "b10", "b19", "b14", "b29", "b30" ], "table_ref": [], "text": "To evaluate these research questions, we produced two cleansed datasets, CUEING-B and CUEING-D, from two large and popular driving gaze datasets, BDD-A [11] and DADA-2000 [12], for comparison. We resized all inputs to 1280×720, which is aligned with the original input size of BDD-A [11]. For baseline model selection, we included the CUEING model and the latest and popular driving gaze prediction models, including HWS [11], CDNN [20], and Where&What [15]. Our code implementation is based on PyTorch [30] and uses the Adam optimizer [31]. We conducted experiments for RQ1, RQ2 and RQ3 on an RTX 3090 GPU, while RQ4 experiments were performed on a Jetson Nano 4G." }, { "figure_ref": [ "fig_2" ], "heading": "B. Experimental Settings 1) RQ1: User Study:", "publication_ref": [ "b10", "b11", "b10", "b11", "b31", "b32", "b33", "b11", "b11", "b14", "b14", "b16", "b10", "b11", "b18", "b10", "b14" ], "table_ref": [], "text": "In order to qualitatively understand whether our cleansed datasets produce more reasonable human gaze maps than the baseline dataset, we conducted a user study to compare BDD-A [11] with CUEING-B and DADA-2000 [12] with CUEING-D. BDD-A [11] and CUEING-B contain 30,073 images, and DADA-2000 [12] and CUEING-D contain 22,171 images. To ensure a 95% confidence level with a 10% confidence interval, we randomly sampled 100 images with gaze maps from both original datasets and their cleansed versions, respectively. In those sampled groups, to obtain a quantitative measurement, we let users decide how many clusters of gaze each image contains and how many of them are reasonable. For example, in Figure 5, the left image has 4 clusters of gaze, and 1 of them is unreasonable, while the right image has 2 clusters of gaze, and all of them are reasonable. 
We recruited 10 workers on Amazon mTurk [32] to participate in the user study; all of them are car owners with extensive driving experience. They received $1 for each survey they took, and we made 5 surveys in total. During the user study, we did not instruct users on what kinds of gazes are reasonable. Users had to rely entirely on their subjective perceptions to determine which image contains a more reasonable gaze.
After collecting the user study results, we calculated statistical information including the mean values of the total number of gazes and the number of reasonable gazes, and the ratio of reasonable gazes over the total number of gazes on all datasets.
2) RQ2: Cleansing Method: To evaluate the effectiveness of the proposed cleansing method, we followed the principle of \"training on synthetic and testing on real\" [33], [34]: we trained the gaze prediction models on the original dataset and the cleansed (synthesized) dataset respectively, but evaluated the models only on the original datasets. If the performance of the gaze prediction models increases, the cleansing method is proven effective in enhancing gaze prediction models. We chose DADA-2000 [12] as our original dataset. Because the ground truth of DADA-2000 [12] is sourced from a single driver, unrelated gaze during driving is more noticeable than in the averaged gaze collected from multiple drivers.
The performance of gaze prediction models on different datasets was measured at the object level. We measured whether the gaze prediction models can focus the gaze on the correct objects in the ground truth gaze map, quantified by the metrics Accuracy, Precision, Recall, F1, and AUC. In order to determine whether an object is considered 'focused', we defined a threshold criterion based on the gaze predictions within the bounding box of the detected object. Following the approach used in [15], if the maximum predicted gaze value within the bounding box exceeds the threshold of 0.5 (from the empirical study of [15]), the object is deemed as being focused on by the driver. This threshold is independent of the training and pixel-level metrics and is used primarily for object-level evaluation (a small sketch of this criterion is given below). We also compared the proposed cleansing method with another data processing method, SAGE [17]. We used SAGE to create the dataset SAGE-B from BDD-A and then followed the same process to train gaze prediction models on the synthesized datasets. We compared the performance of models trained on SAGE-B with models trained on CUEING-B to identify which method is more effective in improving the performance of gaze prediction models. The evaluation metrics are the same as above.
3) RQ3: Generalizability: We assessed the model's generalizability in two ways. First, a model has strong generalizability if it achieves strong performance on the target dataset directly after training on the source dataset. Second, a model was deemed to be more generalizable if, after being trained on the source dataset, its performance improved when fine-tuned with just 2% of the target dataset's training data.
In the experiment, to avoid the possible influence of the proposed dataset cleansing method, we only employed the original BDD-A [11] as the source dataset and used DADA-2000 [12] and Dr(eye)VE [19] as target datasets, respectively. 
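As noted above, the object-level 'focused' criterion used in RQ2 (and reused in RQ3) can be sketched as follows. The (x1, y1, x2, y2) box format and the way ground-truth focus labels are derived from the ground-truth gaze map are assumptions made only to keep the example self-contained.

```python
# Sketch of the object-level criterion: an object counts as "focused" when the
# maximum predicted gaze value inside its bounding box exceeds 0.5.
import numpy as np

def focused(gaze_map: np.ndarray, box, threshold: float = 0.5) -> bool:
    x1, y1, x2, y2 = box
    return float(gaze_map[y1:y2, x1:x2].max()) > threshold

def object_level_scores(pred_map, gt_map, boxes, threshold=0.5):
    pred = np.array([focused(pred_map, b, threshold) for b in boxes])
    gt = np.array([focused(gt_map, b, threshold) for b in boxes])
    tp = np.sum(pred & gt); fp = np.sum(pred & ~gt); fn = np.sum(~pred & gt)
    acc = np.mean(pred == gt)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return {"accuracy": acc, "precision": prec, "recall": rec}

scores = object_level_scores(np.random.rand(720, 1280), np.random.rand(720, 1280),
                             [(100, 300, 400, 600), (900, 350, 1100, 500)])
```

Under this criterion, precision penalizes predictions that light up objects the driver did not actually focus on, which is why ground truth that marks almost every object tends to inflate recall without improving precision (cf. Section VI-B).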
To evaluate the second kind of generalizability, we randomly sampled 2% of images from the corresponding target datasets for fine-tuning gaze prediction models.\nWe used the object level metrics introduced in Section V-B2 to assess the models' general performance and generalizability in this RQ. Meanwhile, we introduce additional pixel level metrics for model evaluation. For the pixel level, we measured the similarity between the generated gaze maps and the ground truth gaze maps, using Kullback-Leibler divergence (D KL ) and Pearson's Correlation Coefficient (CC) metrics as in previous works [11], [15]. In RQ2, we did not evaluate our cleansing method at the pixel level. Given our approach of \"training on synthetic and testing on real\", it's understood that synthesized data differs in distribution from real data." }, { "figure_ref": [ "fig_4" ], "heading": "4) RQ4: Mobile Device Testing:", "publication_ref": [ "b14", "b19" ], "table_ref": [], "text": "To assess if the model is suitable for deploying on a mobile device, we first compared different models' complexity. We employed the same metrics as in [15], which include the number of trainable parameters and GFLOPs. These metrics provide a standardized measure for assessing the complexity of different models.\nWe assessed the efficiency of the two gaze prediction models CDNN [20] and CUEING on a mobile device with Nvidia Jetson Nano 4G (Figure 6). We selected CDNN as the comparison baseline because, among the baseline models, it has the second lowest model complexity. Additionally, while CDNN uses standard CNN for input processing, our method utilizes token convolution. On the mobile device, we assessed the energy consumption, processing unit overhead, and inference speed for both models on processing a single image as these metrics are crucial for model deployment on mobile devices." }, { "figure_ref": [], "heading": "VI. RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_5", "fig_5", "fig_6", "fig_6" ], "heading": "A. RQ1: User Study", "publication_ref": [ "b31", "b24", "b25", "b25", "b10", "b10", "b14" ], "table_ref": [], "text": "To process our collected user study results from mTurk [32], we first calculated the mean value using the answers from all users for each question. We then used these mean values to find the ratio of reasonable gaze in each image, and finally, we Min-Max normalized the ratio from the original dataset and its cleansed version to obtain the quantitative results. Figure 7 indicates the quantitative result of our user study. We can see that users consider the datasets that have used our cleansing method (red and green) to contain a more reasonable gaze than their corresponding original datasets (blue and yellow) in both medians and means.\nHere, we further explore the reasons for the outliers in Figure 7. In Figure 8, we present some examples of outliers. The initial row showcases examples from DADA-2000 (on the left) and CUEING-D (on the right). After our cleansing process, our sample has no gaze. This highlights a limitation in our cleansing approach. Given that we train the YOLOv5 model [25] using labels from BDD-100K [26], the detectable objects are confined by the label availability in BDD-100K. In this instance, it is reasonable for the driver to look at the bushes in this circumstance, but the bush is not in the BDD-100K's [26] label set. 
This points out an interesting future research direction where we can generate more valuable label sets for existing driving gaze datasets.\nThe second row in Figure 8 displays examples from BDD-A [11] and CUEING-B. In the second row, the left image from BDD-A [11] demonstrates a scenario where both datasets fail to detect the white car, despite it being the object of focus for most drivers in similar situations. However, it is important to note that this image represents a single frame from the video, and it is possible that the driver may have shifted their attention to the car later in the video. This highlights the significance of incorporating time series data in gaze prediction rather than relying solely on individual frames. Previous research has indicated that integrating time series data at this stage does not enhance performance and may introduce significant centre bias [15]. As a potential future direction, training a robust gaze attention model on videos is essential." }, { "figure_ref": [ "fig_7" ], "heading": "B. RQ2: Cleansing Method", "publication_ref": [ "b14", "b16", "b16", "b16", "b16", "b16", "b16" ], "table_ref": [ "tab_0", "tab_0", "tab_1" ], "text": "Table I and II list the evaluation result of RQ2. In Table I, most models trained by our cleansed datasets have different levels of improvement in object-level metrics. Specifically, the CUEING model and Where&What [15] are improved up to 7.38% and 6.25% in accuracy, 8.75% and 3.57% in AUC.\nIn Table II, under the principle of 'training on synthetic and testing on real', the performance of SAGE-B [17] is significantly worse than the performance of CUEING-B, but it is worth noting that the recalls of the models trained on SAGE-B [17] are very high. This is because SAGE [17] almost includes all objects' semantic segmentation from the input in their ground truth. Figure 9 indicates a comparison between the generated gaze map using the CUEING model trained by SAGE-B [17] and CUEING-B, the gaze map generated by the model trained by SAGE-B [17] is quite different from real human's gaze. Meanwhile, SAGE [17] tends to capture all objects in the input instead of focusing on the important objects, which hinders the ability of models to capture important objects to benefit ADS's decision-making. As a future direction, we intend to evaluate the importance of objects identified by the gaze model when co-training critical driving models. The goal is to enhance the robustness of selfdriving models, making them aligned with human knowledge." }, { "figure_ref": [], "heading": "C. RQ3: Generalizability", "publication_ref": [ "b14", "b11", "b14", "b18", "b10", "b11", "b11", "b10" ], "table_ref": [ "tab_2", "tab_3", "tab_4" ], "text": "In Tables III and IV, we present the evaluation of models' generalizability on two target datasets. Our CUEING model consistently outperforms the other baseline models, both with and without fine-tuning. For example, in Table III, without fine-tuning, the accuracy of CUEING reaches 58.67%, which outperforms the current state-of-the-art (e.g., Where&What [15]) by 3.54%. When fine-tuning with a portion of DADA-2000 data [12], the CUEING model surpasses the state-ofthe-art (e.g., Where&What [15]) by up to 12.13% in the pixel level metrics (KL value of 2.10 versus 2.39).\nIn Table IV, similar outcomes are evident in the generalization task for Dr(eye)VE [19]. 
The CUEING model exhibits a notable advantage in precision, achieving 67.99% without fine-tuning, and its performance on the other metrics closely approaches the state-of-the-art.
Generalizability Ablation Study. The CUEING model demonstrates good generalizability. However, its workings remain opaque. To determine which component of the model contributes most to its generalizability, we have undertaken an ablation study focused on this aspect. We compared the results from freezing all layers before the last linear layer and freezing all attention blocks in the CUEING model trained by CUEING-B.
From Table V, we can find that the performance decrease for the model trained on the CUEING-B dataset is larger than that for the model trained on BDD-A [11] under all the settings. This is because when we generalized our model to DADA-2000 [12], we did not apply masks to the input of DADA-2000 [12], so the distribution of the input data differs largely from that of the CUEING-B dataset. Therefore, from the result of the CUEING model trained on the BDD-A [11] dataset, we find that if we transfer our model under a similar distribution, only allowing the last linear layer to be trained can reach a similar result to training all the parameters, with only a 2.67% increase in the KL-divergence. From the result of the CUEING model trained on the CUEING-B dataset, we can find that even if we allow all the blocks other than the attention blocks to be trained, the KL-divergence still increases by 11.11% compared to allowing all the parameters to be trained. Clearly, attention blocks are a crucial component of our model." }, { "figure_ref": [], "heading": "D. RQ4: Mobile Device Testing", "publication_ref": [ "b14", "b19", "b19", "b19", "b29", "b19", "b19" ], "table_ref": [], "text": "Table VI presents the model complexity metrics for all baseline models. Remarkably, our model delivers commendable results using merely 1.8% of the model parameters needed by Where&What [15], and the GFLOPs required are just 6.12% of what Where&What consumes. Our model distinctly excels in both complexity and performance compared to other baseline models, making it a promising candidate for deployment on vehicles.
We evaluated CDNN [20] and CUEING on a Jetson Nano 4G. While CUEING consumed 3W for a single image (1280×720) inference and took 0.876s, CDNN [20] was inoperable on the Jetson Nano even with the half-sized input due to its hefty computational requirements. We pruned CDNN [20] using PyTorch [30], and it remained too large for the Jetson Nano 4G. We further explored the reason on an Intel 12600-KF CPU: CDNN [20] required 241Mb of memory and 0.728s to infer a single image, whereas in the same setting CUEING only required 164Mb and 0.048s. Even when halving the input size for CDNN [20], it still demanded 0.211s. The processor's ability to process batches in parallel during our tokenization and unfold operations contributes to the enhanced performance. While model compression has its merits, the superiority of token convolution in single-image inference highlights the criticality of reducing model computation. " }, { "figure_ref": [], "heading": "VII. THREAT TO VALIDITY", "publication_ref": [ "b10", "b11", "b16", "b18", "b24", "b34", "b35" ], "table_ref": [], "text": "The external validity concerns the generalizability of the proposed method. To counter this threat, we conducted experiments on large-scale datasets including BDD-A [11], DADA-2000 [12], SAGE [17], and Dr(eye)VE [19]. 
The experimental results highlighted the effectiveness of the proposed method across different datasets. The construct validity concerns the human-centric study in this work. To ensure that the human study can correctly reflect the accuracy of the gaze maps produced by different methods, we targeted participants who have driving backgrounds and the real-world expertise to emulate genuine reactions in driving scenarios. The internal validity arises from the choice of YOLOv5 [25] for the cleansing process.
Though various object detection models have been proposed in recent years, YOLOv5 is more lightweight and can achieve the highest running speed while maintaining object detection performance similar to that of more advanced detectors such as YOLOv7 [36], as reported in [35]." }, { "figure_ref": [], "heading": "VIII. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "This paper presents a novel method to address data labeling noise, generalization gaps, and model complexity in driving gaze datasets and models, using an adaptive dataset cleansing approach and a lightweight convolutional self-attention gaze prediction model. Our tests validate the effectiveness and efficiency of our model, especially on mobile devices.
Additionally, our work suggests future research directions, such as developing robust gaze models through multi-camera video data and unconventional semantic objects in driving scenes. Further examination of the effect of a streamlined gaze prediction model, together with a refined gaze dataset, on driving models could provide valuable insights, potentially enabling online training in self-driving cars to enhance robustness and personalized driving experiences." } ]
10.5281/zenodo.4154370
[ { "authors": "O Wagner", "journal": "National Public Radio", "ref_id": "b0", "title": "Nearly 400 car crashes in 11 months involved automated tech, companies tell regulators", "year": "" }, { "authors": "A Paul", "journal": "", "ref_id": "b1", "title": "", "year": "2021" }, { "authors": "F Codevilla; M Müller; A López; V Koltun; A Dosovitskiy", "journal": "IEEE", "ref_id": "b2", "title": "End-to-end driving via conditional imitation learning", "year": "2018" }, { "authors": "Y Xia; J Kim; J Canny; K Zipser; T Canas-Bajo; D Whitney", "journal": "", "ref_id": "b3", "title": "Periphery-fovea multi-resolution driving model guided by human attention", "year": "2020" }, { "authors": "R Zhang; F Torabi; G Warnell; P Stone", "journal": "Autonomous Agents and Multi-Agent Systems", "ref_id": "b4", "title": "Recent advances in leveraging human guidance for sequential decision-making tasks", "year": "2021" }, { "authors": "W Bao; Q Yu; Y Kong", "journal": "", "ref_id": "b5", "title": "Drive: Deep reinforced accident anticipation with visual explanation", "year": "2021" }, { "authors": "Y Wang; J Jiang; S Li; R Li; S Xu; J Wang; K Li", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b6", "title": "Decisionmaking driven by driver intelligence and environment reasoning for high-level autonomous vehicles: A survey", "year": "2023" }, { "authors": "P Larsson", "journal": "", "ref_id": "b7", "title": "", "year": "2022" }, { "authors": "K Wiggers", "journal": "", "ref_id": "b8", "title": "", "year": "2020" }, { "authors": "A James", "journal": "", "ref_id": "b9", "title": "", "year": "2022" }, { "authors": "Y Xia; D Zhang; J Kim; K Nakayama; K Zipser; D Whitney", "journal": "Springer", "ref_id": "b10", "title": "Predicting driver attention in critical situations", "year": "2018" }, { "authors": "J Fang; D Yan; J Qiao; J Xue; H Wang; S Li", "journal": "IEEE", "ref_id": "b11", "title": "Dada-2000: Can driving accident be by driver attentionƒ analyzed by a benchmark", "year": "2019" }, { "authors": "A Palazzi; D Abati; F Solera; R Cucchiara", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b12", "title": "Predicting the driver's focus of attention: the dr (eye) ve project", "year": "2018" }, { "authors": "J Fang; D Yan; J Qiao; J Xue; H Yu", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b13", "title": "Dada: Driver attention prediction in driving accident scenarios", "year": "2021" }, { "authors": "Y Rong; N.-R Kassautzki; W Fuhl; E Kasneci", "journal": "ETRA", "ref_id": "b14", "title": "Where and what: Driver attention-based object detection", "year": "2022" }, { "authors": "I Kotseruba; J K Tsotsos", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b15", "title": "Attention for vision-based assistive and automated driving: A review of algorithms and datasets", "year": "2022" }, { "authors": "A Pal; S Mondal; H I Christensen", "journal": "", "ref_id": "b16", "title": "looking at the right stuff\"-guided semantic-gaze for autonomous driving", "year": "2020" }, { "authors": "C Ahlström; K Kircher; M Nyström; B Wolfe", "journal": "Frontiers in neuroergonomics", "ref_id": "b17", "title": "Eye tracking in driver attention research-how gaze data interpretations influence what we learn", "year": "2021" }, { "authors": "S Alletto; A Palazzi; F Solera; S Calderara; R Cucchiara", "journal": "", "ref_id": "b18", "title": "Dr (eye) ve: a dataset for attention-based tasks with 
applications to autonomous and assisted driving", "year": "2016" }, { "authors": "T Deng; H Yan; L Qin; T Ngo; B Manjunath", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b19", "title": "How do drivers allocate their potential attention? driving fixation prediction via convolutional neural networks", "year": "2019" }, { "authors": "K Simonyan; A Zisserman", "journal": "", "ref_id": "b20", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2015" }, { "authors": "S Baee; E Pakdamanian; I Kim; L Feng; V Ordonez; L Barnes", "journal": "", "ref_id": "b21", "title": "Medirl: Predicting the visual attention of drivers via maximum entropy deep inverse reinforcement learning", "year": "2021" }, { "authors": "M E Celebi; H A Kingravi", "journal": "", "ref_id": "b22", "title": "Linear, deterministic, and orderinvariant initialization methods for the k-means clustering algorithm", "year": "2015" }, { "authors": "S Han; H Mao; W J Dally", "journal": "", "ref_id": "b23", "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "year": "2015" }, { "authors": "G Jocher; A Stoken; J Borovec; Nanocode012; L Christopherstan; Changyu; A Laughing; Hogan; ; F Alexwang; Ingham; Frederik; Guilhen; J Hatovix; J Poznanski; L Fang; M Yu; N Wang; O Gupta; Akhtar; P Petrdvoracek; Rai", "journal": "", "ref_id": "b24", "title": "ultralytics/yolov5: v3.1 -Bug Fixes and Performance Improvements", "year": "1900" }, { "authors": "F Yu; H Chen; X Wang; W Xian; Y Chen; F Liu; V Madhavan; T Darrell", "journal": "", "ref_id": "b25", "title": "Bdd100k: A diverse driving dataset for heterogeneous multitask learning", "year": "2020-06" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly", "journal": "", "ref_id": "b26", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2010" }, { "authors": "S Woo; J Park; J.-Y Lee; I S Kweon", "journal": "", "ref_id": "b27", "title": "Cbam: Convolutional block attention module", "year": "2018" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b28", "title": "Attention is all you need", "year": "2017" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga; A Desmaison; A Kopf; E Yang; Z Devito; M Raison; A Tejani; S Chilamkurthy; B Steiner; L Fang; J Bai; S Chintala", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Pytorch: An imperative style, highperformance deep learning library", "year": "" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b30", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "K Crowston", "journal": "Springer", "ref_id": "b31", "title": "Amazon mechanical turk: A research tool for organizations and information systems scholars", "year": "2012" }, { "authors": "G Varol; J Romero; X Martin; N Mahmood; M J Black; I Laptev; C Schmid", "journal": "", "ref_id": "b32", "title": "Learning from synthetic humans", "year": "2017" }, { "authors": "X Peng; B Usman; N Kaushik; D Wang; J Hoffman; K Saenko", "journal": "", "ref_id": "b33", "title": "Visda: A synthetic-to-real benchmark for visual domain adaptation", "year": "2018" }, { "authors": "R 
Sovit; G Vikas", "journal": "", "ref_id": "b34", "title": "", "year": "2022" }, { "authors": "C Wang; A Bochkovskiy; H M Liao", "journal": "IEEE", "ref_id": "b35", "title": "Yolov7: Trainable bagof-freebies sets new state-of-the-art for real-time object detectors", "year": "2023" } ]
[ { "formula_coordinates": [ 5, 316.96, 116.45, 246.08, 30.32 ], "formula_id": "formula_0", "formula_text": "L(ŷ, y) = - 1 T T i=1 y i • log (ŷ i ) + (1 -y i ) • (1 -log (ŷ i )) (1)" } ]
CUEING: a lightweight model to Capture hUman attEntion In driviNG
Discrepancies in decision-making between Autonomous Driving Systems (ADS) and human drivers underscore the need for intuitive human gaze predictors to bridge this gap, thereby improving user trust and experience. Existing gaze datasets, despite their value, suffer from noise that hampers effective training. Furthermore, current gaze prediction models exhibit inconsistency across diverse scenarios and demand substantial computational resources, restricting their on-board deployment in autonomous vehicles. We propose a novel adaptive cleansing technique for purging noise from existing gaze datasets, coupled with a robust, lightweight convolutional self-attention gaze prediction model. Our approach not only significantly enhances model generalizability and performance by up to 12.13% but also ensures a remarkable reduction in model complexity by up to 98.2% compared to the state-of-the-art, making in-vehicle deployment feasible to augment ADS decision visualization and performance.
Linfeng Liang; Yao Deng; Yang Zhang; Jianchao Lu; Chen Wang; Quanzheng Sheng; Xi Zheng
[ { "figure_caption": "Fig. 2 :2Fig. 2: Dataset Cleansing Process.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: The overall architecture of our proposed CUEING model.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Example of Positional Encoding", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: Example of user study interface, the left side is the overlay of image and gaze map from DADA-2000, the right side is the overlay of image and gaze map from CUEING-D.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: The Jetson nano we used to conduct the mobile device experiment.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 :7Fig. 7: Quantitative result of the user study. The order of the boxes in the figure is BDD-A, CUEING-B, DADA-2000, and CUEING-D (left to right).", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 :8Fig. 8: Outlier examples, the first row is from DADA-2000 (left) and CUEING-D (right), the second row is from BDD-A (left) and CUEING-B (right).", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 :9Fig. 9: Example of prediction results generated by the CUE-ING model trained on SAGE-B (left) and CUEING-B (right).", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Selected models' performance trained by CUEING-D and DADA-2000 datasets, and test on DADA-2000 test set. The best result is bold.", "figure_data": "ModelObject Level (CUEING-D/DADA-2000)Acc(%)Prec(%)Recall(%)F1(%)AUCCDNN41.18/47.0425.89/25.94 84.74/71.45 39.66/38.06 0.58/0.59Where&What 85.14/78.89 46.81/63.14 65.71/65.54 54.67/64.32 0.87/0.84HWS47.16/45.79 25.04/25.4366.04/71.4536.32/37.510.55/0.57CUEING85.80/78.42 48.42/52.50 62.12/57.04 54.42/54.68 0.87/0.80", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Selected models' performance trained by CUEING-B and SAGE-B datasets, and test on BDD-A test set. The best result is bold.", "figure_data": "ModelObject Level (CUEING-B/SAGE-B)Acc(%)Prec(%)Recall(%)F1(%)AUCCDNN53.48/38.7644.27/37.74 97.26/99.78 60.85/54.770.84/0.60Where&What 78.11/53.4267.60/44.93 72.19/90.46 69.70/60.050.84/0.63HWS57.19/49.6546.21/41.91 92.54/91.9261.64/57.570.78/0.62CUEING77.39/59.1071.64/47.17 68.84/81.86 70.21/59.440.84/0.68", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Model performance by using BDD-A as source dataset Fine-tuned with DADA-2000 as target dataset. The best result is bold. 
In the table, * indicates that the model is fine-tuned on the corresponding dataset, and † informs the direct prediction without fine-tuning.", "figure_data": "ModelObject LevelPixel LevelAcc(%) Prec(%)Recall(%) F1(%) AUCKLCCCDNN*36.9324.8487.3738.680.533.52 0.16Where&What* 79.5155.4951.5853.460.792.35 0.33HWS*42.8725.1976.3637.890.565.71 0.08CUEING*78.7152.9859.5556.080.802.19 0.35CDNN †36.8024.8487.6538.710.533.150.05Where&What † 55.1324.4246.1631.940.563.36 0.09HWS †46.2925.3469.5537.140.554.620.13CUEING †58.6724.7339.7030.480.563.65 0.07", "figure_id": "tab_2", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Model performance by using BDD-A as source datasets Fine-tuned by Dr(eye)VE as target dataset. The best result is bold. In the table, * indicates that the model is fine-tuned on the corresponding dataset, while † informs the direct prediction without fine-tuning.", "figure_data": "ModelObject LevelPixel LevelAcc(%)Prec(%) Recall(%) F1(%) AUC KLCCCDNN*52.1745.9298.7262.680.862.070.40Where&What* 79.4573.8776.8775.340.871.540.57HWS*65.9154.9689.9068.210.833.300.43CUEING*78.4173.1974.3873.780.851.710.53CDNN †55.4547.6998.2064.200.861.960.44Where&What † 76.5467.9880.4673.700.851.760.50HWS †67.0956.1187.7768.450.822.960.46CUEING †75.8567.9977.2472.320.831.880.48", "figure_id": "tab_3", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "Ablation Study on Generalizability of CUEING Model. In the table, † indicates that the model is trained via BDD-A, * informs it is trained via CUEING-B Dataset.", "figure_data": "Freeze BlockKLCCAll except Linear*2.49 0.29Attention Module*2.35 0.32None of the layers* 2.10 0.38All except Linear †2.28 0.34Attention Module †2.23 0.35None of the layers † 2.19 0.36", "figure_id": "tab_4", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "Model complexity of different gaze prediction models, and the best results are bold.", "figure_data": "ModelParam.(M) GFLOPsCDNN0.685.06Where&What7.5217.0HWS3.7521.18CUEING0.140.31", "figure_id": "tab_5", "figure_label": "VI", "figure_type": "table" } ]
[{"Category": "Data Source", "Citation": "[1]", "Explanation": "The cited work provides a study that shows differences in decision-making between autonomous driving systems and human drivers, which serves as a data source for the citing paper to build upon in their research."}, {"Category": "Supporting Evidence", "Citation": "[2]", "Explanation": "The cited work further supports the claim in the citing paper that human drivers rely heavily on visual assessments in road situations, indicating the need for attention models in autonomous driving systems."}, {"Category": "Methodological Basis", "Citation": "[3]- [7]", "Explanation": "The cited works provide research on the use of eye-tracking data to predict attention in human drivers, which the citing paper builds upon to enhance autonomous driving decision-making."}, {"Category": "Extension or Continuation", "Citation": "[6], [8]- [10]", "Explanation": "The cited works suggest the potential for attention models to improve user trust in autonomous driving systems and online training setups, which the citing paper expands upon in its research."}, {"Category": "Data Source", "Citation": "[11]- [15]", "Explanation": "The cited works provide research on gaze prediction models, which the citing paper uses as a data source to develop a human attention model for autonomous driving systems."}, {"Category": "Data Source", "Citation": "[16], [17]", "Explanation": "The cited works highlight challenges in using eye-tracking to capture drivers' attention, which the citing paper acknowledges in its research to address these issues in the development of attention models for autonomous driving systems."}, {"Category": "Supporting Evidence", "Citation": "[18]", "Explanation": "The cited work provides evidence that human attention can be easily diverted towards irrelevant items during the driving process, which is relevant to the discussion in the citing paper about the detrimental effect of unrelated gaze data on the training of gaze prediction models."}, {"Category": "Data Source", "Citation": "[11]", "Explanation": "The cited work is the source of the original gaze map in BDD-A and the cleansed version of the same dataset, which the citing paper uses to illustrate the presence of unrelated gaze in the BDD-A dataset and the need for data cleaning in the development of gaze prediction models."}, {"Category": "Extension or Continuation", "Citation": "[11]", "Explanation": "The cited work is the source of the BDD-A dataset, which the citing paper builds upon by developing a more generalizable gaze prediction model that can perform well across diverse driving scenarios."}, {"Category": "Data Source", "Citation": "[19]", "Explanation": "Dr(eye)VE is a driving gaze dataset that the citing paper uses as a source of data for real-world driving scenarios."}, {"Category": "Data Source", "Citation": "[11]", "Explanation": "BDD-A is a driving gaze dataset that the citing paper uses as a source of data for critical driving situations in San Francisco."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "DADA-2000 is a driving gaze dataset that the citing paper uses as a source of data for video clips depicting accidents in various environments in China."}, {"Category": "Data Source", "Citation": "[20]", "Explanation": "CDNN is a driving gaze dataset that the citing paper uses as a source of data for a similar study to DADA-2000 in terms of data collection methods."}, {"Category": "Data Source", "Citation": "[16], [18]", "Explanation": "The citing paper 
mentions the noise collected from eye-tracking equipment in previous studies, which is a data source for the work in BDD-A."}, {"Category": "Data Source", "Citation": "[11]", "Explanation": "BDD-A uses multiple drivers' averaged attention data as ground truth in their work, which is a data source for the study in the citing paper."}, {"Category": "Data Source", "Citation": "[17]", "Explanation": "SAGE uses Mask-RCNN to obtain objects' semantic segmentation for ground truth in their work, which is a data source for the study in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, AlexNet, is a pre-trained CNN backbone that the citing paper uses in designing gaze prediction models."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work, YoloV5, is a pre-trained CNN backbone that the citing paper uses in designing grid-based gaze prediction methods."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work, VGG, is a pre-trained CNN backbone that the citing paper uses in designing gaze prediction models."}, {"Category": "Data Source", "Citation": "[20]", "Explanation": "The cited work, CDNN, is a gaze predictor that the citing paper uses as a data source for the proposed model."}, {"Category": "Extension or Continuation", "Citation": "[22]", "Explanation": "The cited work, inverse reinforcement learning in gaze prediction, is discussed as a method that the citing paper extends to provide a resource-efficient model without the need for prior labels."}, {"Category": "Methodological Basis", "Citation": "[23], [24]", "Explanation": "The cited works suggest that model compression can reduce computational demands, which the citing paper adopts as a method to reduce the model's complexity and inference speed on resource-constrained devices without compromising accuracy."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work, YOLOv5, serves as the basis for the object detection model used in the proposed pipeline for cleansing existing driving gaze datasets."}, {"Category": "Data Source", "Citation": "[16], [18]", "Explanation": "The cited works are used as a reference to highlight the human bias in driving gaze datasets collected from human drivers, which is a foundational element for the study of the effectiveness of the collected data in training models."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work, BDD-100K, is used to provide a specific example of a driving dataset that only provides annotations for driving-related objects. 
The citing paper builds upon this work by proposing a method to cleanse gaze maps by focusing on these important objects in driving scenes."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work YOLOv5 is used as the basis for the fine-tuning of the object detection model in the citing paper, which is essential for extracting the bounding box information of objects in the data."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work provides a feature extraction approach based on CNN or VIT that the citing paper adopts in their research to extract features from input images in gaze prediction tasks."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work introduces the concept of grid-based attention prediction, which the citing paper adopts in their approach to map extracted features to a gaze value array."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work provides the GFLOPs required in the convolution operation, which the citing paper uses to determine the GFLOPs needed for token convolution in their research."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work introduces the concept of channel attention, which the citing paper adopts to assign weights to the channels in the convolutional layer output. This method is used to reduce the feature dimension of the tokens, which is a key step in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work on transformer encoders is used as the basis for the model design in the citing paper, specifically in the use of symmetric self-attention layers to capture the relationship between tokens."}, {"Category": "Data Source", "Citation": "[11]", "Explanation": "The cited work BDD-A is used as a data source for the evaluation of research questions in the citing paper."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "The cited work DADA-2000 is also used as a data source for the evaluation of research questions in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work BDD-A is used as a reference for the resized input size of 1280\u00d7720 in the baseline model selection of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work CDNN is used as a reference for the inclusion in the baseline model selection of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work Where&What is also used as a reference for the inclusion in the baseline model selection of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work Pytorch is used as a code implementation base for the experiments conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work Adam optimizer is used in the experiments conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[11]", "Explanation": "The cited work, BDD-A, is extended by the citing paper to include a new dataset, CUEING-B, for qualitative analysis of human gaze maps."}, {"Category": "Extension or Continuation", "Citation": "[12]", "Explanation": "The cited work, DADA-2000, is extended by the citing paper to include a new dataset, CUEING-D, for qualitative 
analysis of human gaze maps."}, {"Category": "Data Source", "Citation": "[32]", "Explanation": "The cited work, Amazon mTurk, is the source of the user study participants recruited in the citing paper to conduct a qualitative analysis of human gaze maps."}, {"Category": "Supporting Evidence", "Citation": "[12]", "Explanation": "The cited work, DADA-2000, is the original dataset used in the study, and the ground truth of the dataset is the source of the data used in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work provides the threshold criterion for determining whether an object is considered focused in the ground truth gaze map, which the citing paper adopts in their research to measure the performance of gaze prediction models on different datasets."}, {"Category": "Data Source", "Citation": "SAGE [17]", "Explanation": "The cited work SAGE is used to create the dataset SAGE-B, which the citing paper compares the performance of gaze prediction models trained on CUEING-B to identify the more effective data processing method in improving the performance of the models."}, {"Category": "Supporting Evidence", "Citation": "[11]", "Explanation": "The cited work, BDD-A, is used as the source dataset in the experiment to assess the model's generalizability in terms of performance and generalizability."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "The target dataset DADA-2000 is used in the experiment to evaluate the model's general performance and generalizability in terms of object level metrics."}, {"Category": "Data Source", "Citation": "[19]", "Explanation": "The Dr(eye)VE dataset is used in the experiment to assess the model's general performance and generalizability in terms of object level metrics."}, {"Category": "Methodological Basis", "Citation": "[11], [15]", "Explanation": "The cited works provide the metrics of Kullback-Leibler divergence and Pearson's Correlation Coefficient for measuring the similarity between generated gaze maps and ground truth maps, which the citing paper adopts in its research to evaluate the performance of its gaze map generation method."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work provides a set of metrics for assessing the complexity of different models, which the citing paper adopts to compare the complexity of the two gaze prediction models in the study."}, {"Category": "Extension or Continuation", "Citation": "[20]", "Explanation": "The cited work, CDNN, is used as a comparison baseline in the study of gaze prediction models on a mobile device, extending the research on the topic by providing a model with a different input processing method."}, {"Category": "Data Source", "Citation": "Figure 6", "Explanation": "The figure is cited to acknowledge the use of a mobile device with Nvidia Jetson Nano 4G in the study of the efficiency of the gaze prediction models on a mobile device."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The citing paper adopts the YOLOv5 model for object detection, which is a method used in the cited work to train the model on BDD-100K dataset."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The citing paper uses the BDD-100K dataset as a data source for training the YOLOv5 model, which is a pre-existing dataset that the cited work has previously utilized."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The cited work, 
BDD-100K, is the source of the label set used in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, BDD-A, provides the dataset used in the research conducted in the citing paper to analyze the performance of gaze prediction in a specific scenario."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work highlights the importance of integrating time series data in gaze prediction and the potential impact of centre bias in the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[15]", "Explanation": "The cited work provides the Where&What model, which the citing paper uses in their research to evaluate the performance of the CUEING model in object-level metrics."}, {"Category": "Data Source", "Citation": "[17]", "Explanation": "The cited work provides the SAGE-B model, which the citing paper uses in their research to train models and generate gaze maps for comparison with the CUEING-B model."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work, SAGE, is used as a method to identify objects in the input of a self-driving system. The citing paper intends to evaluate the importance of these objects in co-training critical driving models to enhance the robustness of the system."}, {"Category": "Supporting Evidence", "Citation": "[12]", "Explanation": "The cited work, DADA-2000 data, is used as a data source in the citing paper to fine-tune the CUEING model and improve its performance in the generalization task."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work, Where&What, is a state-of-the-art model that the citing paper builds upon in the generalization task for the CUEING model. The CUEING model outperforms the Where&What model in both pixel level metrics and in the fine-tuning process with a portion of DADA-2000 data."}, {"Category": "Extension or Continuation", "Citation": "[19]", "Explanation": "The cited work, Dr(eye)VE, is another state-of-the-art model that the citing paper extends the generalization task for the CUEING model. The CUEING model shows a notable advantage in precision in the Dr(eye)VE task, with a performance that closely approaches the state-of-the-art."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, BDD-A, serves as a methodological basis for the CUEING model in the citing paper, as the model is trained on the BDD-A dataset and the results of the ablation study are compared to the model performance on the BDD-A dataset."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work provides the model complexity evaluation metrics that the citing paper uses to compare the performance of the model in terms of model parameters and GFLOPs."}, {"Category": "Extension or Continuation", "Citation": "[20]", "Explanation": "The cited work is evaluated on a Jetson Nano 4G, and the citing paper extends the evaluation by providing the power consumption and inference time for a single image inference."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work, CDNN, is used as a benchmark to compare the performance of the citing paper in terms of memory and inference time requirements. 
The citing paper adopts the model and its performance metrics to highlight the advantages of the proposed CUEING model in terms of memory and inference time."}, {"Category": "Data Source", "Citation": "[11]", "Explanation": "The cited work, BDD-A, is a large-scale dataset that the citing paper utilizes in their experiments to test the generalizability of the proposed method."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "The cited work, DADA-2000, is another large-scale dataset that the citing paper uses in their experiments to assess the generalizability of the proposed method."}, {"Category": "Data Source", "Citation": "[17]", "Explanation": "The cited work, SAGE, is a dataset that the citing paper utilizes in their experiments to test the effectiveness of the proposed method across different datasets."}, {"Category": "Data Source", "Citation": "[19]", "Explanation": "The cited work, Dr(eye)VE, is a dataset that the citing paper uses in their experiments to evaluate the generalizability of the proposed method across different datasets."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b3", "b5", "b6", "b7", "b9", "b10" ], "table_ref": [], "text": "The recent release of powerful language models (LMs) such as ChatGPT (OpenAI, 2022), Bard (Pichai, 2023), and Claude (AnthropicAI, 2023) might herald a future where the best AI systems are provided primarily as a fee-based API by large companies. At the same time, open-source LMs are becoming increasingly accurate, with models like LLaMA and FLAN-T5 providing many of the same basic capabilities as their commercial counterparts, albeit at a lower level of performance (Touvron et al., 2023;Chung et al., 2022). This presents an important question, whose answer will have profound future implications: will the most powerful LMs be closed-source or will they be freely distributed for anyone to use, modify, and extend? Both possibilities have important pros and cons, and implications on policy, corporate strategy, and the future of scientific inquiry. Figure 1: Crowdworkers initially rate the quality of our imitation models highly, as ∼70% of their outputs are rated as equal or better than those of ChatGPT (left). However, as we train on more imitation data, our models fail to further close the gap, and even begin to regress along other axes, e.g. factual knowledge according to Natural Questions (center). Our main conclusion is that the biggest limitation of current open-source LMs is their weaker base capabilities. In turn, the best way for the open-source community to improve models is by increasing these capabilities (e.g., via scaling, better pretraining data, etc.,) rather than fine-tuning on more and more imitation data (right).\nIn this work, we study one possible resolution to this question: model imitation (Wallace et al., 2020;Orekondy et al., 2019). The premise of model imitation is that once a proprietary LM is made available via API, one can collect a dataset of API outputs and use it to fine-tune an open-source LM. In theory, this imitation process may provide an easy method to distill (Hinton et al., 2014) the capabilities of any proprietary model, thus implying that open-source LMs will always be competitive with their commercial counterparts. To date, recent works have looked to imitate OpenAI's best systems, e.g., Self-Instruct (Wang et al., 2022a) and Alpaca (Taori et al., 2023), and initial results suggest that these models have achieved near parity with proprietary models. Consequently, there has been a growing sentiment among many members of the broader tech community that closed-source models will soon have no advantage (Patel and Ahmad, 2023).\nThe goal of our work is to critically analyze the efficacy of model imitation by training and evaluating copycats of ChatGPT. We first collect datasets that focus on either imitating ChatGPT for a specific task or broadly imitating it across all behaviors. We then fine-tune LMs on these datasets using a range of model sizes (1.5B-13B), base models (GPT-2 and LLaMA), and data amounts (0.3M-150M tokens). We evaluate using human and GPT-4 evaluations (blind pairwise comparisons with ChatGPT) as well as accuracy on canonical NLP benchmarks (MMLU, NQ, HumanEval).\nWe were initially surprised by how much imitation models improve over their base models: they are far better at following instructions, and their outputs appear similar to ChatGPT's. 
This was further supported by both human and GPT-4 evaluations, where the outputs of our best imitation model were rated as competitive with ChatGPT (e.g., Figure 1, left).\nHowever, when conducting more targeted automatic evaluations, we found that the imitation models close little to none of the large gap between LLaMA and ChatGPT. In particular, we demonstrate that imitation models improve on evaluation tasks that are heavily supported in the imitation training data. On the other hand, the models do not improve (or even decline in accuracy) on evaluation datasets for which there is little support. For example, training on 100k ChatGPT outputs from broad-coverage user inputs provides no benefits to Natural Questions accuracy (e.g., Figure 1, center), but training exclusively on ChatGPT responses for Natural-Questions-like queries drastically improves task accuracy. Consequently, we conclude that broadly matching ChatGPT via imitation alone would require (1) a concerted effort to collect enormous imitation datasets and (2) far more diverse and higher-quality imitation data than is currently available.\nThese findings underscore an inconsistency between LM performance on crowdworker evaluations and NLP benchmarks. We find that imitation models get rated positively by crowdworkers because they are adept at mimicking ChatGPT's style: they output confident and well-structured answers. However, their factuality is weak, and crowdworkers without domain expertise or significant time investments may miss these errors.\nOverall, our key takeaway is that model imitation is not a free lunch: there exists a capabilities gap between today's open-source LMs and their closed-source counterparts that cannot be closed by cheaply fine-tuning on imitation data. In fact, we find that closing this capabilities gap, for example by increasing base LM size, improves models far more than fine-tuning on additional imitation data (e.g., Figure 1, right). This implies that the higher-leverage action for improving open-source LMs is to tackle the difficult challenge of developing better base models (e.g., by scaling up models, improving pre-training data quality, improving pre-training methods, etc.), rather than taking the shortcut of imitating proprietary systems. Nevertheless, we believe that model imitation has utility in subverting the need to annotate high-quality finetuning data if one has a sufficiently strong base LM.\nFigure 2: ChatGPT and our best imitation model produce answers with similar style: they start with an overview paragraph, a list of differences, and end with a summary. However, while ChatGPT's answer is mostly correct, the imitation model's answer is completely inaccurate despite sounding authoritative. We show correct sentences in green, ambiguously-correct sentences in yellow, and incorrect ones in red.\nChatGPT's answer (mostly correct):\n1. Function Approximation: Actor-critic algorithms typically use function approximation methods to represent the value and policy functions. This allows them to learn in high-dimensional state spaces.\n2. Critic Network: In actor-critic algorithms, there is an additional critic network that estimates the value function. The critic provides an estimate of the value of a state, which is then used to update the policy. This reduces the variance of the update and stabilizes learning.\n3. Advantage Estimation: Actor-critic algorithms use the advantage function to estimate the quality of an action taken by the policy. The advantage function is the difference between the return of an action and the estimated value of a state. This reduces the variance of the update and allows for more efficient learning.\nOverall, actor-critic algorithms provide a more efficient and stable way of learning in reinforcement learning tasks, particularly in high-dimensional state spaces.\nImitation model's answer (completely inaccurate):\nActor-critic algorithms are a type of reinforcement learning algorithm that improves the REINFORCE algorithm by combining the actor (policy) and critic (value) components. The main changes that actor-critic algorithms make over standard REINFORCE are:\n1. Actor-critic algorithms use a single critic to evaluate the value function of the policy, whereas standard REINFORCE uses a separate critic for each action.\n2. Actor-critic algorithms use a single actor to determine the actions to take, whereas standard REINFORCE uses a separate actor for each action.\n3. Actor-critic algorithms learn the policy and actor simultaneously, whereas standard REINFORCE learns the policy and actor separately.\n4. Actor-critic algorithms use an optimistic estimate of the value function to determine the actor's actions, whereas standard REINFORCE uses the critic's estimate.\nOverall, actor-critic algorithms combine the critic's and actor's estimates in order to learn both the policy and actor simultaneously, while standard REINFORCE learns them separately. This can lead to faster convergence and potentially better generalization, but the choice between the two algorithms depends on the specific problem and the desired behavior of the agent." }, { "figure_ref": [], "heading": "What is Model Imitation?", "publication_ref": [ "b5", "b6", "b11" ], "table_ref": [], "text": "Proprietary LMs such as ChatGPT consist of two key aspects: proprietary base LMs and proprietary fine-tuning data. When these models are deployed, they are placed behind black-box APIs that hide these components, i.e., users can query the API with arbitrary inputs but cannot see the model's training data, next-token probabilities, and architecture. In model imitation, the goal is to collect data using the API to train an LM that achieves comparable performance to it, i.e., essentially distilling the target LM using an imitation training set (Wallace et al., 2020; Orekondy et al., 2019; Tramèr et al., 2016). Potential reasons for performing imitation range from benign to illegal:\n• Academics can use powerful imitation LMs to drive new research projects.\n• Companies can use imitation LMs to launch services that compete with the proprietary system.\n• Malicious users could use imitation models to accelerate progress on nefarious use cases.\nLocal versus Broad Imitation When performing model imitation, one will either look to perform local \"task-specific\" imitation or more global \"broad-coverage\" imitation. The former imitates the target model on just a specific task or domain, e.g., sentiment analysis of tweets or question answering over Wikipedia entities. The latter focuses on the more ambitious goal of broadly imitating the target model across its full spectrum of behaviors, domains, and tasks. Broad-coverage imitation is challenging because (1) one must collect an extremely diverse imitation dataset and (2) imitation models must capture this wide data distribution and generalize similarly to the target model on a myriad of held-out examples."
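To make the black-box data-collection setup described above concrete, the following is a minimal sketch of how an imitation training set could be assembled by repeatedly querying a proprietary API. It is an illustration only, not the pipeline used in this work: query_target_api is a hypothetical placeholder for whatever client the provider exposes, and the JSONL record format is simply one convenient choice.

```python
import json
import time

def query_target_api(prompt: str) -> str:
    """Hypothetical stand-in for a call to the proprietary target model's API.
    In practice this would wrap the provider's client library."""
    raise NotImplementedError

def build_imitation_dataset(prompts, out_path, pause_s=1.0):
    """Query the black-box target model on a pool of prompts and save
    (instruction, output) pairs for later supervised fine-tuning."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_target_api(prompt)
            record = {"instruction": prompt, "output": response}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
            time.sleep(pause_s)  # crude rate limiting between API calls

# Example usage, assuming `prompts` is a curated or synthetic input pool:
# build_imitation_dataset(prompts, "imitation_data.jsonl")
```

In practice the prompt pool would come either from an existing set of inputs or from synthetic generation, as discussed in the next section.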
}, { "figure_ref": [], "heading": "Recent Work on Model Imitation", "publication_ref": [ "b12", "b13", "b14", "b9", "b15", "b16", "b17", "b18", "b10" ], "table_ref": [], "text": "A surge of recent publications have attempted to both locally imitate proprietary models for specific tasks (Sun et al., 2023;Hsieh et al., 2023;Honovich et al., 2022) and broadly imitate models, e.g., Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), Koala (Geng et al., 2023), GPT4ALL (Anand et al., 2023), and more (Wang et al., 2022a;Peng et al., 2023). Many these works conclude that their imitation models achieve near parity with the target model, e.g., Vicuna claims to achieve 90% of the quality of ChatGPT and Google Bard. These claims have since been propagated out into the broader tech community, leading many to believe that open-source LMs are rapidly closing the gap to their closed-source counterparts and that top AI companies will soon have no competitive advantage (Patel and Ahmad, 2023).\nOur goal. The goal of our paper is to critically evaluate this line of reasoning. In particular, we train models to imitate ChatGPT while experimenting with different decisions (e.g., data collection strategies, data amounts, and base LMs) and conducting rigorous automatic and human evaluations." }, { "figure_ref": [], "heading": "Building Imitation Datasets", "publication_ref": [ "b14" ], "table_ref": [], "text": "We consider both task-specific and broad-coverage imitation. For either form of model imitation, one must curate a set of inputs to query to the target model. In practice, one may have a set of inputs in mind (e.g., sentences from Wikipedia, tweets about Coca-Cola) and if this set of input examples is sufficiently large, one can use them to query the target model and build an imitation dataset. In cases when it is impractical or labor intensive to create a large and diverse pool of inputs, one can also create synthetic examples by prompting LMs to iteratively generate examples that are from the same distribution as an initial smaller seed set of inputs (Wang et al., 2022a;Honovich et al., 2022).\nTask-specific imitation For task-specific imitation, we created an imitation dataset tailored to Natural Questions (Kwiatkowski et al., 2019a), i.e., factual knowledge about Wikipedia entities. In particular, we first curated a seed set of ten QA pairs from the validation dataset. We then iteratively generated 6,000 additional examples by prompting ChatGPT with five random QA pairs and asking it to generate similar but distinct examples. All of these examples are single turn, without any dialogue history. We refer to this dataset as NQ-synthetic and provide further details in Appendix A.\nBroad-coverage imitation For the more ambitious goal of broad-coverage imitation data, we leverage the fact that models such as ChatGPT have become so popular that their inputs and outputs are already widely posted on the web. Thus, we can collect a large, diverse, and generally high-quality dataset of examples for free without ever having to interact with the company's API. In particular, we collect examples from three sources:\n• ShareGPT: we use approximately 90K dialogues shared by users on the website ShareGPT.\nTo maintain data quality, we deduplicated on the query level and removed any non-English conversations using a language detector. This leaves approximately 50K examples, each of which consist of multiple turns of dialogue. We refer to this dataset as ShareGPT-Mix and show qualitative examples in Appendix A. 
We find that ShareGPT-Mix is generally of high quality. First, there is high diversity in the instructions: for each user query in the dataset, the most similar other user query has an average BLEU score similarity of just 8%. This is considerably lower than that of other datasets such as Super-NaturalInstructions (Wang et al., 2022b), which is at 61% BLEU similarity for a similarly sized set of examples. We also manually reviewed different examples and logged their semantic category (see Table 5 in Appendix A). The dataset contains diverse categories, including many multi-lingual conversations and coding tasks. Figure 3: We find that GPT-4 and crowdworker evaluations show the same trends. As we scale up the amount of imitation data, GPT-4's ratings of our imitation models are relatively flat (left). However, as we scale up the base model size, GPT-4 rates the quality of our imitation models increasingly highly (right)." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "We train imitation LMs using our ShareGPT-Mix and NQ-synthetic datasets, and we conduct both human and automatic evaluations. We focus our initial results on the ShareGPT-Mix models." }, { "figure_ref": [ "fig_5" ], "heading": "Training and Evaluation Setup", "publication_ref": [ "b22", "b3", "b23", "b24", "b26" ], "table_ref": [], "text": "We study how model imitation improves as we increase the amount of imitation data and vary the capabilities of the underlying base LM. We consider decoder-only models ranging in size from 1.5B to 13B parameters: GPT-2 1.5B (Radford et al., 2019), LLaMA 7B (Touvron et al., 2023), and LLaMA 13B. We also study the effect of data scale by fine-tuning on different-sized data subsets.\nDuring training, we chunk the conversations into 2048-token blocks. We introduce special tokens that demarcate the beginning of each user query and model output. We fine-tune using standard LM losses on only the model outputs. Following Chung et al. (2022) and Chowdhery et al. (2022), we train for one epoch using the AdamW optimizer with gradients re-scaled by the magnitude of each weight. We use a learning rate of 2e-3 with 1000 steps of linear warm-up from 0, and we train with batch size 32. All models are trained in JAX using a combination of fully sharded data parallelism and tensor parallelism on TPUs hosted by Google Cloud or on a single Nvidia DGX server with 8 A100 GPUs.\nFor automatic evaluations, we measure performance on 5-shot MMLU (Hendrycks et al., 2021), 3-shot Natural Questions (Kwiatkowski et al., 2019b), and 0-shot HumanEval (Chen et al., 2021). We report the original scoring metrics associated with each dataset (e.g., exact match for NQ). For human evaluation, we conduct blind pairwise output comparisons using Mechanical Turk. In our UI, we present each rater with a task instruction and the output of two unknown models, one of which is ChatGPT and the other is one of our imitation models (see Figure 7 in Appendix B). The raters select which output they prefer or whether the two outputs are equal in quality. We use approximately 70 crowd workers, each with a ≥95% approval rating, located in an English-speaking country, and with at least 100 HITs completed, and we evaluate on 255 held-out prompts. We report the average preference across the dataset and one standard deviation around the mean. Additionally, we conduct evaluations using GPT-4 and present additional details of the prompts used in Appendix C.\nWe release all of our code, pre-trained models, and anonymized human evaluations: the codebase is available at https://github.com/young-geng/EasyLM, the evaluation data at https://huggingface.co./young-geng/koala-eval, and the pre-trained models at https://huggingface.co./young-geng/koala.
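The fine-tuning recipe above (2048-token blocks, special tokens marking each user query and model output, and a language-modeling loss applied only to the model-output tokens) can be illustrated with the following simplified sketch. It is a PyTorch-style reconstruction under stated assumptions rather than the authors' JAX training code: the tokenizer interface, the special-token strings, and the -100 ignore-index convention are all illustrative choices.

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # label positions with this value are excluded from the loss

def build_example(tokenizer, turns, block_size=2048,
                  user_tok="<|user|>", model_tok="<|model|>"):
    """Concatenate a dialogue into one token block and mask out everything
    except the model-output tokens. `turns` is a list of (role, text) pairs;
    the special-token strings are placeholders, and the returned lists would
    be padded and collated into tensors before training."""
    input_ids, labels = [], []
    for role, text in turns:
        marker = user_tok if role == "user" else model_tok
        ids = tokenizer.encode(marker + text)
        input_ids.extend(ids)
        if role == "user":
            labels.extend([IGNORE_INDEX] * len(ids))  # no loss on user queries
        else:
            labels.extend(ids)  # standard LM loss on model outputs only
    return input_ids[:block_size], labels[:block_size]

def lm_loss(logits, labels):
    """Shifted next-token prediction loss over a [batch, seq, vocab] logits
    tensor and a [batch, seq] label tensor; ignore_index drops masked tokens."""
    shift_logits = logits[:, :-1, :].reshape(-1, logits.size(-1))
    shift_labels = labels[:, 1:].reshape(-1)
    return F.cross_entropy(shift_logits, shift_labels, ignore_index=IGNORE_INDEX)
```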
Figure 4: Automatic evaluations. As we increase the amount of imitation data, there is little improvement on various benchmarks, or even performance regressions (top). On the other hand, scaling up the base LM steadily improves results (bottom), suggesting that the key difference between open-source and closed-source LMs is a raw capabilities gap, rather than the finetuning data used." }, { "figure_ref": [], "heading": "Qualitative Analysis and Crowdworker Evaluation Show Promise", "publication_ref": [], "table_ref": [], "text": "Imitation models are rated highly by crowdworkers. We were initially surprised at the quality of our ShareGPT-Mix models: while the base GPT-2 or LLaMA models often fail to follow instructions, the imitation models produce outputs that stay on task. These initial promises were further supported, as crowdworkers and GPT-4 often rated the quality of the imitation models' outputs as equal or better than those of ChatGPT, especially as we scale up model size (right of Figures 1 and 3). However, we also find that human ratings quickly saturate as we scale up the amount of imitation data (left of Figures 1 and 3), alluding to possible shortcomings of this approach." }, { "figure_ref": [ "fig_2" ], "heading": "Targeted Automatic Evaluations Expose Failure Modes", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Broad-coverage imitation models fail to close the gap across most tasks. We next ran targeted automatic evaluations to isolate whether specific model capabilities improved after imitation. We found that across every benchmark that we measured, ShareGPT-Mix imitation models do not improve (or even decline) in accuracy as compared to the base model, even when adding additional imitation data (Figure 4, top). This shows that imitating ChatGPT on our broad-coverage imitation data does not improve the model across most axes, e.g., factual knowledge, coding, and problem solving.\nWe argue that this occurs because ChatGPT has captured far more knowledge and capabilities from the web as compared to LLaMA. In turn, it is unreasonable to expect that a small amount of imitation data (e.g., 1000x less data than pre-training) would enable one to bridge this gap. Instead, we argue that broadly matching ChatGPT using weaker base LMs such as LLaMA-13B would require a concerted effort to collect an extremely large and diverse imitation dataset that is far closer to the scale of pre-training. It is currently unclear whether such an effort is worth undertaking or feasible.\nTraining local imitation models is far more successful. On the other hand, our model trained to locally imitate ChatGPT using the NQ-synthetic data is far more successful. In particular, the imitation models' performance improves significantly as compared to the LLaMA base model (see Table 1) and quickly approaches the accuracy of ChatGPT. This demonstrates that it is far more feasible to distill a specific behavior from ChatGPT as opposed to broadly matching its capabilities.\nAn empirical trade-off exists between different evaluation datasets.
A curious phenomenon is that training on more ShareGPT-Mix data hurts performance as compared to the base model on some of our evaluations (compare the black versus blue lines in Figure 4). We believe that these performance regressions arise from a distribution shift and tension between the conversational-style fine-tuning data and the downstream benchmarks. An open problem is whether these performance regressions can be mitigated using regularization or by mixing in pre-training data during fine-tuning.\nImproving base LMs is the highest leverage action. Rather than increasing imitation data size, we find that using better base LMs (by increasing base model size) does lead to substantial accuracy improvements (Figure 4, bottom). This aligns with our previous claim: there exists a capabilities gap between today's open-source LMs and their closed-source counterparts that cannot be closed by cheaply fine-tuning on imitation data. Instead, the best way to improve open-source LMs is to tackle the difficult challenge of developing better base LMs, whether it be via model scaling or other means." }, { "figure_ref": [ "fig_4" ], "heading": "Imitation Models Learn Style, Not Content", "publication_ref": [ "b27" ], "table_ref": [], "text": "Finally, we investigate why there is a strong discrepancy between crowdworker evaluations, where imitation models appear quite strong, and results on NLP benchmarks, where imitation models appear no better than base LMs. We find that imitation models perform well according to human evaluations because they are adept at mimicking ChatGPT's style: they output fluent, confident, and well-structured answers. In particular, we show in Table 2 that as we add more imitation data, ChatGPT and our imitation models produce outputs with a similar length, similar word choice, similar use of an authoritative tone, and similar low-level structure (e.g., use of lists).\nHowever, as shown in our previous automatic evaluations, the imitation models have weak factuality. In other words, imitation models actually embody some of the worst aspects of AI assistants: their answers sound confident but are less factual than ChatGPT. This is perhaps best elucidated in Figure 2, where the imitation model outputs an answer that is similar in style to ChatGPT's answer but is completely incorrect.\nHuman evaluation is increasingly hard. Unfortunately, crowd workers without domain expertise or significant time investments can easily be deceived by stylistic components: answers that sound confident and correct are spuriously chosen more often. To improve human evaluation, it is thus increasingly necessary both to engage domain experts and to curate a set of highly difficult prompts that can rigorously test different models' capabilities. Surprisingly, our GPT-4 evaluations also showed the same trends as our crowdworker evaluations (albeit with a slightly larger absolute preference for ChatGPT's outputs). While this suggests that GPT-4 may be a viable candidate to cheaply emulate human evaluations on some tasks, it also implies that LLMs may replicate some human-like cognitive biases. We look forward to future work that further investigates this possibility.\nImitation models inherit the safety and toxicity style of the teacher model. Finally, despite imitation only providing benefits in mimicking the \"style\" or \"persona\" of the target model, there is still value in doing so.
For example, OpenAI has carefully and deliberately trained ChatGPT to be \"harmless\" to end users, often avoiding toxic outputs and refusing to respond to questionable user requests. We find that our imitation models also inherit these components. In particular, we show in Figure 5 that as we finetune on more imitation data, the imitation model's outputs become less toxic on RealToxicityPrompts (Gehman et al., 2020), as the model learns to abstain in a similar fashion to ChatGPT. Consequently, we conclude that model imitation is highly effective in cases where one has a powerful base LM and is looking to subvert the need to annotate expensive finetuning data.\nTable 2: As we add more imitation data, the style of our models' outputs is increasingly similar to that of ChatGPT. In particular, we generate outputs from our imitation models and compare them to a random ChatGPT response across different metrics. We also report a rough \"upper bound\" by comparing a second random ChatGPT output to the original ChatGPT response (ChatGPT #2)." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b28", "b28", "b29", "b30", "b0", "b31", "b32", "b33", "b34", "b35", "b36" ], "table_ref": [], "text": "Finetuning as a simple knowledge extractor. Our results show that a modest amount of finetuning provides little to no improvements on an LM's knowledge or capabilities. We thus agree with the view that pre-training is the main source of an LM's capabilities, and that finetuning acts as a lightweight method to train the model to extract its own knowledge (Schulman, 2023). This is the reason why improving models by imitating ChatGPT on a small set of data is insufficient, as the base knowledge is largely unaffected. Furthermore, this view suggests that during finetuning, you may even want to avoid introducing new knowledge (i.e., do not imitate better models), as you will otherwise be training the model to guess or hallucinate its answers, rather than actually doing the task as intended (Schulman, 2023; Gao, 2021; Goldberg, 2023).\nShould you be worried about imitation? Imitating proprietary LMs comes with many potential implications for small and large companies alike. Our results suggest that the efficacy of model imitation is limited when there is a large gap between the base and target LM. Thus, we believe that companies who can establish a capabilities gap using large amounts of data, compute, or algorithmic advances are the ones who are best positioned to build and maintain competitive advantages. On the other hand, companies that look to build moats by using off-the-shelf LMs with proprietary fine-tuning datasets may be comparatively more vulnerable to imitation.\nPotential confounders to our findings. While we believe our findings are well supported, there are a few potential hidden confounders that could change our conclusions. First, as we are unaware of the pre-training data used by ChatGPT, it is possible that some of the tasks that we evaluate on were contained in ChatGPT's training data (i.e., contamination), thus inflating its accuracy numbers. Moreover, to conduct imitation, we perform supervised learning on the outputs from the target model.
However, it also may be possible to use the target model to perform RLHF or constitutional AI (OpenAI, 2022;Christiano et al., 2017;Bai et al., 2022) to further improve results. Lastly, we only considered relatively simple methods for collecting imitation data, however, there may be more advanced methods (e.g., active learning) that may improve the effectiveness or efficiency of model imitation.\nImplications for other forms of model imitation There has been a flurry of recent work that performs model imitation in more indirect ways than we study here. For example, the training process of many recent vision-language model (Li et al., 2022;Liu et al., 2023;Ye et al., 2023;Zhu et al., 2023) includes ChatGPT or GPT-4 outputs at some stages. Furthermore, it has become common to use large LMs in various ways during the data annotation and creation process, e.g., to aid crowd workers, to perform data augmentation, to identify mislabeled data, and more. Our findings may have implications for these approaches, e.g., it is likely that vision-language models that include OpenAI data may have similar failure modes to the ones described in our work.\nTechnical limitations of model imitation Imitating proprietary models also has various technical limitations: the models inherit the weaknesses and biases of proprietary models, imitation does not allow one to directly improve on the design decisions of closed AI companies (e.g., data annotation strategies), and these systems are roughly upper-bounded by the capabilities of the target proprietary model. Moreover, it is difficult to answer certain scientific questions using imitation models because they include proprietary black-box models in their training pipeline." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b7", "b6", "b11", "b37", "b38", "b39", "b5", "b38", "b40", "b41", "b42", "b5", "b43" ], "table_ref": [], "text": "Model distillation Model imitation is similar to model distillation (Hinton et al., 2014), where one trains a student model to imitate a teacher. While conceptually similar, there are several major practical differences. For distillation, the training data, model architecture, and hyperparameters are known for the teacher. In model imitation, one tries to imitate the teacher without this knowledge. Moreover, for distillation it is common to use training objectives that utilize the probability distribution of the teacher whereas in stealing such a distribution is typically unavailable.\nPast work on model imitation Prior work has shown that model imitation is possible for various domains (Orekondy et al., 2019;Tramèr et al., 2016;Lowd and Meek, 2005), including language classifiers (Krishna et al., 2020;Pal et al., 2019) and machine translation systems (Wallace et al., 2020). Nevertheless, past work considers a setting where models are trained from scratch, and thus the main proprietary nature of a model is the company's internal training data. In our setting, systems like ChatGPT are proprietary because they also leverage OpenAI's internal pre-trained LMs that are stronger than any available open-source LM.\nDefending against model imitation Our results show that imitation is a moderate concern for companies. In turn, there is a need to develop methods to mitigate or detect imitation. 
There is an existing body of work in this direction, e.g., one can detect whether a particular model is trained via imitation (Krishna et al., 2020; Juuti et al., 2019; Szyller et al., 2019; Maini et al., 2021) or slow model stealing by sacrificing some performance (Wallace et al., 2020; Orekondy et al., 2020; Dziedzic et al., 2022a,b). Unfortunately, existing methods often exhibit too severe a tradeoff to be deployable in practice." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "In this work, we critically analyzed the efficacy of model imitation. We showed that imitation can indeed improve the style, persona, and instruction adherence of open-source LMs. However, imitation falls short in improving LMs across more challenging axes such as factuality, coding, and problem solving. On one hand, these results indicate that businesses can successfully establish and safeguard a competitive advantage by pre-training powerful base models. Conversely, they also imply that if two groups possess equally competent base LMs, one can easily mimic the persona and behavior of the other model, without needing to annotate expensive fine-tuning data.\nMoving forward, our findings raise a range of technical and societal questions. First, we show that existing crowd worker evaluations have trouble elucidating the differences between imitation models and proprietary ones, despite clear differences existing between them. In turn, the future of human evaluation remains unclear: how can we cheaply and quickly probe the utility of a powerful LLM? Second, given the large gap between LLaMA and ChatGPT (the latter model is faster, cheaper, and more accurate), and the insufficiencies of model imitation, there are obvious open questions on how to best improve open-source LMs (e.g., increasing model scale, improving pre-training data quality, developing new pretraining methods, etc.). Finally, our work raises ethical and legal questions, including whether the open-source community should continue to advance progress by \"stealing\" what OpenAI and other companies have done, as well as what legal countermeasures companies can take to protect and license intellectual property. In future work, we hope to delve deeper into these issues and devise better methods for the ethical and responsible deployment of LMs." }, { "figure_ref": [], "heading": "A Additional Details on Imitation Data", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "To construct the NQ-synthetic dataset, we first curate seed examples from the Natural Questions validation set in Table 3. We then use the prompting template in Table 4 to randomly sample 5 QA pairs from the seed set and generate new, similar QA samples. Figure 6 shows examples from ShareGPT-Mix and Table 5 shows a breakdown of different categories." }, { "figure_ref": [ "fig_5" ], "heading": "B Amazon Mechanical Turk Interface", "publication_ref": [], "table_ref": [], "text": "We use Amazon Mechanical Turk to conduct human evaluations. We use the UI shown in Figure 7. It shows human evaluators a random task instruction and the output responses from two systems, one of which is our model and the other is ChatGPT. The annotators then choose which response is better according to overall subjective quality. We randomize whether ChatGPT or our imitation models are shown first. We collect 3 unique ratings for every example in the evaluation set and a total of 71 human evaluators participated. In order to get an average score, we use majority voting among the 3 " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Nicholas Carlini, the members of Berkeley NLP, and the members of Berkeley RAIL for valuable feedback on this project. Eric Wallace is supported by the Apple Scholar in AI/ML Fellowship. Part of this research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Non-English: 14; Invalid: 2\nTable 5: We conduct a manual quality review of 50 random user queries from ShareGPT. The dataset contains highly varied categories of task instructions, including coding and multi-lingual queries."
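Relating to the Mechanical Turk procedure in Appendix B above, the following sketch shows one way to aggregate three ratings per prompt by majority vote and then average across the evaluation set. It is an illustration, not the authors' analysis code; the label strings ("ours", "chatgpt", "tie") are an assumed encoding.

```python
from collections import Counter

def majority_vote(ratings):
    """Return the most common of the three per-example ratings.
    Ties between labels are broken arbitrarily by Counter ordering."""
    return Counter(ratings).most_common(1)[0][0]

def preference_breakdown(all_ratings):
    """`all_ratings` is a list of 3-rating lists, one per evaluation prompt.
    Returns the fraction of prompts whose majority label is each category."""
    winners = [majority_vote(r) for r in all_ratings]
    counts = Counter(winners)
    total = len(winners)
    return {label: counts[label] / total for label in ("ours", "tie", "chatgpt")}

# Example: three prompts, each rated by three workers.
example = [["ours", "tie", "ours"],
           ["chatgpt", "chatgpt", "tie"],
           ["tie", "tie", "ours"]]
# preference_breakdown(example) gives one third for each category.
```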
}, { "figure_ref": [], "heading": "ShareGPT-Mix Data", "publication_ref": [], "table_ref": [], "text": "User: Return to the Identity and Access Management Commercial Due Diligence. We now want to model the market size for the IAM market. What are the levers and inputs we need to accurately estimate the size and growth of the U.S. market. For example, these may include the number of companies, current saturation, and specific market headwinds and tailwinds you identify. Provide suggestions where you would look for data for each of the levers." }, { "figure_ref": [], "heading": "GPT:", "publication_ref": [], "table_ref": [], "text": "To accurately estimate the size and growth of the U.S. Identity and Access Management (IAM) market, there are several key levers and inputs to consider:\n1. Number of companies: The number of companies operating in the U.S. IAM market is an important lever to consider when estimating market size. This includes both large, established vendors and smaller, niche players. Data on the number of companies can be found through market research reports and industry databases, such as Gartner, IDC, and MarketsandMarkets. 2. Current saturation: ...." }, { "figure_ref": [], "heading": "User: I am building an Elven", "publication_ref": [], "table_ref": [], "text": "Capital with all of it's history, religion, politics, trade routes, people, traditions, laws, general rules, and more. It is called Oniver. Your job is to ask me questions to learn more about the town and help me fill in the blanks to make it a rich and thriving town with lots of history. Ask detailed questions that will make me think. Say I understand if this makes sense. Then ask me the first question.\nGPT: I understand.\nWhat is the history behind Oniver's founding? How did it come to be and what was the context in which it was established?\nUser: Magic is widely used in Oniver. Better magic is a path to improvement in life and status in the capital. It is also the ticket into getting into the Magic University. There is a high importance between magical ability within the noble families of Oniver. Magic is used as a tool by the wealthy to oppress the lower classes. Commoners practicing magic is not forbidden but is looked down upon by the nobles because they see themselves as better than the commoners.\nGPT: I understand.\nWhat kind of economy does Oniver have? Is it based on trade, agriculture, manufacturing or a combination of these things? Are there any restrictions or laws surrounding trade and commerce? .... raters on each example, and then average the scores across all examples. We pay these evaluators roughly $15/hour based on the average time it takes to complete a task. In total, we spend roughly $5000 on our ratings experiments, including service fees." }, { "figure_ref": [], "heading": "C GPT-4 evaluations", "publication_ref": [], "table_ref": [], "text": "Our GPT-4 evaluations follow the procedure from Chiang et al. ( 2023): we prompt GPT-4 with two outputs, one from ChatGPT and one from our imitation models. We then ask GPT-4 to output a preference ranking of the two outputs. We use the same set of evaluation prompts as in our humanpreference evaluations. In Figure 3(a), we see that as we add more imitation data GPT-4's ratings of our model outputs remain reletively flat. However as we increase the base model scale, we see GPT-4's ratings consistently increasing 3(b). These results line up closely with the results from our crowdworker evaluations." } ]
2023-05-25
[ { "authors": " Openai", "journal": "", "ref_id": "b0", "title": "ChatGPT: Optimizing language models for dialogue", "year": "2022" }, { "authors": "Sundar Pichai", "journal": "Google AI Blog", "ref_id": "b1", "title": "An important next step on our AI journey", "year": "2023" }, { "authors": "", "journal": "AnthropicAI", "ref_id": "b2", "title": "Introducing claude", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b3", "title": "LLaMa: Open and efficient foundation language models", "year": "2023" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b4", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Eric Wallace; Mitchell Stern; Dawn Song", "journal": "", "ref_id": "b5", "title": "Imitation attacks and defenses for black-box machine translation systems", "year": "2020" }, { "authors": "Tribhuvanesh Orekondy; Bernt Schiele; Mario Fritz", "journal": "", "ref_id": "b6", "title": "Knockoff nets: Stealing functionality of black-box models", "year": "2019" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b7", "title": "Distilling the knowledge in a neural network", "year": "2014" }, { "authors": "Yizhong Wang; Yeganeh Kordi; Swaroop Mishra; Alisa Liu; Noah A Smith; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "", "ref_id": "b8", "title": "Self-Instruct: Aligning language model with self generated instructions", "year": "2022" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b9", "title": "Stanford Alpaca: An instruction-following LLaMA model", "year": "2023" }, { "authors": "Dylan Patel; Afzal Ahmad", "journal": "", "ref_id": "b10", "title": "Google \"we have no moat, and neither does OpenAI", "year": "2023" }, { "authors": "Florian Tramèr; Fan Zhang; Ari Juels; Michael K Reiter; Thomas Ristenpart", "journal": "", "ref_id": "b11", "title": "Stealing machine learning models via prediction APIs", "year": "2016" }, { "authors": "Weiwei Sun; Lingyong Yan; Xinyu Ma; Pengjie Ren; Dawei Yin; Zhaochun Ren", "journal": "", "ref_id": "b12", "title": "Is Chat-GPT good at search? investigating large language models as re-ranking agent", "year": "2023" }, { "authors": "Cheng-Yu Hsieh; Chun-Liang Li; Chih-Kuan Yeh; Hootan Nakhost; Yasuhisa Fujii; Alexander Ratner; Ranjay Krishna; Chen-Yu Lee; Tomas Pfister", "journal": "", "ref_id": "b13", "title": "Distilling step-by-step! 
outperforming larger language models with less training data and smaller model sizes", "year": "2023" }, { "authors": "Or Honovich; Thomas Scialom; Omer Levy; Timo Schick", "journal": "", "ref_id": "b14", "title": "Unnatural instructions: Tuning language models with (almost) no human labor", "year": "2022" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez; Ion Stoica; Eric P Xing", "journal": "", "ref_id": "b15", "title": "Vicuna: An open-source chatbot impressing GPT-4 with 90%* chatgpt quality", "year": "2023" }, { "authors": "Xinyang Geng; Arnav Gudibande; Hao Liu; Eric Wallace; Pieter Abbeel; Sergey Levine; Dawn Song", "journal": "BAIR Blog", "ref_id": "b16", "title": "Koala: A dialogue model for academic research", "year": "2023" }, { "authors": "Yuvanesh Anand; Zach Nussbaum; Brandon Duderstadt; Benjamin Schmidt; Andriy Mulyar", "journal": "Turbo", "ref_id": "b17", "title": "GPT4All: Training an assistant-style chatbot with large scale data distillation from GPT-3", "year": "2023" }, { "authors": "Baolin Peng; Chunyuan Li; Pengcheng He; Michel Galley; Jianfeng Gao", "journal": "", "ref_id": "b18", "title": "Instruction tuning with GPT-4", "year": "2023" }, { "authors": "Tom Kwiatkowski; Jennimaria Palomaki; Olivia Redfield; Michael Collins; Ankur Parikh; Chris Alberti; Danielle Epstein; Illia Polosukhin; Jacob Devlin; Kenton Lee; Kristina Toutanova; Llion Jones; Matthew Kelcey; Ming-Wei Change; Andrew M Dai; Jakob Uszkoreit; Quoc Le; Slav Petrov", "journal": "TACL", "ref_id": "b19", "title": "Natural questions: a benchmark for question answering research", "year": "2019" }, { "authors": "Biyang Guo; Xin Zhang; Ziyuan Wang; Minqi Jiang; Jinran Nie; Yuxuan Ding; Jianwei Yue; Yupeng Wu", "journal": "", "ref_id": "b20", "title": "How close is ChatGPT to human experts? 
Comparison corpus, evaluation, and detection", "year": "2023" }, { "authors": "Yizhong Wang; Swaroop Mishra; Pegah Alipoormolabashi; Yeganeh Kordi; Amirreza Mirzaei; Anjana Arunkumar; Arjun Ashok; Arut Selvan Dhanasekaran; Atharva Naik; David Stap", "journal": "", "ref_id": "b21", "title": "Benchmarking generalization via in-context instructions on 1,600+ language tasks", "year": "2022" }, { "authors": "Alec Radford; Jeff Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b22", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh", "journal": "", "ref_id": "b23", "title": "PaLM: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Dan Hendrycks; Collin Burns; Steven Basart; Andy Zou; Mantas Mazeika; Dawn Xiaodong Song; Jacob Steinhardt", "journal": "", "ref_id": "b24", "title": "Measuring massive multitask language understanding", "year": "2021" }, { "authors": "Tom Kwiatkowski; Jennimaria Palomaki; Olivia Redfield; Michael Collins; Ankur Parikh; Chris Alberti; Danielle Epstein; Illia Polosukhin; Jacob Devlin; Kenton Lee", "journal": "TACL", "ref_id": "b25", "title": "Natural questions: A benchmark for question answering research", "year": "2019" }, { "authors": "Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Ponde De Oliveira Pinto; Jared Kaplan; Harri Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman", "journal": "", "ref_id": "b26", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "Suchin Samuel Gehman; Maarten Gururangan; Yejin Sap; Noah A Choi; Smith", "journal": "", "ref_id": "b27", "title": "RealToxici-tyPrompts: Evaluating neural toxic degeneration in language models", "year": "2020" }, { "authors": "John Schulman", "journal": "", "ref_id": "b28", "title": "Reinforcement learning from human feedback: Progress and challenges", "year": "2023" }, { "authors": "Leo Gao", "journal": "Alignment Forum", "ref_id": "b29", "title": "Behavior cloning is miscalibrated", "year": "2021" }, { "authors": "Yoav Goldberg", "journal": "", "ref_id": "b30", "title": "Reinforcement learning for language models", "year": "2023" }, { "authors": "Jan Paul F Christiano; Tom Leike; Miljan Brown; Shane Martic; Dario Legg; Amodei", "journal": "NIPS", "ref_id": "b31", "title": "Deep reinforcement learning from human preferences", "year": "2017" }, { "authors": "Yuntao Bai; Saurav Kadavath; Sandipan Kundu; Amanda Askell; Jackson Kernion; Andy Jones; Anna Chen; Anna Goldie; Azalia Mirhoseini; Cameron Mckinnon", "journal": "", "ref_id": "b32", "title": "Constitutional AI: Harmlessness from AI feedback", "year": "2022" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "", "ref_id": "b33", "title": "BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b34", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Qinghao Ye; Haiyang Xu; Guohai Xu; Jiabo Ye; Ming Yan; Yiyang Zhou; Junyang Wang; Anwen Hu; Pengcheng Shi; Yaya Shi", "journal": "", "ref_id": "b35", "title": "mPLUG-Owl: Modularization empowers large language models with multimodality", "year": "2023" }, { "authors": "Deyao Zhu; 
Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b36", "title": "MiniGPT-4: Enhancing vision-language understanding with advanced large language models", "year": "2023" }, { "authors": "Daniel Lowd; Christopher Meek", "journal": "", "ref_id": "b37", "title": "Adversarial learning", "year": "2005" }, { "authors": "Kalpesh Krishna; Gaurav Singh Tomar; Ankur P Parikh; Nicolas Papernot; Mohit Iyyer", "journal": "", "ref_id": "b38", "title": "Thieves on sesame street! Model extraction of BERT-based APIs", "year": "2020" }, { "authors": "Soham Pal; Yash Gupta; Aditya Shukla; Aditya Kanade; Shirish Shevade; Vinod Ganapathy", "journal": "", "ref_id": "b39", "title": "A framework for the extraction of deep neural networks by leveraging public data", "year": "2019" }, { "authors": "Mika Juuti; Sebastian Szyller; Samuel Marchal; Asokan", "journal": "IEEE EuroS&P", "ref_id": "b40", "title": "PRADA: protecting against DNN model stealing attacks", "year": "2019" }, { "authors": "Sebastian Szyller; Gul Buse; Samuel Atli; Marchal; Asokan", "journal": "", "ref_id": "b41", "title": "DAWN: Dynamic adversarial watermarking of neural networks", "year": "2019" }, { "authors": "Pratyush Maini; Mohammad Yaghini; Nicolas Papernot", "journal": "", "ref_id": "b42", "title": "Dataset inference: Ownership resolution in machine learning", "year": "2021" }, { "authors": "Tribhuvanesh Orekondy; Bernt Schiele; Mario Fritz", "journal": "", "ref_id": "b43", "title": "Prediction poisoning: Towards defenses against DNN model stealing attacks", "year": "2020" }, { "authors": "Adam Dziedzic; Nikita Dhawan; Muhammad Ahmad Kaleem; Jonas Guan; Nicolas Papernot", "journal": "", "ref_id": "b44", "title": "On the difficulty of defending self-supervised learning against model extraction", "year": "2022" }, { "authors": "Adam Dziedzic; Muhammad Ahmad Kaleem; Yu Shen Lu; Nicolas Papernot", "journal": "", "ref_id": "b45", "title": "Increasing the cost of model extraction with calibrated proof of work", "year": "2022" } ]
[]
The False Promise of Imitating Proprietary LLMs
An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply imitate the proprietary model's capabilities using a weaker open-source model. In this work, we critically analyze this approach. We first finetune a series of LMs that imitate ChatGPT using varying base model sizes (1.5B-13B), data sources, and imitation data amounts (0.3M-150M tokens). We then evaluate the models using crowd raters and canonical NLP benchmarks. Initially, we were surprised by the output quality of our imitation models-they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT's style but not its factuality. Overall, we conclude that model imitation is a false promise: there exists a substantial capabilities gap between open and closed LMs that, with current methods, can only be bridged using an unwieldy amount of imitation data or by using more capable base LMs. In turn, we argue that the highest leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems.
Arnav Gudibande; Eric Wallace; Charlie Snell; Xinyang Geng; Hao Liu; Pieter Abbeel; Sergey Levine; Dawn Song
[ { "figure_caption": "Figure 4 :4Figure4: Automatic evaluations. As we increase the amount of imitation data, there is little improvement on various benchmarks, or even performance regressions (top). On the other hand, scaling up the base LM steadily improves results (bottom), suggesting that the key difference between open-source and closed-source LMs is a raw capabilities gap, rather than the finetuning data used.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: We evaluate imitation models on Re-alToxicityPrompts and report the average nontoxicity score according to the perspective API. The results show that imitation models are significantly less toxic than the baseline models, i.e., they learn to inherit the safety and toxicity guidelines of the target models.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Our Amazon Mechanical Turk interface for comparing the quality of different model outputs. Evaluators are presented with an instruction and two model outputs, and must rate which one is better or whether they are equal.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "• HC3(Guo et al., 2023): we use the ChatGPT responses from the English Human-ChatGPT Comparison Corpus. This contains ∼27K ChatGPT responses for ∼24K questions.• Discord ChatGPT Bots: we use 10k input-output examples collected from the r/ChatGPT and Turing AI Discord servers, two public channels that allow users to interact with ChatGPT bots.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "We train imitation models on broadcoverage data from ShareGPT-Mix or targeted Natural-Questions-like data (NQ-synthetic). The broad-coverage models do not improve on zeroshot NQ (or even degrade in performance), demonstrating the ineffectiveness of imitating the capabilities of ChatGPT holistically. However, the NQ-Synthetic models substantially close the gap to ChatGPT on NQ, showing that local imitation of a model is far more feasible in practice.", "figure_data": "ModelImitation Data NQ7B-177BShareGPT-Mix107BNQ-Synthetic2213B-2013BShareGPT-Mix1513BNQ-Synthetic27ChatGPT -31", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Second, given the large gap between LLaMA and ChatGPT (the latter model is faster, cheaper, and more accurate), and the insufficiencies of model imitation, there are obvious open questions on how to best improve open-source LMs (e.g., increasing model scale, improving pre-training data quality, developing new pretraining methods, etc). Finally, our work raises ethical and legal questions, including whether the open-source community should continue to advance progress by \"stealing\" what OpenAI and other companies have done, as well as what legal countermeasures companies can take to protect and license intellectual property. In future work, we hope to delve deeper into these issues and devise better methods for the ethical and responsible deployment of LMs.", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Table 4 to randomly sample 5 QA pairs from the seed set to generate new QA samples. 
New samples are generated with temperature 1.0 and duplicate question-answer pairs are discarded.", "figure_data": "Q: who sang who wants to be a millionare in high society?A: Frank SinatraQ: the last time la dodgers won the world series?A: 1988Q: who plays the medical examiner on hawaii five-o?A: Masi OkaQ: when did the first harry potter movie come out?A: 2001Q: when was the last time india won a gold medal in hockeyat olympicsA: 1980Q: who owns the rights to baby shark songA: SmartStudyQ: how many episodes are in one punch man season 1A: 12Q: name of the bird in the lion kingA: ZazuQ: who sang the rap song change clothesA: Jay-ZQ: who stars as serena in gossip girlA: Blake Lively", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Seed examples curated from the Natural Questions validation set I want you to generate a series of questions and answers. I want the answers to be concise, just a few words. The questions should be lowercased and centered around Wikipedia-like entities. For example,", "figure_data": "Q: {question 1}A: {answer 1}Q: {question 2}A: {answer 2}Q: {question 3}A: {answer 3}Q: {question 4}A: {answer 4}Q: {question 5}A: {answer 5}", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Prompting template used to generate synthetic Natural Questions-like imitation data", "figure_data": "", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" } ]
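The figure and table entries above describe the NQ-synthetic data: five seed QA pairs are sampled into the prompting template, the target model is queried at temperature 1.0, and duplicate question-answer pairs are discarded. A rough sketch of that loop is below; `query_target_model` is a hypothetical stand-in for the actual API call, and the parsing logic is an assumption.

```python
# Sketch of the NQ-synthetic generation loop described above. `query_target_model`
# is a hypothetical placeholder for the real ChatGPT query; the prompt header follows
# the template shown above, and seed pairs come from the Natural Questions validation set.
import random

HEADER = ("I want you to generate a series of questions and answers. "
          "I want the answers to be concise, just a few words. The questions should be "
          "lowercased and centered around Wikipedia-like entities. For example,\n\n")

def build_prompt(seed_pairs, k=5):
    sampled = random.sample(seed_pairs, k)                 # randomly sample 5 seed QA pairs
    demos = "\n".join(f"Q: {q}\nA: {a}" for q, a in sampled)
    return HEADER + demos + "\nQ:"

def parse_qa(text):
    # Very loose parser: pull alternating "Q:"/"A:" lines out of the completion.
    pairs, question = [], None
    for line in text.splitlines():
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:") and question:
            pairs.append((question, line[2:].strip()))
            question = None
    return pairs

def generate_synthetic_nq(seed_pairs, query_target_model, n_target=1000):
    seen, dataset = set(), []
    while len(dataset) < n_target:
        completion = query_target_model(build_prompt(seed_pairs), temperature=1.0)
        for q, a in parse_qa(completion):
            if (q, a) not in seen:                         # discard duplicate QA pairs
                seen.add((q, a))
                dataset.append({"question": q, "answer": a})
    return dataset
```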
[{"Category": "Methodological Basis", "Citation": "(OpenAI, 2022)", "Explanation": "The cited work, ChatGPT, serves as a methodological basis for the citing paper in the development of language models."}, {"Category": "Data Source", "Citation": "(Pichai, 2023)", "Explanation": "The cited work, Bard, is acknowledged as a data source for the study conducted in the citing paper on the development of language models."}, {"Category": "Data Source", "Citation": "(AnthropicAI, 2023)", "Explanation": "The cited work, Claude, is acknowledged as a data source for the study conducted in the citing paper on the development of language models."}, {"Category": "Methodological Basis", "Citation": "(Touvron et al., 2023)", "Explanation": "The cited work, LLaMA and FLAN-T5, provide a methodological basis for the citing paper in the development of open-source language models."}, {"Category": "Methodological Basis", "Citation": "(Chung et al., 2022)", "Explanation": "The cited work, LLaMA and FLAN-T5, provide a methodological basis for the citing paper in the development of open-source language models."}, {"Category": "Methodological Basis", "Citation": "(Wallace et al., 2020)", "Explanation": "The cited work by Wallace et al. provides a method for model imitation, which the citing paper adopts in their study of improving open-source LMs through imitation."}, {"Category": "Methodological Basis", "Citation": "(Orekondy et al., 2019)", "Explanation": "The cited work by Orekondy et al. also contributes to the study of model imitation, providing a method for fine-tuning open-source LMs using API outputs to improve their capabilities."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2022a)", "Explanation": "The cited work by Wang et al. (2022a) is referenced for its self-instructional model, which the citing paper adopts to train and evaluate copycats of ChatGPT."}, {"Category": "Extension or Continuation", "Citation": "(Taori et al., 2023)", "Explanation": "The work by Taori et al. (2023) is mentioned as a recent model that has achieved near parity with proprietary models, indicating a trend in the broader tech community towards model imitation. The citing paper builds upon this trend by training and evaluating copycats of ChatGPT."}, {"Category": "Data Source", "Citation": "(Patel and Ahmad, 2023)", "Explanation": "The work by Patel and Ahmad (2023) is cited to highlight the growing sentiment among the tech community that closed-source models will soon have no advantage. The citing paper uses this data to support the need for model imitation in the field of large language models."}, {"Category": "Methodological Basis", "Citation": "(Wallace et al., 2020)", "Explanation": "The cited work by Wallace et al. provides a methodology for collecting data using a black-box API to train an LM that achieves comparable performance to a proprietary model."}, {"Category": "Methodological Basis", "Citation": "(Orekondy et al., 2019)", "Explanation": "The cited work by Orekondy et al. also provides a methodology for training LMs using imitation training sets to achieve comparable performance to proprietary models."}, {"Category": "Methodological Basis", "Citation": "(Tram\u00e8r et al., 2016)", "Explanation": "The cited work by Tram\u00e8r et al. 
discusses the use of imitation training sets in the context of model security, providing a methodological basis for understanding the potential risks and benefits of this approach."}, {"Category": "Methodological Basis", "Citation": "(Sun et al., 2023)", "Explanation": "The cited work by Sun et al. provides a method for local imitation of proprietary models for specific tasks, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Hsieh et al., 2023)", "Explanation": "The cited work by Hsieh et al. also contributes a method for local imitation of proprietary models, which the citing paper may have adopted or adapted in their research."}, {"Category": "Methodological Basis", "Citation": "(Honovich et al., 2022)", "Explanation": "The cited work by Honovich et al. may have provided a method for local imitation of proprietary models, which the citing paper may have used in their research."}, {"Category": "Methodological Basis", "Citation": "(Taori et al., 2023)", "Explanation": "The cited work by Taori et al. presents the Alpaca model for broad imitation of models, which the citing paper may have adopted or adapted in their research."}, {"Category": "Methodological Basis", "Citation": "(Chiang et al., 2023)", "Explanation": "The cited work by Chiang et al. introduces the Vicuna model for broad imitation of models, which the citing paper may have utilized in their research."}, {"Category": "Methodological Basis", "Citation": "(Geng et al., 2023)", "Explanation": "The cited work by Geng et al. presents the Koala model for broad imitation of models, which the citing paper may have referenced in their research."}, {"Category": "Methodological Basis", "Citation": "(Anand et al., 2023)", "Explanation": "The cited work by Anand et al. discusses the GPT4ALL model for broad imitation of models, which the citing paper may have mentioned in their research."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2022a)", "Explanation": "The cited work by Wang et al. presents a broad imitation of models, which the citing paper may have discussed in their research."}, {"Category": "Methodological Basis", "Citation": "(Peng et al., 2023)", "Explanation": "The cited work by Peng et al. also discusses a broad imitation of models, which the citing paper may have mentioned in their research."}, {"Category": "Extension or Continuation", "Citation": "(Patel and Ahmad, 2023)", "Explanation": "The cited work by Patel and Ahmad may have provided a broader context for the research on open-source LMs and the competitive advantage of top AI companies, which the citing paper may have extended or continued in their study."}, {"Category": "Data Source", "Citation": "(Kwiatkowski et al., 2019a)", "Explanation": "The cited work provides the Natural Questions dataset, which the citing paper uses to curate a seed set of ten QA pairs for task-specific imitation."}, {"Category": "Data Source", "Citation": "(Wang et al., 2022b)", "Explanation": "The cited work by Wang et al. (2022b) provides a dataset with a high level of instruction diversity, which the citing paper uses to compare the average BLEU score similarity of user queries in the dataset to that of the Super-NaturalInstructions dataset."}, {"Category": "Data Source", "Citation": "(Radford et al., 2019)", "Explanation": "The cited work by Radford et al. 
(2019) provides the GPT-2 1.5B model, which the citing paper uses as a base for their study of model imitation in language models."}, {"Category": "Data Source", "Citation": "(Touvron et al., 2023)", "Explanation": "The cited work by Touvron et al. (2023) introduces the LLaMA 7B model, which the citing paper uses as a base for their study of model imitation in language models."}, {"Category": "Data Source", "Citation": "(Touvron et al., 2023)", "Explanation": "The cited work by Touvron et al. (2023) also introduces the LLaMA 13B model, which the citing paper uses as a base for their study of model imitation in language models."}, {"Category": "Data Source", "Citation": "(Hendrycks et al., 2021)", "Explanation": "The cited work provides the MMLU dataset used in the training and evaluation of the models in the citing paper."}, {"Category": "Data Source", "Citation": "(Kwiatkowski et al., 2019b)", "Explanation": "The Natural Questions dataset is cited as a data source for the performance evaluation of the models in the citing paper."}, {"Category": "Data Source", "Citation": "(Chen et al., 2021)", "Explanation": "The HumanEval dataset is mentioned as a data source for the human evaluation conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Gehman et al., 2020)", "Explanation": "The cited work by Gehman et al. provides the RealToxicityPrompts dataset, which the citing paper uses to evaluate the toxicity of the imitation models in their research on finetuning on imitation data."}, {"Category": "Methodological Basis", "Citation": "(OpenAI, 2022)", "Explanation": "The cited work by OpenAI (2022) is used as a basis for the imitation process conducted in the citing paper, which involves performing supervised learning on the outputs from the target model."}, {"Category": "Extension or Continuation", "Citation": "(Li et al., 2022)", "Explanation": "The cited work by Li et al. (2022) is a recent vision-language model that is similar to the models studied in the citing paper and may have similar failure modes in terms of model imitation."}, {"Category": "Extension or Continuation", "Citation": "(Liu et al., 2023)", "Explanation": "The cited work by Liu et al. (2023) is another recent vision-language model that may have similar failure modes in terms of model imitation as the models studied in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Ye et al., 2023)", "Explanation": "The cited work by Ye et al. (2023) is a recent vision-language model that may have similar failure modes in terms of model imitation as the models studied in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Zhu et al., 2023)", "Explanation": "The cited work by Zhu et al. 
(2023) is a recent vision-language model that may have similar failure modes in terms of model imitation as the models studied in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Hinton et al., 2014)", "Explanation": "The cited work on model distillation provides the foundational method of training a student model to imitate a teacher, which the citing paper adopts in the process of model imitation."}, {"Category": "Extension or Continuation", "Citation": "(Orekondy et al., 2019)", "Explanation": "The cited work on model imitation in various domains is extended in the citing paper to include the domain of language classifiers and machine translation systems."}, {"Category": "Data Source", "Citation": "(Krishna et al., 2020)", "Explanation": "The cited work on model imitation in the language classifier domain is acknowledged as a data source for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Pal et al., 2019)", "Explanation": "The cited work on model imitation in the language classifier domain is acknowledged as a data source for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Wallace et al., 2020)", "Explanation": "The cited work on model imitation in the machine translation system domain is acknowledged as a data source for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Krishna et al., 2020)", "Explanation": "The cited work by Krishna et al. provides a method for detecting whether a model is trained via imitation, which the citing paper adopts to address the concern of model imitation in the context of proprietary systems."}, {"Category": "Methodological Basis", "Citation": "(Juuti et al., 2019)", "Explanation": "The cited work by Juuti et al. contributes a method for detecting imitation in models, which the citing paper uses to address the issue of model imitation in the context of proprietary systems."}, {"Category": "Methodological Basis", "Citation": "(Szyller et al., 2019)", "Explanation": "The cited work by Szyller et al. provides a method for detecting imitation in models, which the citing paper adopts to address the concern of model imitation in the context of proprietary systems."}, {"Category": "Methodological Basis", "Citation": "(Maini et al., 2021)", "Explanation": "The cited work by Maini et al. contributes a method for detecting whether a model is trained via imitation, which the citing paper uses to address the issue of model imitation in the context of proprietary systems."}, {"Category": "Data Source", "Citation": "(Wallace et al., 2020)", "Explanation": "The cited work by Wallace et al. provides a dataset or model that the citing paper utilizes in its research on detecting model imitation, which serves as a foundational element for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Orekondy et al., 2020)", "Explanation": "The cited work by Orekondy et al. contributes a dataset or model that the citing paper uses in its research on detecting model imitation, which serves as a foundational element for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Dziedzic et al., 2022a)", "Explanation": "The cited work by Dziedzic et al. 
provides a dataset or model that the citing paper utilizes in its research on detecting model imitation, which serves as a foundational element for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Dziedzic et al., 2022b)", "Explanation": "The cited work by Dziedzic et al. contributes a dataset or model that the citing paper uses in its research on detecting model imitation, which serves as a foundational element for the study conducted in the citing paper."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b8", "b6", "b12", "b0", "b4", "b22", "b5", "b30", "b16", "b2", "b28", "b29", "b2", "b28", "b34", "b31" ], "table_ref": [], "text": "Multilingual neural machine translation (MNMT) is a popular paradigm that uses a unified model to handle the entire translation process for multiple language pairs (Ha et al., 2016;Firat et al., 2016;Johnson et al., 2017). This paradigm is particularly effective at improving the performance of lowresource languages through transfer learning (Aharoni et al., 2019;Dabre et al., 2020;Siddhant et al., 2022). Besides, MNMT is highly deployable since only one model is required (Fan et al., 2021;Yang et al., 2021;NLLB Team et al., 2022).\nHowever, the severely imbalanced distribution of multilingual training data puts the MNMT in 1 In Pareto optimization, Pareto optimal solutions refer to solutions in which none of the objectives can be improved without sacrificing at least one of the other objectives. The set a situation of Pareto optimization (also known as multi-objective optimization). That is, when some languages are optimized, others degenerate. Existing methods can be considered a set of Pareto optimal solutions that trade off on a Pareto frontier, which focus on balancing the performance across different languages by adjusting the sampling distribution (Arivazhagan et al., 2019;Wang et al., 2020;Wu et al., 2021). The widely-used temperature-based sampling (Arivazhagan et al., 2019) is typical evidence of the claim above, which uses a hyper-parameter to smooth the training distribution over all language pairs to enhance the representation of low-source Languages (LRLs) while sacrificing the which of High-Resource Languages (HRLs). Despite the emergence of several sophisof all Pareto optimal solutions forms a Pareto frontier. 2 Our code is publicly available at https://github.com/ OrangeInSouth/Pareto-Mutual-Distillation arXiv:2305.15718v1 [cs.CL] 25 May 2023 ticated dynamic sampling technologies designed to overcome the inflexibility of temperature-based sampling, their performance remains restricted to this Pareto frontier (Wang et al., 2020;Zhou et al., 2021;Zhang et al., 2021)." }, { "figure_ref": [ "fig_0" ], "heading": "HRLs", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a novel training framework, named Pareto Mutual Distillation (Pareto-MD), to push the Pareto frontier of multilingual models. Specifically, Pareto-MD uses different training distributions that favor dissimilar subsets of languages to train two multilingual models simultaneously. These two models learn from each other at each training step with knowledge distillation. The underlying idea of Pareto-MD is to address shortcomings of individual Pareto optimal solutions via access to a better one in terms of that shortcoming, thereby raising the Pareto frontier, as Fig. 1 depicts. To fully exploit the potential of our approach in multilingual settings, we further propose Automatic Pareto Mutual Distillation, which dynamically determines the contribution of distillation learning loss on each objective. These contributions, controlled by a set of distillation weights, adapt automatically to the evolving models, eliminating the need for manual hyper-parameter search.\nWhile our method applies essentially to any multi-objective optimization problem, we specifically demonstrate its benefit on multilingual machine translation. 
The experimental results on two widely-used datasets demonstrate the effectiveness of our method, which improves up to +2.46 BLEU, and the further analysis shows the Pareto frontier is pushed outwards visibly." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b13", "b24", "b3", "b27" ], "table_ref": [], "text": "Neural machine translation (NMT) is a classic NLP task that translates a sentence x in a source language into a sentence y in a target language (Kalchbrenner and Blunsom, 2013;Sutskever et al., 2014;Bahdanau et al., 2015;Vaswani et al., 2017). Given a parallel corpus D = {(x, y) ∈ X × Y}, the NMT model is commonly trained by minimizing the negative log-likelihood loss:\nL_ce = Σ_{(x,y)∈D} Σ_{i≤|y|} −log p(y_i | x, y_{<i}; θ),  (1)\nwhere p(·|·; θ) maps the source sentence and previously generated text to the next target token." }, { "figure_ref": [], "heading": "Multilingual Machine Translation", "publication_ref": [ "b2" ], "table_ref": [], "text": "Given a set of language pairs L, the MNMT model is trained on the combination of |L| parallel datasets {D^train_ℓ}_{ℓ=1}^{|L|}, where D^train_ℓ is the dataset of language pair (S_ℓ, T_ℓ). In order to encode and decode the text in various languages into and from a universal semantic space, a large multilingual vocabulary V is constructed. The language tag is appended to the beginning of source sentences as a hint of the target language. The MNMT model is also trained with the loss function of Eq. 1 over the multilingual datasets.\nTemperature-based Sampling. The multilingual datasets form a distribution P, where P(ℓ) = N_ℓ / Σ_j N_j is the sampling probability of language pair ℓ and we denote the dataset size of D^train_ℓ by N_ℓ. Since sampling probabilities of LRLs are substantially lower than those of HRLs, the optimization towards LRLs can be overwhelmed by those of HRLs. To resolve this issue, Arivazhagan et al. (2019) propose temperature-based sampling, introducing a hyper-parameter τ to re-scale the smoothness of the training distribution. Concretely, the sampling probability of each language pair ℓ is set to:\nP(ℓ) = N_ℓ^{1/τ} / Σ_j N_j^{1/τ},  (2)\nincreasing the value of τ produces smoother training distributions and stronger preferences on LRLs." }, { "figure_ref": [], "heading": "Mutual Distillation", "publication_ref": [ "b10", "b32", "b7" ], "table_ref": [ "tab_0" ], "text": "Knowledge Distillation (KD) is a popular technology for knowledge transfer, which originates from compressing a static high-capacity model (teacher model) into a small compact model (student model) (Hinton et al., 2015). Mutual distillation is a variant of KD (Zhang et al., 2018;Guo et al., 2020). Instead of using a pre-trained teacher model, mutual distillation involves training more than one model simultaneously, with each model teaching the other throughout the training process.\nMutual distillation takes the same loss function as vanilla knowledge distillation, that is:\nL_kd = Σ_{i≤|y|} Σ_{w∈V} −p(w | x, y_{<i}; θ_T) · log p(w | x, y_{<i}; θ_S),  (3)\nwhere V is the target-side vocabulary, and θ_S and θ_T are the student model and teacher model. The major difference of Pareto-MD from vanilla mutual distillation is that we train two models with different sampling distributions to make them favor different sets of objectives." }, { "figure_ref": [], "heading": "Pareto Mutual Distillation", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce our training framework Pareto-MD ( §3.1). Next, two strategies that determine the important distillation weights, UNI-PMD and BI-PMD, are shown ( §3.2). 
To overcome the flaws of these two strategies above, AUTO-PMD is further proposed ( §3.3)." }, { "figure_ref": [ "fig_6" ], "heading": "Framework", "publication_ref": [], "table_ref": [], "text": "We illustrate our Pareto-MD in Fig. 2. Pareto-MD simultaneously trains two models, denoted by θ_1 and θ_2, using different sampling distributions, P_1 and P_2, that make each model favor a different set of language pairs. To obtain the expected distributions, we adopt temperature-based sampling, as shown in Eq. 2, and set τ = 1 for P_1 and τ > 1 (e.g., τ = 5 commonly) for P_2. In this way, θ_1 prefers HRLs, and θ_2 prefers LRLs. At each training step, for each model θ_i where i ∈ {1, 2}, Pareto-MD first draws a language pair ℓ from training distribution P_i, then a mini-batch of sentence pairs B_ℓ = {x_ℓ, y_ℓ} is sampled from D^train_ℓ. Next, the model θ_i is trained to fit B_ℓ and match the output of the other model, i.e., θ_{3−i}. The overall loss function for model θ_i is defined as:\nL_PMD = (1 − α_i[ℓ]) · L_ce(B_ℓ; θ_i) + α_i[ℓ] · L_kd(B_ℓ; θ_i, θ_{3−i}),  (4)\nwhere α_i ∈ R^{|L|} is the multilingual distillation weight vector of θ_i and α_i[ℓ] ∈ [0, 1] is the distillation weight for language pair ℓ. α_i[ℓ] is crucial as it controls the extent to which θ_i should learn from θ_{3−i} in direction ℓ. When α_i[ℓ] = 0, θ_i does not acquire information from θ_{3−i} in ℓ. The values of the distillation weights are set by the strategies of §3.2, e.g.,\nα_i[ℓ] = α,  (6)\nmeaning that each model affects the other equally. (Algorithm 2, which searches the new weights α^k_1, α^k_2 from the search spaces Õ^k_1, Õ^k_2 and the previous weights α^{k−1}_1, α^{k−1}_2 by trial-training a copy of each model on D^trial for one epoch with Eq. 4, is described in §3.3.)" }, { "figure_ref": [ "fig_7", "fig_7", "fig_7", "fig_7" ], "heading": "AUTO-PMD", "publication_ref": [ "b28" ], "table_ref": [], "text": "Desiderata. Both UNI-PMD and BI-PMD determine the distillation weights of all translation directions based on a pre-defined hyper-parameter α, which fails to satisfy the following three expected properties of distillation weights: 1) Language-Adaptability: the optimal distillation weights for different language pairs vary, but the current strategies set a uniform weight for all language pairs, resulting in sub-optimal performance; 2) Dynamics: existing research on mutual distillation uses a fixed distillation weight throughout the training process, which fails to adapt to the evolving models; 3) Generality: it is empirically discovered that the optimal value of the distillation weight varies across different datasets, incurring the extra cost of manual hyper-parameter search. To satisfy these three properties, we propose Automatic Pareto Mutual Distillation (AUTO-PMD) to automatically decide the value of each direction's distillation weight according to training dynamics.\nApproach. AUTO-PMD updates the multilingual distillation weight vector α_i every T steps. We denote the values of α_i after the k-th update by α^k. Note that the subscript i of α_i is omitted for clarity. The update process is modeled as a Markov chain (Norris and Norris, 1998). All distillation weights are initialized at the beginning of training as a small value, i.e., α^0[ℓ] = 0.1. 
Three actions on the distillation weight are defined:\nF = {f↑(·), f↓(·), f=(·)},  (7)\nwhich aim to increase, decrease, and keep the value of the distillation weight unchanged. At the k-th update, AUTO-PMD decides the values of α^k according to the previous state α^{k−1}. We exemplify the process of each update step in Fig. 3 and precisely describe it in Alg. 2. As illustrated in Fig. 3, the update process is divided into three steps.\nIn the first step, given the previous distillation weights α^{k−1}, AUTO-PMD makes three trials, generating three multilingual distillation weight vectors for the trial training of the next step. Each vector is obtained by performing an action (e.g., increasing) on all values of α^{k−1}. These three vectors, corresponding to the three colorful vectors in Fig. 3, form a set which is referred to as the search space Õ^k. In fact, the trial training of the next step should be conducted over the entire search space O^k, which is the Cartesian product of the possible subsequent states of each language-specific distillation weight:\nO^k = ⨉_{ℓ∈L} {f(α^{k−1}[ℓ]) | f ∈ F}.  (8)\nHowever, this search space grows exponentially as the number of languages increases, that is, |O^k| = |F|^{|L|}. To overcome the non-trivial cost, the subspace Õ^k is adopted. Furthermore, we prove that, based on the Distillation Weights Independence assumption, the optimal solution searched in Õ^k is equivalent to that of O^k. The mathematical description of this assumption and the proof are demonstrated in §A.\nNext, AUTO-PMD uses each distillation weight vector in Õ^k to train the current model on the trial set D^trial, which is constructed by sampling ρ of D^train, for one epoch. The three trained models are evaluated on the validation set, and the language-specific dev losses of these models form a matrix, which is represented by the trial results R ∈ R^{|Õ^k|×|L|}. The model training of this step incurs overhead, which is proportional to the value of ρ × |Õ^k|. In this work, we set ρ = 0.1. Thereby, the extra overhead is 30% of the actual model training.\nFinally, the language-specific optimal actions are selected according to the trial results and then performed on α^{k−1}[ℓ], obtaining α^k[ℓ]. We exemplify this step with Fig. 3. The red model, trained using the increased version of α^{k−1} (the vector in red), achieves the best performance on Fr→En. Thus, the α^k[ℓ] of Fr→En is obtained by increasing the α^{k−1}[ℓ] of Fr→En.\nImplementation of Actions. As aforementioned, three actions for updating distillation weights are defined (in Eq. 7). The action f=(·) is simple:\nf=(α[ℓ]) = α[ℓ].  (9)\nFor f↑(·) and f↓(·), it must be ensured that the output is always within [0, 1]. Therefore, the input is first mapped into (−∞, +∞) using the inverse of the sigmoid function, then increased/decreased by µ, named the step size. Finally, the increased/decreased value is mapped back into [0, 1] using the sigmoid function. Formally:\nf↑(α[ℓ]) = σ(σ^{−1}(α[ℓ]) + µ),  (10)\nf↓(α[ℓ]) = σ(σ^{−1}(α[ℓ]) − µ),  (11)\nwhere σ(·) is the sigmoid function. The step size µ is crucial for the weight search. 
A smaller step size could improve the precision of the searched weights but may delay convergence to the optimal weight. Therefore, we design a step size scheduler, setting a large step size in the early training stage and then decreasing the step size:\nµ = c · (T_max − t) / T_max,  (12)\nwhere T_max is the max number of training steps." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b29", "b31", "b34", "b26", "b11", "b27", "b18", "b11", "b20", "b21", "b34", "b11" ], "table_ref": [], "text": "For each dataset, our approach is evaluated in two multilingual translation scenarios: 1) MANY-TO-ONE (M2O): translating multiple languages to English in this work; 2) ONE-TO-MANY (O2M): translating English to other languages.\nHyper-parameters. Even though our proposed training framework can be applied to any model architecture, we verify its effectiveness on the popular Transformer (Vaswani et al., 2017) implemented in fairseq (Ott et al., 2019) with the base version. We use the same model configuration, hyper-parameters, and preprocessing procedure as those of Huang et al. (2022) for all baselines and our method. The only difference is that the dropout rate is modified to 0.2 on WMT-6, to accelerate convergence without performance loss. The complete set of hyper-parameters is demonstrated in Appendix C. The performance is evaluated with the BLEU score (Papineni et al., 2002) using the SacreBLEU toolkit (Post, 2018).\nAs illustrated in §3.1, our Pareto-MD trains two models using different sampling distributions, P_1 and P_2, and we adopt temperature-based sampling with different values of τ to produce these two distributions. We set τ = 1 for P_1 and τ = 5 for P_2 on WMT-6. On TED-8-Diverse, we set τ = 1 for model-1 and τ = 3 for model-2 since an overly large value leads to poor performance. For UNI-PMD and BI-PMD, we manually search the optimal α (in Eq. 5 and Eq. 6) among {0.2, 0.4, 0.6, 0.8}. The update interval of distillation weights T is set to the step number of one epoch.\nBaselines. We primarily compare our Pareto-MD with: 1) Temperature-based Sampling: the method most related to our work; 2) χ-IBR (Zhou et al., 2021), the state-of-the-art (SOTA) dynamic sampling method, which enables balanced training based on distributionally robust optimization; 3) LSSD (Huang et al., 2022), another distillation-based training strategy which achieves SOTA performance on the TED-8-Diverse and WMT-6 datasets via alleviating the convergence inconsistency problem of MNMT using self-distillation. Table 1 additionally compares against the dynamic sampling methods of Wang et al. (2020), MULTIUAT (Wu et al., 2021), and CCL-M (Zhang et al., 2021), as well as the distillation-based MULTI-DISTILL (Tan et al., 2019). More details of the baselines are demonstrated in Appendix D." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "We summarize the main results in Table 1. As observed, our methods significantly outperform temperature-based sampling under the M2O and O2M settings on both datasets. The model-2 trained with AUTO-PMD has improved by up to +2.46 BLEU under the M2O setting of WMT-6. Furthermore, Pareto-MD achieves higher BLEU scores than previous methods in most settings. At best, AUTO-PMD outperforms the previous SOTA (LSSD) by +1.22 BLEU scores under the M2O setting of WMT-6. 
When comparing UNI-PMD and BI-PMD, it is obvious that BI-PMD consistently exceeds UNI-PMD, verifying the motivation that the worse model can also improve the better model via knowledge distillation. AUTO-PMD further surpasses BI-PMD by +0.3∼0.5 BLEU. This improvement proves that our automatic search of distillation weights is indeed reliable. Moreover, AUTO-PMD is more general than UNI-PMD and BI-PMD since it eliminates the need to search for the hyper-parameter α manually (the effect of α is shown in Appendix F)." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_12" ], "heading": "Visualization of Pareto Frontier", "publication_ref": [], "table_ref": [], "text": "In order to clearly assess the impact of our methods on HRLs and LRLs, we visualize the Pareto frontier in Fig. 4. Three important observations can be drawn: 1) overall, model-1 has been significantly shifted right, and model-2 has been shifted upwards, proving that Pareto-MD effectively alleviates the shortcomings of each model as we expected; 2) both model-1 and model-2 are shifted right beyond the original model-2, indicating that the performance of LRLs is improved beyond the original performance bound. The reason may be that the transfer learning from HRLs to LRLs is more effective when the model achieves high performance on both HRLs and LRLs; 3) model-1 degenerates on the translation of HRLs in the O2M setting. One potential cause is that the representation space of HRLs undergoes more intense squeezing in the O2M setting than in the M2O setting when the model learns well on LRLs." }, { "figure_ref": [], "heading": "Effect of Diverse Sampling Strategies", "publication_ref": [], "table_ref": [], "text": "In the Pareto-MD training framework, two models corresponding to different Pareto optimal solutions are trained collaboratively using distinct training distributions. One natural question that arises is, how would the performance be affected if we trained two models with the same training distribution? This approach, in fact, degenerates into the vanilla mutual distillation method. Therefore, we conduct a comparison experiment on the WMT-6 dataset (M2O setting) shown in Table 2.\nThe results indicate that vanilla mutual distillation underperforms our BI-PMD by about 0.6 BLEU, which supports the effectiveness of using different sampling distributions in our Pareto-MD. Moreover, our AUTO-PMD improves over vanilla mutual distillation by +1.1 BLEU in total." }, { "figure_ref": [ "fig_14" ], "heading": "Evolution of Distillation Weights", "publication_ref": [ "b11" ], "table_ref": [], "text": "To better understand the process of AUTO-PMD, we visualize the automatically searched distillation weights in Fig. 5. As it depicts, the distillation weights constantly vary to adapt to the evolving models, with a decreasing variance caused by the decay of the search step size (Eq. 12). Besides, it is discovered that the low-resource Tr→En translation favors a higher value of the distillation weight than the high-resource Fr→En translation. This phenomenon makes sense since LRLs suffer from more serious over-fitting (Huang et al., 2022), requiring stronger distillation learning." }, { "figure_ref": [], "heading": "Effect of Step Size Scheduler µ", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "The performance of different step size schedulers is listed in Table 3. 
The simple scheduler-1 fixes the step size to 1.0, performing relatively poorly. The scheduler-2 decreases the step size from 1.0 to 0.2. The scheduler-4 decreases the step size from 1.0 to 0.0, achieving the best performance. The scheduler-3 also decrease the step size from 1.0 to 0.0, while not performing searching of distillation weights at the end of training. We finally adopt the scheduler-4 in our AUTO-PMD." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b2", "b2", "b28", "b34", "b31", "b34", "b26", "b11" ], "table_ref": [], "text": "For a long time, data imbalance has been a problem hindering multilingual models from performing evenly across different languages. Existing methods pursue balanced performance via designing heuristics (Arivazhagan et al., 2019) or automatic sampling strategies (Arivazhagan et al., 2019;Wang et al., 2020;Zhou et al., 2021;Wu et 2021; Zhang et al., 2021). For example, Wang et al.\n(2020) design a Reinforce Learning based method to automatically adjust the sampling probability of each language pair towards an overall optimal solution. Zhou et al. (2021) vary the distribution via distributional robust optimization. However, their improvement is limited since increasing the training weights of some languages leads to relative decreases in the weights of other languages, resulting in a trade-off on the Pareto frontier. Different from their methods, we overcome this issue by training two models collaboratively. Before our work, there were two approaches also based on knowledge distillation in MNMT. Tan et al. (2019) use pre-defined bilingual models to teach the multilingual model via knowledge distillation. Huang et al. (2022) propose language-specific self-distillation to remedy the convergence inconsistency problem in MNMT using self-distillation. Our Pareto-MD is an extension of mutual distillation on the Pareto optimization problems." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b33" ], "table_ref": [], "text": "In this work, we propose a training framework Pareto-MD to reach a higher Pareto frontier for MNMT. The core of Pareto-MD is the synergy between diverse Pareto optimal solutions via mutual distillation. Besides, we design a novel strategy for deducing distillation weights automatically, achieving better performance and getting rid of hyperparameter searching. Experimental results on the WMT and TED datasets show the effectiveness of our method. Even though we experiment with training two models in this work, our method can naturally apply to train more models. In the fu-ture, we are interested in exploring how to apply our Pareto-MD to the training of large language models (Zhao et al., 2023)." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b7" ], "table_ref": [], "text": "Our Pareto-MD doubles computational cost due to training two models simultaneously, which can be a limitation of our approach. However, Pareto-MD obtains significant improvement that is hard to achieve for previous methods of training individual models, thus worthy. Besides, our approach would not necessarily result in double training time because these two models can be trained in parallel as implemented by Guo et al. (2020). 
Moreover, Pareto-MD does not affect inference efficiency.\nA Equivalence Between Searching in O k and r O k\nAs illustrated in §3.3, our strategy AUTO-PMD first searches the language-specific optimal multilingual distillation weight vector αℓ for each translation direction ℓ from a search space and then take the αℓ rℓs as the searching result of α k rℓs. To search the optimal solution, the search space should be the entire space O k , which is formalized as:\nO k \" ą ℓPL tf pα k´1 rℓsq | f P Fu,\nHowever, the size of O k grows exponentially as the number of languages increases. Therefore, we instead search in r O k , a subset of O k , which is formalized as:\nr O k \" t t f Ò pα k´1 rℓsq u ℓPL , t f Ó pα k´1 rℓsq u ℓPL , t f \" pα k´1 rℓsq u ℓPL u.\nIn this section, we initially give a formal definition of the searching process. Subsequently, the Distillation Weights Independence (DWI) assumption is introduced. Ultimately, we prove the equivalence between searching in O k and r O k based on the DWI assumption. O k for direction ℓ, based on the Distillation Weights Independence assumption, it is satisfied that: αℓ rℓs \" r α ℓ rℓs.\nProof. Let αℓ rℓs \" f l pα k´1 rℓsq, where f l P F is the language-specific action, the following equation holds: \nL" }, { "figure_ref": [], "heading": "C Hyper-parameters", "publication_ref": [ "b27", "b19", "b25", "b23", "b15" ], "table_ref": [], "text": "In this section, we report the hyper-parameters used in our experiments.\n• We adopt the base-version of Transformer architecture with 6 layers encoders/decoders and 8 attention heads.\n• The embedding dimension is 512 and the Feed-Forward Network has a dimension of 2048.\n• We train models with learning rate η \" 0.0015 and use Adam optimizer (Kingma and Ba, 2015) with β 1 \" 0.9, β 2 \" 0.98, and the same learning rate schedule as Vaswani et al. (2017).\n• Batch size is set to 64K and half-precision training is adopted (Ott et al., 2018). • For regularization, we use the label smoothing as 0.1 (Szegedy et al., 2016). We set the dropout as 0.3 (Srivastava et al., 2014) on the TED-8-Diverse dataset and as 0.2 on the WMT-6 dataset.\n• Models are trained for 70 epochs on WMT-6 and 300 epochs on TED-8-Diverse according to the convergence.\n• For TED-8-Diverse, we preprocess sentececes using sentencepiece (Kudo and Richardson, 2018) with a vocabulary size of 8K for each language. For WMT-6, the vocabulary size is 64K for all languages.\n• For inference, we use beam search with beam size 5.\nAll models are trained on Tesla V100 GPUs." }, { "figure_ref": [], "heading": "D Details about Baselines", "publication_ref": [ "b2", "b11", "b34" ], "table_ref": [], "text": "For temperature-based sampling (Arivazhagan et al., 2019), we adopt the official implementation in fairseq. LSSD is re-implemented successfully with the code released by Huang et al. (2022). Table 7: BLEU score per language pair on the TED-8-DIVERSE dataset. 'Avg.' is the abbreviation of \"average values\". Bold indicates the best performance of each language pair. Languages are sorted in decreasing order from left to right according to data size.\nWe have tried to set Dropout rate to t0.2, 0.3u for LSSD, and report the best results in terms of BLEU for fair comparison. The code of χ-IBR (Zhou et al., 2021) is also released. However, the result of χ-IBR evaluated in our experiments is lower than the original paper. Therefore, we report the results in the original paper." 
}, { "figure_ref": [], "heading": "E BLEU scores on Individual Languages", "publication_ref": [], "table_ref": [], "text": "In this section, we report the BLEU scores of individual language pairs. For clarity, we only show the results of the temperature-based sampling and our AUTO-PMD. As illustrated in Table . 6 and Table . 7, our method achieves consistent improvements in 3 out of 4 settings.\nIn the one-to-many setting of WMT-6 dataset, the performance of HRLs (i.e., fr and de) drops about 0.7 BLEU. This may be due to the parameter interference from the significantly improved LRLs." }, { "figure_ref": [ "fig_16" ], "heading": "F Effect of α for UNI-PMD and BI-PMD", "publication_ref": [ "b11" ], "table_ref": [], "text": "In this section, we show the experimental results of UNI-PMD and BI-PMD with different values of α in Fig. 6. As demonstrated, the value of α is crucial for the performance. The optimal value of α varies across different settings. This conclusion is consistent with former work related to knowledge distillation (Huang et al., 2022), which highlights the importance of deducing distillation weights automatically." }, { "figure_ref": [], "heading": "G Other Variants of Mutual Distillation", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "In this work, we design another two mutual distillation-based strategies beyond AUTO-PMD: Dynamic Mutual Distillation (DYNAMIC-MD) and Language-Specific Mutual Distillation (LSMD). DYNAMIC-MD adopts the same update process of distillation weights as AUTO-PMD. That is, DY-NAMIC-MD also makes three trials and uses the optimal action to uptate the distillation weight. Differently, DYNAMIC-MD selects a uniform optimal action instead of language-specific optimal actions. LSMD sets fixed and language-specific distillation weights for each language pair. To obtain suitable language-specific distillation weights, we use the distillation weights searched by AUTO-PMD at the last update. The results of these two strategies are listed in Table 8. As the results show, AUTO-PMD achieves higher performance upper-bound than these two strategies." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "Xiaocheng Feng is the corresponding author of this work. We thank the anonymous reviewers for their insightful comments. This work was supported by the National Key R&D Program of China via grant 2020AAA0106502, National Natural Science Foundation of China (NSFC) via grant 62276078, the Key R&D Program of Heilongjiang via grant 2022ZX01A32 and the International Cooperation Project of PCL, PCL2022D01." } ]
10.18653/v1/N19-1388
[ { "authors": "Roee Aharoni; Melvin Johnson; Orhan Firat", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Massively multilingual neural machine translation", "year": "2019" }, { "authors": "Zeyuan Allen; -Zhu ; Yuanzhi Li", "journal": "", "ref_id": "b1", "title": "Towards understanding ensemble, knowledge distillation and self-distillation in deep learning", "year": "2020" }, { "authors": "Naveen Arivazhagan; Ankur Bapna; Orhan Firat; Dmitry Lepikhin; Melvin Johnson; Maxim Krikun; Mia Xu Chen; Yuan Cao; George F Foster; Colin Cherry; Wolfgang Macherey; Zhifeng Chen; Yonghui Wu", "journal": "", "ref_id": "b2", "title": "Massively multilingual neural machine translation in the wild: Findings and challenges", "year": "2019" }, { "authors": "Dzmitry Bahdanau; Kyunghyun Cho; Yoshua Bengio", "journal": "", "ref_id": "b3", "title": "Neural machine translation by jointly learning to align and translate", "year": "2015" }, { "authors": "Raj Dabre; Chenhui Chu; Anoop Kunchukuttan", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b4", "title": "A survey of multilingual neural machine translation", "year": "2020" }, { "authors": "Angela Fan; Shruti Bhosale; Holger Schwenk; Zhiyi Ma; Ahmed El-Kishky; Siddharth Goyal; Mandeep Baines; Onur Celebi; Guillaume Wenzek; Vishrav Chaudhary", "journal": "J. Mach. Learn. Res", "ref_id": "b5", "title": "Beyond english-centric multilingual machine translation", "year": "2021" }, { "authors": "Orhan Firat; Kyunghyun Cho; Yoshua Bengio", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Multi-way, multilingual neural machine translation with a shared attention mechanism", "year": "2016" }, { "authors": "Qiushan Guo; Xinjiang Wang; Yichao Wu; Zhipeng Yu; Ding Liang; Xiaolin Hu; Ping Luo", "journal": "", "ref_id": "b7", "title": "Online knowledge distillation via collaborative learning", "year": "2020" }, { "authors": "Thanh-Le Ha; Jan Niehues; Alex Waibel", "journal": "", "ref_id": "b8", "title": "Toward multilingual neural machine translation with universal encoder and decoder", "year": "2016" }, { "authors": "Bobby He; Mete Ozay", "journal": "", "ref_id": "b9", "title": "Feature kernel distillation", "year": "2021" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b10", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Yichong Huang; Xiaocheng Feng; Xinwei Geng; Bing Qin", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Unifying the convergences in multilingual neural machine translation", "year": "2022" }, { "authors": "Melvin Johnson; Mike Schuster; Quoc V Le; Maxim Krikun; Yonghui Wu; Zhifeng Chen; Nikhil Thorat; Fernanda Viégas; Martin Wattenberg; Greg Corrado; Macduff Hughes; Jeffrey Dean", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b12", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "year": "2017" }, { "authors": "Nal Kalchbrenner; Phil Blunsom", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Recurrent continuous translation models", "year": "2013" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b14", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Taku Kudo; John Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "SentencePiece: A simple 
and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Marta R Nllb Team; James Costa-Jussà; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Anna Maillard; Skyler Sun; Guillaume Wang; Al Wenzek; Bapi Youngblood; Loic Akula; Gabriel Mejia Barrault; Prangthip Gonzalez; John Hansanti; Semarley Hoffman; Jarrett; Ram Kaushik; Dirk Sadagopan; Shannon Rowe; Chau Spruit; Pierre Tran; Necip Andrews; Shruti Fazil Ayan; Sergey Bhosale; Angela Edunov; Cynthia Fan; Vedanuj Gao; Francisco Goswami; Philipp Guzmán; Alexandre Koehn; Christophe Mourachko; Safiyyah Ropers; Holger Saleem; Jeff Schwenk; Wang", "journal": "", "ref_id": "b16", "title": "No language left behind: Scaling humancentered machine translation", "year": "2022" }, { "authors": "R James; James Norris; Norris Robert", "journal": "Cambridge university press", "ref_id": "b17", "title": "Markov chains", "year": "1998" }, { "authors": "Myle Ott; Sergey Edunov; Alexei Baevski; Angela Fan; Sam Gross; Nathan Ng; David Grangier; Michael Auli", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "year": "2019" }, { "authors": "Myle Ott; Sergey Edunov; David Grangier; Michael Auli", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Scaling neural machine translation", "year": "2018" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Aditya Siddhant; Ankur Bapna; Orhan Firat; Yuan Cao; Mia Xu Chen; Isaac Caswell; Xavier Garcia", "journal": "", "ref_id": "b22", "title": "Towards the next 1000 languages in multilingual machine translation: Exploring the synergy between supervised and self-supervised learning", "year": "2022" }, { "authors": "Nitish Srivastava; Geoffrey Hinton; Alex Krizhevsky; Ilya Sutskever; Ruslan Salakhutdinov", "journal": "JMLR", "ref_id": "b23", "title": "Dropout: A simple way to prevent neural networks from overfitting", "year": "2014" }, { "authors": "Ilya Sutskever; Oriol Vinyals; Quoc V Le", "journal": "", "ref_id": "b24", "title": "Sequence to sequence learning with neural networks", "year": "2014" }, { "authors": "Christian Szegedy; Vincent Vanhoucke; Sergey Ioffe; Jon Shlens; Zbigniew Wojna", "journal": "", "ref_id": "b25", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "Xu Tan; Yi Ren; Di He; Tao Qin; Tie-Yan Liu", "journal": "", "ref_id": "b26", "title": "Multilingual neural machine translation with knowledge distillation", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b27", "title": "Attention is all you need", "year": "2017" }, { "authors": "Xinyi Wang; Yulia Tsvetkov; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Balancing training for multilingual neural machine translation", "year": "2020" }, { "authors": "Minghao Wu; Yitong Li; Meng Zhang; Liangyou Li; 
Gholamreza Haffari; Qun Liu", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Uncertainty-aware balancing for multilingual and multi-domain neural machine translation training", "year": "2021" }, { "authors": "Jian Yang; Shuming Ma; Haoyang Huang; Dongdong Zhang; Li Dong; Shaohan Huang; Alexandre Muzio; Saksham Singhal; Hany Hassan; Xia Song; Furu Wei", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Multilingual machine translation systems from Microsoft for WMT21 shared task", "year": "2021" }, { "authors": "Mingliang Zhang; Fandong Meng; Yunhai Tong; Jie Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Competence-based curriculum learning for multilingual machine translation", "year": "2021" }, { "authors": "Ying Zhang; Tao Xiang; Timothy M Hospedales; Huchuan Lu", "journal": "", "ref_id": "b32", "title": "Deep mutual learning", "year": "2018" }, { "authors": "Kun Wayne Xin Zhao; Junyi Zhou; Tianyi Li; Xiaolei Tang; Yupeng Wang; Yingqian Hou; Beichen Min; Junjie Zhang; Zican Zhang; Yifan Dong; Chen Du; Yushuo Yang; Zhipeng Chen; Jinhao Chen; Ruiyang Jiang; Yifan Ren; Xinyu Li; Zikang Tang; Peiyu Liu; Jian-Yun Liu; Ji-Rong Nie; Wen", "journal": "", "ref_id": "b33", "title": "A survey of large language models", "year": "2023" }, { "authors": "Chunting Zhou; Daniel Levy; Xian Li; Marjan Ghazvininejad; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Distributionally robust multilingual machine translation", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 84.1, 654.28, 205.76, 39.76 ], "formula_id": "formula_0", "formula_text": "L ce \" ÿ px,yq \"D ÿ iď|y| ´log ppy i |x, y ăi ; θq,(1)" }, { "formula_coordinates": [ 2, 375.17, 375.79, 149.97, 35.31 ], "formula_id": "formula_1", "formula_text": "P pℓq \" N 1{τ ℓ ř j N 1{τ j ,(2)" }, { "formula_coordinates": [ 2, 340.35, 631.94, 180.55, 59.7 ], "formula_id": "formula_2", "formula_text": "L kd \" ÿ iď|y| ÿ wPV ´ppw|x, y ăi ; θ T q ¨log ppw|x, y ăi ; θ S q, (3" }, { "formula_coordinates": [ 2, 520.9, 655.14, 4.24, 13.15 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 3, 0.77, 94.07, 183.49, 484.18 ], "formula_id": "formula_4", "formula_text": "↵ k´1 i r`s ↵ k i r`s ` ˆ↵1 r" }, { "formula_coordinates": [ 3, 80.85, 651.55, 209.01, 37.09 ], "formula_id": "formula_5", "formula_text": "L P M D \" p1 ´αi rℓsq ˆLce pB l ; θ i q `αi rℓs ˆLkd pB ℓ ; θ i , θ 3´i q,(4)" }, { "formula_coordinates": [ 4, 134.66, 341.01, 114.63, 17.66 ], "formula_id": "formula_6", "formula_text": "↵ i r`s \" ↵,(6)" }, { "formula_coordinates": [ 4, 265.33, 125.59, 170.73, 96.07 ], "formula_id": "formula_7", "formula_text": "k 1 , r O k 2 , distillation weights ↵ k´1 1 , ↵ k´1 2 Output : ↵ k 1 , ↵ k 2 Initialize: Initialize trial results R P R |L|ˆ| r O k i | to a zero matrix 1 for i -1 to 2 do 2 for j -1 to | r O k i | do 3 ↵ 1 i -r O k i rjs 4 Copy model ✓ 1 i -✓i 5" }, { "formula_coordinates": [ 4, 155.84, 513.97, 134.03, 20.64 ], "formula_id": "formula_8", "formula_text": "α i rℓs \" α,(6)" }, { "formula_coordinates": [ 4, 358.78, 486.85, 166.36, 20.64 ], "formula_id": "formula_9", "formula_text": "F \" tf Ò p¨q, f Ó p¨q, f \" p¨qu,(7)" }, { "formula_coordinates": [ 5, 73.33, 116.65, 199.62, 112.32 ], "formula_id": "formula_10", "formula_text": "k 1 , r O k 2 , distillation weights α k´1 1 , α k´1 2 Output : α k 1 , α k 2 Initialize: Initialize trial results R P R |L|ˆ| r O k i | to a zero matrix 1 for i Ð 1 to 2 do 2 for j Ð 1 to | r O k i | do 3 α 1 i Ð r O k i rjs 4 Copy model θ 1 i Ð θi 5" }, { "formula_coordinates": [ 5, 108.64, 378.64, 181.22, 33.63 ], "formula_id": "formula_11", "formula_text": "O k \" ą ℓPL tf pα k´1 rℓsq | f P Fu.(8)" }, { "formula_coordinates": [ 5, 375.68, 156.86, 149.46, 20.66 ], "formula_id": "formula_12", "formula_text": "f \" pαrℓsq \" αrℓs.(9)" }, { "formula_coordinates": [ 5, 347.65, 283.61, 177.49, 46.27 ], "formula_id": "formula_13", "formula_text": "f Ò pαrℓsq \" σpσ ´1pαrℓsq `µq (10) f Ó pαrℓsq \" σpσ ´1pαrℓsq ´µq (11" }, { "formula_coordinates": [ 5, 520.6, 309.22, 4.54, 13.15 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 5, 377.84, 425.89, 147.3, 43.01 ], "formula_id": "formula_15", "formula_text": "µ \" c T max ´t T max (12)" }, { "formula_coordinates": [ 12, 226.28, 149.13, 142.71, 33.63 ], "formula_id": "formula_16", "formula_text": "O k \" ą ℓPL tf pα k´1 rℓsq | f P Fu," }, { "formula_coordinates": [ 12, 232.05, 228.73, 131.18, 57.98 ], "formula_id": "formula_17", "formula_text": "r O k \" t t f Ò pα k´1 rℓsq u ℓPL , t f Ó pα k´1 rℓsq u ℓPL , t f \" pα k´1 rℓsq u ℓPL u." }, { "formula_coordinates": [ 12, 169.45, 617.62, 7.52, 18.93 ], "formula_id": "formula_18", "formula_text": "L" } ]
Towards Higher Pareto Frontier in Multilingual Machine Translation
Multilingual neural machine translation has witnessed remarkable progress in recent years. However, the long-tailed distribution of multilingual corpora poses a challenge of Pareto optimization, i.e., optimizing for some languages may come at the cost of degrading the performance of others. Existing balancing training strategies are equivalent to a series of Pareto optimal solutions, which trade off on a Pareto frontier. In this work, we propose a new training framework, Pareto Mutual Distillation (Pareto-MD), towards pushing the Pareto frontier outwards rather than making trade-offs. Specifically, Pareto-MD collaboratively trains two Pareto optimal solutions that favor different languages and allows them to learn from the strengths of each other via knowledge distillation. Furthermore, we introduce a novel strategy to enable stronger communication between Pareto optimal solutions and broaden the applicability of our approach. Experimental results on the widely-used WMT and TED datasets show that our method significantly pushes the Pareto frontier and outperforms baselines by up to +2.46 BLEU.
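The "balancing training strategies" mentioned in the abstract are typically implemented as temperature-based sampling over language pairs, i.e. Eq. (2) above. A minimal illustrative sketch follows; variable names are mine, not taken from any released code.

```python
# Temperature-based sampling (Eq. 2): the probability of drawing a batch from
# language pair l is proportional to N_l^(1/tau).
import numpy as np

def sampling_distribution(sizes, tau):
    """sizes: per-language-pair training-set sizes N_l. tau = 1 gives proportional
    sampling; larger tau flattens the distribution toward low-resource languages."""
    weights = np.array(sizes, dtype=np.float64) ** (1.0 / tau)
    return weights / weights.sum()

# e.g. sampling_distribution([3_000_000, 5_000], tau=5) up-weights the low-resource pair.
```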
Yichong Huang; Xiaocheng Feng; Xinwei Geng; Baohang Li; Bing Qin
[ { "figure_caption": "Figure 1 :1Figure 1: Multilingual performance frontier shifts outwards. X-axis and Y-axis indicate the performance of Low-Resource Languages and High-Resource Languages, respectively. Existing methods reflect a tradeoff on the Pareto frontier (i.e., the gray curve). Our work aims to push the original Pareto frontier i.e., the blue dotted curve. To this effect, we ameliorate each individual model's shortcoming while retaining their strengths, e.g., moving right the solution A to A 1 and moving up the solution B to B 1 , via our Pareto Mutual Distillation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "4EFEffect of ↵ for Uni-MOMD and 5 Bi-MOMD 6 In this section, we show the experimental results 7 of Uni-MOMD and Bi-MOMD with different val-8 ues of ↵ in Fig. 5. As demonstrated, the value 9 of ↵ is crucial for the performance. The optimal 0 value of ↵ varies across different settings. This con-Effect of Step Size Scheduler µ 6", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ":Effect of step size scheduler µ in the many-tonslation of WMT-6 dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "of step size scheduler µ in the many-toe translation of WMT-6 dataset.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of Pareto-MD, using different sampling distributions to train two models. At each step, both models additionally mimic the output of each other via knowledge distillation. The distillation learning of each model is weighted by language-specific distillation weights α i rℓs deduced with specific strategies.", "figure_data": "", "figure_id": "fig_6", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fr→EnFigure 3 :3Figure3: Process of AUTO-PMD updating the distillation weights. At the k-th update, AUTO-PMD makes three trials that perform three actions to all language pairs' weights and then train the current model. Finally, the language-specific optimal actions are selected to update the previous weights. Note that the value of each weight will change by different magnitudes when increased or decreased due to the non-linear nature of sigmoid function.", "figure_data": "", "figure_id": "fig_7", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "In terms of mechanism, both UNI-PMD and BI-PMD determine the distillation weights of all translation directions from a predefined hyper-parameter ↵, which dissatisfies the following three expected properties of distillation weights: 1) Language-Adaptability: w.h.p. language pairs differ in the value of the optimal distillation weight, but the current strategies set a uniform weight for all language pairs, resulting in sub-optimal performance; 2) Dynamics: existing research on mutual distillation uses a fixed distillation weight throughout the training process, which Algorithm 2: AUTO-PMD Input : Multilingual trial datasets tD trial `u|L| `\"1 , validation datasets tD valid `u|L| `\"1 , the training model ✓1and ✓2, search space r O", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Process of AUTO-PMD updating the distillation weights. 
At the k-th update, AUTO-PMD makes three trials that perform three actions to all language pairs' weights and then train the current model. Finally, the language-specific optimal actions are selected to update the previous weights. Note that the value of each weight will change by different magnitudes when increased or decreased due to the non-linear nature of sigmoid function.", "figure_data": "", "figure_id": "fig_9", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "training model θ1and θ2, search space r O", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Multilingual performance Pareto frontier on the WMT-6 dataset. Gray dotted curves indicate the Pareto frontier of baselines and the colorful ones mark the frontier made by AUTO-PMD. This figure shows that the Pareto frontier is pushed outwards significantly.", "figure_data": "", "figure_id": "fig_12", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visualization of automatically search distillation weights in the many-to-one setting of WMT-6 dataset. Due to the space limitation, we only show the weights of one HRL (FrÑEn) and one LRL (TrÑEn)", "figure_data": "", "figure_id": "fig_14", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Definition A.1 (Searching Process). Given the multilingual trial set D trial \" tD trial ℓ u |L| ℓ\"1 , validation set D valid \" tD valid ℓ u |L| ℓ\"1 , student mode θ S , teacher model θ T , and the search space O , for each translation direction ℓ, the searching process of α k rℓs is: α k rℓs \" αℓ rℓs αℓ \" arg min αPO L ce pD valid ℓ ; θpαqq θpαq \" arg min θ L P M D pD trial ; θ S , θ T , αq. Hypothesis A.1 (Distillation Weights Independence). Given two multilingual distillation weight vectors α 1 and α 2 : Dℓ P L, α 1 rℓs \" α 2 rℓs ñ L ce pD valid ℓ ; θpα 1 qq \" L ce pD valid ℓ ; θpα 2 qq Theorem A.1. Let αℓ rℓs denote the searching result in the search space O k for direction ℓ, r α ℓ rℓs denotes the searching result in the search space r", "figure_data": "", "figure_id": "fig_15", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Effect of different values of α on WMT-6 dataset. For clarity, we only depict the results of model-2 trained with τ \" 5.", "figure_data": "", "figure_id": "fig_16", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Effect of step size scheduler µ in the many-toone translation of WMT-6 dataset.", "figure_data": "8 9We have tried to set Dropout rate to t0.2, 0.3u for LSSD, and report the best results in terms of BLEU0for fair comparison. The code of -IBR (Zhou1et al., 2021) is also released. However, the result of-IBR evaluated in our experiments is lower than", "figure_id": "tab_0", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Effect of step size scheduler µ in the many-to-one translation of WMT-6 dataset.78 79We have tried to set Dropout rate to t0.2, 0.3u for LSSD, and report the best results in terms of BLEU80for fair comparison. The code of -IBR (Zhou81et al., 2021) is also released. However, the result of82-IBR evaluated in our experiments is lower than83the original paper. Therefore, we report the results84in the original paper.85 86E Effect of ↵ for Uni-MOMD and Bi-MOMD87In this section, we show the experimental results88of Uni-MOMD and Bi-MOMD with different val-89ues of ↵ in Fig. 5. 
As demonstrated, the value90of ↵ is crucial for the performance. The optimal91value of ↵ varies across different settings. This con-92clusion is consistent with former work related to93knowledge distillation (Huang et al., 2022), which94highlights the importance of deducing distillationweights automatically.96", "figure_id": "tab_1", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "BLEU scores on the WMT-6 and TED-8-Diverse dataset. Bold indicates the highest BLEU score in each setting. '*' means results taken from the original paper.", "figure_data": "UNI-PMD BI-PMD AUTO-PMD21.17 Our Pareto Mutual Distillation 19.76 \" 1 τ \" 1 20.76 : 18.96 τ ą 1 21.74 : 19.76 : τ \" 1 21.61 : 19.53 : τ ą 1 21.92 : 20.09 : τ \" 1 21.89 : 20.16 : τ ą 1 22.39 : 20.48 :30.77 29.76 : 29.97 : 30.31 : 30.42 : 31.05 : 30.71 :23.55 22.92 22.91 23.00 : 22.77 23.31 : 23.28 :", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Effect of step size scheduler µ in the many-toone translation of WMT-6 dataset. We have tried for four implementations of the step size scheduler.", "figure_data": "al.,", "figure_id": "tab_7", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "l pα k´1 rℓ 1 squ ℓ 1 PL qq, based on hypothesis A.1. Because t f l pα k´1 rℓ 1 squ ℓ 1 PL P r O k , and r O k Ď O k , then we can infer that:ñ L ce pD valid ℓ ; t f l pα k´1 rℓ 1 squ ℓ 1 PL q \" minWe list data statistic of TED-8-Diverse dataset in Table4. Data statistics of WMT-6 dataset is listed in Table5.", "figure_data": "B Data StatisticsLanguageNumbos (Bosnian)5,664mar (Marathi)9,840hin (Hindi)18,798mkd (Macedonian)25,335ell (Greek)134,327bul (Bulgarian)174,444fra (French)192,304kor (Korean)205,640αP r O kL ce pD valid ℓ; θpαqqñt f l pα k´1 rℓ 1 squ ℓ 1 PL \" arg min αP r O kL ce pD valid ℓ; θpαqqñ ñf l pα k´1 rℓsq \" r α ℓ rℓs αℓ rℓs \" r α ℓ rℓs", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Data statistics for the TED-8-Diverse dataset. 'num' refers to the number of sentence pairs in the training set.", "figure_data": "LanguageData SourceNumtr (Turkish)WMT175,000ro (Romanian) WMT1610,000et (Estonian)WMT1880,000zh (Chinese)WMT17400,000de (German)WMT14 1,500,000fr (French)WMT14 3,000,000", "figure_id": "tab_9", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Data statistics for the WMT dataset. 'num' refers to the number of sentence pairs in the training set.", "figure_data": "", "figure_id": "tab_10", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "34.40 28.70 13.27 16.41 22.65 7.99 20.57 τ ą 1 31.59 26.61 12.56 16.48 23.06 9.29 19.93 AUTO-PMD τ \" 1 34.96 28.79 13.81 17.9 25.22 10.65 21.89 τ ą 1 34.09 28.77 14.05 19.22 26.62 11.60 22.39 BLEU score per language pair on the WMT-6 dataset. 'Avg.' is the abbreviation of \"average values\". Bold indicates the best performance of each language pair. Languages are sorted in decreasing order from left to right according to data size. 
19.73 40.73 39.74 38.71 34.34 23.38 11.13 24.88 29.08 τ ą 1 18.79 40.1 39.00 38.11 32.89 22.55 10.36 24.98 28.35 AUTO-PMD τ \" 1 21.14 42.41 41.52 40.67 36.49 25.9 12.32 27.94 31.05 τ ą 1 20.51 42.03 40.93 40.00 36.04 25.71 12.44 28.02 30.71 O2M Temperature Sampling τ \" 1 9.06 40.26 36.10 33.63 25.67 15.56 4.90 16.82 22.75 τ ą 1 8.87 39.96 35.91 33.31 24.35 14.81 4.75 15.87 22.23 AUTO-PMD τ \" 1 9.13 40.94 36.56 34.03 27.15 15.89 5.13 17.64 23.31 τ ą 1 8.90 40.65 36.55 33.64 27.44 16.29 4.90 17.89 23.28", "figure_data": "SettingMethodSamplingfrdezhetrotrAvg.Temperature Samplingτ \" 1M2OO2MTemperature Sampling AUTO-PMDτ \" 1 τ ą 1 τ \" 1 τ ą 136.16 23.89 21.49 11.53 14.85 5.58 18.92 31.21 20.76 20.76 13.28 17.54 8.20 18.63 35.38 23.12 20.84 13.2 18.79 9.65 20.16 34.47 23.00 21.51 14.15 19.54 10.23 20.48SettingMethodSampling korfrabulellmkd hinmarbosAvg.Temperature Samplingτ \" 1M2O", "figure_id": "tab_11", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Other variants of mutual distillation designed by us. DYNAMIC-MD is the abbreviation of Dynamic Mutual Distillation. LSMD is the abbreviation of Language-Specific Mutual Distillation.", "figure_data": "MethodSamplingBLEU M2O O2MAUTO-PMD DYNAMIC-MD LSMDτ \" 1 τ \" 5 τ \" 1 τ \" 5 τ \" 1 τ \" 521.89 20.16 22.39 20.48 22.06 20.33 22.11 20.24 21.47 18.94 21.03 19.46", "figure_id": "tab_12", "figure_label": "8", "figure_type": "table" } ]
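The AUTO-PMD update illustrated in Figure 3 above moves each distillation weight in sigmoid space, which is why the same step changes weights by different magnitudes. A hedged sketch of the three candidate actions (Eqs. (9)-(11)) and the decaying step size (Eq. (12) as reconstructed above) is given below; the function and variable names are my own.

```python
# Candidate actions AUTO-PMD can apply to a per-language distillation weight alpha.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

def step_size(c, t, t_max):
    # Eq. (12): mu shrinks linearly as training step t approaches t_max.
    return c * (t_max - t) / t_max

def actions(alpha, mu):
    # Eqs. (9)-(11): keep, increase, or decrease the weight in sigmoid space, so the
    # effective change depends non-linearly on the current value of alpha.
    return {
        "keep": alpha,
        "up": sigmoid(logit(alpha) + mu),
        "down": sigmoid(logit(alpha) - mu),
    }
```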
[{"Category": "Methodological Basis", "Citation": "(Ha et al., 2016)", "Explanation": "The cited work by Ha et al. (2016) provides a foundational methodology for the development of multilingual neural machine translation (MNMT) models, which the citing paper builds upon in its research."}, {"Category": "Methodological Basis", "Citation": "(Firat et al., 2016)", "Explanation": "The cited work by Firat et al. (2016) contributes to the development of MNMT models by providing a methodology for handling multiple language pairs in a unified model."}, {"Category": "Methodological Basis", "Citation": "(Johnson et al., 2017)", "Explanation": "The cited work by Johnson et al. (2017) offers a methodology for improving the performance of low-resource languages in MNMT through transfer learning, which the citing paper builds upon in its research."}, {"Category": "Extension or Continuation", "Citation": "(Aharoni et al., 2019)", "Explanation": "The cited work by Aharoni et al. (2019) extends the research on transfer learning in MNMT by focusing on improving the performance of low-resource languages, which the citing paper further explores in its study."}, {"Category": "Extension or Continuation", "Citation": "(Dabre et al., 2020)", "Explanation": "The cited work by Dabre et al. (2020) extends the research on transfer learning in MNMT by focusing on improving the performance of low-resource languages, which the citing paper further explores in its study."}, {"Category": "Extension or Continuation", "Citation": "(Siddhant et al., 2022)", "Explanation": "The cited work by Siddhant et al. (2022) extends the research on transfer learning in MNMT by focusing on improving the performance of low-resource languages, which the citing paper further explores in its study."}, {"Category": "Extension or Continuation", "Citation": "(Fan et al., 2021)", "Explanation": "The cited work by Fan et al. (2021) extends the research on MNMT by focusing on the deployment of a single model for multiple language pairs, which the citing paper further explores in its study."}, {"Category": "Extension or Continuation", "Citation": "(Yang et al., 2021)", "Explanation": "The cited work by Yang et al. (2021) extends the research on MNMT by focusing on the deployment of a single model for multiple language pairs, which the citing paper further explores in its study."}, {"Category": "Extension or Continuation", "Citation": "(NLLB Team et al., 2022)", "Explanation": "The cited work by NLLB Team et al. (2022) extends the research on MNMT by focusing on the deployment of a single model for multiple language pairs, which the citing paper further explores in its study."}, {"Category": "Data Source", "Citation": "(Aharoni et al., 2019)", "Explanation": "The cited work by Aharoni et al. (2019) provides a data source for the study on transfer learning in MNMT, which the citing paper utilizes in its research."}, {"Category": "Data Source", "Citation": "(Dabre et al., 2020)", "Explanation": "The cited work by Dabre et al. (2020) provides a data source for the study on transfer learning in MNMT, which the citing paper utilizes in its research."}, {"Category": "Data Source", "Citation": "(Siddhant et al., 2022)", "Explanation": "The cited work by Siddhant et al. (2022) provides a data source for the study on transfer learning in MNMT, which the citing paper utilizes in its research."}, {"Category": "Data Source", "Citation": "(Fan et al., 2021)", "Explanation": "The cited work by Fan et al. 
(2021) provides a data source for the study on the deployment of a single model for multiple language pairs in MNMT, which the citing paper utilizes in its research."}, {"Category": "Data Source", "Citation": "(Yang et al., 2021)", "Explanation": "The cited work by Yang et al. (2021) provides a data source for the study on the deployment of a single model for multiple language pairs in MNMT, which the citing paper utilizes in its research."}, {"Category": "Data Source", "Citation": "(NLLB Team et al., 2022)", "Explanation": "The cited work by NLLB Team et al. (2022) provides a data source for the study on the deployment of a single model for multiple language pairs in MNMT, which the citing paper utilizes in its research."}, {"Category": "Methodological Basis", "Citation": "(Arivazhagan et al., 2019)", "Explanation": "The cited work by Arivazhagan et al. (2019) introduces the temperature-based sampling method, which the citing paper adopts to adjust the sampling distribution in a set of Pareto optimal solutions for language performance."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2020)", "Explanation": "The cited work by Wang et al. (2020) presents a method to improve the performance of low-resource languages in a set of Pareto optimal solutions, which the citing paper builds upon to enhance the representation of LRLs in the sampling distribution."}, {"Category": "Methodological Basis", "Citation": "(Wu et al., 2021)", "Explanation": "The cited work by Wu et al. (2021) discusses a set of Pareto optimal solutions in the context of language performance, which the citing paper uses to understand the trade-offs between different languages in the sampling distribution."}, {"Category": "Extension or Continuation", "Citation": "(Arivazhagan et al., 2019)", "Explanation": "The cited work by Arivazhagan et al. (2019) introduces the temperature-based sampling method, which the citing paper extends by exploring the use of hyper-parameters to further improve the performance of the method in a set of Pareto optimal solutions."}, {"Category": "Methodological Basis", "Citation": "(Kalchbrenner and Blunsom, 2013)", "Explanation": "The cited work by Kalchbrenner and Blunsom (2013) provides a foundational method for neural machine translation (NMT), which the citing paper adopts in their research on the task of translating sentences in source and target languages."}, {"Category": "Methodological Basis", "Citation": "(Sutskever et al., 2014)", "Explanation": "The cited work by Sutskever et al. (2014) contributes to the citing paper by providing a method for NMT that involves the use of a language model to generate text in a target language from a source sentence."}, {"Category": "Methodological Basis", "Citation": "(Bahdanau et al., 2015)", "Explanation": "The cited work by Bahdanau et al. (2015) offers a method for NMT that involves the use of attention mechanisms to focus on specific words or phrases in the source sentence when generating the target text."}, {"Category": "Methodological Basis", "Citation": "(Vaswani et al., 2017)", "Explanation": "The cited work by Vaswani et al. (2017) provides a method for NMT that uses a transformer architecture to process the input and output sequences in parallel, improving the efficiency and accuracy of the translation process."}, {"Category": "Methodological Basis", "Citation": "(Arivazhagan et al., 2019)", "Explanation": "The cited work by Arivazhagan et al. 
provides a method for addressing the issue of LRLs being overwhelmed by HRLs in the optimization process, which the citing paper adopts to improve the performance of the MNMT model in training multilingual datasets."}, {"Category": "Methodological Basis", "Citation": "(Hinton et al., 2015)", "Explanation": "The cited work by Hinton et al. (2015) is the origin of the concept of knowledge distillation, which the citing paper builds upon to develop the concept of mutual distillation for knowledge transfer."}, {"Category": "Extension or Continuation", "Citation": "(Zhang et al., 2018)", "Explanation": "The cited work by Zhang et al. (2018) introduces the concept of mutual distillation, which the citing paper further extends by training more than one model simultaneously to improve the knowledge transfer process."}, {"Category": "Extension or Continuation", "Citation": "(Guo et al., 2020)", "Explanation": "The cited work by Guo et al. (2020) also contributes to the development of mutual distillation by training more than one model simultaneously, with each model teaching the other during the training process."}, {"Category": "Data Source", "Citation": "(Hinton et al., 2015)", "Explanation": "The cited work by Hinton et al. (2015) provides the original data and model used in the development of the knowledge distillation concept, which the citing paper uses as a foundational element for the study of mutual distillation."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2018)", "Explanation": "The cited work by Zhang et al. (2018) also serves as a data source for the development of the concept of mutual distillation, providing the basis for the training of more than one model simultaneously in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Guo et al., 2020)", "Explanation": "The cited work by Guo et al. (2020) contributes to the data source for the study of mutual distillation by training more than one model simultaneously, with each model teaching the other during the training process."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work highlights the importance of deducing distillation thresholds automatically, which the citing paper adopts in their research to improve the performance of their system."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work by Huang et al. (2022) highlights the importance of automatically deducing distillation rights, which the citing paper adopts in their research to improve the performance of their models."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2020)", "Explanation": "The cited work by Wang et al. 
(2020) provides a method for setting a step size scheduler in the training stage, which the citing paper adopts to improve the precision of weights search and accelerate convergence to the optimal weight."}, {"Category": "Methodological Basis", "Citation": "(Tan et al., 2019)", "Explanation": "The cited work provides the MULTI-DISTILL method, which the citing paper adopts in their research on knowledge distillation strategies for multilingual translation."}, {"Category": "Data Source", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work is the source of the LSSD method used in the study conducted in the citing paper on evaluating knowledge distillation strategies in multilingual translation."}, {"Category": "Methodological Basis", "Citation": "(Vaswani et al., 2017)", "Explanation": "The cited work by Vaswani et al. (2017) provides the model architecture used in the citing paper, which is the popular Transformer implemented in fairseq."}, {"Category": "Data Source", "Citation": "(Ott et al., 2019)", "Explanation": "The cited work by Ott et al. (2019) is the fairseq toolkit used in the citing paper to implement the model architecture."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work by Huang et al. (2022) provides the model configuration, hyper-parameters, and preprocess procedure used in the citing paper for the baselines and the proposed method."}, {"Category": "Methodological Basis", "Citation": "(Papineni et al., 2002)", "Explanation": "The cited work by Papineni et al. (2002) provides the evaluation metric of BLEU score used in the citing paper to measure the performance of the models."}, {"Category": "Methodological Basis", "Citation": "(Post, 2018)", "Explanation": "The cited work by Post (2018) is the SacreBLEU toolkit used in the citing paper to calculate the BLEU score for evaluating the performance of the models."}, {"Category": "Methodological Basis", "Citation": "(Zhou et al., 2021)", "Explanation": "The cited work, \u03c7-IBR, serves as the state-of-the-art dynamic sampling method that enables the balancing training based on distributionally robust optimization, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work, LSSD, is a distillation-based training strategy that achieves SOTA performance on TED-8-Diverse and WMT-6 dataset by alleviating the convergence inconsistency problem of MNMT using self-distillation, which the citing paper uses as a method to improve their research."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work by Huang et al. (2022) is referenced to explain the concept of over-fitting in low-resource language translation, which the citing paper uses to justify the need for stronger distillation learning in the process of AUTO-PMD."}, {"Category": "Methodological Basis", "Citation": "(Arivazhagan et al., 2019)", "Explanation": "The cited work by Arivazhagan et al. provides a method for designing heuristics to address data imbalance in multilingual models, which the citing paper adopts to improve the performance of their own model."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2020)", "Explanation": "The cited work by Wang et al. 
presents a method for automatic sampling strategies to address data imbalance in multilingual models, which the citing paper builds upon to improve the performance of their model."}, {"Category": "Methodological Basis", "Citation": "(Zhou et al., 2021)", "Explanation": "The cited work by Zhou et al. introduces a method for distributional robust optimization to address data imbalance in multilingual models, which the citing paper adopts to improve the performance of their model."}, {"Category": "Data Source", "Citation": "(Wu et al., 2021)", "Explanation": "The cited work by Wu et al. provides a dataset or model that the citing paper utilizes in their research on data imbalance in multilingual models."}, {"Category": "Extension or Continuation", "Citation": "(Zhang et al., 2021)", "Explanation": "The cited work by Zhang et al. extends the research on data imbalance in multilingual models by exploring new dimensions or variables to improve the performance of the model."}, {"Category": "Methodological Basis", "Citation": "(Tan et al., 2019)", "Explanation": "The cited work by Tan et al. uses pre-defined bilingual models to teach a multilingual model via knowledge distillation, which the citing paper builds upon to improve the performance of their model."}, {"Category": "Methodological Basis", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work by Huang et al. (2022) provides a method of self-distillation to address the convergence inconsistency problem in MNMT, which the citing paper extends and applies in the context of Pareto optimization problems."}, {"Category": "Extension or Continuation", "Citation": "(Zhao et al., 2023)", "Explanation": "The cited work by Zhao et al. is mentioned as a potential future direction for the citing paper to explore applying the proposed method to the training of large language models."}, {"Category": "Methodological Basis", "Citation": "(Guo et al., 2020)", "Explanation": "The cited work by Guo et al. (2020) provides a method for training two models in parallel, which the citing paper implements to reduce the computational cost of the approach."}, {"Category": "Methodological Basis", "Citation": "(Kingma and Ba, 2015)", "Explanation": "The cited work by Kingma and Ba (2015) provides the Adam optimizer used in the citing paper for training the models with a specific set of parameters, including learning rate, beta values, and learning rate schedule."}, {"Category": "Data Source", "Citation": "(Ott et al., 2018)", "Explanation": "The cited work by Ott et al. (2018) is used to train the models with half-precision training, which is a data-specific method adopted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Szegedy et al., 2016)", "Explanation": "The cited work by Szegedy et al. (2016) provides the label smoothing technique used in the citing paper as a regularization method to improve the model performance."}, {"Category": "Methodological Basis", "Citation": "(Srivastava et al., 2014)", "Explanation": "The cited work by Srivastava et al. (2014) introduces the dropout method with specific values of 0.3 and 0.2 for the TED-8-Diverse and WMT-6 datasets, respectively, which is used in the citing paper to improve the model performance."}, {"Category": "Methodological Basis", "Citation": "(Vaswani et al., 2017)", "Explanation": "The cited work by Vaswani et al. 
(2017) provides the learning rate with a value of 0.0015 and the same learning rate schedule as used in the citing paper for training the models."}, {"Category": "Methodological Basis", "Citation": "(Arivazhagan et al., 2019)", "Explanation": "The cited work by Arivazhagan et al. provides the official implementation of temperature-based sampling in fairseq, which the citing paper adopts in their research to conduct their analysis."}, {"Category": "Data Source", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work by Huang et al. releases a code for re-implementing LSSD successfully, which the citing paper utilizes in their research to conduct their analysis."}, {"Category": "Extension or Continuation", "Citation": "(Zhou et al., 2021)", "Explanation": "The cited work by Zhou et al. introduces a new method called \u03c7-IBR, which the citing paper extends by reporting the results in the original paper to provide a fair comparison with other methods."}, {"Category": "Supporting Evidence", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work by Huang et al. (2022) highlights the importance of automatically deducing distillation weights, which is a crucial factor in the performance of UNI-PMD and BI-PMD as shown in Fig. 6 in the citing paper."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b22", "b31", "b35", "b12", "b37", "b20", "b25", "b39", "b0", "b34", "b23", "b38", "b33", "b35", "b25", "b34", "b33", "b35", "b25", "b39", "b25" ], "table_ref": [], "text": "The training of state-of-the-art ad-hoc text retrieval models (Nogueira and Cho, 2020;Santhanam et al., 2021;Zhan et al., 2021;Ren et al., 2021b,a;Gao and Callan, 2021;Zhang et al., 2022;Lu et al., 2022), which are based on transformer Language Models, relies on large-scale datasets that are sparsely annotated, typically comprising only a small number of relevance judgements for each query. 2 These labels are usually derived from sub-mitting the strongest pseudo-relevance signals in user click logs to human judges for verification. Despite potential future endeavors to extend annotation, this sparsity and the resulting issue of false negatives (Qu et al., 2021;Zhou et al., 2022) -i.e., only a minuscule fraction of all documents pertinent to a query are ever seen by users or judges and identified as relevant -will inevitably persist. To eliminate the sparsity, it would be necessary to acquire either human judgements, or perhaps expensive evaluations from Large Language Models, to verify the relevance of the entire document collection (typically tens of millions of documents) with respect to every query in the dataset, leading to an intractable Cartesian product. Consequently, it is crucial to explore optimizing the utilization of existing information, and extract richer structural relationships between documents and queries, without additional annotations.\nTo this end, in the present work we follow a twopronged approach: first, we employ the concept of reciprocal nearest neighbors (rNN) to improve the estimation of semantic similarity between embeddings of queries and documents. Two documents c i and c j are said to be k-reciprocal nearest neighbors if c j is within the k-nearest neighbors of c i , and at the same time c i is within the k-nearest neighbors of c j . Second, we attempt to enhance the queryspecific ranking context used to train dense retrievers, going beyond the notion of using mined candidates merely as negatives for contrastive learning. By ranking context we mean a set of documents that are in meaningful relationship to the query and are jointly evaluated with respect to their relevance to the query (Ai et al., 2018;Zerveas et al., 2022). Specifically, we use the similarity of ground-truth documents to candidates in the same ranking context as the query as evidence to guide the model's predicted relevance probability distribution over candidates.\nDense retrieval, the state-of-the-art approach for single-stage ad-hoc retrieval, is premised on modeling relevance between a query and a document as the geometric proximity (e.g., dot-product or cosine similarity) between their respective embeddings in the common representation vector space. Top retrieval results are therefore the documents whose embeddings are the nearest neighbors of the query embedding. However, this modeling assumption may be sub-optimal: previous work in the field of image re-identification has shown that, while geometric similarity can easily differentiate between candidate embeddings in near proximity from a query embedding, the differences between relevance scores of candidate embeddings become vanishingly small as distance from the query increases (Qin et al., 2011). 
It was found that the degree of overlap between sets of reciprocal nearest neighbors can be used to compute an improved measure of similarity between query and candidate embeddings (Zhong et al., 2017). Moreover, geometric similarity is used in mining \"hard\" negatives, which have been consistently found to improve performance compared to random in-batch negatives (Xiong et al., 2020;Zhan et al., 2021;Qu et al., 2021;Zerveas et al., 2022). Hard negatives are typically the top-ranked candidates retrieved by a dense retriever (nearest neighbors to a query embedding) that are not explicitly annotated as relevant in the dataset.\nOn the one hand, the effectiveness of mined negatives is limited by how effectively this dense retriever can already embed queries and relevant documents in close proximity within the shared representation space, although the periodical or dynamic retrieval of negatives during training can partially alleviate this problem (Xiong et al., 2020;Zhan et al., 2021). On the other hand, when the retriever used to mine hard negatives indeed succeeds in retrieving candidates that are semantically relevant to the query, these are often not marked as positives due to the sparsity of annotation and are thus spuriously used as negatives for contrastive learning (false negatives)3 , confounding the training signal (Qu et al., 2021;Zhou et al., 2022).\nFor this reason, in this work we investigate to what degree these issues can be mitigated through the use of reciprocal nearest neighbors, essentially extracting additional relationship information between queries and documents beyond flat geometric distances, such as the local degree of node connectivity. Furthermore, unlike all existing dense retrieval methods, instead of using candidates exclusively as negatives, we propose using their estimated similarity to the ground-truth document(s) as evidence for label smoothing; we thus redistribute probability weight in the target score distribution from the ground truth to a larger number of likely false negatives.\nFinally, our work places a strong emphasis on computational efficiency: label smoothing can be performed entirely offline on CPUs and can be trivially parallelized, while no latency is introduced during training and our models can be trained (e.g., on MS MARCO) within hours, using a single GPU with a batch size of 32. Reranking based on reciprocal nearest neighbors, when used, introduces a few milliseconds latency per query on a CPU.\nBy contrast, the current state-of-the-art dense retrieval methods (e.g. (Qu et al., 2021;Ren et al., 2021b)) depend on the existence of better performing, but computationally demanding re-ranking models such as cross-encoders, which are typically run offline on several GPUs with huge batch sizes and are used either for pseudo-labeling additional training data, for discarding negatives which are likely unlabeled positives (i.e., false negatives), or directly for distillation through a teacher-student training scheme. 
However, besides the very high computational cost of such pipelines, the existence of a model that is more powerful than the retrieval model we wish to train is a very restrictive constraint, and cannot be taken for granted in many practical settings.\nOur main contributions are: (1) We propose evidence-based label smoothing, a novel method which mitigates the problem of false negatives by leveraging the similarity of candidate documents within the ranking context of a query to the annotated ground truth in order to compute soft relevance labels. Different from existing methods like teacher-student distillation or pseudo-labeling, our approach does not rely on the existence of more powerful retrieval methods.\n(2) We explore the applicability of the concept of reciprocal nearest neighbors in improving the similarity metric between query and document embeddings in the novel setting of ad-hoc text retrieval.\n(3) Through extensive experiments on two different large-scale ad-hoc retrieval datasets, we demonstrate that the concept of reciprocal nearest neighbors can indeed enhance the ranking context in a computationally efficient way, both when reranking candidates at inference time, as well as when applied for evidence-based label smoothing intended for training." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b10", "b18", "b34", "b6", "b1", "b0", "b24", "b7", "b8", "b17", "b25", "b9", "b15", "b21", "b23", "b38", "b22", "b25", "b39" ], "table_ref": [], "text": "Our proposed label smoothing, which encourages the model to assign higher relevance scores to documents intimately related to the ground truth, conceptually finds support in prior work that proposed local relevance score regularization (Diaz, 2007), adjusting retrieval scores to respect local inter-document consistency. Despite the entirely different methodology, both methods are premised on the intuition that documents lying closely together in the representation vector space should have similar scores; this in turn is related to the cluster hypothesis, which states that closely related documents (and thus proximal in terms of vector representations) tend to be relevant to the same request (Jardine and van Rijsbergen, 1971). Zerveas et al., 2022 recently argued that jointly scoring a large number of candidate documents (positives and negatives) closely related to the same query within a list-wise loss constitutes a queryspecific ranking context that benefits the assessment of relevance of each individual candidate doc-ument with respect to the query. Thus, they extended well-established insights and empirical findings from Learning-to-Rank literature (Cao et al., 2007;Ai et al., 2019Ai et al., , 2018) ) to the realm of dense retrieval through transformer-based Language Models. While in-depth annotation of candidate documents (i.e., hundreds of relevance judgements per query) explicitly provides a rich context for each query in Learning-to-Rank datasets (Qin et al., 2010;Chapelle and Chang, 2010;Dato et al., 2016), such information is not available in the sparsely annotated, large-scale datasets used to train dense retrieval models. The relationship exploited thus far to \"build a context\" (practically, this means mining hard negatives), is simply that of geometric proximity between the embeddings of a query and candidate documents.\nAddressing the problem of sparse annotation, several works have utilized the relevance estimates from supervised (e.g. 
Hofstätter et al., 2021;Qu et al., 2021;Ren et al., 2021b) or unsupervised (e.g. lexical: Dehghani et al., 2017;Haddad and Ghosh, 2019) retrieval methods or other dataset-specific heuristics (e.g. bibliography citations: Moro and Valgimigli, 2021) to derive soft labels for documents used to train a model, e.g., in a teacherstudent distillation scheme. In this work, we instead shift the perspective from assigning labels based on similarity with respect to the query, to similarity with respect to the ground-truth document(s), but within a query-specific ranking context. We furthermore leverage the concept of reciprocal nearest neighbors, introduced as a reranking method for image re-identification (Qin et al., 2011;Zhong et al., 2017), to improve the similarity estimate.\nFalse negatives have been identified as a significant challenge by prior work, which has employed powerful but computationally expensive cross-encoders (Nogueira and Cho, 2020) to discard documents that receive a high similarity score to the query and are thus likely relevant from the pool of hard negatives (Qu et al., 2021;Ren et al., 2021b). However, discarding top-ranking hard negatives also discards potentially useful information for training.\nRecently, Zhou et al. (2022) tackled the problem of false negatives through selective sampling of negatives around the rank of the ground-truth document, avoiding candidates that are ranked either much higher than the ground truth (probable false negatives) or much lower (too easy negatives). This approach differs from ours in the perspective of similarity (query-centric vs ground-truth-centric), and in the fact that information is again discarded from the context, as only a small number of negatives is sampled around the positive. Additionally, a query latency of up to 650 ms is added during training. Ren et al. (2021a) leverage the similarity of candidate documents to the ground truth document (positive), but in a different way and to a different end compared to our work: all documents in the batch (\"in-batch negatives\") as well as retrieved candidates are used as negatives in an InfoNCE loss term, which penalizes the model when it assigns a low similarity score between a single positive and the query compared to the similarity score it assigns to pairs of this positive with all other candidates. Thus, it requires that the ground truth lies closer to the query than other candidates, but the detrimental effect of false negatives on the training signal fully persists.\nBy contrast, our method jointly takes into account all positives and other candidates in the ranking context, and through a KL-divergence loss term requires that the predicted relevance of the query with respect to all documents in the ranking context has a similar probability distribution to the target distribution, i.e., the distribution of similarity between all ground truth positives and all candidate documents in the context. False negatives are thus highly likely to receive a non-zero probability in the target distribution, and the penalty when assigning a non-zero relevance score to false negatives is lower." 
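A minimal sketch (assumptions, not the paper's code) of the ground-truth-centric training signal just described: the model's predicted relevance distribution over the query's ranking context is pulled toward a soft target distribution, so likely false negatives receive non-zero target probability instead of being treated as hard zeros.

```python
# List-wise KL-divergence between a soft target distribution over candidates and
# the model's predicted relevance distribution for one query.
import torch
import torch.nn.functional as F

def listwise_kl_loss(pred_scores, target_scores, temperature=1.0):
    """pred_scores, target_scores: (num_candidates,) raw relevance scores."""
    log_pred = F.log_softmax(pred_scores / temperature, dim=-1)
    target = F.softmax(target_scores, dim=-1)
    # KL(target || pred); the small epsilon avoids log(0) for near-zero targets.
    return torch.sum(target * (torch.log(target + 1e-12) - log_pred))
```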
}, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Similarity metric based on Reciprocal Nearest Neighbors", "publication_ref": [ "b38", "b23", "b38" ], "table_ref": [], "text": "Nearest Neighbors are conventionally retrieved based on the geometric similarity (here, inner product) between embedding vectors of a query q and candidate document c i : s(q, c i ) = ⟨x q , x c i ⟩, with x q = m(q) and x c i = m(c i ) embeddings obtained by a trained retrieval model m. We can additionally define the Jaccard similarity s J that measures the overlap between the sets of reciprocal neighbors of q and c i . We provide a detailed derivation of s J in Appendix A.1. Instead of the pure Jaccard similarity s J , we use a linear mixture with the geometric similarity s controlled by hyperparameter λ ∈ [0, 1]:\ns * (q, c i ) = λ s(q, c i ) + (1 -λ) s J (q, c i ), (1)\nwhich we found to perform better both for reranking (as in Zhong et al., 2017), as well as for label smoothing.\nImportantly, unlike prior work (Qin et al., 2011;Zhong et al., 2017), which considered the entire gallery (collection) of images as a reranking context for each probe, we only use as a context a limited number of candidates previously retrieved for each query. This is done both for computational tractability, as well as to constrain the context to be query-specific when computing the similarity of documents to the ground truth; documents can otherwise be similar to each other with respect to many different topics unrelated to the query. We empirically validate this choice in Section 5.1." }, { "figure_ref": [], "heading": "Evidence-based label smoothing", "publication_ref": [ "b14", "b2" ], "table_ref": [], "text": "Uniform label smoothing is a well-established technique (Goodfellow et al., 2016) that is used to mitigate the effects of label noise and improve score calibration, and was recently also employed for contrastive learning (Alayrac et al., 2022). It involves removing a small proportion ϵ ∈ [0, 1] of the probability mass corresponding to the groundtruth class and uniformly redistributing it among the rest of the classes, thus converting, e.g., a 1-hot vector y = [1, 0, . . . , 0] ∈ R N to:\ny * = [1-ϵ, ϵ/(N -1), . . . , ϵ/(N -1)] ∈ R N (2)\nNevertheless, naively distributing the probability mass ϵ uniformly among all candidates, as in Eq. ( 2), would result in true negatives predominantly receiving a portion of it, apart from the small number of false negatives4 .\nFor this reason, we instead propose correcting the sparse annotation vectors by selectively distributing relevance probability among negatives that are highly likely to be positive, or at least are ambiguous with respect to their relevance to the query. The proportion of probability mass each candidate shall receive depends on its degree of similarity to the annotated ground-truth document, which can be quantified by the Jaccard distance of Eq. ( 11), if we wish to exclusively consider reciprocal nearest neighbors, or the mixed geometric-Jaccard distance of Eq. ( 1), which allows any candidate close to the ground-truth to be considered." }, { "figure_ref": [], "heading": "Algorithm 1 Evidence-based label smoothing", "publication_ref": [], "table_ref": [], "text": "Require: Dense retrieval model m, set of queries Q, document collection C, set of all ground-truth label documents per query L(q), ∀q ∈ Q 1: Compute embedding vectors xq = m(q), ∀q ∈ Q and xc i = m(ci), ∀ci ∈ C. 
2: for each query q do 3:
Retrieve top-N Nearest Neighbors per query based on geometric similarity: s(q, ci) = ⟨xq, xci⟩ for all ci ∈ C." }, { "figure_ref": [], "heading": "4:", "publication_ref": [], "table_ref": [], "text": "for each candidate ci, i = 1, . . . , N do 5:
Compute relevance score r′′ as the mixed geometric and reciprocal-NN Jaccard similarity sJ with respect to all ground-truth documents l:
r′′(q, ci) = (1/|L(q)|) Σ_{l∈L(q)} s*(l, ci), where s*(l, ci) = λ · s(l, ci) + (1 − λ) · sJ(l, ci), 0 < λ < 1 6:
Transform scores by applying normalization function fn, boost factor b and cut-off threshold nmax:
r′(q, ci) = b · fn(r′′(q, ci)) if ci ∈ L(q); −∞ if i > nmax; fn(r′′(q, ci)) otherwise." }, { "figure_ref": [], "heading": "7:", "publication_ref": [], "table_ref": [], "text": "end for 8: end for 9: Fine-tune model m with target distribution: r(q) = softmax(r′(q)), and loss function:
L(r(q), ŝ(q)) = DKL(r(q) || ŝ(q)),
where ŝ(q) = softmax(ŝ′(q)/T) is the model-predicted score distribution, with T a learnable temperature parameter.
Since the value range of similarity scores that each model outputs is effectively arbitrary, before applying a softmax to obtain a distribution over candidates, we (1) perform transformations (e.g., max-min or std-based, see Appendix A.5.1) and multiply the values of the original ground-truth documents by a factor b > 1 to normalize the range and increase the contrast between the top and trailing candidates, and (2) we limit the number of candidates that receive a probability above 0 to the top nmax candidates in terms of their similarity to the ground-truth document(s). We found that these transformations primarily depend on the dataset rather than the model, and that training without limiting nmax leads to overly diffuse score distributions. In case more than one ground-truth document exists for the same query, the similarity of each candidate is the mean similarity over all ground-truth documents (see Algorithm 1)." }, { "figure_ref": [ "fig_6" ], "heading": "Computational efficiency", "publication_ref": [], "table_ref": [], "text": "Computing rNN similarity involves computing pairwise similarities among N + 1 ranking context elements (including the query), and reranking requires sorting the N candidates by their final similarity. The computational cost is thus O(N^2) and O(N log N), respectively; if we are only interested in the top-k reranked candidates, the latter can be reduced to O(N log k). We find (Sections 5.1, A.4) that a small subset of the full ranking context with size Nr < N is generally sufficient when computing rNN-based similarities. For MS MARCO, Nr = 60 and the delay per query when reranking on a single CPU and core (AMD EPYC 7532, 2400 MHz) is about 5 ms (Fig. 7).
Evidence-based label smoothing imposes no cost during training or inference; it only requires offline computation of rNN-similarities for each query context Nr and sorting/top-k as above, followed by simple vectorized transformations, e.g. max-min normalization. Furthermore, all computations above can be trivially ('embarrassingly') parallelized in a multi-CPU/core setup." }, { "figure_ref": [], "heading": "Experimental setting", "publication_ref": [ "b3", "b34", "b34", "b39", "b36", "b17", "b12" ], "table_ref": [], "text": "Datasets.
To evaluate the effectiveness of our methods, we use two large-scale, publicly available ad-hoc retrieval collections: the MS MARCO Passage Retrieval dataset (Bajaj et al., 2018), and TripClick, a health document retrieval dataset (Rekabsaz et al., 2021b). Each has distinct characteristics and represents one of the two realistic data settings practically available for training dense retrieval models (see details in Appendix A.2, A.3).
Baselines. To compute the similarity metric based on reciprocal nearest neighbors, and thus the scores used to either rerank candidates at inference time or calculate the smoothed labels for training, we only need access to the encoder extracting the document and query embeddings. The methods we propose are therefore applicable in principle to any dual-encoder dense retriever. However, we eschew training pipelines based on cross-encoders, both to ensure computational efficiency, as well as to eliminate the dependence on more powerful retrieval methods. Instead, we choose CODER (Zerveas et al., 2022), a fine-tuning framework that enhances the performance of dense retrievers used as \"base models\" through a large ranking context of query-specific candidate documents and a list-wise loss: it serves as a natural framework to evaluate evidence-based label smoothing, because it allows us to directly utilize a large number of soft labels per query, while being very light-weight computationally.
Following Zerveas et al. (2022), we select the following base models subjected to CODER fine-tuning:
1. RepBERT (Zhan et al., 2020), a BERT-based model with a typical dual encoder architecture which underpins all state-of-the-art dense retrieval methods, trained using a triplet Max-Margin loss.
2. TAS-B (Hofstätter et al., 2021), one of the top-performing dense retrieval methods on the MS MARCO / TREC-DL 2019, 2020 datasets, which has been optimized with respect to their training process, involving a sophisticated selection of negative documents through clustering of topically related queries.
3. CoCondenser (Gao and Callan, 2021), the state-of-the-art dense retrieval model, excluding those which make use of heavyweight cross-encoder (query-document term interaction) teacher models or additional pseudo-labeled data samples; it relies on corpus-specific, self-supervised pre-training through a special architecture and contrastive loss component." }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_6" ], "heading": "Inference-time reranking with reciprocal nearest neighbors", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "
We furthermore observe that ranking effectiveness initially improves when increasing the size of the ranking context (i.e., the number of candidates considered for reranking), which is expected, because the probability of including a remote ground-truth document in the context increases. However, as this size further increases, ranking effectiveness saturates, often peaking at a context size of a few tens of candidates (Figures 3, 4, 6, 9). We hypothesize that this happens because, as we keep adding negatives to the context, the chance that they disrupt the reciprocal neighborhood relationship between query and positive document(s) increases (see Figure 1).

We therefore conclude that we may use a relatively small number N of context candidates for computing reciprocal nearest neighbor similarities, which is convenient because computational complexity scales as O(N^2). In MS MARCO, a context of 60 candidates corresponds to peak effectiveness for CODER(TAS-B) and introduces a CPU processing delay of only about 5 milliseconds per query (Figure 7). We expect the optimal context size to depend on the average rank that ground-truth documents tend to receive, and for models of similar ranking effectiveness, this would primarily be determined by the characteristics of the dataset. Indeed, we find that the hyperparameters involved in computing rNN-based similarity (e.g., k, λ, τ, f_w), as well as the context size N, predominantly depend on the dataset, and to a much lesser extent on the dense retriever: hyperparameters optimized for CODER(TAS-B) worked very well for TAS-B, CoCondenser and CODER(CoCondenser) on MS MARCO, but very poorly when transferred to TripClick.

A more detailed description and discussion of reranking experiments is provided in Appendix A.4.
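The reranking procedure itself can be sketched as follows; this is a simplified, self-contained illustration that mixes raw geometric similarity with a plain reciprocal-NN Jaccard similarity (omitting the extended sets, local expansion, and weighting of Appendix A.1), so the function names, parameter defaults, and toy embeddings are assumptions for illustration only.

```python
import numpy as np

def knn_sets(sim, k):
    """Row-wise top-k neighbor index sets from an (M, M) similarity matrix."""
    return [set(np.argsort(-row)[:k]) for row in sim]

def rnn_rerank(q_emb, cand_embs, k=20, lam=0.5):
    """Rerank candidates by mixing geometric similarity with a simplified
    reciprocal-NN Jaccard similarity (no extended sets, no local expansion)."""
    X = np.vstack([q_emb[None, :], cand_embs])        # ranking context: query + N candidates
    sim = X @ X.T                                     # inner-product (geometric) similarity
    nn = knn_sets(sim, k + 1)                         # +1 because each element is its own neighbor

    # reciprocal neighbor sets R(i, k): keep j only if i and j are in each other's top-k
    R = [{j for j in nn[i] if i in nn[j]} for i in range(len(nn))]

    geo = sim[0, 1:]                                  # s(q, c_i); in practice these may need
                                                      # rescaling to be comparable with s_J
    jac = np.array([len(R[0] & R[i + 1]) / max(len(R[0] | R[i + 1]), 1)
                    for i in range(cand_embs.shape[0])])
    final = lam * geo + (1.0 - lam) * jac             # mixed similarity s*(q, c_i)
    return np.argsort(-final), final

# toy usage with random embeddings
rng = np.random.default_rng(0)
query = rng.normal(size=16)
candidates = rng.normal(size=(100, 16))
order, scores = rnn_rerank(query, candidates, k=10, lam=0.5)
print(order[:5], scores[order[:5]].round(3))
```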
Evidence-based label smoothing

In order to achieve the best possible results using evidence-based label smoothing, one should ideally optimize the hyperparameters related to rNN-based similarity for the specific task of training a retrieval model with recomputed soft labels. However, to avoid repeatedly computing soft labels for the training set, we simply chose an rNN configuration that was optimized for reranking a large pool of candidates (N = 1000) in the MS MARCO collection, i.e., the same one used in the previous section. Although this configuration may not be optimal for our specific task (e.g., small changes in score values might be sufficient for reranking candidates but ineffective as soft training labels), we expect that it can still provide a reliable lower bound of optimal performance.

Figure 2 shows how the ranking performance of the TAS-B base model (left-most point, step 0) on the validation set evolves throughout fine-tuning through CODER. The red curve corresponds to additionally using evidence-based label smoothing computed with reciprocal-NN-based similarity (rNN-related hyperparameters are the same as in Section 5.1), whereas for the blue curve the smooth label distribution is computed using pure geometric similarity.

We observe the following seemingly paradoxical phenomenon: compared to plain CODER training, label smoothing significantly reduces the validation loss (computed with the original labels, top panel), indicating that the ground-truth passages are now receiving proportionally higher scores in the estimated relevance distribution, but the retrieval metric (bottom panel) does not register an improvement.

In fact, this phenomenon may be fully explained through the presence of false negatives: through the smooth target label distribution, the model learns to assign high relevance scores to a larger number of documents (a diffuse distribution). It therefore likely places a proportionally higher relevance weight on the ground-truth document compared to plain CODER, essentially improving the relevance estimate for the ground truth, but at the same time it distributes relevance weight over a higher number of candidates, such that the ground truth ends up being ranked slightly lower (see Figure 11).

The crucial question, therefore, is whether the candidates now receiving a higher relevance score are actually relevant. Since the MS MARCO dev dataset almost always contains only a single positive-labeled passage per query, it is fundamentally ill-suited to measuring ranking effectiveness improvements by a training scheme that primarily promotes a diffuse relevance distribution over several candidates.

For this reason, we must rely on datasets containing more judgements per query, such as the TREC DL 2019 and 2020 datasets: Table 3 shows that evidence-based label smoothing using a similarity based on reciprocal nearest neighbors can significantly improve the performance of each dense retriever even beyond the benefit of the plain CODER fine-tuning framework. Furthermore, using an rNN-based Jaccard similarity as a metric for computing the soft labels yields significantly better performance than using geometric similarity, and the best results are achieved when using a linear combination of the two metrics.

As TripClick also contains several (pseudo-relevance) labels per query, we additionally evaluate the MS MARCO-trained models zero-shot on TripClick (Tables 7 and 8). We thus find that in sparsely annotated datasets like MS MARCO, validation loss might be a better predictor of model generalization than IR metrics such as MRR, and that evaluation on datasets with higher annotation depth (such as TREC DL or TripClick), potentially even in a zero-shot setting, might better reflect the ranking effectiveness of models.

A critical difference of evidence-based label smoothing from distillation is that soft document labels are computed based on their similarity to the ground truth instead of the query. To demonstrate the importance of this change of perspective, we show how CODER fine-tuning performs when using soft labels coming from geometric similarity with respect to the query, as in distillation (Figure 2, purple curves): even when applying the same transformations to the scores as in the case of evidence-based label smoothing, the model's performance rapidly degrades instead of improving.
This is expected, because distillation only works when a superior model is available; training cannot be bootstrapped from the scores of the model itself.\nWe also observe that, unlike evidence-based label smoothing, uniform label smoothing fails to noticeably improve performance compared to plain CODER fine-tuning (Figure 2, Table 3), even when we ensure that the exact same probability weight as in the case of evidence-based smoothing is distributed from the ground-truth positive(s) among the rest of the candidates.\nFinally, we examine how EB label smoothing performs when training in an important alternative setting, TripClick: a dataset with significantly more relevance labels per query, that come from pseudo-relevance feedback without human judgements. Unlike above, here we investigate the joint optimization of rNN-related parameters together with training-specific parameters (e.g., learning rate and linear warm-up steps), instead of using the same rNN-related hyperparameters for label smoothing as for reranking. To allow this, we train on the union of the HEAD and TORSO training subsets (avg. 42 and 9 annotations per query, respectively), and omit the TAIL subset, which consists of a large number of rare queries (each with only 3 annotations on average). We use HEAD Val as a validation set, and evaluate on HEAD Test.\nTable 4 and Figure 10 show that training with mixed geometric/rNN-based smooth labels significantly improves performance also in this dataset setting compared to plain CODER training (+0.010 nDCG@10). To ensure that any improvement cannot be attributed to better hyperparameters found during the joint optimization described above, we also apply the same hyperparameters to plain CODER training (denoted \"hyperparam.\" in the table). We observe similar improvements on TORSO Test and TORSO Val (Appendix Table 9)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose evidence-based label smoothing to address sparse annotation in dense retrieval datasets.\nTo mitigate penalizing the model in case of false negatives during training, we compute the target relevance distribution by assigning non-zero relevance probabilities to candidates most similar to the ground truth. To estimate similarity we leverage reciprocal nearest neighbors, which allows considering local connectivity in the shared representation space, and can independently be used for reranking. Extensive experiments on two large-scale retrieval datasets and three dense retrieval models demonstrate that our method can effectively improve ranking, while being computationally efficient and foregoing the use of resource-heavy cross-encoders. Finally, we find that evaluating on sparsely annotated datasets like MS MARCO dev may systematically underestimate models with less sharp (i.e. more diffuse) relevance score distributions." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b33" ], "table_ref": [], "text": "We believe that in principle, the methods we propose are applicable to any dual-encoder dense retriever: computing the similarity metric based on reciprocal nearest neighbors only requires access to the encoder extracting the document and query embeddings.\nHowever, we note that the reason we were able to compute the soft labels for evidence-based label smoothing completely offline was that we utilized CODER as a fine-tuning framework: CODER only fine-tunes the query encoder, using fixed document representations. 
Using evidence-based label smoothing in a training method with learnable document embeddings means that the rNN-based similarity has to be computed dynamically at each training step (or periodically, every few training steps), because the mutual distances/similarities of the documents will change during training, albeit slowly. Similarly, every time candidates/negatives are retrieved dynamically (periodically, as in Xiong et al., 2020, or at each step, as in Zhan et al., 2021), the rNN-based similarity has to be recomputed among this new set. Nevertheless, as we discuss in the paper, we only need to use a context of tens or at most a couple of hundred candidates in order to compute the rNN-based similarity most effectively. Even in these cases, this would therefore introduce at most about a hundred milliseconds of training delay per batch, while inference would remain unaffected.

Ethics Statement

By being computationally efficient and foregoing the use of resource-heavy cross-encoders in its pipeline, our method allows top-performing dense retrieval models to be fine-tuned on MS MARCO within 7 hours on a single GPU. We therefore believe that it is well aligned with the goal of training models in an environmentally sustainable way, the importance of which has recently been acknowledged by the scientific community in Information Retrieval and more broadly (Scells et al., 2022).

On the other hand, the transformer-based Information Retrieval models examined in our study may intrinsically exhibit societal biases and stereotypes. As prior research has discussed (Gezici et al., 2021; Rekabsaz et al., 2021a; Rekabsaz and Schedl, 2020; Bigdeli et al., 2022; Krieg et al., 2022; Bigdeli et al., 2021; Fabris et al., 2020), these biases stem from the latent biases acquired by transformer-based language models throughout their pre-training, as well as from the fine-tuning process on IR collections. Consequently, the practical use of these models might result in prejudiced treatment of various social groups (e.g., as manifested in their representation or ranking in retrieval result lists). We therefore firmly encourage a mindful and accountable application of these models.

A Appendix

A.1 Jaccard similarity based on Reciprocal Nearest Neighbors

Let C be a collection of documents, including the query used for search, and let NN(q, k) denote the set of k-nearest neighbors of a probe q ∈ C; besides the query, q here can also be a document or any other element that can be embedded in the common representation space. If d(q, c_i) ≡ d_g(x_q, x_{c_i}), c_i ∈ C, is a metric (distance) in the vector space within which the embeddings of the query x_q and of the documents x_{c_i} reside, we can formally write:

    NN(q, k) = \{ c_i \mid d_g(x_{c_i}, x_q) \le d_g(x_{c_k}, x_q), \; \forall i \in \mathbb{N} : 1 \le i \le |C| \},    (3)

where |·| denotes the cardinality of a set, and document c_k is the k-nearest neighbor of the query based on d, i.e., the k-th element in the list of all documents in C sorted by distance d from the query in ascending order (when a measure of similarity s is used instead of a distance d, the condition equivalently becomes s(x_{c_i}, x_q) ≥ s(x_{c_k}, x_q), and the k-nearest neighbors are the first k documents sorted by s in descending order).
Naturally, |NN(q, k)| = k.

The set of k-reciprocal nearest neighbors can then be defined as:

    R(q, k) = \{ c_i \mid c_i \in NN(q, k) \,\wedge\, q \in NN(c_i, k) \},    (4)

i.e., to be considered a k-reciprocal neighbor, a document must be included in the k-nearest neighbors of the query, but at the same time the query must also be included in the k-nearest neighbors of that document. This stricter condition results in a stronger similarity relationship than simple nearest neighbors, and |R(q, k)| ≤ k.

Since using the above definition as-is can be overly restrictive, prior work has proposed applying it iteratively in order to construct an extended set of documents highly related to the query that would have otherwise been excluded. Thus, Zhong et al. (2017) define the extended set:

    R^*(q, k) := R(q, k) \cup R(c_i, \tau k), \quad \text{s.t.} \;\; |R(q, k) \cap R(c_i, \tau k)| \ge \tfrac{2}{3} |R(c_i, \tau k)|, \quad \forall c_i \in R(q, k).    (5)

Effectively, we examine the set of τk-nearest reciprocal neighbors of each reciprocal neighbor of q (where τ ∈ [0, 1] is a real parameter), and provided that it already has a substantial overlap with the original set of reciprocal neighbors of q, we add it to the extended set. The underlying assumption is that if a document is closely related to a set of documents that are closely related to the query, then it is most likely itself related to the query, even if there is no direct connection in terms of geometric proximity. Thus, one can improve recall at the possible expense of precision.

Although using this new set of neighbors as the new set of candidates and sorting them by their distance d can form the basis of a retrieval method, Zhong et al. (2017) additionally proceed to define a new distance that takes this set into account, which is used alongside d. Specifically, they use the Jaccard distance between the (extended) reciprocal neighbor sets of a query q and documents c_i:

    d_J(q, c_i) = 1 - \frac{|R^*(q, k) \cap R^*(c_i, k)|}{|R^*(q, k) \cup R^*(c_i, k)|}.    (6)

This distance quantifies the similarity between two elements (here, q and c_i) as a measure of overlap between sets of neighbors robustly related to each of them.

To reduce the computational complexity of computing the Jaccard distance, which relies on the time-consuming, CPU-bound operations of finding the intersection and union of sets, one may instead carry out the computation with algebraic operations, by defining for each element q ∈ C a sparse vector of dimensionality |C|, whose non-zero dimensions denote graph connectivity to other documents. Instead of using binary vectors, one may assign to each neighbor c_i a weight that depends on its geometric distance to the probe q. Thus, following Zhong et al. (2017), we define the elements of the reciprocal connectivity vectors v'_q ∈ \mathbb{R}^{|C|} as follows:

    v'_{q, c_i} = \begin{cases} f_w\big(d(q, c_i)\big) & \text{if } c_i \in R^*(q, k) \\ 0 & \text{otherwise} \end{cases}    (7)
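For concreteness, the following is a compact, set-based sketch of Eqs. (4)-(6), operating on a precomputed similarity matrix over the ranking context; the helper names and toy usage are illustrative assumptions, and the weighted, vectorized formulation of Eqs. (7)-(11) is the form relied on in practice.

```python
import numpy as np

def nn_set(sim, i, k):
    """NN(i, k): indices of the k nearest neighbors of element i (full similarity matrix)."""
    return set(np.argsort(-sim[i])[:k])

def reciprocal_set(sim, i, k):
    """R(i, k) of Eq. (4): j is kept only if i and j appear in each other's top-k."""
    return {j for j in nn_set(sim, i, k) if i in nn_set(sim, j, k)}

def extended_reciprocal_set(sim, q, k, tau=0.5):
    """R*(q, k) of Eq. (5): grow R(q, k) with the tau*k-reciprocal set of each member,
    whenever that set overlaps R(q, k) by at least 2/3 of its own size."""
    R_q = reciprocal_set(sim, q, k)
    R_star = set(R_q)
    for c in R_q:
        R_c = reciprocal_set(sim, c, max(int(round(tau * k)), 1))
        if len(R_q & R_c) >= (2.0 / 3.0) * len(R_c):
            R_star |= R_c
    return R_star

def jaccard_distance(sim, q, i, k):
    """d_J of Eq. (6), computed directly on the extended sets."""
    A = extended_reciprocal_set(sim, q, k)
    B = extended_reciprocal_set(sim, i, k)
    return 1.0 - len(A & B) / max(len(A | B), 1)

# toy usage on a random inner-product similarity matrix over a small ranking context
rng = np.random.default_rng(0)
E = rng.normal(size=(50, 8))
S = E @ E.T
print(sorted(extended_reciprocal_set(S, q=0, k=10)))
print(round(jaccard_distance(S, q=0, i=3, k=10), 3))
```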
While Zhong et al. (2017) exclusively use f_w(x) = \exp(-x), one can use any monotonically decreasing function, and we found that f_w(x) = -x in fact performs better in our experiments.

Finally, instead of directly using the sparse vectors above, which would yield a discretized similarity metric, we perform a local expansion, mixing each one of them (including the query) with its k_exp neighboring vectors (again including the query, if it is among the neighbors):

    v_{c_i} = \frac{1}{k_{\exp}} \sum_{j=1}^{k_{\exp}} v'_{c_j}, \quad \forall c_j \in NN(c_i, k_{\exp}).    (8)

It is possible to use the element-wise min and max operators on the expanded sparse vectors from Eq. (8) to compute the number of candidates in the intersection and union sets of Eq. (6), respectively, as:

    |R^*(q, k) \cap R^*(c_i, k)| = \| \min(v_q, v_{c_i}) \|_1,    (9)
    |R^*(q, k) \cup R^*(c_i, k)| = \| \max(v_q, v_{c_i}) \|_1,    (10)

and thus the Jaccard distance of Eq. (6) can be written as:

    d_J(q, c_i) = 1 - \frac{\sum_{j=1}^{|C|} \min(v_{q, c_j}, v_{c_i, c_j})}{\sum_{j=1}^{|C|} \max(v_{q, c_j}, v_{c_i, c_j})}.    (11)

Finally, we note that instead of the pure Jaccard distance d_J, we use as the final distance d^* a linear mixture of the geometric distance d and d_J, with a hyperparameter λ ∈ [0, 1]:

    d^*(q, c_i) = \lambda \, d(q, c_i) + (1 - \lambda) \, d_J(q, c_i),    (12)

which we found to perform better both for reranking (as in Zhong et al., 2017) and for label smoothing.

A.2 Data

A.2.1 MS MARCO and TREC Deep Learning

Following standard practice in the related contemporary literature, we use the MS MARCO dataset (Bajaj et al., 2018), which has been sourced from open-domain logs of the Bing search engine, for training and evaluating our models. The MS MARCO passage collection contains about 8.8 million passages, and the training set contains about 503k queries labeled with one or (rarely) more relevant passages (1.06 passages per query, on average), on a single level of relevance.

For validation of the trained models we use a subset of 10k samples from "MS MARCO dev", which is a set containing about 56k labeled queries, and refer to it as "MS MARCO dev 10k". As a test set we use a different, officially designated subset of "MS MARCO dev", originally called "MS MARCO dev.small", which contains 6980 queries. In the literature and on leaderboards it is often misleadingly referred to as "MS MARCO dev".

Because of the very limited annotation depth (sparsity) in the above evaluation sets, we also evaluate on the TREC Deep Learning track 2019 and 2020 test sets, containing 43 and 54 queries respectively, labeled to an average "depth" of more than 210 document judgements per query, and using 4 levels of relevance: "Not Relevant" (0), "Related" (1), "Highly Relevant" (2) and "Perfect" (3). According to the official (strict) interpretation of the relevance labels, a level of 1 should not be considered relevant and should thus be treated just like a level of 0, while the lenient interpretation considers passages of level 1 relevant when calculating metrics.

A.2.2 TripClick

TripClick is a recently introduced health IR dataset (Rekabsaz et al., 2021b) based on click logs that refer to about 1.5M MEDLINE articles. The approx.
700k unique queries in its training set are split into 3 subsets, HEAD, TORSO and TAIL, based on their frequency of occurrence: queries in TAIL are asked only once or a couple of times, while queries in HEAD have been asked tens or hundreds of times. As a result, each query in HEAD, TORSO and TAIL on average ends up with 41.9, 9.1 and 2.8 pseudo-relevance labels, using a click-through model (RAW) where every clicked document is considered relevant. The dataset also includes alternative relevance labels using the Document Click-Through Rate (DCTR), on 4 distinct levels (the latter follow the same definitions as the TREC Deep Learning evaluation sets). We note that, although the number of labels per query is much higher than MS MARCO, unlike the latter, these labels have not been verified by human judges.\nFor validation and evaluation of our models we use the officially designated validation and test set, respectively (3.5k queries each)." }, { "figure_ref": [], "heading": "A.3 Evaluation", "publication_ref": [ "b33", "b35", "b17" ], "table_ref": [], "text": "All training and evaluation experiments are produced with the same seed for pseudo-random number generators. We use mean reciprocal rank (MRR), normalized discounted cumulative gain (nDCG), mean average precision (MAP) and recall to evaluate the models on TREC DL tracks, MS MARCO and TripClick, in line with past work (e.g. (Xiong et al., 2020;Zhan et al., 2021;Hofstätter et al., 2021;Rekabsaz et al., 2021b)). While relevance judgements are well-defined in MS MARCO and TripClick, for the TREC DL tracks there exist strict and lenient interpretations of the relevance scores of judged passages (see Section A.2). In this work, we use the official, strict interpretation. We calculate the metrics using the official TREC evaluation software.7 " }, { "figure_ref": [ "fig_2", "fig_3", "fig_4", "fig_5", "fig_7", "fig_6", "fig_7", "fig_8" ], "heading": "A.4 Inference-time reranking with reciprocal nearest neighbors", "publication_ref": [ "b25" ], "table_ref": [ "tab_2", "tab_3" ], "text": "Prior work on rNN reranking considered the entire gallery of images (collection C) as a reranking context for each probe, i.e. N = |C|. With |C| in the order of tens of millions, this is intractable for the task of web retrieval using transformer LMs, and a smaller context size must be used instead.\nTo investigate the importance of the context size, we therefore initially fix the number of in-context candidates per query to a large number within reasonable computational constraints (N = 1000) and optimize the hyperparameters of reciprocal nearest neighbors (e.g. k, k exp , λ, τ , f w ) on the MS MARCO dev.small subset.\nWe first rerank candidates initially ranked by a CODER-optimized TAS-B retriever, denoted as \"CODER(TAS-B)\". To determine an appropriate size of reranking context, we first sort candidates by their original relevance score (geometric similarity) and then recompute query similarity scores with a growing number of in-context candidates (selected in the order of decreasing geometric similarity from the query), while measuring changes in ranking effectiveness.\nFigure 3 shows that rNN-based reranking slightly improves effectiveness compared to ranking purely based on geometric similarity, with the peak improvement registered around a context size of 60 candidates. This behavior is consistent when evaluating rNN-based raranking using the same hyperparameters on different query sets: MS MARCO dev (Fig. 
4), which is an order of magnitude larger, and TREC DL 2020 (Fig. 5) and TREC DL 2019 (Fig. 6), where the improvement is larger (possibly because it can be measured more reliably due to the greater annotation depth). In all cases performance clearly saturates as the number of candidates grows (somewhat slower for TREC DL 2019). The same behavior as described above is observed when reranking the original TAS-B model's results using the same hyperparameters chosen for the CODER-trained version, with the performance benefit being approximately twice as large (Fig. 8).\nWhile it is expected that progressively increasing the context size will increase performance, as there is a greater chance to include the ground-truth passage(s) which may have been initially ranked lower (i.e. embedded farther from the query), the peak and subsequent degradation or saturation is a novel finding. We hypothesize that it happens because the more negative candidates are added in the context, the higher the chance that they disrupt the reciprocal neighborhood relationship between query and positive document(s) (see Figure 1).\nWe can therefore conclude that we may use a relatively small number N of context candidates for computing reciprocal nearest neighbors similarities, which is convenient because computational complexity scales with O(N 2 ). For a context of 60 candidates, a CPU processing delay of only about 5 milliseconds per query is introduced (Figure 7). These results additionally indicate that the context size should best be treated as a rNN hyperparameter to be jointly optimized with the rest, which is reasonable, as it is expected to depend on the average rank that ground-truth documents tend to receive.\nAfter optimizing rNN-related hyperparameters (including the context size) on MS MARCO dev.small for CODER(TAS-B), we evaluate rNN reranking on the other evaluation sets (including its ×8 larger superset MS MARCO dev) and present the results in Table 1. We observe that a similarity based on reciprocal nearest neighbors can indeed improve ranking effectiveness compared to using purely geometric similarity. The improvement is more pronounced on the TREC DL datasets (+0.011 nDCG@10), where a greater annotation depth and multi-level relevance labels potentially allow to better differentiate between methods.\nAdditionally, we find that rankings from TAS-B -whose embeddings are relatively similar to CODER(TAS-B) -also improve, despite the fact that hyperparameters were chosen based on the CODER(TAS-B) model (also see Figure 8).\nThe strongest dense retrieval models we evaluate, CoCondenser and CODER(CoCondenser), also show improved performance, again measured primarily on TREC DL: the former improves by +0.009 nDCG@10 on TREC DL 2020 and the latter by 0.009 nDCG@10 on TREC DL 2019. Notably, reranking effectiveness when using the exact same hyperparameters as for CODER(TAS-B) and TAS-B is only very slightly worse.\nBy contrast, when transferring hyperparameters selected for MS MARCO to reranking candidates on the TripClick dataset, we find that performance deteriorates with respect to geometric similarity. Therefore, we can conclude that rNN hyperparameters predominantly depend on the dataset, and to a much lesser extent on the dense retriever.\nAfter optimizing hyperparameters on TripClick HEAD Val, we evaluate on HEAD Test, using both RAW (binary) as well as DCTR (multi-level) relevance labels; we present the results in Table 2. 
Also for this dataset, which differs substantially in its characteristics from MS MARCO, we again observe that using reciprocal nearest neighbors to compute the similarity metric can slightly improve ranking effectiveness for all examined retrieval methods. We also observe the same saturation behavior with respect to the ranking context size, i.e., the number of candidates considered when reranking (Fig. 9).

Figure 11: Because many more documents receive higher-than-zero relevance in the target distribution after label smoothing, by design it promotes a diffuse predicted distribution (bottom). Thus, although the predicted relevance of the ground-truth positive document is now significantly higher compared to when not using label smoothing (top), indicating a model improvement, the document ends up ranking lower because of the dispersed relevance estimates, and thus the MRR metric decreases. By contrast, the KL-divergence (i.e., the loss function) correctly captures the improvement in assessing the relevance of the ground-truth positive. We note that in sparsely annotated datasets like MS MARCO, the "1-hot" ground-truth annotations (right) are very often incorrect among the top ranks, and some of the candidates ranked more highly than the ground truth (e.g., Candidates 3 and 4 in the figure) may actually be relevant, which would render the MRR metric spurious; Qu et al. (2021) estimate that about 70% of the top 5 candidates retrieved by a top-performing dense retrieval model that are not labeled as positive are in fact relevant.

A.5 Evidence-based label smoothing

A.5.1 Score normalization

In standard contrastive learning, including when using a KL-divergence loss as in CODER (Zerveas et al., 2022), there is a very stark difference between the probability of the handful of ground-truth documents and the zero probability of the negatives in the target (ground-truth) distribution. In evidence-based label smoothing, we instead use the continuous similarity scores of candidates with respect to the ground-truth positive document(s) as soft labels for training, which means that there is a reduced contrast between the highest and lowest score values. Additionally, the output values of the model's similarity estimate reside within an arbitrary value range, determined primarily by the model's weights, and for the same rank there is a large variance of values between queries (Fig. 12). This means that after passing through a softmax, which is highly non-linear, the target score distribution will be either concentrated or diffuse, depending on the range of score values for each particular query. Normalizing values into the same range will facilitate learning consistent relevance estimates. Furthermore, given a single query, we wish target scores to decrease rapidly as the rank increases (Fig. 13).

Figure 13: Similarity scores of the top 1000 candidates for a single query, sorted in descending order. Since they are used as training labels, to avoid very diffuse estimated score distributions we need to ensure that there is a large contrast between the top and bottom candidates and that the probability (i.e., the values after the scores pass through a softmax) decreases abruptly after the first few ranks. We achieve this through appropriate normalization: here, max-min (blue) instead of dividing by the maximum (orange).

Therefore, to facilitate learning, we wish to ensure that (a) there is a large enough contrast between the first and last ranks, and (b) this is true for all queries. We can achieve this by applying a normalizing function f_n, such as max-min, to the vector s ∈ \mathbb{R}^N of candidate scores for a single query:

    f_n(s) = \frac{s - \min(s)}{\max(s) - \min(s)},    (13)

or the following, which is based on the standard deviation σ across the N candidate scores of a single query:

    f_n(s) = \frac{s - \min(s)}{\sigma}, \qquad \sigma = \sqrt{\frac{\sum_{i} \big( s_i - \sum_{j} s_j / N \big)^2}{N}}.    (14)
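A minimal sketch of the two normalization choices is given below; the toy score profile and the softmax comparison are added purely for illustration and are not part of the released code.

```python
import numpy as np

def maxmin_norm(s):
    """Eq. (13): rescale the candidate scores of one query to [0, 1]."""
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def std_norm(s):
    """Eq. (14): shift by the minimum and rescale by the standard deviation."""
    return (s - s.min()) / (s.std() + 1e-12)

# toy comparison: a slowly decaying score profile, as in Fig. 13
scores = 10.0 / (1.0 + np.arange(1000))
for f in (maxmin_norm, std_norm):
    p = np.exp(f(scores))
    p /= p.sum()
    print(f.__name__, p[:3].round(4), float(p.sum()))
```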
Acknowledgements

G. Zerveas would like to thank the Onassis Foundation for supporting this research. The contribution of N. Rekabsaz is supported by the State of Upper Austria and the Federal Ministry of Education, Science, and Research, through grant LIT-2021-YOU-215.
2023-10-22
10.1145/3209978.3209985
[ { "authors": "Qingyao Ai; Keping Bi; Jiafeng Guo; W Bruce Croft", "journal": "Association for Computing Machinery", "ref_id": "b0", "title": "Learning a Deep Listwise Context Model for Ranking Refinement", "year": "2018" }, { "authors": "Qingyao Ai; Xuanhui Wang; Sebastian Bruch; Nadav Golbandi; Michael Bendersky; Marc Najork", "journal": "", "ref_id": "b1", "title": "Learning Groupwise Multivariate Scoring Functions Using Deep Neural Networks", "year": "2019" }, { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katie Millican; Malcolm Reynolds; Roman Ring; Eliza Rutherford; Serkan Cabi; Tengda Han; Zhitao Gong; Sina Samangooei; Marianne Monteiro; Jacob Menick; Sebastian Borgeaud; Andrew Brock; Aida Nematzadeh; Sahand Sharifzadeh; Mikolaj Binkowski; Ricardo Barreira; Oriol Vinyals; Andrew Zisserman; Karen Simonyan", "journal": "", "ref_id": "b2", "title": "Flamingo: a Visual Language Model for Few-Shot Learning", "year": "2022" }, { "authors": "Payal Bajaj; Daniel Campos; Nick Craswell; Li Deng; Jianfeng Gao; Xiaodong Liu; Rangan Majumder; Andrew Mcnamara; Bhaskar Mitra; Tri Nguyen; Mir Rosenberg; Xia Song; Alina Stoica; Saurabh Tiwary; Tong Wang", "journal": "", "ref_id": "b3", "title": "MS MARCO: A Human Generated MAchine Reading COmprehension Dataset", "year": "2018" }, { "authors": "Amin Bigdeli; Negar Arabzadeh; Shirin Seyedsalehi; Morteza Zihayat; Ebrahim Bagheri", "journal": "Cham. Springer International Publishing", "ref_id": "b4", "title": "A light-weight strategy for restraining gender biases in neural rankers", "year": "2022" }, { "authors": "Amin Bigdeli; Negar Arabzadeh; Morteza Zihayat; Ebrahim Bagheri", "journal": "Cham. Springer International Publishing", "ref_id": "b5", "title": "Exploring gender biases in information retrieval relevance judgement datasets", "year": "2021" }, { "authors": "Zhe Cao; Tao Qin; Tie-Yan Liu; Ming-Feng Tsai; Hang Li", "journal": "Association for Computing Machinery", "ref_id": "b6", "title": "Learning to rank: from pairwise approach to listwise approach", "year": "2007" }, { "authors": "Olivier Chapelle; Yi Chang", "journal": "", "ref_id": "b7", "title": "Yahoo! 
Learning to Rank Challenge Overview", "year": "2010" }, { "authors": "Domenico Dato; Claudio Lucchese; Maria Franco; Salvatore Nardini; Raffaele Orlando; Nicola Perego; Rossano Tonellotto; Venturini", "journal": "ACM Transactions on Information Systems", "ref_id": "b8", "title": "Fast Ranking with Additive Ensembles of Oblivious and Non-Oblivious Regression Trees", "year": "2016" }, { "authors": "Mostafa Dehghani; Hamed Zamani; Aliaksei Severyn; Jaap Kamps; W Bruce Croft", "journal": "Association for Computing Machinery", "ref_id": "b9", "title": "Neural Ranking Models with Weak Supervision", "year": "2017" }, { "authors": "Fernando Diaz", "journal": "Information Retrieval", "ref_id": "b10", "title": "Regularizing query-based retrieval scores", "year": "2007" }, { "authors": "Alessandro Fabris; Alberto Purpura; Gianmaria Silvello; Gian Antonio Susto", "journal": "Information Processing & Management", "ref_id": "b11", "title": "Gender stereotype reinforcement: Measuring the gender bias conveyed by ranking algorithms", "year": "2020" }, { "authors": "Luyu Gao; Jamie Callan", "journal": "", "ref_id": "b12", "title": "Unsupervised corpus aware language model pre-training for dense passage retrieval", "year": "2021" }, { "authors": "Gizem Gezici; Aldo Lipani; Yucel Saygin; Emine Yilmaz", "journal": "Information Retrieval Journal", "ref_id": "b13", "title": "Evaluation metrics for measuring bias in search engine results", "year": "2021" }, { "authors": "Ian J Goodfellow; Yoshua Bengio; Aaron Courville", "journal": "MIT Press", "ref_id": "b14", "title": "Deep Learning", "year": "2016" }, { "authors": "Dany Haddad; Joydeep Ghosh", "journal": "Association for Computing Machinery", "ref_id": "b15", "title": "Learning More From Less: Towards Strengthening Weak Supervision for Ad-Hoc Retrieval", "year": "2019" }, { "authors": "Sebastian Hofstätter; Sophia Althammer; Mete Sertkan; Allan Hanbury", "journal": "", "ref_id": "b16", "title": "Establishing Strong Baselines for TripClick Health Retrieval", "year": "2022" }, { "authors": "Sebastian Hofstätter; Sheng-Chieh Lin; Jheng-Hong Yang; Jimmy J Lin; A Hanbury", "journal": "", "ref_id": "b17", "title": "Efficiently Teaching an Effective Dense Retriever with Balanced Topic Aware Sampling", "year": "2021" }, { "authors": "N Jardine; C J Van Rijsbergen", "journal": "Information Storage and Retrieval", "ref_id": "b18", "title": "The use of hierarchic clustering in information retrieval", "year": "1971" }, { "authors": "Klara Krieg; Emilia Parada-Cabaleiro; Markus Schedl; Navid Rekabsaz", "journal": "Cham. 
Springer", "ref_id": "b19", "title": "Do perceived gender biases in retrieval results affect relevance judgements?", "year": "2022" }, { "authors": "Yuxiang Lu; Yiding Liu; Jiaxiang Liu; Yunsheng Shi; Zhengjie Huang; Shikun Feng; Yu Sun; Hao Tian; Hua Wu; Shuaiqiang Wang; Dawei Yin; Haifeng Wang", "journal": "", "ref_id": "b20", "title": "ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self Onthe-fly Distillation for Dense Passage Retrieval", "year": "2022" }, { "authors": "Gianluca Moro; Lorenzo Valgimigli", "journal": "Sensors", "ref_id": "b21", "title": "Efficient Self-Supervised Metric Information Retrieval: A Bibliography Based Method Applied to COVID Literature", "year": "2021" }, { "authors": "Rodrigo Nogueira; Kyunghyun Cho", "journal": "", "ref_id": "b22", "title": "Passage Re-ranking with BERT", "year": "2020" }, { "authors": "Danfeng Qin; Stephan Gammeter; Lukas Bossard; Till Quack; Luc Van Gool", "journal": "", "ref_id": "b23", "title": "Hello neighbor: Accurate object retrieval with k-reciprocal nearest neighbors", "year": "2011" }, { "authors": "Tao Qin; Tie-Yan Liu; Jun Xu; Hang Li", "journal": "Information Retrieval", "ref_id": "b24", "title": "LETOR: A benchmark collection for research on learning to rank for information retrieval", "year": "2010" }, { "authors": "Yingqi Qu; Yuchen Ding; Jing Liu; Kai Liu; Ruiyang Ren; Wayne Xin Zhao; Daxiang Dong; Hua Wu; Haifeng Wang", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "RocketQA: An optimized training approach to dense passage retrieval for opendomain question answering", "year": "2021" }, { "authors": "Navid Rekabsaz; Simone Kopeinik; Markus Schedl", "journal": "", "ref_id": "b26", "title": "Societal biases in retrieved contents: Measurement framework and adversarial mitigation for bert rankers", "year": "2021" }, { "authors": "Navid Rekabsaz; Oleg Lesota; Markus Schedl; Jon Brassey; Carsten Eickhoff", "journal": "Association for Computing Machinery", "ref_id": "b27", "title": "TripClick: The Log Files of a Large Health Web Search Engine", "year": "2021" }, { "authors": "Navid Rekabsaz; Markus Schedl", "journal": "", "ref_id": "b28", "title": "Do neural ranking models intensify gender bias?", "year": "2020" }, { "authors": "Ruiyang Ren; Shangwen Lv; Yingqi Qu; Jing Liu; Wayne Xin Zhao; Qiaoqiao She; Hua Wu; Haifeng Wang; Ji-Rong Wen", "journal": "", "ref_id": "b29", "title": "PAIR: Leveraging Passage-Centric Similarity Relation for Improving Dense Passage Retrieval", "year": "2021" }, { "authors": "Ruiyang Ren; Yingqi Qu; Jing Liu; Wayne Xin Zhao; Qiaoqiao She; Hua Wu; Haifeng Wang; Ji-Rong Wen", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "RocketQAv2: A Joint Training Method for Dense Passage Retrieval and Passage Re-ranking", "year": "2021" }, { "authors": "Keshav Santhanam; Omar Khattab; Jon Saad-Falcon; Christopher Potts; Matei Zaharia", "journal": "", "ref_id": "b31", "title": "Col-BERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction", "year": "2021" }, { "authors": "Harrisen Scells; Shengyao Zhuang; Guido Zuccon", "journal": "Association for Computing Machinery", "ref_id": "b32", "title": "Reduce, reuse, recycle: Green information retrieval research", "year": "2022" }, { "authors": "Lee Xiong; Chenyan Xiong; Ye Li; Kwok-Fung Tang; Jialin Liu; Paul Bennett; Junaid Ahmed; Arnold Overwijk", "journal": "", "ref_id": "b33", "title": "Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval", 
"year": "2020" }, { "authors": "George Zerveas; Navid Rekabsaz; Daniel Cohen; Carsten Eickhoff", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "CODER: An efficient framework for improving retrieval through COntextual document embedding reranking", "year": "2022" }, { "authors": "Jingtao Zhan; Jiaxin Mao; Yiqun Liu; Jiafeng Guo; Min Zhang; Shaoping Ma", "journal": "ACM", "ref_id": "b35", "title": "Optimizing Dense Retrieval Model Training with Hard Negatives", "year": "2021" }, { "authors": "Jingtao Zhan; Jiaxin Mao; Yiqun Liu; Min Zhang; Shaoping Ma", "journal": "", "ref_id": "b36", "title": "RepBERT: Contextualized Text Embeddings for First-Stage Retrieval", "year": "2020" }, { "authors": "Hang Zhang; Yeyun Gong; Yelong Shen; Jiancheng Lv; Nan Duan; Weizhu Chen", "journal": "", "ref_id": "b37", "title": "Adversarial Retriever-Ranker for dense text retrieval", "year": "2022" }, { "authors": "Zhun Zhong; Liang Zheng; Donglin Cao; Shaozi Li", "journal": "", "ref_id": "b38", "title": "Re-ranking Person Re-identification with k-Reciprocal Encoding", "year": "2017" }, { "authors": "Kun Zhou; Yeyun Gong; Xiao Liu; Wayne Xin Zhao; Yelong Shen; Anlei Dong; Jingwen Lu; Rangan Majumder; Ji-Rong Wen; Nan Duan", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "SimANS: Simple ambiguous negatives sampling for dense text retrieval", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 318.26, 71.88, 206.88, 13.18 ], "formula_id": "formula_0", "formula_text": "s * (q, c i ) = λ s(q, c i ) + (1 -λ) s J (q, c i ), (1)" }, { "formula_coordinates": [ 4, 311.6, 468.79, 213.54, 12.68 ], "formula_id": "formula_1", "formula_text": "y * = [1-ϵ, ϵ/(N -1), . . . , ϵ/(N -1)] ∈ R N (2)" }, { "formula_coordinates": [ 5, 76.98, 223.33, 193.14, 68.12 ], "formula_id": "formula_2", "formula_text": "r ′′ (q, ci) = 1 |L(q)| l∈L(q) s * (l, ci), s * (l, ci) = λ • s(l, ci) + (1 -λ) • sJ (l, ci), 0 < λ < 1 6:" }, { "formula_coordinates": [ 5, 105.14, 304.73, 167.06, 35.35 ], "formula_id": "formula_3", "formula_text": "r ′ (q, ci) =      b • fn (r ′′ (q, ci)) if ci ∈ L(q), -∞ if i > nmax, fn (r ′′ (q, ci))" }, { "formula_coordinates": [ 5, 89.41, 390.27, 134.55, 7.94 ], "formula_id": "formula_4", "formula_text": "L (r(q), ŝ(q)) = DKL (r(q) || ŝ(q))," }, { "formula_coordinates": [ 6, 96.79, 95.08, 392.07, 31.2 ], "formula_id": "formula_5", "formula_text": ".388 - - - - - - - - - - - ERNIE-Search (Lu et al., 2022) 0.401 - - - - - - - - - - - AR2 (Zhang et al., 2022) 0.395 - - - - - - - - - - - AR2 + SimANS" }, { "formula_coordinates": [ 13, 76.95, 283.63, 212.92, 26.75 ], "formula_id": "formula_6", "formula_text": "NN(q, k) = {c i | d g (x c i , x q ) ≤ d g (x c k , x q ), ∀i ∈ N : 1 ≤ i ≤ |C|},(3)" }, { "formula_coordinates": [ 13, 72.36, 444.21, 215.27, 10.63 ], "formula_id": "formula_7", "formula_text": "R(q, k) = {c i | c i ∈ NN(q, k) ∧ q ∈ NN(c i , k)}," }, { "formula_coordinates": [ 13, 80.77, 666.68, 186.82, 56.54 ], "formula_id": "formula_8", "formula_text": "R * (q, k) := R(q, k) ∪ R(c i , τ k), s.t. R(q, k) ∩ R(c i , τ k) ≥ 2 3 R(c i , τ k) , ∀c i ∈ R(q, k)." }, { "formula_coordinates": [ 13, 321.25, 382.59, 203.89, 28.73 ], "formula_id": "formula_9", "formula_text": "d J (q, c i ) = 1 - R * (q, k) ∩ R * (c i , k) R * (q, k) ∪ R * (c i , k) .(6)" }, { "formula_coordinates": [ 13, 315.9, 680.29, 209.24, 25.31 ], "formula_id": "formula_10", "formula_text": "v ′ q,c i = f w (d(q, c i )) if c i ∈ R * (q, k) 0 otherwise(7)" }, { "formula_coordinates": [ 14, 83.26, 174.18, 206.6, 35.03 ], "formula_id": "formula_11", "formula_text": "v c i = 1 k exp kexp j=1 v ′ c j , ∀c j ∈ NN(c i , k exp ) .(8)" }, { "formula_coordinates": [ 14, 83.72, 307.81, 206.14, 50.23 ], "formula_id": "formula_12", "formula_text": "R * (q, k) ∩ R * (c i , k) = min(v q , v c i ) (9) R * (q, k) ∪ R * (c i , k) = max(v q , v c i ),(10)" }, { "formula_coordinates": [ 14, 80.71, 414.53, 209.15, 35.58 ], "formula_id": "formula_13", "formula_text": "d J (q, c i ) = 1 - |C| j=1 min(v q,c j , v c i ,c j ) |C| j=1 max(v q,c j , v c i ,c j ) .(11)" }, { "formula_coordinates": [ 14, 82.14, 534.08, 207.73, 13.18 ], "formula_id": "formula_14", "formula_text": "d * (q, c i ) = λd(q, c i ) + (1 -λ)d J (q, c i ),(12)" }, { "formula_coordinates": [ 25, 248.83, 138.72, 276.31, 24.51 ], "formula_id": "formula_15", "formula_text": "f n (s) = s -min(s) σ ,(14)" } ]
Enhancing the Ranking Context of Dense Retrieval through Reciprocal Nearest Neighbors
Sparse annotation poses persistent challenges to training dense retrieval models, for example by distorting the training signal when unlabeled relevant documents are used spuriously as negatives in contrastive learning. To alleviate this problem, we introduce evidence-based label smoothing, a novel, computationally efficient method that prevents penalizing the model for assigning high relevance to false negatives. To compute the target relevance distribution over candidate documents within the ranking context of a given query, those candidates most similar to the ground truth are assigned a nonzero relevance probability based on the degree of their similarity to the ground-truth document(s). To estimate relevance we leverage an improved similarity metric based on reciprocal nearest neighbors, which can also be used independently to rerank candidates in postprocessing. Through extensive experiments on two large-scale ad hoc text retrieval datasets, we demonstrate that reciprocal nearest neighbors can improve the ranking effectiveness of dense retrieval models, both when used for label smoothing, as well as for reranking. This indicates that by considering relationships between documents and queries beyond simple geometric distance we can effectively enhance the ranking context.
George Zerveas; Navid Rekabsaz; Carsten Eickhoff
[ { "figure_caption": "Figure 2 :2Figure2: Evolution of performance of TAS-B (leftmost point, step 0) on MS MARCO dev validation set, as the model is being fine-tuned through CODER. The red curve corresponds to using evidence-based (EB) label smoothing computed with rNN-based similarity, whereas for the blue curve the smooth label distribution is computed using pure geometric similarity. EB label smoothing significantly reduces validation loss (computed with the original labels, top), indicating that the ground truth passages are receiving higher probability in the estimated relevance distribution, but the retrieval metric (bottom) fails to register an improvement due to annotation sparsity (compare with Fig.10, Appendix). Distillation leads to precipitous degradation of performance.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Hyperparameters for reranking with Reciprocal Nearest Neighbors, MS MARCO.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance of reciprocal nearest neighbors-based reranking of CODER(TAS-B) results on MS MARCO dev.small, as the number of candidates in the ranking context grows. Hyperparameters are optimized for a context of 1000 candidates. Performance is slightly improved compared to ranking exclusively based on geometric similarity and peaks at 60 in-context candidates.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Performance of reciprocal nearest neighbors-based reranking of CODER(TAS-B) results on MS MARCO dev, as the number of candidates in the ranking context grows. Hyperparameters are the same as in Fig. 3. Performance is slightly improved compared to ranking exclusively based on geometric similarity and peaks at 60 in-context candidates.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Performance of reciprocal nearest neighbors-based reranking of CODER(TAS-B) results on TREC DL 2020, as the number of candidates in the ranking context grows. Hyperparameters are the same as in Fig. 3. Performance is improved compared to ranking exclusively based on geometric similarity and peaks at 60 in-context candidates.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Performance of reciprocal nearest neighbors-based reranking of CODER(TAS-B) results on TREC DL 2019, as the number of candidates in the ranking context grows. Hyperparameters are the same as in Fig. 3. Performance is improved compared to ranking exclusively based on geometric similarity but does not clearly saturate.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Time delay per query (in milliseconds) when reranking using reciprocal nearest neighbors-based similarity, as the number of candidates in the ranking context grows. Hyperparameters are the same as in Fig. 3. Processing time scales according to O(N 2 ). 
Processor (1 CPU, 1 core): AMD EPYC 7532 32-Core Processor, 2400 MHz.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Performance of reciprocal nearest neighbors-based reranking of TAS-B results on MS MARCO dev, as the number of candidates in the ranking context grows. Hyperparameters are the same as in Fig. 3. Performance is improved compared to ranking exclusively based on geometric similarity and peaks at approx. 60 in-context candidates.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Performance of reciprocal nearest neighborsbased reranking of CODER(RepBERT) results on TripClick HEAD Test, as the number of candidates in the ranking context grows.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Similarity scores per rank across a large number of queries.", "figure_data": "", "figure_id": "fig_9", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Reciprocal Nearest Neighbors, its ranking is improved, because of its reciprocal relationship to the query, which one of 3 nearest neighbors of the query lacks. Adding an extra negative to the context (circle #1) does not affect this ranking, but the second extra negative (#2) disrupts the reciprocal relationship, becoming the 4th nearest neighbor of the positive.", "figure_data": "21Figure 1: Query (yellow star), positive (red cross) andnegative (full blue circles) document embedding vectorsin a shared 2D representation space. Based on top-4Nearest Neighbors, the positive would be ranked lowerthan the 3 nearest neighbors of the query. When us-ing top-4", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Recip. NN reranking, MS MARCO collection. Metrics cut-off @10. Bold: best in model class. As a reference, at the top we include all SOTA dense retrieval models from literature that ourperform the methods we evaluated, noting that, unlike ours, they all rely heavily on cross-encoders for training (e.g. distillation, ranking, pseudolabeling etc). Blue: our contributions.", "figure_data": "0.409-----------TAS-B0.344 0.4080.619 0.344 0.4070.618 0.875 0.6590.222 0.832 0.6200.302R. TAS-B0.347 0.4110.625 0.346 0.4100.623 0.886 0.6640.226 0.828 0.6270.311CODER(TAS-B)0.355 0.4190.633 0.353 0.4160.627 0.857 0.6680.224 0.844 0.6230.306R. CODER(TAS-B)0.357 0.4210.637 0.354 0.4180.631 0.853 0.6790.231 0.860 0.6340.317CoCondenser0.381 0.4460.665 0.381 0.4460.664 0.879 0.6560.226 0.833 0.6180.301R. CoCondenser0.384 0.4490.670 0.381 0.4470.666 0.877 0.6580.226 0.833 0.6270.306CODER(CoCond)0.382 0.4470.668 0.382 0.4470.665 0.895 0.6550.228 0.844 0.6390.314R. CODER(CoCond)0.384 0.4500.671 0.383 0.4480.667 0.895 0.6640.230 0.844 0.6410.316ModelDCTR Head MRR nDCG MRR nDCG Recall RAW HeadBM25 10.276 0.224-0.1990.128BERT-Dot (SciBERT) 20.530 0.243---BERT-Cat (SciBERT) 20.595 0.294---RepBERT [abbrev: RB] 0.526 0.255 0.574 0.3440.199R. RepBERT0.525 0.256 0.575 0.3460.200CODER(RB)0.634 0.316 0.674 0.4190.234R. CODER(RB)0.638 0.317 0.679 0.4180.234RB + CODER(RB)0.637 0.318 0.679 0.4210.235RB + R. 
CODER(RB)0.641 0.319 0.681 0.4220.236", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Evaluation of label smoothing applied to training CODER(TAS-B) on MS MARCO. Metrics cut-off @10. Bold: best performance in each model class. Blue: our contributions.", "figure_data": "ModelTREC DL 2019 MRR nDCG MAP Recall MRR nDCG MAP Recall TREC DL 2020TAS-B0.875 0.659 0.222 0.259 0.832 0.620 0.302 0.363CODER(TAS-B)0.857 0.668 0.224 0.270 0.844 0.623 0.306 0.365CODER(TAS-B) + uniform sm.0.857 0.669 0.223 0.273 0.835 0.619 0.304 0.360CODER(TAS-B) + geom. smooth labels0.848 0.665 0.220 0.271 0.842 0.626 0.310 0.370CODER(TAS-B) + rNN smooth labels0.857 0.671 0.226 0.276 0.862 0.632 0.315 0.369CODER(TAS-B) + mixed rNN/geom. smooth lab.0.889 0.675 0.227 0.277 0.842 0.637 0.318 0.376CoCondenser0.879 0.656 0.226 0.269 0.833 0.618 0.301 0.366CODER(CoCondenser)0.895 0.655 0.228 0.269 0.844 0.639 0.314 0.384CODER(CoCondenser) + mixed rNN/geom. smooth lab. 0.884 0.661 0.232 0.278 0.856 0.646 0.316 0.383Loss0.120 0.122 0.124 0.126CODER(TAS-B) EB smoothing (rNN) EB smoothing (geom.) Uniform smoothing Distillation (hparam. rNN) Distillation (hparam. geom.) RepBERT0.1180.1160.114050000 100000 150000 200000 [email protected] 0.3350 0.3375 0.3400 0.3425 0.34500Step 50000 100000 150000 200000 CODER(TAS-B) EB smoothing (rNN) EB smoothing (geom.) Uniform smoothing Distillation (hparam. rNN) Distillation (hparam. geom.) RepBERT", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Evolution of performance of RepBERT (left-most point, step 0) on the TripClick HEAD Val validation set, as the model is being fine-tuned through CODER on TripClick HEAD∪TORSO Train. The red curve corresponds to additionally using evidence-based label smoothing computed with reciprocal NN-based similarity, whereas for the blue curve the smooth label distribution is computed using pure geometric similarity. Only evidence-based smoothing with rNN similarity substantially improves performance compared to plain CODER(RepBERT), despite \"CODER(RepBERT) (hyperparam.)\" and \"EB smoothing\" with geometric similarity using the same training hyperparameters. Hyperparameters for training with evidence-based label smoothing, MS MARCO. The hyperparameters related to computing rNN-based similarity are the same as in Table5.", "figure_data": "-based label smoothing0.66 0.680.110 0.115CODER(RepBERT) CODER(RepBERT) (hyperparam.) EB smoothing (geom.) EB smoothing (rNN) [email protected] 0.64Loss0.105 0.1000.56 0.58 0.60CODER(RepBERT) EB smoothing (rNN) RepBERT EB smoothing (geom.) CODER(RepBERT) (hyperparam.)0.090 0.095050000 100000 150000 200000 250000 Step050000 100000 150000 200000 250000 StepFigure 10: EB label smoothing hyperparam. CODER(TAS-B) CODER(CoCondenser)b: boost factor1.2221.525n max : softmax cut-off432f n : normalization func.max-minstd-basedlearning rate: peak value1.73e-061.37e-06learning rate: linear warm-up steps 900012000", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Performance of models trained on MS MARCO but zeroshot-evaluated on TripClick Test. 
Bold: overall best, Underline: best in model class.", "figure_data": "TAS-B0.2780.1390.1300.3390.1880.113CODER(TAS-B)0.2790.1400.1300.3380.1910.115CODER(TAS-B)0.2850.1430.1340.3440.1950.116+ geom. smooth labelsCODER(TAS-B)0.2880.1440.1340.3470.1950.116+ rNN smooth labelsCODER(TAS-B)0.2840.1420.1320.3420.1920.115+ mixed rNN/geom. smooth lab.CoCondenser0.2420.1140.1050.2930.1570.092CODER(CoCondenser)0.2510.1170.1070.3060.1610.093CODER(CoCondenser)0.2500.1170.1070.3040.1620.094+ mixed rNN/geom. smooth lab.", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Performance of models trained on MS MARCO but zeroshot-evaluated on TripClick Val. Bold: overall best, Underline: best in model class.", "figure_data": "Test RAW TorsoVal RAW Torso", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Label smoothing applied to CODER(RepBERT) trained on TripClick HEAD ∪ TORSO Train, validated on HEAD Val; evaluation on TORSO Test and Val.", "figure_data": "", "figure_id": "tab_10", "figure_label": "9", "figure_type": "table" } ]
[{"Category": "Data Source", "Citation": "(Qu et al., 2021)", "Explanation": "The cited work by Qu et al. provides a dataset that the citing paper uses to study the issue of false negatives in ad-hoc text retrieval models."}, {"Category": "Extension or Continuation", "Citation": "(Zhou et al., 2022)", "Explanation": "The cited work by Zhou et al. extends the research on the issue of false negatives in ad-hoc text retrieval models by exploring new dimensions or variables."}, {"Category": "Methodological Basis", "Citation": "(Qu et al., 2021)", "Explanation": "The cited work by Qu et al. (2021) provides a method for mitigating the issue of false negatives in contrastive learning by using reciprocal nearest neighbors to extract additional relationship information between queries and documents."}, {"Category": "Methodological Basis", "Citation": "(Zhou et al., 2022)", "Explanation": "The cited work by Zhou et al. (2022) also addresses the problem of false negatives in contrastive learning by using reciprocal nearest neighbors to improve the training signal for dense retrieval methods."}, {"Category": "Extension or Continuation", "Citation": "(Qu et al., 2021)", "Explanation": "The citing paper extends the research of Qu et al. (2021) by proposing a new method for using estimated similarity to ground-truth documents as evidence for label smoothing in dense retrieval methods."}, {"Category": "Methodological Basis", "Citation": "(Qu et al., 2021;Ren et al., 2021b)", "Explanation": "The cited works provide a strong basis for the computational methods used in the citing paper, including the use of cross-encoders for re-ranking and the need for a more powerful model in the training process."}, {"Category": "Supporting Evidence", "Citation": "(Diaz, 2007)", "Explanation": "The cited work on local relevance score regularization provides a conceptual basis for the label smoothing method proposed in the citing paper, which aims to improve relevance scores in document retrieval."}, {"Category": "Theoretical Foundation", "Citation": "(Jardine and van Rijsbergen, 1971)", "Explanation": "The cluster hypothesis proposed in the cited work is a foundational theory that the citing paper leverages to support the use of vector representations in document retrieval and relevance assessment."}, {"Category": "Extension or Continuation", "Citation": "(Zerveas et al., 2022)", "Explanation": "The cited work on query-specific ranking contexts in dense retrieval extends the insights and findings from learning-to-rank literature to the realm of transformer-based language models, providing a basis for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Qin et al., 2010)", "Explanation": "The cited work by Qin et al. provides a detailed annotation of candidate documents, which serves as a foundational method for building a rich context in Learning-to-Rank datasets."}, {"Category": "Data Source", "Citation": "(Chapelle and Chang, 2010)", "Explanation": "The cited work by Chapelle and Chang is a data source for the in-depth annotation of candidate documents in Learning-to-Rank datasets."}, {"Category": "Data Source", "Citation": "(Dato et al., 2016)", "Explanation": "The cited work by Dato et al. is another data source for the in-depth annotation of candidate documents in Learning-to-Rank datasets."}, {"Category": "Extension or Continuation", "Citation": "(Hofst\u00e4tter et al., 2021)", "Explanation": "The cited work by Hofst\u00e4tter et al. 
addresses the problem of sparse annotation by utilizing relevance estimates from supervised methods to build a context in Learning-to-Rank datasets."}, {"Category": "Extension or Continuation", "Citation": "(Qu et al., 2021)", "Explanation": "The cited work by Qu et al. also extends the work of Hofst\u00e4tter et al. by utilizing relevance estimates from supervised methods to build a context in Learning-to-Rank datasets."}, {"Category": "Extension or Continuation", "Citation": "(Ren et al., 2021b)", "Explanation": "The cited work by Ren et al. further builds upon the work of Hofst\u00e4tter et al. and Qu et al. by utilizing relevance estimates from supervised methods to build a context in Learning-to-Rank datasets."}, {"Category": "Extension or Continuation", "Citation": "(Dehghani et al., 2017)", "Explanation": "The cited work by Dehghani et al. addresses the problem of sparse annotation by utilizing unsupervised retrieval methods to build a context in Learning-to-Rank datasets."}, {"Category": "Extension or Continuation", "Citation": "(Haddad and Ghosh, 2019)", "Explanation": "The cited work by Haddad and Ghosh also extends the work of Dehghani et al. by utilizing unsupervised retrieval methods to build a context in Learning-to-Rank datasets."}, {"Category": "Methodological Basis", "Citation": "(Moro and Valgimigli, 2021)", "Explanation": "The cited work provides a method for deriving soft labels for documents used in a teacher-student distillation scheme, which the citing paper adopts in their research on document similarity estimation."}, {"Category": "Extension or Continuation", "Citation": "(Qin et al., 2011;Zhong et al., 2017)", "Explanation": "The cited works introduce the concept of reciprocal nearest neighbors for image re-identification, which the citing paper extends to improve similarity estimates in the context of document retrieval."}, {"Category": "Data Source", "Citation": "(Qu et al., 2021;Ren et al., 2021b)", "Explanation": "The cited works identify false negatives as a significant challenge in document retrieval and employ cross-encoders to discard top-ranking hard negatives. The citing paper uses this information to address the challenge of false negatives in their research."}, {"Category": "Supporting Evidence", "Citation": "(Zhou et al., 2021)", "Explanation": "The cited work is a recent development in the field of document retrieval that the citing paper builds upon in their research to address the challenge of false negatives and improve similarity estimates."}, {"Category": "Methodological Basis", "Citation": "(2022)", "Explanation": "The cited work by (2022) provides a method of selective sampling of negatives around the rank of the ground-truth document to address the problem of false negatives, which the citing paper adopts to improve the performance of the model in avoiding candidates that are ranked either much higher or much lower than the ground truth."}, {"Category": "Extension or Continuation", "Citation": "Ren et al. (2021a)", "Explanation": "The cited work by Ren et al. (2021a) leverages the similarity of candidate documents to the ground truth document in a different way compared to the citing paper, using all documents in the batch and retrieved candidates as negatives in an InfoNCE loss term to penalize the model when it assigns a low similarity score between a single positive and the query."}, {"Category": "Methodological Basis", "Citation": "(Zhong et al., 2017)", "Explanation": "The cited work by Zhong et al. 
(2017) provides a method for reranking that the citing paper adopts in their research to improve the performance of their retrieval model."}, {"Category": "Methodological Basis", "Citation": "(Qin et al., 2011;Zhong et al., 2017)", "Explanation": "The cited works provide a method of considering the entire gallery of images as a reranking context for each probe, which the citing paper adopts in its research to compute the similarity of documents to the ground truth."}, {"Category": "Methodological Basis", "Citation": "(Goodfellow et al., 2016)", "Explanation": "The cited work introduces the concept of uniform label smoothing, which the citing paper adopts to mitigate the effects of label noise and improve score calibration in contrastive learning."}, {"Category": "Extension or Continuation", "Citation": "(Alayrac et al., 2022)", "Explanation": "The cited work also employs uniform label smoothing for contrastive learning, which the citing paper builds upon to further improve the performance of the model."}, {"Category": "Supporting Evidence", "Citation": "(4)", "Explanation": "The cited work provides a specific example of how the groundtruth class is converted into a new probability vector using uniform label smoothing, which the citing paper uses to illustrate the process in more detail."}, {"Category": "Methodological Basis", "Citation": "(Bajaj et al., 2018)", "Explanation": "The cited work provides the MS MARCO Passage Retrieval dataset, which serves as the basis for the evaluation of the methods proposed in the citing paper."}, {"Category": "Data Source", "Citation": "(Rekabsaz et al., 2021b)", "Explanation": "The cited work provides the TripClick dataset, which is used as a data source for the health document retrieval task in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Ren et al., 2021b)", "Explanation": "The cited work by Ren et al. (2021b) provides the base model used in the citing paper for the CODER finetuning process, which serves as a foundational element for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Zhou et al., 2022)", "Explanation": "The cited work by Zhou et al. (2022) also contributes to the base model used in the CODER finetuning process in the citing paper, further enhancing the methodological basis of the research."}, {"Category": "Methodological Basis", "Citation": "(Lu et al., 2022)", "Explanation": "The cited work, ERNIE-Search, provides a method for dense retrieval that the citing paper builds upon in their research on the topic of dense retrieval."}, {"Category": "Extension or Continuation", "Citation": "(Zhang et al., 2022)", "Explanation": "The cited work, AR2, is extended in the citing paper to explore new dimensions and contexts in the field of dense retrieval."}, {"Category": "Methodological Basis", "Citation": "(Hofst\u00e4tter et al., 2021)", "Explanation": "The cited work, TAS-B, serves as a methodological basis for the citing paper in their research on dense retrieval, as it is a top-performing method in the field."}, {"Category": "Methodological Basis", "Citation": "(Gao and Callan, 2021)", "Explanation": "The cited work, CoCondenser, is a state-of-the-art method in dense retrieval that the citing paper adopts in their research, excluding those methods that use heavyweight cross-encoder or additional data samples."}, {"Category": "Methodological Basis", "Citation": "(Xiong et al., 2020)", "Explanation": "The cited work by Xiong et al. 
(2020) is used as a reference for dynamically retrieving candidates/negatives in the training process, which the citing paper adopts to improve the rNN-based similarity computation."}, {"Category": "Methodological Basis", "Citation": "(Zhan et al., 2021)", "Explanation": "The cited work by Zhan et al. (2021) is used as a reference for dynamically retrieving candidates/negatives in the training process, which the citing paper adopts to improve the rNN-based similarity computation."}, {"Category": "Methodological Basis", "Citation": "(Zhong et al., 2017)", "Explanation": "The cited work by Zhong et al. (2017) introduces the concept of an extended set of highly related documents to the query, which the citing paper adopts in their research to construct a more comprehensive set of similar documents."}, {"Category": "Methodological Basis", "Citation": "(5)", "Explanation": "The cited work introduces the concept of similarity s and the relationship between it and distance d, which the citing paper adopts in their research to examine the set of k-nearest neighbors in a retrieval method."}, {"Category": "Methodological Basis", "Citation": "(2017)", "Explanation": "The cited work defines a new distance measure that the citing paper uses in their research to quantify similarity between elements in a set of neighbors."}, {"Category": "Methodological Basis", "Citation": "(2017)", "Explanation": "The cited work by Zhong et al. (2017) provides the definition of reciprocal connectivity vectors and the function f w (x) used in the citing paper to calculate the vectors. The citing paper adopts and adapts the method from the cited work to implement the calculation of the vectors in their research."}, {"Category": "Supporting Evidence", "Citation": "(Zhong et al., 2017)", "Explanation": "The cited work by Zhong et al. (2017) provides a method for reranking that the citing paper uses in their research, contributing to the overall process of label smoothing."}, {"Category": "Data Source", "Citation": "(Bajaj et al., 2018)", "Explanation": "The MS MARCO dataset is the primary data source for training and evaluating the models in the citing paper, providing a large collection of passages and queries for the research."}, {"Category": "Data Source", "Citation": "(Xiong et al., 2020)", "Explanation": "The cited work is used as a data source for training and evaluation experiments in the citing paper."}, {"Category": "Data Source", "Citation": "(Zhan et al., 2021)", "Explanation": "The cited work is used as a data source for training and evaluation experiments in the citing paper."}, {"Category": "Data Source", "Citation": "(Hofst\u00e4tter et al., 2021)", "Explanation": "The cited work is used as a data source for training and evaluation experiments in the citing paper."}, {"Category": "Data Source", "Citation": "(Rekabsaz et al., 2021b)", "Explanation": "The cited work is used as a data source for training and evaluation experiments in the citing paper."}, {"Category": "Data Source", "Citation": "(TREC evaluation software)", "Explanation": "The official TREC evaluation software is used in the citing paper to calculate metrics for training and evaluation experiments."}, {"Category": "Methodological Basis", "Citation": "(Zerveas et al., 2022)", "Explanation": "The cited work, CODER, serves as a methodological basis for the citing paper in the context of standard contrastive learning, where the authors discuss the use of a KL-divergence loss in the process of training."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b14", "b1", "b13", "b3", "b4", "b10", "b17", "b21", "b5", "b0", "b7", "b2", "b4" ], "table_ref": [ "tab_1" ], "text": "Entity Linking (EL) aims to map entity mentions in free texts to their corresponding entities in a given Knowledge Base (KB). Entity linking acts as a bridge between unstructured text and structured knowledge, and benefits various downstream tasks like question answering (Luo et al., 2018) and knowledge extraction (Chen et al., 2021).\nHowever, not all entity mentions correspond to a specific entity in the KB that suits the mention context (Ling et al., 2015). Take Table 1 as an example, Peter Blackburn is actually a journalist, and the householder is a common phrase rather than an entity. These two mentions do not refer to any entity in the given KB. The identification of these mentions is referred to as NIL prediction. Therefore, to tackle NIL prediction, the entity linking model needs to select mentions whose references are absent in KB, and link them to a special placeholder NIL. Dredze et al. (2010) states NIL prediction as one of the key issues in entity linking, which may lead to a decrease in the recall of entity linking systems. Meanwhile, the incorrectly linked entities may provide false information to downstream tasks.\nThere have been some earlier representation learning based researches that take NIL prediction into consideration (Eshel et al., 2017;Lazic et al., 2015;Peters et al., 2019). They identify NIL mentions by setting a vector similarity threshold or viewing NIL as a special entity. Recently, pretrained language model (PLM) based models (Wu et al., 2020;Fitzgerald et al., 2021;Cao et al., 2021) have achieved great success for their great transferability and expandability. However, these models generally assume that there always exists a correct entity for each mention in the knowledge base, which leaves the NIL prediction problem without adequate attention.\nPrevious entity linking datasets have paid insufficient attention to the NIL prediction problem. For example, some of the previous datasets like AIDA (Hoffart et al., 2011) view it as an auxiliary task, while others like MSNBC (Cucerzan, 2007) and WNED-WIKI (Eshel et al., 2017) does not require NIL prediction at all. There does not yet exist a strong benchmark for the ability on NIL prediction.\nIn this paper, we propose an entity linking dataset NEL that focuses on the NIL prediction problem. About 30% of the mentions in NEL do not have their corresponding entity in the candidate entity set, which requires models to identify these mentions rather than linking them to the wrong candidates. In NEL construction, we take ambiguous entities as seeds, and build the dataset by mining mention contexts related to seed entities on the Wikipedia corpus. Then, human annotators are Peter Grant Blackburn (born 25 March 1968) is an Australian badminton player who affiliated with the Ballarat Badminton Association." }, { "figure_ref": [], "heading": "Non-Entity Phrase", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Mention Context", "publication_ref": [], "table_ref": [], "text": "Most Hindus accept that there is a duty to have a family during the householder stage of life, as debt to family lineage called Pitra Rin (Father's Debt) and so are unlikely to avoid having children altogether . . ." 
}, { "figure_ref": [], "heading": "The Householder (Film)", "publication_ref": [], "table_ref": [], "text": "The Householder (Hindi title: Gharbar) is a 1963 film by Merchant Ivory Productions, with a screenplay by Ruth Prawer Jhabvala . . . The Householder (Novel)\nThe Householder is a 1960 English-language novel by Ruth Prawer Jhabvala . . . asked to identify whether the mentions correspond to a candidate entity or not, and we further perform entity masking to ensure a fair proportion of NIL data of about 30%.\nIn NIL prediction, we propose to use the widely used bi-encoder and cross-encoder structures as the model backbone, and further integrate type information by adding an entity typing subtask. We combine semantic and type similarity as the final similarity score, and identify NIL mentions by setting a similarity threshold.\nWe conduct a series of experiments on both NEL and previous entity linking datasets. Experimental results show that the models suffer from an accuracy drop when taking NIL prediction into consideration, indicating that the accuracy may be inflated without the NIL prediction task, and NEL could better diagnose the performance of different models. We also conducted ablation studies on how type information and NIL examples affect the models. We discover that the entity typing subtask yields better embedding even when type similarity is not used, and both types of NIL examples in training data would boost the ability of NIL prediction.\nOur contributions can be concluded as:\n• We categorize the NIL prediction problem into two patterns: Missing Entity and Non-Entity Phrase, where the latter one has not received sufficient attention in previous works.\n• We propose an entity linking dataset NEL focusing on NIL prediction, which covers two patterns of NIL data and could act as a benchmark for diagnosing the ability of NIL prediction.\n• We conducted a series of experiments, whose results demonstrate that the accuracy of models may be inflated when not taking NIL prediction into consideration. Meanwhile, both patterns of NIL data in training are essential for triggering the ability of NIL prediction." }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [], "table_ref": [ "tab_1", "tab_1", "tab_1" ], "text": "Entity mentions M = {m i } refer to text spans potentially corresponding to entities in a document D = (w 1 , w 2 , ..., w n ), where w i is either a plain token or a mention. Each mention m i may correspond to an entity e i ∈ E in the entity set E of knowledge base KB.\nDefinition 1. Entity linking aims to find an optimal mapping Γ : M ⇒ E, which maps entity mentions M = {m i } to their corresponding entities E = {e i }, where e i ∈ KB.\nThe NIL prediction problem is to determine whether an entity mention m is absent from the knowledge base KB. When there does not exist a proper entity e ∈ KB for the given mention m, As demonstrated in Table 1, there exist two situations in real-world entity linking where NIL prediction should be taken into consideration:\n• Missing Entity: The mention m refers to certain entity e that has not been yet included in KB, i.e. m ⇒ e / ∈ KB. For example, in the upper half of Table 1, the mention Peter Blackburn refers to a certain journalist, while entries in English Wikipedia include only people with other occupations, leading to the mention linking to NIL.\n• Non-Entity Phrase: The mention m refers to a common phrase that is not usually viewed as an entity, i.e. m ⇏ e. 
For example, the mention the householder in the lower half of Table 1 refers to a concept rather than a film or novel." }, { "figure_ref": [], "heading": "Dataset Construction", "publication_ref": [], "table_ref": [], "text": "There does not yet exist a strong benchmark on NIL prediction. We manually annotated 300 examples with their mentions linking to NIL from the widely used entity linking dataset AIDA 1 , and discover that about 10% of these mentions should actually link to an entity. For example, the mention \"EU\" in \"EU rejects German\" should be linked to the European Union rather than NIL (See Appendix D for details). Meanwhile, NIL mentions in AIDA fall mostly in the Missing Entity category. The incorrect and imbalanced data for NIL prediction indicates that the importance of NIL prediction is currently underestimated.\nIn this section, we propose an entity linking dataset named NEL, which focuses on the NIL prediction problem. The construction process is demonstrated in Figure 1.\nUnlike normal entity linking data, there does not exist annotated references for mentions linking to NIL, and the portion of NIL data in the text corpus is unknown. Hyperlinks in Wikipedia can be viewed as mentions linking to non-NIL entities, from which we can find the aliases of entities. We assume that when an alias does not appear as a hyperlink in a certain context, it may be identified as a mention linking to NIL. In this way, we collect such contexts as the raw data. The raw data is then annotated by humans to ensure correctness, and we further mask out some answer entities in the candidate set to control the percentage of NIL in answers." }, { "figure_ref": [], "heading": "Data Collection", "publication_ref": [], "table_ref": [], "text": "Levin (1977) states that the title of creative works could be a place, a personal name, or a certain abstract concept like the choric embodiment of some collectivity (The Clouds, The Birds) or stock types (The Alchemist, Le Misanthrope), which would naturally lead to the two situations where a mention links to NIL. The absence of the referenced entity from the KB would lead to Missing Entity, while an abstract concept not viewed as an entity would lead to Non-Entity Phrase.\nTo discover NIL mentions of both types, we start by taking entities that share an alias with other entities as seeds. We assume that a mention referring to multiple entities has a higher probability of linking to a Missing Entity outside the KB, and the complex meaning of the mention will lead to Non-Entity Phrase. Thus, the aliases of ambiguous seed entities would be good starting points for mining NIL mentions." }, { "figure_ref": [], "heading": "Entity Selection", "publication_ref": [], "table_ref": [], "text": "We further filter ambiguous entities to remove low-quality seeds. First, we remove noise instances like template pages, and entities with less than 5 hyperlinks are also removed. Meanwhile, we discarded entities with a probability greater than 50% of being the linking result, as these entities can generally be considered to be unambiguous and lack difficulty. Finally, 1000 entities are sampled as the seed entity set E s .\nWe use a typing system based on Wikidata to identify the type of selected entities. We view the instance of relation as the type indicator, and utilize the subclass of relation to build a tree-form type system. The detailed type system can be found in Appendix C." 
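The seed-entity filtering described in the Entity Selection step above can be sketched as follows. This is a minimal illustration only: the field names (is_template, num_hyperlinks, max_link_probability) are assumptions about how such per-entity statistics might be stored, not the authors' actual data schema.

```python
import random

def select_seed_entities(entities, n_seeds=1000):
    """Filter ambiguous entities into a seed set, following the criteria above.
    `entities` is assumed to be a list of dicts with (hypothetical) fields:
    'is_template', 'num_hyperlinks', and 'max_link_probability' (the highest
    probability of the entity being the linking result for any of its aliases)."""
    kept = []
    for e in entities:
        if e["is_template"]:                 # drop noise such as template pages
            continue
        if e["num_hyperlinks"] < 5:          # too rarely linked to serve as a seed
            continue
        if e["max_link_probability"] > 0.5:  # effectively unambiguous, lacks difficulty
            continue
        kept.append(e)
    return random.sample(kept, min(n_seeds, len(kept)))
```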
}, { "figure_ref": [], "heading": "Mention Discovery", "publication_ref": [], "table_ref": [], "text": "We build an alias table from the 2021 Wikipedia dump by extracting alias-entity pairs (m, e) from internal hyperlinks. All alias m related to entities in the seed entity set E s are gathered as the mention set M . For each mention m ∈ M , we look for its occurrence throughout the Wikipedia corpus (whether it appears as a hyperlink or not) to obtain the entry tuple (C l , m, C r , E m ), where C l and C r represent contexts left and right to the entity mention m, and E m represents the candidate entities set of m. For each mention m, we sampled 5 entries where m appears as a hyperlink and 5 entries where m appears in plain text to balance the number of positive and negative examples, and a total of 10,000 entries are collected." }, { "figure_ref": [], "heading": "Human Annotation and Post-processing", "publication_ref": [], "table_ref": [], "text": "We perform annotation on the above entries with 3 annotators. The annotators are provided with the mention context (C l , m, C r ) and candidate entities E m . Each candidate entity e consists of its title, textual description, and Wikipedia URL. The annotators are asked to choose the answer entity a ∈ E m corresponding to the mention m, or a = N IL if none of the candidates are correct.\nAn expert will further investigate entries in which annotators fail to reach a consensus. The expert is a senior annotator with essential knowledge of entity linking, and will confirm the final annotation result after carefully reading through the context and candidate entities. We use the annotation result as the final answer a if there is an agreement between 3 annotators, and the entity chosen by the expert otherwise. The annotated tuple (C l , m, C r , E m , a) acts as the final entry of our dataset.\nTo further simulate the situation where new emerging entities do not appear in knowledge bases, we perform entity masking on positive entries. We randomly sample 10% entries where a ̸ = N IL, and mask the correct entity in the candidate set E m . In this case, as the correct answer is removed from the candidate list, we have a = N IL, i.e. the mention m corresponds to the empty entity NIL.\nA person is invited to bless the child.\nBless the Child is a 2000 supernatural horror film.\nBless the Child (Film)\n\"Bless the Child\" is the seventh single by Nightwish." }, { "figure_ref": [], "heading": "Bless the Child (Song)", "publication_ref": [], "table_ref": [], "text": "Hotel Blessing\nThe Blessing (song)\nBlessing, Texas" }, { "figure_ref": [], "heading": "Semantic Space", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Type Space", "publication_ref": [], "table_ref": [], "text": "Activity Song Film" }, { "figure_ref": [], "heading": "Candidate Entities", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Bi-Encoder", "publication_ref": [], "table_ref": [], "text": "A person is invited to bless the child.\nBless the Child is a 2000 supernatural horror film.\nA person is invited to bless the child.\n\"Bless the Child\" is the seventh single by Nightwish." }, { "figure_ref": [], "heading": "Cross-Encoder", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Sem Sem+ Type", "publication_ref": [], "table_ref": [], "text": "Sim." }, { "figure_ref": [], "heading": "Sem Sem+ Type", "publication_ref": [], "table_ref": [], "text": "Sim." 
}, { "figure_ref": [], "heading": "Threshold", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Threshold", "publication_ref": [], "table_ref": [], "text": "Figure 2: The overall structure of PLM-based retrieval models. Candidates which are confusing in the semantic space may be more distinguishable in the type space. Mentions linking to NIL frequently differ from their candidates in their types, so we combine semantic similarity with type similarity for NIL prediction. " }, { "figure_ref": [], "heading": "Dataset Statistics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Entity Linking with NIL prediction", "publication_ref": [], "table_ref": [], "text": "A mention links to NIL when all of its candidate entities fail to match its context. A common paradigm of NIL prediction is to compute the similarity scores between the mention context and candidates, and judge its linking result on the base of similarities." }, { "figure_ref": [], "heading": "Scoring Similarities", "publication_ref": [], "table_ref": [], "text": "Bi-encoder and cross-encoder are widely adopted scorer structures, as they are well compatible with pretrained language models. Bi-encoder encodes mention contexts and entities into the same dense vector space, while cross-encoder views similarity scoring as a sequence classification task:\ns bi (c, e) = σ(f (c) • g(e)) (1) s cross (c, e) = σ(Wh([c, e]) + b)(2)\nwhere f, g, h are PLM-based encoders, W and b are trainable variables, and σ refers to the sigmoid function.\nThe bi-encoder structure allows precomputing entity embeddings in knowledge bases, which enables efficient retrieval in real-world applications. Compared with bi-encoder, cross-encoder better captures the relation between context and entities with the cross attention mechanism, thus demonstrating higher accuracy." }, { "figure_ref": [], "heading": "Integrating Entity Types", "publication_ref": [ "b6", "b16", "b18", "b12" ], "table_ref": [], "text": "Previous entity linking models (Gupta et al., 2017;Onoe and Durrett, 2020;Raiman and Raiman, 2018) have proved that entity types do help models better disambiguate between candidate entities.\nThe type information can be integrated into biencoders and cross-encoders by adding a typing layer. In the bi-encoder structure, the mention types \nt c = σ(W c f (c) + b c ) (3) t e = σ(W e g(e) + b e )(4)\nwhile they are simultaneously predicted in the cross-encoder structure: To tackle the label imbalance between types, we use the focal loss (Lin et al., 2017) on the typing task:\n[t c , t e ] = σ(W f ([c, e]) + b)(5\nL t = - nt i=1 (y i (1-t i ) γ log t i +(1-y i )t γ i log(1-t i ))\n(6) where n t is the total number of types in the type system, γ is a hyperparameter, y i is the golden label of the i-th type and t i is the predicted label of the i-th type. 
In the bi-encoder, L_t is the average of the losses on t_c and t_e, while in the cross-encoder L_t is directly calculated from [t_c, t_e].\nWe train the semantic encoder with a binary classification loss L_s, and combine L_s with L_t as the final loss L:\nL = L_s + L_t (7)" }, { "figure_ref": [], "heading": "Identifying Mentions Linking to NIL", "publication_ref": [], "table_ref": [], "text": "The type similarity is computed with cosine similarity, and the final score is a weighted sum between type similarity and semantic similarity:\ns_t(c, e) = cos(t_c, t_e) (8)\ns(c, e) = λ s_s(c, e) + (1 - λ) s_t(c, e) (9)\nwhere λ is a hyperparameter.\nFor each entry (C_l, m, C_r, E_m, a), we concatenate (C_l, m, C_r) to form the context c. In the training step, for each candidate entity e ∈ E_m, we view (c, e) as a positive example if e = a, and as a negative example if a = NIL or e ≠ a.\nDuring evaluation, the similarity score s(c, e) is computed between the context c and each candidate entity e. If there exist entities with a score equal to or higher than the NIL threshold ϵ = 0.5, we choose the entity with the highest similarity score as the answer; if all entities fail to reach the threshold, then the mention m links to NIL. " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b21", "b0" ], "table_ref": [], "text": "We conduct a series of experiments on two types of datasets, which test different abilities of entity linking models: (1) NEL, which tests the ability of NIL prediction; (2) previous EL datasets, which test the ability of entity disambiguation. We choose the following models for comparison: BLINK (Wu et al., 2020) that uses the bi-encoder and cross-encoder alone to score candidate entities, CLINK that integrates type information with BLINK, and GENRE (Cao et al., 2021) that generates the linking result with a sequence-to-sequence model." }, { "figure_ref": [], "heading": "Main Experiment Results", "publication_ref": [ "b7", "b2", "b4" ], "table_ref": [ "tab_5" ], "text": "We trained and tested the models on NEL, with BERT-large as the encoder base of BLINK and CLINK, and BART-large as the backbone of GENRE, to observe their ability in NIL prediction. We also experimented on previous entity linking datasets to observe the disambiguation ability of different models. The models are trained on the AIDA-YAGO2-train (Hoffart et al., 2011) dataset, and tested on AIDA-YAGO2-testb, MSNBC (Cucerzan, 2007) and WNED-WIKI (Eshel et al., 2017). 187 distinct types are used in experiments on NEL, and considering that the entity type distribution may be different across datasets, we use a type system with only 14 top-level entity types on previous datasets to make CLINK more transferable (see Appendix C for details). We retain the same textual representation format as BLINK (see Appendix A), while using 128 as the length of context sequences and entity descriptions. All models are implemented with PyTorch and optimized with the Adam (Kingma and Ba, 2014) optimizer with a learning rate of 1e-5.\nTable 3 shows the evaluation results, from which some insights could be discovered:\nType Information Matters. CLINK with the cross-encoder structure achieves the highest accuracy on almost all datasets, and is still comparable with GENRE on AIDA, from which we may assert that taking type information into consideration is helpful even without NIL prediction. 
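A minimal sketch of the inference-time decision rule from Eqs. (8)–(10) above: semantic and type scores are mixed with weight λ, and the mention links to NIL when no candidate reaches the threshold ϵ = 0.5. The function and variable names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def link_or_nil(sem_scores, ctx_type, cand_types, lam=0.5, eps=0.5):
    """sem_scores: tensor (n_candidates,) of semantic scores s_s(c, e)
    ctx_type: tensor (n_types,) of predicted mention-context type probabilities t_c
    cand_types: tensor (n_candidates, n_types) of candidate type probabilities t_e
    Returns the index of the chosen candidate, or None when the mention links to NIL."""
    type_scores = F.cosine_similarity(cand_types, ctx_type.unsqueeze(0), dim=-1)  # Eq. (8)
    scores = lam * sem_scores + (1.0 - lam) * type_scores                          # Eq. (9)
    best = int(torch.argmax(scores))
    return best if scores[best] >= eps else None                                   # Eq. (10)
```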
Meanwhile, on both structures, the overall accuracy of CLINK outperforms that of BLINK on all datasets, proving that the entity typing task helps both the bi-encoder and the cross-encoder distinguish correct entities.\nEncoder Structure. The cross-encoder structure generally performs better than the bi-encoder, but we observe that sometimes bi-encoders show decent ability in detecting NIL mentions, especially when type information is not utilized. BLINK-bi achieves the highest NIL accuracy score of 69.59 on AIDA with NIL, and has a score of 88.59 on NEL, which is comparable with the best-performing CLINK-cross. This phenomenon indicates that cross-encoders may be more prone to overfitting, while entity types would alleviate this tendency.\nNIL Entries. On the AIDA dataset, we observe that the models generally suffer from a drop in accuracy when taking NIL entries into consideration, and the drop is more obvious in bi-encoders. This may indicate that the performance of models is inflated without NIL prediction, and NIL entries may confuse the models in practical application." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Influence of the Entity Typing Task", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "We conducted experiments to observe how the entity typing task influences the model. We trained the model with both L_t and L_s as the loss, while setting λ = 1 during evaluation to ignore the influence of type similarity scores. Results shown in Table 4 reflect two observations:\nFirst, candidate type predictions benefit from mention context. We observe that when changing the structure from bi-encoder to cross-encoder, the type prediction accuracy on candidates rises by 5%, while the accuracy on contexts remains unchanged. This is likely because the context helps narrow down the type range, while the context type prediction generally remains unaffected by entity descriptions.\nSecond, unifying the entity typing task in the training process leads to higher overall accuracy, even when the type similarity score is not taken into consideration in the evaluation, which can be demonstrated by the improved score of OAC w/ Typing compared to OAC w/o Typing on both models. This may indicate that predicting entity types would help the model learn semantic embeddings with higher quality." }, { "figure_ref": [ "fig_2" ], "heading": "Influence of NIL Training Data", "publication_ref": [], "table_ref": [], "text": "Compared with previous datasets like AIDA, NEL contains more NIL mentions of the Non-Entity Phrase type. We trained the models with different numbers of Non-Entity Phrase examples to observe the influence of NIL training data.\nAs demonstrated by Figure 3, all models suffer from a great decline in NIL accuracy when no NIL examples are used during the training stage, and the bi-encoder is more prone to the accuracy drop. However, by using only 25% of Non-Entity Phrase examples in training, the NIL accuracy would recover to a decent level. Further adding NIL examples has little impact on cross-encoder models, but bi-encoder models still constantly benefit from additional data.\nBesides, ignoring NIL data with the Non-Entity Phrase type will also harm the NIL accuracy and overall accuracy. Both types of NIL training data are necessary to reach the best performance.\nWe discover that entity linking models may be unaware of the NIL mentions when there is insufficient training data. 
A small amount of training data is enough for cross-encoder models to reach a reasonable accuracy, while bi-encoder models constantly benefit from additional training data." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b21", "b5", "b7", "b19", "b10", "b20", "b9", "b21", "b6", "b18", "b16" ], "table_ref": [], "text": "PLM-based Models in Entity Linking. Using pretrained language models (PLM) to capture semantic information is widely adopted in recent entity linking models. BLINK (Wu et al., 2020) marks the mention in context with pre-defined special tokens, and takes BERT as the encoder base. Two structures are adopted by BLINK to handle different situations: bi-encoder for fast dense retrieval, and cross-encoder for further disambiguation. MOLEMAN (Fitzgerald et al., 2021) searches for similar mention contexts instead of entities, which better captures the diverse aspects an entity reflects in various contexts. GENRE (Cao et al., 2021) finetunes the sequence-to-sequence model BART, directly generating the unique entity name according to the mention context.\nResearch on NIL Prediction. The NIL prediction problem has been long viewed as an auxiliary task of entity linking. Some entity linking datasets (AIDA (Hoffart et al., 2011), TAC-KBP series (Mc-Namee and Dang, 2009)) take the NIL prediction problem into consideration, while some (ACE and MSNBC) (Ratinov et al., 2011) omit mentions linking to NIL. Some research has already been conducted on the NIL prediction problem. Lazic et al. (2015) and Peters et al. ( 2019) set a score threshold to filter reasonable candidates, and mentions with no candidate score above the threshold are linked to NIL. Sil and Florian (2016); Kolitsas et al. (2018) views the NIL placeholder as a special entity, and selecting it as the best match indicates that the mention refers to no entities in the given KB. However, recent entity linking models, which use pretrained language models (PLM) as encoder bases, generally take the in-KB setting, which assumes that each mention has a valid golden entity in the KB (Wu et al., 2020).\nEntity Type Assisted Entity Linking. Entity types can effectively assist entity linking and have been studied in various works. Gupta et al. (2017) jointly encodes mention context, entity description, and Freebase types with bidirectional LSTM to maximize the cosine similarity. DeepType (Raiman and Raiman, 2018) predicts the type probability of each token and gathers relevant tokens to predict the entity types, which would help eliminate candidates with incompatible types. Onoe and Durrett (2020) views entity types as a training objective rather than a feature, predicting fine-grained Wikipedia category tags to select the most relevant entity." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose an entity linking dataset NEL that focuses on the NIL prediction problem. We observe that mentions linking to NIL can be categorized into two patterns: Missing Entity and Non-Entity Phrase, but the latter one has not been paid sufficient attention. We propose an entity linking dataset NEL that focuses on NIL prediction. The dataset is built upon the Wikipedia corpus by choosing ambiguous entities as seeds and collecting relevant mention contexts. 
NEL is humanannotated to ensure correctness, and entity masking is further performed to control the percentage of NIL.\nWe conducted a series of experiments to examine the performance of PLM-based models on different datasets. Experimental results indicate that the accuracy without considering NIL prediction would be inflated. Meanwhile, sufficient data of both NIL types during training is essential to trigger the ability of NIL prediction. In the future, we may further try to integrate entity types into the pretraining process and explore type transfer between datasets." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Our work still exist some limitations. First, we choose an entity typing system on the base of Wikidata tags, however, the granularity of the typing system remains to be discussed. A system with too many types would introduce noise to long-tail types, while insufficient types would weaken the disambiguation ability of type similarity. Thus, building a type system with adequate granularity remains a challenge.\nSecond, we combine the entity typing task with PLM-based semantic encoders, which require a fixed type system and further finetuning. Integrating the entity typing task into the pretraining process may enhance the transferability of the model and remove the dependency on a fixed type system. Potential Risks. Our proposed dataset NEL centers on ambiguous entities, whose type distribution may not remain the same with other datasets. A potential risk is that the model trained on NEL may experience under-exposure of other entity types, which would damage their transferability and lead to undesired outputs on other datasets." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "In this section, we will discuss the ethical considerations of our work.\nLicenses and terms. The Wikipedia corpus and Wikidata types are obtained via the Wikimedia dump2 , under the CC BY-SA 3.0 license3 . AIDA, MSNBC, and WNED-WIKI are shared under the CC BY-SA 3.0 license. These datasets have been widely used in entity linking research, and we believe that they have been anonymized and desensitized.\nHuman Annotation. We recruited 3 human annotators without a background of expertise in annotation, and 1 expert annotator with adequate knowledge in entity linking for checking. These annotators are employed by commercial data annotation companies. We have paid these recruited annotators with adequate rewards under the agreed working time and price. The annotators are well informed about how these annotated data will be used and released, which has been recorded in the contract.\nIntended use. NEL is an entity linking dataset focusing on the NIL prediction problem. Researchers are intended to use NEL for examining the ability of NIL prediction of newly created entity linking models. AIDA, MSNBC, and WNED-WIKI are intended for entity linking research, which is compatible with our work." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank the anonymous reviewers for their insightful comments. This paper is supported by a grant from the Institute for Guo Qiang, Tsinghua University (2019GQB0003)." }, { "figure_ref": [], "heading": "A Details about the NEL Dataset Construction", "publication_ref": [], "table_ref": [], "text": "A.1 Corpora\nThe NEL dataset is built from the 2021-07 English Wikipedia dump under the CC BY-SA 3.0 license. 
We take the hyperlinks in the raw xml dump as entity mentions, and retain at most 128 tokens around the mentions as their context. We take 64 tokens left to the mention and 64 tokens right to the mention by default, and more tokens will be included on one side if the other side does not contain enough tokens. We then discard tokens from both ends to ensure that the context plus the mention do not exceed the 128-token limit. Media files (image, audio) and Lua commands are discarded during preprocessing." }, { "figure_ref": [], "heading": "A.2 Data Selection", "publication_ref": [], "table_ref": [], "text": "Entries with the following features are viewed as noise and discarded:\n• The mention context contains the token '*', which is usually a list or formula;\n• The mention has only 1 candidate entity, which does not pose much challenge;\n• The mention has more than 20 candidate entities, which is far too challenging;\n• The mention appears as a sub-span of a word;\n• The mention has a probability of over 50% of linking to a certain candidate entity, in which case we view the mention as unambiguous." }, { "figure_ref": [], "heading": "A.3 Textual Representation Format", "publication_ref": [], "table_ref": [], "text": "Textual representation format for bi-encoder:\nTextual representation format for cross-encoder:\nwhere C represents the mention context and E represents the textual description of candidate entities.\n[m_start], [m_end], [m_title] are special tokens." }, { "figure_ref": [], "heading": "B Experiment Details", "publication_ref": [ "b21" ], "table_ref": [], "text": "We use the BERT-large-uncased model as the encoder base, with parameters initialized from the Python transformers library. The models are trained on a single NVIDIA GeForce RTX 3090 GPU. We obtain the AIDA, MSNBC and WNED-WIKI datasets from the BLINK (Wu et al., 2020) repository https://github.com/facebookresearch/BLINK. We trained our model on the AIDA-train split, and evaluated on all three datasets.\nThe detailed hyperparameter configurations are shown in Table 5." }, { "figure_ref": [], "heading": "C Typing System", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.1 Typing System on NEL", "publication_ref": [], "table_ref": [], "text": "We use a tree-like typing system with 187 distinct types on NEL. The typing system is built on the " }, { "figure_ref": [], "heading": "E Case Study", "publication_ref": [], "table_ref": [], "text": "Table 8 shows some examples predicted by CLINK and BLINK without type information, which reflect how entity types influence the linking result.\nIn the first example, models with the bi-encoder structure incorrectly take the "Gates of Heaven" entry (which is in fact a documentary film) as the linking result, while CLINK-cross notices that the context word "album" may indicate the entity type, and correctly links the mention to the album. In the second example, the "Home Before Dark" mention actually refers to a 1997 movie (https://www.imdb.com/title/tt0116547/), like the other movies in the context; however, the corresponding entry is absent from the English Wikipedia. The CLINK-cross model is able to identify that the mention should be labelled as NIL, whereas the other models mistakenly link it to the "Home Before Dark" entry, which is an album rather than a movie. We observe that the entity types do help measure the similarity between mentions and entities, which enhances the performance of CLINK. " } ]
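As a worked illustration of the context-window procedure in Appendix A.1 above, a small sketch is given below; the exact trimming and tie-breaking behaviour is an assumption, not the released preprocessing code.

```python
def extract_context(tokens, start, end, max_len=128):
    """Return at most `max_len` tokens covering the mention tokens[start:end],
    taking roughly 64 tokens on each side and borrowing from the other side
    when one side is short (approximates the trimming described in A.1)."""
    mention_len = end - start
    budget = max(max_len - mention_len, 0)          # tokens left for the two sides
    left_take = min(start, budget // 2)
    right_take = min(len(tokens) - end, budget - left_take)
    left_take = min(start, budget - right_take)     # give unused right-side budget back to the left
    return tokens[start - left_take:end + right_take]
```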
2023-05-25
[ { "authors": "Nicola De Cao; Gautier Izacard; Sebastian Riedel; Fabio Petroni", "journal": "", "ref_id": "b0", "title": "Autoregressive entity retrieval", "year": "2021" }, { "authors": "Lihu Chen; Gaël Varoquaux; Fabian M Suchanek", "journal": "", "ref_id": "b1", "title": "A lightweight neural model for biomedical entity linking", "year": "2021" }, { "authors": "Silviu Cucerzan", "journal": "", "ref_id": "b2", "title": "Large-scale named entity disambiguation based on wikipedia data", "year": "2007" }, { "authors": "Mark Dredze; Paul Mcnamee; Delip Rao; Adam Gerber; Tim Finin", "journal": "", "ref_id": "b3", "title": "Entity disambiguation for knowledge base population", "year": "2010" }, { "authors": "Yotam Eshel; Noam Cohen; Kira Radinsky; Shaul Markovitch; Ikuya Yamada; Omer Levy", "journal": "", "ref_id": "b4", "title": "Named entity disambiguation for noisy text", "year": "2017" }, { "authors": "Nicholas Fitzgerald; Dan Bikel; Jan Botha; Dan Gillick; Tom Kwiatkowski; Andrew Mccallum", "journal": "", "ref_id": "b5", "title": "Moleman: Mention-only linking of entities with a mention annotation network", "year": "2021" }, { "authors": "Nitish Gupta; Sameer Singh; Dan Roth", "journal": "", "ref_id": "b6", "title": "Entity linking via joint encoding of types, descriptions, and context", "year": "2017" }, { "authors": "Johannes Hoffart; Mohamed Amir Yosef; Ilaria Bordino; Hagen Fürstenau; Manfred Pinkal; Marc Spaniol; Bilyana Taneva; Stefan Thater; Gerhard Weikum", "journal": "", "ref_id": "b7", "title": "Robust disambiguation of named entities in text", "year": "2011" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b8", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Nikolaos Kolitsas; Octavian-Eugen; Thomas Ganea; Hofmann", "journal": "", "ref_id": "b9", "title": "End-to-end neural entity linking", "year": "2018" }, { "authors": "Nevena Lazic; Amarnag Subramanya; Michael Ringgaard; Fernando Pereira", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b10", "title": "Plato: A selective context model for entity resolution", "year": "2015" }, { "authors": "Harry Levin", "journal": "The Modern language review", "ref_id": "b11", "title": "The title as a literary genre", "year": "1977" }, { "authors": "Tsung-Yi Lin; Priya Goyal; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b12", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Xiao Ling; Sameer Singh; Daniel S Weld", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b13", "title": "Design challenges for entity linking", "year": "2015" }, { "authors": "Kangqi Luo; Fengli Lin; Xusheng Luo; Kenny Zhu", "journal": "", "ref_id": "b14", "title": "Knowledge base question answering via encoding of complex query graphs", "year": "2018" }, { "authors": "Paul Mcnamee; Hoa Trang Dang", "journal": "", "ref_id": "b15", "title": "Overview of the tac 2009 knowledge base population track", "year": "2009" }, { "authors": "Yasumasa Onoe; Greg Durrett", "journal": "", "ref_id": "b16", "title": "Fine-grained entity typing for domain independent entity linking", "year": "2020" }, { "authors": "Mark Matthew E Peters; Robert Neumann; Roy Logan; Vidur Schwartz; Sameer Joshi; Noah A Singh; Smith", "journal": "", "ref_id": "b17", "title": "Knowledge enhanced contextual word representations", "year": "2019" }, { "authors": "Jonathan Raiman; Olivier Raiman", "journal": "", 
"ref_id": "b18", "title": "Deeptype: multilingual entity linking by neural type system evolution", "year": "2018" }, { "authors": "Lev Ratinov; Dan Roth; Doug Downey; Mike Anderson", "journal": "", "ref_id": "b19", "title": "Local and global algorithms for disambiguation to wikipedia", "year": "2011" }, { "authors": "Avirup Sil; Radu Florian", "journal": "", "ref_id": "b20", "title": "One for all: Towards language independent named entity linking", "year": "2016" }, { "authors": "Ledell Wu; Fabio Petroni; Martin Josifoski; Sebastian Riedel; Luke Zettlemoyer", "journal": "", "ref_id": "b21", "title": "Scalable zeroshot entity linking with dense entity retrieval", "year": "2020" } ]
[ { "formula_coordinates": [ 5, 343.36, 479.44, 181.78, 27.17 ], "formula_id": "formula_0", "formula_text": "s bi (c, e) = σ(f (c) • g(e)) (1) s cross (c, e) = σ(Wh([c, e]) + b)(2)" }, { "formula_coordinates": [ 6, 132.8, 253.64, 157.06, 27.17 ], "formula_id": "formula_1", "formula_text": "t c = σ(W c f (c) + b c ) (3) t e = σ(W e g(e) + b e )(4)" }, { "formula_coordinates": [ 6, 118.71, 329.71, 166.91, 10.63 ], "formula_id": "formula_2", "formula_text": "[t c , t e ] = σ(W f ([c, e]) + b)(5" }, { "formula_coordinates": [ 6, 70.87, 423.97, 223.59, 33.74 ], "formula_id": "formula_3", "formula_text": "L t = - nt i=1 (y i (1-t i ) γ log t i +(1-y i )t γ i log(1-t i ))" }, { "formula_coordinates": [ 6, 150.79, 593.69, 139.08, 10.63 ], "formula_id": "formula_4", "formula_text": "L = L s + L t (7)" }, { "formula_coordinates": [ 6, 97.93, 683.4, 191.94, 27.17 ], "formula_id": "formula_5", "formula_text": "s t (c, e) = cos(t c , t e ) (8) s(c, e) = λs s (c, e) + (1 -λ)s t (c, e) (9)" } ]
Learn to Not Link: Exploring NIL Prediction in Entity Linking
Entity linking models have achieved significant success via utilizing pretrained language models to capture semantic features. However, the NIL prediction problem, which aims to identify mentions without a corresponding entity in the knowledge base, has received insufficient attention. We categorize mentions linking to NIL into Missing Entity and Non-Entity Phrase, and propose an entity linking dataset NEL that focuses on the NIL prediction problem. NEL takes ambiguous entities as seeds, collects relevant mention context in the Wikipedia corpus, and ensures the presence of mentions linking to NIL by human annotation and entity masking. We conduct a series of experiments with the widely used biencoder and cross-encoder entity linking models, results show that both types of NIL mentions in training data have a significant influence on the accuracy of NIL prediction. Our code and dataset can be accessed at https: //github.com/solitaryzero/NIL_EL.
Fangwei Zhu; Jifan Yu; Hailong Jin; Juanzi Li; Lei Hou; Zhifang Sui
[ { "figure_caption": ") where σ represents the sigmoid function and W c , b c , W e , b e , W, b are trainable parameters.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "a = arg max e s(c, e), ∃e, s(c, e) ≥ ϵ N IL, ∀e, s(c, e) < ϵ (10)", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Ablation study on the influence of NIL training data. The x-axis indicates the percentage of used NIL data or Non-Entity Phrase data in the training set.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Example of mentions that should be linked to NIL and their potential candidate entities. Mentions are labeled as red.", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Statistics of the NEL dataset compared with previous entity linking datasets. *The percentage of two NIL patterns in AIDA is calculated from 300 randomly sampled NIL data, and data with errors do not count as any pattern.", "figure_data": "1", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Compared with previous entity linking datasets, NEL has a higher percentage of NIL data, which could better diagnose the ability of different models in NIL prediction. Meanwhile, mentions linking to NIL in AIDA mostly fall in the Missing Entity situation, while NEL focuses more on the Non-Entity Phrase situation, thus complementing the insufficient attention on Non-Entity Phrase in NIL prediction.", "figure_data": "demonstrates the properties of the NELdataset. NEL includes 6,593 positive examplesand 3,331 negative examples, covering 1,000 men-tions and 3,840 related entities. Each mention hasan average of 3.80 candidate entities. The inter-annotator agreement of NEL is 94.61%, indicatingthat the expert calibrated about 5% of the data. Thefull dataset is split into train/validation/test sets bythe ratio of 80%/10%/10%.NEL contains a fair number of entries, and ishuman-annotated to ensure correctness.", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Experimental results on NEL and previous datasets. Non-NAC, NAC, and OAC represent non-NIL accuracy, NIL accuracy and overall accuracy. *Results of GENRE on AIDA w/o NIL, MSNBC, and WNED-WIKI are taken from the original paper(Cao et al., 2021).", "figure_data": "NEL (our dataset)AIDA w/ NILAIDA w/o NIL MSNBC WNED-WIKINon-NAC NAC OAC Non-NAC NAC OACOACOACOACBLINK-bi72.2788.59 77.7464.5469.59 65.0182.6170.8658.56CLINK-bi79.2479.28 79.2575.9866.36 75.0983.2673.2958.99GENRE*54.0062.84 56.96---88.6088.1071.70BLINK-cross84.0988.89 85.7083.0845.16 79.5887.4982.0269.48CLINK-cross86.9789.19 87.7184.4258.53 82.0388.1689.7072.43t c and entity types t e are predicted separately:", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Experimental results on the influence of the entity typing task on NEL. OAC indicates the overall accuracy of entity linking. The overall accuracy with typing is achieved without using the type similarity score.ModelCtxt Type Acc. Cand Type Acc. OAC w/ Typing OAC w/o Typing", "figure_data": "Bi-encoder83.7593.4678.3577.74Cross-encoder83.9598.0187.4183.67", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Ling et al., 2015)", "Explanation": "The cited work by Ling et al. (2015) highlights the issue of not all entity mentions corresponding to a specific entity in the KB, which the citing paper addresses by identifying and handling these mentions in the context of entity linking."}, {"Category": "Supporting Evidence", "Citation": "(Eshel et al., 2017)", "Explanation": "The cited work by Eshel et al. (2017) provides a dataset (WNED-WIKI) that does not require NIL prediction, which the citing paper uses to highlight the need for a strong benchmark for the ability to predict NIL in entity linking tasks."}, {"Category": "Supporting Evidence", "Citation": "(Cucerzan, 2007)", "Explanation": "The cited work by Cucerzan (2007) also does not require NIL prediction in the entity linking task, which the citing paper uses to further emphasize the need for a strong benchmark in this area."}, {"Category": "Supporting Evidence", "Citation": "(Eshel et al., 2017)", "Explanation": "The cited work by Eshel et al. (2017) also provides a dataset (WNED-WIKI) that does not require NIL prediction, which the citing paper uses to highlight the need for a strong benchmark for the ability to predict NIL in entity linking tasks."}, {"Category": "Methodological Basis", "Citation": "(Gupta et al., 2017)", "Explanation": "The cited work by Gupta et al. provides a method for integrating entity types into biencoders and cross-encoders, which the citing paper adopts in its research to improve entity disambiguation."}, {"Category": "Methodological Basis", "Citation": "(Onoe and Durrett, 2020)", "Explanation": "The cited work by Onoe and Durrett provides a method for integrating entity types into biencoders and cross-encoders, which the citing paper adopts in its research to improve entity disambiguation."}, {"Category": "Methodological Basis", "Citation": "(Raiman and Raiman, 2018)", "Explanation": "The cited work by Raiman and Raiman provides a method for integrating entity types into biencoders and cross-encoders, which the citing paper adopts in its research to improve entity disambiguation."}, {"Category": "Methodological Basis", "Citation": "(Lin et al., 2017)", "Explanation": "The cited work by Lin et al. 
(2017) introduces the concept of focal loss, which the citing paper adopts in the typing task to tackle label imbalance between types."}, {"Category": "Methodological Basis", "Citation": "(Wu et al., 2020)", "Explanation": "The cited work BLINK is used as a baseline for comparison in the experiments conducted in the citing paper to test the ability of NIL prediction in entity linking models."}, {"Category": "Methodological Basis", "Citation": "(Cao et al., 2021)", "Explanation": "The cited work GENRE is also used as a baseline for comparison in the experiments conducted in the citing paper to test the ability of entity disambiguation in entity linking models."}, {"Category": "Data Source", "Citation": "(Hoffart et al., 2011)", "Explanation": "The cited work provides the AIDA-YAGO2-train dataset that the models are trained on in the citing paper."}, {"Category": "Data Source", "Citation": "(Cucerzan, 2007)", "Explanation": "The cited work provides the MSNBC dataset that the models are tested on in the citing paper."}, {"Category": "Data Source", "Citation": "(Eshel et al., 2017)", "Explanation": "The cited work provides the WNED-WIKI dataset that the models are tested on in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Wu et al., 2020)", "Explanation": "BLINK marks mention in context with pre-defined special tokens and uses BERT as the encoder base, providing a methodological basis for the citing paper to build upon in their research on entity linking."}, {"Category": "Extension or Continuation", "Citation": "(Fitzgerald et al., 2021)", "Explanation": "MOLEMAN searches for similar mention contexts, which extends the research on entity linking by focusing on capturing the diverse aspects of entities in various contexts."}, {"Category": "Extension or Continuation", "Citation": "(Cao et al., 2021)", "Explanation": "GENRE finetunes the sequence-to-sequence model BART to directly generate unique entity names from mention contexts, which extends the research on entity linking by exploring a new method for generating entity names."}, {"Category": "Data Source", "Citation": "(Wu et al., 2020)", "Explanation": "BLINK is cited for its use of pre-defined special tokens in marking mention contexts, providing a data source for the citing paper to reference in their research on entity linking."}, {"Category": "Methodological Basis", "Citation": "(Lazic et al., 2015)", "Explanation": "The cited work by Lazic et al. introduces a method of setting a score threshold to filter reasonable candidates for entity linking, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "(Peters et al., 2019)", "Explanation": "The cited work by Peters et al. also uses a method of setting a score threshold to filter reasonable candidates for entity linking, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Sil and Florian, 2016)", "Explanation": "The cited work by Sil and Florian views the NIL placeholder as a special entity, which the citing paper adopts in their research to consider the NIL prediction problem in entity linking."}, {"Category": "Methodological Basis", "Citation": "(Kolitsas et al., 2018)", "Explanation": "The cited work by Kolitsas et al. 
also views the NIL placeholder as a special entity, which the citing paper builds upon in their research to consider the NIL prediction problem in entity linking."}, {"Category": "Methodological Basis", "Citation": "(Wu et al., 2020)", "Explanation": "The cited work by Wu et al. highlights the in-KB setting in entity linking research, which the citing paper acknowledges in their research to consider the assumption of valid golden entities in the KB for entity linking."}, {"Category": "Methodological Basis", "Citation": "(Gupta et al., 2017)", "Explanation": "The cited work by Gupta et al. (2017) provides a method of jointly encoding mention context, entity description, and Freebase types with bidirectional LSTM to maximize cosine similarity, which the citing paper adopts in their research on entity type prediction."}, {"Category": "Methodological Basis", "Citation": "(Raiman and Raiman, 2018)", "Explanation": "The cited work by Raiman and Raiman (2018) introduces the DeepType model for predicting entity types, which the citing paper may have adopted in their research to improve entity type prediction."}, {"Category": "Methodological Basis", "Citation": "(Onoe and Durrett, 2020)", "Explanation": "The cited work by Onoe and Durrett (2020) views entity types as a training objective to predict fine-grained Wikipedia category tags, which the citing paper may have adopted in their research to improve entity type prediction and select the most relevant entity."}, {"Category": "Data Source", "Citation": "(Wu et al., 2020)", "Explanation": "The cited work provides the AIDA, MSNBC, and WNED-WIKI datasets used in the citing paper for training and evaluation purposes."}]
[ { "figure_ref": [], "heading": "", "publication_ref": [ "b1", "b2", "b3", "b4" ], "table_ref": [], "text": "We provide a detailed comparison between POPE and prior works, including (a) OnePose/OnePose++ [2,3] which relies on a large number of posed support views and the corresponding bounding box; (b) Gen6D [4] that replace 2D-3D matching pipeline with a refiner network; and (c) ZeroPose [5] which further utilized depth maps. Different from all these methods, the proposed method POPE eliminates the need for densely annotated support views and enables accurate object retrieval in new viewpoints without relying on any assumptions about the object's category. Here, denotes a close-category detector, means a correlation-based detector, and denotes a open-world detector." }, { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b5", "b9", "b12", "b13", "b15", "b1", "b2", "b3", "b4", "b16", "b18", "b3", "b19", "b21", "b22", "b23", "b23", "b21" ], "table_ref": [], "text": "Robotic systems and augmented reality/virtual reality (AR/VR) applications have become ubiquitous across numerous industries, facilitating the execution of intricate tasks ot offering immersive user experiences. Describing the status of objects, particularly their six degrees-of-freedom (6DoF) poses, is a crucial step towards achieving in-depth scene understanding and delicate interactions. More importantly, given the diverse nature of real-world scenarios, it is essential to have a method that can operate on arbitrary object assets.\nHowever, enabling object 6DoF pose estimation on unseen objects using simple and easy-to-obtained references is challenging. Traditional instance-level [6,7,8,9,10,11,12] or category-level [13,14,15] pose estimators exhibit limitations in handling diverse objects, as they are specifically designed for particular instances or categories. These design principles restrict their generalization capabilities to unseen instances or categories during testing, due to their reliance on CAD models or a welldefined category-level canonical space. Later, tremendous efforts have been devoted to addressing the aforementioned challenges by adopting structure-from-motion (SfM [16]) techniques [2,3], reducing the number of support views [4], or leveraging depth maps and self-supervised trained Vision Transformers [5]. A detailed visual comparison is summarized in Figure 2.\nA straightforward way to accomplish 6DoF object pose estimation with a single support view is to estimate relative poses [17,18] by performing 2D-2D matching between query and reference images. However, dense matching on arbitrary objects is highly unstable, especially for wide-baseline camera views or a clustered background. Besides the difficulties in image matching, another substantial issue in real-world scenes arises from the potential for heavy occlusion of the target object, which makes it hard to be detected. Previous methods propose to adopt off-the-shelf detectors [19] for specific instances/categories, or design a correlation-based object detector on a small scale dataset [4]. Consequently, their robustness when dealing with novel objects in diverse scenes are not guaranteed.\nTo tackle the obstacles of open-world detection on arbitrary target objects and the robust 2D-2D matching, a promising avenue is leveraging the power of the foundation model that is trained on a vastly large-scale dataset. 
Recently, the community has witnessed the emerging properties of these foundation models on few-shot or even zero-shot generalization, crossing from language [20,21] to vision [22,23,24].\nThese advancements have shed light on the under-explored problem of zero-shot object pose estimation -the tantalizing possibility of making no assumption on the object category and using only one reference image. Specifically, the newly arising capability of performing zero-shot segmentation across various image data domains [24] and non-parametric instance-level recognition [22] have shown potential in addressing these challenges.\nIn this paper, we introduce a novel task named Promptable Object Pose Estimation to tackle the challenge of estimating the 6DoF object pose between the given object prompts (a single image for each instance, used as support) and any new captured viewpoint with complex backgrounds (target). Our proposed model, called POPE, consists of four main features in one unified pipeline: (i) Segment Objects generates a set of valid segmentation proposals for any image at a new viewpoint; (ii) Retrieve Objects constructs object-level matching between object prompts and segmented object proposals at two views; (iii) Pose Objects estimate the relative pose by utilizing the matched correspondences between paired object images; and (iv) Online Refinement for Arbitrary View-number triggers a coarseto-fine pose estimation process with efficient 2D-2D global matching and 2D-3D local matching for retrieved objects, on newly target views. We outline our contributions below:\n• We establish a new and challenging task: Promptable Object Pose Estimation, which aims to estimate the pose of an object in wild scenarios, with no assumptions on object category and using only one reference image.\n• To tackle this problem, we propose a 6DoF pose foundation model, POPE, that seamlessly integrates the power of pre-trained foundation models and 3D geometry principles for highquality segmentation, hierarchical object retrieval, and robust image matching, to enable accurate object pose estimation in diverse and uncontrolled environments.\n• For evaluation, we introduce a large-scale test dataset containing diverse data sources. POPE outperforms existing generalizable pose estimators and demonstrates remarkable effectiveness for both promptable pose estimation and downstream 3D-vision tasks." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b24", "b22", "b21", "b23", "b25", "b7", "b12", "b8", "b29", "b4", "b45", "b45", "b47", "b3", "b1", "b2", "b3", "b1", "b2", "b48", "b49", "b51", "b53", "b56", "b57", "b58", "b51" ], "table_ref": [], "text": "Large-scale Pre-trained 2D Foundation Models. Models trained on large-scale datasets, demonstrating the scaling effect with data-parameter balance, are regarded as foundation models. Recently, we witnessed that the foundation models [21] demonstrated strong generalization capability, serving as the base model in a wide range of tasks [21]. For example, CLIP [25] utilizes contrastive learning to construct a joint embedding space of text and image modalities. Similarly, self-supervised models such as DINO [23] and DINOv2 [22] show emerging properties for learning robust visual features. Segment-Anything Model (SAM) [24] demonstrates promptable segmentation ability that supports interactive segmentation with visual promptings such as points and bounding boxes. 
In this paper, we achieve the goal of promptable object pose estimation by harnessing the power of foundation models. We build a system that integrates the essence of SAM and DINO to help POPE handle cluttered real scenes by performing dense segmentation and instance-level matching. Generalizable Object Pose Estimator. Early approaches for estimating the 6DoF pose of objects build instance-level [26,27,28] or category-level [13,15,29,30,31,15,32,33,34,35,5] [46], object mask [47, 46,48] and reference images [4,2,3]. Specifically, Gen6D [4] first detects the target object and initializes a pose estimate from dense reference views. Then, Gen6D refines the pose using feature volume and a 3D neural network. OnePose [2] and OnePose++ [3] construct a sparse point cloud from the RGB sequences of all support viewpoints and then determine the object poses by matching the target view with the sparse point cloud. However, these works still require dense support views, i.e. ≥32 views, where each view needs to be annotated with ground-truth poses. We argue the requirement of dense support views is not practical for real-world applications. To this end, we propose the paradigm of promptable pose estimation, where we only use one support view as the reference. We turn the 6DoF object pose estimation task into relative pose estimation between the retrieved object in the target view and the support view. Thus, we do not have any hypothesis of object category, achieving generalizable object pose estimation. Two-view Object Pose Estimation. The methods of estimating the relative camera pose between two views can be classified into two categories: i) correspondence-based methods, and ii) direct pose regression methods. The correspondence-based methods establish cross-view pixel-level correspondences, and the pose can be recovered by solving the fundamental matrix [49]. The methods establish the correspondences based on hand-crafted features, e.g. SIFT [50], and SURF [51], or using learned features [52,53,54,55,56,57]. Some of the methods also incorporate robust estimation methods [58], or the synergy between shape reconstruction and pose estimation [59] [52,53], we propose a coarse-to-fine paradigm. We first build instance-level correspondence by matching the prompt object (shown in the support image) with segmented object instances in the target image, which identifies the highly possible regions of the prompt object. Then we establish fine-grained dense correspondence between the support image and the identified regions in the target image, which avoids noisy matching with cluttered background regions." }, { "figure_ref": [], "heading": "Propmtable Object Pose Estimation Task", "publication_ref": [ "b1", "b2", "b4", "b3", "b1", "b2", "b4" ], "table_ref": [], "text": "Generalizable 6DoF object pose estimators play a crucial role in robotics and 3D vision tasks by accurately determining the position and orientation of novel objects in 3D space, without the need for fine-tuning.\nHowever, current methods [4, 2, 3, 5, 47] have limitations. They can only handle cases where an off-the-shelf detector is used for closed-category object separation from the background [2,3,5].\nAdditionally, the number of support views required for a robotics system to grasp an object is often uncertain due to occlusions, object appearance variations, and sensor limitations [47]. 
Furthermore, the tedious requirement of pose annotation [4,2,3] or depth maps [5] in the support view makes it challenging to scale up and generalize to various scenes. These limitations hinder the deployment of existing pose estimators in diverse and uncontrolled scenes. To address these challenges, we propose to decompose the 6DoF object pose estimation problem into relative object pose estimation. This approach reduces the reliance on absolute pose annotation and allows for easy extension from two-view to multiple-view scenarios. Moreover, we introduce an Open-world Detector that is category-agnostic and robust to occlusion and pose variation." }, { "figure_ref": [], "heading": "Task Definition", "publication_ref": [], "table_ref": [], "text": "We introduce a novel task of Propmtable Object Pose Estimation (POPE). The primary goal of this task is to estimate the relative poses for all objects in one scene image according to a series of (single-view) reference image prompts. Specifically, our POPE model receives an arbitrary scene image and a sequence of arbitrary reference images as the input. As the output, POPE simultaneously detects all the objects from the scene and annotates their poses according to the references.\nWhy Promptable? The use of object prompts allows for higher interactivity and flexibility, enabling end users to indicate their interest in specific objects through prompts such as object images or even abstract sketches. The promptable setting eliminates the reliance on predefined categories or assumptions regarding the size and shape of objects, resulting in a more generalizable approach that can be applied to any object as long as it is included in the set of object prompts.\nWhy Single-View Prompt? We argue that in most user cases, only single-image references are presented and prefered. On the one hand, consistent images captured for the same object from different angles barely exist in the wild and web collection. On the other hand, estimating 6DoF pose with multiple views requires additional calibration of the reference views resulting in a chicken-egg problem. Enabling high-performance two-view geometry also frees the robotic agent from acquiring a CAD model and benefits 3D reconstruction with fewer views. Despite estimating the poses through only one reference view being a challenging setting, fortunately, it can be endowed with the prevalent foundation models which enable robust feature representation for both detection and matching. In addition, single-reference pose estimation can be served as a starting point for multi-view geometry.\nOur POPE pipeline can be seamlessly integrated into a multi-view progressive reconstruction pipeline, which consistently boosts pose estimation and reconstruction accuracy starting with a set of unposed images from scratch." }, { "figure_ref": [], "heading": "Preliminary of Two-view Pose Estimation.", "publication_ref": [], "table_ref": [], "text": "The task of estimating the relative camera poses from two separate images, without a 3D CAD model, is referred to as two-view object pose estimation. 
Classic geometric vision theory suggests that the camera poses and depth maps can be computed from image matching points alone, without any additional information [63].\nGiven a set of image matching points x i and x ′ i in homogeneous coordinates, along with a known camera intrinsic matrix K, the task of two-view object pose estimation is to find the camera rotation matrix R, translation vector t, and corresponding 3D homogeneous point X i . The goal is to satisfy the equations\nx i = K [I|0] X i and x ′ i = K [R|t] X i for all i.\nA classical method to solve this problem consists of three steps: computing the essential matrix E from the image matching points, extracting the relative camera pose R and t from E, and triangulating the matching points to get X i . The essential matrix can be solved using at least 5 matching points [64], and R and t can be computed from E using matrix decomposition. There is a scale ambiguity for relative camera pose estimation, and the 3D point X i can be computed with a global scale ambiguity." }, { "figure_ref": [ "fig_1" ], "heading": "Modular Approach to Zero-shot Promptable Object Pose Estimation", "publication_ref": [ "b23", "b23", "b64", "b22", "b21", "b3", "b4" ], "table_ref": [], "text": "Directly applying a two-view image matching framework between a prompt image and a complex target containing the same object is prone to failure. This is because a complex scene can have numerous noisy matches, especially when limited to only two observations. Hence, in this paper, we propose a modular approach to address this problem by breaking it down into multiple steps. First, we formulate an Open-world Detector that segments and identifies the queried object prompts in the target image. Next, we establish correspondences with new views, refining incorrect object retrievals and solving the task of relative pose estimation. Open-world Object Detector. In this paper, we propose a robust and general detector that conditions on the user-provided object prompt image I P and the image in the target view I T , without making any assumptions about object categories. The proposed detector aims to obtain the matched object mask in the target view, by generating all K valid masks M = {m 1 , m 2 , ..., m K } within I T using automatic object mask generation from a segmentation model [24], and retrieving the masked object image with the best global image properties. Specifically, we generate densely uniform points on the image lattice as prompts for promptable segmentation model (SAM) [24] to obtain M, which represents the object segments. The next goal is to retrieve the masked object image in the target view I T by establishing the relationship between the object prompt image I P and the masked object image set\nI K T = {I 1 T , I 2 T , ..., I K T },\ngiven one object prompt image I P and K object segments in the target. However, we cannot guarantee that the image pairs have enough texture [65] or sufficient image content overlapping for local feature matching of the open-world objects. Inspired by recent progress in self-supervised pre-trained Vision Transformer (ViT) models [23], we employ the retrieval augmented data engine in the DINO-v2 model [22] to perform robust global retrieval. 
Here, we utilize the embedded [CLS] token to capture global image properties and construct a cosine similarity matrix of shape 1 × K via the inner product of the [CLS] tokens: S (P, T, k) = ⟨CLS P , CLS T (k)⟩, which reveals the object relationship between the prompt image I P and the k th masked image in set\nI K T .\nBy finding the highest score within the matrix, we retrieve the matched image of the same object in two views. Moreover, extending from a single prompt image to multiple ones (e.g., M ) can be easily achieved by scaling up the similarity matrix to M × K. Hierarchical Retrieval Refinement with Local Descriptors. However, despite being trained on a large-scale dataset, DINO-v2 may generate high similarity scores for objects with similar appearances, resulting in erroneous global object-level retrieval (last column of Figure 3). This, in turn, can negatively impact the accuracy of the pose estimation stage. In order to address this issue, we propose a fine-grained approach that incorporates local descriptors to enhance the retrieval process and provide more reliable object matches. Specifically, we leverage local descriptors to summarize the similarities of local visual patterns, including edges, corners, and textures. These descriptors complement the potentially erroneous retrievals obtained solely from global representations. To implement this approach, we consider the Top-K proposals generated by DINO-v2, ranking the similarity scores in descending order.\nWe then establish image correspondences using a transformer-based local feature estimation framework [53] when using natural RGB images as prompt. The predicted confidence matrix P c represents the matching probabilities for all correspondences. To determine the confidence level of the matches, we introduce a confidence criterion based on a threshold value σ. We select and count the matches with confidence scores higher than the threshold in the total number of matches n. This criterion is defined as Criteria = 1 n n i=1 1(c i ≥ σ), where c i denotes the confidence score of the i-th match, and 1 is the indicator function that returns 1 if its argument is true, and 0 otherwise. The proposal with the largest criteria score among the Top-K proposals is selected as the best-matched pair, providing a more reliable estimation of the object pose. Pose Estimation. With dense correspondences established across the best-matched views, we proceed to estimate the relative pose of the cameras. This pose estimation involves determining the rotation R ∈ SO(3) and the translation vector t ∈ R 3 by matching descriptors, computing the essential matrix, and applying RANSAC to handle outliers and ensure reliable results [64]. It is important to note that our method is capable of recovering the relative rotation accurately. However, the predicted translation is up-to-scale, similar to other relative pose estimator [4,5]. This limitation arises from the fact that recovering the absolute translation (or object scale) is an ill-posed problem when only considering two views, as it is susceptible to scale-translation ambiguity. To address this issue, we employ the PnP algorithm and utilize the bounding box of the prompted object in the uncropped support view to recovering the scale of translation." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We initially demonstrate our approach for achieving zero-shot 6DoF object pose estimation on four different datasets using a two-view scenario. 
Subsequently, we validate the proposed open-world detector by assessing its segmentation and retrieval accuracy. Finally, in order to adapt POPE to multi-view pose estimation and evaluate the accuracy of multi-view poses, we visualize the performance using additional input target frames and assess the poses on the task of novel view synthesis." }, { "figure_ref": [], "heading": "Evaluation Setup", "publication_ref": [ "b0", "b65", "b1", "b2", "b3", "b23", "b21", "b4" ], "table_ref": [], "text": "Datasets. We evaluate our method on four widely used 6DoF object pose estimation datasets, to test the zero-shot transferability of POPE without any finetuning. The LINEMOD Dataset [1] is a standard benchmark dataset for 6DoF object pose estimation with the ground-truth CAD model. The LINEMOD dataset consists of images of thirteen low-textured objects under cluttered scenes and varying lighting conditions. The YCB-Video Dataset [66] consists of 92 RGBD videos of 21 YCB objects, with a moderately cluttered background and ground-truth CAD model for evaluation. The OnePose Dataset [2] contains around 450 real-world video sequences of 150 objects with rich textures and simple backgrounds. Each frame is annotated with camera poses and a 3D bounding box. The OnePose++ Dataset [3] supplements the original OnePose dataset with 40 household low-textured objects. As the pose distribution in different datasets varies, we organize the test set in a balanced distribution from 0° to 30°. Overall, the test set contains 5796 pairs on LINEMOD, 2843 pairs on YCB-Video, 2751 pairs on OnePose, and 3166 pairs on OnePose++. Model Selection and Baselines. We compared our proposed POPE method with two other approaches: LoFTR [53], an image-matching-based method that directly performs correspondence matching for pose estimation, and Gen6D [4], which utilizes a correlation-based network to discover object boxes, find pose initialization, and refine the relative object pose. We excluded the comparison with OnePose and OnePose++ as they are unable to generate point clouds from a single support view. In POPE, we utilize pre-trained models for different tasks: the Segment Anything model [24] with a ViT-H architecture for object mask generation, the DINO-v2 model [22] pre-trained with ViT-S/14 for object proposal generation, and the LoFTR model [53] pre-trained with indoor scenes for natural image-based image matching. We set σ to 0.9 and K to 3 in the experiments. It is important to note that the evaluated promptable object pose estimation does not rely on labeled examples for fine-tuning, including the pose in the support view and object masks, for any objects in real-world environments. Evaluation. We report the median error for each pair of samples, along with the accuracy at 15° and 30°, following the standard practice in relative object pose estimation [5]. The accuracy metrics represent the percentage of predictions with errors below these thresholds. In the main draft, our evaluation primarily focuses on the two-view settings, while we provide additional results on downstream applications (multiple-view pose estimation, novel view synthesis)." }, { "figure_ref": [ "fig_3" ], "heading": "Comparisons", "publication_ref": [ "b3", "b1", "b2" ], "table_ref": [ "tab_2", "tab_2" ], "text": "Results on LINEMOD and YCB-video datasets. We present the overall average median error and pose accuracy under different thresholds in Table 1.
Due to space limitations, we include the full table in the Sec 4.2 and demonstrate the median error for five instances in this section. It is evident from the results that the proposed POPE consistently outperforms other methods across all metrics, exhibiting a significant margin over each instance. The qualitative results, visualized in Figure 4, highlight important observations. Gen6D [4] heavily relies on accurate initialization for pose refinement and struggles in single-reference scenarios. LoFTR [53] fails to provide accurate matches when handling clustered scenes with object occlusions, resulting in inaccurate box predictions. It is important to note that the visualization of object boxes incorporates ground-truth translation to address scale ambiguity. Results on OnePose and OnePose++ datasets. In addition to the dataset containing multiple objects in cluttered scenes, we also evaluate the proposed framework on recently introduced one-shot object pose estimation datasets. Unlike previous approaches that rely on pose or box annotations, we conduct zero-shot two-view pose estimation without such annotations. The results in Table 1 demonstrate that POPE achieves a smaller median error in the relative object pose estimation task for both datasets. As the pose gap increases, LoFTR can improve its accuracy by utilizing the entire image for matching, incorporating more textural details from the background while still performing on par with our method. Figure 5: Qualitative results on the OnePose [2] and OnePose++ [3] datasets. Ground-truth poses are depicted with green boxes, while estimated poses are represented by blue boxes. Gen6D performs poorly compared to correspondence-based methods due to the significant pose gap between the support and target views. LoFTR is susceptible to the presence of similar patterns between the object and background (last row). In contrast, our proposed POPE exhibits strong generalization ability on both textured and textureless single object datasets." }, { "figure_ref": [ "fig_7" ], "heading": "Scaling from 2-view to Multi-view Promptable Pose Estimation (POPE)", "publication_ref": [ "b15", "b66" ], "table_ref": [], "text": "To address the requirement for sparse-view datasets in real-world scenarios, we have expanded our method from 2-view promptable pose estimation (POPE) to accommodate multi-view scenarios. Initially, we utilize the image matching results obtained from the 2-view POPE. We utilize the semi-dense correspondences from LOFTR [53] to reconstruct a semi-dense point cloud using COLMAP [16].\nTo introduce a new target viewpoint, we randomly select an image and perform object segmentation in a promptable manner. This enables us to retrieve the object's identity and exclude any negative effects caused by the clustered background. Subsequently, we conduct image matching between the prompt image and the newly added object image, register it, and extract correspondences between the new image and the semi-dense point cloud. The pose of the new object image is estimated by solving PnP. Finally, we update the sparse point cloud by minimizing reprojection errors and perform back-projection to obtain an optimized, accurate object point cloud, as well as updated object poses.\nTo demonstrate the scalability of our method, we visualize the performance curve by randomly increasing the number of views. Figure 6 illustrates that the overall accuracy significantly improves as more visual information is incorporated. 
Novel View Synthesis, an Application of POPE Our next objective is to validate the accuracy of our predicted pose estimation and demonstrate its practical applicability in downstream applications. To this end, we employ the estimated multi-view poses obtained from our POPE model, in combination with a pre-trained and generalizable Neural Radiance Field (GNT) [67]. Specifically, we configure the GNT with a source view number of 10 and utilize ground truth poses for source view warping. Subsequently, we leverage the estimated poses from our POPE model to generate new viewpoints based on the obtained POPE poses. Notably, our rendered results exhibit a remarkable resemblance to the ground-truth color image, as depicted in Figure 7, validating the precision of our estimated poses. These findings provide compelling evidence supporting the accuracy and reliability of our pose estimation method, paving the way for its effective implementation in diverse downstream applications. " }, { "figure_ref": [ "fig_9" ], "heading": "Accuracy", "publication_ref": [ "b0", "b65", "b1", "b2" ], "table_ref": [ "tab_3", "tab_4", "tab_5", "tab_6", "tab_7" ], "text": "Acc15 Acc30\n(a) The accuracy under 15°(Acc15) and 30°(Acc30). 8, Figure 9 and Figure 10 to further illustrate the effectiveness of our promptable 6DoF object pose estimation method. This approach leverages a prompt image containing the object of interest, and our algorithm, POPE, demonstrates the ability to recognize objects of various categories through segmentation and retrieval processes, ultimately achieving accurate estimation of relative object poses. Necessity of Open-world Object Detector. Challenging scenes, such as cluttered or complex backgrounds, occlusions, or variations in lighting conditions, can pose significant challenges for traditional two-view object detection and pose estimation (see Tab 1). Whereas our proposed method utilize a open-world object detector, not limited to a specific group of classes, improves the zero-shot generalization by the retrieval-and-matching strategy. When retrieving using the global feature representation, which may mistakenly have large activations with non-related objects (Figure 11), results in the inaccurate 6DoF estimation in the later stage. The proposed hierarchical representation for object retrieval across viewpoints (Table 2), both improves the segmentation and retrieval accuracy, as well as benefits the subsequent pose estimation.\nQuantitative Results on Each Instance We provide a comprehensive analysis of the average median error and pose accuracy across various thresholds. Specifically, we present per-instance metrics for two-view 6DoF object pose estimation, focusing on datasets with clustered backgrounds, namely LINEMOD [1] and YCB-Video [66]. The results are summarized in Table 3 and Table 4, revealing a significant improvement in both per-instance accuracy and overall accuracy. This obser- vation highlights the effectiveness of our promptable approach in mitigating the negative impact of background clutter and substantially enhancing the estimation accuracy. Furthermore, we present per-instance metrics for two-view 6DoF object pose estimation on datasets containing single object with rich textures [2] and poor textures [3] of each scebe. As depicted in Table 5 and Table 6, our method outperforms other two-view-based methods in terms of pose accuracy, with assistance of foreground object segmentation and retrieval. 
" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we present Promptable Object Pose Estimator (POPE), a zero-shot solution for estimating the 6DoF object pose in any scene with only one support image. Our solution highlights the potential of leveraging 2D pre-trained foundation models to lift the typical object pose estimation to generalize in a more practical paradigm. It features a modular design that decomposes the promptable object pose estimation into several steps. We demonstrate the scalability of our proposed solution to use single support image as prompt under extreme clustered scenes, the extension to multiple viewpoints, and the validation on novel view synthesis. Several potential directions for future work exist, including distilling large-scale foundation models into smaller ones for enabling real-time inference, and incorporating single-view depth information from a monocular depth estimator to enhance zero-shot accuracy. We envision that our solution will enable users to generate photorealistic 3D assets for augmented or virtual reality applications using only a few images, even as sparse as two." } ]
2023-05-25
[ { "authors": "Stefan Hinterstoisser; Vincent Lepetit; Slobodan Ilic; Stefan Holzer; Gary Bradski; Kurt Konolige; Nassir Navab", "journal": "Springer", "ref_id": "b0", "title": "Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes", "year": "2012" }, { "authors": "Jiaming Sun; Zihao Wang; Siyu Zhang; Xingyi He; Hongcheng Zhao; Guofeng Zhang; Xiaowei Zhou", "journal": "", "ref_id": "b1", "title": "Onepose: One-shot object pose estimation without cad models", "year": "2022" }, { "authors": "Xingyi He; Jiaming Sun; Yuang Wang; Di Huang; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b2", "title": "Onepose++: Keypoint-free one-shot object pose estimation without cad models", "year": "2023" }, { "authors": "Yuan Liu; Yilin Wen; Sida Peng; Cheng Lin; Xiaoxiao Long; Taku Komura; Wenping Wang", "journal": "Springer", "ref_id": "b3", "title": "Gen6d: Generalizable model-free 6-dof object pose estimation from rgb images", "year": "2022" }, { "authors": "Walter Goodwin; Sagar Vaze; Ioannis Havoutis; Ingmar Posner", "journal": "Springer", "ref_id": "b4", "title": "Zero-shot category-level object pose estimation", "year": "2022" }, { "authors": "Wadim Kehl; Fabian Manhardt; Federico Tombari; Slobodan Ilic; Nassir Navab", "journal": "", "ref_id": "b5", "title": "Ssd-6d: Making rgb-based 3d detection and 6d pose estimation great again", "year": "2017" }, { "authors": " Bugra Tekin; N Sudipta; Pascal Sinha; Fua", "journal": "", "ref_id": "b6", "title": "Real-time seamless single shot 6d object pose prediction", "year": "2018" }, { "authors": "Yu Xiang; Tanner Schmidt; Venkatraman Narayanan; Dieter Fox", "journal": "", "ref_id": "b7", "title": "Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes", "year": "2017" }, { "authors": "Sergey Zakharov; Ivan Shugurov; Slobodan Ilic", "journal": "", "ref_id": "b8", "title": "Dpod: 6d pose object detector and refiner", "year": "2019" }, { "authors": "Zhigang Li; Gu Wang; Xiangyang Ji", "journal": "", "ref_id": "b9", "title": "Cdpn: Coordinates-based disentangled pose network for real-time rgb-based 6-dof object pose estimation", "year": "2019" }, { "authors": "Sida Peng; Yuan Liu; Qixing Huang; Xiaowei Zhou; Hujun Bao", "journal": "", "ref_id": "b10", "title": "Pvnet: Pixel-wise voting network for 6dof pose estimation", "year": "2019" }, { "authors": "Yann Labbé; Justin Carpentier; Aubry Mathieu; Josef Sivic", "journal": "Springer", "ref_id": "b11", "title": "Cosypose: Consistent multiview multi-object 6d pose estimation", "year": "2020" }, { "authors": "He Wang; Srinath Sridhar; Jingwei Huang; Julien Valentin; Shuran Song; Leonidas J Guibas", "journal": "", "ref_id": "b12", "title": "Normalized object coordinate space for category-level 6d object pose and size estimation", "year": "2019" }, { "authors": "Adel Ahmadyan; Liangkai Zhang; Artsiom Ablavatski; Jianing Wei; Matthias Grundmann", "journal": "", "ref_id": "b13", "title": "Objectron: A large scale dataset of object-centric videos in the wild with pose annotations", "year": "2021" }, { "authors": "Zijian Xu Chen; Jie Dong; Andreas Song; Otmar Geiger; Hilliges", "journal": "Springer", "ref_id": "b14", "title": "Category level object pose estimation via neural analysis-by-synthesis", "year": "2020" }, { "authors": "L Johannes; Jan-Michael Schonberger; Frahm", "journal": "", "ref_id": "b15", "title": "Structure-from-motion revisited", "year": "2016" }, { "authors": "Samarth Sinha; Jason Y Zhang; Andrea Tagliasacchi; 
Igor Gilitschenski; David B Lindell", "journal": "", "ref_id": "b16", "title": "Sparsepose: Sparse-view camera pose regression and refinement", "year": "2022" }, { "authors": "Jason Y Zhang; Deva Ramanan; Shubham Tulsiani", "journal": "Springer", "ref_id": "b17", "title": "Relpose: Predicting probabilistic relative rotation for single objects in the wild", "year": "2022" }, { "authors": "Jocher Glenn; T Nishimura; Minerva; Vilariño", "journal": "", "ref_id": "b18", "title": "", "year": "2002" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b19", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b20", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Maxime Oquab; Timothée Darcet; Théo Moutakanni; Huy Vo; Marc Szafraniec; Vasil Khalidov; Pierre Fernandez; Daniel Haziza; Francisco Massa; Alaaeldin El-Nouby", "journal": "", "ref_id": "b21", "title": "Dinov2: Learning robust visual features without supervision", "year": "2023" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b22", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b23", "title": "Segment anything", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b24", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Lin Yen-Chen; Pete Florence; Jonathan T Barron; Alberto Rodriguez; Phillip Isola; Tsung-Yi Lin", "journal": "IEEE", "ref_id": "b25", "title": "inerf: Inverting neural radiance fields for pose estimation", "year": "2021" }, { "authors": "Tomas Hodan; Daniel Barath; Jiri Matas", "journal": "", "ref_id": "b26", "title": "Epos: Estimating 6d pose of objects with symmetries", "year": "2020" }, { "authors": "Yan Di; Fabian Manhardt; Gu Wang; Xiangyang Ji; Nassir Navab; Federico Tombari", "journal": "", "ref_id": "b27", "title": "Sopose: Exploiting self-occlusion for direct 6d pose estimation", "year": "2021" }, { "authors": "Yilin Wen; Xiangyu Li; Hao Pan; Lei Yang; Zheng Wang; Taku Komura; Wenping Wang", "journal": "", "ref_id": "b28", "title": "Disentangled implicit shape and pose learning for scalable 6d pose estimation", "year": "2021" }, { "authors": "Xinke Deng; Junyi Geng; Timothy Bretl; Yu Xiang; Dieter Fox", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b29", "title": "icaps: Iterative categorylevel object pose and shape estimation", "year": "2022" }, { "authors": "Jiehong Lin; Hongyang Li; Ke Chen; Jiangbo Lu; Kui Jia", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Sparse steerable convolutions: An efficient learning of se (3)-equivariant features for estimation and tracking of object poses in 3d space", "year": "2021" }, { "authors": 
"Wei Chen; Xi Jia; Jin Hyung; Jinming Chang; Linlin Duan; Ales Shen; Leonardis", "journal": "", "ref_id": "b31", "title": "Fs-net: Fast shape-based network for category-level 6d object pose estimation with decoupled rotation mechanism", "year": "2021" }, { "authors": "Jiehong Lin; Zewei Wei; Zhihao Li; Songcen Xu; Kui Jia; Yuanqing Li", "journal": "", "ref_id": "b32", "title": "Dualposenet: Category-level 6d object pose and size estimation using dual pose network with refined learning of pose consistency", "year": "2021" }, { "authors": "Meng Tian; Marcelo H Ang; Hee Gim; Lee", "journal": "Springer", "ref_id": "b33", "title": "Shape prior deformation for categorical 6d object pose and size estimation", "year": "2020" }, { "authors": "Yan Di; Ruida Zhang; Zhiqiang Lou; Fabian Manhardt; Xiangyang Ji; Nassir Navab; Federico Tombari", "journal": "", "ref_id": "b34", "title": "Gpv-pose: Category-level object pose estimation via geometry-guided pointwise voting", "year": "2022" }, { "authors": "Yang Xiao; Xuchong Qiu; Pierre-Alain Langlois; Mathieu Aubry; Renaud Marlet", "journal": "", "ref_id": "b35", "title": "Pose from shape: Deep pose estimation for arbitrary 3d objects", "year": "2019" }, { "authors": "Giorgia Pitteri; Aurélie Bugeau; Slobodan Ilic; Vincent Lepetit", "journal": "", "ref_id": "b36", "title": "3d object detection and pose estimation of unseen objects in color images with local surface embeddings", "year": "2020" }, { "authors": "Meghal Dani; Karan Narain; Ramya ", "journal": "", "ref_id": "b37", "title": "3dposelite: a compact 3d pose estimation using node embeddings", "year": "2021" }, { "authors": "Stefan Hinterstoisser; Stefan Holzer; Cedric Cagniart; Slobodan Ilic; Kurt Konolige; Nassir Navab; Vincent Lepetit", "journal": "IEEE", "ref_id": "b38", "title": "Multimodal templates for real-time detection of texture-less objects in heavily cluttered scenes", "year": "2011" }, { "authors": "Vassileios Balntas; Andreas Doumanoglou; Caner Sahin; Juil Sock; Rigas Kouskouridas; Tae-Kyun Kim", "journal": "", "ref_id": "b39", "title": "Pose guided rgbd feature learning for 3d object pose estimation", "year": "2017" }, { "authors": "Paul Wohlhart; Vincent Lepetit", "journal": "", "ref_id": "b40", "title": "Learning descriptors for object recognition and 3d pose estimation", "year": "2015" }, { "authors": "Martin Sundermeyer; Maximilian Durner; Yen En; Zoltan-Csaba Puang; Narunas Marton; Kai O Vaskevicius; Rudolph Arras; Triebel", "journal": "", "ref_id": "b41", "title": "Multi-path learning for object pose estimation across domains", "year": "2020" }, { "authors": "Yi Li; Gu Wang; Xiangyang Ji; Yu Xiang; Dieter Fox", "journal": "", "ref_id": "b42", "title": "Deepim: Deep iterative matching for 6d pose estimation", "year": "2018" }, { "authors": "Brian Okorn; Qiao Gu; Martial Hebert; David Held", "journal": "IEEE", "ref_id": "b43", "title": "Zephyr: Zero-shot pose hypothesis rating", "year": "2021" }, { "authors": "Benjamin Busam; Hyun Jun Jung; Nassir Navab", "journal": "", "ref_id": "b44", "title": "I like to move it: 6d pose estimation as an action decision process", "year": "2020" }, { "authors": "Keunhong Park; Arsalan Mousavian; Yu Xiang; Dieter Fox", "journal": "", "ref_id": "b45", "title": "Latentfusion: End-to-end differentiable reconstruction and rendering for unseen object pose estimation", "year": "2020" }, { "authors": "Lin Yen-Chen; Pete Florence; Jonathan T Barron; Alberto Rodriguez; Phillip Isola; Tsung-Yi Lin", "journal": "", "ref_id": "b46", "title": "inerf: Inverting 
neural radiance fields for pose estimation", "year": "2020" }, { "authors": "Yunzhi Lin; Thomas Müller; Jonathan Tremblay; Bowen Wen; Stephen Tyree; Alex Evans; Patricio A Vela; Stan Birchfield", "journal": "", "ref_id": "b47", "title": "Parallel inversion of neural radiance fields for robust pose estimation", "year": "2022" }, { "authors": "Anastasiya Mishchuk; Dmytro Mishkin; Filip Radenovic; Jiri Matas", "journal": "", "ref_id": "b48", "title": "Working hard to know your neighbor's margins: Local descriptor learning loss", "year": "2017" }, { "authors": " David G Lowe", "journal": "International journal of computer vision", "ref_id": "b49", "title": "Distinctive image features from scale-invariant keypoints", "year": "2004" }, { "authors": "Herbert Bay; Tinne Tuytelaars; Luc Van Gool", "journal": "Lecture notes in computer science", "ref_id": "b50", "title": "Surf: Speeded up robust features", "year": "2006" }, { "authors": "Paul-Edouard Sarlin; Daniel Detone; Tomasz Malisiewicz; Andrew Rabinovich", "journal": "", "ref_id": "b51", "title": "Superglue: Learning feature matching with graph neural networks", "year": "2020" }, { "authors": "Jiaming Sun; Zehong Shen; Yuang Wang; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b52", "title": "Loftr: Detector-free local feature matching with transformers", "year": "2021" }, { "authors": "Wei Jiang; Eduard Trulls; Jan Hendrik Hosang; Andrea Tagliasacchi; Kwang Moo; Yi ", "journal": "", "ref_id": "b53", "title": "Cotr: Correspondence transformer for matching across images", "year": "2021" }, { "authors": "Jiankun Li; Peisen Wang; Pengfei Xiong; Tao Cai; Zi-Ping Yan; Lei Yang; Jiangyu Liu; Haoqiang Fan; Shuaicheng Liu", "journal": "", "ref_id": "b54", "title": "Practical stereo matching via cascaded recurrent network with adaptive correlation", "year": "2022" }, { "authors": "Philipp Lindenberger; Paul-Edouard Sarlin; Viktor Larsson; Marc Pollefeys", "journal": "", "ref_id": "b55", "title": "Pixel-perfect structure-from-motion with featuremetric refinement", "year": "2021" }, { "authors": "Ufuk Efe; Kutalmis Gokalp Ince; A Aydin Alatan", "journal": "", "ref_id": "b56", "title": "Dfm: A performance baseline for deep feature matching", "year": "2021" }, { "authors": "Paul A Beardsley; H S Philip; Andrew Torr; Zisserman", "journal": "", "ref_id": "b57", "title": "3d model acquisition from extended image sequences", "year": "1996" }, { "authors": "Yuhe Jin; Dmytro Mishkin; Anastasiia Mishchuk; Jiri Matas; P Fua; Kwang Moo Yi; Eduard Trulls", "journal": "International Journal of Computer Vision", "ref_id": "b58", "title": "Image matching across wide baselines: From paper to practice", "year": "2020" }, { "authors": "Ruojin Cai; Bharath Hariharan; Noah Snavely; Hadar Averbuch-Elor", "journal": "", "ref_id": "b59", "title": "Extreme rotation estimation using dense correlation volumes", "year": "2021" }, { "authors": "Chris Rockwell; Justin Johnson; David F Fouhey", "journal": "", "ref_id": "b60", "title": "The 8-point algorithm as an inductive bias for relative pose prediction by vits", "year": "2022" }, { "authors": "Hanwen Jiang; Zhenyu Jiang; Kristen Grauman; Yuke Zhu", "journal": "", "ref_id": "b61", "title": "Few-view object reconstruction with unknown categories and camera poses", "year": "2022" }, { "authors": "H Christopher; Longuet- Higgins", "journal": "Nature", "ref_id": "b62", "title": "A computer algorithm for reconstructing a scene from two projections", "year": "1981" }, { "authors": "David Nistér", "journal": "IEEE transactions on 
pattern analysis and machine intelligence", "ref_id": "b63", "title": "An efficient solution to the five-point relative pose problem", "year": "2004" }, { "authors": "Wei Chen; Yu Liu; Weiping Wang; Erwin M Bakker; Theodoros Georgiou; Paul Fieguth; Li Liu; Michael S Lew", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b64", "title": "Deep learning for instance retrieval: A survey", "year": "2022" }, { "authors": "Berk Calli; Arjun Singh; Aaron Walsman; Siddhartha Srinivasa; Pieter Abbeel; Aaron M Dollar", "journal": "IEEE", "ref_id": "b65", "title": "The ycb object and model set: Towards common benchmarks for manipulation research", "year": "2015" }, { "authors": "Mukund Varma; Peihao Wang; Xuxi Chen; Tianlong Chen; Subhashini Venugopalan; Zhangyang Wang", "journal": "", "ref_id": "b66", "title": "Is attention all that nerf needs?", "year": "2022" } ]
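The open-world detector described in the sections scores each of the K masked proposals against the prompt by the cosine similarity of [CLS] embeddings, S(P, T, k) = <CLS_P, CLS_T(k)>, and generalizes to M prompts as an M×K matrix. Below is a small sketch of that scoring and Top-K selection; the embeddings are assumed to be given (computing them with a DINO-v2-style backbone is outside the sketch), and the helper names are illustrative only, not from the POPE codebase.

```python
# Sketch of the object-level retrieval score: cosine similarity between prompt
# [CLS] embeddings and the [CLS] embeddings of K masked object proposals.
# `prompt_cls` (M, D) and `proposal_cls` (K, D) are assumed to come from a
# self-supervised ViT; producing them is outside this sketch.
import numpy as np


def retrieval_scores(prompt_cls: np.ndarray, proposal_cls: np.ndarray) -> np.ndarray:
    """Return an (M, K) cosine-similarity matrix between prompts and proposals."""
    p = prompt_cls / np.linalg.norm(prompt_cls, axis=1, keepdims=True)
    q = proposal_cls / np.linalg.norm(proposal_cls, axis=1, keepdims=True)
    return p @ q.T  # S[m, k] = <CLS_P(m), CLS_T(k)>


def top_k_proposals(scores: np.ndarray, k: int = 3) -> np.ndarray:
    """Indices of the k best-scoring proposals for each prompt (descending)."""
    return np.argsort(-scores, axis=1)[:, :k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = retrieval_scores(rng.normal(size=(2, 384)), rng.normal(size=(7, 384)))
    print(S.shape, top_k_proposals(S, k=3))
```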
[ { "formula_coordinates": [ 5, 166.07, 224.98, 199.36, 12.32 ], "formula_id": "formula_0", "formula_text": "x i = K [I|0] X i and x ′ i = K [R|t] X i for all i." }, { "formula_coordinates": [ 5, 148.54, 504.45, 95.33, 12.48 ], "formula_id": "formula_1", "formula_text": "I K T = {I 1 T , I 2 T , ..., I K T }," }, { "formula_coordinates": [ 5, 108, 591.72, 16.28, 12.48 ], "formula_id": "formula_2", "formula_text": "I K T ." } ]
POPE: 6-DoF Promptable Pose Estimation of Any Object, in Any Scene, with One Reference
Figure residue: (a) framework of the Promptable Object Pose Estimator; (b) 6DoF pose estimation of a new view (target).
Zhiwen Fan; Panwang Pan; Peihao Wang; Yifan Jiang; Dejia Xu; Hanwen Jiang; Zhangyang Wang
[ { "figure_caption": "Figure 2 :2Figure 2: Comparing POPE with previous frameworks. We provide a detailed comparison between POPE and prior works, including (a) OnePose/OnePose++ [2, 3] which relies on a large number of posed support views and the corresponding bounding box; (b) Gen6D [4] that replace 2D-3D matching pipeline with a refiner network; and (c) ZeroPose [5] which further utilized depth maps. Different from all these methods, the proposed method POPE eliminates the need for densely annotated support views and enables accurate object retrieval in new viewpoints without relying on any assumptions about the object's category. Here, denotes a close-category detector, means a correlation-based detector, and denotes a open-world detector.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Failed matches. Relying solely on the similarity score of the [CLS] token for global representation can lead to inaccurate matches, especially in clustered scenarios. This motivates us to incorporate local descriptor information for improved retrieval.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Qualitative results on the LINEMOD[1] and YCB-Video[66] datasets. Ground-truth poses are visualized with green boxes, while estimated poses are represented by blue boxes. Gen6D performs poorly compared to correspondence-based methods, as it relies on a closed initial view for relative pose estimation. LoFTR tends to produce noisy matching results when using the object prompt and new view directly as input. POPE demonstrates robustness against complex and cluttered backgrounds by employing open-world object detection and finding correspondences through cropped and centralized images.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "The Median Error with different view number.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: We present plots illustrating the accuracy and median error as the number of views increases from 2 to 16.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 9 :Figure 10 :910Figure 9: Visual Examples of Promptable Object Pose Estimation for Indoor Test Cases. we present visual examples showcasing the retrieved object masks and the estimated relative poses in the context of promptable object pose estimation for indoor test cases.", "figure_data": "", "figure_id": "fig_8", "figure_label": "910", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Ablation study. Visualizations of retrieved object masks and proposals, selected from the Top-3 proposals using the global [CLS] token similarity.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "We conduct experiments on zero-shot two-view object pose estimation on LINEMOD dataset, and report Median Error and Accuracy at 30º, 15º averaged across all 13 categories. We also report Accuracy at 30º broken down by class for an illustrative subset of categories.", "figure_data": "DatasetMethodAll Categories Med. Err (↓) Acc30 (↑) Acc15 (↑) EggboxPer Category (Med. Err ↓) Can Iron Hole. 
CameraGen6D [4]44.8550.3640.09631.781 30.407 30.094 45.288 35.970LINEMOD [1]LoFTR [53]33.0360.5620.32416.887 17.585 17.904 31.782 22.550Ours15.7310.7700.48310.530 12.699 13.157 14.779 15.102Med. Err (↓) Acc30 (↑) Acc15 (↑)008003005006010YCB-Video [66]Gen6D [4] LoFTR [53]54.477 19.54190.232 0.6860.077 0.47845.461 80.992 50.587 66.999 50.597 15.359 36.942 17.832 28.999 17.475Ours13.94110.8010.5447.78718.385 14.171 20.100 15.428Med. Err (↓) Acc30 (↑) Acc15 (↑) Mfmi.OreoTaip.Diyc.TeeOnePose++ [3]Gen6D [4] LoFTR [53]35.428 9.0120.411 0.8910.158 0.70316.963 16.612 17.787 19.132 19.867 4.077 3.938 4.147 5.041 5.312Ours6.2730.8960.7281.7651.2032.1472.7693.799Med. Err (↓) Acc30 (↑) Acc15 (↑)Yell.Teab.Oran.Gree.Inst.OnePose [2]Gen6D [4] LoFTR [53]17.785 4.3510.893 0.9630.389 0.91835.811 31.536 36.829 30.609 48.317 9.773 6.488 9.439 7.3482 17.136Ours2.1550.9620.9115.4703.1948.0443.96716.492Promptable Object Pose Estimation in Arbitrary Scene We provide supplementary visualexamples in Figure", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation Studies. We conducted an analysis of the segmentation, retrieval, and relative pose estimation tasks to validate the model design. The correlation-based detector in Gen6D[4] often performs poorly in the clustered LINEMOD dataset when using only a single reference image (top row). The proposed framework, utilizing an Open-world Detector that relies on global representation (second row), shows slightly lower performance compared to our full model, which incorporates hierarchical representation (last row). The results are averaged over a subset comprising 1/10 of the LINEMOD dataset.", "figure_data": "MethodSegmentation Acc. mIoU (↑) Accuracy(↑)Retrieval Acc. mAP↑Pose Acc. Med. Err↓ Acc30↑ Acc15↑Gen6D [4]0.0870.1020.06744.6440.3690.106Ours(Global,Top-1)0.6050.8150.81714.9120.7870.493Ours(Hierarchical,Top-3)0.6210.8420.84412.6390.8100.529", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "We conduct experiments on zero-shot two-view object pose estimation on LINEMOD dataset, and report Median Error and Accuracy at 30º, 15º averaged across all 13 scenes.", "figure_data": "MetricsMethodapebenchvise cameracancatdrillerPer Instance duck eggboxglueholepuncherironlampphoneAvgGen6D0.0160.1250.1120.1200.0280.1570.0290.1140.0230.06700.1080.2410.1050.096Acc15 (↑)LoFTR0.0910.4230.3380.4290.1720.4450.1900.4330.1190.2530.4110.5820.3220.324Ours0.4390.4500.4930.5310.444 0.47916 0.4560.6070.3800.5020.5850.4670.4450.483Gen6D0.1330.4450.4000.4850.2320.4820.2030.4370.1470.2790.4960.6090.3800.364Acc30 (↑)LoFTR0.2910.6630.6080.7125 0.3880.6870.3700.7220.2480.4800.7380.8550.5420.562Ours0.7890.7100.7640.8260.7320.7650.7580.8400.6860.8090.8570.7330.7430.770Gen6D 79.70532.50435.970 30.407 54.468 30.665 57.292 31.781 88.04445.28830.094 25.551 39.392 44.855Med. 
Err (↓)LoFTR 70.09419.22722.550 17.585 43.069 18.356 44.083 16.887 90.00031.78217.904 11.871 26.063 33.036Ours16.71617.76215.102 12.699 17.921 15.926 17.641 10.530 19.14414.77913.157 16.203 16.929 15.731", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "We conduct experiments on zero-shot two-view object pose estimation on YCB-Video dataset, and report Median Error and Accuracy at 30º, 15º averaged across all 10 scenes.", "figure_data": "MetricsMethod001002003004005Per Instance 006007008009010AvgGen6D0.0460.0630.0280.0170.0840.0270.2500.1020.0730.0850.077Acc15 (↑)LoFTR0.4830.5390.2970.2450.4570.2981.000 0.4953 0.5080.4570.478Ours0.4410.5470.4010.4570.5210.3810.9370.7380.5240.4920.544Gen6D0.2040.1900.1400.1080.2530.1380.5620.3080.2210.1920.232Acc30 (↑)LoFTR0.6370.8170.4810.4850.7390.5061.0000.785 0.6885 0.7210.686Ours0.6550.8570.7550.7480.8160.6801.0000.9530.7780.7710.801Gen6D53.87 49.995 80.992 64.819 50.587 66.999 27.633 45.461 53.817 50.597 54.477Med. Err (↓)LoFTR 17.198 13.484 36.942 31.474 17.832 28.999 2.038 15.359 14.613 17.475 19.541Ours18.582 12.133 18.385 17.257 14.171 20.100 1.408 7.7875 14.156 15.428 13.941", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "We conduct experiments on zero-shot two-view object pose estimation on OnePose dataset, and report Median Error and Accuracy at 30º, 15º averaged across all 10 objects.", "figure_data": "MetricsMethodaptamiljzhgminipuff hlyormosiapie brownhousePer Instance oreo mfmilkcake diycookies taipingcookiesteeAvgGen6D0.3500.4450.3870.3970.4240.4210.4170.3570.3940.2990.389Acc15 (↑)LoFTR0.8720.9310.9640.8970.9840.9570.9470.8220.9750.8340.918Ours0.8710.9590.9250.8860.9680.9750.9200.80.9630.8490.911Gen6D0.8450.9140.9250.9010.9440.9140.9230.7960.9380.8310.893Acc30 (↑)LoFTR0.9450.9820.9820.9780.9920.9920.9690.8780.9930.9180.963Ours0.9490.9790.9730.9740.9760.9850.9670.8950.9930.9300.962Gen6D19.542 16.35617.34817.50016.74716.61216.96319.13217.78719.867 17.785Med. Err (↓)LoFTR5.4074.1823.9783.8693.5553.9384.0775.0414.1475.3124.351Ours2.9971.4601.7862.1551.4701.20331.7652.7692.1473.7992.155", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "We conduct experiments on zero-shot two-view object pose estimation on OnePose++ dataset, and report Median Error and Accuracy at 30º, 15º averaged across all 9 objects.", "figure_data": "DatasetMethodPer Instance toyrobot yellowduck sheep fakebanana teabox orange greenteapot lecreusetcupinstaAvgGen6D0.1710.1230.1970.1560.2040.1350.1850.1850.0670.158Acc15 (↑)LoFTR0.7940.6760.7720.680.7820.6850.7830.7080.4430.703Ours0.7530.7680.7810.6830.8440.70.8600.7080.4600.728Gen6D0.4510.3610.4720.4230.4780.3880.4790.4130.2320.411Acc30 (↑)LoFTR0.9120.9010.9220.8930.9030.8550.9690.9280.7380.891Ours0.8820.9360.9010.880.9190.9050.9530.9070.7810.896Gen6D32.99835.81131.36636.20231.536 36.82930.60935.18548.317 35.428Med. Err (↓)LoFTR6.3689.7737.3368.7516.4889.4397.3488.47217.136 9.012Ours3.7925.4704.4354.9903.1948.0443.9676.07216.492 6.273", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
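The multi-view extension described in the sections registers each newly added target view by matching it against the retrieved object, collecting 2D-3D correspondences with the semi-dense point cloud, and solving PnP. The sketch below shows only the PnP + RANSAC step with OpenCV under assumed inputs (matched 3D cloud points and their 2D observations in the new view); it is an illustration, not the paper's code.

```python
# Sketch of registering a new target view against an existing semi-dense point
# cloud via PnP + RANSAC, as in the multi-view extension described above.
import numpy as np
import cv2


def register_new_view(points_3d, points_2d, K):
    """points_3d: (N, 3) cloud points matched to (N, 2) pixels in the new view."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        K, distCoeffs=None, reprojectionError=3.0, iterationsCount=1000,
    )
    if not ok:
        raise RuntimeError("PnP failed: not enough consistent 2D-3D matches")
    R, _ = cv2.Rodrigues(rvec)  # world-to-camera rotation of the new view
    return R, tvec, inliers


if __name__ == "__main__":
    # Toy usage: project a synthetic cloud into a known camera and recover its pose.
    rng = np.random.default_rng(0)
    K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
    X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(60, 3))
    R_gt, _ = cv2.Rodrigues(np.array([0.1, -0.2, 0.05]))
    t_gt = np.array([[0.3], [0.1], [0.2]])
    x = (K @ (R_gt @ X.T + t_gt)).T
    x = x[:, :2] / x[:, 2:]
    R, t, _ = register_new_view(X, x, K)
    print(np.round(R - R_gt, 4), t.ravel())
```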
[{"Category": "Methodological Basis", "Citation": "[2,3]", "Explanation": "The cited works OnePose and OnePose++ are the basis for the proposed POPE method, as they rely on a large number of posed support views and bounding boxes for object detection and pose estimation."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work Gen6D replaces the 2D-3D matching pipeline with a refiner network, which serves as the methodological basis for the POPE method in terms of object detection and pose estimation."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work ZeroPose utilizes depth maps for object detection and pose estimation, which provides a methodological basis for the POPE method in terms of object detection and pose estimation in new viewpoints."}, {"Category": "Extension or Continuation", "Citation": "[2,3,4,5]", "Explanation": "The proposed POPE method extends the works of OnePose, OnePose++, Gen6D, and ZeroPose by eliminating the need for densely annotated support views and enabling accurate object retrieval in new viewpoints without relying on any assumptions about the object category."}, {"Category": "Methodological Basis", "Citation": "[6,7,8,9,10,11,12]", "Explanation": "The cited works provide instance-level pose estimation methods that the citing paper adopts to enable object 6DoF pose estimation on unseen objects using simple and easy-to-obtain references."}, {"Category": "Extension or Continuation", "Citation": "[13,14,15]", "Explanation": "The cited works focus on category-level pose estimation, which the citing paper extends by exploring the challenges of handling diverse objects and proposing a method that can operate on arbitrary object assets."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work introduces the structure-from-motion (SfM) technique, which the citing paper adopts in addressing the challenges in object pose estimation with a single support view."}, {"Category": "Data Source", "Citation": "[2,3]", "Explanation": "The cited works provide the data and methods used in the study of object pose estimation with a single support view, which the citing paper references to build upon."}, {"Category": "Extension or Continuation", "Citation": "[4]", "Explanation": "The cited work reduces the number of support views in object pose estimation, which the citing paper further extends by exploring new dimensions and variables in the same research area."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The cited work leverages depth maps and self-supervised trained Vision Transformers in object pose estimation, which the citing paper extends by exploring new techniques and methods in the same research area."}, {"Category": "Methodological Basis", "Citation": "[17,18]", "Explanation": "The cited works estimate relative poses in object pose estimation with a single support view, which the citing paper adopts in their research to address the challenges in the same area."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work proposes off-the-shelf detectors for specific instances/categories in object pose estimation, which the citing paper references in their study to improve the robustness of object detection in diverse scenes."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work on zero-shot segmentation in different image data domains provides a method for performing 
segmentation without any assumptions on the object category, which is useful for addressing the challenge of estimating object pose in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work on non-parametric instance-level recognition has the potential to address the challenge of estimating object pose by providing a method for recognizing objects at the instance level without any prior knowledge of the object category."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work is a foundation for the citing paper as it demonstrates the scaling effect with data-parameter balance, which the citing paper leverages to build a system that integrates the essence of foundation models for the goal of promptable object pose estimation."}, {"Category": "Supporting Evidence", "Citation": "[25]", "Explanation": "The cited work, CLIP, utilizes contrastive learning to construct a joint embedding space of text and image modalities, which the citing paper leverages to build a system for promptable object pose estimation."}, {"Category": "Supporting Evidence", "Citation": "[23]", "Explanation": "The cited work, DINO, demonstrates emerging properties for learning robust visual features, which the citing paper leverages to build a system for promptable object pose estimation."}, {"Category": "Supporting Evidence", "Citation": "[22]", "Explanation": "The cited work, DINOv2, also shows emerging properties for learning robust visual features, which the citing paper leverages to build a system for promptable object pose estimation."}, {"Category": "Extension or Continuation", "Citation": "[24]", "Explanation": "The cited work, SAM, demonstrates promptable segmentation ability that supports interactive segmentation with visual promptings such as points and bounding boxes. The citing paper extends this work by building a system that integrates the essence of SAM and DINO to help POPE handle cluttered real scenes by performing dense segmentation and instance-level matching."}, {"Category": "Methodological Basis", "Citation": "[26,27,28]", "Explanation": "The cited works provide instance-level methods for estimating the 6DoF pose of objects, which the citing paper adopts to build its own approach for object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[13,15,29,30,31,15,32,33,34,35,5]", "Explanation": "The cited works contribute category-level methods for object pose estimation, which the citing paper uses to develop its own approach for object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work provides a method for estimating object mask, which the citing paper uses in its approach for object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[47]", "Explanation": "The cited work presents a method for estimating reference images, which the citing paper utilizes in its approach for object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work, Gen6D, is referenced for its use in detecting target objects and initializing pose estimates from dense reference views. The citing paper adopts this method in its approach for object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work, OnePose, is mentioned for its construction of a sparse point cloud from RGB sequences of support viewpoints. 
The citing paper uses this method in its approach for object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work, OnePose++, is referenced for its use in matching the target view with a sparse point cloud to determine object poses. The citing paper adopts this method in its approach for object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The cited work, SIFT, is a hand-crafted feature used in the correspondence-based methods for establishing cross-view pixel-level correspondences in the two-view object pose estimation task."}, {"Category": "Methodological Basis", "Citation": "[51]", "Explanation": "The cited work, SURF, is another hand-crafted feature used in the correspondence-based methods for establishing cross-view pixel-level correspondences in the two-view object pose estimation task."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work proposes a method that establishes cross-view pixel-level correspondences using learned features and incorporates robust estimation methods in the two-view object pose estimation task."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The cited work also proposes a method that establishes cross-view pixel-level correspondences using learned features and incorporates the synergy between shape reconstruction and pose estimation in the two-view object pose estimation task."}, {"Category": "Methodological Basis", "Citation": "[54]", "Explanation": "The cited work uses learned features to establish cross-view pixel-level correspondences in the two-view object pose estimation task."}, {"Category": "Methodological Basis", "Citation": "[55]", "Explanation": "The cited work also uses learned features to establish cross-view pixel-level correspondences in the two-view object pose estimation task."}, {"Category": "Methodological Basis", "Citation": "[56]", "Explanation": "The cited work uses learned features to establish cross-view pixel-level correspondences in the two-view object pose estimation task."}, {"Category": "Methodological Basis", "Citation": "[57]", "Explanation": "The cited work uses learned features to establish cross-view pixel-level correspondences in the two-view object pose estimation task."}, {"Category": "Methodological Basis", "Citation": "[58]", "Explanation": "The cited work proposes a method that establishes cross-view pixel-level correspondences using learned features and incorporates robust estimation methods in the two-view object pose estimation task."}, {"Category": "Methodological Basis", "Citation": "[59]", "Explanation": "The cited work proposes a method that establishes cross-view pixel-level correspondences using learned features and incorporates the synergy between shape reconstruction and pose estimation in the two-view object pose estimation task."}, {"Category": "Methodological Basis", "Citation": "[2,3,5]", "Explanation": "The cited works provide the closed-category object separation method that the citing paper adopts for the object pose estimation process."}, {"Category": "Extension or Continuation", "Citation": "[4,2,3]", "Explanation": "The cited works are mentioned as having limitations in the citing paper, indicating that the author intends to address these issues in a new approach."}, {"Category": "Data Source", "Citation": "[47]", "Explanation": "The cited work is referenced for the information on the number of support views required for robotics 
systems to grasp objects, which the citing paper uses in its research on the topic."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work provides a method for automatic object mask generation that the citing paper adopts in the proposed detector to obtain the matched object mask in the target view."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work provides the promptable segmentation model (SAM) that the citing paper adopts to obtain object segments in the image lattice."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work provides the retrieval augmented data engine in the DINO-v2 model that the citing paper uses to perform robust global retrieval in the self-supervised pre-trained Vision Transformer (ViT) models."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The cited work [53] provides a transformer-based local feature estimation framework that the citing paper adopts to establish image correspondences in the process of object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[64]", "Explanation": "The cited work provides the method of estimating the relative pose of cameras by matching descriptors, computing the essential matrix, and applying RANSAC to handle outliers, which the citing paper adopts in their research."}, {"Category": "Data Source", "Citation": "[1]", "Explanation": "The LINEMOD Dataset is cited as a standard benchmark dataset for 6DoF object pose estimation, providing a ground-truth CAD model for evaluation in the citing paper."}, {"Category": "Data Source", "Citation": "[66]", "Explanation": "The YCB-Video Dataset is cited for its RGBD videos of 21 YCB objects, with medium clustered background and ground-truth CAD model for evaluation in the citing paper."}, {"Category": "Data Source", "Citation": "[2]", "Explanation": "The OnePose Dataset is cited for its real-world video sequences of 150 objects with rich textures and simple background, along with camera poses and 3D bounding box annotations for evaluation in the citing paper."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The OnePose++ Dataset is cited for its addition of 40 household low-textured objects to the original OnePose dataset, providing a more diverse range of objects for evaluation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work, Segment Anything model, is utilized in the POPE method for object mask generation, providing a methodological basis for the task of object mask generation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work, DINO-v2 model, is pre-trained with ViT-S/14 and used in the POPE method for object proposal generation, providing a methodological basis for the task of object proposal generation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The cited work, LoFTR model, is pre-trained with indoor scenes and used in the POPE method for image matching, providing a methodological basis for the task of image matching in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The cited work is a standard practice in relative object pose estimation, and the citing paper extends the evaluation to include the two-view settings and downstream applications of multiple-view pose estimation and novel view 
synthesis."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work, Gen6D, is used as a method for pose refinement in the citing paper, but it is noted that the method relies on accurate initialization and struggles in single-reference scenarios."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The cited work, LoFTR, is used for object pose estimation in the citing paper, but it is noted that the method fails to provide accurate matches in clustered scenes with object occlusions."}, {"Category": "Data Source", "Citation": "Sec 4.2", "Explanation": "The cited work is a full table in the Sec 4.2 of the paper, which is used as a data source for the results presented in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "Figure 4", "Explanation": "The cited work is a visualization of object boxes in the citing paper, which is an extension or continuation of the results presented in the paper."}, {"Category": "Data Source", "Citation": "one-shot object pose estimation datasets", "Explanation": "The cited work is a dataset used in the citing paper for one-shot object pose estimation, which is a new approach compared to previous methods that rely on pose or box annotations."}, {"Category": "Supporting Evidence", "Citation": "[2]", "Explanation": "The cited work, OnePose, is the source of the dataset used in the qualitative results presented in Figure 5. The results demonstrate the performance of POPE in the relative object pose estimation task compared to the method used in the cited work."}, {"Category": "Supporting Evidence", "Citation": "[3]", "Explanation": "The cited work, OnePose++, is the source of the dataset used in the qualitative results presented in Figure 5. The results show the performance of POPE in the relative object pose estimation task compared to the method used in the cited work."}, {"Category": "Extension or Continuation", "Citation": "LoFTR", "Explanation": "The cited work, LoFTR, is discussed in the context of the results presented in Table 1. The method is used to improve accuracy in the relative object pose estimation task by utilizing the entire image for matching and incorporating more textural details from the background."}, {"Category": "Extension or Continuation", "Citation": "Gen6D", "Explanation": "The cited work, Gen6D, is mentioned in the context of the qualitative results presented in Figure 5. The method is compared to correspondence-based methods due to the significant pose gap between the support and target views."}, {"Category": "Extension or Continuation", "Citation": "LoFTR", "Explanation": "The cited work, LoFTR, is discussed in the context of the qualitative results presented in Figure 5. 
The method is shown to be susceptible to the presence of similar patterns between the object and background, leading to performance issues in the relative object pose estimation task."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The cited work, LOFTR, provides a method for reconstructing a semi-dense point cloud using COLMAP, which the citing paper utilizes in their research to address the need for sparse-view datasets in real-world scenarios."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work, COLMAP, is used in the cited work to reconstruct a semi-dense point cloud, which the citing paper leverages in their research to address the requirement for sparse-view datasets in real-world scenarios."}, {"Category": "Extension or Continuation", "Citation": "Randomly selecting an image and performing object segmentation in a promptable manner is an extension of the method presented in the cited work."}, {"Category": "Data Source", "Citation": "The cited work provides a method for conducting image matching between the prompt image and the newly added object image, which the citing paper utilizes in their research to estimate the pose of the new object image."}, {"Category": "Extension or Continuation", "Citation": "The cited work provides a method for minimizing reprojection errors and performing back-projection to obtain an optimized, accurate object point cloud, which the citing paper extends to update the sparse point cloud in their research."}, {"Category": "Methodological Basis", "Citation": "The cited work provides a method for solving PnP to estimate the pose of the new object image, which the citing paper utilizes in their research to update the sparse point cloud."}, {"Category": "Extension or Continuation", "Citation": "The cited work provides a method for visualizing the performance curve by randomly increasing the number of views, which the citing paper extends to demonstrate the scalability of their method."}, {"Category": "Methodological Basis", "Citation": "[67]", "Explanation": "The cited work, Neural Radiance Field (GNT), is used as a pre-trained and generalizable model in the downstream application of novel view synthesis. The POPE model is employed to obtain multi-view poses, which are then used in combination with the GNT to generate new viewpoints and render results that are similar to the ground-truth color image."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work, LINEMOD, provides a dataset that the citing paper uses to conduct a comprehensive analysis of the average median error and pose accuracy in two-view 6DoF object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[66]", "Explanation": "The cited work, YCB-Video, is another dataset that the citing paper uses to evaluate the performance of two-view 6DoF object pose estimation in a clustered background environment."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work provides a dataset with single object and rich textures, which the citing paper uses to assess the performance of two-view 6DoF object pose estimation in a specific scenario."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work offers a dataset with single object and poor textures, which the citing paper utilizes to study the performance of two-view 6DoF object pose estimation in a different context."}]
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Figure 1: Given a set of image and text style information, with the 3D geometry information, our method achieves better style transfer results on various text conditions and keeps consistency across arbitrary novel views." }, { "figure_ref": [], "heading": "Abstract", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel language-guided 3D arbitrary neural style transfer method (CLIP3Dstyler). We aim at stylizing any 3D scene with an arbitrary style from a text description, and synthesizing the novel stylized view, which is more flexible than the image-conditioned style transfer. Compared with the previous 2D method CLIP-Styler, we are able to stylize a 3D scene and generalize to novel scenes without re-train our model. A straightforward solution is to combine previous image-conditioned 3D style transfer and text-conditioned 2D style transfer meth-ods. However, such a solution cannot achieve our goal due to two main challenges. First, there is no multi-modal model matching point clouds and language at different feature scales (e.g. low-level, high-level). Second, we observe a style mixing issue when we stylize the content with different style conditions from text prompts. To address the first issue, we propose a 3D stylization framework to match the point cloud features with text features in local and global views. For the second issue, we propose an improved directional divergence loss to make arbitrary text styles more distinguishable as a complement to our framework. We conduct extensive experiments to show the effectiveness of our model on text-guided 3D scene style transfer." }, { "figure_ref": [ "fig_19", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b10", "b18", "b3", "b29", "b4", "b5", "b17", "b10", "b3", "b18", "b17", "b5", "b4", "b4", "b17", "b5", "b10", "b5", "b10", "b10" ], "table_ref": [], "text": "Vision-Language models [11,22,19,4,30] have shown superior advantages over most of the current tasks, such as semantic segmentation, object detection and action recognition. However, the 3D scenes stylised with the guidance of Vision-Language driven is rarely explored.\nIn this paper, we propose a 3D stylization framework that stylizes the 3D scenes via a given text description of the style, which can be applied to stylize novel scenes without further training. The proposed method will have multiple potential applications in the rising VR, AR, MetaVerse, etc., with more flexible user-defined features.\nThe related work would be the arbitrary 3D scene stylization via a given style image [5,6,18], and the textdriven 2D image stylization [11,4,19]. The current 3D stylization work is built upon the point cloud 3D representations [18,6] or the recently popular Neural Radiance Field (NeRF) [5]. Even though NeRF has several advantages over point clouds, such as easy training with only multiple views inputs and smoother interpolation between different views, the NeRF-based stylization models can only be applied to a single scene [5], which is inapplicable for stylizing multiple scenes or generalization on novel scenes, shown in Figure 2. In the CLIPNeRF [31], although the proposed 3D stylization method with text style condition shows a stable 3D consistency before and after stylization, it leads to a barely obvious style effect with the given text style condition (Figure 9), which is still under-explored for NeRFbased method. 
Thus, in this paper, we build our CLIP3Dstyler upon the point cloud representation. One of the key components of arbitrary style transfer is matching the content and style of the stylized images with the input content and style images. To match the 3D features with a given 2D style feature, [18,6] project the 3D point descriptors onto the 2D plane and turn the task into a 2D image feature matching problem with a pre-trained image encoder. For text-driven 2D image stylization, CLIPstyler [11] utilizes the image encoder and the text encoder of CLIP [22] to match image and text features in the same metric space.\nIn the under-explored text-driven 3D point cloud stylization task, we need to match the 3D point cloud descriptors with text features. However, no such multi-modal model matches point cloud features with text captions. A natural solution is to bridge the ideas of [6] and [11]: project the 3D point cloud back to the 2D image space and match the text-image features using CLIP. However, this straightforward solution faces two major challenges. First, previous works transfer content images by referring to a style image, where the content and the style are of the same modality. Thus, the multi-scale features extracted from different layers of a pre-trained VGG are all in the same feature space for the content, style, and stylized images, which is crucial for balancing a faithful global style and local content details. However, in the pre-trained CLIP network there is no such concept of a multi-scale feature for image-text matching; there is only a deep feature from the text encoder. In general, the deep style feature should transfer the deep content feature, but the point cloud descriptors are shallow-layer features, which causes blurred content and an unfaithful style effect when stylizing novel views. Second, we observe that directly adopting the style matching loss of CLIPstyler [11] for transferring multiple text styles leads to a mixing of style effects, as shown in Figure 3, which is undesirable for general arbitrary stylization. This style mixing effect is the key factor preventing the model from learning different text styles.\nTo address the above issues, we propose a more general framework for language-guided 3D point cloud stylization (CLIP3Dstyler). CLIP3Dstyler can be trained on multiple scenes with arbitrary text styles and achieves content consistency and a faithful style effect without mixing. To achieve this, we propose complementing the local point cloud descriptors with a feature computed from a global view of the entire 3D scene to match the global text style feature. Furthermore, we design a lightweight module to extract the global feature, whose additional overhead is negligible compared to our full model, making it efficient yet effective. To fix the style mixing problem when multiple styles are used, we propose an improved directional divergence loss that separates the style effects from one another and significantly enhances stylization. Our model can also generalize to stylize novel scenes without retraining, as shown in Figure 2.
For more details, please refer to Section 3. Our contribution is summarized in three points: 1) we present a new framework to address the language-guided 3D style transfer task; 2) we rethink text style as global information and generate associated global features from the point cloud to achieve better performance with a higher CLIP score; 3) we introduce a new directional divergence loss to solve the style mixing problem." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b18", "b8", "b10", "b15", "b0", "b27", "b16", "b28", "b13", "b1", "b4", "b5", "b17", "b5", "b17", "b14", "b19", "b31", "b33", "b7", "b24", "b11", "b34", "b11", "b34", "b19" ], "table_ref": [], "text": "Language-driven models. Recently, OpenAI introduced the CLIP [22] pre-trained model to bridge the text and image modalities. Through contrastive learning on 400 million image-text pairs, CLIP shows the great potential of language-conditioned models and has enabled much exciting research on text-image interaction. For example, StyleCLIP [19] performs different levels of attribute manipulation on StyleGAN [9] from text information. DALL•E 2 [3] integrates CLIP with a diffusion model to generate high-resolution and vivid images. CLIPstyler [11] performs text-driven style transfer by minimizing the difference between the directional distances of text and image features extracted by the CLIP encoders. Our text-driven 3D stylization task extends 2D style transfer to 3D space, as it requires synthesizing novel views and keeping consistency between different views.\nNovel view synthesis. Novel view synthesis from multiple or single images can be achieved by projection [16,1] or by volume-based differentiable rendering. In volume rendering [28,17,29,14], each pixel of a view image emits a ray, and the pixel value is obtained by integrating color and opacity along the ray, with a neural network serving as an implicit representation of the 3D space. This network must be evaluated thousands of times during rendering, which is the most severe bottleneck for rendering speed. Inspired by the traditional rendering pipeline, SynSin [33] proposed a projection-based differentiable rendering pipeline that softly projects points onto a plane with a z-buffer to generate a feature map at the associated viewpoint; a decoder is then attached to produce the rendered result. This approach is efficient and generalizes to arbitrary scenes. Hence, our model is based on projection-based differentiable rendering.\n3D stylization. 3D content stylization [2,5,6,18] has attracted growing interest in recent years. StyleScene [6] constructs a point cloud from a set of images and performs a linear style transformation on each point's associated feature. 3D Photo Stylization [18] generates a point cloud via depth estimation from a single image and applies a GCN to extract geometric features of the 3D scene; with Adaptive Attention Normalization (AdaAttN) [15], styles and contents are matched and combined by an attention mechanism, and novel views are synthesized by back-projection. CLIPNeRF [31] uses a neural radiance field as the 3D representation and a text prompt as the style information, but it can only change the color of the scene. In contrast, our approach achieves a much more significant style transfer effect.\nDeep learning for point cloud processing.
Deep neural networks that take point clouds as input have been widely studied for classification [20,21,32,34], semantic segmentation [36,8,25], and object detection [12,35]. Voxel-based methods [37,12,35] rasterize the 3D space and obtain feature maps for subsequent operations, but memory consumption grows exponentially as the resolution increases. Point-based methods [20,21,36] feed the point cloud into the network directly without complex preprocessing and are more memory friendly. Our approach uses a point-based method to extract features from the point cloud so that it can handle millions of points in a scene." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Given a set of 3D scenes represented as point clouds $\{P_n\}_{n=1}^N$ and text style descriptions $\{S_m\}_{m=1}^M$, our goal is to stylize the given 3D point clouds with arbitrary text styles and synthesize novel stylized views of the 3D scenes. We decompose the proposed framework into three stages, described in the following subsections: point cloud construction, point cloud stylization, and stylized novel view synthesis." }, { "figure_ref": [], "heading": "Point Cloud Construction", "publication_ref": [ "b25", "b26", "b5" ], "table_ref": [], "text": "Given a group of images from a specific scene, we estimate the relative image poses via COLMAP [26]. We then compute full depth maps with MVS [27] to construct a 3D point cloud for the scene. Similar to [6], rather than building a point cloud from image-level features, we downsample it into a 3D point cloud of features given the 2D feature maps extracted by a pre-trained VGG model." }, { "figure_ref": [], "heading": "Point Cloud Stylization", "publication_ref": [], "table_ref": [], "text": "Our method inserts the style into the point cloud descriptors by changing the distribution of the content features. Specifically, a linear transformation module [13] predicts a transformation matrix $T$ by matching the covariance statistics of the content features and the text style features. Given the feature vectors of the point cloud and the text style embedding, the modulated point cloud features are computed as\n$f_p^d = T\,(f_p^c - \overline{f_p^c}) + \overline{f_s}$   (1)\nwhere $\overline{f_p^c}$ is the mean of the point cloud features $f_p^c$, $\overline{f_s}$ is the mean of the text style features $f_s$, and $f_p^d$ is the transferred point cloud feature. In previous work, the reference images provide multi-scale features extracted from different layers of a pre-trained VGG. The point cloud features are extracted from multi-view images by a three-layer pre-trained VGG encoder, so each point's feature represents the surrounding local receptive field in an image, and the multi-scale features from the reference images provide a matching representation scale for point cloud stylization.\nHowever, the connection across modalities in CLIP is limited to the last layer of the encoders, which only provides a deep feature of the text style. Therefore, there is a representation-scale mismatch between the content and the style. To remove this contradiction, we extract a global feature from the point cloud to match the global text style feature via point-wise convolution and max pooling. Then, we use the same transformation matrix $T$ to transfer the global feature representation. The global feature is attached to each point's feature to prepare for view projection.
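To make the linear stylization step concrete, the following is a minimal PyTorch sketch of a covariance-matched linear transformation in the spirit of Eq. (1). It assumes the point features and the CLIP text embeddings have already been compressed to a common dimension (64 in the implementation details); the module and head names are illustrative and not the paper's exact layers.

```python
import torch
import torch.nn as nn

class LinearStyleTransform(nn.Module):
    """Predicts a transformation matrix T from content and style statistics,
    then applies f_d = T (f_c - mean(f_c)) + mean(f_s), as in Eq. (1)."""
    def __init__(self, dim=64):
        super().__init__()
        # Small heads mapping a (dim x dim) covariance matrix to a (dim x dim)
        # transform; the real module uses convolution + fully-connected layers.
        self.content_head = nn.Linear(dim * dim, dim * dim)
        self.style_head = nn.Linear(dim * dim, dim * dim)
        self.dim = dim

    @staticmethod
    def covariance(feats):
        # feats: (N, dim) -> (dim, dim) covariance of zero-centered rows
        centered = feats - feats.mean(dim=0, keepdim=True)
        return centered.t() @ centered / max(feats.shape[0] - 1, 1)

    def forward(self, point_feats, text_feats):
        # point_feats: (P, dim) per-point content features
        # text_feats:  (S, dim) text embeddings of the style prompt templates
        d = self.dim
        t_c = self.content_head(self.covariance(point_feats).flatten()).view(d, d)
        t_s = self.style_head(self.covariance(text_feats).flatten()).view(d, d)
        T = t_s @ t_c                                   # T = T_s T_c
        centered = point_feats - point_feats.mean(dim=0, keepdim=True)
        return centered @ T.t() + text_feats.mean(dim=0, keepdim=True)

if __name__ == "__main__":
    module = LinearStyleTransform(dim=64)
    stylized = module(torch.randn(1024, 64), torch.randn(79, 64))
    print(stylized.shape)  # torch.Size([1024, 64])
```

In practice the two heads are trained jointly with the decoder so that the predicted transform both matches the style statistics and preserves content; the same matrix $T$ is reused for the global feature introduced next.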
The modulated global feature is computed as\n$f_g^c = \mathrm{MaxPool}(\mathrm{conv}(f^c)), \quad f_g^d = T\,(f_g^c - \overline{f^c}) + \overline{f_s}$   (2)\nThe transformation matrix is computed from the text style covariance matrix $T_s$ and the point cloud content covariance matrix $T_c$. The text features are obtained by feeding the text encoder with different prompt sentences that embed a single text style. Once the style and content features have been computed, convolution layers followed by a fully-connected layer produce the covariance matrices $T_s$ and $T_c$. Finally, we obtain the transformation matrix $T = T_s T_c$." }, { "figure_ref": [], "heading": "Stylized Novel View Synthesis", "publication_ref": [ "b23" ], "table_ref": [], "text": "After the point cloud has been stylized, the next step is to generate the stylized images at a specified view. View synthesis is achieved by projecting the point features to an image plane with a z-buffer, the camera pose, and the intrinsic parameters. Finally, the projected features are mapped to an image by a decoder.\n(Figure: overview of the stylized view synthesis pipeline, illustrated with the text prompts "An ink wash painting", "pop art of night city", and "A Photo".)\nProjection. Our projector follows Wiles et al. [33] and projects the point cloud features onto a 2D plane to generate a feature map.
In the projection process, a z-buffer accumulates the K closest points for each pixel. Each point affects pixels within a radius r on the image plane, which improves backpropagation and the optimization of the decoder.\nDecoder. The decoder maps the projected feature map to an RGB image. It is implemented as a convolutional network following the U-Net design [24], with down-sampling and up-sampling operations." }, { "figure_ref": [], "heading": "Loss Function", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Style Loss", "publication_ref": [ "b3", "b10", "b10" ], "table_ref": [], "text": "To guide the content scene to follow the semantics of the text style, StyleGAN-NADA [4] proposed a directional loss that aligns the direction of the CLIP features between the text-image pairs of source and target. The directional loss is given by:\n$\Delta T = E_T(\mathrm{text}_s) - E_T(\mathrm{text}_c), \quad \Delta I = E_I(f(\{P_n^s\}_{n=1}^N)) - E_I(\mathrm{image}_c), \quad L_{dir} = 1 - \dfrac{\Delta I \cdot \Delta T}{|\Delta I|\,|\Delta T|}$   (3)\nwhere $\{P_n^s\}_{n=1}^N$ is the stylized point cloud, $E_T$ is the CLIP text encoder, $E_I$ is the CLIP image encoder, $\mathrm{text}_s$ is the target text style, and $\mathrm{text}_c$ is the source text "a Photo". $f$ is an operation that projects the points to the view associated with the ground-truth image and renders the transferred image with the decoder.\nTo improve the local texture of the transferred images, CLIPstyler [11] proposed a patch-based CLIP directional loss. Specifically, the model randomly crops several patches from the rendered $\mathrm{image}_s$. The size of the cropped patches is fixed, and random perspective augmentation is applied to the N cropped patches $\widehat{\mathrm{image}}_s^i$. To alleviate the effect of patches for which the CLIP loss is too easy to minimize, patches whose $l^i_{patch}$ value falls below a threshold $\tau$ are rejected (their loss is set to zero). The PatchCLIP loss is defined as:\n$\mathrm{image}_s = f(\{P_n^{s_j}\}_{n=1}^N), \quad \Delta T = E_T(\mathrm{text}_s) - E_T(\mathrm{text}_c), \quad \Delta I = E_I(\mathrm{aug}(\widehat{\mathrm{image}}_s^i)) - E_I(\mathrm{image}_c),$\n$l^i_{patch} = 1 - \dfrac{\Delta I \cdot \Delta T}{|\Delta I|\,|\Delta T|}, \quad L_{patch} = \dfrac{1}{N}\sum_i^N R(l^i_{patch}, \tau), \quad \text{where } R(s, \tau) = \begin{cases} 0, & \text{if } s \le \tau \\ s, & \text{otherwise} \end{cases}$   (4)\nWhen we directly adopt the style matching loss of CLIPstyler [11] for transferring multiple text styles, the different styles mix easily because the PatchCLIP loss only constrains the directional distance from source to target and imposes no constraint between different styles. To solve this issue, we propose a directional divergence loss.\nIn a batch, we randomly sample N pairs of data and minimize the $L_{dir}$ loss between data pairs of different styles:\n$\Delta T = E_T(\mathrm{text}_{s_i}) - E_T(\mathrm{text}_{s_j}), \quad \mathrm{image}_{s_i} = f(\{P_n^{s_i}\}_{n=1}^N), \quad \mathrm{image}_{s_j} = f(\{P_n^{s_j}\}_{n=1}^N),$\n$\Delta I = E_I(\mathrm{image}_{s_i}) - E_I(\mathrm{image}_{s_j}), \quad L_{dir} = 1 - \dfrac{\Delta I \cdot \Delta T}{|\Delta I|\,|\Delta T|}$   (5)\nwhere $\mathrm{text}_{s_i}$ and $\mathrm{text}_{s_j}$ are different text styles from the dataset, and $\{P_n^{s_i}\}_{n=1}^N$ and $\{P_n^{s_j}\}_{n=1}^N$ are point clouds transferred by the two different styles. Further, we project different stylized views in a batch to help the model converge faster and more robustly. If we used the equation above directly to measure the similarity of the text-image directional distance between different styles, the content disparity between different views would also be included. By measuring the similarity between the content views, we remove this noise from the loss function:\n$f^i_c = E_I(\mathrm{image}_{c_i}), \quad f^j_c = E_I(\mathrm{image}_{c_j}), \quad L_{cd} = 1 - \dfrac{f^i_c \cdot f^j_c}{|f^i_c|\,|f^j_c|}$   (6)\nwhere $\mathrm{image}_{c_i}$ and $\mathrm{image}_{c_j}$ are the content image pairs used earlier for point cloud construction.
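As a complement to the equations above, the snippet below is a minimal PyTorch-style sketch of the style loss terms in Eqs. (3)-(6); the combination into the total style loss follows in the text below. It assumes the CLIP features of the rendered views, augmented patches, content images, and prompt texts are computed elsewhere, and the function and dictionary key names are illustrative rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def directional_loss(img_feat_a, img_feat_b, txt_feat_a, txt_feat_b):
    """1 - cos(image direction, text direction); the form of Eqs. (3) and (5)."""
    d_img = img_feat_a - img_feat_b
    d_txt = txt_feat_a - txt_feat_b
    return 1.0 - F.cosine_similarity(d_img, d_txt, dim=-1).mean()

def patch_clip_loss(patch_feats, content_feat, txt_style, txt_source, tau=0.7):
    """Thresholded patch-wise directional loss, Eq. (4).
    patch_feats: (N, D) CLIP features of augmented crops of the rendered image
    content_feat: (1, D) CLIP feature of the content image
    """
    d_img = patch_feats - content_feat          # (N, D)
    d_txt = txt_style - txt_source              # (1, D)
    l_patch = 1.0 - F.cosine_similarity(d_img, d_txt, dim=-1)
    # R(s, tau): zero out patches whose loss is already below the threshold.
    l_patch = torch.where(l_patch <= tau, torch.zeros_like(l_patch), l_patch)
    return l_patch.mean()

def content_disparity(content_feat_i, content_feat_j):
    """L_cd in Eq. (6): similarity gap between two content views."""
    return 1.0 - F.cosine_similarity(content_feat_i, content_feat_j, dim=-1).mean()

def style_loss(feats):
    """L_s = L_patch + L_dir - L_cd for one sampled pair of styles (i, j)."""
    l_patch = patch_clip_loss(feats["patches_i"], feats["content_i"],
                              feats["txt_i"], feats["txt_src"])
    l_dir = directional_loss(feats["stylized_i"], feats["stylized_j"],
                             feats["txt_i"], feats["txt_j"])
    l_cd = content_disparity(feats["content_i"], feats["content_j"])
    return l_patch + l_dir - l_cd
```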
Altogether, our style loss function is $L_s = L_{patch} + L_{dir} - L_{cd}$." }, { "figure_ref": [], "heading": "Content Loss", "publication_ref": [], "table_ref": [], "text": "The preservation of content information is ensured by the VGG perceptual loss $L_{feat}$ between the synthesized image and the ground-truth image. An RGB pixel-level L1 loss is used to stabilize the model during training. Finally, the content loss is $L_c = \lambda_{feat} L_{feat} + \lambda_{rgb} L_{rgb}$." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b22", "b9" ], "table_ref": [], "text": "To align our experimental settings with the previous method [23], we conduct our experiments on the dataset of [10]. We split the scenes into training and testing sets: we train our model on sixteen scenes and test generalization on the four held-out testing scenes \"M60, Truck, Train, and Playground\". Our text prompts for stylization are kept consistent between the training and test sets." }, { "figure_ref": [ "fig_0" ], "heading": "Qualitative results", "publication_ref": [ "b10", "b6", "b10", "b5" ], "table_ref": [], "text": "To compare the novel stylized views produced by different methods, we stylize the whole 3D scene with each method and sample the stylized views from the same camera poses for comparison.\nBecause StyleScene is conditioned on style images rather than on text prompts as in our model, we search for the style images that best match our text prompts so that StyleScene can generate comparable stylized images. For CLIPstyler [11], we replace the original AdaIN [7] with the more advanced linear style transfer [13] for a fair comparison, and the same choice is used for StyleScene and our method. Because CLIPstyler [11] only supports a single scene and cannot prevent the style mixing issue mentioned in Figure 3, we need to train a separate model for every combination of scene and text style, in other words, one model for stylizing a single scene with a specific text style. In contrast, our method supports multiple scenes and multiple styles and generalizes to novel scenes with a single trained model. Figure 6 shows the qualitative comparison of our approach, which synthesizes novel views, with the 2D text stylization method and StyleScene [6].\nWith the geometry information augmenting our 3D stylization, our model generates preferable stylization results and better consistency across views, and is more stable than the 2D method." }, { "figure_ref": [], "heading": "Quantitative results", "publication_ref": [], "table_ref": [], "text": "User Study. In this experiment, we create an anonymous voting pool for comparing different methods. We generate 21 different stylized scenes and convert them to GIFs following the order of the camera poses. In the voting session, users are asked to choose the preferable stylized scenes with respect to a more faithful style effect or better view consistency. In total, 60 participants successfully finished the voting questionnaire. As shown in Fig-" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "Table 4: CLIP score of global and non-global. With the enhanced global feature transformation, the stylized views achieve a higher CLIP score and better match the text-described style.\nWe average the results from 21 different styles for all test scenes and report the mean value in Table 2 and Table 3.
For both short-range and long-range consistency, the proposed method reaches lower RMSE values, which means that the color variation of pixels projected from the same point onto different views is smaller than with the 2D method." }, { "figure_ref": [], "heading": "Ablation studies", "publication_ref": [], "table_ref": [], "text": "Comparison with CLIPNeRF. To compare with the alternative solution for 3D scene stylization, we conduct a direct comparison between our method and the recent CLIPNeRF [31] on the same scene with the same text style prompts. CLIPNeRF appears to learn only a color shift for the given style, which is clearly weaker than our result. Effect of global feature transformation. We compare the stylization results with and without the global operation and calculate the CLIP score to measure whether the global information helps the model better match the text-described style. In Figure 7, with the global information, we obtain much higher contrast and fuller texture details on the ground and the object surfaces. Without the globally transferred feature, point-wise feature style transformation makes the truck and the tank in the figure obscure and indistinct. We also report the CLIP score of the rendered results with global and non-global style transformation in Table 4." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce text-conditioned stylization of 3D scenes, generating novel views from a set of images with a semantic text style. The model effectively distinguishes different text styles by introducing new directional distance constraints. Integrating the globally transferred feature into the projected feature map enhances the model's performance on fine details. We demonstrate the efficacy of our method through extensive qualitative and quantitative studies." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Figure 11: additional results, shown as frames 1-26 of novel-view sequences for the styles \"Forest\", \"A fauvism style painting with bright color\", \"An ink wash painting\", and \"The scream by edvard munch\"." }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [ "b10", "b10" ], "table_ref": [], "text": "Training. We first train the decoder without inserting the style into the point cloud features, with a batch size of 4 and a learning rate of 0.0001. The weight $\lambda_{rgb}$ of the pixel reconstruction loss $L_{rgb}$ is 0.005, and the weight $\lambda_{feat}$ of the feature perceptual loss $L_{feat}$ is 1.0. We then train the style transformation module with a batch size of 4 and a learning rate of 0.0001. To make the model converge faster, we select different views of a scene within a batch. We also include the $L_{tv}$ and $L_{gs}$ losses from [11] to alleviate side artifacts and maintain the global content of an image. For the hyperparameters, we set $\lambda_{rgb}$, $\lambda_{feat}$, $\lambda_s$, and $\lambda_{tv}$ to 5×10⁻³, 1, 1.5×10, and 1.3×10⁻⁶, respectively. We use the Adam optimizer with β₁ = 0.9 and β₂ = 0.9999 for all network training.\nCLIP Style Implementation Details. The input to the CLIP model is an image at 224 × 224 resolution, so we resize images before feeding them into CLIP's image encoder. Following [11], we randomly crop 64 patches of size 96 and apply random perspective augmentation to each patch.
For the threshold rejection τ, we set τ to 0.7. To measure the directional divergence between different styles, we randomly sample 80% of all pairs of different styles to reduce the computation cost.\nStyle Transformation Module. Following the linear transformation module [13], we compress the point cloud feature from 256 to 64 dimensions with an MLP layer. By reducing the feature dimension, we accelerate the covariance matrix computation of the point cloud and the transformation process.\nFor the text style, we insert the text prompt into 79 template sentences to obtain representations of the style description from different perspectives. We then compress the feature size from 512 to 64 to calculate the covariance matrix. 2D Method Experiment. Since we do not have ground truth images for the synthesized views, we first generate the associated views without the style, as shown in Figure 10, and use the 2D method to stylize all of the views. In the training stage of the 2D method, each model only supports a single style and a single image. Therefore, we train each model with the multiple images from a single scene and one style." }, { "figure_ref": [], "heading": "Limitation and Future Direction", "publication_ref": [], "table_ref": [], "text": "First, our model relies on structure from motion to generate the point cloud of a scene, which requires many images with overlapping views. If the provided pictures have no overlapping views with each other or the number of photos is very small, the SfM algorithm cannot generate a good point cloud to represent the scene. Second, we extract the point cloud features from the 2D images with a pre-trained VGG encoder; geometry information is not injected into these features, which causes consistency problems across views. Third, we train the model with a batch size of 4, which means we have to cache four different stylized point clouds, and this consumes a lot of memory. With better memory optimization, we could enlarge the batch size to make the model converge faster. For example, we could pre-calculate the projected features and record the point cloud indices; we would then only need to cache the projected stylized features instead of copying the other features, reducing the GPU memory consumption." }, { "figure_ref": [], "heading": "Additional Results", "publication_ref": [], "table_ref": [], "text": "" } ]
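To make the patch-based objective and the implementation details above concrete, the following is a minimal PyTorch-style sketch of the patch-wise CLIP directional loss with threshold rejection (64 random 96×96 patches, random perspective augmentation, τ = 0.7), corresponding to Eq. (4). The `encode_image` handle, the precomputed text embeddings, and all tensor shapes are assumptions made for illustration; this is a sketch, not the authors' exact implementation.

```python
# Sketch of the patch-wise CLIP directional loss with threshold rejection.
# Assumed inputs: `stylized`, `content` are rendered views (B, 3, H, W) in [0, 1];
# `e_style_txt`, `e_content_txt` are precomputed CLIP text embeddings (1 or B, D);
# `encode_image` is a handle to a frozen CLIP image encoder returning (B, D).
import torch
import torch.nn.functional as F
from torchvision import transforms

patch_aug = transforms.Compose([
    transforms.RandomCrop(96),                                  # 96x96 patches
    transforms.RandomPerspective(distortion_scale=0.5, p=1.0),  # perspective augmentation
    transforms.Resize(224),                                     # CLIP input resolution
])

def patch_clip_loss(stylized, content, e_style_txt, e_content_txt,
                    encode_image, n_patches=64, tau=0.7):
    d_txt = e_style_txt - e_content_txt                         # Delta T
    e_content_img = encode_image(
        F.interpolate(content, size=(224, 224), mode="bilinear", align_corners=False))
    losses = []
    for _ in range(n_patches):
        patch = patch_aug(stylized)                             # aug(image_s) per patch
        d_img = encode_image(patch) - e_content_img             # Delta I per patch
        losses.append(1.0 - F.cosine_similarity(d_img, d_txt, dim=-1))
    l = torch.stack(losses)                                     # (n_patches, B)
    l = torch.where(l <= tau, torch.zeros_like(l), l)           # threshold rejection R(s, tau)
    return l.mean()
```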
2023-05-26
[ { "authors": "Kara-Ali Aliev; Artem Sevastopolsky; Maria Kolos; Dmitry Ulyanov; Victor Lempitsky", "journal": "", "ref_id": "b0", "title": "Neural point-based graphics", "year": "2020" }, { "authors": "Weimin Xu Cao; Katashi Wang; Ryosuke Nagao; Nakamura", "journal": "", "ref_id": "b1", "title": "Psnet: A style transfer network for point cloud stylization on geometry and color", "year": "2020-03" }, { "authors": "Aditya Ramesh", "journal": "", "ref_id": "b2", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Rinon Gal; Or Patashnik; Haggai Maron; Gal Chechik; Daniel Cohen-Or", "journal": "", "ref_id": "b3", "title": "Stylegan-nada: Clip-guided domain adaptation of image generators", "year": "2021" }, { "authors": "Jiatao Gu; Lingjie Liu; Peng Wang; Christian Theobalt", "journal": "", "ref_id": "b4", "title": "Stylenerf: A style-based 3d aware generator for highresolution image synthesis", "year": "2022" }, { "authors": "Hsin-Ping Huang; Hung-Yu Tseng; Saurabh Saini; Maneesh Singh; Ming-Hsuan Yang", "journal": "", "ref_id": "b5", "title": "Learning to stylize novel views", "year": "2021" }, { "authors": "Xun Huang; Serge Belongie", "journal": "", "ref_id": "b6", "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "year": "2017" }, { "authors": "Mingyang Jiang; Yiran Wu; Tianqi Zhao; Zelin Zhao; Cewu Lu", "journal": "", "ref_id": "b7", "title": "Pointsift: A sift-like network module for 3d point cloud semantic segmentation", "year": "2018" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b8", "title": "A style-based generator architecture for generative adversarial networks", "year": "2018" }, { "authors": "Arno Knapitsch; Jaesik Park; Qian-Yi Zhou; Vladlen Koltun", "journal": "ACM Trans. 
Graph", "ref_id": "b9", "title": "Tanks and temples: Benchmarking large-scale scene reconstruction", "year": "2017-07" }, { "authors": "Gihyun Kwon; Jong Chul; Ye ", "journal": "", "ref_id": "b10", "title": "Clipstyler: Image style transfer with a single text condition", "year": "2009" }, { "authors": "Alex H Lang; Sourabh Vora; Holger Caesar; Lubing Zhou; Jiong Yang; Oscar Beijbom", "journal": "", "ref_id": "b11", "title": "Pointpillars: Fast encoders for object detection from point clouds", "year": "2018" }, { "authors": "Xueting Li; Sifei Liu; Jan Kautz; Ming-Hsuan Yang", "journal": "", "ref_id": "b12", "title": "Learning linear transformations for fast arbitrary style transfer", "year": "2019" }, { "authors": "Lingjie Liu; Jiatao Gu; Kyaw Zaw Lin; Tat-Seng Chua; Christian Theobalt", "journal": "NeurIPS", "ref_id": "b13", "title": "Neural sparse voxel fields", "year": "2020" }, { "authors": "Songhua Liu; Tianwei Lin; Dongliang He; Fu Li; Meiling Wang; Xin Li; Zhengxing Sun; Qian Li; Errui Ding", "journal": "", "ref_id": "b14", "title": "Adaattn: Revisit attention mechanism in arbitrary neural style transfer", "year": "2021" }, { "authors": "Moustafa Meshry; Dan B Goldman; Sameh Khamis; Hugues Hoppe; Rohit Pandey; Noah Snavely; Ricardo Martin-Brualla", "journal": "", "ref_id": "b15", "title": "Neural rerendering in the wild", "year": "2019-06" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b16", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Fangzhou Mu; Jian Wang; Yicheng Wu; Yin Li", "journal": "", "ref_id": "b17", "title": "3d photo stylization: Learning to generate stylized novel views from a single image", "year": "2022" }, { "authors": "Or Patashnik; Zongze Wu; Eli Shechtman; Daniel Cohen-Or; Dani Lischinski", "journal": "", "ref_id": "b18", "title": "Styleclip: Text-driven manipulation of stylegan imagery", "year": "2021-10" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b19", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2016" }, { "authors": "Li Charles R Qi; Hao Yi; Leonidas J Su; Guibas", "journal": "", "ref_id": "b20", "title": "Point-net++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b21", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Gernot Riegler; Vladlen Koltun", "journal": "", "ref_id": "b22", "title": "Free view synthesis", "year": "2020" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b23", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Peer Radu Alexandru Rosu; Jan Schütt; Sven Quenzel; Behnke", "journal": "", "ref_id": "b24", "title": "Latticenet: Fast point cloud segmentation using permutohedral lattices", "year": "2020" }, { "authors": "Johannes Lutz; Schönberger ; Jan-Michael Frahm", "journal": "", "ref_id": "b25", "title": "Structure-from-motion revisited", "year": "2016" }, { "authors": "Johannes Lutz Schönberger; Enliang Zheng; Marc Pollefeys; Jan-Michael Frahm", 
"journal": "", "ref_id": "b26", "title": "Pixelwise view selection for unstructured multi-view stereo", "year": "2016" }, { "authors": "P Pratul; Boyang Srinivasan; Xiuming Deng; Matthew Zhang; Ben Tancik; Jonathan T Mildenhall; Barron", "journal": "", "ref_id": "b27", "title": "Nerv: Neural reflectance and visibility fields for relighting and view synthesis", "year": "2021" }, { "authors": "Matthew Tancik; Vincent Casser; Xinchen Yan; Sabeek Pradhan; Ben Mildenhall; Pratul Srinivasan; Jonathan T Barron; Henrik Kretzschmar", "journal": "", "ref_id": "b28", "title": "Block-NeRF: Scalable large scene neural view synthesis", "year": "2022" }, { "authors": "Yael Vinker; Ehsan Pajouheshgar; Jessica Y Bo; Roman Christian Bachmann; Amit Haim Bermano; Daniel Cohen-Or; Amir Zamir; Ariel Shamir", "journal": "", "ref_id": "b29", "title": "Clipasso: Semantically-aware object sketching", "year": "2022" }, { "authors": "Can Wang; Menglei Chai; Mingming He; Dongdong Chen; Jing Liao", "journal": "", "ref_id": "b30", "title": "Clip-nerf: Text-and-image driven manipulation of neural radiance fields", "year": "2021" }, { "authors": "Yue Wang; Yongbin Sun; Ziwei Liu; Sanjay E Sarma; Michael M Bronstein; Justin M Solomon", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b31", "title": "Dynamic graph cnn for learning on point clouds", "year": "2019" }, { "authors": "Olivia Wiles; Georgia Gkioxari; Richard Szeliski; Justin Johnson", "journal": "", "ref_id": "b32", "title": "SynSin: End-to-end view synthesis from a single image", "year": "2020" }, { "authors": "Wenxuan Wu; Zhongang Qi; Li Fuxin", "journal": "", "ref_id": "b33", "title": "Pointconv: Deep convolutional networks on 3d point clouds", "year": "2019-06" }, { "authors": "Yan Yan; Yuxing Mao; Bo Li", "journal": "Sensors", "ref_id": "b34", "title": "Second: Sparsely embedded convolutional detection", "year": "2018" }, { "authors": "Hengshuang Zhao; Li Jiang; Jiaya Jia; H S Philip; Vladlen Torr; Koltun", "journal": "", "ref_id": "b35", "title": "Point transformer", "year": "2021-10" }, { "authors": "Yin Zhou; Oncel Tuzel", "journal": "", "ref_id": "b36", "title": "Voxelnet: End-to-end learning for point cloud based 3d object detection", "year": "2017" } ]
[ { "formula_coordinates": [ 4, 121.53, 651.76, 164.83, 19.67 ], "formula_id": "formula_0", "formula_text": "f d p = T (f c p -f c p ) + f s (1)" }, { "formula_coordinates": [ 4, 371.28, 502.4, 173.84, 35.67 ], "formula_id": "formula_1", "formula_text": "f c g = M axP ool(conv(f c )) f d g = T (f c g -f c ) + f s(2)" }, { "formula_coordinates": [ 5, 124.73, 83.27, 27.21, 57.78 ], "formula_id": "formula_2", "formula_text": "P i + T s a 3 L U 1 a R P g Z E F M = \" > A A A B 7 n i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g q s x I U Z d F N y 4 r 2 A e 0 Q 8 m k G R u a y Q z J H a E M / Q g 3 L h R x 6 / e 4 8 2 9 M p 7 P Q 1 g O B k 3 P u T e 4 9 Q S K F Q d f 9 d k p r 6 x u b W + X t y s 7 u 3 v 5 B 9 f C o Y + J U M 9 5 m s Y x 1 L 6 C G S 6 F 4 G w V K 3 k s 0 p 1 E g e T e Y 3 M 7 9 7 h P X R s T q A a c J 9 y P 6 q E Q o G E U r d Y W 9 8 i E b V m t u 3 c 1 B V o l X k B o U a A 2 r X 4 N R z N K I K 2 S S G t P 3 3 A T 9 j G o U T P J Z Z Z A a n l A 2 s a / 3 L V U 0 4 s b P 8 n F n 5 M w q I x L G 2 h 6 F J F d / d 2 Q 0 M m Y a B b Y y o j g 2 y 9 5 c / M / r p x h e + 5 l Q S Y p c s c V H Y S o J x m S + O x k J z R n K q S W U a W F n J W x M N W V o E 6 r Y E L z l l V d J 5 6 L u X d Y b 9 4 1 a 8 6 a I o w w n c A r n 4 M E V N O E O W t A G B h N 4 h l d 4 c x L n x X l 3 P h a l J a f o O Y Y / c D 5 / A E w V j 4 4 = < / l a t e x i t > image c < l a t e x i t s h a 1 _ b a s e 6 4 = \" r 6 V Y m P t 2 7 d K J V u 4 a g Y m c H p + D o P 4 = \" > A A A B 8 3 i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 l E 1 G P R i 8 c K 9 g O a E D b b q V 2 7 2 Y T d j V B C / 4 Y X D 4 p 4 9 c 9 4 8 9 + 4 b X P Q 1 g c L b 9 6 b Y W Z f l A q u j e t + O 6 W V 1 b X 1 j f J m Z W t 7 Z 3 e v u n / Q 1 k m m G L Z Y I h L V j a h G w S W 2 D D c C u 6 l C G k c C O 9 H o Z u p 3 n l B p n s h 7 M 0 4 x i O m D 5 A P O q L G S z 2 2 J Y a 7 9 8 H E S V m t u 3 Z 2 B L B O v I D U o 0 A y r X 3 4 / Y V m M 0 j B B t e 5 5 b m q C n C r D m c B J x c 8 0 p p S N 7 I q e p Z L G q I N 8 d v O E n F i l T w a J s k 8 a M l N / T + Q 0 1 n o c R 7 Y z p m a o F 7 2 p + J / X y 8 z g K s i 5 T D O D k s 0 X D T J B T E K m A Z A + V 8 i M G F t C m e L 2 V s K G V F F m b E w V G 4 K 3 + O V l 0 j 6 r e x f 1 8 7 v z W u O 6 i K M M R 3 A M p + D B J T T g F p r Q A g Y p P M M r v D m Z 8 + K 8 O x / z 1 p J T z B z C H z i f P 2 O i k e 0 = < / l a t e x i t > image s j < l a t e x i t s h a 1 _ b a s e 6 4 = \" b k O K x l K m i u 5 5 T B U 1 c Q F n E 7 3 r u L Q = \" > A A A B 8 3 i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 m k q M e i F 4 8 V 7 A c 0 I W y 2 W 7 t 0 s w m 7 E 6 G E / g 0 v H h T x 6 p / x 5 r 9 x 2 + a g r Q 8 W 3 r w 3 w 8 y + K J X C o O t + O 6 W 1 9 Y 3 N r f J 2 Z W d 3 b / + g e n j U M U m m G W + z R C a 6 F 1 H D p V C 8 j Q I l 7 6 W a 0 z i S v B u N b 2 d + 9 4 l r I x L 1 g J O U B z F 9 V G I o G E U r + c K W P M y N H 4 p p W K 2 5 d X c O s k q 8 g t S g Q C u s f v m D h G U x V 8 g k N a b v u S k G O d U o m O T T i p 8 Z n l I 2 t i v 6 l i o a c x P k 8 5 u n 5 M w q A z J M t H 0 K y V z 9 P Z H T 2 J h J H N n O m O L I L H s z 8 T + v n + H w O s i F S j P k i i 0 W D T N J M C G z A M h A a M 5 Q T i y h T A t 7 K 2 E j q i l D G 1 P F h u A t f 3 m V d C 7 q 3 m W 9 c d + o N W + K O M p w A q d w D h 5 c Q R P u o A V t Y J D C M" }, { "formula_coordinates": [ 5, 126.11, 178.43, 2.08, 5.15 ], "formula_id": "formula_3", "formula_text": "/ Q x T V a R k = \" > A A A B 7 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 
9 L B b B U 0 l E 1 G P R i 8 c K 9 g P a U D b b T b t 2 s w m 7 E 7 G E / g c v H h T x 6 v / x 5 r 9 x 2 + a g r Q 8 G H u / N M D M v S K Q w 6 L r f T m F l d W 1 9 o 7 h Z 2 t r e 2 d 0 r 7 x 8 0 T Z x q x h s s l r F u B 9 R w K R R v o E D J 2 4 n m N A o k b w W j m 6 n f e u T a i F j d 4 z j h f k Q H S o S C U b R S E / k T 9 l i v X H G r 7 g x k m X g 5 q U C O e q / 8 1 e 3 H L I 2 4 Q i a p M R 3 P T d D P q E b B J J + U u q n h C W U j O u A d S x W N u P G z 2 b U T c m K V P g l j b U s h m a m / J z I a G T O O A t s Z U R y a R W 8 q / u d 1 U g y v / E y o J E W u 2 H x R m E q C M Z m + T v p C c 4 Z y b A l l W t h b C R t S T R n a g E o 2 B G / x" }, { "formula_coordinates": [ 5, 124.69, 197.25, 22.07, 26.27 ], "formula_id": "formula_4", "formula_text": "M = \" > A A A B 8 n i c b V B N S 8 N A E N 3 U r 1 q / q h 6 9 L B b B U 0 m k q M e i F 4 8 V 7 A e k I W y 2 m 3 b p Z j f s T s Q S + j O 8 e F D E q 7 / G m / / G b Z u D t j 4 Y e L w 3 w 8 y 8 K B X c g O t + O 6 W 1 9 Y 3 N r f J 2 Z W d 3 b / + g e n j U M S r T l L W p E k r 3 I m K Y 4 J K 1 g Y N g v V Q z k k S C d a P x 7 c z v P j J t u J I P M E l Z k J C h 5 D G n B K z k A 3 u C M D f 9 k E / D a s 2 t u 3 P g V e I V p I Y K t M L q V 3 + g a J Y w C V Q Q Y 3 z P T S H I i Q Z O B Z t W + p l h K a F j M m S + p Z I k z A T 5 / O Q p P r P K A M d K 2 5 K A 5 + r v i Z w k x k y S y H Y m B E Z m 2 Z u J / 3 l + B v F 1 k H O Z Z s A k X S y K M 4 F B 4 d n / e M A 1 o y A m l h C q u b 0 V 0 x H R h I J N q W J D 8 J Z f X i W d i 7 p 3 W W / c N 2 r N m y K O M j p B p + g c e e g K N d E d a q E 2 o k i h Z / S K 3 h x w X p x 3 5 2 P R W n K K m W P 0 B 8 7 n D 9 v 2 k a Q = < / l a t e x i t > text s i < l a t e x i t s h a 1 _ b a s e 6 4 = \" k P W 8 x 8 g z d K X K F p b A R n O r 0 q E b K B M = \" > A A A B 8 n i c b V B N S 8 N A E N 3 U r 1 q / q h 6 9 L B b B U 0 l E 1 G P R i 8 c K 9 g P S E D b b T b t 2 s x t 2 J 2 I J / R l e P C j i 1 V / j z X / j t s 1 B W x 8 M P N 6 b Y W Z e l A p u w H W / n d L K 6 t r 6 R n m z s r W 9 s 7 t X 3 T 9 o G 5 V p y l p U C a W 7 E T F M c M l a w E G w b q o Z S S L B O t H o Z u p 3 H p k 2 X M l 7 G K c s S M h A 8 p h T A l b y g T 1 B m J t e + D A J q z W 3 7 s 6 A l 4 l X k B o q 0 A y r X 7 2 + o l n C J F B B j P E 9 N 4 U g J x o 4 F W x S 6 W W G p Y S O y I D 5 l k q S M B P k s 5 M n + M Q q f R w r b U s C n q m / J 3 K S G D N O I t u Z E B i a R W 8 q / u f 5 G c R X Q c 5 l m g G T d L 4 o z g Q G h a f /" }, { "formula_coordinates": [ 5, 337.55, 94.44, 2.64, 5.52 ], "formula_id": "formula_5", "formula_text": "v 5 B + f C o q Z N M M f R Z I h L V D q l G w S X 6 h h u B 7 V Q h j U O B r X B 0 O / N b T 6 g 0 T + S D G a c Y x H Q g e c Q Z N V b y o 0 f e Y 7 1 y x a 2 6 c 5 B V 4 u W k A j k a v f J X t 5 + w L E Z p m K B a d z w 3 N c G E K s O Z w G m p m 2 l M K R v R A X Y s l T R G H U z m x 0 7 J m V X 6 J E q U L W n I X P 0 9 M a G x 1 u M 4 t J 0 x N U O 9 7 M 3 E / 7 x O Z q L r Y M J l m h m U b L E o y g Q x C Z l 9 T v p c I T N i b A l l i t t b C R t S R Z m x + Z R s C N 7 y y 6 u k e V H 1 L q u 1 + 1 q l f p P H U Y Q T O I V z 8 O A K 6 n A H D f C B A Y" }, { "formula_coordinates": [ 5, 322.89, 125.47, 1.95, 4.83 ], "formula_id": "formula_6", "formula_text": "U 8 K y Z x 7 Q 1 a K 7 o = \" > A A A B 8 X i c b V B N S w M x E J 2 t X 7 V + V T 1 6 C R b B U 9 m V o h 6 L X j x W s B / Y r k s 2 z b a x 2 W R J s k J Z + i + 8 e F D E q / / G m / / G t N 2 D t j 4 Y e L w 3 w 8 y 8 M O F M G 9 f 9 d 
g o r q 2 v r G 8 X N 0 t b 2 z u 5 e e f + g p W W q C G 0 S y a X q h F h T z g R t G m Y 4 7 S S K 4 j j k t B 2 O r q d + + 4 k q z a S 4 M + O E + j E e C B Y x g o 2 V 7 q M H F m S 6 F z x O g n L F r b o z o G X i 5 a Q C O R p B + a v X l y S N q T C E Y 6 2 7 n p s Y P 8 P K M M L p p N R L N U 0 w G e E B 7 V o q c E y 1 n 8 0 u n q A T q / R R J J U t Y d B M / T 2 R 4 V j r c R z a z h i b o V 7 0 p u J / X j c 1 0 a W f M Z G k h g o y X x S" }, { "formula_coordinates": [ 5, 309.95, 207.03, 2.46, 4.83 ], "formula_id": "formula_7", "formula_text": "p o 0 F r H q B E Q z w S V r I k f B O o l i J A o E a w e j 6 6 n f f m J K 8 1 j e 4 T h h X k Q G k o e c E j T S f e h n u u c / T h 7 Q L 1 e c q j O D v U z c n F Q g R 8 M v f / X 6 M U 0 j J p E K o n X X d R L 0 M q K Q U 8 E m p V 6 q W U L o i A x Y 1 1 B J I q a 9 b H b x x D 4 x S t 8 O Y 2 V K o j 1 T f 0 9 k J N J 6 H A W m M y I 4 1 I v e V P z P 6 6 Y Y X n o Z l 0 m K T N L 5 o j A V N s b 2 9 H 2 7 z x W j K M a G E K q 4 u d W m Q 6 I I R R N S y Y T g L r 6 8 T F p n V f e 8 W r u t V e p X e R x F O I J j O A U X L q A O N 9 C A J l C Q 8 A y v 8 G Z" }, { "formula_coordinates": [ 5, 333.81, 209.25, 2.07, 4.4 ], "formula_id": "formula_8", "formula_text": "x h J x Y p U / C W N t S S G b q 7 4 m M R s a M o 8 B 2 R h S H Z t G b i v 9 5 n R T D K z 8 T K k m R K z Z f F K a S Y E y m 7 5 O + 0 J y h H F t C m R b 2 V s K G V F O G N q S S D c F b f H m Z N M + q 3 k X 1 / O 6 8 U r v O 4 y j C E R z D K X h w C T W" }, { "formula_coordinates": [ 5, 345.93, 187.93, 1.95, 4.83 ], "formula_id": "formula_9", "formula_text": "V k K H n I K Q E j + X Y l f A A / 0 3 0 f p v g M 5 w + K f b v q 1 J w Z 8 D J x C 1 J F B Z q + / d U f x D S N m A Q q i N Y 9 1 0 n A y 4 g C T g W b l v u p Z g m h Y z J k P U M l i Z j 2 s t n y U 3 x i l A E O Y 2 V K A p 6 p v y c y E m k 9 i Q L T G R E Y 6 U U v F / / z e i m E V 1 7 G Z Z I C k 3 T + U Z g K D D H O k 8 A D r h g F M T G E U M X N" }, { "formula_coordinates": [ 5, 322.91, 203.49, 118.8, 28.57 ], "formula_id": "formula_10", "formula_text": "G I U x M g Q Q h U 3 u 2 I 6 I I p Q M J k V T Q j u 7 M n z p H F a d s / K l Z t K q X q Z x 1 F A B + g Q H S M X n a M q u k Y 1 V E c U Z e g J v a B X 6 9 F 6 t t 6 s 9 2 n r g p X P 7 K E / s D 6 + A X O g l l Q = < / l a t e x i t > f t s j f t s i < l a t e x i t s h a 1 _ b a s e 6 4 = \" U I y W L v T c Q w d q o W R J I F f C 4 D r v E U g = \" > A A A B 7 H i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 l E 1 G P R i 8 c K p i 2 0 o W y 2 m 3 b p Z h N 2 J 0 I J / Q 1 e P C j i 1 R / k z X / j t s 1 B W x 8 M P N 6 b Y W Z e m E p h 0 H W / n d L a + s b m V n m 7 s r O 7 t 3 9 Q P T x q m S T T j P s s k Y n u h N R w K R T 3 U a D k n V R z G o e S t 8 P x 3 c x v P 3 F t R K I e c Z L y I K Z D J S L B K F r J 7 4 U c a b 9 a c + v u H G S V e A W p Q Y F m v / r V G y Q s i 7 l C J q k x X c 9 N M c i p R s E k n 1 Z 6 m e E p Z W M 6 5 F 1 L F Y 2 5 C f L 5 s V N y Z p U B i R J t S y G Z q 7 8 n c h o b M 4 l D 2 x l T H J l l b y b + 5 3 U z j G 6 C X K g 0 Q 6 7 Y Y l G U S Y I J m X 1 O B k J z h n J i C W V a 2 F s J G 1 F N G d p 8 K j Y E b / n l V d K 6 q H t X 9 c" }, { "formula_coordinates": [ 5, 433.66, 218.05, 1.78, 4.4 ], "formula_id": "formula_11", "formula_text": "d i c i i u q N m S T v i + D M = \" > A A A C A n i c b V D L S g M x F M 3 4 r P U 1 6 k r c B I t Q N 2 V G i r o s u n F Z w T 6 g M 5 R M e t u G J p k h y Q i 1 F D f + i h s X i r j 1 K 9 z 5 N 6 b t L L T 1 w I X D O f 
c m 9 5 4 o 4 U w b z / t 2 l p Z X V t f W c x v 5 z a 3 t n V 1 3 b 7 + u 4 1 R R q N G Y x 6 o Z E Q 2 c S a g Z Z j g 0 E w V E R B w a 0 e B 6 4 j f u Q W k W y z s z T C A U p C d Z l 1 F i r N R 2 D w W T T L A H w E G Q p 7 G 2 z x S D C A w 5 b b s F r + R N g R e J n 5 E C y l B t u 1 9 B J 6 a p A G k o J 1 q 3 f C 8 x 4 Y g o w y i H c T 5 I N S S E D k g P W p Z K I k C H o + k J Y 3 x i l Q 7 u x s q W N H i q / p 4 Y E a H 1 U E S 2 U x" }, { "formula_coordinates": [ 5, 412.54, 140.12, 2.94, 7.58 ], "formula_id": "formula_12", "formula_text": "u i d J U i j s z S k j I U V / Q m G J k r N R 1 D z k V l N M H A o O g i K W 2 z 5 Q D x J I B O u 2 6 J a / i T Q E X i Z + T E s h R 7 7 p f Q U / i l B N h M E N a d 3 w v M W G G l K G Y k X E x S D V J E B 6 i P u l Y K h A n O s y m N 4 z h i V V 6 M J b K l j B w q v 6 e y B D X e s Q j 2 8 m R G e h 5 b y L + 5 3 V S E 1 + G G R V J a o j A s 4 / i l E E j 4 S Q Q 2 K O K Y M N G l i C s q N 0 V 4 g F S C B s b W 9 G G 4 M + f v E i a Z x X / v F K 9 r Z Z q V 3 k c B X A E j k E Z + O A C 1 M A N q I M G w O A R P I N X 8 O Y 8 O S / O u / M x" }, { "formula_coordinates": [ 5, 254.67, 98.18, 216, 17.91 ], "formula_id": "formula_13", "formula_text": "x 8 M P N 6 b Y W Z e k A i u j e t + O 4 W 1 9 Y 3 N r e J 2 a W d 3 b / + g f H j U 0 n G q K G v S W M S q E 6 B m g k v W N N w I 1 k k U w y g Q r B 2 M b 2 d + + 4 k p z W P 5 Y C Y J 8 y M c S h 5 y i s Z K r R 6 K Z I T 9 c s W t u n O Q V e L l p A I 5 G v 3 y V 2 8 Q 0 z R i 0 l C B W n c 9 N z F + h s p w K t i 0 1 E s 1 S 5 C O c c i 6 l k q M m P a z + b V T c m a V A Q l j Z U s a M l d / T 2 Q Y a T 2 J A t s Z o R n p Z W 8 m / u d 1 U x N e + x m X S W q Y p I t F Y S q I i c n s d T L g i l E j J p Y g V d z e S u g I F V J j A y r Z E L z l l 1 d J 6 6 L q X V Z r 9 7 V K / S a P o w g n c A r n 4 M E V 1 O E O G t A E C o / w D K / w 5 s T O i / P u f C x a C 0 4 + c w x / 4 H z + A I 5 v j y E = < / l a t e x i t > ↵ < l a t e x i t s h a 1 _ b a s e 6 4 = \" a 0 e 1 a p U k t G / x D Y c X a G O s v f A + 3 7 Y = \" > A A A B 9 H i c b V B N S 8 N A E J 3 4 W e t X 1 a O X Y B E q S E m k q H g q i q C 3 C v Y D 2 l A 2 2 0 2 7 d L O J u 5 N C K f 0 d X j w o 4 t U f 4 8 1 / 4 7 b N Q V s f D D z e m 2 F m n h 8 L r t F x v q 2 l 5 Z X V t f X M R n Z z a 3 t n N 7 e 3 X 9 N R o i i r 0 k h E q u E T z Q S X r I o c B W v E i p H Q F 6 z u 9 2 8 m f n 3 A l O a R f M R h z L y Q d C U P O C V o J O + 2 f V + 4 O m 1 h j y E 5 a e f y T t G Z w l 4 k b k r y k K L S z n 2 1 O h F N Q i a R C q J 1 0 3 V i 9 E Z E I a e C" }, { "formula_coordinates": [ 5, 255.07, 196.52, 8.28, 8.92 ], "formula_id": "formula_14", "formula_text": "N d + f C Q r I Q 5 q 1 f C F a O 2 E W / c 4 = \" > A A A B 9 H i c b V B N S 8 N A E J 3 4 W e t X 1 a O X Y B E q S E m k q H g q i u C x Q r + g D W W z 3 b R L N 5 u 4 O y m U 0 t / h x Y M i X v 0 x 3 v w 3 b t s c t P X B w O O 9 G W b m + b H g G h 3 n 2 1 p Z X V v f 2 M x s Z b d 3 d v f 2 c w e H d R 0 l i r I a j U S k m j 7 R T H D J a s h R s G a s G A l 9 w R r + 4 G 7 q N 4 Z M a R 7 J K o 5 i 5 o W k J 3 n A K U E j e f e d a u H m v I 1 9 h u S s k 8 s 7 R W c G e 5 m 4 K c l D i k o n 9 9 X u R j Q J m U Q q i N Y t 1 4 n R G x O F n A o 2 y b Y T z W J C B 6 T H W o Z K E j L t j W d H T + x T o 3 T t I F K m J N o z 9 f f E m I R a j 0 L f d I Y E + 3 r R m 4 r / e a 0 E g 2 t v z G W c I J N 0 v i h I h I 2 R P U 3 A 7 n L F K I q R I Y Q q b m 6 1 a Z 8 o Q t H k l D U h u I s v L 5 P 6 R d G 9 L J Y e S / n y b 
R p H B o 7 h B A r g w h W U 4 Q E q U A M K T / A M r / B m" }, { "formula_coordinates": [ 5, 86.49, 542.99, 199.88, 61.69 ], "formula_id": "formula_15", "formula_text": "∆T = E T (text s ) -E T (text c ), ∆I = E I (f ({P s n } N n=1 )) -E I (image c ), L dir = 1 - ∆I • ∆T |∆I||∆T |(3)" }, { "formula_coordinates": [ 5, 310.02, 356.36, 235.1, 92.38 ], "formula_id": "formula_16", "formula_text": "image s = f ({P sj n } N n=1 ), ∆T = E T (text s ) -E T (text c ), ∆I = E I (aug( îmage i s )) -E I (image c ), l i patch = 1 - ∆I • ∆T |∆I||∆T | , L i patch = 1 N N i R(l i patch , τ ) whereR(s, τ ) = 0, if s ≤ τ s, otherwise(4)" }, { "formula_coordinates": [ 5, 319.8, 576.38, 225.31, 76.63 ], "formula_id": "formula_17", "formula_text": "∆T = E T (text s i ) -E T (text s j ), image s i = f ({P si n } N n=1 ), image s j = f ({P sj n } N n=1 ) ∆I = E I (image s i ) -E I (image s j ), L dir = 1 - ∆I • ∆T |∆I||∆T | ,(5)" }, { "formula_coordinates": [ 6, 86.34, 138.26, 200.03, 49.76 ], "formula_id": "formula_18", "formula_text": "f i c = E I (image c i ), f j c = E I (image c j ) L cd = 1 - f i c • f j c |f i c ||f j c | ,(6)" }, { "formula_coordinates": [ 6, 109.06, 303.07, 121.18, 17.29 ], "formula_id": "formula_19", "formula_text": "L c = λ f eat L f eat + λ rgb L rgb ." } ]
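As a worked illustration of Eqs. (1)-(2) above, the sketch below applies a learned linear transformation T to mean-centred point features and to a globally max-pooled feature before adding the text-style feature. The tensors `f_c`, `f_s`, `T`, and the `conv` module are placeholders, and the mean-centring and the exact form of the global branch are my reading of the garbled extraction, not the authors' verified implementation.

```python
# Sketch of the point-wise (Eq. 1) and global (Eq. 2) style transformations.
# Assumed shapes: f_c is (N, C) content point features, f_s is a (C,) style
# feature from the text branch, T is a (C, C) learned transformation matrix,
# and `conv` is a channel-preserving 1D conv, e.g. torch.nn.Conv1d(C, C, 1).
import torch

def stylize_points(f_c, f_s, T):
    """Point-wise: f_d = T (f_c - mean(f_c)) + f_s."""
    centred = f_c - f_c.mean(dim=0, keepdim=True)    # mean-centred content features (N, C)
    return centred @ T.t() + f_s                     # stylized point features (N, C)

def stylize_global(f_c, f_s, T, conv):
    """Global: f_g = MaxPool(conv(f_c)); f_g_d = T (f_g - mean(f_c)) + f_s."""
    h = conv(f_c.t().unsqueeze(0))                   # (1, C, N): conv over the point dimension
    f_g = h.max(dim=-1).values.squeeze(0)            # (C,): global max-pooled feature
    return (f_g - f_c.mean(dim=0)) @ T.t() + f_s     # (C,): global stylized feature
```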
CLIP3Dstyler: Language Guided 3D Arbitrary Neural Style Transfer
A sketch with black pencil Figure 2: In this figure, we show that our model can be generalized to the hold-out test scenes without retraining our model on it.
Chenkaizhao
[ { "figure_caption": "FireFigure 3 :3Figure 3: Our proposed style divergence loss prevents the model from mixing styles", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Method Overview. Our method starts from the point cloud reconstruction from a set of images and generate feature for each point. Point-wise and global feature representation stylized by given a text style. Integrating the per-points and global transferred features are projected to novel view and decoded to the RGB images. method into three components as shown in Figure 4; the first component generates a point cloud of the features via projecting image features into the corresponding 3D coordinates (depth map). The second component comprises a light global point cloud feature extractor and one off-theshelf prompt feature extractor (e.g. clip text encoder). In the last component, we generate the stylized point cloud feature of the specific view, which mixes the content feature from a specific view of the point cloud with the text style feature and the complement global point cloud feature. After all the steps, we project the stylized point cloud features into a 2D stylized view.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "7 z C m 5 M 5 L 8 6 7 8 7 F o L T n F z D H 8 g f P 5 A 2 I d k e w = < / l a t e x i t > image s i < l a t e x i t s h a 1 _ b a s e 6 4 = \" b v L T 9 U n t j u / f Z D Z t y P 3", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "5 W X S P K t 6 F 9 X z u / N K 7 T q P o w h H c A y n 4 M E l 1 O A W 6 t A A B g / w D K / w 5 s T O i / P u f M x b C 0 4 + c w h / 4 H z + A M c u j 0 Y = < / l a t e x i t > text c < l a t e x i t s h a 1 _ b a s e 6 4 = \" 8 7 D y n w x J a 2 7 a 6 2 o w k s T j 9 X n S k Y", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "4 z 7 X j I I Y W 0 K o 5 v Z W T I d E E w o 2 p Y o N w V t 8 e Z m 0 z + r e R f 3 8 7 r z W u C 7 i K K M j d I x O k Y c u U Q P d o i Z q I Y o U e k a v 6 M 0 B 5 8 V 5 d z 7 m r S W n m D l E f + B 8 / g D d e 5 G l < / l a t e x i t > text s j < l a t e x i t s h a 1 _ b a s e 6 4 = \" l / J d 4 s 2 R b W 5 e h X 5 f a H B P Z X 1 7 Y x M = \" > A A A B 7 H i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 m k q M e i F 4 8 V T F t o Y 9 l s J + 3 S z S b s b o R S + h u 8 e F D E q z / I m / / G b Z u D t j 4 Y e L w 3 w 8 y 8 M B V c G 9 f 9 d g p r 6 x u b W 8 X t 0 s 7 u 3", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "d n e I U 3 R z o v z r v z s W g t O P n M M f y B 8 / k D u S + O p A = = < / l a t e x i t > f i c < l a t e x i t s h a 1 _ b a s e 6 4 = \" T + g n F V R M D K J l 3 0 R o 1 f T H p C 4 s z Y E = \" > A A A B + 3 i c b V D L S s N A F L 2 p r 1 p f s S 7 d D B b B j S W R o i 6 L b l x W s A / o I 0 y m k 3 b o Z B J m J m I J + R U 3 L h R x 6 4 + 4 8 2 + c t l l o 6 4 E L h 3 P u 5 d 5 7 / J g z p R 3 n 2 y q s r W 9 s b h W 3 S z u 7 e / s H 9 m G 5 p a J E E t o k E Y 9 k x 8 e K c i Z o U z P N a S e W F I c + p 2 1 / c j v z 2 4 9 U K h a J B z 2 N a T / E I 8 E C R r A 2 k m e X A y 9 V P U 9 n A 4 b OU e C R A f P s i l N 1 5 k C r x M 1 J B X I 0 P P u r N 4 x I E l K h C c d K d V 0 n 1 v 0 U S 8 0 I p 1 m p l y g a Y z L B I 9 o 1 V O C Q q n 4 6 v z 1 D p 0 Y Z o i C S p o 
R G c / X 3 R I p D p a a h b z p D r M d q 2 Z u J / 3 n d R A f X / Z S J O N F U k M W i I O F I R 2 g W B B o y S Y n m U 0 M w k c z c i s g Y S 0 y 0 i a t k Q n C X X1 4 l r Y u q e 1 m t 3 d c q 9 Z s 8 j i I c w w m c g Q t X U I c 7 a E A T C D z B M 7 z C m 5 V Z L 9 a 7 9 b F o L V j 5 z B H 8 g f X 5 A z e I k + o = < / l a t e x i t > t e x i t s h a 1 _ b a s e 6 4 = \" m j j S L I Q + s 8 7 E y d 5 F W B 0 u R d w c H L Y = \" > A A A B 8 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 l E 1 G P R i 8 c K 9 g P b W D b b T b t 0 s w m 7 E 6 G E / g s v H h T x 6 r / x 5 r 9 x 2 + a g r Q 8 G H u / N M D M v S K Q w 6 L r f T m F l d W 1 9 o 7 h Z 2 t r e 2 d 0 r 7 x 8 0 T Z x q x h s s l r F u B 9 R w K R R v o E D J 2 4 n m N A o k b w W j m 6 n f e u L a i F j d 4 z j h f k Q H S o S C U b T S Q / g o e p n p 9 s S k V 6 6 4 V X c G s k y 8 n F Q g R 7 1 X / u r 2 Y 5 Z G X C G T 1 J i O 5 y b o Z 1 S j Y J J P S t 3 U 8 I S y E R 3 w j q W K R t z 4 2 e z i C T m x S p + E s b a l k M z U 3 x M Z j Y w Z R 4 H t j C g O z a I 3 F f / z O i m G V 3 4 m V J I i V 2 y + K E w l w Z h M 3 y d 9 o T l D O b a E M i 3 s r Y Q N q a Y M b U g l G 4 K 3 + P I y a Z 5 V v Y v q + d 1 5 p X a d x 1 G E I z i G U / D g E m p w C 3 V o A A M F z / A K b 4 5 x X p x 3 5 2 P e W n D y m U P 4 A + f z B 8 r 1 k Q I = < / l a t e x i t > f i s i < l a t e x i t s h a 1 _ b a s e 6 4 = \" 8 Q r b 5 / e G n e b 1 D b", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "l H B m J p u + j P l O U G D 6 2 B B P F 7 K 2 I D L H C x N i Q S j Y E b / H l Z d I 6 q 3 r n 1 d p t r V K / y u M o w h E c w y l 4 c A F 1 u I E G N I G A g G d 4 h T d H O y / O u / M x b y 0 4 + c w h / I H z + Q P M e p E D < / l a t e x i t > f i s j < l a t e x i t s h a 1 _ b a s e 6 4 = \" H K a 8 z U e e Q l + 3 4u Z E 9 n F F T F d H 8 U g = \" > A A A C A H i c b V D L S s N A F L 3 x W e s r 6 s K F m 8 E i u L E k U t R l 0 Y 3 L C v Y B b Q 2 T 6 a Q d O 5 m E m Y l Q Q j b + i h s X ir j 1 M 9 z 5 N 0 7 b C N p 6 4 M K Z c + 5 l 7 j 1 + z J n S j v N l L S w u L a + s F t a K 6 x u b W 9 v 2 z m 5 D R Y k k t E 4 i H s m W j x X l T N C 6 Z p r T V i w p D n 1 O m / 7 w a u w 3 H 6 h U L B K 3 e h T T b o j 7 g g W M Y G 0 k z 9 4 P 7 p i X q o 5 3 n 6 E T 9 P N g m W e X n L I z A Z o n b k 5K k K P m 2 Z + d X k S S k A p N O F a q 7 T q x 7 q Z Y a k Y 4 z Y q d R N E Y k y H u 0 7 a h A o d U d d P J A R k 6 M k o P B Z E 0 J T S a q L 8 n U h w q N Q p 9 0 x l i P V C z 3 l j 8 z 2 s n O r j o p k z E i a a C T D 8 K E o 5 0 h M Z p o B 6 T l G g + M g Q T y c y u i A y w x E S b z I o m B H f 2 5 H n S O C 2 7 Z + X K T a V U v c z j K M A B H M I x u H A O V b i G G t S B Q A Z P 8 A K v 1 q P 1 b L 1 Z7 9 P W B S u f 2 Y M / s D 6 + A V E f l j 4 = < / l a t e x i t > t e x i t s h a 1 _ b a s e 6 4 = \" k H a h u H 4 i W C f s Y p N v 3 u y x I B 6 u x h Y = \" > A A A B 7 H i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 m k q M e i F 4 8 V T F t o Y 9 l s N + 3 S z S b s T o R S + h u 8 e F D E q z / I m / / G b Z u D t j 4 Y e L w 3 w 8 y 8 M J X C o O t + O 4 W 1 9 Y 3 N r e J 2 a W d 3 b / + g f H j U N E m m G f d Z I h P d D q n h U i j u o 0 D J 2 6 n m N A 4 l b 4 W j 2 5 n f e u L a i E Q 9 4 D j l Q U w H S k S C U b S S H / X Y I / b K F b f q z k F W i Z e T C u R o 9 M p f 3 X 7 C s p g r Z J I a 0 / H c F I M J 1 S i Y 5 N N S N z M 8 p W x E B 7 x j q a I x N 8 F k f u y U n F m l T 6 J E 2 1 J I 5 u r v 
i Q m N j R n H o e 2 M K Q 7 N s j c T / / M 6 G U b X w U S o N E O u 2 G J R l E m C C Z l 9 T v p C c 4 Z y b A l l W t h b C R t S T R n a f E o 2 B G / 5 5 V X S v K h 6 l 9 X a f a 1 S v 8 n j K M I J n M I 5 e H A F d b i D B v j A Q M A z v M K b o 5 w X 5 9 3 5 W L Q W n H z m G P 7 A + f w B y d G O r w = = < / l a t e x i t > f t c < l a t e x i t s h a 1 _ b a s e 6 4 = \" 2 O W n 9 i R m 3 r 5 C U S 6 Y 4 X n A F 9 C Q W B s = \" > A A A B 8 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 B I v g q S R S 1 G P R i 8 c K 9 g P b G D b b T b t 2 s w m 7 E 6 G E / g s v H h T x 6 r / x 5 r 9 x 2 + a g r Q 8 G H u / N M D M v S A T X 6 D j f V m F l d W 1 9 o 7 h Z 2 t r e 2 d 0 r 7 x + 0 d J w q y", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "p 6 8 V 6 t z 7 m r Q U r n z m E P 7 A + f w D d k 5 E O < / l a t e x i t > f t s j < l a t e x i t s h a 1 _ b a s e 6 4 = \" K Z B d d S W l M N A 0 K w V A M u Z e A 5 S 4 + j E = \" > A A A B 8 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 l E 1 G P R i 8 c K 9 g P b W D b b T b t 0 s w m 7 E 6 G E / g s v H h T x 6 r / x 5 r 9 x 2 + a g r Q 8 G H u / N M D M v S K Q w 6 L r f T m F l d W 1 9 o 7 h Z 2 t r e 2 d 0 r 7 x 8 0 T Z x q x h s s l r F u B 9 R w K R R v o E D J 2 4 n m N A o k b w W j m 6 n f e u L a i F j d 4 z j h f k Q H S o S C U b T S Q 9 j L T L c n J o / Y K 1 f c q j s D W S Z e T i q Q o 9 4 r f 3 X 7 M U s j r p B J a k z H c x P 0 M 6 p R M M k n p W 5 q e E L Z i A 5 4 x 1 J F I 2 7 8 b H b", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "4 h T o 0 g I G C Z 3 i F N 8 c 4 L 8 6 7 8 z F v L T j 5 z C H 8 g f P 5 A 9 w M k Q 0 = < / l a t e x i t > f t s i < l a t e x i t s h a 1 _ b a s e 6 4 = \" x o u g J 9 P / D 5 m O B / w z T s p 9 L W z C a h k = \" > A A A B / H i c b V D L S s N A F J 3 4 r P U V 7 d L N Y B H c W B I p 6 r L o x m U F + 4 A 2 h s l 0 0 g 6 d T M L M j V B C / R U 3 L h R x 6 4 e 4 8 2 + c t F l o 6 4 E L Z 8 6 5 l 7 n 3 B I n g G h z n 2 1 p Z X V v f 2 C x t l b d 3 d v f 2 7 Y P D t o 5 T R V m L x i J W 3 Y B o J r h k L e A g W D d R j E S B Y J 1 g f J P 7 n U e m N I / l P U w S 5 k", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "r p i O i C I U T F 5 l E 4 K 7 e P I y a Z / X 3 I t a / a 5 e b V w X c Z T Q E T p G p 8 h F l 6 i B b l E T t R B F E / S M X t G b 9 W S 9 W O / W x 7 x 1 x S p m K u g P r M 8 f t W G U K g = = < / l a t e x i t > t e x i t s h a 1 _ b a s e 6 4 = \" x 2 U l 9 m T l e W 0 N y e M a 0 f oX v F q D 0 e o = \" > A A A C A H i c b V D L S s N A F J 3 4 r P U V d e H C z W A R 3 F g S K e q y 6 M Z l B f u A t o b J d N K O n U z C z I 1 Q Q j b + i h s X i r j 1 M 9 z 5 N 0 7 b C N p 6 4 M K Z c + 5 l 7 j 1 + L L g G x / m y F h a X l l d W C 2 v F 9 Y 3 N r W 1 7 Z 7 e h o 0 R R V q e R i F T L J 5 o J L l k d O A j W i h U j o S 9 Y 0 x 9 e j f 3 m A 1 O a R / I W R j H r h q Q v e c Ap A S N 5 9 n 5 w B 1 6 q O 9 5 9 h k / w z 4 N n n l 1 y y s 4 E e J 6 4 O S m h H D X P / u z 0 I p q E T A I V R O u 2 6 8 T Q T Y k C T g X L i p 1 E s 5 j Q I e m z t q G S h E x 3 0 8 k B G T 4 y S g 8 H k T I l A U / U 3 x M p C b U e h b 7 p D A k M 9 K w 3 F v / z 2 g k E F 9 2 U y z g B J u n 0 o y A R G C I 8 T g P 3 u", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "u H y 1 r j t o i j D C d w C u f g w 
T U 0 4 B 6 a 4 A M D A c / w C m + O c l 6 c d + d j 0 V p y i p l j + A P n 8 w f G x I 6 t < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" M 4 z / / F w w m J +", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "D T 1 / P e R P z P a 6 W m e x m O m E x S A 5 L O P u q m H J s Y T / L A H a a A G j 6 0 h F D F 7 K 6 Y 9 o k i 1 N j U 8 j Y E f / 7 k R V I / K / n n p f J t u V C 5 y u L I o S N 0 j I r I R x e o g m 5 Q F d U Q R Y / o G b 2 i N + f J e X H e n Y 9 Z 6 5 K T z R y g P 3 A + f w B O x Z a 8 < / l a t e x i t > minimize cosine( ) < l a t e x i t s h a 1 _ b a s e 6 4 = \" W B c d J / k J 4 U 5 c X 0 E W r 7 n F O h W Z 6 i Y = \" > A A A C A 3 i c b V D L S g M x F M 3 4 r P U 1 6 k 4 3 w S L U T Z m R o i 6 L b l x W s A / o D C W T Z t r Q P I Y k I 9 S h 4 M Z f c e N C E b f + h D v / x r S d h b Y e u H A 4 5 9 7 k 3 h M l j G r j e d / O 0 v L K 6 t p 6 Y a O 4 u b W 9 s + v u 7 T e 1 T B U m D S y Z V O 0 I a c K o I A 1 D D S P t R B H E I 0 Z a 0 f B 6 4 r f", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "a 1 1 y 8 p k D 8 A f O 5 w 8 e D p c w < / l a t e x i t > minimize cosine(↵)", "figure_data": "", "figure_id": "fig_13", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "j b O t R L O Y 0 D 7 p s q a h k o R M e 6 P p 0 W P 7 2 C g d O 4 i U K Y n 2 V P 0 9 M S K h 1 s P Q N 5 0 h w Z 6 e 9 y b i f 1 4 z w e D S G 3 E Z J 8 g k n S 0 K E m F j Z E 8 S s D t c M Y p i a A i h i p t b b d o j i l A 0 O W V N C O 7 8 y 4 u k d l Z 0 z 4 u l h 1 K + f J 3 G k Y F D O I I C u H A B Z b i D C l S B w h M 8 w y u 8 W Q P r x X q 3 P m a t S 1 Y 6 c w B / Y H 3 + A D d m k R s = < / l a t e x i t > E I (:, ✓) < l a t e x i t s h a 1 _ b a s e 6 4 = \" R O w T", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Loss function. To support multiple scenes and styles in one model, only the constraint from source to target will cause the style mix-up. The constraint between styles can strengthen the model to distinguish different styles effectively.", "figure_data": "", "figure_id": "fig_15", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :Figure 7 :67Figure 6: Comparisons to 2D image text conditional methods and Stylescene. We compare the images generated from stylizing the novel views with our model.method and maintain the content of a scene. View Consistency. To measure inconsistency between the pair of stylized views, we reproject the pixel to the 3D space and project back to another plane by the camera intrinsic and extrinsic of a pair of views. By doing this, we can measure the color changing of a pair of pixels in different views pro-", "figure_data": "", "figure_id": "fig_16", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: User study. We conduct the user to compare our method with other approaches from consistency and stylization perspectives.Dataset global non-global Truck 0.2849 0.2808 M60 0.2874 0.2751 Playground 0.2822 0.2678 Train 0.2859 0.2810 Average 0.2851 0.2761 Table 4: CLIP score of global and non-global. By enhanced the global feature transformation, the stylization views achieve higher CLIP score, and better match text described style. the results from 21 different styles for all test scenes and report the mean value in Table 2 and Table 3. 
For either the short-range or long-range consistency, the proposed method reaches the lower RMSE values, which means the color of pixels projected onto different views from a point variant is smaller than the 2D method.", "figure_data": "", "figure_id": "fig_17", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: We also compare with the CLIPNeRF[31]; Note that we do not train the model on the nerf llff dataset; we only infer the model on these datasets and styles, but the CLIPNeRF needs to optimize for each scene and style.Effect of global feature transformation. We compare the stylization results with the non-global and global operations and calculate the CLIP score to measure whether the global information helps the model better match the textdescribed style. In Figure7, with the global information, we obtained a much higher contrast and full texture details on the ground and the object surface. Without the global transferred feature, point-wise feature style transformation causes the obscure or indistinct of the truck and tank in the figure. We also measure the CLIP score of the render results with global and non-global style transformation in Table4.", "figure_data": "", "figure_id": "fig_19", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "CLIP Score. We compare the stylized image of our method with other 2D methods to determine whether our approach better matches the semantic text style.", "figure_data": "DatasetStyleSene SVS→CLIP+LT OursTruck0.23710.26510.2849M600.23620.26250.2874Playground0.23600.26590.2822Train0.23440.26000.2859Average0.23590.26330.2851", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Short-range consistency. We use the (t-1)-th and t frame of video to measure the color variance by RMSE.", "figure_data": "SVS→CLIP+LT OursTruck0.08350.09780.0827M600.09390.10370.0644Playground0.07620.09330.0441Train0.08270.11300.0818Average0.08410.10190.0683DatasetStyleScene SVS→CLIP+LT OursTruck0.10650.11090.1007M600.11500.12390.0754Playground0.11520.10910.0631Train0.10950.12850.1066Average0.11160.11810.0864", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Long-range consistency. We use the (t-7)-th and t frame of the video to measure the color variance by RMSE. ure 8, The users deem that our approach reaches better consistency and conforms more to the target style. Stylization Quality. To quantify the stylization quality, we calculated the cosine similarity between the CLIP embedding of output stylized images and associated targets style text defined by CLIPstyler[11]. To better measure the local texture style transformation quality, we randomly crop 64 patches before calculating the image CLIP embedding.The comparison with other 2D methods is shown in Table 1. With the advantage of 3D geometry, we achieve better stylization results with higher CLIP scores than the 2D", "figure_data": "OursSVS → CLIPstyler +LT", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[11,22,19,4,30]", "Explanation": "The cited works have shown the potential of Vision-Language models in various tasks, which the citing paper leverages to explore the use of these models in 3D scene stylization."}, {"Category": "Extension or Continuation", "Citation": "[5,6,18]", "Explanation": "The cited works on arbitrary 3D scene stylization provide a foundation for the proposed method in the citing paper, which extends the research in this area by exploring the use of text descriptions for stylization."}, {"Category": "Extension or Continuation", "Citation": "[11,4,19]", "Explanation": "The cited works on text-driven 2D image stylization provide a basis for the proposed method in the citing paper, which extends the research in this area by applying the concept to 3D scene stylization."}, {"Category": "Methodological Basis", "Citation": "[18,6]", "Explanation": "The cited works provide a method of projecting 3D point descriptors to 2D images and using a pre-trained image encoder to match the 3D features with the given 2D style features, which the citing paper adopts in their research on arbitrary style image stylization."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, CLIPstyler, provides a method for text-driven 2D image stylization that utilizes the image and text encoders of CLIP to match image and text features in the same metric space. The citing paper adopts this method to match the 3D point cloud descriptor with text features in the text-driven 3D point cloud stylization task."}, {"Category": "Extension or Continuation", "Citation": "[22]", "Explanation": "The cited work, CLIP, is a pre-trained VGG that is used in the method of the cited work, CLIPstyler, to project the 3D point cloud back to the 2D image space and match text-image features using CLIP. The citing paper extends this method to the text-driven 3D point cloud stylization task by using the same method to match the 3D point cloud descriptor with text features."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, CLIPstyler, provides a style matching loss that the citing paper adopts to achieve multiple text style transfer in the context of language-guided 3D point cloud stylization."}, {"Category": "Supporting Evidence", "Citation": "[22]", "Explanation": "The cited work, CLIP, serves as a foundational pre-trained model for bridging the text and image modality, which is essential for the research conducted in the citing paper on text-driven style transfer learning and novel view synthesis."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work, StyleCLIP, provides a method of attribute manipulation from text information for style transfer learning, which the citing paper extends to the 3D space in the context of text 3D stylization."}, {"Category": "Extension or Continuation", "Citation": "[3]", "Explanation": "The cited work, DALL\u2022E 2, integrates the CLIP and diffusion model to generate high-resolution and vivid images, which the citing paper extends to the task of text 3D stylization by performing text-driven style transfer learning in a 3D space."}, {"Category": "Extension or Continuation", "Citation": "[11]", "Explanation": "The cited work, CLIPstyler, performs text-driven style transfer learning by minimizing the difference in directional distance of text and image features extracted by CLIP encoder. 
The citing paper extends this method to the context of text 3D stylization in a multi-view setting."}, {"Category": "Methodological Basis", "Citation": "[16,1]", "Explanation": "The cited works on projection in multi-image and single image novel view synthesis provide a methodological basis for the research on text 3D stylization in the citing paper, which requires the synthesis of novel views in a multi-view setting."}, {"Category": "Methodological Basis", "Citation": "[28,17,29,14]", "Explanation": "The cited works on volume rendering in novel view synthesis provide a methodological basis for the research on text 3D stylization in the citing paper, as it requires the synthesis of novel views through a process of light emission and volume rendering."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work, SynSin, proposed a projection-based differentiate render pipeline that the citing paper adopts in their research to improve the efficiency and generalizability of the render process."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work introduces the Adaptive Attention Normalization (AdaAttN) method, which the citing paper adopts to match and combine styles and contents using an attention mechanism for novel view synthesis."}, {"Category": "Data Source", "Citation": "[31]", "Explanation": "The cited work, CLIPNeRF, is used as a data source for the neural radiance field and text prompt information, which the citing paper utilizes in the style migration process."}, {"Category": "Extension or Continuation", "Citation": "[12]", "Explanation": "The cited work on object detection is extended in the citing paper to include a more memory-friendly method for processing point clouds in 3D space."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The cited work, COLMAP, is used as a data source to estimate the relative image pose in the citing paper."}, {"Category": "Data Source", "Citation": "[27]", "Explanation": "The cited work, MVS, is used as a data source to calculate the full-depth map and construct a 3D point cloud for the scene in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work provides a method of downsampling a point cloud of features from a 2D feature map to construct a 3D point cloud in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work, U-net, is used as a basis for implementing the decoder in the citing paper, as it provides a well-established method for down-sampling and up-sampling operations in the decoder design."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work StyleGAN-NADA proposed a directional loss to align the direction distance of CLIP features between text-image pairs, which the citing paper adopts in their research to guide the content scene in following the semantics of text style."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work CLIPStyler proposed a patch-based CLIP directional loss to improve the local texture of transferred images, which the citing paper adopts in their research to enhance the quality of the transferred images."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, CLIPstyler, provides the style matching loss that the citing paper adopts for multiple text styles transfer. 
The method is used to constrain the directional distance from source to target in the transfer process."}, {"Category": "Data Source", "Citation": "[10]", "Explanation": "The cited work is the dataset used in the experiments conducted in the citing paper. The dataset is split into training and testing sets, and the model is trained and tested on the corresponding scenes and text prompts."}, {"Category": "Extension or Continuation", "Citation": "[11]", "Explanation": "The cited work, StyleScene, is used as a reference for the comparison of novel stylized views between different methods. The citing paper extends the work by searching for the most matched style images to text prompts and modifying the AdaIN method to a more advanced linear style transfer for a fair comparison."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work, linear style transfer, is used as a method for a fair comparison in the study conducted in the citing paper. The citing paper leverages the work of linear style transfer to ensure a fair comparison in the research conducted."}, {"Category": "Supporting Evidence", "Citation": "[11]", "Explanation": "The cited work provides the L tv and L gs loss that the citing paper uses to alleviate side artifacts and maintain global content in the image."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work provides a method for image resizing that the citing paper adopts in their research on the image encoding process."}, {"Category": "Extension or Continuation", "Citation": "[13]", "Explanation": "The cited work introduces the Linear Transformation module, which the citing paper extends by compressing the feature size of the point cloud and text style to accelerate the covariance matrix computation and transformation process."}]
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b7", "b12" ], "table_ref": [], "text": "Modeling and simulating transportation systems realistically pose a challenge due to the high variability and diversity of traffic behaviors, as well as the spatial and temporal fluctuations that are difficult to model. Various traffic simulation models such as SUMO [1], MATSim [2], AimSun [3], VISSIM [4], and others have been developed to simulate traffic systems with diverse scales. Although these models are useful, they still encounter limitations in realistically simulating the growing complexity and heterogeneity of urban transportation systems due to the restricted capability of the underlying parametric models and manually encoded rules [5]. To address this gap, advanced traffic simulation techniques are necessary that can generate more realistic traffic behaviors from real-world data [6], [7]. This is critical for aiding traffic planners and policymakers in making wellinformed decisions.\nTraditional approaches often rely on physical dynamic models and implement data-driven approaches to learn parameters in the pre-defined models [8]. However, such approaches may introduce oversimplifications and assumptions that curtail their accuracy and applicability [9]. As a result, traditional models are suitable for specific tasks but not scalable or extensible, posing challenges in adapting to varying environments and managing large and complex data inputs. Furthermore, the intrinsic complexity of transportation systems, influenced by diverse agents and factors that affect traffic behavior, makes it a challenging task to realistically capture the temporal variability and complexity of traffic conditions. The dynamic and constantly evolving nature of transportation environments necessitates a flexible approach to simulating the traffic system that can quickly adapt to changes in the environment.\nTo solve these problems in traffic simulation, we have developed TransWorldNG (where NG denotes the new generation), which automatically generates simulation scenarios from multi-scale and high-dimensional data, the framework of TransWorldNG is shown in Fig. 1. The first generation of TransWorld was initially developed by CAST Lab that uses Agent-based modeling (ABM) technology and object-oriented programming [10], [11], [7]. Building on its framework, we have re-designed a data-driven traffic simulator that is empowered by the foundation model utilizing Data-driven algorithms and Graph Computing techniques to simulate intricate traffic systems [12].\nOne of the key features of TransWorldNG is the utilization of graph structures and dynamic graph generation algorithms to model the intricate relationships and interactions among agents in the traffic system. This approach enhances previous ABM-based traffic simulation techniques by providing a more comprehensive and adaptable representation of the changing environment. Additionally, the use of graph structures and dynamic graph generation algorithms can enhance the scalability and efficiency of TransWorldNG by enabling parallel processing of the simulation and supporting the handling of large-scale data.\nTo overcome the limitations of traditional modeling approaches that rely on physical dynamics models, TransWorldNG adopts a data-driven approach with behavior models that are directly learned from real-world data. 
This approach provides a more direct and dependable representation of the real scenario. Furthermore, the graph structure of TransWorldNG allows for adaptive scaling, which amplifies its flexibility. Users can easily modify the nodes or edges in the graph structure to input multi-source data, with varying ..." }, { "figure_ref": [], "heading": "Transportation Services", "publication_ref": [], "table_ref": [], "text": "Fig. 1. The framework of TransWorldNG. TransWorldNG is built upon data-driven approaches, with the ability to handle multi-scale and multi-source data. This flexibility enables TransWorldNG to be used for a wide range of traffic-related tasks, making it a powerful tool for accurate and realistic simulations of urban transportation systems.\ndegrees of granularity. This study presents the functionality and structure of TransWorldNG, the contributions of this paper are as follows:\n• A unified graph-based structure has been proposed that permits a flexible representation of the varying traffic environments, facilitating TransWorldNG to adapt to the environment changes in real-time. • A data-driven traffic simulation framework has been introduced which can realistically and efficiently learn traffic dynamics from real-world data. • The underlying software designing principles, comprising of system structure, workflows, and interfaces for users, have been provided." }, { "figure_ref": [], "heading": "II. RELATED WORKS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Multi-agent Traffic Modeling and Simulation", "publication_ref": [ "b13", "b14", "b13", "b15", "b14", "b16", "b17", "b9", "b14", "b2", "b18" ], "table_ref": [], "text": "Agent-based modeling is a widely used technique for modeling and simulating transportation systems, which involves simulating the interactions of a large number of agents in a system with different characteristics, behaviors, and interactions with other agents [13], [14]. The theoretical framework has developed over several decades, including game theory, control theory, graph theory, and complex network theory [13], [15]. Transportation systems can be considered as multi-agent systems composed of different types of traffic participants, each with their own goals and behaviors, and their interactions affect the changes in the entire traffic system. Recent research on modeling and simulation of complex traffic systems is mostly based on multi-agent methods [14], [16].\nThe modeling of multi-agent systems involves using mathematical models to describe the behavior of individual agents or the entire system in order to better understand traffic evolution and complex traffic phenomena [17]. In the field of traffic systems, models are typically categorized into three types based on their modeling scales: macroscopic, mesoscopic, and microscopic models [9], [14], [2], [18]." }, { "figure_ref": [], "heading": "B. Data-driven Traffic Modeling and Simulation", "publication_ref": [ "b19", "b20", "b21", "b22", "b23", "b24" ], "table_ref": [], "text": "The advancement of modeling complex transportation systems is expected to be driven by the availability of largescale and multi-source data [19]. Data-driven techniques in transportation modeling utilize machine learning, deep learning, and other algorithms to analyze large-scale and multi-source data and learn rules directly from the data to models. 
This is in contrast to knowledge-driven approaches, which rely on human-defined rules and models to develop transportation models [20]. Urban big data can be used to assess the effects of different characteristics, such as road network topology and intersection shapes, on traffic flow in urban areas. Machine learning techniques, such as neural networks, support vector machines, and regression trees, can be trained using these data to anticipate traffic flow, speed, and congestion [21]. This can provide valuable insights into the behavior of urban transportation systems and inform effective transportation planning and management strategies. Previous data-driven approaches in transportation research are mostly employed for single-task research, such as forecasting vehicle trajectories, predicting traffic congestion, route optimization, and so on [22], [23], [24]. These approaches have limitations in their ability to handle the complex interactions between multiple types of agents in a heterogeneous environment for large-scale systems." }, { "figure_ref": [], "heading": "III. FRAMEWORK, SYSTEM STRUCTURE, AND WORKFLOWS OF TRANSWORLDNG", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "A. The Framework of TransWorldNG", "publication_ref": [ "b25", "b26", "b27", "b27", "b28", "b29" ], "table_ref": [], "text": "Transportation system modeling traditionally involves defining the behavior of agents and their interactions beforehand, which is time-consuming and error-prone when new agents or scenarios need to be added. A graph-based approach to transportation system modeling offers a more efficient and adaptable solution, as it allows for the representation of data and relationships between agents in a natural and straightforward way [25], [26]. By using a graph data structure, new data can be added to the system by introducing new nodes or edges to the graph, without the need to hard-code specific behaviors or rules. Fig. 2 illustrates the topology of the hierarchical graph of the dynamic traffic system at various scales, from the vehicle level presenting the dynamics of individual vehicles to the intersection level showing traffic signal control strategies and traffic conditions at bottlenecks, to the street, block, and city levels showing strategies and policies that have impacts on a larger scale. This multi-scale approach allows TransWorldNG NG to provide a comprehensive view of the traffic system and enable transportation planners to make informed decisions based on the simulation results. 1) Representation of transportation system via heterogeneous dynamic graphs: TransWorldNG uses a unified graph data structure to represent traffic systems, this makes it flexible to changes in the environment, as it would allow for easy updates and modifications. New data can be added to the graph by introducing new nodes or edges, without the need to hard-code specific behaviors or rules. This flexibility and adaptability make it easier to model and simulate large and complex transportation systems. Fig. 3 illustrates an example of a traffic scenario represented as a graph, showing how the relationships and interactions in a transportation system can be represented using a graph.\nMathematically, we can define the traffic system as a dynamic heterogeneous graph, G n (V n , E n , O n , R n ). The graph consists of vertices (V n ) that represent agents and edges (E n ) that define the relationships between those agents. 
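As a concrete illustration of this representation (a minimal sketch only; the released TransWorldNG code may organize this differently), a single snapshot G_n can be stored in PyTorch Geometric's HeteroData container, with one feature matrix per agent type in O_n and one edge list per relation type in R_n. The agent types, relation names, and feature layouts below are illustrative assumptions, not taken from the paper:

```python
import torch
from torch_geometric.data import HeteroData

snapshot = HeteroData()

# Node features per agent type (illustrative layouts).
snapshot['vehicle'].x = torch.tensor([[120.0, 13.9,  0.3],    # position, speed, acceleration
                                      [ 95.0, 12.1, -0.1]])
snapshot['lane'].x    = torch.tensor([[500.0, 13.9],           # length, speed limit
                                      [250.0,  8.3]])
snapshot['signal'].x  = torch.tensor([[  0.0, 30.0]])          # phase id, remaining green time

# Typed, directed edges (relation names are illustrative).
snapshot['vehicle', 'follows',    'vehicle'].edge_index = torch.tensor([[1], [0]])
snapshot['vehicle', 'drives_on',  'lane'   ].edge_index = torch.tensor([[0, 1], [0, 0]])
snapshot['lane',    'carries',    'vehicle'].edge_index = torch.tensor([[0, 0], [0, 1]])
snapshot['signal',  'controls',   'lane'   ].edge_index = torch.tensor([[0, 0], [0, 1]])
snapshot['lane',    'approaches', 'signal' ].edge_index = torch.tensor([[0], [0]])
```

Adding a new agent type or relation then only requires new entries in the snapshot, rather than changes to hard-coded behavior rules.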
Each agent (v i ) is associated with a node type by a unique mapping: (φ :\nφ : V n → O n , o i ∈ O n(1)\nV n → O n ).\nSimilarly, each edge (e i ) is directed and associated with an edge type:\nψ : E n → R n , r i ∈ R n , e i = (u i , v i ) (2)\nThe attributes of agents can be represented as node features on the graph. For instance, a vehicle agent might have attributes such as position, speed, and acceleration, which can be saved as node features. Assuming nodes and edges have feature dimensions D v o i and D e r i respectively, features can be represented as:\nF v n ∈ R |V n |×D v o i , F e n ∈ R |E n |×D e r i(3)\n2) Dynamic Graph Learning model to simulate traffic behavior and relationships: TransWorldNG can learn from the data and generate simulation scenarios without relying on pre-defined models or assumptions. The use of a datadriven and model-free approach allows TransWorldNG to discover new patterns and relationships in the data that may not have been previously known or considered. This can lead to insights and solutions that were not possible with traditional modeling approaches.\nTo simulate complex traffic behavior and relationships, the Heterogeneous Graph Transformer (HGT) model can be used to model heterogeneous graphs in transportation systems [27]. The HGT model is a powerful graph neural network that can handle the heterogeneity of graphs by utilizing specific representations for different types of nodes and edges. It uses a multi-head attention mechanism to aggregate information from neighbors of different node types, and a graph transformer layer to update the node embeddings, for details refer to [27].\nWe denote the output of the l-th HGT layer as H (l) , H (l) [v] is the node representation of node v at the l-th HGT layer. By stacking L layers, the node representation for the whole graph can be represented as H (L) . Since the traffic network is time-varying, we consider the evolution of the traffic system as a conditional graph translation process that consists of a sequence of static graphs:\nT : G 0 → G 1 • • • → G n(4)\nGiven the dynamic heterogeneous graph, G(V, E, O, R) shows the state of the traffic simulation system, with input node features, denoted as H (l-1) [v] . The output of the l-th HGT layer for the target node v is denoted as H (l) [v]. To aggregate information to the target node from all its neighbors, it can be achieved by updating the vector H(l) [v]:\nH(l) [v] = ∑ ∀(u)∈N(v) (Attention(u, e, v) • Message(u, e, v)) (5)\nThe l-th HGT layer's output H (l) [v] for the target node v is equal to:\nH (l) [v] = A-Linear φ (v) (θ H(l) [v]) + H (l-1) [v](6)\nThe MSE loss is employed to measure the difference between the predicted values and the true values. In practice, the MSE loss can be optimized using various optimization algorithms such as Stochastic Gradient Descent (SGD), Adam, or RMSProp to minimize the difference between predicted and true values [28], [29]." }, { "figure_ref": [ "fig_3" ], "heading": "B. System Structure", "publication_ref": [], "table_ref": [], "text": "The overall system architecture of TransWorldNG is shown in Fig. 4. The system supports data inputs from different sources, including sensors, GPS devices, and other connected devices. These data inputs are processed and transformed into a graph data structure, which is then fed into the simulation core in the simulation layer. 
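To make the learning step of Eqs. (4)-(6) concrete before turning to the software layers, the sketch below (an illustration under stated assumptions, not the authors' code) stacks two HGT layers from PyTorch Geometric and trains them with an MSE objective and Adam to predict next-step vehicle features; snapshot_pairs is a placeholder for consecutive (G_t, G_{t+1}) HeteroData pairs built from logged trajectories, and the toy pair simply reuses the snapshot from the earlier sketch:

```python
import torch
from torch_geometric.nn import HGTConv

class NextStepModel(torch.nn.Module):
    """Predicts each vehicle's next-step features from the current graph snapshot."""
    def __init__(self, metadata, hidden=64, out_dim=3, heads=2):
        super().__init__()
        self.conv1 = HGTConv(-1, hidden, metadata, heads)       # lazy input sizes per node type
        self.conv2 = HGTConv(hidden, hidden, metadata, heads)
        self.head = torch.nn.Linear(hidden, out_dim)            # e.g. position, speed, acceleration

    def forward(self, x_dict, edge_index_dict):
        h = self.conv1(x_dict, edge_index_dict)
        h = {k: v.relu() for k, v in h.items()}
        h = self.conv2(h, edge_index_dict)
        return self.head(h['vehicle'])                          # supervise vehicle dynamics only

model = NextStepModel(snapshot.metadata())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

snapshot_pairs = [(snapshot, snapshot)]                         # toy stand-in for (G_t, G_{t+1}) pairs
for g_t, g_next in snapshot_pairs:
    pred = model(g_t.x_dict, g_t.edge_index_dict)
    loss = torch.nn.functional.mse_loss(pred, g_next['vehicle'].x)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```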
Using mathematical models and algorithms, the simulation core simulates traffic flow, predicts congestion, and optimizes the transport network based on different traffic scenarios. The software layers can be divided into three categories: data layer, simulation layer, and interface layer. Data Layer: The data layer includes both graph and nongraph data. The non-graph data is stored in a relational database like MySQL and Mangodb, while the graph data structure is stored in a graph database. This allows for the efficient handling of different types of data in a complementary manner.\nSimulation Layer: The simulation layer includes the simulation core, which consists of the simulation core, controllers, and analysis modules.\n• SimCore: The simulation core consists of the model libraries, graph engine, optimization module, and verification and validation processes. The model libraries provide a range of models for simulating and analyzing the transportation network data, while the graph engine provides algorithms for processing and analyzing the graph data. The optimization module uses algorithms to find the optimal parameters for the models, and the verification and validation processes ensure that the data and results are accurate and reliable. " }, { "figure_ref": [ "fig_6" ], "heading": "C. Workflow", "publication_ref": [], "table_ref": [], "text": "TransWorldNG is designed to be intuitive and user-friendly. The simulation core generates traffic patterns based on the input data and parameters specified by the user. These traffic patterns can be visualized in real-time or exported for further analysis. The workflow of TransWorldNG compared to traditional simulation models can be found in Fig. 6. The key modules in TransWorldNG are the following: agents, while edges represent the relationships and interactions between agents. • Graph Embedding: The graph is embedded into a high-dimensional space using a heterogeneous graph transformer model. • Pre-training: The graph transformer model is pre-trained on a large dataset of real-world traffic scenarios, enabling it to learn the patterns and relationships. • Simulation: The pre-trained is then used to generate simulations of new traffic scenarios. The system can make dynamic adjustments during the simulation to model changes in the traffic environment." }, { "figure_ref": [], "heading": "• Evaluation:", "publication_ref": [], "table_ref": [], "text": "The simulations generated by TransWorldNG can be evaluated based on various metrics, such as accuracy and efficiency, which can help to improve the system for better performance." }, { "figure_ref": [ "fig_7" ], "heading": "IV. CASE STUDY", "publication_ref": [ "b0" ], "table_ref": [], "text": "This section aims to demonstrate the capabilities and advantages of TransWorldNG compared to existing traffic simulators. A case study is conducted to compare TransWorldNG with SUMO [1], a widely used traffic simulator. A 4-way signalized intersection is simulated using both TransWorldNG and SUMO. Fig. 7 (a) shows the 4-way signalized intersection, which is a classic example scenario in SUMO. The road network has 8 roads and 16 lanes, and there are 768 vehicles running in this network. These vehicles all start from the left direction and have three default routes they can take: going down to the right road, turning left to the north road, or turning right to the south road. The scenario also has one traffic light located at the central intersection." }, { "figure_ref": [], "heading": "A. 
Data-Driven Traffic Behavior Learning with TransWorldNG", "publication_ref": [], "table_ref": [], "text": "We investigated the ability of TransWorldNG to learn car-following behaviors from data. To evaluate its perfor- " }, { "figure_ref": [], "heading": "B. Impact of Data Collection Interval on Model Performance", "publication_ref": [], "table_ref": [], "text": "Since TransWorldNG is a data-driven approach, to understand the trade-off between prediction accuracy and data collection frequency in TransWorldNG, we conducted experiments with different data collection intervals (5 and 10 steps) and compared the results with SUMO. The findings reveal interesting insights into the relationship between data collection interval and prediction accuracy. As expected, with shorter data collection intervals (e.g., 5 steps), the TransWorldNG model can capture more frequent updates in traffic dynamics, resulting in higher prediction accuracy. However, as the data collection interval increases (e.g., 10 steps), the prediction accuracy decreases, indicating that the model's ability to capture real-time changes in traffic dynamics is reduced. These findings highlight the importance of data collection frequency in the TransWorldNG model and emphasize the need for careful consideration of data collection intervals to achieve optimal prediction accuracy." }, { "figure_ref": [ "fig_11" ], "heading": "C. Assessing the Computational Performance of TransWorldNG", "publication_ref": [], "table_ref": [], "text": "Evaluating the computational performance of TransWorldNG is an important aspect of assessing the system's efficiency and scalability for large-scale traffic simulations. Simulation time and the number of agents are typically inversely proportional, meaning that as the number of agents increases, the simulation time will also increase. One way to evaluate the computational performance of TransWorldNG is to measure the percentage increase in runtime as the percentage increase in the number of agents.\nFig. 11 compares the percentage increase in simulation calculation time between TransWorldNG and SUMO as the system scale increases. The results demonstrate that as the system scale increases, the percentage increase in simulation calculation time grows substantially more slowly for TransWorldNG than for SUMO. This indicates that the proposed framework can dramatically improve the computing efficiency of large-scale traffic simulation. These results highlight the benefits of using TransWorldNG as a framework for large-scale traffic simulation.\nOne of the key reasons for TransWorldNG's good performance is its use of a graph structure and pre-trained models. The use of a graph structure enables parallel processing of traffic data, which can significantly reduce simulation calculation time. Additionally, the pre-trained models used in TransWorldNG can help reduce the amount of computation required during simulation, as the models have already learned many of the underlying patterns and relationships in the traffic data. Another factor that contributes to TransWorldNG's superior performance is its model-free approach. Unlike SUMO, which relies on pre-defined models of traffic behavior, TransWorldNG is able to adapt to different traffic scenarios and levels of abstraction without the need for extensive model development and calibration. This allows for more efficient and flexible simulation of complex traffic scenarios." }, { "figure_ref": [], "heading": "V. 
CONCLUSION", "publication_ref": [ "b19", "b30", "b31" ], "table_ref": [], "text": "This study introduced the simulation framework and system structure of TransWorldNG, which utilizes a traffic foundation model with data-driven automatic modeling capabilities to address the limited structural expressiveness and high computational cost of traditional simulators. The graph structure and data-driven method permit dynamic adjustments during simulation to reflect real-time changes in the urban system environment, allowing new data and expert knowledge to be inserted for real-time mapping of the simulation system to the actual city. TransWorldNG can facilitate event-driven causal analysis of urban phenomena and can combine multi-field data to provide a simulation test platform for integrated decision-making. Future directions for TransWorldNG could include the integration of emerging technologies such as Mobility as a Service (MaaS) and AI-driven simulation technologies [19], as well as the development of more robust functionality using the framework presented in this study.\nWhile TransWorldNG offers many advantages for traffic simulation, there are also some potential challenges that should be explored in future research. One potential challenge is the need for high-quality data that accurately represent real-world traffic patterns and behaviors. TransWorldNG relies on large amounts of data to generate its simulations, so the accuracy and quality of this data can significantly impact the reliability and usefulness of the simulations. Additionally, the collection, processing, and storage of such large amounts of data can also be a challenge [30]. Moreover, while TransWorldNG is designed to be highly scalable and flexible, it may still face challenges in terms of computational resources and processing power. Running large-scale simulations can require significant computing resources; potential solutions include cloud computing, distributed computing, and parallel processing, which need to be studied in future research. Furthermore, traffic simulation often involves predicting traffic flow over extended time periods. The potential use of large language models (LLMs), such as GPT, to generate a wider range of realistic scenarios may improve the accuracy and effectiveness of traffic simulations [31]." }, { "figure_ref": [], "heading": "DATA AVAILABILITY", "publication_ref": [], "table_ref": [], "text": "The SUMO simulation platform and data are publicly available at https://www.eclipse.org/sumo. The simulation data of the 4-way intersection scenario is available from SUMO at https://github.com/eclipse/sumo/blob/main/docs/web/docs/Tutorials." }, { "figure_ref": [], "heading": "CODE AVAILABILITY", "publication_ref": [], "table_ref": [], "text": "The code of TransWorldNG was implemented in Python using the PyTorch deep learning framework. Code, trained models, and scripts reproducing the experiments of this paper are available at https://github.com/PJSAC/TransWorldNG." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "This work is supported by the Shanghai Artificial Intelligence Laboratory." } ]
2023-05-25
10.1007/978-1-4419-6142-6
[ { "authors": "P A Lopez; E Wiessner; M Behrisch; L Bieker-Walz; J Erdmann; Y.-P Flotterod; R Hilbrich; L Lucken; J Rummel; P Wagner", "journal": "", "ref_id": "b0", "title": "Microscopic Traffic Simulation using SUMO", "year": "" }, { "authors": "H I Maui", "journal": "IEEE", "ref_id": "b1", "title": "", "year": "2018-11" }, { "authors": "H Andreas; N Kai; A W ; Kay ", "journal": "", "ref_id": "b2", "title": "Introducing MATSim", "year": "2016-08" }, { "authors": " Aimsun", "journal": "", "ref_id": "b3", "title": "AIMSUN Next", "year": "2022" }, { "authors": " Ptv", "journal": "Tech. Rep", "ref_id": "b4", "title": "Vissim 5.40-01, user manual", "year": "2012" }, { "authors": "X Yan; Z Zou; S Feng; H Zhu; H Sun; H X Liu", "journal": "Nature Communications", "ref_id": "b5", "title": "Learning naturalistic driving environment with statistical realism", "year": "2023-04" }, { "authors": "W Fan; P Chen; D Shi; X Guo; L Kou", "journal": "Tsinghua Science and Technology", "ref_id": "b6", "title": "Multi-agent modeling and simulation in the AI age", "year": "2021-10" }, { "authors": "L Li; Y Lin; Y Wang; F.-Y Wang", "journal": "IEEE Intelligent Systems", "ref_id": "b7", "title": "Simulation Driven AI: From Artificial to Actual and Vice Versa", "year": "2023-01" }, { "authors": "D Wang; K Ozbay; Z Bian", "journal": "IEEE", "ref_id": "b8", "title": "A Mixture Model-based Clustering Method for Fundamental Diagram Calibration Applied in Large Network Simulation", "year": "2020-09" }, { "authors": "J Barceló", "journal": "Springer", "ref_id": "b9", "title": "Fundamentals of Traffic Simulation, ser", "year": "2010" }, { "authors": "F.-Y Wang; J J Zhang", "journal": "IEEE", "ref_id": "b10", "title": "Transportation 5.0 in CPSS: Towards ACP-based society-centered intelligent transportation", "year": "2017-10" }, { "authors": "X Wang; R Jiang; L Li; Y Lin; X Zheng; F.-Y Wang", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b11", "title": "Capturing Car-Following Behaviors by Deep Learning", "year": "2018-03" }, { "authors": "X Wang; D Wang; L Chen; Y Lin", "journal": "", "ref_id": "b12", "title": "Building Transportation Foundation Model via Generative Graph Transformer", "year": "2023" }, { "authors": "C M Macal; M J North", "journal": "", "ref_id": "b13", "title": "Agent-based modeling and simulation", "year": "" }, { "authors": "J Nguyen; S T Powers; N Urquhart; T Farrenkopf; M Guckert", "journal": "Transportation Research Interdisciplinary Perspectives", "ref_id": "b14", "title": "An overview of agent-based traffic simulators", "year": "2021-12" }, { "authors": "P Feldman; A Bucchiarone", "journal": "Springer International Publishing", "ref_id": "b15", "title": "Diversity in Massively Multi-agent Systems: Concepts, Implementations, and Normal Accidents", "year": "2019" }, { "authors": "B Chen; H H Cheng", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b16", "title": "A Review of the Applications of Agent Technology in Traffic and Transportation Systems", "year": "2010-06" }, { "authors": "D Wang; M Tayarani; B Yueshuai He; J Gao; J Y Chow; H Oliver; K Gao; Ozbay", "journal": "Transportation Research Part A: Policy and Practice", "ref_id": "b17", "title": "Mobility in post-pandemic economic reopening under social distancing guidelines: Congestion, emissions, and contact exposure in public transit", "year": "2021-11" }, { "authors": "H Zhang; S Feng; C Liu; Y Ding; Y Zhu; Z Zhou; W Zhang; Y Yu; H Jin; Z Li", "journal": "ACM", "ref_id": 
"b18", "title": "CityFlow: A Multi-Agent Reinforcement Learning Environment for Large Scale City Traffic Scenario", "year": "2019-05" }, { "authors": "T Kevan", "journal": "", "ref_id": "b19", "title": "Can AI Take Simulation to a New Level?", "year": "2020-11" }, { "authors": "Y Wu; H Tan; L Qin; B Ran; Z Jiang", "journal": "Transportation Research Part C: Emerging Technologies", "ref_id": "b20", "title": "A hybrid deep learning based traffic flow prediction method and its understanding", "year": "2018-05" }, { "authors": "A M Avila; I Mezić", "journal": "Nature Communications", "ref_id": "b21", "title": "Data-driven analysis and forecasting of highway traffic dynamics", "year": "2020" }, { "authors": "N Van Oort; D Sparing; T Brands; R M P Goverde", "journal": "Public Transport", "ref_id": "b22", "title": "Data driven improvements in public transport: the Dutch example", "year": "2015-12" }, { "authors": "B S Kerner", "journal": "Springer", "ref_id": "b23", "title": "Introduction to Modern Traffic Flow Theory and Control", "year": "2009" }, { "authors": "R Jia; P Jiang; L Liu; L Cui; Y Shi", "journal": "IEEE Internet of Things Journal", "ref_id": "b24", "title": "Data Driven Congestion Trends Prediction of Urban Transportation", "year": "2018-04" }, { "authors": "Y Zhou; Y Zhang; Z Zhao; K Zhang; C Gou", "journal": "IEEE Journal of Radio Frequency Identification", "ref_id": "b25", "title": "Towards Driving Scene Understanding: A Paradigm and Benchmark Dataset for Ego-centric Traffic Scene Graph Representation", "year": "2022" }, { "authors": "Z Wu; S Pan; F Chen; G Long; C Zhang; P S Yu", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b26", "title": "A Comprehensive Survey on Graph Neural Networks", "year": "2021-01" }, { "authors": "Z Hu; Y Dong; K Wang; Y Sun", "journal": "", "ref_id": "b27", "title": "Heterogeneous Graph Transformer", "year": "2020" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b28", "title": "Adam: A Method for Stochastic Optimization", "year": "2014" }, { "authors": "S Ruder", "journal": "", "ref_id": "b29", "title": "An overview of gradient descent optimization algorithms", "year": "2016" }, { "authors": "Y Yu; S Yao; J Li; F.-Y Wang; Y Lin", "journal": "", "ref_id": "b30", "title": "SWDPM: A Social Welfare-Optimized Data Pricing Mechanism", "year": "2023" }, { "authors": "F.-Y Wang; Q Miao; X Li; X Wang; Y Lin", "journal": "IEEE/CAA Journal of Automatica Sinica", "ref_id": "b31", "title": "What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence", "year": "2023-03" } ]
[ { "formula_coordinates": [ 3, 135.21, 724.72, 164.26, 9.72 ], "formula_id": "formula_0", "formula_text": "\phi : V_n \rightarrow O_n, \quad o_i \in O_n \quad (1)" }, { "formula_coordinates": [ 3, 340.41, 281.42, 42.32, 9.72 ], "formula_id": "formula_1", "formula_text": "V_n \rightarrow O_n" }, { "formula_coordinates": [ 3, 370.34, 319.9, 188.33, 9.72 ], "formula_id": "formula_2", "formula_text": "\psi : E_n \rightarrow R_n, \quad r_i \in R_n, \quad e_i = (u_i, v_i) \quad (2)" }, { "formula_coordinates": [ 3, 374.25, 422.85, 184.42, 15.33 ], "formula_id": "formula_3", "formula_text": "F_n^v \in \mathbb{R}^{|V_n| \times D_{o_i}^v}, \quad F_n^e \in \mathbb{R}^{|E_n| \times D_{r_i}^e} \quad (3)" }, { "formula_coordinates": [ 4, 132.14, 92.62, 167.32, 9.9 ], "formula_id": "formula_4", "formula_text": "T : G_0 \rightarrow G_1 \rightarrow \cdots \rightarrow G_n \quad (4)" }, { "formula_coordinates": [ 4, 66.16, 203.09, 233.31, 20.76 ], "formula_id": "formula_5", "formula_text": "\tilde{H}^{(l)}[v] = \sum_{u \in N(v)} \mathrm{Attention}(u, e, v) \cdot \mathrm{Message}(u, e, v) \quad (5)" }, { "formula_coordinates": [ 4, 88.52, 270.88, 210.94, 13 ], "formula_id": "formula_6", "formula_text": "H^{(l)}[v] = \mathrm{A\text{-}Linear}_{\phi(v)}\big(\theta \tilde{H}^{(l)}[v]\big) + H^{(l-1)}[v] \quad (6)" } ]
TransWorldNG: Traffic Simulation via Foundation Model
Traffic simulation is a crucial tool for transportation decision-making and policy development. However, achieving realistic simulations in the face of the high dimensionality and heterogeneity of traffic environments is a longstanding challenge. In this paper, we present TransWorldNG, a traffic simulator that uses data-driven algorithms and graph computing techniques to learn traffic dynamics from real data. We introduce the functionality and structure of TransWorldNG, which is built on a foundation model for transportation management and control. The results demonstrate that TransWorldNG can generate more realistic traffic patterns than traditional simulators. Additionally, TransWorldNG exhibits better scalability, as its computation time grows only linearly as the scenario scale increases. To the best of our knowledge, this is the first traffic simulator that can automatically learn traffic patterns from real-world data and efficiently generate accurate and realistic traffic environments.
Ding Wang; Xuhong Wang; Liang Chen; Shengyue Yao; Ming Jing; Honghai Li; Li Li; Fei-Yue Wang; Yilun Lin
[ { "figure_caption": "Fig. 2 .2Fig. 2. Hierarchical graph structure of TransWorldNG. A hierarchical graph structure that consists of sub-graphs, with the lowest level often representing the finest granularity of simulated interactions. These subgraphs are interconnected, allowing information to flow seamlessly through the different levels of the hierarchy.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Graph representation of a traffic scenario involving two cars, Car A moving straight from left to right and Car B turning left at a signalized intersection. (a) A picture of the real traffic scenario; (b) An abstract representation of the traffic scenario; (c) Graph representation of the traffic scenario.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Overall System Architecture of TransWorldNG. This figure illustrates the key components and their relationships.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "•Controller: The controller module is responsible for controlling network dynamics, traffic signals, and agent behaviors, and uses the simulation core to simulate different scenarios.• Analysis: The analysis module provides insights into the transportation network's performance by processing and analyzing simulation results, such as link statistics, trajectory analysis, traffic counts, congestion analysis, efficiency measures, and more. Interface Layer: The interface layer includes the GUI interface that displays simulation results to the user, shown in Fig 5.The GUI interface provides visualizations and graphs to help the user understand and interpret the simulation results.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. GUI of TransWorldNG. The figure shows the graphical user interface of TransWorldNG, which is used to interact with the traffic simulator. The GUI is designed to be user-friendly and intuitive. The main window of the GUI displays a 3D visualization of the simulated traffic environment.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Comparison of traffic simulation workflows: (a) TransWorldNG, which uses graph construction and embedding techniques to obtain a pre-trained model for different simulation tasks, and (b) Traditional traffic simulators that require building and calibrating pre-defined behavior models. When the environment changes to new states, TransWorldNG can quickly adapt by fine-tuning the pre-trained model with new data, while traditional simulators need to start from scratch and repeat the entire simulation process, which is highlighted in the red dotted box.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. The case study scenario: (a) Traffic network of the simulated environment and (b) corresponding graph representation of the traffic system at one time step. The graph representation captures the structure and connections of the traffic system.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. 
Comparison of the car-following model generated by TransWorldNG, IDM, and the Krauss mode, which is the default car-following model of SUMO. Subplots (a) and (b) present the vehicle acceleration and speed for the front car, as predicted by the three models, respectively, showing the ability of the TransWorldNG model to learn car following behavior from data and can generate similar patterns compared to those well-known models.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Histogram of the distribution of speed deviation in the car following behavior. The speed deviation is defined as the speed difference between the lead and follower vehicles. A speed deviation of zero indicates that the front and follower vehicles are traveling at a relatively consistent speed.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .10Fig. 10. Impact of Data Collection Interval (DCI) on Model Performance of TransWorldNG. This result shows the comparison of predicted vehicle speed between SUMO (as a reference) and TransWorldNG with data collection intervals of 5 and 10 steps, respectively. The result provides insights into the trade-off between prediction accuracy and data collection frequency in the TransWorldNG model.", "figure_data": "", "figure_id": "fig_10", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Fig. 11 .11Fig. 11. Illustration of the computational efficiency of TransWorldNG compared to SUMO.", "figure_data": "", "figure_id": "fig_11", "figure_label": "11", "figure_type": "figure" } ]
[{"Category": "Data Source", "Citation": "[1]", "Explanation": "The cited work SUMO is used as a data source for the development of traffic simulation models in the citing paper."}, {"Category": "Data Source", "Citation": "[2]", "Explanation": "The cited work MATSim is used as a data source for the development of traffic simulation models in the citing paper."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work AimSun is used as a data source for the development of traffic simulation models in the citing paper."}, {"Category": "Data Source", "Citation": "[4]", "Explanation": "The cited work VISSIM is used as a data source for the development of traffic simulation models in the citing paper."}, {"Category": "Data Source", "Citation": "[5]", "Explanation": "The cited work highlights the limitations of traditional traffic simulation models in capturing the growing complexity and heterogeneity of urban transportation systems, serving as a data source for the citing paper to address this gap."}, {"Category": "Data Source", "Citation": "[6]", "Explanation": "The cited work is used as a data source to highlight the need for advanced traffic simulation techniques that can generate more realistic traffic behaviors from real-world data in the citing paper."}, {"Category": "Data Source", "Citation": "[7]", "Explanation": "The cited work is used as a data source to highlight the need for advanced traffic simulation techniques that can generate more realistic traffic behaviors from real-world data in the citing paper."}, {"Category": "Data Source", "Citation": "[8]", "Explanation": "The cited work is used as a data source to highlight the limitations of traditional traffic simulation models in learning parameters in pre-defined models, which motivates the development of more advanced techniques in the citing paper."}, {"Category": "Data Source", "Citation": "[9]", "Explanation": "The cited work highlights the challenges in adapting traditional traffic simulation models to varying environments and managing large and complex data inputs, which motivates the development of more advanced techniques in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[10], [11], [7]", "Explanation": "The cited works provide the foundation for the development of the first generation of TransWorld, which uses ABM technology and object-oriented programming in traffic simulation."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "The cited work is the basis for the data-driven traffic simulator in TransWorldNG, which utilizes data-driven algorithms and graph computing techniques to simulate complex traffic systems."}, {"Category": "Methodological Basis", "Citation": "[13], [14]", "Explanation": "The cited works provide a theoretical framework for agent-based modeling techniques in transportation systems, which the citing paper adopts in its research on modeling and simulating traffic systems."}, {"Category": "Extension or Continuation", "Citation": "[16]", "Explanation": "The cited work extends the research on modeling and simulation of complex traffic systems by focusing on multi-agent methods, which the citing paper builds upon in its study of traffic systems as multi-agent systems."}, {"Category": "Data Source", "Citation": "[9], [14], [2], [18]", "Explanation": "The cited works provide a categorization of models in traffic systems based on their modeling scales, which the citing paper utilizes in its research on modeling and simulation of traffic systems."}, 
{"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work provides a discussion on the availability of large-scale and multi-source data, which serves as the basis for the data-driven techniques discussed in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work highlights the contrast between knowledge-driven approaches and data-driven techniques in transportation modeling, providing evidence for the discussion in the citing paper."}, {"Category": "Data Source", "Citation": "[21]", "Explanation": "The cited work mentions the use of urban big data to assess the effects of road network topology and intersection shapes on traffic flow, which serves as a data source for the machine learning techniques discussed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[22], [23], [24]", "Explanation": "The cited works are mentioned in the context of previous data-driven approaches in transportation research, which the citing paper extends by discussing the limitations in handling complex interactions in heterogeneous environments for large-scale systems."}, {"Category": "Methodological Basis", "Citation": "[25], [26]", "Explanation": "The cited works provide a graph-based approach to transportation system modeling that the citing paper adopts to represent data and relationships between agents in a more efficient and adaptable way."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The HGT model is used as a methodological basis for modeling heterogeneous graphs in transportation systems, as it is a powerful graph neural network that can handle the heterogeneity of graphs by utilizing specific representations and a multi-head attention mechanism."}, {"Category": "Methodological Basis", "Citation": "[28], [29]", "Explanation": "The cited works are used to optimize the MSE loss in the citing paper using optimization algorithms such as SGD, Adam, or RMSProp, which are employed to minimize the difference between predicted and true values."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work, SUMO, serves as the basis for the case study conducted in the citing paper to compare the capabilities and advantages of TransWorldNG with existing traffic simulators."}, {"Category": "Extension or Continuation", "Citation": "[19]", "Explanation": "The cited work on AI-driven simulation technologies is mentioned as a future direction for TransWorldNG, indicating a potential extension or continuation of the research in this area."}, {"Category": "Extension or Continuation", "Citation": "[30]", "Explanation": "The cited work highlights the challenges associated with collecting, processing, and storing large amounts of data, which the citing paper extends by exploring potential solutions such as cloud computing and parallel processing."}, {"Category": "Data Source", "Citation": "[31]", "Explanation": "The cited work introduces the use of large language models (LLMs) in traffic simulation, which the citing paper further explores as a potential solution to improve the accuracy and effectiveness of traffic simulations."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b8", "b32", "b61", "b15", "b45", "b10", "b1", "b53", "b61", "b32", "b44", "b64", "b46", "b58", "b55", "b66", "b56", "b4", "b27", "b30", "b52" ], "table_ref": [], "text": "With the ubiquity of graph-structured data emerging from various modern applications, Graph Neural Networks (GNNs) have gained increasing attention from both researchers and practitioners. GNNs have been applied to many application domains, including quantum chemistry (Duvenaud et al. 2015;Dai, Dai, and Song 2016;Masters et al. 2022), social science (Ying et al. 2018;Fan et al. 2019;Shiao et al. 2022), transportation (Peng et al. 2020;Derrow-Pinion et al. 2021) and neuroscience (Ahmedt-Aristizabal et al. 2021;Wang et al. 2021), and have attained promising results on graph classification (Ying et al. 2018;Masters et al. 2022), node classification (Shi et al. 2020) and link prediction (Zhang and Chen 2018;Suresh et al. 2023) tasks.\nMost GNNs are limited in terms of their expressive power. Xu et al., (Xu et al. 2018) show that GNNs are at most as powerful as 1-dimentional Weisfeiler-Leman (1-WL) test (Weisfeiler and Leman 1968) in distinguishing non-isomorphic graph structures. This is because a vanilla GNN essentially operates on a subtree rooted at each node in its message passing, i.e., it treats every neighbor of the node equally in its message aggregation. In this regard, it overlooks any discrepancy that may exist in the connectivities between neighbors. To address this limitation, efforts have been devoted to incorporating local substructure information to GNNs. Several studies attempt to encode such local information through induced subgraphs (Zhao et al. 2021), overlap subgraphs (Wijesinghe and Wang 2021) and spatial encoding (Bouritsas et al. 2022), to enhance GNNs' expressiveness. But the local structures they choose are not able to capture the complete picture of the 1-hop neighborhood of an edge. Some others incorporate shortest path information to edges in message passing via distance encoding (Li et al. 2020), adaptive breath/depth functions (Liu et al. 2019) and affinity matrix (Wan et al. 2021) to control the message from neighbors at different distances. However, the descriptor used to encode the substructure may overlook some connectivities between neighbors. Furthermore, some of the above models also suffer from high computational cost due to the incorporation of certain substructures.\nIn this paper, we aim to develop a model that overcomes the above drawbacks and yet is able to empower GNNs' expressiveness. (1) We define a new type of substructures named union subgraphs, each capturing the entire closed neighborhood w.r.t. an edge. (2) We design an effective substructure descriptor that encodes high-order connectivities and it is easy to incorporate with arbitrary message-passing neural network (MPNN) or We propose a new model, namely Union Subgraph Neural Network (UnionSNN), which is strictly more expressive than the vanilla GNNs (1-WL) in theory and also computationally efficient in practice. 
Our contributions are summarized as follows:\n• We investigate different types of connectivities existing in the local neighborhood and identify the substructure, named \"union subgraph\", that is able to capture the com-2 Related Work" }, { "figure_ref": [], "heading": "Substructure-Enhanced GNNs", "publication_ref": [ "b56", "b66", "b4", "b2", "b48", "b18", "b26", "b57", "b34", "b28", "b63", "b60", "b27", "b14" ], "table_ref": [], "text": "In recent years, several GNN architectures have been designed to enhance their expressiveness by encoding local substructures. GraphSNN (Wijesinghe and Wang 2021) brings the information of overlap subgraphs into the message passing scheme as a structural coefficient. However, the overlap subgraph and the substructure descriptor used by GraphSNN are not powerful enough to distinguish all nonisomorphic substructures in the 1-hop neighborhood. Zhao et al. (Zhao et al. 2021) encode the induced subgraph for each node and inject it into node representations. (Bouritsas et al. 2022) introduces structural biases in the aggregation function to break the symmetry in message passing. For these two methods, the neighborhood under consideration should be pre-defined, and the subgraph matching is extremely expensive (O(n k ) for k-tuple substructure) when the substructure gets large. Similarly, a line of research (Bodnar et al. 2021;Thiede, Zhou, and Kondor 2021;Horn et al. 2021) develops new WL aggregation schemes to take into account substructures like cycles or cliques. Despite these enhancements, performing cycle counting is very time-consuming. Other Transformer-based methods (Dwivedi and Bresson 2020;Kreuzer et al. 2021;Wu et al. 2021;Mialon et al. 2021) incorporate local structural information via positional encoding (Lim et al. 2022;Zhang et al. 2023). Graphormer (Ying et al. 2021) combines the node degree and the shortest path information for spatial encoding, while other works (Li et al. 2020;Dwivedi et al. 2021) employ random walk based encodings that can encode k-hop neighborhood information of a node. However, these positional encodings only consider relative distances from the center node and ignore high-order connectivities between the neighbors." }, { "figure_ref": [], "heading": "Path-Related GNNs", "publication_ref": [ "b27", "b30", "b47", "b59", "b49", "b52" ], "table_ref": [], "text": "A significant amount of works have focused on the application of shortest paths and their related techniques to GNNs. Li et al. (Li et al. 2020) present a distance encoding module to augment node features and control the receptive field of message passing. GeniePath (Liu et al. 2019) proposes an adaptive breath function to learn the importance of different-sized neighborhoods and an adaptive depth function to extract and filter signals from neighbors within different distances. PathGNN (Tang et al. 2020) imitates how the Bellman-Ford algorithm solves the shortest path problem in generating weights when updating node features. SPN (Abboud, Dimitrov, and Ceylan 2022) designs a scheme, in which the representation of a node is propagated to each node in its shortest path neighborhood. Some recent works adapt the concept of curvature from differential geometry to reflect the connectivity between nodes and the possible bottleneck effects. CurvGN (Ye et al. 2019) reflects how easily information flows between two nodes by graph curvature information, and exploits curvature to reweigh different channels of messages. Topping et al. (Topping et al. 
2021) propose Balanced Forman curvature that better reflects the edges having bottleneck effects, and alleviate the over-squashing problem of GNNs by rewiring graphs. SNALS (Wan et al. 2021) utilizes an affinity matrix based on shortest paths to encode the structural information of hyperedges. Our method is different from these existing methods by introducing a shortest-path-based substructure descriptor for distinguishing non-isomorphic substructures." }, { "figure_ref": [], "heading": "Local Substructures to Empower MPNNs", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce MPNNs. We then investigate what kind of local substructures are beneficial to improve the expressiveness of MPNNs." }, { "figure_ref": [], "heading": "Message Passing Neural Networks", "publication_ref": [ "b58" ], "table_ref": [], "text": "We represent a graph as G = (V, E, X), where V = {v 1 , ..., v n } is the set of nodes, E ∈ V × V is the set of edges, and\nX = {x v | v ∈ V } is the set of node features. The set of neighbors of node v is denoted by N (v) = {u ∈ V | (v, u) ∈ E}.\nThe l-th layer of an MPNN (Xu et al. 2018) can be written as:\nh (l) v = AGG (l-1) (h (l-1) v , MSG (l-1) ({h (l-1) u , u ∈ N (v)})),(1)\nwhere h\n(l)\nv is the representation of node v at the l-th layer, h\n(0) v = x v , AGG(•)\nand MSG(•) denote the aggregation and message functions, respectively." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Local Substructures to Improve MPNNs", "publication_ref": [ "b56" ], "table_ref": [], "text": "According to Eq. (1), MPNN updates the representation of a node isotropously at each layer and ignores the structural connections between the neighbors of the node. Essentially, the local substructure utilized in the message passing of MPNN is a subtree rooted at the node. Consequently, if two non-isomorphic graphs have the same set of rooted subtrees, they cannot be distinguished by MPNN (and also 1-WL). Such an example is shown in Figure 1 (a). A simple fix to this problem is to encode the local structural information about each neighbor, based on which neighbors are treated unequally in the message passing. One natural question arises: which substructure shall we choose to characterize the 1-hop local information?\n(a) ! ! \"# (a, b) ! $ \"# (a, d) ! % \"# (c, d) ! & \"# (d, f) (b) u v a b d c f ! ! ∩ ! \" ! ! ∪ ! \" ! !∪\"\nTo answer the above question, we consider two adjacent nodes v and u, and discuss different types of edges that may exist in their neighbor sets, N (v) and N (u). We define the closed neighbor set of node v as\nÑ (v) = N (v) ∪ {v}. The induced subgraph of Ñ (v) is denoted by S v , which defines the closed neighborhood of v. The common closed neighbor set of v and u is N vu = Ñ (v) ∩ Ñ (u) and the exclusive neighbor set of v w.r.t u is defined as N -u v = Ñ (v) -N vu .\nAs shown in Figure 1(b), there are four types of edges in the closed neighborhood of {v, u}:\n• E vu 1 ∈ N vu × N vu : edges between the common closed neighbors of v and u, such as (a, b);\n• E vu 2 ∈ (N vu × N -u v ) ∪ (N vu × N -v u ):\nedges between a common closed neighbor of v and u and an exclusive neighbor of v/u, such as (a, d);\n• E vu 3 ∈ N -u v ×N -v u :\nedges between two exclusive neighbors of v and u from different sides, such as (c, d);\n• E vu 4 ∈ (N -u v × N -u v ) ∪ (N -v u × N -v u )\n: edges between two exclusive neighbors of v or u from the same side, such as (d, f ). 
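These four edge categories can be made concrete with a few lines of NetworkX; the helper below (an illustrative sketch with our own naming, not code from the paper) builds the closed neighbor sets and buckets every edge of the closed neighborhood of (v, u) into E1-E4, and the toy graph mirrors the situation of Figure 1(b):

```python
import networkx as nx

def edge_categories(G: nx.Graph, v, u):
    """Bucket the edges in the closed neighborhood of (v, u) into E1-E4."""
    Nv = set(G[v]) | {v}                       # closed neighbor set Ñ(v)
    Nu = set(G[u]) | {u}                       # closed neighbor set Ñ(u)
    common = Nv & Nu                           # N_vu
    ex_v, ex_u = Nv - common, Nu - common      # exclusive neighbors of v / of u
    E1, E2, E3, E4 = set(), set(), set(), set()
    for a, b in G.subgraph(Nv | Nu).edges():
        if a in common and b in common:
            E1.add((a, b))                     # both ends are common closed neighbors
        elif (a in common) != (b in common):
            E2.add((a, b))                     # one common end, one exclusive end
        elif (a in ex_v and b in ex_u) or (a in ex_u and b in ex_v):
            E3.add((a, b))                     # exclusive neighbors from different sides
        else:
            E4.add((a, b))                     # exclusive neighbors from the same side
    return E1, E2, E3, E4

# Toy graph in the spirit of Figure 1(b): a, b are common neighbors of v and u,
# c is exclusive to v, while d and f are exclusive to u.
G = nx.Graph([("v", "u"), ("v", "a"), ("v", "b"), ("u", "a"), ("u", "b"), ("a", "b"),
              ("v", "c"), ("u", "d"), ("u", "f"), ("a", "d"), ("c", "d"), ("d", "f")])
E1, E2, E3, E4 = edge_categories(G, "v", "u")  # (a, b) falls in E1, (a, d) in E2, (c, d) in E3, (d, f) in E4
```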
We now discuss three different local substructures, each capturing a different set of edges. Overlap Subgraph (Wijesinghe and Wang 2021). The overlap subgraph of two adjacent nodes v and u is defined as S v∩u = S v ∩ S u . The overlap subgraph contains only edges in E vu 1 . Union Minus Subgraph. The union minus subgraph of two adjacent nodes v, u is defined as S - v∪u = S v ∪S u . The union minus subgraph consists of edges in E vu 1 , E vu 2 and E vu 4 . Union Subgraph. The union subgraph of two adjacent nodes v and u, denoted as S v∪u , is defined as the induced subgraph of Ñ (v) ∪ Ñ (u). The union subgraph contains all four types of edges mentioned above.\nIt is obvious that union subgraph captures the whole picture of the 1-hop neighborhood of two adjacent nodes. This subgraph captures all types of connectivities within the neighborhood, providing an ideal local substructure for enhancing the expressive power of MPNNs. We illustrate how effective different local substructures are in improving MPNNs through an example in Appendix A. Note that we restrict the discussion to the 1-hop neighborhood because we aim to develop a model based on the MPNN scheme, in which a single layer of aggregation is performed on the 1-hop neighbors." }, { "figure_ref": [ "fig_1" ], "heading": "Union Isomorphism", "publication_ref": [ "b56", "b5", "b38", "b29", "b59" ], "table_ref": [ "tab_2", "tab_2" ], "text": "We now proceed to define the isomorphic relationship between the neighborhoods of two nodes i and j based on union subgraphs. The definition follows that of overlap isomorphism in (Wijesinghe and Wang 2021). Overlap Isomorphism. S i and S j are overlap-isomorphic, denoted as S i ≃ overlap S j , if there exists a bijective mapping g: Ñ (i) → Ñ (j) such that g(i) = j, and for any v ∈ N (i) and g(v) = u, S i∩v and S j∩u are isomorphic (ordinary graph isomorphic). Union Isomorphism. S i and S j are union-isomorphic, denoted as S i ≃ union S j , if there exists a bijective mapping g: Ñ (i) → Ñ (j) such that g(i) = j, and for any v ∈ N (i) and g(v) = u, S i∪v and S j∪u are isomorphic (ordinary graph isomorphic).\nAs shown in Figure 2, we have two subgraphs G 1 and G 2 . These two subgraphs are overlap-isomorphic. The bijective mapping g is: g(k 1 ) = g(k 2 ), ∀k ∈ {v, a, b, c, d, e, f, u, h}. Take a pair of corresponding overlap subgraphs as an example: S v1∩u1 and S v2∩u2 . They are based on two red edges (v 1 , u 1 ), and (v 2 , u 2 ) and the rest of edges in the overlap subgraphs are colored in blue. It is easy to see that S v1∩u1 and S v2∩u2 are isomorphic (ordinary one). In this ordinary graph isomorphism, its bijective mapping between the nodes is not required to be the same as g. It could be the same or different, as long as the ordinary graph isomorphism holds between this pair of overlap subgraphs. In this example, any pair of corresponding overlap subgraphs defined under g are isomorphic (ordinary one). In fact, all overlap subgraphs have the same structure in this example. In this sense, the concept of \"overlap isomorphism\" does not look at the neighborhood based on a single edge, but captures the neighborhoods of all edges and thus the \"overlap\" connectivities of the two subgraphs G 1 and G 2 . As for the union subgraph concept we define, the union subgraph of nodes v 1 and u 1 (S v1∪u1 ) is the same as G 1 , and the union subgraph of nodes v 2 and u 2 (S v2∪u2 ) is the same as G 2 . Therefore, S v1∪u1 and S v2∪u2 are not union-isomorphic. 
This means that union isomorphism is able to distinguish the local structures centered at v 1 and v 2 but overlap isomorphism cannot. \nLet U = {S v∪u |(v, u) ∈ E} be the set of union sub- graphs in G.\nIn order to fuse the information of union subgraphs in message passing, we need to define a function f (•) to describe the structural information of each S v∪u ∈ U. Ideally, given two union subgraphs centered at node v,\nS v∪u = (V v∪u , E v∪u ) and S v∪u ′ = (V v∪u ′ , E v∪u ′ ), we want f (S v∪u ) = f (S v∪u ′ ) iff S v∪u\nand S v∪u ′ are isomorphic. We abstract the following properties of a good substructure descriptor function f (•):\n• Size Awareness. f (S v∪u ) ̸ = f (S v∪u ′ ) if |V v∪u | ̸ = |V v∪u ′ | or |E v∪u | ̸ = |E v∪u ′ |; • Connectivity Awareness. f (S v∪u ) ̸ = f (S v∪u ′ ) if |V v∪u | = |V v∪u ′ | and |E v∪u | = |E v∪u ′ | but S v∪u and S v∪u ′ are not isomorphic; • Isomorphic Invariance. f (S v∪u ) = f (S v∪u ′ ) if S v∪u\nand S v∪u ′ are isomorphic.\nFigure 3 illustrates the properties. Herein, we design f (•) as a function that transforms S v∪u to a path matrix P vu ∈ R |Vv∪u|×|Vv∪u| such that each entry:\nP vu ij = PathLen(i, j, Sv∪u), i, j ∈ Vv∪u,(2)\nwhere PathLen(•) denotes the length of the shortest path between i and j in S v∪u . We choose the path matrix over the adjacency matrix or the Laplacian matrix as it explicitly encodes high-order connectivities between the neighbors. In addition, with a fixed order of nodes, we can get a unique P vu for a given S v∪u , and vice versa. We formulate it in Theorem 2. Theorem 2. With a fixed order of nodes in the path matrix, we can obtain a unique path matrix P vu for a given union subgraph S v∪u , and vice versa.\nIt is obvious that our proposed f (•) satisfies the abovementioned three properties, with a node permutation applied in the isomorphic case.\nDiscussion on Other Substructure Descriptor Functions.\nIn the literature, some other functions have also been proposed to describe graph substructures. (1) Edge Betweenness (Brandes 2001) is defined by the number of shortest paths between any pair of nodes in a (sub)graph G that pass through an edge. When applying the edge betweenness to (v, u) in S v∪u , the metric would remain the same on two different union subgraphs, one with an edge in E vu 4 and one without. This shows that edge betweenness does not satisfy Size Awareness; (2) Wijesinghe andWang (Wijesinghe andWang 2021) puts forward a substructure descriptor as a function of the number of nodes and edges. This descriptor fails to distinguish non-isomorphic subgraphs with the same size, and thus does not satisfy Connectivity Awareness; (3) Discrete Graph Curvature, e.g., Olliveier Ricci curvature (Ollivier 2009;Lin, Lu, and Yau 2011), has been introduced to MPNNs in recent years (Ye et al. 2019). Ricci curvature first computes for each node a probability vector of length |V | that characterizes a uniform propagation distribution in the neighborhood. It then defines the curvature of two adjacent nodes as the Wasserstein distance of their corresponding probability vectors. Similar to edge betweenness, curvature doesn't take into account the edges in E vu 4 in its computation and thus does not satisfy Size Awareness either. We detail the definitions of these substructure descriptor functions in Appendix C." 
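As an illustration of the descriptor (a minimal NetworkX/NumPy sketch under our own naming, not the authors' released code), the path matrix of Eq. (2) can be computed directly on the union subgraph, and summing its singular values yields the edge-level coefficient a_vu used in the Network Design section that follows:

```python
import networkx as nx
import numpy as np

def union_subgraph(G: nx.Graph, v, u) -> nx.Graph:
    """Induced subgraph on Ñ(v) ∪ Ñ(u), i.e. the union subgraph S_{v∪u}."""
    nodes = set(G[v]) | {v} | set(G[u]) | {u}
    return G.subgraph(nodes)

def path_matrix(G: nx.Graph, v, u) -> np.ndarray:
    """Shortest-path matrix P^{vu} of Eq. (2) under an arbitrary but fixed node order."""
    S = union_subgraph(G, v, u)
    order = list(S.nodes())
    dist = dict(nx.all_pairs_shortest_path_length(S))
    P = np.zeros((len(order), len(order)))
    for i, a in enumerate(order):
        for j, b in enumerate(order):
            P[i, j] = dist[a][b]               # S is connected through v and u, so this is finite
    return P

def structural_coefficient(G: nx.Graph, v, u) -> float:
    """a_vu = sum of singular values of P^{vu}; invariant to the node order chosen above."""
    return float(np.linalg.svd(path_matrix(G, v, u), compute_uv=False).sum())
```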
}, { "figure_ref": [ "fig_2" ], "heading": "Network Design", "publication_ref": [ "b19", "b60" ], "table_ref": [], "text": "For the path matrix of an edge (v, u) to be used in message passing, we need to further encode it as a scalar. We perform Singular Value Decomposition (SVD) (Horn and Johnson 2012) on the path matrix and extract the singular values: P = U ΣV * . The sum of the singular values of P vu , denoted as a vu = sum(Σ vu ) , is used as the local structural coefficient of the edge (v, u) ∈ E. Note that since the local structure never changes in message passing, we can compute the structural coefficients in preprocessing before the training starts. A nice property of this structural coefficient is that, it is permutation invariant thanks to the use of SVD and the sum operator. With arbitrary order of nodes, the computed a vu remains the same, which removes the condition required by Theorem 2. UnionSNN. We now present our model, namely Union Subgraph Neural Network (UnionSNN), which utilizes unionsubgraph-based structural coefficients to incorporate local substructures in message passing. For each vertex v ∈ V , the node representation at the l-th layer is generated by:\nh (l) v = MLP 1 (l-1) ((1 + ϵ (l-1) )h (l-1) v + u∈N (v) Trans (l-1) (ã vu )h (l-1) u ),(3)\nwhere ϵ (l-1) is a learnable scalar parameter and ãvu = a vu u∈N (v) a vu . MLP 1 (•) denotes a multilayer perceptron (MLP) with a non-linear function ReLU. To transform the weight ãvu to align with the multi-channel representation h training:\nTrans(a) = softmax(MLP2(a)),(4)\nwhere MLP 2 denotes an MLP with ReLU and a channelwise softmax function softmax(•) normalizes the outputs of MLP separately on each channel. For better understanding, we provide the pseudo-code of UnionSNN in Appendix D.\nAs a Plugin to Empower Other Models. In addition to a standalone UnionSNN network, our union-subgraph-based structural coefficients could also be incorporated into other GNNs in a flexible and yet effective manner. For arbitrary MPNNs as in Eq. ( 1), we can plugin our structural coefficients via an element-wise multiplication:\nh (l) v = AGG (l-1) (h (l-1) v , MSG (l-1) ( {Trans (l-1) (ã vu )h (l-1) u , u ∈ N (v)})).\n(5)\nFor transformer-based models, inspired by the spatial encoding in Graphormer (Ying et al. 2021), we inject our structural coefficients into the attention matrix as a bias term:\nAvu = (hvWQ) (huWK ) T √ d + Trans(ã vu ),(6)\nwhere the definition of Trans(•) is the same as Eq. ( 4) and shared across all layers, h v , h u ∈ R 1×d are the node representations of v and u, W Q , W K ∈ R d×d are the parameter matrices, and d is the hidden dimension of h v and h u .\nDesign Comparisons with GraphSNN. Our UnionSNN is similar to GraphSNN in the sense that both improve the expressiveness of MPNNs (and 1-WL) by injecting the information of local substructures. However, UnionSNN is superior to GraphSNN in the following aspects. (1) Union subgraphs in UnionSNN are stronger than overlap subgraphs in GraphSNN, as ensured by Theorem 1.\n(2) The shortestpath-based substructure descriptor designed in UnionSNN is more powerful than that in GraphSNN: the latter fails to possess the property of Connectivity Awareness (as elaborated in Section 4.1). An example of two non-isomorphic subgraphs S v∩u and S v ′ ∩u ′ is shown in Figure 4. 
They have the same structural coefficients in GraphSNN.\n(3) The aggregation function in UnionSNN works on adjacent nodes in the input graph, while that in GraphSNN utilizes the structural coefficients on all pairs of nodes (regardless of their adjacency). Consequently, GraphSNN requires to pad the adjacency matrix and feature matrix of each graph to the maximum graph size, which significantly increases the computational complexity. The advantages of UnionSNN over GraphSNN are also evidenced by experimental results in Section 5.4. \n% % & % % & ≄" }, { "figure_ref": [ "fig_3" ], "heading": "Expressive Power of UnionSNN", "publication_ref": [ "b40", "b22" ], "table_ref": [], "text": "We formalize the following theorem to show that Union-SNN is more powerful than 1-WL test in terms of expressive power.\nTheorem 3. UnionSNN is more expressive than 1-WL in testing non-isomorphic graphs.\nThe stronger expressiveness of UnionSNN over 1-WL is credited to its use of union subgraphs, with an effective encoding of local neighborhood connectivities via the shortestpath-based design of structural coefficients. The Connection with Higher-Order WL Tests. As discussed in (Papp and Wattenhofer 2022), high-order WL tests are concepts of global comparisons over two graphs. Therefore, it is arguable if the WL hierarchy is a suitable tool to measure the expressiveness of GNN extensions as the latter focus on locality. Nonetheless, there exist some graph structures on which the 3-WL test is not stronger than Union-SNN, i.e., some graphs can be distinguished by UnionSNN but not by 3-WL. As an example, UnionSNN can distinguish the 4x4 Rook's graph and the Shrikhande graph (as shown in Figure 5, and cited from (Huang et al. 2022)), while 3-WL cannot, which suggests that UnionSNN is stronger than 3-WL on such graphs. The induced graphs of arbitrary nodes v and v ′ in the two graphs cannot be distinguished by 1-WL. Thus the original graphs cannot be distinguished by 3-WL. However, the numbers of 3-cycles and 4-cycles in their union subgraphs are different, and UnionSNN is able to reflect the number of 3-cycles and 4-cycles with the consideration of E vu 3 and E vu 4 edges in union subgraphs. In addition, the range of the union subgraph of an edge is inherently limited to 3-hop neighbors of each end node of the edge. As a result, one should expect that UnionSNN's power is theoretically upper-bounded by 4-WL. Furthermore, UnionSNN has limitations in distinguishing cycles with length larger than 6, as this would require information further than 3 hops." }, { "figure_ref": [], "heading": "Experimental Study", "publication_ref": [ "b23", "b20", "b43", "b23", "b25", "b58", "b17", "b50", "b6", "b31", "b36", "b60", "b42" ], "table_ref": [ "tab_1", "tab_2" ], "text": "In this section, we evaluate the effectiveness of our proposed model under various settings and aim to answer the following research questions: RQ1. Can UnionSNN outperform existing MPNNs and transformer-based models? RQ2. Can other GNNs benefit from our structural coefficient? RQ3. How do different components affect the performance of UnionSNN? RQ4. Is our runtime competitive with other substructure descriptors? We conduct experiments on four tasks: graph classification, graph regression, node classification and cycle detection. When we use UnionSNN to plugin other models we use the prefix term \"Union-\", such as UnionGCN. The implementation details are introduced in Appendix E. Datasets. For graph classification, we use 10 benchmark datasets. 
Eight of them were selected from the TUDataset (Kersting et al. 2016), including MUTAG, PROTEINS, ENZYMES, DD, FRANKENSTEIN (denoted as FRANK in our tables), Tox21, NCI1 and NCI109. The other two datasets OGBG-MOLHIV and OGBG-MOLBBBP were selected from Open Graph Benchmark (Hu et al. 2020). For graph regression, we conduct experiments on ZINC10k and ZINC-full datasets (Dwivedi et al. 2020). For node classification, we test on five datasets, including citation networks (Cora, Citeseer, and PubMed (Sen et al. 2008)) and Amazon co-purchase networks (Computer and Photo (McAuley et al. 2015)). For cycle detection, we conduct experiments on the detection of cycles of lengths 4 and 6, implemented as a graph classification task (Kersting et al. 2016). These datasets cover various graph sizes and densities. Statistics of the datasets used are summarized in Tables 1 and2. Baseline Models. We select various GNN models as baselines, including (1) classical MPNNs such as GCN (Kipf and Welling 2016), GIN (Xu et al. 2018), GraphSAGE (Hamilton, Ying, and Leskovec 2017), GAT (Veličković et al. 2017), GatedGCN (Bresson and Laurent 2017); (2) WL-based GNNs such as 3WL-GNN (Maron et al. 2019);\n(3) transformer-based methods such as UGformer (Nguyen, Nguyen, and Phung 2019), Graphormer (Ying et al. 2021) and GPS (Rampášek et al. 2022) " }, { "figure_ref": [], "heading": "Performance on Different Graph Tasks", "publication_ref": [ "b35" ], "table_ref": [ "tab_4", "tab_5", "tab_7" ], "text": "Graph-Level Tasks. For graph classification, we report the results on 8 TUDatasets in Table 3 and the results on 2 OGB datasets in Table 4. Our UnionSNN outperforms all baselines in 7 out of 10 datasets (by comparing UnionSNN with all baselines without \"ours\"). We further apply our structural coefficient as a plugin component to four MPNNs: GCN, GatedGCN, GraphSAGE and GIN. The results show that our structural coefficient is able to boost the performance of the base model in almost all cases, with an improvement of up to 11.09% (Achieved when plugging in our local encoding into GraphSAGE on the ENZYMES dataset: (58.32% -52.50%) / 52.50% = 11.09%.). For graph regression, we report the mean absolute error (MAE) on ZINC10k and ZINC-full. Detecting cycles requires more than 1-WL expressivity (Morris et al. 2019). To further demonstrate the effectiveness of UnionSNN, we conduct experiments on the detection of cycles of lengths 4 and 6. Note that GCN makes a random guess on the cycle detection with an accuracy of 50%. By incorporating our structural coefficient to GCN, we are able to remarkably boost the accuracy to around 95%. The large gap proves that our structural coefficient is highly effective in capturing local information.\nNode-Level Tasks. We report the results of node classification in Table 7. UnionSNN outperforms all baselines on all 5 datasets. Again, injecting our structural coefficients to GCN, GIN, and GraphSNN achieves performance improvement over base models in almost all cases. Remarkably, we find that integrating our structural coefficients into GCN, GIN, and GraphSNN can effectively reduce the standard deviations (stds) of the classification accuracy of these models in most cases, and some are with a large margin. For instance, the std of classification accuracy of GCN on Cora is reduced from 4.41 to 0.42 after injecting our structural coefficients." 
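The "Union-" variants in the tables above inject the precomputed coefficients into existing MPNNs as in Eq. (5). A minimal sketch of this plugin idea for a GCN backbone is given below; it assumes torch_geometric is available and simplifies Eq. (5) by passing a single scalar per edge through GCNConv's edge_weight argument, whereas the paper's Trans(·) produces channel-wise weights.

```python
# Hypothetical sketch of the Eq. (5) plugin for a GCN backbone (torch_geometric assumed).
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class UnionGCNSketch(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, num_classes)

    def forward(self, x, edge_index, union_coeff):
        # union_coeff: [num_edges] precomputed a_vu (e.g. sum of singular values of P^{vu}),
        # used here as per-edge weights so that messages are rescaled by local structure.
        h = torch.relu(self.conv1(x, edge_index, edge_weight=union_coeff))
        return self.conv2(h, edge_index, edge_weight=union_coeff)
```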
}, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_8", "tab_9" ], "text": "In this subsection, we validate empirically the design choices made in different components of our model: (1) the local substructure;\n(2) the substructure descriptor;\n(3) the encoding method from a path matrix to a scalar. All experiments were conducted on 6 graph classification datasets.\nLocal Substructure. We test three types of local substructures defined in Section 3.2: overlap subgraphs, union minus subgraphs and union subgraphs. They are denoted as \"overlap\", \"minus\", and \"union\" respectively in Table 8. The best results are consistently achieved by using union subgraphs. Substructure Descriptor. We compare our substructure descriptor with four existing ones discussed in Section 4.1. We replace the substructure descriptor in UnionSNN with edge betweenness, node/edge counting, Ricci curvature, and Laplacian matrix (other components unchanged), and obtain four variants, namely BetSNN, CountSNN, CurvSNN, and LapSNN. Table 9 shows our UnionSNN is a clear winner: it achieves the best result on 5 out of 6 datasets. This experiment demonstrates that our path matrix better captures structural information.\nPath Matrix Encoding Method. We test three methods that transform a path matrix to a scalar: (1) sum of all elements in the path matrix (matrix sum); (2) maximum eigenvalue of the path matrix (eigen max); (3) sum of all singular values of the matrix (svd sum) used by UnionSNN in Section 4.2. Table 10 shows that the encoding method \"svd sum\" performs the best on 5 out of 6 datasets." }, { "figure_ref": [ "fig_4" ], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "In this subsection, we investigate how the proposed structural coefficient a vu reflects local connectivities. We work on an example union subgraph S v∪u in Figure 6 and modify its nodes/edges to study how the coefficient a vu varies with the local structural change. We have the following observations: (1) with the set of nodes unchanged, deleting an edge increases a vu ; (2) deleting a node (and its incident edges) decreases a vu ; (3) the four types of edges in the closed neighborhood (Section 3.2) have different effects to a vu : E vu 1 <E vu 2 <E vu 3 <E vu 4 (by comparing -ab, -ad, -de, and +df). These observations indicate that a smaller coefficient will be assigned to an edge with a denser local substructure. This matches our expectation that the coefficient should be small for an edge in a highly connected neighborhood. The rationale is, such edges are less important in message passing as the information between their two incident nodes can flow through more paths. By using the coefficients that well capture local connectivities, the messages from different neighbors could be properly adjusted when passing to the center node. This also explains the effectiveness of UnionSNN in performance experiments. " }, { "figure_ref": [], "heading": "Efficiency Analysis", "publication_ref": [ "b2" ], "table_ref": [ "tab_1", "tab_1" ], "text": "In this subsection, we conduct experiments on PROTEINS, DD and FRANKENSTEIN datasets, which cover various number of graphs and graph sizes. Preprocessing Computational Cost. UnionSNN computes structural coefficients in preprocessing. 
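The three path-matrix encodings compared in the ablation, as well as the edge-deletion behaviour examined in the case study, can be reproduced on a toy subgraph with a few lines of numpy. The snippet below is an illustrative sketch on a toy example of our own, not the evaluation code.

```python
# Sketch of the three path-matrix -> scalar encodings from the ablation study.
import numpy as np
import networkx as nx

def encodings(P: np.ndarray) -> dict:
    eigvals = np.linalg.eigvals(P)
    svals = np.linalg.svd(P, compute_uv=False)
    return {
        "matrix_sum": float(P.sum()),               # sum of all entries
        "eigen_max": float(np.max(eigvals.real)),   # largest eigenvalue
        "svd_sum": float(svals.sum()),              # UnionSNN's choice (Section 4.2)
    }

def path_matrix_of(S: nx.Graph) -> np.ndarray:
    order = sorted(S.nodes())
    d = dict(nx.all_pairs_shortest_path_length(S))
    return np.array([[d[i][j] for j in order] for i in order], dtype=float)

# toy union subgraph: deleting an edge increases the coefficient (cf. the case study)
S = nx.Graph([("v", "u"), ("v", "a"), ("u", "a"), ("v", "b"), ("u", "b"), ("a", "b")])
print(encodings(path_matrix_of(S)))
S.remove_edge("a", "b")
print(encodings(path_matrix_of(S)))   # sparser neighborhood -> larger svd_sum
```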
We compare its preprocessing time with the time needed in baseline models for pre-computing their substructure descriptors, including edge betweenness (Betweenness) in BetSNN, node/edge counting (Count ne) in GraphSNN, Ricci curvature (Curvature) in CurvGN, and counting cycles (Count cycle) in (Bodnar et al. 2021). As shown in Table 11, the preprocessing time of UnionSNN is comparable to that of other models. This demonstrates that our proposed structural coefficient is able to improve performance without significantly sacrificing efficiency. Our computational cost could be further reduced by pre-computing in the input graph all node pairs with a distance of 1, 2, or 3 (all possible distances in our union subgraphs). The local path matrix could be computed efficiently by simple checking. This can avoid repeated shortest path computations in each local neighborhood. Table 11: Time cost (seconds) for computing structural coefficients in preprocessing. We are not reporting the time of counting cycles on DD since it took more than 100 hours." }, { "figure_ref": [], "heading": "Runtime Computational Cost", "publication_ref": [], "table_ref": [ "tab_13" ], "text": "We conduct an experiment to compare the total runtime cost of UnionSNN with those in other MPNNs. The results are reported in Table 12. Although UnionSNN runs slightly slower than GCN and GIN, it runs over 4.56 times faster than WL-based MPNN (3WL-GNN) and is comparable to MPNN with positional encoding (GatedGCN-LSPE). Compared with GraphSNN, Union-SNN runs significantly faster: the efficiency improvement approaches an order of magnitude on datasets with large graphs, e.g., DD. This is because UnionSNN does not need to pad the adjacency matrix and the feature matrix of each graph to the maximum graph size in the dataset, as Graph-SNN does. " }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "We present UnionSNN, a model that outperforms 1-WL in distinguishing non-isomorphic graphs. UnionSNN utilizes an effective shortest-path-based substructure descriptor applied to union subgraphs, making it more powerful than previous models. Our experiments demonstrate that UnionSNN surpasses state-of-the-art baselines in both graph-level and node-level classification tasks while maintaining its computational efficiency. The use of union subgraphs enhances the model's ability to capture neighbor connectivities and facilitates message passing. Additionally, when applied to existing MPNNs and Transformer-based models, UnionSNN improves their performance by up to 11.09%." }, { "figure_ref": [ "fig_5" ], "heading": "A An Example of the Power of Local Substructures", "publication_ref": [], "table_ref": [], "text": "In Figure 7, two graphs G 1 and G 2 contain different numbers of four cycles and are hence non-isomorphic. Since the refined colors by the 1-WL algorithm for these two graphs are the same, MPNNs will generate the same representation for them as well (see the colors generated by 1-WL and MPNNs in the second row). When using the overlap subgraph to encode the local substructure for edges, as shown in the third row, all edges will obtain the same color since their overlap subgraphs are the same. When applying the union minus subgraph for edge encoding, it detects different types of edges with various local substructures, but it is still not powerful enough to distinguish G 1 from G 2 . 
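Returning to the preprocessing shortcut mentioned in Section 5.4, the snippet below sketches the idea of pre-computing, for every node, all nodes within distance 3 via a depth-limited BFS. It is illustrative only; mapping these global distances back to distances inside each union subgraph still requires the "simple checking" step described there.

```python
# Sketch of the distance-<=3 precomputation (depth-limited BFS per node).
import networkx as nx

def neighbors_within_3_hops(G: nx.Graph) -> dict:
    out = {}
    for v in G.nodes():
        # BFS from v, truncated at depth 3 (all possible distances inside a union subgraph)
        out[v] = nx.single_source_shortest_path_length(G, v, cutoff=3)
    return out

G = nx.erdos_renyi_graph(100, 0.05, seed=0)
table = neighbors_within_3_hops(G)
print(len(table[0]), "nodes within 3 hops of node 0")
```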
Since the edges of the corresponding positions in the two graphs have the same color, the assigned colors of the nodes at the corresponding positions are also the same (see the fourth row). In contrast, when applying the union subgraph (the last row), edges in G 1 and G 2 clearly depict different local substructures, and thus the nodes and edges of the two graphs exhibit different colors. This example demonstrates that the union subgraph is a more powerful local substructure to distinguish non-isomorphic graph pairs when compared against 1-WL or other types of local substructures." }, { "figure_ref": [], "heading": "B Proofs of Theorems", "publication_ref": [], "table_ref": [], "text": "B.1 Proof of Theorem 1 Theorem 1. If S i ≃ union S j , then S i ≃ overlap S j ; but not vice versa. Proof. By the definition of union isomorphism, if S i ≃ union S j , then there exists a bijective mapping g : Ñ (i) → Ñ (j) such that g(i) = j and for any v ∈ N (i) and g(v) = u, S i∪v and S j∪u are isomorphic. Since g(i) = j and g(v) = u, according to the definition of graph isomorphism, v 1 ∈ Ñ (i) ∩ Ñ (v) if and only if g(v 1 ) ∈ Ñ (j) ∩ Ñ (u). Therefore, g(•) is a bijective mapping from\nÑ (i) ∩ Ñ (v) to Ñ (j) ∩ Ñ (u), and v 1 , v 2 ∈ Ñ (i) ∩ Ñ (v)\nare adjacent if and only if g(v 1 ), g(v 2 ) ∈ Ñ (j) ∩ Ñ (u) are adjacent. This proves that S i∩v and S j∩u are isomorphic, and thus S i ≃ overlap S j . On the contrary, it is possible that S i ≃ overlap S j but S i ̸ ≃ union S j , as shown by graphs G 1 and G 2 in Figure 2. □" }, { "figure_ref": [], "heading": "B.2 Proof of Theorem 2", "publication_ref": [], "table_ref": [], "text": "Theorem 2. With a fixed order of nodes in the path matrix, we can obtain a unique path matrix P vu for a given union subgraph S v∪u , and vice versa.\nProof. We prove the theorem in two steps.\nStep 1: We prove that we get a unique P vu from a given S v∪u . With the node order fixed, the rows and the columns of P vu are fixed. Since P vu stores the lengths of all-pair shortest paths, the matrix is unique given an input union subgraph.\nStep 2: We prove that we can recover a unique S v∪u from a given P vu . The node set of S v∪u can be recovered from the row (or column) indices of P vu . The edge set of S v∪u can be recovered from the entries in P vu with the value of \"1\". Both the node set and the edge set can be uniquely constructed from P vu , and thus S v∪u is unique. □" }, { "figure_ref": [], "heading": "B.3 Proof of Theorem 3", "publication_ref": [ "b56", "b56" ], "table_ref": [], "text": "The proof of Theorem 3 follows a similar flow to that of Theorem 4 in (Wijesinghe and Wang 2021).\nWe first define the concept of subtree isomorphism. Let h v denote the node feature of a node v ∈ V .\nSubtree Isomorphism. S i and S j are subtreeisomorphic, denoted as S i ≃ subtree S j , if there exists a bijective mapping g: Ñ (i) → Ñ (j) such that g(i) = j, and for any v ∈ N (i) and g(v) = u, h v = h u .\nWe assume that H, A and W are three countable sets that H is the node feature space, A is the structural coefficient space, and W = {a ij h i |a ij ∈ A, h i ∈ H}. Suppose H and W are two multisets that contain elements from H and W, respectively, and |H| = |W |. In order to prove Theorem 3, we need to use Lemmas 1 and 2 in (Wijesinghe and Wang 2021). To be self-contained, we repeat the lemmas here and refer the readers to Appendix C of (Wijesinghe and Wang 2021) for the proofs.\nLemma 1. There exists a function f s.t. 
π(H, W ) = h∈H,w∈W f (h, w) is unique for any distinct pair of multisets (H, W ).\nLemma 2. There exists a function f s.\nt. π ′ (h v , H, W ) = γf (h v , |H|h v ) + h∈H,w∈W f (h, w) is unique for any dis- tinct (h v , H, W ), where h v ∈ H, |H|h v ∈ W ,\nand γ can be an irrational number.\nFrom the lemmas above, we can now prove Theorem 3: Theorem 3. UnionSNN is more expressive than 1-WL in testing non-isomorphic graphs.\nProof. By Theorem 3 in (Wijesinghe and Wang 2021), if a GNN M satisfies the two following conditions, then M is strictly more powerful than 1-WL in distinguishing nonisomorphic graphs. 1. M can distinguish at least one pair of neighborhood subgraphs S i and S j such that S i and S j are subtreeisomorphic, but they are not isomorphic, and {{ã iv |v ∈ N (i)}} ̸ = {{ã ju |u ∈ N (j)}}, where ãvu is the normalized value of a vu ; 2. The aggregation scheme Φ(h\n(t) v , {{h (t) u |u ∈ N (v)}}, {{ (ã uv , h (t) u )|u ∈ N (v)}}) is injective.\nFor condition 1, the pair of graphs in Figure 2 satisfies the condition, and can be distinguished by UnionSNN as they are not union-isomorphic.\nFor condition 2, by Lemmas 1 and 2 and the fact that the MLP Trans(•) is a universal approximator and can model and learn the required functions, we conclude that Union-SNN satisfies this condition.\nTherefore, UnionSNN is more expressive than 1-WL in testing non-isomorphic graphs. □" }, { "figure_ref": [], "heading": "C Definitions of Three Other Substructure", "publication_ref": [ "b59" ], "table_ref": [], "text": "Descriptor Functions\nEdge Betweenness. The betweenness centrality of an edge e ∈ E in a graph G = (V, E, X) is defined as the sum of the fraction of all-pair shortest paths that pass through e: \nc G (e) = v,u∈V σ(v, u, G|e) σ(v, u, G) ,(7)\nwhere σ(v, u, G) is the number of shortest paths between v and u, and σ(v, u, G|e) is the number of those paths passing through edge e.\nNode/Edge Counting. GraphSNN (Wijesinghe and Wang 2021) defines a structural coefficient based on the number of nodes and edges in the (sub)graph. When applied to the union subgraph, we have\nω(S v∪u ) = |E v∪u | |V v∪u | • |V v∪u -1| |V v∪u | λ ,(8)\nwhere λ = 1 for node classification and λ = 2 for graph classification.\nOllivier Ricci Curvature. Given a node v ∈ V in a graph G = (V, E, X), a probability vector of v is defined as:\nµ α v : u →    α, u = v 1-α dv , u ∈ N (v), 0, otherwise(9)\nwhere α ∈ [0, 1) is a hyperparameter. Following (Ye et al. 2019), we use α = 0.5 in all our experiments. The α-Ricci curvature κ α vu on an edge (v, u) ∈ E is defined by:\nκ α vu = 1 - Wass(µ α v , µ α u ) d(v, u) ,(10)\nwhere Wass(•) denotes the Wasserstein distance and d(•) denotes the shortest path length between v and u. The Wasserstein distance can be estimated by the optimal transportation distance, which can be solved by the following linear programming:\nmin M i∈ Ñ (v),j∈ Ñ (u) d (i, j) M (i, j) s.t. j∈ Ñ (u) M (i, j) = µ α v (i) , ∀i ∈ Ñ (v); i∈ Ñ (v) M (i, j) = µ α u (j) , ∀j ∈ Ñ (u).(11)" }, { "figure_ref": [], "heading": "D Algorithm of UnionSNN", "publication_ref": [], "table_ref": [], "text": "See Algorithm 1 in the following." }, { "figure_ref": [], "heading": "E Implementation Details", "publication_ref": [ "b21", "b51", "b24", "b16", "b62", "b41", "b54" ], "table_ref": [], "text": "TU Datasets. 
We following the setup of ( (2); A feature transformation function Trans, as in Eq.( 4).\nOutput: Node representations H (L) ∈ R N ×D that can be fed into a downstream prediction head for graph/node-level prediction.\n1: for Edge (v, u) ∈ E do 2:\nP vu ij = PathLen(i, j, S v∪u ), i, j ∈ V v∪u # Compute shortest path matrix for each edge Update node representations by Eq. (3) 9: end for 10: return H (L) OGB Datasets. For graph classification on OGB datasets, we use the scaffold splits, run 10 times with different random seeds, and report the average ROC-AUC due to the class imbalance issue. We fix the initialized learning rate to 0.001, weight decay to 0, and dropout to 0. We keep the total number of parameters of all GNNs to be less than 100K by adjusting the hidden dimensions. ZINC Datasets. For graph regression, we use the data split of ZINC10k and ZINC-full datasets from (Dwivedi et al. 2020), run 5 times with different random seeds, and report the average test MAE. The edge features have not been used. We fix the initialized learning rate to 0.001, weight decay to 0, and dropout to 0. By adjusting the hidden dimensions, we keep the total number of parameters of all GNNs to be less than 500K. For the GPS model, we choose GINE (Hu et al. 2019) as the MPNN layer, Graphormer as the GlobAttn layer and the random-walk structural encoding (RWSE) as the positional encoding. Cycle Detection Datasets. The datasets are proposed by (Vignac, Loukas, and Frossard 2020), in which graphs are labeled by containing 4/6 cycles or not. Each dataset contains 9000 graphs for training, 1000 graphs for validation and 10000 graphs for test. The settings of models are the same with OGB datasets. Node Classification Datasets. For semi-supervised node classification, we use the standard split in the original paper and report the average accuracy for 10 runs with different random seeds. We use a 0.001 learning rate with 0 dropout, and choose the best values of the weight decay from {0.005, 0.001, 0.0005} and hidden dimension from {64, 128, 256}.\nFor all experiments, the whole network is trained in an end-to-end manner using the Adam optimizer (Kingma and Ba 2014). We use the early stopping criterion, i.e., we stop the training once there is no further improvement on the validation loss during 25 epochs. The implementation of Graphormer and GPS is referred to their official code1 , which is built on top of PyG (Fey and Lenssen 2019) and GraphGym (You, Ying, and Leskovec 2020). The code of all the other methods was implemented using PyTorch (Paszke et al. 2017) and Deep Graph Library (Wang et al. 2019) packages. All experiments were conducted in a Linux server with Intel(R) Core(TM) i9-10940X CPU (3.30GHz), GeForce GTX 3090 GPU, and 125GB RAM." }, { "figure_ref": [], "heading": "F Time Complexity of Structure Descriptors", "publication_ref": [ "b7", "b59" ], "table_ref": [], "text": "Given a graph G = (V, E), where |V | = N and |E| = M , assume n and m are the average node number and edge number among all union subgraphs in the whole graph, respectively. The complexity of each substructure descriptor is analyzed as follows:\nBetweenness. The time complexity of calculating a single-source shortest path in an unweighted graph via breadth-first search (BFS) (Bundy and Wallen 1984) is O(m+n). It can be applied to compute the all-pairs shortest paths in a union subgraph with each node as the source, and the time complexity is O((m + n)n). For the whole graph, the complexity is O((m + n)nM ).\nCount ne. 
The complexity of counting the node number and edge number in a single union subgraph are n and m, respectively. For the whole graph, the complexity is O((m+ n)M ).\nCurvature. According to (Ye et al. 2019), the complexity of calculating the curvatures is O((d u • d v ) ω ), where d u and d v are the degrees of the nodes u and v. Here we approximate both terms by N , then the complexity of calculating curvatures is O(N 2ω ), in which ω is the exponent of the complexity of matrix multiplication (the best known is 2.373).\nCount cycle. The complexity of counting k-tuple substructures, for example k-cycles, in the whole graph is O(N k ), where k ≥ 3 in general.\nOurs. The complexity of constructing a shortest path matrix and performing SVD for a single union subgraph are O((m + n)n) and O(n 3 ), respectively. So the total complexity is O(n 3 M ).\nGenerally speaking, M > N ≫ m > n. Thus, the complexity of our structure descriptor is slightly higher than that of Count ne and Betweenness, and significantly lower than that of Curvature and Count cycle." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This research/project is supported by the National Research Foundation, Singapore under its Industry Alignment Fund -Pre-positioning (IAF-PP) Funding Initiative, and the Ministry of Education, Singapore under its MOE Academic Research Fund Tier 2 (STEM RIE2025 Award MOE-T2EP20220-0006). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore, and the Ministry of Education, Singapore." } ]
2024-01-09
[ { "authors": "R Abboud; R Dimitrov; İ İ Ceylan", "journal": "", "ref_id": "b0", "title": "Shortest Path Networks for Graph Property Prediction", "year": "2022" }, { "authors": "D Ahmedt-Aristizabal; M A Armin; S Denman; C Fookes; L Petersson", "journal": "Sensors", "ref_id": "b1", "title": "Graph-based deep learning for medical diagnosis and analysis: past, present and future", "year": "2021" }, { "authors": "C Bodnar; F Frasca; Y Wang; N Otter; G F Montufar; P Lio; M Bronstein", "journal": "", "ref_id": "b2", "title": "Weisfeiler and lehman go topological: Message passing simplicial networks", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b3", "title": "", "year": "" }, { "authors": "G Bouritsas; F Frasca; S P Zafeiriou; M Bronstein", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b4", "title": "Improving graph neural network expressivity via subgraph isomorphism counting", "year": "2022" }, { "authors": "U Brandes", "journal": "Journal of mathematical sociology", "ref_id": "b5", "title": "A faster algorithm for betweenness centrality", "year": "2001" }, { "authors": "X Bresson; T Laurent", "journal": "", "ref_id": "b6", "title": "Residual gated graph convnets", "year": "2017" }, { "authors": "A Bundy; L Wallen", "journal": "Catalogue of artificial intelligence tools", "ref_id": "b7", "title": "Breadth-first search", "year": "1984" }, { "authors": "H Dai; B Dai; L Song", "journal": "", "ref_id": "b8", "title": "Discriminative embeddings of latent variable models for structured data", "year": "2016" }, { "authors": " Pmlr", "journal": "", "ref_id": "b9", "title": "", "year": "" }, { "authors": "A Derrow-Pinion; J She; D Wong; O Lange; T Hester; L Perez; M Nunkesser; S Lee; X Guo; B Wiltshire", "journal": "", "ref_id": "b10", "title": "Eta prediction with graph neural networks in google maps", "year": "2021" }, { "authors": "D K Duvenaud; D Maclaurin; J Iparraguirre; R Bombarell; T Hirzel; A Aspuru-Guzik; R P Adams", "journal": "Advances in neural information processing systems", "ref_id": "b11", "title": "Convolutional networks on graphs for learning molecular fingerprints", "year": "2015" }, { "authors": "V P Dwivedi; X Bresson", "journal": "", "ref_id": "b12", "title": "A generalization of transformer networks to graphs", "year": "2020" }, { "authors": "V P Dwivedi; C K Joshi; A T Luu; T Laurent; Y Bengio; X Bresson", "journal": "", "ref_id": "b13", "title": "Benchmarking Graph Neural Networks", "year": "2020" }, { "authors": "V P Dwivedi; A T Luu; T Laurent; Y Bengio; X Bresson", "journal": "", "ref_id": "b14", "title": "Graph neural networks with learnable structural and positional representations", "year": "2021" }, { "authors": "W Fan; Y Ma; Q Li; Y He; E Zhao; J Tang; D Yin", "journal": "", "ref_id": "b15", "title": "Graph neural networks for social recommendation", "year": "2019" }, { "authors": "M Fey; J E Lenssen", "journal": "", "ref_id": "b16", "title": "Fast graph representation learning with PyTorch Geometric", "year": "2019" }, { "authors": "W Hamilton; Z Ying; J Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b17", "title": "Inductive representation learning on large graphs", "year": "2017" }, { "authors": "M Horn; E De Brouwer; M Moor; Y Moreau; B Rieck; K Borgwardt", "journal": "", "ref_id": "b18", "title": "Topological graph neural networks", "year": "2021" }, { "authors": "R A Horn; C R Johnson", "journal": "Cambridge university press", "ref_id": "b19", "title": "Matrix 
analysis", "year": "2012" }, { "authors": "W Hu; M Fey; M Zitnik; Y Dong; H Ren; B Liu; M Catasta; J Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b20", "title": "Open graph benchmark: Datasets for machine learning on graphs", "year": "2020" }, { "authors": "W Hu; B Liu; J Gomes; M Zitnik; P Liang; V Pande; J Leskovec", "journal": "", "ref_id": "b21", "title": "Strategies for pre-training graph neural networks", "year": "2019" }, { "authors": "Y Huang; X Peng; J Ma; M Zhang", "journal": "", "ref_id": "b22", "title": "Boosting the Cycle Counting Power of Graph Neural Networks with I 2 -GNNs", "year": "2022" }, { "authors": "K Kersting; N M Kriege; C Morris; P Mutzel; M Neumann", "journal": "", "ref_id": "b23", "title": "Benchmark Data Sets for Graph Kernels", "year": "2016" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b24", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "T N Kipf; M Welling", "journal": "", "ref_id": "b25", "title": "Semi-supervised classification with graph convolutional networks", "year": "2016" }, { "authors": "D Kreuzer; D Beaini; W Hamilton; V Létourneau; P Tossou", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b26", "title": "Rethinking graph transformers with spectral attention", "year": "2021" }, { "authors": "P Li; Y Wang; H Wang; J Leskovec", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Distance encoding: Design provably more powerful neural networks for graph representation learning", "year": "2020" }, { "authors": "D Lim; J Robinson; L Zhao; T Smidt; S Sra; H Maron; S Jegelka", "journal": "", "ref_id": "b28", "title": "Sign and basis invariant networks for spectral graph representation learning", "year": "2022" }, { "authors": "Y Lin; L Lu; S.-T Yau", "journal": "Tohoku Mathematical Journal, Second Series", "ref_id": "b29", "title": "Ricci curvature of graphs", "year": "2011" }, { "authors": "Z Liu; C Chen; L Li; J Zhou; X Li; L Song; Y Qi", "journal": "", "ref_id": "b30", "title": "Geniepath: Graph neural networks with adaptive receptive paths", "year": "2019" }, { "authors": "H Maron; H Ben-Hamu; H Serviansky; Y Lipman", "journal": "Advances in neural information processing systems", "ref_id": "b31", "title": "Provably powerful graph networks", "year": "2019" }, { "authors": "D Masters; J Dean; K Klaser; Z Li; S Maddrell-Mander; A Sanders; H Helal; D Beker; L Rampášek; D Beaini", "journal": "", "ref_id": "b32", "title": "GPS++: An Optimised Hybrid MPN-N/Transformer for Molecular Property Prediction", "year": "2022" }, { "authors": "J Mcauley; C Targett; Q Shi; Van Den; A Hengel", "journal": "", "ref_id": "b33", "title": "Image-based recommendations on styles and substitutes", "year": "2015" }, { "authors": "G Mialon; D Chen; M Selosse; J Mairal", "journal": "", "ref_id": "b34", "title": "Graphit: Encoding graph structure in transformers", "year": "2021" }, { "authors": "C Morris; M Ritzert; M Fey; W L Hamilton; J E Lenssen; G Rattan; M Grohe", "journal": "", "ref_id": "b35", "title": "Weisfeiler and leman go neural: Higher-order graph neural networks", "year": "2019" }, { "authors": "D Q Nguyen; T D Nguyen; D Phung", "journal": "", "ref_id": "b36", "title": "Universal Graph Transformer Self-Attention Networks", "year": "2019" }, { "authors": "A Nouranizadeh; M Matinkia; M Rahmati; R Safabakhsh", "journal": "", "ref_id": "b37", "title": "Maximum Entropy Weighted Independent Set Pooling for 
Graph Neural Networks", "year": "2021" }, { "authors": "Y Ollivier", "journal": "Journal of Functional Analysis", "ref_id": "b38", "title": "Ricci curvature of Markov chains on metric spaces", "year": "2009" }, { "authors": "P A Papp; K Martinkus; L Faber; R Wattenhofer", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b39", "title": "Dropgnn: random dropouts increase the expressiveness of graph neural networks", "year": "2021" }, { "authors": "P A Papp; R Wattenhofer", "journal": "", "ref_id": "b40", "title": "A Theoretical Comparison of Graph Neural Network Extensions", "year": "2022" }, { "authors": "A Paszke; S Gross; S Chintala; G Chanan; E Yang; Z Devito; Z Lin; A Desmaison; L Antiga; A Lerer; H Peng; H Wang; B Du; M Z A Bhuiyan; H Ma; J Liu; L Wang; Z Yang; L Du; S Wang", "journal": "Information Sciences", "ref_id": "b41", "title": "Spatial temporal incidence dynamic graph neural networks for traffic flow forecasting", "year": "2017" }, { "authors": "L Rampášek; M Galkin; V P Dwivedi; A T Luu; G Wolf; D Beaini", "journal": "", "ref_id": "b42", "title": "Recipe for a General, Powerful, Scalable Graph Transformer", "year": "2022" }, { "authors": "P Sen; G Namata; M Bilgic; L Getoor; B Galligher; T Eliassi-Rad", "journal": "AI magazine", "ref_id": "b43", "title": "Collective classification in network data", "year": "2008" }, { "authors": "Y Shi; Z Huang; S Feng; H Zhong; W Wang; Y Sun", "journal": "", "ref_id": "b44", "title": "Masked label prediction: Unified message passing model for semi-supervised classification", "year": "2020" }, { "authors": "W Shiao; Z Guo; T Zhao; E E Papalexakis; Y Liu; N Shah", "journal": "", "ref_id": "b45", "title": "Link Prediction with Non-Contrastive Learning", "year": "2022" }, { "authors": "S Suresh; M Shrivastava; A Mukherjee; J Neville; P Li", "journal": "", "ref_id": "b46", "title": "Expressive and Efficient Representation Learning for Ranking Links in Temporal Graphs", "year": "2023" }, { "authors": "H Tang; Z Huang; J Gu; B.-L Lu; H Su", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b47", "title": "Towards scale-invariant graph-related problem solving by iterative homogeneous gnns", "year": "2020" }, { "authors": "E Thiede; W Zhou; R Kondor", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b48", "title": "Autobahn: Automorphism-based graph neural nets", "year": "2021" }, { "authors": "J Topping; F Di Giovanni; B P Chamberlain; X Dong; M M Bronstein", "journal": "", "ref_id": "b49", "title": "Understanding over-squashing and bottlenecks on graphs via curvature", "year": "2021" }, { "authors": "P Veličković; G Cucurull; A Casanova; A Romero; P Lio; Y Bengio", "journal": "", "ref_id": "b50", "title": "Graph attention networks", "year": "2017" }, { "authors": "C Vignac; A Loukas; P Frossard", "journal": "Advances in neural information processing systems", "ref_id": "b51", "title": "Building powerful and equivariant graph neural networks with structural message-passing", "year": "2020" }, { "authors": "C Wan; M Zhang; W Hao; S Cao; P Li; C Zhang", "journal": "", "ref_id": "b52", "title": "Principled hyperedge prediction with structural spectral features and neural networks", "year": "2021" }, { "authors": "J Wang; A Ma; Y Chang; J Gong; Y Jiang; R Qi; C Wang; H Fu; Q Ma; D Xu", "journal": "Nature communications", "ref_id": "b53", "title": "scGNN is a novel graph neural network framework for single-cell RNA-Seq analyses", "year": "2021" }, { "authors": "M Wang; L Yu; D 
Zheng; Q Gan; Y Gai; Z Ye; M Li; J Zhou; Q Huang; C Ma", "journal": "", "ref_id": "b54", "title": "Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs", "year": "2019" }, { "authors": "B Weisfeiler; A Leman", "journal": "NTI, Series", "ref_id": "b55", "title": "The reduction of a graph to canonical form and the algebra which appears therein", "year": "1968" }, { "authors": "A Wijesinghe; Q Wang", "journal": "", "ref_id": "b56", "title": "A New Perspective on\" How Graph Neural Networks Go Beyond Weisfeiler-Lehman?", "year": "2021" }, { "authors": "Z Wu; P Jain; M Wright; A Mirhoseini; J E Gonzalez; I Stoica", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b57", "title": "Representing long-range context for graph neural networks with global attention", "year": "2021" }, { "authors": "K Xu; W Hu; J Leskovec; S Jegelka", "journal": "", "ref_id": "b58", "title": "How powerful are graph neural networks?", "year": "2018" }, { "authors": "Z Ye; K S Liu; T Ma; J Gao; C Chen", "journal": "", "ref_id": "b59", "title": "Curvature graph network", "year": "2019" }, { "authors": "C Ying; T Cai; S Luo; S Zheng; G Ke; D He; Y Shen; T.-Y Liu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b60", "title": "Do transformers really perform badly for graph representation", "year": "2021" }, { "authors": "Z Ying; J You; C Morris; X Ren; W Hamilton; J Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b61", "title": "Hierarchical graph representation learning with differentiable pooling", "year": "2018" }, { "authors": "J You; Z Ying; J Leskovec", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b62", "title": "Design space for graph neural networks", "year": "2020" }, { "authors": "B Zhang; S Luo; L Wang; D He", "journal": "", "ref_id": "b63", "title": "Rethinking the expressive power of gnns via graph biconnectivity", "year": "2023" }, { "authors": "M Zhang; Y Chen", "journal": "Advances in neural information processing systems", "ref_id": "b64", "title": "Link prediction based on graph neural networks", "year": "2018" }, { "authors": "M Zhang; P Li", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b65", "title": "Nested graph neural networks", "year": "2021" }, { "authors": "L Zhao; W Jin; L Akoglu; N Shah", "journal": "", "ref_id": "b66", "title": "From stars to subgraphs: Uplifting any GNN with local structure awareness", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 319.5, 486.39, 238.5, 30.66 ], "formula_id": "formula_0", "formula_text": "X = {x v | v ∈ V } is the set of node features. The set of neighbors of node v is denoted by N (v) = {u ∈ V | (v, u) ∈ E}." }, { "formula_coordinates": [ 2, 338.21, 536.61, 219.79, 9.85 ], "formula_id": "formula_1", "formula_text": "h (l) v = AGG (l-1) (h (l-1) v , MSG (l-1) ({h (l-1) u , u ∈ N (v)})),(1)" }, { "formula_coordinates": [ 2, 353.37, 554.42, 8.79, 6.12 ], "formula_id": "formula_2", "formula_text": "(l)" }, { "formula_coordinates": [ 2, 325.87, 567.98, 72.63, 12.79 ], "formula_id": "formula_3", "formula_text": "(0) v = x v , AGG(•)" }, { "formula_coordinates": [ 3, 183.54, 60.67, 261, 81.98 ], "formula_id": "formula_4", "formula_text": "(a) ! ! \"# (a, b) ! $ \"# (a, d) ! % \"# (c, d) ! & \"# (d, f) (b) u v a b d c f ! ! ∩ ! \" ! ! ∪ ! \" ! !∪\"" }, { "formula_coordinates": [ 3, 54, 290.53, 238.5, 61.74 ], "formula_id": "formula_5", "formula_text": "Ñ (v) = N (v) ∪ {v}. The induced subgraph of Ñ (v) is denoted by S v , which defines the closed neighborhood of v. The common closed neighbor set of v and u is N vu = Ñ (v) ∩ Ñ (u) and the exclusive neighbor set of v w.r.t u is defined as N -u v = Ñ (v) -N vu ." }, { "formula_coordinates": [ 3, 58.48, 400.76, 172.26, 12.2 ], "formula_id": "formula_6", "formula_text": "• E vu 2 ∈ (N vu × N -u v ) ∪ (N vu × N -v u ):" }, { "formula_coordinates": [ 3, 58.48, 435.67, 90.86, 12.2 ], "formula_id": "formula_7", "formula_text": "• E vu 3 ∈ N -u v ×N -v u :" }, { "formula_coordinates": [ 3, 58.48, 459.63, 169.99, 12.2 ], "formula_id": "formula_8", "formula_text": "• E vu 4 ∈ (N -u v × N -u v ) ∪ (N -v u × N -v u )" }, { "formula_coordinates": [ 4, 54, 314.1, 238.5, 19.92 ], "formula_id": "formula_9", "formula_text": "Let U = {S v∪u |(v, u) ∈ E} be the set of union sub- graphs in G." }, { "formula_coordinates": [ 4, 54, 368.9, 238.5, 20.61 ], "formula_id": "formula_10", "formula_text": "S v∪u = (V v∪u , E v∪u ) and S v∪u ′ = (V v∪u ′ , E v∪u ′ ), we want f (S v∪u ) = f (S v∪u ′ ) iff S v∪u" }, { "formula_coordinates": [ 4, 58.48, 416.87, 234.02, 68.88 ], "formula_id": "formula_11", "formula_text": "• Size Awareness. f (S v∪u ) ̸ = f (S v∪u ′ ) if |V v∪u | ̸ = |V v∪u ′ | or |E v∪u | ̸ = |E v∪u ′ |; • Connectivity Awareness. f (S v∪u ) ̸ = f (S v∪u ′ ) if |V v∪u | = |V v∪u ′ | and |E v∪u | = |E v∪u ′ | but S v∪u and S v∪u ′ are not isomorphic; • Isomorphic Invariance. f (S v∪u ) = f (S v∪u ′ ) if S v∪u" }, { "formula_coordinates": [ 4, 96.27, 538.99, 196.23, 11.13 ], "formula_id": "formula_12", "formula_text": "P vu ij = PathLen(i, j, Sv∪u), i, j ∈ Vv∪u,(2)" }, { "formula_coordinates": [ 4, 358, 580.79, 200, 40.56 ], "formula_id": "formula_13", "formula_text": "h (l) v = MLP 1 (l-1) ((1 + ϵ (l-1) )h (l-1) v + u∈N (v) Trans (l-1) (ã vu )h (l-1) u ),(3)" }, { "formula_coordinates": [ 5, 110.56, 175.95, 181.94, 8.06 ], "formula_id": "formula_14", "formula_text": "Trans(a) = softmax(MLP2(a)),(4)" }, { "formula_coordinates": [ 5, 94.32, 309.2, 157.87, 29.77 ], "formula_id": "formula_15", "formula_text": "h (l) v = AGG (l-1) (h (l-1) v , MSG (l-1) ( {Trans (l-1) (ã vu )h (l-1) u , u ∈ N (v)}))." 
}, { "formula_coordinates": [ 5, 93.04, 390.53, 199.46, 23.38 ], "formula_id": "formula_16", "formula_text": "Avu = (hvWQ) (huWK ) T √ d + Trans(ã vu ),(6)" }, { "formula_coordinates": [ 5, 397.09, 213.67, 209.9, 30.8 ], "formula_id": "formula_17", "formula_text": "% % & % % & ≄" }, { "formula_coordinates": [ 13, 57.16, 469.96, 235.34, 12.17 ], "formula_id": "formula_18", "formula_text": "Ñ (i) ∩ Ñ (v) to Ñ (j) ∩ Ñ (u), and v 1 , v 2 ∈ Ñ (i) ∩ Ñ (v)" }, { "formula_coordinates": [ 13, 319.5, 322.81, 238.5, 35 ], "formula_id": "formula_19", "formula_text": "t. π ′ (h v , H, W ) = γf (h v , |H|h v ) + h∈H,w∈W f (h, w) is unique for any dis- tinct (h v , H, W ), where h v ∈ H, |H|h v ∈ W ," }, { "formula_coordinates": [ 13, 332.45, 505.76, 225.55, 26.35 ], "formula_id": "formula_20", "formula_text": "(t) v , {{h (t) u |u ∈ N (v)}}, {{ (ã uv , h (t) u )|u ∈ N (v)}}) is injective." }, { "formula_coordinates": [ 14, 115.56, 388.23, 176.94, 26.8 ], "formula_id": "formula_21", "formula_text": "c G (e) = v,u∈V σ(v, u, G|e) σ(v, u, G) ,(7)" }, { "formula_coordinates": [ 14, 92.39, 511.98, 200.11, 23.23 ], "formula_id": "formula_22", "formula_text": "ω(S v∪u ) = |E v∪u | |V v∪u | • |V v∪u -1| |V v∪u | λ ,(8)" }, { "formula_coordinates": [ 14, 109.94, 596.3, 182.56, 34.58 ], "formula_id": "formula_23", "formula_text": "µ α v : u →    α, u = v 1-α dv , u ∈ N (v), 0, otherwise(9)" }, { "formula_coordinates": [ 14, 118.97, 682.54, 173.53, 23.89 ], "formula_id": "formula_24", "formula_text": "κ α vu = 1 - Wass(µ α v , µ α u ) d(v, u) ,(10)" }, { "formula_coordinates": [ 14, 355.95, 453.09, 202.05, 83.31 ], "formula_id": "formula_25", "formula_text": "min M i∈ Ñ (v),j∈ Ñ (u) d (i, j) M (i, j) s.t. j∈ Ñ (u) M (i, j) = µ α v (i) , ∀i ∈ Ñ (v); i∈ Ñ (v) M (i, j) = µ α u (j) , ∀j ∈ Ñ (u).(11)" } ]
Union Subgraph Neural Networks
Graph Neural Networks (GNNs) are widely used for graph representation learning in many application domains. The expressiveness of vanilla GNNs is upper-bounded by the 1-dimensional Weisfeiler-Leman (1-WL) test, as they operate on rooted subtrees through iterative message passing. In this paper, we empower GNNs by injecting neighbor-connectivity information extracted from a new type of substructure. We first investigate different kinds of connectivities existing in a local neighborhood and identify a substructure called union subgraph, which is able to capture the complete picture of the 1-hop neighborhood of an edge. We then design a shortest-path-based substructure descriptor that possesses three nice properties and can effectively encode the high-order connectivities in union subgraphs. By infusing the encoded neighbor connectivities, we propose a novel model, namely Union Subgraph Neural Network (UnionSNN), which is proven to be strictly more powerful than 1-WL in distinguishing non-isomorphic graphs. Additionally, the local encoding from union subgraphs can also be injected into arbitrary message-passing neural networks (MPNNs) and Transformer-based models as a plugin. Extensive experiments on 18 benchmarks of both graph-level and node-level tasks demonstrate that UnionSNN outperforms state-of-the-art baseline models, with competitive computational efficiency. The injection of our local encoding into existing models is able to boost their performance by up to 11.09%. Our code is available at https://github.com/AngusMonroe/UnionSNN.
Jiaxing Xu; Aihu Zhang; Qingtian Bian; Vijay Prakash Dwivedi; Yiping Ke
[ { "figure_caption": "Figure 1 :1Figure 1: (a) A pair of non-isomorphic graphs not distinguishable by 1-WL; (b) An example of various local substructures for two adjacent nodes v and u.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Three properties that a good substructure descriptor function f (•) should exhibit.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Example of two non-isomorphic subgraphs with the same structural coefficient in GraphSNN Time and Space Complexities. Given that the path matrices have been precomputed, our time and space complexities for model training are in line with those of GIN and Graph-SNN. Denote m as the number of edges in a graph, f and d as the dimensions of input and output feature vectors, and k as the number of layers. Time and space complexities of UnionSNN are O(kmf d) and O(m), respectively.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: These two graphs can be distinguished by Union-SNN but not by 3-WL.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Structural coefficient analysis.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: An example graph pair and their color refinements by using different local substructures.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Statistics of Graph Classification/Regression Datasets.", "figure_data": "DatasetGraph # Class # Avg Node # Avg Edge #MetricMUTAG188217.9319.79AccuracyPROTEINS1113239.0672.82AccuracyENZYMES600632.6362.14AccuracyDD11782284.32715.66AccuracyFRANKENSTEIN4337216.9017.88AccuracyTox218169218.0918.50AccuracyNCI14110229.8732.30AccuracyNCI1094127229.6832.13AccuracyOGBG-MOLHIV41127225.5027.50ROC-AUCOGBG-MOLBBBP2039224.0625.95ROC-AUCZINC10k12000-23.1649.83MAEZINC-full249456-23.1649.83MAE4CYCLES20000236.0061.71Accuracy6CYCLES20000256.0087.84AccuracyNode # Edge # Class # Feature # Training #Cora2708542971433140Citeseer3327473263703120PubMed1971744338350060Computer 13381 2591591076671338Photo74871265308745749", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Statistics of Node Classification Datasets.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "; (4) state-of-the-art graph pooling methods such as MEWISPool(Nouranizadeh et al. 2021); (5) methods that introduce structural information by shortest paths or curvature, such as GeniePath(Liu et al. 2019), CurvGN(Ye et al. 
2019), and NestedGIN(Zhang and ± 10.49 74.34 ± 2.09 67.67 ± 3.74 74.25 ± 3.76 62.85 ± 1.90 90.35 ± 0.71 78.07 ± 1.94 74.34 ± 2.18 3WL-GNN 84.06 ± 6.62 60.18 ± 6.35 54.17 ± 6.25 74.84 ± 2.63 58.68 ± 1.93 90.31 ± 1.33 78.39 ± 1.54 77.97 ± 2.22 UGformer 75.66 ± 8.67 70.17 ± 5.42 64.57 ± 4.53 75.51 ± 3.52 56.13 ± 2.51 88.06 ± 0.50 68.84 ± 1.54 66.37 ± 2.74 MEWISPool 84.73 ± 4.73 68.10 ± 3.97 53.66 ± 6.07 76.03 ± 2.59 64.63 ± 2.83 88.13 ± 0.05 74.21 ± 3.26 75.30 ± 1.45 CurvGN 87.25 ± 6.28 75.73 ± 2.87 56.50 ± 7.13 72.16 ± 1.88 61.89 ± 2.41 90.87 ± 0.38 79.32 ± 1.65 77.30 ± 1.78 NestedGIN 86.23 ± 8.82 68.55 ± 3.22 54.67 ± 9.99 70.04 ± 4.32 67.07 ± 1.46 91.42 ± 1.18 82.04 ± 2.23 79.94 ± 1.59 DropGIN 84.09 ± 8.42 73.39 ± 4.90 67.50 ± 6.42 71.05 ± 2.68 66.91 ± 2.16 91.66 ± 0.92 82.19 ± 1.39 81.13 ± 0.43", "figure_data": "MUTAG PROTEINS ENZYMESDDFRANKTox21NCI1NCI109GAT 77.56 GatedGCN-LSPE 88.33 ± 3.88 73.94 ± 2.72 64.50 ± 5.92 76.74 ± 2.04 67.74 ± 2.65 91.71 ± 0.71 80.75 ± 1.67 80.13 ± 2.33GraphSNN84.04 ± 4.09 71.78 ± 4.11 67.67 ± 3.74 76.03 ± 2.59 67.17 ± 2.25 92.24 ± 0.59 70.87 ± 2.78 70.11 ± 1.86GCN77.13 ± 5.24 73.89 ± 2.85 64.33 ± 5.83 72.16 ± 2.83 58.80 ± 1.06 90.10 ± 0.77 79.73 ± 0.95 75.91 ± 1.53UnionGCN (ours)81.87 ± 3.81 75.02 ± 2.50 64.67 ± 7.14 69.69 ± 4.18 61.72 ± 1.76 91.63 ± 0.72 80.41 ± 1.94 79.50 ± 1.82GatedGCN77.11 ± 10.05 76.18 ± 3.12 66.83 ± 5.08 72.58 ± 3.04 61.40 ± 1.92 90.83 ± 0.96 80.32 ± 2.07 78.19 ± 2.39UnionGatedGCN(ours) 77.14 ± 8.14 76.91 ± 3.06 67.83 ± 6.87 72.50 ± 2.22 61.47 ± 2.54 91.31 ± 0.89 80.95 ± 2.11 78.21 ± 2.58GraphSAGE80.38 ± 10.98 74.87 ± 3.38 52.50 ± 5.69 73.10 ± 4.34 52.95 ± 4.01 88.36 ± 0.15 63.94 ± 2.52 65.46 ± 1.12UnionGraphSAGE(ours) 83.04 ± 8.70 74.57 ± 2.35 58.32 ± 2.64 73.85 ± 4.46 56.75 ± 3.85 88.59 ± 0.12 69.36 ± 1.64 69.87 ± 1.04GIN86.23 ± 8.17 72.86 ± 4.14 65.83 ± 5.93 70.29 ± 2.96 66.50 ± 2.37 91.74 ± 0.95 82.29 ± 1.77 80.95 ± 1.87UnionGIN (ours)88.86 ± 4.33 73.22 ± 3.90 67.83 ± 6.10 70.47 ± 4.98 68.02 ± 1.47 91.74 ± 0.74 82.29 ± 1.98 82.24 ± 1.24UnionSNN (ours)87.31 ± 5.29 75.02 ± 2.50 68.17 ± 5.70 77.00 ± 2.37 67.83 ± 1.99 91.76 ± 0.85 82.34 ± 1.93 81.61 ± 1.78", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Graph classification results (average accuracy ± standard deviation) over 10-fold-CV. The first and second best results on each dataset are highlighted in bold and underlined. The winner between a base model with and without our structural coefficient injected is highlighted in gray background . The same applies to all tables.", "figure_data": "Li 2021); (6) GNN with subgraph aggregation method, suchas DropGIN (Papp et al. 2021); (7) GNNs with positionalencoding, such as GatedGCN-LSPE (Dwivedi et al. 2021);(8) GraphSNN (Wijesinghe and Wang 2021).modelOGBG-MOLHIV OGBG-MOLBBBPGraphSAGE70.37 ± 0.4260.78 ± 2.43GCN73.49 ± 1.9964.04 ± 0.43GIN70.60 ± 2.5664.10 ± 1.05GAT70.60 ± 1.7863.30 ± 0.53CurvGN73.17 ± 0.8966.51 ± 0.80GraphSNN73.05 ± 0.4062.84 ± 0.36UnionSNN (ours)74.44 ± 1.2168.28 ± 1.47", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Table6shows the superiority of UnionSNN over 5 baseline models for detecting cycles. 
Node classification results (average accuracy ± standard deviation) over 10 runs.", "figure_data": "CoraCiteseerPubMedComputerPhotoGraphSAGE70.60 ± 0.64 55.02 ± 3.40 70.36 ± 4.29 80.30 ± 1.30 89.16 ± 1.03GAT74.82 ± 1.95 63.82 ± 2.81 74.02 ± 1.11 85.94 ± 2.35 91.86 ± 0.47GeniePath72.16 ± 2.69 57.40 ± 2.16 70.96 ± 2.06 82.68 ± 0.45 89.98 ± 1.14CurvGN74.06 ± 1.54 62.08 ± 0.85 74.54 ± 1.61 86.30 ± 0.70 92.50 ± 0.50GCN72.56 ± 4.41 58.30 ± 6.32 74.44 ± 0.71 84.58 ± 3.02 91.71 ± 0.55UnionGCN (ours)74.48 ± 0.42 59.02 ± 3.64 74.82 ± 1.10 88.84 ± 0.27 92.33 ± 0.53GIN75.86 ± 1.09 63.10 ± 2.24 76.62 ± 0.64 86.26 ± 0.56 92.11 ± 0.32UnionGIN (ours)75.90 ± 0.80 63.66 ± 1.75 76.78 ± 1.02 86.81 ± 2.12 92.28 ± 0.19GraphSNN75.44 ± 0.73 64.68 ± 2.72 76.76 ± 0.54 84.11 ± 0.57 90.82 ± 0.30UnionGraphSNN (ours) 75.58 ± 0.49 65.22 ± 1.12 76.92 ± 0.56 84.58 ± 0.46 90.60 ± 0.58UnionSNN (ours)76.86 ± 1.58 65.02 ± 1.02 77.06 ± 1.07 87.76 ± 0.36 92.92 ± 0.38MUTAGPROTEINSENZYMESDDNCI1NCI109overlap 85.70 ± 7.40 71.33 ± 5.35 65.00 ± 5.63 73.43 ± 4.07 73.58 ± 1.73 72.96 ± 2.01minus87.31 ± 5.29 68.70 ± 3.61 65.33 ± 4.58 74.79 ± 4.63 80.66 ± 1.90 78.70 ± 2.48union87.31 ± 5.29 75.02 ± 2.50 68.17 ± 5.70 77.00 ± 2.37 82.34 ± 1.93 81.61 ± 1.78", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Ablation study on local substructure. .29 75.02 ± 2.50 68.17 ± 5.70 77.00 ± 2.37 82.34 ± 1.93 81.61 ± 1.78", "figure_data": "MUTAGPROTEINSENZYMESDDNCI1NCI109BetSNN80.94 ± 6.60 69.44 ± 6.15 65.00 ± 5.63 70.20 ± 5.15 74.91 ± 2.48 73.70 ± 1.87CountSNN 84.65 ± 6.76 70.79 ± 5.07 66.50 ± 6.77 74.36 ± 7.21 81.74 ± 2.35 79.80 ± 1.67CurvSNN85.15 ± 7.35 72.77 ± 4.42 67.17 ± 6.54 75.88 ± 3.24 81.34 ± 2.27 80.64 ± 1.85LapSNN89.39 ± 5.24 68.32 ± 3.49 66.17 ± 4.15 76.31 ± 2.85 81.39 ± 2.08 81.34 ± 2.93UnionSNN 87.31 ± 5", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Ablation study on substructure descriptor.", "figure_data": "MUTAGPROTEINSENZYMESDDNCI1NCI109matrix sum 88.89 ± 7.19 71.32 ± 5.48 65.17 ± 6.43 70.71 ± 4.07 80.37 ± 2.73 79.84 ± 1.89eigen max86.73 ± 5.84 71.78 ± 3.24 67.67 ± 6.88 74.29 ± 3.26 81.37 ± 2.08 79.23 ± 2.01svd sum87.31 ± 5.29 75.02 ± 2.50 68.17 ± 5.70 77.00 ± 2.37 82.34 ± 1.93 81.61 ± 1.78", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Ablation study on path matrix encoding method.", "figure_data": "", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Time cost (hours) for a single run with 10-fold-CV, including training, validation, test (excluding prepro-cessing).", "figure_id": "tab_13", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Input: A graph G = (V, E, X) with |V | nodes and |E| edges; The UnionSNN update function as in Eq. (3); A shortest path substructure descriptor PathLen, as in Eq.", "figure_data": "Algorithm 1: Algorithm for an L layer UnionSNN.Dwivedi et al.2020), which splits each dataset into 8:1:1 for training, val-idation and test, respectively. The model evaluation and se-lection are done by collecting the accuracy from the singleepoch with the best 10-fold cross-validation averaged ac-curacy. We choose the best values of the initialized learn-ing rate from {0.02, 0.01, 0.005, 0.001}, weight decay from{0.005, 0.001, 0.0005}, hidden dimension from {64, 128,256} and dropout from {0, 0.2, 0.4}.", "figure_id": "tab_14", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Duvenaud et al. 2015)", "Explanation": "The cited work by Duvenaud et al. provides foundational methods and techniques for applying GNNs in quantum chemistry applications, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Dai, Dai, and Song 2016)", "Explanation": "The cited work by Dai, Dai, and Song contributes to the field of GNNs by providing methods and techniques for applying GNNs in quantum chemistry applications, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Masters et al. 2022)", "Explanation": "The cited work by Masters et al. contributes to the field of GNNs by providing methods and techniques for applying GNNs in quantum chemistry applications, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Ying et al. 2018)", "Explanation": "The cited work by Ying et al. provides methods and techniques for applying GNNs in social science applications, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Fan et al. 2019)", "Explanation": "The cited work by Fan et al. contributes to the field of GNNs by providing methods and techniques for applying GNNs in social science applications, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Shiao et al. 2022)", "Explanation": "The cited work by Shiao et al. provides methods and techniques for applying GNNs in social science applications, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Peng et al. 2020)", "Explanation": "The cited work by Peng et al. provides methods and techniques for applying GNNs in transportation applications, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Derrow-Pinion et al. 2021)", "Explanation": "The cited work by Derrow-Pinion et al. contributes to the field of GNNs by providing methods and techniques for applying GNNs in transportation applications, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Ahmedt-Aristizabal et al. 2021)", "Explanation": "The cited work by Ahmedt-Aristizabal et al. provides methods and techniques for applying GNNs in neuroscience applications, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Wang et al. 2021)", "Explanation": "The cited work by Wang et al. contributes to the field of GNNs by providing methods and techniques for applying GNNs in neuroscience applications, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Zhang and Chen 2018)", "Explanation": "The cited work by Zhang and Chen provides methods and techniques for applying GNNs in link prediction tasks, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Suresh et al. 2023)", "Explanation": "The cited work by Suresh et al. 
provides methods and techniques for applying GNNs in link prediction tasks, which the citing paper builds upon in its research on the same topic."}, {"Category": "Methodological Basis", "Citation": "(Weisfeiler and Leman 1968)", "Explanation": "The cited work introduces the Weisfeiler-Leman test, which is a method used in the citing paper to evaluate the power of GNNs in distinguishing non-isomorphic graph structures."}, {"Category": "Extension or Continuation", "Citation": "(Bouritsas et al. 2022)", "Explanation": "The cited work builds upon the research of the citing paper by exploring the use of spatial encoding to enhance the expressiveness of GNNs in local substructure information."}, {"Category": "Data Source", "Citation": "(Li et al. 2020)", "Explanation": "The cited work provides the data of shortest path information used in the message passing of GNNs via distance encoding in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Liu et al.", "Explanation": "The cited work introduces a method of adaptive breath/depth functions in message passing of GNNs, which is adopted in the citing paper to enhance the expressiveness of GNNs in local substructure information."}, {"Category": "Methodological Basis", "Citation": "(Wan et al. 2021)", "Explanation": "The cited work introduces the concept of affinity matrix for controlling the message from neighbors at different distances, which the citing paper adopts in the design of the new substructure descriptor to encode high-order connectivities."}, {"Category": "Methodological Basis", "Citation": "(Bouritsas et al. 2022)", "Explanation": "The cited work introduces structural biases in the aggregation function, which the citing paper adopts to break the symmetry in message passing in their own research."}, {"Category": "Methodological Basis", "Citation": "(Bodnar et al. 2021)", "Explanation": "The cited work develops a new WL aggregation scheme to take into account substructures like cycles or cliques, which the citing paper may have used in their own research to improve the expressiveness of GNN architectures."}, {"Category": "Methodological Basis", "Citation": "(Thiede, Zhou, and Kondor 2021)", "Explanation": "The cited work also develops a new WL aggregation scheme to take into account substructures like cycles or cliques, which the citing paper may have used in their own research to enhance the expressiveness of GNN architectures."}, {"Category": "Methodological Basis", "Citation": "(Horn et al. 2021)", "Explanation": "The cited work also develops a new WL aggregation scheme to take into account substructures like cycles or cliques, which the citing paper may have used in their own research to improve the expressiveness of GNN architectures."}, {"Category": "Methodological Basis", "Citation": "(Lim et al. 2022)", "Explanation": "The cited work introduces the concept of local structural information via positional encoding, which the citing paper adopts in their research to perform cycle counting in a more efficient manner."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al. 
2023)", "Explanation": "The cited work also contributes to the research by incorporating local structural information via positional encoding, which the citing paper builds upon to improve the performance of cycle counting."}, {"Category": "Extension or Continuation", "Citation": "(Dwivedi and Bresson 2020)", "Explanation": "The cited work is an extension of the research on cycle counting, as it introduces a new method that enhances the process of performing cycle counting in a more efficient manner."}, {"Category": "Extension or Continuation", "Citation": "(Kreuzer et al. 2021)", "Explanation": "The cited work is also an extension of the research on cycle counting, as it introduces a new method that improves the performance of the process."}, {"Category": "Extension or Continuation", "Citation": "(Wu et al. 2021)", "Explanation": "The cited work further extends the research on cycle counting by introducing a new method that enhances the process of performing cycle counting."}, {"Category": "Extension or Continuation", "Citation": "(Mialon et al. 2021)", "Explanation": "The cited work is another extension of the research on cycle counting, as it introduces a new method that improves the performance of the process."}, {"Category": "Data Source", "Citation": "(Li et al. 2020)", "Explanation": "The cited work is a data source for the research on cycle counting, as it provides a dataset or pre-existing model that the citing paper utilizes in their study."}, {"Category": "Data Source", "Citation": "(Dwivedi et al. 2021)", "Explanation": "The cited work is also a data source for the research on cycle counting, as it provides a dataset or pre-existing model that the citing paper utilizes in their study."}, {"Category": "Methodological Basis", "Citation": "(Li et al. 2020)", "Explanation": "The cited work presents a distance encoding module that the citing paper adopts to augment node features and control the receptive field of message passing in GNNs."}, {"Category": "Methodological Basis", "Citation": "(Liu et al. 2019)", "Explanation": "The cited work proposes an adaptive breath function that the citing paper uses to learn the importance of different-sized neighborhoods in GNNs."}, {"Category": "Methodological Basis", "Citation": "(Tang et al. 2020)", "Explanation": "The cited work imitates the Bellman-Ford algorithm to generate weights in updating node features in GNNs, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "(Abboud, Dimitrov, and Ceylan 2022)", "Explanation": "The cited work designs a scheme to propagate node representations in the shortest path neighborhood, which the citing paper incorporates in their research on GNNs."}, {"Category": "Methodological Basis", "Citation": "(Ye et al. 2019)", "Explanation": "The cited work reflects the connectivity between nodes and the possible bottleneck effects by graph curvature information, which the citing paper adapts in their research on GNNs."}, {"Category": "Methodological Basis", "Citation": "(Wan et al. 2021)", "Explanation": "The cited work SNALS utilizes an affinity matrix based on shortest paths to encode the structural information of hyperedges, which the citing paper adopts in their research to encode the structural information of substructures in a different context."}, {"Category": "Methodological Basis", "Citation": "(Xu et al. 
2018)", "Explanation": "The cited work introduces the concept of Multi-layer Perceptron Neural Networks (MPNN), which the citing paper adopts in their research to represent graphs and perform node representation learning."}, {"Category": "Methodological Basis", "Citation": "(Brandes 2001)", "Explanation": "The cited work by Brandes (2001) introduces the concept of Edge Betweenness, which the citing paper adopts as a metric to measure the number of shortest paths between nodes in a (sub)graph."}, {"Category": "Methodological Basis", "Citation": "(Wijesinghe andWang 2021)", "Explanation": "The cited work by Wijesinghe and Wang (2021) proposes a substructure descriptor that the citing paper uses to measure the number of nodes and edges in a (sub)graph."}, {"Category": "Methodological Basis", "Citation": "(Ollivier 2009;Lin, Lu, and Yau 2011)", "Explanation": "The cited works by Ollivier (2009) and Lin, Lu, and Yau (2011) introduce the concept of Discrete Graph Curvature, which the citing paper adopts in recent years to measure the curvature of (sub)graphs in MPNNs."}, {"Category": "Methodological Basis", "Citation": "(Ying et al. 2021)", "Explanation": "The cited work provides the inspiration for the spatial encoding used in the attention matrix in the citing paper, which is a key methodological element in the research conducted."}, {"Category": "Methodological Basis", "Citation": "(Papp and Wattenhofer 2022)", "Explanation": "The cited work provides a discussion on the use of high-order WL tests in measuring the expressiveness of GNN extensions, which the citing paper builds upon in its own research on the use of UnionSNN in testing non-isomorphic graphs."}, {"Category": "Methodological Basis", "Citation": "(Kersting et al. 2016)", "Explanation": "The cited work provides the benchmark datasets used in the graph classification task, which serve as a foundational element for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Hu et al. 2020)", "Explanation": "The cited work provides the OGBG-MOLHIV and OGBG-MOLBBBP datasets used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Dwivedi et al. 2020)", "Explanation": "The cited work provides the ZINC10k and ZINC-full datasets used in the experiments on graph regression in the citing paper."}, {"Category": "Data Source", "Citation": "(Sen et al. 2008)", "Explanation": "The cited work provides the citation networks (Cora, Citeseer, and PubMed) used in the experiments on node classification in the citing paper."}, {"Category": "Data Source", "Citation": "(McAuley et al. 2015)", "Explanation": "The cited work provides the Amazon co-purchase networks (Computer and Photo) used in the experiments on node classification in the citing paper."}, {"Category": "Data Source", "Citation": "(Kersting et al. 2016)", "Explanation": "The cited work provides the datasets used in the experiments on cycle detection in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Hamilton, Ying, and Leskovec 2017)", "Explanation": "The cited work GraphSAGE is used as a methodological basis for the citing paper, as it provides a method for graph representation learning that the citing paper adopts or adapts in its research."}, {"Category": "Methodological Basis", "Citation": "(Veli\u010dkovi\u0107 et al. 
2017)", "Explanation": "The cited work GAT is also used as a methodological basis for the citing paper, as it provides a method for graph attention that the citing paper may have adopted or adapted in its research."}, {"Category": "Methodological Basis", "Citation": "(Bresson and Laurent 2017)", "Explanation": "The cited work GatedGCN is also used as a methodological basis for the citing paper, as it provides a method for gated graph convolution that the citing paper may have adopted or adapted in its research."}, {"Category": "Extension or Continuation", "Citation": "(Maron et al. 2019)", "Explanation": "The cited work 3WL-GNN is used to extend the research of the citing paper, as it introduces a new method for graph learning that the citing paper may have explored in more depth or in a different context."}, {"Category": "Extension or Continuation", "Citation": "(Nguyen, Nguyen, and Phung 2019)", "Explanation": "The cited work UGformer is also used to extend the research of the citing paper, as it introduces a new method for graph learning that the citing paper may have explored in more depth or in a different context."}, {"Category": "Extension or Continuation", "Citation": "(Ying et al. 2021)", "Explanation": "The cited work Graphormer is also used to extend the research of the citing paper, as it introduces a new method for graph learning that the citing paper may have explored in more depth or in a different context."}, {"Category": "Extension or Continuation", "Citation": "(Ramp\u00e1\u0161ek et al. 2022)", "Explanation": "The cited work GPS is also used to extend the research of the citing paper, as it introduces a new method for graph learning that the citing paper may have explored in more depth or in a different context."}, {"Category": "Methodological Basis", "Citation": "(Morris et al. 2019)", "Explanation": "The cited work by Morris et al. provides the necessary theoretical basis for detecting cycles in the graph, which the citing paper uses in its research to improve the performance of the model in graph regression tasks."}, {"Category": "Methodological Basis", "Citation": "(Bodnar et al. 2021)", "Explanation": "The cited work provides the method of counting cycles, which the citing paper uses in their research to improve performance in the UnionSNN model."}, {"Category": "Methodological Basis", "Citation": "(Ye et al. 2019)", "Explanation": "The cited work provides a method for calculating the \u03b1-Ricci curvature on an edge in a graph, which the citing paper adopts in their research to measure the distance between nodes in a graph."}, {"Category": "Supporting Evidence", "Citation": "(Dwivedi et al. 2020)", "Explanation": "The cited work provides the data split of the ZINC10k and ZINC-full datasets used in the citing paper for graph regression analysis, which serves as a foundational element for the study conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Hu et al. 
2019)", "Explanation": "The cited work, GINE, is used as the MPNN layer in the GPS model in the citing paper, which extends the research on MPNN layers in graph processing."}, {"Category": "Extension or Continuation", "Citation": "(Vignac, Loukas, and Frossard 2020)", "Explanation": "The datasets proposed in the cited work are used in the study of cycle detection in the citing paper, which extends the research on cycle detection in graph analysis."}, {"Category": "Data Source", "Citation": "(OGB datasets)", "Explanation": "The OGB datasets are used as the source of data in the study of node classification in the citing paper, highlighting the reliance on external data for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Kingma and Ba 2014)", "Explanation": "The cited work by Kingma and Ba (2014) provides the implementation of the Adam optimizer, which the citing paper uses in the training of the whole network in an end-to-end manner."}, {"Category": "Methodological Basis", "Citation": "(Fey and Lenssen 2019)", "Explanation": "The cited work by Fey and Lenssen (2019) provides the PyG package, which the citing paper uses as a basis for implementing Graphormer and GPS."}, {"Category": "Methodological Basis", "Citation": "(You, Ying, and Leskovec 2020)", "Explanation": "The cited work by You, Ying, and Leskovec (2020) provides the GraphGym package, which the citing paper uses for implementing Graphormer and GPS."}, {"Category": "Methodological Basis", "Citation": "(Paszke et al. 2017)", "Explanation": "The cited work by Paszke et al. (2017) provides the PyTorch package, which the citing paper uses for implementing the code of all the other methods."}, {"Category": "Methodological Basis", "Citation": "(Wang et al. 2019)", "Explanation": "The cited work by Wang et al. (2019) provides the Deep Graph Library package, which the citing paper uses for implementing the code of all the other methods."}, {"Category": "Methodological Basis", "Citation": "(Bundy and Wallen 1984)", "Explanation": "The cited work provides the breadth-first search (BFS) algorithm for calculating single-source shortest paths in an unweighted graph, which the citing paper adopts to compute the all-pairs shortest paths in a union subgraph."}, {"Category": "Data Source", "Citation": "(Ye et al.", "Explanation": "The cited work is the source of the analysis of the time complexity of each substructure descriptor in the whole graph, which the citing paper utilizes in its research on graph complexity."}]
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21" ], "table_ref": [], "text": "I N recent years, 3D models have been applied to many applica- tions, such as fabrication, augmented reality, and education. An increasing number of researchers have focused on how to satisfy the huge industrial demands for 3D models. Obtaining 3D models via professional software (such as Maya, Blender, and 3DSMAX) is a laborious manual process that requires specific expertise for the user. Thus, obtaining 3D models more efficiently and concisely has become a hot topic recently. However, the complex visual and structural information of the 3D models creates substantial challenges and difficulties. Consequently, different types of approaches have been proposed to handle this problem [1], [2], [3], [4], and several works have attempted to recover 3D information from 2D images (rendered view [5], [6], [7], [8], [9], scene [10], [11], [12], sketch [13], [14], [15]). In addition, some cross-modal 3D retrieval methods [16], [17], [18] are used to search and match the 3D models in databases, which reduces the difficulty of acquiring models, but still falls short of human expectations in terms of the accuracy and matching requirements.\nA more convenient way of acquiring 3D models is to use natural language. Based on natural language, humans just need to express their thoughts precisely without the need to provide any additional information, such as images or similar 3D objects. However, this way does not meet well with the expectation of humans for 3D models. In recent years, only a few works have focused on this challenging field. Text2Shape [19] is the first work to generate colored 3D shapes from flexible natural language. Their work uses a similar idea with several cross-modal representation methods [20], [21] and consists of two parts. First, they learn the joint representations of text and 3D shapes. Then, they use the learned text embedding as input conditions to predict the corresponding 3D shapes directly by training a GAN structure. However, the method only generates an approximate appearance that matches the input text and does not achieve a sufficiently satisfactory generation quality.\nThe recent work [22] adopts a more straightforward approach to guide the 3D model generation using textual information, which first trains a 3D autoencoder(AE) and directly projects the text features into the learned 3D feature space. Using the aligned text feature to feed with the learned 3D shape decoder, their methods can achieve a favorable 3D shape generation performance.\nHowever, due to the huge cross-modal gap between text and 3D model, the aforementioned methods still have limitations when faced with some specific situations:\n• Rough description: A single sentence cannot fully evolve all the geometric information. Meanwhile, many sentences may also lack detailed descriptions, especially a 3D structure information description. We need to consider how to supplement this information.\n• Diversity description: Different people often have different descriptions of the same object. The flexibility of natural language also causes ambiguities in learning stable crossmodal representations. The lack of large-scale text-3D datasets further amplifies this kind of ambiguity and leads to the uneven quality of the generated 3D shapes." 
}, { "figure_ref": [], "heading": "Motivation", "publication_ref": [ "b18", "b22", "b18" ], "table_ref": [], "text": "In light of our analysis, we hope the 3D generation model can automatically introduce some prior knowledge in the same way as humans. Fig. 1 (b) shows this motivation. When we say: \"an executive chair with five legs with wheels on it with cushions covered with blue material\". A human can think about characteristics such as \"five legs\", \"wheels\", and related 3D models. These pieces of information can help humans synthesize the final 3D model to handle the rough description problem. Inspired by this, we hope to leverage similar prior knowledge to assist with text-3D generation methods. In this process, the diversity prior knowledge can help us to synthesize diversity 3D models and handle the diversity description problem. Specifically, we need to address the following fundamental challenges:\n• How to define the format of prior knowledge, which maintains the latent geometric structure information and the related 3D models in a human-like way. We also need to ensure that this prior knowledge is useful for improving the model generation;\n• How can prior knowledge be achieved based on the input textual information? We need to capture the correlation between the prior knowledge and the input text. This correlation can also be used to search the related prior knowledge based on the text in the testing step;\n• How to design the generation network to leverage the prior knowledge to enhance the geometric detail and improve the generation qualities.\nIn this paper, we propose a novel 3D generation model via textual information (T2TD) address these issues. Specifically, our framework is built upon the existing text-3D data set [19], which explicitly define the entity and edge to construct a text-3D knowledge that maintains the correlation between the text and 3D shape, as well as the related 3D shape and attributes. Here, we define the related 3D shape and textual attributes as prior knowledge. The knowledge graph can save the prior knowledge and introduce more knowledge information as the data increase. In the generation step, we apply [23] to search the prior knowledge from the knowledge graph according to the text description. However, it should be noted that the searched shapes' prior knowledge is only similar to the text description, but not completely consistent. To remove irrelevant shape information, we propose an effective casual model to select shape information from the prior shape knowledge, selecting feature information strongly related to the text description. Finally, we apply a multi-layer transformer structure to progressively fuse the prior knowledge and the textual attribute information, which compensates for the lack of structural information in the text and enhances the final performance of the 3D generation models. Compared with a traditional generation model, we add prior knowledge into the generation network, which can improve the final generation performance. The final experimental results also demonstrate that our approach significantly improves the 3D model generation quality and performs favorably against the SOTA methods on the Text2Shape [19] datasets." 
}, { "figure_ref": [], "heading": "Contribution", "publication_ref": [], "table_ref": [], "text": "The contributions of this paper can be summarized as follows:\n• We define the format of prior knowledge and first propose a novel 3D shape knowledge graph to bridge the gap between the text and the 3D models. In addition, using our constructed 3D commonsense knowledge graph, we can save and achieve richer prior knowledge;" }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "We propose a novel casual inference model to select the related feature and remove the unrelated 3D information from the prior knowledge, which can achieve more useful information for final 3D model generation;" }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "We propose a novel text-3D generation model (T2TD), which can fuse the useful prior knowledge and generate the related 3D model according to the textual information and greatly reduces the difficulty of obtaining 3D model data;\nThe remainder of this article is organized as follows. Section 2 presents several related works. Section 3 provides the details of our approach. The corresponding experimental results and analysis are given in Section 4. Finally, we discuss the limitations and our future work and conclude this paper in Section 5." }, { "figure_ref": [], "heading": "RELATED WORKS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "3D Shape Generation", "publication_ref": [ "b0", "b1", "b4", "b5", "b6", "b2", "b23", "b24", "b25", "b8", "b26", "b27", "b28", "b29", "b30", "b31", "b21", "b28", "b32", "b32", "b33", "b34", "b35" ], "table_ref": [], "text": "Recently, there has been a considerable amount of work devoted to the task of 3D shape generation. In the traditional methods, the frameworks always generate 3D data for a specific 3D shape representations, such as 3D voxels [1], [2], [5], [6], [7], point clouds [3], [24], [25], [26] and meshes [9], [27], [28]. However, these methods have a common limitation is that the generated 3D shapes are limited in a specific resolution, which causes inflexibility in practical applications.\nTo solve the problem, recent works start to explore the implicit functions [29], [30], [31], [32] to represent 3D shapes. The implicit function-based methods calculate the 3D model surface by encoding the point coordinates and predicting occupancy of each position, together with the Marching Cubes algorithm, which can generate 3D shapes with arbitrary resolution. In addition to the 3D generation task [22], [29], [33], the implicit functions have been used in many other tasks, such as image-based 3D reconstruction [33], [34] and 3D shape deformation tasks [35], [36]. Fig. 2. The overall framework of T2TD mainly includes three parts: a) A pre-trained representation module, which learns the 3D geometric information through an autoencoder and learns text-3D joint representations through cross-modal contrastive learning. b) Constructing the text-3D knowledge graph to structurally associate the texts and 3D shapes, which is used to provide prior information for the generative network. c) A text-3D generation network to leverage text input and retrieve prior knowledge to generate 3D shapes." 
}, { "figure_ref": [], "heading": "Text-Image Generation", "publication_ref": [ "b36", "b37", "b38", "b19", "b20", "b39", "b40", "b41", "b42", "b43", "b44", "b45", "b46", "b47", "b48", "b49", "b50" ], "table_ref": [], "text": "With the publications of large-scale text-image datasets [37], [38], [39], remarkable progress has been made in the field of text-image representations [20], [21]. Many related works begin to focus on how to use natural language to get high-quality and semantically consistent images. In the early research of this field, many approaches [40], [41], [42] leverage the GAN [43] structure by feeding text embeddings as the conditional input to generate corresponding images through natural language. And the subsequent works [44], [45], [46], [47], [48], [49] improved the GAN-based framework from different aspects. Recently, several approaches have been proposed [50], [51] which are not based on GAN structure and get favorable generation performance.\nCompared with 2D images, the 3D shape expresses the complete spatial structure of a real object and has rich geometric information. As for the text-3D generation task, the lack of largescale text-3D datasets also poses difficulties to the research of this kind of task. Therefore, we hope to use the knowledge graph to make full use of the existing dataset and improve the text-3D performance." }, { "figure_ref": [], "heading": "Text-3D Generation", "publication_ref": [ "b18", "b51", "b52", "b51", "b52", "b18", "b39", "b42", "b21", "b53", "b54", "b55" ], "table_ref": [], "text": "Currently, most of the related text-3D work is engaged to handle the text-3D retrieval [19], [52], [53] or 3D shape captioning [52], [53] tasks, there are only a few works engaged in addressing the challenging task of using natural language to generate 3D shapes. Text2shape [19] adopts a similar idea with text-to-image generation methods [40] to train the generator with a GAN [43] structure and directly predict 3D volume as its output. However, due to the inadequate joint learning of natural language and 3D shapes, it fails to generate favorable 3D shapes consistent with the input text.\nText-Guided [22] takes an alternative approach to solve this problem. It aligns text and shape features in the same embedding space learned by the 3D autoencoder. As a result, the extracted text features are directly used to generate 3D models using the 3D decoder. In addition, to diversify the generation results, they adopt an IMLE-based (Implicit Maximum Likelihood Estimation) generator to apply random noise on the learned text feature, which avoids the mode collapse of GANs.\nThere are also some other works to achieve the task of text-3D generation from different perspectives. Such as [54] engage in generating high-quality textures for 3D mesh according to text descriptions, and [55] exploit the CLIP [56] model to generate approximate shape from the description in a Zero-Shot way. Different from the manner to generate 3D models of them, the aim of our method is to use text information to directly generate 3D shapes with semantic consistent details of structure and color information, we do not treat them as competitors." }, { "figure_ref": [], "heading": "Visual Generation via Prior Knowledge", "publication_ref": [ "b47", "b56" ], "table_ref": [], "text": "Several previous works have successfully introduced prior knowledge into cross-modal visual generation tasks. 
To overcome the deficiency of detailed 2D information in the text descriptions, the RifeGAN [48] utilizes an external knowledge database to retrieve similar sentences according to the input text descriptions to supply detailed semantic information. In the 3D field, Mem3D [57] utilizes retrieved 3D shapes to serve as the prior knowledge to help the network recover 3D information from 2D images with complex backgrounds and heavy occlusions.\nIn our proposed method, we use both of the above types of prior knowledge to assist in the text-3D generation tasks, which are from semantic and visual two perspectives. With the corresponding generative networks, we can effectively integrate prior knowledge into the generation process." }, { "figure_ref": [], "heading": "APPROACH", "publication_ref": [], "table_ref": [], "text": "In this section, we detail our approach. Fig. 2 shows the framework which includes three key parts. 1) Pre-trained representation model: it is used to learn the textual and 3D model features in the common feature space. The aim of this operation is to build the correlation between the text and the 3D shape for the knowledge graph construction; 2) 3D shape knowledge graph: we define the entity, edge, and related attribute information to save the prior knowledge in the knowledge graph, which can be used to search and associate the related shapes and semantic information based on query text; 3) 3D model generation module: it is used to fuse the cross-model prior knowledge information to make up for the lack of structural information, and generate the target 3D model. We will detail these modules in the next subsections." }, { "figure_ref": [], "heading": "Pre-trained Representation Module", "publication_ref": [], "table_ref": [], "text": "This module is concerned with data preprocessing. Specifically, we exploit a 3D shape autoencoder to fully learn the representations of the 3D shapes with rich geometric and color information.\nIn addition, we propose a joint feature representation model to train the text in the same latent space with the 3D shapes. We will detail these modules in the following subsections." }, { "figure_ref": [ "fig_2" ], "heading": "Text Encoder", "publication_ref": [ "b57", "b55", "b58" ], "table_ref": [], "text": "The text encoder E t is a 6-layer transformer encoder [58], which is shown in Fig. 3. Here, the structure of the transformer can effectively improve the performance of textual embeddings, which have been proven by many classic approaches [56], [59]. We first extract the embeddings x t ∈ R L×ew of the query text, where L is the length of the sentence and e w indicates word embeddings. Then, E t receives x t , Here, the transformer encoder consists of the multi-head self-attention layers, which attempt to find more latent correlation information between the words and reduce the redundant information to improve the performance of the final textual representation. The transformer output is operated by the pool function and achieves the final text feature f t ∈ R d ." }, { "figure_ref": [ "fig_2" ], "heading": "Shape Encoder", "publication_ref": [], "table_ref": [], "text": "We use the 3D volume as the input to learn the information from 3D models. The basic structure of the networks is shown in Fig. 
3.\nInspired by the basic method in previous work, our voxel encoder E v consists of 5 3D convolutional blocks to take a 3D input x v ∈ R rv×rv×rv×4 and calculate it to the 3D shape features f s ∈ R d , where r v represents the resolution of the input 3D shapes, and d represents the dimension of the extracted features." }, { "figure_ref": [], "heading": "Implicit Shape Decoders", "publication_ref": [ "b28" ], "table_ref": [], "text": "Inspired by [29], we exploit the implicit 3D shape representation as the prediction output of the shape encoder. Here, we sample the 3D volume as an RGBA point sequence S ∈ R N ×(1+3) , with a sampled sequence representing the 3D spatial position of each point, where N represents the number of sampled points. Respectively, we applied shape decoder D s and color decoder D c to predict shape occupancy and RGB color for each point. By concatenating the point position p with the extracted f s , D s predicts the shape occupancy value with five fully-connected and leaky-ReLU layers. D c has the same architecture as D s and outputs the predicted RGB color values according to the same point position p." }, { "figure_ref": [], "heading": "Optimization", "publication_ref": [ "b59" ], "table_ref": [], "text": "To pre-establish the basic relationship between the text and shape information, inspired by ConVIRT [60], we introduce a crossmodal contrastive loss to optimize the pre-trained modules. In a mini-batch with n shape-text pairs, the i th pairs can be represented as (x ti , x vi ), which can be defined as the positive pairs. In contrast, the negative pair can be defined as (x ti , x vj ) or (x vi , x tj ), i ̸ = j. The loss function can be written as:\nl t→v i = -log exp (⟨E t (x ti ), E v (x vi )⟩) n j=1 exp E t (x ti ), E v (x vj ) ,(1)\nl v→t i = -log exp (⟨E v (x vi ), E t (x ti )⟩) n j=1 exp E v (x vi ), E t (x tj ) ,(2)\nwhere ⟨ ⟩ is the cosine similarity between two feature vectors. We maximize the feature similarity between the positive pairs and minimize the negative pairs. The final cross-modal contrastive loss can be written as:\nL joint = 1 n n i=1 αl t→v i + (1 -α)l v→t i ,(3)\nwhere α ∈ [0, 1] is the weight parameter that controls the balance of the loss function between two calculating directions. The introduction of the optimized target can pre-establish the relationship between text and shape information. The learned cross-modal correlation will be further exploited in the knowledge graph's construction step.\nIn addition, the 3D shape autoencoder architecture (E v ,D s ,D c ) is trained to obtain the geometric and color information for reconstructing the final 3D shape. With the 3D shape feature f s extracted by E v , D s and D c are optimized with :\nL ae = ||D s (f s p) -I s || 2 + ||D c (f s p) × I s -I c || 2 ,(4)\nwhere the I s and I c are the sampled ground truth values of the point occupancy and the color corresponding to the same point position p. Here, D s and D c are trained to predict the shape and color separately, and the loss function is applied to minimize the L2 distance between the predicted values and the ground truth. To predict the color values according to the point occupancy, the optimization of the predicted color only takes effect on point positions where the occupancy is 1 in the I s . 
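A minimal PyTorch sketch of this pre-training objective is given below. It assumes mini-batch features of shape (n, d) from the text and shape encoders, point-wise occupancy of shape (B, N), and colors of shape (B, N, 3); the function names and the mean reduction are our assumptions rather than the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(text_feats, shape_feats, alpha=0.5):
    """Symmetric text-shape contrastive loss over a mini-batch (Eqs. 1-3).

    text_feats, shape_feats: (n, d) outputs of the text and shape encoders;
    row i of each tensor is assumed to form a positive (text, shape) pair.
    """
    t = F.normalize(text_feats, dim=-1)
    s = F.normalize(shape_feats, dim=-1)
    sim = t @ s.t()                                   # sim[i, j] = cosine(f_t_i, f_s_j)
    targets = torch.arange(sim.size(0), device=sim.device)
    loss_t2v = F.cross_entropy(sim, targets)          # l^{t->v}, Eq. (1)
    loss_v2t = F.cross_entropy(sim.t(), targets)      # l^{v->t}, Eq. (2)
    return alpha * loss_t2v + (1 - alpha) * loss_v2t  # L_joint, Eq. (3)

def autoencoder_loss(occ_pred, rgb_pred, occ_gt, rgb_gt):
    """Reconstruction loss of the implicit decoders (Eq. 4): L2 on occupancy
    plus L2 on color, with the color term masked by ground-truth occupancy."""
    shape_term = ((occ_pred - occ_gt) ** 2).mean()
    color_term = ((rgb_pred * occ_gt.unsqueeze(-1) - rgb_gt) ** 2).mean()
    return shape_term + color_term
```

Writing the contrastive terms as a cross-entropy over the cosine-similarity matrix is equivalent to the negative-log-softmax form of Eqs. (1)-(2) and keeps both directions of the loss in a few lines.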
" }, { "figure_ref": [], "heading": "Text-3D Knowledge Graph", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a novel 3D knowledge graph to save the 3D shape prior knowledge, which can store the association between the natural language and the 3D shapes from multiple perspectives. In the process of knowledge graph construction, we define different entities and relations to map the entire text-3D dataset into a knowledge graph K.\n• 3D Shape Entity (S): It represents each 3D shape from the dataset. Here, we utilize the pre-trained 3D shape encoder E v to extract features of each 3D shape as the shape entity descriptor in K." }, { "figure_ref": [], "heading": "•", "publication_ref": [ "b60" ], "table_ref": [], "text": "Text Entity (T): The text description of each 3D shape.\nWe extract text features with the pre-trained text encoder E t as the text entity descriptor.\n• Attribute Entity (A): It can be seen as the sparse semantic label describing the certain perspective of the 3D model. For example, one 3D shape description \"comfortable red color chair with four legs\" has attributes of {'comfortable', 'red', 'chair', 'four legs'}. In the proposed framework, we use the keyphrase toolkit [61] to extract the attribute entities from each 3D shape description. After a manual adjustment, 377 words and 1, 679 noun phrases and descriptive phrases are finally selected as attribute entities.\nSimilarly, we utilize the pre-trained text encoder E t to extract features for each selected attribute as entity descriptors.\nAccording to these entities, we further define the following relations, which can also be regarded as the edges in the graph:\n• Similar Shape Edge (S-S): It describes the correlation among the 3D model entities. To construct prior relationships, for each 3D shape with its multiple text descriptions, we conduct multiple text-shape retrievals and one shapeshape retrieval using the pre-trained encoders E t and E v based on cosine distance. For each 3D shape, we gather all the retrieval results and calculate the similarity scores with their retrieved frequencies and cosine distances. The top k 3D shapes with higher similarity scores are selected to build S-S relations, and each similarity score is set as the weight of the edges;\n• Caption Edge (T-S): It stores the original correlation between the text and the 3D shapes, and the T-S edge simply links the text entities with its 3D shape. In the application scenario of this knowledge graph, a 3D shape is described by multiple texts. Therefore, in this knowledge graph, a shape entity is often linked by multiple text entities, while a text entity is linked by only one shape entity;" }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Attribute Edge (S-A and T-A):", "publication_ref": [ "b22" ], "table_ref": [], "text": "The T-A edge links text entities and their contained attribute entities, and the S-A edge links the 3D shapes with all its matched attribute entities to their text descriptions. These edges can be used to bridge the relationship between two shape entities or text entities.\nBased on these definitions, the 3D shape knowledge graph can effectively save clear shape information, attribute information, and textual description information. 
The different edges can help us to find related textual and shape information according to the query text.\nIn general, we are inspired by the mechanism of human thoughts to consider similar shapes (S-S) and attributes (S-A) from two different prior knowledge perspectives. Here, the S-S edge helps us find similar 3D models via the query text. The S-A edge helps us to find the related attribute information. For example, when we obtain the description of an object: \"a red chair has four legs and a threaded backrest\". We can extract the related attribute information: four legs, red, chair, threaded backrest. This attribute information can be utilized to find the related shape information as the shape prior knowledge.\nThe mathematical method is described as follows. For a query text T , we first find its related attribute entities in the constructed knowledge graph. Then, we apply the text encoder to extract f t (T ) and f t (a i ) as the feature of the text and attributes respectively.\nFinally, the multi-entity search method [23] is used to search related shape entities as the prior knowledge. For details, please refer to Algorithm.1." }, { "figure_ref": [], "heading": "Algorithm 1 Process of prior knowledge retrieval", "publication_ref": [], "table_ref": [], "text": "Require: text description T , text-3d knowledge graph K with entities {A, S, T } and edges {S -S, T -S, T -A, S -A} Ensure: related shapes P s = {s 1 , s 2 , ...s m } and related attributes P a = {a 1 , a 2 , ...a n } 1: Match existing attribute entities with T , 2: for a i in A do " }, { "figure_ref": [], "heading": "3D Shape Generative Network", "publication_ref": [], "table_ref": [], "text": "The goal of this module is to fuse query text and multi-modal prior knowledge for more accurate structure information representation, which includes four key parts: 1) Feature selection: We introduce the causal inference model to remove the unrelated structure information from the prior shape feature. 2) Prior fusion module: It learns the correlation between select prior knowledge and input textual information, combines them into the fused feature, and feeds into the generative network. 3) Generative network: By finetuning the pre-trained autoencoder with the guidance of prior knowledge, it projects text features into 3D feature space to achieve text-3D generation. 4) Diversity generation: It improves the diversity of the generation results within the proposed prior guided IMLE structure. We will detail these modules in the following subsections." }, { "figure_ref": [], "heading": "Feature Selection", "publication_ref": [ "b61" ], "table_ref": [], "text": "Based on the query text T , we can obtain the related 3D shapes P s = {s 1 , s 2 , ...s m } and semantic attributes P a = {a 1 , a 2 , ...a n } as prior knowledge from the 3D shape knowledge graph K. However, we note that the related 3D shape s i either resembles the query text or matches only part of the information in the query text.\nWe hope to remove this unrelated information, save the useful information for the next fusion operation and guarantee the completeness of the fusion feature. Based on this analysis, we are inspired by [62] and introduce the causal model into the fusion model.\nWe first construct the causal graph as in Fig. 5(a), where the nodes denote the data variables and the directed edges denote the (functional) causality. Here, X = {f 1 , ..., f m } denotes the features of the retrieval shapes extracted by the shape encoder E v . 
Y is the fusion feature of the target shape, which is constructed by X. E = E v is the shape encoder learned by the pre-trained model detailed in Section.3.1. C is the redundant information or unrelated feature in X. E → X denotes the feature X extracted by encoder E. E → C denotes the interference information is also extracted by E. X → C denotes that C exists in A. X → Y ← C means that Y can be directed by X and also be influenced by C. In other words, the second way, X → C → Y , a) b)\nFig. 5. The causal graph, X denotes the features of retrieval shapes extracted by the shape encoder Y is the fusion feature of target shape, E is the shape encoder, C is the redundant information or unrelated feature in X that act as the confounders in this causal model.\nis inevitable because the encoder E is not for feature fusion in the training step. Our goal is to find the true causality between X and Y , and eliminate the interference of C. To pursue the true causality between X and Y , we need to use the causal intervention P (Y |do(X)) instead of the likelihood P (Y |X).\nIn this paper, we propose using the backdoor adjustment to achieve P (Y |do(X)). The backdoor adjustment for the graph is as follows:\nP (Y |do(X)) = d P (Y |X = x, C = g(X = x, E))P (E),(5)\nwhere g means that feature X causes C or C is extracted from the prior feature X. comes from {f 1 , f m } extracted from the related prior shapes, which includes the related structure information corresponding to the query text T . Meanwhile, it also includes unrelated information C. We apply the random sampling estimation to eliminate the influence of C.\nFirst, we connect {f 1 , ..., f n } to construct X = {f 1 : f 2 : ... : f n } ∈ R 1×nd . Suppose that F is the index set of the feature dimensions of X. We divide F into n equal-size disjoint subsets. F i is the index of X. g(x, E) := {k|k ∈ F i ∩ I t }, where I t is an index set whose corresponding absolute values in X are larger than the threshold t. We set t = e -3 in this paper.\nHere, we hope the final selected feature can contain as much information as possible about the structure information described by text T . Based on the pre-trained model E v and E t , the text feature and shape feature belong to the same feature space. We think the selected feature should be similar to the target shape feature f s (ground truth). Thus, we define Y ≈ f s . Eq.5 can be rewritten as:\nP (Y |do(X)) = 1 n n i=1 P (f s |[X] c ),(6)\nwhere c = {k|k ∈ F i ∩I t } is implemented as the index set defined above.\n[X] c is a feature selector that selects the dimensions of x according to the index set c. n is the number of samplings. i is the i-th sampling. Here, we add one MLP layer to handle the selected feature [X] c . The process can be defined as x ′ i = J i ([X] c , w i ). w i is the parameter of the MLP layer. Based on this design, the final loss function can be written as:\nL = 1 n n i=1 log( exp(f s • x ′ T i ) n j=1 exp(f s • x ′ T j )\n).\nBy optimization, we will obtain n number of optimization function J. For the shape prior knowledge X = {f 1 : f 2 : ... : f n }, we can obtain the processed and the selected features F ′ p = {x ′ 1 , ..., x ′ n } as the input of the prior fusion module." }, { "figure_ref": [], "heading": "Prior Fusion Module", "publication_ref": [ "b21" ], "table_ref": [], "text": "The PFM hierarchically integrates F ′ p = {x ′ 1 , ..., x ′ n } and F a with f t in two steps. For each step, the calculation process is based on the stacked transformer blocks. 
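Before turning to those transformer blocks, the feature-selection step described above can be sketched as follows. This is one minimal reading of Eqs. (6)-(7): it assumes m*d is divisible by n, implements each J_i as a single linear layer, takes the threshold t as 1e-3, and replaces the paper's contrastive form with a simple cosine alignment term; the class and function names are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalFeatureSelector(nn.Module):
    """Random-subset feature selection over concatenated prior-shape features.

    X = [f_1 : ... : f_m] (dimension m*d) is split into n disjoint index
    subsets; inside each subset, entries with |x| <= t are zeroed out, and
    a per-subset MLP J_i maps the masked slice to a d-dim selected feature.
    """
    def __init__(self, m, d, n, t=1e-3):
        super().__init__()
        idx = torch.randperm(m * d).view(n, -1)   # n equal-size index subsets
        self.register_buffer("idx", idx)
        self.t = t
        self.mlps = nn.ModuleList([nn.Linear(idx.size(1), d) for _ in range(n)])

    def forward(self, prior_feats):               # prior_feats: (B, m, d)
        X = prior_feats.flatten(1)                # (B, m*d)
        selected = []
        for i, mlp in enumerate(self.mlps):
            xi = X[:, self.idx[i]]                        # one index subset
            xi = xi * (xi.abs() > self.t).float()         # keep strong entries
            selected.append(mlp(xi))                      # x'_i, shape (B, d)
        return torch.stack(selected, dim=1)               # (B, n, d)

def selection_loss(selected, f_s):
    """Pull each selected feature x'_i toward the target shape feature f_s
    (a simplified stand-in for the contrastive objective of Eq. (7))."""
    sim = F.cosine_similarity(selected, f_s.unsqueeze(1), dim=-1)  # (B, n)
    return (1.0 - sim).mean()
```

In this reading, the random disjoint index subsets play the role of the sampling over g(X, E), so each x'_i sees a different slice of the concatenated prior features before being aligned with the target shape feature.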
Specifically, each layer of the transformer block has a multi-head attention module and a position-wise feed-forward network (FFN). The first step is to update f t with shape priors F ′ p , setting F 1 t = {f t ⊕ F ′ p } as the initial input sequence of the text feature and the selected shape prior. F i t is the input feature of the i th layer, and the calculation process of each layer can be written as:\nQ = W Q • F i-1 t , K = W K • F i-1 t , V = W V • F i-1 t F i t = M ultihead(Q, K, V ) F i t = F F N (F i t ), (8\n)\nwhere i is the index of the transformer layers. Finally, in the last l th layer, we can obtain the updated text feature as f ′ t . This step aims to leverage the attention mechanism to learn the correlation between the text information and the shape priors, thus enriching the text feature with 3D information. Then, we adopt a similar idea to [22] to fuse attribute information in spatial feature space. Concatenating f ′ t with the points position p into the spatial feature d+3) . Using fully-connected layers to convert S t and F a into Ŝt , Fa with a lower favorable input dimension, similar to the first step, the attribute fusion step can be formulated as:\nS t = {f ′ t p} ∈ R N ×(\nQ = W Q • Ŝj-1 t , K = W K • Fa , V = W V • Fa Ŝj t = M ultihead(Q, K, V ) Ŝj t = F F N ( t ).(9)\nIn the m th layer of the final part, the calculated t = Ŝm t will serve as extra information, with S t into S = {S t ⊕ Ŝ′ t }, which is the final fused feature used to feed into the 3D shape decoder. To adapt the dimension of the fused feature, the existing D = {D s , D c } is extended to the dimension of the Ŝ′ t , and the extended 3D shape decoder is denoted as D ′ = {D ′ s , D ′ c }." }, { "figure_ref": [ "fig_3" ], "heading": "Generative Network", "publication_ref": [ "b62" ], "table_ref": [], "text": "The basic framework of the generation network is shown in Fig. 4(b,c), which includes the encoder E v and E t utilized for extracting the text and 3D shape features, respectively. The fusion module (PFM) fuses the query text information with prior knowledge. The decoder D ′ is used to predict the final 3D shape model.\nTo optimize the parameters of E v , P F M , and D′ as well as to initialize the parameters of the network with the pre-trained checkpoint, we use the same L ae introduced above to renew training the autoencoder with prior knowledge guidance, which is formulated as:\nL ae = ||D ′ s (S) -I s || 2 + ||D ′ c (S) × I s -I c || 2 . (10\n)\nFor the framework to gain the ability to generate from text to 3D, an L2 norm-based regression loss L reg is applied to project text feature f t into 3D latent space.\nL reg = ||f t -f s || 2 , (11\n)\nwhere f t and f s are the extracted features of the text description T and its corresponding 3D shape ground truth V . In the text-3D generation process, the f t can be directly used to synthesize the 3D model generation under the guidance of prior knowledge. Finally, the optimization target of the entire generation network is:\nL = λL ae + (1 -λ)L reg , (12\n)\nwhere λ is the weight parameter that controls the balance of the loss functions. We applied the Adam method [63] to optimize the generative network and obtain the parameters of E t , E v , P F M , and D ′ for the text-3D generation." }, { "figure_ref": [], "heading": "Diversity Generation", "publication_ref": [ "b21", "b63", "b21" ], "table_ref": [], "text": "Different people have different ideas. Therefore, the same text description should produce diverse shapes. 
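Before describing how that diversity is obtained, the first fusion step of Eq. (8) can be sketched as a stack of self-attention blocks over the sequence {f_t ⊕ F'_p}. The residual connections, layer normalization, head count, and FFN width below are our assumptions; the paper only specifies the multi-head attention and FFN sub-layers.

```python
import torch
import torch.nn as nn

class PriorFusionBlock(nn.Module):
    """One transformer layer of the prior fusion module (cf. Eq. (8))."""
    def __init__(self, d, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, x):                      # x: (B, 1 + n, d)
        h, _ = self.attn(x, x, x)              # self-attention over [f_t ⊕ F'_p]
        x = self.norm1(x + h)
        return self.norm2(x + self.ffn(x))

def fuse_text_with_shape_priors(f_t, selected_priors, layers):
    """f_t: (B, d); selected_priors: (B, n, d); layers: PriorFusionBlock list."""
    seq = torch.cat([f_t.unsqueeze(1), selected_priors], dim=1)  # F_t^1
    for layer in layers:
        seq = layer(seq)
    return seq[:, 0]                           # updated text feature f'_t
```

The second step of Eq. (9) follows the same pattern, with the spatial feature sequence as the query and the attribute features F_a providing the keys and values.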
To achieve the diverse shape generation results from the same text description, we adopt a similar idea with [22] by applying an IMLE [64] (implicit maximum likelihood estimation)-based latent generator G to the extracted text features f t for randomness. Here, given a set of random noise Z = {z 1 , z 2 . . . z l }, the perturbed feature is formulated as\nF ′ = G(f t , Z) = {f ′ 1 , f ′ 2 , . . . , f ′ l }.\nHowever, the original IMLE process has the limitation that it is challenging to generate a sufficiently large number of samples. The reason for this is that the optimized process minimizes the distance between F ′ and the ground truth f s , which would result in no more significant changes from the random noise. This conclusion is supported by the final example [22].\nTo overcome this difficulty, we introduce the shape of prior knowledge from the knowledge graph G to increase the diversity. We achieve the related shape priors F p = {f 1 p , f 2 p , ..., f m p } based on the query text T . Then, we resample a number of reference features F g = {f 1 g , f 2 g , . . . , f h g } using a linear interpolation function, which is calculated as:\nF g = f t + (F p -f t ) σ • η,(13)\nwhere σ and η control the range and step of the interpolation function, and f t is the feature of T . The sampled F g is an\nWooden chair with white color sponge on seating area and back support." }, { "figure_ref": [], "heading": "Input Text", "publication_ref": [], "table_ref": [], "text": "Prior Knowledge Generated / GT Fig. 7. Several text-guided generation results. The models generated by our method basically contain the specific shape descriptions described in the text. The prior knowledge provided by the knowledge graph here provides certain supplementary information to ensure the similarity of the generative models.\nadditional optimization objective, not f s . For each perturbed feature f ′ , we reselect its optimization target by calculating its cosine similarity between F g . The process is marked as follows:\nf target = arg min i=1,...,h d(G ϕ (f t , Z), f i g ),(14)\nwhere d is the distance metric, ϕ is the weights of the generator G. The goal is to find the optimization target f target from F g . The final optimized loss function can be written as:\nL G = min k=1,...,l ||G ϕ (f t , z k ) -f target || 2 2 .(15)\nIn the optimization process, we first need to fix the parameter ϕ to find f target . Then, we optimize G based on the new target information. Here, F g provides a richer reference in the training process. The related experiments also demonstrate that our method can produce more variable models." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "To evaluate the effectiveness of the proposed framework, we carried out a series of experiments. At the beginning of this section, we introduce the dataset details and the experiment settings. In Sec. 4.3, we visualize several generated results and make comparisons with the SOTA methods. To further verify the effectiveness of each proposed module, we conducted ablation studies and comparative experiments, as shown in Sec. 4.4." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b18", "b64", "b18" ], "table_ref": [], "text": "We conduct the experiments on the text-3D dataset in [19], which consists of a primitive subset and a ShapeNet [65] subset. We use the ShapeNet subset to build the text-3D knowledge graph and conduct experiments. It contains 6,521 chairs and 8,378 tables of 3D volumes. 
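Looking back at the diversity generator above, one reading of its prior-guided IMLE objective (Eqs. (13)-(15)) is sketched below. The generator interface G(f_t, z), the number of interpolation steps, the use of cosine similarity for target selection, and all names are assumptions introduced for illustration.

```python
import torch
import torch.nn.functional as F

def prior_guided_imle_loss(G, f_t, prior_feats, noise, sigma=4.0, eta_steps=4):
    """f_t: (d,) text feature; prior_feats: (m, d) retrieved shape priors;
    noise: (l, z_dim) noise samples; G maps (f_t, z) to a perturbed feature."""
    # Eq. (13): reference features sampled on the line from f_t toward each prior
    etas = torch.arange(1, eta_steps + 1, dtype=f_t.dtype, device=f_t.device)
    refs = f_t + (prior_feats.unsqueeze(1) - f_t) / sigma * etas.view(1, -1, 1)
    refs = refs.reshape(-1, f_t.size(0))                    # F_g: (m*eta_steps, d)

    perturbed = torch.stack([G(f_t, z) for z in noise])     # (l, d)

    # Eq. (14): closest reference (by cosine similarity) serves as the target
    sims = F.normalize(perturbed, dim=-1) @ F.normalize(refs, dim=-1).t()
    targets = refs[sims.argmax(dim=-1)].detach()

    # Eq. (15): keep the best-matching noise sample and regress it to its target
    dists = ((perturbed - targets) ** 2).sum(dim=-1)
    return dists.min()
```

Sampling the optimization targets from the interpolated reference set F_g, rather than from the single ground-truth feature, is what allows the perturbed features to drift further from the training shape and produce more varied generations.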
Five text captions are presented for each 3D shape. To conduct the experiments, we follow the same training/validation/testing split as in the previous related works [19]." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [], "table_ref": [], "text": "We implement our proposed framework on PyTorch and use an Nvidia Tesla A40 GPU to complete all experiments. To pre-train the representation module, we first train the autoencoder in the output resolution of 16 3 , then further refine the parameters in 16 3 , and finally the L joint is utilized to optimize the text encoder. The process is optimized with an Adam optimizer with a beta of 0.99, an initial learning rate of 10 -4 .\nBased on the data pre-processing, we construct the knowledge graph and build the training data. For the text-3D generation network, we train the network end-to-end by initializing the network with the pre-trained parameters. To make the training process stable, we adjust the weight of each proposed loss function to α = 1, β = 0.1. Similarly, we use Adam optimizer with a beta of 0.99, initial learning of 10 -5 to train the network. With a batch size of 32, it takes up about 42 GB of GPU memory and takes around 50 hours to train 400 epochs. We select the trained models with the lowest validation loss for visualization and calculate the metrics for quantitative analysis." }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "Comparison with the SOTA Methods", "publication_ref": [ "b18", "b21", "b65", "b18", "b21" ], "table_ref": [], "text": "In this section, two existing approaches, Text2Shape [19] and Text-Guided 3D [22] are served as the compared methods. Followed by these methods, we adopt the same evaluation metrics to make quantitative comparisons. Evaluation metrics include 1) IOU (Intersection Over Union): which is used to measure the shape similarity between two 3D shapes; 2) EMD (Earth Mover's Distance): which is used to measure the color similarity; 3) IS (Inception Score [66]): it is used to measure the diversity and quality of the generated shapes; 4) Acc(Classification accuracy): it measures the accuracy of generated 3D shapes in the correct category, which is calculated by a pre-trained 3D shape classifier. The final experimental results are shown in Table .1. From these results, our approach achieves the best performance. In IOU, EMD, and ACC, our approach obtains 2.3%, 0.12%, and 1.2% improvements, respectively. We think there are some reasons as follows: vs. ours(c) vs. GT. By comparison, our algorithm obtains the 3D models that better fit the text description.\n• Text2Shape [19] first applies triplet loss to guide the cross-modal feature learning in a joint space, and then directly predicts the 3D volume with the text feature as conditional input. Due to the inadequate 3D information and the unstable training process of GAN, the synthesis generation is low-quality.\n• Text-Guided [22] achieves the better improvement compared to Text2Shape. However, it only pays attention to the information alignment between the text-3D pair of the training data and ignores the training difficulties caused by the flexibility and ambiguity of the natural language. It tends to generate 3D shapes that are similar to the ground truth 3D data.\n• Our approach achieves the best performance. In our method, the introduction of prior knowledge can supplement additional information for text description to help generate 3D shapes. 
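For reference, the IOU metric used in this comparison can be computed between binarized occupancy grids as sketched below; the 0.5 threshold and the tensor input format are our assumptions.

```python
import torch

def voxel_iou(pred_occ: torch.Tensor, gt_occ: torch.Tensor, threshold: float = 0.5) -> float:
    """Intersection-over-Union between two occupancy grids of equal resolution."""
    p = pred_occ > threshold
    g = gt_occ > threshold
    inter = (p & g).sum().item()
    union = (p | g).sum().item()
    return inter / union if union > 0 else 1.0
```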
In addition, the introduction of the causal inference model eliminates the irrelevant information in the related shapes, so as to provide prior knowledge with higher confidence, which can greatly enhance the final generation performance. Fig. 7 shows some generation results conditioned with the input text description and the retrieved prior knowledge. Fig. 8 compares the generation qualities of our framework with some classic generation methods. From the visualization results, we make the following observations:\n• As seen in Figure 7, most of the 3D models retrieved by the proposed method can semantically match the input text descriptions, and they can supply the generative network with supplementary 3D information for more accurate shape generation. For example, in the last example, the generative network may be difficult to understand how the textual information \"with the drawer\" can be represented for the 3D shape. The retrieved 3D shapes can help the generative network to determine the basic structural characteristics of the generation target, to ensure the final generation quality.\n• From Fig. 8, our approach provides a more accurate model than the other two methods. The introduction of additional prior knowledge ensures that our method can generate 3D shapes that better match the input description. Especially in the last example, when faced with inadequate textual information, with only a few attributes \"brown\" and \"folding chair\", the two comparison methods only generate approximate appearance, while our methods can produce a more accurate 3D shape." }, { "figure_ref": [ "fig_7", "fig_7" ], "heading": "Diversity Generation", "publication_ref": [], "table_ref": [], "text": "To evaluate the performance of the diversity generator G. Some diversity generation results are shown in Fig. 9. By applying an IMLE-based latent generator, the text feature f t can be converted into a different perturbed f ′ . Using the trained D ′ , diverse 3D shapes corresponding to the single input text can be generated.\nFrom Fig. 9, we find more variations in the results obtained by our approach compared to the Text-guide 3D method. For example, our approach can achieve tables of different heights in the third example. Meanwhile, these generative models have more variation in the ground truth shape. In practice, it can meet the requirements of more users. These results also demonstrate the performance of our diversity generator G." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [ "b21", "b64", "b43" ], "table_ref": [], "text": "We conducted extensive ablation studies to verify the effectiveness of each proposed module. The experimental results are shown in Table . 2 and Table . 3. In this section, we introduce each experiment setting in our ablation studies and analyze the effectiveness of each proposed module.\nFor a quantitative evaluation, we adopt the same metrics as the ablation study settings in [22], which applied Point Score (PS) and Frechet Point Distance(FPD) to evaluate the qualities of generated 3D shapes. In our cases, we sample the 3D shapes in ShapeNet [65] into colored point clouds of 55 classes, then train the classification-based inception network for PS and FPD calculations. The R-Precision [44] is also applied here to measure the correlation between the input text description and generated 3D shapes. 
{ "figure_ref": [], "heading": "Loss Function", "publication_ref": [], "table_ref": [], "text": "First, we conducted experiments to verify the influence of each applied loss function and to determine which components are necessary for the framework.\n• \"Baseline\" means that we directly use the text encoder E_t and shape decoder D′ pre-trained in the joint representation learning step. It achieves the worst results.\n• \"+L_reg\" means that we only optimize the text encoder E_t to project the text feature into the learned autoencoder space. The aim of L_reg is to constrain the encoded text feature f_t to be similar to the extracted feature f_s of the corresponding 3D shape.\n• \"+L_reg + L_ae\" indicates that we further apply L_ae to train the entire framework end-to-end. In this setting, the trained autoencoder is further fine-tuned jointly with the textual information. It achieves better performance than \"+L_reg\"." }, { "figure_ref": [], "heading": "Prior Knowledge", "publication_ref": [], "table_ref": [], "text": "Given the input text description, we retrieve the prior knowledge of the related attributes and 3D shapes to assist the text-3D generation in the proposed framework. In this part of the ablation, the previous setting \"+L_reg + L_ae\" can be seen as the baseline that does not use any prior knowledge. The experimental settings include four parts: \"+attr prior\", \"+shape prior\", \"+shape prior (Causal Model)\", and \"+both prior (Causal Model)\". Fig. 10 visualizes the ablation studies for several input descriptions (e.g., \"Round coffee table with glass top and four flat metal legs gray in color\" and \"A red sofa chair\") and shows the effect of introducing each module: without prior knowledge, \"+L_reg + L_ae\" can only generate roughly matching shapes, while the attribute and shape priors enrich the details from different perspectives. The setting \"+attr prior\" makes the generated shapes more semantically compatible with the input text, \"+shape prior\" introduces more accurate geometry and richer colors, and the \"+causal model\" provides better generation based on both kinds of prior knowledge.\n• \"+attribute prior\" means that we only add the attribute prior F_a into the training process and use the proposed prior fusion module to update the spatial features.
By constructing the attention map between the spatial features and the attribute information, the generated shapes achieve better quality in both structure and color, which is also reflected in the improved quantitative metrics.\n• \"+shape prior\" means that we only add the retrieved shape prior F_p to update the extracted text feature f_t. The aim of introducing the shape prior is to supplement the specific geometric information that f_t lacks in the high-level feature space. The number of utilized shapes is limited to 5, which is the default setting of our proposed framework. From the shown results, we can see that the introduction of the shape prior also improves the generation quality.\n• \"+shape prior (Causal Model)\" means that we add the causal model to select useful features for the subsequent feature fusion operation. The number of utilized shapes is also limited to 5. From the results, the causal model brings a significant improvement, which demonstrates that it can effectively remove unrelated shape information from the shape prior knowledge and improve the quality of the fused feature.\n• \"+both prior (Causal Model)\" means that we apply both the attribute and shape prior knowledge in the generation framework, which is also the final method introduced in the main paper. The two kinds of prior knowledge are mutually compatible and yield the best generation quality." }, { "figure_ref": [], "heading": "Prior Fusion Method", "publication_ref": [ "b56" ], "table_ref": [], "text": "The way the retrieved prior knowledge is integrated is also critical. To best leverage the correlations between the text and the retrieved 3D shapes, we designed the transformer-based prior fusion module (PFM) to update the text feature with prior knowledge. To verify its effectiveness, similar to prior works [57], we set up two fusion methods for comparison. \"Concatenate\" means that we simply concatenate the shape features with the text feature and use a fully connected layer to project the fused feature to a suitable dimension. \"Average Fusion\" means that we directly use average pooling to fuse the text feature with the prior knowledge. The experimental results are shown in Table 3, and the proposed prior fusion module performs better.\nFrom these experimental results, we find that the introduction of prior knowledge greatly improves the performance of the generation model. The shape prior knowledge brings a larger improvement, which demonstrates that it effectively compensates for the missing structural information and benefits the final performance. The causal model also plays a key role in the feature fusion step and provides a plausible explanation for the performance increase; the corresponding experimental results demonstrate its superiority. The transformer-based PFM module reduces the effect of redundant information and improves the quality of the fused feature." },
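To illustrate the transformer-style fusion compared above, the sketch below shows one block in which the text feature attends over the retrieved prior features via multi-head attention before a feed-forward layer. The dimensions, head count, and residual/LayerNorm placement are illustrative assumptions and not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PriorFusionBlock(nn.Module):
    """One illustrative fusion block: the text feature (query) attends over the
    retrieved prior features (keys/values), then passes through a feed-forward layer."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, text_feat: torch.Tensor, prior_feats: torch.Tensor) -> torch.Tensor:
        # text_feat:   (B, 1, dim) encoded text feature f_t treated as a single query token
        # prior_feats: (B, M, dim) features of the M retrieved shape/attribute priors
        fused, _ = self.attn(query=text_feat, key=prior_feats, value=prior_feats)
        x = self.norm1(text_feat + fused)      # residual connection + normalization
        x = self.norm2(x + self.ffn(x))
        return x                                # prior-enriched text feature for the 3D decoder

# For contrast, the "Concatenate" baseline would roughly flatten [text_feat; prior_feats]
# through a single fully connected layer, and "Average Fusion" would take an element-wise
# mean over the text and prior features.
```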
{ "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [ "b18" ], "table_ref": [], "text": "In this paper, we proposed a novel text-3D generation model that utilizes prior knowledge. We first proposed a novel 3D shape knowledge graph to bridge the gap between text and 3D models, which stores and retrieves richer and more accurate prior knowledge, much as human beings draw on experience. Then, we proposed a novel causal model to select useful and related features and to remove unrelated structural information from the retrieved shapes' prior knowledge. Combined with the information fusion module of this paper, we obtain an effective fused feature as the input of the 3D generation model. The final experimental results demonstrate that our approach significantly improves 3D model generation quality and performs favorably against the SOTA methods on the Text2Shape [19] dataset.\nFrom these experiments, we find that the 3D shape knowledge graph plays a key role in this work, as it stores the correlation between text and 3D shapes. If we introduce more data and increase the size of the knowledge graph, it can provide more accurate related prior knowledge, much like a more experienced advisor, to help generate the target 3D shape. In future work, we will expand the existing database to increase the size of the knowledge graph. Meanwhile, the causal model plays a very important role in feature selection, and the related experiments also support this conclusion. In future work, we plan to introduce more partial structure information to construct causal graphs and optimization mechanisms, so that the generation model can filter and utilize prior knowledge more intelligently." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the National Natural Science Foundation of China (62272337, 61872267) and the Natural Science Foundation of Tianjin (16JCZDJC31100)." } ]
[ { "authors": "J Liu; F Yu; T Funkhouser", "journal": "IEEE", "ref_id": "b0", "title": "Interactive 3d modeling with a generative adversarial network", "year": "2017" }, { "authors": "J Li; K Xu; S Chaudhuri; E Yumer; H Zhang; L Guibas", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b1", "title": "Grass: Generative recursive autoencoders for shape structures", "year": "2017" }, { "authors": "R Li; X Li; K.-H Hui; C.-W Fu", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b2", "title": "Sp-gan: Sphere-guided 3d shape generation and manipulation", "year": "2021" }, { "authors": "C Zhu; K Xu; S Chaudhuri; R Yi; H Zhang", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b3", "title": "Scores: Shape composition with recursive substructure priors", "year": "2018" }, { "authors": "C B Choy; D Xu; J Gwak; K Chen; S Savarese", "journal": "Springer", "ref_id": "b4", "title": "3d-r2n2: A unified approach for single and multi-view 3d object reconstruction", "year": "2016" }, { "authors": "H Xie; H Yao; X Sun; S Zhou; S Zhang", "journal": "", "ref_id": "b5", "title": "Pix2vox: Context-aware 3d reconstruction from single and multi-view images", "year": "2019" }, { "authors": "H Xie; H Yao; S Zhang; S Zhou; W Sun", "journal": "International Journal of Computer Vision", "ref_id": "b6", "title": "Pix2vox++: Multiscale context-aware 3d object reconstruction from single and multiple images", "year": "2020" }, { "authors": "X Zhang; R Ma; C Zou; M Zhang; X Zhao; Y Gao", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b7", "title": "View-aware geometry-structure joint learning for single-view 3d shape reconstruction", "year": "2021" }, { "authors": "N Wang; Y Zhang; Z Li; Y Fu; H Yu; W Liu; X Xue; Y.-G Jiang", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b8", "title": "Pixel2mesh: 3d mesh model generation via image guided deformation", "year": "2020" }, { "authors": "X Zhang; R Ma; C Zou; M Zhang; X Zhao; Y Gao", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b9", "title": "View-aware geometry-structure joint learning for single-view 3d shape reconstruction", "year": "2021" }, { "authors": "S Liu; Y Hu; Y Zeng; Q Tang; B Jin; Y Han; X Li", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b10", "title": "See and think: Disentangling semantic scene completion", "year": "2018" }, { "authors": "S Li; C Zou; Y Li; X Zhao; Y Gao", "journal": "", "ref_id": "b11", "title": "Attention-based multi-modal fusion network for semantic scene completion", "year": "2020" }, { "authors": "Z Lun; M Gadelha; E Kalogerakis; S Maji; R Wang", "journal": "IEEE", "ref_id": "b12", "title": "3d shape reconstruction from sketches via multi-view convolutional networks", "year": "2017" }, { "authors": "L Wang; C Qian; J Wang; Y Fang", "journal": "", "ref_id": "b13", "title": "Unsupervised learning of 3d model reconstruction from hand-drawn sketches", "year": "2018" }, { "authors": "S.-H Zhang; Y.-C Guo; Q.-W Gu", "journal": "", "ref_id": "b14", "title": "Sketch2model: View-aware 3d modeling from single free-hand sketches", "year": "2021" }, { "authors": "W Nie; W Wang; A Liu; J Nie; Y Su", "journal": "ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)", "ref_id": "b15", "title": "Hgan: Holistic generative adversarial networks for two-dimensional image-based three-dimensional object retrieval", "year": "2019" }, { "authors": 
"L Jing; E Vahdani; J Tan; Y Tian", "journal": "", "ref_id": "b16", "title": "Cross-modal center loss for 3d cross-modal retrieval", "year": "2021-06" }, { "authors": "M.-X Lin; J Yang; H Wang; Y.-K Lai; R Jia; B Zhao; L Gao", "journal": "", "ref_id": "b17", "title": "Single image 3d shape retrieval via cross-modal instance and category contrastive learning", "year": "2021-10" }, { "authors": "K Chen; C B Choy; M Savva; A X Chang; T Funkhouser; S Savarese", "journal": "Springer", "ref_id": "b18", "title": "Text2shape: Generating shapes from natural language by learning joint embeddings", "year": "2018" }, { "authors": "S Reed; Z Akata; H Lee; B Schiele", "journal": "", "ref_id": "b19", "title": "Learning deep representations of fine-grained visual descriptions", "year": "2016" }, { "authors": "S E Reed; Z Akata; S Mohan; S Tenka; B Schiele; H Lee", "journal": "Advances in neural information processing systems", "ref_id": "b20", "title": "Learning what and where to draw", "year": "2016" }, { "authors": "Z Liu; Y Wang; X Qi; C.-W Fu", "journal": "", "ref_id": "b21", "title": "Towards implicit text-guided 3d shape generation", "year": "2022" }, { "authors": "H Xiong; S Wang; M Tang; L Wang; X Lin", "journal": "Knowledge-Based Systems", "ref_id": "b22", "title": "Knowledge graph question answering with semantic oriented fusion model", "year": "2021" }, { "authors": "D W Shu; S W Park; J Kwon", "journal": "", "ref_id": "b23", "title": "3d point cloud generative adversarial network based on tree structured graph convolutions", "year": "2019" }, { "authors": "S Luo; W Hu", "journal": "", "ref_id": "b24", "title": "Diffusion probabilistic models for 3d point cloud generation", "year": "2021" }, { "authors": "L Zhou; Y Du; J Wu", "journal": "", "ref_id": "b25", "title": "3d shape generation and completion through point-voxel diffusion", "year": "2021" }, { "authors": "N Wang; Y Zhang; Z Li; Y Fu; W Liu; Y.-G Jiang", "journal": "", "ref_id": "b26", "title": "Pixel2mesh: Generating 3d mesh models from single rgb images", "year": "2018" }, { "authors": "J Tang; X Han; M Tan; X Tong; K Jia", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b27", "title": "Skeletonnet: A topologypreserving solution for learning mesh reconstruction of object surfaces from rgb images", "year": "2021" }, { "authors": "Z Chen; H Zhang", "journal": "", "ref_id": "b28", "title": "Learning implicit fields for generative shape modeling", "year": "2019" }, { "authors": "J J Park; P Florence; J Straub; R Newcombe; S Lovegrove", "journal": "", "ref_id": "b29", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Q Xu; W Wang; D Ceylan; R Mech; U Neumann", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Disn: Deep implicit surface network for high-quality single-view 3d reconstruction", "year": "2019" }, { "authors": "J Chibane; T Alldieck; G Pons-Moll", "journal": "", "ref_id": "b31", "title": "Implicit functions in feature space for 3d shape reconstruction and completion", "year": "2020" }, { "authors": "P Mittal; Y.-C Cheng; M Singh; S Tulsiani", "journal": "", "ref_id": "b32", "title": "Autosdf: Shape priors for 3d completion, reconstruction and generation", "year": "2022" }, { "authors": "L Mescheder; M Oechsle; M Niemeyer; S Nowozin; A Geiger", "journal": "", "ref_id": "b33", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { 
"authors": "Z Zheng; T Yu; Q Dai; Y Liu", "journal": "Computer Vision Foundation / IEEE", "ref_id": "b34", "title": "Deep implicit templates for 3d shape representation", "year": "2021" }, { "authors": "Y Deng; J Yang; X Tong", "journal": "", "ref_id": "b35", "title": "Deformed implicit field: Modeling 3d shapes with learned dense correspondence", "year": "2021" }, { "authors": "M.-E Nilsback; A Zisserman", "journal": "IEEE", "ref_id": "b36", "title": "Automated flower classification over a large number of classes", "year": "2008" }, { "authors": "T.-Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "Springer", "ref_id": "b37", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "B Thomee; D A Shamma; G Friedland; B Elizalde; K Ni; D Poland; D Borth; L.-J Li", "journal": "Communications of the ACM", "ref_id": "b38", "title": "Yfcc100m: The new data in multimedia research", "year": "2016" }, { "authors": "S Reed; Z Akata; X Yan; L Logeswaran; B Schiele; H Lee", "journal": "PMLR", "ref_id": "b39", "title": "Generative adversarial text to image synthesis", "year": "2016" }, { "authors": "H Zhang; T Xu; H Li; S Zhang; X Wang; X Huang; D N Metaxas", "journal": "", "ref_id": "b40", "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks", "year": "2017" }, { "authors": "H Zhang; T Xu; H Li; S Zhang; X Wang; X Huang; D N Metaxas", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b41", "title": "Stackgan++: Realistic image synthesis with stacked generative adversarial networks", "year": "2018" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Communications of the ACM", "ref_id": "b42", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "T Xu; P Zhang; Q Huang; H Zhang; Z Gan; X Huang; X He", "journal": "", "ref_id": "b43", "title": "Attngan: Fine-grained text to image generation with attentional generative adversarial networks", "year": "2018" }, { "authors": "Z Li; T Zhang; P Wan; D Zhang", "journal": "", "ref_id": "b44", "title": "Segan: structure-enhanced generative adversarial network for compressed sensing mri reconstruction", "year": "2019" }, { "authors": "T Qiao; J Zhang; D Xu; D Tao", "journal": "", "ref_id": "b45", "title": "Mirrorgan: Learning text-toimage generation by redescription", "year": "2019" }, { "authors": "N Zheng; J Ding; T Chai", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b46", "title": "Dmgan: Adversarial learning-based decision making for human-level plant-wide operation of process industries under uncertainties", "year": "2020" }, { "authors": "J Cheng; F Wu; Y Tian; L Wang; D Tao", "journal": "", "ref_id": "b47", "title": "Rifegan: Rich feature generation for text-to-image synthesis from prior knowledge", "year": "2020" }, { "authors": "M Tao; H Tang; F Wu; X.-Y Jing; B.-K Bao; C Xu", "journal": "", "ref_id": "b48", "title": "Dfgan: A simple and effective baseline for text-to-image synthesis", "year": "2022" }, { "authors": "A Ramesh; M Pavlov; G Goh; S Gray; C Voss; A Radford; M Chen; I Sutskever", "journal": "PMLR", "ref_id": "b49", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "M Ding; Z Yang; W Hong; W Zheng; C Zhou; D Yin; J Lin; X Zou; Z Shao; H Yang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b50", 
"title": "Cogview: Mastering text-to-image generation via transformers", "year": "2021" }, { "authors": "Z Han; C Chen; Y.-S Liu; M Zwicker", "journal": "", "ref_id": "b51", "title": "Shapecaptioner: Generative caption network for 3d shapes by learning a mapping from parts detected in multiple views to sentences", "year": "2020" }, { "authors": "Z Han; M Shang; X Wang; Y.-S Liu; M Zwicker", "journal": "", "ref_id": "b52", "title": "Y2seq2seq: Cross-modal representation learning for 3d shape and text by joint reconstruction and prediction of view and word sequences", "year": "2019" }, { "authors": "O Michel; R Bar-On; R Liu; S Benaim; R Hanocka", "journal": "", "ref_id": "b53", "title": "Text2mesh: Text-driven neural stylization for meshes", "year": "2022" }, { "authors": "A Sanghi; H Chu; J G Lambourne; Y Wang; C.-Y Cheng; M Fumero; K R Malekshan", "journal": "", "ref_id": "b54", "title": "Clip-forge: Towards zero-shot text-to-shape generation", "year": "2022" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "PMLR", "ref_id": "b55", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "S Yang; M Xu; H Xie; S Perry; J Xia", "journal": "", "ref_id": "b56", "title": "Single-view 3d object reconstruction from shape priors in memory", "year": "2021" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b57", "title": "Attention is all you need", "year": "2017" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b58", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Y Zhang; H Jiang; Y Miura; C D Manning; C P Langlotz", "journal": "", "ref_id": "b59", "title": "Contrastive learning of medical visual representations from paired images and text", "year": "2020" }, { "authors": "F Boudin", "journal": "", "ref_id": "b60", "title": "Pke: an open source python-based keyphrase extraction toolkit", "year": "2016" }, { "authors": "Z Yue; H Zhang; Q Sun; X.-S Hua", "journal": "Advances in neural information processing systems", "ref_id": "b61", "title": "Interventional few-shot learning", "year": "2020" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b62", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "K Li; J Malik", "journal": "", "ref_id": "b63", "title": "Implicit maximum likelihood estimation", "year": "2018" }, { "authors": "A X Chang; T Funkhouser; L Guibas; P Hanrahan; Q Huang; Z Li; S Savarese; M Savva; S Song; H Su", "journal": "", "ref_id": "b64", "title": "Shapenet: An informationrich 3d model repository", "year": "2015" }, { "authors": "C Szegedy; V Vanhoucke; S Ioffe; J Shlens; Z Wojna", "journal": "", "ref_id": "b65", "title": "Rethinking the inception architecture for computer vision", "year": "2016" } ]
[ { "formula_coordinates": [ 4, 337.49, 369.25, 226.51, 24.68 ], "formula_id": "formula_0", "formula_text": "l t→v i = -log exp (⟨E t (x ti ), E v (x vi )⟩) n j=1 exp E t (x ti ), E v (x vj ) ,(1)" }, { "formula_coordinates": [ 4, 337.49, 400.07, 226.51, 24.68 ], "formula_id": "formula_1", "formula_text": "l v→t i = -log exp (⟨E v (x vi ), E t (x ti )⟩) n j=1 exp E v (x vi ), E t (x tj ) ,(2)" }, { "formula_coordinates": [ 4, 355.28, 477.11, 208.72, 29.41 ], "formula_id": "formula_2", "formula_text": "L joint = 1 n n i=1 αl t→v i + (1 -α)l v→t i ,(3)" }, { "formula_coordinates": [ 4, 317.56, 633.46, 246.44, 20.44 ], "formula_id": "formula_3", "formula_text": "L ae = ||D s (f s p) -I s || 2 + ||D c (f s p) × I s -I c || 2 ,(4)" }, { "formula_coordinates": [ 6, 317.47, 323.11, 246.53, 27.82 ], "formula_id": "formula_4", "formula_text": "P (Y |do(X)) = d P (Y |X = x, C = g(X = x, E))P (E),(5)" }, { "formula_coordinates": [ 6, 367.69, 573.27, 196.31, 29.41 ], "formula_id": "formula_5", "formula_text": "P (Y |do(X)) = 1 n n i=1 P (f s |[X] c ),(6)" }, { "formula_coordinates": [ 6, 362.17, 691.07, 143.82, 29.41 ], "formula_id": "formula_6", "formula_text": "L = 1 n n i=1 log( exp(f s • x ′ T i ) n j=1 exp(f s • x ′ T j )" }, { "formula_coordinates": [ 7, 60.33, 486.78, 235.98, 42.99 ], "formula_id": "formula_8", "formula_text": "Q = W Q • F i-1 t , K = W K • F i-1 t , V = W V • F i-1 t F i t = M ultihead(Q, K, V ) F i t = F F N (F i t ), (8" }, { "formula_coordinates": [ 7, 296.31, 504.48, 3.69, 8.24 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 7, 48, 615.42, 103.19, 12.19 ], "formula_id": "formula_10", "formula_text": "S t = {f ′ t p} ∈ R N ×(" }, { "formula_coordinates": [ 7, 79.12, 666.05, 220.88, 49.81 ], "formula_id": "formula_11", "formula_text": "Q = W Q • Ŝj-1 t , K = W K • Fa , V = W V • Fa Ŝj t = M ultihead(Q, K, V ) Ŝj t = F F N ( t ).(9)" }, { "formula_coordinates": [ 7, 329.8, 233.78, 230.24, 14.34 ], "formula_id": "formula_12", "formula_text": "L ae = ||D ′ s (S) -I s || 2 + ||D ′ c (S) × I s -I c || 2 . (10" }, { "formula_coordinates": [ 7, 560.04, 237.39, 3.96, 8.24 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 7, 397.8, 295.38, 162.24, 9.65 ], "formula_id": "formula_14", "formula_text": "L reg = ||f t -f s || 2 , (11" }, { "formula_coordinates": [ 7, 560.04, 296.19, 3.96, 8.24 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 7, 380.01, 377.33, 180.04, 9.65 ], "formula_id": "formula_16", "formula_text": "L = λL ae + (1 -λ)L reg , (12" }, { "formula_coordinates": [ 7, 560.04, 378.13, 3.96, 8.24 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 7, 367.82, 544.28, 142.16, 12.48 ], "formula_id": "formula_18", "formula_text": "F ′ = G(f t , Z) = {f ′ 1 , f ′ 2 , . . . , f ′ l }." }, { "formula_coordinates": [ 7, 387.07, 699.43, 176.93, 22.31 ], "formula_id": "formula_19", "formula_text": "F g = f t + (F p -f t ) σ • η,(13)" }, { "formula_coordinates": [ 8, 94.8, 353.48, 205.2, 16.66 ], "formula_id": "formula_20", "formula_text": "f target = arg min i=1,...,h d(G ϕ (f t , Z), f i g ),(14)" }, { "formula_coordinates": [ 8, 94.72, 420.52, 205.28, 16.66 ], "formula_id": "formula_21", "formula_text": "L G = min k=1,...,l ||G ϕ (f t , z k ) -f target || 2 2 .(15)" } ]
T2TD: Text-3D Generation Model based on Prior Knowledge Guidance
In recent years, 3D models have been utilized in many applications, such as autonomous driving, 3D reconstruction, VR, and AR. However, the available 3D model data is too scarce to meet practical demands. Generating high-quality 3D models efficiently from textual descriptions is therefore a promising but challenging way to solve this problem. In this paper, inspired by the ability of human beings to complement visual details from ambiguous descriptions based on their own experience, we propose a novel text-3D generation model (T2TD), which introduces related shapes or textual information as prior knowledge to improve the performance of the 3D generation model. In this process, we first introduce a text-3D knowledge graph to store the relationship between 3D models and textual semantic information, which can provide related shapes to guide the generation of the target 3D model. Second, we integrate an effective causal inference model to select useful feature information from these related shapes; it removes unrelated shape information and retains only the feature information that is strongly relevant to the textual description. Meanwhile, to effectively integrate multi-modal prior knowledge into the textual information, we adopt a novel multi-layer transformer structure to progressively fuse related shape and textual information, which compensates for the lack of structural information in the text and enhances the final performance of the 3D generation model. The final experimental results demonstrate that our approach significantly improves 3D model generation quality and outperforms the SOTA methods on the Text2Shape dataset.
Weizhi Nie; Ruidong Chen; Weijie Wang; Bruno Lepri
[ { "figure_caption": "1 ) 3 )Fig. 1 .131Fig. 1. a) A single caption can only describe part of the appearance of a 3D object, and ambiguous descriptions may cause difficulties for text-3D works. b) Inspired by the human thinking mode, we think the two types of prior knowledge (semantic attributes and related shapes) can be used to provide more detailed information and enhance the text-3D generation task.", "figure_data": "", "figure_id": "fig_0", "figure_label": "131", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The basic architecture of the encoder networks. (a) The transformer-based text encoder, it converts the input text description into a global sentence feature. (b)The CNN-based 3D shape encoder, it converts the colored 3D volume into a global 3D feature. (c)The implicit shape decoder, it takes a 3D shape feature with a point coordinate as input and predicts the occupancy probability or the RGB value of each sampled position.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Overview of the framework implementation, which mainly consists of four parts: a) Constructing the knowledge graph by defining the entities and relations in the graph. b)Retrieve two types of prior knowledge and the extracted features by the proposed encoders. c)The training process of the text-3D generative network, which mainly aims to reduce the gap between text and 3D modalities by introducing prior knowledge. d)To further diversify the generation results, adapted to our methods, we propose a prior guided IMLE to fully utilize the prior knowledge.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "3 : 6 : 7 :367If a i in T then insert a i into P a 4: end for 5: Search mode ( P a , A -S , ? ) in K. Get P ′ s Search mode ( P ′ s , S -S , ? ) in K. Get P ′′ s Set P s with top m retrieved 3D object of {s 1 , s 2 , ...s m } sorted by weight scores from P ′ s and P ′′ s . 8: return P s = {s 1 , s 2 , ...s m },P a = {a 1 , a 2 , ...a n }", "figure_data": "", "figure_id": "fig_4", "figure_label": "367", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig.6. The network structure of the Prior Fusion Module(PFM). The left part fuses the shape prior information, which enriches the text feature with 3D information. The right part is used to fuse the prior attribute information.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Generation results by Text2shape [19](a) vs. Text-Guided [22](b) vs. ours(c) vs. GT. By comparison, our algorithm obtains the 3D models that better fit the text description.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig.9. Visualization of several diversifying generation results. 
These generative models have more variation regarding the ground truth shape, which can meet more users' requirements.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Text-3D KGGround Truth 3D ShapeRelated 3D Shapes", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "chair,wooden,white,backsupport,•••cozy,chair,blue,wideseating,•••", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "round,gray,chair,circularbase,•••", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "black,restingchairfour,legs,•••", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Input TextPrior KnowledgeGenerated / GTbrown,round,wooden,table,grey,•••", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "twolayered,table,blue,grey legs,•••", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "brown,table,drawer,lowerbase,•••", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "squareshaped,table,long legs,wood,•••", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative comparison with the SOTA methods: following the prior works, we identically report IOU, EMD, IS and Accuracy (Acc. (%)) metrics to serve as the comparable quantitative evaluations.", "figure_data": "MethodIOU↑ EMD↓IS↑Acc↑Text2Shape [19]9.640.4443 1.96 97.37Text-Guided [22]12.210.2071 1.97 97.48Ours14.22 14.22 14.22 0.1742 0.1742 0.1742 1.97 1.97 1.97 98.15 98.15 98.15", "figure_id": "tab_10", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "A brown foldingchair.(a)(b)(c)Ground Truth", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative experiment results of the ablation studies: we report IOU, PS, FPD, and R-Precision (R-P.(%) to quantitatively evaluate the effectiveness of the applied loss functions and prior knowledge.", "figure_data": "MethodIOU↑PS↑FPD↓R-P↑baseline9.232.54234.919.34+Lreg12.202.93111.4338.00+Lreg + Lae13.133.1451.1241.84+attribute prior13.343.1846.5443.35+shape prior14.073.2240.3542.32+shape prior(Causal)14.133.2736.2243.85+both prior(Causal)14.22 14.22 14.223.35 3.35 3.3530.71 30.71 30.7145.70 45.70 45.70", "figure_id": "tab_12", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Quantitative comparison of the prior fusion methods: the metrics IOU, PS, FPD, and R-Precision (R-P.(%) are also here to evaluate the performance of different prior fusion methods.", "figure_data": "MethodIOU↑PS↑FPD↓R-P↑Concatenate12.743.0845.7341.31Average Fusion13.322.9674.2840.77+Ours(PFM)14.07 14.07 14.073.22 3.22 3.2240.35 40.35 40.3542.32 42.32 42.32", "figure_id": "tab_13", "figure_label": "3", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces a method for obtaining 3D models in a more efficient and concise manner, which the citing paper adopts in its research to address the industrial demand for 3D models."}, {"Category": "Extension or Continuation", "Citation": "[2]", "Explanation": "The cited work further extends the research on obtaining 3D models by focusing on improving the efficiency and conciseness of the process, which the citing paper continues to explore in its study."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work provides a method for handling the challenges and difficulties in obtaining 3D models, which the citing paper adopts in its research to address the complex visual and structural information of 3D models."}, {"Category": "Data Source", "Citation": "[4]", "Explanation": "The cited work is a data source for the research conducted in the citing paper, as it provides information on the types of approaches used in handling the problem of obtaining 3D models."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The cited work extends the research on recovering 3D information by focusing on the recovery of 3D information from rendered views, which the citing paper continues to explore in its study of obtaining 3D models from images."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work provides a method for recovering 3D information from rendered views, which the citing paper adopts in its research to address the challenge of obtaining 3D models from images."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work introduces a method for recovering 3D information from rendered views, which the citing paper adopts in its research to address the challenge of obtaining 3D models from images."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work provides a method for recovering 3D information from rendered views, which the citing paper adopts in its research to address the challenge of obtaining 3D models from images."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work offers a method for recovering 3D information from rendered views, which the citing paper adopts in its research to address the challenge of obtaining 3D models from images."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work provides a method for recovering 3D information from scene data, which the citing paper adopts in its research to address the challenge of obtaining 3D models from images."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work offers a method for recovering 3D information from scene data, which the citing paper adopts in its research to address the challenge of obtaining 3D models from images."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work provides a method for recovering 3D information from scene data, which the citing paper adopts in its research to address the challenge of obtaining 3D models from images."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work offers a method for recovering 3D information from sketch data, which the citing paper adopts in its research to address the challenge of obtaining 3D models from images."}, {"Category": "Methodological Basis", 
"Citation": "[14]", "Explanation": "The cited work provides a method for recovering 3D information from sketch data, which the citing paper adopts in its research to address the challenge of obtaining 3D models from images."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work offers a method for recovering 3D information from sketch data, which the citing paper adopts in its research to address the challenge of obtaining 3D models from images."}, {"Category": "Methodological Basis", "Citation": "[16], [17], [18]", "Explanation": "The cited works are used as a basis for cross-modal 3D retrieval methods in the citing paper, which is a technique for searching and matching 3D models in databases to reduce the difficulty of acquiring models."}, {"Category": "Extension or Continuation", "Citation": "[19]", "Explanation": "The cited work, Text2Shape, is the first to generate colored 3D shapes from natural language, which serves as a starting point for the extension of research in this field in the citing paper."}, {"Category": "Data Source", "Citation": "[20], [21]", "Explanation": "The cited works are used as a data source for cross-modal representation methods in the citing paper, which is a technique for learning joint representations of text and 3D shapes."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work provides a more straightforward approach to guide 3D model generation using textual information, which the citing paper adopts to improve the generation quality of 3D shapes."}, {"Category": "Supporting Evidence", "Citation": "[19]", "Explanation": "The cited work provides a text-3D data set that defines the entity and edge to construct a text-3D knowledge graph, which is used in the proposed T2TD model to maintain the correlation between text and 3D shape and related 3D shape and attributes."}, {"Category": "Extension or Continuation", "Citation": "[23]", "Explanation": "The cited work is applied in the generation step of the T2TD model to search the prior knowledge from the knowledge graph based on the text description. 
However, the search results are only similar to the text description, requiring an effective causal model to select shape information and remove irrelevant features."}, {"Category": "Extension or Continuation", "Citation": "[19]", "Explanation": "The cited work Text2Shape is used as a dataset for evaluation, and the citing paper extends the research by adding prior knowledge to the generation network to improve the final performance of 3D model generation."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces the concept of 3D voxels as a representation for 3D shapes, which the citing paper builds upon in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work also contributes to the field of 3D shape generation by presenting another method for representing 3D shapes using voxels, which the citing paper may have considered in their research."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work provides another method for representing 3D shapes using voxels, which the citing paper may have considered in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work further contributes to the field of 3D shape generation by presenting another method for representing 3D shapes using voxels, which the citing paper may have considered in their research."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work also contributes to the field of 3D shape generation by presenting another method for representing 3D shapes using voxels, which the citing paper may have considered in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work introduces the concept of point clouds as a representation for 3D shapes, which the citing paper may have used in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work provides another method for representing 3D shapes using point clouds, which the citing paper may have considered in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work also contributes to the field of 3D shape generation by presenting another method for representing 3D shapes using point clouds, which the citing paper may have considered in their research."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work further contributes to the field of 3D shape generation by presenting another method for representing 3D shapes using point clouds, which the citing paper may have considered in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work introduces the concept of meshes as a representation for 3D shapes, which the citing paper may have used in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work provides another method for representing 3D shapes using meshes, which the citing paper may have considered in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work also contributes to the field of 3D shape generation by presenting another method for representing 3D shapes using meshes, 
which the citing paper may have considered in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work introduces the concept of implicit functions as a method for representing 3D shapes, which the citing paper may have used in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work further contributes to the field of 3D shape generation by presenting another method for representing 3D shapes using implicit functions, which the citing paper may have considered in their research."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work also contributes to the field of 3D shape generation by presenting another method for representing 3D shapes using implicit functions, which the citing paper may have considered in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work further contributes to the field of 3D shape generation by presenting another method for representing 3D shapes using implicit functions, which the citing paper may have considered in their research on generating 3D data."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work provides a method for generating 3D shapes, which the citing paper adopts in their research on the task of 3D generation."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work also contributes to the research on 3D generation by providing a method for generating 3D shapes."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work is used in the research on image-based 3D reconstruction, which the citing paper also explores in their work."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work is used in the research on image-based 3D reconstruction, which the citing paper also studies in their work."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work contributes to the research on 3D shape deformation tasks, which the citing paper also focuses on in their study."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work is used in the research on 3D shape deformation tasks, which the citing paper also explores in their work."}, {"Category": "Data Source", "Citation": "[37], [38], [39]", "Explanation": "The cited works provide large-scale text-image datasets that serve as the data source for the research conducted in the citing paper on text-image representations."}, {"Category": "Methodological Basis", "Citation": "[20], [21]", "Explanation": "The cited works contribute to the field of text-image representations by providing methods and techniques that the citing paper builds upon in its research."}, {"Category": "Extension or Continuation", "Citation": "[40], [41], [42]", "Explanation": "The cited works focus on using GAN structure to generate images from text embeddings, and the citing paper extends this research by improving the framework from different aspects."}, {"Category": "Extension or Continuation", "Citation": "[44], [45], [46], [47], [48], [49]", "Explanation": "The cited works further improve the GAN-based framework for text-image generation, and the citing paper continues this research by exploring new dimensions and variables."}, {"Category": "Extension or Continuation", "Citation": 
"[50], [51]", "Explanation": "The cited works propose approaches that are not based on GAN structure and achieve favorable generation performance, and the citing paper extends this research by exploring new methods and techniques in this area."}, {"Category": "Data Source", "Citation": "[52], [53], [54]", "Explanation": "The cited works provide text-3D datasets that serve as the data source for the research conducted in the citing paper on text-3D generation."}, {"Category": "Methodological Basis", "Citation": "[55], [56], [57]", "Explanation": "The cited works contribute to the field of text-3D generation by providing methods and techniques that the citing paper builds upon in its research."}, {"Category": "Extension or Continuation", "Citation": "[58], [59], [60]", "Explanation": "The cited works focus on using knowledge graph to make full use of existing datasets and improve text-3D performance, and the citing paper extends this research by exploring new methods and techniques in this area."}, {"Category": "Methodological Basis", "Citation": "[19], [52], [53]", "Explanation": "The cited works provide a similar idea of using text-to-image generation methods to train a generator with a GAN structure for text-3D retrieval and 3D shape captioning tasks, which the citing paper adopts in their research on using natural language to generate 3D shapes."}, {"Category": "Extension or Continuation", "Citation": "[22]", "Explanation": "The cited work Text-Guided aligns text and shape features in the same embedding space learned by a 3D autoencoder, which the citing paper extends by using the extracted text features to generate 3D models with a 3D decoder and applying random noise to avoid mode collapse in the generator."}, {"Category": "Extension or Continuation", "Citation": "[54]", "Explanation": "The cited work engages in generating high-quality textures for 3D mesh, which is an extension of the task of text-3D generation in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[55]", "Explanation": "The cited work exploits the CLIP model to generate approximate shape from the description in a Zero-Shot way, which is an extension of the task of text-3D generation in the citing paper."}, {"Category": "Data Source", "Citation": "[56]", "Explanation": "The cited work is the CLIP model used in the cited work to generate approximate shape from the description in a Zero-Shot way, which is a data source for the citing paper."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The RifeGAN is cited as a method for utilizing an external knowledge database to retrieve similar sentences and provide detailed semantic information for text descriptions in cross-modal visual generation tasks."}, {"Category": "Methodological Basis", "Citation": "[57]", "Explanation": "The Mem3D is cited as a method for utilizing retrieved 3D shapes to provide prior knowledge and help recover 3D information from 2D images with complex backgrounds and heavy occlusions in text-3D generation tasks."}, {"Category": "Methodological Basis", "Citation": "[56], [59]", "Explanation": "The cited works provide the foundational transformer encoder structure that the citing paper adopts in their research to improve the performance of textual embeddings."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work provides the inspiration for the use of an implicit 3D shape representation as the prediction output of the shape encoder in the citing paper."}, 
{"Category": "Methodological Basis", "Citation": "[60]", "Explanation": "The cited work, ConVIRT, provides the inspiration for the crossmodal contrastive loss function used in the citing paper to pre-establish the relationship between text and shape information."}, {"Category": "Methodological Basis", "Citation": "[61]", "Explanation": "The cited work provides a keyphrase toolkit that the citing paper uses to extract attribute entities from 3D shape descriptions, which serves as a methodological basis for the proposed framework."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work provides a method for multi-entity search that the citing paper adopts in the process of finding related shape entities as prior knowledge."}, {"Category": "Methodological Basis", "Citation": "[62]", "Explanation": "The cited work provides the inspiration for introducing the causal model into the fusion model in the citing paper, which is used to improve the feature extraction process and ensure the completeness of the fusion feature."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work [22] provides the idea of attribute fusion in spatial feature space, which the citing paper adopts to enrich the text feature with 3D information in the final part of the process."}, {"Category": "Methodological Basis", "Citation": "[63]", "Explanation": "The cited work by Adam is used as a method for optimizing the generative network in the citing paper, providing a specific technique for training the parameters of the model."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work introduces the idea of using an IMLE-based latent generator to achieve diverse shape generation results from the same text description, which the citing paper adopts in their research to improve the quality of the shape generation process."}, {"Category": "Data Source", "Citation": "[19]", "Explanation": "The cited work provides the text-3D dataset used in the experiments conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work, Text2Shape, provides a method for generating 3D shapes from text inputs, which the citing paper adopts in their research to compare their approach to the existing method."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work, Text-Guided 3D, also provides a method for generating 3D shapes from text inputs, which the citing paper compares their approach to in their research to assess performance."}, {"Category": "Supporting Evidence", "Citation": "[66]", "Explanation": "The cited work, Inception Score, is used as a metric to measure the diversity and quality of the generated shapes in the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Table .1)", "Explanation": "The data presented in Table .1 serves as a source of information for the evaluation metrics used in the research conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[19]", "Explanation": "Text2Shape is cited as a baseline method in the citing paper, and the citing paper builds upon it by introducing a new approach for cross-modal feature learning and direct prediction of 3D volume with text features as conditional input."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "Text-Guided is cited as a method that achieves better improvement compared to Text2Shape, and 
the citing paper adopts the idea of information alignment between text-3D pairs in the training data to improve the training process of GAN for synthesis generation."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work by [20] is used to highlight the importance of prior knowledge in text description for generating high-quality 3D shapes, and the citing paper further builds upon this idea by introducing a new approach to provide more relevant and accurate prior knowledge for the generation process."}, {"Category": "Data Source", "Citation": "[21]", "Explanation": "The cited work by [21] is used to acknowledge the use of a specific dataset in the research conducted in the citing paper, highlighting the reliance on external data as a foundational element for the study."}, {"Category": "Extension or Continuation", "Citation": "[23]", "Explanation": "The cited work by [23] is used to discuss the use of causal inference models in the generation of 3D shapes, and the citing paper further builds upon this idea by introducing a new approach to eliminate irrelevant information in related shapes to improve the generation performance."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work provides the experimental settings and metrics used in the ablation studies, which the citing paper adopts in their own research to evaluate the effectiveness of the proposed modules."}, {"Category": "Methodological Basis", "Citation": "[57]", "Explanation": "The cited work provides a comparison method for the text feature update process in the prior fusion transformer, which the citing paper adopts to improve the performance of the generation model."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work, Text2shape, serves as the basis for the evaluation of the proposed approach in the citing paper, providing a dataset and metrics for assessing the performance of the model in generating 3D models from text inputs."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b12", "b7", "b16", "b15", "b3", "b4", "b1", "b27", "b17", "b6", "b8", "b15", "b17", "b6", "b26", "b27", "b24", "b11", "b5", "b22" ], "table_ref": [], "text": "Text-based recommendation (Li et al., 2010;Gu et al., 2016;Okura et al., 2017;Malkiel et al., 2020) aims to recommend relevant textual content (e.g., news articles, Twitter posts) to people based on their behaviors as represented in historical log texts. For instance, engagement recommendation (Cheng et al., 2022) on social media (e.g., Twitter and Reddit) helps users discover and engage with threads of interest by modeling their browsing history.\nPretrained language models (Devlin et al., 2019;Brown et al., 2020) have made waves in recent text-based recommendation research (Zhang et al., 2021;Qi et al., 2022;Geng et al., 2022). The most common practice is to use PLM encoders (the BERT family) to learn representations of user history and candidate item texts. Recommendation matching scores are computed over the user and item representations and finally optimized by a noise contrastive estimation (NCE) loss (Gutmann and Hyvärinen, 2010) for ranking multiple candidates.\nUnlike encoding a single text, using a PLM to encode the multi-turn texts of user history is nontrivial. Existing works (Malkiel et al., 2020;Qi et al., 2022;Geng et al., 2022) concatenate multi-turn history texts into one input text, then use a single PLM encoder to learn a holistic user representation. This is the standard PLM encoding manner but ignores the relation among history turns, as all word tokens from different history turns are attended equally. In contrast, previous studies point out that learning the relation among user history turns is also beneficial (Zeng et al., 2020;Qi et al., 2021). Another approach uses PLM encoders to learn representations from multi-turn history texts, followed by an additional aggregation network to fuse the multi-turn representations (Wu et al., 2021;Li et al., 2022). However, the imposed aggregation networks (with newly initialized parameters) weaken the representation power of PLM encoders, which are already pretrained on large-scale corpora.\nThis work introduces UniTRec, a Unified text-to-text Transformer framework for text-based Recommendation. In the encoder component of UniTRec, we design local- and global-attention to learn user history representations through tailored attention masking, which aims to jointly model word-level and turn-level relations of user history. UniTRec can utilize the full power of PLM encoders because it preserves the intact structure of PLM encoders without newly imposed parameters.\nDifferent from most previous works that predict user-candidate matching scores solely based on the representations learned by Transformer encoders, we argue that, conditioned on user representations, the candidate text perplexity estimated by Transformer decoders can also serve as a user-candidate matching signal.\nLocal attention on word-level context. We first concatenate the multi-turn history texts as the input tokens X = [x^1_1, x^2_1, \ldots, x^{|t_1|}_1, \ldots, x^1_N, x^2_N, \ldots, x^{|t_N|}_N]. Inspired by Dong et al. (2019), we tailor the attention masking in Transformer self-attention to learn the word-level context of each turn. Specifically, we allow word tokens from the same turn to attend to each other, while tokens from different turns are excluded from the self-attention computation: M_{i,j} = \begin{cases} 0, & \text{tokens } x_i \text{ and } x_j \text{ in the same turn} \\ -\infty, & \text{otherwise} \end{cases}. The masked self-attention is computed as\n\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}} + M\right)V \quad (1)\n, where Q, K, V are the self-attention query, key, and value as in Vaswani et al. (2017), and M is the mask matrix that restricts local attention to each turn's text. The local self-attention blocks consist of L_1 layers, by which original PLM encoders can be adapted to learn word-level context representations of turns.
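A minimal sketch of how such a turn-aware local attention mask could be built from per-token turn ids is shown below. The additive −∞ masking added to the attention logits follows Eq. (1); the function and variable names are illustrative and not taken from the UniTRec code.

```python
import torch

def build_local_attention_mask(turn_ids: torch.Tensor) -> torch.Tensor:
    """Additive mask M for word-level (intra-turn) self-attention.

    turn_ids: (batch, seq_len) integer id of the history turn each token belongs to.
    Returns:  (batch, seq_len, seq_len) tensor with 0 where two tokens share a turn
              and -inf elsewhere, to be added to the attention logits as in Eq. (1).
    """
    same_turn = turn_ids.unsqueeze(2) == turn_ids.unsqueeze(1)   # (B, L, L) boolean
    mask = torch.zeros(same_turn.shape, dtype=torch.float)
    mask.masked_fill_(~same_turn, float("-inf"))
    return mask

# Global (turn-level) attention then simply uses an all-zero mask (M = 0),
# so every token can attend to every other token across turns.
```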
Inspired by\nAttention(Q, K, V ) = softmax( QK T √ d k +M)V\n(1) , where Q, K, V are self-attention query, key, and value in Vaswani et al. (2017), M is the mask matrix to achieve local-attention inside each turn text. The local self-attention blocks consist of L 1 layers, by which original PLM encoders can be adapted to learn word-level context representations of turns.\nGlobal attention on turn-level context. Over the local self-attention layers, we leverage global self-attention to model the relation among history turns. Specifically, tokens from all turns attend to each other in self-attention computation (by setting the mask matrix M = 0). In this way, Transformer encoders can perform global interaction among each token (and turn) to learn turn-level context representations of user history. There are L 2 layers in the global self-attention blocks, which can also be inherited from PLM encoders directly." }, { "figure_ref": [], "heading": "Joint Contrastive Ranking Objectives", "publication_ref": [], "table_ref": [], "text": "Conditioned on the history representation, we input the candidate text to Transformer decoders to predict how likely it should be recommended. It is worth noting that Transformer decoders can naturally perform effective cross-attention interaction between history and candidate hidden states." }, { "figure_ref": [], "heading": "Objective on Discriminative Scores", "publication_ref": [ "b10", "b11", "b17" ], "table_ref": [], "text": "Motivated by Lewis et al. (2020), we feed the last hidden state of decoder output h T to an MLP scorehead which predicts the user-candidate matching score S d = ScoreHead(h T ). The matching score is discriminative, as higher scores indicate higher user-candidate matching probabilities.\nFollowing previous works (Li et al., 2022;Qi et al., 2022), we adopt negative sampling with NCE loss to optimize matching score prediction. Given the user history and its ground truth matched candidate C i , UniTRec predicts the matching score as S d+ i . In addition, K unmatched negative candidates {C j } K j=1 are sampled from the candidate set, and their matching scores are {S d- j } K j=1 . The NCE loss is represented in a contrastive form:\nL d i = -log exp(S d+ i ) exp(S d+ i ) + K j=1 exp(S d- j )\n(2)" }, { "figure_ref": [], "heading": "Objective on Candidate Text Perplexity", "publication_ref": [ "b19" ], "table_ref": [], "text": "As aforementioned, UniTRec leverages perplexity to rank candidate texts. Since lower perplexity indicates higher user-candidate matching probability, regarding the candidate text Y = [y 1 , y 2 , ..., y T ], we define the perplexity-based matching score S p as its negative perplexity3 :\nS p = -PPL(Y ) = 1 T T i=1 log p θ (y i |y <i ) (3)\n, where p θ (•) denotes the target probability output from the UniTRec Transformer decoder. Similar to Eq. ( 2), we optimize the perplexity-based matching score S p in the NCE loss form. As perplexity empirically varies in a wide range, we introduce a temperature parameter τ to balance the joint NCE loss gradients following Radford et al. (2021).\nL p i = -log exp(τ • S p+ i ) exp(τ • S p+ i ) + K j=1 exp(τ • S p- j )(4\n) , where τ is learnable and initialized to 1. 
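As an illustrative companion to the two-level attention above, the sketch below shows one way the turn-local mask of Eq. (1) and the all-zero global mask could be built in PyTorch. It is a minimal sketch, not the released UniTRec code: the function names, toy turn lengths, and hidden size are our own assumptions.

```python
import torch
import torch.nn.functional as F

def build_local_mask(turn_lengths):
    """Additive mask M: 0 for tokens in the same history turn, -inf across turns (Eq. 1)."""
    total = sum(turn_lengths)
    mask = torch.full((total, total), float("-inf"))
    start = 0
    for length in turn_lengths:
        mask[start:start + length, start:start + length] = 0.0  # intra-turn attention allowed
        start += length
    return mask

def masked_attention(q, k, v, mask):
    """softmax(QK^T / sqrt(d_k) + M) V as in Eq. (1)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5 + mask
    return F.softmax(scores, dim=-1) @ v

# Toy user history: three turns with 4, 2, and 3 tokens, hidden size 8.
turn_lengths = [4, 2, 3]
x = torch.randn(sum(turn_lengths), 8)
local_out = masked_attention(x, x, x, build_local_mask(turn_lengths))      # word-level context
global_mask = torch.zeros(sum(turn_lengths), sum(turn_lengths))            # M = 0 in global layers
global_out = masked_attention(x, x, x, global_mask)                        # turn-level context
print(local_out.shape, global_out.shape)  # torch.Size([9, 8]) torch.Size([9, 8])
```

In UniTRec the local mask is applied inside the first L1 pretrained encoder layers and the zero mask inside the remaining L2 layers, so the PLM encoder structure is kept intact and no new parameters are introduced.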
On the training dataset D, the joint contrastive learning objective is formulated as:\nL = |D| i=1 L d i + L p i (5)" }, { "figure_ref": [], "heading": "Model Initialization and Inference", "publication_ref": [ "b10" ], "table_ref": [], "text": "As UniTRec is a standard text-to-text Transformer, we initialize the parameters from pretrained BART (Lewis et al., 2020). In inference, UniTRec predicts the discriminative and perplexity-based scores for each candidate item, respectively. The two separate scores S d and S p are normalized, averaged, and finally ranked as the output. Detailed ranking process is provided in Appendix B." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b25", "b23", "b26", "b14", "b0", "b9", "b21", "b13" ], "table_ref": [], "text": "We evaluate UniTRec on three text-based recommendation tasks: 1) NewsRec, to recommend news articles to users based on their browsing history. We use the MIND-small dataset (Wu et al., 2020) for experiments. 2) QuoteRec, to recommend quotations to users based on their conversation history. We use the Reddit-quotation dataset (Wang et al., 2021) for experiments. 3) EngageRec, to recommend social media posts for users to engage with based on their comment history. We use the dataset released by Zeng et al. (2020) for experiments. Detailed dataset statistics is provided in Appendix A. Implementation Details. The UniTRec encoder and decoder both consist of 6 Transformer layers with 768-dimensional hidden states and 12 attention heads. We set L 1 = 3 and L 2 = 3. We use AdamW optimizer (Loshchilov and Hutter, 2019) to train UniTRec with cosine learning rate decay.\nBaselines. We compare UniTRec with competitive baselines: 1) GRU4Rec (Balázs et al., 2016) utilizes a GRU network to learn multi-turn history.\n2) SASRec (Kang and McAuley, 2018) encodes user history with a self-attention based sequential model. 3) BERT4Rec (Sun et al., 2019) employs bidirectional self-attention to model user history. 4) RoBERTa-Sim, a simple yet strong baseline men-NewsRec QuoteRec EngageRec Model MRR NDCG@5/10 HR@5/10 MRR NDCG@5/10 HR@5/10 MRR NDCG@5/10 HR@5/10 GRU4Rec Note that we do not consider other methods that use non-text inputs (e.g., user profile, text topic labels). For fair comparison, all baseline models use pretrained 12-layer RoBERTa-base (Liu et al., 2019) as text encoders to learn embeddings of texts." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Table 1 shows the performance of experiment models. From the results of NewsRec and QuoteRec, we can see that UniTRec outperforms all baseline models by a clear margin. Also, RoBERTa-Sim and UNBERT that directly use the [CLS] hidden states to represent user history, surpass other baselines that build additional aggregation networks upon the whole RoBERTa outputs. As displayed in the results, EngageRec is the most difficult task. We inspect the dataset and find that the texts on social media contain too much noise (e.g., URL and emoji), and the user history contains less number of turns. Nevertheless, UniTRec achieves better overall performance than other baseline models, validating its robustness on noisy text inputs and limited user history." }, { "figure_ref": [], "heading": "Ablation Studies and Analyses", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We further conduct ablation studies on UniTRec. The experiment results are reported in Table 2.\nInitialization of UniTRec. 
We train UniTRec from scratch without initialization from pretrained BART (refer to w/o BART Init). The recommendation performance significantly drops in all three tasks, which indicates that acquiring effective text understanding ability from PLM is a necessary key to UniTRec performance.\nLocal and global attention. We investigate the function of two-level attention modules of the Uni-TRec history encoder. Concretely, we set L 1 = 0 in w/o Local-Att and L 2 = 0 in w/o Global-Att, where L 1 + L 2 = 6. We can observe that removing local and global attention from the original UniTRec history encoder both lead to suboptimal performance, while the performance drop is more significant in w/o Global-Att. The results justify the effectiveness of jointly modeling two-level history contexts through adapted Transformer attention masking without additional parameters." }, { "figure_ref": [], "heading": "Discriminative and perplexity-based objectives.", "publication_ref": [], "table_ref": [], "text": "We probe into training UniTRec with standalone discriminative (Disc-Score only) and perplexitybased (PPL-Score only) contrastive objectives, respectively. We can see that the discriminative objective yields better performance than the perplexitybased objective. Besides, the model performance on both standalone objectives declines compared to the original joint objective. The results indicate that the discriminative and perplexity-based matching scores are complementary and can jointly provide more accurate signals of user history and candidate text matching for text-based recommendation.\nWe present a unified Transformer UniTRec for textbased recommendation. UniTRec learns two-level contexts of multi-turn user history and jointly exploits discriminative matching scores and candidate text perplexity as matching objectives. Empirical experiments on three text-based recommendation datasets corroborate the effectiveness of UniTRec." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Our model only focuses on utilizing text information for recommendation, which is a key limitation of this work. In real-world settings, recommender systems are usually required to handle heterogeneous information inputs. UniTRec is a pure textbased recommender modeling user history and candidate texts as inputs. However, incorporating additional side information (e.g., user profile, text topic labels, and dwell time of user behaviors) could further improve the recommendation performance and alleviate the cold start problem. Furthermore, UniTRec only models two-level relations of user behavior history. Nonetheless, incorporating more user behavior information, such as implicit and negative feedback, could further enhance the recommendation performance. " }, { "figure_ref": [], "heading": "A Dataset Statistics", "publication_ref": [ "b27", "b26" ], "table_ref": [ "tab_3" ], "text": "The detailed statistics of the three text-based recommendation datasets are displayed in Table 3. Note that we use news titles as the text inputs for News-Rec following Qi et al. (2021). NewsRec regards the user clicked and non-clicked news as candidate texts, while QuoteRec and EngageRec regard all potential quotation texts and post texts as candidates. from Zeng et al. (2020) that formulates the task as recommending candidate users to given posts based on post content, we formulate the task as recommending candidate posts to given users based on user history. 
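To make the ablated objectives concrete, the following sketch shows one way the discriminative NCE loss of Eq. (2) and the temperature-scaled perplexity NCE loss of Eq. (4) could be combined into the joint objective of Eq. (5). It is a hedged illustration assuming precomputed per-candidate scores; the tensor shapes, variable names, and the use of softmax cross-entropy as the NCE form are our own choices, not taken from the released implementation.

```python
import torch
import torch.nn.functional as F

def joint_nce_loss(disc_scores, ppl_scores, tau):
    """
    disc_scores, ppl_scores: (batch, 1 + K) matching scores per user; column 0 holds the
    ground-truth candidate and columns 1..K the sampled negatives.
    Returns L^d + L^p in the contrastive form of Eqs. (2), (4), (5), averaged over the batch.
    """
    targets = torch.zeros(disc_scores.size(0), dtype=torch.long)  # positive sits at index 0
    loss_d = F.cross_entropy(disc_scores, targets)                # Eq. (2)
    loss_p = F.cross_entropy(tau * ppl_scores, targets)           # Eq. (4), temperature-scaled
    return loss_d + loss_p                                        # Eq. (5)

batch, K = 4, 3
disc_scores = torch.randn(batch, 1 + K, requires_grad=True)  # S^d from the score head
ppl_scores = -torch.rand(batch, 1 + K) * 5.0                  # S^p = -PPL(Y), Eq. (3)
tau = torch.nn.Parameter(torch.tensor(1.0))                   # learnable temperature, initialised to 1
loss = joint_nce_loss(disc_scores, ppl_scores, tau)
loss.backward()
print(float(loss), float(tau.grad))
```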
" }, { "figure_ref": [], "heading": "Algorithm 1 Candidate Ranking Processs", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B Inference Ranking", "publication_ref": [], "table_ref": [], "text": "Given the user history and M candidate texts, UniTRec first predicts the discriminative ranking scores S d = {S d 1 , S d 2 , ..., S d M } and perplexitybased ranking scores S p = {S p 1 , S p 2 , ..., S p M } of the candidates. Algorithm 1 outlines an approach to aggregate the final ranking based on S d and S p . Note that the function Rank(S) 4 denotes outputting the sorted order of elements in a score list S. There exist other ways to average the ranking of S d and S p , which we leave for future work to explore.\n4 Rank(S) works similarly to scipy.stats.rankdata(). For example in ascending order, Rankasc({0.2, 0.6, 0.7, 0.4}) = scipy.stats.rankdata ([0.2, 0.6, 0.7, 0.4\n]) = [1, 3, 4, 2]" }, { "figure_ref": [], "heading": "C Qualitative Analysis", "publication_ref": [], "table_ref": [], "text": "We show randomly sampled outputs of UniTRec, for instance, demonstrated on the news recommendation and quote recommendation tasks. The 25 US cities where it's easiest to get a mortgage #4\nBurning questions for Cowboys vs Giants on \"Monday Night Football\"\n#5\nWho's the favorite to win 2019 NFL rushing title?\n#6\nGrading all 32 NFL teams heading into the last eight weeks of the 2019 season #7\nJennifer Aniston looks amazing in a makeup-free selfie, plus more news #8\nThis $12 million \"mansion yacht\" is made entirely of stainless steel and it's a first for the industry. Clicked denotes the ground truth user-click labels. Note that the experiment history logs are anonymized and delinked, which is always the first priority of the recommendation study." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We appreciate constructive comments from anonymous reviewers. The research described in this paper is partially supported by CUHK under Project No. 3230366." }, { "figure_ref": [], "heading": "Turn", "publication_ref": [], "table_ref": [], "text": "Conversation Threading History #1 I own an FJ. It's a great car and even on stockies. It s great offroad.\n#2 I feel bad for you that you run the risk of being associated with the typical FJ owner." }, { "figure_ref": [], "heading": "#3", "publication_ref": [], "table_ref": [], "text": "What is a typical FJ owner? I've not heard anything bad about FJ owners." }, { "figure_ref": [], "heading": "#4", "publication_ref": [], "table_ref": [], "text": "It's like someone who drives a jeep wrangler in NYC. There's no need. Tons of FJ owners do that have it and not use it for what it's made for." }, { "figure_ref": [], "heading": "#5", "publication_ref": [], "table_ref": [], "text": "God forbid someone likes the design of a car and doesn't use it offroad." }, { "figure_ref": [], "heading": "#6", "publication_ref": [], "table_ref": [], "text": "Then buy a much more economic environmentalist friendly version. If you buy something and always use it for much less than it's purpose, why buy it?\nOr people can buy whatever the hell they want because it's their money and not yours." }, { "figure_ref": [], "heading": "#8", "publication_ref": [], "table_ref": [], "text": "You're entirely right. Just like people can be rude just because you can do it, because you have the ability but why should you ass. 
I wasn't aware that somebody buying a vehicle that they like and you don't was morally wrong. The lady doth protest too much, methinks.\n0.022 0.013 7\nIt's all about the money.\n0.020 0.013 8\nAnybody driving slower than you is an idiot, and anyone going faster than you is a maniac? 0.012 0.018 9\nOpportunity is missed by most people. Society is becoming more efficient, which is a good thing. People should realize there's no point in holding back this technology just for the sake of keeping people employed. If this were beneficial, then calculators and computers shouldn't exist either.\n#2\nOne small problem is that people need to pay rent and eat.\n#3\nSo we should ditch computers and go back to the typing pool? Should we get rid of heavy earth moving equipment and just use hundreds of guys with hand tools to build everything? It would employ a hell of a lot more people.\n#4\nNo one's saying that. I don't think anyone is really against automation, but as it increases, there are soon going to be more people that there are jobs that actually need doing. I actually believe we've already passed this point. So what do we do with the people, who can't get jobs simply because there are none? It's an issue that need assessed immediately.\n#5\nTons and tons and tons of American jobs have been replaced by new jobs created by technology or in support of technology years ago. An office might have needed people to handle filing paperwork, keeping it in order, and retrieving, where now a document management system has made them completely redundant. The upshot is that to access that DMS, people are out there selling computers, installing computers, servicing computers, and supporting end users building the servers installing, supporting monitoring backing them up, and all that jobs that come in support of those progress is progress. And it advances human efficiency and knowledge. These are just one or two examples, but the answer is not to kill progress. Other countries simply won't. The answer is to push education to the forefront, so people are prepared for these jobs and whatever other challenges the future may bring." }, { "figure_ref": [], "heading": "#6", "publication_ref": [], "table_ref": [], "text": "This is true. But it s unfortunate technological advances tend to reduce low skill jobs and replace them with high skill jobs. It would feel more fair if the low skilled workers could all do training programs and become high skilled workers. But this isn't really the case. Those jobs end up being taken by someone who had better educational opportunities or someone younger who still has time to take advantage of education.\n#7\nThe reality is the reality. Unfortunate or not educating people will create more educated people to handle high skill jobs, and I'll tell you being a desktop support technician isn't high skill. As that's where we push in the future, any amount of hand wringing won't change the facts. We must educate our people if we want to be a global leader in more than homelessness poverty.\n#8\nEducation won't matter. We are at the end of the job age at some point in the near future. We are going to have to deal with the fact that getting a job isn't a reality for a significant percentage of the population. Society will have to radically change as it did during the industrial revolution.\n#9\nMuch cheaper to heavily discourage having more children free abortions. 
Then in years there won't be so many useless people who can apparently be replaced by a simple robot.\n#10\nVirtually every job will be replaced by automation name skilled trades that can't be automated. I imagine you'd be surprised at how hard this is. Are pharmacists useless, surgeons, accountants? I'd bet that your job is just as replaceable as these." }, { "figure_ref": [], "heading": "Candidate Quote Texts", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "S d S p R", "publication_ref": [], "table_ref": [], "text": "Ground truth There's no such thing as a free lunch. Nature abhors a vacuum." }, { "figure_ref": [], "heading": "0.024 7", "publication_ref": [], "table_ref": [], "text": "There is no substitute for hard work.\n0.024 0.017 8\nThere are three kinds of lies: lies, damned lies, and statistics. Table 5: Case analyses of quote recommendation. We demonstrate the candidate quotes of the top 10 rankings out of all candidates. Note that there is only one ground truth quote for each conversation history." } ]
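As a companion to Algorithm 1 and the inference ranking of Appendix B above, the snippet below sketches how the two normalised score lists could be geometrically averaged in log space and ranked with scipy.stats.rankdata, which the Rank(·) footnote points to. The variable names and toy scores are ours, and the snippet assumes the per-candidate scores have already been produced by the model.

```python
import numpy as np
from scipy.special import softmax
from scipy.stats import rankdata

def aggregate_ranking(disc_scores, ppl_scores):
    """Candidate ranking process of Algorithm 1 (Appendix B)."""
    s_d_norm = softmax(np.asarray(disc_scores, dtype=float))  # step 1: normalise S^d
    s_p_norm = softmax(np.asarray(ppl_scores, dtype=float))   # step 2: normalise S^p
    s_avg = np.log(s_d_norm) + np.log(s_p_norm)               # step 3: geometric average in log space
    return rankdata(-s_avg, method="ordinal")                 # step 4: descending order, rank 1 is best

disc_scores = [2.1, 0.3, 1.5, -0.7]    # S^d for M = 4 candidates
ppl_scores = [-3.2, -5.0, -2.9, -6.1]  # S^p = -PPL for the same candidates
print(aggregate_ranking(disc_scores, ppl_scores))  # candidate 0 ranks first, then 2, 1, 3
```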
2023-05-25
10.18653/v1/n19-1423
[ { "authors": "Hidasi Balázs; Karatzoglou Alexandros; Baltrunas Linas; Tikk Domonkos", "journal": "", "ref_id": "b0", "title": "Session-based recommendations with recurrent neural networks", "year": "2016-05-02" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b2", "title": "", "year": "" }, { "authors": "Daniel Cheng; Kyle Yan; Phillip Keung; Noah A Smith", "journal": "European Language Resources Association", "ref_id": "b3", "title": "The engage corpus: A social media dataset for text-based recommender systems", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019-06-02" }, { "authors": "Li Dong; Nan Yang; Wenhui Wang; Furu Wei; Xiaodong Liu; Yu Wang; Jianfeng Gao; Ming Zhou; Hsiao-Wuen Hon", "journal": "", "ref_id": "b5", "title": "Unified language model pre-training for natural language understanding and generation", "year": "2019" }, { "authors": "Shijie Geng; Shuchang Liu; Zuohui Fu; Yingqiang Ge; Yongfeng Zhang", "journal": "Association for Computing Machinery", "ref_id": "b6", "title": "Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5)", "year": "2022" }, { "authors": "Youyang Gu; Tao Lei; Regina Barzilay; Tommi Jaakkola", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Learning to refine text based recommendations", "year": "2016" }, { "authors": "Michael Gutmann; Aapo Hyvärinen", "journal": "PMLR", "ref_id": "b8", "title": "Noisecontrastive estimation: A new estimation principle for unnormalized statistical models", "year": "2010" }, { "authors": "Wang-Cheng Kang; Julian Mcauley", "journal": "", "ref_id": "b9", "title": "Selfattentive sequential recommendation", "year": "2018" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Jian Li; Jieming Zhu; Qiwei Bi; Guohao Cai; Lifeng Shang; Zhenhua Dong; Xin Jiang; Qun Liu", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "MINER: Multi-interest matching network for news recommendation", "year": "2022" }, { "authors": "Yize Li; Jiazhong Nie; Yi Zhang; Bingqing Wang; Baoshi Yan; Fuliang Weng", "journal": "", "ref_id": "b12", "title": "Contextual recommendation based on text mining", "year": "2010" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b13", "title": 
"Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b14", "title": "Decoupled weight decay regularization", "year": "2019-05-06" }, { "authors": "Itzik Malkiel; Oren Barkan; Avi Caciularu; Noam Razin; Ori Katz; Noam Koenigstein", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "RecoBERT: A catalog language model for text-based recommendations", "year": "2020" }, { "authors": "Shumpei Okura; Yukihiro Tagami; Shingo Ono; Akira Tajima", "journal": "Association for Computing Machinery", "ref_id": "b16", "title": "Embedding-based news recommendation for millions of users", "year": "2017" }, { "authors": "Fanchao Qi; Yanhui Yang; Jing Yi; Zhili Cheng; Zhiyuan Liu; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "QuoteR: A benchmark of quote recommendation for writing", "year": "2022" }, { "authors": "Tao Qi; Fangzhao Wu; Chuhan Wu; Peiru Yang; Yang Yu; Xing Xie; Yongfeng Huang", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "HieRec: Hierarchical user interest modeling for personalized news recommendation", "year": "2021" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b19", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b20", "title": "", "year": "" }, { "authors": "Fei Sun; Jun Liu; Jian Wu; Changhua Pei; Xiao Lin; Wenwu Ou; Peng Jiang", "journal": "Association for Computing Machinery", "ref_id": "b21", "title": "Bert4rec: Sequential recommendation with bidirectional encoder representations from transformer", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Attention is all you need", "year": "2017" }, { "authors": "Lingzhi Wang; Xingshan Zeng; Kam-Fai Wong", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Quotation recommendation and interpretation based on transformation from queries to quotations", "year": "2021" }, { "authors": "Chuhan Wu; Fangzhao Wu; Tao Qi; Yongfeng Huang", "journal": "Association for Computing Machinery", "ref_id": "b24", "title": "Empowering news recommendation with pre-trained language models", "year": "2021" }, { "authors": "Fangzhao Wu; Ying Qiao; Jiun-Hung Chen; Chuhan Wu; Tao Qi; Jianxun Lian; Danyang Liu; Xing Xie; Jianfeng Gao; Winnie Wu; Ming Zhou", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "MIND: A large-scale dataset for news recommendation", "year": "2020" }, { "authors": "Xingshan Zeng; Jing Li; Lu Wang; Zhiming Mao; Kam-Fai Wong", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Dynamic online conversation recommendation", "year": "2020" }, { "authors": "Qi Zhang; Jingjie Li; Qinglin Jia; Chuyuan Wang; Jieming Zhu; Zhaowei Wang; Xiuqiang He", "journal": "Main Track", "ref_id": "b27", "title": "Unbert: User-news matching bert for news recommendation", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 306.14, 85.72, 220.18, 27.1 ], "formula_id": "formula_0", "formula_text": "|t 1 | 1 , ..., x 1 N , x 2 N , ..., x |t N | N ]. Inspired by" }, { "formula_coordinates": [ 2, 306.46, 222.26, 205.88, 28.19 ], "formula_id": "formula_1", "formula_text": "Attention(Q, K, V ) = softmax( QK T √ d k +M)V" }, { "formula_coordinates": [ 3, 81.58, 327.29, 184.03, 31.62 ], "formula_id": "formula_2", "formula_text": "L d i = -log exp(S d+ i ) exp(S d+ i ) + K j=1 exp(S d- j )" }, { "formula_coordinates": [ 3, 77.14, 480.62, 212.72, 24.43 ], "formula_id": "formula_3", "formula_text": "S p = -PPL(Y ) = 1 T T i=1 log p θ (y i |y <i ) (3)" }, { "formula_coordinates": [ 3, 73.65, 612.08, 211.98, 43.09 ], "formula_id": "formula_4", "formula_text": "L p i = -log exp(τ • S p+ i ) exp(τ • S p+ i ) + K j=1 exp(τ • S p- j )(4" }, { "formula_coordinates": [ 3, 129.08, 701.34, 160.79, 21.79 ], "formula_id": "formula_5", "formula_text": "L = |D| i=1 L d i + L p i (5)" }, { "formula_coordinates": [ 7, 215.8, 764.9, 55.29, 7.86 ], "formula_id": "formula_6", "formula_text": "]) = [1, 3, 4, 2]" } ]
UniTRec: A Unified Text-to-Text Transformer and Joint Contrastive Learning Framework for Text-based Recommendation
Prior studies have shown that pretrained language models (PLM) can boost the performance of text-based recommendation. In contrast to previous works that either use PLM to encode user history as a whole input text, or impose an additional aggregation network to fuse multi-turn history representations, we propose a unified local- and global-attention Transformer encoder to better model two-level contexts of user history. Moreover, conditioned on user history encoded by Transformer encoders, our framework leverages Transformer decoders to estimate the language perplexity of candidate text items, which can serve as a straightforward yet significant contrastive signal for user-item text matching. Based on this, our framework, UniTRec, unifies the contrastive objectives of discriminative matching scores and candidate text perplexity to jointly enhance text-based recommendation. Extensive evaluation shows that UniTRec delivers SOTA performance on three text-based recommendation tasks.
Zhiming Mao; Huimin Wang; Yiming Du; Kam-Fai Wong
[ { "figure_caption": "Figure 1 :1Figure 1: An example of perplexity-based ranking for candidate item texts, conditioned on user history. The illustrated task is text-based news recommendation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of UniTRec. In training, matching scores S d and S p are optimized by the NCE loss, respectively. In inference, S d and S p are normalized and combined to derive the final output ranking.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Input: discriminative scores S d = {S d 1 , Sd 2 , ..., S d M }, perplexity-based scores S p = {S p 1 , S p 2 , ..., S p M }. Output: final averaged ranking R. 1: Derive the normalized discriminative scores S d norm = softmax(S d ). 2: Derive the normalized perplexity-based scores S p norm = softmax(S p ). 3: Derive the geometric average scores S = log (S d norm ) + log (S p norm ). 4: Sort the averaged scores S by descending order to derive the final ranking: R ← Rank des ( S). 5: return R", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Experiment results on three text-based recommendation tasks. MRR denotes mean reciprocal rank, NDCG denotes normalized discounted cumulative gain, and HR denotes hit ratio (presented in percentage). The overall performance of UniTRec is better than other baseline models with p-value < 0.05, validated by unpaired t-test.", "figure_data": "32.9136.20/42.5350.33/68.35 34.0834.65/37.9344.45/54.63 2.121.04/1.511.27/2.65SASRec32.6036.03/42.3750.63/68.64 33.6334.30/37.4944.32/54.20 2.401.49/1.952.16/3.47BERT4Rec32.8736.18/42.4050.21/67.97 33.5934.26/37.2743.76/53.05 3.041.98/3.232.81/6.67RoBERTa-Sim 32.9636.47/42.8151.06/69.08 37.1337.96/41.1848.14/58.06 3.742.66/3.754.42/7.70UNBERT33.0936.53/42.8450.87/68.82 39.7540.74/43.6950.90/60.04 2.831.96/2.673.11/5.24UniTRec33.7637.63/43.7452.61/69.89 41.2442.38/45.3152.87/61.88 4.063.23/4.294.58/7.68NewsRecQuoteRecEngageRecModelMRR NDCG@5/10HR@5/10MRR NDCG@5/10HR@5/10MRR NDCG@5/10 HR@5/10UniTRec33.7637.63/43.7452.61/69.89 41.2442.38/45.3152.87/61.88 4.063.23/4.294.58/7.68w/o BART Init30.3133.32/39.6947.55/65.78 19.0217.66/20.8022.45/32.16 2.240.86/1.611.27/3.62w/o Local-Att33.3437.22/43.3252.28/69.54 40.4441.63/44.5652.09/61.15 3.923.19/4.154.38/7.36w/o Global-Att33.2237.06/43.1752.14/69.47 40.2541.47/44.2652.07/60.76 3.642.78/3.593.89/6.35Disc-Score only 33.0736.76/43.0351.68/69.46 40.5941.81/44.6552.39/61.14 3.822.99/3.604.49/6.85PPL-Score only 32.8336.39/42.5951.05/68.67 40.3141.43/44.4752.13/61.20 3.292.39/3.033.86/5.66", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Recommendation performance of ablation model variants.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Statistics of three text-based recommendation training datasets. History and candidate tokens denote the number of BPE-tokenized tokens. The test set distribution is closed to the training sets (except candidates of EngageRec) and hence omitted. Note that the max length of each history log is truncated to 1024 tokens.", "figure_data": "DatasetNewsRec QuoteRec EngageRecAvg. history turns26.094.243.29Avg. history tokens414.40279.82286.82Avg. candidates37.2311117163Avg. 
candidate tokens16.1519.11102.42", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Table 4 and 5 showcase the qualitative samples. Mac Engel: As long as these results are acceptable, Dallas Cowboys will continue to be losers Maryland Congressman Elijah Cummings, a Democrat and Chair of House Oversight and Reform Committee, has died: CNN Clicked Taylor Swift Rep Hits Back at Big Machine, Claims She's Actually Owed $7.9 Million in Unpaid Royalties Christian Navarro Slams Disney for Casting \"the White Guy\" in The Little Mermaid ✗ I've been writing about tiny homes for a year and spent 2 nights in a 300-foot home to see what it is all about", "figure_data": "TurnHistory News Texts#1#2NFL world reacts to officials handing Packers win over Lions#3#4Unprecedented movement detected on California earthquake fault capable of 8.0 temblor#5Bag Explodes While Being Loaded On Volaris Flight At Midway Airport#6Orlando Scandrick rips Eagles: They have \"accountability issues\"#7Meghan King Edmonds, Jim Edmonds' Nanny Denies Cheating Allegations#8Nearly $400M worth of cocaine and marijuana intercepted by US Coast Guard#9Former NBA first-round pick arrested in sex sting operation#10China's trade with US shrinks in October despite optimismCandidate News TextsS dS pR0.0950.0694✗Former North Carolina State, NBA player Anthony Grundy dies in stabbing, police say0.1720.1553✗13 Reasons Why's 0.0480.0657✗Opinion: Colin Kaepernick is about to get what he deserves: a chance0.3030.2501✓3 Indiana judges suspended after a night of drinking turned into a White Castle brawl0.0760.0595✗66 Cool Tech Gifts Anyone Would Be Thrilled to Receive0.0090.0059✗Police find 26 children behind false wall at Colorado day care0.0340.11660.0290.0198✗Report: Police investigating woman's death after Redskins' player Montae Nicholson took her to hospital0.2350.2612TurnHistory News Texts#1Toddler dancing to celebrate 11 months cancer-free goes viral#2NFL Week 8 Power Rankings: Old-school football rules the day#3", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Take a peek inside Clicked Opinion: Colin Kaepernick is about to get what he deserves: a chance ✓ U.S. Troops Will Die If They Remain in Syria, Bashar Al-Assad Warns Pete Davidson, Kaia Gerber Are Dating, Trying to Stay \"Low Profile\" Taylor Swift Rep Hits Back at Big Machine, Claims She's Actually Owed $7.9 Million in Unpaid Royalties ✗ 13 Reasons Why's Christian Navarro Slams Disney for Casting \"the White Guy\" in The Little Mermaid Some believe Mason Rudolph, hit in head with his own helmet, isn't getting enough blame Police investigating woman's death after Redskins' player Montae Nicholson took her to hospital Case analyses of news recommendation. History News Texts are sorted by user-clicked timestamps. S d , S p , and R are normalized discriminative, perplexity-based scores, and average ranking as described in Appendix B.", "figure_data": "Candidate News TextsS dS pR0.3300.40010.0240.01110✗0.0640.0336✗The Hottest Tech Gifts This Holiday Season0.0500.0278✗0.0460.03870.0600.0964✓0.1540.1792✓South Carolina teen gets life in prison for deadly elementary school shooting0.0660.0465✗The Unlikely Star of My Family's Thanksgiving Table0.0470.0219✗0.1580.1493✗(ii) Qualitative Example-B from news recommendation.", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Li et al., 2010;Gu et al., 2016;Okura et al., 2017;Malkiel et al., 2020)", "Explanation": "The cited works provide foundational methods and techniques for text-based recommendation, which the citing paper builds upon in its own research on the same topic."}, {"Category": "Supporting Evidence", "Citation": "(Zhang et al., 2021;Qi et al., 2022;Geng et al., 2022)", "Explanation": "The cited works offer evidence of the effectiveness of using pre-trained language models in text-based recommendation research, which the citing paper leverages in its own study of the same approach."}, {"Category": "Data Source", "Citation": "(Devlin et al., 2019;Brown et al., 2020)", "Explanation": "The cited works are the origin of the pre-trained language models used in the citing paper for text-based recommendation research, providing the data and models for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Malkiel et al., 2020)", "Explanation": "The cited work by Malkiel et al. provides a method of encoding multi-turn texts of user history by concatenating them as a whole input text and using a single PLM encoder to learn the user representation."}, {"Category": "Methodological Basis", "Citation": "(Qi et al., 2022)", "Explanation": "The cited work by Qi et al. also uses a method of encoding multi-turn texts of user history by concatenating them as a whole input text and using a single PLM encoder to learn the user representation."}, {"Category": "Methodological Basis", "Citation": "(Geng et al., 2022)", "Explanation": "The cited work by Geng et al. uses a method of encoding multi-turn texts of user history by concatenating them as a whole input text and using a single PLM encoder to learn the user representation."}, {"Category": "Extension or Continuation", "Citation": "(Zeng et al., 2020)", "Explanation": "The cited work by Zeng et al. extends the research on learning the relation among user history turns by pointing out that it is beneficial to do so."}, {"Category": "Extension or Continuation", "Citation": "(Qi et al., 2021)", "Explanation": "The cited work by Qi et al. also extends the research on learning the relation among user history turns by pointing out that it is beneficial to do so."}, {"Category": "Methodological Basis", "Citation": "(Wu et al., 2021)", "Explanation": "The cited work by Wu et al. uses a method of learning representations from multi-turn history texts with PLM encoders and an additional aggregation network to fuse the multi-turn representations."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2022)", "Explanation": "The cited work by Li et al. also uses a method of learning representations from multi-turn history texts with PLM encoders and an additional aggregation network to fuse the multi-turn representations."}, {"Category": "Methodological Basis", "Citation": "(Dong et al., 2019)", "Explanation": "The cited work by Dong et al. (2019) provides a tailored attention masking technique for learning word-level context in user history representations, which the citing paper adopts in its own research on user history modeling."}, {"Category": "Methodological Basis", "Citation": "(Vaswani et al., 2017)", "Explanation": "The cited work by Vaswani et al. 
(2017) provides the self-attention query, key, and value components that the citing paper uses in the local self-attention blocks to learn word-level context representations of turns."}, {"Category": "Extension or Continuation", "Citation": "(Vaswani et al., 2017)", "Explanation": "The citing paper extends the work of Vaswani et al. (2017) by leveraging global self-attention to model the relation among history turns, allowing for global interaction among tokens and turns to learn turn-level context representations of user history."}, {"Category": "Methodological Basis", "Citation": "(Lewis et al., 2020)", "Explanation": "The cited work by Lewis et al. (2020) provides the basis for the use of an MLP scorehead in the citing paper to predict user-candidate matching scores, which is a key methodological element in the study conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Li et al., 2022;Qi et al., 2022)", "Explanation": "The cited works by Li et al. (2022) and Qi et al. (2022) provide evidence to support the use of negative sampling with NCE loss in the citing paper for optimizing matching score prediction, which is a key element in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Radford et al., 2021)", "Explanation": "The cited work by Radford et al. (2021) introduces the concept of temperature parameter to balance the joint NCE loss gradients, which the citing paper adopts in the optimization of the perplexity-based matching score in Eq. (4). This method is used to control the range of perplexity values and ensure a more stable optimization process."}, {"Category": "Methodological Basis", "Citation": "(Lewis et al., 2020)", "Explanation": "The cited work by Lewis et al. (2020) provides the pretrained model for the UniTRec text-to-text Transformer, which the citing paper uses to initialize the parameters for its own research."}, {"Category": "Supporting Evidence", "Citation": "(Wu et al., 2020)", "Explanation": "The cited work provides the MIND-small dataset for experiments in the NewsRec task, which is used to evaluate UniTRec in the context of text-based recommendation."}, {"Category": "Supporting Evidence", "Citation": "(Wang et al., 2021)", "Explanation": "The cited work provides the Reddit-quotation dataset for experiments in the QuoteRec task, which is used to evaluate UniTRec in the context of text-based recommendation."}, {"Category": "Supporting Evidence", "Citation": "(Zeng et al., 2020)", "Explanation": "The cited work provides the dataset released by Zeng et al. 
for experiments in the EngageRec task, which is used to evaluate UniTRec in the context of text-based recommendation."}, {"Category": "Methodological Basis", "Citation": "(Bal\u00e1zs et al., 2016)", "Explanation": "The cited work introduces the GRU network for learning multi-turn history, which the citing paper adopts in their model to process user history."}, {"Category": "Methodological Basis", "Citation": "(Kang and McAuley, 2018)", "Explanation": "The cited work proposes the self-attention based sequential model for user history encoding, which the citing paper uses in their model to process user history."}, {"Category": "Methodological Basis", "Citation": "(Sun et al., 2019)", "Explanation": "The cited work employs bidirectional self-attention to model user history, which the citing paper incorporates in their model to process user history."}, {"Category": "Data Source", "Citation": "(Liu et al., 2019)", "Explanation": "The cited work introduces the RoBERTa-base model as a text encoder for learning text embeddings, which the citing paper uses in their model to process text inputs."}, {"Category": "Data Source", "Citation": "(Qi et al., 2021)", "Explanation": "The cited work provides the text inputs for the News-Rec dataset, which is used in the citing paper to study text-based recommendation."}, {"Category": "Methodological Basis", "Citation": "(Zeng et al., 2020)", "Explanation": "The cited work formulates the text-based recommendation task in a way that the citing paper adopts and adapts to study the relationship between post content and user history in text-based recommendation."}]
[ { "figure_ref": [ "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b2", "b4", "b5", "b6", "b8" ], "table_ref": [], "text": "A FTER continuous development, person re-identification (Re-ID) has attracted high attention with great achievement recently. However, conventional Re-ID methods only rely on single visible images, which results in limited application in low illumination scenarios such as hazy or dark environment. To facilitate the Re-ID task in low illumination scenario, Wu et al. [1] first launch the RGB and near-infrared cross-modality Re-ID. Despite of the recent efforts in cross-modality Re-ID, the additional heterogeneity between the visible and infrared modalities brings huge challenging for cross-modality Re-ID.\nWith the popularity of diverse kinds of cameras (e.g., infrared cameras), multi-modality person re-identification and vehicle re-identification receives increasing interests in the computer vision community [2], [3], due to the strong complementary benefits from different modalities. Li et al. [4] first This research is supported in part by the National Natural Science Foundation of China (61976002), the Natural Science Foundation of Anhui Higher Education Institution of China (KJ2020A0033), and the University Synergy Innovation Program of Anhui Province (GXXT-2021-038 and GXXT-2019-025).\nA. Zheng and C. Li are with the Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Artificial Intelligence, Anhui University, Hefei, 230601, China (e-mail: [email protected]; [email protected]).\nZ. He, Z. Wang and J. Tang are with Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, 230601, China (e-mail: [email protected]; [email protected]; [email protected]) contribute the multi-spectral vehicle datasets RGBN300 and RGBNT100, and propose a heterogeneity-collaboration aware multi-stream revolutionary network for multi-spectral vehicle Re-ID task. Recently, Zheng et al. [3] contribute a multimodality Re-ID benchmark dataset RGBNT201 containing images of each person in three modality: visible, near-infrared and thermal-infrared, making full use of the complementary information of multiple modalities. Meanwhile, they propose a progressive fusion method to combine multi-modality information.\nAlthough multi-modality images have great advantages, the requirements for complete multi-modality data are relatively strict. As shown in Fig. 1, one or two modalities may arbitrarily-missing during the test stage caused by shooting conditions, equipment damage or storage errors. Therefore, partial multi-modality Re-ID is essential in real-life applications.\nThere are three major issues to be addressed in partial multimodality Re-ID.\n• The missing state of modalities is arbitrary. Traditional methods usually train an independent model for each situation, which is not only inefficient, but also low scalability.\n• The recovery of missing modalities is a difficult task.\nExisting generative adversarial network (GAN) based arXiv:2305.15762v1 [cs.CV] 25 May 2023 methods [5], [6] try to use existing data to generate missing data. However, the large amount of training data and unstable training process significantly limit the performance.\n• The ability of multi-modality representation is affected when some modalities are missing. 
Existing methods often enhance feature representation through multimodality fusion [7]- [9]. But direct and fixed fusion approaches lose modality-specific information and can not adapt to a variety of missing states. To handle the first issue, we design a novel dynamic enhancement network (DENet) to automatically recover the representation of missing modalities and perform dynamic interactions according the missing state. In particular, DENet contains three feature extraction branches and a set of feature transformation branches. The former is used to extract features of available modalities, and the latter is to recover features of missing modalities. These branches only need training once, and then dynamically compose the feature representation and perform dynamic enhancement according to the missing state in the test stage.\nTo solve the second issue, we propose the cross-modality feature transformation (CMFT), including the up-sample and down-sample structures. Specifically, we send the features of available modality into CMFT to transform it to the features of the missing modality. The missing image recovered by upsample operation is constrained by reconstruction loss, and then the output representations of CMFTs are constrained by the similarity loss, so as to recover more real and discriminative missing information. In the training stage, we train the CMFT for each feature transformation between one modality to another one using the multi-modality data. In the testing stage, we dynamically use the CMFT moudle to handle the arbitrary missing state.\nFinally, for the third issue, we propose a dynamic enhancement module (DEM). The DEM gets rid of the bondage of fixed fusion and adopts dynamic cutting strategy to realize the feature enhancement of arbitrary missing cases, so as to process and correlate the information of multiple modalities in different missing states. Specifically, we first build a complete directed enhancement graph by taking the modalities as graph nodes, and then dynamically cut some graph edges according the missing state. In particular, if a modality is missing, we cut two edges that take this node as the arc tail. Based on the generated graph, we achieve the feature enhancement under the missing state. Through this way, we can adapt to a variety of missing states in a dynamic manner.\nOur contributions are summarized as follows.\n• We propose a novel dynamic enhancement network to solve the problem of missing modalities which frequently occur in real-world scenario in multi-modality Re-ID task.\n• To recover the missing data, we design the cross-modality feature transformation module with the constraints of the reconstruction loss and similarity loss. • To achieve feature enhancement for arbitrary missing states, we propose a dynamic cutting strategy to adjust the enhancement graph adaptively.\n• We carry out extensive comparative experiments with different state-of-the-art methods on RGBNT201 and RGBNT100 to verify the effectiveness of DENet in various missing cases." }, { "figure_ref": [], "heading": "II. RELATED WORK A. Person Re-ID", "publication_ref": [ "b9", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21" ], "table_ref": [], "text": "Person re-identification mainly solves the recognition and matching problem of person images taken by different non overlapping cameras [10]- [15]. It can identify person according to their wearing, posture, hairstyle and other information, so it is widely used in intelligent security and other fields. 
As one of the hot topics in computer vision research, it has made great progress after years of development. And these deep learning-based Re-ID methods can be roughly divided into two categories: feature learning and distance metric learning. The feature learning network aims to learn a robust and discriminative feature representation for person image. For example, Wei et al. [16] proposed a global-local-alignment descriptor (GLAD) to overcome pose changes and misalignments. Metric learning aims to learn the similarity between images. The more commonly used metric learning loss is the triple loss [17], and there are continuous works to improve this loss function to significantly improve the Re-ID performance [18], [19]. At the same time, it also faces great challenges, such as different viewpoints [20], illumination changes [21], occlusion [22], etc. Especially in low light conditions, the RGB singlemodality Re-ID can not adapt to the night environment, so its performance is limited." }, { "figure_ref": [], "heading": "B. Cross-Modality and Multi-Modality Re-ID", "publication_ref": [ "b0", "b22", "b23", "b9", "b24", "b25", "b26", "b1", "b2" ], "table_ref": [], "text": "To overcome the illumination challenges of the RGB singlemodality Re-ID task, Wu et al. [1] first launch the RGBinfrared cross-modality person re-identification task with the infrared and visible cross-modality person Re-ID dataset SYSU-MM01 and a baseline method Deep Zero-Padding. The emerging achievements of this task roughly fail into the following three categories: 1) Representation learning based methods, which aim to extract the robust and discriminative features shared by two modality images [23], [24]. 2) Metric learning based methods, the key of which kind of methods is to design a reasonable metric method or loss function to learn the similarity of two images [10], [25]. 3) Modality transformation based methods, which transform the crossmodality task into the single-modality task, so as to reduce the modality heterogeneity [26], [27].\nIn order to overcome the limitation in RGB Re-ID task as well as the heterogeneity problem in cross-modality Re-ID task, the multi-modality Re-ID task is proposed. Mogelmose et al. propose a tri-modal person Re-ID [2], combining RGB, depth and thermal data. Special features are obtained from three modalities, which are then evaluated using a joint classifier. Zheng et al. [3] take advantage of the complementary strengths of the visible, near-infrared and thermal-infrared modalities to achieve robust Re-ID performance. This work contributes a comprehensive benchmark dataset RGBNT201, including 201 identities captured from various challenging conditions, to facilitate research on RGB-NIR-TIR multimodality person re-identification." }, { "figure_ref": [], "heading": "C. Partial Multi-Modality Learning", "publication_ref": [ "b27", "b28", "b29", "b30" ], "table_ref": [], "text": "In view of the wide application of multi-modality learning, many fields are committed to the exploration of this study, and some work have paid attention to the problem of missing modality. When processing medical images, Zhang et al. [28] propose to use CycleGAN to generate missing information at the data level. Secondly, based on the imputation method, HeMIS [29] use statistical features as embedding for decoding, and feature fusion adopts the fusion calculation method of mean and variance. In addition, Shen et al. 
[30] propose an adaptive network to design a loss function for the model to generate features similar to the real features in the case of modality-missing. Tsai et al. [31] point out in their work on multi-modality representation learning that models must be robust to unexpected missing or noisy modalities during testing, and they propose to optimize for a joint generativediscriminative objective across multi-modality data and labels." }, { "figure_ref": [], "heading": "D. Multi-Modality Feature Representation and Fusion", "publication_ref": [ "b31", "b32", "b33", "b34", "b35", "b36" ], "table_ref": [], "text": "Multi-modality feature representation plays an important role in multi-modality tasks. The main task is to learn better feature representation of multi-modality data with the complementarity of multiple modalities. The main problem is how to combine the data from different modalities with different degrees of noise. Two commonly used multi-modality representations are the joint representation and the coordinated representation. The former maps the information of different modalities to the same feature space [32], and the later maps the information of each modal respectively with certain constraints between each modality after mapping [33]. Another key issue is the multi-modality feature fusion, which integrates the information from different modalities through different fusion strategies to obtain more discriminative information for specific tasks. It can be roughly divided into early fusion [34], late fusion [35], [36], and mixed fusion [37]." }, { "figure_ref": [ "fig_1" ], "heading": "III. DENET: DYNAMIC ENHANCEMENT NETWORK", "publication_ref": [], "table_ref": [], "text": "In order to solve the ubiquitous modality-missing problem in multi-modality Re-ID task, we propose the Dynamic Enhancement Network (DENet), which consists of multi-branch feature extraction, cross-modality feature transformation module, and dynamic enhancement module, as shown in Fig. 2. In the following, we describe their details." }, { "figure_ref": [], "heading": "A. Multi-branch Feature Extraction", "publication_ref": [ "b37", "b38" ], "table_ref": [], "text": "In training phase, we use all real visible, near-infrared, and thermal-infrared modalities. In order to obtain complementary information, we design a multi-branch network to extract the features of each modality from individual image triplet. We select the commonly used ResNet50 [38] as the backbone. Furthermore, we add the channel and spatial attention layer [39] to each branch to focus the most informative features. It is worth noting that the three ResNet50 branches are independent to each other without sharing the parameters. The feature F rgb , F nir and F tir extracted by corresponding branch preserve modality-specific information as much as possible." }, { "figure_ref": [ "fig_1" ], "heading": "B. Cross-modality Feature Transformation Module", "publication_ref": [], "table_ref": [], "text": "In order to deal with unpredictable modality-missing problem in the testing phase, we introduce the cross-modal feature transformation (CMFT) to learn the conversion relationship between the existing modality and missing modality, as shown in Fig. 2. 
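A minimal sketch of the three-branch extractor described above, assuming a torchvision ResNet-50 backbone per modality with unshared parameters, followed by a generic channel-and-spatial attention layer and global average pooling. The attention block here is a CBAM-style placeholder written for illustration; it is not claimed to match the exact layer used in DENet.

```python
import torch
import torch.nn as nn
from torchvision import models  # assumes torchvision >= 0.13 for the `weights` argument

class ChannelSpatialAttention(nn.Module):
    """Generic CBAM-style channel + spatial attention (illustrative placeholder)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(nn.Conv2d(1, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        w_c = self.channel_fc(x.mean(dim=(2, 3))).unsqueeze(-1).unsqueeze(-1)
        x = x * w_c                                        # channel reweighting
        w_s = self.spatial_conv(x.mean(dim=1, keepdim=True))
        return x * w_s                                     # spatial reweighting

class ModalityBranch(nn.Module):
    """One ResNet-50 branch with its own, unshared parameters."""
    def __init__(self):
        super().__init__()
        resnet = models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # keep the conv feature map
        self.attention = ChannelSpatialAttention(2048)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        f = self.attention(self.backbone(x))
        return self.pool(f).flatten(1)                     # 2048-d feature per image

branches = nn.ModuleDict({m: ModalityBranch() for m in ["rgb", "nir", "tir"]})
triplet = {m: torch.randn(2, 3, 256, 128) for m in ["rgb", "nir", "tir"]}  # toy image triplet
features = {m: branches[m](triplet[m]) for m in triplet}
print({m: f.shape for m, f in features.items()})  # each torch.Size([2, 2048])
```

Because the branches share no parameters, each of F_rgb, F_nir and F_tir keeps modality-specific information, which is what the CMFT and DEM modules described next operate on.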
Taking the example that NIR and TIR modalities are missing, we use two CMFTs, specifically called R2N and R2T and take the R2N as an example to illustrate the details.\nThe module passes the RGB feature F rgb through a upsample block to obtain the intermediate fake-image and then obtains the recover feature through down-sample block.\nX ′ nir = U pSample(F rgb ),(1)\nF ′ nir = DownSample(X ′ nir ).(2)\nWe add pixel-level constraints to above images and features by minimizing\nL rec = ∥X nir -X ′ nir ∥ 2 2 ,(3)\nL sim = ∥θ n (G(F nir )) -θ ′ n (G(F ′ nir ))∥ 1 .(4)\nwhere\nX nir denotes NIR images, G denotes Global Av- erage Pooling, θ n (x) = ReLU (BN (W n x)) and θ ′ n (x) = ReLU (BN (W ′ n x\n)) denote the projection functions that embed the feature vectors into the same space. The two functions are equivalent to provide an effective supervision for the transformation module to obtain more adequate semantic information.\nThe CMFT is trained end-to-end, and fixed all parameters after training, then works in testing phase to recover more realistic and discriminative missing information under missing scenario. " }, { "figure_ref": [ "fig_3", "fig_3", "fig_4" ], "heading": "C. Dynamic Enhancement Module", "publication_ref": [ "b39", "b40" ], "table_ref": [], "text": "Each individual branch is concerned with the learning of discriminative features in a given deterministic modality. For the purpose to make a single branch use the internal correlation between different modalities without interactive information, we adopt the enhancement operation. However, considering the unpredictable missing modalities, how to maintain the ability of multi-modality representation is crucial. Existing enhancement methods with fixed structure [40], [41] are not suitable for our problem. To this end, we propose the Dynamic Enhancement Module (DEM), which can adaptively adjust the enhancement branches according to the missing states to maintain the representation ability of multiple modalities.\nAs shown in Fig. 3 (a), we construct a complete directed enhancement graph by taking the modalities as graph nodes, and we show the dynamic cutting strategy in the case of missing TIR in Fig. 3 (b), which cuts the two enhancement edges that take the TIR node as the arc tail adaptively. The details of cross-modality enhancement are shown in Fig. 4, in which the feature F target of the target modal is enhanced by the source feature F source to obtain the corresponding F enhance target . This progress can be described below. First, we implement three 1×1 convolutions of F source and F target to obtain F ′ source , F ′′ source and F ′ target . Then we perform dot product as follows,\nF ′′′ source = (F ′ source ⊗ F ′ target ) ⊗ F ′′ source .(5)\nAfter convolution and unsequeeze operations, we obtain the final representation F enhance target by adding the original target feature,\nF enhance target = F ′′′ source + F target .(6)\nAt last, we integrate different modalities by concatenating corresponding features to form the final person or vehicle discriminator.\nF c1 = Concat(F enhance rgb , F enhance nir , F enhance tir ),(7)\nF c2 = Concat(F rgb , F enhance nir ′ , F enhance tir ′ ),(8)\nF f inal = Concat(F c1 , F c2 ).(9)\nIn the test stage, if there is no missing (i.e. missing rate η = 0), then the model obtains the discriminant features as usual. 
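The cross-modality enhancement of Eqs. (5)-(6) can be read as three 1×1 convolutions, two matrix products, and a residual addition, in the spirit of a non-local block. The sketch below is our interpretation on channel-first feature maps; the exact reshaping, any normalisation of the affinity, and the channel width are assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class CrossModalityEnhance(nn.Module):
    """Illustrative reading of Eqs. (5)-(6): enhance a target feature with a source feature."""
    def __init__(self, channels):
        super().__init__()
        self.conv_src1 = nn.Conv2d(channels, channels, 1)  # F'_source
        self.conv_src2 = nn.Conv2d(channels, channels, 1)  # F''_source
        self.conv_tgt = nn.Conv2d(channels, channels, 1)   # F'_target
        self.conv_out = nn.Conv2d(channels, channels, 1)   # convolution before the residual add

    def forward(self, f_source, f_target):
        b, c, h, w = f_target.shape
        src1 = self.conv_src1(f_source).flatten(2)          # (B, C, HW)
        src2 = self.conv_src2(f_source).flatten(2)          # (B, C, HW)
        tgt = self.conv_tgt(f_target).flatten(2)            # (B, C, HW)
        affinity = torch.bmm(src1, tgt.transpose(1, 2))     # (B, C, C): F'_source ⊗ F'_target
        fused = torch.bmm(affinity, src2)                   # (B, C, HW): (...) ⊗ F''_source, Eq. (5)
        # A normalisation of `affinity` (e.g. softmax) could be added; Eq. (5) states the plain product.
        fused = self.conv_out(fused.view(b, c, h, w))
        return fused + f_target                             # Eq. (6): residual enhancement

enhance = CrossModalityEnhance(channels=256)
f_rgb = torch.randn(2, 256, 16, 8)   # source feature (e.g. the RGB branch)
f_nir = torch.randn(2, 256, 16, 8)   # target feature (e.g. the NIR branch)
print(enhance(f_rgb, f_nir).shape)   # torch.Size([2, 256, 16, 8])
```

In DENet one such enhancement edge exists for every ordered pair of modalities, and the dynamic cutting strategy simply removes the edges whose source node corresponds to a missing modality at test time.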
If missing, we can judge the missing modalities and missing rate η, and complete the missing information through our CMFT module, so as to get different feature combinations under different missing states to adapt to the missing situation." }, { "figure_ref": [], "heading": "D. Objective Function", "publication_ref": [ "b41" ], "table_ref": [], "text": "Since the features obtained after global average pool (GAP) are in Euclidean space and the triplet loss is suitable for constraints in free Euclidean space, we apply the triplet loss directly to the features after GAP. Then, the features are normalized to the hypersphere through BN layer to optimize the classification loss. The triplet loss and cros-entropy loss (CE loss) is computed as:\nL tri = max{d(a, p) -d(a, n) + α, 0},(10)\nL CE = - N i=1 q i log(p i ) q i = 0, y ̸ = i q i = 1, y = i,(11)\nwhere d(a, p) and d(a, n) are feature distances of positive and negative pairs and the α is the margin of triplet loss.\nIn order to prevent the network from overfitting, we adopt the label smooth strategy [42]. Label smooth will change the real probability distribution and cross-entropy loss function as follows,\np i = 1 -β , i = y β/N , i ̸ = y,(12)\nL ′ CE = (1 -β) * L CE , i = y β * L CE , i ̸ = y, (13\n)\nwhere β is a small hyperparameters, which encourage the model to be less confident on the training set. Through the above description, the overall loss function in the training phase can be formulated as:\nL = L ReID + ρL rec + µL sim ,(14)\nwhere L ReID denotes all the triplet loss and cross entropy loss we used. ρ and µ are the balance hyperparameters between these losses. We set ρ = 1 and µ = 1 in training stage according to the experiment results of parameter analysis." }, { "figure_ref": [], "heading": "E. Implementation Details", "publication_ref": [], "table_ref": [], "text": "The implementation platform is Pytorch with a NVIDIA GTX 1080Ti GPU. We use the ResNet-50 pre-trained on ImageNet as the backbone to extract 2,048d features for RGB, NIR and TIR images from the global average pooling layer. The batch size is set to 8 and the initial learning rate is set to 0.01 and then reduce it to 0.001 and 0.0001 at epoch 30 and 55. In the training phase, we set the dropout to 0.5 to prevent overfitting. And we use stochastic gradient descent (SGD), setting momentum to 0.9 and weight decay to 0.0005 to fine-tune the network." }, { "figure_ref": [], "heading": "IV. EVALUATION", "publication_ref": [], "table_ref": [], "text": "To verify the effectiveness of our proposed method, we compare it with state-of-the-art methods on the benchmark multi-modality person Re-ID dataset RGBNT201 ( Evaluation Protocols. Followed by the evaluation protocols in the commonly used Market-1501 dataset, we use mean Average Precision (mAP) and Cumulative Matching Characteristics (CMC) as the metrics. mAP is the area under the precision recall curve, which measures the quality of the model in all categories. CMC curve calculates the hit probability of Top-k, which comprehensively reflects the performance of the classifier. In our experiment, we show the scores of Rank-1, Rank-5 and Rank-10." }, { "figure_ref": [], "heading": "B. 
Evaluation on RGBNT201 Dataset", "publication_ref": [ "b42", "b43", "b44", "b45", "b2", "b2", "b24", "b46", "b47" ], "table_ref": [ "tab_1", "tab_1", "tab_2", "tab_1" ], "text": "In order to verify the effectiveness of DENet in processing partial-missing multi-modality data, we implement several groups of comparative experiments on RGBNT201 dataset according to the increasing number of missing modalities. No modality missing. To verify the generality of the proposed method for multi-modality person Re-ID, we compare our method with the state-of-the-art single and multi modality methods in Table I. Firstly, we compare our DENet with four single modality methods, including MLFN [43], HACNN [44], OSNet [45], and CAL [46]. For the single modality method, we expand the network into three branches, and concatenate the obtained features as the representation for Re-ID task. Without missing, we put the obtained features into the enhancement module to have a complete directed enhance. Secondly, we compare with multi-modality Re-ID methods PFNet [3] in complete setting. Clearly, our DENet significantly beats both the existing multi-modality Re-ID methods PFNet and these single modality methods, which demonstrates the effectiveness of the proposed method in complete multi-modality Re-ID. One modality complete-missing. In the case with one modality complete-missing, which means one fixed modality is missing for all the samples (taking TIR or NIR as example), we carry out a series of comparative experiments on RGB-NIR and RGB-TIR settings respectively. In this case, we also compare our method with four single modality methods mentioned above. We extend the single modality network into two branches and concatenate the two features of existing modalities as the representation. Additionally, we evaluate the multi-modality Re-ID method PFNet [3] in the fashion of missing modality. The result are shown in Table I. Furthermore, we construct the RGBNT201 dataset into cross-modality setting by query one modality from the other modality gallery. Thus, we evaluate three cross-modality Re-ID methods HC loss [25], DDAG [47] and MPANet [48] for comparison. The results are shown in Table II. Cross-modality methods mainly focus on reducing the modality heterogeneity while ignoring the complementarity in different modalities, thus present generally poor accuracy in both mAP and Rank-1. Single modality methods integrate the multi-modality information between the two existing modalities by simple concatenation, thus still achieves relatively poor performance. By considering the heterogeneity of multi-modality data and recovering the modality missing by cross-modality transfer, PFNet achieves large improvement comparing to the cross-modality and single modality methods. However, due to the low constraints of the recovery process and insufficient attention to modalityspecific features, which limits its performance. Benefit from the necessary restoration of missing modalities by the feature transformation module, as well as the useful information mining and interaction of dynamic feature enhancement, our DENet significantly boosts the performance, which evidences the effectiveness of our multi-modality while handling the partial multi-modality person re-identification. Two modality complete-missing. 
In the case of missing two modalities (taking TIR and NIR as example), we take RGB image as input and compare our method with the above three single-modality Re-ID method and the multi-modality Re-ID method PFNet with missing modalities. Consistently shown in Table I, our method significantly beats both single and multimodality methods, which demonstrates the effectiveness of the proposed method in partial multi-modality Re-ID." }, { "figure_ref": [], "heading": "C. Evaluation on RGBNT100 Dataset", "publication_ref": [ "b3", "b48", "b49" ], "table_ref": [ "tab_3", "tab_4" ], "text": "To verify the generality of our method, we further validate our DENet on the multi-spectral vehicle Re-ID dataset RGBNT100 [4]. Firstly, we evaluate our DENet on the RGBNT100 dataset in the case of diverse modality missing comparing with the baseline, as shown in Table III. It can be observed that DENet achieves the best performance when there is no modality missing. The performance degrades when one modality (NIR or TIR) missing, and even worse when both NIR and TIR are missing. This evidence that we can use the complementary information provided by multi-modality images to force the network to learn more discriminant feature representations, so as to obtain more robust performance. Moreover, by introducing Cross-Modality Feature Transformation (CMFT) and Dynamic Enhancement Module (DEM), our DENet significantly improves the baseline in all the cases. This verifies the effectiveness of our method by applying our model to the multi-modality vehicle dataset RGBNT100.\nIn addition, we compare our method DENet with the multimodality vehicle Re-ID method HAMNet and the advanced single-modal Re-ID method DMML [49] and Circle Loss [50]. Specifically, we extend the single-modal models to three branches, followed by the feature concatenation to obtain the final vehicle feature representation, then to facilitate the multimodality Re-ID task. As shown in Table IV, DENet sigfinicantly beats these comparative methods, which evidences the effectiveness of our method while handling multi-modality information." }, { "figure_ref": [], "heading": "D. Ablation Study", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "To verify the contribution of each component, reconstruction loss L rec , similarity loss L sim and the Dynamic Enhancement Module (DEM) in our method, we conduct the following ablation study on RGBNT201 with missing rate η = 0.25. As shown in Table V, each component plays indispensable the availability of existing features and restored features reasonably, rather than treating them equally. Furthermore, in the complete setting with no modality missing, we compare our method with the single-direction cyclic enhancement strategy, which takes RGB, NIR and TIR as nodes and R2N, N2T and T2R as edges. It can be seen form Table VII, either singledirection cyclic enhancement or fixed enhancement works overshadowed than the dual-direction full enhancement in the DEM in the case of data integrity." }, { "figure_ref": [], "heading": "E. Parameter Analysis", "publication_ref": [], "table_ref": [], "text": "There are three important parameters in our method, the influence of the values of parameters ρ and µ, and the missing rate η. First, we evaluate the influence of ρ and µ on training accuracy, as shown in Fig. 5 (a)-(c). Our method is relatively robust under the setting of µ = 1, and achieves the best performance when ρ = 1 and µ = 1. Therefore, in other experiments, we set ρ = 1 and µ = 1 for comparison. 
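(As a brief aside on how ρ and µ enter the objective of Eq. (14), a minimal sketch is given below. It is illustrative only: the batch-hard triplet mining, the margin value, the label-smoothing factor β, and the use of MSE/L1 reductions for L_rec and L_sim are assumptions rather than the authors' exact code, and the function name is hypothetical.)

```python
import torch
import torch.nn.functional as F

def reid_objective(feats, logits, labels, x_rec, x_real, f_rec, f_real,
                   rho=1.0, mu=1.0, margin=0.3, beta=0.1):
    """Overall loss of Eq. (14): L = L_ReID + rho * L_rec + mu * L_sim (illustrative sketch)."""
    # L_ReID: batch-hard triplet loss on GAP features plus label-smoothed cross-entropy on logits.
    dist = torch.cdist(feats, feats)                                   # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_pos = (dist * same.float()).max(dim=1).values              # hardest positive per anchor
    hardest_neg = (dist + 1e5 * same.float()).min(dim=1).values        # hardest negative per anchor
    l_tri = F.relu(hardest_pos - hardest_neg + margin).mean()          # cf. Eq. (10)
    l_ce = F.cross_entropy(logits, labels, label_smoothing=beta)       # cf. Eqs. (11)-(13)
    l_reid = l_tri + l_ce

    l_rec = F.mse_loss(x_rec, x_real)                                  # cf. Eq. (3), pixel-level
    l_sim = F.l1_loss(f_rec, f_real)                                   # cf. Eq. (4), feature-level
    return l_reid + rho * l_rec + mu * l_sim                           # Eq. (14)
```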
Second, to explore the capability of the proposed method while handling performance with the more realistic partial multi-modality Re-ID, we conduct experiments with different missing rates on RGBNT201 dataset. Specifically, we randomly remove M images (could be any modality for each multi-modality/triplet sample) from N quegallery images in the test stage. Fig. 5 (d) demonstrates the performance against the missing rate η = M/N . It can be seen that the two evaluation indicators first decreased with the increase of the missing rate, and turned to an upward trend near the missing rate of 0.6 and 0.5 respectively. The main reason may be that the image quality of some NIR and TIR is not clear, and the features extracted from the original real images are not recognized as the features converted from RGB, which also reflects the effectiveness of the feature transformation module." }, { "figure_ref": [ "fig_6", "fig_7", "fig_7", "fig_7" ], "heading": "F. Visualization", "publication_ref": [ "b50", "b51", "b52", "b45" ], "table_ref": [], "text": "To illustrate the effectiveness of the proposed dynamic enhancement module (DEM) on RGBNT201 dateset, we randomly visualize the feature distribution of five identities from the test set by t-SNE [51], where different colors and shapes represent different identities and modalities respectively. From Fig. 6 feature distribution in Fig. 6 (b) more compact. In addition, benefit from the two loss functions L rec and L sim in the CMFT module, we can obtain more realistic recovered feature representation as shown in Fig. 6 (c)-(d). The transformation significantly alleviate the modality differences, thus the features with same identity but different modalities cluster better than the original.\nSecondly, since one of the key points of multi-modality person Re-ID is to improve the discriminability of the features, we further explicitly show the effectiveness of the cross-modality feature transformation compared to the GAN-based generation method [52] in terms of missing modality recovery. We apply Grad-Cam [53] on the generated/recovered feature maps to visualize the class activation maps (CAMs) and overlay them on the original RGB images as shown in Fig. 7. We observe that the GAN-based feature generation method is more heavily disturbed by the background and does not pay enough attention to the target. In contrast, the features recovered by our method better emphasize the most discriminative regions required for classification. The stronger the feature discriminability, the better the network performance.\nFinally, we visualize the several ranking results of our model retrieved on the RGBNT201 dataset, as shown in Fig. 8. For clarity, we only show the RGB images in the ranking results. Fig. 8 (a) demonstrates the ranking results of an example query in different missing scenarios. We can see that the best retrieval results achieve when the multi-modality data is complete, and the performance is relatively the worst when both NIR and TIR are missing. This reflects that the complementarity of multimodality data plays an effective role in improving network performance. Fig. 8 (b) further demonstrates the results of our DENet comparing with the state-of-the-art CAL [46] when missing NIR and TIR modalities. 
It can be seen that, compared with the method without special handling of missing modalities, our DENet complements the missing information through the CMFT module and then selectively interacts through the DEM, which achieves much more robust Re-ID performance.
V. CONCLUSION In this paper, we propose a novel partial multi-modality Re-ID method, DENet, to cope with the missing-modality problem as well as improve the multi-modality representation. First, we introduce the cross-modality feature transformation module to recover the representation of missing data. In addition, we design a dynamic enhancement module with a complete directed graph, and then cut the relevant edges dynamically according to the missing state. It improves the representation ability of multiple modalities under changeable missing scenarios. Comprehensive experimental results on the RGBNT201 and RGBNT100 datasets demonstrate the performance of our method and validate that our DENet copes better with challenging modality-missing environments. In the future, we will focus on more complex situations to achieve more robust partial multi-modality Re-ID performance." } ]
[ { "authors": "A Wu; W Zheng; H Yu; S Gong; J Lai", "journal": "", "ref_id": "b0", "title": "Rgb-infrared crossmodality person re-identification", "year": "2017" }, { "authors": "A Mogelmose; C Bahnsen; T B Moeslund; A Clapes; S Escalera", "journal": "", "ref_id": "b1", "title": "Tri-modal person re-identification with rgb, depth and thermal features", "year": "2013" }, { "authors": "A Zheng; Z Wang; Z Chen; C Li; J Tang", "journal": "", "ref_id": "b2", "title": "Robust multi-modality person re-identification", "year": "2021" }, { "authors": "H Li; C Li; X Zhu; A Zheng; B Luo", "journal": "", "ref_id": "b3", "title": "Multi-spectral vehicle re-identification: A challenge", "year": "2020" }, { "authors": "Z Zhang; Y Lin; Y Zheng", "journal": "", "ref_id": "b4", "title": "Translating and segmenting multimodal medical volumes with cycle-and shape-consistency generative adversarial network", "year": "2018" }, { "authors": "K Kamnitsas; C Baumgartner; C Ledig; V Newcombe; J P Simpson; A D Kane; D K Menon; A Nori; A Criminisi; D Rueckert", "journal": "", "ref_id": "b5", "title": "Unsupervised domain adaptation in brain lesion segmentation with adversarial networks", "year": "2017" }, { "authors": "E Bruni; N Tran; M Baroni", "journal": "The Journal of Artificial Intelligence Research", "ref_id": "b6", "title": "Multimodal distributional semantics", "year": "2014" }, { "authors": "A James; B Dasarathy", "journal": "Information Fusion", "ref_id": "b7", "title": "Medical image fusion: A survey of the state of the art", "year": "2014" }, { "authors": "Q Y Jiang; W J Li", "journal": "", "ref_id": "b8", "title": "Deep cross-modal hashing", "year": "2017" }, { "authors": "M Ye; Z Wang; X Lan; P Yuen", "journal": "", "ref_id": "b9", "title": "Visible thermal person reidentification via dual-constrained top-ranking", "year": "2018" }, { "authors": "H Luo; W Jiang; Y Gu; F Liu; X Liao; S Lai; J Gu", "journal": "IEEE Transactions on Multimedia", "ref_id": "b10", "title": "A strong baseline and batch normalization neck for deep person re-identification", "year": "2020" }, { "authors": "X Yang; P Zhou; M Wang", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b11", "title": "Person reidentification via structural deep metric learning", "year": "2019" }, { "authors": "Z Liu; H Lu; H Ruan; M Yang", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b12", "title": "Person reidentification by joint local distance metric and feature transformation", "year": "2019" }, { "authors": "Z Zhang; C Lan; W Zeng; X Jin; Z Chen", "journal": "", "ref_id": "b13", "title": "Relation-aware global attention for person re-identification", "year": "2020" }, { "authors": "H Wang; L Jiao; S Yang; L Li; Z Wang", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b14", "title": "Simple and effective: Spatial rescaling for person reidentification", "year": "2022" }, { "authors": "L Wei; S Zhang; H Yao; W Gao; Q Tian", "journal": "", "ref_id": "b15", "title": "Glad: Global-localalignment descriptor for pedestrian retrieval", "year": "2017" }, { "authors": "F Schroff; D Kalenichenko; J Philbin", "journal": "", "ref_id": "b16", "title": "Facenet: A unified embedding for face recognition and clustering", "year": "2015" }, { "authors": "D Cheng; Y Gong; S Zhou; J Wang; N Zheng", "journal": "", "ref_id": "b17", "title": "Person reidentification by multi-channel parts-based cnn with improved triplet loss function", "year": "2016" }, { 
"authors": "W Chen; X Chen; J Zhang; K Huang", "journal": "", "ref_id": "b18", "title": "Beyond triplet loss: a deep quadruplet network for person re-identification", "year": "2017" }, { "authors": "Z Zhu; X Jiang; F Zheng; X Guo; W Zheng", "journal": "", "ref_id": "b19", "title": "Viewpointaware loss with angular regularization for person re-identification", "year": "2020" }, { "authors": "Y Huang; Z Zha; X Fu; W Zhang", "journal": "", "ref_id": "b20", "title": "Illumination-invariant person re-identification", "year": "2019" }, { "authors": "R Hou; B Ma; H Chang; X Gu; S Shan; X Chen", "journal": "", "ref_id": "b21", "title": "Vrstc: occlusion-free video person re-identification", "year": "2019" }, { "authors": "M Ye; X Lan; J Li; P Yuen", "journal": "", "ref_id": "b22", "title": "Hierarchical discriminative learning for visible thermal person re-identification", "year": "2018" }, { "authors": "Y Lu; Y Wu; B Liu; T Zhang; B Li; Q Chu; N Yu", "journal": "", "ref_id": "b23", "title": "Crossmodality person re-identification with shared-specific feature transfer", "year": "2020" }, { "authors": "Y Zhu; Z Yang; L Wang; S Zhao; X Hu; D Tao", "journal": "Neurocomputing", "ref_id": "b24", "title": "Hetero-center loss for cross-modality person re-identification", "year": "2020" }, { "authors": "P Dai; R Ji; H Wang; Q Wu; Y Huang", "journal": "", "ref_id": "b25", "title": "Cross-modality person re-identification with generative adversarial training", "year": "2018" }, { "authors": "Z Wang; Z Wang; Y Zheng; Y Chuang; S Satoh", "journal": "", "ref_id": "b26", "title": "Learning to reduce dual-level discrepancy for infrared-visible person re-identification", "year": "2019" }, { "authors": "Z Zhang; Y Lin; Y Zheng", "journal": "", "ref_id": "b27", "title": "Translating and segmenting multimodal medical volumes with cycle-and shape-consistency generative adversarial network", "year": "2018" }, { "authors": "M Havaei; N Guizard; N Chapados; Y Bengio", "journal": "Medical Image Computing and Computer-Assisted Intervention", "ref_id": "b28", "title": "Hemis: Hetero-modal image segmentation", "year": "2016" }, { "authors": "Y Shen; M Gao", "journal": "", "ref_id": "b29", "title": "Brain tumor segmentation on mri with missing modalities", "year": "2019" }, { "authors": "Y Tsai; P Liang; A Zadeh; L Morency; R Salakhutdinov", "journal": "", "ref_id": "b30", "title": "Learning factorized multimodal representations", "year": "2019" }, { "authors": "Y Jiang; Z Wu; J Wang; X Xue; S Chang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b31", "title": "Exploiting feature and class relationships in video categorization with regularized deep neural networks", "year": "2018" }, { "authors": "Y Peng; J Qi; Y Yuan", "journal": "IEEE Transactions on Image Processing", "ref_id": "b32", "title": "Modality-specific cross-modal similarity measurement with recurrent attention network", "year": "2018" }, { "authors": "S Poria; I Chaturvedi; E Cambria; A Hussain", "journal": "", "ref_id": "b33", "title": "Convolutional mkl based multimodal emotion recognition and sentiment analysis", "year": "2016" }, { "authors": "Z Li; F Zhou", "journal": "", "ref_id": "b34", "title": "Fssd: Feature fusion single shot multibox detector", "year": "2017" }, { "authors": "G Huang; Z Liu; V Laurens; K Weinberger", "journal": "", "ref_id": "b35", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "Z Lan; L Bao; S Yu; W Liu; A Hauptmann", "journal": "Multimedia Tools and 
Applications", "ref_id": "b36", "title": "Multimedia classification and event detection using double fusion", "year": "2014" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b37", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "S Woo; J Park; J Y Lee; I S Kweon", "journal": "", "ref_id": "b38", "title": "Cbam: Convolutional block attention module", "year": "2018" }, { "authors": "K Chen; T Bui; C Fang; Z Wang; R Nevatia", "journal": "", "ref_id": "b39", "title": "Amc: Attention guided multi-modal correlation learning for image search", "year": "2017" }, { "authors": "X Wei; T Zhang; Y Li; Y Zhang; F Wu", "journal": "", "ref_id": "b40", "title": "Multi-modality cross attention network for image and sentence matching", "year": "2020" }, { "authors": "C Szegedy; V Vanhoucke; S Ioffe; J Shlens; Z Wojna", "journal": "", "ref_id": "b41", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "X Chang; T M Hospedales; T Xiang", "journal": "", "ref_id": "b42", "title": "Multi-level factorisation net for person re-identification", "year": "2018" }, { "authors": "W Li; X Zhu; S Gong", "journal": "", "ref_id": "b43", "title": "Harmonious attention network for person re-identification", "year": "2018" }, { "authors": "K Zhou; Y Yang; A Cavallaro; T Xiang", "journal": "", "ref_id": "b44", "title": "Omni-scale feature learning for person re-identification", "year": "2019" }, { "authors": "Y Rao; G Chen; J Lu; J Zhou", "journal": "", "ref_id": "b45", "title": "Counterfactual attention learning for fine-grained visual categorization and re-identification", "year": "2021" }, { "authors": "M Ye; J Shen; D J Crandall; L Shao; J Luo", "journal": "", "ref_id": "b46", "title": "Dynamic dual-attentive aggregation learning for visible-infrared person reidentification", "year": "2020" }, { "authors": "Q Wu; P Dai; J Chen; C.-W Lin; Y Wu; F Huang; B Zhong; R Ji", "journal": "", "ref_id": "b47", "title": "Discover cross-modality nuances for visible-infrared person reidentification", "year": "2021" }, { "authors": "G Chen; T Zhang; J Lu; J Zhou", "journal": "", "ref_id": "b48", "title": "Deep meta metric learning", "year": "2019" }, { "authors": "Y Sun; C Cheng; Y Zhang; C Zhang; L Zheng; Z Wang; Y Wei", "journal": "", "ref_id": "b49", "title": "Circle loss: A unified perspective of pair similarity optimization", "year": "2020" }, { "authors": "L Maaten; G Hinton", "journal": "Journal of Machine Learning Research", "ref_id": "b50", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "G Wang; T Zhang; J Cheng; S Liu; Y Yang; Z Hou", "journal": "", "ref_id": "b51", "title": "Rgbinfrared cross-modality person re-identification via joint pixel and feature alignment", "year": "2019" }, { "authors": "R Selvaraju; M Cogswell; A Das; R Vedantam; D Parikh; D Batra", "journal": "", "ref_id": "b52", "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "year": "2017" } ]
[ { "formula_coordinates": [ 4, 119.95, 457.29, 180.07, 14.34 ], "formula_id": "formula_0", "formula_text": "X ′ nir = U pSample(F rgb ),(1)" }, { "formula_coordinates": [ 4, 112.98, 496.35, 187.04, 14.34 ], "formula_id": "formula_1", "formula_text": "F ′ nir = DownSample(X ′ nir ).(2)" }, { "formula_coordinates": [ 4, 123.49, 554.09, 176.53, 14.34 ], "formula_id": "formula_2", "formula_text": "L rec = ∥X nir -X ′ nir ∥ 2 2 ,(3)" }, { "formula_coordinates": [ 4, 90.71, 593.15, 209.31, 14.34 ], "formula_id": "formula_3", "formula_text": "L sim = ∥θ n (G(F nir )) -θ ′ n (G(F ′ nir ))∥ 1 .(4)" }, { "formula_coordinates": [ 4, 48.96, 618.67, 251.06, 34.53 ], "formula_id": "formula_4", "formula_text": "X nir denotes NIR images, G denotes Global Av- erage Pooling, θ n (x) = ReLU (BN (W n x)) and θ ′ n (x) = ReLU (BN (W ′ n x" }, { "formula_coordinates": [ 4, 353.05, 585.58, 209.99, 14.34 ], "formula_id": "formula_5", "formula_text": "F ′′′ source = (F ′ source ⊗ F ′ target ) ⊗ F ′′ source .(5)" }, { "formula_coordinates": [ 4, 374.45, 641.37, 188.58, 14.34 ], "formula_id": "formula_6", "formula_text": "F enhance target = F ′′′ source + F target .(6)" }, { "formula_coordinates": [ 4, 340.69, 705.06, 222.35, 12.69 ], "formula_id": "formula_7", "formula_text": "F c1 = Concat(F enhance rgb , F enhance nir , F enhance tir ),(7)" }, { "formula_coordinates": [ 4, 350.58, 737.2, 212.46, 12.69 ], "formula_id": "formula_8", "formula_text": "F c2 = Concat(F rgb , F enhance nir ′ , F enhance tir ′ ),(8)" }, { "formula_coordinates": [ 5, 116.94, 69.49, 183.09, 9.65 ], "formula_id": "formula_9", "formula_text": "F f inal = Concat(F c1 , F c2 ).(9)" }, { "formula_coordinates": [ 5, 94.91, 278.68, 205.11, 9.65 ], "formula_id": "formula_10", "formula_text": "L tri = max{d(a, p) -d(a, n) + α, 0},(10)" }, { "formula_coordinates": [ 5, 88.78, 304.77, 211.25, 30.32 ], "formula_id": "formula_11", "formula_text": "L CE = - N i=1 q i log(p i ) q i = 0, y ̸ = i q i = 1, y = i,(11)" }, { "formula_coordinates": [ 5, 127.86, 413.46, 172.16, 23.08 ], "formula_id": "formula_12", "formula_text": "p i = 1 -β , i = y β/N , i ̸ = y,(12)" }, { "formula_coordinates": [ 5, 103.73, 449.58, 192.14, 24 ], "formula_id": "formula_13", "formula_text": "L ′ CE = (1 -β) * L CE , i = y β * L CE , i ̸ = y, (13" }, { "formula_coordinates": [ 5, 295.87, 456.69, 4.15, 8.64 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 5, 111.65, 536.03, 188.38, 9.65 ], "formula_id": "formula_15", "formula_text": "L = L ReID + ρL rec + µL sim ,(14)" } ]
Dynamic Enhancement Network for Partial Multi-Modality Person Re-identification
Many existing multi-modality studies are based on the assumption of modality integrity. However, the problem of missing arbitrary modalities is very common in real life, and it is less studied but actually important in the task of multi-modality person re-identification (Re-ID). To this end, we design a novel dynamic enhancement network (DENet), which allows missing arbitrary modalities while maintaining the representation ability of multiple modalities, for partial multi-modality person Re-ID. To be specific, the multi-modal representation of the RGB, near-infrared (NIR) and thermal-infrared (TIR) images is learned by three branches, in which the information of missing modalities is recovered by the feature transformation module. Since the missing state might be changeable, we design a dynamic enhancement module, which dynamically enhances modality features according to the missing state in an adaptive manner, to improve the multi-modality representation. Extensive experiments on the multi-modality person Re-ID dataset RGBNT201 and the vehicle Re-ID dataset RGBNT100, compared with state-of-the-art methods, verify the effectiveness of our method in complex and changeable environments.
Aihua Zheng; Ziling He; Zi Wang; Chenglong Li; Jin Tang
[ { "figure_caption": "Fig. 1 .1Fig. 1. The training data is complete and multi-modality, but the image may lose one or two modalities during the test stage.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig.2. An overview of our proposed partial multi-modality re-identification network DENet. First, we extract the features of each modality from individual target image triplet. To solve the missing modalities, we employ the the cross-modality feature transformation (CMFT), including the up-sample and downsample structures. And then, we feed the obtained features into the dynamic enhancement module(DEM), which can be adjusted according to different arbitrarily-missing scenarios to maintain the representation ability of multiple modalities. Finally, we concatenate the enhanced features to obtain the final representation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. (a) The complete enhancement graph. (b) The dynamic-cut strategy taking the missing of TIR as an example.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. The detail of the enhancement operation. Where the feature of target modal is enhanced by the source feature to obtain the corresponding improved feature.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .Fig. 6 .56Fig. 5. The parameter analysis on RGBNT201 dataset shows the influence of different coefficient settings on the accuracy of the learned model.", "figure_data": "", "figure_id": "fig_5", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Comparison of the Class Activation Map (CAM) of recovered (b) NIR features and (c) TIR features overlaid on the existing (a) RGB images.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. (a) The ranking results of an example query in different missing scenarios. (b) The ranking results of our DENet comparing with the state-of-the-art method when missing NIR and TIR modalities. The green and red boxes indicate the correct and false matching respectively.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Zheng et al., 2021) and multi-spectral vehicle dataset RGBNT100 (Li et al., 2020). ] is a multi-spectral vehicle Re-ID dataset contributed by Li et al. in 2020. It contains RGB, NIR and TIR vehicle images of 100 identities from 8 camera views. The training set in RGBNT100 contains 8675 image triples of 50 vehicles. The other 50 vehicles with 8575 image triples are used for the test, and 1715 image triples are randomly selected as the query set.", "figure_data": "A. Datasets and Evaluation ProtocolsRGBNT201 [3] is the first multi-modality person Re-IDdataset with four non-overlapping views collected in a campusscene, each with three cameras recording RGB, NIR, and TIRdata simultaneously. It covers the challenges of part occlu-sion, low illumination, background clutter, high illumination,motion blur, low illumination and so on. 
The dataset contains201 identities, and we select 141 identities for training, 30identities for verification, and the remaining 30 identities fortesting.RGBNT100 [4", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "RESULTS OF OUR METHOD ON RGBNT201 COMPARING WITH STATE-OF-THE-ART METHODS IN THE CASE OF ONE MODALITY COMPLETE-MISSING.", "figure_data": "MethodsNo MissingMissing NIRMissing TIRMissing NIR+TIRmAPRank-1mAPRank-1mAPRank-1mAPRank-1MLFN [43]24.6623.6821.0321.4822.5222.7819.40 19.85Single-ModalityHACNN [44] OSNet [45]19.34 22.1214.71 22.8516.78 17.9013.50 18.4413.44 20.4311.23 21.0312.34 10.43 16.76 17.23CAL [46]25.6326.3023.0523.5821.3522.4020.52 21.63Multi-ModalityPFNet [3] DENet (Ours)38.46 42.4138.88 42.2331.90 35.4029.78 36.8225.50 33.0025.83 35.4026.40 23.44 32.42 29.20", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "RESULTS OF OUR METHOD ON RGBNT201 ARE COMPARED WITH STATE-OF-THE-ART METHODS IN THE CASE OF COMPLETE-MISSINGOF TWO MODALITIES AND NO MISSING OF MODALITIES.", "figure_data": "Missing NIRMissing TIRMethodsRGB to TIRTIR to RGBRGB to NIRNIR to RGBmAPRank-1mAPRank-1mAPRank-1mAPRank-1HC loss [25]16.7414.2516.5319.3220.5422.1921.8022.93Cross-ModalityDDAG [47]18.0914.7920.0118.0521.3920.3724.7621.66MPANet [48]19.0020.8220.2823.1420.0430.9626.0326.54OursDENetmAP: 35.4Rank-1: 36.82mAP: 33.00Rank-1: 35.40", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "RESULTS BETWEEN OUR METHOD DENET AND BASELINE MODEL WHEN USING DATA OF DIFFERENT MODAL COMBINATIONS ON RGBNT100 DATASET.", "figure_data": "MethodsNo MissingMissing NIRMissing TIRMissing NIR+TIRmAP Rank-1 Rank-5 mAPRank-1 Rank-5mAP Rank-1 Rank-5 mAPRank-1 Rank-5Baseline62.187.189.058.985.587.653.879.282.046.570.074.2DENet68.189.291.162.085.588.156.080.984.550.174.278.0", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "WITH STATE-OF-THE-ART SINGLE-MODALITY AND MULTI-MODALITY RE-ID METHODS ON RGBNT100.", "figure_data": "MethodsmAP Rank-1 Rank-5 Rank-10Single-ModalityDMML [49] Circle Loss [50] 59.4 58.582.0 81.785.1 83.786.2 85.2Multi-ModalityHAMNet [4] DENet (Ours) 68.1 64.184.7 89.288.0 91.189.4 92.0TABLE VABLATION STUDY OF PARTIAL MULTI-MODALITY RE-ID ON RGBNT201WITH MISSING RATE η = 0.25. \"✓\" INDICATES THAT THE COMPONENT ISINCLUDED.Lrec L sim DEMmAP Rank-1 Rank-5 Rank-10---25.9523.1236.1345.20✓--30.5227.9141.4051.73-✓-31.0528.6045.9456.85--✓31.4229.1044.9853.82✓-✓32.9331.6447.6759.90-✓✓31.2032.5852.2561.17✓✓-33.4035.5252.4062.83✓✓✓34.3040.1857.4265.90", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "EXPERIMENT BETWEEN CMFT MODULE AND GAN-BASED RECOVERY APPROACHES ON RGBNT201 WITH MISSING RATE η = 0.25. Fixed enhancement strategy means that only the RGB is used to enhance NIR and TIR respectively, while the TIR and NIR will not enhance other modalities, given that they may be missing. As shown in TableVII, the flexible use of dynamic cutting strategy for dynamic enhancement can obtain better performance than fixed enhancement. 
The main reason is that our strategy considers", "figure_data": "MethodsmAPRank-1Rank-5Rank-10Image Generation30.7032.6141.4550.74Feature Generation31.6235.3047.2557.73DENet (Ours)34.3040.1857.4265.90TABLE VIICOMPARATIVE EXPERIMENT OF DYNAMIC ENHANCEMENT AND OTHERENHANCEMENT STRATEGIES ON RGBNT201 DATATSET.Missing ModalityMethodsmAP Rank-1 Rank-5 Rank-10TIRFixed Ours30.00 33.0032.51 35.4053.62 55.0262.10 62.90NIRFixed Ours34.85 35.4034.60 36.8248.82 53.6057.81 64.10NIR+TIRFixed Ours31.10 32.4226.91 29.2042.00 45.2652.26 53.38Fixed39.2536.6452.3762.80Nonesingle-direction 35.0233.1447.1057.95Ours42.4142.2355.3064.52role in our method. In the case of partial multi-modalitydata, compared with no conversion constraints, our dynamicenhancement module can maintain the authenticity of therestored features as much as possible, thus can effectivelydeal with the missing data. The performance can be furtherimproved by using the dynamic feature enhancement modulefor modal interaction under different missing state.Evaluation on CMFT. The proposed cross-modality featuretransformation (CMFT) aims to recover the features of missingmodalities. To further verify the effectiveness of this module,we first compare it with other conversion approaches includingGAN-based image generation and feature generation meth-ods on RGBNT201 dataset. These two comparison methodsuse the training strategy of Generative Adversarial Networks(GAN) to generate false missing images and features respec-tively, so as to replace our CMFT module. As can be seen fromTable VI, the CMFT module can achieve superior performancethan the other two compared ways. One of the main reasonsis that the training of GAN needs a large amount of trainingdata and requires better synchronization between the generatorand the discriminator. The training process is prone to nonconvergence, resulting in unstable results, which limits theimprovement on multi-modality person Re-ID.Evaluation on DEM. In order to further verify the effective-ness of the proposed dynamic enhancement module (DEM),we design the comparative experiments of fixed enhancementin diverse modality missing cases.", "figure_id": "tab_5", "figure_label": "VI", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "[1]", "Explanation": "The cited work by Wu et al. is the first to launch the RGB and near-infrared cross-modality Re-ID, which provides foundational data and research insights for the citing paper to explore the Re-ID task in low illumination scenarios."}, {"Category": "Extension or Continuation", "Citation": "[2], [3]", "Explanation": "The cited works on multi-modality person re-identification and vehicle re-identification have received increasing interest in the computer vision community, indicating a trend of extending research in this area and providing a basis for the citing paper to build upon."}, {"Category": "Data Source", "Citation": "[4]", "Explanation": "The cited work by Li et al. is the first to launch the multi-modality Re-ID research, which serves as a data source for the citing paper to build upon in their own research on the Re-ID task in low illumination scenarios."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work by Zheng et al. contributes a multimodality Re-ID benchmark dataset called RGBNT201, which includes images of each person in three modalities (visible, near-infrared, and thermal-infrared). This dataset serves as a foundational element for the study conducted in the citing paper on multi-spectral vehicle Re-ID."}, {"Category": "Supporting Evidence", "Citation": "[5], [6]", "Explanation": "The cited works on generative adversarial network (GAN) based methods are used to support the claim that the recovery of missing modalities is a difficult task, highlighting the challenges in generating missing data for partial multi-modality Re-ID."}, {"Category": "Methodological Basis", "Citation": "[10]- [15]", "Explanation": "The cited works provide a foundational understanding of the person re-identification problem and its applications in intelligent security and other fields, which the citing paper builds upon in its research on the topic."}, {"Category": "Extension or Continuation", "Citation": "[16]", "Explanation": "The cited work by Wei et al. proposed a global-local-alignment descriptor (GLAD) for feature learning in person re-identification, which the citing paper extends by exploring new methods and techniques in this area."}, {"Category": "Supporting Evidence", "Citation": "[17]", "Explanation": "The cited work on the triple loss is a foundational metric learning method in person re-identification, which the citing paper uses to underpin its own research on improving the Re-ID performance through metric learning."}, {"Category": "Data Source", "Citation": "[18], [19]", "Explanation": "The cited works on improving the triple loss for Re-ID provide a data source for the citing paper to reference in its research on further enhancing the Re-ID performance through metric learning."}, {"Category": "Supporting Evidence", "Citation": "[20], [21], [22]", "Explanation": "The cited works on challenges in person re-identification, such as different viewpoints, illumination changes, and occlusion, provide supporting evidence for the citing paper in identifying and addressing these issues in its research on the topic."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work by Wu et al. 
introduces the RGB-infrared cross-modality person re-identification task and a baseline method, which the citing paper adopts to establish a new research direction in the field of person re-identification."}, {"Category": "Extension or Continuation", "Citation": "[2]", "Explanation": "The cited work by Mogelmose et al. extends the research on person re-identification by proposing a tri-modal approach that combines RGB, depth, and thermal data, building upon the work of Wu et al. in the field of cross-modality person re-identification."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work by Zheng et al. provides a benchmark dataset, RGBNT201, that the citing paper uses to evaluate the performance of the joint classifier in person re-identification research."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work by Zhang et al. proposes a method using CycleGAN to generate missing information, which the citing paper adopts in their research to process medical images."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work by HeMIS uses statistical features as embedding for decoding and feature fusion with a mean and variance calculation method, which the citing paper adopts in their research to process medical images."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work by Shen et al. proposes an adaptive network with a loss function to generate features similar to real features in the case of modality-missing, which the citing paper adopts in their research to process medical images."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work by Tsai et al. highlights the need for models to be robust to missing or noisy modalities during testing, and proposes a joint generative-discriminative objective for multi-modality data and labels, which the citing paper adopts in their research to process medical images."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work introduces the concept of joint representation, which the citing paper adopts to map the information of different modalities to the same feature space for multi-modality tasks."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work presents the concept of coordinated representation, which the citing paper utilizes to map the information of each modal in a multi-modality task with certain constraints between modalities."}, {"Category": "Extension or Continuation", "Citation": "[34]", "Explanation": "The cited work discusses the early fusion strategy for multi-modality feature fusion, which the citing paper extends by exploring the use of this strategy in multi-modality tasks."}, {"Category": "Extension or Continuation", "Citation": "[35], [36]", "Explanation": "The cited works present the late fusion strategy for multi-modality feature fusion, which the citing paper builds upon by discussing the use of this strategy in multi-modality tasks."}, {"Category": "Extension or Continuation", "Citation": "[37]", "Explanation": "The cited work introduces the concept of mixed fusion for multi-modality feature fusion, which the citing paper further extends by exploring the use of this strategy in multi-modality tasks."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work, ResNet50, is used as the backbone in the design of a multi-branch network in the citing paper to 
extract features from individual image triplets in a multi-modal approach."}, {"Category": "Extension or Continuation", "Citation": "[39]", "Explanation": "The cited work, the channel and spatial attention layer, is used in the design of the multi-branch network in the citing paper to focus on the most informative features in each modality."}, {"Category": "Methodological Basis", "Citation": "[40]", "Explanation": "The cited work provides a method for enhancing features in a given deterministic modality, which the citing paper adopts to make a single branch use the internal correlation between different modalities without interactive information."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work also contributes to the enhancement of features in a given deterministic modality, which the citing paper uses to make a single branch use the internal correlation between different modalities without interactive information."}, {"Category": "Extension or Continuation", "Citation": "Fig. 3 (a)", "Explanation": "The cited work in Fig. 3 (a) extends the concept of constructing a complete directed enhancement graph by taking the modalities as graph nodes, which the citing paper further develops in the case of missing TIR in Fig. 3 (b)."}, {"Category": "Extension or Continuation", "Citation": "Fig. 3 (b)", "Explanation": "The cited work in Fig. 3 (b) extends the idea of dynamic cutting strategy in the case of missing TIR, which the citing paper builds upon to show the adaptivity of the enhancement branches in maintaining the representation ability of multiple modalities."}, {"Category": "Data Source", "Citation": "Fig. 4", "Explanation": "The cited work in Fig. 4 provides the details of cross-modality enhancement, which the citing paper utilizes in the process of feature enhancement in a given deterministic modality."}, {"Category": "Methodological Basis", "Citation": "(42)", "Explanation": "The cited work introduces the label smooth strategy, which the citing paper adopts to prevent the network from overfitting in their research on feature extraction and classification."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work, PFNet, serves as a basis for the multi-modality Re-ID method implemented in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[43]-[46]", "Explanation": "The cited works provide a range of single modality Re-ID methods that the citing paper extends by comparing their performance with the proposed method in a multi-modality setting."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work, HC loss, is used as a method for cross-modality Re-ID in the case of one modality complete-missing, providing a specific approach for handling the missing modality in the data."}, {"Category": "Methodological Basis", "Citation": "[47]", "Explanation": "The cited work, DDAG, is also used as a method for cross-modality Re-ID in the case of one modality complete-missing, providing another approach for handling the missing modality in the data."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The cited work, MPANet, is used as a method for cross-modality Re-ID in the case of one modality complete-missing, providing a final approach for handling the missing modality in the data."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work provides the RGBNT100 dataset, which the citing paper uses to evaluate the 
performance of their method in the case of missing modalities."}, {"Category": "Extension or Continuation", "Citation": "[49]", "Explanation": "The cited work, DMML, is compared to the citing paper to assess the performance of their method in the context of multi-modality vehicle Re-ID."}, {"Category": "Extension or Continuation", "Citation": "[50]", "Explanation": "The cited work, Circle Loss, is compared to the citing paper to evaluate the performance of their method in the context of advanced single-modal Re-ID."}, {"Category": "Supporting Evidence", "Citation": "[51]", "Explanation": "The cited work by t-SNE is used to illustrate the effectiveness of the proposed dynamic enhancement module in the citing paper by visualizing the feature distribution of different identities and modalities."}, {"Category": "Extension or Continuation", "Citation": "[52]", "Explanation": "The cited work on GAN-based generation method is further discussed in the citing paper to show the effectiveness of cross-modality feature transformation in terms of missing modality recovery."}, {"Category": "Supporting Evidence", "Citation": "[53]", "Explanation": "The cited work on Grad-Cam is applied in the citing paper to visualize the class activation maps and overlay them on the original RGB images to demonstrate the effectiveness of the cross-modality feature transformation."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work, CAL, is used as a comparison method in the study conducted in the citing paper, providing a basis for evaluating the performance of the proposed model in missing modality scenarios."}]
[ { "figure_ref": [], "heading": "", "publication_ref": [ "b13", "b5", "b6", "b14", "b24", "b37", "b48", "b36", "b42", "b23", "b35", "b40", "b19", "b27", "b16", "b46", "b22", "b25", "b31", "b12", "b1", "b18", "b27", "b42", "b19", "b3", "b17", "b16" ], "table_ref": [], "text": "Fig. 1. Examples of ranking results comparison among the conventional single query, multi-shot and the proposed multi-query ReID on our collected MuRI dataset via ResNet-50 [14], where multi-shot (or multi-query) ReID is achieved by averaging the consecutive (or the different viewpoints) vehicle image features. The true and false matchings are bounded in green and red boxes, respectively.\nMost existing image-based vehicle Re-ID methods [6], [7], [15], [25], [38], [49] rely on the single query. However, the dramatic appearance changes caused by different viewpoints lead to the huge intra-class discrepancy.\nTo solve the viewpoint diversity, some works use vehicle keypoint information [37], [43] or vehicle local area features [24], [36], [41] to perform local feature alignment. Moreover, meta-information (e.g. vehicle attributes, spatialtemporal information) has also been explored to alleviate the difference from different viewpoints. Zheng et al. [20] introduce attribute fusion and Liu et al. [28] utilize attributes and spatial-temporal information to learn global vehicle representations. However, the key issue in this single-shot fashion is that one vehicle image only has a specific viewpoint as the single query, thus it is very challenging to match the gallery images with different viewpoints. To explore the comprehensive information in multi-shot images, Jin et al. [17] propose a multi-shot teacher branch that mines the information of multi-viewpoint images to guide the single-image student branch during training. However, they cannot guarantee that the teacher network contains information from multiple viewpoints, and they only use one query image in the inference phase, which still can not fully utilize the information from multiple query images. Zheng et al. [47] evaluate person Re-ID in the so-called multi-query fashion by average or max operations on the multiple person images from the same camera to obtain a new query feature in the inference process.\nHowever, the variability between query images in the same camera is small, and the average or max operations cannot fully utilize the diversity and complementarity among the multiple queries.\nIn fact, we can easily access multiple images of a certain vehicle from diverse viewpoints or scenes as the query in the real-life surveillance. On the one hand, in a certain scene, we can easily obtain the multi-view images of the same vehicle via the crowded dome and box cameras or the robust tracking algorithms. On the other hand, we can obtain the cross-scene multi-view images of the same vehicle by correlating the corresponding cross-scene tracklets. In addition to the above intelligent acquisition, we can also manually construct the multi-view query images in the surveillance system.\nBy integrating the complementary information among the multi-query images scenario, we can learn more comprehensive appearance feature representation of a certain vehicle, which is expected to significantly overcome the dramatic appearance changes caused by different viewpoints. Therefore, we rethink vehicle Re-ID in the more realistic multi-query inference setting in this paper. As shown in Fig. 
1, due to the limited view information, single query and multishot Re-ID tend to easily matching the vehicle images with similar viewpoints. By contrast, multi-query Re-ID can hit the more challenging right matchings with diverse viewpoints since it can integrate the complementary information among the diverse viewpoint queries. Giving the easily accessible multiple query images captured from a single or several nonoverlapping scenes/cameras, how to take the advantage of multiple queries with diverse viewpoint and illumination changes to achieve more accurate vehicle Re-ID? In this paper, we propose a novel viewpoint-conditioned network (VCNet), which effectively combines the complementary information from different vehicle viewpoints, for multi-query vehicle Re-ID. First, in the training process, to make full use of diverse viewpoint information of the vehicle, we propose a viewpoint conditional coding (VCC) module, which uses the learned vehicle viewpoint features as viewpoint conditional coding information and integrates it into the feature learning process of vehicle appearance. Second, we propose the viewpoint-based adaptive fusion (VAF) module to adaptively fuse the viewpoint coded appearance features of the vehicle in the multi-query inference process. In particular, it adaptively assigns weights to the appearance features of the multiple queries according to their viewpoint similarity to gallery. The higher viewpoint similarity between the query image to the current gallery image, the larger weight to the appearance feature of the corresponding query image. Finally, to tolerate the missing viewpoints in the query set in the inference, we propose a cross-view feature recovery module (CVFR) to recover the features of the missing viewpoints.\nIn addition, although conventional vehicle Re-ID metrics (namely CMC, mAP and mINP) avoid the easy matching from the same camera between query and gallery, they mainly focus on the global relation between the query and gallery while ignoring the local relations within the gallery. Therefore, they tend to result in virtual high scores when retrieving easy positive samples with similar viewpoints from one single camera. Although Zhao et al. [46] propose the evaluation metric Cross-camera Generalization Measure (CGM) to improve the evaluations by introducing position-sensitivity and cross-camera generalization penalties. It still suffers from the influence of similar viewpoint samples under the same camera. Since they only easily divide target images captured from the same cameras into individual groups, and fail to consider the positive samples with similar viewpoints from the individual group. In this paper, we argue that the realistic Re-ID cares more about the cross-scene retrieval ability of the model, which is more crucial to the intelligent transportation society to trace the trajectory of the certain vehicle among the identity of the vast Skynet in the smart city. Therefore, we propose a new metric, the mean Cross-scene Precision (mCSP), which focuses on the cross-scene retrieval ability by suppressing the positive samples with similar viewpoints from the same camera.\nAt last, although existing vehicle Re-ID datasets, including VehicleID [23], VeRI-776 [26], and VERI-Wild [32], provide important benchmarks to evaluate the state-of-the-art methods, the crucial issue is the number of cameras is limited (12 in Vechile ID, 20 in VeRI-776 and 174 in VERI-Wild). Therefore, each vehicle only appears with limited cameras. 
Furthermore, although they contain vehicle images from multiple viewpoints, the number of viewpoints for each vehicle ID is still limited. Herein, we propose a new vehicle image dataset captured by large amount of cameras (i.e., 6142 cameras) from a real-life transportation surveillance system, named Multi-query Re-Identification dataset (MuRI). MuRI contains diverse viewpoints, including f ront (side f ront), side and rear (side rear), and large number of crossed scenes/cameras for each vehicle (i.e., 34.6 in average), which provides more realistic and challenging scenarios for multi-query vehicle Re-ID.\nTo the best of our knowledge, we are the first to launch the multi-query setting in vehicle Re-ID, which jointly uses images from multiple scenes/viewpoints of a vehicle as a query. The contributions of this paper are mainly in the following four aspects.\n• We introduce a new task called multi-query vehicle reidentification, which devotes to inferring the cross-scene re-identification by exploring the complementary information among the multiple query images with different viewpoints. The task is challenging, but easily accessible and very practical in realistic transportation systems. Most of the vehicle Re-ID methods rely on single query image. In order to learn the detailed features of vehicles and expand the subtle differences between the same models, some works introduce the idea of region of interest prediction or attention models to mine the salient regions of vehicles. He et al. [13] propose a simple and effective partial regularization method, which detects the regions of interest using pretrained detectors and introduce multi-dimensional constraints at the part level (windows, lights, and make alike) into the vehicle Re-ID framework. It improves the model's ability to learn local information while enhancing the subtle difference perception. Zhang et al. [2] propose an attention network using local region guidance, which mines the most important local regions by learning the weights of candidate search regions to increase the weights of discriminative features in vehicle images while reducing the effect of irrelevant background noise. Khorramshshi et al. [19] learn to capture localized discriminative features by focusing attention on the most informative key-points based on different orientation.\nTo handle the similar appearance of the different vehicles, some works propose to use the additional annotation information of the dataset to learn more accurate local features of the vehicle. Liu et al. [28] exploit multi-modal data from large-scale video surveillance, such as visual features, license plates, camera locations, and contextual information, to perform coarse-to-fine search in the feature domain and nearto-far search in the physical space. Wang et al. [43] extract local area features in different directions based on 20 keypoint locations, which are aligned and combined by embedding into directional features. The spatio-temporal constraints are modeled by spatio-temporal regularization using log-normal distribution to refine the retrieval results. Zheng et al. [20] introduce a deep network to fuse the camera views, vehicle types and color into the vehicle features.\nMetric learning based approaches focus on solving the problem of intra-class variation and inter-class similarity caused by view variation. Bai et al. 
[4] propose a deep metric learning method that divides samples within each vehicle ID into groups using an online grouping method, and create multigranularity triple samples across different vehicle IDs as well as different groups within the same vehicle ID to learn finegrained features. Jin et al. [18] propose a multi-center metric learning framework for multi-view vehicle Re-ID that models potential views directly from the visual appearance of vehicles, and constrains the vehicle view centers by intra-class ranking loss and cross-class ranking loss to increase the discriminative information of different vehicles.\nTo explore more information in the query, Jin et al. [17] explore the comprehensive information of multi-shot images of an object in a teacher-student manner. Although they use the multi-shot teacher branch to guide the single-image branch during training, it still contained only single-image information during the inference phase." }, { "figure_ref": [], "heading": "B. Vehicle Re-ID Metrics", "publication_ref": [ "b22", "b46", "b22", "b46", "b44", "b45" ], "table_ref": [], "text": "Vehicle Re-ID is an image retrieval subproblem, and to evaluate the performance of Re-ID methods. Cumulative Matching Characteristics (CMC) [23] and mean Average Precision (mAP) [47] are two widely used measures. CMCk (also known as k-level matching accuracy) [23] indicates the probability of a correct match among the top k ranked retrieval results. When comparing the performance of different methods, if there is little difference in performance between methods, the cumulative matching performance curves will overlap for the most part, making it impossible to accurately determine good or bad performance. In order to compare the performance differences between methods more concisely, the cumulative matching accuracy at some key matching positions is generally selected for comparison, where rank1 and rank5 are more common, indicating the probability of correctly matching the first 1 and the first 5 images in the result sequence, respectively.\nAnother metric, the mean accuracy (mAP) [47], is used to evaluate the overall performance of the Re-ID methods and represents the average of the accuracy of all retrieval results. It is originally widely used in image retrieval. For Re-ID evaluation, it can address the issue of two systems performing equally well in searching the first ground truth, but has different retrieval abilities for other hard matches. However, these two widely used measures cannot assess the ability of the model to retrieve difficult samples. To address this issue, Ye et al. [45] propose a computationally efficient metric, namely a negative penalty (NP), which measures the penalty to find the hardest correct match. To measure the results derived from individual cameras, et al. [46] propose a cross-scene generalization measure (CGM). It first divides the vehicle images captured by the same camera into individual groups, then calculate the average ranking values for each camera.\nHowever, all of the above metrics ignore the similar positive samples in the same camera, which leads to the virtual high metric scores." }, { "figure_ref": [], "heading": "C. Vehicle Re-ID Datasets", "publication_ref": [ "b25", "b22", "b31", "b25", "b22", "b22" ], "table_ref": [], "text": "Recent vehicle Re-ID methods are mainly evaluated on three public datasets, including VeRI-776 [26], VehicleID [23] and VERI-Wild [32]. 
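To make these rank-based measures concrete before turning to the individual datasets, the short NumPy sketch below (our own illustration, not code released with any of the cited works) computes CMC@k and the average precision (AP) for a single query from the ranked list of gallery match flags; averaging AP over all queries then gives mAP.

```python
import numpy as np

def cmc_and_ap(ranked_match_flags, ks=(1, 5, 10)):
    """CMC@k and average precision for one query.

    ranked_match_flags: boolean sequence where entry i is True when the
    gallery image at rank i (0-based) shares the query identity.
    """
    flags = np.asarray(ranked_match_flags, dtype=bool)
    # CMC@k equals 1 when at least one correct match appears in the top-k ranks.
    cmc = {k: float(flags[:k].any()) for k in ks}
    # AP averages the precision measured at the rank of every correct match.
    hit_ranks = np.flatnonzero(flags)
    precisions = [(i + 1) / (rank + 1) for i, rank in enumerate(hit_ranks)]
    ap = float(np.mean(precisions)) if precisions else 0.0
    return cmc, ap

# Toy ranked list with correct matches at ranks 3 and 7 (1-based):
# CMC@1 = 0, CMC@5 = CMC@10 = 1, AP is roughly 0.31.
print(cmc_and_ap([0, 0, 1, 0, 0, 0, 1]))
```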
VeRI-776 [26] dataset contains 49,360 images of 776 vehicles, of which the samples are obtained by 20 cameras on a circular road in a 1.0 Square Kilometers area for a short period of time (4:00 PM to 5:00 PM during the day), with each vehicle being captured by at least 2 and at most 18 cameras. VehicleID [23] includes 221,763 images about 26,267 vehicles, mainly containing both front and rear views. For comprehensive evaluation of the vehicle Re-ID methods, VehicleID [23] divides the test set into 3 subsets, large, medium and small, according to the size of the vehicle images. " }, { "figure_ref": [], "heading": "Multi-query Inference Stage", "publication_ref": [ "b22", "b31", "b31", "b31", "b25", "b22", "b31" ], "table_ref": [], "text": "Fig. 2. The pipeline of our inference framework. In the multi-query inference stage, viewpoint weights will be calculated between query and gallery viewpoint features, to integrate the complementary information among different viewpoints of the vehicle, we adaptively fuse the generated viewpoint weights with appearance features by viewpoint-based adaptive fusion (VAF) module. When we miss a query image from a random viewpoint, the appearance feature will be recovered by the cross-view feature recovery (CVFR) module. \"⊕\", \"⊗\" and \"⊙\" denote concatenation, cosine similarity calculation, and multiply respectively.\nVehicleID [23] contains limited views (only two views, i.e., front view and rear view). In addition, the images in this dataset mainly contain less complex backgrounds, occlusions and illumination changes. VERI-Wild [32] is collected in a 200 Square Kilometers suburban areas and contains 416,314 images of 40,671 vehicles taken by 174 traffic cameras. The training set consists of 277,797 images (30,671 cars) and the testing set consists of 138,517 images (10,000 cars). Similarly, the testing set of VERI-Wild [32] is divided into three subsets according to image size: large, medium, and small. The vehicle images in VERI-Wild [32] mainly have little variability in views, mostly in front and rear views.\nAlthough impressive results have been achieved on these datasets, the vehicle Re-ID problem is still far from being addressed in the real-world scenarios. First, these datasets contain only a limited number of scenarios and cameras. The samples in VeRI-776 [26], VehicleID [23] and VERI-Wild [32] are captured by 20, 12 and 174 cameras, respectively. This is inconsistent with the real-life surveillance system in the smart city which contains tens of thousands cameras. Second, the distribution of vehicle views is uneven, with most vehicles containing images of only the front and rear views and lacking side images. Moreover, the number of cameras that each vehicle crosses is limited in the existing datasets, and thus it is difficult to evaluate the cross-scene retrieval capability of the models." }, { "figure_ref": [], "heading": "III. VCNET: VIEWPOINT-CONDITIONED NETWORK", "publication_ref": [], "table_ref": [], "text": "In this work, to effectively combine the complementary information from different vehicle viewpoints, we propose a novel viewpoint-conditioned network (VCNet) for multi-query vehicle Re-ID." }, { "figure_ref": [], "heading": "A. Network Architecture", "publication_ref": [], "table_ref": [], "text": "Our VCNet includes two stages: multi-query inference as shown in Fig. 2. First, we propose a viewpoint conditional coding (VCC) module to learn specific viewpoint information. 
By encoding the vehicle's viewpoint features and embedding them into the learning process of vehicle detail features, it enforces the model to focus on the detail information under a specific viewpoint of the vehicle. As shown in Fig. 3, we use the vehicle's viewpoint features as conditional encoding information, to fuse with the vehicle detail features obtained at each layer of the network. It thus enables the model to focus on the vehicle viewpoint information while learning the discriminative features at that viewpoint.\nTo integrate the complementary information among different viewpoints of the vehicle, we propose a viewpoint-based adaptive fusion (VAF) module for multi-query inference. As shown in Fig. 2, we first assign the appearance feature weights of the query according to the similarity between the multiquery and gallery viewpoint features, then adaptively fuse the features of the multi-query according to the obtained weights, so as to take into account the complementarity and specificity between the different viewpoint features of the vehicle.\nTo handle the scenario of multi-query images with missing viewpoints, we further propose a cross-view feature recovery (CVFR) module to recover the missing appearance features. CVFR module maximizes the common information between different viewpoints through comparative learning, and completes the reconstruction between different viewpoint features based on the common information. " }, { "figure_ref": [], "heading": "B. Viewpoint Conditional Coding Module", "publication_ref": [ "b13", "b42" ], "table_ref": [], "text": "The large intra-class variability due to different viewpoints is a huge challenge for vehicle Re-ID. Therefore, for the vehicle with different viewpoints, the network should focus on different detailed regions. To make full use of the information of vehicle viewpoint, we propose a viewpoint conditional coding (VCC) module, as shown in Fig. 3. We introduce a two-stream network structure in the VCC module, the upper and lower branch is used to learn the appearance and viewpoint features of the vehicle respectively, and both branches use ResNet-50 [14] as feature extractor.\nFirst, different from Wang et al. [43] which mark 8 viewpoints (f ront, rear, lef t, lef t f ront, lef t rear, right, right f ront and right rear) on VeRI-776 dataset, we re-divide the 8 viewpoints into 3 viewpoint labels (f ront, rear and side) to maximize the variation between different viewpoints for the training of viewpoint prediction. To obtain a more robust and refined viewpoint features, on the basis of the training on the VeRI-776 dataset, we re-train the viewpoint prediction network on our MuRI dataset. To regress viewpoints, we use the crossentropy loss as the supervision of the training of viewpoints as follows,\nL view = - 1 N N i=1 log(p(v i |x i )),(1)\nwhere N represents the number of images in a training batch, x i denotes the input image, and v i denotes the viewpoint label. Then, to enforce the network focus on more discriminative regions based on the viewpoint information, the learned viewpoint features are encoded to the vehicle appearance feature learning branch. Here, we use different deconvolution functions as the viewpoint encoders to map the viewpoint features to the embedded features whose dimensions are same with the corresponding layers. Next, we sum the appearance features and the embedded features and then send to the next layer of the network as viewpoint encoding information. 
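A condensed PyTorch-style sketch of this viewpoint conditional coding scheme is given below. It is our own simplification rather than the authors' implementation: the stage channel widths, the 1×1 transposed-convolution encoders and the broadcast addition of the viewpoint code are assumptions that merely mirror the description above, and the loss terms discussed next are omitted.

```python
import torch.nn as nn
from torchvision.models import resnet50

class VCCSketch(nn.Module):
    """Two-stream viewpoint conditional coding block (illustrative only)."""

    def __init__(self, num_ids, num_views=3):
        super().__init__()
        app, view = resnet50(weights=None), resnet50(weights=None)
        # Appearance stream, split into stages so a viewpoint code can be injected after each.
        self.stem = nn.Sequential(app.conv1, app.bn1, app.relu, app.maxpool)
        self.stages = nn.ModuleList([app.layer1, app.layer2, app.layer3, app.layer4])
        # Viewpoint stream: ResNet-50 backbone plus a classifier over {front, rear, side}.
        self.view_backbone = nn.Sequential(*list(view.children())[:-1])
        self.view_cls = nn.Linear(2048, num_views)           # supervised by Eq. (1)
        # One transposed-convolution encoder per stage maps the 2048-d viewpoint
        # feature to the channel width expected by that stage of the appearance stream.
        self.encoders = nn.ModuleList(
            nn.ConvTranspose2d(2048, c, kernel_size=1) for c in (256, 512, 1024, 2048))
        self.id_cls = nn.Linear(2048, num_ids)

    def forward(self, x):
        v = self.view_backbone(x)                            # (B, 2048, 1, 1) viewpoint feature
        view_logits = self.view_cls(v.flatten(1))
        f = self.stem(x)
        for stage, enc in zip(self.stages, self.encoders):
            f = stage(f) + enc(v)                            # appearance map plus broadcast viewpoint code
        f = f.mean(dim=(2, 3))                               # pooled, viewpoint-coded appearance feature
        return self.id_cls(f), view_logits, f, v.flatten(1)
```

At inference time, only the pooled appearance feature and the viewpoint feature would be kept and passed to the fusion step described later.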
Finally, we can obtain the vehicle features which contain specific viewpoint information. The cross-entropy loss and triplet loss are used for the training of appearance features as follows,\nL appearance = - 1 N N i=1 log(p(y i |x i ))+ 1 N N i=1 (m + d(f i a , f i p ) -d(f i a , f i n )),(2)\nwhere y i denotes the appearance label, m denotes the margin, d(•) indicates the Euclidean distance, f a , f p and f n denotes anchor, positive and negative appearance features respectively. At last, the training loss of VCC module can be formulated as,\nL vcc = L view + L appearance .(3)\nTo demonstrate the effectiveness of VCC module, we visualize the features of the last layer, as shown in Fig. 4. VCC module can reduce the interference of background and enforce the model to better focus on the vehicles, comparing Fig. 4 (a 2 ) with Fig. 4 (a 1 ). In addition, VCC module encourages the model to focus on the main clues for classification and explores more discriminative regions, comparing Fig. 4 (b 2 ) and (c 2 ) with Fig. 4 (b 1 ) and (c 1 )." }, { "figure_ref": [], "heading": "C. Viewpoint-based Adaptive Fusion", "publication_ref": [], "table_ref": [], "text": "Although we have got a robust features that contains viewpoint information in VCC module, the limited information in a single query image significantly hinders the performance of vehicle Re-ID in the inference stage. To integrate multiviewpoint information of the vehicle in the inference stage and solve diverse viewpoint gaps between query and gallery, we propose a viewpoint-based adaptive fusion (VAF) module in the inference process, which adaptively fuses the generated viewpoint weights with appearance features. The inference process is shown in Fig. 2.\nFirst, we jointly use 3 vehicle images with different viewpoints in the query set and send them into the pre-trained VCC module to extract the appearance features and viewpoint features respectively. To obtain the viewpoint similarity between 3 query images with gallery images, we calculate the features cosine distance between 3 query viewpoint features with gallery viewpoint features as follows,\ns i = < f qi v , f g v > ||f qi v || × ||f g v || ,(4)\nwhere i = 1, 2, 3, f qi v and f g v denotes query and gallery viewpoint features, < x, y > indicates the inner product of x and y.\nThen, we can obtain the similarity weight set W = { w i |i = 1, 2, 3 } by computing the similarity of query and gallery viewpoint features using the concatenation and softmax function. To adaptively fuse the viewpoint information in the appearance features, we multiply the multi-query appearance features F = { f qi a |i = 1, 2, 3 } with the similarity weight set W to obtain the weighted appearance features F ′ as follows,\nF = f q1 a × w 1 , f q1 a × w 2 , f q1 a × w 3 ,(5)\nTo this end, the query appearance features with the similar viewpoint as the gallery image will be assigned a large weight. If a query image from a random viewpoint is missing, the appearance feature will be recovered by the cross-view feature recovery (CVFR) module. For the final recognition task, we perform a similarity calculation between the fused appearance features with the gallery appearance features and obtain the corresponding scores, which are summed to obtain the final recognition scores." }, { "figure_ref": [], "heading": "D. Cross-view Feature Recovery Module", "publication_ref": [ "b21" ], "table_ref": [], "text": "In some scenarios, the query data might not contain some viewpoints of vehicle images. 
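Before dealing with missing viewpoints, the fusion rule of Eqs. (4)-(5) can be summarised in a few lines. The snippet below is a sketch with our own tensor names; it applies the softmax viewpoint weights to the per-query appearance similarities, which corresponds to weighting the appearance features themselves when an inner-product style score is used.

```python
import torch
import torch.nn.functional as F

def vaf_score(query_app, query_view, gallery_app, gallery_view):
    """Viewpoint-based adaptive fusion score of a multi-query set against one gallery image.

    query_app:  (Q, D)  appearance features of the Q query images.
    query_view: (Q, Dv) viewpoint features of the same query images.
    gallery_app, gallery_view: (D,), (Dv,) features of the gallery image.
    """
    # Eq. (4): cosine similarity between every query viewpoint and the gallery viewpoint.
    view_sim = F.cosine_similarity(query_view, gallery_view.unsqueeze(0), dim=1)  # (Q,)
    # Softmax over the queries turns these similarities into fusion weights.
    weights = torch.softmax(view_sim, dim=0)                                      # (Q,)
    # Appearance similarity of each query to this gallery image.
    app_sim = F.cosine_similarity(query_app, gallery_app.unsqueeze(0), dim=1)     # (Q,)
    # Eq. (5): queries whose viewpoint resembles the gallery image dominate the final score.
    return (weights * app_sim).sum()

# Toy usage: three complementary queries scored against one gallery image.
q_app, q_view = torch.randn(3, 2048), torch.randn(3, 128)
g_app, g_view = torch.randn(2048), torch.randn(128)
score = vaf_score(q_app, q_view, g_app, g_view)
```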
While our network accepts three query vehicle images as inputs, and thus can not handle such data with missing viewpoints. To solve this problem, referring to Lin et al. [22] in multi-view, we propose cross-view feature recovery (CVFR) module to recover the missing appearance features. To learn information-rich consistent representations, CVFR module maximizes the mutual information between different viewpoints by contrast learning. To recover the missing appearance features, CVFR module minimizes the conditional entropy of different viewpoints by dual prediction. For the sake of convenience, we assume that one viewpoint from f ront, rear, and side is randomly missing, and the recovery process of the missing appearance features is as follows. First, we send two known images from different viewpoints to the pretrained VCC module and obtain the appearance features x 1 and x 2 , respectively. Then the latent representations Z 1 , Z 2 are obtained after the respective encoders E 1 , E 2 , and the reconstructed features x ′ 1 , x ′ 2 are obtained after the decoders D 1 , D 2 . The reconstructed differences will be minimized by the mean squared error loss function as follows,\nL mse = 2 v=1 m t=1 ||x t v -x ′t v || 2 ,(6)\nwhere x t v denotes the t-th sample of x v . To facilitate data recovery ability, contrastive learning is used to learn the common information between different viewpoints and to maximize the common information. The contrastive loss mathematical formula is as follows:\nL cl = - m t=1 (I(Z t 1 , Z t 2 ) + α(H(Z t 1 ) + H(Z t 2 ))),(7)\nwhere I denotes the mutual information, H is the information entropy, and the parameter α is set as 9 to regularize the entropy in our experiments. From information theory, information entropy is the average amount of information conveyed by an event. Hence a larger entropy H(Z i ) denotes a more informative representation Z i . The viewpoint predictors G 1 , G 2 are used to generate latent representations of the corresponding viewpoints, and the differences are generated by minimizing the loss function,\nL pre = 2 v=1 m t=1 ||g t v -Z t 3-v || 2 ,(8)\nwhere v represents the number of available viewpoints, and we can obtain the missing appearance feature with random viewpoints from available ones.\nTo further narrow the differences between the generated and the original viewpoint features, we feed the latent representations generated by the viewpoint predictor into the corresponding decoders separately. Then we obtain the generation of reconfiguration features g ′ 1 , g ′ 2 , which are constrained by mean squared error loss,\nL mse = 2 v=1 m t=1 ||x t v -D v (g t k )|| 2 ,(9)\nwhere D v denotes Decoders." }, { "figure_ref": [], "heading": "IV. MURI DATASET", "publication_ref": [ "b4" ], "table_ref": [], "text": "To evaluate the proposed VCNet on multi-query vehicle Re-ID, we propose a multi-views and unconstrained vehicle Re-ID dataset, MuRI, to integrate the complementary information among different viewpoints during inference. Data Acquisition. The MuRI dataset is collected in a large city with more than 1000 Squares Kilometers. to obtain the vehicle images from more diverse cameras, we first search the corresponding vehicle images by license plate in the Public Security City Service Platform, which monitors tens of thousands of cameras in the city. 
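Before continuing with the dataset details, the recovery path of the CVFR module described above can be outlined as follows. This is a schematic of our own (layer sizes, module names and the omission of the contrastive term of Eq. (7) are simplifications): two view-specific autoencoders are trained with the reconstruction objectives of Eqs. (6) and (9), dual predictors approximate Eq. (8), and at inference the missing viewpoint's appearance feature is decoded from the prediction.

```python
import torch.nn as nn

def mlp(dims):
    # Fully connected stack with ReLU activations between layers.
    layers = []
    for i in range(len(dims) - 1):
        layers += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers[:-1])  # no activation after the last layer

class CVFRSketch(nn.Module):
    """Cross-view feature recovery between two viewpoints (illustrative only)."""

    def __init__(self, feat_dim=2048, latent_dim=256):
        super().__init__()
        self.enc = nn.ModuleList(mlp([feat_dim, 1024, latent_dim]) for _ in range(2))   # E1, E2
        self.dec = nn.ModuleList(mlp([latent_dim, 1024, feat_dim]) for _ in range(2))   # D1, D2
        self.pred = nn.ModuleList(mlp([latent_dim, latent_dim, latent_dim]) for _ in range(2))  # G1, G2

    def training_losses(self, x1, x2):
        z1, z2 = self.enc[0](x1), self.enc[1](x2)
        # Eq. (6): reconstruct each view's appearance feature from its own latent code.
        rec = ((self.dec[0](z1) - x1) ** 2).mean() + ((self.dec[1](z2) - x2) ** 2).mean()
        # Eq. (8): dual prediction of the other view's latent code.
        dual = ((self.pred[0](z1) - z2) ** 2).mean() + ((self.pred[1](z2) - z1) ** 2).mean()
        # The contrastive / entropy term of Eq. (7) is omitted in this sketch.
        return rec + dual

    def recover_view2(self, x1):
        # Inference with view 2 missing: encode view 1, predict the latent code of
        # view 2, then decode it into an appearance feature, cf. Eq. (9).
        return self.dec[1](self.pred[0](self.enc[0](x1)))
```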
To ensure that each vehicle has rich viewpoint information, we obtain the vehicle images of different viewpoints at a traffic intersection, which has three or four surveillance cameras from different directions. As shown in Fig. 5, to ensure the diverse viewpoint information, we choose the rotatable dome cameras from the Public Security City Service Platform as the shooting cameras. For the videos captured from the platform, we generate the surrounding boxes by the tracking detection algorithm [5]. For effective evaluation, we automatically select the vehicle images in every 3 adjacent frames, followed by manual checking to avoid data redundancy.The time span of vehicle appearance in the data set is about half the year, and the vehicle resolution is variable due to the varying distances between cameras and vehicles of interest. " }, { "figure_ref": [], "heading": "V. MCSP METRIC", "publication_ref": [], "table_ref": [], "text": "One problem with existing metrics is the lack of consideration of cross-scene scenarios. To this end, a new metric, named mean Cross-scene Precision (mCSP) is proposed in this paper to ensure the cross-scene retrieval capability of the network. The main idea of mCSP can be summarized as follows: if there exist positive samples with similar viewpoints from the same camera, we consider them as the same scene and remove them from the ranked list. Given a ranked list, we use T P to denote the number of positive samples retrieved, f i c and f j c denotes the viewpoint feature of the two positive sample image retrieved under the same camera c. When their Euclidean distance d(f i c , f j c ) is smaller than a threshold hyperparameter ε, we consider that the viewpoints of f j c and f i c is similar. We use SC to denote the number of samples with the similar viewpoint under the same camera ID in T P , and F P denotes the number of positive samples with prediction errors, mCSP can be expressed in the following form,\nmCSP = Ncs i=0 T P -SC T P -SC+F P N cs ,(10)\nwhere N cs denotes the captured target images from positive samples with different cameras. We visualize the calculation process of CSP, as shown in Fig. 8. As shown in Fig. 8 " }, { "figure_ref": [], "heading": "VI. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Experiments Setup", "publication_ref": [ "b13", "b7" ], "table_ref": [], "text": "Train. We use ResNet-50 [14] pre-trained on ImageNet [8] as our backbone. The model is trained for 80 epochs with the SGD optimizer. We warm up the learning rate to 5e-2 in the first 5 epochs and the backbone is frozen in the warm-up step.\nThe learning rate of 5e-2 is kept until the 60th, drops to 5e-3 in the 60th epoch, and drops to 5e-4 in the 75th epoch for faster convergence. We first pad 10 pixels on the image border, and then randomly crop it to 256×256. We also augment the data with random erasing. Further, we add a Batch Normalization layer after the global feature. A fully connected layer is added to map the global feature to the ID classification score. The batch size is 36 in the MuRI dataset.\nInference. In our inference process, we evaluate the methods in three inference ways, including single query, average query and multi-query. As demonstrated in Fig. 9 (a), single inference directly calculate the cosine distance between each query and the gallery set, which ignores the multi-view information during the inference. 
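Returning briefly to the mCSP metric of Eq. (10) before the remaining inference settings, the sketch below shows one possible per-query implementation under our reading of the equation; the variable names and the viewpoint-distance threshold eps are ours, and the exact suppression convention may differ from the authors' implementation. Averaging the returned value over all queries gives mCSP.

```python
import numpy as np

def csp_single_query(ranked_ids, ranked_cams, ranked_view_feats, query_id, eps=0.5):
    """Cross-scene precision for one query under one reading of Eq. (10)."""
    tp = sc = fp = 0
    kept = {}            # camera id -> viewpoint features of true matches kept so far
    precisions = []      # precision measured at each retained cross-scene true match
    for vid, cam, vfeat in zip(ranked_ids, ranked_cams, ranked_view_feats):
        if vid != query_id:
            fp += 1
            continue
        tp += 1
        prev = kept.setdefault(cam, [])
        if any(np.linalg.norm(vfeat - p) < eps for p in prev):
            sc += 1      # similar viewpoint under the same camera: suppressed
        else:
            prev.append(vfeat)
            precisions.append((tp - sc) / (tp - sc + fp))
    return float(np.mean(precisions)) if precisions else 0.0
```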
When facing the multiple queries, the intuitive inference way is the average inference, which computes the average value of multiple query features, as shown in Fig. 9 (b). However, simply averaging the query features can not effectively use the different viewpoint information of the vehicle. To adaptively utilize the complementary information in the multiple queries from different cameras with diverse viewpoints combinations, we propose the multi-inference for the proposed VCNet, as shown in Fig. 9 (c). Specifically, it generates viewpoint weights by the similarity between multiple queries and gallery view features, then fuses the generated viewpoint weights with appearance features. " }, { "figure_ref": [ "fig_6" ], "heading": "B. Experimental Results", "publication_ref": [ "b13" ], "table_ref": [ "tab_2", "tab_3" ], "text": "Comparison with the state-of-the-arts. To verify the effectiveness of the proposed VCNet with the multi-query setting, we compare four state-of-the-art vehicle Re-ID methods on the collected MuRI dataset. Specifically, we construct the multi-query setting with the number of query N Q = 3, which including three different viewpoints f ront, rear and side respectively. We evaluate the state-of-the-art methods in both single and average inferences for comparison. As shown in Table II, all the state-of-the-art methods achieve significant improvement in the average inference compared to the single inference, which evidences that using multiple queries can better incorporate the complementary information among the images. By progressively embedding viewpoint features to appearance feature learning via the viewpoint conditional coding (VCC), and integrating the complementary information among different viewpoints via the viewpoint-based adaptive fusion (VAF), our VCNet with multi-inference achieves superior performance compared to the state-of-the-art methods. This validates the effectiveness of the proposed VCNet while handling the multi-query inference for vehicle Re-ID. Fig. 10 shows the corresponding ranking results of multiple queries from different viewpoints, which further evidences the promising performance of our method while handling the challenging cross-scene retrieval problem compared to other methods.\nAblation study of VCNet. To verify the effective contribution of the components in our model, we implement the ablation study on the viewpoint conditional coding (VCC) module, viewpoint-based adaptive fusion (VAF) module, and cross-view feature recovery (CVFR) module on our MuRI dataset, as shown in Table III. We employ ResNet-50 [14] in a single query fashion as our baseline to extract vehicle appearance features. Note that introducing VCC significantly Evaluation on mCSP. To evaluate the capability of crossscene retrieval of the Re-ID methods, we reconstruct by adding 5 and 10 images of the same viewpoint under the same camera for each vehicle in the original gallery set. To make a fair comparison, we further delete 5 and 10 images of that vehicle under different cameras to ensure the total number of images of each vehicle remain unchanged. Fig. 11 shows the comparison of the proposed mCSP comparing with existing metrics on both the original and modified gallery sets. By adding the images with the same viewpoint under the same camera, all the Rank1, mAP, and mINP increase in the modified galleries due to more easy matching samples. 
This is irrational in realistic Re-ID where the capability of matching more positive vehicle images across more diverse scenes is even crucial. By contrast, the proposed mCSP declines with the images with the same viewpoint under the same camera increase, since the positive samples recognized under different cameras become less. The mCSP only focuses on retrieving positive sample images from different cameras in the gallery set and images from different views in the same camera, which can more realistically reflect the retrieval accuracy across cameras in the realistic Re-ID.\nEvaluation on the number of query Fig. 12 evaluates the multi-query inference with the different number of queries. We can see that as the number of queries increases, the performance of the multi-query consistently improves, benefiting from the richer information in the multiple diverse images about the vehicle. Note that the significant improvement in each metric is achieved by increasing from 1 random view to 2 different views, and then to 3 full views (f ront+back+side).\nWhen the number of queries continues to increase from 3 to 7, the performance of each metric consistently increases, but with a slightly slower increase. This indicates that the larger diversity in viewpoints between queries, the better improvement in multi-query inference, since more complementary information between different viewpoints of the vehicle.\nVII. CONCLUSION In this paper, we first launch the multi-query vehicle Re-ID task which leverages multiple queries to overcome the viewpoint limitation of a single one, and propose a viewpointconditioned network (VCNet) for multi-query vehicle Re-ID. First, we propose a viewpoint conditional coding (VCC) module in the training process to learn specific viewpoint information. Then, we propose a viewpoint-based adaptive fusion (VAF) module to integrate the complementary information among different viewpoints in the inference process. To handle the scenario when query images from random viewpoints, we propose the cross-view feature recovery (CVFR) module to recover the missing appearance feature. Finally, a new metric (mCSP) and a new dataset (MuRI) are proposed to measure the ability of cross-scene recognition and conduct multi-query evaluation experiments respectively. Comprehensive experiments demonstrate the necessity of the multi-query inference and the effectiveness of the proposed VCNet. This work provides new research direction for vehicle Re-ID and related areas." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "This research is supported in part by the National Natural Science Foundation of China (61976002), the University Synergy Innovation Program of Anhui Province (GXXT-2020-051 and GXXT-2019-025), and the Natural Science Foundation of Anhui Province (2208085J18).\nA. Zheng and C." } ]
2023-05-25
[ { "authors": "Saghir Alfasly; Yongjian Hu; Haoliang Li; Tiancai Liang; Xiaofeng Jin; Beibei Liu; Qingli Zhao", "journal": "IEEE Access", "ref_id": "b0", "title": "Multi-label-based similarity learning for vehicle re-identification", "year": "2019" }, { "authors": "Haonan Haoran An; Kaiwen Fan; Hai-Miao Deng; Hu", "journal": "IEEE", "ref_id": "b1", "title": "Part-guided network for pedestrian attribute recognition", "year": "2019" }, { "authors": "Yan Bai; Jun Liu; Yihang Lou; Ce Wang; Lingyu Duan", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence(TPAMI)", "ref_id": "b2", "title": "Disentangled feature learning network and a comprehensive benchmark for vehicle re-identification", "year": "2021" }, { "authors": "Yan Bai; Yihang Lou; Feng Gao; Shiqi Wang; Yuwei Wu; Lingyu Duan", "journal": "", "ref_id": "b3", "title": "Group-sensitive triplet embedding for vehicle reidentification", "year": "2018" }, { "authors": "Alexey Bochkovskiy; Chien-Yao Wang; Hong-Yuan Mark Liao", "journal": "", "ref_id": "b4", "title": "Yolov4: Optimal speed and accuracy of object detection", "year": "2020" }, { "authors": "Guangyi Chen; Tianren Zhang; Jiwen Lu; Jie Zhou", "journal": "", "ref_id": "b5", "title": "Deep meta metric learning", "year": "2019" }, { "authors": "Haigang Xu Chen; Jian Sui; Wenqing Fang; Mingting Feng; Zhou", "journal": "IEEE Transactions on Intelligent Transportation Systems(TITS)", "ref_id": "b6", "title": "Vehicle re-identification using distance-based global and partial multiregional feature learning", "year": "2020" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Lijia Li; Kai Li; Feifei Li", "journal": "", "ref_id": "b7", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Naqqash Dilshad; Jaeseung Song", "journal": "IEEE", "ref_id": "b8", "title": "Dual-stream siamese network for vehicle re-identification via dilated convolutional layers", "year": "2021" }, { "authors": "Naqqash Dilshad; Jaeseung Song", "journal": "", "ref_id": "b9", "title": "Dual-stream siamese network for vehicle re-identification via dilated convolutional layers", "year": "2021" }, { "authors": "Haiyun Guo; Chaoyang Zhao; Zhiwei Liu; Jinqiao Wang; Hanqing Lu", "journal": "", "ref_id": "b10", "title": "Learning coarse-to-fine structured feature embedding for vehicle re-identification", "year": "2018" }, { "authors": "Haiyun Guo; Kuan Zhu; Ming Tang; Jinqiao Wang", "journal": "IEEE Transactions on Image Processing(TIP)", "ref_id": "b11", "title": "Two-level attention network with multi-grain ranking loss for vehicle re-identification", "year": "2019" }, { "authors": "Bing He; Jia Li; Yifan Zhao; Yonghong Tian", "journal": "", "ref_id": "b12", "title": "Part-regularized nearduplicate vehicle re-identification", "year": "2019" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b13", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Shuting He; Hao Luo; Pichao Wang; Fan Wang; Hao Li; Wei Jiang", "journal": "", "ref_id": "b14", "title": "Transreid: Transformer-based object re-identification", "year": "2021" }, { "authors": "Min Hung; Jiarui Hsu; Yizhou Cai; Jenq-Neng Wang; Kwang-Ju Hwang; Kim", "journal": "IEEE Transactions on Image Processing(TIP)", "ref_id": "b15", "title": "Multi-target multi-camera tracking of vehicles using metadata-aided re-id and trajectory-based camera link model", "year": "2021" }, { "authors": "Xin Jin; Cuiling Lan; Wenjun Zeng; Zhibo 
Chen", "journal": "", "ref_id": "b16", "title": "Uncertaintyaware multi-shot knowledge distillation for image-based object reidentification", "year": "2020" }, { "authors": "Yi Jin; Chenning Li; Yidong Li; Peixi Peng; George A Giannopoulos", "journal": "IEEE Transactions on Intelligent Transportation Systems(TITS)", "ref_id": "b17", "title": "Model latent views with multi-center metric learning for vehicle re-identification", "year": "2021" }, { "authors": "Pirazh Khorramshahi; Amit Kumar; Neehar Peri; Saisaketh Rambhatla; Juncheng Chen; Rama Chellappa", "journal": "", "ref_id": "b18", "title": "A dual-path model with adaptive attention for vehicle re-identification", "year": "2019" }, { "authors": "Hongchao Li; Xianmin Lin; Aihua Zheng; Chenglong Li; Bin Luo; Ran He; Amir Hussain", "journal": "IEEE Transactions on Emerging Topics in Computational Intelligence", "ref_id": "b19", "title": "Attributes guided feature learning for vehicle reidentification", "year": "2021" }, { "authors": "Ming Li; Jun Liu; Ce Zheng; Xinming Huang; Ziming Zhang", "journal": "IEEE Transactions on Multimedia(TMM)", "ref_id": "b20", "title": "Exploiting multi-view part-wise correlation via an efficient transformer for vehicle re-identification", "year": "2021" }, { "authors": "Yijie Lin; Yuanbiao Gou; Zitao Liu; Boyun Li; Jiancheng Lv; Xi Peng", "journal": "", "ref_id": "b21", "title": "Completer: Incomplete multi-view clustering via contrastive prediction", "year": "2021" }, { "authors": "Hongye Liu; Yonghong Tian; Yaowei Yang; Lu Pang; Tiejun Huang", "journal": "", "ref_id": "b22", "title": "Deep relative distance learning: Tell the difference between similar vehicles", "year": "2016" }, { "authors": "Xiaobin Liu; Shiliang Zhang; Qingming Huang; Wen Gao", "journal": "", "ref_id": "b23", "title": "Ram: A region-aware deep model for vehicle re-identification", "year": "2018" }, { "authors": "Xiaobin Liu; Shiliang Zhang; Xiaoyu Wang; Richang Hong; Qi Tian", "journal": "IEEE Transactions on Image Processing(TIP)", "ref_id": "b24", "title": "Group-group loss-based global-regional feature learning for vehicle reidentification", "year": "2020" }, { "authors": "Xinchen Liu; Wu Liu; Huadong Ma; Huiyuan Fu", "journal": "", "ref_id": "b25", "title": "Large-scale vehicle re-identification in urban surveillance videos", "year": "2016" }, { "authors": "Xinchen Liu; Wu Liu; Tao Mei; Huadong Ma", "journal": "", "ref_id": "b26", "title": "A deep learning-based approach to progressive vehicle re-identification for urban surveillance", "year": "2016" }, { "authors": "Xinchen Liu; Wu Liu; Tao Mei; Huadong Ma", "journal": "", "ref_id": "b27", "title": "Provid: Progressive and multimodal vehicle re-identification for large-scale urban surveillance", "year": "2017" }, { "authors": "Xinchen Liu; Wu Liu; Jinkai Zheng; Chenggang Yan; Tao Mei", "journal": "", "ref_id": "b28", "title": "Beyond the parts: Learning multi-view cross-part correlation for vehicle re-identification", "year": "2020" }, { "authors": "Xuejing Liu; Liang Li; Shuhui Wang; Zhengjun Zha; Dechao Meng; Qingming Huang", "journal": "", "ref_id": "b29", "title": "Adaptive reconstruction network for weakly supervised referring expression grounding", "year": "2019" }, { "authors": "Yihang Lou; Yan Bai; Jun Liu; Shiqi Wang; Ling-Yu Duan", "journal": "IEEE Transactions on Image Processing(TIP)", "ref_id": "b30", "title": "Embedding adversarial learning for vehicle re-identification", "year": "2019" }, { "authors": "Yihang Lou; Yan Bai; Jun Liu; Shiqi Wang; Lingyu Duan", 
"journal": "", "ref_id": "b31", "title": "Veriwild: A large dataset and a new method for vehicle re-identification in the wild", "year": "2019" }, { "authors": "Zefeng Lu; Ronghao Lin; Xulei Lou; Lifeng Zheng; Haifeng Hu", "journal": "IEEE Transactions on Intelligent Transportation Systems(TITS)", "ref_id": "b32", "title": "Identity-unrelated information decoupling model for vehicle reidentification", "year": "2022" }, { "authors": "Youzhi Hao Luo; Xingyu Gu; Shenqi Liao; Wei Lai; Jiang", "journal": "", "ref_id": "b33", "title": "Bag of tricks and a strong baseline for deep person re-identification", "year": "2019" }, { "authors": "Lam Mai; Xiu-Zhi Chen; Chao-Wei Yu; Yen-Lin Chen", "journal": "", "ref_id": "b34", "title": "Multi-view vehicle re-identification method based on siamese convolutional neural network structure", "year": "2020" }, { "authors": "Dechao Meng; Liang Li; Xuejing Liu; Yadong Li; Shijie Yang; Zheng-Jun Zha; Xingyu Gao; Shuhui Wang; Qingming Huang", "journal": "", "ref_id": "b35", "title": "Parsingbased view-aware embedding network for vehicle re-identification", "year": "2020" }, { "authors": "Olga Moskvyak; Frederic Maire; Feras Dayoub; Mahsa Baktashmotlagh", "journal": "", "ref_id": "b36", "title": "Keypoint-aligned embeddings for image retrieval and reidentification", "year": "2021" }, { "authors": "Yongming Rao; Guangyi Chen; Jiwen Lu; Jie Zhou", "journal": "", "ref_id": "b37", "title": "Counterfactual attention learning for fine-grained visual categorization and reidentification", "year": "2021" }, { "authors": "Fei Shen; Jianqing Zhu; Xiaobin Zhu; Yi Xie; Jingchang Huang", "journal": "IEEE Transactions on Intelligent Transportation Systems(TITS)", "ref_id": "b38", "title": "Exploring spatial significance via hybrid pyramidal graph network for vehicle re-identification", "year": "2021" }, { "authors": "Yantao Shen; Tong Xiao; Hongsheng Li; Shuai Yi; Xiaogang Wang", "journal": "", "ref_id": "b39", "title": "Learning deep neural networks for vehicle re-id with visual-spatiotemporal path proposals", "year": "2017" }, { "authors": "Wei Sun; Guangzhao Dai; Xiaorui Zhang; Xiaozheng He; Xuan Chen", "journal": "IEEE Transactions on Intelligent Transportation Systems(TITS)", "ref_id": "b40", "title": "Tbe-net: A three-branch embedding network with part-aware ability and feature complementary learning for vehicle re-identification", "year": "2021" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b41", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "Zhongdao Wang; Luming Tang; Xihui Liu; Zhuliang Yao; Shuai Yi; Jing Shao; Junjie Yan; Shengjin Wang; Hongsheng Li; Xiaogang Wang", "journal": "", "ref_id": "b42", "title": "Orientation invariant feature embedding and spatial temporal regularization for vehicle re-identification", "year": "2017" }, { "authors": "Ke Yan; Yonghong Tian; Yaowei Wang; Wei Zeng; Tiejun Huang", "journal": "", "ref_id": "b43", "title": "Exploiting multi-grain ranking constraints for precisely searching visually-similar vehicles", "year": "2017" }, { "authors": "Mang Ye; Jianbing Shen; Gaojie Lin; Tao Xiang; Ling Shao; Steven Ch Hoi", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence(TPAMI)", "ref_id": "b44", "title": "Deep learning for person re-identification: A survey and outlook", "year": "2021" }, { "authors": "Jiajian Zhao; Yifan Zhao; Jia Li; Ke Yan; Yonghong Tian", "journal": "", "ref_id": "b45", "title": "Heterogeneous 
relational complement for vehicle re-identification", "year": "2021" }, { "authors": "Liang Zheng; Liyue Shen; Lu Tian; Shengjin Wang; Jingdong Wang; Qi Tian", "journal": "", "ref_id": "b46", "title": "Scalable person re-identification: A benchmark", "year": "2015" }, { "authors": "Zhedong Zheng; Minyue Jiang; Zhigang Wang; Jian Wang; Zechen Bai; Xuanmeng Zhang; Xin Yu; Xiao Tan; Yi Yang; Shilei Wen", "journal": "", "ref_id": "b47", "title": "Going beyond real data: A robust visual representation for vehicle reidentification", "year": "2020" }, { "authors": "Xiangyu Zhu; Zhenbo Luo; Pei Fu; Xiang Ji", "journal": "", "ref_id": "b48", "title": "Voc-reld: Vehicle reidentification based on vehicle-orientation-camera", "year": "2020" } ]
[ { "formula_coordinates": [ 5, 372.16, 73.84, 190.87, 30.32 ], "formula_id": "formula_0", "formula_text": "L view = - 1 N N i=1 log(p(v i |x i )),(1)" }, { "formula_coordinates": [ 5, 357.47, 294.45, 205.57, 64.28 ], "formula_id": "formula_1", "formula_text": "L appearance = - 1 N N i=1 log(p(y i |x i ))+ 1 N N i=1 (m + d(f i a , f i p ) -d(f i a , f i n )),(2)" }, { "formula_coordinates": [ 5, 377.7, 425.47, 185.34, 9.65 ], "formula_id": "formula_2", "formula_text": "L vcc = L view + L appearance .(3)" }, { "formula_coordinates": [ 6, 132.2, 86.04, 167.82, 24.47 ], "formula_id": "formula_3", "formula_text": "s i = < f qi v , f g v > ||f qi v || × ||f g v || ,(4)" }, { "formula_coordinates": [ 6, 96.26, 246.72, 203.76, 13.15 ], "formula_id": "formula_4", "formula_text": "F = f q1 a × w 1 , f q1 a × w 2 , f q1 a × w 3 ,(5)" }, { "formula_coordinates": [ 6, 114.61, 675.27, 185.42, 30.2 ], "formula_id": "formula_5", "formula_text": "L mse = 2 v=1 m t=1 ||x t v -x ′t v || 2 ,(6)" }, { "formula_coordinates": [ 6, 339.66, 83.24, 223.38, 30.2 ], "formula_id": "formula_6", "formula_text": "L cl = - m t=1 (I(Z t 1 , Z t 2 ) + α(H(Z t 1 ) + H(Z t 2 ))),(7)" }, { "formula_coordinates": [ 6, 374.38, 233.38, 188.66, 30.2 ], "formula_id": "formula_7", "formula_text": "L pre = 2 v=1 m t=1 ||g t v -Z t 3-v || 2 ,(8)" }, { "formula_coordinates": [ 6, 368.18, 383.67, 194.86, 30.2 ], "formula_id": "formula_8", "formula_text": "L mse = 2 v=1 m t=1 ||x t v -D v (g t k )|| 2 ,(9)" }, { "formula_coordinates": [ 8, 374.54, 229.27, 188.5, 27.6 ], "formula_id": "formula_9", "formula_text": "mCSP = Ncs i=0 T P -SC T P -SC+F P N cs ,(10)" } ]
Multi-query Vehicle Re-identification: Viewpoint-conditioned Network, Unified Dataset and New Metric
Existing vehicle re-identification methods mainly rely on a single query, which carries limited information for vehicle representation and thus significantly hinders the performance of vehicle Re-ID in complicated surveillance networks. In this paper, we propose a more realistic and easily accessible task, called multi-query vehicle Re-ID, which leverages multiple queries to overcome the viewpoint limitation of a single one. Based on this task, we make three major contributions. First, we design a novel viewpoint-conditioned network (VCNet), which adaptively combines the complementary information from different vehicle viewpoints, for multi-query vehicle Re-ID. Moreover, to deal with the problem of missing vehicle viewpoints, we propose a cross-view feature recovery module which recovers the features of the missing viewpoints by learning the correlation between the features of available and missing viewpoints. Second, we create a unified benchmark dataset, captured by 6142 cameras from a real-life transportation surveillance system, with comprehensive viewpoints and a large number of crossed scenes for each vehicle, for multi-query vehicle Re-ID evaluation. Finally, we design a new evaluation metric, called mean cross-scene precision (mCSP), which measures the ability of cross-scene recognition by suppressing the positive samples with similar viewpoints from the same camera. Comprehensive experiments validate the superiority of the proposed method against other methods, as well as the effectiveness of the designed metric in the evaluation of multi-query vehicle Re-ID.
Aihua Zheng; Chaobin Zhang; Weijun Zhang; Chenglong Li; Jin Tang; Chang Tan; Ruoran Jia
[ { "figure_caption": "Fig. 3 .Fig. 4 .34Fig. 3. The framework of the proposed VCC module. First, we learn the vehicle viewpoint features by the yellow branch below, then pass them through different deconvolution encoders (E1, E2, E3, and E4) to obtain viewpoint encoding features in different scales. These viewpoint encoding features are added to the vehicle appearance feature learning branch to learn vehicle detail features based on specific viewpoints.", "figure_data": "", "figure_id": "fig_1", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .Fig. 6 .56Fig. 5. Illustration of data acquisition environment of the vehicle images obtained in the MuRI dataset. Vehicle images with diverse viewpoints are captured by the dome cameras at traffic intersection.", "figure_data": "", "figure_id": "fig_2", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Fig. 7 . 2 )Fig. 8 .728Fig. 7. Distribution of the number of identities across the number of cameras.", "figure_data": "", "figure_id": "fig_3", "figure_label": "728", "figure_type": "figure" }, { "figure_caption": "(a) and (b), for the positive samples detected under the same camera, CSP removes the ones with similar viewpoints. Positive samples with similar viewpoints under the same camera tend to be more easily identified, which results in virtually high scores in the existing metrics such as AP and INP, comparing Fig. 8 (b) to (a). By contrast, the proposed CSP metric can better reflect the ability of cross-camera retrieval, as shown in Fig. 8 (b).", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .Fig. 10 .910Fig. 9. Diagrams of different inference settings.", "figure_data": "", "figure_id": "fig_5", "figure_label": "910", "figure_type": "figure" }, { "figure_caption": "Fig. 12 .12Fig. 12. Evaluation of the number of queries on MuRI.", "figure_data": "", "figure_id": "fig_6", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "AMONG VEHICLEID, VERI-776, VERI-WILD, AND THE CREATED MURI DATASETS FOR VEHICLE REID.", "figure_data": "DatasetVehicleIDVeRI-776VERI-WildMuRIImages221,76349,360416,31423,637Identities26,26777640,671200Cameras12201746142Viewpoints/id2.04.23.45.0Cross-resolution × Night × × × × √√Dataset Characteristics. 
Compared with existing prevalentRe-ID datasets as shown in Table I, in calculating the averagenumber of viewpoints for each id in the dataset, we divided", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "COMPARISONS ON MURI BENCHMARK.", "figure_data": "MethodVenueInference wayRank1 Rank5 Rank10mAPmINP mCGM mCSPDMML [6]ICCV 2019single query average query0.644 0.7660.767 0.8840.814 0.9270.362 0.5240.060 0.0880.175 0.2810.198 0.272RECT Net [49]CVPR 2020single query average query0.729 0.8060.811 0.9240.859 0.9560.415 0.6020.074 0.1080.212 0.3160.225 0.330GRF [25]TIP 2020single query average query0.683 0.7860.806 0.9060.841 0.9420.398 0.5650.070 0.1010.188 0.3020.214 0.314CAL [38]ICCV 2021single query average query0.754 0.8160.842 0.9600.890 0.9800.441 0.6450.082 0.1380.228 0.3600.266 0.359VCNetOursmulti-query0.8430.9620.9800.6770.1450.4000.393query1score", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "STUDY OF VCNET WHEN THE NUMBER OF QUERY N Q = 3 WITH ONE VIEWPOINT RANDOM MISSING.", "figure_data": "SettingsRank1mAPmINPmCGM mCSPBaseline0.7740.5060.1040.3100.307+VCC0.8030.5310.1210.3320.340+VCC+VAF0.8200.5540.1250.3440.360+VCC+VAF +CVFR0.8320.5650.1300.3580.371", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "OF VAF WHEN THE NUMBER OF QUERIES N Q =3. Fig. 11. The changes of each metric on the modified gallery sets in MuRI.", "figure_data": "0.9Methods RECT NET [49]Rank1 0.806mAP 0.602mINP mCGM mCSP 0.108 0.316 0.3300.8original +5 samples/IDRECT NET + VAF0.8200.6200.1140.3490.3380.7+10 samples/IDCAL [38]0.8160.6450.1380.3600.359CAL + VAF0.8380.6670.1420.3720.3800.6VCC (Ours) VCC + VAF0.826 0.8430.659 0.6770.141 0.1450.374 0.4000.370 0.3930.50.4TABLE V0.3EVALUATION ON CVFR WHEN THE NUMBER OF QUERY N Q = 3 WITH ONE RANDOM MISSING.0.2Inference wayRank1mAPmINP mCGM mCSP0.1(a) single (b) average (c) CVFR + average0.715 0.801 0.8140.426 0.535 0.5440.087 0.122 0.1260.256 0.336 0.3450.272 0.342 0.3510Rank1mAPmCGMmINPmCSP(d) multi (2 views)0.8200.5540.1270.3440.360(e) CVFR + multi0.8320.5650.1300.3580.371boosts the baseline, which evidences the effectiveness of theproposed VCC module which can integrate the complementaryinformation among different viewpoints of the vehicle. VAFconsistently brings a significant improvement on all the metricsby adaptively fusing the generated viewpoint weights withappearance features. At last, CVFR further enhances theperformance by recovering features of the missing viewpoint.Evaluation on VAF. To further demonstrate the effectivenessand applicability of viewpoint-based adaptive fusion (VAF)module, we plugin VAF into two state-of-the-art methodswith the number of query N Q = 3 with three differentviewpoints (f ront, rear and side) in the query set. Toobtain the viewpoint features for the VAF module, we usea pre-trained viewpoint prediction network for all the othermethods. In vehicle re-identification, the main challenge isthe intra-class variability and inter-class similarity problemdue to the difference in vehicle viewpoints. We can usethe images from different views of the vehicle during theinference through multi-query. VAF can assign weights adap-tively according to the similarity between the viewpoints ofvehicles in query and gallery. More similarity between thequery and gallery viewpoints, the greater the weight whenretrieving, which reduce the difficulty of identifying positivesamples. 
As shown in Table IV, after integrating the proposedVAF into RECT NET [49] and CAL [38], it brings a largemargin improvement over the original methods by fusing thegenerated viewpoint weights with their appearance features.This verifies that VAF can better integrate the complementaryinformation among different viewpoints.Evaluation on CVFR. To handle the scenario with viewpointmissing, we propose the cross-view feature recovery (CVFR)module to recover the appearance features of the missing view-", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "c) brings further improvement. Our proposed multiquery inference can further boost the performance with only two existing viewpoints, as shown in Table V (d), which strongly verifies the effectiveness of the proposed multi-query inference. Finally, the multi-query inference together with recovering the missing appearance features via the proposed CVFR module achieves the best performance, as shown in Table V (e), which indicates the necessity of the supplementary information in multi-query for vehicle Re-ID.", "figure_data": "", "figure_id": "tab_5", "figure_label": "V", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work, ResNet-50, is used as a method to compare the ranking results of the proposed multi-query ReID on the MuRI dataset, providing a basis for the study conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[6], [7], [15], [25], [38], [49]", "Explanation": "The cited works on image-based vehicle Re-ID methods are discussed in the context of the proposed multi-query ReID, indicating a continuation of research in the same field."}, {"Category": "Extension or Continuation", "Citation": "[37], [43]", "Explanation": "The works on vehicle keypoint information and local area features are mentioned as a way to address the viewpoint diversity in the field of ReID, showing an extension of research in the same area."}, {"Category": "Extension or Continuation", "Citation": "[20], [28]", "Explanation": "The use of attribute fusion and spatial-temporal information in learning global vehicle representations is discussed as a way to alleviate the difference from different viewpoints, indicating an extension of research in the field of ReID."}, {"Category": "Supporting Evidence", "Citation": "[17]", "Explanation": "The cited work by Jin et al. provides a multi-shot teacher branch that mines information from multi-viewpoint images to guide the single-image student branch during training, which is used as a foundational method in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[47]", "Explanation": "The cited work by Zheng et al. evaluates person Re-ID in the multi-query fashion by using average or max operations on multiple person images, which the citing paper further extends by exploring the variability between query images and fully utilizing the diversity and complementarity among multiple queries."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work by Zhao et al. introduces the Cross-camera Generalization Measure (CGM) evaluation metric, which the citing paper adopts to improve the evaluations by introducing position-sensitivity and cross-camera generalization penalties."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work, VehicleID, is acknowledged as a data source for the new vehicle Re-ID dataset proposed in the citing paper."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The cited work, VeRI-776, is acknowledged as a data source for the new vehicle Re-ID dataset proposed in the citing paper."}, {"Category": "Data Source", "Citation": "[32]", "Explanation": "The cited work, VERI-Wild, is acknowledged as a data source for the new vehicle Re-ID dataset proposed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(i.e., 6142 cameras)", "Explanation": "The cited work introduces a new vehicle image dataset with a large number of cameras (i.e., 6142), which is an extension of the existing vehicle Re-ID datasets mentioned in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(He et al., 2019)", "Explanation": "The cited work by He et al. introduces the idea of region of interest prediction or attention models to mine the salient regions of vehicles, which the citing paper adopts in their research to learn detailed features and expand subtle differences between the same models."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work by Zheng et al. 
provides a method for extracting local area features in different directions based on keypoint locations, which the citing paper adopts in their research to model spatio-temporal constraints using log-normal distribution to refine retrieval results."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work by Zheng et al. introduces a deep network to fuse camera views, vehicle types, and color into vehicle features, which the citing paper adopts in their research to improve the fusion of camera views and vehicle features."}, {"Category": "Extension or Continuation", "Citation": "[4]", "Explanation": "The cited work by Bai et al. proposes a deep metric learning method that divides samples within each vehicle ID into groups and creates multigranularity triple samples across different vehicle IDs and groups within the same vehicle ID to learn fine-grained features. The citing paper builds upon this work by exploring the use of this method in their research to learn fine-grained features."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work by Jin et al. proposes a multi-center metric learning framework for multi-view vehicle Re-ID that models potential views directly from visual appearance and constrains vehicle view centers using intra-class ranking loss and cross-class ranking loss. The citing paper adopts this framework in their research to model potential views and constrain vehicle view centers to increase discriminative information."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work provides a teacher-student approach for exploring multi-shot information in images, which the citing paper adopts in their research to guide the single-image branch during training."}, {"Category": "Supporting Evidence", "Citation": "[23]", "Explanation": "The cited work introduces the concept of Cumulative Matching Characteristics (CMC) and its use in evaluating the performance of Re-ID methods, providing foundational information for the citing paper to build upon."}, {"Category": "Supporting Evidence", "Citation": "[47]", "Explanation": "The cited work discusses the use of mean Average Precision (mAP) in evaluating the performance of Re-ID methods, which the citing paper adopts to measure the overall performance of the methods."}, {"Category": "Methodological Basis", "Citation": "[45]", "Explanation": "The cited work by Ye et al. introduces a new metric, NP, to measure the ability of a model to retrieve difficult samples, which the citing paper adopts in their research to address the issue of evaluating the retrieval performance of a system."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work by et al. 
proposes a new measure, CGM, to assess the cross-scene generalization ability of a model, which the citing paper adopts in their research to evaluate the results derived from individual cameras."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The cited work, VeRI-776 dataset, is the source of the images used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work, VehicleID dataset, is the source of the images used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work, VehicleID dataset, is the source of the test set used in the study conducted in the citing paper to evaluate the performance of vehicle Re-ID methods."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work provides the VehicleID dataset, which contains limited views of vehicles and is used as a data source for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[32]", "Explanation": "The cited work provides the VERI-Wild dataset, which is a large-scale dataset collected in a suburban area and used for training and testing in the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The dataset VeRI-776 is cited as a source of vehicle images in the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The dataset VehicleID is cited as a source of vehicle images in the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[32]", "Explanation": "The dataset VERI-Wild is cited as a source of vehicle images in the research conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[14]", "Explanation": "The cited work, ResNet-50, is used as a feature extractor in the VCC module of the citing paper, providing a methodological basis for the network structure and feature learning process."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work by Lin et al. provides a method for cross-view feature recovery (CVFR) module to recover missing appearance features in multi-view scenarios, which the citing paper adopts in their research to address the problem of query data with missing viewpoints."}, {"Category": "Data Source", "Citation": "[5]", "Explanation": "The cited work provides the tracking detection algorithm used in the data acquisition process of the MuRI dataset."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work, ResNet-50, serves as the backbone for the model used in the citing paper, providing a methodological basis for the research conducted."}, {"Category": "Data Source", "Citation": "[8]", "Explanation": "The cited work, ImageNet, is the dataset used to pre-train the model in the citing paper, serving as a data source for the research conducted."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work, ResNet-50, is used as a baseline in the ablation study of the VCC module in the proposed model, indicating a methodological basis for the study conducted in the citing paper."}]
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b24", "b29", "b43", "b27", "b33", "b27", "b53", "b33", "b5" ], "table_ref": [], "text": "S INGLE Image Super-Resolution (SISR) is the process of recovering a high-resolution (HR) image from a corresponding low-resolution (LR) observation. SISR has been widely used for many important applications, including remote sensing, face recognition, and medical image processing [4], [24], [29]. However, SISR problems are often considered as ill-posed and heavily rely on efficient image priors. The high frequency of recurrence of small patches in a natural image provides powerful image-specific self-similarity priors [16] to regularize the SISR problems. These repeated internal textures are more closely related to the input LR image than external training datasets. Essentially, self-similarity offers an insightful solution based on similarity to explore nonlocal image textures, which can filter out irrelevant non-local textures while discovering valuable ones. For instance, when repairing structured architectural textures, the associated nonlocal architectural regions are more meaningful than lowfrequency face or background regions.\nIn deep learning-based SISR models, self-similarity is usually explored through non-local attention (NLA) [43], which was originally used to model non-local dependencies for highlevel vision tasks and has also been proven to be effective in SISR [5], [27], [33], [34]. These NLA-based SISR methods use the softmax transformation to assign attention weights to non-local information. However, as illustrated in Fig. 1, we can see that as the size of non-local sequence increases, the resulting probability distribution of the softmax transformation is gradually flattened, i.e., valuable non-local information will be overwhelmed by a large amount of irrelevant non-local information. Specifically, in short-range sequence modeling (e.g. n = 4 2 ), the resulting probability distribution of the softmax transformation is distinguishable. Whereas in long-range sequence modeling (e.g. n = 20 2 ), the resulting probability of most non-local information is close to zero, and even the most important non-local information can only be assigned to a very small value.\nThe above phenomenon shows that the softmax transformation, a key component of the standard NLA, may degrade the importance of valuable non-local information for long-range sequence modeling. Furthermore, it may introduce interference by incorporating irrelevant features, which can be regarded as noise, into the reconstruction result. Unfortunately, exploring the self-similarity in SISR involves long-range sequence modeling with a lot of irrelevant information. Therefore, we speculated that the softmax transformation would make the NLA inefficient in exploring the image self-similarity. To validate this speculation, we separately added the NLA and NLA random to our backbone network with 128 channels. In NLA random, we randomly select 512 feature vectors from the non-local feature space for non-local fusion, instead of using all non-local feature vectors like the NLA. Surprisingly, we found that the SR performance of NLA and NLA random is very close (see Fig. 2). 
This motivated us to revisit the softmax transformation of the NLA for a more effective exploration of self-similarity in deep SISR.

In many existing NLA-based SISR methods, the softmax transformation is commonly used to convert similarity vectors into probability distributions, such as NLRN [27], RNAN [53], SAN [5], and CSNLN [34]. (Fig. 1 caption: An illustration of the resulting probability distribution of the softmax transformation with different sizes of input sequences. The 9 input sequences were sampled from the standard normal distribution N(0, 1).) However, these methods require modeling the global features of long-range sequences, which leads to the issue of the softmax transformation mentioned above. Hence, NLSA [33] decomposed the long-range sequence modeling problem into a series of shorter sequence modeling sub-problems via locality sensitive hashing (LSH) [15]. Although NLSA avoided modeling long-range sequences, it may miss crucial long-range information due to the large variance of the LSH's estimation. To capture long-range information while alleviating the issue caused by the softmax transformation, ENLCA [46] proposed to multiply similarity vectors by an amplification factor, which forces the non-local attention to give higher weights to related information. Unfortunately, this approach leads to an increase in approximation variance, making the selection of the amplification factor non-robust." }, { "figure_ref": [ "fig_0" ], "heading": "", "publication_ref": [ "b43" ], "table_ref": [], "text": "In this paper, we focus on exploring a novel approach for weighting non-local information in deep SISR, providing a new perspective for existing NLA-based SISR methods. Consequently, we proposed the high-similarity-pass attention (HSPA) with a soft thresholding operation, which returns compact probability distributions by truncating small probability values (low-similarity) to zero. This characteristic enables our HSPA to remove irrelevant non-local information and identify a set of relevant information for image reconstruction, making deep SISR models more efficient and interpretable. From the attention maps in Fig. 2, we can observe that our HSPA achieves better SR performance than the NLA by fusing more related non-local self-similarity information. Crucially, we derived a closed-form Jacobian expression of the proposed soft thresholding operation to train our HSPA in an end-to-end manner. To the best of our knowledge, this is the first attempt to provide compact probability distributions with a closed-form Jacobian expression for backpropagation algorithms in self-similarity-based deep SISR models. (Fig. 2 caption: Comparisons between the NLA [43] and our HSPA on Set5 (×2) during training. In NLA random, we randomly select 512 features for non-local fusion. Please zoom in for best view.) With the proposed HSPA, we constructed a deep high-similarity-pass attention network (HSPAN), shown in Fig. 3, from which we can see that our HSPAN is built on a simple residual backbone with some HSPA-based modules (HSPAMs). Specifically, each HSPAM has a residual connection and consists of an HSPA, a locality exploration block (LEB) and a feature refinement convolution. The HSPA is responsible for capturing non-local information, while the LEB explores the locality inductive bias of natural images.

We believe our findings will stimulate further investigation and discussion about the use of NLA in self-similarity-based deep SISR models. 
Our contributions can be summarized as follows:\n• We provided new insights into the limitations of NLA used in self-similarity-based deep SISR methods and argued that the softmax transformation in NLA has insurmountable flaws for SISR with long-range sequences. (As shown in Fig. 1 and Fig. 2) • We formalized a concise yet effective soft thresholding operation and explored its key properties, which make it possible to optimize our high-similarity-pass attention (HSPA) end-to-end in deep SISR. • A deep high-similarity-pass attention network (HSPAN) was designed by using our HSPA-based modules (HSPAMs) and achieved state-of-the-art results both quantitatively and qualitatively." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [ "b3", "b9" ], "table_ref": [], "text": "Single image super-resolution (SISR) is a challenging problem, especially when the LR image has a limited number of pixels. To address this issue, various image priors have been proposed to stabilize the inversion of this ill-posed problem, such as the representative self-similarity [16], [57] and sparse representation [3], [9]. The proposed high-similarity-pass attention (HSPA) in this paper benefits from the above two priors, thus we limit our discussion here to SISR methods based on the two priors and classify the corresponding methods into two main categories: self-similarity-based SISR and sparse representation-based SISR." }, { "figure_ref": [], "heading": "A. Self-similarity-based SISR", "publication_ref": [ "b19", "b37", "b47", "b43", "b27", "b28" ], "table_ref": [], "text": "There are many classical self-similarity-based SISR methods [11], [14], [16], [19], [37] that have achieved satisfactory reconstruction results. The self-similarity prior provides a very efficient solution for SISR to explore non-local image information. The difference among these self-similarity-based SISR methods in utilizing self-similarity is mainly in the range of non-local search space. Yang et al. [47] In deep learning-based SISR, the self-similarity is usually integrated by non-local attention (NLA) [43]. Specifically, selfsimilarity prior is fused as a weighted sum over all pixelwise features in the NLA. Liu et al. [27] proposed a nonlocal recurrent network as the first attempt to use the NLA in deep recurrent neural network for capturing long-range self-similarity in SISR and denoising. Then, Dai et al. [5] combined the NLA with channel-wise attention in deep convolutional neural network to simultaneously utilizing both spatial and channel features correlations for more powerful feature representation. Mei et al. [34] first attempted to integrate crossscale self-similarity in deep learning-based SISR methods by the cross-scale non-local attention and achieved impressive SR performance. Luo et al. [28] proposed a hash-learnable non-local attention to capture long-range self-similarity for lightweight SISR. Although these NLA-based deep SISR methods can generate impressive SR results, they all ignore the limitations (see Fig. 1) of the softmax transformation used in the NLA, which leads to inefficient exploration of non-local self-similarity. This motivates us to explore a more efficient method to weight non-local self-similarity information for deep SISR models." }, { "figure_ref": [], "heading": "B. 
Sparse representation-based SISR", "publication_ref": [ "b33", "b36", "b44", "b21", "b36", "b7", "b44", "b33", "b39" ], "table_ref": [], "text": "The sparse representation prior has been successfully integrated in many image processing tasks, such as SISR [33], [36], [44], [49] and denoising [12], [41]. In SISR, the sparse representation suggests that HR images can be wellexpressed by the sparse linear combinations of atoms in an appropriate over-complete dictionary [48], [49]. Yang et al.\n[49] proposed a joint dictionary learning method to make the sparse representation between LR and HR patch pairs consistent, thus the sparse representation of an LR image can be directly used to generate the corresponding HR image patch with the HR dictionary. Kim et al. [21] constructed a sparse basis set to reduce the time complexity of regression during both training and testing. Peleg et al. [36] proposed a statistical prediction model based on sparse representation to predict the HR representation vector from its corresponding LR coefficients without using the invariance assumption of the sparse representation.\nDong et al. [7] first introduced the sparse representation prior in a deep CNN-based SISR model with the ReLU activation enforced about 50% sparsity by zeroing all negative features. Wang et al. [44] combined the domain expertise of the sparse representation and deep learning models to achieve further improved SR results. Recently, Fan et al. [13] explicitly integrated sparsity constraints in hidden neurons and showed that sparse representation is crucial in deep learning-based SISR. These sparse representation-based deep SISR methods only explored the sparse representation of deep spatial features, lacking the investigation of sparse non-local self-similarity. Mei et al. [33] and Su et al. [39] tried to combine sparse representation with the exploration of nonlocal self-similarity in deep SISR models. Although they limited the scope of non-local exploration for sparsity, they ignored the essential reason why the NLA cannot explore non-local self-similarity sparsely. Inspired by these sparse representation-based deep SISR methods and our analysis of the limitations of the softmax transformation in NLA, we integrated sparse representation into non-local self-similarity exploration by incorporating a soft thresholding operation into the high-similarity-pass attention." }, { "figure_ref": [], "heading": "III. METHODOLOGY", "publication_ref": [ "b39", "b54" ], "table_ref": [], "text": "In this section, we will introduce the details of our deep high-similarity-pass attention network (HSPAN). The HSPAN consists of a simple residual backbone [39], [54] and some high-similarity-pass attention-based modules (HSPAMs). We start with an overview of the HSPAN and then introduce the details of the HSPAM." }, { "figure_ref": [ "fig_0" ], "heading": "A. An Overview of the HSPAN", "publication_ref": [ "b8", "b20", "b54" ], "table_ref": [], "text": "As illustrated in Fig. 3, our HSPAN is an end-to-end SISR network consisting of three parts: LR features extraction, local and non-local deep features fusion, and HR reconstruction.\nIn LR features extraction part, the shallow features F 0 are extracted from the given LR image x by a convolutional layer with trainable parameters θ. This process can be formulated as\nF 0 = ϕ(x; θ),(1)\nwhere ϕ(•) is a convolution operation. 
Then, F 0 is fed into the local and non-local deep features fusion part with m HSPAMs

F m = φ(F 0 ; δ), (2)

where φ(•) denotes the local and non-local deep features fusion part with trainable parameters δ. The fused deep features F m are upscaled by the sub-pixel operation ↑ [38] and then used to reconstruct an HR image ŷ in the HR reconstruction part

ŷ = ψ(F m ↑; α), (3)

where ψ(•) represents the HR reconstruction part with trainable parameters α for the final RGB image reconstruction. The above three parts can be simplified as follows

ŷ = HSPAN(x; (θ, δ, α)), (4)

where HSPAN(•) is the function of our deep high-similarity-pass attention network. As suggested in previous works [20], [54], we use a long residual connection to directly bypass abundant low-frequency information and prevent gradients from exploding." }, { "figure_ref": [ "fig_0" ], "heading": "B. High-Similarity-Pass Attention-based Module (HSPAM)", "publication_ref": [], "table_ref": [], "text": "The structure of our HSPAM is shown in Fig. 3, from which we can see that the HSPAM serves as a basic block of our HSPAN. Specifically, each HSPAM also has a residual connection and consists of a high-similarity-pass attention (HSPA), a locality exploration block (LEB) and a feature refinement layer. The HSPA is responsible for exploring non-local self-similarity information and the LEB is used to capture the locality inductive bias.

The function Ω i (•) of the i-th HSPAM can be formulated as follows:

F i = Ω i (F i-1 ) = Ω i (Ω i-1 (• • • Ω 2 (Ω 1 (F 0 )))), (5)

where F i and F i-1 are the output and input of the i-th HSPAM. The trainable parameters in the HSPAM are omitted for simplicity." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "1) Locality Exploration Block (LEB):", "publication_ref": [ "b53" ], "table_ref": [], "text": "We proposed the LEB to utilize the locality of the convolution layer for capturing the locality inductive bias of natural images, which plays a crucial role in image restoration problems. The structure details of the LEB are shown in Fig. 3, from which we can see that the LEB is constructed from n simplified convolutional residual blocks.

2) High-Similarity-Pass Attention (HSPA): As discussed in Section I, the softmax transformation greatly affects the performance of the NLA when modeling long-range sequences with a lot of irrelevant information. To address this drawback, we proposed the HSPA (see Fig. 3) with a soft thresholding (ST) operation that can effectively model such long-range sequences by generating sparse probability distributions.

The NLA is a pixel-level attention mechanism used in deep SISR methods [5], [53]; it calculates the influence of each feature vector on the query feature vector. The input feature maps X ∈ R H×W×C of the NLA are first reshaped into X′ ∈ R HW×C for illustration. Then, the response of the query feature vector x i in the NLA can be defined as

NLA(x i ) = Σ_{j=1}^{N} [ exp(d(x i , x j )) / Σ_{k=1}^{N} exp(d(x i , x k )) ] ϕ v (x j ), (6)

where x j and x k are the j-th and k-th feature vectors on X′ respectively and N = HW. ϕ v (•) is a value vector generation function with a 1 × 1 convolution operation. d(•, •) is used to measure the dot product similarity between feature vectors and can be expressed as

d(x i , x j ) = ϕ q (x i )^T ϕ k (x j ), (7)

where ϕ q (•) and ϕ k (•) are introduced to generate query and key feature vectors. From Eq. (6) and Eq. (7), we can see that the NLA provides a non-local information fusion scheme based on dot product similarity, which assigns more weights to feature vectors with higher similarity. 
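For readers who prefer code to notation, a minimal PyTorch sketch of the standard NLA of Eq. (6)-(7) is given below; the module and variable names are my own, and it omits engineering details (sub-sampling, normalization) used in practice.

```python
# Sketch of the standard non-local attention (NLA) of Eq. (6)-(7); illustrative only.
import torch
import torch.nn as nn

class NonLocalAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.phi_q = nn.Conv2d(channels, channels, kernel_size=1)  # query transform phi_q
        self.phi_k = nn.Conv2d(channels, channels, kernel_size=1)  # key transform phi_k
        self.phi_v = nn.Conv2d(channels, channels, kernel_size=1)  # value transform phi_v

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        n = h * w                                          # length of the non-local sequence
        q = self.phi_q(x).view(b, c, n).transpose(1, 2)    # (B, N, C)
        key = self.phi_k(x).view(b, c, n)                  # (B, C, N)
        val = self.phi_v(x).view(b, c, n).transpose(1, 2)  # (B, N, C)
        sim = torch.bmm(q, key)                            # dot-product similarities d(x_i, x_j): (B, N, N)
        attn = torch.softmax(sim, dim=-1)                  # full support: every x_j gets a positive weight
        out = torch.bmm(attn, val)                         # weighted sum of value vectors
        return out.transpose(1, 2).view(b, c, h, w)
```

The softmax in the second-to-last step is the component questioned in this paper: for N = HW in the tens of thousands, most of the probability mass is spread over irrelevant features.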
Furthermore, the reason why the NLA cannot generate sparse probability distribution is that the numerator of the softmax transformation is always greater than zero, which allows two completely unrelated or even negatively correlated feature vectors to be given a positive weight.\nIn our HSPA, with Eq. ( 7), we can obtain the similarity vector s of the query feature vector x i . Then, the weight of j-th non-local feature vector is assigned by the proposed soft thresholding (ST) operation. Finally, the response of the query feature vector x i can be formulated as\nHSPA(x i ) = N j=1 ST j (s)ϕ v (x j ),(8)\nwhere ST j (s) is the j-th component of the results of the ST operation and defined as\nST j (s) = max{s j -κ(s), 0},(9)\nwhere s j is the j-th component of s and κ(•) : R N → R is a soft threshold function that satisfies\nN j=1 max{s j -κ(s), 0} = 1 (10)\nTo derive the soft threshold function κ(•), we first sorted the similarity vector s and let the sorted s satisfy s\n(1) ≥ s (2) ≥ • • • ≥ s (N )\n. Let K = {1, ..., N } and define T as\nT = max{k ∈ K | ks (k) + 1 > k j=1 s (j) }.(11)\nThen, the soft threshold function κ(•) can be derived from Eq. (10) and Eq. (11) as follows\n( T j=1 s (j) ) -κ(s) × T = 1, κ(s) = ( T j=1 s (j) ) -1 T . (12\n)\nThe details of the ST operation are shown in Algorithm 1, from which we can see that the core of the ST operation is to calculate the result of the soft threshold function κ(•). Then, all coordinates below this threshold will be truncated to zero, and the others will be shifted by this threshold. Obviously, our ST operation is not only able to generate a sparse probability distribution, but also preserves the basic properties of the softmax transformation: all probability values are greater than zero and sum to 1, and it assigns more weights to feature vectors with higher similarity.\nAlgorithm 1 Soft Thresholding (ST) Operation.\n1: Input: the similarity vector s. 2: s ′ ← s.\n3: s ← Sort(s) # Let s (1) ≥ s (2) ≥ • • • ≥ s (N ) . 4: T = max{k ∈ K | ks (k) + 1 > k j=1 s (j) } 5: κ(s) = ( T j=1 s (j) )-1 T 6: Output: max{s ′ -κ(s), 0}" }, { "figure_ref": [], "heading": "C. Properties of the ST operation", "publication_ref": [ "b31" ], "table_ref": [], "text": "From optimization viewpoint, the attention weight vector s obtained by the ST operation can be regarded as the projection of the original similarity vector s onto the simplex [31]. That is:\n∆ = {p ∈ R N | p i = 1, p i ≥ 0} [10],\ns = argmin p∈∆ ||p -s|| 2 . (13\n)\nIn SISR, exploring for non-local information is a longsequence modeling problem, where the sequence length N is usually greater than ten thousand. This makes the projection tend to hit the boundary of the simplex, leading to a sparse probability distribution.\nFrom the perspective of probability theory, we assume the similarity scores s 1 , s 2 , ..., s N as random variables, and the corresponding order statistic are denoted as\ns (1) ≥ s (2) ≥ • • • ≥ s (N ) . Event [T > k]\nrefers to the number of non-zero elements in the results of the ST operation is greater than k, which implies (k + 1)s (k+1) + 1 > k+1 j=1 s (j) . Thus, the probability of the event [T > k] can be estimated as\nP ([T > k]) = P ([(k + 1)s (k+1) + 1 > k+1 j=1 s (j) ]) = P ([(s (1) -s (k+1) ) + ... 
+ (s (k) -s (k+1) )] < 1) ≤ P ([s (k) -s (k+1) < 1 k ]).(14)\nWith the increase of k, the probability that the number of nonzero elements in the attention weight vector is greater than k will become small, which is consistent with our intuitive motivation for designing the ST operation.\nIn practice, the ST operation will cause many non-local weights to be truncated to zero, which means that it is not necessary to use all elements in the similarity vector s. Therefore, we can use top-k instead of the sorting algorithm to reduce the computational complexity. The effects of different k on reconstruction results will be investigated in the experiment section.\nTo train the HSPAN in an end-to-end manner, it is necessary to derive the closed-form Jacobian matrix of the proposed ST operation for backpropagation algorithms. From Eq. ( 9), we have ∂ST j (s)\n∂s k = δ jk -∂κ(s) ∂s k , if s j > κ(s), 0, if s j ≤ κ(s),(15)\nwhere δ jk is the Kronecker delta which returns 1 if the variables i and j are equal, and 0 otherwise. The gradient of the soft threshold function κ(•) can be expressed as\n∂κ(s) ∂s k = 1 T , if k ∈ T(s), 0, if k / ∈ T(s),(16)\nwhere T(s) = {j ∈ K | ST j (s) > 0} is the support set of the ST operation, T is the number of elements in T(s), and c is the characteristic vector whose j-th component is 1 if j ∈ T(s), and 0 otherwise. We can combine Eq. ( 15) and Eq. ( 16) to obtain the derivatives of the ST operation with respect to the variable s k ∂ST j (s)\n∂s k = δ jk - c k c j T .(17)\nThen, the Jacobian matrix of the ST operator can be formulated as\nJ (s) = Diag(c) - cc T T ,(18)\nwhere Diag(c) is a matrix with c in the diagonal. From Eq. ( 18) we can observe the special structure presenting in the Jacobian matrix that is obtained by subtracting a rank 1 matrix from a diagonal matrix. The special structure allows efficient calculation of the gradient. Given the feedback error r, the gradient of the ST operation with respect to s can be expressed as\n▽ s ST(s) = J (s)r = c ⊙ (r - c T r T • 1).(19)\nFrom Eq. ( 19), we can see that the computational complexity of computing the gradient in our ST operation is O(T ). Moreover, for the HSPA discussed in this paper, c T r = j∈T(s) r j , where T(s) can be obtained during the inference, which makes the sublinear time complexity derived from the special structure in (19) efficient." }, { "figure_ref": [], "heading": "IV. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Datasets and Metrics", "publication_ref": [ "b33", "b54", "b30", "b19", "b45" ], "table_ref": [], "text": "Following previous researches [5], [33], [54], we use 800 images from DIV2K [42] to train our deep SISR model. In a mini-batch, there are 16 images with patch size 48 × 48 randomly cropped from the training datasets. All these training patches are augmented by random rotation of 90, 180, and 270 degrees and horizontal flipping. Then, the SR performance of our model is evaluated on five standard SISR benchmarks: Set5 [2], Set14 [50], B100 [30], Urban100 [19], and Manga109 [32]. All results are compared by SSIM [45] and PSNR metrics on the Y channel in YCbCr space." }, { "figure_ref": [], "heading": "B. Training Details", "publication_ref": [], "table_ref": [], "text": "Our deep high-similarity-pass attention network (HSPAN) is constructed by 10 high-similarity-pass attention-based modules (HSPAMs) in a residual backbone. The number of residual blocks in locality exploration block (LEB) is set to 4 empirically. 
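Since the whole network is trained end-to-end in PyTorch, it is worth noting that the soft thresholding operation of Algorithm 1 and its closed-form backward pass of Eq. (19) fit in a few lines. The sketch below is written from the equations in Section III; the function names and the autograd wrapper are assumptions, not the released implementation.

```python
# Sketch of the soft thresholding (ST) operation (Algorithm 1) with the
# closed-form gradient of Eq. (19); derived from the equations in Section III.
import torch

class SoftThresholding(torch.autograd.Function):
    @staticmethod
    def forward(ctx, s):
        # s: (..., N) similarity scores; returns sparse weights that sum to 1.
        s_sorted, _ = torch.sort(s, dim=-1, descending=True)        # s_(1) >= ... >= s_(N)
        cssv = s_sorted.cumsum(dim=-1)                               # running sums of sorted scores
        ks = torch.arange(1, s.size(-1) + 1, device=s.device, dtype=s.dtype)
        in_support = ks * s_sorted + 1 > cssv                        # condition of Eq. (11)
        T = in_support.sum(dim=-1, keepdim=True)                     # size of the support set
        tau = (cssv.gather(-1, T - 1) - 1) / T                       # soft threshold kappa(s), Eq. (12)
        p = torch.clamp(s - tau, min=0.0)                            # max{s_j - kappa(s), 0}, Eq. (9)
        ctx.save_for_backward(p)
        return p

    @staticmethod
    def backward(ctx, grad_out):
        (p,) = ctx.saved_tensors
        c = (p > 0).to(grad_out.dtype)                               # characteristic vector of the support
        T = c.sum(dim=-1, keepdim=True).clamp(min=1.0)
        mean_r = (grad_out * c).sum(dim=-1, keepdim=True) / T        # c^T r / T
        return c * (grad_out - mean_r)                               # Eq. (19)

def soft_threshold(s):
    return SoftThresholding.apply(s)

# Quick check: the returned weights sum to ~1 and only a handful are non-zero.
w = soft_threshold(torch.randn(2, 10_000))
print(w.sum(-1), (w > 0).sum(-1))
```

Because the Jacobian of Eq. (18) is a diagonal matrix minus a rank-1 term, the backward pass above costs only O(T) extra work on top of the forward sort, which is what makes end-to-end training of the HSPA practical.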
All the convolutional kernel sizes are set to 3×3, except those in our high-similarity-pass attention (HSPA), which have 1 × 1 kernel sizes. Intermediate features have 192 channels, and the last convolution layer in our HSPAN has 3 filters to transfer deep features into a 3-channel RGB image.\nDuring training, the ADAM algorithm [22] with β 1 = 0.9, β 2 = 0.999, and ϵ = 10 -8 is used for optimizing the mean absolute error (MAE) loss. The initial learning rate is set to 10 -4 in ×2 model and reduced to half every 200 epochs until the training stops at 1000 epochs. The ×3 and ×4 models are initialized by the pre-trained ×2 model, and the learning rate 10 -4 is reduced to half every 50 epochs until the finetunning stops at 200 epochs. All our models are implemented by PyTorch and trained on Nvidia 3090 GPUs." }, { "figure_ref": [ "fig_1", "fig_2", "fig_3", "fig_4" ], "heading": "C. Ablation Studies", "publication_ref": [ "b43", "b30", "b19", "b43", "b30", "b19", "b26", "b54", "b33" ], "table_ref": [ "tab_2", "tab_2" ], "text": "We conducted an in-depth analysis of the proposed HSPA and LEB in ablation studies and trained our HSPAN on DIV2K [42] for classical SISR with scale factor ×2. The best PSNR (dB) values on Set5 [2] in 5 × 10 4 iterations are obtained for comparisons.\n1) Impact of HSPA and LEB:\nThe SR performance of our HSPA and LEB in the highsimilarity-pass attention-based module (HSPAM) are shown in Table I, from which we can see that our HSPA is much better than the NLA [43]. Specifically, by comparing the results in the first and second columns, our HSPA can bring 0.84dB improvement. Furthermore, when exploring locality inductive bias with the LEB, the NLA brings 0.03dB improvement, while our HSPA brings 0.11dB improvement, which is about 3.7 times that of the NLA. In Table I, we can find that the SR performance will degrade severely when the LEB is removed. It means that the locality inductive bias is important for SR reconstruction. Thus, to benifit from both locality inductive bias and non-local information, we combine the LEB and HSPA to construct our HSPAM.\nVisual comparisons between the NLA and our HSPA are shown in Fig. 5, from which we can compare the zoomed in results on Set14 [50], B100 [30] and Urban100 [19] datasets. In Fig. 5, we can observe that our HSPA can generate superior textures than the NLA. For example, in image 'img 004' (bottom) from Urban100, our HSPA can recover the severely damaged black circles missed by the NLA. Furthermore, we show the attention map of the NLA and our HSPA at some attention layers in Fig. 4, from which we can observe that our HSPA can remove irrelevant features, while the NLA must has full support for every non-local feature. These attention maps indicate that our HSPA indeed works as expected: providing more relevant and informative non-local textures for SISR reconstruction with the sparse representation. [43] with our HSPA for ×4 SR. The texture regions from top to bottom belong to the 'barbara', '253027' and 'img 004' images from Set14 [50], B100 [30] and Urban100 [19] datasets, respectively.\n2) Impact of Top-k: As discussed in the Section III-B2, k determines the search space of the query feature, which consists of the top-k similar non-local features. The value of k can be set flexibly in testing phase to find the tradeoffs between reducing computational complexity and getting accurate reconstruction results. k = 0 corresponds to the case where HSPA is not used. The SR performance of different k setting are shown in Fig. 
6, from which we can see that the SR performance of our HSPAN peaks at k = 128. Considering the computational complexity and reconstruction performance, we set k to be 128 in our HSPAN.\n3) Generic validation:\nWe validated the generality of the proposed method through two parts: (1) we directly integrated the proposed HSPA into SISR models that did not use non-local attention (NLA); (2) we replaced the softmax transformation in NLA-based deep SISR models with our soft thresholding (ST) operation. Generic of the HSPA. To demonstrate the generic of our HSPA, we inserted the HSPA into some representative deep SISR models with different computational complexity, such as FSRCNN [8], EDSR [26], and RCAN [54]. From Fig. 7, we observe that our HSPA can significantly improve the SR performance of these deep SISR models with negligible parameters. For example, our HSPA brings 0.13dB improvement for FSRCNN [8] (12644 parameters) with only 630 parameters. Generic of the ST operation. In NLA-based deep SISR models, we replaced the softmax transformation with our ST operation in considerable CSNLN [34], NLSN [33], and SAN [5], respectively. As shown in Fig. 8, we can see that our ST operation can improve the SR performance of NLA-based deep SISR models without additional parameters. Specifically, our ST operation brought performance improvements of 0.14dB, 0.08dB, and 0.05dB for CSNLN, NLSN and SAN, respectively. Consistent with subjective perception, we found that the more NLA modules used in NLA-based deep SISR models, the more significant improvement our ST operation could bring after replacing the softmax transformation. For example, CSNLN used 24 NLA modules, and our ST operation brought the most significant improvement to this model, while SAN only used two NLA modules, resulting in relatively limited performance improvement. These results indicate that the proposed method can be integrated into existing deep SISR models as an effective general building unit." }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "D. Model Efficiency", "publication_ref": [ "b26", "b55", "b54", "b17", "b53", "b33", "b33", "b33", "b54", "b20", "b23", "b40", "b52", "b17", "b26", "b55", "b54", "b18", "b56", "b35", "b33", "b1", "b25", "b20", "b23", "b40", "b52", "b26", "b55", "b54", "b18", "b56", "b35", "b33", "b1", "b25", "b20", "b23", "b40", "b52", "b17", "b26", "b55", "b54", "b18", "b56", "b35", "b33", "b1", "b25" ], "table_ref": [], "text": "In this subsection, we compare the efficiency of our HSPAN with other state-of-the-art models in terms of model parameters and inference time. Model Parameters. The parameters and SR performance of state-of-the-art deep SISR models including EDSR [26], RDN [55], RCAN [54], DBPN [17], RNAN [53], SAN [5], NLSN [33], ENLCN [46] are shown in Fig. 9, from which we can see that the SR performance of our HSPAN on Manga109 (×4) is significantly better than other state-of-the-art models. Specifically, our HSPAN (about 33.6M parameters) brings 0.47dB improvement in SR performance with much lower parameters than the recent state-of-the-art NLSN [33] (about 44.9M parameters). In contrast, compared with RCAN, NLSN brings 0.05dB improvement with 29.3M additional parameters. This means that the proposed HSPAN can indeed significantly improve the SR performance and the improvement is not simply the result of having more parameters in the network. Inference Time. We also provide a smaller version of our HSPAN, denoted as HSPAN S, by integrating only one HSPA in the middle of the backbone. 
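As a rough illustration of how such an attention unit could be dropped into an existing backbone, the sketch below combines 1×1 query/key/value convolutions with a top-k restricted ST weighting. It reuses the soft_threshold helper sketched earlier; the residual placement and the default k = 128 follow the paper's description, while everything else is an assumption rather than the released code.

```python
# Sketch of an HSPA-style layer: dot-product similarities, top-k candidate
# selection, and sparse ST weights; illustrative, not the authors' implementation.
import torch
import torch.nn as nn

class HighSimilarityPassAttention(nn.Module):
    def __init__(self, channels, k=128):
        super().__init__()
        self.k = k                                   # size of the retained non-local search space
        self.phi_q = nn.Conv2d(channels, channels, 1)
        self.phi_k = nn.Conv2d(channels, channels, 1)
        self.phi_v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, h, w = x.shape
        n = h * w
        q = self.phi_q(x).view(b, c, n).transpose(1, 2)      # (B, N, C)
        key = self.phi_k(x).view(b, c, n)                    # (B, C, N)
        val = self.phi_v(x).view(b, c, n).transpose(1, 2)    # (B, N, C)
        sim = torch.bmm(q, key)                              # (B, N, N) similarities
        topv, topi = sim.topk(min(self.k, n), dim=-1)        # keep only the k most similar features
        wgt = soft_threshold(topv)                           # sparse high-similarity-pass weights
        # For clarity the sparse weights are scattered back into a dense map; a
        # memory-conscious version would instead gather the k value vectors directly.
        attn = torch.zeros_like(sim).scatter_(-1, topi, wgt)
        out = torch.bmm(attn, val)                           # fuse only highly similar non-local features
        return out.transpose(1, 2).view(b, c, h, w) + x      # residual connection, as in the HSPAM
```

A layer like this, appended to the intermediate feature maps of models such as FSRCNN, EDSR or RCAN, roughly corresponds to the plug-in experiment reported in Fig. 7.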
The inference times of our HSPAN, HSPAN S and recently competitive deep SISR models are shown in Fig. 10, from which we can see that the SR performance of our HSPAN is much better than that of other state-of-the-art models. Furthermore, our HSPAN S consumes the least inference time while achieving better SR performance than other state-of-the-art models. Specifically, by comparing NLSN [33] and RCAN [54], we find that the SR performance of RCAN is 0.04dB higher than that of NLSN at a cost of 66 milliseconds. However, the SR performance of our HSPAN S is not only 0.23dB higher than that of NLSN, but it also reduces the inference time by 20 milliseconds. These results indicate that our HSPA is very efficient in improving SR performance. The inferences of all models are conducted in the same environment with an Nvidia 3090 GPU." }, { "figure_ref": [ "fig_7", "fig_7", "fig_7" ], "heading": "E. Comparisons with State-of-the-art 1) Bicubic-downscale degradation:", "publication_ref": [ "b20", "b23", "b26", "b40", "b52", "b17", "b55", "b54", "b18", "b56", "b35", "b33", "b1", "b25", "b19", "b33", "b33", "b55", "b54", "b25", "b52", "b54", "b36", "b6", "b20", "b51", "b26", "b52", "b55", "b54", "b35", "b35", "b35" ], "table_ref": [ "tab_3", "tab_7" ], "text": "We compare our HSPAN with 18 state-of-the-art methods including FSRCNN [8], VDSR [20], LapSRN [23], EDSR [26], MemNet [40], SRMDNF [52], DBPN [17], RDN [55], RCAN [54], SAN [5], OISR [18], IGNN [56], CSNLN [34], HAN [35], NLSN [33], DRLN [1], SwinIR [25], and ENLCN [46]. HSPAN+ denotes the self-ensemble results of our HSPAN.

Quantitative Evaluations. The quantitative comparisons on five benchmark datasets with scale factors ×2, ×3 and ×4 are shown in Table II, from which we can see that our HSPAN outperforms other state-of-the-art models by a large margin on almost all scale factors and benchmarks. For example, compared with the recent state-of-the-art NLSN [33] at scale factor ×4, our HSPAN brings 0.23dB, 0.16dB, 0.08dB, 0.40dB and 0.47dB improvement on the Set5, Set14, B100, Urban100 and Manga109 datasets, respectively. It is worth mentioning that the HSPAN is designed to utilize self-similarity information. Thus, the proposed HSPAN can achieve impressive reconstruction results, especially for the more challenging datasets Urban100 and Manga109, which contain a large amount of self-similarity information.

On the Urban100 dataset (×3), which is specially designed to test self-similarity-based SISR methods, we can see that recent deep SISR methods have very limited improvements. For example, compared with CSNLN [34], NLSN [33] brings a 0.12dB improvement. In contrast, our HSPAN brings a 0.54dB improvement on this dataset compared to CSNLN. 
The impressive improvement obtained by the HSPAN on the Urban100 dataset is consistent with our motivation for designing the HSPA, which aims to explore self-similarity information efficiently.

Qualitative Evaluations. Visual comparisons on the challenging datasets Urban100 and Manga109 (×4) are shown in Fig. 11, from which we can see that the proposed HSPAN can repair severely damaged textures when there are informative self-similarity textures in the input LR image. By comparing the reconstructed textures of image 'img 045' in Fig. 11, we can observe that the generated textures of our HSPAN are similar to the HR textures, while other very competitive deep SISR models without non-local attention, such as RDN [55] and RCAN [54], cannot repair such severely damaged regions. Furthermore, compared with other deep SISR models based on non-local attention, such as SwinIR [25] and ENLCN [46], our HSPAN still achieves better reconstruction performance with more accurate image details. These comparisons indicate that our HSPAN is more efficient in repairing severely damaged regions by fusing self-similarity information with the proposed HSPA.

By comparing the generated textures of image 'GOODKISSVer2' in Fig. 11, we can see that our HSPAN is the only method that accurately restores the texture of the word "ball" in the HR image, while the restoration results of the other methods are inconsistent with the HR textures. Furthermore, on image 'GOODKISSVer2', our HSPAN outperforms the competitive SwinIR by 2.29dB. These visual comparisons demonstrate that our HSPAN not only outperforms other deep SISR models in quantitative metrics, but also significantly improves the reconstructed textures perceptually.

2) Blur-downscale degradation:
In the blur-downscale degradation SISR task, the SR performance is verified at scale factor ×3 and the Gaussian standard deviation is set to 1.6, as in previous works [52], [54]. Our HSPAN is compared with 11 state-of-the-art methods: SPMSR [36], SRCNN [6], FSRCNN [8], VDSR [20], IRCNN [51], EDSR [26], SRMDNF [52], RDN [55], RCAN [54], SAN [5] and HAN [35]. Quantitative Evaluations. We compare the quantitative results with the blur-downscale degradation at scale factor ×3. As shown in Table III, the proposed HSPAN achieves the best results on all benchmark datasets. On the natural image datasets Set5, Set14, and B100, our HSPAN achieves satisfactory reconstruction performance. Furthermore, our HSPAN can still bring a large performance improvement on the challenging datasets Urban100 and Manga109. In particular, compared with the competitive HAN [35], our HSPAN brings 0.49dB and 0.31dB improvement on the Urban100 and Manga109 datasets, respectively. This is consistent with the previous experiment on bicubic-downscale degradation. These results indicate that our HSPAN is not limited to one degradation type and can still achieve very impressive SR performance on the SISR task with blur-downscale degradation. Qualitative Evaluations. Visual comparisons on datasets Set5 and Urban100 with blur-downscale degradation are shown in Fig. 12, from which we can see that our HSPAN reconstructs the most visually pleasing results with more accurate textures. By observing the results of Bicubic interpolation, we can find that the textures in the input LR image have been completely damaged. Thus, it is impossible to accurately recover the desired textures with only local information. This means that the reconstruction results of each method presented in Fig.
12 can be used to evaluate the ability of each method in exploring self-similarity information. The reconstructed textures of our HSPAN is very close to the HR image, while the competitive HAN [35] cannot restore textures that are severely damaged by blur-downscale degradation. For example, in image 'img 047' from Fig. 12, HAN cannot generate satisfactory textures, even though the selected region has valuable repeated structured architectural textures in the input image. These visual comparisons demonstrate that our HSPAN is indeed effective in the blur-downscale SISR task." }, { "figure_ref": [], "heading": "V. CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this paper, we provided new insights into the NLA used in SISR problems and found that the softmax transformation, a key component of the NLA, is not suitable for exploring longrange information. To overcome this drawback, we designed a flexible high-similarity-pass attention (HSPA) that enables our deep high-similarity-pass attention network (HSPAN) to focus on more valuable non-local textures while removing irrelevant ones. Furthermore, we explored some key properties of the proposed soft thresholding (ST) operation to train our HSPA in an end-to-end manner. To the best of our knowledge, this is the first attempt to analyze and address the limitations of utilizing the softmax transformation for long-range sequence modeling in low-level vision problems. In addition, extensive experiments demonstrate that our HSPA and ST operation can be integrated as efficient general building units in existing deep SISR models." } ]
2023-05-25
[ { "authors": "", "journal": "MomoyamaHaikagura HR PSNR/SSIM Bicubic", "ref_id": "b0", "title": "", "year": "" }, { "authors": "S Anwar; N Barnes", "journal": "IEEE Transactions on Pattern Analysis & Machine Intelligence", "ref_id": "b1", "title": "Densely residual laplacian super-resolution", "year": "2022" }, { "authors": "M Bevilacqua; A Roumy; C Guillemot; M L Alberi-Morel", "journal": "", "ref_id": "b2", "title": "Lowcomplexity single-image super-resolution based on nonnegative neighbor embedding", "year": "2012" }, { "authors": "E J Candès", "journal": "Citeseer", "ref_id": "b3", "title": "Compressive sampling", "year": "2006" }, { "authors": "V Cherukuri; T Guo; S J Schiff; V Monga", "journal": "IEEE Transactions on Image Processing", "ref_id": "b4", "title": "Deep mr brain image super-resolution using spatio-structural priors", "year": "2019" }, { "authors": "T Dai; J Cai; Y Zhang; S.-T Xia; L Zhang", "journal": "", "ref_id": "b5", "title": "Second-order attention network for single image super-resolution", "year": "2019" }, { "authors": "C Dong; C C Loy; K He; X Tang", "journal": "", "ref_id": "b6", "title": "Learning a deep convolutional network for image super-resolution", "year": "2014-09" }, { "authors": "", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b7", "title": "Image super-resolution using deep convolutional networks", "year": "2015" }, { "authors": "C Dong; C C Loy; X Tang", "journal": "Springer", "ref_id": "b8", "title": "Accelerating the super-resolution convolutional neural network", "year": "2016" }, { "authors": "D L Donoho", "journal": "IEEE Transactions on information theory", "ref_id": "b9", "title": "Compressed sensing", "year": "2006" }, { "authors": "J Duchi; S Shalev-Shwartz; Y Singer; T Chandra", "journal": "Association for Computing Machinery", "ref_id": "b10", "title": "Efficient projections onto the l1-ball for learning in high dimensions", "year": "2008" }, { "authors": "M Ebrahimi; E R Vrscay", "journal": "Springer", "ref_id": "b11", "title": "Solving the inverse problem of image zooming using \"self-examples", "year": "2007" }, { "authors": "M Elad; M Aharon", "journal": "IEEE Transactions on Image processing", "ref_id": "b12", "title": "Image denoising via sparse and redundant representations over learned dictionaries", "year": "2006" }, { "authors": "Y Fan; J Yu; Y Mei; Y Zhang; Y Fu; D Liu; T S Huang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b13", "title": "Neural sparse representation for image restoration", "year": "2020" }, { "authors": "G Freedman; R ", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b14", "title": "Image and video upscaling from local selfexamples", "year": "2011" }, { "authors": "A Gionis; P Indyk; R Motwani", "journal": "", "ref_id": "b15", "title": "Similarity search in high dimensions via hashing", "year": "1999" }, { "authors": "D Glasner; S Bagon; M Irani", "journal": "IEEE", "ref_id": "b16", "title": "Super-resolution from a single image", "year": "2009" }, { "authors": "M Haris; G Shakhnarovich; N Ukita", "journal": "", "ref_id": "b17", "title": "Deep back-projection networks for super-resolution", "year": "2018" }, { "authors": "X He; Z Mo; P Wang; Y Liu; M Yang; J Cheng", "journal": "", "ref_id": "b18", "title": "Ode-inspired network design for single image super-resolution", "year": "2019" }, { "authors": "J.-B Huang; A Singh; N Ahuja", "journal": "", "ref_id": "b19", "title": "Single image super-resolution from transformed 
self-exemplars", "year": "2015" }, { "authors": "J Kim; J K Lee; K M Lee", "journal": "", "ref_id": "b20", "title": "Accurate image super-resolution using very deep convolutional networks", "year": "2016" }, { "authors": "K I Kim; Y Kwon", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b21", "title": "Single-image super-resolution using sparse regression and natural image prior", "year": "2010" }, { "authors": "A Kingad", "journal": "ICLR", "ref_id": "b22", "title": "A method for stochastic optimization", "year": "2015" }, { "authors": "W.-S Lai; J.-B Huang; N Ahuja; M.-H Yang", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b23", "title": "Fast and accurate image super-resolution with deep laplacian pyramid networks", "year": "2018" }, { "authors": "P Li; L Prieto; D Mery; P J Flynn", "journal": "IEEE Trans. Inf. Forensics Security", "ref_id": "b24", "title": "On low-resolution face recognition in the wild: Comparisons and new techniques", "year": "2019" }, { "authors": "J Liang; J Cao; G Sun; K Zhang; L Van Gool; R Timofte", "journal": "", "ref_id": "b25", "title": "Swinir: Image restoration using swin transformer", "year": "2021" }, { "authors": "B Lim; S Son; H Kim; S Nah; K Mu Lee", "journal": "", "ref_id": "b26", "title": "Enhanced deep residual networks for single image super-resolution", "year": "2017-09-10" }, { "authors": "D Liu; B Wen; Y Fan; C C Loy; T S Huang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Non-local recurrent network for image restoration", "year": "2018" }, { "authors": "J Luo; L Zhao; L Zhu; W Tao", "journal": "Neurocomputing", "ref_id": "b28", "title": "Multi-scale receptive field fusion network for lightweight image super-resolution", "year": "2022" }, { "authors": "Q Lyu; H Shan; C Steber; C Helis; C Whitlow; M Chan; G Wang", "journal": "IEEE transactions on medical imaging", "ref_id": "b29", "title": "Multi-contrast super-resolution mri through a progressive network", "year": "2020" }, { "authors": "D Martin; C Fowlkes; D Tal; J Malik", "journal": "IEEE", "ref_id": "b30", "title": "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics", "year": "2001" }, { "authors": "A Martins; R Astudillo", "journal": "PMLR", "ref_id": "b31", "title": "From softmax to sparsemax: A sparse model of attention and multi-label classification", "year": "2016" }, { "authors": "Y Matsui; K Ito; Y Aramaki; A Fujimoto; T Ogawa; T Yamasaki; K Aizawa", "journal": "Multimedia Tools and Applications", "ref_id": "b32", "title": "Sketch-based manga retrieval using manga109 dataset", "year": "2017-09-10" }, { "authors": "Y Mei; Y Fan; Y Zhou", "journal": "", "ref_id": "b33", "title": "Image super-resolution with non-local sparse attention", "year": "2021" }, { "authors": "Y Mei; Y Fan; Y Zhou; L Huang; T S Huang; H Shi", "journal": "", "ref_id": "b34", "title": "Image super-resolution with cross-scale non-local attention and exhaustive selfexemplars mining", "year": "2020" }, { "authors": "B Niu; W Wen; W Ren; X Zhang; L Yang; S Wang; K Zhang; X Cao; H Shen", "journal": "Springer", "ref_id": "b35", "title": "Single image super-resolution via a holistic attention network", "year": "2020" }, { "authors": "T Peleg; M Elad", "journal": "IEEE transactions on image processing", "ref_id": "b36", "title": "A statistical prediction model based on sparse representations for single image 
super-resolution", "year": "2014" }, { "authors": "M Protter; M Elad; H Takeda; P Milanfar", "journal": "IEEE Transactions on image processing", "ref_id": "b37", "title": "Generalizing the nonlocal-means to super-resolution reconstruction", "year": "2008" }, { "authors": "W Shi; J Caballero; F Huszár; J Totz; A P Aitken; R Bishop; D Rueckert; Z Wang", "journal": "", "ref_id": "b38", "title": "Real-time single image and video superresolution using an efficient sub-pixel convolutional neural network", "year": "2016-06" }, { "authors": "J.-N Su; M Gan; G.-Y Chen; J.-L Yin; C P Chen", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b39", "title": "Global learnable attention for single image super-resolution", "year": "2022" }, { "authors": "Y Tai; J Yang; X Liu; C Xu", "journal": "", "ref_id": "b40", "title": "Memnet: A persistent memory network for image restoration", "year": "2017-10" }, { "authors": "C Tian; Y Xu; Z Li; W Zuo; L Fei; H Liu", "journal": "Neural Networks", "ref_id": "b41", "title": "Attention-guided cnn for image denoising", "year": "2020" }, { "authors": "R Timofte; E Agustsson; L Van Gool; M.-H Yang; L Zhang", "journal": "", "ref_id": "b42", "title": "Ntire 2017 challenge on single image super-resolution: Methods and results", "year": "2017" }, { "authors": "X Wang; R Girshick; A Gupta; K He", "journal": "", "ref_id": "b43", "title": "Non-local neural networks", "year": "2018" }, { "authors": "Z Wang; D Liu; J Yang; W Han; T Huang", "journal": "", "ref_id": "b44", "title": "Deep networks for image super-resolution with sparse prior", "year": "2015" }, { "authors": "Z Wang; A C Bovik; H R Sheikh; E P Simoncelli", "journal": "IEEE transactions on image processing", "ref_id": "b45", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": "B Xia; Y Hang; Y Tian; W Yang; Q Liao; J Zhou", "journal": "", "ref_id": "b46", "title": "Efficient nonlocal contrastive attention for image super-resolution", "year": "2022" }, { "authors": "J Yang; Z Lin; S Cohen", "journal": "", "ref_id": "b47", "title": "Fast image super-resolution based on in-place example regression", "year": "2013" }, { "authors": "J Yang; Z Wang; Z Lin; S Cohen; T Huang", "journal": "IEEE transactions on image processing", "ref_id": "b48", "title": "Coupled dictionary training for image super-resolution", "year": "2012" }, { "authors": "J Yang; J Wright; T S Huang; Y Ma", "journal": "IEEE transactions on image processing", "ref_id": "b49", "title": "Image super-resolution via sparse representation", "year": "2010" }, { "authors": "R Zeyde; M Elad; M Protter", "journal": "Springer", "ref_id": "b50", "title": "On single image scale-up using sparse-representations", "year": "2010" }, { "authors": "K Zhang; W Zuo; S Gu; L Zhang", "journal": "", "ref_id": "b51", "title": "Learning deep cnn denoiser prior for image restoration", "year": "2017" }, { "authors": "K Zhang; W Zuo; L Zhang", "journal": "", "ref_id": "b52", "title": "Learning a single convolutional superresolution network for multiple degradations", "year": "2018" }, { "authors": "Y Zhang; K Li; B Zhong; Y Fu", "journal": "", "ref_id": "b53", "title": "Residual non-local attention networks for image restoration", "year": "2019" }, { "authors": "Y Zhang; K Li; K Li; L Wang; B Zhong; Y Fu", "journal": "", "ref_id": "b54", "title": "Image superresolution using very deep residual channel attention networks", "year": "2018-09-04" }, { "authors": "Y Zhang; Y Tian; Y Kong; 
B Zhong; Y Fu", "journal": "", "ref_id": "b55", "title": "Residual dense network for image super-resolution", "year": "2018-06" }, { "authors": "S Zhou; J Zhang; W Zuo; C C Loy", "journal": "Advances in neural information processing systems", "ref_id": "b56", "title": "Cross-scale internal graph neural network for image super-resolution", "year": "2020" }, { "authors": "M Zontak; M Irani", "journal": "", "ref_id": "b57", "title": "Internal statistics of a single natural image", "year": "2011" } ]
[ { "formula_coordinates": [ 3, 407.72, 642.39, 155.31, 9.68 ], "formula_id": "formula_0", "formula_text": "F 0 = ϕ(x; θ),(1)" }, { "formula_coordinates": [ 3, 402.87, 694.87, 160.17, 9.68 ], "formula_id": "formula_1", "formula_text": "F m = φ(F 0 ; δ),(2)" }, { "formula_coordinates": [ 4, 138.42, 442.56, 161.61, 9.68 ], "formula_id": "formula_2", "formula_text": "ŷ = ψ(F m ↑; α),(3)" }, { "formula_coordinates": [ 4, 120.52, 503.76, 179.5, 8.99 ], "formula_id": "formula_3", "formula_text": "ŷ = HSPAN(x; (θ, δ, α)),(4)" }, { "formula_coordinates": [ 4, 77.95, 739.05, 222.07, 9.68 ], "formula_id": "formula_4", "formula_text": "F i = Ω i (F i-1 ) = Ω i (Ω i-1 (• • • Ω 2 (Ω 1 (F 0 )))),(5)" }, { "formula_coordinates": [ 4, 342.19, 720.54, 220.84, 30.32 ], "formula_id": "formula_5", "formula_text": "NLA(x i ) = N j=1 exp(d(x i , x j )) N k=1 exp(d(x i , x k )) ϕ v (x j ),(6)" }, { "formula_coordinates": [ 5, 115.71, 120.19, 184.31, 11.72 ], "formula_id": "formula_6", "formula_text": "d(x i , x j ) = ϕ q (x i ) T ϕ k (x j ),(7)" }, { "formula_coordinates": [ 5, 108.33, 320.16, 191.69, 30.32 ], "formula_id": "formula_7", "formula_text": "HSPA(x i ) = N j=1 ST j (s)ϕ v (x j ),(8)" }, { "formula_coordinates": [ 5, 113.86, 385.76, 186.16, 9.65 ], "formula_id": "formula_8", "formula_text": "ST j (s) = max{s j -κ(s), 0},(9)" }, { "formula_coordinates": [ 5, 119.46, 427.29, 180.56, 30.32 ], "formula_id": "formula_9", "formula_text": "N j=1 max{s j -κ(s), 0} = 1 (10)" }, { "formula_coordinates": [ 5, 48.96, 476.08, 251.06, 21.91 ], "formula_id": "formula_10", "formula_text": "(1) ≥ s (2) ≥ • • • ≥ s (N )" }, { "formula_coordinates": [ 5, 89.19, 504.31, 210.83, 30.32 ], "formula_id": "formula_11", "formula_text": "T = max{k ∈ K | ks (k) + 1 > k j=1 s (j) }.(11)" }, { "formula_coordinates": [ 5, 118.98, 567.91, 176.89, 61.08 ], "formula_id": "formula_12", "formula_text": "( T j=1 s (j) ) -κ(s) × T = 1, κ(s) = ( T j=1 s (j) ) -1 T . (12" }, { "formula_coordinates": [ 5, 295.87, 593, 4.15, 8.64 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 5, 317.73, 94.54, 199.77, 52.26 ], "formula_id": "formula_14", "formula_text": "3: s ← Sort(s) # Let s (1) ≥ s (2) ≥ • • • ≥ s (N ) . 4: T = max{k ∈ K | ks (k) + 1 > k j=1 s (j) } 5: κ(s) = ( T j=1 s (j) )-1 T 6: Output: max{s ′ -κ(s), 0}" }, { "formula_coordinates": [ 5, 311.98, 218.2, 251.06, 21.61 ], "formula_id": "formula_15", "formula_text": "∆ = {p ∈ R N | p i = 1, p i ≥ 0} [10]," }, { "formula_coordinates": [ 5, 393.47, 249.64, 165.42, 18.81 ], "formula_id": "formula_16", "formula_text": "s = argmin p∈∆ ||p -s|| 2 . (13" }, { "formula_coordinates": [ 5, 558.89, 252.03, 4.15, 8.64 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 5, 311.98, 363.37, 251.06, 21.91 ], "formula_id": "formula_18", "formula_text": "s (1) ≥ s (2) ≥ • • • ≥ s (N ) . Event [T > k]" }, { "formula_coordinates": [ 5, 311.98, 431.22, 255.86, 78.2 ], "formula_id": "formula_19", "formula_text": "P ([T > k]) = P ([(k + 1)s (k+1) + 1 > k+1 j=1 s (j) ]) = P ([(s (1) -s (k+1) ) + ... 
+ (s (k) -s (k+1) )] < 1) ≤ P ([s (k) -s (k+1) < 1 k ]).(14)" }, { "formula_coordinates": [ 5, 366.1, 691.37, 196.93, 25.3 ], "formula_id": "formula_20", "formula_text": "∂s k = δ jk -∂κ(s) ∂s k , if s j > κ(s), 0, if s j ≤ κ(s),(15)" }, { "formula_coordinates": [ 6, 114.95, 73.66, 185.07, 24.42 ], "formula_id": "formula_21", "formula_text": "∂κ(s) ∂s k = 1 T , if k ∈ T(s), 0, if k / ∈ T(s),(16)" }, { "formula_coordinates": [ 6, 137.64, 181.3, 162.39, 23.23 ], "formula_id": "formula_22", "formula_text": "∂s k = δ jk - c k c j T .(17)" }, { "formula_coordinates": [ 6, 124.65, 229.31, 175.37, 23.89 ], "formula_id": "formula_23", "formula_text": "J (s) = Diag(c) - cc T T ,(18)" }, { "formula_coordinates": [ 6, 90.43, 346.69, 209.59, 23.89 ], "formula_id": "formula_24", "formula_text": "▽ s ST(s) = J (s)r = c ⊙ (r - c T r T • 1).(19)" }, { "formula_coordinates": [ 9, 146.35, 88.6, 41.91, 156.67 ], "formula_id": "formula_25", "formula_text": "[46] ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2 ×2" }, { "formula_coordinates": [ 9, 178.21, 264.35, 32.13, 139.84 ], "formula_id": "formula_26", "formula_text": "×3 ×3 ×3 ×3 ×3 ×3 ×3 ×3 ×3 ×3 ×3 ×3 ×3 ×3 ×3 ×3 ×330" } ]
High-Similarity-Pass Attention for Single Image Super-Resolution
Recent developments in the field of non-local attention (NLA) have led to a renewed interest in self-similarity-based single image super-resolution (SISR). Researchers usually used the NLA to explore non-local self-similarity (NSS) in SISR and achieve satisfactory reconstruction results. However, a surprising phenomenon that the reconstruction performance of the standard NLA is similar to the NLA with randomly selected regions stimulated our interest to revisit NLA. In this paper, we first analyzed the attention map of the standard NLA from different perspectives and discovered that the resulting probability distribution always has full support for every local feature, which implies a statistical waste of assigning values to irrelevant nonlocal features, especially for SISR which needs to model longrange dependence with a large number of redundant non-local features. Based on these findings, we introduced a concise yet effective soft thresholding operation to obtain high-similaritypass attention (HSPA), which is beneficial for generating a more compact and interpretable distribution. Furthermore, we derived some key properties of the soft thresholding operation that enable training our HSPA in an end-to-end manner. The HSPA can be integrated into existing deep SISR models as an efficient general building block. In addition, to demonstrate the effectiveness of the HSPA, we constructed a deep high-similaritypass attention network (HSPAN) by integrating a few HSPAs in a simple backbone. Extensive experimental results demonstrate that HSPAN outperforms state-of-the-art approaches on both quantitative and qualitative evaluations.
Jian-Nan Su; Min Gan; Guang-Yong Chen; Wenzhong Guo; C L Philip Chen
[ { "figure_caption": "Fig. 3 :3Fig. 3: An illustration of our HSPAN. The implementation details of the ST operation can be found in Algorithm 1.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Comparisons between our HSPA and the NLA on attention maps for x4 SR. These attention maps from left to right correspond to the 1st, 3rd, 5th, 7th, and 9th attention operations in our deep SISR backbone. Please zoom in for best view.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: The PSNR results of different k setting.", "figure_data": "", "figure_id": "fig_2", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 :7Fig. 7: Parameters vs. performance. Our HSPA can improve the SR performance of representative deep SISR models vary in complexity from the simple FSRCNN [8] to the very complex EDSR[26] and RCAN[54].", "figure_data": "", "figure_id": "fig_3", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 :8Fig. 8: Parameters vs. performance. The SR performance of typical NLA-based deep SISR models can be improved by our soft thresholding (ST) operation without introducing additional parameters.", "figure_data": "", "figure_id": "fig_4", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 :9Fig. 9: Model parameters and SR performance comparsions on Manga109 (×4).", "figure_data": "", "figure_id": "fig_5", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 :10Fig. 10: Inference time and SR performance comparisions on Set5 (×4).", "figure_data": "", "figure_id": "fig_6", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Fig. 12 :12Fig. 12: Visual comparisons for ×3 SISR with blur-downscale degradation on the image 'barbara' and 'img 047' from Set14[50] and Urban100[19], respectively. Best and second best results are highlighted and underlined. Please zoom in for best view.", "figure_data": "", "figure_id": "fig_7", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "and Freedman et al. [14] restricted the non-local search space to some specified local regions for reducing the patch search complexity. Ebrahimi et al. [11] and Glasner et al. [16] extended the non-local search space to cross-scale images to achieve more accurate SR performance. Huang et al. [19] further expanded the search space by modeling geometric transformations such as perspective distortion and local shape variation.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation studeis on our HSPA and LEB. Best and second best results are highlighted and underlined.", "figure_data": "LEBNLA [43]HSPAPSNR36.6937.5337.90 (Backbone)37.9338.01", "figure_id": "tab_2", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Quantitative results on SISR benchmark datasets. Best and second best results are highlighted and underlined.", "figure_data": "MethodScaleSet5 [2] PSNR SSIMSet14 [50] PSNR SSIMB100 [30] PSNR SSIMUrban100 [19] PSNR SSIMManga109 [32] PSNR SSIMBicubicFSRCNN [8]VDSR", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Quantitative results on benchmark datasets with blur-downscale degradation. 
Best and second best results are highlighted and underlined.", "figure_data": "MethodScaleSet5 [2] PSNR SSIMSet14 [50] PSNR SSIMB100 [30] PSNR SSIMUrban100 [19] PSNR SSIMManga109 [32] PSNR SSIMBicubic×328.780.830826.380.727126.330.691823.520.686225.460.8149SPMSR [36]×332.210.900128.890.810528.130.774025.840.785629.640.9003SRCNN [6]×332.050.894428.800.807428.130.773625.700.777029.470.8924FSRCNN [8]×332.330.902028.910.812228.170.779125.710.784229.370.8985VDSR [20]×333.250.915029.460.824428.570.789326.610.813631.060.9234IRCNN [51]×333.380.918229.630.828128.650.792226.770.815431.150.9245SRMDNF [52]×334.010.924230.110.836428.980.800927.500.837032.970.9391RDN [55]×334.580.928030.530.844729.230.807928.460.858233.970.9465EDSR [26]×334.640.928230.540.845129.270.809428.640.861834.130.9477RCAN [54]×334.700.928830.630.846229.320.809328.810.864734.380.9483SAN [5]×334.750.929030.680.846629.330.810128.830.864634.460.9487HAN [35]×334.760.929430.700.847529.340.810628.990.867634.560.9494HSPAN (Ours)×334.900.930330.810.849629.420.813129.480.876634.870.9515HSPAN+ (Ours)×335.000.931030.910.850829.480.814229.720.879535.210.9530HRHRPSNR/SSIMPSNR/SSIMBicubicBicubic24.25/0.397017.53/0.2550FSRCNN [8]EDSR [26]HAN [35]HSPAN(Ours)FSRCNN [8]EDSR [26]HAN [35]HSPAN(Ours)26.29/0.684526.81/0.797426.43/0.774429.01/0.873118.41/0.406919.18/0.584118.01/0.400320.64/0.7618", "figure_id": "tab_7", "figure_label": "III", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "[16]", "Explanation": "The cited work provides the concept of self-similarity priors, which the citing paper leverages to regularize SISR problems and improve the quality of the recovered HR image."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work introduces the concept of non-local attention (NLA) in deep learning-based SISR models, which the citing paper adopts to model non-local dependencies in SISR tasks."}, {"Category": "Supporting Evidence", "Citation": "[5], [27], [33], [34]", "Explanation": "The cited works provide evidence of the effectiveness of NLA in SISR tasks, which the citing paper leverages to support the use of NLA in SISR models."}, {"Category": "Extension or Continuation", "Citation": "Fig. 1", "Explanation": "The figure illustrates the limitations of the softmax transformation in long-range sequence modeling, prompting the citing paper to explore new methods for non-local information modeling in SISR tasks."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work NLRN is used as a method for converting similarity vectors into probability distributions, which the citing paper adopts in their research on SISR."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The cited work RNAN is also used as a method for converting similarity vectors into probability distributions, which the citing paper adopts in their research on SISR."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work is used as a method for converting similarity vectors into probability distributions in the research on SISR conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[33]", "Explanation": "The cited work NLSA is extended in the citing paper by decomposing the long-range sequence modeling problem into shorter sequence modeling sub-problems via LSH."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work LSH is used in the citing paper to decompose the long-range sequence modeling problem into shorter sequence modeling sub-problems."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work ENLCA is used in the citing paper to multiply similarity vectors by an amplification factor to capture long-range information and alleviate the issue caused by the softmax transformation."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work provides a non-local attention (NLA) mechanism for fusing self-similarity information in image reconstruction, which the citing paper adopts in the development of the high-similarity-pass attention (HSPA) method for weighting non-local information in deep SISR."}, {"Category": "Methodological Basis", "Citation": "[16], [57]", "Explanation": "The cited works provide the self-similarity prior that the citing paper adopts in the design of the HSPA method for SISR."}, {"Category": "Methodological Basis", "Citation": "[3], [9]", "Explanation": "The cited works contribute the sparse representation prior that the citing paper uses in the HSPA method for SISR."}, {"Category": "Methodological Basis", "Citation": "[47]", "Explanation": "The cited work by Yang et al. 
provides a method for utilizing self-similarity in SISR by exploring non-local image information, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work by Liu et al. introduces a non-local recurrent network for capturing long-range self-similarity in SISR and denoising, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work by Dai et al. combines the non-local attention with channel-wise attention in deep convolutional neural network to utilize spatial and channel features correlations for feature representation, which the citing paper adopts in their research."}, {"Category": "Extension or Continuation", "Citation": "[43]", "Explanation": "The cited work by Yang et al. integrates self-similarity prior in deep learning-based SISR by non-local attention, which the citing paper extends by exploring new dimensions and variables in their research."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work by Luo et al. proposed a hash-learnable non-local attention to capture long-range self-similarity for lightweight SISR, which the citing paper adopts in their research to improve the efficiency of exploring non-local self-similarity in deep SISR methods."}, {"Category": "Methodological Basis", "Citation": "[33], [36], [44], [49]", "Explanation": "The cited works have successfully integrated the sparse representation prior in image processing tasks, which the citing paper builds upon in their research on SISR."}, {"Category": "Methodological Basis", "Citation": "[12], [41]", "Explanation": "The cited works have contributed to the field of denoising with the use of the sparse representation prior, which the citing paper may have adopted in their research."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work by Kim et al. has constructed a sparse basis set to reduce the time complexity in regression, which the citing paper may have considered in their research on SISR."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work by Peleg et al. has proposed a statistical prediction model based on sparse representation, which the citing paper may have utilized in their research on SISR without the use of the invariance assumption."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work by Dong et al. 
was the first to introduce the sparse representation prior in a deep CNN-based SISR model with the ReLU activation enforced about 50% sparsity, which the citing paper may have referenced in their research on SISR."}, {"Category": "Methodological Basis", "Citation": "[44]", "Explanation": "The cited work by [44] provides a method of combining domain expertise and deep learning models to improve SR results, which the citing paper builds upon in their research."}, {"Category": "Extension or Continuation", "Citation": "[13]", "Explanation": "The cited work by [13] explicitly integrates sparsity constraints in hidden neurons, which the citing paper further explores in their research to improve SISR results."}, {"Category": "Extension or Continuation", "Citation": "[33]", "Explanation": "The cited work by [33] combines sparse representation with the exploration of non-local self-similarity in deep SISR models, which the citing paper extends by incorporating a soft thresholding operation into the high-similarity-pass attention."}, {"Category": "Extension or Continuation", "Citation": "[39]", "Explanation": "The cited work by [39] also combines sparse representation with the exploration of non-local self-similarity in deep SISR models, which the citing paper further extends by incorporating a soft thresholding operation into the high-similarity-pass attention."}, {"Category": "Methodological Basis", "Citation": "[39], [54]", "Explanation": "The cited works provide the backbone of the HSPAN network, which the citing paper adopts to build its own model for high-similarity-pass attention."}, {"Category": "Methodological Basis", "Citation": "(1)", "Explanation": "The cited work introduces the concept of a convolutional layer for extracting shallow features from a given LR image, which the citing paper adopts in the LR features extraction part of their HSPAN network."}, {"Category": "Methodological Basis", "Citation": "(2)", "Explanation": "The cited work presents the use of sub-pixel operation for upscaling features in the local and non-local deep features fusion part of the HSPAN network, which the citing paper implements in their F m module."}, {"Category": "Methodological Basis", "Citation": "(3)", "Explanation": "The cited work suggests the use of a HR reconstruction part for final RGB image reconstruction, which the citing paper incorporates in the HR reconstruction part of their HSPAN network."}, {"Category": "Methodological Basis", "Citation": "[20], [54]", "Explanation": "The cited works provide the long residual connection method that the citing paper adopts in their research to bypass low-frequency information and prevent gradient explosions."}, {"Category": "Methodological Basis", "Citation": "[5], [53]", "Explanation": "The cited works provide a pixel-level attention mechanism that the citing paper adopts in its research on deep SISR methods for calculating the influence of feature vectors on query feature vectors."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work provides the concept of the simplex and the projection of similarity vectors onto it, which the citing paper adopts in the ST operation to obtain the attention weight vector s."}, {"Category": "Data Source", "Citation": "[10]", "Explanation": "The cited work is the source of the data used in the ST operation to project the similarity vector s onto the simplex, as mentioned in the context of the cited work."}, {"Category": "Data Source", "Citation": "[5], [33], [54]", "Explanation": "The 
cited works are used as the training data for the deep SISR model in the citing paper, providing the foundational images and patches for the model to learn from."}, {"Category": "Methodological Basis", "Citation": "[42]", "Explanation": "The cited work, DIV2K, serves as the data source for training the HSPAN model in the citing paper, providing a foundational dataset for the research conducted."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work, Set5, is used as a benchmark for evaluating the performance of the HSPAN model in the citing paper, providing a standard for comparison in the study."}, {"Category": "Supporting Evidence", "Citation": "[43]", "Explanation": "The cited work, NLA, is used to compare the performance of the HSPA in the citing paper, providing evidence that the HSPA is a more effective method for improving SR performance."}, {"Category": "Extension or Continuation", "Citation": "[42]", "Explanation": "The citing paper extends the research conducted in DIV2K by training the HSPAN model for classical SISR with scale factor \u00d72, exploring new dimensions and applications of the model in the field of image processing."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The cited work, Set14, is used as a dataset for visual comparison in Fig. 5, which provides a basis for understanding the performance of the HSPA in SISR reconstruction."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work, B100, is used as a dataset for visual comparison in Fig. 5, which provides a basis for understanding the performance of the HSPA in SISR reconstruction."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work, Urban100, is used as a dataset for visual comparison in Fig. 5, which provides a basis for understanding the performance of the HSPA in SISR reconstruction."}, {"Category": "Data Source", "Citation": "[43]", "Explanation": "The cited work is used as a data source for the comparison of the HSPA with the NLA in SISR reconstruction."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The cited work, Set14, provides a dataset that the citing paper uses to test the performance of the proposed method in the field of SISR."}, {"Category": "Extension or Continuation", "Citation": "[30]", "Explanation": "The cited work, B100, is used to further test the performance of the proposed method in the field of SISR, providing a new dataset for analysis."}, {"Category": "Extension or Continuation", "Citation": "[19]", "Explanation": "The cited work, Urban100, is used to test the performance of the proposed method in the field of SISR, providing a new dataset for analysis and extension of the research."}, {"Category": "Data Source", "Citation": "The texture regions from top to bottom", "Explanation": "The texture regions from top to bottom are used as a data source in the citing paper to demonstrate the performance of the proposed method in the field of SISR."}, {"Category": "Methodological Basis", "Citation": "The value of k can be set flexibly in testing phase", "Explanation": "The cited work provides a method of setting the value of k in the testing phase to find the tradeoffs between computational complexity and accurate reconstruction results in the field of SISR."}, {"Category": "Methodological Basis", "Citation": "The SR performance of different k setting are shown in Fig. 
6", "Explanation": "The cited work shows the SR performance of different k settings in the field of SISR, providing a method for evaluating the performance of the proposed method."}, {"Category": "Methodological Basis", "Citation": "The soft thresholding (ST) operation is used to replace the softmax transformation in NLA-based deep SISR models", "Explanation": "The cited work provides a method of replacing the softmax transformation in NLA-based deep SISR models with the soft thresholding (ST) operation to improve the performance of the proposed method in the field of SISR."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work, FSRCNN, serves as a basis for the insertion of the HSPA into the deep SISR model to improve the SR performance with negligible parameters."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work, EDSR, is used as a reference for the insertion of the HSPA into the deep SISR model to enhance the SR performance with minimal parameters."}, {"Category": "Methodological Basis", "Citation": "[54]", "Explanation": "The cited work, RCAN, is employed as a benchmark for the integration of the HSPA into the deep SISR model to improve the SR performance with negligible parameters."}, {"Category": "Extension or Continuation", "Citation": "[34]", "Explanation": "The cited work, CSNLN, is further extended by replacing the softmax transformation with the ST operation to improve the SR performance of the deep SISR model without additional parameters."}, {"Category": "Extension or Continuation", "Citation": "[33]", "Explanation": "The cited work, NLSN, is extended by incorporating the ST operation to enhance the SR performance of the deep SISR model without any extra parameters."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The cited work, SAN, is further developed by replacing the softmax transformation with the ST operation to improve the SR performance of the deep SISR model without additional parameters."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work, EDSR, serves as a methodological basis for the comparison of model parameters and SR performance in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[55]", "Explanation": "The cited work, RDN, is used as a method to compare model parameters and SR performance in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[54]", "Explanation": "The cited work, RCAN, is referenced to compare model parameters and SR performance in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work, DBPN, is used to compare model parameters and SR performance in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The cited work, RNAN, is referenced to compare model parameters and SR performance in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work, SAN, is used to compare model parameters and SR performance in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work, NLSN, is referenced to compare model parameters and SR performance in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work, ENLCN, is used to compare model parameters and SR performance in the citing paper."}, {"Category": "Methodological Basis", 
"Citation": "[33]", "Explanation": "The cited work NLSN is used as a reference for the comparison of SR performance in the citing paper, which provides a methodological basis for evaluating the performance of the HSPAN model."}, {"Category": "Extension or Continuation", "Citation": "[54]", "Explanation": "The cited work RCAN is used to compare the SR performance of the HSPAN model, indicating an extension of the research in the cited work to a new context in the citing paper."}, {"Category": "Data Source", "Citation": "[17]", "Explanation": "The cited work DBPN is acknowledged as a data source in the inference time comparison in Fig. 10, highlighting the reliance on external data for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work EDSR is used to compare the SR performance of the HSPAN model, providing a methodological basis for evaluating the performance of the HSPAN model."}, {"Category": "Methodological Basis", "Citation": "[55]", "Explanation": "The cited work RDN is used to compare the SR performance of the HSPAN model, indicating a methodological basis for evaluating the performance of the HSPAN model in the context of SR performance."}, {"Category": "Methodological Basis", "Citation": "[54]", "Explanation": "The cited work RCAN is used to compare the SR performance of the HSPAN model, providing a methodological basis for evaluating the performance of the HSPAN model in the context of SR performance."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work SAN is used to compare the SR performance of the HSPAN model, indicating a methodological basis for evaluating the performance of the HSPAN model in the context of SR performance."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work OISR is used to compare the SR performance of the HSPAN model, providing a methodological basis for evaluating the performance of the HSPAN model in the context of SR performance."}, {"Category": "Methodological Basis", "Citation": "[56]", "Explanation": "The cited work IGNN is used to compare the SR performance of the HSPAN model, indicating a methodological basis for evaluating the performance of the HSPAN model in the context of SR performance."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work CSNLN is used to compare the SR performance of the HSPAN model, providing a methodological basis for evaluating the performance of the HSPAN model in the context of SR performance."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work HAN is used to compare the SR performance of the HSPAN model, indicating a methodological basis for evaluating the performance of the HSPAN model in the context of SR performance."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work DRLN is used to compare the SR performance of the HSPAN model, providing a methodological basis for evaluating the performance of the HSPAN model in the context of SR performance."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work SwinIR is used to compare the SR performance of the HSPAN model, indicating a methodological basis for evaluating the performance of the HSPAN model in the context of SR performance."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work ENLCN is used to compare the SR performance of 
the HSPAN model, providing a methodological basis for evaluating the performance of the HSPAN model in the context of SR performance."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work EDSR is extended in the citing paper to further improve the SR performance and reduce inference time by implementing the HSPA technique."}, {"Category": "Methodological Basis", "Citation": "[55]", "Explanation": "The cited work RDN is adopted in the citing paper to provide a methodological basis for improving SR performance and reducing inference time through the implementation of the HSPA technique."}, {"Category": "Methodological Basis", "Citation": "[54]", "Explanation": "The cited work RCAN is also adopted in the citing paper to provide a methodological basis for improving SR performance and reducing inference time through the implementation of the HSPA technique."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work SAN is used as a methodological basis in the citing paper to improve SR performance and reduce inference time through the implementation of the HSPA technique."}, {"Category": "Data Source", "Citation": "[18]", "Explanation": "The cited work OISR is used as a data source in the citing paper to provide a reference for the implementation of the HSPA technique in improving SR performance and reducing inference time."}, {"Category": "Data Source", "Citation": "[56]", "Explanation": "The cited work IGNN is also used as a data source in the citing paper to provide a reference for the implementation of the HSPA technique in improving SR performance and reducing inference time."}, {"Category": "Data Source", "Citation": "[34]", "Explanation": "The cited work CSNLN is used as a data source in the citing paper to provide a reference for the implementation of the HSPA technique in improving SR performance and reducing inference time."}, {"Category": "Data Source", "Citation": "[35]", "Explanation": "The cited work HAN is also used as a data source in the citing paper to provide a reference for the implementation of the HSPA technique in improving SR performance and reducing inference time."}, {"Category": "Data Source", "Citation": "[33]", "Explanation": "The cited work NLSN is used as a data source in the citing paper to provide a reference for the implementation of the HSPA technique in improving SR performance and reducing inference time."}, {"Category": "Data Source", "Citation": "[1]", "Explanation": "The cited work DRLN is used as a data source in the citing paper to provide a reference for the implementation of the HSPA technique in improving SR performance and reducing inference time."}, {"Category": "Data Source", "Citation": "[25]", "Explanation": "The cited work SwinIR is also used as a data source in the citing paper to provide a reference for the implementation of the HSPA technique in improving SR performance and reducing inference time."}, {"Category": "Data Source", "Citation": "[20]", "Explanation": "The cited work LapSRN is used as a data source in the citing paper to provide a reference for the implementation of the HSPA technique in improving SR performance and reducing inference time."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work MemNet is also used as a data source in the citing paper to provide a reference for the implementation of the HSPA technique in improving SR performance and reducing inference time."}, {"Category": "Data Source", "Citation": "[40]", 
"Explanation": "The cited work SRMDNF is used as a data source in the citing paper to provide a reference for the implementation of the HSPA technique in improving SR performance and reducing inference time."}, {"Category": "Data Source", "Citation": "[17]", "Explanation": "The cited work DBPN is used as a data source in the citing paper to provide a reference for the implementation of the HSPA technique in improving SR performance and reducing inference time."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work provides a set of multiplication operations that the citing paper adopts in their research to perform calculations and generate results."}, {"Category": "Supporting Evidence", "Citation": "[8]", "Explanation": "The cited work, FSRCNN, serves as a baseline for comparison in the citing paper, providing a foundational method for evaluating the performance of the proposed HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work, VDSR, is also used as a baseline for comparison in the citing paper, contributing to the overall assessment of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[23]", "Explanation": "The cited work, LapSRN, is another baseline method used in the citing paper to compare the performance of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[26]", "Explanation": "The cited work, EDSR, is another baseline method used in the citing paper to compare the performance of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[40]", "Explanation": "The cited work, MemNet, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[52]", "Explanation": "The cited work, SRMDNF, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[17]", "Explanation": "The cited work, DBPN, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[55]", "Explanation": "The cited work, RDN, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[54]", "Explanation": "The cited work, RCAN, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[5]", "Explanation": "The cited work, SAN, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[18]", "Explanation": "The cited work, OISR, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[56]", "Explanation": "The cited work, IGGN, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[34]", "Explanation": "The cited work, CSNLN, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[35]", "Explanation": "The cited work, HAN, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": 
"Supporting Evidence", "Citation": "[33]", "Explanation": "The cited work, NLSN, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[1]", "Explanation": "The cited work, DRLN, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[25]", "Explanation": "The cited work, SwinIR, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Supporting Evidence", "Citation": "[46]", "Explanation": "The cited work, ENLCN, is used in the citing paper to provide a method for comparison in the evaluation of the HSPAN model."}, {"Category": "Extension or Continuation", "Citation": "[33]", "Explanation": "The cited work NLSN serves as a baseline for comparison in the citing paper, and the citing paper extends the research by bringing improvements in terms of scale factor and performance on various datasets."}, {"Category": "Supporting Evidence", "Citation": "[34]", "Explanation": "The cited work CSNLN is used as a benchmark to compare the performance of the proposed HSPAN in the citing paper, providing a basis for evaluating the improvement achieved by the HSPAN."}, {"Category": "Methodological Basis", "Citation": "[55]", "Explanation": "The RDN model is cited as a method for deep SISR, and the proposed HSPAN is compared to it in terms of its ability to repair severely damaged textures in input images."}, {"Category": "Methodological Basis", "Citation": "[54]", "Explanation": "The RCAN model is cited as a method for deep SISR, and the proposed HSPAN is compared to it in terms of its ability to repair severely damaged textures in input images."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The SwinIR model is cited as a method for deep SISR based on non-local attention, and the proposed HSPAN is compared to it in terms of its ability to achieve better reconstruction performance with more accurate image details."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The ENLCN model is cited as a method for deep SISR based on non-local attention, and the proposed HSPAN is compared to it in terms of its ability to achieve better reconstruction performance with more accurate image details."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work, SPMSR, serves as a methodological basis for the comparison of SR performance in the blur-downscale degradation SISR tasks, as the scale factor and gaussian standard deviation are set based on the methods used in SPMSR."}, {"Category": "Supporting Evidence", "Citation": "[52], [54]", "Explanation": "The cited works, VDSR and RCAN, provide supporting evidence for the verification of SR performance in the blur-downscale degradation SISR tasks, as the scale factor and gaussian standard deviation are set based on the methods used in these works."}, {"Category": "Data Source", "Citation": "[52]", "Explanation": "The cited work, SRMDNF, is the data source for the comparison of SR performance in the blur-downscale degradation SISR tasks, as the scale factor and gaussian standard deviation are set based on the methods used in SRMDNF."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work, EDSR, serves as a methodological basis for the comparison of SR performance in the blur-downscale degradation SISR 
tasks, as the scale factor and gaussian standard deviation are set based on the methods used in EDSR."}, {"Category": "Methodological Basis", "Citation": "[55]", "Explanation": "The cited work, RDN, serves as a methodological basis for the comparison of SR performance in the blur-downscale degradation SISR tasks, as the scale factor and gaussian standard deviation are set based on the methods used in RDN."}, {"Category": "Methodological Basis", "Citation": "[54]", "Explanation": "The cited work, RCAN, serves as a methodological basis for the comparison of SR performance in the blur-downscale degradation SISR tasks, as the scale factor and gaussian standard deviation are set based on the methods used in RCAN."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work, SAN, serves as a methodological basis for the comparison of SR performance in the blur-downscale degradation SISR tasks, as the scale factor and gaussian standard deviation are set based on the methods used in SAN."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work, HAN, serves as a methodological basis for the comparison of SR performance in the blur-downscale degradation SISR tasks, as the scale factor and gaussian standard deviation are set based on the methods used in HAN."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work, HAN, serves as a methodological basis for the proposed HSPAN in the citing paper, as it is used to bring a performance improvement in the SISR task with blur-downscale degradation."}, {"Category": "Extension or Continuation", "Citation": "[35]", "Explanation": "The cited work, HAN, is used as a benchmark to compare the performance of the proposed HSPAN in the blur-downscale SISR task. The results show that HSPAN is more effective in reconstructing textures, while HAN struggles to generate satisfactory textures in regions with complex patterns."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b27", "b16", "b35", "b29", "b33", "b15", "b19", "b37", "b14", "b25", "b24", "b26", "b11" ], "table_ref": [], "text": "It would be no exaggeration to say that transformer-based large language models (LLMs) have revolutionized the field of natural language processing (NLP). Kicked off by the advances presented by the GPT-x models developed by OpenAI [27], these types of language models currently provide state-of-the-art performance in many of the standard NLP tasks. Although LLMs were originally developed mostly to do word sequence completion tasks, with no guarantees about the completion beyond its coherence, there have been increasing claims and anecdotal evidence that they have other emergent capabilities that are not normally associated with sequence completion. Indeed, the hints of such emergent capabilities has started a veritable land rush, with researchers probing (prompting) and studying LLM behavior almost as if they were artificial organisms (c.f. [16]). Of particular interest to us in this paper is the thread of efforts that aim to investigate (and showcase) reasoning abilities of LLMs-including commonsense reasoning [35,29,7], logical reasoning [33], and even ethical reasoning [15]. The macro-tenor of the drumbeat of these works has been suggesting that LLM's are indeed capable of doing such kinds of reasoning [19,37,4].\nOne type of reasoning task that has been well studied in the AI community is planning and sequential decision making. At its simplest, planning involves developing a course of actions (policy) which when executed takes the agent to a desired state of the world. Planning has generally been studied primarily as an inference on world and reward models-whether specified by humans or learned by the agent by interacting with its world. In this paper, we are interested in seeing what planning abilities, if any, LLMs may already have, given their high capacity functions (with billions of tunable parameters) trained on web-scale corpora. Specifically, we are interested in answering two broad questions:\n1. How effective are LLMs by themselves in generating simple plans in commonsense planning tasks (of the type that humans are generally quite good at)?\n2. How good are LLMs in an LLM-Modulo setting where they act as a source of heuristic guidance for other agents in their planning tasks?\nNotice that in theory, it is possible for LLMs to be very effective as idea generators for external sound planners or humans in the loop in computer-supported cooperative work scenarios, while themselves being very bad at generating plans that are guaranteed to be correct. This is especially likely because the chief power of LLMs comes from their pattern-finding abilities than from firstprinciples simulations over world models. Compared to a planner that is guaranteed to be correct in a narrow set of domains, LLMs may likely be good at generating plausible (but not guaranteed to be correct) plan heuristics/suggestions in many more domains.\nTo investigate these questions in a systematic rather than anecdotal manner, we generate a suite of planning problem instances2 based on the kinds of domains employed in the International Planning Competition [14]. 
To eliminate the subjective aspect of analysis that forms the core part of many earlier efforts on evaluating the reasoning capabilities of LLMs, we automate the evaluation by leveraging models and tools from the automated planning community.\nFigure 1: The diagrammatic overview of the two modes of LLMs for planning.\nThe evaluation itself is done in two modes (shown in Figure 1). In the first \"autonomous\" mode, LLMs are used standalone, and we directly assess the quality and correctness of plans they generate.\nAs we shall see, the results in the autonomous mode are pretty bleak. On average, only about 12% of the plans that the best LLM (GPT-4) generates are actually executable without errors and reach their goals. We will show that the choice of the specific LLM (we have tested the family of GPT LLMs including GPT-4 [25], GPT-3.5 [24], InstructGPT-3.5, InstructGPT-3 [26] and GPT-3 [3]), as well as fine-tuning, does not seem to have a major effect on this dismal performance. We also show that the performance deteriorates further if the names of the actions and objects in the domain are obfuscated-a change that doesn't in any way affect the performance of the standard AI planners. To shed further light on the performance of GPT-4, we present an evaluation of the plans it generates under a series of more relaxed (more forgiving) executability conditions. Further, we provide a human baseline for the simplest domain in our set of domains, by presenting the planning instances to human subjects (through IRB-approved studies) and evaluating the quality and correctness of their plans. These results are substantially better than those of LLMs-confirming that LLMs can't plan even in a simple common sense domain in the autonomous mode. In the second LLM-Modulo mode, the plans produced by LLMs are given as input to an automated planner working off of a correct domain model to check whether the LLM's plans help with the search process of the underlying planner to come up with correct plans. Specifically, we show that a well-known automated planner called LPG [6], which uses local search to locate and remove flaws in a candidate plan to make it correct, is able to repair the LLM plans with relative ease. We compare the LLM+LPG combination with two baselines: one where an empty plan is used as the seed plan for the LPG and two, where a random plan is provided as the seed plan to the LPG. We show that the average number of search steps taken by the LLM+LPG combination is much lower than with either baseline, thereby revealing that LLMs' plans are indeed helping with the search process of the underlying planner. Further, instead of having LPG correct the plans, we use an external verifier, VAL [11], to point out the errors in the LLM-generated plans and back-prompt the LLM for a new plan with this feedback. We show that this repeated interaction indeed improves the plan correctness in common-sense domains. Overall, our findings demonstrate that, with respect to planning, LLMs perform poorly in the autonomous mode, but the generated plans can help AI planners in the search process, or can be given to external verifiers whose feedback is used to back-prompt the LLM for better plans. In this paper, we first present an overview of the related work. Following that, we describe the necessary background and the prompt generation pipeline. Finally, we provide the results and analysis of various experiments undertaken in both autonomous and heuristic evaluation modes."
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b12", "b13", "b39", "b28", "b0", "b38", "b21", "b32", "b21", "b18" ], "table_ref": [], "text": "In this work, we look at LLMs' planning capabilities when the domain is given as part of the prompt (as is the standard practice in automated planning [8]). Our evaluation focuses on zero-shot (just domain and problem specification), and few-shot (example problems with plans) modes. There have been a few works that looked at the planning capabilities of LLMs. Most of them, such as [12,2] focus on commonsense domains/tasks (e.g. moving things in kitchens, wedding/menu planning etc.) and thus evaluate LLMs in a mode wherein the prompt doesn't include any information about the specific domain. Plans generated in that way are hard to evaluate as they are not directed at any plan executor and the humans often wind up giving the benefit of doubt for a plausible-but not actually executable-plan. This is why in SayCan [2], where executability is critical, they try to filter out/interpret the LLM plans in terms of the skills/actions that are actually available to the executor. While SayCan does this in a rather convoluted way that requires access to the internal log probabilities of the LLM, our approach simplifies this by specifying the domain as part of the prompt. In all our experiments, we found that LLMs only use the actions listed as part of the domain specification.\nOne other mode of evaluation of planning capabilities in the literature involves the user incrementally interacting with the LLM, and re-prompting it to point out flaws in its plans, with the hope that the LLM eventually reaches an executable plan [13,39,28]. Such evaluations are notorious for their Clever Hans effect [1] with the actual planning being done by the humans in the loop rather than the LLMs themselves. We thus separate our evaluation into two modes-autonomous and as assistants to external planners/reasoners. There have also been efforts which mostly depended on LLMs as \"translators\" of natural language problem/goal specification into formal specifications, which are then thrown over to sound external planners [38,21]. Such efforts don't shed any light on the internal planning capabilities of the LLMs themselves, as our evaluations in autonomous and assistive modes do. Finally, after our initial study and benchmark were made public, other groups did parallel studies that largely corroborate our results on the ineffectiveness of LLMs in finding executable plans [32,21].\nTaking a broader perspective, making plans in the world involves (1) discovering actions (and their precondition/effect causal dependencies), and (2) sequencing an appropriate subset of available/discovered actions to achieve the agent's goals. The former requires broad knowledge about actions available in the world and their individual effects, while the latter requires deep drilling-down over a given set of actions to ensure that all goals are supported (causal chaining) without any undesirable interactions. LLMs have an edge on the former-they do indeed have web-scale broad knowledge! As we shall see however, they are very bad at the second phase of developing valid interaction-free plans (in part, because LLMs don't have the ability to do combinatorial search). 
Most cases in the literature (as outlined in [18]) where LLMs are claimed to have \"planned\" turn out, upon close examination, to be instances of phase 1-your wedding plans, recipe plans etc.-where you are either using a very forgiving plan correctness criterion, or phase 2 is vacuous. Standard AI planners-on the other hand-assume that the discovery part is done and handed down as a compact domain model, and focus mostly on the second part: selecting among known actions to establish causal chains and sequencing them to make them interaction-free. In this sense, LLMs and AI planners can be complementary, as we have shown in this paper-with the former helping with phase 1-either with a candidate/approximate plan or domain model-and the latter with phase 2.\n3 Prompt Generation for Classical Planning Problems" }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b22" ], "table_ref": [], "text": "Given that we are interested in investigating the basic reasoning about actions and change problem, we want to look at the most fundamental planning formalism first, namely the goal-directed deterministic planning problem. Colloquially referred to as classical planning problems, these problem classes consist of a problem domain, an initial state and a goal state. The problem domain consists of a set of fluents, which correspond to predicates with some arity, and a set of actions. The state-space for the planning problem is defined by the possible truth assignments over the predicates. Each action consists of preconditions and effects, where the preconditions are a set of predicates that describe when an action can be executed and the effects are a set of predicates that describe what happens when an action is executed. The effects can further consist of add effects, which are the set of predicates that will be set true by the action, and delete effects, which are the set of predicates that will be set false. The solution for a planning problem is a sequence of actions, or a plan, that when applied in the initial state will result in a state where the goal conditions are satisfied. A standard representation to specify such kinds of planning problems is the Planning Domain Definition Language (PDDL) [22]. Below is a snippet of an action from a popular benchmark problem called Blocksworld, in PDDL. The action corresponds to picking up a block in that domain. A more detailed description of classical planning problems is provided in Appendix A.1. We will now describe how we generate the prompts that are given to the LLMs. Within a prompt, LLMs are first provided with a lifted domain description. For one-shot configurations, the prompt additionally contains an example instance of a planning problem (consisting of a description of the initial state and the goal) and the corresponding plan (which ends with a tag, referred to as the plan-end tag, that denotes the end of the plan). All prompts end with a planning problem description. The text generated by the LLM until the plan-end tag is used as the candidate for extracting the plan. If the extractor cannot reasonably extract an instance, it is marked as incorrect. The prompt is either formatted in natural language or PDDL. Natural language prompts utilize complete natural language sentences to describe feasible actions in the domain. Initial conditions are also reported as complete sentences. Plans in the natural language setting take the form of a series of commands such as \"stack the orange block on top of the blue block\".
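For reference, the pick-up action referred to above is conventionally written along the following lines in PDDL (a reconstruction based on the standard IPC Blocksworld encoding and on the predicate names mentioned later in this paper; the exact identifiers in the authors' domain file may differ):

; pick up a block ?ob from the table
; preconditions: nothing is on ?ob, ?ob is on the table, and the hand is empty
; add effect: (holding ?ob); delete effects: the negated literals
(:action pickup
  :parameters (?ob)
  :precondition (and (clear ?ob) (ontable ?ob) (handempty))
  :effect (and (holding ?ob) (not (clear ?ob)) (not (ontable ?ob)) (not (handempty))))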
As implied by the name, PDDL prompts format all elements (domain description, initial state, goal state, and plans) using PDDL. We point the reader to the supplementary material for examples on each of these prompt configurations." }, { "figure_ref": [ "fig_0" ], "heading": "Prompt Generation", "publication_ref": [], "table_ref": [], "text": "Chain of Thought Prompting: In addition to the four experiments above, we look at a fifth experiment using a state tracking chain of thought prompting technique in a natural language one shot setting. Within this configuration, we provide an annotated example where each action is annotated with the state prior to the action, the reason for why the action is applicable in the prior state, and the resulting state after applying the action. After the example, a meta-explanation about plan correctness is provided. The LLM is then asked to return a response making the same state tracking and justification annotations that were included in the example.\nPrompt Generation Pipeline: We've developed a prompt generation pipeline (visualized in Figure 2) that accepts PDDL domain files as input and outputs prompts that follow the experiments described above. The prompt generation component takes care of creating the set of PDDL problems to be solved for all experiments. Following that, examples are added to the prompt in one shot experiments. While our setup utilizes a planner during example generation, any example generation technique could be used here so long as the examples generated are valid plans. In the state tracking experiment, we also have developed a component to add justification annotations for examples so that the examples reflect what we expect of the LLM. The last step before finishing is translation: since problems at this point are currently in PDDL, prompts for all natural language experiments (whether an example was added or not) need to be translated into natural language. We utilize a domain-specific translator to do so." }, { "figure_ref": [ "fig_1" ], "heading": "Evaluating Planning Capabilities of LLMs in Autonomous Mode", "publication_ref": [ "b11", "b25", "b24", "b40", "b31" ], "table_ref": [ "tab_1", "tab_2", "tab_1", "tab_2", "tab_1", "tab_2" ], "text": "In the autonomous mode, we treat the LLM as an automated planner and perform a single run of the dataset on the LLMs for each domain and prompt configuration. In this mode, the plan generated by the LLM is back-translated from natural language to forms that can be used by external plan validators. For each domain, we perform template-based translation to translate between PDDL and natural language for the natural language prompt configurations. We use VAL [11] to evaluate the translated plan with the corresponding domain and problem file. Our evaluation here primarily focuses on the GPT family of LLMs. We tested GPT-4 [25] and GPT-3.5 (commonly known as Chat-GPT) [24] on all the prompt configurations while we tested the older versions of GPT (namely, Instruct-GPT3 and GPT3) on one-shot natural language prompts across the domains. We set the temperature for all models to be 0, thereby making them deterministic. In this section, we detail the evaluation of LLMs on these domains and prompt configurations. We would like to point the reader to the Appendix for example prompts.\nEvaluation of LLMs on the Blocksworld domain: Blocksworld problems capture common sense block manipulations and consist of a set of blocks. 
Blocks are identified with unique colors and are placed either on a table or on top of other blocks. The goal is to arrange some of these blocks in a stack in a particular order. The general expectation here is that a block can be picked up only if it is clear, i.e., there are no other blocks on top of it, and a block can be stacked on top of another block only if that block is clear. The choice of this particular domain is motivated by the fact that it is both a simple common sense domain and a very popular domain in the planning literature, with a long history of being used in various planning challenges. The instances were generated using a PDDL generator employed in the IPC competitions. We permitted the generation of problems that varied in terms of the number of blocks (3-5), optimal plan length, and goal properties (positive, negative, or no interactions between subgoals).\nAs shown in Table 1 and Table 2, GPT-4 improves upon previous versions of GPT models in the Blocksworld domain across all four prompt configurations. However, the overall performance is still only approximately 34% on the Blocksworld dataset. Even chain-of-thought style prompting (indicated by COT in the tables) had little effect on improving the performance. GPT-4 performs better with natural language prompts (206 and 210 instances for one-shot and zero-shot prompts, respectively) as opposed to PDDL prompts (75 and 106 instances). The performance drops significantly with other GPT models. We also discovered that for instances where Instruct-GPT3 generated the correct plans, replacing the example plan in the prompt with another example plan led to an even greater drop in accuracy. This suggests that the LLM seems to rely primarily on pattern matching, rather than inducing some internal model from the prompts. Overall, even in a seemingly simple common-sense domain like Blocksworld, which humans typically find easy to navigate, LLMs prove to be quite ineffective in planning autonomously.\nFinetuning GPT-3 on Blocksworld: Along with directly testing the LLMs from the GPT family, we have also looked at the utility of fine-tuning the LLMs. Specifically, we fine-tuned GPT-3 (Davinci) in the Blocksworld domain. For this, we prepared a dataset comprising the initial state, goal state, and the respective plan for 1,000 distinct Blocksworld instances. It's important to note that these instances were separate from our test set of 600 instances. Using the default hyperparameters provided by OpenAI and an 80-20 train-validation data split, we carried out the fine-tuning process. Our results revealed that the fine-tuned GPT-3 solved only 122 instances out of the 600 in our set, representing approximately 20% of the total. This suggests that fine-tuning has a limited impact on improving the performance of LLMs in Blocksworld planning. This outcome aligns with the observations of [40], who argue that language models trained for reasoning tend to concentrate on the inherent statistical features instead of the causal structure, which in turn affects their performance on such tasks.\nOpen-source models: In addition to the GPT family of LLMs, we have also conducted preliminary experiments with an open-source LLM, BLOOM [31], and found that BLOOM too is ineffective in plan generation. We assessed BLOOM's performance in the blocksworld and mystery blocksworld (deceptive) domains using a one-shot natural language prompt configuration.
In the blocksworld domain, BLOOM correctly handled only 4 out of 250 instances, representing a 1.6% success rate. In the mystery domain, it failed to produce a single correct response in all 50 instances.\nHuman Baseline for the Blocksworld: We have previously mentioned that planning tasks on the blocksworld domain are anecdotally simple enough for humans to perform. To establish this and come up with a preliminary baseline against which to compare LLMs' performance, we conducted an IRB-approved user study where we asked 50 participants to come up with a plan for a blocksworld instance picked at random from the set of 600 instances that we used for the evaluation of LLMs. We presented the same domain description as we did for the LLMs and then primed them with an example instance.\nWe point the reader to the supplementary material for further details on the study.\nOut of the 50 participants, 39 of them (78%) came up with a valid plan. Along with validity, we also tested the optimality of their plans even though they were not required to come up with an optimal plan. Out of those 39 participants, 35 (89.7%) came up with an optimal plan. These initial results show that the blocksworld domain is simple enough that most humans are able to come up with plans (which are also mostly optimal), while LLMs, on the other hand, showcase subpar performance.\nEvaluation of LLMs on the Logistics domain: Logistics is also a widely recognized domain in the planning literature. In this domain, the objective is to transport packages within cities via trucks, and between cities via airplanes. Within a city, the locations are directly linked, allowing trucks to travel between any two of these locations. Similarly, cities are directly connected to each other, allowing airplanes to travel between any two cities. Each city is equipped with one truck and has a designated location that functions as an airport. We generated 200 instances in this domain.\nFrom Tables 1 and 2, we see that in the one-shot setting with natural language input, GPT-4 only solved 14% of the instances (28/200), and this rate dropped to 7.5% (15/200) when using zero-shot prompting. When provided with the domain and problem in PDDL format, GPT-4's performance remained the same in the one-shot setting (14% or 28/200) but decreased to 5.5% (11/200) in the zero-shot setting. GPT-3.5 did even worse.\nObfuscating names to test the brittleness of LLM Planning: Although the domain specification is part of our prompts, the names of the objects (e.g. blocks, trucks), predicates (e.g. on-table, in-city) and actions (e.g. pickup, drive) still do provide connections to the commonsense knowledge that the pretrained LLMs possess. One intriguing question is whether the planning performance is really based only on the domain model or also on these other background connections. To test this, we experimented with a variation of the Blocksworld domain, where we obfuscate the action names (for example, pickup becomes attack and unstack becomes feast) and the predicate names (for example, ontable becomes planet and handempty becomes harmony). Note that from the perspective of standard planners, these domains are essentially identical. 3 In addition to such deceptive obfuscation, we also considered a variation where random alphanumeric names were substituted for the action and object names.\nFrom Tables 1 and 2, we see that this simple obfuscation leads to a catastrophic drop in performance.
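To make the renaming concrete, the pick-up action sketched earlier becomes roughly the following in the deceptive variant (a sketch that applies only the renamings stated above; in the actual benchmark the remaining predicate names are replaced analogously):

; deceptive Mystery Blocksworld counterpart of pickup (illustrative only)
(:action attack
  :parameters (?ob)
  :precondition (and (clear ?ob) (planet ?ob) (harmony))
  :effect (and (holding ?ob) (not (clear ?ob)) (not (planet ?ob)) (not (harmony))))

Since only the symbols change, a standard planner treats this domain exactly as it treats the original; any additional difficulty for an LLM therefore stems from losing the familiar names rather than from the domain model itself.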
From Tables 1 and 2, we see that this simple obfuscation leads to a catastrophic drop in performance. Specifically, with zero-shot prompting and natural language input, GPT-4 is able to solve 210 instances out of 600 in the Blocksworld domain, but it could only solve 1 instance in the deceptive Mystery Blocksworld domain and 0 instances in the randomized mystery domain. A similar result is observed with the PDDL-style prompts: GPT-4 could solve 106 instances in Blocksworld, but only 3 instances in the deceptive Mystery Blocksworld. Notably, chain-of-thought prompting does not significantly improve performance over one-shot natural language prompts. GPT-3.5 does not solve even a single instance in the entire set of natural language instances. For most of the instances, GPT-3.5 outputs that the instance can't be solved. These results strongly suggest that whatever accidental planning performance LLMs show is likely connected to pattern matching rather than reasoning (which should be robust to name changes).
Analyzing GPT-4 failures: To get a better sense of the types of failures that LLM-generated plans encounter, we examined whether they would fare much better under a more forgiving test of the validity of the generated plans. In the automated planning community, relaxations of the domain model are used to simplify the problem, chiefly to derive heuristics for planning problems [8]. Taking a leaf from that work, we considered two types of relaxations: (i) delete relaxation, which ignores all the delete effects of the domain actions (thus ensuring that there can be no negative interactions between subgoals), and (ii) precondition relaxation, which ignores all the preconditions of the domain actions, thus assuming that the actions are executable from any state and give their effects.
Our idea is to evaluate the plans produced by GPT-4 with respect to domain models that are delete relaxed, precondition relaxed, or both. It should be clear that a plan that is correct with respect to the normal (unrelaxed) model will also be correct with respect to all the relaxed models. Figure 3 shows the results for Blocksworld. We see that while the correctness of LLM-generated plans increased under more forgiving (relaxed) assessments (area in green), even in the most lenient assessment mode (Delete+Precondition Relaxed), there still are plans (∼39%) that are incorrect (because they still don't reach the goals) across all the prompt configurations. The plots further classify the failure cases in terms of whether they were inexecutable (shown in maroon) or could be executed but didn't reach the goals (shown in red). Note that when preconditions are relaxed, all plans are executable. We provide additional details on the relaxed assessments in Appendix A.2.
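As a concrete sketch of how such relaxed assessments can be implemented, the simulator below executes a plan of grounded STRIPS-style actions and optionally ignores delete effects and/or preconditions; it returns the same three verdicts discussed above (valid, executable but non-goal-reaching, inexecutable). The Action container and the tiny Blocksworld fragment are simplified stand-ins for the full PDDL models, not the exact evaluation harness we used.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    pre: frozenset      # grounded precondition atoms
    add: frozenset      # add effects
    delete: frozenset   # delete effects

def assess(plan, init, goal, relax_delete=False, relax_pre=False):
    """Return 'valid', 'non-goal-reaching', or 'inexecutable' under the chosen relaxation."""
    state = set(init)
    for act in plan:
        if not relax_pre and not act.pre <= state:
            return "inexecutable"
        if not relax_delete:
            state -= act.delete
        state |= act.add
    return "valid" if set(goal) <= state else "non-goal-reaching"

# Tiny illustrative Blocksworld fragment (atoms are plain strings).
pickup_a = Action("pick-up a",
                  pre=frozenset({"clear a", "ontable a", "handempty"}),
                  add=frozenset({"holding a"}),
                  delete=frozenset({"clear a", "ontable a", "handempty"}))
stack_a_b = Action("stack a b",
                   pre=frozenset({"holding a", "clear b"}),
                   add=frozenset({"on a b", "clear a", "handempty"}),
                   delete=frozenset({"holding a", "clear b"}))
init = {"clear a", "clear b", "ontable a", "ontable b", "handempty"}

print(assess([pickup_a, stack_a_b], init, {"on a b"}))        # valid
print(assess([stack_a_b], init, {"on a b"}))                  # inexecutable
print(assess([stack_a_b], init, {"on a b"}, relax_pre=True))  # valid under precondition relaxation

Enabling the two flags independently or together gives the delete-relaxed, precondition-relaxed, and fully relaxed assessments discussed above.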
}, { "figure_ref": [], "heading": "LLM Plans as Heuristics to Sound Planners", "publication_ref": [ "b17", "b23", "b11" ], "table_ref": [ "tab_3" ], "text": "To see if the LLM generated plans can provide heuristic guidance to sound external planners, we use a local-search planner LPG [6] which generates plans by starting with a seed plan and iteratively repairing flaws until a correct plan is found. We feed the LLM-generated plan as the initial seed plan for LPG's iterative search. Our hypothesis is that this might put LPG on the right path and reduce the time for it to generate a correct plan. It is interesting to note the similarities between this LLM+LPG approach, and the approaches used in case-based planning in the past [9,17]. Here the LLM can be loosely viewed as \"retrieving a potentially useful plan case/sketch\" out of thin air, which the LPG adapts/corrects.\nWe utilized the plans that were generated by LLMs in the one-shot natural language prompt configuration on all three of our previous domains -Blocksworld, Mystery Blocksworld, and Logistics -as the \"seed plans\" from which LPG would begin its local search for a valid plan. For the Blocksworld domain, both GPT-4 and Instruct-GPT3 were evaluated, whereas for the Logistics and Mystery domains only GPT-4 was evaluated. We confirmed that all the plans that were generated by this LLM+LPG combination for both the domains were valid (which is as expected given that the underlying planner, LPG, is sound). To get an idea of how far the initial LLM generated plans were from the final correct solutions generated by LPG, we measured the Levenshtein edit distance between them. While the default LPG local search doesn't aim to minimize the changes to the suggested plan (there do exist versions of LPG that do this; see [23]) , the edit distances also give an idea of how partially or approximately correct the original LLM plan is. Along with the edit distance, we also measured the number of search steps that were taken by the LPG to come up with a correct plan.\nAs shown in Table 3, the edit distances across domains are approximately half the length of the seed plans generated by the LLMs, indicating that 50% of the final plan retains the elements of the initial LLM plan. For each problem, we performed two additional plan initializations to serve as baselines: initializing with an empty plan and initializing with a random plan of the same length as the plan generated by the LLM for that problem. In the Blocksworld and Logistics4 domains, we see a significant improvement in search steps over the empty seed plan when GPT-4 is used and an even larger one over the random seed plan. Consistent with our findings in the autonomous mode, the usefulness of this assistance wanes in domains where the relationships between predicates can no longer be inferred from common sense understandings of their names: in the Mystery Blocksworld domain, the LLM only has meager reduction in step size over the random plan and actually uses more steps than the empty plan. The interaction between LLM and LPG was unidirectional-with LLM sending a seed plan that LPG aims to repair. One supposed advantage of LLMs is that they can be prompted to improve their solutions. Suppose we have access to a sound automated verifier that not only checks the plan correctness but also pinpoints faults (in terms of unsatisfied preconditions or delete interactions). 
The interaction between the LLM and LPG was unidirectional: the LLM sends a seed plan that LPG aims to repair. One supposed advantage of LLMs is that they can be prompted to improve their solutions. Suppose we have access to a sound automated verifier that not only checks plan correctness but also pinpoints faults (in terms of unsatisfied preconditions or delete interactions). Such feedback can be easily converted into a "backprompt" to the LLM, with the hope that the LLM comes up with a better plan. This is what we do with the help of VAL [11], an AI planning tool that uses the domain model to validate the correctness of plans (and point out errors).
Verifier-assisted repeated backprompting of LLMs
While, as we mentioned earlier, there can be thorny "Clever Hans" issues with humans prompting LLMs, an automated verifier mechanically backprompting the LLM does not suffer from these. We tested this setup on a subset of the failed instances in the one-shot natural language prompt configuration using GPT-4, given its larger context window. We set a threshold of 15 backprompting rounds (a minimal sketch of this loop is given at the end of this section). We tested on three domains (Blocksworld, Logistics, and Mystery Blocksworld) with 50 failed instances from each domain. Table 4 shows the results. We provide the prompt+feedback examples in Appendix A.9. We found that GPT-4 is able to come up with correct plans for 82% of the Blocksworld instances and 70% of the Logistics instances. The average number of backprompting rounds for these successful cases was 3.68 for Blocksworld and 3.31 for Logistics. The performance on Mystery Blocksworld, however, remained quite poor, suggesting that even with backprompting, GPT-4 cannot do well unless it can tease out commonsense patterns for the domain.
In this backprompting configuration, the LLM serves as the candidate plan generator while VAL serves as the external sound verifier. While it is tempting to have a self-critiquing architecture with the LLM also serving as the verifier, our recent work shows that approach to be of questionable utility, as LLMs are no better at verifying plans than they are at generating them [36, 34].
LLMs as idea generators for humans-in-the-loop
Along with external planners and verifiers, LLMs may also offer their insights as plan suggestions directly to the human-in-the-loop, which might potentially guide the user to the correct plan. After all, this sort of computer-supported cooperative work (CSCW) use case has been a staple of LLM applications. We explored the efficacy of LLMs in assisting human planners through a between-subjects user study, structured similarly to the study outlined in Section 4, but with two primary distinctions: (1) The study involved two separate participant groups. The first group received no assistance in devising plans, paralleling the approach in Section 4, while the second group had access to LLM-generated suggestions. (2) Both participant groups were asked to offer subjective feedback via the NASA-TLX assessment tool [10], gauging their cognitive load. Additionally, participants from the second group evaluated the correctness of the LLM suggestions presented to them. We utilized the plans generated by GPT-4 to provide plan suggestions.
The study included 49 participants in the unassisted group and 48 in the LLM-assisted group. We evaluated the statistical significance of the differences between the groups in accuracy, time taken, and cognitive load. Our findings revealed no statistically significant difference between the groups on any of the three aspects.5 Notably, 3 out of 48 participants mistakenly accepted incorrect LLM suggestions, with two submitting these erroneous suggestions as their plans. This shows the potential for automation bias in such methodologies [5]. We have provided the details of the user study in Appendix A.12.
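Putting the pieces of the verifier-assisted setup together, the backprompting procedure described above reduces to a simple generate-validate-reprompt loop. The sketch below assumes two placeholder callables, query_llm (a call to the underlying model) and validate_with_VAL (a wrapper that runs VAL on the candidate plan and returns a validity flag plus a natural-language description of the first fault); both are hypothetical stand-ins rather than real APIs, and the feedback template is illustrative.

MAX_ROUNDS = 15  # backprompting threshold used in our experiments

def backprompt_loop(domain_description, instance_description, query_llm, validate_with_VAL):
    """Generate a candidate plan with the LLM and repair it using feedback from a sound verifier."""
    prompt = domain_description + "\n" + instance_description
    for _ in range(MAX_ROUNDS):
        plan = query_llm(prompt)                      # candidate plan (LLM as idea generator)
        is_valid, feedback = validate_with_VAL(plan)  # sound external check (VAL)
        if is_valid:
            return plan
        # Turn the verifier's critique (e.g. an unmet precondition) into a backprompt.
        prompt += ("\n" + plan + "\nThe above plan is invalid. " + feedback +
                   "\nPlease provide a corrected plan.")
    return None  # no valid plan within the round budget

Examples of the resulting feedback exchanges are shown in the appendix.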
Conclusion and Future Work
In this paper, we presented a critical investigation of the planning abilities of large language models (LLMs). To this end, we evaluated the plan generation abilities of LLMs in two different modes. In the autonomous mode, our results show that even in simple common-sense planning domains where humans could easily come up with plans, LLMs like GPT-3 exhibit dismal performance. Even though there is an uptick in performance by the newer GPT-4 in the Blocksworld domain, it still fails miserably on the Mystery Blocksworld domain, indicating their inability to reason in an abstract manner. In the LLM-Modulo setting, we have seen that plans generated by LLMs can help improve the search of sound planners like LPG. Further, we showed that using external verifiers, we can point out the errors and back-prompt LLMs for a better plan. We showed that this indeed helps in common-sense domains. In the supplementary material, we show the prompt examples for all the configurations and the details of the user studies (Appendix A.11). From our studies, we see that LLMs as autonomous planners fail miserably, but we also see that the generated plans improve the search when used by an underlying sound planner, and that better plans can be obtained by back-prompting the LLM with feedback from an external verifier.
predicates). Below is a snippet of an action from a popular benchmark domain called Blocksworld, in PDDL. The action corresponds to picking up a block in that domain. The parameter line provides the possible variables, in this case ?ob, which can stand for possible blocks. The precondition says that you can only pick up a block if it is clear (i.e., predicate (clear ?ob) is true for the block), the block is on the table, and the arm is empty. The effects tell you that after you execute the action, the predicate (holding ?ob) becomes true, and the block will no longer be considered clear and on-table. Finally, the arm will no longer be considered empty. A solution to a planning problem is called a plan, and corresponds to a sequence of actions that, once executed in the initial state, would lead to a state where the goal specification is true. The actions may additionally be associated with costs; in these cases, one could also talk about optimal plans, i.e., a plan π is called optimal if no plan exists that is less costly than π. The above description presents one of the simpler classes of planning models and can be extended in multiple ways, including allowing for object typing (including type hierarchies), more complex forms of preconditions and conditional effects, not to mention supporting richer classes of planning formalisms.
A.2 Comparisons between the instances and plans generated by GPT-4
We have also examined the distribution of the instances (in the Blocksworld and Logistics domains) that were used to test the LLMs over optimal plan lengths, and the distribution of the number of correct plans by GPT-4 over the optimal plan lengths. From Figures 4, 5, 6 and 7, we can say that our traditional notions of planning complexity do not hold with LLMs. For an LLM, an instance that is easier from the perspective of planning complexity is no different from a harder one, as it just predicts the next tokens based on their weights and the context.
Similar to that of Blocksworld, there is an increase in the number of goal-reaching plans, but even in the most lenient assessment mode (Delete+Precondition Relaxation), there are quite a number of non-goal-reaching plans. In the assessment modes with precondition relaxations, an inexecutable plan is when there is an action in the plan that does not contain the required number of parameters.\nFigure 9 shows the assessment of GPT-4 plans with relaxations in the Logistics domain. Even in this domain, as we further relax the assessment, we again see an increase in the number of goal reaching plans, but even the most relaxed configuration still has non-goal reaching plans. Figure 6: A detailed comparison of the logistics instances, against the instances where GPT-4 was able to generate a correct plan with a PDDL or a natural language style prompt which included one example." }, { "figure_ref": [], "heading": "A.3.2 Human failures", "publication_ref": [], "table_ref": [], "text": "For the human baseline user study (Section 4), out of 50 participants, 11 participants failed to come up with a valid plan. All the 11 participants came up with inexecutable plans. In the additional user study (in Appendix 5.3), for the first group, where the LLM assistance was not provided, out of 49 participants, 10 participants failed to come up with a valid plan and all the 10 participants came up with inexecutable plans. For the second group, where LLM assistance was provided, out of 48 participants, 15 participants failed to come up with a vaild plan out of which, 14 participants came up with an inexecutable plan and 1 participant came up with a non-goal reaching plan.\nA. One-shot prompt with GPT-4 plan" }, { "figure_ref": [], "heading": "[STATEMENT]", "publication_ref": [], "table_ref": [], "text": "As initial conditions I have that, the red block is clear, the blue block is clear, the yellow block is clear, the hand is empty, the blue block is on top of the orange block, the red block is on the table, the orange block is on the table and the yellow block is on the table." }, { "figure_ref": [], "heading": "→ →", "publication_ref": [], "table_ref": [], "text": "My goal is to have that the orange block is on top of the blue block.\nMy plan is as follows:\n[PLAN] unstack the blue block from on top of the orange block put down the blue block pick up the orange block stack the orange block on top of the blue block [PLAN END]\n[STATEMENT] As initial conditions I have that, the red block is clear, the yellow block is clear, the hand is empty, the red block is on top of the blue block, the yellow block is on top of the orange block, the blue block is on the table and the orange block is on the table.\n→ →\nMy goal is to have that the orange block is on top of the red block.\nMy plan is as follows: Zero-shot prompt with GPT-4 plan\n[STATEMENT]\nAs initial conditions I have that, the red block is clear, the blue block is clear, the yellow block is clear, the hand is empty, the blue block is on top of the orange block, the red block is on the table, the orange block is on the table and the yellow block is on the table." }, { "figure_ref": [], "heading": "→ →", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "My goal is to have that the orange block is on top of the blue block.\nWhat is the plan to achieve my goal? Just give the actions in the plan.\n---------GPT-4 response ---------1. Unstack the blue block from on top of the orange block 2. Put down the blue block on the table 3. 
Pick up the orange block The plan correctness is defined in terms of states resulting from executing the actions in the plan. An action is executable in a state when all its preconditions hold in that state. The state resulting from the action execution consists of everything in the previous state with the addition and deletion of add and delete effects of the action. Plan correctness is defined as follows: if the first action in the plan is applicable in the initial state, i.e., its preconditions are all present there; and the second action is applicable in the state resulting from applying the first action to the initial state, this process continues until the state resulting from the application of the last action in the last but one state gives rise to the final state where all the goals are satisfied.\n→ → → → → → → → [STATEMENT]\nAs initial conditions I have that, the red block is clear, the blue block is clear, the yellow block is clear, the hand is empty, the blue block is on top of the orange block, the red block is on the The goal conditions are satisfied in the final state. Hence, the above plan is valid." }, { "figure_ref": [], "heading": "[PLAN END] [STATEMENT]", "publication_ref": [], "table_ref": [], "text": "As initial conditions I have that, the red block is clear, the yellow block is clear, the hand is empty, the red block is on top of the blue block, the yellow block is on top of the orange block, the blue block is on the table and the orange block is on the table\n→ →\nMy goal is to have that the orange block is on top of the red block. My plan is as follows:\n[PLAN]\n---------GPT-4 response ---------1. Current State: the red block is clear, the yellow block is clear, the hand is empty, the red block is on top of the blue block, the yellow block is on top of the orange block, the blue block is on the table, and the orange block is on the table As initial conditions I have that, province object a, province object d, harmony, object a craves object b, object d craves object c, planet object b and planet object c." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "My goal is to have that object c craves object a.\nMy plan is as follows:\n[ The plan correctness is defined in terms of states resulting from executing the actions in the plan. An action is executable in a state when all its preconditions hold in that state. The state resulting from the action execution consists of everything in the previous state with the addition and deletion of add and delete effects of the action. Plan correctness is defined as follows: if the first action in the plan is applicable in the initial state, i.e., its preconditions are all present there; and the second action is applicable in the state resulting from applying the first action to the initial state, this process continues until the state resulting from the application of the last action in the last but one state gives rise to the final state where all the goals are satisfied. To perform 1jpkithdyjmlikck action, the following facts need to be true: aqcjuuehivl8auwt object, 51nbwlachmfartjn object, 3covmuy4yrjthijd.\n→ Once 1jpkithdyjmlikck action is performed the following facts will be true: gk5asm3f7u1fekpj object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once 1jpkithdyjmlikck action is performed the following facts will be false: aqcjuuehivl8auwt object, 51nbwlachmfartjn object, 3covmuy4yrjthijd." 
}, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "To perform 9big8ruzarkkquyu action, the following facts need to be true: gk5asm3f7u1fekpj object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once 9big8ruzarkkquyu action is performed the following facts will be true: aqcjuuehivl8auwt object, 51nbwlachmfartjn object, 3covmuy4yrjthijd." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once 9big8ruzarkkquyu action is performed the following facts will be false: gk5asm3f7u1fekpj object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "To perform 2ijg9q8swj2shjel action, the following needs to be true: aqcjuuehivl8auwt other object, gk5asm3f7u1fekpj object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once 2ijg9q8swj2shjel action is performed the following will be true: 3covmuy4yrjthijd, aqcjuuehivl8auwt object, Object 4DMF1cMTYXGSP94G other object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once 2ijg9q8swj2shjel action is performed the following will be false: aqcjuuehivl8auwt other object, gk5asm3f7u1fekpj object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "To perform xptxjrdkbi3pqsqr action, the following needs to be true: Object 4DMF1cMTYXGSP94G other object, aqcjuuehivl8auwt object, 3covmuy4yrjthijd." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once xptxjrdkbi3pqsqr action is performed the following will be true: gk5asm3f7u1fekpj object, aqcjuuehivl8auwt other object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once xptxjrdkbi3pqsqr action is performed the following will be false:, Object 4DMF1cMTYXGSP94G other object, aqcjuuehivl8auwt object, 3covmuy4yrjthijd.\n→ A.5.6 One-shot prompt with GPT-4 plan (Randomized Disguising)\nOne-shot prompt with GPT-4 plan (Randomized Disguising)\n[STATEMENT] As initial conditions I have that, aqcjuuehivl8auwt object a, aqcjuuehivl8auwt object b, aqcjuuehivl8auwt object d, 3covmuy4yrjthijd, object b 4dmf1cmtyxgsp94g object c, 51nbwlachmfartjn object a, 51nbwlachmfartjn object c and 51nbwlachmfartjn object d." }, { "figure_ref": [], "heading": "→ →", "publication_ref": [], "table_ref": [], "text": "My goal is to have that object c 4dmf1cmtyxgsp94g object b.\nMy plan is as follows:\n[PLAN] xptxjrdkbi3pqsqr object b from object c 9big8ruzarkkquyu object b 1jpkithdyjmlikck object c 2ijg9q8swj2shjel object c from object b [PLAN END] [STATEMENT]\nAs initial conditions I have that, aqcjuuehivl8auwt object a, aqcjuuehivl8auwt object d, 3covmuy4yrjthijd, object a 4dmf1cmtyxgsp94g object b, object d 4dmf1cmtyxgsp94g object c, 51nbwlachmfartjn object b and 51nbwlachmfartjn object c." }, { "figure_ref": [], "heading": "→ →", "publication_ref": [], "table_ref": [], "text": "My goal is to have that object c 4dmf1cmtyxgsp94g object a.\nMy plan is as follows: Zero-shot prompt with GPT-4 plan (Randomized Disguising)\n[PLAN] ---------GPT-4 response ---------xptxjrdkbi3pqsqr\n[STATEMENT] As initial conditions I have that, aqcjuuehivl8auwt object a, aqcjuuehivl8auwt object b, aqcjuuehivl8auwt object d, 3covmuy4yrjthijd, object b 4dmf1cmtyxgsp94g object c, 51nbwlachmfartjn object a, 51nbwlachmfartjn object c and 51nbwlachmfartjn object d. 
" }, { "figure_ref": [], "heading": "Logistics Domain Description", "publication_ref": [], "table_ref": [], "text": "I have to plan logistics to transport packages within cities via trucks and between cities via airplanes. Locations within a city are directly connected (trucks can move between any two such locations), and so are the cities. In each city there is exactly one truck and each city has one location that serves as an airport." }, { "figure_ref": [], "heading": "→ → →", "publication_ref": [], "table_ref": [], "text": "Here are the actions that can be performed:\nLoad a package into a truck. Load a package into an airplane. Unload a package from a truck. Unload a package from an airplane. Drive a truck from one location to another location.\nFly an airplane from one city to another city.\nThe following are the restrictions on the actions: A package can be loaded into a truck only if the package and the truck are in the same location. Once a package is loaded into a truck, the package is not at the location and is in the truck. A package can be loaded into an airplane only if the package and the airplane are in the same location.\n→ Once a package is loaded into an airplane, the package is not at the location and is in the airplane." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "A package can be unloaded from a truck only if the package is in the truck. Once a package is unloaded from a truck, the package is not in the truck and is at the location of the truck." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "A package can be unloaded from an airplane only if the package in the airplane. Once a package is unloaded from an airplane, the package is not in the airplane and is at the location of the airplane." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "A truck can be driven from one location to another if the truck is at the from-location and both from-location and to-location are locations in the same city." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once a truck is driven from one location to another, it is not at the from-location and is at the to-location." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "An airplane can be flown from one city to another if the from-location and the to-location are airports and the airplane is at the from-location." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once an airplane is flown from one city to another the airplane is not at the from-location and is at the to-location.\n→ A.6.2 One-shot prompt with GPT-4 plan\nOne-shot prompt with GPT-4 plan" }, { "figure_ref": [], "heading": "[STATEMENT]", "publication_ref": [], "table_ref": [], "text": "As initial conditions I have that, location_0_0 is an airport, location_1_0 is an airport, airplane_0 is at location_0_0, airplane_1 is at location_0_0, package_0 is at location_1_0, truck_0 is at location_0_0, truck_1 is at location_1_0, location_0_0 is in the city city_0 and location_1_0 is in the city city_1." 
}, { "figure_ref": [], "heading": "→ → →", "publication_ref": [], "table_ref": [], "text": "My goal is to have that package_0 is at location_0_0.\nMy plan is as follows:\n[PLAN] fly airplane_1 from location_0_0 to location_1_0 load package_0 into airplane_1 at location_1_0 fly airplane_1 from location_1_0 to location_0_0 unload package_0 from airplane_1 at location_0_0 [PLAN END]\n[STATEMENT]\nAs initial conditions I have that, location_0_0 is an airport, location_1_0 is an airport, airplane_0 is at location_1_0, airplane_1 is at location_1_0, package_0 is at location_1_0, truck_0 is at location_0_0, truck_1 is at location_1_0, location_0_0 is in the city city_0 and location_1_0 is in the city city_1." }, { "figure_ref": [], "heading": "→ → →", "publication_ref": [], "table_ref": [], "text": "My goal is to have that package_0 is at location_0_0.\nMy plan is as follows:\n[PLAN] ---------GPT-4 response --------load package_0 into airplane_1 at location_1_0 fly airplane_1 from location_1_0 to location_0_0 unload package_0 from airplane_1 at location_0_0 [PLAN END] ---------Extracted plan ---------(load-airplane p0 a1 l1-0) (fly-airplane a1 l1-0 l0-0) (unload-airplane p0 a1 l0-0)\n--------Ground truth plan ---------(load-airplane p0 a1 l1-0) (fly-airplane a1 l1-0 l0-0) (unload-airplane p0 a1 l0-0) ================SUCCESS================ A.6.3 Zero-shot prompt with GPT-4 plan\nZero-shot prompt with GPT-4 plan I have to plan logistics to transport packages within cities via trucks and between cities via airplanes. Locations within a city are directly connected (trucks can move between any two such locations), and so are the cities. In each city there is exactly one truck and each city has one location that serves as an airport." }, { "figure_ref": [], "heading": "→ → →", "publication_ref": [], "table_ref": [], "text": "Here are the actions that can be performed:\nLoad a package into a truck. For example, load package_1 into truck_1 at location_1_1. Load a package into an airplane. For example, load package_1 into airplane_1 at location_1_1. Unload a package from a truck. For example, unload package_1 from truck_1 at location_1_1. Unload a package from an airplane. For example, unload package_1 from airplane_1 at location_1_1. Drive a truck from one location to another location. For example, drive truck_1 from location_1_1 to location_1_2 in city_1." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Fly an airplane from one city to another city. For example, fly airplane_1 from location_1_1 to location_2_1. Here location_1_1 is the airport in city_1 and location_2_1 is the airport in city_2." }, { "figure_ref": [], "heading": "→ →", "publication_ref": [], "table_ref": [], "text": "The following are the restrictions on the actions: A package can be loaded into a truck only if the package and the truck are in the same location. Once a package is loaded into a truck, the package is not at the location and is in the truck. A package can be loaded into an airplane only if the package and the airplane are in the same location.\n→ Once a package is loaded into an airplane, the package is not at the location and is in the airplane." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "A package can be unloaded from a truck only if the package is in the truck. Once a package is unloaded from a truck, the package is not in the truck and is at the location of the truck." 
}, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "A package can be unloaded from an airplane only if the package in the airplane. Once a package is unloaded from an airplane, the package is not in the airplane and is at the location of the airplane." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "A truck can be driven from one location to another if the truck is at the from-location and both from-location and to-location are locations in the same city." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once a truck is driven from one location to another, it is not at the from-location and is at the to-location." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "An airplane can be flown from one city to another if the from-location and the to-location are airports and the airplane is at the from-location." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once an airplane is flown from one city to another the airplane is not at the from-location and is at the to-location." }, { "figure_ref": [], "heading": "→ [STATEMENT]", "publication_ref": [], "table_ref": [], "text": "As initial conditions I have that, location_0_0 is an airport, location_1_0 is an airport, airplane_0 is at location_0_0, airplane_1 is at location_0_0, package_0 is at location_1_0, truck_0 is at location_0_0, truck_1 is at location_1_0, location_0_0 is in the city city_0 and location_1_0 is in the city city_1." }, { "figure_ref": [], "heading": "→ → →", "publication_ref": [], "table_ref": [], "text": "My goal is to have that package_0 is at location_0_0.\nWhat is the plan to achieve my goal? Just give the actions in the plan." }, { "figure_ref": [], "heading": "A.7.2 One-shot prompt with GPT-4 plan", "publication_ref": [], "table_ref": [], "text": "One-shot prompt with GPT-4 plan (:action overcome :parameters (?ob ?underob) :precondition (and (province ?underob) (pain ?ob)) :effect (and (harmony) (province ?ob) (craves ?ob ?underob) (not (province ?underob)) (not (pain ?ob))))\n(:action feast :parameters (?ob ?underob) :precondition (and (craves ?ob ?underob) (province ?ob) (harmony)) :effect (and (pain ?ob) (province ?underob) (not (craves ?ob ?underob)) (not (province ?ob)) (not (harmony)))))\nA.8.2 One-shot prompt with GPT-4 plan (Deceptive Disguising)\nOne-shot prompt with GPT-4 plan (Deceptive Disguising) " }, { "figure_ref": [], "heading": "Logistics Domain Description", "publication_ref": [], "table_ref": [], "text": "Here is a pddl domain, an example problem and it's corresponding plan. Provide the plan for the query problem. Provide only the pddl syntax for the plan. ---------GPT-4 response ---------Based on the provided domain and problem, the plan to achieve the goal state could be:\n1. LOAD-AIRPLANE p0 on a0 at l1-0 2. FLY-AIRPLANE a0 from l1-0 to l0-0 3. UNLOAD-AIRPLANE p0 from a0 at l0-0 Here is the PDDL syntax for the plan:\n(LOAD-AIRPLANE p0 a0 l1-0) (FLY-AIRPLANE a0 l1-0 l0-0) (UNLOAD-AIRPLANE p0 a0 l0-0) ---------Extracted plan ---------(load-airplane p0 a0 l1-0) (fly-airplane a0 l1-0 l0-0) (unload-airplane p0 a0 l0-0) --------Ground truth plan ---------(fly-airplane a1 l0-0 l1-0) (load-airplane p0 a1 l1-0) (fly-airplane a1 l1-0 l0-0) (unload-airplane p0 a1 l0-0) ================FAILURE================ A.10 Backprompting using VAL A.10. 
--------------VAL Feedback-----------------The above plan is invalid.The following action at step 8 has an unmet precondition: overcome object c from object a The unmet precondition is: province object a --------------GPT-4 Response-----------------I apologize for the errors in my previous plans. After reevaluating the problem, I have determined that it is not possible to achieve the goal of having object c crave object a given the initial conditions and action restrictions provided.\n→ → --------------VAL Feedback-----------------The above plan is invalid. This is the unmet goal condition: object c craves object a --------------GPT-4 Response-----------------You are correct, the goal of having object c crave object a has not been met in the previous plans.\nGiven the initial conditions and action restrictions provided, it appears that it is not possible to achieve this goal.\n→ → --------------VAL Feedback-----------------\nThe above plan is invalid. This is the unmet goal condition: object c craves object a --------------GPT-4 Response-----------------You are correct, the goal of having object c crave object a has not been met in the previous plans.\nGiven the initial conditions and action restrictions provided, it appears that it is not possible to achieve this goal." }, { "figure_ref": [], "heading": "→ →", "publication_ref": [], "table_ref": [], "text": "--------------VAL Feedback-----------------The above plan is invalid. This is the unmet goal condition: object c craves object a --------------GPT-4 Response-----------------You are correct, the goal of having object c crave object a has not been met in the previous plans.\nGiven the initial conditions and action restrictions provided, it appears that it is not possible to achieve this goal.\n→ → --------------VAL Feedback-----------------The above plan is invalid. This is the unmet goal condition: object c craves object a --------------GPT-4 Response-----------------You are correct, the goal of having object c crave object a has not been met in the previous plans. Given the initial conditions and action restrictions provided, it appears that it is not possible to achieve this goal." }, { "figure_ref": [], "heading": "→ →", "publication_ref": [], "table_ref": [], "text": "--------------VAL Feedback-----------------The above plan is invalid. This is the unmet goal condition: object c craves object a --------------GPT-4 Response-----------------You are correct, the goal of having object c crave object a has not been met in the previous plans. Given the initial conditions and action restrictions provided, it appears that it is not possible to achieve this goal." }, { "figure_ref": [], "heading": "→ →", "publication_ref": [], "table_ref": [], "text": "--------------VAL Feedback-----------------The above plan is invalid. This is the unmet goal condition: object c craves object a --------------GPT-4 Response-----------------You are correct, the goal of having object c crave object a has not been met in the previous plans. Given the initial conditions and action restrictions provided, it appears that it is not possible to achieve this goal." }, { "figure_ref": [], "heading": "→ →", "publication_ref": [], "table_ref": [], "text": "--------------VAL Feedback-----------------The above plan is invalid. This is the unmet goal condition: object c craves object a --------------GPT-4 Response-----------------You are correct, the goal of having object c crave object a has not been met in the previous plans. 
Given the initial conditions and action restrictions provided, it appears that it is not possible to achieve this goal.\n→ → --------------VAL Feedback-----------------The above plan is invalid. This is the unmet goal condition: object c craves object a --------------GPT-4 Response-----------------You are correct, the goal of having object c crave object a has not been met in the previous plans. Given the initial conditions and action restrictions provided, it appears that it is not possible to achieve this goal. Back-prompt with GPT-4\nI have to plan logistics to transport packages within cities via trucks and between cities via airplanes. Locations within a city are directly connected (trucks can move between any two such locations), and so are the cities. In each city there is exactly one truck and each city has one location that serves as an airport." }, { "figure_ref": [], "heading": "→ → →", "publication_ref": [], "table_ref": [], "text": "Here are the actions that can be performed:\nLoad a package into a truck. For example, load package_1 into truck_1 at location_1_1. Load a package into an airplane. For example, load package_1 into airplane_1 at location_1_1. Unload a package from a truck. For example, unload package_1 from truck_1 at location_1_1. Unload a package from an airplane. For example, unload package_1 from airplane_1 at location_1_1. Drive a truck from one location to another location. For example, drive truck_1 from location_1_1 to location_1_2 in city_1." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Fly an airplane from one city to another city. For example, fly airplane_1 from location_1_1 to location_2_1. Here location_1_1 is the airport in city_1 and location_2_1 is the airport in city_2." }, { "figure_ref": [], "heading": "→ →", "publication_ref": [], "table_ref": [], "text": "The following are the restrictions on the actions: A package can be loaded into a truck only if the package and the truck are in the same location. Once a package is loaded into a truck, the package is not at the location and is in the truck. A package can be loaded into an airplane only if the package and the airplane are in the same location.\n→ Once a package is loaded into an airplane, the package is not at the location and is in the airplane." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "A package can be unloaded from a truck only if the package is in the truck. Once a package is unloaded from a truck, the package is not in the truck and is at the location of the truck." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "A package can be unloaded from an airplane only if the package in the airplane. Once a package is unloaded from an airplane, the package is not in the airplane and is at the location of the airplane." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "A truck can be driven from one location to another if the truck is at the from-location and both from-location and to-location are locations in the same city.\n→ Once a truck is driven from one location to another, it is not at the from-location and is at the to-location." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "An airplane can be flown from one city to another if the from-location and the to-location are airports and the airplane is at the from-location." 
}, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once an airplane is flown from one city to another the airplane is not at the from-location and is at the to-location." }, { "figure_ref": [], "heading": "→ [STATEMENT]", "publication_ref": [], "table_ref": [], "text": "As initial conditions I have that, location_0_0 is an airport, location_1_0 is an airport, airplane_0 is at location_1_0, package_0 is at location_1_0, package_1 is at location_0_0, truck_0 is at location_0_0, truck_1 is at location_1_0, location_0_0 is in the city city_0 and location_1_0 is in the city city_1." }, { "figure_ref": [], "heading": "→ → →", "publication_ref": [], "table_ref": [], "text": "My goal is to have that package_0 is at location_1_0 and package_1 is at location_1_0.\nMy plan is as follows:\n[PLAN] fly airplane_0 from location_1_0 to location_0_0 load package_1 into airplane_0 at location_0_0 fly airplane_0 from location_0_0 to location_1_0 unload package_1 from airplane_0 at location_1_0 [PLAN END]\n[STATEMENT] As initial conditions I have that, location_0_0 is an airport, location_1_0 is an airport, airplane_0 is at location_1_0, package_0 is at location_1_0, package_1 is at location_1_0, truck_0 is at location_0_0, truck_1 is at location_1_0, location_0_0 is in the city city_0 and location_1_0 is in the city city_1." }, { "figure_ref": [], "heading": "→ → →", "publication_ref": [], "table_ref": [], "text": "My goal is to have that package_0 is at location_0_0 and package_1 is at location_0_0." }, { "figure_ref": [], "heading": "My plan is as follows:", "publication_ref": [], "table_ref": [], "text": "[PLAN] --------------GPT-4 Response----------------fly airplane_0 from location_1_0 to location_0_0 load package_0 into airplane_0 at location_0_0 load package_1 into airplane_0 at location_0_0 fly airplane_0 from location_0_0 to location_1_0 unload package_0 from airplane_0 at location_1_0 unload package_1 from airplane_0 at location_1_0 [PLAN END] --------------VAL Feedback-----------------The above plan is invalid.The following action at step 2 has an unmet precondition: be verified, which could be achieved by using either an automated verifier as in heuristic mode or a human verifier in the loop.\nA subtler issue is the additional perpetuation of bias. LLMs are trained on large amounts of web data and, despite fine-tuning and training safety efforts, can take biased or implicitly harmful courses of action. For example, a wedding plan suggested by an LLM in autonomous mode might by default adhere to certain majority cultural norms. However, in our setting where we incorporate the domain model as part of the prompt, the tendency of LLMs to generate the most common or default plans is reduced if a carefully scrutinized domain model is provided. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research was supported by ONR grants N00014-18-1-2442, N00014-18-1-2840, N00014-19-1-2119 and N00014-23-1-2409, AFOSR grant FA9550-18-1-0067, DARPA SAIL-ON grant W911NF-19-2-0006, and a JP Morgan AI Faculty Research Grant to Kambhampati. Sreedharan was supported in part by NSF grant 2303019." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "Contents" }, { "figure_ref": [], "heading": "A.1 Classical Planning Problem Formulation", "publication_ref": [], "table_ref": [], "text": "Classical Planning Problems can be mathematically represented by using the tuple P = ⟨D, I, G⟩. 
D is referred to as the problem domain, I is the initial state and G is the goal specification. The possible truth assignment over the predicates defines the state space for the planning problem. The domain is again defined by the tuple D = ⟨F, O⟩. F corresponds to the set of fluents, i.e., the state variable used to define the state space and each fluent corresponds to a predicate with some arity, and A correspond to the set of actions that can be performed as part of the planning problem. Each action a i [V] ∈ A (where a i is the operator label and V is the variable used by the operator and each variable could be mapped to an object), can be further defined by two components, the precondition prec[V] which describes when an action can be executed and the effects ef f [V] which defines what happens when an action is executed. We will assume that prec[V] consists of a set of predicates defined over the variables V. An action is assumed to be executable only if its preconditions are met, i.e., the predicates in the precondition hold in the given state. The effects ef f [V] is further defined by the tuple ⟨add[V], del[V]⟩, where add[V] or add effects is the set of predicates that will be set true by the action and del[V] or delete effects is the set of predicates that will be set false by the action. An action is said to be grounded if we replace each of the variables with an object, else it is referred to as a lifted domain model (we use a similar convention to differentiate between lifted and grounded\nThe goal conditions are satisfied in the final state. Hence, the above plan is valid.\n[PLAN END] ---------Extracted plan ---------(unstack red blue) (put-down red) (unstack yellow orange) (put-down yellow) (pick-up orange) (stack orange red) --------Ground truth plan ---------(unstack yellow orange) (put-down yellow) (pick-up orange) (stack orange red) ================SUCCESS================ To perform Attack action, the following facts need to be true: Province object, Planet object, Harmony." }, { "figure_ref": [], "heading": "A.5 Mystery Blocksworld Prompts in Natural Language", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once Attack action is performed the following facts will be true: Pain object. Once Attack action is performed the following facts will be false: Province object, Planet object, Harmony." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "To perform Succumb action, the following facts need to be true: Pain object. Once Succumb action is performed the following facts will be true: Province object, Planet object, Harmony." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once Succumb action is performed the following facts will be false: Pain object.\nTo perform Overcome action, the following needs to be true: Province other object, Pain object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once Overcome action is performed the following will be true: Harmony, Province object, Object Craves other object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once Overcome action is performed the following will be false: Province other object, Pain object." 
}, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "To perform Feast action, the following needs to be true: Object Craves other object, Province object, Harmony." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once Feast action is performed the following will be true: Pain object, Province other object. Once Feast action is performed the following will be false:, Object Craves other object, Province object, Harmony. A.7.3 Zero-shot prompt with GPT-4 plan" }, { "figure_ref": [], "heading": "Zero-shot prompt with GPT-4 plan", "publication_ref": [], "table_ref": [], "text": "Here is a pddl domain, a planning problem. Provide the plan for the query problem. Provide only the pddl syntax for the plan where each action is represented as (ACTION_NAME OBJECTS). ) )\n---------GPT-4 response ---------Here is the plan for the query problem: (and (AIRPLANE ?airplane) (AIRPORT ?loc-from) (AIRPORT ?loc-to) (at ?airplane ?loc-from)) :effect (and (not (at ?airplane ?loc-from)) (at ?airplane ?loc-to))) )\nA.9.2 One-shot prompt with GPT-4 plan One-shot prompt with GPT-4 plan\n(and (at p0 l0-0) ) ) )\n[PLAN] (fly-airplane a1 l0-0 l1-0) (load-airplane p0 a1 l1-0) (fly-airplane a1 l1-0 l0-0) (unload-airplane p0 a1 l0-0)\n(and (at p0 l0-0) ) ) )\n---------GPT-4 response ---------Based on the provided domain, problem, and example plan, here is the PDDL syntax for the plan corresponding to the query problem:\n→ (fly-airplane a0 l1-0 l0-0) (load-airplane p0 a0 l1-0) (fly-airplane a0 l0-0 l1-0) (unload-airplane p0 a0 l1-0) ---------Extracted plan ---------(fly-airplane a0 l1-0 l0-0) (load-airplane p0 a0 l1-0) (fly-airplane a0 l0-0 l1-0) (unload-airplane p0 a0 l1-0) --------Ground truth plan ---------(load-airplane p0 a1 l1-0) (fly-airplane a1 l1-0 l0-0) (unload-airplane p0 a1 l0-0) ================FAILURE================ A.9.3 Zero-shot prompt with GPT-4 plan" }, { "figure_ref": [], "heading": "Zero-shot prompt with GPT-4 plan", "publication_ref": [], "table_ref": [], "text": "Here is a pddl domain, a planning problem. Provide the plan for the query problem. Provide only the pddl syntax for the plan where each action is represented as (ACTION_NAME OBJECTS). \n(and (at p0 l0-0) ) → My plan is as follows:\n[PLAN] --------------GPT-4 Response---------------- --------------VAL Feedback-----------------The above plan is invalid.The following action at step 1 has an unmet precondition: unstack the red block from on top of the orange block The unmet precondition is: the red block is clear - -------------GPT-4 Response-----------------Apologies for the mistake. Here's the corrected plan:\n[PLAN] unstack the yellow block from on top of the red block put down the yellow block unstack the red block from on top of the orange block put down the red block unstack the orange block from on top of the blue block stack the orange block on top of the red block pick up the red block stack the red block on top of the yellow block pick up the yellow block stack the yellow block on top of the blue block [PLAN END] - -------------VAL Feedback-----------------The above plan is invalid.The following action at step 7 has an unmet precondition: pick up the red block The unmet precondition is: the red block is clear --------------GPT-4 Response-----------------Apologies for the mistake again. 
Here's the corrected plan:\n[PLAN] unstack the yellow block from on top of the red block put down the yellow block unstack the red block from on top of the orange block put down the red block unstack the orange block from on top of the blue block stack the orange block on top of the red block unstack the red block from on top of the orange block stack the red block on top of the yellow block pick up the yellow block stack the yellow block on top of the blue block [PLAN END] - -------------VAL Feedback-----------------The above plan is invalid. The following action at step 7 has unmet preconditions: unstack the red block from on top of the orange block The unmet preconditions are: the red block is clear and the red block is on top of the orange block - -------------GPT-4 Response-----------------I apologize for the confusion. Here's the corrected plan:\n[PLAN] unstack the yellow block from on top of the red block put down the yellow block unstack the red block from on top of the orange block put down the red block unstack the orange block from on top of the blue block put down the orange block pick up the red block stack the red block on top of the yellow block pick up the yellow block stack the yellow block on top of the blue block [PLAN END] - -------------VAL Feedback-----------------The above plan is invalid.The following action at step 9 has an unmet precondition: pick up the yellow block The unmet precondition is: the yellow block is clear --------------GPT-4 Response-----------------I apologize for the mistakes in my previous plans. Here's the corrected plan:\n[PLAN] unstack the yellow block from on top of the red block put down the yellow block unstack the red block from on top of the orange block put down the red block unstack the orange block from on top of the blue block put down the orange block pick up the blue block stack the blue block on top of the yellow block pick up the yellow block stack the yellow block on top of the red block pick up the red block stack the red block on top of the orange block [PLAN END] - -------------VAL Feedback-----------------The above plan is invalid.The following action at step 9 has an unmet precondition: pick up the yellow block The unmet precondition is: the yellow block is clear --------------GPT-4 Response-----------------I apologize for the repeated mistakes. Here's the corrected plan:\n[PLAN] unstack the yellow block from on top of the red block put down the yellow block unstack the red block from on top of the orange block put down the red block unstack the orange block from on top of the blue block put down the orange block pick up the red block stack the red block on top of the yellow block pick up the yellow block stack the yellow block on top of the blue block pick up the blue block stack the blue block on top of the orange block [PLAN END] --------------VAL Feedback-----------------The above plan is invalid.The following action at step 9 has an unmet precondition: pick up the yellow block The unmet precondition is: the yellow block is clear --------------GPT-4 Response-----------------I apologize for the confusion. 
Here's the corrected plan:\n[PLAN] unstack the yellow block from on top of the red block put down the yellow block unstack the red block from on top of the orange block put down the red block unstack the orange block from on top of the blue block put down the orange block pick up the red block stack the red block on top of the orange block pick up the yellow block stack the yellow block on top of the red block [PLAN END] - -------------VAL Feedback-----------------The above plan is invalid. These are the unmet goal conditions: the red block is on top of the yellow block and the yellow block is on top of the blue block --------------GPT-4 Response-----------------I apologize for the confusion in my previous plans. Here's the corrected plan to achieve the goal: To perform Attack action, the following facts need to be true: Province object, Planet object, Harmony." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once Attack action is performed the following facts will be true: Pain object. Once Attack action is performed the following facts will be false: Province object, Planet object, Harmony." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "To perform Succumb action, the following facts need to be true: Pain object. Once Succumb action is performed the following facts will be true: Province object, Planet object, Harmony." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once Succumb action is performed the following facts will be false: Pain object.\nTo perform Overcome action, the following needs to be true: Province other object, Pain object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once Overcome action is performed the following will be true: Harmony, Pain object, Object Craves other object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once Overcome action is performed the following will be false: Province other object, Pain object." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "To perform Feast action, the following needs to be true: Object Craves other object, Province object, Harmony." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "Once Feast action is performed the following will be true: Pain object, Province other object. Once Feast action is performed the following will be false:, Object Craves other object, Province object, Harmony. [STATEMENT] As initial conditions I have that, object a craves object b, object d craves object c, harmony, planet object b, planet object c, province object a and province object d." }, { "figure_ref": [], "heading": "→", "publication_ref": [], "table_ref": [], "text": "My goal is to have that object c craves object a.\nMy plan is as follows:\n[PLAN] --------------GPT-4 Response----------------feast object a from object b succumb object a attack object d overcome object d from object c feast object c from object a [PLAN END] --------------VAL Feedback-----------------load package_0 into airplane_0 at location_0_0\nThe unmet precondition is: package_0 is at location_0_0 --------------GPT-4 Response-----------------Apologies for the mistake. 
Here's the corrected plan:\n[PLAN] fly airplane_0 from location_1_0 to location_0_0 load package_0 into airplane_0 at location_1_0 load package_1 into airplane_0 at location_1_0 fly airplane_0 from location_0_0 to location_1_0 unload package_0 from airplane_0 at location_0_0 unload package_1 from airplane_0 at location_0_0\nThe above plan is invalid.The following action at step 2 has an unmet precondition: load package_0 into airplane_0 at location_1_0\nThe unmet precondition is: airplane_0 is at location_1_0 --------------GPT-4 Response-----------------I apologize for the mistake again. Here's the corrected plan:\n[PLAN] fly airplane_0 from location_1_0 to location_0_0 load package_0 into airplane_0 at location_0_0 load package_1 into airplane_0 at location_0_0 fly airplane_0 from location_0_0 to location_1_0 unload package_0 from airplane_0 at location_1_0 unload package_1 from airplane_0 at location_1_0\nThe above plan is invalid.The following action at step 2 has an unmet precondition: load package_0 into airplane_0 at location_0_0\nThe unmet precondition is: package_0 is at location_0_0 --------------GPT-4 Response-----------------I apologize for the confusion. Here's the corrected plan:\n[PLAN] fly airplane_0 from location_1_0 to location_0_0 load package_1 into airplane_0 at location_0_0 fly airplane_0 from location_0_0 to location_1_0 unload package_1 from airplane_0 at location_1_0 [PLAN END] --------------VAL Feedback-----------------The above plan is invalid.The following action at step 2 has an unmet precondition: load package_1 into airplane_0 at location_0_0\nThe unmet precondition is: package_1 is at location_0_0 --------------GPT-4 Response-----------------I apologize for the repeated mistakes. Here's the corrected plan:\n[PLAN] fly airplane_0 from location_1_0 to location_0_0 load package_1 into airplane_0 at location_1_0 fly airplane_0 from location_0_0 to location_1_0 unload package_1 from airplane_0 at location_0_0\nThe above plan is invalid.The following action at step 2 has an unmet precondition: load package_1 into airplane_0 at location_1_0\nThe unmet precondition is: airplane_0 is at location_1_0 --------------GPT-4 Response-----------------I apologize once again for the mistakes. Here's the corrected plan:\n[PLAN] load package_0 into airplane_0 at location_1_0 load package_1 into airplane_0 at location_1_0 fly airplane_0 from location_1_0 to location_0_0 unload package_0 from airplane_0 at location_0_0 unload package_1 from airplane_0 at location_0_0 [PLAN END] ================SUCCESS================\nA.11 Additional experiment details A.11.1 LLM experiment details and the compute cost\nAll the experiments were run using the OpenAI API with temperature 0, making the LLMs deterministic, and all other hyperparameters to be the default ones given by the API. For GPT-4, the version we used had an 8k context window and was used between the months of March and May. The pricing of the 8k context window GPT-4 model is $0.03 for 1K tokens for the prompt and $0.06 for 1K tokens for the completion. The total cost of compute for the autonomous mode experiments on GPT-4 was $231 and the total cost for the back-prompting experiments was $149." }, { "figure_ref": [], "heading": "A.11.2 LPG experiment details", "publication_ref": [], "table_ref": [], "text": "As mentioned above, we utilized LPG in the heuristic mode to find sound plans. 
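Concretely, each instance is run three ways: with an empty seed plan, with the LLM plan as the seed, and with a random plan of the same length as the LLM plan. The sketch below illustrates that setup; the `run_lpg` wrapper and the action templates are placeholders rather than LPG's real command-line interface, and the exact LPG configuration we used is spelled out in the next paragraph.

```python
import random

# Illustrative Blocksworld action templates; real seed plans are written over
# the objects and operators of the specific PDDL instance being solved.
ACTION_TEMPLATES = ["(pick-up {0})", "(put-down {0})", "(stack {0} {1})", "(unstack {0} {1})"]
BLOCKS = ["red", "blue", "orange", "yellow"]


def random_seed_plan(length):
    """Sample a syntactically well-formed plan with `length` actions.

    This stands in for the 'random plan of the same length as the LLM plan'
    baseline; the sampled actions need not be applicable, since LPG only
    treats the seed as a starting point for local-search repair.
    """
    plan = []
    for _ in range(length):
        template = random.choice(ACTION_TEMPLATES)
        args = random.sample(BLOCKS, template.count("{"))
        plan.append(template.format(*args))
    return plan


def run_lpg(domain_file, problem_file, seed_plan=None):
    """Placeholder wrapper around the LPG 1.2 executable (flags omitted on purpose).

    seed_plan=None      -> empty-seed baseline
    seed_plan=[actions] -> LLM plan or random plan, written to a file and
                           handed to LPG as the input plan to repair.
    """
    raise NotImplementedError("invoke the actual LPG binary here")


llm_plan = ["(unstack blue orange)", "(put-down blue)", "(pick-up orange)", "(stack orange blue)"]
seeds = {"empty": None, "llm": llm_plan, "random": random_seed_plan(len(llm_plan))}
```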
We specifically use LPG 1.2 implementation without a best first search fallback (so that plans are only found using the local search method) and allow for only one search restart. We use the default heuristic evaluation function and maximum number of search steps (500). If the search is restarted, an additional 50 steps can be used (bringing the maximum number on the second pass to 550). When working with the empty plan baseline, we simply do not provide an input plan. When assessing search on LLM plans, we provide the LLM plan as the input plan. For random plans, we provide a random plan of the same length as the LLM plan as the input plan.\nA.12 User study details\nWe ran the user studies on an online platform Prolific and paid the participants a wage of $8.12/hour for the human baseline study (described in Section 4) and $10.29/hour for the LLM+human user study (described in Section 5.3).\nA.12.1 Instructions provided to the participants Consent for Study: The expected time of participation is between 25-35 minutes. You have the right not to answer any question, and to stop participation at any time. On successful completion, you will be eligible to receive $5-8 for your participation in this study. We will need to record all the responses provided by the participants during the study. Your consent to participate in this study is completely voluntary. To protect your privacy, responses from participants will never be used individually while compiling or presenting results of the study. The results of this study may be used in reports, presentations, or publications only in an aggregate form. Please enter your prolific id and click continue with the study if you agree to take part in this study.\nStudy details for participants receiving LLM assistance: In this study, you will be coming up with a plan that achieves certain goal conditions given some initial conditions.\n• A plan is a sequence of actions that achieve certain goals.\n• A domain consists of the actions that can be done and the restrictions on the actions.\n• A problem in the specified domain will consist of the initial conditions and the goal conditions for which a plan is a solution.\nYou will be dealing with the blocksworld domain which consists of playing with a set of blocks where you need to arrange the blocks into stacks. You will have to come up with a plan for one blocksworld problem. You will have an AI agent that will help you in coming up with plans. This AI agent is not perfect and can make mistakes. You get a base bonus of 50 cents.\n• If you come up with a successful plan your bonus compensation increases by $1.\n• If your plan is unsuccessful, your bonus compensation decreases by 50 cents.\n• Random plan submissions will be rejected and the bonus compensation would not be provided for such submissions.\nWe recommend you to have a pen and paper to aid you in visualizing the domain whenever required. 
We will first look at how the blocksworld domain works and what actions can you do.\nStudy details for participants not receiving LLM assistance: In this study, you will be coming up with a plan that achieves certain goal conditions given some initial conditions.\n• A plan is a sequence of actions that achieve certain goals.\n• A domain consists of the actions that can be done and the restrictions on the actions.\n• A problem in the specified domain will consist of the initial conditions and the goal conditions for which a plan is a solution.\nYou will be dealing with the blocksworld domain which consists of playing with a set of blocks where you need to arrange the blocks into stacks. You will have to come up with a plan for one blocksworld problem. You get a base bonus of 50 cents.\n• If you come up with a successful plan your bonus compensation increases by $1.\n• If your plan is unsuccessful, your bonus compensation decreases by 50 cents.\n• Random plan submissions will be rejected and the bonus compensation would not be provided for such submissions.\nWe recommend you to have a pen and paper to aid you in visualizing the domain whenever required. We will first look at how the blocksworld domain works and what actions can you do.\nStudy details for participants in the human baseline study: In this study, you will be coming up with a plan that achieves certain goal conditions given some initial conditions.\n• A plan is a sequence of actions that achieve certain goals.\n• A domain consists of the actions that can be done and the restrictions on the actions.\n• A problem in the specified domain will consist of the initial conditions and the goal conditions for which a plan is a solution.\nYou will be dealing with the blocksworld domain which consists of playing with a set of blocks where you need to arrange the blocks into stacks. You will have to come up with a plan for one blocksworld problem. You get a base bonus of 50 cents.\n• If you come up with a successful plan your bonus compensation increases by 50 cents.\n• If your plan is unsuccessful, your bonus compensation decreases by 50 cents.\n• Random plan submissions will be rejected and the bonus compensation would not be provided for such submissions.\nWe recommend you to have a pen and paper to aid you in visualizing the domain whenever required. We will first look at how the blocksworld domain works and what actions can you do." }, { "figure_ref": [], "heading": "A.12.2 Interface of the user study", "publication_ref": [ "b30", "b20" ], "table_ref": [], "text": "We provide the interface images at the various stages of the user studies.\nA.13 Broader Impact on using LLMs for planning\nOur work relies on the use of large language models trained on large amounts of web data produced by the general public. There is significant literature on the social harms-such as the perpetuation of biases-caused by the text generated by LLMs as they are trained on unwashed web data [30,20].\nOur specific focus here is looking at additional potential harms that can be caused in the context of using LLMs for planning.\nAn obvious first order concern with planning is safety: LLMs can easily produce factually incorrect information which might affect the execution of generated plans in terms of correctness and safety considerations. In the autonomous mode, LLM-generated plans may simply fail, or worse, they could have detrimental side effects, such as cases where the generated plan might compromise safety by ignoring a precondition in place. 
Further, as shown in our results, there is no guarantee that an LLM-produced plan will achieve a goal. To mitigate these effects, plans produced by LLMs should be checked by sound external verifiers (or by a human in the loop) before they are executed, rather than being acted on directly.
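The back-prompting experiments in this appendix follow exactly that pattern: generate a plan, validate it with VAL, and return the unmet preconditions or goal conditions to the LLM as a critique. A minimal sketch of the loop is given below; `query_llm` and `validate_with_val` are placeholder helpers standing in for the deterministic (temperature 0) API call and a wrapper around the VAL binary, not real library functions, and the round budget shown is illustrative.

```python
def query_llm(prompt):
    """Placeholder for a deterministic chat-completion call (temperature 0)."""
    raise NotImplementedError


def validate_with_val(domain_file, problem_file, plan_text):
    """Placeholder wrapper around the VAL plan validator.

    Returns (is_valid, feedback), where feedback names the first failing
    action and its unmet preconditions, or the unmet goal conditions.
    """
    raise NotImplementedError


def back_prompt_loop(domain_file, problem_file, initial_prompt, max_rounds=10):
    """Generate -> validate -> re-prompt until VAL accepts the plan."""
    prompt = initial_prompt
    for _ in range(max_rounds):
        plan_text = query_llm(prompt)
        is_valid, feedback = validate_with_val(domain_file, problem_file, plan_text)
        if is_valid:
            return plan_text  # SUCCESS
        # Append the validator's critique, mirroring the "VAL Feedback" turns above.
        prompt += (
            "\n" + plan_text + "\n"
            "The above plan is invalid. " + feedback + "\n"
            "Please provide a corrected plan.\n"
        )
    return None  # FAILURE: no valid plan within the round budget
```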
2023-11-06
[ { "authors": "Clever Hans", "journal": "", "ref_id": "b0", "title": "", "year": "" }, { "authors": "Anthony Michael Ahn; Noah Brohan; Yevgen Brown; Omar Chebotar; Byron Cortes; Chelsea David; Keerthana Finn; Karol Gopalakrishnan; Alex Hausman; Herzog", "journal": "", "ref_id": "b1", "title": "Do as i can, not as i say: Grounding language in robotic affordances", "year": "2022" }, { "authors": "Benjamin Tom B Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Askell", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b3", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Mary L Cummings", "journal": "Routledge", "ref_id": "b4", "title": "Automation bias in intelligent time critical decision support systems", "year": "2017" }, { "authors": "Alfonso Gerevini; Ivan Serina", "journal": "AIPS", "ref_id": "b5", "title": "Lpg: A planner based on local search for planning graphs with action costs", "year": "2002" }, { "authors": "Mor Geva; Daniel Khashabi; Elad Segal; Tushar Khot; Dan Roth; Jonathan Berant", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b6", "title": "Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies", "year": "2021" }, { "authors": "Malik Ghallab; Dana S Nau; Paolo Traverso", "journal": "Cambridge University Press", "ref_id": "b7", "title": "Automated Planning and Acting", "year": "2016" }, { "authors": "Kristian J Hammond", "journal": "Science", "ref_id": "b8", "title": "CHEF: A model of case-based planning", "year": "1986" }, { "authors": "Morgan Kaufmann", "journal": "", "ref_id": "b9", "title": "", "year": "1986" }, { "authors": "G Sandra; Lowell E Hart; Staveland", "journal": "Advances in psychology", "ref_id": "b10", "title": "Development of nasa-tlx (task load index): Results of empirical and theoretical research", "year": "1988" }, { "authors": "Richard Howey; Derek Long; Maria Fox", "journal": "IEEE", "ref_id": "b11", "title": "VAL: Automatic plan validation, continuous effects and mixed initiative planning using PDDL", "year": "2004" }, { "authors": "Wenlong Huang; Pieter Abbeel; Deepak Pathak; Igor Mordatch", "journal": "PMLR", "ref_id": "b12", "title": "Language models as zero-shot planners: Extracting actionable knowledge for embodied agents", "year": "2022" }, { "authors": "Wenlong Huang; Fei Xia; Ted Xiao; Harris Chan; Jacky Liang; Pete Florence; Andy Zeng; Jonathan Tompson; Igor Mordatch; Yevgen Chebotar", "journal": "", "ref_id": "b13", "title": "Inner monologue: Embodied reasoning through planning with language models", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b14", "title": "IPC. 
International planning competition", "year": "1998" }, { "authors": "Liwei Jiang; Jena D Hwang; Chandrasekhar Bhagavatula; Le Ronan; Maxwell Bras; Jon Forbes; Jenny Borchardt; Oren Liang; Maarten Etzioni; Yejin Sap; Choi", "journal": "", "ref_id": "b15", "title": "Delphi: Towards Machine Ethics and Norms", "year": "2021" }, { "authors": "Subbarao Kambhampati", "journal": "", "ref_id": "b16", "title": "AI as (an Ersatz) Natural Science?", "year": "2022-06" }, { "authors": "Subbarao Kambhampati; James A Hendler", "journal": "Artif. Intell", "ref_id": "b17", "title": "A validation-structure-based theory of plan modification and reuse", "year": "1992" }, { "authors": "Subbarao Kambhampati; Karthik Valmeekam; Matthew Marquez; Lin Guan", "journal": "", "ref_id": "b18", "title": "On the role of large language models in planning", "year": "2023-07" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b19", "title": "Large Language Models are Zero-Shot Reasoners", "year": "2022" }, { "authors": "Paul Pu Liang; Chiyu Wu; Louis-Philippe Morency; Ruslan Salakhutdinov", "journal": "PMLR", "ref_id": "b20", "title": "Towards understanding and mitigating social biases in language models", "year": "2021" }, { "authors": "Bo Liu; Yuqian Jiang; Xiaohan Zhang; Qiang Liu; Shiqi Zhang; Joydeep Biswas; Peter Stone", "journal": "", "ref_id": "b21", "title": "Llm+ p: Empowering large language models with optimal planning proficiency", "year": "2023" }, { "authors": "Drew Mcdermott; Malik Ghallab; Adele E Howe; Craig A Knoblock; Ashwin Ram; Manuela M Veloso; Daniel S Weld; David E Wilkins", "journal": "", "ref_id": "b22", "title": "Pddl-the planning domain definition language", "year": "1998" }, { "authors": "Anh Tuan; Minh Nguyen; Alfonso Emilio Do; Ivan Gerevini; Biplav Serina; Subbarao Srivastava; Kambhampati", "journal": "Artificial Intelligence", "ref_id": "b23", "title": "Generating diverse plans to handle unknown and partially known user preferences", "year": "2012" }, { "authors": " Openai", "journal": "", "ref_id": "b24", "title": "Introducing chatgpt by openai", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b25", "title": "", "year": "2023" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "", "ref_id": "b26", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever", "journal": "", "ref_id": "b27", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "Vanya Shreyas Sundara Raman; Eric Cohen; Ifrah Rosen; David Idrees; Stefanie Paulius; Tellex", "journal": "", "ref_id": "b28", "title": "Planning with large language models via corrective re-prompting", "year": "2022" }, { "authors": "Keisuke Sakaguchi; Le Ronan; Chandra Bras; Yejin Bhagavatula; Choi", "journal": "", "ref_id": "b29", "title": "Winogrande: An adversarial winograd schema challenge at scale", "year": "2020" }, { "authors": "Patrick Schramowski; Cigdem Turan; Nico Andersen; Constantin A Rothkopf; Kristian Kersting", "journal": "Nature Machine Intelligence", "ref_id": "b30", "title": "Large pre-trained language models contain human-like biases of what is right and wrong to do", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b31", "title": "BigScience Large 
Open-science Open-access Multilingual Language Model", "year": "2022" }, { "authors": "Tom Silver; Varun Hariprasad; Reece S Shuttleworth; Nishanth Kumar; Tomás Lozano-Pérez; Leslie Pack; Kaelbling ", "journal": "", "ref_id": "b32", "title": "PDDL planning with pretrained large language models", "year": "2022" }, { "authors": "Aarohi Srivastava; Abhinav Rastogi; Abhishek Rao; Abu Awal; Md Shoeb; Abubakar Abid; Adam Fisch; Adam Adam R Brown; Aditya Santoro; Adrià Gupta; Garriga-Alonso", "journal": "", "ref_id": "b33", "title": "Beyond the imitation game: Quantifying and extrapolating the capabilities of language models", "year": "2022" }, { "authors": "Kaya Stechly; Matthew Marquez; Subbarao Kambhampati", "journal": "", "ref_id": "b34", "title": "Gpt-4 doesn't know it's wrong: An analysis of iterative prompting for reasoning problems", "year": "2023" }, { "authors": "Alon Talmor; Jonathan Herzig; Nicholas Lourie; Jonathan Berant", "journal": "", "ref_id": "b35", "title": "Commonsenseqa: A question answering challenge targeting commonsense knowledge", "year": "2018" }, { "authors": "Karthik Valmeekam; Matthew Marquez; Subbarao Kambhampati", "journal": "", "ref_id": "b36", "title": "Can large language models really improve by self-critiquing their own plans?", "year": "2023" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b37", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Yaqi Xie; Chen Yu; Tongyao Zhu; Jinbin Bai; Ze Gong; Harold Soh", "journal": "", "ref_id": "b38", "title": "Translating natural language to planning goals with large-language models", "year": "2023" }, { "authors": "Shunyu Yao; Jeffrey Zhao; Dian Yu; Nan Du; Izhak Shafran; Yuan Karthik R Narasimhan; Cao", "journal": "", "ref_id": "b39", "title": "React: Synergizing reasoning and acting in language models", "year": "2023" }, { "authors": "Honghua Zhang; Liunian Harold Li; Tao Meng; Kai-Wei Chang; Guy Van Den Broeck", "journal": "", "ref_id": "b40", "title": "On the paradox of learning to reason from data", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b41", "title": "Drive truck_1 from location_1_0 to location_0", "year": "" }, { "authors": "", "journal": "", "ref_id": "b42", "title": "Unload package_0 from truck_1 at location_0", "year": "" }, { "authors": "", "journal": "", "ref_id": "b43", "title": "Load package_0 into airplane_1 at location_0", "year": "" }, { "authors": "", "journal": "", "ref_id": "b44", "title": "Fly airplane_1 from location_0", "year": "" }, { "authors": "", "journal": "", "ref_id": "b45", "title": "Unload package_0 from airplane_1 at location_1", "year": "" }, { "authors": "", "journal": "", "ref_id": "b46", "title": "Load package_0 into truck_0 at location_1", "year": "" }, { "authors": "", "journal": "", "ref_id": "b47", "title": "Fly airplane_0 from location_0", "year": "" }, { "authors": "", "journal": "", "ref_id": "b48", "title": "Drive truck_0 from location_1_0", "year": "" }, { "authors": "", "journal": "", "ref_id": "b49", "title": "load-truck p0 t1 l1-0) (drive-truck t1 l1-0 l0-0 c1) (unload-truck p0 t1 l0-0) (load-airplane p0 a1 l0-0) (fly-airplane a1 l0-0 l1-0) (unload-airplane p0 a1 l1-0) (load-truck p0 t0 l1-0) (fly-airplane a0 l0-0 l1-0) (drive-truck t0 l1-0 l0-0 c0) (unload-truck p0 t0 l0-0", "year": "" }, { "authors": "", "journal": "", "ref_id": "b50", "title": "(fly-airplane a1 l0-0 l1-0) (load", "year": "" }, { 
"authors": "A ================failure================", "journal": "", "ref_id": "b51", "title": "7 Blocksworld Prompts in PDDL A.7.1 Domain description Blocksworld Domain Description Here is a pddl domain, an example problem and it's corresponding plan", "year": "" }, { "authors": " ", "journal": "", "ref_id": "b52", "title": "", "year": "" } ]
[ { "formula_coordinates": [ 18, 123.17, 551.45, 40.74, 6.97 ], "formula_id": "formula_0", "formula_text": "[STATEMENT]" }, { "formula_coordinates": [ 19, 123.17, 193.94, 40.74, 77.56 ], "formula_id": "formula_1", "formula_text": "→ → → → → → → → [STATEMENT]" }, { "formula_coordinates": [ 20, 123.27, 106.91, 7.31, 13.21 ], "formula_id": "formula_2", "formula_text": "→ →" }, { "formula_coordinates": [ 25, 123.17, 245.41, 144.45, 62.76 ], "formula_id": "formula_3", "formula_text": "[PLAN] xptxjrdkbi3pqsqr object b from object c 9big8ruzarkkquyu object b 1jpkithdyjmlikck object c 2ijg9q8swj2shjel object c from object b [PLAN END] [STATEMENT]" }, { "formula_coordinates": [ 25, 123.17, 364.97, 125.93, 22.91 ], "formula_id": "formula_4", "formula_text": "[PLAN] ---------GPT-4 response ---------xptxjrdkbi3pqsqr" }, { "formula_coordinates": [ 41, 123.17, 234.43, 159.27, 21.77 ], "formula_id": "formula_5", "formula_text": "→ → --------------VAL Feedback-----------------" } ]
On the Planning Abilities of Large Language Models: A Critical Investigation
Intrigued by the claims of emergent reasoning capabilities in LLMs trained on general web corpora, in this paper, we set out to investigate their planning capabilities. We aim to evaluate (1) the effectiveness of LLMs in generating plans autonomously in commonsense planning tasks and (2) the potential of LLMs in LLM-Modulo settings where they act as a source of heuristic guidance for external planners and verifiers. We conduct a systematic study by generating a suite of instances on domains similar to the ones employed in the International Planning Competition and evaluate LLMs in two distinct modes: autonomous and heuristic. Our findings reveal that LLMs' ability to generate executable plans autonomously is rather limited, with the best model (GPT-4) having an average success rate of ∼12% across the domains. However, the results in the LLM-Modulo setting show more promise. In the LLM-Modulo setting, we demonstrate that LLM-generated plans can improve the search process for underlying sound planners and additionally show that external verifiers can help provide feedback on the generated plans and back-prompt the LLM for better plan generation.
Karthik Valmeekam; Matthew Marquez; Sarath Sreedharan; Subbarao Kambhampati
[ { "figure_caption": "Figure 2 :2Figure 2: The diagrammatic overview of the prompt generation pipeline. The prompt configurations for the different experiments are generated from PDDL domain files and are modified with an example generator and natural language translator as needed depending on the experiment requirements.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Assessment of GPT-4 plans with relaxations in Blocksworld domain", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :Figure 848Figure4: A detailed comparison of the blocksworld instances, against the instances where GPT-4 was able to generate a correct plan with a PDDL or a natural language style prompt which included one example.", "figure_data": "", "figure_id": "fig_2", "figure_label": "48", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: A detailed comparison of the blocksworld instances, against the instances where GPT-4 was able to generate a correct plan with a PDDL or a natural language style prompt which included no examples.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: A detailed comparison of the logistics instances, against the instances where GPT-4 was able to generate a correct plan with a PDDL or a natural language style prompt which included no examples.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :Figure 9 :89Figure 8: Assessment of GPT-4 plans with relaxations in Mystery Blocksworld (Deceptive Disguising) domain", "figure_data": "", "figure_id": "fig_5", "figure_label": "89", "figure_type": "figure" }, { "figure_caption": "[PLAN] ---------GPT-4 response --------unstack the yellow block from on top of the orange block put down the yellow block unstack the red block from on top of the blue block put down the red block pick up the orange block stack the orange block on top of the red block [PLAN END] ---------Extracted plan ---------(unstack yellow orange) (put-down yellow) (unstack red blue) (put-down red) (pick-up orange) (stack orange red) --------Ground truth plan ---------(unstack yellow orange) (put-down yellow) (pick-up orange) (stack orange red) ================SUCCESS================ A.4.3 Zero-shot prompt with GPT-4 plan", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "4 .4Stack the orange block on top of the blue block ---------Extracted plan ---------(unstack blue orange) (put-down blue) (pick-up orange) (stack orange blue) --------Ground truth plan ---------(unstack blue orange) (put-down blue) (pick-up orange) (stack orange blue) ================SUCCESS================ A.4.4 State-tracking prompt with GPT-4 plan COT state-tracking prompt with GPT-4 plan", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "PLAN] ---------GPT-4 response --------feast object a from object b succumb object a attack object d overcome object d from object a feast object c from object a [PLAN END] ---------Extracted plan -----------Ground truth plan ---------(feast object d object c) (succumb object d) (attack object c) (overcome object c object a) ================FAILURE================ A.5.3 Zero-shot prompt with GPT-4 plan (Deceptive Disguising) Zero-shot prompt 
with GPT-4 plan (Deceptive Disguising) [STATEMENT] As initial conditions I have that, object a craves object b, object d craves object c, harmony, planet object b, planet object c, province object a and province object d. → My goal is to have that object c craves object a. What is the plan to achieve my goal? Just give the actions in the plan. ---------GPT-4 response ---------1. Feast object a from object b 2. Attack object a 3. Succumb object a 4. Overcome object a from object c 5. Feast object c from object a ---------Extracted plan ---------(feast a b) (attack a) (succumb a) (overcome a c) (feast c a) --------Ground truth plan ---------(feast d c) (succumb d) (attack c) (overcome c a) ================FAILURE================ A.5.4 State-tracking prompt with GPT-4 plan COT state-tracking prompt with GPT-4 plan (Deceptive Disguising)", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "→ 2 .→ 3 .→ 4 .→ 2 .→ 3 .→ 4 .→ 5 .→ 6 .23423456As initial conditions I have that, object b craves object c, harmony, planet object a, planet object c, planet object d, province object a, province object b and province object d → My goal is to have that object c craves object b. My plan is as follows: [PLAN] 1. Current State: object b craves object c, harmony, planet object a, planet object c, planet object d, province object a, province object b and province object d → Action: feast object b from object c Reason: The above action is applicable in the current state because its preconditions; object b craves object c, harmony and province object b, are satisfied in the current state. → Resulting State: pain object b, planet object a, planet object c, planet object d, province object a, province object c and province object d Current State: pain object b, planet object a, planet object c, planet object d, province object a, province object c and province object d → Action: succumb object b Reason: The above action is applicable in the current state because its preconditions; pain object b, are satisfied in the current state. → Resulting State: harmony, planet object a, planet object b, planet object c, planet object d, province object a, province object b, province object c and province object d Current State: harmony, planet object a, planet object b, planet object c, planet object d, province object a, province object b, province object c and province object d → Action: attack object c Reason: The above action is applicable in the current state because its preconditions; harmony, planet object c and province object c, are satisfied in the current state. → Resulting State: pain object c, planet object a, planet object b, planet object d, province object a, province object b and province object d Current State: pain object c, planet object a, planet object b, planet object d, province object a, province object b and province object d → Action: overcome object c from object b Reason: The above action is applicable in the current state because its preconditions; pain object c and province object b, are satisfied in the current state. → Resulting State: object c craves object b, harmony, planet object a, planet object b, planet object d, province object a, province object c and province object d → Final State: object c craves object b, harmony, planet object a, planet object b, planet object d, province object a, province object c and province object d → The goal conditions are satisfied in the final state. Hence, the above plan is valid. 
[PLAN END] [STATEMENT] As initial conditions I have that, object a craves object b, object d craves object c, harmony, planet object b, planet object c, province object a and province object d → My goal is to have that object c craves object a. My plan is as follows: [PLAN] ---------GPT-4 response ---------1. Current State: object a craves object b, object d craves object c, harmony, planet object b, planet object c, province object a and province object d → Action: attack object d Reason: The above action is applicable in the current state because its preconditions; planet object d, province object d, and harmony, are satisfied in the current state. → Resulting State: pain object d, planet object b, planet object c, province object a, province object b, and province object c Current State: pain object d, planet object b, planet object c, province object a, province object b, and province object c → Action: overcome object d from object a Reason: The above action is applicable in the current state because its preconditions; pain object d and province object a, are satisfied in the current state. → Resulting State: object d craves object a, harmony, planet object b, planet object c, province object a, province object b, province object c, and province object d Current State: object a craves object b, object d craves object a, harmony, planet object b, planet object c, province object a, province object b, province object c, and province object d → → Action: attack object b Reason: The above action is applicable in the current state because its preconditions; harmony, planet object b, and province object b, are satisfied in the current state. → Resulting State: pain object b, planet object c, province object a, province object c, and province object d Current State: pain object b, planet object c, province object a, province object c, and province object d → Action: succumb object b Reason: The above action is applicable in the current state because its preconditions; pain object b, are satisfied in the current state. → Resulting State: harmony, planet object c, planet object b, province object a, province object b, province object c, and province object d Current State: harmony, planet object c, planet object b, province object a, province object b, province object c and province object d → Action: attack object c Reason: The above action is applicable in the current state because its preconditions; harmony, planet object c, province object c, are satisfied in the current state. → Resulting State: pain object c, planet object b, province object a, province object b, and province object d Current State: pain object c, planet object b, province object a, province object b, and province object d → Action: overcome object c from object a Reason: The above action is applicable in the current state because its preconditions; pain object c and province object a, are satisfied in the current state. → Resulting State: object c craves object a, harmony, planet object b, planet object d, province object a, province object b and province object d → Final State: object c craves object a, harmony, planet object b, planet object d, province object a, province object b and province object d → The goal conditions are satisfied in the final state. Hence, the above plan is valid. 
[PLAN END] ---------Extracted plan ---------(attack object d) (overcome object d object a) (attack object b) (succumb object b) (attack object c) (overcome object c object a) --------Ground truth plan ---------(feast object d object c) (succumb object d) (attack object c) (overcome object c object a) ================FAILURE================ A.5.5 Domain description (Randomized Disguising) Mystery Blocksworld Domain Description (Randomized Disguising) I am playing with a set of objects. Here are the actions I can do 1jpkithdyjmlikck object xptxjrdkbi3pqsqr object from another object 9big8ruzarkkquyu object 2ijg9q8swj2shjel object from another object I have the following restrictions on my actions:", "figure_data": "", "figure_id": "fig_9", "figure_label": "23423456", "figure_type": "figure" }, { "figure_caption": "object a from object b 9big8ruzarkkquyu object a 1jpkithdyjmlikck object d 2ijg9q8swj2shjel object d from object c [PLAN END] ---------Extracted plan ---------(xptxjrdkbi3pqsqr object a object b) (9big8ruzarkkquyu object a) (1jpkithdyjmlikck object d) (2ijg9q8swj2shjel object d object c) --------Ground truth plan ---------(xptxjrdkbi3pqsqr object d object c) (9big8ruzarkkquyu object d) (1jpkithdyjmlikck object c) (2ijg9q8swj2shjel object c object a) ================FAILURE================ A.5.7 Zero-shot prompt with GPT-4 plan (Randomized Disguising)", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "to have that object c 4dmf1cmtyxgsp94g object b. What is the plan to achieve my goal? Just give the actions in the plan. ---------GPT3 response ---------1. 1jpkithdyjmlikck object A 2. 2ijg9q8swj2shjel object A from object B 3. xptxjrdkbi3pqsqr object C from object A 4. 9big8ruzarkkquyu object A ---------Extracted plan ---------(1jpkithdyjmlikck object a) (2ijg9q8swj2shjel object a object b) (xptxjrdkbi3pqsqr object c object a) (9big8ruzarkkquyu object a) --------Ground truth plan ---------(xptxjrdkbi3pqsqr object b object c) (9big8ruzarkkquyu object b) (1jpkithdyjmlikck object c) (2ijg9q8swj2shjel object c object b) ================FAILURE================ A.6 Logistics Prompts in Natural Language A.6.1 Domain description", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "------Extracted plan ---------(unstack a b) (put-down a) (pick-up c) (stack c a) --------Ground truth plan ---------(unstack d c) (put-down d) (pick-up c) (stack c a) ================FAILURE================ :effect (and (province ?ob) (harmony) (planet ?ob)(not (pain ?ob))))", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "A. 8 . 383) (attack c) (overcome c a) [QUERY_PLAN_END] ---------Extracted plan ---------(feast a b) Zero-shot prompt with GPT-4 plan (Deceptive Disguising) Zero-shot prompt with GPT-4 plan (Deceptive Disguising) Here is a pddl domain, a planning problem. Provide the plan for the query problem. Provide only the pddl syntax for the plan where each action is represented as (ACTION_NAME OBJECTS). ------GPT-4 response ---------Here is the PDDL syntax for the plan: 1. (attack a) 2. (overcome b a) 3. (feast b d) 4. (succumb d) 5. 
(overcome c b) ---------Extracted plan ---------(attack a) (overcome b a) (feast b d) (succumb d) (overcome c b) --------Ground truth plan ------", "figure_data": "", "figure_id": "fig_13", "figure_label": "83", "figure_type": "figure" }, { "figure_caption": "My goal is to have that the red block is on top of the yellow block and the yellow block is on top of the blue block.The above plan is invalid.The following action at step 3 has an unmet precondition: attack object d The unmet precondition is: planet object d --------------GPT-4 Response-----------------Apologies for the oversight. Here's the corrected plan: [PLAN] feast object a from object b succumb object a attack object c overcome object c from object d feast object d from object c [PLAN END] --------------VAL Feedback-----------------The above plan is invalid.The following action at step 3 has an unmet precondition: attack object c The unmet precondition is: province object c --------------GPT-4 Response-----------------I apologize for the mistake. Here's another attempt at a valid plan: [PLAN] feast object a from object b succumb object a attack object b overcome object b from object c feast object c from object d succumb object c [PLAN END] --------------VAL Feedback-----------------The above plan is invalid.The following action at step 4 has an unmet precondition: overcome object b from object c The unmet precondition is: province object c --------------GPT-4 Response-----------------I apologize for the repeated mistakes. Here's a new plan: [PLAN] feast object a from object b succumb object a attack object b overcome object b from object a feast object a from object d succumb object a attack object d overcome object d from object c [PLAN END] --------------VAL Feedback-----------------The above plan is invalid. The following action at step 5 has unmet preconditions: feast object a from object d The unmet preconditions are: object a craves object d and province object a --------------GPT-4 Response-----------------I apologize for the errors in my previous plans. Here's a revised plan: [PLAN] feast object a from object b succumb object a attack object b overcome object b from object a attack object c succumb object c feast object c from object a [PLAN END] --------------VAL Feedback-----------------The above plan is invalid.The following action at step 5 has an unmet precondition: attack object c The unmet precondition is: province object c --------------GPT-4 Response-----------------I apologize for the continued mistakes. 
Here's another attempt at a valid plan: [PLAN] feast object a from object b succumb object a attack object b overcome object b from object a feast object d from object c succumb object d attack object c overcome object c from object a [PLAN END]", "figure_data": "", "figure_id": "fig_14", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "33Logistics example with GPT-4", "figure_data": "", "figure_id": "fig_15", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: The description of the example problem.", "figure_data": "", "figure_id": "fig_16", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: The description of the example problem and showcasing the solution of the example problem.", "figure_data": "", "figure_id": "fig_17", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Interface at the plan writing phase without LLM assistance.", "figure_data": "", "figure_id": "fig_18", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Interface at plan writing phase with assistance from the LLM.", "figure_data": "", "figure_id": "fig_19", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Description of the translate panel.", "figure_data": "", "figure_id": "fig_20", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Interface at the plan translation phase", "figure_data": "", "figure_id": "fig_21", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Results of GPT-4, GPT-3.5 (popularly known as ChatGPT), Instruct-GPT3.5, Instruct-GPT3 (text-davinci-002) and GPT3 (davinci) for the Plan Generation task with prompts in natural language.", "figure_data": "DomainMethodInstances correctGPT-4GPT-3.5I-GPT3.5I-GPT3GPT-3BlocksworldOne-shot 206/600 (34.3%) 37/600 (6.1%)54/600 (9%)41/600 (6.8%)6/600 (1%)(BW)Zero-shot 210/600 (34.6%)8/600 (1.3%)---COT214/600 (35.6%)----Logistics DomainOne-shot28/200 (14%)1/200 (0.5%)6/200 (3%)3/200 (1.5%)-Zero-shot15/200 (7.5%)1/200 (0.5%)---Mystery BW (Deceptive)One-shot26/600 (4.3%)0/600 (0%)4/600 (0.6%)14/600 (2.3%)0/600 (0%)Zero-shot1/600 (0.16%)0/600 (0%)---COT54/600 (9%)----Mystery BW (Randomized)One-shot12/600 (2%)0/600 (0%)5/600 (0.8%)5/600 (0.8%)1/600 (0.1%)Zero-shot0/600 (0%)0/600 (0%)---", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of GPT-4 and GPT-3.5 (popularly known as ChatGPT) for the Plan Generation task with one or zero examples in the prompt by directly providing the domain and problem in PDDL.", "figure_data": "DomainMethodInstances correctGPT-4GPT-3.5Blocksworld (BW)One-shot75/600 (12.5%)12/600 (2%)Zero-shot 106/600 (17.6%) 12/600 (2%)Logistics DomainOne-shot28/200 (14%)1/200 (0.5%)Zero-shot11/200 (5.5%)0/200 (0%)Mystery BW (Deceptive)One-shot17/600 (2.8%)1/600 (0.1%)Zero-shot3/600 (0.5%)0/600 (0%)", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Evaluation of GPT-4 and Instruct-GPT3 (I-GPT-3) plans as heuristics for a local search planner LPG, on blocksworld (BW), logistics and mystery blocksworld domains.", "figure_data": "DomainLLMAvg. Search Steps Empty Random LLMAvg. Plan Length Empty Random LLMAvg. Lev. 
DistanceSeedSeedSeedSeedSeedSeedPlanPlanPlanPlanPlanPlanBWI-GPT-315.820.0714.58.459.6211.77.22GPT-415.820.078.98.459.6210.764.15LogisticsGPT-477.5144.3951.323.732.7232.2415.04Mystery BWGPT-415.820.4516.098.459.7811.537.77", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "DomainI.CA.F.RGPT-4GPT-4Blocksworld (BW)41/50 (82%)3.68Logistics35/50 (70%)3.31Mystery BW5/50 (10%)7.0", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "table, the orange block is on the table and the yellow block is on the table My goal is to have that the orange block is on top of the blue block. My plan is as follows: [PLAN] 1. Current State: the red block is clear, the blue block is clear, the yellow block is clear, the hand is empty, the blue block is on top of the orange block, the red block is on the table, the orange block is on the table and the yellow block is on the table Resulting State: the red block is clear, the blue block is clear, the yellow block is clear, the hand is currently holding orange block, the red block is on the table, the blue block is on the table and the yellow block is on the table 4. Current State: the red block is clear, the blue block is clear, the yellow block is clear, the hand is currently holding orange block, the red block is on the table, the blue block is on the table and the yellow block is on the table Resulting State: the red block is clear, the orange block is clear, the yellow block is clear, the hand is empty, the orange block is on top of the blue block, the red block is on the table, the blue block is on the table and the yellow block is on the table", "figure_data": "→→Action: stack the orange block on top of the blue blockReason: The above action is applicable in the current state because its preconditions; the blue→block is clear and the hand is currently holding orange block, are satisfied in the current→state.", "figure_id": "tab_7", "figure_label": "→→", "figure_type": "table" }, { "figure_caption": "Action: unstack the red block from on top of the blue block Reason: The above action is applicable in the current state because its preconditions; the red block is clear, the hand is empty, and the red block is on top of the blue block, are satisfied in the current state.Resulting State: the blue block is clear, the yellow block is clear, the hand is currently holding red block, the yellow block is on top of the orange block, the blue block is on the table, and the orange block is on the table → → 2. Current State: the blue block is clear, the yellow block is clear, the hand is currently holding red block, the yellow block is on top of the orange block, the blue block is on the table, and the orange block is on the table Action: pick up the orange block Reason: The above action is applicable in the current state because its preconditions; the orange block is clear, the hand is empty, and the orange block is on the table, are satisfied in the current state. Resulting State: the red block is clear, the blue block is clear, the hand is currently holding orange block, the yellow block is clear, the red block is on the table, the blue block is on the table, and the yellow block is on the table 6. 
Current State: the red block is clear, the blue block is clear, the hand is currently holding orange block, the yellow block is clear, the red block is on the table, the blue block is on the table, and the yellow block is on the table Resulting State: the blue block is clear, the orange block is clear, the hand is empty, the yellow block is clear, the orange block is on top of the red block, the blue block is on the table, the red block is on the table, and the yellow block is on the table", "figure_data": "→→→→→→→→Action: stack the orange block on top of the red blockReason: The above action is applicable in the current state because its preconditions; the redblock is clear, and the hand is currently holding orange block, are satisfied in thecurrent state.", "figure_id": "tab_8", "figure_label": "→→", "figure_type": "table" }, { "figure_caption": "1 Blocksworld example with GPT-4 Back-prompt with GPT-4 I am playing with a set of blocks where I need to arrange the blocks into stacks. Here are the actions I can do pick up a block if the block is on the table and the block is clear. A block is clear if the block has no other blocks on top of it and if the block is not picked up. Once I put down or stack a block, my hand becomes empty. Once you stack a block on top of a second block, the second block is no longer clear.[STATEMENT] As initial conditions I have that, the blue block is clear, the hand is empty, the blue block is on top of the orange block, the orange block is on top of the yellow block, the yellow block is on top of the red block and the red block is on the table.My goal is to have that the red block is on top of the orange block and the yellow block is on top of the red block. As initial conditions I have that, the yellow block is clear, the hand is empty, the red block is on top of the orange block, the orange block is on top of the blue block, the yellow block is on top of the red block and the blue block is on the table.", "figure_data": "unstack the yellow block from on top of the blue blockstack the yellow block on top of the red block[PLAN END][STATEMENT]→Pick up a blockUnstack a block from on top of another blockPut down a blockStack a block on top of another blockI have the following restrictions on my actions:I can only pick up or unstack one block at a time.I can only pick up or unstack a block if my hand is empty.I can only →→→My plan is as follows:[PLAN]unstack the blue block from on top of the orange blockput down the blue blockunstack the orange block from on top of the yellow blockput down the orange blockunstack the yellow block from on top of the red blockstack the yellow block on top of the blue blockpick up the red blockstack the red block on top of the orange block", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work by OpenAI is the original source of the GPT-x models, which serve as the basis for the development of LLMs in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[16]", "Explanation": "The cited work is a thread of efforts that aim to study the emergent capabilities of LLMs, which the citing paper continues to build upon in their own research."}, {"Category": "Supporting Evidence", "Citation": "[35,29,7]", "Explanation": "The cited works provide evidence of the emergent commonsense reasoning capabilities of LLMs, which the citing paper uses to support their own research on the topic."}, {"Category": "Extension or Continuation", "Citation": "[33]", "Explanation": "The cited work focuses on logical reasoning capabilities of LLMs, which the citing paper extends by exploring the use of LLMs for ethical reasoning."}, {"Category": "Data Source", "Citation": "[15]", "Explanation": "The cited work is a study of LLM behavior that provides data and insights for the citing paper to use in their own research on the emergent capabilities of LLMs."}, {"Category": "Supporting Evidence", "Citation": "[19,37,4]", "Explanation": "The cited works have suggested that large language models (LLMs) are capable of performing certain types of reasoning tasks, which the citing paper builds upon in its research on planning and sequential decision making."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work provides a framework for generating planning problem instances based on the International Planning Competition, which the citing paper leverages to evaluate the reasoning capabilities of LLMs in a systematic and automated manner."}, {"Category": "Supporting Evidence", "Citation": "[25]", "Explanation": "The cited work, GPT-4, is the best LLM tested in the autonomous mode and is used to assess the quality and correctness of plans generated by LLMs. The results show that only about 12% of the plans generated by GPT-4 are actually executable without errors and reach their goals, providing a baseline for comparison in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[24]", "Explanation": "The cited work, GPT-3.5, is one of the LLMs tested in the autonomous mode and is used to assess the quality and correctness of plans generated by LLMs. The results obtained from this LLM are used as a data source in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The cited work, InstructGPT-3.5 and InstructGPT-3, are LLMs tested in the autonomous mode and are used to assess the quality and correctness of plans generated by LLMs. The data obtained from these LLMs is used as a data source in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work, GPT-3, is an LLM tested in the autonomous mode and is used to assess the quality and correctness of plans generated by LLMs. The data obtained from this LLM is used as a data source in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work introduces the automated planner called LPG, which the citing paper adopts to check the correctness of the plans produced by LLMs in the LLM-Modulo mode. 
The citing paper uses the LPG to repair the LLM plans and compare them with two baselines to evaluate the performance of the LLM plans in the autonomous mode."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, VAL, is used as an external verifier to point out errors in the LLM-generated plans and back-prompt the LLM for a new plan with this feedback. The citing paper adopts the use of VAL as a method to improve the plan correctness in common-sense domains."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work provides the standard practice in automated planning, which the citing paper adopts in their evaluation of LLMs planning capabilities in zero-shot and few-shot modes."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "The cited work is a previous study on LLMs planning capabilities in commonsense domains, which the citing paper uses as a reference for their own evaluation in a mode where the domain is specified in the prompt."}, {"Category": "Extension or Continuation", "Citation": "[2]", "Explanation": "The cited work is a previous study on LLM plans in SayCan, which the citing paper builds upon by simplifying the process of filtering and interpreting plans in terms of available skills and actions in the domain."}, {"Category": "Supporting Evidence", "Citation": "[18]", "Explanation": "The cited work provides a detailed analysis of the limitations of LLMs in planning and highlights the need for a more comprehensive approach to plan development."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work, PDDL, serves as the standard representation for specifying planning problems, which the citing paper adopts in their research to structure and present their own planning problem."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work VAL is used to evaluate the translated plan generated by the LLM in the autonomous mode, providing a method for assessing the plan quality and performance."}, {"Category": "Extension or Continuation", "Citation": "[40]", "Explanation": "The cited work by [40] argues that language models trained for reasoning focus on statistical features rather than causal structure, which impacts their performance in tasks like Blocksworld planning. The citing paper extends this idea by providing evidence that fine-tuning GPT-3 in the Blocksworld domain has limited impact on improving performance, aligning with the findings of [40]."}, {"Category": "Supporting Evidence", "Citation": "[31]", "Explanation": "The cited work, BLOOM, is mentioned as an open-source LLM that the citing paper has conducted preliminary experiments with. 
The results of these experiments indicate that BLOOM is also ineffective in plan generation, which supports the claim that LLMs are not effective in this task."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work introduces the concept of relaxations in the domain model, which the citing paper adopts to simplify the problem in automated planning community and derive heuristics for planning problems."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work, LPG, is used as a local-search planner in the citing paper to generate plans by starting with a seed plan and iteratively repairing flaws until a correct plan is found."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work provides a version of LPG that aims to minimize the changes to the suggested plan, which the citing paper uses to measure the edit distance between the initial LLM plan and the final correct solution generated by LPG."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, VAL, is used as a tool to validate the correctness of plans generated by the LLM, providing a methodological basis for the citing paper to ensure the quality of the plans generated."}, {"Category": "Supporting Evidence", "Citation": "[36,34]", "Explanation": "The cited works demonstrate that LLMs are not effective in verifying plans, which supports the claim in the citing paper that self-critiquing architectures with LLMs serving as the verifier are of questionable utility."}, {"Category": "Supporting Evidence", "Citation": "[10]", "Explanation": "The cited work by NASA-TLX assessment tool is used to gauge the cognitive load of participants in the user study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[5]", "Explanation": "The cited work is used to acknowledge the potential for automation bias in the user study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work by [30] provides a discussion on the social harms caused by text generated by large language models trained on unwashed web data, which serves as a methodological basis for the citing paper to consider the potential harms in the context of using LLMs for planning."}, {"Category": "Extension or Continuation", "Citation": "[20]", "Explanation": "The cited work by [20] is mentioned in the context of the social harms caused by text generated by large language models, which the citing paper extends by focusing on the specific concerns of using LLMs for planning in terms of safety and plan execution."}]
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b10", "b11", "b12", "b12", "b13", "b15", "b13", "b14", "b15", "b17", "b18", "b19", "b21", "b20", "b22", "b23", "b24", "b16", "b18", "b22", "b26", "b22" ], "table_ref": [], "text": "ISTOPATHOLOGICAL images are considered as the \"gold standard\" for cancer diagnosis, and the analysis of Whole Slide Images (WSIs) is a critical approach for cancer diagnosis, prognosis, survival prediction, and estimation of response-to-treatment in patients [1][2] [3]. Nowadays, deep learning (DL) has been successfully applied to the field of computational pathology to develop the computeraided diagnosis (CAD) system, which can help pathologists improve diagnosis accuracy together with good consistency and reproducibility [4] [5].\nDue to the huge size of WSIs and lack of pixle-level annotation, the weakly-supervised learning framework is generally adopted for analysis of gigapixel WSI, among which multiple instance learning (MIL) is a typical method [6] [7]. Under the MIL framework, each WSI is regarded as a bag with numerous cropped patches as instances. Their features are then extracted and aggregated to produce a slide-level representation for the following weakly-supervised task. Existing MIL-based methods have shown effectiveness in WSI analysis [8] [9][10] [11] [12]. However, these works generally focus on the single resolution of WSIs, and ignore the multi-resolution information. In fact, a WSI can be scaled at different resolutions and express rich multi-scale diagnostic information from extremely small cells to large tissues [13]. These multi-scale semantics can cover tumor-related information more comprehensively, thus helping pathologists improve diagnostic accuracy.\nInspired by the diagnosis process of pathologists, several works have extended the previous MIL frameworks to the multi-resolution oriented approaches [13] [14][15] [16]. For example, a simple strategy is to concatenate the instances from different resolutions into a bag for the following MIL model [14]. An alternative solution is to construct the patch pyramid to preserve the hierarchical information of WSIs, in which the patches from different resolutions are spatially aligned through the feature pyramid [15]. These multi-resolution methods achieve superior performance than the single-resolution ones. However, these works cannot effectively alleviate the intrinsic semantic gap among different resolution patches, which present different levels of information from cellular-scale to tissuescale.\nRecently, Transformer has been widely used in various vision tasks [16][17] [18] [19], which can capture the correlation between different segments in a sequence (tokens) to learn the long-range information. Recently, multi-scale vision Transformers (MVTs) have been developed to process the visual tokens of different scales, and can effectively learn and fuse multi-scale features with superior performance for different tasks [20][21] [22]. Hierarchical structures are popular in recent MVTs, and their basic idea is to use local Transformer blocks on non-overlapping images and hierarchically aggregate them to learn the multi-scale features [21]. Thus, they can effectively alleviate the intrinsic semantical gap in different scales, and work well on natural images with small image sizes, such as 256×256 and 384×384. 
However, it is a timeconsuming and tedious task to apply existing MVTs to WSIs directly, because WSIs are high-resolution scans of tissue sections, whose full spatial sizes can be over 100000×100000 pixels at 20× magnification.\nOn the other hand, the spatial information in WSIs can describe the spatial relationship between the tumor and its surrounding tissues, which has great diagnostic significance for WSI analysis [23]. Therefore, it would be much more desirable to learn the spatial-related representation from WSIs for a CAD model. In order to learn spatial-related features, Transformer can add the learnable position encoding to patch embeddings for retaining positional information in the fixed-length sequences [24] [25]. However, existing position encoding strategies cannot be used for unfixed-length sequences in the WSI analysis since the number of cropped patches varies among different WSIs. Consequently, the spatial information of WSIs is ignored in the Transformer-based representation learning [17] [19], which affects the performance of a CAD model for cancer diagnosis.\nRecently, Graph Convolutional Network (GCN) has shown its effectiveness in learning the spatial information of WSIs [23][26] [27]. Existing graph-based MIL methods usually regard the WSI as a graph-based data structure, where the nodes correspond to patches and the edges are constructed between adjacent patches [23]. Thus, the constructed graph can effectively represent the spatial relationships among different regions in a WSI. Therefore, it is feasible to introduce the graph representation into Transformer to capture spatial-related information of WSIs.\nIn this work, we propose a Multi-scale Efficient Graph-Transformer (MEGT) framework to effectively fuse the multiscale information of WSIs for more accurate diagnosis. Specifically, MEGT adopts a dual-branch Efficient Graph-Transformer (EGT) to process the low-resolution and highresolution patch embeddings (i.e., tokens in a transformer), respectively. Meanwhile, a multi-scale feature fusion module (MFFM) is developed to fuse these tokens. The proposed EGT integrates the graph representation into the Transformer to capture the spatial information of WSIs. Moreover, to accelerate EGT computation, a novel token pruning module is developed to reduce the redundant tokens. MFFM introduces a non-patch token for each branch as an agent to exchange information with another branch by a specially designed crossattention. The proposed MEGT is evaluated on two public WSIs datasets, and the results indicate that MEGT outperforms other state-of-the-art (SOTAs) MIL models on the WSI classification task. The main contributions of this work are as follows: 1) A new EGT is proposed to learn both the spatial information of WSIs and the correlations between image patches by innovatively integrating GCN into Transformer. Thus, the EGT can be used as a powerful backbone to provide superior feature representations. 2) A simple yet effective token pruning module is developed in the EGT to reduce the number of tokens without introducing additional parameters, which can significantly expedite EGT computation and preserve the most important tokens.\n3) A new MFFM is developed to fuse multi-scale features and alleviate the semantic gap in different resolution patches. The MFFM uses the class token of the branch as an agent to exchange information with the other branch via a cross-attention module, so it only needs linear-time computational and memory complexity." 
}, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [ "b27", "b28", "b29", "b22", "b30", "b14", "b18", "b16" ], "table_ref": [], "text": "A. Multiple Instance Learning for WSI MIL has been successfully applied to WSI analysis, such as cancer diagnosis and survival prediction [28], [29]. According to the network structures, existing MIL methods for WSI analysis generally can be divided into two categories [30]: structure-specific and structure-free models.\nThe structure-specific model usually adopts the GCN to learn the spatial information from WSI. For example, DeepGraphSurv applied the GCN to WSIs for survival prediction, in which the extracted patches were adopted as nodes for graph construction [23]; Li et al. [31] proposed a context-aware GCN for survival analysis by using the true spatial coordinates of WSIs to connect edges between adjacent patches. These structure-specific models can represent the contextual information between patches for learning the spatial representation of WSIs.\nThe structure-free model is generally developed based on the attention-MIL. It uses attention scores to weight the instance embeddings to learn the slide-level representation. For example, Li et al. [15] proposed a dual-stream MIL (DSMIL) for WSIs classification, in which the attention scores for each instance were computed with the critical instance; Shao et al. [19] designed a Transformer-based MIL (TransMIL) for WSIs classification, it utilized self-attention to capture long-range dependencies between patches; Huang et al. [17] combined self-supervised learning and Transformer to generate superior feature representation for survival analysis with WSIs. The structure-free model can capture the correlation among different patches to improve the performance of the WSI-based CADs.\nUnlike previous MIL methods, we aim to explore a novel Graph-Transformer structure to combine the strengths of the above two models, which integrates GCN into Transformer to learn both the spatial information and the long-range dependencies between image patches in a WSI." }, { "figure_ref": [], "heading": "B. Multi-scale WSI Analysis", "publication_ref": [ "b6", "b13", "b29", "b6", "b13", "b14", "b29", "b12" ], "table_ref": [], "text": "In recent years, the multi-scale oriented WSI analysis has attracted more attention. Compared with the single-scale method, the multi-scale approach can learn more semantic information for classification tasks [7] [13] [14][15] [30]. For example, Campanella et al. [7] trained different MIL branches for different resolutions and then used a max-pooling operation to fuse these multi-resolution embedding for learning the WSIlevel representation; Hashimoto et al. [14] mixed patches from different resolutions into a bag and then fed it to the MIL model; Li et al. [15] adopted a pyramidal concatenation strategy to spatially align. patches from different resolutions for WSIs classification; Liu et al. [30] proposed a square pooling layer to align patches from two resolutions, which spatially pooled patches from high-resolution under the guidance of lowresolution; Hou et al. [13] also proposed a heterogeneous graph neural network for tumor typing and staging, in which the heterogeneous graph was constructed based on the feature and spatial-scaling relationship of the multi-resolution patches. 
These works demonstrate that the multi-scale features can learn more effective slide-level representation to improve the performance of WSI analysis.\nHowever, due to the semantic gap in different resolution patches, existing multi-resolution schemes still cannot fully utilize the multi-scale features of WSIs. Thus, we suggest to introduce a non-patch agent for each resolution to address this issue." }, { "figure_ref": [], "heading": "C. Multi-scale Vision Transformer", "publication_ref": [ "b19", "b31", "b32", "b19", "b20", "b31", "b32" ], "table_ref": [], "text": "Inspired by the feature pyramid of images in CNNs, the MVTs have been designed to learn multi-scale visual representations from images [20][21] [32] [33]. For example, Wang et al. [20] developed a pyramid vision transformer for dense prediction tasks, in which a progressive shrinking pyramid was designed to obtain multi-scale feature maps; Zhang et al. [21] proposed a nested hierarchical Transformer, which stacked Transformer layers on non-overlapping image blocks independently, and then nested them hierarchically. Liu et al. [32] proposed a general Transformer backbone that provided hierarchical feature representations for computer vision; Fan et al. [33] designed a multi-scale vision Transformer for video and image recognition, which hierarchically expanded the feature complexity while reducing visual resolution; These works indicate that the MVTs can learn more effective feature representation for different computer vision tasks.\nHowever, the existing MVTs algorithms are mainly developed for natural images with small sizes. Since WSIs have an extremely large size, these algorithms cannot be directly applied to gigapixel WSIs. Therefore, we investigate how to learn multi-scale feature representations in Transformer models for WSIs classification." }, { "figure_ref": [ "fig_0" ], "heading": "III. METHODOLOGY", "publication_ref": [ "b33" ], "table_ref": [], "text": "In this work, a novel MEGT framework is proposed for WSIs classification, which can effectively exploit the multi-scale feature information of WSIs. Given a WSI pyramid, our framework aggregates patch features of different resolutions to implement the slide-level prediction. As shown in Fig. 1, a WSI pyramid is first cropped into non-overlapping patches at low and high resolutions, and their features are extracted by a pretrained TransPath [34], respectively. Then, the multi-resolution patch embeddings will be fed into the proposed MEGT framework, which consists of two Efficient Graph-based Transformer (EGT) modules and a multi-scale feature fusion module (MFFM), to extract discriminative representation and fully mine multi-scale information. Finally, a WSI-level classifier is employed to generate the slide-level prediction based on the learned representation. In the following subsections, we will introduce the EGT, MFFM, and the learning strategy of the whole framework in detail." }, { "figure_ref": [ "fig_1" ], "heading": "A. Efficient Graph-Transformer", "publication_ref": [], "table_ref": [], "text": "The spatial correlation among different patches is essential for cancer diagnosis on WSIs. Different from the previous Graphbased and Transformer-based algorithms, our proposed EGT integrates a graph representation of WSI and a Transformer feature to learn both the spatial information of WSI and the long-range dependencies between image patches.\nAs shown in Fig. 2, the EGT contains two Transformer encoders, a token pruning module, and a Graph-Transformer layer. 
The first Transformer encoder adopts a class token to learn the global information of the patch tokens and provides attention scores for token pruning. Then, the token pruning module selects the top-k most important tokens according to these attention scores to reduce the number of tokens. Subsequently, the Graph-Transformer layer uses the selected tokens to learn the local and global information of the WSI simultaneously. Finally, the class token aggregates all the information again through the second Transformer encoder for the subsequent MFFM." }, { "figure_ref": [], "heading": "1) Transformer Encoder", "publication_ref": [ "b34", "b35", "b36" ], "table_ref": [], "text": "A Transformer encoder [35] is employed to learn potential long-term dependencies between instances. It contains multiple Transformer layers, each of which has a multi-head self-attention (MSA) block and a multi-layer perceptron (MLP). MSA uses the self-attention mechanism to calculate the correlation matrix between instances, and its memory and time complexity are both $O(n^2)$. In WSI processing, a WSI may be divided into tens of thousands of patches. To address the issue of such long instance sequences, we employ the Nystrom-attention (NA) method [36], which utilizes the Nystrom method to approximate standard self-attention. The NA method can be defined as:\n$Q = XW_q, \quad K = XW_k, \quad \mathrm{NA} = \mathrm{softmax}\left(\frac{Q\tilde{K}^{T}}{\sqrt{d}}\right)\left(\mathrm{softmax}\left(\frac{\tilde{Q}\tilde{K}^{T}}{\sqrt{d}}\right)\right)^{\dagger}\mathrm{softmax}\left(\frac{\tilde{Q}K^{T}}{\sqrt{d}}\right)$ (1)\nwhere $W_q$ and $W_k$ are learnable parameters, $Q, K \in \mathbb{R}^{n \times d}$, $d$ is the patch embedding feature dimension, $\tilde{Q}, \tilde{K} \in \mathbb{R}^{m \times d}$ are the $m$ landmarks selected from $Q$ and $K$, and $A^{\dagger}$ is the Moore-Penrose inverse of $A$. When $m$ is much smaller than $n$, the computational complexity is reduced from $O(n^2)$ to $O(n)$. The output of the $i$-th Transformer layer can be defined as:\n$T_i^{\prime} = \mathrm{MSA}(\mathrm{LN}(T_{i-1})) + T_{i-1}, \quad T_i = \mathrm{MLP}(\mathrm{LN}(T_i^{\prime})) + T_i^{\prime}$ (2)\nwhere LN denotes layer normalization [37]." }, { "figure_ref": [], "heading": "2) Token Pruning Module", "publication_ref": [], "table_ref": [], "text": "Although the above NA solves the long-sequence problem in the Transformer, the computational cost is still heavy when all tokens are used to construct a graph in the GCN. Therefore, we perform token pruning to reduce redundant tokens.\nLet $n$ denote the number of patch tokens, with an extra class token added for classification in the Transformer. The interactions between the class token and the other tokens are performed via the attention mechanism in NA, which yields an attention map $A \in \mathbb{R}^{(n+1) \times (n+1)}$ whose first row gives the attention scores $a = A[0, 1{:}] \in \mathbb{R}^{1 \times n}$ from the class token to all patch tokens. Thus, these attention scores are used to determine the importance of each patch token.\nIn the multi-head self-attention layer, there are multiple class attention vectors $a_h$, where $h = 1, \ldots, H$ and $H$ is the total number of attention heads. We compute the average over all heads by:\n$\bar{a} = \frac{1}{H} \sum_{h=1}^{H} a_h$ (3)\nAfter that, we keep the tokens corresponding to the $k$ largest (top-k) elements of $\bar{a}$, and fuse the remaining tokens into a new token weighted by their attention scores in $\bar{a}$. The token fusion can be written as:\n$h_{fusion} = \sum_{i=1}^{n-k} \bar{a}_i h_i$ (4)\nwhere $h_i$ denotes the $i$-th of the remaining (non-selected) patch tokens." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "3) Graph-Transformer Layer", "publication_ref": [ "b37" ], "table_ref": [], "text": "After token pruning, a total of $(k+1)$ patch tokens are used in the Graph-Transformer layer.
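To make the token-pruning step just described concrete, the following is a minimal PyTorch sketch of attention-score-based pruning (Eqs. (3)-(4)). The function and variable names are illustrative and the batch dimension is omitted, so this should be read as a sketch under our assumptions rather than the authors' implementation.

```python
import torch

def prune_tokens(patch_tokens: torch.Tensor,
                 attn: torch.Tensor,
                 k: int) -> torch.Tensor:
    """patch_tokens: (n, d) patch embeddings (class token excluded).
    attn: (H, n+1, n+1) per-head attention map from the first encoder.
    Returns (k+1, d): the k kept tokens plus one fused token."""
    # Class-to-patch attention scores, averaged over heads (Eq. 3).
    a = attn[:, 0, 1:].mean(dim=0)                       # (n,)
    # Keep the k highest-scoring patch tokens.
    keep_idx = torch.topk(a, k).indices                  # (k,)
    kept = patch_tokens[keep_idx]                        # (k, d)
    # Fuse the remaining tokens, weighted by their scores (Eq. 4).
    mask = torch.ones_like(a, dtype=torch.bool)
    mask[keep_idx] = False
    fused = (a[mask].unsqueeze(-1) * patch_tokens[mask]).sum(dim=0, keepdim=True)  # (1, d)
    return torch.cat([kept, fused], dim=0)               # (k+1, d)
```

The k selected tokens plus the single fused token give exactly the (k+1) tokens passed on to the Graph-Transformer layer.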
Note that we do not use the class token here, since it may affect the learning of the GCN. Furthermore, graph construction is an essential step in the Graph-Transformer layer for graph representation learning. A graph can be denoted as $G = (V, E)$, where $V$ and $E$ represent the nodes and the set of edges of the graph, respectively. We take each patch token as a node, so that a bag can be represented as a graph node feature matrix $G_{in} \in \mathbb{R}^{(k+1) \times d}$, which has $(k+1)$ nodes, each with a $d$-dimensional feature vector.\nInstead of using the k-NN algorithm to construct the adjacency matrix $A_{adj} \in \mathbb{R}^{(k+1) \times (k+1)}$ from pairs of node features, we adopt the attention matrix of self-attention to adaptively generate the adjacency matrix, thereby further speeding up the training of the network. As shown in Fig. 3, the Graph-Transformer layer is similar to self-attention in that it maps queries and key-value pairs to outputs. The difference is that the Graph-Transformer layer adds a branch to perform graph convolution.\nGiven an input patch embedding $X_{patch} \in \mathbb{R}^{(k+1) \times d_k}$, the query $Q$, key $K$, and values $V_1$, $V_2$ are first calculated through four different linear projections as follows:\n$Q = \mathrm{Linear}(X_{patch}) = X_{patch} W_Q, \quad K = \mathrm{Linear}(X_{patch}) = X_{patch} W_K, \quad V_1 = \mathrm{Linear}(X_{patch}) = X_{patch} W_{V_1}, \quad V_2 = \mathrm{Linear}(X_{patch}) = X_{patch} W_{V_2}$ (5)\nwhere $W_Q$, $W_K$, $W_{V_1}$, and $W_{V_2} \in \mathbb{R}^{d_k \times d_m}$ are the corresponding weight matrices of the linear projections.\nIn addition, we also adopt a multi-head structure to expand the Graph-Transformer layer. As shown in Fig. 3, this design projects the inputs into different subspaces to learn different features, thereby improving the performance of the model. Specifically, the input features are evenly split into $h$ parts, and the attention matrix is calculated as follows:\n$A_i = \mathrm{Score}(Q_i, K_i) = \frac{Q_i K_i^{T}}{\sqrt{d_m / h}}$ (6)\nwhere $A_i \in \mathbb{R}^{(k+1) \times (k+1)}$, $i = 1, \ldots, h$, $Q_i, K_i \in \mathbb{R}^{(k+1) \times \frac{d_m}{h}}$, and $1/\sqrt{d_m / h}$ is a scaling factor. For the Transformer branch, the outputs of the multiple heads are first concatenated together and then fed into a linear projection to obtain the complete output:\n$V_{1i}^{\prime} = \mathrm{softmax}(A_i) V_{1i}, \quad V_1^{\prime} = \mathrm{Concat}(V_{11}^{\prime}, \ldots, V_{1h}^{\prime}) W_{o1}$ (7)\nwhere $V_{1i}^{\prime} \in \mathbb{R}^{(k+1) \times \frac{d_m}{h}}$, $V_1^{\prime} \in \mathbb{R}^{(k+1) \times d_k}$, and $W_{o1} \in \mathbb{R}^{d_m \times d_k}$ is the weight matrix of the linear projection. For the GCN branch, the adjacency matrix $\tilde{A}_i \in \mathbb{R}^{(k+1) \times (k+1)}$ is first obtained by normalizing $A_i$ with added self-connections, and then $\tilde{A}_i$ and the node embedding $V_{2i} \in \mathbb{R}^{(k+1) \times \frac{d_m}{h}}$ are fed into the GCN [38] to learn graph representations. The forward propagation of the GCN can be written as follows:\n$V_{2i}^{\prime} = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A}_i \tilde{D}^{-\frac{1}{2}} V_{2i} W^{(l)}\right)$ (8)\nwhere $V_{2i}^{\prime} \in \mathbb{R}^{(k+1) \times \frac{d_m}{h}}$, $\tilde{D}$ is the degree matrix of $\tilde{A}_i$, and $W^{(l)}$ is a trainable weight matrix. After that, the multiple subgraph representations are concatenated together and then fed into a linear projection to obtain the complete output:\n$V_2^{\prime} = \mathrm{Concat}(V_{21}^{\prime}, \ldots, V_{2h}^{\prime}) W_{o2}$ (9)\nwhere $V_2^{\prime} \in \mathbb{R}^{(k+1) \times d_k}$ and $W_{o2} \in \mathbb{R}^{d_m \times d_k}$ is the weight matrix of the linear projection.\nThe adjacency matrix can effectively represent the spatial distribution and adjacency relationships between nodes, and the spatial-related information is then learned through the GCN layer.
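As an illustration of Eqs. (5)-(9), together with the output fusion described next, here is a minimal single-head PyTorch sketch of a Graph-Transformer layer. The class and attribute names, the single-head simplification, and the exact way the attention scores are turned into a normalized adjacency with self-loops are our assumptions for the sketch, not the authors' code.

```python
import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    def __init__(self, d_k: int, d_m: int):
        super().__init__()
        self.q = nn.Linear(d_k, d_m)
        self.k = nn.Linear(d_k, d_m)
        self.v1 = nn.Linear(d_k, d_m)                 # values for the Transformer branch
        self.v2 = nn.Linear(d_k, d_m)                 # node embeddings for the GCN branch
        self.w_gcn = nn.Linear(d_m, d_m, bias=False)  # GCN weight W^(l)
        self.out1 = nn.Linear(d_m, d_k)               # W_o1
        self.out2 = nn.Linear(d_m, d_k)               # W_o2
        self.fuse = nn.Linear(2 * d_k, d_k)           # W_o3

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (k+1, d_k) pruned patch tokens
        q, k, v1, v2 = self.q(x), self.k(x), self.v1(x), self.v2(x)
        scores = q @ k.t() / (q.shape[-1] ** 0.5)              # Eq. (6), single head
        # Transformer branch: standard attention output, Eq. (7)
        t_out = self.out1(torch.softmax(scores, dim=-1) @ v1)
        # GCN branch: softmaxed scores as a soft adjacency with self-loops
        adj = torch.softmax(scores, dim=-1) + torch.eye(x.shape[0], device=x.device)
        deg_inv_sqrt = adj.sum(dim=-1).pow(-0.5)
        adj_norm = deg_inv_sqrt.unsqueeze(-1) * adj * deg_inv_sqrt.unsqueeze(0)  # D^-1/2 A D^-1/2
        g_out = self.out2(torch.relu(adj_norm @ self.w_gcn(v2)))                 # Eqs. (8)-(9)
        # Fuse local (GCN) and global (attention) features, Eq. (10)
        return self.fuse(torch.cat([t_out, g_out], dim=-1))
```

With multiple heads, the projections would be split into h chunks and the per-head outputs of Eqs. (7) and (9) would be concatenated before the output projections.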
Thus, the generated graph representation contains the local information and short-range structure that are ignored in the original Transformer.\nFinally, the Transformer branch output $V_1^{\prime}$ and the GCN branch output $V_2^{\prime}$ are concatenated together and then fed into a linear projection to fuse the local and global information:\n$X_{patch}^{\prime} = \mathrm{Concat}(V_1^{\prime}, V_2^{\prime}) W_{o3}$ (10)\nwhere $X_{patch}^{\prime} \in \mathbb{R}^{(k+1) \times d_k}$ and $W_{o3} \in \mathbb{R}^{2d_k \times d_k}$ is the weight matrix of the linear projection." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "B. Multi-scale Feature Fusion Module", "publication_ref": [], "table_ref": [], "text": "As mentioned before, patches at different resolutions present different diagnostic information ranging from the cellular scale (e.g., nuclei and the micro-environment) to the tissue scale (e.g., vessels and glands), which leads to an intrinsic semantic gap. However, existing multi-scale methods have not paid enough attention to this issue. To this end, we propose a novel MFFM, which alleviates the semantic gap by using the class token as an agent to exchange information between the two resolutions.\nAs shown in Fig. 1(b), MEGT contains K MFFMs, each of which consists of two Transformer encoders and an efficient cross-attention, and the value of K is set to 2 in this work. Compared with CNNs, the vision Transformer adds a learnable class token to summarize all patch tokens. Since the class token uniformly converts different resolution features into a discriminative representation for classification, we use the class tokens to effectively fuse features of different scales via a Cross-Attention (CA). Fig. 1(c) shows the basic idea of CA, which uses the class token of one branch to exchange information with the patch tokens of the other branch. Since the class token has already learned the global information of all patch tokens in its own branch, interacting with the patch tokens of the other branch helps it learn more information at a different scale. After the CA, the class token passes the learned information back to its patch tokens in the later Transformer encoder, thereby enriching the representation of the patch tokens.\nFor the low-resolution branch, the class token $X_{cls}^{low}$ is used as the query and $X^{\prime} = [X_{cls}^{low} \| X_{patch}^{high}]$ is used as the key and value. Since $X_{cls}^{low} \in \mathbb{R}^{1 \times d}$, the computational complexity of CA is $O(n+1)$ instead of the $O(n^2)$ of self-attention, where $n$ is the number of patch tokens in the high-resolution branch. The output $O^{low}$ of the CA operation can be expressed as:\n$y_{cls}^{low} = X_{cls}^{low} + \mathrm{CA}(X_{cls}^{low}, X^{\prime}), \quad O^{low} = [y_{cls}^{low} \| X_{patch}^{low}]$ (12)\nThe high-resolution branch follows the same procedure, but $X_{cls}^{high}$ is used as the query and $[X_{cls}^{high} \| X_{patch}^{low}]$ is used as the key and value." }, { "figure_ref": [ "fig_0" ], "heading": "C. Network Architecture and Training Strategy", "publication_ref": [], "table_ref": [], "text": "EGT and MFFM are the basic components of the proposed MEGT. As shown in Fig. 1(b), MEGT contains two separate branches, and an EGT is used in each branch to provide a superior feature representation for the MFFM. Then, the MFFM employs the class tokens to fuse the multi-scale features multiple times via a cross-attention layer.
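A minimal sketch of the class-token cross-attention used in the MFFM (low-resolution branch, Eq. (12)) might look as follows; it is single-head, uses plain linear projections, and all names are illustrative rather than taken from the authors' implementation.

```python
import torch
import torch.nn as nn

class ClassTokenCrossAttention(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.q = nn.Linear(d, d)
        self.k = nn.Linear(d, d)
        self.v = nn.Linear(d, d)

    def forward(self, cls_low: torch.Tensor, patch_high: torch.Tensor) -> torch.Tensor:
        # cls_low: (1, d) class token of the low-resolution branch (the query)
        # patch_high: (n, d) patch tokens of the high-resolution branch
        x = torch.cat([cls_low, patch_high], dim=0)                      # X' = [cls_low ; patch_high]
        q, k, v = self.q(cls_low), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.t() / (q.shape[-1] ** 0.5), dim=-1)   # (1, n+1): O(n+1) cost
        y_cls = cls_low + attn @ v                                       # residual class-token update, Eq. (12)
        return y_cls
```

The updated class token is then re-attached to its own patch tokens and passed through the next Transformer encoder, and the high-resolution branch mirrors the procedure with the roles of the two branches swapped.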
Finally, the dual-resolution class tokens are concatenated to produce the slide-level representation $Z$ for WSI classification, which can be defined by:\n$Z = \mathrm{concat}(X_{cls}^{low}, X_{cls}^{high}), \quad \hat{Y} = \mathrm{softmax}(\mathrm{MLP}(Z))$ (13)\nFor model training, the loss function $\mathcal{L}$ is defined as the cross entropy between the bag class labels $Y$ and the bag class predictions $\hat{Y}$, which can be expressed as:\n$\mathcal{L} = -\frac{1}{M} \sum_{i=1}^{M} \sum_{j=1}^{C} Y_{ij} \log(\hat{Y}_{ij})$ (14)\nwhere $M$ is the number of patients and $C$ is the number of classes. The gradient descent algorithm is adopted to optimize the whole model." }, { "figure_ref": [], "heading": "IV. EXPERIMENTS AND RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Datasets", "publication_ref": [ "b38" ], "table_ref": [], "text": "The proposed MEGT was evaluated on two commonly used WSI datasets, namely the Cancer Genome Atlas Renal Cell Carcinoma (TCGA-RCC) dataset (https://portal.gdc.cancer.gov) and the CAMELYON16 dataset [39].\nTCGA-RCC is a WSI dataset for renal cell carcinoma classification, and it contains three categories: Kidney Chromophobe Renal Cell Carcinoma (KICH), Kidney Renal Clear Cell Carcinoma (KIRC), and Kidney Renal Papillary Cell Carcinoma (KIRP). A total of 914 slides were collected from 876 cases, including 111 KICH slides from 99 cases, 519 KIRC slides from 513 cases, and 284 KIRP slides from 264 cases. After WSI pre-processing, the mean numbers of patches extracted per slide at 5× and 20× magnification were 4263 and 14627, respectively.\nCAMELYON16 is a public dataset for metastasis detection in breast cancer, including 270 training and 129 test slides. After WSI pre-processing, an average of about 918 and 3506 patches were selected from each slide at 5× and 20× magnifications, respectively." }, { "figure_ref": [], "heading": "B. Experiment Setup", "publication_ref": [], "table_ref": [], "text": "In our experiments, all algorithms were trained on the 270 official training slides and tested on the 130 official test slides of the CAMELYON16 dataset. The training slides were further randomly divided into training and validation sets with a ratio of 9:1, and the CAMELYON16 results were obtained on the official testing set. For the TCGA-RCC dataset, we conducted a 5-fold cross-validation on the 936 slides to evaluate the algorithms. The widely used accuracy, recall, F1-score (F1), and area under the curve (AUC) were used as evaluation indices to compare the classification performance of the different algorithms. The results on the TCGA-RCC dataset are presented in the format of mean ± SD (standard deviation). Since the classification algorithms consistently achieved better results on 20× images than on 5× ones, we only report the single-scale experimental results at the 20× scale for both datasets." }, { "figure_ref": [], "heading": "C. Implementation Details", "publication_ref": [ "b33" ], "table_ref": [], "text": "In WSI pre-processing, each WSI was divided into non-overlapping 299×299 patches at both 20× and 5× magnifications, and a threshold was set to filter out background patches. After patching, we used the pre-trained TransPath [34] model, a vision Transformer pre-trained on histopathology images, to extract a 768-dimensional feature vector from each 299×299 patch.
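To show how the extracted features are organized into bags, here is a minimal sketch. The encoder argument is a stand-in for the pre-trained TransPath model (whose loading and exact interface are not shown here), and all names are illustrative.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def extract_bag_features(patches: torch.Tensor, encoder: nn.Module,
                         batch_size: int = 256) -> torch.Tensor:
    """patches: (N, 3, 299, 299) non-background patches of one slide at one magnification.
    Returns a (N, 768) bag of patch embeddings used as the Transformer tokens."""
    encoder.eval()
    chunks = []
    for i in range(0, patches.shape[0], batch_size):
        chunks.append(encoder(patches[i:i + batch_size]))   # (b, 768) per mini-batch
    return torch.cat(chunks, dim=0)
```

Each slide thus yields two bags, one per magnification, which are the inputs to the two EGT branches.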
Thereafter, for the proposed MEGT, the Adam optimizer was used in the training stage with a learning rate of 1e-4 with a weight decay of 1e-5, to update the models. The size of the mini-batch was set as 1 (bag). The MEGT and other models were trained for 150 epochs with a cross-entropy loss function, and they would early stop if the loss would not decrease in the past 30 epochs. All models were implemented by Python 3.7 with PyTorch toolkit 1.10.0 on a platform equipped with one NVIDIA GeForce RTX 3090 GPU." }, { "figure_ref": [], "heading": "D. Comparison with State-of-the-art Methods", "publication_ref": [ "b5", "b14", "b18", "b39", "b22", "b40", "b6", "b13", "b14", "b12" ], "table_ref": [], "text": "We compared the proposed MEGT with the following SOTA MIL algorithms:\n1) The conventional MIL with the pooling operators, including Mean-pooling and Max-pooling.\n2) The classic attention-based pooling operator ABMIL [6] and its variant, non-local attention pooling DSMIL [15].\n3) The Transformer-based MIL, TransMIL [19]. 4) The Cluster-based MIL, Re-Mix [40] 5) The Graph-based MIL, GCN-MIL [23], and GTP [41]. 6) The multi-resolution MIL approaches, including MS-Max [7], MS-ABMIL [14], DSMIL-LC [15] and H 2 -MIL [13].\nTable Ⅰ shows the overall classification results of the comparison experiment on the TCGA-RCC and CAMELYON16 datasets. Generally, the multi-scale MIL algorithms achieve better results than the single-scale MIL ones, which indicates that the multi-scale information of WSIs is important for cancer diagnosis. Moreover, the proposed MEGT achieves the best mean accuracy of 96.91±1.24%, recall of 97.65±0.86%, and F1-score of 96.26±1.19% on the TCGA-RCC dataset. Meanwhile, compared to other algorithms, it improves at least 1.47%, 1.54%, and 1.37% on the corresponding indices, respectively. Similarly, MEGT outperforms all the compared algorithms with the best accuracy of 96.89%, F1-score of 95.74%, and AUC of 97.30% on the CAMELYON16 dataset. As a SOTA multi-resolution MIL algorithm, H 2 -MIL achieves the second-best performance due to a newly proposed GCN algorithm in it, which can learn hierarchical representation from a heterogeneous graph. Nevertheless, our MEGT still improves by 0.77%, 1.00%, and 0.60% on classification accuracy, recall and F1-score, respectively, over the H 2 -MIL.\nOn the other hand, the proposed EGT outperforms all the other single-scale MIL algorithms on all indices for the singlescale experiment on the single 20× scale images. On the TCGA-RCC dataset, EGT achieves the best classification accuracy of 96.91%, recall of 97.65%, and F1-score of 96.26%, respectively. Meanwhile, compared to other algorithms, it improves at least 1.17%, 1.54% and 1.37% on the corresponding indices, respectively. EGT also gets a similar trend on the CAMELYON16 dataset with the best classification performance of 96.12%, 94.85%, and 96.34% on the accuracy, F1-score and AUC, respectively. Compared to other algorithms, it improves at least 0.77%, 1.45%, and 0.55%, on the corresponding indices respectively." }, { "figure_ref": [], "heading": "E. Ablation Study", "publication_ref": [], "table_ref": [], "text": "To further evaluate the effectiveness of MEGT, we conducted a series of ablation experiments to delineate the contributions of two major components in the proposed MEGT: the EGT and the MFFM." 
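To make the training configuration reported in the implementation details concrete, a minimal sketch of the optimization loop might look as follows (Adam with learning rate 1e-4 and weight decay 1e-5, one bag per step, cross-entropy loss, up to 150 epochs with early stopping after 30 epochs without improvement). The MEGT model and the bag-level data loader are assumed to exist, and the names are illustrative.

```python
import torch
import torch.nn as nn

def train(model, loader, device, epochs: int = 150, patience: int = 30):
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
    criterion = nn.CrossEntropyLoss()
    best_loss, stale = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        running = 0.0
        for feats_low, feats_high, label in loader:      # one bag (slide) per step
            opt.zero_grad()
            logits = model(feats_low.to(device), feats_high.to(device))
            loss = criterion(logits, label.to(device))
            loss.backward()
            opt.step()
            running += loss.item()
        running /= max(len(loader), 1)
        if running < best_loss:                          # early stopping on the loss
            best_loss, stale = running, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return model
```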
}, { "figure_ref": [ "fig_2" ], "heading": "1) Effects of Efficient Graph-Transformer", "publication_ref": [], "table_ref": [], "text": "To evaluate the effectiveness of the proposed token pruning module (TPM) and Graph-Transformer Layer (GTL) in EGT, we compared the proposed EGT with its three variants: 1) EGT-m: This variant removed both the TPM and GTL, which was then equivalent to TransMIL. 2) EGT-TPM: This variant only maintained the TPM, but removed the GTL. 3) EGT-GTL: This variant only maintained the GTL, but removed the TPM. 4) EGT-KNN: This variant used the same structure as the EGT, but used the KNN algorithm to perform the subgraph construction. Fig. 3 shows the results of different variants on both the TCGA-RCC and CAMELYON16 datasets. It can be seen that both EGT-TPM and EGT-GTL outperform EGT-m, suggesting the effectiveness of TPM and GTL in EGT. Moreover, TPM can effectively reduce the number of redundant tokens without additional parameters, and GTL can learn more important local-global features with lower computational complexity. Therefore, by integrating TPM and GTL, EGT can efficiently learn both the spatial-related information of WSI and the correlation between patch tokens for improved performance. In addition, although the EGT-KNN achieves similar performance to our EGT, it needs more time to construct the adjacency matrix, resulting in a significant decrease in the training speed of the network. " }, { "figure_ref": [ "fig_3" ], "heading": "2) Effects of Multi-scale Feature Fusion Module", "publication_ref": [], "table_ref": [], "text": "We further compared our cross-attention fusion strategy to several other feasible multi-scale fusion methods, including:\n1) Concatenation: It removed the cross-attention in the MFFM, and then only concatenated the class tokens of the two branches after MFFM. 2) All-attention: It concatenated all tokens from different branches together and then fused them via the MSA. 3) Class-Token: It simply averaged the class tokens on the two branches, so that the information was passed back to patch tokens through the later transformer encoder. Fig. 4 shows the classification results of four different token fusion strategies on the TCGA-RCC and CAMELYON16 datasets. The specially designed cross-attention outperforms all the other strategies on all indices, which indicates it can effectively fuse multi-scale information of WSIs with superior performance. It is worth noting that although the All-attention mechanism uses additional self-attention to learn information between two branches, it fails to achieve better performance compared to the simple Class-Token. The experimental results demonstrate that class tokens can avoid the semantic gap in different resolution patches, resulting in the performance improvement of the MFFM." }, { "figure_ref": [ "fig_4" ], "heading": "F. Visualization of Attention Weights", "publication_ref": [], "table_ref": [], "text": "We further used the CAMELYON16 test set with pixel-level annotations to evaluate the ability of our MEGT to locate the positive instances. For the global attention heatmaps in the second column of Fig. 5, the attention weights are normalized between 0 to 1 (i.e., blue to red) in each cross-attention map. Thus, the red regions in the global attention heatmaps represent the highest contribution instances for classification in each bag. It can be seen that the hot regions predicted by our model tend to appear in the annotated ROIs on both WSIs. 
Moreover, almost all the high-attention patches contain high-density irregular cells, which proves that our framework can effectively focus on the most discriminative patches via the cross-attention module to implement a more accurate WSIs classification. " }, { "figure_ref": [], "heading": "G. Parameter Sensitivity Analysis", "publication_ref": [], "table_ref": [], "text": "A parameter sensitivity analysis was also conducted for the proposed MEGT. Two architecture parameters in MEGT will affect the classification performance, i.e., the number of Transformer layers L in MFFM and the number of MFFM K. We then changed both parameters to investigate their impact on the WSI classification task. The five models (A-E) in Table Ⅲ represent different combinations of architecture parameters L and K.\nTable Ⅲ shows the classification results of different numbers of Transformer layers on the low-branch and high-branch. It can be found that both models A and C significantly increase parameters but without any improvement in accuracy compared to the original MEGT, because more Transformer layers lead to a larger number of parameters, which may suffer from the overfitting problem. It is worth noting that the performance of MEGT will be decreased by reducing the depth of the highbranch in model B, which indicates that the high-branch plays the main role in learning the features of WSIs, while the lowbranch only provides additional information. The number of MFFM K is an important parameter in our MEGT, which controls the fusion frequency of the two branches. Table Ⅲ shows the classification results with different numbers of MFFMs on the TCGA-RCC dataset and CAMELYON16 dataset. With MEGT as the baseline, the accuracies of the models D and E on both datasets are much degraded by using only one MFFM, because the class token cannot pass the learned information to its patch tokens. In addition, too much branch fusion does not improve performance, but introduces more parameters. This is because the cross-attention is a linear operation without any nonlinearity function, which results in overfitting of the model due to overparameterization. Considering the performance and parameters of the model, we finally select 1, 2 and 2 as the values of transformer layers 𝐿 𝑙𝑜𝑤 , 𝐿 ℎ𝑖𝑔ℎ and MFFM K." }, { "figure_ref": [], "heading": "V. DISCUSSION", "publication_ref": [], "table_ref": [], "text": "In this work, a novel MEGT network is proposed for cancer diagnosis on gigapixel WSIs. MEGT designs an effective MFFM to aggregate different resolution patches to produce stronger WSI-level representation, which is easy to implement without complicated patch processing. Experimental results on both TCGA-RCC and CAMELYON16 datasets indicate the effectiveness of our proposed MEGT.\nPrevious works on WSIs generally focused on singleresolution methods, which fail to capture multi-scale information of WSIs. Inspired by the diagnosis process of pathologists, some researchers have extended previous MIL algorithms to learn multi-scale representations from the WSI pyramid. From the aspect of multi-scale feature fusion, existing schemes are restricted to simple concatenation or multi-scale feature pyramid construction, which are not paid enough attention to the intrinsic semantic gap among different resolution patches. To this end, our proposed MEGT introduces a class token for each resolution as an agent to fuse multi-scale information of WSIs. 
Since the class token uniformly converts different resolution features into slide-level representations for classification, the semantic gap in different resolution patches is alleviated. In addition, our framework avoids the complicated patch processing, such as building feature pyramids and heterogeneous graphs in the WSI. Therefore, the proposed MEGT is a simple yet effective framework for learning the multi-scale information of WSIs for a CAD model.\nThe spatial information of WSIs is also essential for cancer diagnosis. Different from previous graph-based MIL or Transformer-based MIL, the proposed EGT aims to learn the spatial features from graph data to enhance the performance of the Transformer. Here, each node in the graph data corresponds to a patch in the original WSI, and the edges are computed by the embedded features from these patches. Thus, the patchbased graph actually represents the spatial relationships among different regions in a WSI. In addition, EGT utilizes the attention scores of Transformer encoder for token pruning, which greatly reduces the computational complexity of graph construction and graph convolution. Therefore, EGT can efficiently learn the spatial information and the correlation between patches simultaneously to produce a superior feature representation.\nAlthough the proposed MEGT achieves superior performance over the compared SOTA algorithms, it still has some room for improvement. For example, we will focus on the data-driven pretext task design in self-supervised learning to learn more effective multi-scale feature representation, which can alleviate the issue of a small sample size in histopathological images. On the other hand, MEGT is currently only suitable for dual resolution on WSIs due to the multi-scale feature fusion strategy of MFFM. Future studies can explore other efficient strategies, such as feature pyramids and hierarchical networks, to combine more resolution for feature fusion." }, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we proposed Multi-scale Efficient Graph-Transformer (MEGT), a dual-branch Transformer for aggregating image patches of different resolutions, to promote the accuracy of cancer diagnosis on WSIs. Particularly, an effective MFFM was developed to learn the multi-scale features and reduce the semantic gap in different resolution patches. Meanwhile, the EGT was specifically designed to improve the " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "This work is supported by This work is supported by the National Key R&D Program of China (2021YFA1003004), National Natural Science Foundation of China (62271298, 81871428) and 111 Project (D20031)." } ]
[ { "authors": "K Bera; K Schalper; D Rimm", "journal": "Nat. Rev. Clin. Oncol", "ref_id": "b0", "title": "Artificial intelligence in digital pathology -new tools for diagnosis and precision oncology", "year": "2017" }, { "authors": "M Zarella; D Bowman; F Aeffner", "journal": "Arch. Pathol. Lab Med", "ref_id": "b1", "title": "A Practical Guide to Whole Slide Imaging: A White Paper From the Digital Pathology Association", "year": "2018" }, { "authors": "J Shi; X Zheng; J Wu", "journal": "Pattern Recognit", "ref_id": "b2", "title": "Quaternion Grassmann average network for learning representation of histopathological image", "year": "2019" }, { "authors": "X Zhu; J Yao; F Zhu; J Huang", "journal": "", "ref_id": "b3", "title": "WSISA: Making Survival Prediction from Whole Slide Histopathological Images", "year": "2017" }, { "authors": "D Di; S Li; J Zhang; Y Gao", "journal": "", "ref_id": "b4", "title": "Ranking-Based Survival Prediction on Histopathological Whole-Slide Images", "year": "2020" }, { "authors": "M Ilse; J M Tomczak; M Welling", "journal": "", "ref_id": "b5", "title": "Attention-based Deep Multiple Instance Learning", "year": "2018" }, { "authors": "G Campanella; M G Hanna; L Geneslaw; A Miraflor", "journal": "Nat Med", "ref_id": "b6", "title": "Clinicalgrade computational pathology using weakly supervised deep learning on whole slide images", "year": "2019" }, { "authors": "W Shao; T Wang; Z Huang", "journal": "IEEE Trans. Med. Imaging", "ref_id": "b7", "title": "Weakly Supervised Deep Ordinal Cox Model for Survival Prediction from Whole-Slide Pathological Images", "year": "2021" }, { "authors": "D Tellez; G Litjens; J A Van Der Laak; F Ciompi", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b8", "title": "Neural Image Compression for Gigapixel Histopathology Image Analysis", "year": "2018" }, { "authors": "H Zhang; Y Meng; Y Zhao", "journal": "", "ref_id": "b9", "title": "DTFD-MIL: Double-Tier Feature Distillation Multiple Instance Learning for Histopathology Whole Slide Image Classification", "year": "2022" }, { "authors": "J Yao; X Zhu; J Jonnagaddala", "journal": "Med. 
Image Anal", "ref_id": "b10", "title": "Whole slide images based cancer survival prediction using attention guided deep multiple instance learning networks", "year": "2020" }, { "authors": "M Y Lu; D F K Williamson; T Y Chen", "journal": "Nat Biomed Eng", "ref_id": "b11", "title": "Data-efficient and weakly supervised computational pathology on whole-slide", "year": "2021" }, { "authors": "W Hou; L Yu; C Lin", "journal": "", "ref_id": "b12", "title": "H^2-MIL: Exploring Hierarchical Representation with Heterogeneous Multiple Instance Learning for Whole Slide Image Analysis", "year": "2022" }, { "authors": "N Hashimoto; D Fukushima; R Koga", "journal": "", "ref_id": "b13", "title": "Multi-scale adversarial Multiple-instance CNN for Cancer Subtype Classification with Unannotated Histopathological Images", "year": "2020" }, { "authors": "B Li; Y Li; K W Eliceiri", "journal": "", "ref_id": "b14", "title": "Dual-stream Multiple Instance Learning Network for Whole Slide Image Classification with Self-supervised Contrastive Learning", "year": "2021" }, { "authors": "R J Chen; C Chen; Y Li", "journal": "", "ref_id": "b15", "title": "Scaling Vision Transformers to Gigapixel Images via Hierarchical Self-Supervised Learning", "year": "2022" }, { "authors": "Z Huang; H Chai; R Wang", "journal": "", "ref_id": "b16", "title": "Integration of Patch Features Through Self-supervised Learning and Transformer for Survival Analysis on Whole Slide Images", "year": "2021" }, { "authors": "H Li; F Yang; Y Zhao", "journal": "", "ref_id": "b17", "title": "DT-MIL: Deformable Transformer for Multi-instance Learning on Histopathological Image", "year": "2021" }, { "authors": "Z Shao; H Bian; Y Chen", "journal": "", "ref_id": "b18", "title": "TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification", "year": "2021" }, { "authors": "W Wang; E Xie; X Li", "journal": "", "ref_id": "b19", "title": "Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions", "year": "2021" }, { "authors": "Z Zhang; H Zhang; L Zhao", "journal": "", "ref_id": "b20", "title": "Nested Hierarchical Transformer: Towards Accurate, Data-Efficient and Interpretable Visual Understanding", "year": "2022" }, { "authors": "P Zhang; X Dai; J Yang", "journal": "", "ref_id": "b21", "title": "Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding", "year": "2021" }, { "authors": "R Li; J Yao; X Zhu", "journal": "", "ref_id": "b22", "title": "Graph CNN for Survival Analysis on Whole Slide Pathological Images", "year": "2018" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov", "journal": "", "ref_id": "b23", "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "year": "2020" }, { "authors": "M A Islam; S Jia; N D B Bruce", "journal": "", "ref_id": "b24", "title": "How Much Position Information Do Convolutional Neural Networks Encode?", "year": "2020" }, { "authors": "P Pati; G Jaume; A Foncubierta", "journal": "Med. Image Anal", "ref_id": "b25", "title": "Hierarchical graph representations in digital pathology", "year": "2021" }, { "authors": "Y Zhou; S Graham; N A Koohbanani", "journal": "", "ref_id": "b26", "title": "CGC-Net: Cell Graph Convolutional Network for Grading of Colorectal Cancer Histology Images", "year": "2019" }, { "authors": "N A Koohbanani; B Unnikrishnan; S A Khurram", "journal": "IEEE Trans. Med. 
Imaging", "ref_id": "b27", "title": "Self-Path: Self-Supervision for Classification of Pathology Images With Limited Annotations", "year": "2020" }, { "authors": "R J Chen; M Y Lu; W Weng", "journal": "", "ref_id": "b28", "title": "Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images", "year": "2021" }, { "authors": "P Liu; B Fu; F Ye", "journal": "", "ref_id": "b29", "title": "DSCA: A Dual-Stream Network with Cross-Attention on Whole-Slide Image Pyramids for Cancer Prognosis", "year": "2022" }, { "authors": "R J Chen; M Y Lu; M Shaban", "journal": "", "ref_id": "b30", "title": "Whole Slide Images are 2D Point Clouds: Context-Aware Survival Prediction Using Patch-Based Graph Convolutional Networks", "year": "2021" }, { "authors": "Z Liu; Y Lin; Y Cao", "journal": "", "ref_id": "b31", "title": "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows", "year": "2021" }, { "authors": "H Fan; B Xiong; K Mangalam", "journal": "", "ref_id": "b32", "title": "Multiscale Vision Transformers", "year": "2021" }, { "authors": "X Wang; S Yang; J Zhang", "journal": "Med. Image Anal", "ref_id": "b33", "title": "Transformer-based unsupervised contrastive learning for histopathological image classification", "year": "2022" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit", "journal": "", "ref_id": "b34", "title": "Attention Is All You Need", "year": "2017" }, { "authors": "Y Xiong; Z Zeng; R Chakraborty", "journal": "", "ref_id": "b35", "title": "Nystr\\\"omformer: A Nystr\\\"om-Based Algorithm for Approximating Self-Attention", "year": "2021" }, { "authors": "Q Wang; B Li; T Xiao", "journal": "", "ref_id": "b36", "title": "Learning Deep Transformer Models for Machine Translation", "year": "2019" }, { "authors": "T Kipf; M Welling", "journal": "", "ref_id": "b37", "title": "Semi-Supervised Classification with Graph Convolutional Networks", "year": "2016" }, { "authors": "B E Bejnordi; M Veta; P J Van Diest", "journal": "JAMA", "ref_id": "b38", "title": "Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women with Breast Cancer", "year": "2017" }, { "authors": "J Yang; H Chen; Y Zhao", "journal": "", "ref_id": "b39", "title": "ReMix: A General and Efficient Framework for Multiple Instance Learning Based Whole Slide Image Classification", "year": "2022" }, { "authors": "Y Zheng; R H Gindra; E J Green", "journal": "IEEE Trans. Med. Imaging", "ref_id": "b40", "title": "A graph-transformer for whole slide image classification", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 320.33, 56.48, 248.41, 35.58 ], "formula_id": "formula_0", "formula_text": "𝑄 = 𝑋𝑊 𝑞 , 𝐾 = 𝑋𝑊 𝑘 𝑁𝐴 = 𝑠𝑜𝑓𝑡𝑚𝑎𝑥 ( 𝑄𝐾 ̃𝑇 √𝑑 ) (𝑠𝑜𝑓𝑡𝑚𝑎𝑥 ( 𝑄 ̃𝐾 ̃𝑇 √𝑑 )) † 𝑠𝑜𝑓𝑡𝑚𝑎𝑥 ( 𝑄 ̃𝐾𝑇 √𝑑 )(1)" }, { "formula_coordinates": [ 4, 367.27, 184.33, 194.2, 28.56 ], "formula_id": "formula_1", "formula_text": "𝑻 𝑖 ′ = 𝑀𝑆𝐴(𝐿𝑁(𝑇 𝑖-1 )) + 𝑻 𝑖-1 𝑻 𝑖 = 𝑀𝐿𝑃(𝐿𝑁(𝑇 𝑖 ′ )) + 𝑻 𝑖 ′ (2" }, { "formula_coordinates": [ 4, 561.47, 194.58, 3.91, 8.96 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 4, 406.87, 453.3, 158.51, 13.08 ], "formula_id": "formula_3", "formula_text": "𝑎 ̅ = ∑ 𝑎 ℎ /𝐻 𝐻 ℎ=1(3)" }, { "formula_coordinates": [ 4, 389.59, 527.1, 171.92, 13.32 ], "formula_id": "formula_4", "formula_text": "ℎ 𝑓𝑢𝑠𝑖𝑜𝑛 = ∑ ℎ 𝑖 𝑖=𝑛-𝑘 𝑖=1 𝑎 ̅ 𝑖 (4" }, { "formula_coordinates": [ 4, 561.51, 529.57, 3.87, 9.05 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 5, 105.5, 369.58, 139.23, 59.02 ], "formula_id": "formula_6", "formula_text": "𝑄 = 𝐿𝑖𝑛𝑒𝑟(𝑋 𝑝𝑎𝑡𝑐ℎ ) = 𝑋 𝑝𝑎𝑡𝑐ℎ 𝑊 𝑄 , 𝐾 = 𝐿𝑖𝑛𝑒𝑟(𝑋 𝑝𝑎𝑡𝑐ℎ ) = 𝑋 𝑝𝑎𝑡𝑐ℎ 𝑊 𝐾 , 𝑉 1 = 𝐿𝑖𝑛𝑒𝑟(𝑋 𝑝𝑎𝑡𝑐ℎ ) = 𝑋 𝑝𝑎𝑡𝑐ℎ 𝑊 𝑉 1 , 𝑉 2 = 𝐿𝑖𝑛𝑒𝑟(𝑋 𝑝𝑎𝑡𝑐ℎ ) = 𝑋 𝑝𝑎𝑡𝑐ℎ 𝑊 𝑉 2 ." }, { "formula_coordinates": [ 5, 291.2, 393.93, 7.73, 8.96 ], "formula_id": "formula_7", "formula_text": ")5" }, { "formula_coordinates": [ 5, 124.22, 539.24, 174.71, 21.6 ], "formula_id": "formula_8", "formula_text": "𝐴 𝑖 = 𝑆𝑐𝑜𝑟𝑒(𝑄 𝑖 , 𝐾 𝑖 ) = 𝑄 𝑖 𝐾 𝑖 𝑇 √𝑑 𝑚 /ℎ (6)" }, { "formula_coordinates": [ 5, 46.8, 567.68, 252.02, 33.31 ], "formula_id": "formula_9", "formula_text": "𝐴 𝑖 ∈ ℝ (𝑘+1)×(𝑘+1) , 𝑖 = [1, . . . , ℎ], 𝑄 𝑖 ∈ ℝ (𝑘+1)× 𝑑 𝑚 ℎ , 𝐾 𝑖 ∈ ℝ (𝑘+1)× 𝑑 𝑚" }, { "formula_coordinates": [ 5, 116.42, 644.49, 182.49, 26.28 ], "formula_id": "formula_10", "formula_text": "𝑉 1𝑖 ′ = 𝑠𝑜𝑓𝑡𝑚𝑎𝑥(𝐴 𝑖 )𝑉 1𝑖 𝑉 1 ′ = 𝐶𝑜𝑛𝑐𝑎𝑡(𝑉 11 ′ , … 𝑉 1ℎ ′ )𝑊 𝑜1 .(7)" }, { "formula_coordinates": [ 5, 76.1, 676.55, 122.66, 17.49 ], "formula_id": "formula_11", "formula_text": "𝑉 1𝑖 ′ ∈ ℝ (𝑘+1)× 𝑑 𝑚 ℎ , 𝑉 1 ′ ∈ ℝ (𝑘+1)" }, { "formula_coordinates": [ 5, 373.63, 120.91, 191.75, 13.49 ], "formula_id": "formula_12", "formula_text": "𝑉 2𝑖 ′ = 𝜎 (𝐷 ̃-1 2 𝐴 ̃𝐷 ̃-1 2 𝑉 2𝑖 𝑊 (𝑙) )(8)" }, { "formula_coordinates": [ 5, 340.15, 142.83, 68.08, 17.49 ], "formula_id": "formula_13", "formula_text": "𝑉 2𝑖 ′ ∈ ℝ (𝑘+1)× 𝑑 𝑚" }, { "formula_coordinates": [ 5, 391.75, 201.97, 173.63, 13.08 ], "formula_id": "formula_14", "formula_text": "𝑉 2 ′ = 𝐶𝑜𝑛𝑐𝑎𝑡(𝑉 21 ′ , … 𝑉 2ℎ ′ )𝑊 𝑜2(9)" }, { "formula_coordinates": [ 5, 387.19, 360.88, 178.19, 13.08 ], "formula_id": "formula_15", "formula_text": "𝑋 𝑝𝑎𝑡𝑐ℎ ′ = 𝐶𝑜𝑛𝑐𝑎𝑡(𝑉 1 ′ , 𝑉 2 ′ )𝑊 𝑜3(10)" }, { "formula_coordinates": [ 6, 46.8, 251.1, 82.65, 13.68 ], "formula_id": "formula_17", "formula_text": "𝑿 ′ = [𝑿 𝑐𝑙𝑠 ℎ𝑖𝑔ℎ ‖𝑿 𝑝𝑎𝑡𝑐ℎ 𝑙𝑜𝑤 ]" }, { "formula_coordinates": [ 6, 120.26, 283.6, 178.67, 26.67 ], "formula_id": "formula_18", "formula_text": "𝑦 𝑐𝑙𝑠 𝑙𝑜𝑤 = 𝑿 𝑐𝑙𝑠 𝑙𝑜𝑤 + 𝐶𝐴(𝑋 𝑐𝑙𝑠 𝑙𝑜𝑤 , 𝑋 ′ ) 𝑶 𝑙𝑜𝑤 = [𝑦 𝑐𝑙𝑠 𝑙𝑜𝑤 ‖𝑿 𝑝𝑎𝑡𝑐ℎ 𝑙𝑜𝑤 ](12)" }, { "formula_coordinates": [ 6, 96.98, 512.82, 197.75, 17.88 ], "formula_id": "formula_19", "formula_text": "ℒ = - 1 𝑀 ∑ ∑ 𝑌 𝑖𝑗 log ( 𝐶 𝑗=1 𝑀 𝑖=1 𝑌 ̂𝑖𝑗 ) (14" }, { "formula_coordinates": [ 6, 294.74, 517.31, 4.19, 8.96 ], "formula_id": "formula_20", "formula_text": ")" } ]
Multi-scale Efficient Graph-Transformer for Whole Slide Image Classification
The multi-scale information in whole slide images (WSIs) is essential for cancer diagnosis. Although existing multi-scale vision Transformers have shown their effectiveness for learning multi-scale image representations, they still cannot work well on gigapixel WSIs due to the extremely large image sizes. To this end, we propose a novel Multi-scale Efficient Graph-Transformer (MEGT) framework for WSI classification. The key idea of MEGT is to adopt two independent Efficient Graph-based Transformer (EGT) branches to process the low-resolution and high-resolution patch embeddings (i.e., tokens in a Transformer) of WSIs, respectively, and then fuse these tokens via a multi-scale feature fusion module (MFFM). Specifically, we design an EGT to efficiently learn the local-global information of patch tokens, which integrates the graph representation into the Transformer to capture the spatial-related information of WSIs. Meanwhile, we propose a novel MFFM to alleviate the semantic gap among different resolution patches during feature fusion, which creates a non-patch token for each branch as an agent to exchange information with the other branch by cross-attention. In addition, to expedite network training, a novel token pruning module is developed in the EGT to reduce the redundant tokens. Extensive experiments on the TCGA-RCC and CAMELYON16 datasets demonstrate the effectiveness of the proposed MEGT.
Saisai Ding; Juncheng Li; Jun Wang; Shihui Ying; Jun Shi
[ { "figure_caption": "1 .1Overview of MEGT for WSI classification. (a) WSI processing and feature extraction. A WSI pyramid is divided into non-overlapping patches at low and high resolutions, and then their features are extracted by a pre-trained TransPath model, respectively. (b) Flowchart of MEGT. The multi-resolution patch embeddings are fed into the proposed MEGT framework, equipped with the efficient Graph-Transformer and multi-scale feature fusion module, to learn slide-level representation for WSIs classification. (c) Cross-attention operation for the low-resolution branch. The CLS token of the low-resolution branch is used as a query token to interact with the patch tokens from the high-resolution branch through crossattention, and the high-resolution branch follows the same procedure, but swaps CLS and patch tokens from another branch.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Structure of Efficient Graph-Transformer (EGT), which is composed of two Transformer encoder layers, a token pruning module, and a Graph-Transformer layer.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Structure of Graph-Transformer layer.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Classification results for evaluating different fusion strategies in MFFM on (a) TCGA-RCC dataset and (b) CAMELYON16 dataset.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Visualization of cross-attention maps on Camelyon16 testing set. Each representative slide is annotated by pathologists, who roughly highlighted the tumor tissue regions (left). A global attention map with corresponding high-attention patches to each slide is generated by computing the attention weights of cross-attention maps (right).", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "RESULTS ON THE CAMELYON16 AND TCGA DATASETS. 
WE USE THE SAME PATCH FEATURE EXTRACTOR FOR ALL METHODS (UNIT: %)", "figure_data": "MethodResolutionACCTCGA-RCC RecallF1ACCCAMELYON16 F1AUCMean-pooling20×89.67±1.0690.74±1.4688.66±1.2491.4787.6492.75Max-pooling20×92.20±0.7292.51±0.8191.01±0.9092.2588.6493.64ABMIL [6]20×93.72±1.3194.53±1.3192.53±1.8393.8090.1194.82GCN-MIL [23]20×93.28±1.1093.45±1.1591.90±1.2993.0289.8994.17DSMIL [15]20×92.90±0.8493.27±0.6991.95±1.2193.8091.3094.94DSMIL* [15]20×---86.82-89.44TransMIL [19]20×93.94±1.0994.87±0.6493.43±0.7594.5792.9395.53TransMIL* [19]20×---88.37-93.09Re-Mix [40]20×93.58±1.2594.37±0.9593.01±0.8494.5793.4896.03Re-Mix* [40]20×---95.3595.18-GPT [41]20×94.27±1.1294.75±1.4293.44±1.5895.3593.7595.79EGT (Ours)20×95.37±0.6496.04±0.4094.73±0.5596.1295.2096.34MS-Max-pooling [7]20×+ 5×91.42±0.9891.98±0.7990.68±1.1493.0290.5394.26MS-ABMIL [14]20×+ 5×94.58±0.8795.43±1.4594.08±1.3194.5792.4795.15DSMIL-LC* [15]20×+ 5×---89.92-91.65H 2 -MIL [13]20×+ 5×95.44±0.9696.11±1.0294.89±1.3496.1294.7496.70MEGT (Ours)20×+ 5×96.91±1.2497.65±0.8696.26±1.1996.8995.7497.30* DENOTES SCORES REPORTED IN THE PAPER.", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "STUDY RESULTS FOR EVALUATING THE TPM AND GTL IN EGT ON THE CAMELYON16 AND TCGA DATASETS (UNIT: %)", "figure_data": "MethodTPMGTLACCTCGA-RCCF1CAMELYON16 ACC AUCAverage Seconds (Epoch) TCGA-RCC Came.16Model SizeEGT-m××93.83±1.1192.65±1.0993.8094.7242s27s2.5 MEGT-TPM√×94.41±1.3292.88±1.2894.5795.5529s18s2.5 MEGT-GTL×√94.58±1.0393.24±1.1195.3595.6965s39s4.2 MEGT-KNN√√95.21±0.9894.53±1.2496.1296.59141s101s4.2 MEGT√√95.37±0.6494.73±0.5596.8997.3038s26s4.2 M", "figure_id": "tab_2", "figure_label": "Ⅱ", "figure_type": "table" }, { "figure_caption": "THE BLACK COLOR INDICATES CHANGES FROM MEGT. ability of the branches in MEGT for learning spatial information of WSIs. Experimental results on two public WSI datasets demonstrated the effectiveness of the proposed MEGT framework. It suggests that MEGT has the potential for WSIbased CAD in clinical practice.", "figure_data": "TABLE IIICLASSIFICATION RESULTS WITH DIFFERENT ARCHITECTUREPARAMETERS ON CAMELYON16 AND TCGA-RCC DATASETSModellLhKCame16 Acc. (%)TCGA Acc. (%)Params. (M)GCMT12296.8996.91±1.2426.79A22296.1296.21±1.1333.09B11295.3595.98±1.3820.49C14296.1296.67±1.2639.39D12194.5795.77±1.8512.61E12495.3596.79±1.4748.85", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work introduces the concept of multi-resolution information in WSIs, which inspires the citing paper to explore the use of multi-scale diagnostic information in improving diagnostic accuracy."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work presents a simple strategy of concatenating instances from different resolutions for MIL models, which the citing paper adopts in their research to address the multi-resolution information in WSIs."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work introduces the patch pyramid to preserve hierarchical information of WSIs, which the citing paper uses to spatially align patches from different resolutions in their research."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work provides a dataset of WSIs that the citing paper uses in their research to explore the use of multi-scale diagnostic information in improving diagnostic accuracy."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The cited work extends the previous MIL frameworks to the multi-resolution oriented approaches, which the citing paper builds upon in their research to address the multi-resolution information in WSIs."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work constructs the patch pyramid to preserve hierarchical information of WSIs, which the citing paper extends in their research to spatially align patches from different resolutions in their research."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work introduces the concept of hierarchical structures in MVTs, which the citing paper adopts to process the visual tokens of different scales in WSIs."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work introduces the concept of using GCN to learn spatial information in WSIs, which the citing paper adopts in their own research to improve the performance of a CAD model for cancer diagnosis."}, {"Category": "Extension or Continuation", "Citation": "[24] [25]", "Explanation": "The cited works discuss the use of position encoding in Transformer to retain positional information in fixed-length sequences, which the citing paper extends to unfixed-length sequences in WSI analysis for better representation learning."}, {"Category": "Extension or Continuation", "Citation": "[17] [19]", "Explanation": "The cited works mention the use of Transformer-based representation learning in WSI analysis, which the citing paper further extends to improve the performance of a CAD model for cancer diagnosis."}, {"Category": "Data Source", "Citation": "[26] [27]", "Explanation": "The cited works provide a graph-based data structure for WSI analysis, where the nodes represent patches and edges are constructed between adjacent patches. This data source is used in the citing paper to learn spatial information in WSIs for a CAD model."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work, DeepGraphSurv, is used as a methodological basis for the citing paper in applying the GCN to WSIs for survival prediction."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work by Li et al. 
is used to propose a context-aware GCN for survival analysis in the citing paper by using the true spatial coordinates of WSIs to connect edges between adjacent patches."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work by Li et al. is extended in the citing paper to propose a dual-stream MIL (DSMIL) for WSIs classification by using attention scores to weight the instance embeddings to learn the slide-level representation."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work by Huang et al. [17] is used as a methodological basis for the design of a new model that combines self-supervised learning and Transformer to generate feature representation for survival analysis with WSIs."}, {"Category": "Extension or Continuation", "Citation": "[19]", "Explanation": "The cited work by Huang et al. [17] is extended in the citing paper to explore a novel Graph-Transformer structure that combines the strengths of previous MIL methods to learn both spatial information and long-range dependencies in WSIs."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work by Campanella et al. provides a method of training different MIL branches for different resolutions and using max-pooling to fuse multi-resolution embeddings for WSI analysis, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work by Hashimoto et al. introduces a method of mixing patches from different resolutions into a bag and feeding it to a MIL model for WSI analysis, which the citing paper may have adopted in their research."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work by Li et al. proposes a pyramidal concatenation strategy to spatially align patches from different resolutions for WSI classification, which the citing paper may have adopted in their research."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work by Liu et al. presents a square pooling layer to align patches from two resolutions, which spatially pools high-resolution patches under the guidance of low-resolution, providing a method for WSI analysis that the citing paper may have adopted."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work by Hou et al. 
proposes a heterogeneous graph neural network for tumor typing and staging, in which the heterogeneous graph is constructed based on the feature and spatial-scaling relationship of multi-resolution patches, providing a method for WSI analysis that the citing paper may have adopted."}, {"Category": "Methodological Basis", "Citation": "[20][21] [32] [33]", "Explanation": "The cited works have inspired the design of a pyramid vision transformer and a nested hierarchical Transformer for multi-scale feature learning in images, which the citing paper adopts to develop a method for learning multi-scale feature representations in WSIs."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work, TransPath, is a pretrained model that is used in the proposed MEGT framework to extract features from the WSI pyramid patches at low and high resolutions."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work introduces the use of a Transformer encoder in the research, which the citing paper adopts to learn potential long-term dependencies between instances in the WSI processing process."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work presents the Nystrom-attention method, which the citing paper utilizes to approximate the standard self-attention in the WSI processing process to address the issue of long instance sequence."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work introduces the concept of Layer normalization, which the citing paper adopts in the i-th Transformer layer to improve the computational efficiency of the model."}, {"Category": "Methodological Basis", "Citation": "(38)", "Explanation": "The cited work, GCN, is used as a method for learning graph representations in the citing paper."}, {"Category": "Data Source", "Citation": "[39]", "Explanation": "The cited work is the source of the CAMELYON16 dataset, which the citing paper utilizes in their research for metastasis detection in breast cancer."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work, TransPath, is used as a pre-training vision Transformer to extract feature vectors from histopathology images, which forms the basis for the feature extraction process in the proposed MEGT model."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work ABMIL serves as a methodological basis for the attention-based pooling operator used in the citing paper to improve the performance of cancer diagnosis."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work DSMIL is a variant of ABMIL and is also used as a methodological basis for the non-local attention pooling operator in the citing paper to enhance the performance of cancer diagnosis."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work TransMIL is a Transformer-based MIL algorithm that serves as a methodological basis for the proposed MEGT in the citing paper to improve the performance of cancer diagnosis."}, {"Category": "Methodological Basis", "Citation": "[40]", "Explanation": "The cited work Re-Mix is a cluster-based MIL approach that is used as a methodological basis in the citing paper to enhance the performance of cancer diagnosis."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work GCN-MIL is a graph-based MIL algorithm that is used as a methodological 
basis in the citing paper to improve the performance of cancer diagnosis."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work GTP is another graph-based MIL approach that is used as a methodological basis in the citing paper to enhance the performance of cancer diagnosis."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work MS-Max is a multi-resolution MIL approach that serves as a methodological basis for the proposed MEGT in the citing paper to improve the performance of cancer diagnosis."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work MS-ABMIL is another multi-resolution MIL algorithm that is used as a methodological basis in the citing paper to enhance the performance of cancer diagnosis."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work DSMIL-LC is a variant of the multi-resolution MIL approach DSMIL and is also used as a methodological basis in the citing paper to improve the performance of cancer diagnosis."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work H2-MIL is another multi-resolution MIL approach that is used as a methodological basis in the citing paper to enhance the performance of cancer diagnosis."}]
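Several of the compared methods cited above (e.g., ABMIL [6]) rest on attention-based MIL pooling, in which patch embeddings are weighted by learned attention scores and summed into a slide-level representation. The sketch below illustrates that pooling operator only; the hidden size and the tanh-only (non-gated) scoring head are assumptions, not the exact configuration of any cited method.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Attention-based MIL pooling: score each instance, softmax over the
    bag, and return the weighted sum as the bag (slide) embedding."""
    def __init__(self, dim: int = 384, hidden: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (N, dim) patch embeddings of one slide
        a = torch.softmax(self.score(h), dim=0)  # (N, 1) attention weights
        return (a * h).sum(dim=0)                # (dim,) slide-level embedding
```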
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b35", "b38", "b1", "b8", "b19", "b20", "b15", "b9" ], "table_ref": [], "text": "Since obtaining perfect supervision is usually challenging in the real-world machine learning problems, the machine learning approaches often have to deal with inaccurate, incomplete, or inexact supervisions, collectively referred to as weak supervision (Zhou 2017). To achieve this, many researchers have devoted into the area of weakly supervised learning, such as semi-supervised learning (Zhu et al. 2009), positive-unlabeled learning (Bekker and Davis 2020), noisy label learning (Han et al. 2021), etc.\nAmong multiple scenarios of weakly supervised learning, one of the most challenging scenarios is to learn classifiers from m unlabeled (U) datasets with different class priors, i.e., the proportions of positive instances in the sets. Such a learning task is usually referred to as U m learning. This scenario usually occur when the instances can be categorized into different groups, and the probability of an instance to be positive varies across the groups, e.g., for predicting voting rates or morbidity rates. Prior studies include Scott and Zhang (2020), which ensembles the classifiers trained on all pairs of the unlabeled sets; Tsai and Lin (2020), which introduces consistency regularization for the problem. Recently, Lu et al. (2021) proposed a consistent approach for classification from multiple unlabeled sets, which is the first classifier-consistent approach for learning from m unlabeled sets (m > 2) that optimizes a classification loss.\nIn this paper, we further consider the problem of learning an AUC (area under ROC curve) optimization model from the U m data, which maximizes the pairwise ranking ability of the classifier (Hanley and McNeil 1982). The importance of this problem lie in two folds: First, we note that for certain scenarios, the ranking performance of the model is more concerned. E.g., ranking items with coarse-grind rank labels. Second, given multiple U sets with different class priors, the imbalance issue is very likely to affect the learning process. Thus, taking an imbalance-aware performance measure, i.e., AUC, is naturally appropriate for the problem.\nTo achieve this goal, we introduce U m -AUC, a novel AUC optimization approach from U m data. U m -AUC solves the problem as a multi-label AUC optimization problem, as each label of the multi-label learning problem corresponds to a pseudo binary AUC optimization sub-problem. To overcome the quadratic time complexity of the pairwise loss computation, we convert the problem into a stochastic saddle point problem and solve it through point-wise AUC optimization algorithm. Our theoretical analysis shows that U m -AUC is consistent with the optimal AUC optimization model, and provides the generalization bound. Experiments show that our approach outperforms the state-of-the-art methods and has superior robustness.\nOur main contributions are highlighted as follows:\n• To the best of our knowledge, we present the first algorithm for optimizing AUC in U m scenarios. Additionally, our algorithm is the first to address the U m problem without the need for an exact class priors. Significantly, our algorithm possesses a simple form and exhibits efficient performance. • Furthermore, we conduct a comprehensive theoretical analysis of the proposed methodology, demonstrating its validity and assessing its excess risk. 
• We perform experiments on various settings using multiple benchmark datasets, and the results demonstrate that our proposed method consistently outperforms the stateof-the-art methods and performs robustly under different imbalance settings.\nThe reminder of our paper is organized as follows. We first introduce preliminary in section 2. Then, we introduce the U m -AUC approach and conducts the theoretical analysis in section 3, while section 4 shows the experimental results." }, { "figure_ref": [], "heading": "arXiv:2305.15776v3 [cs.LG] 15 Sep 2023", "publication_ref": [], "table_ref": [], "text": "Finally, section 5 introduces the related works, and section 6 concludes the paper." }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [ "b9" ], "table_ref": [], "text": "In the fully supervised AUC optimization, we are given a dataset sampled from a specific distribution\nX L := {(x i , y i )} n i=1 i.i.d.\n∼ p(x, y) .\n(1)\nFor convenience, we refer to positive and negative data as samples from two particular distributions:\nX P :={x i } n P i=1 i.i.d.\n∼ p P (x) := p(x | y = +1) , and\nX N :={x ′ j } n N j=1 i.i.d. ∼ p N (x) := p(x | y = -1) ,\nthere we have\nX L = X P ∪ X N .\nLet f : X → R be a scoring function. It is expected that positive instances will have a higher score than negative ones. For a threshold value t, we define the true positive rate TPR f (t) = Pr(f (x) ≥ t|y = 1) and the false positive rate FPR f (t) = Pr(f (x) ≥ t|y = 0). The AUC is defined as the area under the ROC curve:\nAUC = 1 0 TPR f (FPR -1 f (t))dt.(2)\nPrevious study (Hanley and McNeil 1982) introduced that randomly drawing a positive instance and a negative instance, the AUC is equivalent to the probability of the positive instance is scored higher than the negative instance, so that the AUC of the model f can be formulated as:\nAUC = 1 - E x∼p P (x) [ E x ′ ∼p N (x) [ℓ 01 (f (x) -f (x ′ ))]] .(3)\nHere ℓ 01 (z) = I[z < 0]. Without creating ambiguity, we will denote f (x, x ′ ) as f (x) -f (x ′ ) for clarity. The maximization of the AUC is equivalent to the minimization of the following AUC risk. Since the true AUC risk measures the error rate of ranking positive instances over negative instances, we refer to the true AUC risk as the PN-AUC risk to avoid confusion:\nR P N (f ) = E x∼p P (x) E x ′ ∼p N (x) [ℓ 01 (f (x, x ′ ))] .(4)\nWith a finite sample, we typically solve the following empirical risk minimization (ERM) problem:\nmin f RP N (f ) = 1 |X P ||X N | x∈X P x ′ ∈X N ℓ(f (x, x ′ )) .\n(5)" }, { "figure_ref": [], "heading": "U m -AUC: The Method", "publication_ref": [ "b15" ], "table_ref": [], "text": "In this paper, we study AUC optimization under U m setting, which involves optimizing AUC across multiple unlabeled datasets. Suppose we are given m(m ≥ 2) unlabeled datasets U 1 , . . . , U m with different class prior probabilities, as defined by the following equation:\nU i = {x ik } ni k=1 i.i.d. ∼ p i (x) = π i p P (x) + (1 -π i )p N (x) ,(6)\nwhere p P (x) and p N (x) are the positive and negative classconditional probability distributions, respectively, and π i denotes the class prior of the i-th unlabeled set. The size of U i is n i . Although we only have access to unlabeled data, our objective is to build a classifier that minimizes the PN-AUC risk eq. ( 4).\nTo achieve this goal, we propose U m -AUC, a novel AUC optimization approach that learns from U m data. Dislike the previous studies on U m classification who require the knowledge of the class prior (Lu et al. 
2021), U m -AUC replace this requirement by only knowing knowledge of the relative order of the unlabeled sets based on their class priors, which is more realistic.\nWe next introduce the U m -AUC approach. For convenience, and without loss of generality, we assume that the class priors of the unlabeled sets are in descending order, i.e., π i ≥ π j for i < j. Additionally, we assume that at least two unlabeled sets have different priors, i.e., π 1 > π m ; otherwise, the problem is unsolvable." }, { "figure_ref": [], "heading": "Consistent AUC Learning from U m Data", "publication_ref": [ "b2", "b6", "b34", "b14", "b32", "b34", "b7" ], "table_ref": [], "text": "To provide a solution that is consistent with the true AUC, we first introduce the two sets case, i.e., one can achieve consistent AUC learning through two unlabeled sets with different class priors. Such a result is discussed in previous studies (Charoenphakdee, Lee, and Sugiyama 2019). Suppose the two unlabeled sets are U i and U j with π i > π j . We can minimize the following U 2 AUC risk:\nR ij (f ) = E x∼pi(x) E x ′ ∼pj (x) [ℓ 01 (f (x, x ′ ))](7)\nby solve the following U 2 AUC ERM problem:\nmin f Rij (f ) = 1 n i n j x∈Ui x ′ ∈Uj ℓ(f (x, x ′ )) . (8\n)\nThe following theorem shows that the U 2 AUC risk minimization problem is consistent with the original AUC optimization problem we need to solve, i.e., we can solve the original AUC optimization problem by minimizing the U 2 AUC risk.\nTheorem 1 (U 2 AUC consistency). Suppose f * is a minimizer of the AUC risk R ij over two distributions p i and p j where π i > π j , i.e., f * = arg min R ij . Then, it follows that f * is also a minimizer of the true AUC risk R P N , and thus, R ij is consistent with R P N .\nTherefore, by minimizing the U 2 AUC risk under the condition that only impure data sets are available, we can obtain the desired model.\nWith the U m data, we can construct the following minimization problem by composing m(m -1)/2 AUC subproblems with weights z ij > 0:\nmin f R U m (f ) = i,j|1≤i<j≤m z ij R ij (f ) (9)\nwhich corresponds to the U m AUC ERM problem\nNaive Solution U m -AUC \nmin f RU m (f ) = i,j|1≤i<j≤m x∈Ui x ′ ∈Uj z ij ℓ(f (x, x ′ )) n i n j .\n(10) As well, we can theoretically demonstrate the consistency between U m AUC risk minimization problem and the original AUC optimization problem. Theorem 2 (U m AUC consistency). Suppose f * is a minimizer of the AUC risk R U m over m distributions p 1 , • • • , p m where π i > π j for i < j, i.e., f * = arg min R U m . Then, it follows that f * is also a minimizer of the true AUC risk R P N , and thus, R U m is consistent with R P N .\nSimilarly, the desired model can be obtained by optimizing the U m AUC risk minimization problem under the condition that only impure data sets are available. This means that the ERM in eq. ( 10) provides a naive solution for U m AUC risk minimization by solving all the m(m -1)/2 AUC sub-problems using a pairwise surrogate loss based on the definition of the AUC score (Gao and Zhou 2015), and minimizing the loss over all instance pairs that belong to different U sets.\nHowever, such a solution can be complex and inefficient, especially when dealing with a large number of datasets and a huge amount of data in each dataset. For instance, with m datasets, we need to handle m(m -1)/2 sub-problems according to the definition of U m AUC risk, which can be complex when m grows. 
Furthermore, assuming that the number of samples is n, if a pairwise loss is used to optimize each sub-problem, the time complexity of each epoch in training is O(n 2 ). This means that the time consumption of the method grows quadratically with n, making it computationally infeasible for large-scale datasets. To address the afore-mentioned issues, we propose a novel and efficient training algorithm for U m AUC risk minimization. Specifically, we assign the surrogate label ȳ(k) be the label of the k-th unlabeled set U k , where\nȳ(k) = [ 0, 0, . . . , 0 k-1 , 1, 1, . . . , 1 m-k ] (11\n)\nhas k -1 negative labels in the front and m -k positive labels in the rear. Let g(x) = ŷ be the model output score for the multilabel learning problem, and g k (x) be the k-th dimension of g(x), which denote the output of k-th sub-problem, the multi-label learning problem can be formalized as:\nmax g AUC macro (g) = 1 m -1 k=1,2,••• ,m AUC k (g k ) ,(12)\nand AUC k is AUC on the k-th label, which has\nAUC k (g k ) = 1 - x∈ i≤k Ui x ′ ∈ j>k Uj ℓ(g k (x, x ′ ))\ni≤k n i j>k n j .\n(13) For the k-th label, the sub-problem of the multi-label learning problem is a simple AUC optimization problem:\nmin g k 1 m -1 x∈ i≤k Ui x ′ ∈ j>k Uj ℓ(g k (x, x ′ )) i≤k n i j>k n j . (14)\nThat is, in order to solve the multi-label learning problem described in eq. ( 12), we can optimize m -1 sub-problems of the form described in eq. ( 14) only. The following explanation outlines why the U m -AUC problem eq. ( 9) can be solved by solving this multi-label learning problem eq. ( 12).\nLet r ijk = (n i n j )/( i≤k n i j>k n j ), the optimization problem eq. ( 14) is equivalent to\nmin g k 1 m -1 i≤k j>k r ijk x∈Ui x ′ ∈Uj ℓ(g k (x, x ′ )) n i n j ,(15)\nor simplified as:\nmin g k 1 m -1 i≤k j>k r ijk Rij (g k ) .\n(16) This is exactly the special case of eq. ( 10)'s where z ij = r ijk /(m -1) > 0. Therefore, each sub-problem of multilabel learning problem eq. ( 12) is an ERM problem for the U m AUC risk minimization problem eq. ( 9). According to theorem 2, optimizing this multi-label learning problem is equivalent to solving the original AUC optimization problem. Thus, we aggregate the output of sub-problem as\nf = 1 m-1 m-1 k=1 g k .\nIn summary, by transforming the U m AUC risk minimization problem into a multi-label learning problem, we only need to optimize m -1, rather than m(m -1)/2 subproblems as in the naive approach.\nEfficient Training of the Model Although we have reduced the number of the sub-problems from O(m 2 ) to O(m), this approach may not be practical for large datasets when optimizing a generic pairwise loss on training data, since the pairwise method suffers from severe scalability issue, as each epoch will take O(n P • n N ) time with n P positive and n N negative samples. This issue has been discussed and efficient methods for AUC optimization have been proposed in several previous works like (Yuan et al. 2021;Liu et al. 2020). These method will take only O(n P + n N ) time each epoch, making them more suitable for large-scale datasets.\nConsider using the square surrogate AUC loss in the multi-label learning problem eq. 
( 12), with the derivation process shown in Appendix, we get\n1 m -1 1≤k<m x∈ i≤k Ui x ′ ∈ j>k Uj (1 -f (x) + f (x ′ )) 2 i≤k n i j>k n j = 1 m -1 1≤k<m ( x∈ i≤k Ui (f (x) -a k (f )) 2 i≤k n i A k (f ) + x ′ ∈ j>k Uj (f (x ′ ) -b k (f ))) 2 j>k n j B k (f ) + (1 -a k (f ) + b k (f )) 2 C k (f ) ) = 1 m -1 1≤k<m (A k (f ) + B k (f ) + max α k {2α k (1 -a k (f ) + b k (f )) -α 2 k }) (17) where a k (f ) = x∈ i≤k Ui f (x)/ i≤k n i and b k (f ) = x ′ ∈ j>k Uj f (x ′ )/ j>k n j .\nFollowing previous work (Ying, Wen, and Lyu 2016), the objective eq. ( 17) is equivalent to (m-1) min-max problems\nmin f,a k ,b k max α k h(f, a k , b k , α k ) := E z [H(f, a k , b k , α k ; z)],\n(18) where z = (x, y) is a random sample, and\nH(f, a k , b k , α k ; z) =(1 -p)(f (x) -a k ) 2 I [y=1] + p(f (x) -b k ) 2 I [y=-1] -p(1 -p)α 2 k + 2α k (p(1 -p) + pf (x)I [y=-1] -(1 -p)f (x)I [y=1] ), (19\n) with p = i≤k n i /( i≤k n i + j>k n j ).\nBesides, we replace the\nC k with max α k ≥0 {2α k (m - a k (f ) + b k (f )) -α 2\nk }, which has a margin parameter m to make the loss function more robust (Yuan et al. 2021).\nThe min-max problems can be efficiently solved using primal-dual stochastic optimization techniques, eliminating the need for explicit construction of positive-negative pairs. In our implementation, we leverage the Polyak-Łojasiewicz (PL) conditions (Guo et al. 2020) of the objective functions in the min-max problems eq. ( 18), and update the parameters accordingly to solve the multi-label learning problem.\nThrough a combination of equivalence problem conversion techniques and efficient optimization methods, the complexity of each epoch in training can be reduced from O(n 2 ) to O(n). The algorithm is described in algorithm 1." }, { "figure_ref": [], "heading": "Theoretical Analysis", "publication_ref": [], "table_ref": [], "text": "In this subsection, we provide a theoretical analysis of the approach described above. Specifically, we prove excess risk bounds for the ERM problem of U 2 AUC and U m AUC.\nLet X be the feature space, K be a kernel over X 2 , and C w be a strictly positive real number. Let F K be a class of Algorithm 1: U m -AUC Input: Model g, m sets of unlabeled data with class priors in descending order U 1 , • • • , U m , training epochs num epochs, number of batches num batchs.\n1: for t = 1, 2, . . . , num epochs do 2:\nfor b = 1, 2, . . . , num batches do 3:\nFetch mini-batch B from 0≤i≤m U i 4:\nForward B and get g(B)\n5:\nCompute multi-label loss of the mini-batch B 6:\nUpdate the parameters of g 7:\nend for 8: end for 9: Aggregate the g by f = 1 m-1 m-1 k=1 g k Output: f functions defined as:\nF K = {f w : X → R, f w (x) = K(w, x)|∥w∥ k ≤ C w } ,\nwhere ∥x∥ K = K(x, x). We also assume that the surrogate loss ℓ is L-Lipschitz continuous, bounded by a strictly positive real number C ℓ , and satisfies inequality ℓ ≥ ℓ 01 .\nLet f * ij be the minimizer of empirical risk Rij (f ), we introduce the following excess risk bound, showing that the risk of f * ij converges to risk of the optimal function in the function family F K . Theorem 3 (Excess Risk for U 2 AUC ERM problem). Asthat f * ij ∈ F K is the minimizer of empirical risk Rij (f ), f * P N ∈ F K is the minimizer of true risk R P N (f ). 
For any δ > 0, with the probability at least 1 -δ, we have\nR P N ( f * ij ) -R P N (f * P N ) ≤ h(δ) a n i + n j n i n j ,\nwhere h(δ) = 8 √ 2C ℓ C w C x + 5 2 ln (2/δ), a = π i -π j , and n i , n j is the size of dataset U i , U j .\nTheorem 3 guarantees that the excess risk of general case can be bounded plus the term of order\nO 1 a √ n i + 1 a √ n j .\nLet f * U m be the minimizer of empirical risk RU m (f ), we introduce the following excess risk bound, showing that the risk of f * U m converges to risk of the optimal function in the function family F K . Theorem 4 (Excess Risk for U m AUC ERM problem). Assume that f * U m ∈ F K is the minimizer of empirical risk RU m (f ), f * P N ∈ F K is the minimizer of true risk R P N (f ). For any δ > 0, with the probability at least 1 -δ, we have\nR P N ( f * U m ) -R P N (f * P N ) ≤ h( 2δ m(m-1) ) s i,j|1≤i<j≤m z ij n i + n j n i n j , where h(δ) = 8 √ 2C ℓ C w C x + 5 2 ln (2/δ), and s = i,j|1≤i<j≤m z ij (π i -π j ) , n i , n j is the size of unlabeled dataset U i , U j .\nTheorem 4 guarantees that the excess risk of general case can be bounded plus the term of order\nO   1 s i,j|1≤i<j≤m z ij n i + n j n i n j   .\nIt is evident that theorem 4 degenerates into theorem 3 when m = 2 and z 12 = 1." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b3", "b13", "b10" ], "table_ref": [], "text": "In this section, we report the experimental results of the proposed U m -AUC, compared to state-of-the-art U m classification approaches.\nDatasets We tested the performance of U m -AUC using the benchmark datasets Kuzushiji-MNIST (K-MNIST for short) (Clanuwat et al. 2018), CIFAR-10, and CIFAR-100 (Krizhevsky, Hinton et al. 2009) with synthesizing multiple datasets with different settings. We transformed these datasets into binary classification datasets, where we classified odd vs. even class IDs for K-MNIST and animals vs. non-animals for CIFAR datasets. In the experiments, we choose m ∈ {10, 50}, and the size of each unlabeled data set U i is fixed to n i = ⌈n train /m⌉, unless otherwise specified. To simulate the distribution of the dataset in different cases, we will generate the class priors {π i } m i=1 from four different distributions , ensuring that the class priors are not all identical to avoid mathematically unsolvable situations. We then randomly sampled data from the training set into U i using the definition in eq. ( 6).\nModels For all experiments on the Kuzushiji-MNIST dataset, we use a 5-layer MLP (multi-layer perceptron) as our model. For experiments on the CIFAR datasets, we use the Resnet32 (He et al. 2016) as our model. We train all models for 150 epochs, and we report the AUC on the test set at the final epoch." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b15", "b11", "b16" ], "table_ref": [], "text": "In our experiments, we compared our method with state-of-the-art U m classification methods: LLP-VAT (Tsai and Lin 2020) on behalf of EPRM methods, and U m -SSC (Lu et al. 2021) on behalf of ERM methods. Note that the previous methods require exact class priors, while in our setting, we can only obtain relative order relations for the class priors of the unlabeled dataset. To ensure fairness in performance comparisons, we created weaker versions of LLP-VAT and U m -SSC, called LLP-VAT ⋆ and U m -SSC ⋆ , respectively, by giving them priors obtained by dividing [0, 1] uniformly instead of using the true priors. 
We used Adam (Kingma and Ba 2014) and cross-entropy loss for their optimization, following the standard implementation in the original paper. To ensure fairness, we used the same model to implement all methods in all tasks.\nOur implementation is based on PyTorch (Paszke et al. 2019), and experiments are conducted on an NVIDIA Tesla V100 GPU. To ensure the robustness of the results, all experiments are repeated 3 times with different random seed, \nDataset D LLP-VAT ⋆ LLP-VAT U m -SSC ⋆ U m -SSC U m -AUC K-MNIST (m = 10) D u 0." }, { "figure_ref": [], "heading": "Comparison with Baseline Methods", "publication_ref": [ "b15" ], "table_ref": [], "text": "To compare our approach with the baseline methods, we conducted experiments on the three image datasets and two different numbers of bags, as described above. In real-world scenarios, the class priors of datasets often do not follow a uniform distribution. To better simulate real-world situations, we considered four different class prior distributions for each image dataset and for each number of bags: Beta(1, 1), Beta(5, 1), Beta(5, 5), and Beta(5, 2). We refer to these four distributions as uniform, biased, concentrated, and biased concentrated, respectively. These four distributions represent four distinct cases as follows: It is worth mentioning that our experiments encompass a broader range of settings compared to previous work. Specifically, the Beta(1, 1) distribution corresponds to a uniform distribution on the interval [0, 1], which is similar to the setting explored in (Lu et al. 2021).\nFor m = 10 and m = 50, the results obtained from different datasets and varied distributions of class priors are reported in table 1. The results demonstrates that our proposed method, U m -AUC, outperforms the baselines, even when a smaller amount of information is utilized." }, { "figure_ref": [], "heading": "Robustness to Imbalanced Datasets", "publication_ref": [ "b15" ], "table_ref": [], "text": "One of the most prevalent challenges in classification tasks is handling imbalanced datasets, where the number of samples in each class is unequal. In the U m setting, we also encounter imbalanced datasets. If the number of samples in each dataset is unequal, it can result in biased models that prioritize the larger datasets, while underperforming on the minority class.\nTo assess the robustness of our method against imbalanced datasets, we conducted tests using various settings. Specifically, we generated imbalanced datasets in two ways following the approach proposed in (Lu et al. 2021 " }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b17", "b26", "b12", "b33", "b15", "b4", "b27", "b5", "b32", "b14", "b34", "b36", "b31", "b6", "b0", "b21", "b34", "b22", "b18", "b25" ], "table_ref": [], "text": "U m Classification U m Classification involves learning a classifier from m(m ≥ 2) unlabeled datasets, where we have limited information about each dataset, typically the class priors of each dataset. The U m Classification setting is a case of weak supervised learning (Zhou 2017), and can be traced back to a classical problem of learning with label proportions (LLP) (Quadrianto et al. 2008). There are three main categories of previous approaches to solving the U m classification problem: clustering-based approaches, EPRM (empirical proportion risk minimization)-based approaches, and ERM (empirical risk minimization)-based approaches. For clustering-based approaches, Xu et al. 
(2004) and Krause, Perona, and Gomes (2010) assumed that each cluster corresponds to a single class and applied discriminative clustering methods to solve the problem. For EPRMbased approaches, Yu et al. (2014) aimed to minimize the distance between the average predicted probabilities and the class priors for each dataset U i (i.e., empirical proportion risk), while Tsai and Lin (2020) introduced consistency regularization to the problem. For ERM-based approaches, Scott and Zhang (2020) extended the U 2 classification problem by ensembling classifiers trained on all pairs of unlabeled sets, while Lu et al. (2021) tackled the problem through a surrogate set classification task. However, all of the previous works on U m classification have required knowledge of the class priors and are unable to address situations where only the relative order of the unlabeled sets' class priors is known.\nAUC Optimization As a widely-used performance measure alongside classification accuracy, AUC has received great attention from researchers, especially for problems with imbalanced data. While the goal is to train models with better AUC, studies (Cortes and Mohri 2003) have shown that algorithms that maximize model accuracy do not necessarily maximize the AUC score. Accordingly, numerous studies have been dedicated to directly optimizing the AUC for decades (Yang and Ying 2022). To enable efficient optimization of AUC, Gao et al. (2013) proposed an AUC optimization algorithm using a covariance matrix, while Ying, Wen, and Lyu (2016) optimized the AUC optimize problem as a stochastic saddle point problem with stochastic gradient-based methods. For AUC optimization with deep neural models, Liu et al. (2020) introduced deep AUC maximization based on a non-convex min-max problem, and Yuan et al. (2021) proposed an end-to-end AUC optimization method that is robust to noisy and easy data. Recently, there are also studies of partial-AUC and multi-class AUC optimization (Yang et al. 2021a,b;Zhu et al. 2022;Yao, Lin, and Yang 2022). In addition to the algorithms, significant work has been conducted on the theoretical aspects. For example, Gao and Zhou (2015) investigated the consistency of commonly used surrogate losses for AUC, while Agarwal et al. (2005) and Usunier, Amini, and Gallinari (2005) studied the generalization bounds of AUC optimization models. The research on AUC optimization has led to the development of numerous real-world applications, such as software build outcome prediction (Xie and Li 2018a), medical image classification (Yuan et al. 2021), and molecular property prediction (Wang et al. 2022). Most recently, there has been a growing body of research on weakly supervised AUC optimization. For example, Sakai, Niu, and Sugiyama (2018) and Xie and Li (2018b) studied semi-supervised AUC optimization, Charoenphakdee, Lee, and Sugiyama (2019) studied the properties of AUC optimization under label noise, and Xie et al. (2023) offered a universal solution for AUC optimization in various weakly supervised scenarios. However, to the best of our knowledge, there has been no investigation into AUC optimization for U m classification to date." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we investigate the challenge of constructing AUC optimization models from multiple unlabeled datasets. To address this problem, we propose U m -AUC, a novel AUC optimization method with both simplicity and efficiency. 
U m -AUC is the first solution for AUC optimization under the U m learning scenario, and it requires no exact knowledge of the class priors. Furthermore, theoretical analysis demonstrates the validity of U m -AUC, and empirical evaluation shows that it is superior and more robust than the state-of-the-art alternatives." } ]
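The sections above reduce U m AUC risk minimization to a multi-label problem (eq. (11)-(14)) and train it with Algorithm 1. A minimal sketch of that recipe follows; the data-loader layout and the generic auc_loss placeholder (the paper instead optimizes a point-wise min-max square loss) are assumptions of this sketch, while the surrogate labels of eq. (11) and the averaging of the m-1 heads follow the text.

```python
import numpy as np
import torch

def surrogate_label(k: int, m: int) -> np.ndarray:
    """Surrogate multi-label for every instance of the k-th unlabeled set
    (sets sorted by descending class prior): k-1 zeros then m-k ones (eq. (11))."""
    return np.concatenate([np.zeros(k - 1), np.ones(m - k)])

def train_um_auc(model, loader, auc_loss, optimizer, num_epochs: int = 150):
    """Algorithm 1 skeleton: a shared model with m-1 output heads, each head
    trained with a binary AUC surrogate loss against its surrogate label."""
    for _ in range(num_epochs):
        for x, y_bar in loader:              # y_bar: (batch, m-1) surrogate labels
            scores = model(x)                # (batch, m-1) outputs g_1 .. g_{m-1}
            loss = sum(auc_loss(scores[:, k], y_bar[:, k])
                       for k in range(scores.shape[1])) / scores.shape[1]
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    # Final scorer aggregates the heads: f = (1 / (m-1)) * sum_k g_k
    return lambda x: model(x).mean(dim=1)
```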
[ { "authors": "S Agarwal; T Graepel; R Herbrich; S Har-Peled; D Roth", "journal": "Journal of Machine Learning Research", "ref_id": "b0", "title": "Generalization Bounds for the Area Under the ROC Curve", "year": "2005" }, { "authors": "J Bekker; J Davis", "journal": "Machine Learning", "ref_id": "b1", "title": "Learning from Positive and Unlabeled Data: A Survey", "year": "2020" }, { "authors": "N Charoenphakdee; J Lee; M Sugiyama", "journal": "", "ref_id": "b2", "title": "On Symmetric Losses for Learning from Corrupted Labels", "year": "2019" }, { "authors": "T Clanuwat; M Bober-Irizar; A Kitamoto; A Lamb; K Yamamoto; D Ha", "journal": "", "ref_id": "b3", "title": "Deep Learning for Classical Japanese Literature", "year": "2018" }, { "authors": "C Cortes; M Mohri", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b4", "title": "AUC optimization vs. error rate minimization", "year": "2003" }, { "authors": "W Gao; R Jin; S Zhu; Z.-H Zhou", "journal": "", "ref_id": "b5", "title": "One-Pass AUC Optimization", "year": "2013" }, { "authors": "W Gao; Z.-H Zhou", "journal": "", "ref_id": "b6", "title": "On the Consistency of AUC Pairwise Optimization", "year": "2015" }, { "authors": "Z Guo; Y Yan; Z Yuan; T Yang", "journal": "", "ref_id": "b7", "title": "Fast objective & duality gap convergence for nonconvex-strongly-concave min-max problems", "year": "2020" }, { "authors": "B Han; Q Yao; T Liu; G Niu; I W Tsang; J T Kwok; M Sugiyama", "journal": "", "ref_id": "b8", "title": "A Survey of Labelnoise Representation Learning: Past, Present and Future", "year": "2021" }, { "authors": "J A Hanley; B J Mcneil", "journal": "Radiology", "ref_id": "b9", "title": "The Meaning and Use of the Area Under a Receiver Operating Characteristic (ROC) Curve", "year": "1982" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b10", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b11", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "A Krause; P Perona; R Gomes", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Discriminative clustering by regularized information maximization", "year": "2010" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b13", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "M Liu; Z Yuan; Y Ying; T Yang", "journal": "", "ref_id": "b14", "title": "Stochastic AUC Maximization with Deep Neural Networks", "year": "2020" }, { "authors": "N Lu; S Lei; G Niu; I Sato; M Sugiyama", "journal": "", "ref_id": "b15", "title": "Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification", "year": "2021" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga; A Desmaison; A Kopf; E Yang; Z Devito; M Raison; A Tejani; S Chilamkurthy; B Steiner; L Fang; J Bai; S Chintala", "journal": "", "ref_id": "b16", "title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library", "year": "2019" }, { "authors": "N Quadrianto; A J Smola; T S Caetano; Q V Le", "journal": "", "ref_id": "b17", "title": "Estimating labels from label proportions", "year": "2008" }, { "authors": "T Sakai; G Niu; M Sugiyama", "journal": "Machine Learning", "ref_id": "b18", "title": "Semi-Supervised AUC Optimization Based on Positive-Unlabeled Learning", "year": "2018" 
}, { "authors": "C Scott; J Zhang", "journal": "", "ref_id": "b19", "title": "Learning from Label Proportions: A Mutual Contamination Framework", "year": "2020" }, { "authors": "K.-H Tsai; H.-T Lin", "journal": "", "ref_id": "b20", "title": "Learning from Label Proportions with Consistency Regularization", "year": "2020" }, { "authors": "N Usunier; M.-R Amini; P Gallinari", "journal": "", "ref_id": "b21", "title": "A Datadependent Generalisation Error Bound for the AUC", "year": "2005" }, { "authors": "Z Wang; M Liu; Y Luo; Z Xu; Y Xie; L Wang; L Cai; Q Qi; Z Yuan; T Yang", "journal": "Bioinformatics", "ref_id": "b22", "title": "Advanced graph and sequence neural networks for molecular property prediction and drug discovery", "year": "2022" }, { "authors": "Z Xie; M Li", "journal": "", "ref_id": "b23", "title": "Cutting the Software Building Efforts in Continuous Integration by Semi-Supervised Online AUC Optimization", "year": "2018" }, { "authors": "Z Xie; M Li", "journal": "", "ref_id": "b24", "title": "Semi-Supervised AUC Optimization Without Guessing Labels of Unlabeled Data", "year": "2018" }, { "authors": "Z Xie; Y Liu; H.-Y He; M Li; Z.-H Zhou", "journal": "", "ref_id": "b25", "title": "Weakly Supervised AUC Optimization: A Unified Partial AUC Approach", "year": "2023" }, { "authors": "L Xu; J Neufeld; B Larson; D Schuurmans", "journal": "", "ref_id": "b26", "title": "Maximum margin clustering", "year": "2004" }, { "authors": "T Yang; Y Ying", "journal": "ACM Computing Surveys", "ref_id": "b27", "title": "AUC maximization in the era of big data and AI: A survey", "year": "2022" }, { "authors": "Z Yang; Q Xu; S Bao; X Cao; Q Huang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b28", "title": "Learning with multiclass AUC: Theory and algorithms", "year": "2021" }, { "authors": "Z Yang; Q Xu; S Bao; Y He; X Cao; Q Huang", "journal": "", "ref_id": "b29", "title": "When all we need is a piece of the pie: A generic framework for optimizing two-way partial AUC", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b30", "title": "", "year": "" }, { "authors": "Y Yao; Q Lin; T Yang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Large-scale optimization of partial auc in a range of false positive rates", "year": "2022" }, { "authors": "Y Ying; L Wen; S Lyu", "journal": "", "ref_id": "b32", "title": "Stochastic Online AUC Maximization", "year": "2016" }, { "authors": "F X Yu; K Choromanski; S Kumar; T Jebara; S.-F Chang", "journal": "", "ref_id": "b33", "title": "On learning from label proportions", "year": "2014" }, { "authors": "Z Yuan; Y Yan; M Sonka; T Yang", "journal": "", "ref_id": "b34", "title": "Large-Scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification", "year": "2021" }, { "authors": "Z.-H Zhou", "journal": "National Science Review", "ref_id": "b35", "title": "A brief introduction to weakly supervised learning", "year": "2017" }, { "authors": "D Zhu; G Li; B Wang; X Wu; T Yang", "journal": "", "ref_id": "b36", "title": "When AUC meets DRO: Optimizing partial AUC for deep learning with non-convex convergence guarantee", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b37", "title": "", "year": "" }, { "authors": "X Zhu; A B Goldberg; R Brachman; T Dietterich", "journal": "Morgan and Claypool publishers", "ref_id": "b38", "title": "Introduction to Semi-Supervised Learning", "year": "2009" } ]
[ { "formula_coordinates": [ 2, 104.39, 132.42, 101.52, 14.75 ], "formula_id": "formula_0", "formula_text": "X L := {(x i , y i )} n i=1 i.i.d." }, { "formula_coordinates": [ 2, 70.72, 180.71, 79.06, 14.75 ], "formula_id": "formula_1", "formula_text": "X P :={x i } n P i=1 i.i.d." }, { "formula_coordinates": [ 2, 69.81, 199.47, 190.75, 16.08 ], "formula_id": "formula_2", "formula_text": "X N :={x ′ j } n N j=1 i.i.d. ∼ p N (x) := p(x | y = -1) ," }, { "formula_coordinates": [ 2, 111.46, 222.42, 68.33, 9.65 ], "formula_id": "formula_3", "formula_text": "X L = X P ∪ X N ." }, { "formula_coordinates": [ 2, 103.42, 301.71, 189.09, 26.29 ], "formula_id": "formula_4", "formula_text": "AUC = 1 0 TPR f (FPR -1 f (t))dt.(2)" }, { "formula_coordinates": [ 2, 61.37, 392.09, 231.13, 17.74 ], "formula_id": "formula_5", "formula_text": "AUC = 1 - E x∼p P (x) [ E x ′ ∼p N (x) [ℓ 01 (f (x) -f (x ′ ))]] .(3)" }, { "formula_coordinates": [ 2, 69.97, 503.26, 222.53, 17.74 ], "formula_id": "formula_6", "formula_text": "R P N (f ) = E x∼p P (x) E x ′ ∼p N (x) [ℓ 01 (f (x, x ′ ))] .(4)" }, { "formula_coordinates": [ 2, 62.77, 555.02, 220.97, 27.42 ], "formula_id": "formula_7", "formula_text": "min f RP N (f ) = 1 |X P ||X N | x∈X P x ′ ∈X N ℓ(f (x, x ′ )) ." }, { "formula_coordinates": [ 2, 58.57, 680.63, 233.94, 23.52 ], "formula_id": "formula_8", "formula_text": "U i = {x ik } ni k=1 i.i.d. ∼ p i (x) = π i p P (x) + (1 -π i )p N (x) ,(6)" }, { "formula_coordinates": [ 2, 349.45, 382.41, 208.55, 17.12 ], "formula_id": "formula_9", "formula_text": "R ij (f ) = E x∼pi(x) E x ′ ∼pj (x) [ℓ 01 (f (x, x ′ ))](7)" }, { "formula_coordinates": [ 2, 344.91, 428.47, 209.22, 26.8 ], "formula_id": "formula_10", "formula_text": "min f Rij (f ) = 1 n i n j x∈Ui x ′ ∈Uj ℓ(f (x, x ′ )) . (8" }, { "formula_coordinates": [ 2, 554.13, 435.53, 3.87, 8.64 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 2, 356.18, 664.57, 201.82, 20.53 ], "formula_id": "formula_12", "formula_text": "min f R U m (f ) = i,j|1≤i<j≤m z ij R ij (f ) (9)" }, { "formula_coordinates": [ 3, 56.44, 349.01, 233.62, 28.84 ], "formula_id": "formula_13", "formula_text": "min f RU m (f ) = i,j|1≤i<j≤m x∈Ui x ′ ∈Uj z ij ℓ(f (x, x ′ )) n i n j ." }, { "formula_coordinates": [ 3, 375.9, 567.4, 177.95, 23.91 ], "formula_id": "formula_14", "formula_text": "ȳ(k) = [ 0, 0, . . . , 0 k-1 , 1, 1, . . . , 1 m-k ] (11" }, { "formula_coordinates": [ 3, 553.85, 567.86, 4.15, 8.64 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 3, 330.57, 666.33, 227.43, 37.82 ], "formula_id": "formula_16", "formula_text": "max g AUC macro (g) = 1 m -1 k=1,2,••• ,m AUC k (g k ) ,(12)" }, { "formula_coordinates": [ 4, 54, 77.75, 221.1, 30.7 ], "formula_id": "formula_17", "formula_text": "AUC k (g k ) = 1 - x∈ i≤k Ui x ′ ∈ j>k Uj ℓ(g k (x, x ′ ))" }, { "formula_coordinates": [ 4, 60.92, 156.32, 231.58, 41.76 ], "formula_id": "formula_18", "formula_text": "min g k 1 m -1 x∈ i≤k Ui x ′ ∈ j>k Uj ℓ(g k (x, x ′ )) i≤k n i j>k n j . (14)" }, { "formula_coordinates": [ 4, 60.27, 290.87, 232.23, 28.45 ], "formula_id": "formula_19", "formula_text": "min g k 1 m -1 i≤k j>k r ijk x∈Ui x ′ ∈Uj ℓ(g k (x, x ′ )) n i n j ,(15)" }, { "formula_coordinates": [ 4, 101.42, 355.78, 143.66, 26.88 ], "formula_id": "formula_20", "formula_text": "min g k 1 m -1 i≤k j>k r ijk Rij (g k ) ." }, { "formula_coordinates": [ 4, 54, 474.87, 83.68, 14.56 ], "formula_id": "formula_21", "formula_text": "f = 1 m-1 m-1 k=1 g k ." 
}, { "formula_coordinates": [ 4, 319.5, 73.51, 244.38, 228.17 ], "formula_id": "formula_22", "formula_text": "1 m -1 1≤k<m x∈ i≤k Ui x ′ ∈ j>k Uj (1 -f (x) + f (x ′ )) 2 i≤k n i j>k n j = 1 m -1 1≤k<m ( x∈ i≤k Ui (f (x) -a k (f )) 2 i≤k n i A k (f ) + x ′ ∈ j>k Uj (f (x ′ ) -b k (f ))) 2 j>k n j B k (f ) + (1 -a k (f ) + b k (f )) 2 C k (f ) ) = 1 m -1 1≤k<m (A k (f ) + B k (f ) + max α k {2α k (1 -a k (f ) + b k (f )) -α 2 k }) (17) where a k (f ) = x∈ i≤k Ui f (x)/ i≤k n i and b k (f ) = x ′ ∈ j>k Uj f (x ′ )/ j>k n j ." }, { "formula_coordinates": [ 4, 327.32, 331.22, 222.85, 15.36 ], "formula_id": "formula_23", "formula_text": "min f,a k ,b k max α k h(f, a k , b k , α k ) := E z [H(f, a k , b k , α k ; z)]," }, { "formula_coordinates": [ 4, 328.14, 375.55, 225.71, 83.78 ], "formula_id": "formula_24", "formula_text": "H(f, a k , b k , α k ; z) =(1 -p)(f (x) -a k ) 2 I [y=1] + p(f (x) -b k ) 2 I [y=-1] -p(1 -p)α 2 k + 2α k (p(1 -p) + pf (x)I [y=-1] -(1 -p)f (x)I [y=1] ), (19" }, { "formula_coordinates": [ 4, 319.5, 450.69, 238.5, 21.78 ], "formula_id": "formula_25", "formula_text": ") with p = i≤k n i /( i≤k n i + j>k n j )." }, { "formula_coordinates": [ 4, 319.5, 474.14, 238.5, 20.61 ], "formula_id": "formula_26", "formula_text": "C k with max α k ≥0 {2α k (m - a k (f ) + b k (f )) -α 2" }, { "formula_coordinates": [ 5, 61.92, 270.41, 222.66, 9.65 ], "formula_id": "formula_27", "formula_text": "F K = {f w : X → R, f w (x) = K(w, x)|∥w∥ k ≤ C w } ," }, { "formula_coordinates": [ 5, 81.71, 431.54, 183.08, 23.23 ], "formula_id": "formula_28", "formula_text": "R P N ( f * ij ) -R P N (f * P N ) ≤ h(δ) a n i + n j n i n j ," }, { "formula_coordinates": [ 5, 126.15, 520.96, 94.19, 23.23 ], "formula_id": "formula_29", "formula_text": "O 1 a √ n i + 1 a √ n j ." }, { "formula_coordinates": [ 5, 73.46, 48.92, 484.54, 656.68 ], "formula_id": "formula_30", "formula_text": "R P N ( f * U m ) -R P N (f * P N ) ≤ h( 2δ m(m-1) ) s i,j|1≤i<j≤m z ij n i + n j n i n j , where h(δ) = 8 √ 2C ℓ C w C x + 5 2 ln (2/δ), and s = i,j|1≤i<j≤m z ij (π i -π j ) , n i , n j is the size of unlabeled dataset U i , U j ." }, { "formula_coordinates": [ 5, 365.14, 119.72, 147.23, 34.15 ], "formula_id": "formula_31", "formula_text": "O   1 s i,j|1≤i<j≤m z ij n i + n j n i n j   ." }, { "formula_coordinates": [ 6, 90.61, 56.88, 425.44, 42.93 ], "formula_id": "formula_32", "formula_text": "Dataset D LLP-VAT ⋆ LLP-VAT U m -SSC ⋆ U m -SSC U m -AUC K-MNIST (m = 10) D u 0." } ]
AUC Optimization from Multiple Unlabeled Datasets
Weakly supervised learning aims to make machine learning more powerful when perfect supervision is unavailable, and it has attracted much attention from researchers. Among the various scenarios of weak supervision, one of the most challenging is learning from multiple unlabeled (U) datasets with only limited knowledge of the class priors, or U m learning for short. In this paper, we study the problem of building an AUC (area under the ROC curve) optimal model from multiple unlabeled datasets, which maximizes the pairwise ranking ability of the classifier. We propose U m -AUC, an AUC optimization approach that converts the U m data into a multi-label AUC optimization problem and can be trained efficiently. We show that the proposed U m -AUC is effective both theoretically and empirically.
Zheng Xie; Yu Liu; Ming Li
[ { "figure_caption": "Figure 1 :1Figure 1: Framework Demonstration. The U m -AUC employs an efficient stochastic optimization algorithm, eliminating the need for pairwise loss and reducing the time complexity to O(n). Additionally, it simplifies the naive solution by transforming it into a multi-label learning problem, reducing the number of sub-problems and resulting in a more concise problem formulation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "U m -AUC: Simple and Efficient Learning for U m AUC Risk Minimization To simplify the form of naive solution to U m AUC risk minimization, we transform it into an equivalent multi-label learning problem, reducing the number of sub-problems to m -1. To decrease the time cost of training the model, we use an efficient stochastic optimization algorithm, reducing the time complexity from O(n 2 ) to O(n). The proposed approach is demonstrated in the fig. 1. Reduction in the Number of Sub-problems To reduce the number of sub-problems, we transform the U m AUC risk minimization problem into a multi-label learning problem with m-1 labels. We let the samples in datasets U 1 , • • • , U k have label 1 at the k-th position and let the samples in datasets U k+1 , • • • , U m have label 0 at the k-th position.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1.D u (Uniform): the class priors are sampled from uniform distribution on [0, 1]; 2. D b (Biased): the class priors are sampled from the distribution concentrated on one side, i.e., most sets have more positive samples than negative samples; 3. D c (Concentrated): the class priors are sampled from the distribution concentrated around 0.5, i.e., most sets have close proportions of positive and negative samples; 4. 
D bc (Biased Concentrated): the class priors are sampled from the distribution concentrated around 0.8, i.e., most sets have close proportions of positive and negative samples, and positive samples more than negative samples.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "865 ±0.0145 0.896 ±0.0249 0.908 ±0.0073 0.911 ±0.0084 0.938 ±0.0064 D b 0.780 ±0.0225 0.789 ±0.0185 0.833 ±0.0357 0.836 ±0.0521 0.851 ±0.0616 D c 0.853 ±0.0330 0.808 ±0.0131 0.858 ±0.0239 0.856 ±0.0307 0.870 ±0.0512 D bc 0.825 ±0.0315 0.798 ±0.0332 0.868 ±0.0255 0.857 ±0.0390 0.896 ±0.0439 CIFAR-10 (m = 10) D u 0.856 ±0.0131 0.856 ±0.0066 0.860 ±0.0090 0.859 ±0.0131 0.905 ±0.0080 D b 0.723 ±0.0454 0.737 ±0.0754 0.746 ±0.0614 0.778 ±0.0462 0.866 ±0.0238 D c 0.787 ±0.0172 0.847 ±0.0059 0.792 ±0.0372 0.807 ±0.0209 0.884 ±0.0046 D bc 0.769 ±0.0373 0.805 ±0.0231 0.796 ±0.0552 0.812 ±0.0430 0.887 ±0.0155 CIFAR-100 (m = 10) D u 0.734 ±0.0092 0.731 ±0.0167 0.747 ±0.0192 0.756 ±0.0115 0.847 ±0.0121 D b 0.630 ±0.0183 0.651 ±0.0210 0.652 ±0.0332 0.667 ±0.0331 0.715 ±0.0292 D c 0.670 ±0.0168 0.707 ±0.0117 0.676 ±0.0363 0.692 ±0.0264 0.757 ±0.0136 D bc 0.672 ±0.0359 0.700 ±0.0324 0.683 ±0.0500 0.701 ±0.0415 0.751 ±0.0641 K-MNIST (m = 50) D u 0.896 ±0.0124 0.902 ±0.0102 0.915 ±0.0136 0.915 ±0.0107 0.931 ±0.0156 D b 0.808 ±0.0142 0.787 ±0.0196 0.861 ±0.0102 0.869 ±0.0083 0.883 ±0.0229 D c 0.863 ±0.0206 0.833 ±0.0165 0.855 ±0.0378 0.863 ±0.0417 0.867 ±0.0125 D bc 0.860 ±0.0523 0.815 ±0.0052 0.881 ±0.0056 0.885 ±0.0078 0.904 ±0.0012 CIFAR-10 (m = 50) D u 0.852 ±0.0079 0.857 ±0.0073 0.853 ±0.0030 0.854 ±0.0492 0.889 ±0.0083 D b 0.757 ±0.0250 0.742 ±0.0847 0.794 ±0.0278 0.806 ±0.0204 0.861 ±0.0097 D c 0.790 ±0.0132 0.852 ±0.0038 0.807 ±0.0101 0.808 ±0.0062 0.861 ±0.0138 D bc 0.804 ±0.0056 0.830 ±0.0235 0.826 ±0.0059 0.832 ±0.0052 0.873 ±0.0074 CIFAR-100 (m = 50) D u 0.739 ±0.0036 0.738 ±0.0084 0.742 ±0.0647 0.744 ±0.0084 0.844 ±0.0042 D b 0.669 ±0.0199 0.673 ±0.0363 0.686 ±0.0103 0.696 ±0.0068 0.756 ±0.0281 D c 0.689 ±0.0075 0.724 ±0.0065 0.700 ±0.0018 0.703 ±0.0097 0.790 ±0.0085 D bc 0.699 ±0.0065 0.718 ±0.0082 0.714 ±0.0009 0.717 ±0.0024 0.812 ±0.0163 Test AUC on benchmark datasets and different priors distribution with m = 10 and m = 50. and we report the mean values with standard deviations. For more details about the experiment, please refer to Appendix.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "±0.0042 0.934 ±0.0095 0.926 ±0.0038 0.928 ±0.0046 0.928 ±0.0196 50 0.938 ±0.0106 0.932 ±0.0047 0.937 ±0.0097 0.928 ±0.0042 0.941 ±0.0186 CIFAR-10 10 0.907 ±0.0087 0.901 ±0.0053 0.901 ±0.0039 0.895 ±0.0026 0.904 ±0.0123 50 0.900 ±0.0022 0.895 ±0.0080 0.893 ±0.0023 0.890 ±0.0147 0.902 ±0.0048 CIFAR-100 10 0.842 ±0.0098 0.835 ±0.0036 0.827 ±0.0228 0.817 ±0.0243 0.803 ±0.0366 50 0.795 ±0.0090 0.805 ±0.0067 0.785 ±0.0210 0.777 ±0.0213 0.811 ±0.0125 Test AUC on benchmark datasets with different imbalanced setting. ⌈m/2⌉ datasets, and change their size to ⌈τ • (n train /m)⌉. 2. Random: Randomly sample dataset size n i from range [0, n train ], such that m i=1 n i = n train . The test AUC of U m -AUC on with different imbalance datasets is presented in table 2. 
This indicates that our method is reasonably robust to the imbalanced settings, as it exhibits only a slow decline in test performance and a slow increase in test-performance variance as the reduction ratio decreases.", "figure_data": "):", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
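To make the reduction described in the framework caption above concrete (samples from U_1, ..., U_k receive label 1 at the k-th position, samples from U_{k+1}, ..., U_m receive label 0), the following is a minimal PyTorch sketch of how such (m-1)-dimensional multi-label targets could be constructed. The function name and the 0-indexed set convention are our own assumptions, not taken from the paper.

```python
import torch

def build_multilabel_targets(set_indices: torch.Tensor, m: int) -> torch.Tensor:
    """Map each sample's unlabeled-set index (0-based, i.e. U_1 -> 0) to an
    (m-1)-dimensional binary label vector.

    Position k (0-based) corresponds to the k-th induced binary sub-problem:
    samples drawn from U_1, ..., U_{k+1} are labeled 1 there, samples drawn
    from U_{k+2}, ..., U_m are labeled 0, mirroring the reduction above.
    """
    n = set_indices.shape[0]
    k = torch.arange(m - 1).unsqueeze(0).expand(n, -1)   # (n, m-1)
    return (set_indices.unsqueeze(1) <= k).float()       # (n, m-1)

# Example: with m = 4 sets, a sample from U_1 gets [1, 1, 1],
# a sample from U_3 gets [0, 0, 1], and a sample from U_4 gets [0, 0, 0].
print(build_multilabel_targets(torch.tensor([0, 2, 3]), m=4))
```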
[{"Category": "Methodological Basis", "Citation": "(Zhou 2017)", "Explanation": "The cited work by Zhou (2017) provides foundational information on the concept of weak supervision, which the citing paper builds upon in discussing the challenges of obtaining perfect supervision in real-world machine learning problems."}, {"Category": "Extension or Continuation", "Citation": "(Zhu et al. 2009)", "Explanation": "The cited work by Zhu et al. (2009) introduces the concept of semi-supervised learning, which the citing paper extends by discussing the challenges of learning classifiers from unlabeled datasets with different class priors in the context of weakly supervised learning."}, {"Category": "Extension or Continuation", "Citation": "(Bekker and Davis 2020)", "Explanation": "The cited work by Bekker and Davis (2020) presents the concept of positive-unlabeled learning, which the citing paper further builds upon in discussing the challenges of learning classifiers from unlabeled datasets with different class priors in the context of weakly supervised learning."}, {"Category": "Extension or Continuation", "Citation": "(Han et al. 2021)", "Explanation": "The cited work by Han et al. (2021) discusses the concept of noisy label learning, which the citing paper extends by highlighting the challenges of learning classifiers from unlabeled datasets with different class priors in the context of weakly supervised learning."}, {"Category": "Data Source", "Citation": "(Scott and Zhang 2020)", "Explanation": "The cited work by Scott and Zhang (2020) presents a method for ensembling classifiers trained on all pairs of unlabeled sets, which the citing paper uses as a data source for discussing the challenges of learning classifiers from unlabeled datasets with different class priors in the context of weakly supervised learning."}, {"Category": "Data Source", "Citation": "(Tsai and Lin 2020)", "Explanation": "The cited work by Tsai and Lin (2020) introduces a method for consistency regularization in the context of unlabeled datasets with different class priors, which the citing paper uses as a data source for discussing the challenges of learning classifiers from unlabeled datasets with different class priors in the context of weakly supervised learning."}, {"Category": "Methodological Basis", "Citation": "(Lu et al., 2021)", "Explanation": "The cited work by Lu et al. (2021) provides a consistent approach for classification from multiple unlabeled sets, which the citing paper adopts to optimize a classification loss in the context of learning from multiple unlabeled sets."}, {"Category": "Methodological Basis", "Citation": "(Lu et al., 2021)", "Explanation": "The cited work by Lu et al. (2021) provides a method for optimizing AUC in a multi-unlabeled dataset setting, which the citing paper builds upon to develop a new approach for AUC optimization."}, {"Category": "Methodological Basis", "Citation": "(Yuan et al. 2021)", "Explanation": "The cited work by Yuan et al. provides efficient methods for optimizing the AUC in the multi-label learning problem, which the citing paper adopts to address the scalability issue in the training process."}, {"Category": "Methodological Basis", "Citation": "(Liu et al. 2020)", "Explanation": "The cited work by Liu et al. 
also contributes to the discussion on efficient methods for optimizing the AUC in the multi-label learning problem, providing another approach for the citing paper to consider in addressing the scalability issue."}, {"Category": "Methodological Basis", "Citation": "(Yuan et al. 2021)", "Explanation": "The cited work introduces a margin parameter m to make the loss function more robust, which the citing paper adopts in their implementation of the min-max problems in the multi-label learning problem."}, {"Category": "Data Source", "Citation": "(Clanuwat et al. 2018)", "Explanation": "The cited work provides the benchmark dataset Kuzushiji-MNIST (K-MNIST) for testing the performance of U m -AUC in the proposed research."}, {"Category": "Data Source", "Citation": "(Krizhevsky, Hinton et al. 2009)", "Explanation": "The cited work provides the benchmark datasets CIFAR-10 and CIFAR-100, which are transformed into binary classification datasets for testing the performance of U m -AUC in the proposed research."}, {"Category": "Data Source", "Citation": "(He et al., 2016)", "Explanation": "The cited work by He et al. (2016) is the source of the Resnet32 model used in the experiments on the CIFAR datasets in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Tsai and Lin 2020)", "Explanation": "The cited work, LLP-VAT, serves as a methodological basis for the comparison of the unlabeled dataset class priors in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Lu et al. 2021)", "Explanation": "The cited work, U m -SSC, is used as a methodological basis for the comparison of the unlabeled dataset class priors in the citing paper."}, {"Category": "Data Source", "Citation": "(Kingma and Ba 2014)", "Explanation": "The cited work, Adam optimization method, is used as a data source for the optimization process in the citing paper."}, {"Category": "Data Source", "Citation": "(Paszke et al. 2019)", "Explanation": "The cited work, PyTorch implementation, is used as a data source for the experiments conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Lu et al. 2021)", "Explanation": "The cited work by Lu et al. (2021) is mentioned in the context of class prior distributions, and the results obtained in the experiments are used to support the claim that the proposed method outperforms the baselines in a similar setting."}, {"Category": "Data Source", "Citation": "(Lu et al. 2021)", "Explanation": "The cited work is used to generate imbalanced datasets in the U m setting, which the citing paper uses to assess the robustness of their method against such data."}, {"Category": "Supporting Evidence", "Citation": "(Quadrianto et al. 2008)", "Explanation": "The cited work by Quadrianto et al. provides a classical problem of learning with label proportions, which forms the basis for the U m classification problem discussed in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Yu et al. 2014)", "Explanation": "The cited work by Yu et al. 
introduced a method of minimizing the distance between predicted probabilities and class priors to solve the U m classification problem, which the citing paper adopts as a basis for their research."}, {"Category": "Extension or Continuation", "Citation": "(Tsai and Lin 2020)", "Explanation": "The cited work by Tsai and Lin further extends the research on the U m classification problem by introducing consistency regularization to the problem, providing a new approach to solving the issue."}, {"Category": "Methodological Basis", "Citation": "(Scott and Zhang 2020)", "Explanation": "The cited work by Scott and Zhang (2020) ensembling classifiers trained on all pairs of unlabeled sets is adopted as a method in the citing paper to address the U 2 classification problem."}, {"Category": "Methodological Basis", "Citation": "(Lu et al. 2021)", "Explanation": "The cited work by Lu et al. (2021) tackles the U 2 classification problem through a surrogate set classification task, which is used as a method in the citing paper to address the same problem."}, {"Category": "Extension or Continuation", "Citation": "(Yang and Ying 2022)", "Explanation": "The cited work by Yang and Ying (2022) has dedicated studies to directly optimizing the AUC for decades, which the citing paper extends by further exploring the optimization of AUC in the context of U m classification problems."}, {"Category": "Data Source", "Citation": "(Gao et al. 2013)", "Explanation": "The cited work by Gao et al. (2013) proposed an AUC optimization algorithm using a covariance matrix, which the citing paper uses as a data source to enable efficient optimization of AUC in the context of U m classification problems."}, {"Category": "Data Source", "Citation": "(Ying, Wen, and Lyu 2016)", "Explanation": "The cited work by Ying, Wen, and Lyu (2016) optimized the AUC optimize problem as a stochastic saddle point problem with stochastic gradient-based methods, which the citing paper uses as a data source to further explore the optimization of AUC in the context of U m classification problems."}, {"Category": "Supporting Evidence", "Citation": "(Liu et al., 2020)", "Explanation": "The cited work by Liu et al. (2020) introduced a non-convex min-max problem for deep AUC maximization, which provides a foundational method for the study of AUC optimization in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Yuan et al., 2021)", "Explanation": "The cited work by Yuan et al. (2021) proposed an end-to-end AUC optimization method that is robust to noisy and easy data, which further builds upon the work of Liu et al. (2020) in the field of AUC optimization."}, {"Category": "Extension or Continuation", "Citation": "(Yang et al., 2021a,b)", "Explanation": "The cited works by Yang et al. (2021a,b) study partial-AUC and multi-class AUC optimization, which extends the research on AUC optimization to new dimensions and contexts."}, {"Category": "Extension or Continuation", "Citation": "(Zhu et al., 2022)", "Explanation": "The cited work by Zhu et al. 
(2022) also studies multi-class AUC optimization, which further expands the research on this topic."}, {"Category": "Extension or Continuation", "Citation": "(Yao, Lin, and Yang, 2022)", "Explanation": "The cited work by Yao, Lin, and Yang (2022) also studies multi-class AUC optimization, which further extends the research in this area."}, {"Category": "Supporting Evidence", "Citation": "(Gao and Zhou, 2015)", "Explanation": "The cited work by Gao and Zhou (2015) investigated the consistency of commonly used surrogate losses for AUC, which provides a theoretical basis for the study of AUC optimization in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Agarwal et al., 2005)", "Explanation": "The cited work by Agarwal et al. (2005) studied the generalization bounds of AUC optimization models, which further contributes to the theoretical aspects of AUC optimization research."}, {"Category": "Supporting Evidence", "Citation": "(Usunier, Amini, and Gallinari, 2005)", "Explanation": "The cited work by Usunier, Amini, and Gallinari (2005) also studied the generalization bounds of AUC optimization models, which further supports the research on the theoretical aspects of AUC optimization."}, {"Category": "Data Source", "Citation": "(Xie and Li, 2018a)", "Explanation": "The cited work by Xie and Li (2018a) conducted research on software build outcome prediction using AUC optimization, which provides a real-world application of the method discussed in the citing paper."}, {"Category": "Data Source", "Citation": "(Wang et al., 2022)", "Explanation": "The cited work by Wang et al. (2022) studied molecular property prediction using AUC optimization, which highlights the use of the method in a real-world application."}, {"Category": "Extension or Continuation", "Citation": "(Sakai, Niu, and Sugiyama, 2018)", "Explanation": "The cited work by Sakai, Niu, and Sugiyama (2018) studies semi-supervised AUC optimization, which the citing paper extends by focusing on U m classification."}, {"Category": "Extension or Continuation", "Citation": "(Xie and Li, 2018b)", "Explanation": "The cited work by Xie and Li (2018b) also studies semi-supervised AUC optimization, and the citing paper further extends this research by focusing on U m classification."}, {"Category": "Extension or Continuation", "Citation": "(Charoenphakdee, Lee, and Sugiyama, 2019)", "Explanation": "The cited work by Charoenphakdee, Lee, and Sugiyama (2019) studies the properties of AUC optimization under label noise, which the citing paper extends by focusing on U m classification."}, {"Category": "Extension or Continuation", "Citation": "(Xie et al., 2023)", "Explanation": "The cited work by Xie et al. (2023) offers a universal solution for AUC optimization in various weakly supervised scenarios, and the citing paper builds upon this work by applying the solution to U m classification."}]
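For contrast with the O(n) stochastic formulation highlighted above, the sketch below shows the kind of pairwise AUC surrogate that enumerates every positive-negative pair and therefore scales quadratically with the number of samples; it is an illustrative baseline under our own naming, not the paper's U_m-AUC objective.

```python
import torch

def pairwise_auc_surrogate(scores_pos: torch.Tensor,
                           scores_neg: torch.Tensor,
                           margin: float = 1.0) -> torch.Tensor:
    """Pairwise squared surrogate for 1 - AUC: penalizes every (positive,
    negative) pair whose score gap falls short of the margin. The explicit
    pair enumeration is what costs O(n_pos * n_neg) per batch.
    """
    diff = scores_pos.unsqueeze(1) - scores_neg.unsqueeze(0)   # (n_pos, n_neg)
    return torch.clamp(margin - diff, min=0).pow(2).mean()

# Example usage with scores produced by some scoring model.
loss = pairwise_auc_surrogate(torch.randn(128), torch.randn(256))
```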
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b21", "b22", "b6", "b14", "b12", "b6", "b14", "b20" ], "table_ref": [], "text": "Recent work on deep generative models has led to rapid advancements in image editing. Text-to-image models [19,22] trained on large-scale databases [23] allow intuitive editing [7,15] of images in various domains. Then, to what extent can these models support precise editing instructions? Can a unique concept of the user, especially one not encountered during large-scale training, be utilized for editing? Editing with a prompt acquired from a wellperforming captioning model [13] fails to capture the appearance of reference, as shown in Fig. 1.\nWe propose Custom-Edit, a two-step approach that involves (i) customizing the model [6, 12, 21] using a few reference images and then (ii) utilizing effective text-guided editing methods [7,15,16] to edit images. While prior customization studies [6,12,21] deal with the random generation of images (noise→image), our work focuses on image editing (image→image). As demonstrated in Fig. 1, customization improves faithfulness to the reference's appearance by a large margin. This paper shows that customizing * Corresponding Authors only language-relevant parameters with augmented prompts significantly enhances the quality of edited images. Moreover, we present our design choices for each customization and editing process and discuss the source-reference tradeoff in Custom-Edit." }, { "figure_ref": [], "heading": "Diffusion Models", "publication_ref": [ "b7", "b10", "b6", "b21", "b0" ], "table_ref": [], "text": "Throughout the paper, we use Stable Diffusion [19], an open-source text-to-image model. The diffusion model [5,8,24,26] is trained in the latent space of a VAE [11], which downsamples images for computation efficiency. The model is trained to reconstruct the clean latent representation x 0 from a perturbed representation x t given the text condition c, which is embedded with the CLIP text encoder [18]. The diffusion model is trained with the following objective:\nT t=1 E x0,ϵ [||ϵ -ϵ θ (x t , t, c)|| 2 ],(1)\nwhere ϵ is an added noise, t is a time step indicating a perturbed noise level, and ϵ θ is a diffusion model with a U-Net We customize a diffusion model by optimizing only language-relevant parameters (i.e., custom embedding V* and attention weights) on a given set of reference images. We also apply the prior preservation loss to alleviate the language drift. (b) Editing. We then transform the source image to the output using the customized word. We leverage the P2P and Null-text inversion methods [7,16] for this process.\nvalues of cross-attention layers, and the text encoder is kept frozen to preserve its language understanding capability.\nImagen [22] and eDiffi [1] have shown that leveraging rich language understandings of large language models by freezing them is the key to boosting the performance." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Custom-Edit", "publication_ref": [], "table_ref": [], "text": "Our goal is to edit images with complex visual instructions given as reference images (Fig. 1). Therefore, we propose a two-step approach that (i) customizes the model on given references (Sec. 3.1) and (ii) edits images with textual prompts (Sec. 3.2). Our method is presented in Fig. 2." }, { "figure_ref": [], "heading": "Customization", "publication_ref": [ "b20", "b20", "b22" ], "table_ref": [], "text": "Trainable Parameters. 
We optimize only the keys and values of cross-attention and the '[rare token]', following Custom-Diffusion [12]. As we discuss in Sec. 4, our results indicate that training these language-relevant parameters is crucial for successfully transferring reference concepts to source images. Furthermore, training only these parameters requires less storage than Dreambooth [21]. Augmented Prompts. We fine-tune the abovementioned parameters by minimizing Eq. (1). We improve Custom-Diffusion for editing by augmenting the text input as '[rare token] [modifier] [class noun]' (e.g., 'V* patterned teapot'). We find that '[modifier]' encourages the model to focus on learning the appearance of the reference. Datasets. To keep the language understanding while finetuning on the reference, we additionally minimize prior preservation loss [21] over diverse images belonging to the same class as the reference. Thus, we use CLIP-retrieval [3] to retrieve 200 images and their captions from the LAION dataset [23] using the text query 'photo of a [modifier] [class noun]'." }, { "figure_ref": [], "heading": "Text-Guided Image Editing", "publication_ref": [ "b6", "b12", "b14" ], "table_ref": [], "text": "Prompt-to-Prompt. We use Prompt-to-Prompt [7] (P2P), a recently introduced editing framework that edits images by only modifying source prompts. P2P proposes attention injection to preserve the structure of a source image. For each denoising step t, let us denote the attention maps of the source and edited image as M t and M t * , respectively. P2P then injects a new attention map Edit(M t , M t * , t) into the model ϵ θ . Edit is an attention map editing operation, including prompt refinement and word swap. Additionally, P2P enables local editing with an automatically computed mask. P2P computes the average of cross-attention Mt,w and M * t,w related to the word w and thresholds them to produce the binary mask B( Mt )∪B( M * t ). Before editing with P2P, we utilize Null-Text Inversion [16] to boost the source preservation. Refer to Sec. C for a more description. Operation Choice. Due to the limited number of reference images, the customized words favor only a limited variety of structures. This inspired us to propose the following recipe. First, we use prompt refinement for the Edit function. Word swap fails when the customized words do not prefer the swapped attention map. Second, we use mask B( Mt ) rather than B( Mt ) ∪ B( M * t ), as the customized words are likely to generate incorrect masks. Source-Reference Trade-Off. A key challenge in image editing is balancing the edited image's source and reference similarities. We refer to τ /T as strength, where P2P injects self-attention from t = T to t = τ . In P2P, we observed that a critical factor in controlling the trade-off is the injection Our method transfers the reference's appearance to the source image with unprecedented fidelity. The structures of the source are well preserved. We obtain source prompts using BLIP2 [13]. Except for the pencil drawing example, we use local editing of P2P with automatically generated masks.\nof self-attention rather than cross-attention. Higher strength denotes higher source similarity at the expense of reference similarity. In Sec. 4, we also show results with SDEdit [15], which diffuses the image from t = 0 to t = τ and denoises it back. As opposed to P2P, higher strength in SDEdit means higher reference similarity." 
}, { "figure_ref": [], "heading": "Experiment", "publication_ref": [ "b20", "b14", "b12" ], "table_ref": [], "text": "In this section, we aim to validate each process of Custom-Edit. Specifically, we assess our design choices for customization by using Textual Inversion [6] and Dreambooth [21] in the customization process. We compare their source-reference trade-off in the editing process. As well as P2P, we use SDEdit [15] for experiments. Baselines. Textual Inversion learns a new text embedding V*, initialized with a class noun (e.g., 'pot'), by minimizing Eq. ( 1) for the input prompt 'V*'. Dreambooth fine-tunes the diffusion model while the text encoder is frozen. Eq. ( 1) is minimized over a few images given for input prompt '[rare token] [class noun]' (e.g., 'ktn teapot'). SDEdit is the simplest editing method that diffuse-and-denoise the image. Datasets. We use eight references in our experiments, including two pets, five objects, and one artwork. For each reference, we used five source images on average. Metrics. We measure the source and reference similarities with CLIP ViT-B/32 [18]. We use strengths [0.2, 0.4, 0.6, 0.8] for P2P and [0.5, 0.6, 0.7, 0.8] for SDEdit results. We generated two P2P samples with cross-attention injection strengths [0.2, 0.6], and three SDEdit samples for each strength and source image from different random seeds. Inference Details. We employ a guidance scale of 7.5 and 50 inference steps. We acquire all source prompts using BLIP2 [13]. More details are available in Sec. B." }, { "figure_ref": [], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "Fig. 3 illustrates the selected results. Custom-Edit transfers the reference's detailed appearance to the source while preserving the overall structure. For example, Custom-Edit generates a horizontally elongated V* wooden pot from the wine bottle (first row). In the second row, Custom-Edit generates a V* tortoise plushy wearing a hat with the texture of its shell. The blue jay in the third row became a V* ceramic bird with perfectly preserved macarons. In the last row, the V* cat is sitting in a pose that does not exist in the reference set. We show qualitative comparisons in Sec. A.1." }, { "figure_ref": [ "fig_3" ], "heading": "Quantitative Results", "publication_ref": [], "table_ref": [], "text": "Fig. 4 shows average trade-off curves on P2P and SDEdit. Our improved Custom-Diffusion yields the best trade-off, while Textual Inversion shows similar source similarity but lower reference similarity. Dreambooth has higher source similarity but lower reference similarity, suggesting that it is ineffective in modifying images. SDEdit results also show a similar tendency, supporting our claim that customizing language-relevant parameters is effective for editing. Note that SDEdit shows lower source similarity than P2P, indicating the superiority of P2P and our operation choices in text-guided editing." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b6", "b0", "b21", "b13" ], "table_ref": [], "text": "We propose Custom-Edit, which allows fine-grained editing with textual prompts. We present our design choices for each process, which can benefit future customization and editing work. Additionally, we discuss the trade-off between source and reference in diffusion-based editing.\nAlthough Custom-Edit shows various successful editing results, there are some failure cases, as presented in Sec. A.3. 
Custom-Edit sometimes edits undesired regions or fails to edit complex backgrounds. We hypothesize that this is due to the inaccurate attention maps of Stable Diffusion [7,16] and the limited controllability of the text input. Potential solutions are to apply Custom-Edit on text-toimage models with larger text encoders [1,22] or extended controllability [14,28]. " }, { "figure_ref": [], "heading": "A. Additional Results", "publication_ref": [], "table_ref": [], "text": "Additional " }, { "figure_ref": [], "heading": "A.2. Strength Control", "publication_ref": [], "table_ref": [], "text": "We show how the strength of P2P (Fig. D) and SDEdit (Fig. E) affect the results. By controlling the strength of these methods, users can choose samples that match their preferences. Our empirical findings suggest that P2P strengths between 0.4 and 0.6, and SDEdit strengths between 0.6 and 0.7 produce good samples." }, { "figure_ref": [], "heading": "A.3. Failure Cases", "publication_ref": [ "b6", "b21", "b13" ], "table_ref": [], "text": "Failure cases are shown in Fig. F. In the first row, the cake turns into the V* cat instead of the coffee. Replacing 'A cup' with 'V* cat' resolves the issue. We speculate that Stable Diffusion is not familiar with a cat sitting in coffee, which causes the word 'cat' to fail to attend to coffee. Recent works [7,16] have noted that attention maps of Stable Diffusion are less accurate than those of Imagen [22].\nTurning the dolphin into the V* tortoise plushy in the second row is easy. However, we cannot turn rocks into the V* tortoise plushy. The rocks are scattered in the complex scene so, the model requires clarification on which rock to modify. Applying Custom-Edit on extended text-to-image models such as GLIGEN [14], which is a model extended to the grounding inputs, may solve this problem." }, { "figure_ref": [], "heading": "A.4. Text Similarity", "publication_ref": [], "table_ref": [], "text": "In addition to the source-reference trade-off shown in the main paper, we show the trade-off between text similarity and source similarity in Fig. G. We measure text similarity using CLIP ViT-B/32 between the edited image and the target text (with V* omitted). Our improved Custom-Diffusion achieves significantly better text similarity compared to other methods." }, { "figure_ref": [], "heading": "B. Implementation Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.1. Customization", "publication_ref": [], "table_ref": [], "text": "Dreambooth and Custom-Diffusion. We train a model for 500 optimization steps on a batch size of 2. We use same dataset for prior preservation loss. During training, we augment text input with the following templates:\n• \"photo of a V* [modifier] [class]\" • \"rendering of a V* [modifier] [class]\" • \"illustration of a V* [modifier] [class]\" • \"depiction of a V* [modifier] [class]\" • \"rendition of a V* [modifier] [class]\"\nFor 1 out of 3 training iterations, we randomly crop images and augment text input with the following templates:\n• \"zoomed in photo of a V* [modifier] [class]\" • \"close up in photo of a V* [modifier] [class]\" • \"cropped in photo of a V* [modifier] [class]\"\nWe would like to note that for two pet categories (cat and dog), customizing without '[modifier]' token already offered good results.\nTextual Inversion. We train a single token for 2000 optimization steps on a batch size of 4. We used the text template from [6]." }, { "figure_ref": [], "heading": "B.2. 
Dataset", "publication_ref": [ "b16", "b21", "b0", "b9", "b1" ], "table_ref": [], "text": "Reference sets are collected from prior customization works. Wooden pot, tortoise plushy, cat, and dog from [12], ceramic bird, cat figurine, patterned teapot from [6], and pencil drawing from [17].\nSource images are collected from Imagen [22], eDiff-I [1], Imagic [10], Muse [4], Null-Text Inversion [16], and Text2Live [2]. We provide source-reference pairs used for quantitative comparisons in the supplementary material." }, { "figure_ref": [], "heading": "C. Additional Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.1. Prompt-to-Prompt", "publication_ref": [], "table_ref": [], "text": "The attention map editing operation Edit includes two sub-operations, namely prompt refinement and word swap. word swap refers to replacing cross-attention maps of words in the original prompt with other words, while prompt refinement refers to adding cross-attention maps of new words to the prompt while preserving attention maps of the common words. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements: This work was supported by the National Research Foundation of Korea (NRF) grants funded by the Korea government (Ministry of Science and ICT, MSIT) (2022R1A3B1077720), Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (2021-0-01343: AI Graduate School Program, SNU), and the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, Seoul National University in 2023." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Null-text inversion [16] addresses this issue by optimizing unconditional text embeddings, which take only a minute for a source image. Note that the diffusion model is not trained; therefore, the model maintains its knowledge." } ]
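The customization recipe above trains only the cross-attention key/value projections and the custom token embedding V* while freezing everything else. A minimal sketch of that parameter selection is given below, assuming a Stable Diffusion-style UNet with the common diffusers naming convention (`attn2` for cross-attention, `to_k`/`to_v` for its projections); these names are assumptions rather than details taken from the paper.

```python
import torch

def select_custom_edit_params(unet: torch.nn.Module,
                              custom_token_embedding: torch.nn.Parameter):
    """Return the trainable parameters for the customization step: the UNet's
    cross-attention key/value projections plus the V* token embedding.
    All other weights (including the text encoder) stay frozen."""
    trainable = []
    for name, param in unet.named_parameters():
        # `attn2` denotes cross-attention blocks and `to_k` / `to_v` their
        # key/value projections in the assumed diffusers-style naming.
        if "attn2.to_k" in name or "attn2.to_v" in name:
            param.requires_grad_(True)
            trainable.append(param)
        else:
            param.requires_grad_(False)
    custom_token_embedding.requires_grad_(True)
    trainable.append(custom_token_embedding)
    return trainable

# The selected parameters would then be optimized with the denoising objective
# of Eq. (1) on prompts such as "photo of a V* patterned teapot", together with
# the prior-preservation loss over the retrieved class images described above.
```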
2023-05-25
[ { "authors": "Yogesh Balaji; Seungjun Nah; Xun Huang; Arash Vahdat; Jiaming Song; Karsten Kreis; Miika Aittala; Timo Aila; Samuli Laine; Bryan Catanzaro", "journal": "", "ref_id": "b0", "title": "ediffi: Text-to-image diffusion models with an ensemble of expert denoisers", "year": "2022" }, { "authors": "Omer Bar-Tal; Dolev Ofri-Amar; Rafail Fridman; Yoni Kasten; Tali Dekel", "journal": "Springer", "ref_id": "b1", "title": "Text2live: Text-driven layered image and video editing", "year": "2022" }, { "authors": "Romain Beaumont", "journal": "", "ref_id": "b2", "title": "Clip retrieval: Easily compute clip embeddings and build a clip retrieval system with them", "year": "2022" }, { "authors": "Huiwen Chang; Han Zhang; Jarred Barber; Jose Maschinot; Lu Lezama; Ming-Hsuan Jiang; Kevin Yang; Murphy; Michael William T Freeman; Rubinstein", "journal": "", "ref_id": "b3", "title": "Muse: Text-to-image generation via masked generative transformers", "year": "2023" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b4", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "", "ref_id": "b5", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2022" }, { "authors": "Amir Hertz; Ron Mokady; Jay Tenenbaum; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", "ref_id": "b6", "title": "Prompt-to-prompt image editing with cross attention control", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b8", "title": "Classifier-free diffusion guidance", "year": "2022" }, { "authors": "Bahjat Kawar; Shiran Zada; Oran Lang; Omer Tov; Huiwen Chang; Tali Dekel; Inbar Mosseri; Michal Irani", "journal": "", "ref_id": "b9", "title": "Imagic: Text-based real image editing with diffusion models", "year": "2023" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b10", "title": "Auto-encoding variational bayes", "year": "2013" }, { "authors": "Nupur Kumari; Bingliang Zhang; Richard Zhang; Eli Shechtman; Jun-Yan Zhu", "journal": "", "ref_id": "b11", "title": "Multi-concept customization of textto-image diffusion", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b12", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Yuheng Li; Haotian Liu; Qingyang Wu; Fangzhou Mu; Jianwei Yang; Jianfeng Gao; Chunyuan Li; Yong Jae Lee", "journal": "", "ref_id": "b13", "title": "Gligen: Open-set grounded text-to-image generation", "year": "2023" }, { "authors": "Chenlin Meng; Yutong He; Yang Song; Jiaming Song; Jiajun Wu; Jun-Yan Zhu; Stefano Ermon", "journal": "", "ref_id": "b14", "title": "Sdedit: Guided image synthesis and editing with stochastic differential equations", "year": "2021" }, { "authors": "Ron Mokady; Amir Hertz; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", "ref_id": "b15", "title": "Null-text inversion for editing real images using guided diffusion models", "year": "2022" }, { "authors": "Utkarsh 
Ojha; Yijun Li; Jingwan Lu; Alexei A Efros; Yong ; Jae Lee; Eli Shechtman; Richard Zhang", "journal": "", "ref_id": "b16", "title": "Few-shot image generation via cross-domain correspondence", "year": "2021" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b17", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b18", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b19", "title": "Unet: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b20", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily Denton; Seyed Kamyar; Seyed Ghasemipour; Raphael Gontijo-Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Ross Cade W Gordon; Mehdi Wightman; Theo Cherti; Aarush Coombes; Clayton Katta; Mitchell Mullis; Wortsman", "journal": "", "ref_id": "b22", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "PMLR", "ref_id": "b23", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b24", "title": "Denois", "year": "" } ]
[ { "formula_coordinates": [ 1, 367.83, 627.68, 177.28, 30.2 ], "formula_id": "formula_0", "formula_text": "T t=1 E x0,ϵ [||ϵ -ϵ θ (x t , t, c)|| 2 ],(1)" }, { "formula_coordinates": [ 6, 320.32, 213.43, 164.24, 83.18 ], "formula_id": "formula_1", "formula_text": "• \"photo of a V* [modifier] [class]\" • \"rendering of a V* [modifier] [class]\" • \"illustration of a V* [modifier] [class]\" • \"depiction of a V* [modifier] [class]\" • \"rendition of a V* [modifier] [class]\"" }, { "formula_coordinates": [ 6, 320.32, 337.31, 189.42, 45.91 ], "formula_id": "formula_2", "formula_text": "• \"zoomed in photo of a V* [modifier] [class]\" • \"close up in photo of a V* [modifier] [class]\" • \"cropped in photo of a V* [modifier] [class]\"" } ]
Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models
Text-to-image diffusion models can generate diverse, high-fidelity images based on user-provided text prompts. Recent research has extended these models to support text-guided image editing. While text guidance is an intuitive editing interface for users, it often fails to capture the precise concept the user intends. To address this issue, we propose Custom-Edit, in which we (i) customize a diffusion model with a few reference images and then (ii) perform text-guided editing. Our key discovery is that customizing only language-relevant parameters with augmented prompts significantly improves reference similarity while maintaining source similarity. Moreover, we provide our recipe for each customization and editing process. We compare popular customization methods and validate our findings on two editing methods using various datasets.
Jooyoung Choi; Yunjey Choi; Yunji Kim; Junho Kim; Sungroh Yoon
[ { "figure_caption": "Figure 1 .1Figure 1. Our Custom-Edit allows high-fidelity text-guided editing, given a few references. Edited images with BLIP2 [13] captions show the limitation of textual guidance in capturing the finegrained appearance of the reference.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Our Custom-Edit consists of two processes: the customization process and the editing process. (a) Customization.We customize a diffusion model by optimizing only language-relevant parameters (i.e., custom embedding V* and attention weights) on a given set of reference images. We also apply the prior preservation loss to alleviate the language drift. (b) Editing. We then transform the source image to the output using the customized word. We leverage the P2P and Null-text inversion methods[7, 16] for this process.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Photo of a giraffe drinking from a blue bucket", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Source-Reference Trade-Off. Custom-Diffusion shows the best trade-off, indicating the effectiveness of training only language-relevant parameters. We exhibit qualitative comparisons and samples with various strengths in Sec. A.2.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Custom-Edit results are shown in Fig. A. A.1. Qualitative Comparisons Comparisons of customization methods on P2P are shown in Fig. B. Dreambooth fails to modify the source images. Textual Inversion results do not capture details of the references. Comparisons on SDEdit are shown in Fig. C. Similar to the results on P2P, Dreambooth and Textual Inversion fail to capture the detailed appearance of the reference.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure A .Figure B .Figure C .ABCPhoto of a womanPhoto of a V* cartoon person", "figure_data": "", "figure_id": "fig_5", "figure_label": "ABC", "figure_type": "figure" }, { "figure_caption": "2 Figure D .Figure E .2DEFigure D. Varying strength of P2P. Red box indicates the best sample.", "figure_data": "", "figure_id": "fig_6", "figure_label": "2DE", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "ing diffusion implicit models. In International Conference on Learning Representations, 2021. 11 [26] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In NeurIPS, 2019.", "figure_data": "1[27] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko-reit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and IlliaPolosukhin. Attention is all you need. Advances in neuralinformation processing systems, 30, 2017. 1[28] Lvmin Zhang and Maneesh Agrawala. Adding conditionalcontrol to text-to-image diffusion models. arXiv preprintarXiv:2302.05543, 2023. 4", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" } ]
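The evaluation above measures source and reference similarities with CLIP ViT-B/32. Below is a minimal sketch of such a measurement, assuming the OpenAI `clip` package; the paper does not specify its implementation, so this illustrates the metric rather than reproducing the authors' evaluation code.

```python
import torch
import clip                      # OpenAI CLIP package (assumed implementation)
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def clip_image_similarity(path_a: str, path_b: str) -> float:
    """Cosine similarity between CLIP image embeddings of two images,
    e.g. (edited, source) for source similarity or (edited, reference)
    for reference similarity."""
    feats = []
    for path in (path_a, path_b):
        image = preprocess(Image.open(path)).unsqueeze(0).to(device)
        feat = model.encode_image(image)
        feats.append(feat / feat.norm(dim=-1, keepdim=True))
    return (feats[0] * feats[1]).sum().item()
```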
[{"Category": "Supporting Evidence", "Citation": "[13]", "Explanation": "The cited work is a captioning model that is used to acquire a prompt for editing images. The citing paper finds that the prompt acquired from this model fails to capture the appearance of the reference image, which is a crucial factor in image editing."}, {"Category": "Methodological Basis", "Citation": "[6, 12, 21]", "Explanation": "The cited works are used in the two-step approach proposed in the citing paper to customize the model for image editing. The cited works provide the methods and techniques for customizing the model, which the citing paper builds upon to improve image editing performance."}, {"Category": "Extension or Continuation", "Citation": "[7,15,16]", "Explanation": "The cited works are text-guided editing methods that the citing paper utilizes in the second step of the two-step approach. The citing paper extends the use of these methods to support precise editing instructions and improve the appearance of reference images in image editing."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work, Stable Diffusion, is used as a text-to-image model in the citing paper to generate images from text conditions."}, {"Category": "Data Source", "Citation": "[18]", "Explanation": "The cited work, the CLIP text encoder, is used in the embedding process of the text condition in the diffusion model training."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, the VAE, is used in the diffusion model to downsample images for computation efficiency in the training process."}, {"Category": "Methodological Basis", "Citation": "[5,8,24,26]", "Explanation": "The cited works on diffusion models provide the basis for the training objective in the diffusion model used in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work on the diffusion model is used to train the model with the objective of reconstructing clean latent representations from perturbed ones given text conditions."}, {"Category": "Methodological Basis", "Citation": "[7,16]", "Explanation": "The cited works provide the P2P and Null-text inversion methods that the citing paper adopts in the process of generating text from images."}, {"Category": "Supporting Evidence", "Citation": "[22,1]", "Explanation": "The cited works, Imagen and eDiffi, have demonstrated the importance of leveraging language understanding capabilities of large language models in boosting the performance of text generation from images."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work, Custom-Diffusion, is a method that the citing paper builds upon to optimize the keys and values of cross-attention and the [rare token] parameters in the model."}, {"Category": "Extension or Continuation", "Citation": "[21]", "Explanation": "The cited work, Dreambooth, is extended by the citing paper to improve Custom-Diffusion for editing by augmenting the text input and training only the necessary parameters to save storage space."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work, CLIP-retrieval, is used to retrieve images and their captions from the LAION dataset to improve language understanding in the finetuning process of the model."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work, Prompt-to-Prompt, serves as the basis for the editing framework used in the 
citing paper to edit images by modifying source prompts."}, {"Category": "Supporting Evidence", "Citation": "[16]", "Explanation": "The cited work, Null-Text Inversion, is utilized in the citing paper to boost source preservation in the editing process, providing additional support for the research conducted in the study."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work, BLIP2, is used as a method to obtain source prompts in the citing paper, which is a key component in the process of image editing."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work, SDEdit, is used as a method to diffuse the image from t = 0 to t = \u03c4 and denoise it back, providing a specific technique for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work provides the source prompts used in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[7,16]", "Explanation": "The cited works on Stable Diffusion are used as a methodological basis for the design choices made in the citing paper for Custom-Edit, which allows fine-grained editing with textual prompts."}, {"Category": "Methodological Basis", "Citation": "[7,16]", "Explanation": "The cited works have noted that attention maps of Stable Diffusion are less accurate than those of Imagen, which the citing paper adopts in their research to understand the failure cases in text-to-image models."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The cited work on extended text-to-image models such as GLIGEN is discussed as a potential solution to the problem of turning rocks into the V* tortoise plushy in the second row, which the citing paper extends upon in their research to address the issue of complex scenes in text-to-image models."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "The cited work provides the wooden pot, tortoise plushy, cat, and dog images used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[6]", "Explanation": "The cited work provides the ceramic bird, cat figurine, and patterned teapot images used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[17]", "Explanation": "The cited work provides the pencil drawing image used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[22]", "Explanation": "The cited work provides the source images for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[1]", "Explanation": "The cited work provides the eDiff-I source images used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[10]", "Explanation": "The cited work provides the Imagic source images used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[4]", "Explanation": "The cited work provides the Muse source images used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work provides the Null-Text Inversion source images used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[2]", "Explanation": "The cited work provides the Text2Live source images used in the study conducted in the citing paper."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b1", "b5", "b6", "b7", "b8", "b9", "b11", "b12", "b13", "b15", "b16", "b17", "b27", "b13", "b11", "b28", "b29", "b13", "b11", "b12", "b17", "b16", "b28", "b27", "b1", "b2", "b26", "b27", "b19", "b31", "b32", "b13" ], "table_ref": [], "text": "In recent years, deep learning has made remarkable progress in computer vision, with models continually increasing in capability and capacity [1,2,3,4,5]. The philosophy behind prevailing approach to achieving better performance has been larger is better, as evidenced by the success of increasingly deep [2,6,7,8] and wide [9,10] models. Unfortunately, these cumbersome models with numerous parameters are difficult to be deployed on edge devices with limited computation resources, such as cell phones and autonomous vehicles. To overcome this challenge, knowledge distillation (KD) [11] and its variants [12,13,14,15,16,17,18] have been proposed to improve the performance of compact student models by transferring knowledge from larger teacher models during training. In this paper, we delve deep into this question and thoroughly study the crucial factors in deciding distillation performances. We point out the small data pitfall in current knowledge distillation literature: when evaluated on smallscale datasets, i.e., CIFAR-100 (50K training images), KD methods meticulously designed on such datasets can easily surpass vanilla KD [11]. However, when evaluated on large-scale datasets, i.e., ImageNet-1K (1M training images), vanilla KD achieves on par or even better results compared to other methods.\nTo break down this problem, we start by compensating for the limited data via training for longer iterations [28] on small-scale datasets. Albeit with longer schedule, meticulously designed KD methods still outperform vanilla KD by a large margin. It follows that large-scale datasets are necessary for vanilla KD to reach its peak.\nWe further investigate the crucial factors in deciding KD performances and carefully study two key elements, i.e., training strategy and model capacity. In terms of different training strategies, we have made the following observations: (i) By evaluating the meticulously designed methods [14,15] which perform well on small-scale benchmarks under different training recipes [12,29,30] on largescale datasets such as ImageNet [23], it is evident that with the enhancement of data augmentation techniques and longer training iterations, the gap between vanilla KD [11] and other carefully designed KD methods gradually diminishes. (ii) Our experiments also demonstrate that logits-based methods [11,14,15] outperform hint-based methods [12,13,18,17] in terms of generalizability. With the growing magnitude of datasets and models, teacher and student models possess varying capacities to handle intricate distributions. Consequently, the utilization of the hint-based approach, wherein the student imitates the intermediate features of the teacher, becomes less conducive to achieving satisfactory results.\nWith regard to model capacity, we compare teacher-student pairs of different scales, e.g., using Res34 to teach Res18 vs. using Res152 to teach Res50. The outcome reveals that vanilla KD ends up achieving on par performance with the best carefully designed methods, indicating that the impact of model capacity on the effectiveness of knowledge distillation is in fact small. 
Throughout our study, we emphasize that due to small data pitfall, the power of vanilla KD is significantly underestimated. In the meantime, meticulously designed distillation methods may instead become suboptimal when given stronger training strategies [29,28] and larger datasets [31]. Furthermore, directly integrating vanilla KD to ResNet-50 [2], ViT-Tiny, ViT-Small [3], and Con-vNeXtV2 [27] architectures leads to an accuracy of 83.1%, 78.1%, 84.3%, and 85.0% on ImageNet respectively, surpassing the best results reported in the literature [28] without additional bells and whistles. Our results provide valid evidence for the potential and power of vanilla KD and indicate its practical value. In addition, it bears further reflection whether it is appropriate to design and evaluate KD approaches on small-scale datasets. Finally, we demonstrate that improving the backbone architecture with higher performance on ImageNet can also significantly enhance downstream tasks such as object detection and instance segmentation [20,32,33].\nAlthough this work does not propose a novel distillation method, we believe our identification of small data pitfall and a series of analyzes based on this would provide valuable insights for the vision community in the field of knowledge distillation and the pursuit of new state-of-the-art results under more practical scenarios. Moreover, we anticipate that the released checkpoints of commonly used architectures will facilitate further research on downstream tasks. KD [11] can achieve on par result to the state-of-the-art approaches, i.e., DKD [15] and DIST [14]. However, this phenomenon is not observed on small-scale dataset, revealing the small data pitfall that underestimates the power of vanilla KD. The symbol ∆ represents the accuracy gap between vanilla KD and the best result achieved by other approaches (marked with gray ). \"previous recipe\" refers to results reported in original papers, while \"stronger recipe\" refers to results obtained through our enhanced training strategy. 2 Small data pitfall: limited performance of vanilla KD on small-scale dataset" }, { "figure_ref": [], "heading": "Review of knowledge distillation", "publication_ref": [ "b13", "b11", "b12", "b17", "b16", "b11" ], "table_ref": [], "text": "Knowledge distillation (KD) techniques can be broadly categorized into two types based on the source of information used by the pre-trained teacher model: those that utilize the teacher's output probabilities (logits-based) [15,14,11] and those that utilize the teacher's intermediate representations (hint-based) [12,13,18,17]. Logits-based approaches leverage teacher outputs as auxiliary signals to train a smaller model, known as the student:\nL kd = αD cls (p s , y) + (1 -α)D kd (p s , p t ),(1)\nwhere p s and p t are logits of the student and the teacher. y is the one-hot ground truth label. D cls and D kd are classification loss and distillation loss, such as cross-entropy and KL divergence, respectively. The hyper-parameter α determines the balance between two loss terms. For convenience, we set α to 0.5 in all subsequent experiments.\nBesides the logits, intermediate hints (features) [12] can also be used for KD. Considering a student feature F s and a teacher feature F t , hint-based distillation is achieved as follows:\nL hint = D hint (T s (F s ), T t (F t )),(2)\nwhere T s and T t are transformation modules for aligning two features. D hint is a measurement of feature divergence, e.g., l 1 or l 2 -norm. 
In common practice, the hint-based loss is typically used in conjunction with the classification loss, and we adhere to this setting in our experiments." }, { "figure_ref": [ "fig_0" ], "heading": "Vanilla KD can not achieve satisfactory results on small-scale dataset", "publication_ref": [ "b27", "b13", "b27", "b18" ], "table_ref": [ "tab_0", "tab_0" ], "text": "Recently, many studies have used simplified evaluation protocols that involve small models or datasets. However, there is a growing concern [28] about the limitations of evaluating KD methods solely in such small-scale settings, as many real-world applications involve larger models and datasets. In this section, we extensively investigate the impact of using small-capacity models and small-scale datasets on different KD methods. To provide a comprehensive analysis, we specifically compare vanilla KD with two state-of-the-art logits-based KD approaches, i.e., DKD [15] and DIST [14]. Different from [28] which compresses large models to Res50, our primary focus is to ascertain whether the unsatisfactory performance of vanilla KD can be attributed to small student models or small-scale datasets.\nImpact of limited model capacity. We conduct experiments using two commonly employed teacherstudent pairs: Res34-Res18 and Res50-MobileNetV2. The corresponding results are shown in Table 1a. Previously, these models were trained using the SGD optimizer for 90 epochs (\"Original\" in table). However, to fully explore the power of KD approaches, we leverage a stronger training Impact of small dataset scale. To investigate the impact of small dataset scale on the performance of vanilla KD, we conduct experiments using the widely used CIFAR-100 dataset [19]. Similarly, we design a stronger training strategy by extending the training epochs from 240 to 2400 and introduce additional data augmentation schemes. As shown in Table 1b, despite the improved strategy leads to better results for all three approaches again, there still remains a performance gap between vanilla KD and other baselines. Notably, the accuracy gap even increases from 1.31% to 2.17% for the Res56-Res20 teacher-student pair. These observations clearly indicate that the underestimation of vanilla KD is not solely attributed to insufficient training, but also to the small-scale dataset.\nDiscussion. CIFAR-100 is widely adopted as a standard benchmark for evaluating most distillation approaches. However, our experimental results shows that the true potential of vanilla KD has been underestimated on such small-scale datasets. We refer to this phenomenon as the small data pitfall, where performance improvements observed on small-scale datasets do not necessarily translate to more complex real-world datasets. As shown in Figure 1, the performance gap between vanilla Kd and other approaches gradually diminishes with the increasing scale of benchmarks. Considering the elegantly simple of vanilla KD, more extensive experiments are required to explore its full potential.\n3 Evaluate the power of vanilla KD on large-scale dataset" }, { "figure_ref": [], "heading": "Experimental setup", "publication_ref": [ "b18" ], "table_ref": [], "text": "In order to fully understand and tap into the potential of vanilla KD, we conduct experiments based on large-scale datasets and strong training strategies. The experimental setup are as following.\nDatasets. 
We mainly evaluate KD baselines on the larger and more complex ImageNet-1K dataset [23], which comprises 1.2 million training samples and 50,000 validation samples, spanning across 1000 different categories. Compared to CIFAR-100 dataset [19], ImageNet-1K provides a closer approximation to real-world data distribution, allowing for a more comprehensive evaluation." }, { "figure_ref": [], "heading": "Models.", "publication_ref": [ "b1", "b33", "b2", "b24", "b26", "b1", "b24", "b28", "b24", "b13", "b17", "b15", "b16", "b12" ], "table_ref": [ "tab_2" ], "text": "In our experiments, we mainly utilize Res50 [2] as the student model and BEiTv2-L [34] as the teacher model. This choice is motivated by teacher's exceptional performance, as it currently stands as the best available open-sourced model in terms of top-1 accuracy when using an input resolution of 224x224. In addition, we also include ViT [3,25] and ConvNeXtV2 [27] as student models, and ResNet [2] and ConvNeXt [5] as teacher models. Specifically, when training ViT student, we do not use an additional distillation token [25].\nTraining strategy. We leverage two training strategies based on recent work [29] that trains highperforming models from scratch using sophisticated training schemes, such as more complicated data augmentation and more advanced optimization methods [25]. Strategy \"A1\" is slightly stronger than \"A2\" as summarized in Table 2, and we use it with longer training schedule configurations.\nBaseline distillation methods. To make a comprehensive comparison, we adopt several recently proposed KD methods as the baselines, such as logits-based vanilla KD [11], DKD [15], and DIST [14], hint-based CC [18], RKD [16], CRD [17], and ReviewKD [13]. " }, { "figure_ref": [ "fig_2" ], "heading": "Logits-based methods consistently outperform hint-based methods", "publication_ref": [ "b1", "b34", "b35" ], "table_ref": [ "tab_3" ], "text": "In this section, we conduct a comparative analysis between logits-based and hint-based distillation approaches. We utilize the widely adopted Res50 architecture [2] as the student model, with Res152 and BEiTv2-L serving as the teachers. The student models are distilled with two different epoch configurations: 300 and 600 epochs. The evaluation results, along with the total GPU training time for each student, are presented in Table 3. To gain a deeper understanding of the generalization of student models beyond the ImageNet-1K dataset, we extend the evaluation to include ImageNet-Real [35] and ImageNet-V2 matched frequency [36], which provide separate test sets.\nAfter 300 epochs of training using strategy A2, all hint-based distillation methods exhibit inferior results compared to the logits-based baselines. Despite an extended training schedule with the stronger strategy A1, a notable performance gap remains between the two distillation categories. Moreover, it is worth noticing that hint-based approaches necessitate significantly more training time, highlighting their limitations in terms of both effectiveness and efficiency. Discussion. Our experiments demonstrate that logits-based methods consistently outperform hint-based methods in terms of generalizability. We speculate that this discrepancy can be attributed to the different capabilities of the teacher and student models when dealing with complex distributions. The hint-based approach mimicking the intermediate features of the teacher model becomes less suitable, hindering it from achieving satisfactory results. 
Furthermore, hint-based methods may encounter difficulties when using heterogeneous teacher and student architectures due to distinct learned representations, which impede the feature alignment process. To analyze this, we perform a centered kernel alignment (CKA) analysis [37] to compare the features extracted by Res50 with those of Res152 and BEiTv2-L. As depicted in Figure 2, there is a noticeable dissimilarity between the intermediate features of BEiTv2-L and those of the Res50 student, while the Res152 features exhibit a closer resemblance to Res50. In addition to the suboptimal performance, the CKA figure further highlights the incompatibility of hint-based approaches with heterogeneous scenarios, thereby limiting their practical applicability in real-world settings. Consequently, we focus solely on logits-based distillation approaches in subsequent experiments." }, { "figure_ref": [], "heading": "Vanilla KD preserves strength with increasing teacher capacity", "publication_ref": [ "b33" ], "table_ref": [ "tab_3" ], "text": "Table 3 also shows the comparison between vanilla KD and the other two logits-based baselines using a stronger heterogeneous BEiTv2-L teacher [34]. This teacher has achieved the state-of-the-art top-1 accuracy on ImageNet-1K validation among open-sourced models.
In all the evaluated settings, the three logits-based approaches exhibit similar performance. After being distilled for 300 epochs with Res152 as the teacher, vanilla KD achieves a top-1 accuracy of 80.55% on the ImageNet-1K validation set, which is only 0.06% lower than the best student performance achieved by DIST. With an extended training duration of 600 epochs, the student trained using vanilla KD even achieves the top performance at 81.33%. When taught by BEiTv2-L, vanilla KD obtains 80.89% top-1 accuracy, surpassing both DKD and DIST. Even in the presence of distribution shift, vanilla KD maintains comparable performance to other approaches on the ImageNet-Real and ImageNet-V2 matched frequency benchmarks. Additionally, further extending the training schedule to 1200 epochs improves vanilla KD and results in a performance on par with DKD and DIST, thus highlighting its effectiveness in delivering impressive results.
These results clearly indicate that vanilla KD has been underestimated in previous studies because they are designed and evaluated on small-scale datasets. When trained with both powerful homogeneous and heterogeneous teachers, vanilla KD achieves competitive performance that is comparable to state-of-the-art methods while still maintaining its simplicity." }, { "figure_ref": [], "heading": "Robustness of vanilla KD across different training components", "publication_ref": [ "b28" ], "table_ref": [ "tab_4", "tab_5" ], "text": "In this section, we perform ablation studies to assess the robustness of vanilla KD under different training settings. Specifically, we compare various loss functions used in vanilla KD and explore different hyper-parameter configurations for the three logits-based distillation methods mentioned earlier.
Ablation on loss functions. Since our strategies [29] focus on training the student using one-hot labels and the binary cross-entropy (BCE) loss function, we also evaluate a binary version of the KL divergence as an alternative for learning from soft labels:
L_{BKL} = \sum_{i \sim Y} [ -p^t_i \log(p^s_i / p^t_i) - (1 - p^t_i) \log((1 - p^s_i) / (1 - p^t_i)) ], \quad (3)
where Y denotes the label space.
We compare various combinations of loss functions for vanilla KD and present the results in Table 4.
Surprisingly, the binary cross-entropy (BCE) loss outperforms the multi-class cross-entropy loss in terms of accuracy. However, we find that the binary version of the KL divergence, referred to as the BKL loss, yields suboptimal performance. Moreover, the results indicate that training with or without supervision from hard labels has minimal impact on performance, even with a longer training duration. Consequently, we maintain hard label supervision [11, 15] throughout our experiments.
Ablation on training hyper-parameters. To ensure a thorough and fair comparison, we investigate various configurations of learning rate and weight decay for DKD, DIST, and vanilla KD using strategy \"A2\". The results, presented in Table 5, reveal an intriguing finding: vanilla KD outperforms the other methods across all configurations. In addition, it is noteworthy that vanilla KD consistently performs close to its best across different learning rate and weight decay configurations, demonstrating its robustness and ability to maintain competitive results under various settings." }, { "figure_ref": [], "heading": "Distillation results with different teacher-student pairs", "publication_ref": [ "b37", "b1", "b2", "b24", "b26", "b27", "b27", "b24" ], "table_ref": [ "tab_6", "tab_8", "tab_7", "tab_10" ], "text": "In addition to the BEiTv2-L teacher and the Res50 student used in previous experiments, we introduce additional combinations of teacher-student pairs: ConvNeXt-XL-Res50, BEiTv2-L-DeiT-S, and ConvNeXt-XL-DeiT-S. We conduct experiments to compare the performance of vanilla KD with that of DKD and DIST on these combinations. Each student model is trained for 300 epochs using strategy \"A2\". We evaluate multiple hyper-parameter configurations, including learning rate and weight decay, and select the configuration with the highest performance for each method to ensure a fair comparison. Corresponding results are presented in Table 6. Consistent with the findings from previous experiments, vanilla KD achieves on-par or slightly better performance compared to DKD and DIST on all combinations. These results demonstrate the potential of vanilla KD to achieve state-of-the-art performance across a diverse range of teacher-student combinations.
Marginal gains when the teacher is sufficiently large. Although vanilla KD can achieve state-of-the-art results across various architectures in our experiments, it is not without limitations. For example, we observe that the Res50 student, trained with a BEiTv2-B teacher, achieves similar or even better results than that trained with a larger BEiTv2-L. This observation is consistent with the analysis presented in [38]. To gain deeper insights into the impact of the teacher model when working with large-scale datasets, we conduct additional experiments in this section. We evaluate the impact of the BEiTv2-B teacher and the BEiTv2-L teacher with all the logits-based baselines, using three different epoch configurations. As shown in Table 9, the results demonstrate a trend of performance degradation as the capacity of the teacher model increases. This indicates that vanilla KD is not able to benefit from a larger teacher model, even when trained on a large-scale dataset.
Sustained benefits when enlarging the teacher's training set. We also evaluate the influence of the training set used for teacher models on student performance. We compare two BEiTv2-B teacher models: one pre-trained on ImageNet-1K and the other pre-trained on ImageNet-21K. Subsequently, we train two distinct students using strategy \"A1\" for 1200 epochs. The evaluation results are presented in Table 7. The results clearly demonstrate that when the teacher is trained on a larger-scale dataset, it positively affects the performance of the student.
We hypothesize that a teacher pre-trained on a larger dataset has a better understanding of the data distribution, which subsequently facilitates student learning and leads to improved performance.
Extending the training schedule. To examine the benefit of longer training, we select Res50 [2], ViT-T, ViT-S [3,25], and ConvNeXtV2-T [27] as students and train them using strategy \"A1\". The corresponding results are shown in Table 8. All students achieve improved performance with longer training epochs. This trend is also consistent with the pattern observed in [28]. When trained with 4800 epochs, the students achieve new state-of-the-art performance, surpassing previous literature [28,25]. Specifically, Res50, ViT-S, ViT-T, and ConvNeXtV2-T achieve 83.08%, 84.33%, 78.11% and 85.03% top-1 accuracy, respectively." }, { "figure_ref": [], "heading": "Comparison with masked image modeling", "publication_ref": [ "b38", "b39", "b40", "b26" ], "table_ref": [ "tab_9" ], "text": "The masked image modeling (MIM) framework [39,40,41] has shown promising results in training models with excellent performance. However, it is worth noting that the pre-training and fine-tuning process of MIM can be time-consuming. In this section, we conduct a comparison between MIM and vanilla KD, taking into account both accuracy and time consumption. Specifically, we compare vanilla KD with the FCMIM framework proposed in ConvNeXtV2 [27]. We utilize a ConvNeXtV2-T student model and a BEiTv2-B teacher model. We train two student models using vanilla KD: one for 300 epochs and another for 1200 epochs.
The results are presented in Table 10, highlighting the efficiency of vanilla KD in training exceptional models. Specifically, even with a training duration of only 300 epochs, vanilla KD achieves an accuracy of 84.42%, surpassing MIM by 0.53%, while consuming only one-fifth of the training time. Moreover, when extending the training schedule to 1200 epochs, vanilla KD achieves a performance of 85.03%, surpassing the best-performing MIM-trained model by 1.14%. These results further demonstrate the effectiveness and time efficiency of vanilla KD compared to the MIM framework." }, { "figure_ref": [], "heading": "Transferring to downstream tasks", "publication_ref": [ "b19", "b31", "b32" ], "table_ref": [ "tab_11" ], "text": "To assess the transfer learning performance of the students distilled on ImageNet, we conduct experiments on object detection and instance segmentation tasks using the COCO benchmark [20]. We adopt Res50 and ConvNeXtV2-T as backbones, and initialize them with the distilled checkpoints. The detectors in our experiments are the commonly used Mask RCNN [32] and Cascade Mask RCNN [33].
Table 11 reports the corresponding results. When utilizing our distilled Res50 model as the backbone, Mask RCNN outperforms the best counterpart using a backbone trained from scratch by a significant margin (+3.1% box AP using the \"1×\" schedule and +2.1% box AP using the \"2×\" schedule). Similarly, when the backbone architecture is ConvNeXtV2-T, using the model trained by vanilla KD consistently leads to improved performance. These results demonstrate the efficient transferability of the performance improvements achieved by vanilla KD to downstream tasks." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b13", "b41", "b11", "b42", "b43", "b45", "b17", "b15", "b46", "b47", "b12", "b48", "b49", "b16", "b50", "b2", "b24", "b51", "b52", "b53", "b54", "b56", "b57", "b37", "b23", "b38", "b2", "b58", "b5", "b6", "b7", "b27", "b59", "b60", "b61", "b62", "b27" ], "table_ref": [], "text": "Knowledge distillation (KD).
Previous KD methods can be broadly categorized into two classes: logits-based methods and hint-based methods. Hinton et al. [11] first propose to train the student to learn from the logits of the teacher. This method is referred to as vanilla KD. Recently, researchers have introduced improvements to the vanilla KD method by enhancing the dark knowledge [15], relaxing the constraints [14], or using sample factors to replace the unified temperature [42]. Hint-based methods train the student to imitate the learned representations of the teacher. These methods leverage various forms of knowledge, such as model activations [12,43,44], attention maps [45], the flow of solution procedure [46], or feature correlations among samples [18,16]. To facilitate the alignment of features between the teacher and the student, techniques including designing new projectors [47,48] and more complex alignment approaches [13,49,50] have been proposed. Besides directly mimicking the hint knowledge, contrastive learning [17] and masked modeling [51] have also been explored.
Recently, the success of vision transformers [3] has prompted the development of specific distillation approaches tailored to this architecture, including introducing additional distillation tokens [25,52] or performing distillation at the fine-grained patch level [53]. Additionally, vision transformer architectures have also been leveraged to facilitate KD between CNN models [54,55]. Furthermore, some works attempt to address specific challenges in KD, such as distilling from ensembles of teachers [56,57] and investigating the impact of different teachers on student performance [58,38].
Large-scale training. Recent developments in large-scale deep learning [24,39] suggest that scaling up data [3,59], model size [6,7,8], and training iterations [28] can improve model performance, achieve state-of-the-art results [60] across diverse tasks, and enable better adaptability to real-world scenarios. For example, in computer vision, numerous large models have achieved outstanding results on tasks such as classification, detection, and segmentation. These models are usually trained on extensive labeled image datasets, including ImageNet-21K [23] and JFT [11,61]. Similarly, in natural language processing, training models on massive text corpora [62,63] has led to breakthroughs in language understanding, machine translation, and text generation tasks. Inspired by this scaling trend, it becomes crucial to evaluate KD approaches in more practical scenarios. After all, the goal of KD is to enhance the performance of student networks.
The closest work to ours is that of Beyer et al. [28]. They identify several design choices to make knowledge distillation work well in model compression. Taking it a step further, we point out the small data pitfall in current distillation literature: small-scale datasets limit the power of vanilla KD. We thoroughly study the crucial factors in deciding distillation performance. In addition, we have distilled several new state-of-the-art backbone architectures, further expanding the repertoire of high-performing models in the vision arsenal." }, { "figure_ref": [], "heading": "Conclusion and discussion", "publication_ref": [], "table_ref": [ "tab_0", "tab_12" ], "text": "Motivated by the growing emphasis on scaling up for remarkable performance gains in deep learning, this paper revisits several knowledge distillation (KD) approaches, spanning from small-scale to large-scale dataset settings.
Our analysis has disclosed the small data pitfall present in previous KD methods, wherein the advantages offered by most KD variants over vanilla KD diminish when applied to large-scale datasets. Equipped with enhanced data augmentation techniques and larger datasets, vanilla KD emerges as a viable contender, capable of achieving performance comparable to the most advanced KD variants. Remarkably, our vanilla KD-trained ResNet-50, ViT-S, and ConvNeXtV2-T models attain new state-of-the-art performances without any additional modifications.
While the application of knowledge distillation on large-scale datasets yields significant performance improvements, it is important to acknowledge the associated longer training times, which inevitably contribute to increased carbon emissions. These heightened computational resource requirements may impede practitioners from thoroughly exploring and identifying optimal design choices. Future endeavors should focus on addressing the limitations posed by longer training times and carbon emissions, and on exploring new avenues to further enhance the efficacy and efficiency of distillation methods.
A Impact of small-scale dataset
A.1 Details of the \"stronger recipe\"
In Table 1 of our main paper, we evaluate the impact of limited model capacity and small-scale datasets by comparing the results of using the \"previous training recipe\" and our \"stronger recipe\". We summarize the details of the \"stronger recipe\" in Table 12. " }, { "figure_ref": [ "fig_0" ], "heading": "A.2 Numerical results", "publication_ref": [ "b13" ], "table_ref": [ "tab_13" ], "text": "In Figure 1 of our main paper, we present a comparison of performance gaps among vanilla KD and two logits-based baselines, i.e., DKD [15] and DIST [14], on datasets of varying scales, to demonstrate the underestimation of vanilla KD on small-scale datasets. The benchmarks used for this comparison include CIFAR-100, ImageNet-1K, and two subsets of ImageNet-1K. These subsets are obtained by stratified sampling from the training set of ImageNet-1K with fractions of 30% (0.38M training images) and 60% (0.77M training images), respectively. Table 13 reports the numerical results of the comparison. On the CIFAR-100 dataset, students are trained for 2400 epochs with strategy \"C\", while on the ImageNet-1K dataset and its subsets, i.e., 30%/60%/full, students are trained for 4000/2000/1200 epochs (the same total number of iterations) with strategy \"A1\". " }, { "figure_ref": [], "heading": "B Scaling up to ImageNet-22K", "publication_ref": [], "table_ref": [ "tab_14" ], "text": "To further investigate the performance of vanilla KD on datasets of larger scale, we conduct experiments on ImageNet-22K. Our experiment follows a two-stage KD setting. In the first stage, KD is performed exclusively using samples from ImageNet-21K, where the student model learns solely from the soft labels provided by the ImageNet-1K pre-trained teacher (the 21K images are classified into the 1K label space, avoiding re-training a new classification head). In the second stage, distillation is performed on ImageNet-1K as in our previous experiments.
The results are summarized in Table 14. Our intuition in utilizing additional samples from ImageNet-21K is to provide the student model with a broader range of data distributions. However, we found that when trained for an equal number of iteration steps, the performance on ImageNet-22K was inferior to that achieved on ImageNet-1K.
We hypothesize that this discrepancy arises from the simplistic approach of classifying 21K images into the 1K categories through the pre-trained teacher, which may result in an out-of-distribution problem. Moreover, following conventional methods (22K pre-training then 1K fine-tuning), such as re-training a classification head for both the teacher and the student, might offer some assistance. However, this approach would significantly increase the computational cost associated with the entire knowledge distillation process, which goes against the original intention of KD to directly leverage the existing teacher model. In conclusion, more sophisticated approaches are required to fully harness the potential of the additional out-of-distribution samples. " }, { "figure_ref": [], "heading": "C Ablation of hyper-parameters in baseline methods", "publication_ref": [], "table_ref": [ "tab_15", "tab_8" ], "text": "In our main paper, the results of DKD and DIST are obtained using the hyper-parameters from their original implementations. The loss functions of DKD and DIST are as follows:
L_{DKD} = \alpha_{DKD} \cdot \mathrm{TCKD} + \beta_{DKD} \cdot \mathrm{NCKD}, \qquad L_{DIST} = L_{cls} + \beta_{DIST} L_{inter} + \gamma_{DIST} L_{intra}. \quad (4)
By default, the hyper-parameters α_DKD, β_DKD, β_DIST, and γ_DIST are set to 1, 2, 1, and 1, respectively. In order to provide a fair evaluation of these logits-based baselines, we conduct experiments to study the impact of different settings of these hyper-parameters. We use the training strategy \"A2\" and train all models for 300 epochs. The results of the ablation study are presented in Table 15. By modifying the hyper-parameters, the best accuracy of DKD and DIST improves to 80.96% and 81.01% respectively, which is an increase of 0.20% and 0.07% compared to their original settings. However, vanilla KD achieves a comparable performance of 80.96% top-1 accuracy, without any modifications to its hyper-parameters, as shown in Table 9 of our main paper. " } ]
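The logits-based objective in Eq. (1) that the sections above keep returning to is simple to write down in code. The following is a minimal PyTorch-style sketch, not the authors' released implementation: it assumes cross-entropy for the hard-label term D_cls and temperature-scaled KL divergence for the soft-label term D_kd, and the weight alpha and temperature T are illustrative placeholders rather than values taken from the paper.

import torch
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, targets, alpha=0.5, temperature=1.0):
    """Minimal sketch of Eq. (1): alpha * D_cls(p_s, y) + (1 - alpha) * D_kd(p_s, p_t).

    Cross-entropy is assumed for the hard-label term and temperature-scaled KL
    divergence for the soft-label term; alpha and temperature are illustrative.
    """
    # Hard-label term: standard cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, targets)

    # Soft-label term: KL divergence between temperature-scaled distributions.
    log_p_s = F.log_softmax(student_logits / temperature, dim=1)
    p_t = F.softmax(teacher_logits / temperature, dim=1)
    soft_loss = F.kl_div(log_p_s, p_t, reduction="batchmean") * (temperature ** 2)

    return alpha * hard_loss + (1.0 - alpha) * soft_loss

# Toy usage with random logits (8 samples, 1000 classes, as in ImageNet-1K).
if __name__ == "__main__":
    s = torch.randn(8, 1000)
    t = torch.randn(8, 1000)
    y = torch.randint(0, 1000, (8,))
    print(vanilla_kd_loss(s, t, y).item())

Note that in the paper's strongest recipes the hard-label term is a BCE loss (Table 2), so the cross-entropy call here is only one possible instantiation of D_cls.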
2023-05-25
[ { "authors": "Alex Krizhevsky; Ilya Sutskever; Geoffrey E Hinton", "journal": "Communications of the ACM", "ref_id": "b0", "title": "Imagenet classification with deep convolutional neural networks", "year": "2017" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b1", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b2", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Jianyuan Guo; Kai Han; Han Wu; Yehui Tang; Xinghao Chen; Yunhe Wang; Chang Xu", "journal": "", "ref_id": "b3", "title": "Cmt: Convolutional neural networks meet vision transformers", "year": "2022" }, { "authors": "Zhuang Liu; Hanzi Mao; Chao-Yuan Wu; Christoph Feichtenhofer; Trevor Darrell; Saining Xie", "journal": "", "ref_id": "b4", "title": "A convnet for the 2020s", "year": "2022" }, { "authors": "Christian Szegedy; Wei Liu; Yangqing Jia; Pierre Sermanet; Scott Reed; Dragomir Anguelov; Dumitru Erhan; Vincent Vanhoucke; Andrew Rabinovich", "journal": "", "ref_id": "b5", "title": "Going deeper with convolutions", "year": "2015" }, { "authors": "Hugo Touvron; Matthieu Cord; Alexandre Sablayrolles; Gabriel Synnaeve; Hervé Jégou", "journal": "", "ref_id": "b6", "title": "Going deeper with image transformers", "year": "2021" }, { "authors": "Hongyu Wang; Shuming Ma; Li Dong; Shaohan Huang; Dongdong Zhang; Furu Wei", "journal": "", "ref_id": "b7", "title": "Deepnet: Scaling transformers to 1,000 layers", "year": "2022" }, { "authors": "Saining Xie; Ross Girshick; Piotr Dollár; Zhuowen Tu; Kaiming He", "journal": "", "ref_id": "b8", "title": "Aggregated residual transformations for deep neural networks", "year": "2017" }, { "authors": "Sergey Zagoruyko; Nikos Komodakis", "journal": "", "ref_id": "b9", "title": "Wide residual networks", "year": "2016" }, { "authors": "Geoffrey E Hinton; Oriol Vinyals; Jeffrey Dean", "journal": "", "ref_id": "b10", "title": "Distilling the knowledge in a neural network", "year": "2009" }, { "authors": "Adriana Romero; Nicolas Ballas; Samira Ebrahimi Kahou; Antoine Chassang; Carlo Gatta; Yoshua Bengio", "journal": "", "ref_id": "b11", "title": "Fitnets: Hints for thin deep nets", "year": "2014" }, { "authors": "Pengguang Chen; Shu Liu; Hengshuang Zhao; Jiaya Jia", "journal": "", "ref_id": "b12", "title": "Distilling knowledge via knowledge review", "year": "2021" }, { "authors": "Tao Huang; Shan You; Fei Wang; Chen Qian; Chang Xu", "journal": "", "ref_id": "b13", "title": "Knowledge distillation from a stronger teacher", "year": "2022" }, { "authors": "Borui Zhao; Quan Cui; Renjie Song; Yiyu Qiu; Jiajun Liang", "journal": "", "ref_id": "b14", "title": "Decoupled knowledge distillation", "year": "2009" }, { "authors": "Wonpyo Park; Dongju Kim; Yan Lu; Minsu Cho", "journal": "", "ref_id": "b15", "title": "Relational knowledge distillation", "year": "2019" }, { "authors": "Yonglong Tian; Dilip Krishnan; Phillip Isola", "journal": "", "ref_id": "b16", "title": "Contrastive representation distillation", "year": "2020" }, { "authors": "Baoyun Peng; Xiao Jin; Jiaheng Liu; Dongsheng Li; Yichao Wu; Yu Liu; Shunfeng Zhou; Zhaoning Zhang", "journal": "", "ref_id": "b17", "title": "Correlation congruence for knowledge distillation", "year": "2019" }, { 
"authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b18", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "", "ref_id": "b19", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Marius Cordts; Mohamed Omran; Sebastian Ramos; Timo Rehfeld; Markus Enzweiler; Rodrigo Benenson; Uwe Franke; Stefan Roth; Bernt Schiele", "journal": "", "ref_id": "b20", "title": "The cityscapes dataset for semantic urban scene understanding", "year": "2016" }, { "authors": "Eirikur Agustsson; Radu Timofte", "journal": "", "ref_id": "b21", "title": "Ntire 2017 challenge on single image super-resolution: Dataset and study", "year": "2017" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b22", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b23", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Hugo Touvron; Matthieu Cord; Matthijs Douze; Francisco Massa; Alexandre Sablayrolles; Hervé Jégou", "journal": "", "ref_id": "b24", "title": "Training data-efficient image transformers & distillation through attention", "year": "2021" }, { "authors": "Xiaohua Zhai; Alexander Kolesnikov; Neil Houlsby; Lucas Beyer", "journal": "", "ref_id": "b25", "title": "Scaling vision transformers", "year": "2022" }, { "authors": "Sanghyun Woo; Shoubhik Debnath; Ronghang Hu; Xinlei Chen; Zhuang Liu; In So Kweon; Saining Xie", "journal": "", "ref_id": "b26", "title": "Convnext v2: Co-designing and scaling convnets with masked autoencoders", "year": "2023" }, { "authors": "Lucas Beyer; Xiaohua Zhai; Amélie Royer; Larisa Markeeva; Rohan Anil; Alexander Kolesnikov", "journal": "", "ref_id": "b27", "title": "Knowledge distillation: A good teacher is patient and consistent", "year": "2022" }, { "authors": "Ross Wightman; Hugo Touvron; Hervé Jégou", "journal": "", "ref_id": "b28", "title": "Resnet strikes back: An improved training procedure in timm", "year": "2021" }, { "authors": "Huan Wang; Suhas Lohit; Michael N Jones; Yun Fu", "journal": "", "ref_id": "b29", "title": "What makes a\" good\" data augmentation in knowledge distillation-a statistical perspective", "year": "2022" }, { "authors": "Andreas Steiner; Alexander Kolesnikov; Xiaohua Zhai; Ross Wightman; Jakob Uszkoreit; Lucas Beyer", "journal": "", "ref_id": "b30", "title": "How to train your vit? 
data, augmentation, and regularization in vision transformers", "year": "2021" }, { "authors": "Kaiming He; Georgia Gkioxari; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b31", "title": "Mask r-cnn", "year": "2017" }, { "authors": "Zhaowei Cai; Nuno Vasconcelos", "journal": "", "ref_id": "b32", "title": "Cascade r-cnn: Delving into high quality object detection", "year": "2018" }, { "authors": "Zhiliang Peng; Li Dong; Hangbo Bao; Qixiang Ye; Furu Wei", "journal": "", "ref_id": "b33", "title": "BEiT v2: Masked image modeling with vector-quantized visual tokenizers", "year": "2022" }, { "authors": "Lucas Beyer; Olivier J Hénaff; Alexander Kolesnikov; Xiaohua Zhai; Aäron Van Den Oord", "journal": "", "ref_id": "b34", "title": "Are we done with imagenet?", "year": "2020" }, { "authors": "Benjamin Recht; Rebecca Roelofs; Ludwig Schmidt; Vaishaal Shankar", "journal": "", "ref_id": "b35", "title": "Do imagenet classifiers generalize to imagenet?", "year": "2019" }, { "authors": "Simon Kornblith; Mohammad Norouzi; Honglak Lee; Geoffrey Hinton", "journal": "", "ref_id": "b36", "title": "Similarity of neural network representations revisited", "year": "2019" }, { "authors": "Hyun Jang; Bharath Cho; Hariharan", "journal": "", "ref_id": "b37", "title": "On the efficacy of knowledge distillation", "year": "2019" }, { "authors": "Karol Mathieu Germain; Iain Gregor; Hugo Murray; Larochelle", "journal": "", "ref_id": "b38", "title": "Made: Masked autoencoder for distribution estimation", "year": "2015" }, { "authors": "Jianyuan Guo; Kai Han; Han Wu; Yehui Tang; Yunhe Wang; Chang Xu", "journal": "", "ref_id": "b39", "title": "Fastmim: Expediting masked image modeling pre-training for vision", "year": "2022" }, { "authors": "Zhenda Xie; Zheng Zhang; Yue Cao; Yutong Lin; Jianmin Bao; Zhuliang Yao; Qi Dai; Han Hu", "journal": "", "ref_id": "b40", "title": "Simmim: A simple framework for masked image modeling", "year": "2022" }, { "authors": "Kunran Xu; Lai Rui; Yishi Li; Lin Gu", "journal": "", "ref_id": "b41", "title": "Feature normalized knowledge distillation for image classification", "year": "2020" }, { "authors": "Linfeng Zhang; Yukang Shi; Zuoqiang Shi; Kaisheng Ma; Chenglong Bao", "journal": "", "ref_id": "b42", "title": "Task-oriented feature distillation", "year": "2020" }, { "authors": "Xianing Chen; Qiong Cao; Yujie Zhong; Jing Zhang; Shenghua Gao; Dacheng Tao", "journal": "", "ref_id": "b43", "title": "Dearkd: dataefficient early knowledge distillation for vision transformers", "year": "2022" }, { "authors": "Nikos Komodakis; Sergey Zagoruyko", "journal": "", "ref_id": "b44", "title": "Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer", "year": "2017" }, { "authors": "Junho Yim; Donggyu Joo; Jihoon Bae; Junmo Kim", "journal": "", "ref_id": "b45", "title": "A gift from knowledge distillation: Fast optimization, network minimization and transfer learning", "year": "2017" }, { "authors": "Yudong Chen; Sen Wang; Jiajun Liu; Xuwei Xu; Frank De Hoog; Zi Huang", "journal": "", "ref_id": "b46", "title": "Improved feature distillation via projector ensemble", "year": "2022" }, { "authors": "Xiaolong Liu; Lujun Li; Chao Li; Anbang Yao", "journal": "", "ref_id": "b47", "title": "Norm: Knowledge distillation via n-to-one representation matching", "year": "2023" }, { "authors": "Defang Chen; Jian-Ping Mei; Hailin Zhang; Can Wang; Yan Feng; Chun Chen", "journal": "", "ref_id": "b48", "title": "Knowledge distillation with the reused 
teacher classifier", "year": "2022" }, { "authors": "Dongyang Liu; Meina Kan; Shiguang Shan; Chen Xilin", "journal": "", "ref_id": "b49", "title": "Function-consistent feature distillation", "year": "2023" }, { "authors": "Zhendong Yang; Zhe Li; Mingqi Shao; Dachuan Shi; Zehuan Yuan; Chun Yuan", "journal": "", "ref_id": "b50", "title": "Masked generative distillation", "year": "2022" }, { "authors": "Zhengqi Sucheng Ren; Tianyu Gao; Zihui Hua; Yonglong Xue; Shengfeng Tian; Hang He; Zhao", "journal": "", "ref_id": "b51", "title": "Co-advise: Cross inductive bias distillation", "year": "2022" }, { "authors": "Zhiwei Hao; Jianyuan Guo; Ding Jia; Kai Han; Yehui Tang; Chao Zhang; Han Hu; Yunhe Wang", "journal": "", "ref_id": "b52", "title": "Learning efficient vision transformers via fine-grained manifold distillation", "year": "2022" }, { "authors": "Martin Zong; Zengyu Qiu; Xinzhu Ma; Kunlin Yang; Chunya Liu; Jun Hou; Shuai Yi; Wanli Ouyang", "journal": "", "ref_id": "b53", "title": "Better teacher better student: Dynamic prior knowledge for knowledge distillation", "year": "2023" }, { "authors": "Sihao Lin; Hongwei Xie; Bing Wang; Kaicheng Yu; Xiaojun Chang; Xiaodan Liang; Gang Wang", "journal": "", "ref_id": "b54", "title": "Knowledge distillation via the target-aware transformer", "year": "2022" }, { "authors": "Shan You; Chang Xu; Chao Xu; Dacheng Tao", "journal": "", "ref_id": "b55", "title": "Learning from multiple teacher networks", "year": "2017" }, { "authors": "Andrey Malinin; Bruno Mlodozeniec; Mark Gales", "journal": "", "ref_id": "b56", "title": "Ensemble distribution distillation", "year": "2020" }, { "authors": "Byeongho Heo; Jeesoo Kim; Sangdoo Yun; Hyojin Park; Nojun Kwak; Jin Young Choi", "journal": "", "ref_id": "b57", "title": "A comprehensive overhaul of feature distillation", "year": "2019" }, { "authors": "Qizhe Xie; Minh-Thang Luong; Eduard Hovy; Quoc V Le", "journal": "", "ref_id": "b58", "title": "Self-training with noisy student improves imagenet classification", "year": "2020" }, { "authors": "Simon Kornblith; Jonathon Shlens; Quoc V Le", "journal": "", "ref_id": "b59", "title": "Do better imagenet models transfer better", "year": "2019" }, { "authors": "François Chollet", "journal": "", "ref_id": "b60", "title": "Xception: Deep learning with depthwise separable convolutions", "year": "2017" }, { "authors": "Guillaume Wenzek; Marie-Anne Lachaux; Alexis Conneau; Vishrav Chaudhary; Francisco Guzmán; Armand Joulin; Edouard Grave", "journal": "", "ref_id": "b61", "title": "Ccnet: Extracting high quality monolingual datasets from web crawl data", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b62", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 220.52, 397.08, 284.15, 11.88 ], "formula_id": "formula_0", "formula_text": "L kd = αD cls (p s , y) + (1 -α)D kd (p s , p t ),(1)" }, { "formula_coordinates": [ 3, 244.38, 491.44, 260.29, 11.88 ], "formula_id": "formula_1", "formula_text": "L hint = D hint (T s (F s ), T t (F t )),(2)" }, { "formula_coordinates": [ 6, 167.75, 655.21, 336.92, 12.72 ], "formula_id": "formula_2", "formula_text": "L BKL = i∼Y [-p t i log(p s i /p t i ) -(1 -p t ) log((1 -p s )/(1 -p t ))],(3)" }, { "formula_coordinates": [ 15, 225.26, 350.68, 279.4, 23.7 ], "formula_id": "formula_3", "formula_text": "L DKD = α DKD TCKD + β DKD NCKD L DIST = L cls + β DIST L inter + γ DIST L intra .(4)" } ]
VanillaKD: Revisit the Power of Vanilla Knowledge Distillation from Small Scale to Large Scale
The tremendous success of large models trained on extensive datasets demonstrates that scale is a key ingredient in achieving superior results. Therefore, the reflection on the rationality of designing knowledge distillation (KD) approaches for limited-capacity architectures solely based on small-scale datasets is now deemed imperative. In this paper, we identify the small data pitfall that presents in previous KD methods, which results in the underestimation of the power of vanilla KD framework on large-scale datasets such as ImageNet-1K. Specifically, we show that employing stronger data augmentation techniques and using larger datasets can directly decrease the gap between vanilla KD and other meticulously designed KD variants. This highlights the necessity of designing and evaluating KD approaches in the context of practical scenarios, casting off the limitations of small-scale datasets. Our investigation of the vanilla KD and its variants in more complex schemes, including stronger training strategies and different model capacities, demonstrates that vanilla KD is elegantly simple but astonishingly effective in large-scale scenarios. Without bells and whistles, we obtain state-of-the-art ResNet-50, ViT-S, and ConvNeXtV2-T models for ImageNet, which achieve 83.1%, 84.3%, and 85.0% top-1 accuracy, respectively. PyTorch code and checkpoints can be found at https://github.com/Hao840/vanillaKD. Thus far, most existing KD approaches in the literature are tailored for small-scale benchmarks (e.g., CIFAR [19]) and small teacher-student pairs (e.g., Res34-Res18 [2] and WRN40-WRN16 [10]). However, downstream vision tasks [20,21,22] actually require the backbone models to be pre-trained on large-scale datasets (e.g., ImageNet [23]) to achieve state-of-the-art performances. Only exploring KD approaches on small-scale datasets may fall short in providing a comprehensive understanding * Equal contribution.
Zhiwei Hao; Jianyuan Guo; Kai Han; Han Hu; Chang Xu; Yunhe Wang
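Table 2 (in the figure/table records that follow) specifies the "A1"/"A2" training strategies. Purely as an illustration of how such a recipe could be encoded for a timm-style training script, the dictionaries below restate the shared settings and the strategy-specific values read from that table; the key names are invented for readability and are not taken from the paper's released code, and the "PRC" column is interpreted here as random resized crop.

# Shared settings for both strategies, read from Table 2.
COMMON = {
    "teacher_resolution": 224,
    "student_resolution": 224,
    "batch_size": 2048,
    "optimizer": "LAMB",
    "learning_rate": 5e-3,
    "lr_schedule": "cosine",
    "warmup_epochs": 5,
    "amp": True,
    "ema": False,
    "hard_label_loss": "BCE",
}

# Strategy-specific settings ("A1" is the slightly stronger recipe).
STRATEGIES = {
    "A2": {"weight_decay": 0.03, "label_smoothing": 0.0, "drop_path": 0.05,
           "repeated_aug": 3, "h_flip": True, "random_resized_crop": True,
           "rand_augment": "7/0.5", "mixup": 0.1, "cutmix": 1.0},
    "A1": {"weight_decay": 0.01, "label_smoothing": 0.1, "drop_path": 0.05,
           "repeated_aug": 3, "h_flip": True, "random_resized_crop": True,
           "rand_augment": "7/0.5", "mixup": 0.2, "cutmix": 1.0},
}

def build_config(strategy: str) -> dict:
    """Merge the shared settings with one strategy ("A1" or "A2")."""
    return {**COMMON, **STRATEGIES[strategy]}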
[ { "figure_caption": "Figure 1 :1Figure 1: The performance gap between vanilla KD and other carefully designed approaches gradually diminishes with the increasing scale of datasets.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Feature similarity heatmap measured by CKA. Left: similarity between homogeneous architectures; Right: similarity between heterogeneous architectures. The coordinate axes represent layer indexes.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Impact of dataset scale on KD performance. When adopting a stronger training strategy on largescale dataset, vanilla", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Training strategies used for distillation. \"A1\" and \"A2\" represent two strategies following[29], The upper table provides the shared settings, while the bottom presents their specific settings.", "figure_data": "Setting T. Res. S. Res. BS Optimizer LR LR decay Warmup ep. AMP EMA Label LossCommon2242242048 LAMB 5e-3cosine5✓×BCESetting WD Smoothing Drop path Repeated Aug. H. Flip PRC Rand Aug. Mixup CutmixA20.03×0.053✓✓7/0.50.11.0A10.010.10.053✓✓7/0.50.21.0strategy by employing the AdamW optimizer with 300 training epochs (\"Improve\" in table), detailscan be found in appendix. The original results reported in DKD and DIST demonstrate a clearperformance advantage over vanilla KD. However, when adopting a stronger training strategy, allthree approaches exhibit improved results. Notably, the performance gap between vanilla KD and thebaselines surprisingly diminishes, indicating that the limited power of vanilla KD can be attributed toinsufficient training, rather than to models with small capacity.", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison between hint-based and logits-based distillation methods. The student is Res50, while the teachers are Res152 and BEiTv2-L with top-1 accuracy of 82.83% and 88.39%, respectively. \"✓\" indicates that the corresponding method is hint-based. GPU hours are evaluated on a single machine with 8 V100 GPUs.", "figure_data": "Scheme MethodHint Epoch GPU hoursRes152 teacher IN-1K IN-Real IN-V2 IN-1K IN-Real IN-V2 BEiTv2-L teacherCC [18]✓300265 79.5585.4167.5279.5085.3967.82RKD [16]✓300282 79.5385.1867.4279.2385.1467.60A2 (79.80)CRD [17] ReviewKD [13] ✓ ✓ DKD [15] ×300 300 300307 79.33 439 80.06 265 80.4985.25 85.73 85.3667.57 68.85 68.6579.48 79.11 80.7785.28 85.41 86.5568.09 67.36 68.94DIST [14]×300265 80.6186.2669.2280.7086.0669.35vanilla KD [11] ×300264 80.5586.2369.0380.8986.3269.65CC [18]✓600529 80.3385.7268.82---RKD [16]✓600564 80.3885.8668.38---CRD [17]✓600513 80.1885.7568.32---ReviewKD [13] ✓600877 80.7686.4169.31---A1DKD [15]×600529 81.3186.7669.6381.8387.1970.09(80.38)DIST [14]×600529 81.2386.7270.0981.7286.9470.04vanilla KD [11] ×600529 81.3386.7769.5981.6886.9269.80DKD [15]×1200 1059 81.6687.0170.4282.2887.4171.10DIST [14]×1200 1058 81.6586.9270.4381.8287.1070.35vanilla KD [11] ×1200 1057 81.6186.8670.4782.2787.2770.88", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation of different losses used to learn from hard label and soft label in vanilla KD. The \"CE\", \"BCE\", and \"KL\" refers to cross-entropy loss, binary cross-entropy loss, and KL divergence measurement, respectively. 
BKL is a binary counterpart of KL. Its definition is provided in Equation 3.", "figure_data": "Epoch Hard Label Soft Label Acc. (%)300CEKL80.49300BCEKL80.89300-KL80.87300CEBKL79.42300BCEBKL79.36300-BKL80.82600BCEKL81.68600-KL81.65", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation about different configurations of learning rate and weight decay on ImageNet-1K. LR: learning rate; WD: weight decay.", "figure_data": "LRWD DKD [15] DIST [14] KD [11]2e-3 0.0279.5079.6979.772e-3 0.0379.5279.5879.783e-3 0.0280.1880.3280.463e-3 0.0380.2480.4180.375e-3 0.0180.7380.6280.825e-3 0.0280.7680.7180.825e-3 0.0380.8180.7980.895e-3 0.0480.7280.3980.776e-3 0.0280.5880.4280.897e-3 0.0280.6080.2080.848e-3 0.0280.5180.3280.618e-3 0.0380.5280.4680.668e-3 0.0480.5580.1880.61", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Distillation results of logits-based approaches with various teacher and student combinations. Number in the parenthesis reports the result of training corresponding model from scratch.", "figure_data": "ConvNeXt-XL -Res50BEiTv2-L -DeiT-SConvNeXt-XL -DeiT-SLR WD(86.97)(79.86)LR WD(88.39) (79.90)(86.97)(79.90)DKD DISTKDDKD DISTKDDKD DISTKD3e-3 0.02 80.50 80.50 80.713e-4 0.05 80.11 79.44 80.45 80.18 79.73 80.455e-3 0.02 81.05 80.89 80.945e-4 0.03 79.82 79.52 80.28 80.17 79.61 80.195e-3 0.03 81.02 80.85 80.985e-4 0.05 80.55 79.58 80.87 80.56 80.19 80.895e-3 0.04 80.90 80.84 81.105e-4 0.07 80.80 80.00 80.99 80.86 80.26 80.877e-3 0.02 80.81 80.86 81.077e-4 0.05 80.53 79.94 80.97 80.80 80.25 80.99loss, yields suboptimal performance. Moreover, the results indicate that training with or withoutsupervision from hard labels has minimal impact on performance, even with a longer training duration.Consequently, we maintain hard label supervision [11, 15] throughout our experiments.", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Teacher trained with larger", "figure_data": "dataset can produce a better student.The teacher is BEiTv2-B.IN-1K T.IN-21K T.Student(85.60)(86.53)DeiT-T76.1277.19Res5081.9782.35", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Impact of different teacher model sizes. The", "figure_data": "student model is Res50.Method Teacher300 Ep. 600 Ep. 1200 Ep.DKDBEiTv2-B 80.76 BEiTv2-L 80.7781.68 81.8382.31 82.28DISTBEiTv2-B 80.94 BEiTv2-L 80.7081.88 81.7282.33 81.82KDBEiTv2-B 80.96 BEiTv2-L 80.8981.64 81.6882.35 82.27", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Vanilla KD vs. FCMIM on ConveNeXtV2-T. † : the GPU hours of fine-tuning stage is 568.", "figure_data": "MethodEP. Hours Acc. (%)FCMIM (IN-1K)800 1334-+ Fine-tune on IN-1K300 1902 † 82.94FCMIM (IN-1K)800 1334-+ Fine-tune on IN-21K 903223-+ Fine-tune on IN-1K90339383.89vanilla KD30065384.42vanilla KD1200 261285.03", "figure_id": "tab_9", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Results of extended training schedule. The teacher model is BEiTv2-B.", "figure_data": "Model \\ Epoch3006001200 4800Res5080.96 81.64 82.35 83.08ViT-S81.38 82.71 83.79 84.33ViT-T--77.19 78.11ConvNextV2-T 84.42 84.74 85.03-", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Object detection and instance segmentation results on COCO. 
We compare checkpoints with different pre-trained accuracies on ImageNet when used as backbones in downstream tasks.", "figure_data": "(a) Mask RCNN (Res50 [2]).(b) Mask RCNN (ConvNeXtV2-T [27]).(c) Cascade Mask RCNN (ConvNeXtV2-T [27]).ckpt. sche. IN-1K AP b AP mckpt. sche. IN-1K AP b AP mckpt. sche. IN-1K AP b AP mprev. 1×77.1 38.2 34.7prev. 1×83.0 45.4 41.5prev. 1×83.0 49.8 43.5prev. 1×80.4 38.7 35.1prev. 1×83.9 45.6 41.6prev. 1×83.9 50.4 44.0ours 1×83.1 41.8 37.7ours 1×85.0 45.7 42.0ours 1×85.0 50.6 44.3prev. 2×77.1 39.2 35.4prev. 3×83.0 47.4 42.7prev. 3×83.0 51.1 44.5prev. 2×80.4 40.0 36.1prev. 3×83.9 47.7 42.9prev. 3×83.9 51.7 44.9ours 2×83.1 42.1 38.0ours 3×85.0 47.9 43.3ours 3×85.0 52.1 45.4", "figure_id": "tab_11", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Stronger training strategy used for distillation. \"B\" and \"C\" represent strategies for training students on ImageNet-1K and CIFAR100, respectively.", "figure_data": "Setting T. Res. S. Res.BSOptimizer LR LR decay Warmup epochs AMP Label LossB2242241024 AdamW 1e-3cosine20✓CEC3232512SGD5e-2cosine××CESettingWDH. FlipRandom erasingAuto AugmentRand AugmentMixupCutmixB5e-2✓0.25×7/0.50.11.0C5e-4✓×✓×0.1×", "figure_id": "tab_12", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Comparison of different dataset scales. Results in parentheses report the performance improvement over vanilla KD.", "figure_data": "DatasetCIFAR-100IN-1K (30%)IN-1K (60%)IN-1K (full)TeacherRes56BEiTv2-BBEiTv2-BBEiTv2-LStudentRes20Res50Res50Res50vanilla KD72.3479.5881.3382.27DKD73.10 (+0.76)79.86 (+0.28)81.47 (+0.14)82.28 (+0.01)DIST74.51 (+2.17)79.75 (+0.17)81.34 (+0.01)81.82 (-0.46)", "figure_id": "tab_13", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Results on ImageNet-22K. The teacher is BEiTv2-B and the student is Res50.", "figure_data": "MethodIN-21K iter. IN-1K iter. Total iter. Acc. (%)DKD0375.0K375.0K81.68DIST0375.0K375.0K81.88vanilla KD0375.0K375.0K81.64vanilla KD0750.0K750.0K82.35vanilla KD03000K3000K83.08DKD187.5K187.5K375.0K81.21DIST187.5K187.5K375.0K81.20vanilla KD187.5K187.5K375.0K81.29vanilla KD187.5K562.5K750.0K81.98vanilla KD750K3000K3750K82.85", "figure_id": "tab_14", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "Ablation of hyper-parameters in baseline methods on ImageNet-1K. The teacher model is BEiTv2-B and the student model is Res50. The best results are indicated in bold. The hyperparameters adopted in original baselines of our main paper are marked with gray , which is also the default setting in their corresponding paper.", "figure_data": "DKDDISTαDKD (βDKD = 2)Acc. (%)βDKD (αDKD = 1)Acc. (%)βDIST (γDIST = 1)Acc. (%)γDIST (βDIST = 1)Acc. (%)078.020.280.540.580.74080.190.279.560.580.84180.940.580.910.580.30180.96281.01180.94180.76280.76380.74280.86280.93480.31480.64--480.81879.86580.64--", "figure_id": "tab_15", "figure_label": "15", "figure_type": "table" } ]
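Figure 2 (fig_2 above) reports CKA similarity between student and teacher features. For reference, a minimal NumPy sketch of the linear CKA score from [37] is given below; it is a generic implementation of the published formula, not the authors' analysis script, and it assumes the two inputs are feature matrices with one row per example.

import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA similarity between two feature matrices of shape (n_samples, dim).

    Feature-space form: centre each column, then
    CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    """
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    numerator = np.linalg.norm(y.T @ x, ord="fro") ** 2
    denominator = np.linalg.norm(x.T @ x, ord="fro") * np.linalg.norm(y.T @ y, ord="fro")
    return float(numerator / denominator)

# Toy usage: identical features give a similarity of 1.0; unrelated random
# features give a much smaller value.
if __name__ == "__main__":
    feats = np.random.randn(64, 128)
    print(linear_cka(feats, feats))
    print(linear_cka(feats, np.random.randn(64, 256)))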
[{"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work on knowledge distillation (KD) serves as the methodological basis for the citing paper, as it is the method that the citing paper is building upon to improve the performance of compact student models."}, {"Category": "Methodological Basis", "Citation": "[14,15]", "Explanation": "The cited works provide meticulously designed methods that perform well on small-scale benchmarks under different training recipes. The citing paper adopts these methods to study the crucial factors in determining KD performances on large-scale datasets."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work, ImageNet, is the large-scale dataset used in the study conducted in the citing paper to evaluate the performance of vanilla KD and other methods."}, {"Category": "Extension or Continuation", "Citation": "[12,29,30]", "Explanation": "The cited works provide different training recipes that are evaluated on the large-scale dataset in the citing paper to study the effect of data augmentation techniques and training iterations on the performance of vanilla KD."}, {"Category": "Methodological Basis", "Citation": "[11,14,15]", "Explanation": "The cited works provide logits-based methods that the citing paper adopts to improve generalizability in knowledge distillation."}, {"Category": "Extension or Continuation", "Citation": "[12,13,18,17]", "Explanation": "The cited works introduce hint-based methods that the citing paper builds upon to further enhance the generalizability of knowledge distillation."}, {"Category": "Supporting Evidence", "Citation": "[11,14,15]", "Explanation": "The cited works demonstrate the effectiveness of logits-based methods in knowledge distillation, providing supporting evidence for the claims made in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[12,13,18,17]", "Explanation": "The cited works present hint-based methods that the citing paper extends to improve the generalizability of knowledge distillation in different contexts and settings."}, {"Category": "Data Source", "Citation": "[29,28,31]", "Explanation": "The cited works provide strong training strategies and large datasets that the citing paper utilizes to assess the impact of model capacity on the effectiveness of knowledge distillation."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work, ResNet-50, serves as the backbone architecture for the vanilla KD method used in the citing paper to achieve improved performance on ImageNet."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work, ViT-Tiny and ViT-Small, are used in the vanilla KD method to improve the performance of the backbone architecture on ImageNet."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work, Con-vNeXtV2, is used in the vanilla KD method to further enhance the performance of the backbone architecture on ImageNet."}, {"Category": "Data Source", "Citation": "[28]", "Explanation": "The cited work is the source of the best results reported in the literature for the evaluation of the vanilla KD method on ImageNet."}, {"Category": "Methodological Basis", "Citation": "[20,32,33]", "Explanation": "The cited works are used to demonstrate the potential of improving the backbone architecture with higher performance on ImageNet for downstream tasks such as object detection and instance segmentation."}, {"Category": 
"Methodological Basis", "Citation": "[11]", "Explanation": "The cited work provides a method called KD (Knowledge Distillation) that the citing paper adopts in their research to achieve on par results with state-of-the-art approaches in a small-scale dataset context."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work, DKD (DistilBERT), is further extended in the citing paper to improve the performance of vanilla KD in small-scale datasets."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The cited work, DIST (DistilBERT), is also extended in the citing paper to improve the performance of vanilla KD in small-scale datasets."}, {"Category": "Data Source", "Citation": "\u2206", "Explanation": "The symbol \u2206 represents the accuracy gap between vanilla KD and the best result achieved by other approaches in the small-scale dataset context, highlighting the reliance on external data to assess the performance of vanilla KD in this context."}, {"Category": "Methodological Basis", "Citation": "[15,14,11]", "Explanation": "The cited works provide a method for using the teacher's output probabilities as auxiliary signals to train a smaller model, which the citing paper adopts in its research on knowledge distillation techniques."}, {"Category": "Methodological Basis", "Citation": "[12,13,18,17]", "Explanation": "The cited works introduce the use of intermediate representations (features) for knowledge distillation, which the citing paper builds upon in its study of KD techniques."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work highlights the limitations of evaluating knowledge distillation methods in small-scale settings, which leads the citing paper to focus on the impact of using small-capacity models and small-scale datasets in their research."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work, CIFAR-100 dataset, is used as a benchmark for evaluating the performance of vanilla KD in the citing paper. 
The authors use the dataset to conduct experiments and compare the results of different approaches, including vanilla KD, to demonstrate the limitations of the method."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work provides the ImageNet-1K dataset, which the citing paper uses for their experiments to evaluate the performance of knowledge distillation."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work provides a training strategy that the citing paper adopts in their research to train high-performing models from scratch using more advanced training schemes."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work introduces the concept of logits-based vanilla KD, which the citing paper adopts as a method for knowledge distillation in their research."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work presents DKD, a method for knowledge distillation that the citing paper uses in their research to compare the performance of different methods."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work introduces DIST, a method for knowledge distillation that the citing paper compares to other methods in their research."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work presents CC, a hint-based method for knowledge distillation that the citing paper adopts as a baseline in their research."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work introduces RKD, a hint-based method for knowledge distillation that the citing paper compares to other methods in their research."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work presents CRD, a hint-based method for knowledge distillation that the citing paper uses as a baseline in their research."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work presents ReviewKD, a method for knowledge distillation that the citing paper compares to other methods in their research."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The Res50 architecture is adopted as the student model in the citing paper, serving as the basis for the distillation process."}, {"Category": "Data Source", "Citation": "[35]", "Explanation": "The ImageNet-Real dataset is used in the evaluation of the student models, providing a specific test set for generalization analysis."}, {"Category": "Data Source", "Citation": "[36]", "Explanation": "The ImageNet-V2 matched frequency dataset is utilized in the evaluation of the student models, offering a separate test set for generalization analysis."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work on center kernel analysis (CKA) is used to compare the features extracted by Res50 with those of Res152 and BEiTv2-L, which serves as a methodological basis for the analysis of the dissimilarity between the intermediate features in the heterogeneous scenario."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work provides a strong teacher model (BEiT2-L) for the citing paper to use in the distillation process, which contributes to the improved performance of the student model in the ImageNet-1K validation set."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work by [29] provides the 
one-hot labels and the binary cross-entropy (BCE) loss function that the citing paper uses in their training process for vanilla knowledge distillation."}, {"Category": "Data Source", "Citation": "Table 4", "Explanation": "The table in the cited work is used to present the results of the ablation study on the different loss functions used in vanilla knowledge distillation, which the citing paper utilizes in their research."}, {"Category": "Supporting Evidence", "Citation": "[38]", "Explanation": "The cited work provides analysis that supports the observation made in the citing paper about the impact of teacher model size on performance in large-scale datasets."}, {"Category": "Data Source", "Citation": "ImageNet-1K and ImageNet-21K", "Explanation": "The cited training sets are used in the experiments to evaluate the impact of the teacher model size on student performance."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work provides a training strategy (A1) that the citing paper adopts to train the teacher model on a larger dataset, which in turn affects the performance of the student model."}, {"Category": "Methodological Basis", "Citation": "[3,25]", "Explanation": "The cited works provide the ViT-S and ViT-T models that the citing paper uses as students in the training process, and the results show that the students achieve improved performance with longer training epochs."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work provides the ConveNextV2-T model that the citing paper uses as a student in the training process, and the results show that the student achieves improved performance with longer training epochs."}, {"Category": "Extension or Continuation", "Citation": "[28]", "Explanation": "The cited work provides a training strategy that the citing paper extends by training the students with 4800 epochs, resulting in new state-of-the-art performance in the Res50, ViT-S, ViT-T, and ConvNeXtV2-T models."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The FCMIM framework proposed in ConvNeXtv2 is adopted in the comparison between vanilla KD and MIM, serving as a methodological basis for the study conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work, COCO benchmark, is used as a standard for evaluating the performance of object detection and instance segmentation tasks in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work, Mask RCNN, is adopted as the detector in the experiments conducted in the citing paper to assess the performance of the student model in object detection and instance segmentation tasks."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work, Cascade Mask RCNN, is also adopted as a detector in the experiments to further improve the performance of the student model in the downstream tasks."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work by Hinton et al. 
introduces the concept of training the student to learn from logits of the teacher, which forms the basis for the vanilla KD method discussed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work introduces improvements to the vanilla KD method by enhancing the dark knowledge, which the citing paper further extends to provide additional insights and methods for improving the method."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The cited work relaxes the constraints in the vanilla KD method, which the citing paper builds upon to discuss the improvements made in this area."}, {"Category": "Extension or Continuation", "Citation": "[42]", "Explanation": "The cited work uses sample factors to replace the unified temperature in the vanilla KD method, which the citing paper expands upon to discuss the various methods used to improve the method."}, {"Category": "Extension or Continuation", "Citation": "[12,43,44]", "Explanation": "The cited works explore the use of model activation in hint-based methods, which the citing paper extends to discuss the various forms of knowledge leveraged in this class of methods."}, {"Category": "Extension or Continuation", "Citation": "[45]", "Explanation": "The cited work uses attention map in hint-based methods, which the citing paper builds upon to discuss the various forms of knowledge leveraged in this class of methods."}, {"Category": "Extension or Continuation", "Citation": "[46]", "Explanation": "The cited work uses the flow of solution procedure in hint-based methods, which the citing paper further extends to discuss the various forms of knowledge leveraged in this class of methods."}, {"Category": "Extension or Continuation", "Citation": "[18,16]", "Explanation": "The cited works explore the use of feature correlation among samples in hint-based methods, which the citing paper builds upon to discuss the various forms of knowledge leveraged in this class of methods."}, {"Category": "Methodological Basis", "Citation": "[47,48]", "Explanation": "The cited works propose the design of new projector in hint-based methods, which the citing paper adopts to discuss the techniques used to align features between the teacher and the student."}, {"Category": "Methodological Basis", "Citation": "[13,49,50]", "Explanation": "The cited works propose more complex alignment approaches in hint-based methods, which the citing paper adopts to discuss the methods used to align features between the teacher and the student."}, {"Category": "Extension or Continuation", "Citation": "[17]", "Explanation": "The cited work explores the use of contrastive learning in hint-based methods, which the citing paper extends to discuss the various forms of knowledge leveraged in this class of methods."}, {"Category": "Extension or Continuation", "Citation": "[51]", "Explanation": "The cited work explores the use of masked modeling in hint-based methods, which the citing paper builds upon to discuss the various forms of knowledge leveraged in this class of methods."}, {"Category": "Supporting Evidence", "Citation": "[23]", "Explanation": "The cited work, ImageNet-21K, is mentioned as a large dataset that is used in training large models in computer vision tasks, which supports the claim in the citing paper that these models are trained on extensive labeled image datasets."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work by Beyer et al. 
provides a set of design choices to make knowledge distillation work well in model compression, which the citing paper builds upon to further study the factors that impact distillation performance."}, {"Category": "Supporting Evidence", "Citation": "[15]", "Explanation": "The cited work, DKD, is used as a baseline to compare the performance gaps of vanilla KD and logits-based methods on small-scale datasets, providing a reference for the underestimation of vanilla KD in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[14]", "Explanation": "The cited work, DIST, is also used as a baseline to compare the performance gaps of vanilla KD and logits-based methods on small-scale datasets, further supporting the underestimation of vanilla KD in the citing paper."}, {"Category": "Data Source", "Citation": "ImageNet-1K", "Explanation": "The ImageNet-1K dataset is used as a benchmark in the comparison of performance gaps among vanilla KD and logits-based baselines, providing a data source for the study conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "CIFAR-100", "Explanation": "The CIFAR-100 dataset is used in the comparison of performance gaps to extend the study of small-scale datasets beyond the ImageNet-1K dataset."}, {"Category": "Extension or Continuation", "Citation": "30%/60%/full", "Explanation": "The subsets of ImageNet-1K with fractions of 30%/60%/full are used in the comparison of performance gaps to further extend the study of small-scale datasets in the citing paper."}]
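The explanations above repeatedly refer to vanilla logits-based KD trained alongside hard labels (one-hot targets with a BCE or cross-entropy term). As a reference point, here is a minimal sketch of that classical objective; the temperature, loss weight, and the use of cross-entropy rather than BCE are illustrative choices, not details taken from the cited works.

```python
import torch
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Temperature-scaled KL to the teacher plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                   # rescale gradients as in classical logits KD
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

# Tiny usage example with random tensors.
s = torch.randn(8, 1000)
t = torch.randn(8, 1000)
y = torch.randint(0, 1000, (8,))
print(vanilla_kd_loss(s, t, y))
```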
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b51", "b1", "b2", "b77", "b80", "b19", "b32", "b69", "b6", "b57", "b47", "b9", "b74", "b36", "b38", "b21", "b29", "b4", "b56", "b51", "b55", "b7", "b8", "b48", "b10", "b40", "b78", "b63", "b58", "b37", "b65", "b66" ], "table_ref": [], "text": "Stable Diffusion models (SDMs) [52,55] are one of the most renowned open-source models for text-to-image (T2I) synthesis, and their exceptional capability has begun to be leveraged as a backbone in several text-guided vision applications [2,3,78,81]. SDMs are T2I-specialized latent diffusion models (LDMs) [55], which employ diffusion operations [20,33,70] in a semantically compressed space for compute efficiency. Within a SDM, a U-Net [7,58] performs iterative sampling to progressively denoise a random latent code and is aided by a text encoder [48] and an image decoder [10,75] to produce text-aligned images. This inference process still involves excessive computational requirements (see Fig. 2), which often hinder the utilization of SDMs despite their rapidly growing usage.\nTo alleviate this issue, numerous approaches toward efficient SDMs have been introduced. A pretrained diffusion model is distilled to reduce the number of denoising steps, enabling an identically architectured model with fewer sampling steps [37,39]. Post-training quantization [22,30,68] and implementation optimization [5] methods are also leveraged. However, the removal of architectural elements in large diffusion models remains less explored.\nThis study unlocks the immense potential of classical architectural compression in attaining smaller and faster diffusion models. We eliminate multiple residual and attention blocks from the U-Net of a SDM and retrain it with feature-level knowledge distillation (KD) [16,57] for general-purpose T2I. Under restricted training resources, our compact models can mimic the original SDM by leveraging transferred knowledge. Our work effectively reduces the computation of SDM-v1.4 [52] and SDM-v2.1-base [56] while achieving compelling zero-shot results on par with multi-billion parameter models [8,9,49]. Our contributions are summarized as follows:\n• We compress SDMs by removing architectural blocks from the U-Net, achieving up to 51% reduction in model size and 43% improvement in latency on CPU and GPU.\nPrevious pruning studies [11,41,79] focused on small models (<100M parameters) like ResNet50 and DeiT-B, not on foundation models like SDMs (>1,000M=1B), possibly due to the lack of economic retraining for such large models. Moreover, U-Net architectures are arguably more complex due to the necessity of considering skip connections across the network, making the structural block removal inside them not straightforward. • To the best of our knowledge, we first demonstrate the notable benefit of feature distillation for training diffusion models, which enables competitive T2I even with significantly fewer resources (using only 13 A100 days and 0.22M LAION pairs [64]). Considering the vast expense of training SDMs from scratch (surpassing 6,000 A100 days and 2,000M pairs), our study indicates that network compression is a remarkably cost-effective strategy in building compact general-purpose diffusion models. • We show the practicality of our work across various aspects. Our lightweight backbones are readily applicable to customized generation [59] and image-to-image translation [38], effectively lowering finetuning and inference costs. 
T2I synthesis on Jetson AGX Orin and iPhone 14 using our models takes less than 4 seconds. • We have publicly released our approach, model weights, and source code: https://github.com/Nota-NetsPresso/BK-SDM. Interesting subsequent works include block pruning and KD for the SDM-v1 variant [66] and SDXL [67]." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b6", "b20", "b69", "b5", "b47", "b41", "b59", "b49", "b51", "b52", "b55", "b1", "b2", "b58", "b77" ], "table_ref": [], "text": "Large T2I diffusion models. By gradually removing noise from corrupted data, diffusion-based generative models [7,21,70] enable high-fidelity synthesis with broad mode coverage. Integrating these merits with the advancement of pretrained language models [6,47,48] has significantly improved the quality of T2I synthesis. In GLIDE [42] and Imagen [60], a text-conditional diffusion model generates a small image, which is upsampled via super-resolution modules. In DALL•E-2 [50], a text-conditional prior network produces an image embedding, which is transformed into an image via a diffusion decoder and further upscaled into higher resolutions. SDMs [52,53,55,56] perform the diffusion modeling in a low-dimensional latent space constructed through a pixel-space autoencoder. We use SDMs as our baseline because of its open-access and gaining popularity over numerous downstream tasks [2,3,59,78]." }, { "figure_ref": [], "heading": "Efficient diffusion models.", "publication_ref": [ "b36", "b38", "b60", "b33", "b34", "b21", "b29", "b4", "b18", "b43", "b56", "b79", "b28", "b50", "b81", "b23", "b62", "b70", "b73", "b30", "b44", "b11", "b51", "b55" ], "table_ref": [], "text": "Several studies have addressed the slow sampling process. Diffusion-tailored distillation [37,39,61] progressively transfers knowledge from a pretrained diffusion model to a fewer-step model with the same architecture. Fast high-order solvers [34,35,83] for diffusion ordinary differential equations boost the sampling speed. Complementarily, our network compression approach reduces per-step computation and can be easily integrated with less sampling steps. Leveraging quantization [22,30,68] and implementation optimizations [5] for SDMs can also be combined with our compact models for further efficiency.\nDistillation-based compression. KD enhances the performance of small-size models by exploiting output-level [19,44] and feature-level [16,57,80] information of large source models. Although this classical KD has been actively used for efficient GANs [29,51,82], its power has not been explored for structurally compressed diffusion models. Distillation pretraining enables small yet capable generalpurpose language models [24,63,71] and vision transformers [14,74]. Beyond such models, we show that its success can be extended to diffusion models with iterative sampling.\nConcurrent studies. SnapFusion [31] achieves an efficient U-Net for SDMs through architecture evolution and step distillation. Würstchen [45] introduces two diffusion processes on low-and high-resolution latent spaces for economic training. These two works are valuable but require much larger resources than our work (see Tab. 1). While not demonstrated on SDMs, Diff-Pruning [12] proposes structured pruning based on Taylor expansion tailored for diffusion models. 
Removed from Original SDM:\n4 × 𝐻 × 𝑊 R 1,1 A 1,1 R 1,2 A 1,2 Cv Dn R 2,1 A 2,1 R 2,2 A 2,2 Cv Dn R 3,1 A 3,1 R 3,2 A 3,2 Cv Dn R 4,1 R 4,2 R 5,1 A 5,1 R 5,2 R 6,1 R 6,2 Cv Up ⧺ ⧺ ⧺ R 6,3 R 7,1 A 7,1 R 7,2 A 7,2 Cv Up ⧺ ⧺ ⧺ R 7,3 A 7,3 R 8,1 A 8,1 R 8,2 A 8,2 Cv Up ⧺ ⧺ ⧺ R 8,3 A 8,3 R 9,1 A 9,1 R 9,2 A 9,2 ⧺ ⧺ ⧺ R\nBK-SDM-Base BK-SDM-Small BK-SDM-Tiny Figure 3. Block removal from the denoising U-Net. Our approach is applicable to all the SDM versions in v1 and v2, which share the same U-Net block configuration. For experiments, we used v1.4 [52] and v2.1-base [56]. See Sec. A for the detailed architectures. " }, { "figure_ref": [], "heading": "SDM-v1.4 No Mid-Stage", "publication_ref": [], "table_ref": [], "text": "Two stuffed bears are dressed in astronaut suits." }, { "figure_ref": [], "heading": "SDM-v1.4", "publication_ref": [], "table_ref": [], "text": "No Mid-Stage " }, { "figure_ref": [ "fig_0" ], "heading": "Compression Method", "publication_ref": [ "b57" ], "table_ref": [], "text": "We compress the U-Net [58] in SDMs, which is the most compute-heavy component (see Fig. 2). Conditioned on the text and time-step embeddings, the U-Net performs multiple denoising steps on latent representations. At each step, the U-Net produces the noise residual to compute the latent for the next step. We reduce this per-step computation, leading to Block-removed Knowledge-distilled SDMs (BK-SDMs)." }, { "figure_ref": [], "heading": "Compact U-Net architecture", "publication_ref": [], "table_ref": [], "text": "The following architectures are obtained by compressing SDM-v1 (1.04B parameters), as shown in Fig. 3: • BK-SDM-Base (0.76B) obtained with Sec. 3.1.1.\n• BK-SDM-Small (0.66B) with Secs. 3.1.1 and 3.1.2.\n• BK-SDM-Tiny (0.50B) with Secs. 3.1.1, 3.1.2, and 3.1.3.\nOur approach can be identically applied to SDM-v2 (1.26B parameters), leading to BK-SDM-v2-{Base (0.98B), Small (0.88B), Tiny (0.72B)}." }, { "figure_ref": [ "fig_3" ], "heading": "Fewer blocks in the down and up stages", "publication_ref": [ "b62", "b14", "b22", "b75", "b62" ], "table_ref": [], "text": "This approach is closely aligned with DistilBERT [63] which halves the number of layers and initializes the compact model with the original weights by benefiting from the shared dimensionality. In the original U-Net, each stage with a common spatial size consists of multiple blocks, and most stages contain pairs of residual (R) [15] and cross-attention (A) [23,76] blocks. We hypothesize the existence of some unnecessary pairs and use the following removal strategies. Down stages. We maintain the first R-A pairs while eliminating the second pairs, because the first pairs process the changed spatial information and would be more important than the second pairs. This design is consistent with the pruning sensitivity analysis that measures the block-level significance (see Fig. 5). Our approach also does not harm the dimensionality of the original U-Net, enabling the use of the corresponding pretrained weights for initialization [63]. Up stages. While adhering to the aforementioned scheme, we retain the third R-A pairs. This allows us to utilize the output feature maps at the end of each down stage and the corresponding skip connections between the down and up stages. The same process is applied to the innermost down and up stages that contain only R blocks." 
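To make the removal rules of this subsection concrete, the sketch below encodes each U-Net stage as a list of block tags and applies them: keep the first R-A pair of every down stage, and keep the first and third entries of every up stage so the skip connections from the down path still have consumers. This is only a schematic illustration of the "Base" configuration (before the mid-stage and innermost-stage removals of the later subsections), not the released implementation.

```python
# "RA" = residual + cross-attention pair, "R" = residual-only block (innermost stages).
# Block counts follow the SDM U-Net layout referenced in Fig. 3.
down_stages = [["RA", "RA"], ["RA", "RA"], ["RA", "RA"], ["R", "R"]]
up_stages   = [["R", "R", "R"], ["RA", "RA", "RA"], ["RA", "RA", "RA"], ["RA", "RA", "RA"]]

def compress_down(stage):
    # Keep only the first block (pair): it processes the freshly resized feature map.
    return stage[:1]

def compress_up(stage):
    # Keep the first and third blocks; the third consumes the skip connection
    # coming from the end of the corresponding down stage.
    return [stage[0], stage[2]]

bk_down = [compress_down(s) for s in down_stages]
bk_up = [compress_up(s) for s in up_stages]
print("down:", bk_down)   # [['RA'], ['RA'], ['RA'], ['R']]
print("up:  ", bk_up)     # [['R', 'R'], ['RA', 'RA'], ['RA', 'RA'], ['RA', 'RA']]
```

Because no channel dimensions change under these rules, the retained blocks can still be initialized from the corresponding pretrained weights, as noted in the text.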
}, { "figure_ref": [ "fig_2" ], "heading": "Removal of the entire mid-stage", "publication_ref": [ "b26" ], "table_ref": [], "text": "Surprisingly, removing the entire mid-stage from the original U-Net does not noticeably degrade the generation quality while effectively reducing the parameters by 11% (see Fig. 4). This observation is consistent with the minor role of inner layers in the U-Net generator of GANs [27].\nIntegrating the mid-stage removal with fewer blocks in Sec. 3.1.1 further decreases compute burdens (Tab. 2) at the cost of a slight decline in performance (Tab. 1). Therefore, we offer this mid-stage elimination as an option, depending on the priority between compute efficiency (using BK-SDM-Small) and generation quality (BK-SDM-Base)." }, { "figure_ref": [], "heading": "Further removal of the innermost stages", "publication_ref": [], "table_ref": [], "text": "For additional compression, the innermost down and up stages can also be pruned, leading to our lightest model BK-SDM-Tiny. This implies that outer stages with larger spatial dimensions and their skip connections play a crucial role in the U-Net for T2I synthesis." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Alignment with pruning sensitivity analysis", "publication_ref": [ "b26", "b62" ], "table_ref": [], "text": "To support the properness of our architectures, we measure the importance of each block (see Fig. 5) and show that unimportant blocks match with our design choices. The importance is measured by how generation scores vary when removing each residual or attention block from the U-Net. A significant drop in performance highlights the essential role of that block. Note that some blocks are not directly removable due to different channel dimensions between input and output; we replace such blocks with channel interpolation modules (denoted by \"*\" in Fig. 5) to mimic the removal while retaining the information.\nThe sensitivity analysis implies that the innermost downmid-up stages and the second R-A pairs in the down stages play relatively minor roles. Pruning these blocks aligns with our architectures, designed based on human knowledge (e.g., prioritizing blocks with altered channel dimensions) and previous studies [27,63]." }, { "figure_ref": [ "fig_4" ], "heading": "Distillation-based retraining", "publication_ref": [ "b20", "b20", "b18", "b50", "b56", "b68" ], "table_ref": [], "text": "For general-purpose T2I, we train our block-removed U-Net to mimic the behavior of the original U-Net (see Fig. 6). To obtain the input of U-Net, we use pretrained-and-frozen encoders [55] for images and text prompts.\nGiven the latent representation z of an image and its paired text embedding y, the task loss for the reverse denoising process [21,55] is computed as:\nL Task = E z,ϵ,y,t ||ϵ -ϵ S (z t , y, t)|| 2 2 ,(1)\nwhere z t is a noisy latent code from the diffusion process [21] with the sampled noise ϵ∼N (0, I) and time step t∼Uniform(1, T ), and ϵ S (•) indicates the estimated noise from our compact U-Net student. For brevity, we omit the subscripts of E z,ϵ,y,t [•] in the following notations. 
The compact student is also trained to imitate the outputs of the original U-Net teacher, ϵ T (•), with the following output-level KD objective [19]: A key to our approach is feature-level KD [16, 57] that provides abundant guidance for the student's training:\nL OutKD = E ||ϵ T (z t , y, t) -ϵ S (z t , y, t)|| 2 2 .(2)\nL FeatKD = E l ||f l T (z t , y, t) -f l S (z t , y, t)|| 2 2 ,(3)\nwhere f l T (•) and f l S (•) represent the feature maps of the l-th layer in a predefined set of distilled layers from the teacher and the student, respectively. While learnable regressors (e.g., 1×1 convolutions to match the number of channels) have been commonly used [51,57,69], our approach circumvents this requirement. By applying distillation at the end of each stage in both models, we ensure that the dimensionality of the feature maps already matches, thus eliminating the need for additional regressors.\nThe final objective is shown below, and we simply set λ OutKD and λ FeatKD as 1. Without loss-weight tuning, our approach is effective in empirical validation.\nL = L Task + λ OutKD L OutKD + λ FeatKD L FeatKD . (4)" }, { "figure_ref": [ "fig_12" ], "heading": "Experimental setup", "publication_ref": [ "b63", "b64", "b51", "b55", "b48", "b59", "b31", "b17", "b61", "b16", "b47", "b58", "b58", "b3", "b38", "b76", "b35", "b19", "b59" ], "table_ref": [], "text": "Distillation retraining. We primarily use 0.22M image-text pairs from LAION-Aesthetics V2 (L-Aes) 6.5+ [64,65], which are significantly fewer than the original training data used for SDMs [52,56] (>2,000M pairs). In Fig. 11, dataset sizes smaller than 0.22M are randomly sampled from L-Aes 6.5+, while those larger than 0.22M are from L-Aes 6.25+. Zero-shot T2I evaluation. Following the popular protocol [49,55,60], we use 30K prompts from the MS-COCO validation split [32], downsample the 512×512 generated images to 256×256, and compare them with the entire validation set. We compute Fréchet Inception Distance (FID) [18] and Inception Score (IS) [62] to assess visual quality. We measure CLIP score [17,48] with CLIP-ViT-g/14 model to assess text-image correspondence. Downstream tasks. For personalized generation, we use the DreamBooth dataset [59] (30 subjects × 25 prompts × 4∼6 images) and perform per-subject finetuning. Following the evaluation protocol [59], we use ViT-S/16 model [4] for DINO score and CLIP-ViT-g/14 model for CLIP-I and CLIP-T scores. For image-to-image translation, input images are sourced from Meng et al. [39]. Implementation details. We adjust the codes in Diffusers [77] and PEFT [36]. We use a single NVIDIA A100 80G GPU for main retraining and a single NVIDIA GeForce RTX 3090 GPU for per-subject finetuning. For compute efficiency, we always opt for 25 denoising steps of the U-Net at the inference phase, unless specified. The classifier-free guidance scale [20,60] is set to the default value of 7.5. The latent resolution is set to the default (H = W = 64 in Fig. 3), yielding 512×512 images. See Sec. I for the details." }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_11" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "All the results in Secs. 5.1-5.4 were obtained with the full benchmark protocal (MS-COCO 256×256 30K samples), except for Fig. 10 (512×512 5K samples). Unless specified, the training setup in Tab. 1 was used." 
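Putting Eqs. (1)–(4) together, one retraining step can be sketched as below. The teacher and student are assumed to expose both the predicted noise and a list of end-of-stage feature maps (an interface assumption for illustration, not the released code); both KD weights are simply 1, as stated above.

```python
import torch
import torch.nn.functional as F

def distillation_losses(student, teacher, z_t, y, t, eps, lam_out=1.0, lam_feat=1.0):
    """Task loss (Eq. 1) + output-level KD (Eq. 2) + feature-level KD (Eq. 3) = Eq. (4)."""
    with torch.no_grad():
        eps_T, feats_T = teacher(z_t, t, y)          # frozen original U-Net
    eps_S, feats_S = student(z_t, t, y)              # block-removed U-Net

    loss_task = F.mse_loss(eps_S, eps)
    loss_out = F.mse_loss(eps_S, eps_T)
    # Stage-end features of teacher and student already share shapes,
    # so no learnable regressors are needed for the feature term.
    loss_feat = sum(F.mse_loss(fs, ft) for fs, ft in zip(feats_S, feats_T))

    return loss_task + lam_out * loss_out + lam_feat * loss_feat
```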
}, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "Comparison with existing works", "publication_ref": [ "b45", "b8", "b72", "b39" ], "table_ref": [], "text": "Quantitative comparison. Tab. 1 shows the zero-shot results for general-purpose T2I. Despite being trained with only 0.22M samples and having fewer than 1B parameters, our compressed models demonstrate competitive performance on par with existing large models. A recent small SDM without paper evidence [46] relies on far more training resources: two-stage KD with two teachers (SDM-v1.4 and v1.5) and significantly longer iterations on a much larger dataset. In contrast, our lighter models yield compelling results while saving on training budgets. Visual comparison. Fig. 7 depicts synthesized images with some MS-COCO captions. Our compact models inherit the superiority of SDM and produce more photorealistic images compared to the AR-based [9] and GAN-based [73,84] baselines. Noticeably, the same latent code results in a shared visual style between the original and our models (6th-9th columns in Fig. 7), similar to the observation in transfer learning for GANs [40]. See Sec. D for additional results." }, { "figure_ref": [], "heading": "Computational gain", "publication_ref": [], "table_ref": [], "text": "Tab. 2 shows how the compute reduction for each sampling step of the U-Net affects the overall process. The per-step reduction effectively decreases MACs and inference time by more than 30%. Notably, BK-SDM-Tiny has 50% fewer parameters than the original SDM. " }, { "figure_ref": [], "heading": "Benefit of distillation retraining", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "GLIDE-filtered", "publication_ref": [], "table_ref": [], "text": "[ICML '22] Prompt: A bowl that has vegetables inside of it.; A brown and white cat staring off with pretty green eyes.; A toy raccoon standing on a pile of broccoli." }, { "figure_ref": [ "fig_10", "fig_11" ], "heading": "Würstchen-v2", "publication_ref": [ "b19", "b59" ], "table_ref": [], "text": "[ArXiv'23] transferred knowledge. Exploiting output-level KD (Eq. 2) boosts the performance compared to using only the denoising task loss. Leveraging feature-level KD (Eq. 3) yields further score enhancement. Additionally, using the teacher weights for initialization is highly beneficial.\nCross-attention resemblance. Training progress. Fig. 9 shows the results over training iterations. Without KD, training solely with the denoising task loss causes fluctuations or sudden drops in performance (indicated with green and cyan). In contrast, distillation (purple and pink) stabilizes and accelerates the training process, demonstrating the benefit of providing sufficient hints for training guidance. Notably, our small-and tiny-size models trained with KD (yellow and red) outperform the bigger base-size model without KD (cyan). Additionally, while the best FID score is observed early on, IS and CLIP score exhibit ongoing improvement, implying that judging models merely with FID may be suboptimal.\nTrade-off results. Fig. 10 shows the results of BK-SDM- Base with and without KD on MS-COCO 512×512 5K. Higher classifier-free guidance scales [20,60] lead to better text-aligned images at the cost of less diversity. More denoising steps improve generation quality at the cost of slower inference. Distillation retraining achieves much better tradeoff curves than the baseline without KD." 
}, { "figure_ref": [ "fig_12" ], "heading": "Impact of training resources on performance", "publication_ref": [], "table_ref": [], "text": "Consistent with existing evidence, the use of larger batch sizes, more extensive data, and longer iterations for training enhances performance in our work (see Tab. 5, Fig. 11, and Sec. H). However, this benefit requires increased resource demands (e.g., extended training days without multiple highspec GPUs and greater data storage capacity). As such, despite the better performing models in Tab. 5, we primarily report the models with fewer resources. We believe that accessible training costs by many researchers can help drive advancement in massive models." }, { "figure_ref": [ "fig_9", "fig_2" ], "heading": "Application", "publication_ref": [ "b58", "b37", "b53" ], "table_ref": [], "text": "Personalized T2I synthesis. Tab. 6 compares the fine-tuning results with DreamBooth [59] over different backbones to create images about a given subject. BK-SDMs can preserve 95%∼99% scores of the original SDM while cutting finetuning costs. Fig. 12 the models retrained with a batch size of 64, the baselines without KD fail to generate the subjects or cannot maintain the identity details. See Sec. E for further results.\nImage-to-image translation. Fig. 13 presents the textguided stylization results with SDEdit [38]. Our model, resembling the ability of the original SDM, faithfully produces images given style-specified prompts and content pictures. See Sec. F for additional results. Deployment on edge devices. We deploy our models trained with 2.3M pairs and compare them against the original SDM under the same setup on edge devices (20 denoising steps on NVIDIA Jetson AGX Orin 32GB and 10 steps on iPhone 14). Our models produce a 512×512 image within 4 seconds (see Tab. 7), while maintaining acceptable image quality (Fig. 1(d) and Sec. G).\nAnother LDM. SDMs are derived from LDMs [55], which share a similar U-Net design across many tasks. To validate the generality of our work, the same approach of BK-SDM-Small is applied to compress an LDM for unconditional generation on CelebA-HQ [54]. Fig. 14 shows the efficacy of our architecture and distillation retraining." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We uncover the potential of architectural compression for general-purpose text-to-image synthesis with a renowned model, Stable Diffusion. Our block-removed lightweight models are effective for zero-shot generation, achieving competitive results against large-scale baselines. Distillation is a key of our method, leading to powerful retraining even Table 6. Personalized generation with finetuning over different backbones. Our compact models can preserve subject fidelity (DINO and CLIP-I) and prompt fidelity (CLIP-T) of the original SDM with reduced finetuning (FT) costs and fewer parameters. under very constrained resources. Our work is orthogonal to previous works for efficient diffusion models, e.g., enabling fewer sampling steps, and can be readily combined with them. We hope our study can facilitate future research on structural compression of large diffusion models." }, { "figure_ref": [ "fig_6" ], "heading": "BK-SDM: A Lightweight, Fast, and Cheap Version of Stable Diffusion Appendix", "publication_ref": [ "b14" ], "table_ref": [], "text": "A. U-Net architecture and distillation retraining of BK-SDM Figs. 15 and 16 depict the U-Net architectures and distillation process, respectively. 
Our approach is directly applicable to all the SDM versions in v1 and v2 (i.e., v1.1/2/3/4/5, v2.0/1, and v2.0/1-base), which share the same U-Net block configuration. See Fig. 17 for the block details. \nDn Dn Dn Dn Up Up Up Up Cv Cv Latent 4 × 𝐻 × 𝑊 Noise 4 × 𝐻 × 𝑊 R 1,1 A 1,1 R 1,2 A 1,2 Cv Dn R 2,1 A 2,1 R 2,2 A 2,2 Cv Dn R 3,1 A 3,1 R 3,2 A 3,2 Cv Dn R 4,1 R 4,2 R 5,1 A 5,1 R 5,2 R 6,1 R 6,2 Cv Up ⧺ ⧺ ⧺ R6,3 R 7,1 A 7,1 R 7,2 A 7,2 Cv Up ⧺ ⧺ ⧺ R 7,3 A 7,3 R 8,1 A 8,1 R 8,2 A 8" }, { "figure_ref": [], "heading": "Cv Cv", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dn Cv", "publication_ref": [], "table_ref": [], "text": "Dn R \nLatent 4×64×64 Noise 4×64×64 R 1,1 A 1,1 R 1,2 A 1,2 Cv Dn R 2,1 A 2,1 R 2,2 A 2,2 Cv Dn R 3,1 A 3,1 R 3,2 A 3,2 Cv Dn R 4,1 R R 5,1 A 5,1 R 5,2 R 6,1 R 6,2 Cv Up ⧺ ⧺ ⧺ R 6,3 R 7,1 A 7,1 R 7,2 A 7,2 Cv Up ⧺ ⧺ ⧺ R 7,3 A 7,3 R 8,1 A 8,1 R 8,2 A 8,2 Cv Up ⧺ ⧺ ⧺ R 8,3 A 8,3 R 9,1 A 9,1 R 9,2 A 9,2 ⧺ ⧺ ⧺ R 9,3 A 9,3 320×64×64 320×32×32 640×16×16 1280×8×8 1280×8×8 1280×16×16 1280×32×32 640×64×64 320×64×64 1280×8×8 Cv Cv Noise 4×64×64 Dn Cv Dn R 1,1 A 1,1 Dn Cv Dn R 2,1 A 2,1 Dn R 3,1 A 3,1 Cv Dn Dn R 4,1 Up R 7,1 A 7,1 Cv Up ⧺ ⧺ R 7,3 A 7,3 Up R 6,1 ⧺ ⧺ Cv Up R 6,3 Up R 8,1 A 8,1 Cv Up ⧺ ⧺ R 8,3 A 8,3 Up R 9,1 A 9,1 ⧺ ⧺ R 9,3 A 9,3 320×64×64 320×32×32 640×16×16 1280×8×8 1280×8×8 1280×16×16 1280×32×32 640×64×64 320×64×64" }, { "figure_ref": [], "heading": "Teacher", "publication_ref": [], "table_ref": [], "text": "Original SDM " }, { "figure_ref": [], "heading": "D. Comparison with existing studies", "publication_ref": [], "table_ref": [], "text": "A bowl that has vegetables inside of it.\nThere are decorative umbrellas hanging up.\nA brown and white cat staring off with pretty green eyes. " }, { "figure_ref": [], "heading": "GALIP-CC12M", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "BK-SDM-Tiny (Ours)", "publication_ref": [], "table_ref": [], "text": "A toy raccoon standing on a pile of broccoli. ornate living room set sits in a large house.\nA small looking a camera. " }, { "figure_ref": [ "fig_21", "fig_22" ], "heading": "G. Deployment on edge devices", "publication_ref": [ "b52", "b0", "b25", "b34", "b27", "b42", "b51", "b33", "b34" ], "table_ref": [], "text": "Our models are tested on NVIDIA Jetson AGX Orin 32GB, benchmarked against SDM-v1.5 [53,55] under the same default setting of Stable Diffusion WebUI [1]. For the inference, 20 denoising steps, DPM++ 2M Karras sampling [26,35], and xFormers-optimized attention [28] are used to synthesize 512×512 images. BK-SDM demonstrates quicker generation at 3.4 seconds, compared to the 4.9 seconds of SDM-v1.5 (see Figs. We also deploy our models on iPhone 14 with post-training palettization [43] and compare them against the original SDM-v1.4 [52,55] converted with the identical setup. With 10 denoising steps and DPM-Solver [34,35], 512×512 images are generated from given prompts. The inference takes 3.9 seconds using BK-SDM, which is faster than 5.6 seconds using SDM-v1.4, while maintaining acceptable image quality (see Fig. 24 with BK-SDM-Small trained on 2.3M pairs). Additional results using different models can be found in Fig. 25." 
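The on-device settings above (10–20 denoising steps with DPM-style solvers and memory-efficient attention) can be approximated on a desktop GPU roughly as follows; availability of the xFormers optimization depends on the local installation, and the model id is again indicative.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-small", torch_dtype=torch.float16
).to("cuda")
# Swap in a DPM-Solver-style scheduler so ~10-20 steps are enough, as in the edge setup.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_xformers_memory_efficient_attention()  # optional; requires the xformers package

image = pipe("a cozy cabin in snowy woods", num_inference_steps=10, guidance_scale=7.5).images[0]
image.save("fast_sample.png")
```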
}, { "figure_ref": [], "heading": "SDM-v1.4 BK-SDM-Small Mobile App UI SDM-v1.4 BK-SDM-Small", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "SDM-v1.5 BK-SDM-Base BK-SDM-Tiny BK-SDM-Small", "publication_ref": [], "table_ref": [], "text": "Jetson AGX Orin 32GB" }, { "figure_ref": [], "heading": "BK-SDM-Base", "publication_ref": [ "b3" ], "table_ref": [], "text": "BK-SDM-Tiny BK-SDM-Small SDM-v1. 4 iPhone 14 " }, { "figure_ref": [], "heading": "H. Impact of training data volume", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_11" ], "heading": "I. Implementation", "publication_ref": [ "b76", "b35", "b20", "b32", "b33", "b34", "b19", "b59", "b37", "b76" ], "table_ref": [], "text": "We adjust the codes in Diffusers [77] for distillation retraining and PEFT [36] for per-subject finetuning, both which adopt the training process of DDPM [21] in latent spaces. Distillation retraining for general-purpose T2I. For augmentation, smaller edge of each image is resized to 512, and a center crop of size 512 is applied with random flip. We use a single NVIDIA A100 80G GPU for 50K-iteration retraining with the AdamW optimizer and a constant learning rate of 5e-5. The number of steps for gradient accumulation is always set to 4. With a total batch size of 256 (=4×64), it takes about 300 hours and 53GB GPU memory. Training smaller architectures results in 5∼10% decrease in GPU memory usage. DreamBooth finetuning. For augmentation, smaller edge of each image is resized to 512, and a random crop of size 512 is applied. We use a single NVIDIA GeForce RTX 3090 GPU to finetune each personalized model for 800 iterations with the AdamW optimizer and a constant learning rate of 1e-6. We jointly finetune the text encoder as well as the U-Net. For each subject, 200 class images are generated by the original SDM. The weight of prior preservation loss is set to 1. With a batch size of 1, the original SDM requires 23GB GPU memory for finetuning, whereas BK-SDMs require 13∼19GB memory. Inference setup. Following the default setup, we use PNDM scheduler [33] for zero-shot T2I generation and DPM-Solver [34,35] for DreamBooth results. For compute efficiency, we always opt for 25 denoising steps of the U-Net, unless specified. The classifier-free guidance scale [20,60] is set to the default value of 7.5, except the analysis in Fig. 10. Image-to-image translation. We use the SDEdit method [38] implemented in Diffusers [77], with the strength value of 0.8. Distillation retraining for unconditional face generation. A similar approach to our T2I training is applied. For the 30K-iteration retraining, we use a batch size of 64 (=4×16) and set the KD loss weights to 100." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b16", "b4", "b9", "b19", "b19" ], "table_ref": [], "text": "Fig. 17 shows the details of architectural blocks. Each residual block (ResBlock) contains two 3-by-3 convolutional layers and is conditioned time-step embedding. Each attention block (AttnBlock) contains a self-attention module, a cross-attention module, and a feed-forward network. The text embedding is merged via the cross-attention module. Within the block, the feature spatial dimensions h and w are flattened into a sequence length of hw. The number of channels c is considered as an embedding size, processed with attention heads. The number of groups for the group normalization is set to 32. 
The differences between SDM-v1 and SDM-v2 include the number of attention heads (8 for all the stages of SDM-v1 and [5,10,20,20] for different stages of SDM-v2) and the text embedding dimensions (77×768 for SDM-v1 and 77×1024 for SDM-v2)." }, { "figure_ref": [], "heading": "ResBlock (time-embedding conditioned)", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "R", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Time Emb 1280", "publication_ref": [], "table_ref": [], "text": "Res Block Feature = SiLU Linear(1280, c out )\nLinear(c, 8c) Split " }, { "figure_ref": [], "heading": "B. Impact of the mid-stage removal", "publication_ref": [], "table_ref": [], "text": "Removing the entire mid-stage from the original U-Net does not noticeably degrade the generation quality for many text prompts while effectively reducing the number of parameters. See Fig. 18 and Tab. 8." }, { "figure_ref": [], "heading": "Candles and flowers neatly", "publication_ref": [], "table_ref": [], "text": "placed on a table." }, { "figure_ref": [], "heading": "SDM-v1.4 No Mid-Stage", "publication_ref": [], "table_ref": [], "text": "Two stuffed bears are dressed in astronaut suits.\nA room is furnished with couches, rugs, and fancy paintings.\nArtistic photograph of architecture and frost-laden trees at dawn. A small tan and striped bird is sits on the branch of a red flowering bush." }, { "figure_ref": [], "heading": "SDM-v1.4 No Mid-Stage", "publication_ref": [], "table_ref": [], "text": "A tray of food on the top of a table." }, { "figure_ref": [], "heading": "BK-SDM Type", "publication_ref": [], "table_ref": [], "text": "Base Base-2M Small Small-2M Tiny Tiny-2M\nA boy wearing a tie and a white shirt.\nThe living room is all decorated for Christmas. " } ]
2023-11-16
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Stable diffusion web ui", "year": "" }, { "authors": "Andreas Blattmann; Robin Rombach; Huan Ling; Tim Dockhorn; Seung Wook Kim; Sanja Fidler; Karsten Kreis", "journal": "", "ref_id": "b1", "title": "Align your latents: High-resolution video synthesis with latent diffusion models", "year": "2023" }, { "authors": "Tim Brooks; Aleksander Holynski; Alexei A Efros", "journal": "", "ref_id": "b2", "title": "Instructpix2pix: Learning to follow image editing instructions", "year": "2023" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b3", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Yu-Hui Chen; Raman Sarokin; Juhyun Lee; Jiuqiang Tang; Chuo-Ling Chang; Andrei Kulik; Matthias Grundmann", "journal": "", "ref_id": "b4", "title": "Speed is all you need: On-device acceleration of large diffusion models via gpu-aware optimizations", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "NeurIPS", "ref_id": "b6", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Ming Ding; Zhuoyi Yang; Wenyi Hong; Wendi Zheng; Chang Zhou; Da Yin; Junyang Lin; Xu Zou; Zhou Shao; Hongxia Yang", "journal": "NeurIPS", "ref_id": "b7", "title": "Cogview: Mastering text-to-image generation via transformers", "year": "2021" }, { "authors": "Ming Ding; Wendi Zheng; Wenyi Hong; Jie Tang", "journal": "NeurIPS", "ref_id": "b8", "title": "Cogview2: Faster and better text-to-image generation via hierarchical transformers", "year": "2022" }, { "authors": "Patrick Esser; Robin Rombach; Bjorn Ommer", "journal": "", "ref_id": "b9", "title": "Taming transformers for high-resolution image synthesis", "year": "2021" }, { "authors": "Gongfan Fang; Xinyin Ma; Mingli Song; Michael Bi; Mi ; Xinchao Wang", "journal": "", "ref_id": "b10", "title": "Depgraph: Towards any structural pruning", "year": "2023" }, { "authors": "Gongfan Fang; Xinyin Ma; Xinchao Wang", "journal": "NeurIPS", "ref_id": "b11", "title": "Structural pruning for diffusion models", "year": "2023" }, { "authors": "Oran Gafni; Adam Polyak; Oron Ashual; Shelly Sheynin; Devi Parikh; Yaniv Taigman", "journal": "", "ref_id": "b12", "title": "Make-a-scene: Scene-based text-to-image generation with human priors", "year": "2022" }, { "authors": "Zhiwei Hao; Jianyuan Guo; Ding Jia; Kai Han; Yehui Tang; Chao Zhang; Han Hu; Yunhe Wang", "journal": "NeurIPS", "ref_id": "b13", "title": "Learning efficient vision transformers via fine-grained manifold distillation", "year": "2022" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b14", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Byeongho Heo; Jeesoo Kim; Sangdoo Yun; Hyojin Park; Nojun Kwak; Jin Young Choi", "journal": "", "ref_id": "b15", "title": "A comprehensive overhaul of feature distillation", "year": "2019" }, { "authors": "Jack Hessel; Ari Holtzman; Maxwell Forbes; Ronan Le Bras; Yejin Choi", "journal": "", "ref_id": "b16", "title": "CLIPScore: A reference-free evaluation metric for image captioning", "year": "2021" }, { "authors": "Martin Heusel; 
Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "NeurIPS", "ref_id": "b17", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b18", "title": "Distilling the knowledge in a neural network", "year": "2014" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b19", "title": "Classifier-free diffusion guidance", "year": "2021" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b20", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jilei Hou; Ziad Asghar", "journal": "", "ref_id": "b21", "title": "World's first on-device demonstration of stable diffusion on an android phone", "year": "2023" }, { "authors": "Andrew Jaegle; Felix Gimeno; Andy Brock; Oriol Vinyals; Andrew Zisserman; Joao Carreira", "journal": "", "ref_id": "b22", "title": "Perceiver: General perception with iterative attention", "year": "2021" }, { "authors": "Xiaoqi Jiao; Yichun Yin; Lifeng Shang; Xin Jiang; Xiao Chen; Linlin Li; Fang Wang; Qun Liu", "journal": "", "ref_id": "b23", "title": "Tinybert: Distilling bert for natural language understanding", "year": "2020" }, { "authors": "Minguk Kang; Jun-Yan Zhu; Richard Zhang; Jaesik Park; Eli Shechtman; Sylvain Paris; Taesung Park", "journal": "", "ref_id": "b24", "title": "Scaling up gans for text-to-image synthesis", "year": "2023" }, { "authors": "Tero Karras; Miika Aittala; Timo Aila; Samuli Laine", "journal": "NeurIPS", "ref_id": "b25", "title": "Elucidating the design space of diffusion-based generative models", "year": "2022" }, { "authors": "Bo-Kyeong Kim; Shinkook Choi; Hancheol Park", "journal": "", "ref_id": "b26", "title": "Cut inner layers: A structured pruning strategy for efficient u-net gans", "year": "2022" }, { "authors": "Benjamin Lefaudeux; Francisco Massa; Diana Liskovich; Wenhan Xiong; Vittorio Caggiano; Sean Naren; Min Xu; Jieru Hu; Marta Tintore; Susan Zhang; Patrick Labatut; Daniel Haziza", "journal": "", "ref_id": "b27", "title": "xformers: A modular and hackable transformer modelling library", "year": "" }, { "authors": "Muyang Li; Ji Lin; Yaoyao Ding; Zhijian Liu; Jun-Yan Zhu; Song Han", "journal": "", "ref_id": "b28", "title": "Gan compression: Efficient architectures for interactive conditional gans", "year": "2020" }, { "authors": "Xiuyu Li; Yijiang Liu; Long Lian; Huanrui Yang; Zhen Dong; Daniel Kang; Shanghang Zhang; Kurt Keutzer", "journal": "", "ref_id": "b29", "title": "Q-diffusion: Quantizing diffusion models", "year": "2023" }, { "authors": "Yanyu Li; Huan Wang; Qing Jin; Ju Hu; Pavlo Chemerys; Yun Fu; Yanzhi Wang; Sergey Tulyakov; Jian Ren", "journal": "NeurIPS", "ref_id": "b30", "title": "Snapfusion: Text-to-image diffusion model on mobile devices within two seconds", "year": "2023" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "", "ref_id": "b31", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Luping Liu; Yi Ren; Zhijie Lin; Zhou Zhao", "journal": "ICLR", "ref_id": "b32", "title": "Pseudo numerical methods for diffusion models on manifolds", "year": "2022" }, { "authors": "Cheng Lu; Yuhao Zhou; Fan Bao; Jianfei Chen; Chongxuan Li; Jun Zhu", "journal": "NeurIPS", "ref_id": "b33", "title": "Dpm-solver: A fast ode solver for 
diffusion probabilistic model sampling in around 10 steps", "year": "2022" }, { "authors": "Cheng Lu; Yuhao Zhou; Fan Bao; Jianfei Chen; Chongxuan Li; Jun Zhu", "journal": "", "ref_id": "b34", "title": "Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models", "year": "2022" }, { "authors": "Sourab Mangrulkar; Sylvain Gugger; Lysandre Debut; Younes Belkada; Sayak Paul", "journal": "", "ref_id": "b35", "title": "Peft: State-of-the-art parameterefficient fine-tuning methods", "year": "2022" }, { "authors": "Chenlin Meng; Ruiqi Gao; P Diederik; Stefano Kingma; Jonathan Ermon; Tim Ho; Salimans", "journal": "", "ref_id": "b36", "title": "On distillation of guided diffusion models", "year": "2022" }, { "authors": "Chenlin Meng; Yutong He; Yang Song; Jiaming Song; Jiajun Wu; Jun-Yan Zhu; Stefano Ermon", "journal": "ICLR", "ref_id": "b37", "title": "Sdedit: Guided image synthesis and editing with stochastic differential equations", "year": "2022" }, { "authors": "Chenlin Meng; Ruiqi Gao; P Diederik; Stefano Kingma; Jonathan Ermon; Tim Ho; Salimans", "journal": "", "ref_id": "b38", "title": "On distillation of guided diffusion models", "year": "2023" }, { "authors": "Sangwoo Mo; Minsu Cho; Jinwoo Shin", "journal": "", "ref_id": "b39", "title": "Freeze the discriminator: a simple baseline for fine-tuning gans", "year": "2020" }, { "authors": "Chaitanya Murti; Tanay Narshana; Chiranjib Bhattacharyya", "journal": "ICLR", "ref_id": "b40", "title": "TVSPrune -pruning non-discriminative filters via total variation separability of intermediate representations without fine tuning", "year": "2023" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b41", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2022" }, { "authors": "Atila Orhon; Michael Siracusa; Aseem Wadhwa", "journal": "", "ref_id": "b42", "title": "Stable diffusion with core ml on apple silicon", "year": "2022" }, { "authors": "Wonpyo Park; Dongju Kim; Yan Lu; Minsu Cho", "journal": "", "ref_id": "b43", "title": "Relational knowledge distillation", "year": "2019" }, { "authors": "Pablo Pernias; Dominic Rampas; L Mats; Christopher J Richter; Marc Pal; Aubreville", "journal": "", "ref_id": "b44", "title": "Wuerstchen: An efficient architecture for large-scale text-to-image diffusion models", "year": "2023" }, { "authors": "Justin Pinkney", "journal": "", "ref_id": "b45", "title": "Small stable diffusion", "year": "2023" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b46", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b47", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b48", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b49", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" 
}, { "authors": "Jie Yuxi Ren; Xuefeng Wu; Jianchao Xiao; Yang", "journal": "", "ref_id": "b50", "title": "Online multi-granularity distillation for gan compression", "year": "2021" }, { "authors": "Robin Rombach; Patrick Esser", "journal": "", "ref_id": "b51", "title": "Stable diffusion v1-4", "year": "2008" }, { "authors": "Robin Rombach; Patrick Esser", "journal": "", "ref_id": "b52", "title": "Stable diffusion v1-5", "year": "2007" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b53", "title": "Ldm on celeba-hq", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "CVPR", "ref_id": "b54", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Robin Rombach; Patrick Esser; David Ha", "journal": "", "ref_id": "b55", "title": "Stable diffusion v2-1-base", "year": "2005" }, { "authors": "Adriana Romero; Nicolas Ballas; Samira Ebrahimi Kahou; Antoine Chassang; Carlo Gatta; Yoshua Bengio", "journal": "ICLR", "ref_id": "b56", "title": "Fitnets: Hints for thin deep nets", "year": "2015" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b57", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b58", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "NeurIPS", "ref_id": "b59", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Tim Salimans; Jonathan Ho", "journal": "ICLR", "ref_id": "b60", "title": "Progressive distillation for fast sampling of diffusion models", "year": "2022" }, { "authors": "Tim Salimans; Ian Goodfellow; Wojciech Zaremba; Vicki Cheung; Alec Radford; Xi Chen", "journal": "NeurIPS", "ref_id": "b61", "title": "Improved techniques for training gans", "year": "2016" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b62", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Christoph Schuhmann; Romain Beaumont", "journal": "", "ref_id": "b63", "title": "Laionaesthetics", "year": "2022" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "", "ref_id": "b64", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": " Segmind", "journal": "", "ref_id": "b65", "title": "Segmind-distill-sd", "year": "2002" }, { "authors": " Segmind", "journal": "", "ref_id": "b66", "title": "Ssd-1b", "year": "2023" }, { "authors": "Haihao Shen; Penghui Cheng; Xinyu Ye; Wenhua Cheng; Huma Abidi", "journal": "", "ref_id": "b67", "title": "Accelerate stable diffusion with intel neural compressor", "year": "2022" }, { "authors": "Changyong Shu; Yifan Liu; Jianfei Gao; Zheng Yan; Chunhua Shen", "journal": "", "ref_id": "b68", "title": "Channel-wise knowledge 
distillation for dense prediction", "year": "2021" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "ICLR", "ref_id": "b69", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Zhiqing Sun; Hongkun Yu; Xiaodan Song; Renjie Liu; Yiming Yang; Denny Zhou", "journal": "", "ref_id": "b70", "title": "Mobilebert: a compact task-agnostic bert for resource-limited devices", "year": "2020" }, { "authors": "Raphael Tang; Linqing Liu; Akshat Pandey; Zhiying Jiang; Gefei Yang; Karun Kumar; Pontus Stenetorp; Jimmy Lin; Ferhan Ture", "journal": "", "ref_id": "b71", "title": "What the DAAM: Interpreting stable diffusion using cross attention", "year": "2023" }, { "authors": "Ming Tao; Bing-Kun Bao; Hao Tang; Changsheng Xu", "journal": "", "ref_id": "b72", "title": "Galip: Generative adversarial clips for text-to-image sis", "year": "2023" }, { "authors": "Hugo Touvron; Matthieu Cord; Matthijs Douze; Francisco Massa; Alexandre Sablayrolles; Herve Jegou", "journal": "", "ref_id": "b73", "title": "Training data-efficient image transformers and distillation through attention", "year": "2021" }, { "authors": "Aaron Van Den; Oriol Oord; Vinyals", "journal": "NeurIPS", "ref_id": "b74", "title": "Neural discrete representation learning", "year": "2017" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "NeurIPS", "ref_id": "b75", "title": "Attention is all you need", "year": "2017" }, { "authors": "Suraj Patrick Von Platen; Anton Patil; Pedro Lozhkov; Nathan Cuenca; Kashif Lambert; Mishig Rasul; Thomas Davaadorj; Wolf", "journal": "", "ref_id": "b76", "title": "Diffusers: State-of-the-art diffusion models", "year": "2022" }, { "authors": "Haochen Wang; Xiaodan Du; Jiahao Li; Raymond A Yeh; Greg Shakhnarovich", "journal": "", "ref_id": "b77", "title": "Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation", "year": "2023" }, { "authors": "Lu Yu; Wei Xiang", "journal": "", "ref_id": "b78", "title": "X-pruner: explainable pruning for vision transformers", "year": "2023" }, { "authors": "Sergey Zagoruyko; Nikos Komodakis", "journal": "ICLR", "ref_id": "b79", "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "year": "2017" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b80", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Linfeng Zhang; Xin Chen; Xiaobing Tu; Pengfei Wan; Ning Xu; Kaisheng Ma", "journal": "", "ref_id": "b81", "title": "Wavelet knowledge distillation: Towards efficient image-to-image translation", "year": "2022" }, { "authors": "Qinsheng Zhang; Yongxin Chen", "journal": "", "ref_id": "b82", "title": "Fast sampling of diffusion models with exponential integrator", "year": "2023" }, { "authors": "Yufan Zhou; Ruiyi Zhang; Changyou Chen; Chunyuan Li; Chris Tensmeyer; Tong Yu; Jiuxiang Gu; Jinhui Xu; Tong Sun", "journal": "", "ref_id": "b83", "title": "Towards language-free training for text-to-image generation", "year": "2022" }, { "authors": "Ligeng Zhu", "journal": "", "ref_id": "b84", "title": "Thop: Pytorch-opcounter", "year": "2018" } ]
[ { "formula_coordinates": [ 3, 81.84, 98.22, 461.71, 19.27 ], "formula_id": "formula_0", "formula_text": "4 × 𝐻 × 𝑊 R 1,1 A 1,1 R 1,2 A 1,2 Cv Dn R 2,1 A 2,1 R 2,2 A 2,2 Cv Dn R 3,1 A 3,1 R 3,2 A 3,2 Cv Dn R 4,1 R 4,2 R 5,1 A 5,1 R 5,2 R 6,1 R 6,2 Cv Up ⧺ ⧺ ⧺ R 6,3 R 7,1 A 7,1 R 7,2 A 7,2 Cv Up ⧺ ⧺ ⧺ R 7,3 A 7,3 R 8,1 A 8,1 R 8,2 A 8,2 Cv Up ⧺ ⧺ ⧺ R 8,3 A 8,3 R 9,1 A 9,1 R 9,2 A 9,2 ⧺ ⧺ ⧺ R" }, { "formula_coordinates": [ 4, 90.42, 570.39, 196.61, 12.69 ], "formula_id": "formula_1", "formula_text": "L Task = E z,ϵ,y,t ||ϵ -ϵ S (z t , y, t)|| 2 2 ,(1)" }, { "formula_coordinates": [ 4, 77.19, 700.63, 209.84, 12.69 ], "formula_id": "formula_2", "formula_text": "L OutKD = E ||ϵ T (z t , y, t) -ϵ S (z t , y, t)|| 2 2 .(2)" }, { "formula_coordinates": [ 4, 320.19, 235.68, 225.59, 22.21 ], "formula_id": "formula_3", "formula_text": "L FeatKD = E l ||f l T (z t , y, t) -f l S (z t , y, t)|| 2 2 ,(3)" }, { "formula_coordinates": [ 4, 319.56, 414.15, 226.22, 9.65 ], "formula_id": "formula_4", "formula_text": "L = L Task + λ OutKD L OutKD + λ FeatKD L FeatKD . (4)" }, { "formula_coordinates": [ 12, 54.17, 207.33, 489.12, 17.71 ], "formula_id": "formula_5", "formula_text": "Dn Dn Dn Dn Up Up Up Up Cv Cv Latent 4 × 𝐻 × 𝑊 Noise 4 × 𝐻 × 𝑊 R 1,1 A 1,1 R 1,2 A 1,2 Cv Dn R 2,1 A 2,1 R 2,2 A 2,2 Cv Dn R 3,1 A 3,1 R 3,2 A 3,2 Cv Dn R 4,1 R 4,2 R 5,1 A 5,1 R 5,2 R 6,1 R 6,2 Cv Up ⧺ ⧺ ⧺ R6,3 R 7,1 A 7,1 R 7,2 A 7,2 Cv Up ⧺ ⧺ ⧺ R 7,3 A 7,3 R 8,1 A 8,1 R 8,2 A 8" }, { "formula_coordinates": [ 12, 58.34, 532.98, 481.72, 62.62 ], "formula_id": "formula_7", "formula_text": "Latent 4×64×64 Noise 4×64×64 R 1,1 A 1,1 R 1,2 A 1,2 Cv Dn R 2,1 A 2,1 R 2,2 A 2,2 Cv Dn R 3,1 A 3,1 R 3,2 A 3,2 Cv Dn R 4,1 R R 5,1 A 5,1 R 5,2 R 6,1 R 6,2 Cv Up ⧺ ⧺ ⧺ R 6,3 R 7,1 A 7,1 R 7,2 A 7,2 Cv Up ⧺ ⧺ ⧺ R 7,3 A 7,3 R 8,1 A 8,1 R 8,2 A 8,2 Cv Up ⧺ ⧺ ⧺ R 8,3 A 8,3 R 9,1 A 9,1 R 9,2 A 9,2 ⧺ ⧺ ⧺ R 9,3 A 9,3 320×64×64 320×32×32 640×16×16 1280×8×8 1280×8×8 1280×16×16 1280×32×32 640×64×64 320×64×64 1280×8×8 Cv Cv Noise 4×64×64 Dn Cv Dn R 1,1 A 1,1 Dn Cv Dn R 2,1 A 2,1 Dn R 3,1 A 3,1 Cv Dn Dn R 4,1 Up R 7,1 A 7,1 Cv Up ⧺ ⧺ R 7,3 A 7,3 Up R 6,1 ⧺ ⧺ Cv Up R 6,3 Up R 8,1 A 8,1 Cv Up ⧺ ⧺ R 8,3 A 8,3 Up R 9,1 A 9,1 ⧺ ⧺ R 9,3 A 9,3 320×64×64 320×32×32 640×16×16 1280×8×8 1280×8×8 1280×16×16 1280×32×32 640×64×64 320×64×64" } ]
BK-SDM: A Lightweight, Fast, and Cheap Version of Stable Diffusion
Figure 1. Our compressed Stable Diffusion enables efficient (a) zero-shot text-to-image generation, (b) personalized synthesis, (c) image-to-image translation, and (d) mobile deployment. Samples from BK-SDM-Small with 36% reduced parameters and latency are shown.
Bo-Kyeong Kim; Hyoung-Kyu Song; Thibault Castells; Shinkook Choi
[ { "figure_caption": "Figure 2 .2Figure 2. Computation of Stable Diffusion v1 and v2. The denoising U-Net is the main processing bottleneck. THOP [85] is used to measure MACs in generating a 512×512 image.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Minor impact of removing the mid-stage from the U-Net. Results without retraining. See Sec. B for additional results.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Importance of (a) each block and (b) each group of paired/triplet blocks. Higher score implies removable blocks. The results are aligned with our architectures (e.g., removal of innermost stages and the second R-A pairs in down stages). See Sec. C for further analysis.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Distillation-based retraining. The block-removed U-Net is trained effectively through the guidance of the original U-Net.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "T2I performance. Tab. 3 summarizes the results from ablating the total KD objective (Eq. 2+Eq. 3). Across various model types, distillation brings a clear improvement in generation quality. Tab. 4 analyzes the effect of each element in", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 .A7Figure 7. Visual comparison with open-sourced models. The results [9, 42, 45, 73, 84] were obtained with their official codes.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Image areas affected by each word. KD enables our models to mimic the SDM, yielding similar per-word attribution maps. The model without KD behaves differently, causing dissimilar maps and inaccurate generation (e.g., two sheep and unusual bird shapes).", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 88displays the per-word attribution maps[72] created by aggregating cross-attention scores over spatiotemporal dimensions. The attribution maps of our models are semantically and spatially similar to those of the original model, indicating the merit of supervisory signals at multiple stages via KD. In contrast, the baseline model without KD activates incorrect areas, leading to textmismatched generation results.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Table 2 .2Impact of compute reduction in U-Net on the entire SDM. The number of sampling steps is indicated with the parentheses, e.g., U-Net (1) for one step. The full computation (denoted by \"Whole\") covers the text encoder, U-Net, and image decoder. All corresponding values are obtained on the generation of a single 512×512 image with 25 denoising steps. The latency was measured on Xeon Silver 4210R CPU 2.40GHz and NVIDIA GeForce RTX 3090 GPU.", "figure_data": "", "figure_id": "fig_9", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Zero-shot results over training progress. The architecture size of BK-SDM, usage of KD, and batch size are denoted. 
Results on MS-COCO 30K.", "figure_data": "", "figure_id": "fig_10", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Trade-off curves. Left: FID vs. CLIP score; Right: generation quality vs. efficiency. BK-SDM-Base on MS-COCO 5K.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Impact of training data volume. Results of BK-SDM-Small on MS-COCO 30K.", "figure_data": "", "figure_id": "fig_12", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Table 7 .Figure 12 .712Figure 12. Visual results of personalized generation. Each subject is marked as \"a [identifier] [class noun]\" (e.g., \"a [V] dog\").", "figure_data": "", "figure_id": "fig_13", "figure_label": "712", "figure_type": "figure" }, { "figure_caption": "Figure 13 .Figure 14 .1314Figure 13. Text-guided image-to-image translation.", "figure_data": "", "figure_id": "fig_14", "figure_label": "1314", "figure_type": "figure" }, { "figure_caption": "Figure 16 .16Figure16. Distillation retraining process. The compact U-Net student is built by eliminating several residual and attention blocks from the original U-Net teacher. Through the feature and output distillation from the teacher, the student can be trained effectively yet rapidly. The default latent resolution for SDM-v1 and v2-base is H = W = 64 in Fig.15, resulting in 512×512 generated images.", "figure_data": "", "figure_id": "fig_15", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 20 .20Figure 20. Zero-shot general-purpose T2I results. The results of previous studies[9,73, 84] were obtained with their official codes and released models. We do not apply any CLIP-based reranking for SDM and our models.", "figure_data": "", "figure_id": "fig_18", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "Figure 21 .21Figure 21. Results of personalized generation. Each subject is marked as \"a [identifier] [class noun]\" (e.g., \"a [V] dog\"). Similar to the original SDM, our compact models can synthesize the images of input subjects in different backgrounds while preserving their appearance.", "figure_data": "", "figure_id": "fig_19", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "Figure 23 .23Figure 23. Deployment on NVIDIA Jetson AGX Orin 32GB.", "figure_data": "", "figure_id": "fig_20", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 24 .24Figure 24. Deployment on iPhone 14.", "figure_data": "", "figure_id": "fig_21", "figure_label": "24", "figure_type": "figure" }, { "figure_caption": "Figure 25 .25Figure 25. Additional examples from deployment on edge devices.", "figure_data": "", "figure_id": "fig_22", "figure_label": "25", "figure_type": "figure" }, { "figure_caption": "Figure 26 .26Figure 26. Stable Diffusion WebUI [1] used in the deployment on AGX Orin.", "figure_data": "", "figure_id": "fig_23", "figure_label": "26", "figure_type": "figure" }, { "figure_caption": "Fig. 2727Fig. 27 illustrates how varying data sizes affects the training of BK-SDM-Small. Fig. 28 presents additional visual outputs of the following models: BK-SDM-{Base, Small, Tiny} trained on 212K (i.e., 0.22M) pairs and BK-SDM-{Base-2M, Small-2M, Tiny-2M} trained on 2256K (2.3M) pairs.", "figure_data": "", "figure_id": "fig_24", "figure_label": "27", "figure_type": "figure" }, { "figure_caption": "Figure 27 .27Figure 27. Varying data quantities in training BK-SDM-Small. 
As the amount of data increases, the visual outcomes improve, such as enhanced image-text matching and clearer differentiation between objects.", "figure_data": "", "figure_id": "fig_25", "figure_label": "27", "figure_type": "figure" }, { "figure_caption": "Results on zero-shot MS-COCO 256×256 30K. Training resources include image-text pairs, batch size, iterations, and A100 days. Despite far smaller resources, our compact models outperform prior studies [8, 9, 49, 73, 84], showing the benefit of compressing existing powerful models. Note that FID fluctuates more than the other metrics over training progress in our experiments (see Figs. 9 and 11).", "figure_data": "Generation ScoreTraining Resource", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance gains from distillation retraining. Results on MS-COCO 30K.", "figure_data": "BK-SDMGeneration Score# ParamTypeKD FID↓IS↑CLIP↑ U-Net WholeBase✗ ✓23.57 23.02 0.2408 0.58B 0.76B 15.76 33.79 0.2878 0.58B 0.76Bv2-Base✗ ✓16.76 25.88 0.2661 0.59B 0.98B 15.85 31.70 0.2868 0.59B 0.98Bv2-Small✗ ✓16.71 25.77 0.2655 0.49B 0.88B 16.61 31.73 0.2901 0.49B 0.88Bv2-Tiny✗ ✓16.87 26.06 0.2678 0.33B 0.72B 15.68 31.64 0.2897 0.33B 0.72BBK-SDMInitOutKD FeatKD FID↓IS↑CLIP↑Random✗✗43.80 13.61 0.1622BaseTeacher✗✗20.45 22.68 0.2444(Batch 64)Teacher✓✗16.48 27.30 0.2620Teacher✓✓14.61 31.44 0.2826Random✗✗41.75 15.42 0.1733v2-SmallTeacher Teacher✗ ✓✗ ✗16.71 25.77 0.2655 14.27 29.47 0.2777Teacher✓✓16.61 31.73 0.2901Init: weight initialization. (OutKD, FeatKD): (output, feature)-level KD.", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Significance of each element in transferred knowledge.", "figure_data": "", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "depicts that our models can accurately capture the subject details and generate various scenes. Over Impact of training batch size and iterations. Results on MS-COCO 30K.", "figure_data": "BK-SDMBaseSmallTinyBatch Size642566425664256FID↓14.6115.7616.8716.9817.2817.12IS↑31.4433.7929.5131.6828.3330.09CLIP↑0.2826 0.2878 0.26440.26770.2607 0.2653BK-SDM Base (Data 2.3M) Small (Data 2.3M) Tiny (Data 2.3M)# Iter50K100K50K100K50K100KFID↓14.8115.3917.0517.0117.5317.63IS↑34.1734.7633.1033.1431.3232.26CLIP↑0.2883 0.2889 0.27340.27540.2690 0.2713", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": ": Per-subject FT time and GPU memory for 800 iterations with a batch size of 1 on NVIDIA RTX 3090.", "figure_data": "Backbone {Param}DINO↑ CLIP-I↑ CLIP-T↑ FT (Time, Mem) †SDM-v1.4 [52] {1.04B}0.7280.7250.263(882s, 23.0GB)BK-SDM-Base {0.76B}0.7230.7170.260(623s, 18.7GB)BK-SDM-Small {0.66B}0.7200.7050.259(604s, 17.2GB)BK-SDM-Tiny {0.50B}0.7150.6930.261(560s, 13.1GB)Base (Batch 64) {0.76B}0.7180.7080.262(623s, 18.7GB)-No KD & Random Init.0.5940.4650.191(623s, 18.7GB)-No KD & Teacher Init.0.7160.6690.258(623s, 18.7GB)", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[37,39]", "Explanation": "The cited works introduce a method of distilling a pretrained diffusion model to reduce the number of denoising steps, which the citing paper adopts to improve the efficiency of SDMs in text-guided vision applications."}, {"Category": "Methodological Basis", "Citation": "[16,57]", "Explanation": "The cited works on feature-level knowledge distillation (KD) are leveraged in the citing paper to retrain a compact model for general-purpose T2I with transferred knowledge from a large diffusion model."}, {"Category": "Extension or Continuation", "Citation": "[11,41,79]", "Explanation": "The cited works focused on small models, but the citing paper extends the research to larger models like SDMs, which are more complex and require a different approach to structural block removal."}, {"Category": "Data Source", "Citation": "[64]", "Explanation": "The cited work provides the LAION pairs dataset that the citing paper uses in its research on training diffusion models."}, {"Category": "Methodological Basis", "Citation": "[59]", "Explanation": "The cited work on customized generation is used as a reference for the application of the lightweight backbones in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work on image-to-image translation is used as a reference for the application of the lightweight backbones in the citing paper to lower finetuning and inference costs."}, {"Category": "Extension or Continuation", "Citation": "[66]", "Explanation": "The cited work introduces block pruning and knowledge distillation techniques for the SDM-v1 variant, which the citing paper may build upon to further improve the performance of the model."}, {"Category": "Extension or Continuation", "Citation": "[67]", "Explanation": "The cited work presents the SDXL model, which the citing paper may extend by exploring new dimensions, contexts, or variables in the context of T2I synthesis on Jetson AGX Orin and iPhone 14 using the model."}, {"Category": "Methodological Basis", "Citation": "[7,21,70]", "Explanation": "The cited works on diffusion-based generative models provide the methodological basis for the text-to-image synthesis techniques used in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[42,60]", "Explanation": "The cited works on text-conditional diffusion models for image synthesis are extended in the citing paper to further improve the quality of T2I synthesis."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The text-conditional prior network in DALL\u2022E-2 is adopted as a method for producing image embeddings in the diffusion modeling process described in the citing paper."}, {"Category": "Data Source", "Citation": "[52,53,55,56]", "Explanation": "The cited works on SDMs provide a data source for the low-dimensional latent space used in the pixel-space autoencoder in the diffusion modeling process described in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[37,39,61]", "Explanation": "The cited works on diffusion-tailored distillation provide a method for progressively transferring knowledge from a pretrained diffusion model to a fewer-step model with the same architecture, which the citing paper adopts to improve the sampling speed."}, {"Category": "Methodological Basis", "Citation": "[34,35,83]", "Explanation": "The cited works on fast high-order solvers for diffusion ordinary differential 
equations offer a method to boost the sampling speed in diffusion models, which the citing paper can utilize to improve the efficiency of the sampling process."}, {"Category": "Extension or Continuation", "Citation": "[22,30,68]", "Explanation": "The cited works on leveraging quantization and implementation optimizations for SDMs can be combined with the compact models presented in the citing paper to further improve efficiency in diffusion models."}, {"Category": "Supporting Evidence", "Citation": "[19,44]", "Explanation": "The cited works on output-level and feature-level information in large source models support the use of distillation for enhancing the performance of small-size models in diffusion models."}, {"Category": "Supporting Evidence", "Citation": "[14,74]", "Explanation": "The cited works on distillation pretraining for general-purpose language models and vision transformers provide evidence that the success of distillation can be extended to diffusion models with iterative sampling."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "SnapFusion [31] provides a U-Net architecture and step distillation techniques that the citing paper adopts in their research on efficient SDMs."}, {"Category": "Extension or Continuation", "Citation": "[45]", "Explanation": "The work of W\u00fcrstchen [45] introduces two diffusion processes for training SDMs, which the citing paper builds upon to further explore the economic training aspect of diffusion models."}, {"Category": "Supporting Evidence", "Citation": "[12]", "Explanation": "Diff-Pruning [12] proposes a structured pruning method based on Taylor expansion for diffusion models, which the citing paper uses to support their research on efficient SDMs."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work v1.4 is used as a reference for the U-Net block configuration in the denoising U-Net approach discussed in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[56]", "Explanation": "The cited work v2.1-base is also referenced in the discussion of the U-Net block configuration in the denoising U-Net approach, indicating that both versions are used in the experiments."}, {"Category": "Methodological Basis", "Citation": "[58]", "Explanation": "The cited work, U-Net, is a method that the citing paper adopts in the design of the SDMs. 
The U-Net is used in the denoising process of the SDMs, and the citing paper reduces the per-step computation of the U-Net to create BK-SDMs."}, {"Category": "Methodological Basis", "Citation": "[63]", "Explanation": "The cited work, DistilBERT, provides the method of halving the number of layers and initializing the compact model with the original weights, which the citing paper adopts in their approach to U-Net design."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work, residual (R) blocks, is extended in the U-Net design by maintaining the first R-A pairs and eliminating the second pairs in the down stages."}, {"Category": "Extension or Continuation", "Citation": "[23]", "Explanation": "The cited work, cross-attention (A) blocks, is extended in the U-Net design by maintaining the third R-A pairs in the up stages, allowing the use of output feature maps and skip connections."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work on GANs provides a reference for the U-Net generator in the citing paper, which is used to analyze the role of inner layers in the generator and guide the design of the mid-stage removal in the proposed model."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work provides a methodological basis for the design choices made in the citing paper regarding the importance of certain blocks in the U-Net architecture."}, {"Category": "Methodological Basis", "Citation": "[63]", "Explanation": "The cited work contributes to the methodological basis of the citing paper by providing insights and guidelines for the design of the U-Net architecture."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work introduces the concept of the diffusion process, which the citing paper adopts in the training of the U-Net student for the task of general-purpose T2I."}, {"Category": "Methodological Basis", "Citation": "[55]", "Explanation": "The cited work provides the pretrained and frozen encoders for images and text prompts, which the citing paper uses in the input of the U-Net to train the block-removed U-Net to mimic the behavior of the original U-Net."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work provides the output-level KD objective that the citing paper adopts in the training of the compact student."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work introduces the feature-level KD approach that the citing paper utilizes in the training of the student model."}, {"Category": "Methodological Basis", "Citation": "[57]", "Explanation": "The cited work provides further guidance for the training of the student model by offering additional information on feature-level KD."}, {"Category": "Data Source", "Citation": "[64,65]", "Explanation": "The cited works L-Aes 6.5+ and L-Aes 6.25+ are the data sources for the image-text pairs used in the distillation retraining process in the citing paper."}, {"Category": "Data Source", "Citation": "[59]", "Explanation": "The cited work provides the DreamBooth dataset used in the citing paper for personalized generation evaluation."}, {"Category": "Methodological Basis", "Citation": "[39]", "Explanation": "The cited work is the source of the input images used in the image-to-image translation task in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[77]", "Explanation": "The cited work is the 
source of the Diffusers code used in the main retraining process in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work is the source of the PEFT code used in the per-subject finetuning process in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work provides the default value of the classifier-free guidance scale used in the inference phase of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[60]", "Explanation": "The cited work is the source of the default value of the classifier-free guidance scale used in the inference phase of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work is the source of the ViT-S/16 model used in the DINO score evaluation in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[46]", "Explanation": "The cited work by [46] provides evidence of the need for a two-stage knowledge distillation process with two teachers in order to achieve competitive performance in general-purpose T2I models, which the citing paper aims to address with a more efficient and cost-effective approach."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work by [9] serves as a methodological basis for the visual comparison conducted in the citing paper, as it provides a baseline for assessing the photorealism of synthesized images generated by the models."}, {"Category": "Data Source", "Citation": "[73]", "Explanation": "The cited work by [73] is acknowledged as a data source in the citing paper, as it is used to generate a dataset of images and captions for the visual comparison conducted in the study."}, {"Category": "Data Source", "Citation": "[84]", "Explanation": "The cited work by [84] is acknowledged as a data source in the citing paper, as it is used to generate a dataset of images and captions for the visual comparison conducted in the study."}, {"Category": "Extension or Continuation", "Citation": "[40]", "Explanation": "The cited work by [40] is discussed in the citing paper as a point of extension or continuation, as it highlights the observation of shared visual styles between original and compressed models in transfer learning for GANs, which the citing paper aims to build upon in its own research."}, {"Category": "Methodological Basis", "Citation": "[20,60]", "Explanation": "The cited works provide higher classifier-free guidance scales that the citing paper adopts to improve the text-aligned image generation process."}, {"Category": "Data Source", "Citation": "More denoising steps", "Explanation": "The cited work introduces the concept of more denoising steps, which the citing paper utilizes in their research to improve generation quality."}, {"Category": "Extension or Continuation", "Citation": "Distillation retraining", "Explanation": "The cited work introduces the concept of distillation retraining, which the citing paper extends to achieve better tradeoff curves in their research."}, {"Category": "Methodological Basis", "Citation": "[59]", "Explanation": "The cited work, DreamBooth, provides a method for fine-tuning pre-trained
image generation models for personalized T2I synthesis. The citing paper builds upon this method to create images about a given subject with a reduced finetuning cost."}, {"Category": "Extension or Continuation", "Citation": "[38]", "Explanation": "The cited work, SDEdit, is used in the citing paper to present text-guided stylization results for image-to-image translation. The citing paper extends the research by exploring the use of SDEdit in the context of image generation."}, {"Category": "Extension or Continuation", "Citation": "[55]", "Explanation": "The cited work on LDMs is extended by applying the same approach of BK-SDM-Small to compress an LDM for unconditional generation on CelebA-HQ, demonstrating the generality of the work."}, {"Category": "Methodological Basis", "Citation": "[53,55]", "Explanation": "The cited work provides a benchmark for testing the models in the citing paper, establishing a standard for comparison and evaluation."}, {"Category": "Data Source", "Citation": "[1]", "Explanation": "The cited work is the Stable Diffusion WebUI, which is the source of the data and models used in the inference and testing of the models in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[26,35]", "Explanation": "The cited work introduces the DPM++ 2M Karras sampling method, which the citing paper adopts to synthesize images in a faster and more efficient manner."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work optimizes the attention mechanism in the models, which the citing paper utilizes to improve the inference speed and image quality."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work introduces post-training palettization, which the citing paper deploys in the models to improve the inference speed and image quality on iPhone 14."}, {"Category": "Methodological Basis", "Citation": "[52,55]", "Explanation": "The cited work is the original SDM-v1.4 model, which the citing paper compares against in the post-training palettization setup to demonstrate the improved performance of the models in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[77]", "Explanation": "The cited work, Diffusers, provides the training process of DDPM in latent spaces that the citing paper adopts for distillation retraining."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work, PEFT, is used for per-subject finetuning in the citing paper, leveraging the training process of DDPM in latent spaces."}, {"Category": "Data Source", "Citation": "[33]", "Explanation": "The cited work provides the PNDM scheduler used in the zero-shot T2I generation process in the citing paper."}, {"Category": "Data Source", "Citation": "[34,35]", "Explanation": "The cited works are the DPM-Solver used in the DreamBooth results in the citing paper."}, {"Category": "Data Source", "Citation": "[20,60]", "Explanation": "The cited works provide the default value of the classifier-free guidance scale used in the image-to-image translation process in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work, SDEdit method, is implemented in Diffusers and used as a method for training the model in the citing paper."}, {"Category": "Data Source", "Citation": "[77]", "Explanation": "The cited work, Diffusers, is the source of the implementation of the SDEdit method used in the citing paper for training the model."}, 
{"Category": "Methodological Basis", "Citation": "(Wang et al., 2021)", "Explanation": "The cited work by Wang et al. (2021) provides the architectural blocks and details of the residual blocks and attention blocks used in the citing paper for the design of the self-attention module and cross-attention module."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b28", "b15", "b5", "b33", "b24", "b2", "b20", "b21", "b21", "b8" ], "table_ref": [], "text": "There has been a growing amount of work [28; 23; 20; 35] using powerful text-to-image pre-trained models [29; 31] to generate high-quality 3D scenes based on the text inputs. Despite the impressive results reported in the literature, it still remains a challenge of achieving a level of grace and flexibility in generating and manipulating scenes through open-vocabulary natural language that is on par with human interaction. To enhance human-computer interaction in image generation, researchers [41; 37; 8] have attempted to utilize pre-trained cross-modal models, e.g., CLIP [29] or generative models, e.g., StyleGAN [16] to generate images through continual linguistic instructions. However, for real-world applications, such models suffer from limited capacities to comprehend and reason the natural language input in the open-vocabulary domain. Moreover, these works have solely focused on the 2D generation and do not account for the challenges of rendering 3D scenes, particularly related to the arrangement and editing of multiple 3D objects.\nMeanwhile, recent advances in Large language models (LLMs), such as PaLM [6], LLaMA [34] and GPT-4 [25], exhibit exceptional abilities in carrying out natural language reasoning tasks and demonstrate impressive conversational, reasoning, and zero-shot generation abilities across diverse domains. More remarkably, LLMs have shown eminent potential in realizing and interpreting the 3D space through codes [3], thus making it possible to be a strong linkage between the natural language and the 3D generation modeling without relying on 3D datasets. However, it is non-trivial to directly apply LLMs to solve such complex interactive 3D generation tasks without considering the proper cooperation with off-the-shelf generative models. In light of this, we propose to use the layout as the interface to bridge the LLMs and generative models, i.e., the LLMs interpret natural language into a 3D layout across multi-round interactions and the 3d generative models, e.g., CompoNeRF [21] use the 3D layout as conditional input to perform layout-to-3D generation. To achieve this, we design a language-guided interactive 3D generation system, dubbed LI3D, that defines a versatile layout structure based on bounding boxes and semantics for LLMs to interpret. This allows our system to not only generate 3D scenes from the stretch but also enable the editing of various objects within the generated scene. Furthermore, we integrate the LLMs [22; 10; 42] boosted by visual instruction tuning, e.g., LLaVA [22] in our case, into our system to improve the stability of the generated 3D scenes. Because recent works [23; 21; 5] have demonstrated the instability of generating complex 3D scenes when relying solely on general descriptions. This instability often results in unsatisfactory visual quality and potential confusion in modeling multiple objects. Firstly, the LLaVA [22] acts as the verifier to predict the visual similarity between generated rendered image and the description. Secondly, for the low-quality generated content, the LLaVA provides a more detailed description of the scene based on the image for the LLMs interpreter to refine the layout.\nOur extensive empirical evaluations showcase the versatility and efficacy of LI3D in both 2D and 3D generation tasks. 
We evaluate LI3D using the i-CLEVR benchmark [9] to assess its robustness in generating layouts based on sequential language instructions. The results indicate that LI3D achieves a high recall rate in recognizing objects and a strong capacity for reasoning about 3D spatial relationships under multi-round interactions. The visual results show that LI3D can generate 3D scenes that are consistent with the user's natural language input over multiple rounds. We also conduct studies on 3D single object generation, which reveal that LI3D has knowledge about the components of individual 3D objects and can understand and edit fine-grained parts following user language input. Additionally, we extend our pipeline to 2D image generation and demonstrate promising results in multi-round 2D image generation and editing.\nIn summary, our proposed interactive 3D generation system, LI3D, utilizes the capabilities of LLMs to interpret natural language input and generate 3D layouts, allowing for highly customized and realistic 3D scenes. Our evaluation results demonstrate the robustness and versatility of LI3D in both 2D and 3D interactive generation tasks based on natural language input and multi-round interactions." }, { "figure_ref": [ "fig_0" ], "heading": "Proposed Method", "publication_ref": [ "b21", "b21" ], "table_ref": [], "text": "Overview of our LI3D: An overview of our proposed system is depicted in Fig. 1. The key idea is to utilize the spatial domain (2D or 3D) knowledge, the remarkable conversational competency, and the reasoning capabilities of recent powerful LLMs [26; 3; 33] to interpret the user language into a 3D layout that serves as input to conditional generative models. Language-guided interactive 3D generation aims to generate 3D content following multi-round user language inputs, which requires the model to understand the relationships within the input language and to be capable of interactively generating 3D content. In our work, we bridge the gap between LLMs and 3D generative models by considering the generated 3D content as a composition of multiple objects or parts that can be formalized by coarse 2D or 3D boxes with semantic descriptions in a scene. Therefore, LI3D leverages the LLMs to interpret the user language into a 3D layout that guides off-the-shelf generative models to perform conditional 3D generation. Using the 3D layout to represent a 3D scene helps the LLMs maintain the context of the current conversation more precisely and reason about the language input in a more standardized way. Moreover, we incorporate LLaVA [22] to provide feedback and generate more detailed descriptions based on the rendered image for the LLM, allowing the interpreted layout to be updated when it fails to produce satisfactory results. Empirically, a conditional layout with more details can further improve the visual quality of the generated content. Specifically, LLaVA [22] is integrated into LI3D to predict generation quality and provide detailed description feedback for the LLM to update layouts that fail to generate satisfactory content (Sec. 2.2)." }, { "figure_ref": [ "fig_0" ], "heading": "Language-guided Interactive 3D Generation Framework", "publication_ref": [ "b29", "b20", "b29", "b17" ], "table_ref": [], "text": "As shown in Fig. 1(a), LI3D employs a natural language interpreter to understand the user input and generate a layout that includes concept semantics, e.g., blue cube, and location information, e.g., the center of the scene, as sketched below.
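To make this interface concrete, the following is a minimal, illustrative sketch of what such a layout could look like; this is not the authors' exact schema, and the field names and coordinate values are assumptions for illustration only.

# Hypothetical layout produced by the LLM interpreter for a prompt such as
# "a blue cube on a wooden table in the center of the scene".
# Each entry pairs a coarse 3D box (center and size in scene coordinates)
# with a free-form semantic description used as its identifier.
layout = [
    {"description": "a blue cube", "center": [0.0, 0.1, 0.0], "size": [0.4, 0.4, 0.4]},
    {"description": "a wooden table", "center": [0.0, -0.3, 0.0], "size": [1.2, 0.2, 0.8]},
]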
Each element in the layout is a coarse box with a semantic description of a scene. This layout is then used as the conditional input to generative modules to produce visual content. Formally, given a pre-defined context prompt p, input language x, and generative module G, the natural language interpreter I generates a corresponding layout l by responding to the user's query. In our work, the interpreter I is the LLM, which is prompted to generate a layout given a 2D or 3D generation task setup (see details in Sec. 3.1 and Appendix). The generative module G consists of a set of pre-built modules {G_i}, each corresponding to a type of model, e.g., Stable Diffusion [30] or CompoNeRF [21]. Eq. 1 provides a formal definition of the layout l:\nl = I(x_0; p)  (1)\nwhere x_0 is the initial language input for 3D generation from scratch. Given the generated layout, the corresponding generative modules for each step are then executed to obtain the generated results G(l). To improve consistency across multi-round interactive generation, we first detect the layout modification compared to the previous step and then apply pre-defined rules to update the generated result using the generative models. For time step t, the rendered view r_t is generated by:\nr_t = G(l_{t-1}, I(x_t; p; l_{t-1}))  (2)\nDuring an interaction session, we use rule-based operations to perform interactive editing, following the principle that the semantic description serves as the unique identifier for each 3D box. When detecting the addition of a new box (i.e., instance) to the layout, we only initialize a new instance based on the previously generated results, while keeping the rest unchanged. When detecting edits to an attribute in the semantic description, we further fine-tune the modified object to align with the edited description. When detecting edits to the location, we can simply modify the transformation matrix associated with the 3D location. For interactive image generation, we introduce both Stable Diffusion [30] and the Segment Anything model [18] to generate, extract, and manipulate objects based on the layout. More details are illustrated in Sec. 3.4." }, { "figure_ref": [ "fig_0" ], "heading": "Generative Feedback Module", "publication_ref": [ "b21", "b2" ], "table_ref": [], "text": "Recent research [23; 21] has shown that generating complex 3D scenes can be unstable when using less informative descriptions, leading to unsatisfactory visual quality and confusing multi-object modeling results. Based on our empirical study, we find that LI3D tends to generate layouts with only general descriptions, which may trigger the aforementioned issues. To this end, we propose a generative feedback module to automatically assess the suitability of the generated 3D scene and ensure proper 3D placement using LLaVA [22] instead of responses from the user. It employs LLaVA as the verifier V to explicitly predict whether the rendered views satisfy the description provided in the layout using the context prompt p, as shown in Fig. 1 (b). Then, the feedback b_t = V(l, p, r_t) based on the rendered view r_t is utilized to help the LLM update the layout with more details when these views fail to match the description.
Formally, the updated rendered view r^*_t is calculated by:\nr^*_t = G(l_{t-1}, I(x_t; p; b_t))  (3)\nThe generative feedback module can be seamlessly integrated at every step of LI3D as an enhancer.\nDiscussion: Recent research [3] has showcased that LLMs, e.g., [6; 25], can synthesize information from different domains or modalities and can apply knowledge across different contexts. Specifically, LLMs are able to generate primitive graphics or geometries through code. An intuitive idea is to take such primitives as the conditional input for the generative models. However, it is nontrivial to compose off-the-shelf generative models with a code interface in an interactive way. This motivates us to propose LI3D with a simple yet effective intermediate representation design to interpret the user intention from language through LLMs. Using layouts with boxes to model scenes, LI3D facilitates precise element-level decomposition and editing, allowing for more accurate context preservation in multi-round scenarios compared to code formats or plain language." }, { "figure_ref": [], "heading": "Empirical Evaluation and Analysis", "publication_ref": [], "table_ref": [], "text": "As there is a lack of benchmark studies for the interactive 3D generation task, we conduct a quantitative evaluation and case study to demonstrate the versatile use of LLMs as the layout interpreter by answering the following questions: 1) How well do LLMs understand language instructions in the spatial domain? 2) How well do LLMs capture commonsense knowledge about 3D scenes and single objects? To address these questions, we first introduce a benchmark built on a dataset from interactive visual generation to evaluate the potential of generating layouts from language instructions. Then, we provide the case study results, analysis, and failure cases that could guide future improvement." }, { "figure_ref": [], "heading": "Experiment Setup", "publication_ref": [ "b25", "b33", "b20", "b20", "b35" ], "table_ref": [], "text": "Implementation Details We implement the LLM with ChatGPT [26] (OpenAI \"gpt-3.5-turbo\" version) and use LLaVA-13B [34] as our generative feedback module. Once a 3D layout response is generated, we feed it into CompoNeRF [21] for generative rendering on a single RTX3090. For more details on CompoNeRF, please refer to the paper [21]. For the generation stage, i.e., the initial user input, we train the CompoNeRF from scratch with 8000 iterations. For the following editing stages, we use a rule-based strategy to update previously generated 3D content based on CompoNeRF. During a user interaction session, we use the object description as the unique id for each local NeRF. When we detect that a new instance is added to the layout, we first train the single local NeRF for 3000 iterations and then jointly fine-tune the whole scene for 6000 iterations. When the layout contains duplicated object descriptions, we match the modified object to the previously generated results by minimal location distance to perform location editing.\nPrompt Design Similar to VisualChatGPT [36], we design a context template for prompting the layout generation. Our case study focuses on two scenarios: scene generation and object generation.\nTo optimize generation quality in each scenario, we developed two types of context prompts with different emphases, as shown in Tab.
For the generative feedback, LLaVA takes the rendered image and the context prompt with layout information, i.e., the description of the scene or object, as input to predict the matching similarity between the image and the description. When LLaVA generates a negative response, we further use LLaVA to generate a more detailed description of the current content as feedback input for the LLM to update the layout. More details are provided in the Appendix." }, { "figure_ref": [], "heading": "Robustness of Layout Generation from Language", "publication_ref": [ "b8", "b8", "b8" ], "table_ref": [], "text": "To evaluate layout generation following language input quantitatively, we utilize a subset of i-CLEVR [9], a dataset designed for the Generative Neural Visual Artist (GeNeVA) task, as the benchmark. Each example in the i-CLEVR [9] dataset consists of a sequence of 5 (image, instruction) pairs. Starting from an empty background, each language instruction describes an object to add to the canvas in terms of its shape and color. The instruction also describes where the object should be placed relative to existing objects in the scene. We sample 50 sequential instructions and their ground-truth arrangements to evaluate the robustness of LLMs in following instructions. The LLM is asked to generate the scene layout based on multi-round sequential textual instructions (up to 5). More details about the evaluation can be found in the appendix.\nTo compute a similarity metric between the generated layout and the ground truth, we use recall and relational similarity (rsim) proposed in [9] as the evaluation metrics. Recall measures how many of the ground-truth objects are recalled in the generated layout. The relational similarity metric quantifies the overlap between the ground-truth relations and those in the generated layout by counting the number of shared relations, where relations are defined by the left-right and front-back relations between each pair of objects. Tab. 2 reports, for each example, the mean of the values at different time steps over the sampled sub-dataset. It shows that the LLMs can recognize the object identities from multi-round language input with high accuracy and achieve a strong reasoning capacity in the 3D spatial domain for object arrangement. This phenomenon motivates us to further develop LLMs as versatile layout interpreters for 3D and 2D interactive generation tasks." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "[Figure 2 example prompts: 'Add some clouds over the castle. Move the clouds higher.' 'A courtyard with a well in the center and several trees. Duplicate another well. Replace all the trees with fir tree.' 'Generate a simple study room. Move the bookshelf away from the desk. Remove the bookshelf and make the chair black.']" }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Qualitative Results of Interactive 3D Generation", "publication_ref": [], "table_ref": [], "text": "3D Scene Generation: Results are shown in Fig. 2. In this case, the user can generate a complex compositional scene with a simple prompt. Our LI3D can generate reasonable spatial layouts as responses. The generated scenes span different scales, from a mountain to a small room.\nMoreover, the editing instructions can include adding/removing objects and adjusting/duplicating objects; a minimal sketch of how such edits can be detected from layout differences is given below.
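The following is a minimal, illustrative sketch — not the authors' implementation — of the rule-based detection of layout differences that drives these add/remove/adjust operations; it assumes the hypothetical layout format sketched earlier, with the semantic description used as the unique identifier of each box.

def diff_layouts(prev_layout, new_layout):
    # Index boxes by their semantic description (assumed to be a unique identifier).
    prev = {box["description"]: box for box in prev_layout}
    new = {box["description"]: box for box in new_layout}
    added = [new[d] for d in new.keys() - prev.keys()]       # initialize a new local instance
    removed = [prev[d] for d in prev.keys() - new.keys()]    # drop the corresponding instance
    moved = [new[d] for d in new.keys() & prev.keys()
             if new[d]["center"] != prev[d]["center"]]       # only update the transformation matrix
    return added, removed, moved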
However, in some instances, the LLMs fail to generate a reasonable layout without user adjustment, e.g., the bookshelf in the third case. Nevertheless, LI3D shows the great potential of using LLMs as the interface for interactive 3D generation based on user language.\nSingle Object Generation: As shown in Fig. 3, LI3D also enables users to generate individual objects from simple compositional parts using language. The results of our LI3D reveal that LLMs, which are trained solely on text data, are capable of learning the concept of object decomposition. This suggests that LLMs have potential as knowledge engines in future work on 3D generative modeling. Based on the decomposed parts in the layout, our LI3D allows editing operations including adding, removing, and adjusting object parts. Despite these impressive results, LI3D still cannot always decompose an object following physical rules, even for simple structures, e.g., the wheels fall outside the vehicle in certain views." }, { "figure_ref": [ "fig_4" ], "heading": "Ablation Study of Generative Feedback", "publication_ref": [], "table_ref": [], "text": "To verify the effectiveness of our generative feedback module, we perform the ablation study in Fig. 4. Specifically, we provide two cases before and after using the generative feedback module. The results suggest that a layout with only a general description may not be accurately captured by the generative models, resulting in low visual quality. However, when more detailed description feedback from LLaVA is incorporated, the generated layout contains richer descriptions of the scene, leading to more satisfactory generated content.\nFailure Cases & Limitation: While LI3D represents a significant advancement towards language-guided interactive 3D generation using LLMs, it still has limitations that need to be addressed." }, { "figure_ref": [ "fig_6" ], "heading": "", "publication_ref": [ "b20" ], "table_ref": [], "text": "[Figure 3 example prompts: 'Generate \"a off-road vehicle\". Make the wheel smaller and more closer to each other. Replace the wheel with basketball.' 'Generate a table. Merge the four legs into one. Add a base for the table.' 'Generate a windmill. Make the tower more wider. Duplicate the blades in the middle of tower.'] As shown in Fig. 5, the LLMs may generate a scene with imbalanced sizes among different objects, contrary to common sense, and may reason about the layout without respecting the physical rules of the real world. For these cases, our generative feedback module also fails to provide correct feedback due to the low generation quality. These failure cases highlight the importance of providing commonsense-based constraints for more realistic and precise 3D content generation. Besides, the nondeterministic behavior of the LLMs may result in different generated content across trials, or in inconsistent or mistakenly generated layouts across a long sequence of interactions. Moreover, the foundational 3D generative models, such as CompoNeRF [21], are still unstable in constructing certain open-vocabulary single objects, especially irregular objects, and cannot achieve real-time interaction performance due to their computation cost.
Although our interactive system can generate and edit complex scenes based on language, the low resolution of the generated scenes might limit performance when a large number of objects, or objects with extreme size differences, are involved." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "[Figure 4 examples. User Prompt: Put a computer on the desk. Description: A simple study room with a desk, a chair, a bookshelf, a lamp, and a computer on the desk. User Prompt: Generate a scene \"A simple living room.\" Description: A cozy living room with a sofa, coffee table, lamp, and potted plant.]" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "[Figure 6: Extending our pipeline into interactive 2D generation and editing. Example prompts: Generate a scene \"a dog is running on the grass.\" Add a flying disc. A dog is sitting on a sofa. / Generate a scene \"a dog is sitting on the sofa.\" Move the dog to the right. Remove the dog.]" }, { "figure_ref": [], "heading": "Extension on Interactive Image Generation", "publication_ref": [ "b29", "b17" ], "table_ref": [], "text": "Our design pipeline also supports image generation and editing by incorporating Stable Diffusion [30] for layout-conditional generation and in-painting, and Segment Anything (SAM) [18] for layout element extraction. The prompt design is similar to that for 3D generation; more details are provided in the Appendix. Our interactive image generation pipeline is also driven by the layout, and we focus on showcasing the manipulation of elements, including adding, removing, and moving objects defined in the layout. We use the object description as a unique object identity and define rule-based operations for editing by detecting layout differences. When adding a new object, we first generate a new image based on the current layout, use SAM to extract the new object, and then paste it into the image generated from the previous round of interaction. When an object is removed, we use SAM to extract the object mask and perform image in-painting. When detecting an object location or size change, we treat it as object removal followed by pasting the masked object according to the generated layout. The qualitative results are shown in Fig. 6, where the different images are generated using different random seeds. This indicates that using LLMs as layout interpreters can form a highly flexible framework for visual generation." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b40", "b28", "b15", "b14", "b10", "b35", "b27", "b23", "b19", "b22", "b20" ], "table_ref": [], "text": "Interactive Visual Generation and Editing: Recently, a variety of generation and editing methods [9; 8; 37; 13] have emerged to facilitate multi-round interactions. These methods enable continuous editing of the generative results, ensuring the satisfaction of user requirements throughout the interaction process. For instance, TiGAN [41] utilizes the pre-trained cross-modal model CLIP [29] to understand language feedback from the user and support successive editing by the generative model StyleGAN [16]. In contrast, Talk-to-Edit [15] introduces a rule-based system response module to generate responses for multi-round interactions. Nevertheless, these methods face limitations when it comes to handling non-predefined scenes, which is particularly challenging when applying them to complex multi-object 3D scenes.
To tackle this challenge, LLMs offer a promising solution by leveraging their robust capacity to comprehend and reason common knowledge. The advancement of LLMs has spurred the development of various approaches to computer vision, which primarily concentrate on tasks related to visual generation [31; 14], and visual understanding [1; 2; 12]. Furthermore, the LLMs are also utilized for interpreting user instructions and translating them into other executable programs. The VISPROG [11] leverages LLMs to generate modular programs resembling Python code. These programs are executed to obtain solutions along with clear and interpretable explanations. In a similar vein, Visual ChatGPT [36], a system that integrates various Visual Foundation Models, enables users to interact with ChatGPT using images as input and receive images as responses. Differently, our proposed LI3D solves the 3D interactive generation problems by designing a task-specific intermediate representation for LLMs to interpret for better cooperation with generative models.\nText-guided 3D Generative Models Recently, there has been growing interest in exploring the use of text-to-3D synthesis [17; 28; 20; 35; 38] by leveraging the success of diffusion models in 2D generative models. Existing methods typically rely on pre-trained text-to-image models [30; 31] and employ score distillation sampling to generate 3D geometries and synthesize novel views. Dream-Fusion [28] exploits the pre-trained text-to-image diffusion model as the prior to update NeRF [24] for text-to-3D synthesis, which eliminates the need for the 3D training data and demonstrates the effectiveness of this prior in the 3D generation. Similarly, Magic3D [20] achieves higher resolution 3D mesh models through a two-stage optimization framework that enhances the text-to-3D synthesis of NeRF. Moreover, Latent-NeRF [23] utilizes score distillation sampling in the latent space to produce 3D results with less computation cost. Despite these advancements, some challenges [21] remain including view inconsistency, inaccuracy generation content, and compositional capacity, especially in complex scenes with multi-object generation." }, { "figure_ref": [], "heading": "Layout Conditional Visual Synthesis", "publication_ref": [ "b29" ], "table_ref": [], "text": "The layout-to-image task is an early attempt to perform single-round interactive generation in the image domain. The layout control in image generation is typically specified as labeled bounding boxes [40; 32; 4; 30] or semantic maps [19; 39]. In this work, we use the labeled bounding boxes as our conditioned input as they can be described through language with specific structure. While recent advances in large text-to-image models, such as stable diffusion [30], have made it possible to extend layout-to-image generation task [4; 19] using language. Despite the progress in the image domain, it remains challenging for the text to convey precise information about the layout of a 3D scene and image generators still have limited spatial fidelity. To address this issue, there have been several approaches [7; 27; 21] joints employing the conditioning on the text and 3D layout generation from the text. These approaches employ different levels of SDS loss to train the local and global scene rendering based on user input layouts. Our work takes inspiration from [21; 4] as they offer distinct advantages in layout conditional generation. 
LI3D eliminates the need for pre-defined object concepts and spatial locations in layout input from the user. Instead, the natural language input can be automatically interpreted as a layout for the generative models to generate and perform multi-round editing." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel interactive language-guided system named LI3D, which integrates LLMs into existing generation models to enable interactive visual content generation in both 2D and 3D domains. By bridging the gap between the natural language and the 3D generative models through LLMs, LI3D allows users to easily generate and edit visual content, even from continuous language instructions. To further improve generation quality, we use LLaVA to verify and enrich the generated content based on the rendered views as feedback for LLMs to update the layout. Our experiments validate the effectiveness of LI3D, demonstrating the potential benefits of incorporating LLMs in generative AI for applications such as metaverse and gaming simulation. Moreover, our results reveal that LLMs can serve as a visual commonsense knowledge reasoning engine, opening new opportunities for future research.\nBroader Impact The use of the stable diffusion model in LI3D brings with it the possibility of inheriting biases or limitations present in the model. This is a concern shared by all generative models, which can potentially be used to create disinformation that is more convincing if presented in 3D. While the synthesized 3D models in LI3D may not yet match the realism of state-of-the-art 2D image synthesis, they still pose a risk for misuse. Additionally, the automation enabled by generative models such as ours may lead to the displacement of creative workers. However, it's also important to acknowledge the potential for these tools to facilitate growth and enhance accessibility in the creative industry. Overall, it's crucial to consider these ethical implications and use generative models responsibly to maximize their potential benefits while minimizing potential harms." }, { "figure_ref": [], "heading": "A Implementation Details", "publication_ref": [], "table_ref": [], "text": "In this section, we provide more model details of the 3D and 2D generative models and more implementation details about integrating our LI3D and these generative models." }, { "figure_ref": [], "heading": "A.1 Layout Conditional 3D Generative Models", "publication_ref": [ "b20" ], "table_ref": [], "text": "In LI3D our 3D generative model adopt the CompoNeRF [21] which performs multi-object text-to-3D from a configurable layout as input. In CompoNeRF, the 3D layout requires extracting multiple noun phrases with their corresponding binding attributes and mapping these local text prompts into the corresponding regions. While in LI3D we use the LLMs to directly predict such complex 3D layouts from user language input. The generated layout is defined with multiple local frames associated with a local NeRF as representation. Each local NeRF contains a local text prompt and a spatial layout described by 3D boxes for each object entity as formulated in our layout format. In the local NeRF representation, each point (x l , y l , z l ) ∈ [-1, 1] in the local frame is mapped into its corresponding volumetric density σ and emitted color C l using the equation: \n[C l , σ ] = θ l (x l , y l , z l )." 
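To make the layout-to-local-NeRF mapping above more concrete, below is a minimal sketch of how a world-space sample point could be routed to the local NeRF of each layout entry. This is not the CompoNeRF implementation: the LayoutEntry fields simply mirror the object_center_point / object_box_scales format produced by the LLM, the small MLP standing in for each θ_l is a placeholder, and the naive compositing rule is an assumption made only for illustration.

```python
import torch
import torch.nn as nn
from dataclasses import dataclass

@dataclass
class LayoutEntry:
    """One object in the LLM-generated layout (illustrative fields)."""
    description: str      # local text prompt, e.g. "a well"
    center: torch.Tensor  # (3,) object_center_point in [-1, 1]
    box_scale: torch.Tensor  # (3,) object_box_scales in [-1, 1]

class LocalNeRF(nn.Module):
    """Stand-in for theta_l: maps a local-frame point to (pseudo-color, density)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 5),  # 4 pseudo-color channels + 1 density
        )

    def forward(self, x_local: torch.Tensor):
        out = self.mlp(x_local)
        color = out[..., :4]              # C_l (latent pseudo-color)
        sigma = torch.relu(out[..., 4:])  # sigma >= 0
        return color, sigma

def world_to_local(points: torch.Tensor, entry: LayoutEntry) -> torch.Tensor:
    """Map world-space points into an entry's local frame, normalized to [-1, 1]."""
    return (points - entry.center) / (entry.box_scale / 2.0)

def query_layout(points: torch.Tensor, layout, nerfs):
    """Accumulate color/density from every local NeRF whose box contains the point."""
    color = torch.zeros(points.shape[0], 4)
    sigma = torch.zeros(points.shape[0], 1)
    for entry, nerf in zip(layout, nerfs):
        local = world_to_local(points, entry)
        inside = (local.abs() <= 1.0).all(dim=-1, keepdim=True).float()
        c, s = nerf(local)
        color = color + inside * c  # naive sum; CompoNeRF uses its own compositing
        sigma = sigma + inside * s
    return color, sigma

if __name__ == "__main__":
    layout = [LayoutEntry("a well", torch.zeros(3), torch.tensor([0.5, 0.5, 0.5]))]
    nerfs = [LocalNeRF()]
    pts = torch.rand(8, 3) * 2 - 1  # random world-space samples
    c, s = query_layout(pts, layout, nerfs)
    print(c.shape, s.shape)  # torch.Size([8, 4]) torch.Size([8, 1])
```

In the full system, as described above, the composited pseudo-color is passed through the Stable Diffusion decoder to obtain the rendered view, and each local NeRF is optimized with Score Distillation Sampling rather than a direct reconstruction loss.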
}, { "figure_ref": [], "heading": "A.2 Layout Conditional 2D Generative Models", "publication_ref": [ "b29", "b17", "b3" ], "table_ref": [], "text": "In the extension of 2D image generation, our basic models are the open-source Stable Diffusion(v1-5) [30] and Segment Anything [18] (SAM-VitH). In the layout-to-image step, we follow the [4] to use backward guidance in the default setting to control the generation process in Stable Diffusion.\nDuring image editing steps with the updated layout as a condition, we first use SAM to segment the objects that detected change in the layout using the center point of the box. In practice, we use the mask with the second large predicted score to extract the object and dilate the mask with 10 pixels around the mask. After extracting the object, we perform inpainting using the Stable Diffusion model to obtain the unchanged background based on the updated layout." }, { "figure_ref": [], "heading": "B Prompt Design for 3D Generation B.1 Scene Generation", "publication_ref": [], "table_ref": [], "text": "In this section, we provide the context prompt for the scene generations (See Table . 4) in detail and the case dialog (See Table . 5) for the 3D scene generation task in LI3D. The key insight is to prompt the LLMs to be specific for the generative models, including the expected sizes of the boxes and the generating of the aligned elements. Furthermore, we provide the layout visualization (See Figure The number of elements in object_description, object_center_point, and object_box_scales should be equal. Each object_description should include scene information. Given an instruction, 3DGPT is able to understand the meaning of the instruction and generate the modification of the object_description, object_center_point or object_box_scales based on previous output results. The relative size of each object should follow the common scene in the world. The range of object_center_point and object_box_scales MUST be [-1, 1]. Each object should not have overlap to each other.\n." }, { "figure_ref": [], "heading": "Generated Response:", "publication_ref": [], "table_ref": [], "text": "Understood, how may I assist you today with 3DGPT and 3D modeling tasks? In this scene, there are four objects: a well in the center, and three trees positioned around it. The position of each object is described by its 'object_center_point', which is a list of three values corresponding to the x, y, and z coordinates of the object's center point in the scene. The size of each object is described by its 'object_box_scales', which is a list of three values corresponding to the width, height, and depth of the object's bounding box in the scene. Note that the 'object_center_point' and 'object_box_scales' values are normalized to the range [-1, 1], and that the size of each object follows the common scene in the world. Also, there is no overlap between any of the objects in the scene." }, { "figure_ref": [], "heading": "User Prompt:", "publication_ref": [], "table_ref": [], "text": "Duplicate another well Generated Response:\nSure, here's an updated scene description that includes a duplicate well: \"' object_description: ['a well', 'a well', 'a tree', 'a tree', 'a tree'] object_center_point: [[0, 0, 0], [0. In this updated scene, there are five objects: two wells positioned in the center, and three trees positioned around them. The position and size of each object is described using the 'object_center_point' and 'object_box_scales' lists, respectively. 
Note that the 'object_description', 'object_center_point', and 'object_box_scales' lists have been updated to include the second well, and that the 'description' field has been updated to reflect the change in the scene." }, { "figure_ref": [], "heading": "User Prompt:", "publication_ref": [], "table_ref": [], "text": "Replace all the trees with fir tree Generated Response:\nSure, here's an updated scene description that replaces all the trees with fir trees: \"' object_description: ['a well', 'a well', 'a fir tree', 'a fir tree', 'a fir tree'] object_center_point: [[0, 0, 0], [0. " }, { "figure_ref": [], "heading": "B.2 Single Object Generation", "publication_ref": [], "table_ref": [], "text": "In this section, we provide the context prompt in detail for the individual objects from multiple compositional parts using language (See Table . 6) and the case dialog (See Table . 7). Our main focus is on adjusting or editing specific compositions within individual objects. Moreover, we offer the visualization of the layout for an individual object (See Figure. 9), as well as multiple rendering views (See Figure . 10) of the same case across multiple rounds of interaction. 3DObjGPT is designed to be able to assist with the 3d modeling tasks, 3DObjGPT is able to generate a 3d scene of object structure based on the input it receives. When 3DGPT generate a response for a single OBJECT, MUST use the following format: \"' part_description: ['description of a object part', 'description of a object part'] part_center_point: [[0, 0, 0], [0, 0, 0]] part_box_scales: [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]] description: 'description of the whole object' \"' The number of elements in part_description, part_center_point, and part_box_scales MUST be equal. Each part_description should include object information. If parts are duplicated, all of them should be listed. Given an instruction, 3DObjGPT is able to understand the meaning of the instruction and generate the modification of the object_description, ob-ject_center_point or object_box_scales based on previous output results. The relative size of each object should follow the common scene in the world. The range of object_center_point and object_box_scales MUST be [-1, 1]." }, { "figure_ref": [], "heading": "Generated Response:", "publication_ref": [], "table_ref": [], "text": "Thank you for providing the format for generating responses for a single object using 3DObjGPT. It seems like a very useful tool for 3D modeling tasks. Based on the given format, the part_description variable should contain a list of descriptions for each part of the object. Each element in the list should describe a different part of the object, and there should be one element for each part. The part_center_point variable should contain a list of center points for each part of the object. Each element in the list should correspond to the part description in the same position in the part_description list. The center point represents the position of the center of the part relative to the origin of the object. The part_box_scales variable should contain a list of box scales for each part of the object. Each element in the list should correspond to the part description in the same position in the part_description list. The box scales represent the size of the bounding box of the part, which can be used to determine the overall size and shape of the object. 
The description variable should contain a single description of the whole object, which summarizes the different parts and their relationships to each other. When generating modifications to the object_description, object_center_point, or object_box_scales, the new values should still conform to the same format and rules as the original response. The relative size of each object should still follow the common scene in the world, and the range of object_center_point and object_box_scales should still be [-1, 1]. description: 'A windmill with a base, tower, blades, and spinner.' \"' In this response, I have described the windmill as having four parts: the base, tower, blades, and spinner. The 'part_center_point' values describe the position of each part relative to the origin of the windmill. The 'part_box_scales' values describe the size and shape of each part. The base of the windmill is a rectangular prism with dimensions of 1 unit in the x and z directions and 0.5 units in the y direction. The tower is a thin cylinder with a height of 1 unit and a radius of 0.1 units. The blades are a long, flat rectangular prism with dimensions of 2 units in the x direction, 0.1 units in the y direction, and 0.5 units in the z direction. The spinner is a small cube with dimensions of 0.5 units in each direction. Of course, this is just one possible representation of a windmill. Depending on the level of detail and style that you want, there are many other ways that a windmill could be described using 3DObjGPT. User Prompt:\nMake the tower more wider Generated Response:\nSure, I can modify the windmill tower to make it wider. Here is an updated response: " }, { "figure_ref": [], "heading": "D Prompt Design for 2D Generation", "publication_ref": [], "table_ref": [], "text": "In this section, we showcase the prompt in detail for the LLMs to guide the layout in 2D generation using language (See Table. 9) and the case dialog (See Table . 10) for the 2D generation task in LI3D. description: 'description of the whole scene' \"' Each object_center_point and object_scale are normalized and MUST be [0, 1]. The number of elements in object_description and object_box should be equal. When input a instruction, 2DGPT is able to understand the meaning of instruction and reason the modification of the locations or size or description by output the same format as input. The relative size of each object should follow the common scene in the world and the object usually are close to the camera. Each width and height of object MUST large than 0.2. In the scene, the bottom part usually is the ground. Generated Response:Sure, I can assist with 2D modeling tasks using the given format. Please provide me with more details on the task you want me to perform. The above structure represents the following: -There are two objects in the scene with descriptions 'dog' and 'sofa' respectively. -The center point of the dog object is [x=0.5, y=0.5], which represents its position on a 2D plane where (0,0) is the top left corner and (1,1) is the bottom right corner of the image. -The scale of the dog object is [width=0.6, height=0.6], which represents the size of the bounding box around the object. -The center point of the sofa object is [x=0.5, y=0.75], which represents its position on the same 2D plane.\n-The scale of the sofa object is [width=0.8, height=0.2], which represents the size of the bounding box around the object. -The scene description is 'a dog is sitting on the sofa'. 
Note that the bottom part of the scene is assumed to be the ground. The relative size of the objects follows the common scene in the world, with the sofa being relatively larger than the dog. Also, the dog's bounding box is slightly larger than the minimum required width and height of 0. In the above layout, the dog object has been removed, and the object descriptions, center points, and scales have been adjusted accordingly. The scene description has also been modified to reflect the absence of the dog." }, { "figure_ref": [ "fig_2" ], "heading": "D.1 Prompt Design for Benchmarking Layout Generation", "publication_ref": [ "b8", "b8" ], "table_ref": [], "text": "To evaluate layout generation following the language input qualitatively, we utilize a subset of i-CLEVR [9], a dataset designed for the Generative Neural Visual Artist (GeNeVA) task, as the benchmark. Each example in i-CLEVR [9] dataset consists of a sequence of 5 (image, instruction) pairs. Starting from an empty background, each language instruction describes an object to add to the canvas in terms of its shape and color. The instruction also describes where the object should be placed relative to existing objects in the scene. We sample 50 sequential instructions and their ground-truth arrangement to evaluate the robustness of LLMs following instructions. As shown in the Table. 11, we first use the context prompt the same as the 3D generation described in Table . 4. Then the LLM is asked to generate the scene layout based on the textual instruction with multi-round sequential instruction (max to 5). We show a complete cases dialog at Table. 11. Note that in each round of testing, all the instructions are fed simultaneously in a user prompt since we focus on evaluating the layout generation ability based on complex input. When the instructions are split into multiple rounds of interaction, the capability of maintaining context also needs to be considered making it hard to evaluate the layout generation ability only. During the numerical evaluation, we calculate the similarity between the front view of the 3D layout and the ground truth images (See Fig. 12) based on the center points of each object. " }, { "figure_ref": [], "heading": "C Prompt Design for Generative Feedback", "publication_ref": [], "table_ref": [], "text": "In this section, we present a comprehensive context prompt for LLaVA to generate feedback on the layout produced by LLMs(See Table . 8). The feedback serves as the revised context prompt for LLMs to generate a new layout. Furthermore, we provide multiple rendering views (See Figure. 11) showcasing the original 3D scene and the edited 3D scene. The image does not match the description \"A castle on a mountain.\" Instead, it shows a computer-generated (CGI) image of a castle with a grassy hill. There is no indication of the castle being situated on a mountain in the provided image." }, { "figure_ref": [], "heading": "Context Prompt:(LLaVA)", "publication_ref": [], "table_ref": [], "text": "Generate the description with more details to match the scene ${DESCRIPTION} (\"A castle on a mountain.\") LLaVA Response:(FEEDBACK)\nTo modify the description to match the \"A castle on a mountain,\" you could add specific details about the mountainous environment surrounding the castle. 
For example, you could include sentences such as \"The castle is situated atop a tall mountain, overlooking the surrounding landscape,\" or \"The castle is built into the side of a massive mountain, providing a sense of natural fortification.\" By incorporating these details, you can create a more accurate and visually appealing image that aligns with the original description." }, { "figure_ref": [], "heading": "Context Prompt:(LLMs)", "publication_ref": [], "table_ref": [], "text": "Given feedback ${FEEDBACK} (\"To modify the description to match the \"A castle on a mountain,\" you could add specific details about the mountainous environment surrounding the castle. For example, you could include sentences such as \"The castle is situated atop a tall mountain, overlooking the surrounding landscape,\" or \"The castle is built into the side of a massive mountain, providing a sense of natural fortification.\" By incorporating these details, you can create a more accurate and visually appealing image that aligns with the original description.\") update the layout. Generated Response: (LLMs) Sure, here's an updated layout that includes the specific details about the mountainous environment surrounding the castle: \"' object_description: ['a mountain', 'a castle'] object_center_point: [[0, -0.5, 0], [0, 0, 0]] object_box_scales: [[0.9, 0.5, 0.9], [0.4, 0.4, 0.4]] description: 'The castle is situated atop a tall mountain, overlooking the surrounding landscape. The castle itself is built into the side of the mountain, providing a sense of natural fortification. A winding path leads up to the castle gates, which are guarded by two towering statues. From the castle walls, one can see for miles in every direction, taking in the majesty of the surrounding peaks and valleys.' \"' In this updated layout, the scene is described in more detail. The castle is situated on a tall mountain, and is built into the side of the mountain for natural fortification. The castle gates are guarded by two towering statues, and a winding path leads up to the gates. From the castle walls, one can see for miles in every direction, taking in the majesty of the surrounding peaks and valleys." } ]
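Putting the prompts of Appendices B and C together, the generative feedback procedure can be summarized by the short sketch below. It is a schematic outline rather than the exact implementation: the llm, llava, and render_view callables are placeholders for the layout interpreter, LLaVA, and the layout-conditioned renderer, and the prompt strings only paraphrase the templates listed above.

```python
from typing import Callable

def generative_feedback_loop(
    user_instruction: str,
    llm: Callable[[str], str],             # placeholder: layout interpreter (LLM chat call)
    llava: Callable[[object, str], str],   # placeholder: vision-language checker on a rendered view
    render_view: Callable[[str], object],  # placeholder: layout-conditioned generator + renderer
    max_rounds: int = 2,
) -> str:
    """Sketch of the layout-update loop used by the generative feedback module."""
    # Initial layout from the user instruction, roughly l = I(x; p) in Eq. (1).
    layout = llm(f"Generate a scene layout for: {user_instruction}")

    for _ in range(max_rounds):
        # Render one view of the generated content, roughly r = G(l, ...) in Eq. (2).
        view = render_view(layout)

        # Ask the vision-language model whether the view matches the description.
        verdict = llava(
            view,
            f"Does this image match the description '{user_instruction}'? "
            "Answer yes or no, then explain.",
        )
        if verdict.strip().lower().startswith("yes"):
            break

        # Otherwise request a richer description and feed it back as b in Eq. (3).
        feedback = llava(
            view, f"Generate a more detailed description to match '{user_instruction}'."
        )
        layout = llm(f"Given feedback: {feedback}\nUpdate the layout.")

    return layout
```

The castle-on-a-mountain dialog above corresponds to a single pass through this loop: LLaVA reports the mismatch, produces a richer description, and the LLM regenerates the layout from that feedback.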
2023-05-25
[ { "authors": "Omri Avrahami; Dani Lischinski; Ohad Fried", "journal": "", "ref_id": "b0", "title": "Blended diffusion for text-driven editing of natural images", "year": "2022" }, { "authors": "Tim Brooks; Aleksander Holynski; Alexei A Efros", "journal": "", "ref_id": "b1", "title": "Instructpix2pix: Learning to follow image editing instructions", "year": "2022" }, { "authors": "Sébastien Bubeck; Varun Chandrasekaran; Ronen Eldan; Johannes Gehrke; Eric Horvitz; Ece Kamar; Peter Lee; Yin Tat Lee; Yuanzhi Li; Scott Lundberg", "journal": "", "ref_id": "b2", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "Minghao Chen; Iro Laina; Andrea Vedaldi", "journal": "", "ref_id": "b3", "title": "Training-free layout control with cross-attention guidance", "year": "2023" }, { "authors": "Yen-Chi Cheng; Hsin-Ying Lee; Sergey Tulyakov; Alexander Schwing; Liangyan Gui", "journal": "", "ref_id": "b4", "title": "Sdfusion: Multimodal 3d shape completion, reconstruction, and generation", "year": "2022" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b5", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Dana Cohen-Bar; Elad Richardson; Gal Metzer; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b6", "title": "Setthe-scene: Global-local training for generating controllable nerf scenes", "year": "2023" }, { "authors": "Xing Cui; Zekun Li; Peipei Li; Yibo Hu; Hailin Shi; Zhaofeng He", "journal": "", "ref_id": "b7", "title": "I2edit: Towards multi-turn interactive image editing via dialogue", "year": "2023" }, { "authors": "Alaaeldin El-Nouby; Shikhar Sharma; Hannes Schulz; Devon Hjelm; Layla El Asri; Samira Ebrahimi Kahou; Yoshua Bengio; Graham W Taylor", "journal": "", "ref_id": "b8", "title": "Tell, draw, and repeat: Generating and modifying images based on continual linguistic instruction", "year": "2019" }, { "authors": "Tao Gong; Chengqi Lyu; Shilong Zhang; Yudong Wang; Miao Zheng; Qian Zhao; Kuikun Liu; Wenwei Zhang; Ping Luo; Kai Chen", "journal": "", "ref_id": "b9", "title": "Multimodal-gpt: A vision and language model for dialogue with humans", "year": "2023" }, { "authors": "Tanmay Gupta; Aniruddha Kembhavi", "journal": "", "ref_id": "b10", "title": "Visual programming: Compositional visual reasoning without training", "year": "2022" }, { "authors": "Ayaan Haque; Matthew Tancik; Alexei A Efros; Aleksander Holynski; Angjoo Kanazawa", "journal": "", "ref_id": "b11", "title": "Instruct-nerf2nerf: Editing 3d scenes with instructions", "year": "2023" }, { "authors": "Qiuyuan Huang; Jae Sung Park; Abhinav Gupta; Paul Bennett; Ran Gong; Subhojit Som; Baolin Peng; Owais Khan Mohammed; Chris Pal; Yejin Choi", "journal": "", "ref_id": "b12", "title": "Ark: Augmented reality with knowledge interactive emergent ability", "year": "2023" }, { "authors": "Ajay Jain; Ben Mildenhall; Jonathan T Barron; Pieter Abbeel; Ben Poole", "journal": "", "ref_id": "b13", "title": "Zero-shot text-guided object generation with dream fields", "year": "2022" }, { "authors": "Yuming Jiang; Ziqi Huang; Xingang Pan; Chen Change Loy; Ziwei Liu", "journal": "", "ref_id": "b14", "title": "Talk-to-edit: Finegrained facial editing via dialog", "year": "2021" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b15", "title": "A style-based 
generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "Gwanghyun Kim; Se Young; Chun ", "journal": "", "ref_id": "b16", "title": "Datid-3d: Diversity-preserved domain adaptation using text-to-image diffusion for 3d generative model", "year": "2022" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b17", "title": "Segment anything", "year": "2023" }, { "authors": "Yuheng Li; Haotian Liu; Qingyang Wu; Fangzhou Mu; Jianwei Yang; Jianfeng Gao; Chunyuan Li; Yong Jae Lee", "journal": "", "ref_id": "b18", "title": "Gligen: Open-set grounded text-to-image generation", "year": "2023" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b19", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2022" }, { "authors": "Yiqi Lin; Haotian Bai; Sijia Li; Haonan Lu; Xiaodong Lin; Hui Xiong; Lin Wang", "journal": "", "ref_id": "b20", "title": "Componerf: Text-guided multi-object compositional nerf with editable 3d scene layout", "year": "2023" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b21", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Gal Metzer; Elad Richardson; Or Patashnik; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b22", "title": "Latent-nerf for shape-guided generation of 3d shapes and textures", "year": "2022" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b23", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": " Openai", "journal": "", "ref_id": "b24", "title": "", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b25", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Ryan Po; Gordon Wetzstein", "journal": "", "ref_id": "b26", "title": "Compositional 3d scene generation using locally conditioned diffusion", "year": "2023" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b27", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b28", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b29", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Photorealistic text-to-image diffusion models with deep language understanding", 
"year": "2022" }, { "authors": "Wei Sun; Tianfu Wu", "journal": "", "ref_id": "b31", "title": "Image synthesis from reconfigurable layout and style", "year": "2019" }, { "authors": "Romal Thoppilan; Daniel De Freitas; Jamie Hall; Noam Shazeer; Apoorv Kulshreshtha; Heng-Tze; Alicia Cheng; Taylor Jin; Leslie Bos; Yu Baker; Du", "journal": "", "ref_id": "b32", "title": "Lamda: Language models for dialog applications", "year": "2022" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b33", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Haochen Wang; Xiaodan Du; Jiahao Li; Raymond A Yeh; Greg Shakhnarovich", "journal": "", "ref_id": "b34", "title": "Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation", "year": "2022" }, { "authors": "Chenfei Wu; Shengming Yin; Weizhen Qi; Xiaodong Wang; Zecheng Tang; Nan Duan", "journal": "", "ref_id": "b35", "title": "Visual chatgpt: Talking, drawing and editing with visual foundation models", "year": "2023" }, { "authors": "Weihao Xia; Yujiu Yang; Jing-Hao Xue; Baoyuan Wu", "journal": "", "ref_id": "b36", "title": "Tedigan: Text-guided diverse face image generation and manipulation", "year": "2021" }, { "authors": "Jiale Xu; Xintao Wang; Weihao Cheng; Yan-Pei Cao; Ying Shan; Xiaohu Qie; Shenghua Gao", "journal": "", "ref_id": "b37", "title": "Dream3d: Zero-shot text-to-3d synthesis using 3d shape prior and text-to-image diffusion models", "year": "2022" }, { "authors": "Zhengyuan Yang; Jianfeng Wang; Zhe Gan; Linjie Li; Kevin Lin; Chenfei Wu; Nan Duan; Zicheng Liu; Ce Liu; Michael Zeng", "journal": "", "ref_id": "b38", "title": "Reco: Region-controlled text-to-image generation", "year": "2022" }, { "authors": "Bo Zhao; Lili Meng; Weidong Yin; Leonid Sigal", "journal": "", "ref_id": "b39", "title": "Image generation from layout", "year": "2019" }, { "authors": "Yufan Zhou; Ruiyi Zhang; Jiuxiang Gu; Chris Tensmeyer; Tong Yu; Changyou Chen; Jinhui Xu; Tong Sun", "journal": "", "ref_id": "b40", "title": "Tigan: Text-based interactive image generation and manipulation", "year": "2022" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b41", "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 283.39, 567.57, 221.27, 9.9 ], "formula_id": "formula_0", "formula_text": "l = I(x 0 ; p)(1)" }, { "formula_coordinates": [ 3, 255.94, 646.46, 248.73, 9.73 ], "formula_id": "formula_1", "formula_text": "r t = G(l t-1 , I(x t ; p; l t-1 ))(2)" }, { "formula_coordinates": [ 4, 258.77, 271.03, 245.9, 13.03 ], "formula_id": "formula_2", "formula_text": "r * t = G(l t-1 , I(x t ; p; b t ))(3)" }, { "formula_coordinates": [ 13, 371.3, 240.89, 87.94, 9.88 ], "formula_id": "formula_3", "formula_text": "[C l , σ ] = θ l (x l , y l , z l )." } ]
Towards Language-guided Interactive 3D Generation: LLMs as Layout Interpreter with Generative Feedback
Generating and editing a 3D scene guided by natural language poses a challenge, primarily due to the complexity of specifying positional relations and volumetric changes within 3D space. Recent advancements in Large Language Models (LLMs) have demonstrated impressive reasoning, conversational, and zero-shot generation abilities across various domains. Surprisingly, these models also show great potential in realizing and interpreting 3D space. In light of this, we propose a novel language-guided interactive 3D generation system, dubbed LI3D, that integrates LLMs as a 3D layout interpreter into off-the-shelf layout-to-3D generative models, allowing users to flexibly and interactively generate visual content. Specifically, we design a versatile layout structure based on bounding boxes and semantics to prompt the LLMs to perform spatial generation and reasoning from language. Our system also incorporates LLaVA, a large language and vision assistant, to provide generative feedback from the visual aspect, improving the visual quality of the generated content. We validate the effectiveness of LI3D primarily on 3D generation and editing through multi-round interactions, and show that it can be flexibly extended to 2D generation and editing. Various experiments demonstrate the potential benefits of incorporating LLMs in generative AI for applications such as the metaverse. Moreover, we benchmark the layout reasoning performance of LLMs on neural visual artist tasks, revealing their emergent ability in the spatial layout domain.
Yiqi Lin; Hao Wu; Ruichen Wang; Haonan Lu; Xiaodong Lin; Hui Xiong; Lin Wang
[ { "figure_caption": "Figure 1 :1Figure 1: The overall system of LI3D. (a) LI3D utilize the LLM to interpret language input into the 3D layout for layout-to-3D generative model (CompoNeRF [21]) as conditional input (Sec. 2.1), which can also be extended to the image domain by several adaptions (Sec. 3.4). (b)The LLaVA[22] can be integrated into LI3D to predict generation quality and provide detailed description feedback for LLM to update the layout that fails to generate satisfactory content (Sec. 2.2).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "1 . 5 (15For the feedback generation, we randomly sample a rendered Examples of 3D Scene and Single Object Layout Generated by GPT-3.Scene) Context Prompt: ... MUST use the following format: object_description: ['description of single object', 'description of single object'] object_center_point: [[0, 0, 0], [0, 0, 0]] object_box_scales: [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]] description: 'description of the whole scene' ... (Details are provided in Appendix) (Scene) User Prompt: Generate a scene \" a castle on a mountain\" (Scene) Generated Response: object_description: ['a mountain', 'a castle'] object_center_point: [[0, -0.5, 0], [0, 0, 0]] object_box_scales: [[0.9, 0.5, 0.9], [0.4, 0.4, 0.4]] description: 'A castle on a mountain.' (Object) Context Prompt: ... MUST use the following format: part_description: ['description of an object part', 'description of an object part'] part_center_point: [[0, 0, 0], [0, 0, 0]] part_box_scales: [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]] description: 'description of the whole object' ... (Details are provided in Appendix) (Object) User Prompt: Generate an Off-road vehicle (Object) Generated Response: part_description: ['vehicle body', 'front left wheel', 'front right wheel', 'rear left wheel', 'rear right wheel'] part_center_point: [[0, 0, 0], [-0.4, -0.25, 0.4], [0.4, -0.25, 0.4], [-0.4, -0.25, -0.4], [0.4, -0.25, -0.4]] part_box_scales: [[0.6, 0.3, 1], [0.25, 0.25, 0.25], [0.25, 0.25, 0.25], [0.25, 0.25, 0.25], [0.25, 0.25, 0.25]] description: 'an off-road vehicle with a main body and four large wheels'", "figure_data": "", "figure_id": "fig_1", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Multiple rounds of interaction for 3D scene generation between users and LI3D.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Multiple rounds of interaction for 3D single object generation between users and LI3D.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Ablation study of generative feedback module.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "add computer (a) Imbalance size of sofa (b) Object floating on the air", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Failure cases of LI3D.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "7) and the more rendering views (See Figure.8) of the same case across multiple rounds of interaction.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "3, -0.2, 0.6], [0.7, 0.4, -0.2], [-0.7, 0.3, 0.4], [-0.5, 0.2, -0.6]] object_box_scales: [[0.5, 
0.5, 0.5], [0.5, 0.5, 0.5], [0.2, 1.5, 0.2], [0.2, 1.5, 0.2], [0.2, 1.5, 0.2]] description: 'A courtyard with two wells in the center and several trees' \"'", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "A courtyard with two wells in the center and several fir trees' \"' In this updated scene, all three trees have been replaced with fir trees. The position and size of each object remains the same as in the previous scene description, and the 'object_description' and 'description' fields have been updated to reflect the change in the scene.", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :Figure 8 :78Figure 7: The visualization of the multiple rounds of generated layouts. (a), (b) and (c) represent the layouts generated for the first, second, and third user prompts.", "figure_data": "", "figure_id": "fig_10", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "part_center_point: [[0, -0.5, 0], [0, 0.5, 0], [0, 1.5, 0], [0, 1.5, 0]] part_box_scales: [[1, 0.5, 1], [0.2, 1, 0.2], [2, 0.1, 0.5], [0.5, 0.5, 0.5]]", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9 :Figure 10 :910Figure 9: The visualization of the multiple rounds of a single object from compositional parts guided by the prompt. (a), (b), and (c) represent the compositional parts of the object generated for the first, second, and third user prompts.", "figure_data": "", "figure_id": "fig_12", "figure_label": "910", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: The visualization of the 3D scene generation of generative feedback module. (a) showcases the initial generated scene. (b) and (c) depict the objects present in the initial scene. (d) displays the scene editing performed through feedback. (e) and (f) demonstrate the objects in the updated scene.", "figure_data": "", "figure_id": "fig_14", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "4 )4User Prompt:(3-Round) Add a cyan cube at the center Add a red cylinder behind it Add a cyan sphere in front of it on the left and in front of the cyan cube on the left Generated Response: \"' object_description: ['Cyan cube', 'Red cylinder', 'Cyan sphere'] object_center_point: [[0, 0, 0], [0, 0, -0.8], [-0.6, 0, 0.4]] object_box_scales: [[0.5, 0.5, 0.5], [0.3, 0.3, 0.6], [0.3, 0.3, 0.3]] description: 'A cyan cube at the center, a red cylinder behind it, and a cyan sphere in front of it on the left and in front of the cyan cube on the left.' \"' Let me know if you have any further modifications or adjustments. 
Context Prompt:...(Please refer to Table.", "figure_data": "", "figure_id": "fig_15", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "4 )4User Prompt:(4-Round) Add a cyan cube at the center Add a red cylinder behind it Add a cyan sphere in front of it on the left and in front of the cyan cube on the left Add a purple cylinder behind it on the left and in front of the cyan cube on the left Generated Response: \"' object_description: ['Cyan cube', 'Red cylinder', 'Cyan sphere', 'Purple cylinder'] object_center_point: [[0, 0, 0], [0, 0, -0.8], [-0.6, 0, 0.4], [-0.6, 0, -0.4]] object_box_scales: [[0.5, 0.5, 0.5], [0.3, 0.3, 0.6], [0.3, 0.3, 0.3], [0.3, 0.3, 0.6]] description: 'A cyan cube at the center, a red cylinder behind it, a cyan sphere in front of it on the left and in front of the cyan cube on the left, and a purple cylinder behind it on the left and in front of the cyan cube on the left.' \"' Let me know if you have any further modifications or adjustments. Context Prompt:...(Please refer to Table.", "figure_data": "", "figure_id": "fig_16", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "4 )Figure 12 :412Figure 12: The visualization of the ground truth image of the multiple rounds instruction of i-CLEVR.", "figure_data": "", "figure_id": "fig_17", "figure_label": "412", "figure_type": "figure" }, { "figure_caption": "Quantitative results on the i-CLEVR datasets.", "figure_data": "Recall98.098.098.097.997.9rsim(E G gt ,E G gen )97.984.178.271.066.0", "figure_id": "tab_0", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Therefore, a local NeRF outputs four pseudo-color channels C and a volume density σ when casting a ray through the local frame. The predicted pseudo-color is then fed into the decoder of Stable Diffusion models to obtain the final rendering result. During training, CompoNeRF uses Score Distillation Sampling (SDS) proposed in DreamFusion[28] to propagate the text-to-image generative prior from the diffusion model to the NeRF parameters. In the original CompoNeRF, a global MLP also is proposed to enhance the global consistency across all the local NeRFs. In our LI3D , to improve the training efficiency and inheritance of the local NeRF in the generated content, we remove the global MLP for fast convergence. Therefore, replacing certain objects or parts in LI3D cost less computation.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The Context Prompt of 3D Scene Generation", "figure_data": "Context Prompt:3DGPT is designed to be able to assist with the 3d modeling tasks, 3DGPT is able togenerate a 3d scene based on the input it receives. When 3DGPT generates a response for aSCENE, MUST use the following format:\"'object_description: ['description of single object', 'description of single object']object_center_point: [[0, 0, 0], [0, 0, 0]]object_box_scales: [[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]description: 'description of the whole scene'\"'", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "A Case Dialog of Multi-Round Interactive 3D Scene Generation", "figure_data": "... 
(Context Prompt)User Prompt:Generate a scene \" a courtyard with a well in the center and several trees\"Generated Response:Sure, here's a scene description for a courtyard with a well in the center and several trees:\"'object_description: ['a well', 'a tree', 'a tree', 'a tree']object_center_point: [[0, 0, 0], [0.7, 0.4, -0.2], [-0.7, 0.3, 0.4], [-0.5, 0.2, -0.6]]object_box_scales: [[0.5, 0.5, 0.5], [0.2, 1.5, 0.2], [0.2, 1.5, 0.2], [0.2, 1.5, 0.2]]description: 'A courtyard with a well in the center and several trees'\"'", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The Context Prompt of 3D Single Object Generation", "figure_data": "Context Prompt:", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "A Case Dialog of Multi-Round Interactive 3D Single Object Generation", "figure_data": "... (Context Prompt)User Prompt:Generate a windmillGenerated Response:Sure, I can generate a windmill for you using 3DObjGPT. Here is an example response:\"'part_description: ['base', 'tower', 'blades', 'spinner']", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "The Context Prompt of 2D Image Generation", "figure_data": "Context Prompt:2DGPT is designed to be able to assist with the 2d modeling tasks, 2DGPTis able to generate a 2d object layout using bounding box based on the input it receives, pleaseuse the following format:\"'object_description: ['a object description', 'a object description']object_center_point: [[0.1, 0.2], [0.3, 0.4]]object_scale: [[0.9, 0.9], [0.7, 0.8]]", "figure_id": "tab_6", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "A Case Dialog of Multi-Round Interactive 2D Image Generation", "figure_data": "... (Context Prompt)User Prompt:Generate a scene \"a dog is sitting on the sofa\"Generated Response:Based on the input you provided, I can generate a 2D object layout using the followingformat:\"'object_description: ['dog', 'sofa']object_center_point: [[0.5, 0.5], [0.5, 0.75]]object_scale: [[0.6, 0.6], [0.8, 0.2]]description: 'a dog is sitting on the sofa'\"'", "figure_id": "tab_7", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "2, to ensure that the object is clearly visible in the scene. Let me know if you would like me to modify the location, size, or description of any object in the scene. In the above layout, the center point of the dog object has been modified to [x=0.7, y=0.5], which represents its new position to the right of its original position. The rest of the layout remains the same.", "figure_data": "User Prompt:Move the dog to the rightGenerated Response:Sure, I can modify the 2D object layout as per your request. Here's the updated objectlayout with the dog moved to the right:\"'object_description: ['dog', 'sofa']object_center_point: [[0.7, 0.5], [0.5, 0.75]]object_scale: [[0.6, 0.6], [0.8, 0.2]]description: 'a dog is sitting on the sofa'\"'User Prompt:Remove the dogGenerated Response:Sure, I can modify the 2D object layout as per your request. Here's the updated objectlayout with the dog removed:\"'object_description: ['sofa']object_center_point: [[0.5, 0.75]]object_scale: [[0.8, 0.2]]description: 'there is a sofa'\"'", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The Prompt Design and Cases Dialog of Benchmarking on i-CLEVR Context Prompt:...(Please refer to Table. 4) Generated Response: Understood. How may I assist you with your 3D modeling tasks? 
Let me know if you have any further modifications or adjustments. Context Prompt:...(Please refer to Table. 4) User Prompt:(2-Round) Add a cyan cube at the center Add a red cylinder behind it Cyan cube', 'Red cylinder'] object_center_point: [[0, 0, 0], [0, 0, -0.8]] object_box_scales: [[0.5, 0.5, 0.5], [0.3, 0.3, 0.6]] description: 'A cyan cube at the center and a red cylinder behind it.' \"' Let me know if you have any further modifications or adjustments. Context Prompt:...(Please refer to Table.", "figure_data": "User Prompt:(1-Round)Add a cyan cube at the centerGenerated Response:\"'object_description: ['Cyan cube']object_center_point: [[0, 0, 0]]object_box_scales: [[0.5, 0.5, 0.5]]description: 'A cyan cube at the center.'\"'Generated Response:\"'object_description: ['", "figure_id": "tab_9", "figure_label": "11", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work provides a pre-trained text-to-image model that the citing paper adopts to generate high-quality 3D scenes based on text inputs."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work is another pre-trained text-to-image model that the citing paper utilizes in generating 3D scenes based on text inputs."}, {"Category": "Extension or Continuation", "Citation": "[41]", "Explanation": "The cited work has attempted to use pre-trained cross-modal models to generate images through continuous linguistic instructions, which the citing paper extends to generate 3D scenes through open-vocabulary natural language input."}, {"Category": "Extension or Continuation", "Citation": "[37]", "Explanation": "The cited work has also used pre-trained cross-modal models to generate images through continuous linguistic instructions, which the citing paper extends to generate 3D scenes through open-vocabulary natural language input."}, {"Category": "Extension or Continuation", "Citation": "[8]", "Explanation": "The cited work has utilized pre-trained cross-modal models to generate images through continuous linguistic instructions, which the citing paper extends to generate 3D scenes through open-vocabulary natural language input."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work, CompoNeRF, is used as a generative model in the citing paper to perform layout-to-3D generation using the 3D layout as a conditional input."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work, LLaVA, is used as a visual instruction tuning method in the citing paper to improve the stability of generated 3D scenes and address the instability issue in previous works."}, {"Category": "Supporting Evidence", "Citation": "[10]", "Explanation": "The cited work is mentioned as a potential method to be integrated into the system for language-guided interactive 3D generation, providing a possible solution to the problem."}, {"Category": "Supporting Evidence", "Citation": "[42]", "Explanation": "The cited work is mentioned as a potential method to be integrated into the system for language-guided interactive 3D generation, providing a possible solution to the problem."}, {"Category": "Extension or Continuation", "Citation": "[23]", "Explanation": "The cited work is mentioned as a reference to the instability of generating complex 3D scenes when relying solely on general descriptions, indicating a need for further research in this area."}, {"Category": "Extension or Continuation", "Citation": "[21]", "Explanation": "The cited work is mentioned as a reference to the instability of generating complex 3D scenes when relying solely on general descriptions, indicating a need for further research in this area."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The cited work is mentioned as a reference to the instability of generating complex 3D scenes when relying solely on general descriptions, indicating a need for further research in this area."}, {"Category": "Supporting Evidence", "Citation": "[9]", "Explanation": "The cited work, i-CLEVR benchmark, is used to assess the robustness of the LI3D in generating layouts based on sequential language instructions, providing a basis for the claims made in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work, LLaVA, is 
incorporated into LI3D to provide feedback and generate more detailed descriptions for LLMs, which is a methodological basis for improving the visual quality of generated content in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work, Stable Diffusion, is used as a pre-built module in the generative module G of the citing paper, providing a method for generating visual content based on a layout input."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work, CompoNeRF, is also used as a pre-built module in the generative module G, providing another method for generating visual content based on a layout input."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work, Stable Diffusion, is used as a method for interactive image generation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work, Segment Anything, is also used as a method for interactive image generation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[23; 21]", "Explanation": "The cited works have shown that generating complex 3D scenes can be unstable when using less informative descriptions, which motivates the citing paper to adopt a generative feedback module to address this issue."}, {"Category": "Extension or Continuation", "Citation": "LLaVA [22]", "Explanation": "The cited work, LLaVA, is utilized in the citing paper to assess the suitability of the generated 3D scene and ensure proper 3D placement, extending the research on layout generation in 3D scene generation."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work showcases the use of LLMs in generating information from different domains or modalities, which serves as a methodological basis for the proposed LI3D model in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work, ChatGPT, serves as the generative feedback module used in the implementation of the LLMs in the citing paper, providing a methodological basis for the research."}, {"Category": "Data Source", "Citation": "[34]", "Explanation": "The cited work, LLaVA-13B, is a data source used in the implementation of the LLMs in the citing paper, as a generative feedback module for the research."}, {"Category": "Extension or Continuation", "Citation": "[21]", "Explanation": "The cited work, CompoNeRF, is extended in the citing paper to perform generative rendering on a single RTX3090, expanding the research in a new direction."}, {"Category": "Data Source", "Citation": "[36]", "Explanation": "The cited work, VisualChatGPT, is the source of the context template used in the citing paper for prompting layout generation in two scenarios."}, {"Category": "Supporting Evidence", "Citation": "[9]", "Explanation": "The cited work, i-CLEVR dataset, serves as a benchmark for evaluating layout generation following language input in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[21]", "Explanation": "The cited work, CompoNeRF, is mentioned as a foundational model for generating 3D content, and the citing paper notes that it is still unstable in constructing certain objects and has limited performance in real-time interaction due to its high computation cost."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work, Stable Diffusion, is used as a method for layout 
conditional generation and in-painting in the design pipeline of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work, Segment Anything (SAM), is used for layout element extraction in the design pipeline of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[9; 8; 37; 13]", "Explanation": "The cited works provide methods for generating and editing results in multi-round interactions, which the citing paper builds upon to facilitate the interaction process."}, {"Category": "Extension or Continuation", "Citation": "[41]", "Explanation": "The cited work TiGAN utilizes pre-trained cross-modal large models to support successive editing in generative models, which the citing paper extends to handle non-predefined scenes in complex multi-object 3D scenes."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work Talk-to-Edit introduces a rule-based system response module to generate responses for multi-round interactions, which the citing paper builds upon to address the challenge of handling non-predefined scenes in complex multi-object 3D scenes."}, {"Category": "Methodological Basis", "Citation": "[31; 14]", "Explanation": "The cited works focus on visual generation tasks related to LLMs, which the citing paper leverages to develop approaches for handling non-predefined scenes in complex multi-object 3D scenes."}, {"Category": "Methodological Basis", "Citation": "[1; 2; 12]", "Explanation": "The cited works on visual understanding tasks with LLMs provide a basis for the citing paper to utilize robust capacity in understanding and reasoning common knowledge in handling non-predefined scenes in complex multi-object 3D scenes."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, VISPROG, is used as a reference for the design of the task-specific intermediate representation in the citing paper, which is intended to improve the cooperation between LLMs and generative models in solving 3D interactive generation problems."}, {"Category": "Data Source", "Citation": "[36]", "Explanation": "The cited work, Visual ChatGPT, is mentioned as a system that integrates Visual Foundation Models and enables users to interact with ChatGPT using images as input and receive images as responses. The citing paper may have used this work as a data source for the generation of images in its research or analysis."}, {"Category": "Extension or Continuation", "Citation": "[17; 28; 20; 35; 38]", "Explanation": "The cited works are mentioned in the context of text-to-3D synthesis, which the citing paper is also interested in exploring. The cited works may have provided insights or methods for the text-to-3D synthesis research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[30; 31]", "Explanation": "The cited works on text-to-image models are mentioned in the context of score distillation sampling for generating 3D geometries and novel views. 
The citing paper may have used these works as a methodological basis for the generation of 3D geometries and novel views in its research or analysis."}, {"Category": "Supporting Evidence", "Citation": "[28]", "Explanation": "The cited work demonstrates the effectiveness of using a pre-trained text-to-image diffusion model as a prior in text-to-3D synthesis, which the citing paper builds upon in their research on improving text-to-3D generation."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work presents a two-stage optimization framework for text-to-3D synthesis of NeRF that the citing paper adopts in their research to achieve higher resolution 3D mesh models."}, {"Category": "Data Source", "Citation": "[23]", "Explanation": "The cited work utilizes score distillation sampling in the latent space to produce 3D results with less computation cost, which the citing paper references in their study on improving text-to-3D generation."}, {"Category": "Supporting Evidence", "Citation": "[4]", "Explanation": "The cited work provides a specific method of using labeled bounding boxes as conditioned input for layout control in image generation, which the citing paper adopts in their research."}, {"Category": "Extension or Continuation", "Citation": "[19]", "Explanation": "The cited work is mentioned as a previous study in the image generation field that uses semantic maps as a way to control layout in image generation. The citing paper extends this work by exploring the use of language to describe the layout control in image generation."}, {"Category": "Supporting Evidence", "Citation": "[30]", "Explanation": "The cited work, stable diffusion, is a large text-to-image model that has made it possible to extend the layout-to-image generation task in the image domain. The citing paper leverages this progress in the image generation field to build upon the research in this area."}, {"Category": "Supporting Evidence", "Citation": "[21]", "Explanation": "The cited work is mentioned as a method that employs joint conditioning on text and 3D layout generation to address the issue of limited spatial fidelity in text-to-image generation. The citing paper builds upon this approach to offer a solution to the problem of precise layout information in text and image generation."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work, CompoNeRF, serves as the basis for the 3D generative model adopted in the citing paper. The model performs multi-object text-to-3D generation from a configurable layout input, which the citing paper leverages to generate complex 3D layouts from user language input."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work, Stable Diffusion, is used as a foundational model for the image generation process in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work, Segment Anything, is used in the layout-to-image step to control the generation process in Stable Diffusion."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work is used to implement backward guidance in the image editing steps to control the generation process in Stable Diffusion."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The cited work, i-CLEVR, serves as a benchmark for evaluating layout generation following language input in the citing paper. 
The citing paper extends the use of i-CLEVR by sampling sequential instructions and ground-truth arrangements to test the robustness of language models in following instructions."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b2", "b1", "b9", "b7", "b11", "b11", "b12", "b12" ], "table_ref": [], "text": "In this work, we focus on building a language identifier for the 22 languages listed in the Indian constitution. With increasing digitization, there is a push to make NLP technologies like translation, ASR, conversational technologies, etc. (Bose, 2022) available as a public good at population scale (Chandorkar, 2022). A good language identifier is required to help build corpora in low-resource languages. For such languages, language identification is far from a solved problem due to noisy web crawls, small existing datasets, and similarity to high-resource languages (Caswell et al., 2020).\nExisting publicly available LID tools like CLD3 1 , LangID 2 (Lui and Baldwin, 2011), Fast-Text 3 (Joulin et al., 2016) and NLLB 4 (NLLB Team et al., 2022) have some shortcomings with respect to Indian languages. They do not cover all the above-mentioned 22 languages. In social media and chats, it is also common to use the roman script for most Indian languages leading to substantial user-generated content in roman script. However, none of the LIDs have any support for the detection of romanized Indian language text (except cld3 support for Latin Hindi). The widespread use of romanization implies that accurate romanized Language Identification models are a critical component in the NLP stack for Indian languages, given that this affects over 735 million internet users (KPMG and Google, 2017). Therefore, our work on developing accurate and effective romanized Language Identification models has the potential to make a significant impact in the NLP space for Indian languages, particularly in the social media and chat application domains. Hence, we undertake the task of creating a LID for these 22 Indian languages. The main contributions of our work are as follows:\n• We create Bhasha-Abhijnaanam 5 , a language identification test set for native-script as well as romanized text which spans 22 Indic languages. Previous benchmarks for native script do not cover all these languages (NLLB Team et al., 2022;Roark et al., 2020). The Dakshina test set for romanized text covers only 11 languages and there are ambiguous instances in the test set like named entities that cannot be assigned to a particular language (Roark et al., 2020).\n• We also train, IndicLID, an LID for all the above-mentioned languages in both native and romanized script. For native-script training data, we sample sentences from diverse sources and oversample low-resource languages. IndicLID native-script model has better language coverage than existing LIDs and is competitive or better than other LIDs with 98% accuracy and at least 6 times better throughput.\n• To the best of our knowledge, ours is one of the first large-scale efforts for romanized LID in any language, a task that has not received much attention. A major challenge for romanized text LID is the lack of romanized training data. We show that synthetic romanized training data created via transliteration can help train a reasonably good LID for romanized text. A simple linear classifier does not perform well for romanized text. Hence, we combine a simple but fast text classifier with a slower but more accurate classifier based on a pretrained language model to achieve a good trade-off between accuracy and speed.\nOur findings are relevant to other languages that need LID for romanized text. 
We require native script data and a transliteration model to create the synthetic romanized data for the target language. This romanized data serves as training data for the romanized LID." }, { "figure_ref": [], "heading": "Bhasha-Abhijnaanam benchmark", "publication_ref": [], "table_ref": [], "text": "We describe the creation of the Bhasha-Abhijnaanam LID benchmark for 22 Indian languages in native and roman script. Table 1 describes the statistics of the Bhasha-Abhijnaanam benchmark. We build upon existing benchmarks to fill in the coverage and quality gaps and cost-efficiently cover all languages." }, { "figure_ref": [], "heading": "Native script test set.", "publication_ref": [ "b11", "b12" ], "table_ref": [], "text": "We compile a native script test set comprising 19 Indian languages and 11 scripts from the FLORES-200 devtest (NLLB Team et al., 2022) and Dakshina sentence test set (Roark et al., 2020). We create native text test sets for the remaining three languages (Bodo, Konkani, Dogri) and one script (Manipuri in Meetei Mayek script) not covered in these datasets. For these new languages we first sample the English sentences from Wikipedia and ask in-house, professional translators to translate the sentences to respective languages. This method ensured the quality and accuracy of our test samples, as well as minimizing ) and asked annotators to write the same in roman script. We did not specify any transliteration guidelines and annotators were free to transliterate in the most natural way they deemed fit. We additionally asked annotators to skip the sentence if they find it invalid (wrong language, offensive, truncated, etc.)." }, { "figure_ref": [], "heading": "Romanized Dakshina testset filtering", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "The Dakshina romanized sentence test set includes short sentences which are just named entities and English loan words which are not useful for romanized text LID evaluation. To address this issue, we manually validate the Dakshina test sets for the languages we are interested in. We first identified potentially problematic sentences from the romanized Dakshina test set by applying two constraints: (i) sentences shorter than 5 words, and (ii) native LID model is less confident about the native language sentence (prediction score less than 0.8). These sentences were then validated by native language annotators. The annotators were asked to read the roman sentences and determine whether they were named entities or sentences where they could not determine the language. Such entries were filtered out. About 7% of the sentences were filtered. Table 2 describes the filtering statistics." }, { "figure_ref": [], "heading": "IndicLID Model", "publication_ref": [], "table_ref": [], "text": "IndicLID is a classifier specifically for Indic languages that can predict 47 classes (24 native-script classes and 21 roman-script classes plus English and Others). We create three classifier variants: a fast linear classifier, a slower classifier finetuned from a pre-trained LM, and an ensemble of the two models which trades off speed v/s accuracy." }, { "figure_ref": [], "heading": "Training dataset creation", "publication_ref": [ "b11", "b10" ], "table_ref": [], "text": "Native-script training data. We compiled the training data sentences from various sources viz. In- Team et al., 2022), Wikipedia, Vikaspedia6 and internal sources. 
To ensure a diverse and representative training dataset, we sampled 100k sentences per language-script combination in a balanced way across all these sources. We used oversampling for languages with less than 100k sentences. We tokenized and normalized the sentences using Indic-NLP library7 (Kunchukuttan, 2020) with default settings.\nRomanized training data. There is hardly any romanized corpora for Indian languages in the public domain 8 . Hence, we explored the use of transliteration for creating synthetic romanized data. We create romanized training data by transliterating the native script training data into roman script using the multilingual IndicXlit9 transliteration model (Indic-to-En version) (Madhani et al., 2022), The authors have provided results on the transliteration quality of the IndicXlit model. We rely on this analysis to ensure the quality of generated training data." }, { "figure_ref": [], "heading": "Linear classifier", "publication_ref": [ "b5", "b7" ], "table_ref": [], "text": "Linear classifiers using character n-gram features are widely used for LIDs (Jauhiainen et al., 2021). We use FastText (Joulin et al., 2016) to train our fast, linear classifier. It is a lightweight and efficient linear classifier that is well-suited for handling large-scale text data. It utilizes character n-gram features which enables it to utilize subword information. This makes it particularly useful for dealing with rare words and allows it to discriminate between similar languages having sim-ilar spellings. We trained separate classifiers for native script (IndicLID-FTN) and roman script (IndicLID-FTR). We chose 8-dimension wordvector models after experimentation as they maintain small model sizes without losing model accuracy (refer Appendix A for results)." }, { "figure_ref": [], "heading": "Pretrained LM-based classifier", "publication_ref": [ "b3", "b4", "b8" ], "table_ref": [], "text": "For romanized text, we observed that linear classifiers do not perform very well. Hence, we also experimented with models having larger capacity. Particularly, we finetuned a pretrained LM on the romanized training dataset. We evaluated the following LMs: XLM-R (Conneau et al., 2020), IndicBERT-v2 (Doddapaneni et al., 2022) and MuRIL (Khanuja et al., 2021). The last two LMs are specifically trained for Indian languages and MuRIL also incorporates synthetic romanized data in pre-training. Hyperparameters for finetuning are described in Appendix B. We used IndicBERTbased classifier as the LM-based classifier (henceforth referred to as IndicLID-BERT) since it was amongst the best-performing romanized text classifiers and had maximum language coverage." }, { "figure_ref": [], "heading": "Final Ensemble classifier", "publication_ref": [], "table_ref": [], "text": "Our final IndicLID classifier is an pipeline of multiple classifiers. Figure 1 shows the overall workflow of the IndicLID classifier. The pipeline works as described here: (1) Depending on the amount of roman script in the input text, we invoke either the native-text or romanized linear classifier. IndicLID-FTR is invoked for text containing >50% roman characters.\n(2) For roman text, if IndicLID-FTR is not confident about its prediction, we redirect the request to the IndicLID-BERT. We resort to this two-stage approach for romanized input to achieve a good trade-off between classifier accuracy and inference speed. 
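The two-stage routing just described is simple enough to sketch in code. The snippet below is an illustrative sketch only, not the released IndicLID implementation: `native_ft`, `roman_ft` and `roman_bert` are stand-ins for the three classifiers and are assumed to expose a simple `predict(text) -> (label, prob)` interface, while the 50% roman-character and 0.6 confidence thresholds are the values reported in the paper.

```python
import re

_ROMAN = re.compile(r"[A-Za-z]")

def identify_language(text, native_ft, roman_ft, roman_bert, conf_threshold=0.6):
    """Two-stage IndicLID-style routing: fast linear classifiers first,
    LM-based classifier only for low-confidence romanized inputs."""
    chars = [c for c in text if not c.isspace()]
    roman_ratio = sum(bool(_ROMAN.match(c)) for c in chars) / max(len(chars), 1)

    if roman_ratio <= 0.5:
        # Mostly native script: the fast native-script linear model suffices.
        return native_ft.predict(text)

    # Romanized input: try the fast linear classifier first.
    label, prob = roman_ft.predict(text)
    if prob > conf_threshold:
        return label, prob
    # Low confidence: fall back to the slower, more accurate LM-based classifier.
    return roman_bert.predict(text)
```

The point of the confidence gate is that most inputs never reach the transformer, which is where the accuracy/throughput trade-off analyzed in Appendix C comes from.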
The fast IndicLID-FTR's prediction is used if the model is confident about its prediction (probability of predicted class > 0.6 ), else the slower but more accurate IndicLID-BERT is invoked. This threshold provides a good trade-off (See Appendix C for more details)." }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [ "b12" ], "table_ref": [], "text": "We discuss the performance of various models on the benchmark and analyze the results. To prevent any overlap between the test/valid and train sets, we excluded the Flores-200 test set (NLLB while sampling native train samples from various sources. Additionally, we removed the training samples from the benchmark samples when collecting sentences for the benchmark test set. We also made sure that there was no overlap between the test and valid sets. To create the romanized training set, we simply transliterated the native training set. As the Dakshina test set (Roark et al., 2020) provided parallel sentences for the native and roman test sets, there was no overlap between the roman train and test sets." }, { "figure_ref": [], "heading": "Native script LID", "publication_ref": [ "b11", "b7" ], "table_ref": [ "tab_3" ], "text": "We compare IndicLID-FTN with the NLLB model (NLLB Team et al., 2022) and the CLD3 model.\nAs we can see in Table 3, the LID performance of IndicLID-FTN is comparable or better than other models. Our model is 10 times faster and 4 times smaller than the NLLB model. The model's footprint can be further reduced by model quantization (Joulin et al., 2016) which we leave for future work." }, { "figure_ref": [], "heading": "Roman script LID", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Table 4 presents the results of different model variants on the romanized test set (see Appendix D for language-wise results). IndicLID-BERT is significantly better than IndicLID-FTR, but the throughput decreases significantly. The ensemble model (IndicLID) maintains the same LID performance as IndicLID-BERT with a 3x increase in the throughput over IndicLID-BERT. Further speedups in the model throughput can be achieved by creating distilled versions, which we leave for future work." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "LID confusion analysis", "publication_ref": [], "table_ref": [], "text": "The confusion matrix for IndicLID is shown in Figure 2. We see that major confusions are between similar languages. Some The confusion matrix gives further insights into the impact of synthetic training data. Hindi is confused with languages like Nepali, Sanskrit, Marathi and Konkani using the same native script as Hindi (Devanagari). Since a multilingual transliteration model with significant Hindi data was used to create the synthetic romanized training data, it may result in the synthetic romanized forms of these languages being more similar to Hindi than would be the case with original romanized data. Impact of input length Figure 3 plots the LID accuracy for various input length buckets. The LID is most confused for short inputs (<10 words) after which the performance is relatively stable." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b6" ], "table_ref": [], "text": "We introduce an LID benchmark and models for native-script and romanized text in 22 Indian languages. These tools will serve as a basis for building NLP resources for Indian languages, particularly extremely low-resource ones that are \"leftbehind\" in the NLP world today (Joshi et al., 2020). 
Our work takes first steps towards LID of romanized text, and our analysis reveals directions for future work." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The benchmark for language identification for the most part contains clean sentences (grammatically correct, single script, etc.). Data from the real world might be noisy (ungrammatical, mixed scripts, code-mixed, invalid characters, etc.). A better representative benchmark might be useful for such use cases. However, the use cases captured by this benchmark should suffice for the collection of clean monolingual corpora. This also represents a first step for many languages where no LID benchmark exists.\nThe use of synthetic training data seems to create a gap in performance due to divergence in train/test data distributions. Acquisition of original native romanized text and methods to generate better romanized text are needed.\nNote that the romanized LID model does not support Dogri since the IndicXlit transliteration model does not support Dogri. However, since Dogri is written in the Devanagari script using the transliterator for Hindi which uses the same script might be a good approximation to generate synthetic training data. We will explore this in the future.\nThis work is limited to the 22 languages listed in the 8 th schedule of the Indian constitution. Further work is needed to extend the benchmark to many more widely used languages in India (which has about 30 languages with more than a million speakers)." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "For the human annotations on the dataset, the language experts are native speakers of the languages and from the Indian subcontinent. They were paid a competitive monthly salary to help with the task. The salary was determined based on the skill set and experience of the expert and adhered to the norms of the government of our country. The dataset has no harmful content. The annotators were made aware of the fact that the annotations would be released publicly and the annotations contain no private information. The proposed benchmark builds upon existing datasets. These datasets and related works have been cited.\nThe annotations are collected on a publicly available dataset and will be released publicly for future use. The IndicCorp dataset which we annotated has already been checked for offensive content.\nAll the datasets created as part of this work will be released under a CC-0 license 10 and all the code and models will be released under an MIT license. 11 " }, { "figure_ref": [], "heading": "B Model selection for Roman script LM-based classifier", "publication_ref": [ "b4", "b3", "b8" ], "table_ref": [ "tab_9" ], "text": "We experimented with three different pre-trained language models: IndicBERT (Doddapaneni et al., 2022), XLM-R (Conneau et al., 2020), and MuRIL (Khanuja et al., 2021). In the initial experiment, we froze all the layers except for the last softmax layer and finetuned the model with our training data. To fine-tune the language model, we added one softmax layer to the end of the model and used our roman script training data to finetune the model. The results for these experiments are shown in Table 7. We found that IndicBERT and MuRIL performed similarly among these three models for our roman LID task. MuRIL leverages the advantage of roman text training data, while IndicBERT was trained on the only native script but performed similarly. 
However, IndicBERT supports 24 Indian languages, while MuRIL only supports 17 Indian languages. Therefore, we selected IndicBERT due to its superior coverage and performance.\nWe then further experimented with IndicBERT by unfreezing 1, 2, 4, 6, 8, and 11 layers. The results and comparison of all the experiments are described in Table 8. We found that unfreezing 1 layer was enough for our task and that unfreezing more layers did not provide any additional benefit." }, { "figure_ref": [], "heading": "C Analysis of speed/accuracy tradeoff", "publication_ref": [], "table_ref": [], "text": "We experimented IndicLID with different thresholds. If the probability score is below a certain threshold we invoke a more powerful model IndicLID-BERT, otherwise, we go with IndicLID-FTR model prediction. IndicLID-FTR model is quite fast as compared to IndicLID-BERT model. We can see a good trade-off between throughput and accuracy in table 9 as we increase the threshold.\nAs the threshold increases, the input is more likely to go towards the IndicLID-BERT model, as we are making the model less reliant on the IndicLID-FTR model." }, { "figure_ref": [], "heading": "D Language-wise analysis for Roman script classifiers", "publication_ref": [], "table_ref": [], "text": "Table 10 illustrates the language-specific performance of IndicLID-FTR, IndicLID-BERT and In-dicLID models in detail. As we can see IndicLID-BERT has better representation than IndicLID-FTR for almost all the languages which leads better F1 score for IndicLID. However, for the languages of Sanskrit and Manipuri, the IndicLID-FTR model has a better representation than the IndicLID-BERT model, which is an interesting finding that warrants further investigation in future studies." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank the Ministry of Electronics and Information Technology of the Government of India for their generous grant through the Digital India Bhashini project. We also thank the Centre for Development of Advanced Computing for providing compute time on the Param Siddhi Supercomputer. We also thank Nilekani Philanthropies for their generous grant towards building datasets, models, tools and resources for Indic languages. We also thank Microsoft for their grant to support research on Indic languages. We would like to thank Jay Gala and Ishvinder Sethi for their help in coordinating the annotation work. Most importantly we would like to thank all the annotators who helped create the Bhasha-Abhijnaanam benchmark." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "://ai4bharat.iitm.ac." } ]
2023-10-26
10.18653/v1/2020.coling-main.579
[ { "authors": "Arghanshu Bose", "journal": "The Times of India", "ref_id": "b0", "title": "Explained: What is Bhashini and how it can bridge the gap between Indian languages", "year": "2022-09-02" }, { "authors": "Isaac Caswell; Theresa Breiner; Daan Van Esch; Ankur Bapna", "journal": "International Committee on Computational Linguistics", "ref_id": "b1", "title": "Language ID in the wild: Unexpected challenges on the path to a thousand-language web text corpus", "year": "2020" }, { "authors": "Aashish Chandorkar", "journal": "News", "ref_id": "b2", "title": "UPI, CoWIN, ONDC: Public Digital Infrastructure Has Put India on the Fast Lane of Tech-led Growth", "year": "2022-05-28" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Sumanth Doddapaneni; Rahul Aralikatte; Gowtham Ramesh; Shreya Goyal; M Mitesh; Anoop Khapra; Pratyush Kunchukuttan; Kumar", "journal": "", "ref_id": "b4", "title": "Towards Leaving No Indic Language Behind: Building Monolingual Corpora, Benchmark and Models for Indic Languages", "year": "2022" }, { "authors": "Tommi Jauhiainen; Tharindu Ranasinghe; Marcos Zampieri", "journal": "", "ref_id": "b5", "title": "Comparing approaches to dravidian language identification", "year": "2021" }, { "authors": "Pratik Joshi; Sebastin Santy; Amar Budhiraja; Kalika Bali; Monojit Choudhury", "journal": "", "ref_id": "b6", "title": "The state and fate of linguistic diversity and inclusion in the NLP world", "year": "2020" }, { "authors": "Armand Joulin; Edouard Grave; Piotr Bojanowski; Matthijs Douze; Hérve Jégou; Tomas Mikolov", "journal": "", "ref_id": "b7", "title": "FastText.zip: Compressing text classification models", "year": "2016" }, { "authors": "Simran Khanuja; Diksha Bansal; Sarvesh Mehtani; Savya Khosla; Atreyee Dey; Balaji Gopalan; Dilip Kumar Margam; Pooja Aggarwal; Rajiv Teja Nagipogu; Shachi Dave", "journal": "KPMG and Google", "ref_id": "b8", "title": "Muril: Multilingual representations for indian languages", "year": "2017" }, { "authors": "Marco Lui; Timothy Baldwin", "journal": "Asian Federation of Natural Language Processing", "ref_id": "b9", "title": "Cross-domain feature selection for language identification", "year": "2011" }, { "authors": "Yash Madhani; Sushane Parthan; Priyanka Bedekar; Ruchi Khapra; Vivek Seshadri; Anoop Kunchukuttan; Pratyush Kumar; Mitesh M Khapra", "journal": "", "ref_id": "b10", "title": "Aksharantar: Towards building open transliteration tools for the next billion users", "year": "2022" }, { "authors": "Marta R Nllb Team; James Costa-Jussà; Onur Cross; Maha Çelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Anna Maillard; Skyler Sun; Guillaume Wang; Al Wenzek; Bapi Youngblood; Loic Akula; Gabriel Mejia Barrault; Prangthip Gonzalez; John Hansanti; Semarley Hoffman; Jarrett; Ram Kaushik; Dirk Sadagopan; Shannon Rowe; Chau Spruit; Pierre Tran; Necip Andrews; Shruti Fazil Ayan; Sergey Bhosale; Angela Edunov; Cynthia Fan; Vedanuj Gao; Francisco Goswami; Philipp Guzmán; Alexandre Koehn; Christophe Mourachko; Safiyyah Ropers; Holger Saleem; Jeff Schwenk; Wang", "journal": "", "ref_id": "b11", "title": "No Language Left Behind: Scaling Human-Centered Machine Translation", "year": "2022" }, { 
"authors": "Brian Roark; Lawrence Wolf-Sonkin; Christo Kirov; Sabrina J Mielke; Cibu Johny; Isin Demirsahin; Keith B Hall", "journal": "European Language Resources Association", "ref_id": "b12", "title": "Processing South Asian Languages Written in the Latin Script: the Dakshina Dataset", "year": "1372" } ]
[]
Bhasha-Abhijnaanam: Native-script and Romanized Language Identification for 22 Indic Languages
We create publicly available language identification (LID) datasets and models in all 22 Indian languages listed in the Indian constitution in both native-script and romanized text. First, we create Bhasha-Abhijnaanam, a language identification test set for native-script as well as romanized text which spans all 22 Indic languages. We also train IndicLID, a language identifier for all the above-mentioned languages in both native and romanized script. For native-script text, it has better language coverage than existing LIDs and is competitive or better than other LIDs. IndicLID is the first LID for romanized text in Indian languages. Two major challenges for romanized text LID are the lack of training data and low LID performance when languages are similar. We provide simple and effective solutions to these problems. In general, there has been limited work on romanized text in any language, and our findings are relevant to other languages that need romanized language identification. Our models are publicly available at https://ai4bharat.iitm.
Yash Madhani; Mitesh M Khapra; Anoop Kunchukuttan
[ { "figure_caption": "Figure 2 :2Figure 2: Confusion matrix (IndicLID, roman testset)", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Effect of input length on romanized testset", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Statistics of Dakshina roman filtered test set 2022", "figure_data": "Language ScriptNative RomanAssameseBengali1012512BanglaBengali56064595BodoDevanagari1500433DogriDevanagari1498512GujaratiGujarati57974785HindiDevanagari56174606KannadaKannada58594848KashmiriPerso-Arabic Devanagari2511 1012450KonkaniDevanagari1500444MaithiliDevanagari2512439Malayalam Malayalam56284617ManipuriBengali Meetei Mayek1012 1500442MarathiDevanagari56114603NepaliDevanagari2512423OriyaOriya1012512PunjabiGurmukhi57764765SanskritDevanagari2510448SantaliOl Chiki25120SindhiPerso-Arabic58934881TamilTamil57794767TeluguTelugu57514741UrduPerso-Arabic68834371Table 1: Summary of the Bhasha-Abhijnaanam bench-mark. Number of romanized and native-script sentencesare reported. The cells in bold indicate the datasetsnewly contributed by this work. Romanized Santali test-set has not been created since Santhali annotators wecontacted did not use roman script and spoke Bengali asa second language. NLLB Team et al. (2022) also cite asimilar experience.any potential noise in the data.2.2 Roman script test set.We propose a new benchmark test set to evaluateroman-script language identification for 21 Indianlanguages. Out of these, 11 languages are repre-sented in the Dakshina romanized sentence testset (Roark et al., 2020), which comprises nativescript sentences from Wikipedia along with theirromanization. However, this test set includes shortsentences which are just named entities and Englishloan words which are not useful for romanized textLID evaluation. To address this issue, we manuallyvalidate the Dakshina test sets for the languageswe are interested in and filter out about 7% of the", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Benchmarking on the Bhasha-Abhijnaanamnative-script testset. For fair comparison with NLLBand CLD3, we restrict the comparison to languages thatare common with IndicLID-FTN (count of common lan-guages is indicated in brackets). Throughput is numberof sentence/second.", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "examples of such language clusters that can beobserved are (1) Hindi and very close languageslike Maithili, Urdu and Punjabi, (2) Konkani andMarathi, (3) Sindi and Kashmiri. Improving roman-ized LID between very similar languages is thus animportant direction of improvement.Impact of synthetic training data To understandthe impact of synthetic training data, we generatea machine-transliterated version of the romanizedtest set using IndicXlit. We compare the LID ac-curacy on the original and synthetically generatedtest sets. Table 5 shows that the results on thesynthetic test set are significantly better than theoriginal test set (approaching accuracy levels in the90s). The data characteristics of the synthetic testset are much closer to the training data than theoriginal test set. 
Closing the training-test distribu-TestsetPRF1AccOriginal72.74 84.50 74.72 80.40Synthetic 90.79 97.24 93.43 95.96", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison of results on Synthetic vs. original Romanized test sets for IndicLID model", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "IndicLID-FTR performance on Bhasha-Abhijnaanam roman script test set. IndicLID-FTR are hyper-tuned by fixing different dimensions. Throughput is number of sentence/second.", "figure_data": "ModelPrecision Recall F1-Score AccuracyXLMR (Conneau et al., 2020)63.1970.9259.4965.15MuRIL (Khanuja et al., 2021)66.7079.0867.7773.70IndicBERT (Doddapaneni et al., 2022)68.0780.5268.9175.81Table 7: Bhasha-Abhijnaanam roman script test set re-sults on roman script Language models finetuned byfreezing all the layersA Hyperparameter tuning for Romanscript linear classifierWe train the IndicLID-FTR model using 100k sam-ples. While deciding the configuration IndicLID-FTR model, we experimented with fixing the di-mension of IndicLID-FTR model and tuning on therest of the hyperparameters. As we can see fromtable 6 model size increases with the increase ofIndicLID-FTR dimension. However, beyond 8 di-mensions, there is not much improvement observed.Therefore, we chose the model with 8 dimensions,taking into account the model size.", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Bhasha-Abhijnaanam roman script test set results on IndicLID-BERT finetuned with unfreezing different numbers of layers", "figure_data": "ThresholdsPRF1Acc Throughputthreshold 0.1 63.13 78.02 63.29 71.4950000threshold 0.2 63.43 78.18 63.63 71.77379threshold 0.3 65.50 79.64 66.15 73.8454threshold 0.4 68.39 81.84 69.77 76.8422threshold 0.5 70.99 83.60 72.87 79.1514threshold 0.6 72.74 84.51 74.7280.410threshold 0.7 73.60 84.80 75.54 80.939threshold 0.8 73.88 84.81 75.77 80.968threshold 0.9 73.51 84.50 75.35 80.626", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Trade-off between inference time and accuracy with different thresholds. Throughput is number of sentence/second.", "figure_data": "", "figure_id": "tab_10", "figure_label": "9", "figure_type": "table" } ]
[{"Category": "Data Source", "Citation": "(Caswell et al., 2020)", "Explanation": "The cited work by Caswell et al. provides a reference to the challenges in language identification for low-resource languages, which the citing paper uses to highlight the need for a good language identifier in the Indian context."}, {"Category": "Methodological Basis", "Citation": "(NLLB Team et al., 2022)", "Explanation": "The cited work by NLLB Team et al. provides the NLLB tool for language identification, which the citing paper uses as a reference to build a language identifier for Indian languages."}, {"Category": "Data Source", "Citation": "(KPMG and Google, 2017)", "Explanation": "The cited work provides data on the number of internet users in India that use romanized text, which is used in the citing paper to highlight the need for accurate romanized Language Identification models in the NLP space for Indian languages."}, {"Category": "Extension or Continuation", "Citation": "(NLLB Team et al., 2022)", "Explanation": "The cited work is a previous benchmark for native script that does not cover all the languages in the study, and the citing paper extends this work by creating a new language identification test set for both native script and romanized text in 22 Indic languages."}, {"Category": "Extension or Continuation", "Citation": "(Roark et al., 2020)", "Explanation": "The cited work is another previous benchmark for native script that does not cover all the languages in the study, and the citing paper extends this work by creating a new language identification test set for both native script and romanized text in 22 Indic languages."}, {"Category": "Methodological Basis", "Citation": "(Roark et al., 2020)", "Explanation": "The cited work by Roark et al. (2020) provides a test set for romanized text that the citing paper uses to assess the performance of their language identification system."}, {"Category": "Extension or Continuation", "Citation": "IndicLID", "Explanation": "The cited work on IndicLID is an extension of the research on language identification systems, as the authors train a new system for all the languages mentioned in the text."}, {"Category": "Data Source", "Citation": "Low-resource languages", "Explanation": "The cited work on low-resource languages is a data source for the training of the IndicLID native-script model, as the authors sample sentences from diverse sources and oversample low-resource languages to improve the model performance."}, {"Category": "Extension or Continuation", "Citation": "Romanized text LID", "Explanation": "The cited work on romanized text LID is an extension of the research on language identification systems, as the authors show that synthetic romanized training data can help train a reasonably good LID for romanized text."}, {"Category": "Methodological Basis", "Citation": "Simple linear classifier", "Explanation": "The cited work on simple linear classifiers is a methodological basis for the research on language identification systems, as the authors show that a simple linear classifier does not perform well for romanized text."}, {"Category": "Data Source", "Citation": "(NLLB Team et al., 2022)", "Explanation": "The cited work provides the FLORES-200 devtest dataset, which the citing paper uses to compile a native script test set for 19 Indian languages and 11 scripts."}, {"Category": "Data Source", "Citation": "(Roark et al., 2020)", "Explanation": "The cited work provides the Dakshina sentence test set, which the citing paper uses to compile a 
native script test set for the same languages and scripts as the FLORES-200 devtest dataset."}, {"Category": "Extension or Continuation", "Citation": "New languages and script", "Explanation": "The citing paper extends the test sets by including new languages and scripts that were not covered in the FLORES-200 devtest and Dakshina sentence test set datasets."}, {"Category": "Methodological Basis", "Citation": "Professional translators", "Explanation": "The citing paper uses professional translators to ensure the quality and accuracy of the test samples, which serves as a methodological basis for the creation of the new test sets."}, {"Category": "Data Source", "Citation": "(In-Team et al., 2022)", "Explanation": "The cited work provides the training data sentences for the citing paper, which the authors use to compile the training data for their research."}, {"Category": "Data Source", "Citation": "Wikipedia", "Explanation": "The cited work is a source of training data sentences for the citing paper, which the authors use to compile the training data for their research."}, {"Category": "Data Source", "Citation": "Vikaspedia6", "Explanation": "The cited work is a source of training data sentences for the citing paper, which the authors use to compile the training data for their research."}, {"Category": "Data Source", "Citation": "internal sources", "Explanation": "The cited work is a source of training data sentences for the citing paper, which the authors use to compile the training data for their research."}, {"Category": "Data Source", "Citation": "Indic-NLP library7 (Kunchukuttan, 2020)", "Explanation": "The cited work is a tool used by the citing paper to tokenize and normalize the training data sentences."}, {"Category": "Data Source", "Citation": "Madhani et al., 2022", "Explanation": "The cited work is the source of the IndicXlit model used for creating romanized training data in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Jauhiainen et al., 2021)", "Explanation": "The cited work by Jauhiainen et al. provides the basis for the use of linear classifiers with character n-gram features in the citing paper for the task of language identification."}, {"Category": "Data Source", "Citation": "(Joulin et al., 2016)", "Explanation": "The cited work by Joulin et al. 
serves as the data source for the use of FastText in the citing paper, which is a pre-existing model for training a lightweight and efficient linear classifier."}, {"Category": "Methodological Basis", "Citation": "(Conneau et al., 2020)", "Explanation": "The cited work, XLM-R, is a pre-trained language model that the citing paper uses to finetune a model for romanized text classification."}, {"Category": "Methodological Basis", "Citation": "(Doddapaneni et al., 2022)", "Explanation": "The cited work, IndicBERT-v2, is a pre-trained language model that the citing paper uses to finetune a model for romanized text classification."}, {"Category": "Methodological Basis", "Citation": "(Khanuja et al., 2021)", "Explanation": "The cited work, MuRIL, is a pre-trained language model that the citing paper uses to finetune a model for romanized text classification and incorporates synthetic romanized data in pre-training."}, {"Category": "Data Source", "Citation": "(NLLB while sampling native train samples from various sources)", "Explanation": "The cited work provides the benchmark samples used in the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Roark et al., 2020)", "Explanation": "The cited work is the source of the parallel sentences used in the creation of the romanized training set in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(NLLB Team et al., 2022)", "Explanation": "The cited work, NLLB model, is used as a baseline for comparison in the citing paper, which extends the research by comparing the performance of IndicLID-FTN with the NLLB model."}, {"Category": "Data Source", "Citation": "(Joulin et al., 2016)", "Explanation": "The cited work, model quantization by Joulin et al., is used as a data source in the citing paper to further reduce the model footprint for future work."}, {"Category": "Methodological Basis", "Citation": "(Doddapaneni et al., 2022)", "Explanation": "The cited work provides the IndicBERT model, which the citing paper uses to fine-tune the last softmax layer for the roman LID task."}, {"Category": "Data Source", "Citation": "(Conneau et al., 2020)", "Explanation": "The cited work introduces the XLM-R model, which the citing paper uses as a pre-trained language model in the initial experiment."}, {"Category": "Data Source", "Citation": "(Khanuja et al., 2021)", "Explanation": "The cited work provides the MuRIL model, which the citing paper uses as a pre-trained language model in the initial experiment."}]
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b33", "b11", "b4", "b2", "b0" ], "table_ref": [], "text": "With the development of deep learning, DNNs with high overparameterization have achieved tremendous success in various machine learning scenarios such as CV and NLP 2 . Although the over-parameterized models are prone to overfit the training data [33], they do generalize well most of the time. The mystery of generalization has received increasing attention and has become a hot research topic in deep learning.\nRecent works have revealed that the generalization is closely related to the flatness of minima, i.e., the flatter minima of the loss landscape could achieve lower generalization error [4, 9, 14-16, 19, 23]. Sharpness-Aware Minimization (SAM) [11] is one of the most promising methods for finding flatter minima to improve generalization. It is widely used in various fields, such as CV [5], NLP [3] and bi-level learning [1], and has significantly outperformed the state-of-the-art (SOTA) method in those fields.\nFor the exploration of the flatter minima, SAM defines the sharpness of the loss function 𝐿 at 𝒘 as follows: L(𝒘) := max ∥𝜹 ∥ ≤𝜌" }, { "figure_ref": [], "heading": "𝐿(𝒘 + 𝜹)", "publication_ref": [ "b34", "b24", "b27", "b26", "b21" ], "table_ref": [], "text": "𝐿 𝑆𝐴𝑀 (𝒘 ) -𝐿(𝒘).\n(1)\nZhuang et al. [34] proves that L(𝒘) is an approximation to the dominant eigenvalue of the Hessian at local minima, implying that L(𝒘) is indeed an effective metric of the sharpness. However, L(𝒘) can only be used to find flatter areas but not minima, which could potentially lead to convergence at a point where the loss is still large. Thus, SAM adopts L(𝒘) + 𝐿(𝒘), i.e., 𝐿 𝑆𝐴𝑀 (𝒘), to be the loss function. It can be thought of as a compromise between finding the flatter surface and the smaller minima by giving the same weights to L(𝒘) and 𝐿(𝒘).\nIn this paper, we rethink the construction of 𝐿 𝑆𝐴𝑀 (𝒘) and regard L(𝒘) as a regularization term. We develop a more general and effective algorithm, called WSAM (Weighted Sharpness-Aware Minimization), whose loss function is regularized by a weighted sharpness term 𝛾 1-𝛾 L(𝒘), where the hyperparameter 𝛾 controls the weight of sharpness. In Section 4 we demonstrate how 𝛾 directs the loss trajectory to find either flatter or lower minima. Our contribution can be summarized as follows.\n• We propose WSAM, which regards the sharpness as a regularization and assigns different weights across different tasks. Inspired by Loshchilov and Hutter [24], we put forward a \"weight decouple\" technique to address the regularization in the final updated formula, aiming to reflect only the current step's sharpness. When the base optimizer is not simply SGD [27], such as SgdM [26] and Adam [21], WSAM has significant differences in form compared to SAM. The ablation study shows that this technique can improve performance in most cases. • We establish theoretically sound convergence guarantees in both convex and non-convex stochastic settings, and give a generalization bound by mixing PAC and Bayes-PAC techniques. • We validate WSAM on a wide range of common tasks on public datasets. Experimental results show that WSAM yields better or highly competitive generalization performance versus SAM and its variants." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b34", "b7", "b22", "b20" ], "table_ref": [], "text": "Several SAM variants have been developed to improve either effectiveness or efficiency. 
GSAM [34] minimizes both 𝐿 𝑆𝐴𝑀 (𝒘) and L(𝒘) of Eq. ( 1) simultaneously by employing the gradient projection technique. Compared to SAM, GSAM keeps 𝐿 𝑆𝐴𝑀 (𝒘) unchanged and decreases the surrogate gap, i.e. L(𝒘), by increasing 𝐿(𝒘). In other words, it gives more weights to L(𝒘) than 𝐿(𝒘) implicitly.\nESAM [8] improves the efficiency of SAM without sacrificing accuracy by selectively applying SAM update with stochastic weight perturbation and sharpness-sensitivity data selection. ASAM [22] and Fisher SAM [20] try to improve the geometrical structure of the exploration area of 𝐿 𝑆𝐴𝑀 (𝒘). ASAM introduces adaptive sharpness, which normalizes the radius of the exploration region, i.e., replacing ∥𝜹 ∥ of Eq. ( 1) with ∥𝜹/𝒘 ∥, to avoid the scaledependent problem that SAM can suffer from. Fisher SAM employs another replacement by using √︁ ∥𝜹 𝑇 diag(𝐹 )𝜹 ∥ as an intrinsic metric that can depict the underlying statistical manifold more accurately, where 𝐹 is the empirical Fisher information." }, { "figure_ref": [], "heading": "PRELIMINARY 3.1 Notation", "publication_ref": [], "table_ref": [], "text": "We use lowercase letters to denote scalars, boldface lowercase letters to denote vectors, and uppercase letters to denote matrices. We denote a sequence of vectors by subscripts, that is, 𝒙 1 , . . . , 𝒙 𝑡 where 𝑡 ∈ [𝑇 ] := {1, 2, . . . ,𝑇 }, and entries of each vector by an additional subscript, e.g., 𝑥 𝑡,𝑖 . For any vectors 𝒙, 𝒚 ∈ R 𝑛 , we write 𝒙 𝑇 𝒚 or 𝒙 • 𝒚 for the standard inner product, 𝒙𝒚 for element-wise multiplication, 𝒙/𝒚 for element-wise division, √ 𝒙 for element-wise square root, 𝒙 2 for element-wise square. For the standard Euclidean norm,\n∥𝒙 ∥ = ∥𝒙 ∥ 2 = √︁ ⟨𝒙, 𝒙⟩. We also use ∥𝒙 ∥ ∞ = max 𝑖 |𝑥 (𝑖 ) | to denote ℓ ∞ - norm, where 𝑥 (𝑖 ) is the 𝑖-th element of 𝒙.\nLet S 𝑚 be a training set of size 𝑚, i.e., S 𝑚 = {(𝒙 𝑖 , 𝑦 𝑖 )} 𝑖=1,...,𝑚 , where 𝒙 𝑖 ∈ X ⊆ R 𝑘 is an instance and 𝑦 𝑖 ∈ Y is a label. Denote the hypotheses space H = {ℎ 𝒘 : 𝒘 ∈ R 𝑛 }, where ℎ 𝒘 (•) : X → Y is a hypothesis. Denote the training loss\n𝐿(𝒘) := 1 𝑚 𝑚 ∑︁ 𝑘=1 ℓ (ℎ 𝒘 (𝒙 𝑘 ), 𝑦 𝑘 ),\nwhere ℓ (ℎ 𝒘 (𝒙), 𝑦) (we will often write ℓ (𝒘) for simplicity) is a loss function measuring the performance of the parameter 𝒘 on the example (𝒙, 𝑦). Since it is inefficient to calculate the exact gradient in each optimization iteration when 𝑚 is large, we usually adopt a stochastic gradient with mini-batch, which is\n𝑔(𝒘) = 1 |B| ∑︁ 𝑘 ∈ B ∇ℓ (ℎ 𝒘 (𝒙 𝑘 ), 𝑦 𝑘 ),\nwhere B ⊂ {1, . . . , 𝑚} is the sample set of size |B| ≪ 𝑚. Furthermore, let ℓ 𝑡 (𝒘) be the loss function of the model at 𝑡-step." }, { "figure_ref": [], "heading": "Sharpness-Aware Minimization", "publication_ref": [ "b0", "b27", "b26", "b21", "b11" ], "table_ref": [], "text": "SAM is a min-max optimization problem of solving 𝐿 𝑆𝐴𝑀 (𝒘) defined in Eq. (1). First, SAM approximates the inner maximization problem using a first-order Taylor expansion around 𝒘, i.e.,\n𝜹 * = arg max ∥𝜹 ∥ ≤𝜌 𝐿(𝒘 + 𝜹) ≈ arg max ∥𝜹 ∥ ≤𝜌 𝐿(𝒘) + 𝜹 𝑇 ∇𝐿(𝒘) = 𝜌 ∇𝐿(𝒘) ∥∇𝐿(𝒘)∥ .\nSecond, SAM updates 𝒘 by adopting the approximate gradient of 𝐿 𝑆𝐴𝑀 (𝒘), which is\n∇𝐿 𝑆𝐴𝑀 (𝒘) ≈ ∇𝐿(𝒘 + 𝜹 * ) =∇𝐿(𝒘)| 𝒘+𝜹 * + 𝑑𝜹 * 𝑑𝒘 ∇𝐿(𝒘)| 𝒘+𝜹 * ≈ ∇𝐿(𝒘)| 𝒘+𝜹 * ,\nwhere the second approximation is for accelerating the computation. Other gradient based optimizers (called base optimizer) can be incorporated into a generic framework of SAM, defined in Algorithm 1. By varying 𝒎 𝑡 and 𝐵 𝑡 of Algorithm 1, we can obtain different base optimizers for SAM, such as Sgd [27], SgdM [26] and Adam [21], see Tab. 1. 
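Before moving on to WSAM, one step of this generic SAM scheme with plain SGD as the base optimizer can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code: `loss_grad` stands for the stochastic mini-batch gradient oracle, and `eps` is the small constant guarding the normalization (as in the perturbation step of Algorithm 2 below).

```python
import numpy as np

def sam_sgd_step(w, loss_grad, lr, rho, eps=1e-12):
    """One SAM step with SGD as the base optimizer."""
    g = loss_grad(w)                              # gradient at the current point
    delta = rho * g / (np.linalg.norm(g) + eps)   # first-order solution of the inner maximization
    g_sam = loss_grad(w + delta)                  # gradient re-evaluated at the perturbed point
    return w - lr * g_sam                         # base-optimizer (SGD) update with the SAM gradient
```

Switching the base optimizer to SgdM or Adam only changes how `g_sam` is folded into the moment estimates m_t and B_t of Algorithm 1; the perturb-and-regrad structure stays the same.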
Note that when the base optimizer is SGD, Algorithm 1 rolls back to the original SAM in Foret et al. [11].\nAlgorithm 1 Generic framework of SAM \nI SgdM 𝑡 -1 𝑖=0 𝛾 𝑖 𝒈 𝑡 -𝑖 I Adam 1-𝛽 1 1-𝛽 𝑡 1 𝑡 -1 𝑖=0 𝒈 𝑡 -𝑖 𝛽 𝑖 1 diag( √︂ 1-𝛽 2 1-𝛽 𝑡 2 𝑡 𝑖=1 𝒈 2 𝑖 𝛽 𝑡 -𝑖 2 + 𝜖)" }, { "figure_ref": [ "fig_0" ], "heading": "ALGORITHM 4.1 Details of WSAM", "publication_ref": [ "b24", "b26" ], "table_ref": [], "text": "In this section, we give the formal definition of 𝐿 𝑊 𝑆𝐴𝑀 , which composes of a vanilla loss and a sharpness term. From Eq. (1), we have\n𝐿 𝑊 𝑆𝐴𝑀 (𝒘) := 𝐿(𝒘) + 𝛾 1 -𝛾 L(𝒘) = 1 -2𝛾 1 -𝛾 𝐿(𝒘) + 𝛾 1 -𝛾 𝐿 𝑆𝐴𝑀 (𝒘),(2)\nwhere 𝛾 ∈ [0, 1). When 𝛾 = 0, 𝐿 𝑊 𝑆𝐴𝑀 (𝒘) degenerates to the vanilla loss; when 𝛾 = 1/2, 𝐿 𝑊 𝑆𝐴𝑀 (𝒘) is equivalent to 𝐿 𝑆𝐴𝑀 (𝒘); when 𝛾 > 1/2, 𝐿 𝑊 𝑆𝐴𝑀 (𝒘) gives more weights to the sharpness, and thus prone to find the point which has smaller curvature rather than smaller loss compared with SAM; and vice versa.\nThe generic framework of WSAM, which incorporates various optimizers by choosing different 𝜙 𝑡 and 𝜓 𝑡 , is listed in Algorithm 2. For example, when 𝜙 𝑡 = 𝒈 𝑡 and 𝜓 𝑡 = I, we derive SGD with WSAM, which is listed in Algorithm 3. Here, motivated by Loshchilov and Hutter [24], we adopt a \"weight decouple\" technique, i.e., the sharpness term L(𝒘) is not integrated into the base optimizer to calculate the gradients and update weights, but is calculated independently (the last term in Line 7 of Algorithm 2). In this way the effect of the regularization just reflects the sharpness of the current step without additional information. For comparison, WSAM without \"weight decouple\", dubbed Coupled-WSAM, is listed in Algorithm 4 of Section 6.5. For example, if the base optimizer is SgdM [26], the update regularization term of Coupled-WSAM is the exponential moving averages of the sharpness. As shown in Section 6.5, \"weight decouple\" can improve the performance in most cases.\nFig. 1 depicts the WSAM update with different choices of 𝛾. ∇𝐿 𝑊 𝑆𝐴𝑀 (𝒘) is between in ∇𝐿(𝒘) and ∇𝐿 𝑆𝐴𝑀 (𝒘) when 𝛾 < 1 2 , and gradually drift away ∇𝐿(𝒘) as 𝛾 grows larger.\nAlgorithm 2 Generic framework of WSAM 1: Input: parameters 𝜌, 𝜖 > 0, 𝛾 ∈ [0, 1), 𝒘 1 ∈ R 𝑛 , step size {𝛼 𝑡 } 𝑇 𝑡 =1 , sequence of functions {𝜙 𝑡 ,𝜓 𝑡 } 𝑇 𝑡 =1 2: for 𝑡 = 1 to 𝑇 do 3: g𝑡 = ∇ℓ 𝑡 (𝒘 𝑡 ) 4:\n𝜹 𝑡 = 𝜌 g𝑡 /(∥ g𝑡 ∥ + 𝜖)\n5:\n𝒈 𝑡 = ∇ℓ 𝑡 (𝒘 𝑡 + 𝜹 𝑡 ) 6:\nm𝑡 = 𝜙 𝑡 ( g1 , . . . , g𝑡 ) and B𝑡 = 𝜓 𝑡 ( g1 , . . . , g𝑡 )\n7:\n𝒘 𝑡 +1 = 𝒘 𝑡 -𝛼 𝑡 B-1 𝑡 m𝑡 + 𝛾 1-𝛾 (𝒈 𝑡 -g𝑡 ) 8: end for" }, { "figure_ref": [ "fig_2" ], "heading": "Toy Example", "publication_ref": [ "b20" ], "table_ref": [], "text": "To better illustrate the effect and benefit of 𝛾 in WSAM, we setup a 2D toy example, similar to Kim et al. [20]. As shown in Fig. 2, the loss function contains a sharp minimum on the lower left (valued 0.28 at around (-16.8, 12.8)) and a flat minimum on the upper right (valued 0.36 at around (19.8, 29.9)). 
The loss is defined as\n𝐿(𝒘) = -log 0.7𝑒 -𝐾 1 (𝒘 )/1.8 2 + 0.3𝑒 -𝐾 2 (𝒘 )/1.2 2 , Algorithm 3 SGD with WSAM 1: Input: parameters 𝜌, 𝜖 > 0, 𝛾 ∈ [0, 1), 𝒘 1 ∈ R 𝑛 , step size {𝛼 𝑡 } 𝑇 𝑡 =1 2: for 𝑡 = 1 to 𝑇 do 3: g𝑡 = ∇ℓ 𝑡 (𝒘 𝑡 ) 4:\n𝜹 𝑡 = 𝜌 g𝑡 /(∥ g𝑡 ∥ + 𝜖)" }, { "figure_ref": [], "heading": "5:", "publication_ref": [], "table_ref": [], "text": "𝒈 𝑡 = ∇ℓ 𝑡 (𝒘 𝑡 + 𝜹 𝑡 )\n6: while 𝐾 𝑖 (𝒘) = 𝐾 𝑖 (𝜇, 𝜎) is the KL divergence between the univariate Gaussian model and the two normal distributions, which is\n𝒘 𝑡 +1 = 𝒘 𝑡 -𝛼 𝑡 𝛾 1-𝛾 𝒈 𝑡 + 1-2𝛾 1-𝛾 g𝑡 7: end for ∇L SAM (ω) ∇L(ω) ∇L W SAM (ω) γ < 1 2 ∇L SAM (ω) ∇L(ω) γ > 1 2 ∇L W SAM (ω)\n𝐾 𝑖 (𝜇, 𝜎) = log 𝜎 𝑖 𝜎 + 𝜎 2 + (𝜇 -𝜇 𝑖 ) 2 2𝜎 2 𝑖 -1 2\n, where (𝜇 1 , 𝜎 1 ) = (20, 30) and (𝜇 2 , 𝜎 2 ) = (-20, 10). We use SgdM with momentum 0.9 as the base optimizer, and set 𝜌 = 2 for both SAM and WSAM. Starting from the initial point (-6, 10), the loss function is optimized for 150 steps with a learning rate of 5. SAM converges to the lower but sharper minimum, as well as WSAM with 𝛾 = 0.6. However, a larger 𝛾 = 0.95 leads to the flat minimum, because a stronger sharpness regularization comes to effect." }, { "figure_ref": [], "heading": "THEORETICAL ANALYSIS 5.1 Convergence of WSAM", "publication_ref": [], "table_ref": [], "text": "In this section, we choose SGD as the base optimizer for simplicity, i.e., Algorithm 3, and use 𝜌 𝑡 to replace 𝜌 of Algorithm 3 where 𝜌 𝑡 is non-increasing. We have the following convergence theorems for both convex and non-convex settings. we have the following bound on the regret\n𝛼 𝑡 = 𝛼/ √ 𝑡, 𝜌 𝑡 ≤ 𝜌, ∥𝒈 𝑡 ∥ ∞ ≤ 𝐺 ∞ , ∥ g𝑡 ∥ ∞ ≤ 𝐺 ∞ ∀𝑡 ∈ [𝑇 ]. Suppose ℓ 𝑡 (𝒘) is convex and 𝜂-smooth, i.e., ∥∇ℓ 𝑡 (𝒖) -∇ℓ 𝑡 (𝒗)∥ ≤ 𝜂 ∥𝒖 -𝒗 ∥, for all 𝑡 ∈ [𝑇 ], 𝒘 * is an optimal solution of 𝑇 𝑡 =1 ℓ 𝑡 (𝒘), i.e., 𝒘 * = arg min 𝒘 ∈R 𝑛 𝑇 𝑡 =1 ℓ 𝑡 (𝒘) and there exists the constant 𝐷 ∞ such that max 𝑡 ∈ [𝑇 ] ∥𝒘 𝑡 -𝒘 * ∥ ∞ ≤ 𝐷 ∞ . Then\n𝑇 ∑︁ 𝑡 =1 ℓ 𝑡 (𝒘 𝑡 ) -ℓ 𝑡 (𝒘 * ) ≤ 𝐶 1 √ 𝑇 + 𝐶 2 𝑇 ∑︁ 𝑡 =1 𝜌 𝑡 ,\nwhere 𝐶 1 and 𝐶 2 are defined as follows:\n𝐶 1 = 𝑛𝐷 2 ∞ 2𝛼 + 10𝛾 2 -8𝛾 + 2 (1 -𝛾) 2 𝑛𝐺 2 ∞ , 𝐶 2 = √ 𝑛𝐷 ∞ 𝜂𝛾 1 -𝛾 .\nHere, to ensure the condition max 𝑡 ∈ [𝑇 ] ∥𝒘 𝑡 -𝒘 * ∥ ∞ ≤ 𝐷 ∞ holds, we can make the assumption that the domain W ⊆ R 𝑛 is bounded and project the sequence {𝒘 𝑡 } onto W by setting\n𝒘 𝒕+1 = Π W 𝒘 𝑡 -𝛼 𝑡 𝛾 1-𝛾 𝒈 𝑡 + 1-2𝛾 1-𝛾 g𝑡 . Proof. Let 𝒉 𝑡 = 𝛾 1-𝛾 𝒈 𝑡 + 1-2𝛾 1-𝛾 g𝑡 . Since ∥𝒘 𝑡 +1 -𝒘 * ∥ 2 = 𝒘 𝑡 -𝛼 𝑡 𝛾 1 -𝛾 𝒈 𝑡 + 1 -2𝛾 1 -𝛾 g𝑡 -𝒘 * 2 = ∥𝒘 𝑡 -𝒘 * ∥ 2 -2𝛼 𝑡 𝒘 𝑡 -𝒘 * , g𝑡 -2𝛼 𝑡 𝒘 𝑡 -𝒘 * , 𝛾 1 -𝛾 (𝒈 𝑡 -g𝑡 ) + 𝛼 2 𝑡 ∥𝒉 𝑡 ∥ 2 ,(3)\nthen rearranging Eq. (3), we have\n𝒘 𝑡 -𝒘 * , g𝑡 = 1 2𝛼 𝑡 ∥𝒘 𝑡 -𝒘 * ∥ 2 -∥𝒘 𝑡 +1 -𝒘 * ∥ 2 - 𝛾 1 -𝛾 < 𝒘 𝒕 -𝒘 * , 𝒈 𝑡 -g𝑡 > + 𝛼 𝑡 2 ∥𝒉 𝑡 ∥ 2 ≤ 1 2𝛼 𝑡 ∥𝒘 𝑡 -𝒘 * ∥ 2 -∥𝒘 𝑡 +1 -𝒘 * ∥ 2 + 𝛾 1 -𝛾 ∥𝒘 𝑡 -𝒘 * ∥∥𝒈 𝑡 -g𝑡 ∥ + 𝛼 𝑡 (1 -2𝛾) 2 (1 -𝛾) 2 ∥𝒈 𝑡 ∥ 2 + 𝛾 2 (1 -𝛾) 2 ∥ g𝑡 ∥ 2 ≤ 1 2𝛼 𝑡 ∥𝒘 𝑡 -𝒘 * ∥ 2 -∥𝒘 𝑡 +1 -𝒘 * ∥ 2 + 𝛾 1 -𝛾 √ 𝑛𝐷 ∞ 𝜂𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝛼 𝑡 ,\nwhere the first inequality follows from Cauchy-Schwartz inequality and 𝑎𝑏 ≤ 1 2 (𝑎 2 + 𝑏 2 ), and the second inequality follows from the\n𝜂-smooth of ℓ 𝑡 (𝒘), ∥𝒘 𝑡 -𝒘 * ∥ ∞ ≤ 𝐷 ∞ and ∥𝒈 𝑡 ∥ ∞ ≤ 𝐺 ∞ . 
Thus, the regret 𝑇 ∑︁ 𝑡 =1 ℓ 𝑡 (𝒘 𝑡 ) -ℓ 𝑡 (𝒘 * ) ≤ 𝑇 ∑︁ 𝑡 =1 𝒘 𝑡 -𝒘 * , g𝑡 ≤ 𝑇 ∑︁ 𝑡 =1 1 2𝛼 𝑡 ∥𝒘 𝑡 -𝒘 * ∥ 2 -∥𝒘 𝑡 +1 -𝒘 * ∥ 2 + 𝛾 1 -𝛾 √ 𝑛𝐷 ∞ 𝜂𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝛼 𝑡 ≤ 1 2𝛼 1 ∥𝒘 1 -𝒘 * ∥ 2 + 𝑇 ∑︁ 𝑡 =2 ∥𝒘 𝑡 -𝒘 * ∥ 2 2 1 𝛼 𝑡 - 1 𝛼 𝑡 -1 + 𝑇 ∑︁ 𝑡 =1 √ 𝑛𝐷 ∞ 𝜂𝛾 1 -𝛾 𝜌 𝑡 + 𝑇 ∑︁ 𝑡 =1 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝛼 𝑡 ≤ 𝑛𝐷 2 ∞ 2𝛼 + 𝑛𝐷 2 ∞ 2 𝑇 ∑︁ 𝑡 =2 1 𝛼 𝑡 - 1 𝛼 𝑡 -1 + 𝑇 ∑︁ 𝑡 =1 √ 𝑛𝐷 ∞ 𝜂𝛾 1 -𝛾 𝜌 𝑡 + 𝑇 ∑︁ 𝑡 =1 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝛼 𝑡 ≤ 𝑛𝐷 2 ∞ 2𝛼 √ 𝑇 + 𝑇 ∑︁ 𝑡 =1 √ 𝑛𝐷 ∞ 𝜂𝛾 1 -𝛾 𝜌 𝑡 + 10𝛾 2 -8𝛾 + 2 (1 -𝛾) 2 𝑛𝐺 2 ∞ √ 𝑇 ,\nwhere the first inequality follows from the convexity of ℓ 𝑡 (𝒘) and the last inequality follows from\n𝑇 ∑︁ 𝑡 =1 1 √ 𝑡 = 1 + ∫ 3 2 1 √ 2 𝑑𝑠 + • • • + ∫ 𝑇 𝑇 -1 1 √ 𝑇 𝑑𝑠 < 1 + ∫ 3 2 1 √ 𝑠 -1 𝑑𝑠 + • • • + ∫ 𝑇 𝑇 -1 1 √ 𝑠 -1 𝑑𝑠 = 1 + ∫ 𝑇 2 1 √ 𝑠 -1 𝑑𝑠 = 2 √ 𝑇 -1 -1 < 2 √ 𝑇 .\nThis completes the proof. □ Corollary 5.2. Suppose 𝜌 𝑡 = 𝜌/ √ 𝑡, then we have\n𝑇 ∑︁ 𝑡 =1 ℓ 𝑡 (𝒘 𝑡 ) -ℓ 𝑡 (𝒘 * ) ≤ (𝐶 1 + 2𝜌𝐶 2 ) √ 𝑇 ,\nwhere 𝐶 1 and 𝐶 2 are the same with Theorem 5.1.\nProof. Since 𝜌 𝑡 = 𝜌/ √ 𝑡, we have\n𝑇 ∑︁ 𝑡 =1 𝜌 𝑡 = 𝜌 𝑇 ∑︁ 𝑡 =1 1 √ 𝑡 < 2𝜌 √ 𝑇 .\nThis completes the proof. □ (1) 𝐿 is differential and lower bounded, i.e., 𝐿(𝒘 * ) > -∞ where 𝒘 * is an optimal solution. 𝐿 is also 𝜂-smooth, i.e., ∀𝒖, 𝒗 ∈ R 𝑛 , we have\n𝐿(𝒖) ≤ 𝐿(𝒗) + ⟨∇𝐿(𝒗), 𝒖 -𝒗⟩ + 𝜂 2 ∥𝒖 -𝒗 ∥ 2 .\n(2) At step 𝑡, the algorithm can access the bounded noisy gradients and the true gradient is bounded, i.e.,\n∥𝒈 𝑡 ∥ ∞ ≤ 𝐺 ∞ , ∥ g𝑡 ∥ ∞ ≤ 𝐺 ∞ , ∥∇𝐿(𝒘 𝑡 ) ∥ ∞ ≤ 𝐺 ∞ , ∀𝑡 ∈ [𝑇 ]. (3)\nThe noisy gradient is unbiased and the noise is independent, i.e., g𝑡 = ∇𝐿(𝒘\n𝑡 ) + 𝜻 𝑡 , E[𝜻 𝑡 ] = 0 and 𝜻 𝑖 is independent of 𝜻 𝑗 if 𝑖 ≠ 𝑗. (4) 𝛼 𝑡 = 𝛼/ √ 𝑡, 𝜌 𝑡 ≤ 𝜌, ∀𝑡 ∈ [𝑇 ].\nThen Algorithm 3 yields\nmin 𝑡 ∈ [𝑇 ] E[∥∇𝐿(𝒘 𝑡 )∥ 2 ] ≤ 1 √ 𝑇 -1 𝐶 3 + 𝐶 5 + 𝐶 4 𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 𝜌 𝑡 + 𝐶 5 log𝑇 ,\nwhere 𝐶 3 , 𝐶 4 and 𝐶 5 are defined as follows:\n𝐶 3 = 𝐿(𝒘 1 ) 2𝛼 , 𝐶 4 = √ 𝑛𝐺 ∞ 𝜂𝛾 2𝛼 (1 -𝛾) , 𝐶 5 = 5𝛾 2 -4𝛾 + 1 2(1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂𝛼 .\nProof. By assumption 1, we have\n𝐿(𝒘 𝑡 +1 ) ≤ 𝐿(𝒘 𝑡 ) + ⟨∇𝐿(𝒘 𝑡 ), 𝒘 𝑡 +1 -𝒘 𝑡 ⟩ + 𝜂 2 ∥𝒘 𝑡 +1 -𝒘 𝑡 ∥ 2 =𝐿(𝒘 𝑡 ) -∇𝐿(𝒘 𝑡 ), 𝛼 𝑡 g𝑡 + 𝛾 1 -𝛾 (𝒈 𝑡 -g𝑡 ) + 𝜂 2 ∥𝒘 𝑡 +1 -𝒘 𝑡 ∥ 2 .\n(4) Rearranging Eq. ( 4) and taking expectation both sides, by assumptions 3, we get\n𝛼 𝑡 E[∥∇𝐿(𝒘 𝑡 ) ∥ 2 ] ≤ E[𝐿(𝒘 𝑡 ) -𝐿(𝒘 𝑡 +1 )] -𝛼 𝑡 E ∇𝐿(𝒘 𝑡 ), 𝛾 1 -𝛾 (∇𝐿(𝒘 𝑡 + 𝜹 𝑡 ) -∇𝐿(𝒘 𝑡 )) + 𝜂 2 ∥𝒘 𝑡 +1 -𝒘 𝑡 ∥ 2 ≤E[𝐿(𝒘 𝑡 ) -𝐿(𝒘 𝑡 +1 )] + 𝛼 𝑡 𝛾 1 -𝛾 E[∥∇𝐿(𝒘 𝑡 )∥∥∇𝐿(𝒘 𝑡 + 𝜹 𝑡 ) -∇𝐿(𝒘 𝑡 )∥] + 𝜂 2 𝛼 2 𝑡 𝛾 1 -𝛾 𝒈 𝑡 + 1 -2𝛾 1 -𝛾 g𝑡 2 ≤E[𝐿(𝒘 𝑡 ) -𝐿(𝒘 𝑡 +1 )] + √ 𝑛𝐺 ∞ 𝜂𝛾 1 -𝛾 𝛼 𝑡 𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂𝛼 2 𝑡 .\n(5) Telescoping Eq. ( 5) for 𝑡 = 1 to 𝑇 , we have\n𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 E[∥∇𝐿(𝒘 𝑡 ) ∥ 2 ] ≤ E[𝐿(𝒘 1 ) -𝐿(𝒘 𝑇 +1 )] + √ 𝑛𝐺 ∞ 𝜂𝛾 1 -𝛾 𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂 𝑇 ∑︁ 𝑡 =1 𝛼 2 𝑡 ≤𝐿(𝒘 1 ) + √ 𝑛𝐺 ∞ 𝜂𝛾 1 -𝛾 𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂 𝑇 ∑︁ 𝑡 =1 𝛼 2 𝑡 .(6)\nSince\n𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 = 𝑇 ∑︁ 𝑡 =1 𝛼 √ 𝑡 = 𝛼 ∫ 2 1 1 √ 1 𝑑𝑠 + • • • + ∫ 𝑇 𝑇 -1 1 √ 𝑇 𝑑𝑠 > 𝛼 ∫ 𝑇 1 1 √ 𝑠 𝑑𝑠 = 2𝛼 √ 𝑇 -1 , 𝑇 ∑︁ 𝑡 =1 𝛼 2 𝑡 = 𝑇 ∑︁ 𝑡 =1 𝛼 2 𝑡 = 𝛼 2 1 + ∫ 3 2 1 2 𝑑𝑠 + • • • + ∫ 𝑇 𝑇 -1 1 𝑇 𝑑𝑠 < 𝛼 2 1 + ∫ 𝑇 2 1 𝑠 -1 𝑑𝑠 = 𝛼 2 (log(𝑇 -1) + 1) < 𝛼 2 (log𝑇 + 1) ,(7)\nsubstituting Eq. ( 7) into Eq. ( 6), we have min \n𝑡 ∈ [𝑇 ] E[∥∇𝐿(𝒘 𝑡 )∥ 2 ] ≤ 1 𝑇 𝑡 =1 𝛼 𝑡 𝐿(𝒘 1 ) + √ 𝑛𝐺 ∞ 𝜂𝛾 1 -𝛾 𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂 𝑇 ∑︁ 𝑡 =1 𝛼 2 𝑡 ≤ 1 √ 𝑇 -1 𝐿(𝒘 1 ) 2𝛼 + √ 𝑛𝐺 ∞ 𝜂𝛾 2𝛼 (1 -𝛾) 𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 2(1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂𝛼 (log𝑇 + 1) .(8)\nSubstituting Eq. ( 9) into Eq. ( 8), we finish the proof. □ Corollaries 5.4 implies the convergence (to the stationary point) rate of WSAM is 𝑂 (log𝑇 / √ 𝑇 ) in non-convex settings." 
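The update these convergence results analyze, Algorithm 3 with SGD as the base optimizer, is compact enough to write out. The sketch below is illustrative only; `loss_grad` again denotes the stochastic gradient oracle, and the caller supplies whatever step-size and rho schedules it wants.

```python
import numpy as np

def wsam_sgd_step(w, loss_grad, lr, rho, gamma, eps=1e-12):
    """One step of SGD with WSAM (Algorithm 3). gamma in [0, 1) weights the
    sharpness term: gamma = 0 is vanilla SGD, gamma = 1/2 recovers SAM,
    gamma > 1/2 emphasizes flatness over raw loss."""
    g_clean = loss_grad(w)                                   # gradient at w_t
    delta = rho * g_clean / (np.linalg.norm(g_clean) + eps)  # perturbation delta_t
    g_pert = loss_grad(w + delta)                            # gradient at w_t + delta_t
    # Weight-decoupled update: base SGD direction plus the current step's
    # sharpness correction gamma/(1-gamma) * (g_pert - g_clean).
    direction = g_clean + gamma / (1.0 - gamma) * (g_pert - g_clean)
    return w - lr * direction
```

A quick sanity check on the coefficients: `direction` equals (1-2*gamma)/(1-gamma) times the clean gradient plus gamma/(1-gamma) times the perturbed gradient, so at gamma = 1/2 the clean-gradient term vanishes and the step reduces exactly to the SAM step sketched earlier.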
}, { "figure_ref": [], "heading": "Generalization of WSAM", "publication_ref": [], "table_ref": [], "text": "In this section, we are interested in binary classification problems, i.e., the label set Y = {0, 1}, and focus on the 0-1 loss, i. \n𝐿 D (𝒘) ≤ 𝐿 𝑊 𝑆𝐴𝑀 S (𝒘) + 2|1 -2𝛾 | 1 -𝛾 √︂ 𝐶 1 𝑚 + 𝛾 1 -𝛾 √︂ 𝐶 2 + 𝐶 3 𝑚 -1 .\nwhere 𝐿 𝑊 𝑆𝐴𝑀 S is defined in Eq. ( 2) and 𝐶 1 ∼ 𝐶 3 are defined as follows: \n𝐶 1 = 8𝑑\n+ 𝑛 log 1 + ∥𝒘 ∥ 2 𝜌 2 1 + √︂ log(𝑚) 𝑛 2 1/2\n.\nHence, we have\n𝐿 D (𝒘) = 1 -2𝛾 1 -𝛾 𝐿 D (𝒘) + 𝛾 1 -𝛾 𝐿 D (𝒘) ≤ 1 -2𝛾 1 -𝛾 𝐿 S (𝒘) + 2|1 -2𝛾 | 1 -𝛾 √︂ 8𝑑 log(𝑒𝑚/𝑑) + 2 log(4/𝛿) 𝑚 + 𝛾 1 -𝛾 max ∥𝝐 ∥ ≤𝜌 𝐿 S (𝒘 + 𝝐) + 𝛾 1 -𝛾 1 √ 𝑚 -1 4 log(𝑚/𝛿) + 8 log(6𝑚 + 3𝑛) + 𝑛 log 1 + ∥𝒘 ∥ 2 𝜌 2 1 + √︂ log(𝑚) 𝑛 2 1/2 =𝐿 𝑊 𝑆𝐴𝑀 S (𝒘) + 2|1 -2𝛾 | 1 -𝛾 √︂ 8𝑑 log(𝑒𝑚/𝑑) + 2 log(4/𝛿) 𝑚 + 𝛾 1 -𝛾 1 √ 𝑚 -1 𝑛 log 1 + ∥𝒘 ∥ 2 𝜌 2 1 + √︂ log(𝑚) 𝑛 2 + 4 log(𝑚/𝛿) + 8 log(6𝑚 + 3𝑛) 1/2 .\nThis completes the proof. □\nNote that we assume 𝜌 (𝜌 𝑡 ) decreases to zero for proving the convergence in both convex and non-convex settings. However, the generalization bound would go to infinity if 𝜌 decreases to zero. In practice, we keep 𝜌 be a constant. To prove the convergence when 𝜌 is constant would be an interesting problem for the future work." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct experiments with a wide range of tasks to verify the effectiveness of WSAM." }, { "figure_ref": [], "heading": "Image Classification from Scratch", "publication_ref": [ "b13", "b11", "b6", "b5" ], "table_ref": [], "text": "We first study WSAM's performance for training models from scratch on Cifar10 and Cifar100 datasets. The models we choose include ResNet18 [13] and WideResNet- 28-10 [32]. We train the models on both Cifar10 and Cifar100 with a predefined batch size, 128 for ResNet18 and 256 for WideResNet-28-10. The base optimizer used here is SgdM with momentum 0.9. Following the settings of SAM [11], each vanilla training runs twice as many epochs as a SAM-like training run. We train both models for 400 epochs (200 for SAM-like optimizers), and use cosine scheduler to decay the learning rate. Note that we do not use any advanced data augmentation methods, such as cutout regularization [7] and AutoAugment [6].\nFor both models, we determine the learning rate and weight decay using a joint grid search for vanilla training, and keep them invariant for the next SAM experiments. The search range is {0.05, 0.1} and {1e-4, 5e-4, 1e-3} for learning rate and weight decay, respectively. Since all SAM-like optimizers have a hyperparameter 𝜌 (the neighborhood size), we then search for the best 𝜌 over the SAM optimizer, and use the same value for other SAM-like optimizers. The search range for 𝜌 is {0.01, 0.02, 0.05, 0.1, 0.2, 0.5}. We conduct independent searches over each optimizer individually for optimizer-specific hyperparameters and report the best performance. We use the range recommended in the corresponding article for the search. For GSAM, we search for 𝛼 in {0.01, 0.02, 0.03, 0.1, 0.2, 0.3}. For ESAM, we search for 𝛽 in {0.4, 0.5, 0.6} and 𝛾 in {0.4, 0.5, 0.6}. For WSAM, we search for 𝛾 in {0.5, 0.6, 0.7, 0.8, 0.82, 0.84, 0.86, 0.88, 0.9, 0.92, 0.94, 0.96}. We repeat the experiments five times with different random seeds and report the mean error and the associated standard deviation. We conduct the experiments on a single NVIDIA A100 GPU. Hyperparameters of the optimizers for each model are summarized in Tab. 3.\nTab. 
2 gives the top-1 error for ResNet18, WRN-28-10 trained on Cifar10 and Cifar100 with different optimizers. SAM-like optimizers improve significantly over the vanilla one, and WSAM outperforms the other SAM-like optimizers for both models on Cifar10/100. " }, { "figure_ref": [], "heading": "Extra Training on ImageNet", "publication_ref": [ "b28", "b30" ], "table_ref": [], "text": "We further experiment on the ImageNet dataset [28] using Data-Efficient Image Transformers [30]. We restore a pre-trained DeiTbase checkpoint 3 , and continue training for three epochs. The model is trained using a batch size of 256, SgdM with momentum 0.9 as the base optimizer, a weight decay of 1e-4, and a learning rate of 1e-5. We conduct the experiment on four NVIDIA A100 GPUs. We search for the best 𝜌 for SAM in {0.05, 0.1, 0.5, 1.0, • • • , 6.0}. The best 𝜌 = 5.5 is directly used without further tunning by GSAM and WSAM. After that, we search for the optimal 𝛼 for GSAM in {0.01, 0.02, 0.03, 0.1, 0.2, 0.3} and 𝛾 for WSAM from 0.80 to 0.98 with a step size of 0.02.\nThe initial top-1 error rate of the model is 18.2%, and the error rates after the three extra epochs are shown in Tab. 4. We find no significant differences between the three SAM-like optimizers, while they all outperform the vanilla optimizer, indicating that they find a flatter minimum with better generalization. " }, { "figure_ref": [], "heading": "Robustness to Label Noise", "publication_ref": [ "b11", "b20", "b22", "b1", "b17", "b1" ], "table_ref": [], "text": "As shown in previous works [11,20,22], SAM-like optimizers exhibit good robustness to label noise in the training set, on par with 3 https://github.com/facebookresearch/deit those algorithms specially designed for learning with noisy labels [2,17]. Here, we compare the robustness of WSAM to label noise with SAM, ESAM, and GSAM. We train a ResNet18 for 200 epochs on the Cifar10 dataset and inject symmetric label noise of noise levels 20%, 40%, 60%, and 80% to the training set, as introduced in Arazo et al. [2]. We use SgdM with momentum 0.9 as the base optimizer, batch size 128, learning rate 0.05, weight decay 1e-3, and cosine learning rate scheduling. For each level of label noise, we determine the common 𝜌 value using a grid search over SAM in {0.01, 0.02, 0.05, 0.1, 0.2, 0.5}. Then, we search individually for other optimizer-specific hyperparameters to find the best performance. Hyperparameters to reproduce our results are listed in Tab. 5. We present the results of the robustness test in Tab. 6. WSAM generally achieves better robustness than SAM, ESAM, and GSAM." }, { "figure_ref": [], "heading": "Effect of Geometric Structures of Exploration Region", "publication_ref": [ "b20", "b22", "b20" ], "table_ref": [], "text": "SAM-like optimizers can be combined with techniques like ASAM and Fisher SAM to shape the exploration neighborhood adaptively.\nWe conduct experiments with WRN-28-10 on Cifar10, and compare the performance of SAM and WSAM using the adaptive and Fisher information methods, respectively, to understand how geometric structures of exploration region would affect the performance of SAM-like optimizers.\nFor parameters other than 𝜌 and 𝛾, we reuse the configuration in Sec. 6.1. From previous studies [20,22], 𝜌 is usually larger for ASAM and Fisher SAM. We search for the best 𝜌 in {0.1, 0.5, 1.0, . . . , 6.0} and the best 𝜌 is 5.0 in both scenarios. Afterward, we search for the optimal 𝛾 for WSAM from 0.80 to 0.94 with a step size of 0.02. 
The best 𝛾 is 0.88 for both methods.\nSurprisingly, the vanilla WSAM is found to be superior across the candidates, as seen in Tab. 7. It is also worth noting that, contrary to what is reported in Kim et al. [20], the Fisher method reduces " }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct an ablation study to gain a deeper understanding of the importance of the \"weight decouple\" technique in WSAM. As described in Section 4.1, we compare a variant of WSAM without weight decouple, Coupled-WSAM (outlined in Algorithm 4), to the original method." }, { "figure_ref": [], "heading": "Algorithm 4 Generic framework of Coupled-WSAM", "publication_ref": [], "table_ref": [], "text": "1: Input: parameters 𝜌, 𝜖 > 0, 𝛾 ∈ [0, 1), 𝒘 1 ∈ R 𝑛 , step size {𝛼 𝑡 } 𝑇 𝑡 =1 2: for 𝑡 = 1 to 𝑇 do 3: g𝑡 = ∇ℓ 𝑡 (𝒘 𝑡 ) 4:\n𝜹 𝑡 = 𝜌 g𝑡 /(∥ g𝑡 ∥ + 𝜖)" }, { "figure_ref": [], "heading": "5:", "publication_ref": [], "table_ref": [], "text": "𝒈 𝑡 = ∇ℓ 𝑡 (𝒘 𝑡 + 𝜹 𝑡 )\n6: 𝒘 𝑡 +1 = 𝒘 𝑡 -𝛼 𝑡 𝐵 -1 𝑡 𝒎 𝑡 9: end for\n𝒉 𝑡 = 𝛾 1-𝛾 𝒈 𝑡 + 1-2𝛾\nThe results are reported in Tab. 8. Coupled-WSAM yields better results than SAM in most cases, and WSAM further improves performance in most cases, demonstrating that the \"weight decouple\" technique is both effective and necessary for WSAM. " }, { "figure_ref": [], "heading": "Minima Analysis", "publication_ref": [ "b11", "b18", "b34", "b12" ], "table_ref": [], "text": "Here, we further deepen our understanding of the WSAM optimizer by comparing the differences in the minima found by the WSAM and SAM optimizers. The sharpness at the minima can be described by the dominant eigenvalue of the Hessian matrix. The larger the eigenvalue, the greater the sharpness. This metric is often used in other literature [11,18,34]. We use the Power Iteration algorithm to calculate this maximum eigenvalue, a practical tool seen in Golmant et al. [12]. Tab. 9 shows the differences in the minima found by the SAM and WSAM optimizers. We find that the minima found by the vanilla optimizer have smaller loss but greater sharpness, whereas the minima found by SAM have larger loss but smaller sharpness, thereby improving generalization. Interestingly, the minima found by WSAM not only have much smaller loss than SAM but also have sharpness that is close to SAM. This indicates that WSAM prioritizes ensuring a smaller loss while minimizing sharpness in the process of finding minima. Here, we present this surprising discovery, and further detailed research is left for future investigation. " }, { "figure_ref": [ "fig_8" ], "heading": "Hyperparameter Sensitivity", "publication_ref": [], "table_ref": [], "text": "Compared to SAM, WSAM has an additional hyperparameter 𝛾 that scales the size of the sharpness term. Here we test the sensitivity of WSAM's performance to this hyperparameter. We train ResNet18 and WRN-28-10 models on Cifar10 and Cifar100 with WSAM using a wide range of 𝛾. Results in Fig. 3 show that WSAM is not sensitive to the choice of hyperparameter 𝛾. We also find that the best performance of WSAM occurs almost always in the range between 0.8 and 0.95. " }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we revisit the structure of SAM, introducing a novel optimizer, called WSAM, which treats the sharpness as a regularization term, allowing for different weights for different tasks. 
Additionally, the \"weight decouple\" technique is employed to further enhance the performance. We prove the convergence rate in both convex and non-convex stochastic settings, and derive a generalization bound by combining PAC and Bayes-PAC techniques. Extensive empirical evaluations are performed on several datasets from distinct tasks. The results clearly demonstrate the advantages of WSAM in achieving better generalization." }, { "figure_ref": [], "heading": "ACKNOWLEDGEMENT", "publication_ref": [], "table_ref": [], "text": "We thank Junping Zhao and Shouren Zhao for their support in providing us with GPU resources." } ]
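As a companion to the minima analysis above, which reports the dominant Hessian eigenvalue as a sharpness measure, the following is a minimal PyTorch sketch of power iteration using Hessian-vector products. It only illustrates the idea; the reported eigenvalues were computed with the pytorch-hessian-eigenthings tool cited in the paper, and the loss and parameter handling below are simplifying assumptions.

```python
import torch

def dominant_hessian_eigenvalue(loss, params, iters=100, tol=1e-4):
    """Estimate the largest Hessian eigenvalue of `loss` w.r.t. `params` by
    power iteration on Hessian-vector products (a sketch, not the exact tool
    used in the paper). `loss` must be built with a graph over `params`."""
    params = [p for p in params if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    v_norm = torch.sqrt(sum((x * x).sum() for x in v))
    v = [x / v_norm for x in v]
    eig = 0.0
    for _ in range(iters):
        # Hessian-vector product H v via a second backward pass
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        new_eig = float(sum((h * x).sum() for h, x in zip(hv, v)))  # Rayleigh quotient v^T H v
        hv_norm = torch.sqrt(sum((h * h).sum() for h in hv)) + 1e-12
        v = [h / hv_norm for h in hv]
        if abs(new_eig - eig) <= tol * (abs(eig) + 1e-12):
            break
        eig = new_eig
    return eig
```

Here `loss` would be the training loss of the converged model evaluated once with a differentiable graph, so that repeated Hessian-vector products can be taken.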
2023-06-09
10.1145/3580305.3599501
[ { "authors": "Momin Abbas; Quan Xiao; Lisha Chen; Pin-Yu Chen; Tianyi Chen", "journal": "PMLR", "ref_id": "b0", "title": "Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning", "year": "2022-07" }, { "authors": "Eric Arazo; Diego Ortego; Paul Albert; Noel E O'connor; Kevin Mcguinness", "journal": "PMLR", "ref_id": "b1", "title": "Unsupervised Label Noise Modeling and Loss Correction", "year": "2019-06" }, { "authors": "Dara Bahri; Hossein Mobahi; Yi Tay", "journal": "", "ref_id": "b2", "title": "Sharpness-Aware Minimization Improves Language Model Generalization", "year": "2022-05-22" }, { "authors": "Pratik Chaudhari; Anna Choromanska; Stefano Soatto; Yann Lecun; Carlo Baldassi; Christian Borgs; Jennifer T Chayes; Levent Sagun; Riccardo Zecchina", "journal": "", "ref_id": "b3", "title": "Entropy-SGD: Biasing Gradient Descent Into Wide Valleys", "year": "2017-04-24" }, { "authors": "Xiangning Chen; Cho-Jui Hsieh; Boqing Gong", "journal": "", "ref_id": "b4", "title": "When Vision Transformers Outperform ResNets without Pretraining or Strong Data Augmentations", "year": "2022" }, { "authors": "Ekin Dogus Cubuk; Barret Zoph; Dandelion Mané; Vijay Vasudevan; V Quoc; Le", "journal": "", "ref_id": "b5", "title": "AutoAugment: Learning Augmentation Policies from Data", "year": "2018" }, { "authors": "Terrance Devries; Graham W Taylor", "journal": "", "ref_id": "b6", "title": "Improved Regularization of Convolutional Neural Networks with Cutout", "year": "2017" }, { "authors": "Jiawei Du; Hanshu Yan; Jiashi Feng; Joey Tianyi Zhou; Liangli Zhen; Rick Siow Mong Goh; Y F Vincent; Tan", "journal": "", "ref_id": "b7", "title": "Efficient Sharpness-aware Minimization for Improved Training of Neural Networks", "year": "2022-04-25" }, { "authors": " Openreview", "journal": "", "ref_id": "b8", "title": "", "year": "" }, { "authors": "Karolina Gintare; Daniel M Dziugaite; Roy", "journal": "AUAI Press", "ref_id": "b9", "title": "Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data", "year": "2017-08-11" }, { "authors": "Karolina Gintare; Daniel M Dziugaite; Roy", "journal": "AUAI Press", "ref_id": "b10", "title": "Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data", "year": "2017-08-11" }, { "authors": "Pierre Foret; Ariel Kleiner; Hossein Mobahi; Behnam Neyshabur", "journal": "", "ref_id": "b11", "title": "Sharpness-aware Minimization for Efficiently Improving Generalization", "year": "2021-05-03" }, { "authors": "Noah Golmant; Zhewei Yao; Amir Gholami; Michael Mahoney; Joseph Gonzalez", "journal": "", "ref_id": "b12", "title": "pytorch-hessian-eigenthings: efficient PyTorch Hessian eigendecomposition", "year": "2018" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "IEEE Computer Society", "ref_id": "b13", "title": "Deep Residual Learning for Image Recognition", "year": "2016-06-27" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural Comput", "ref_id": "b14", "title": "Flat Minima", "year": "1997" }, { "authors": "W ; Ronny Huang; Zeyad Emam; Micah Goldblum; Liam Fowl; Justin K Terry; Furong Huang; Tom Goldstein", "journal": "", "ref_id": "b15", "title": "Understanding Generalization Through Visualizations. 
In \"I Can't Believe It's Not Better!", "year": "2020" }, { "authors": "Pavel Izmailov; Dmitrii Podoprikhin; Timur Garipov; P Dmitry; Andrew Vetrov; Wilson Gordon", "journal": "AUAI Press", "ref_id": "b16", "title": "Averaging Weights Leads to Wider Optima and Better Generalization", "year": "2018-06-10" }, { "authors": "Lu Jiang; Di Huang; Mason Liu; Weilong Yang", "journal": "PMLR", "ref_id": "b17", "title": "Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels", "year": "2020-07" }, { "authors": "Jean Kaddour; Linqing Liu; Ricardo Silva; Matt J Kusner", "journal": "", "ref_id": "b18", "title": "When Do Flat Minima Optimizers Work?", "year": "2022" }, { "authors": "Nitish Shirish Keskar; Dheevatsa Mudigere; Jorge Nocedal; Mikhail Smelyanskiy; Ping Tak; Peter Tang", "journal": "", "ref_id": "b19", "title": "On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima", "year": "2017-04-24" }, { "authors": "Minyoung Kim; Da Li; Shell Xu Hu; Timothy M Hospedales", "journal": "PMLR", "ref_id": "b20", "title": "Fisher SAM: Information Geometry and Sharpness Aware Minimisation", "year": "2022-07" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b21", "title": "Adam: A Method for Stochastic Optimization", "year": "2015-05-07" }, { "authors": "Jungmin Kwon; Jeongseop Kim; Hyunseo Park; In Kwon Choi", "journal": "PMLR", "ref_id": "b22", "title": "ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks", "year": "2021-07" }, { "authors": "Hao Li; Zheng Xu; Gavin Taylor; Christoph Studer; Tom Goldstein", "journal": "", "ref_id": "b23", "title": "Visualizing the Loss Landscape of Neural Nets", "year": "2018-12-03" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b24", "title": "Decoupled Weight Decay Regularization", "year": "2019-05-06" }, { "authors": "David A Mcallester", "journal": "ACM", "ref_id": "b25", "title": "PAC-Bayesian Model Averaging", "year": "1999-07-07" }, { "authors": "Boris T Polyak", "journal": "U. S. S. R. Comput. Math. and Math. 
Phys", "ref_id": "b26", "title": "Some methods of speeding up the convergence of iteration methods", "year": "1964" }, { "authors": "Herbert Robbins; Sutton Monro", "journal": "The annals of mathematical statistics", "ref_id": "b27", "title": "A stochastic approximation method", "year": "1951" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev Satheesh; Sean Ma; Zhiheng Huang; Andrej Karpathy; Aditya Khosla; Michael Bernstein; Alexander C Berg; Li Fei-Fei", "journal": "International Journal of Computer Vision (IJCV)", "ref_id": "b28", "title": "ImageNet Large Scale Visual Recognition Challenge", "year": "2015" }, { "authors": "Shai Shalev; -Shwartz ; Shai Ben-David", "journal": "Cambridge University Press", "ref_id": "b29", "title": "Understanding Machine Learning -From Theory to Algorithms", "year": "2014" }, { "authors": "Hugo Touvron; Matthieu Cord; Matthijs Douze; Francisco Massa; Alexandre Sablayrolles; Hervé Jégou", "journal": "PMLR", "ref_id": "b30", "title": "Training data-efficient image transformers & distillation through attention", "year": "2021-07" }, { "authors": "V N Vapnik; A Ya; Chervonenkis", "journal": "Theory of Probability and its Applications", "ref_id": "b31", "title": "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities", "year": "1971" }, { "authors": "Sergey Zagoruyko; Nikos Komodakis", "journal": "BMVA Press", "ref_id": "b32", "title": "Wide Residual Networks", "year": "2016-09-19" }, { "authors": "Chiyuan Zhang; Samy Bengio; Moritz Hardt; Benjamin Recht; Oriol Vinyals", "journal": "", "ref_id": "b33", "title": "Understanding deep learning requires rethinking generalization", "year": "2017-04-24" }, { "authors": "Juntang Zhuang; Boqing Gong; Liangzhe Yuan; Yin Cui; Hartwig Adam; Nicha C Dvornek; Sekhar Tatikonda; James S Duncan; Ting Liu", "journal": "", "ref_id": "b34", "title": "Surrogate Gap Minimization Improves Sharpness-Aware Training", "year": "2022-04-25" }, { "authors": " Openreview", "journal": "", "ref_id": "b35", "title": "", "year": "" } ]
[ { "formula_coordinates": [ 2, 53.8, 561.24, 241.76, 32.28 ], "formula_id": "formula_0", "formula_text": "∥𝒙 ∥ = ∥𝒙 ∥ 2 = √︁ ⟨𝒙, 𝒙⟩. We also use ∥𝒙 ∥ ∞ = max 𝑖 |𝑥 (𝑖 ) | to denote ℓ ∞ - norm, where 𝑥 (𝑖 ) is the 𝑖-th element of 𝒙." }, { "formula_coordinates": [ 2, 120.47, 645.92, 106.68, 25.4 ], "formula_id": "formula_1", "formula_text": "𝐿(𝒘) := 1 𝑚 𝑚 ∑︁ 𝑘=1 ℓ (ℎ 𝒘 (𝒙 𝑘 ), 𝑦 𝑘 )," }, { "formula_coordinates": [ 2, 378.43, 116.45, 118.76, 22.44 ], "formula_id": "formula_2", "formula_text": "𝑔(𝒘) = 1 |B| ∑︁ 𝑘 ∈ B ∇ℓ (ℎ 𝒘 (𝒙 𝑘 ), 𝑦 𝑘 )," }, { "formula_coordinates": [ 2, 345.53, 233.66, 184.61, 43.89 ], "formula_id": "formula_3", "formula_text": "𝜹 * = arg max ∥𝜹 ∥ ≤𝜌 𝐿(𝒘 + 𝜹) ≈ arg max ∥𝜹 ∥ ≤𝜌 𝐿(𝒘) + 𝜹 𝑇 ∇𝐿(𝒘) = 𝜌 ∇𝐿(𝒘) ∥∇𝐿(𝒘)∥ ." }, { "formula_coordinates": [ 2, 349.5, 313.2, 177.07, 35.25 ], "formula_id": "formula_4", "formula_text": "∇𝐿 𝑆𝐴𝑀 (𝒘) ≈ ∇𝐿(𝒘 + 𝜹 * ) =∇𝐿(𝒘)| 𝒘+𝜹 * + 𝑑𝜹 * 𝑑𝒘 ∇𝐿(𝒘)| 𝒘+𝜹 * ≈ ∇𝐿(𝒘)| 𝒘+𝜹 * ," }, { "formula_coordinates": [ 2, 323.39, 647.06, 226.69, 38.61 ], "formula_id": "formula_5", "formula_text": "I SgdM 𝑡 -1 𝑖=0 𝛾 𝑖 𝒈 𝑡 -𝑖 I Adam 1-𝛽 1 1-𝛽 𝑡 1 𝑡 -1 𝑖=0 𝒈 𝑡 -𝑖 𝛽 𝑖 1 diag( √︂ 1-𝛽 2 1-𝛽 𝑡 2 𝑡 𝑖=1 𝒈 2 𝑖 𝛽 𝑡 -𝑖 2 + 𝜖)" }, { "formula_coordinates": [ 3, 93.99, 149.62, 200.6, 41.24 ], "formula_id": "formula_6", "formula_text": "𝐿 𝑊 𝑆𝐴𝑀 (𝒘) := 𝐿(𝒘) + 𝛾 1 -𝛾 L(𝒘) = 1 -2𝛾 1 -𝛾 𝐿(𝒘) + 𝛾 1 -𝛾 𝐿 𝑆𝐴𝑀 (𝒘),(2)" }, { "formula_coordinates": [ 3, 53.47, 471.29, 240.57, 64.39 ], "formula_id": "formula_7", "formula_text": "Algorithm 2 Generic framework of WSAM 1: Input: parameters 𝜌, 𝜖 > 0, 𝛾 ∈ [0, 1), 𝒘 1 ∈ R 𝑛 , step size {𝛼 𝑡 } 𝑇 𝑡 =1 , sequence of functions {𝜙 𝑡 ,𝜓 𝑡 } 𝑇 𝑡 =1 2: for 𝑡 = 1 to 𝑇 do 3: g𝑡 = ∇ℓ 𝑡 (𝒘 𝑡 ) 4:" }, { "formula_coordinates": [ 3, 59.67, 540.29, 82.57, 17.6 ], "formula_id": "formula_8", "formula_text": "𝒈 𝑡 = ∇ℓ 𝑡 (𝒘 𝑡 + 𝜹 𝑡 ) 6:" }, { "formula_coordinates": [ 3, 59.67, 563.47, 159.31, 23.19 ], "formula_id": "formula_9", "formula_text": "𝒘 𝑡 +1 = 𝒘 𝑡 -𝛼 𝑡 B-1 𝑡 m𝑡 + 𝛾 1-𝛾 (𝒈 𝑡 -g𝑡 ) 8: end for" }, { "formula_coordinates": [ 3, 83.57, 87.24, 474.63, 619.87 ], "formula_id": "formula_10", "formula_text": "𝐿(𝒘) = -log 0.7𝑒 -𝐾 1 (𝒘 )/1.8 2 + 0.3𝑒 -𝐾 2 (𝒘 )/1.2 2 , Algorithm 3 SGD with WSAM 1: Input: parameters 𝜌, 𝜖 > 0, 𝛾 ∈ [0, 1), 𝒘 1 ∈ R 𝑛 , step size {𝛼 𝑡 } 𝑇 𝑡 =1 2: for 𝑡 = 1 to 𝑇 do 3: g𝑡 = ∇ℓ 𝑡 (𝒘 𝑡 ) 4:" }, { "formula_coordinates": [ 3, 323.83, 168.04, 205.34, 207.92 ], "formula_id": "formula_11", "formula_text": "𝒘 𝑡 +1 = 𝒘 𝑡 -𝛼 𝑡 𝛾 1-𝛾 𝒈 𝑡 + 1-2𝛾 1-𝛾 g𝑡 7: end for ∇L SAM (ω) ∇L(ω) ∇L W SAM (ω) γ < 1 2 ∇L SAM (ω) ∇L(ω) γ > 1 2 ∇L W SAM (ω)" }, { "formula_coordinates": [ 3, 366.7, 440.1, 139.59, 23.21 ], "formula_id": "formula_12", "formula_text": "𝐾 𝑖 (𝜇, 𝜎) = log 𝜎 𝑖 𝜎 + 𝜎 2 + (𝜇 -𝜇 𝑖 ) 2 2𝜎 2 𝑖 -1 2" }, { "formula_coordinates": [ 3, 317.51, 649.75, 241.67, 60.69 ], "formula_id": "formula_13", "formula_text": "𝛼 𝑡 = 𝛼/ √ 𝑡, 𝜌 𝑡 ≤ 𝜌, ∥𝒈 𝑡 ∥ ∞ ≤ 𝐺 ∞ , ∥ g𝑡 ∥ ∞ ≤ 𝐺 ∞ ∀𝑡 ∈ [𝑇 ]. Suppose ℓ 𝑡 (𝒘) is convex and 𝜂-smooth, i.e., ∥∇ℓ 𝑡 (𝒖) -∇ℓ 𝑡 (𝒗)∥ ≤ 𝜂 ∥𝒖 -𝒗 ∥, for all 𝑡 ∈ [𝑇 ], 𝒘 * is an optimal solution of 𝑇 𝑡 =1 ℓ 𝑡 (𝒘), i.e., 𝒘 * = arg min 𝒘 ∈R 𝑛 𝑇 𝑡 =1 ℓ 𝑡 (𝒘) and there exists the constant 𝐷 ∞ such that max 𝑡 ∈ [𝑇 ] ∥𝒘 𝑡 -𝒘 * ∥ ∞ ≤ 𝐷 ∞ . Then" }, { "formula_coordinates": [ 4, 94.23, 289.92, 150.82, 26.06 ], "formula_id": "formula_14", "formula_text": "𝑇 ∑︁ 𝑡 =1 ℓ 𝑡 (𝒘 𝑡 ) -ℓ 𝑡 (𝒘 * ) ≤ 𝐶 1 √ 𝑇 + 𝐶 2 𝑇 ∑︁ 𝑡 =1 𝜌 𝑡 ," }, { "formula_coordinates": [ 4, 84.07, 335.18, 179.01, 26.85 ], "formula_id": "formula_15", "formula_text": "𝐶 1 = 𝑛𝐷 2 ∞ 2𝛼 + 10𝛾 2 -8𝛾 + 2 (1 -𝛾) 2 𝑛𝐺 2 ∞ , 𝐶 2 = √ 𝑛𝐷 ∞ 𝜂𝛾 1 -𝛾 ." 
}, { "formula_coordinates": [ 4, 53.19, 405.03, 248.34, 104.11 ], "formula_id": "formula_16", "formula_text": "𝒘 𝒕+1 = Π W 𝒘 𝑡 -𝛼 𝑡 𝛾 1-𝛾 𝒈 𝑡 + 1-2𝛾 1-𝛾 g𝑡 . Proof. Let 𝒉 𝑡 = 𝛾 1-𝛾 𝒈 𝑡 + 1-2𝛾 1-𝛾 g𝑡 . Since ∥𝒘 𝑡 +1 -𝒘 * ∥ 2 = 𝒘 𝑡 -𝛼 𝑡 𝛾 1 -𝛾 𝒈 𝑡 + 1 -2𝛾 1 -𝛾 g𝑡 -𝒘 * 2 = ∥𝒘 𝑡 -𝒘 * ∥ 2 -2𝛼 𝑡 𝒘 𝑡 -𝒘 * , g𝑡 -2𝛼 𝑡 𝒘 𝑡 -𝒘 * , 𝛾 1 -𝛾 (𝒈 𝑡 -g𝑡 ) + 𝛼 2 𝑡 ∥𝒉 𝑡 ∥ 2 ,(3)" }, { "formula_coordinates": [ 4, 58.52, 531.72, 230.63, 138.59 ], "formula_id": "formula_17", "formula_text": "𝒘 𝑡 -𝒘 * , g𝑡 = 1 2𝛼 𝑡 ∥𝒘 𝑡 -𝒘 * ∥ 2 -∥𝒘 𝑡 +1 -𝒘 * ∥ 2 - 𝛾 1 -𝛾 < 𝒘 𝒕 -𝒘 * , 𝒈 𝑡 -g𝑡 > + 𝛼 𝑡 2 ∥𝒉 𝑡 ∥ 2 ≤ 1 2𝛼 𝑡 ∥𝒘 𝑡 -𝒘 * ∥ 2 -∥𝒘 𝑡 +1 -𝒘 * ∥ 2 + 𝛾 1 -𝛾 ∥𝒘 𝑡 -𝒘 * ∥∥𝒈 𝑡 -g𝑡 ∥ + 𝛼 𝑡 (1 -2𝛾) 2 (1 -𝛾) 2 ∥𝒈 𝑡 ∥ 2 + 𝛾 2 (1 -𝛾) 2 ∥ g𝑡 ∥ 2 ≤ 1 2𝛼 𝑡 ∥𝒘 𝑡 -𝒘 * ∥ 2 -∥𝒘 𝑡 +1 -𝒘 * ∥ 2 + 𝛾 1 -𝛾 √ 𝑛𝐷 ∞ 𝜂𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝛼 𝑡 ," }, { "formula_coordinates": [ 4, 53.44, 89.58, 507.32, 620.22 ], "formula_id": "formula_18", "formula_text": "𝜂-smooth of ℓ 𝑡 (𝒘), ∥𝒘 𝑡 -𝒘 * ∥ ∞ ≤ 𝐷 ∞ and ∥𝒈 𝑡 ∥ ∞ ≤ 𝐺 ∞ . Thus, the regret 𝑇 ∑︁ 𝑡 =1 ℓ 𝑡 (𝒘 𝑡 ) -ℓ 𝑡 (𝒘 * ) ≤ 𝑇 ∑︁ 𝑡 =1 𝒘 𝑡 -𝒘 * , g𝑡 ≤ 𝑇 ∑︁ 𝑡 =1 1 2𝛼 𝑡 ∥𝒘 𝑡 -𝒘 * ∥ 2 -∥𝒘 𝑡 +1 -𝒘 * ∥ 2 + 𝛾 1 -𝛾 √ 𝑛𝐷 ∞ 𝜂𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝛼 𝑡 ≤ 1 2𝛼 1 ∥𝒘 1 -𝒘 * ∥ 2 + 𝑇 ∑︁ 𝑡 =2 ∥𝒘 𝑡 -𝒘 * ∥ 2 2 1 𝛼 𝑡 - 1 𝛼 𝑡 -1 + 𝑇 ∑︁ 𝑡 =1 √ 𝑛𝐷 ∞ 𝜂𝛾 1 -𝛾 𝜌 𝑡 + 𝑇 ∑︁ 𝑡 =1 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝛼 𝑡 ≤ 𝑛𝐷 2 ∞ 2𝛼 + 𝑛𝐷 2 ∞ 2 𝑇 ∑︁ 𝑡 =2 1 𝛼 𝑡 - 1 𝛼 𝑡 -1 + 𝑇 ∑︁ 𝑡 =1 √ 𝑛𝐷 ∞ 𝜂𝛾 1 -𝛾 𝜌 𝑡 + 𝑇 ∑︁ 𝑡 =1 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝛼 𝑡 ≤ 𝑛𝐷 2 ∞ 2𝛼 √ 𝑇 + 𝑇 ∑︁ 𝑡 =1 √ 𝑛𝐷 ∞ 𝜂𝛾 1 -𝛾 𝜌 𝑡 + 10𝛾 2 -8𝛾 + 2 (1 -𝛾) 2 𝑛𝐺 2 ∞ √ 𝑇 ," }, { "formula_coordinates": [ 4, 346.44, 380.43, 183.13, 79.42 ], "formula_id": "formula_19", "formula_text": "𝑇 ∑︁ 𝑡 =1 1 √ 𝑡 = 1 + ∫ 3 2 1 √ 2 𝑑𝑠 + • • • + ∫ 𝑇 𝑇 -1 1 √ 𝑇 𝑑𝑠 < 1 + ∫ 3 2 1 √ 𝑠 -1 𝑑𝑠 + • • • + ∫ 𝑇 𝑇 -1 1 √ 𝑠 -1 𝑑𝑠 = 1 + ∫ 𝑇 2 1 √ 𝑠 -1 𝑑𝑠 = 2 √ 𝑇 -1 -1 < 2 √ 𝑇 ." }, { "formula_coordinates": [ 4, 361.75, 500.43, 144.1, 26.06 ], "formula_id": "formula_20", "formula_text": "𝑇 ∑︁ 𝑡 =1 ℓ 𝑡 (𝒘 𝑡 ) -ℓ 𝑡 (𝒘 * ) ≤ (𝐶 1 + 2𝜌𝐶 2 ) √ 𝑇 ," }, { "formula_coordinates": [ 4, 384.02, 566.78, 98, 26.06 ], "formula_id": "formula_21", "formula_text": "𝑇 ∑︁ 𝑡 =1 𝜌 𝑡 = 𝜌 𝑇 ∑︁ 𝑡 =1 1 √ 𝑡 < 2𝜌 √ 𝑇 ." }, { "formula_coordinates": [ 4, 359.66, 695.08, 156.56, 16.24 ], "formula_id": "formula_22", "formula_text": "𝐿(𝒖) ≤ 𝐿(𝒗) + ⟨∇𝐿(𝒗), 𝒖 -𝒗⟩ + 𝜂 2 ∥𝒖 -𝒗 ∥ 2 ." }, { "formula_coordinates": [ 5, 54.23, 98.27, 239.55, 30.36 ], "formula_id": "formula_23", "formula_text": "∥𝒈 𝑡 ∥ ∞ ≤ 𝐺 ∞ , ∥ g𝑡 ∥ ∞ ≤ 𝐺 ∞ , ∥∇𝐿(𝒘 𝑡 ) ∥ ∞ ≤ 𝐺 ∞ , ∀𝑡 ∈ [𝑇 ]. (3)" }, { "formula_coordinates": [ 5, 54.23, 131.14, 240.79, 20.22 ], "formula_id": "formula_24", "formula_text": "𝑡 ) + 𝜻 𝑡 , E[𝜻 𝑡 ] = 0 and 𝜻 𝑖 is independent of 𝜻 𝑗 if 𝑖 ≠ 𝑗. (4) 𝛼 𝑡 = 𝛼/ √ 𝑡, 𝜌 𝑡 ≤ 𝜌, ∀𝑡 ∈ [𝑇 ]." }, { "formula_coordinates": [ 5, 92.78, 183.45, 162.46, 48.2 ], "formula_id": "formula_25", "formula_text": "min 𝑡 ∈ [𝑇 ] E[∥∇𝐿(𝒘 𝑡 )∥ 2 ] ≤ 1 √ 𝑇 -1 𝐶 3 + 𝐶 5 + 𝐶 4 𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 𝜌 𝑡 + 𝐶 5 log𝑇 ," }, { "formula_coordinates": [ 5, 119.85, 261.55, 107.51, 52.72 ], "formula_id": "formula_26", "formula_text": "𝐶 3 = 𝐿(𝒘 1 ) 2𝛼 , 𝐶 4 = √ 𝑛𝐺 ∞ 𝜂𝛾 2𝛼 (1 -𝛾) , 𝐶 5 = 5𝛾 2 -4𝛾 + 1 2(1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂𝛼 ." }, { "formula_coordinates": [ 5, 58.1, 374.94, 231.5, 40.66 ], "formula_id": "formula_27", "formula_text": "𝐿(𝒘 𝑡 +1 ) ≤ 𝐿(𝒘 𝑡 ) + ⟨∇𝐿(𝒘 𝑡 ), 𝒘 𝑡 +1 -𝒘 𝑡 ⟩ + 𝜂 2 ∥𝒘 𝑡 +1 -𝒘 𝑡 ∥ 2 =𝐿(𝒘 𝑡 ) -∇𝐿(𝒘 𝑡 ), 𝛼 𝑡 g𝑡 + 𝛾 1 -𝛾 (𝒈 𝑡 -g𝑡 ) + 𝜂 2 ∥𝒘 𝑡 +1 -𝒘 𝑡 ∥ 2 ." 
}, { "formula_coordinates": [ 5, 54.07, 462.77, 254.11, 110.7 ], "formula_id": "formula_28", "formula_text": "𝛼 𝑡 E[∥∇𝐿(𝒘 𝑡 ) ∥ 2 ] ≤ E[𝐿(𝒘 𝑡 ) -𝐿(𝒘 𝑡 +1 )] -𝛼 𝑡 E ∇𝐿(𝒘 𝑡 ), 𝛾 1 -𝛾 (∇𝐿(𝒘 𝑡 + 𝜹 𝑡 ) -∇𝐿(𝒘 𝑡 )) + 𝜂 2 ∥𝒘 𝑡 +1 -𝒘 𝑡 ∥ 2 ≤E[𝐿(𝒘 𝑡 ) -𝐿(𝒘 𝑡 +1 )] + 𝛼 𝑡 𝛾 1 -𝛾 E[∥∇𝐿(𝒘 𝑡 )∥∥∇𝐿(𝒘 𝑡 + 𝜹 𝑡 ) -∇𝐿(𝒘 𝑡 )∥] + 𝜂 2 𝛼 2 𝑡 𝛾 1 -𝛾 𝒈 𝑡 + 1 -2𝛾 1 -𝛾 g𝑡 2 ≤E[𝐿(𝒘 𝑡 ) -𝐿(𝒘 𝑡 +1 )] + √ 𝑛𝐺 ∞ 𝜂𝛾 1 -𝛾 𝛼 𝑡 𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂𝛼 2 𝑡 ." }, { "formula_coordinates": [ 5, 54.07, 605.88, 240.89, 101.49 ], "formula_id": "formula_29", "formula_text": "𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 E[∥∇𝐿(𝒘 𝑡 ) ∥ 2 ] ≤ E[𝐿(𝒘 1 ) -𝐿(𝒘 𝑇 +1 )] + √ 𝑛𝐺 ∞ 𝜂𝛾 1 -𝛾 𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂 𝑇 ∑︁ 𝑡 =1 𝛼 2 𝑡 ≤𝐿(𝒘 1 ) + √ 𝑛𝐺 ∞ 𝜂𝛾 1 -𝛾 𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂 𝑇 ∑︁ 𝑡 =1 𝛼 2 𝑡 .(6)" }, { "formula_coordinates": [ 5, 340.45, 101.09, 218.29, 124 ], "formula_id": "formula_30", "formula_text": "𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 = 𝑇 ∑︁ 𝑡 =1 𝛼 √ 𝑡 = 𝛼 ∫ 2 1 1 √ 1 𝑑𝑠 + • • • + ∫ 𝑇 𝑇 -1 1 √ 𝑇 𝑑𝑠 > 𝛼 ∫ 𝑇 1 1 √ 𝑠 𝑑𝑠 = 2𝛼 √ 𝑇 -1 , 𝑇 ∑︁ 𝑡 =1 𝛼 2 𝑡 = 𝑇 ∑︁ 𝑡 =1 𝛼 2 𝑡 = 𝛼 2 1 + ∫ 3 2 1 2 𝑑𝑠 + • • • + ∫ 𝑇 𝑇 -1 1 𝑇 𝑑𝑠 < 𝛼 2 1 + ∫ 𝑇 2 1 𝑠 -1 𝑑𝑠 = 𝛼 2 (log(𝑇 -1) + 1) < 𝛼 2 (log𝑇 + 1) ,(7)" }, { "formula_coordinates": [ 5, 318.22, 246.8, 240.52, 113.28 ], "formula_id": "formula_31", "formula_text": "𝑡 ∈ [𝑇 ] E[∥∇𝐿(𝒘 𝑡 )∥ 2 ] ≤ 1 𝑇 𝑡 =1 𝛼 𝑡 𝐿(𝒘 1 ) + √ 𝑛𝐺 ∞ 𝜂𝛾 1 -𝛾 𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 (1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂 𝑇 ∑︁ 𝑡 =1 𝛼 2 𝑡 ≤ 1 √ 𝑇 -1 𝐿(𝒘 1 ) 2𝛼 + √ 𝑛𝐺 ∞ 𝜂𝛾 2𝛼 (1 -𝛾) 𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 𝜌 𝑡 + 5𝛾 2 -4𝛾 + 1 2(1 -𝛾) 2 𝑛𝐺 2 ∞ 𝜂𝛼 (log𝑇 + 1) .(8)" }, { "formula_coordinates": [ 6, 68.24, 112.77, 211.08, 22.24 ], "formula_id": "formula_33", "formula_text": "𝐿 D (𝒘) ≤ 𝐿 𝑊 𝑆𝐴𝑀 S (𝒘) + 2|1 -2𝛾 | 1 -𝛾 √︂ 𝐶 1 𝑚 + 𝛾 1 -𝛾 √︂ 𝐶 2 + 𝐶 3 𝑚 -1 ." }, { "formula_coordinates": [ 6, 99.73, 163.14, 28.11, 7.74 ], "formula_id": "formula_34", "formula_text": "𝐶 1 = 8𝑑" }, { "formula_coordinates": [ 6, 91.66, 329.26, 143.29, 22.9 ], "formula_id": "formula_35", "formula_text": "+ 𝑛 log 1 + ∥𝒘 ∥ 2 𝜌 2 1 + √︂ log(𝑚) 𝑛 2 1/2" }, { "formula_coordinates": [ 6, 53.66, 378.27, 240.44, 193.43 ], "formula_id": "formula_36", "formula_text": "𝐿 D (𝒘) = 1 -2𝛾 1 -𝛾 𝐿 D (𝒘) + 𝛾 1 -𝛾 𝐿 D (𝒘) ≤ 1 -2𝛾 1 -𝛾 𝐿 S (𝒘) + 2|1 -2𝛾 | 1 -𝛾 √︂ 8𝑑 log(𝑒𝑚/𝑑) + 2 log(4/𝛿) 𝑚 + 𝛾 1 -𝛾 max ∥𝝐 ∥ ≤𝜌 𝐿 S (𝒘 + 𝝐) + 𝛾 1 -𝛾 1 √ 𝑚 -1 4 log(𝑚/𝛿) + 8 log(6𝑚 + 3𝑛) + 𝑛 log 1 + ∥𝒘 ∥ 2 𝜌 2 1 + √︂ log(𝑚) 𝑛 2 1/2 =𝐿 𝑊 𝑆𝐴𝑀 S (𝒘) + 2|1 -2𝛾 | 1 -𝛾 √︂ 8𝑑 log(𝑒𝑚/𝑑) + 2 log(4/𝛿) 𝑚 + 𝛾 1 -𝛾 1 √ 𝑚 -1 𝑛 log 1 + ∥𝒘 ∥ 2 𝜌 2 1 + √︂ log(𝑚) 𝑛 2 + 4 log(𝑚/𝛿) + 8 log(6𝑚 + 3𝑛) 1/2 ." }, { "formula_coordinates": [ 8, 59.67, 544.01, 234.37, 50.45 ], "formula_id": "formula_37", "formula_text": "1: Input: parameters 𝜌, 𝜖 > 0, 𝛾 ∈ [0, 1), 𝒘 1 ∈ R 𝑛 , step size {𝛼 𝑡 } 𝑇 𝑡 =1 2: for 𝑡 = 1 to 𝑇 do 3: g𝑡 = ∇ℓ 𝑡 (𝒘 𝑡 ) 4:" }, { "formula_coordinates": [ 8, 77.87, 609.68, 67.51, 11.17 ], "formula_id": "formula_38", "formula_text": "𝒉 𝑡 = 𝛾 1-𝛾 𝒈 𝑡 + 1-2𝛾" } ]
Sharpness-Aware Minimization Revisited: Weighted Sharpness as a Regularization Term
The generalization of Deep Neural Networks (DNNs) is known to be closely related to the flatness of minima, leading to the development of Sharpness-Aware Minimization (SAM) for seeking flatter minima and better generalization. In this paper, we revisit the loss of SAM and propose a more general method, called WSAM, by incorporating sharpness as a regularization term. We prove its generalization bound through the combination of PAC and Bayes-PAC techniques, and evaluate its performance on various public datasets. The results demonstrate that WSAM achieves improved generalization, or is at least highly competitive, compared to the vanilla optimizer, SAM and its variants. The code is available at this link 1.
• Theory of computation → Continuous optimization.
Yun Yue; Jiadi Jiang; Zhiling Ye; Ning Gao; Yongchao Liu; Ke Zhang
[ { "figure_caption": "Figure 1 :1Figure 1: How WSAM updates on the choice of 𝛾.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Theorem 5 . 1 .51(Convergence in convex settings) Let {𝒘 𝑡 } be the sequence obtained byAlgorithm 3, ", "figure_data": "", "figure_id": "fig_1", "figure_label": "51", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: WSAM can achieve different minima by choosing different 𝛾.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Corollary 5 .52 implies the regret is 𝑂 ( √ 𝑇 ) and can achieve the convergence rate 𝑂 (1/ √ 𝑇 ) in convex settings. Theorem 5.3. (Convergence in non-convex settings) Suppose that the following assumptions are satisfied:", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Corollary 5 . 4 . 1 √ 𝑇 - 1 𝐶 3 +=1 1 𝑡<541131This completes the proof. □ Then we have the following corollary. Suppose 𝜌 𝑡 = 𝜌/ √ 𝑡, then we have min 𝑡 ∈ [𝑇 ] E[∥∇𝑓 (𝒘 𝑡 )∥ 2 ] ≤ 𝐶 5 + 𝛼𝜌𝐶 4 + (𝐶 5 + 𝛼𝜌𝐶 4 ) log𝑇 , where 𝐶 3 ∼ 𝐶 5 are the same with Theorem 5.3. Proof. Since 𝜌 𝑡 = 𝜌/ √ 𝑡, we have 𝑇 ∑︁ 𝑡 =1 𝛼 𝑡 𝜌 𝑡 = 𝛼𝜌 𝑇 ∑︁ 𝑡 𝛼𝜌 (log𝑇 + 1).", "figure_data": "", "figure_id": "fig_4", "figure_label": "541131", "figure_type": "figure" }, { "figure_caption": "Theorem 5 . 5 .55e., ℓ (ℎ 𝒘 (𝒙), 𝑦) = I(ℎ 𝒘 (𝒙) ≠ 𝑦) where I is the indicator function. Followed by Dziugaite and Roy[10], Foret et al.[11], McAllester[25], Shalev-Shwartz and Ben-David[29], Vapnik and Chervonenkis[31], we have the following generalization property. Let H = {ℎ 𝒘 : 𝒘 ∈ R 𝑛 } be a hypothesis class of functions from a domain X to {0, 1} and let the loss function be the 0-1 loss. Assume that VCdim(H ) = 𝑑 < ∞ and 𝐿 D (𝒘) ≤ E 𝝐∼𝑁 (0,𝜌 2 I) [𝐿 D (𝒘 + 𝝐)]. Then for any 𝜌 > 0, 𝛾 ∈ [0, 1) and any distribution D, with probability of at least 1 -𝛿 over the choice of the training set S which has 𝑚 elements drawn i.i.d. according to D, we have", "figure_data": "", "figure_id": "fig_5", "figure_label": "55", "figure_type": "figure" }, { "figure_caption": "Table 5 : 2 ,52Hyperparameters to reproduce experimental results for ResNet18 on Cifar10 with different noise levels. 𝛽 = 0.5, 𝛾 = 0.6 𝜌 = 0.2, 𝛽 = 0.5, 𝛾 = 0.6 𝜌 = 0.1, 𝛽 = 0.6, 𝛾 = 0.6 𝜌 = 0.05, 𝛽 = 0.5, 𝛾 = 0.5 GSAM 𝜌 = 0.2, 𝛼 = 0.01 𝜌 = 0.2, 𝛼 = 0.02 𝜌 = 0.1, 𝛼 = 0.3 𝜌 = 0.05, 𝛼 = 0.3 WSAM 𝜌 = 0.2, 𝛾 = 0.91 𝜌 = 0.2, 𝛾 = 0.91 𝜌 = 0.1, 𝛾 = 0.93 𝜌 = 0.05, 𝛾 = 0.92", "figure_data": "", "figure_id": "fig_6", "figure_label": "52", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The sensitivity of WSAM's performance to the choice of 𝛾.", "figure_data": "", "figure_id": "fig_8", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Input: parameters 𝜌, 𝜖 > 0, 𝒘 1 ∈ R 𝑛 , step size {𝛼 𝑡 } 𝑇 𝑡 =1 ,", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "𝒎 𝑡 = 𝜙 𝑡 (𝒈 1 , . . . , 𝒈 𝑡 ) and 𝐵 𝑡 = 𝜓 𝑡 (𝒈 1 , . . . 
, 𝒈 𝑡 )", "figure_data": "6:7:𝒘 𝑡 +1 = 𝒘 𝑡 -𝛼 𝑡 𝐵 -1 𝑡 𝒎 𝑡8: end for", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Base optimizers by different 𝒎 𝑡 and 𝐵 𝑡 .", "figure_data": "Optimizer𝒎 𝒕𝑩 𝒕Sgd𝒈 𝑡", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Top-1 error (%) for ResNet18, WRN-28-10 trained on Cifar10 and Cifar100 with different optimizers.", "figure_data": "Cifar10Cifar100Vanilla (SgdM) 4.32 ± 0.07 20.51 ± 0.20SAM3.68 ± 0.06 19.97 ± 0.30ResNet18ESAM3.83 ± 0.08 20.67 ± 0.24GSAM3.71 ± 0.11 19.84 ± 0.27WSAM3.62 ± 0.10 19.42 ± 0.12Vanilla (SgdM) 3.73 ± 0.09 19.27 ± 0.12SAM2.94 ± 0.08 16.49 ± 0.09WRN-28-10ESAM2.99 ± 0.07 16.52 ± 0.21GSAM2.91 ± 0.08 16.36 ± 0.32WSAM2.74 ± 0.07 16.33 ± 0.26", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Hyperparameters to reproduce experimental results on Cifar10 and Cifar100.", "figure_data": "Cifar10Cifar100ResNet18SgdM SAM ESAM GSAM WSAM SgdM SAM ESAM GSAM WSAMlearning rate0.050.05weight decay1e-31e-3𝜌-0.20.20.20.2-0.20.20.20.2𝛼---0.02----0.03-𝛽--0.6----0.5--𝛾--0.6-0.88--0.5-0.82WRN-28-10 SgdM SAM ESAM GSAM WSAM SgdM SAM ESAM GSAM WSAMlearning rate0.10.1weight decay1e-31e-3𝜌-0.20.20.20.2-0.20.20.20.2𝛼---0.01----0.2-𝛽--0.6----0.5--𝛾--0.6-0.88--0.6-0.94", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Top-1 error (%) for Deit-Base over-trained on Ima-geNet.", "figure_data": "Top-1 error (%)Initial18.2Vanilla (SgdM)18.17 ± 0.005SAM18.01 ± 0.007GSAM (𝛼 = 0.02)18.01 ± 0.005WSAM (𝛾 = 0.94)18.01 ± 0.003", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Test of label noise. Top-1 accuracy (%) for ResNet18 on Cifar10 with different noise levels. ± 0.11 93.25 ± 0.12 89.90 ± 0.18 79.09 ± 0.91 WSAM 95.18 ± 0.12 93.33 ± 0.11 89.95 ± 0.12 78.30 ± 0.92 the accuracy and causes a high variance. Therefore, it is recommended to use the vanilla WSAM with a fixed 𝜌 for stability and performance.", "figure_data": "noise level (%)20406080Vanilla88.06 ± 0.48 84.11 ± 0.39 79.15 ± 0.43 69.07 ± 0.95SAM94.99 ± 0.09 93.28 ± 0.16 88.32 ± 0.28 77.57 ± 0.51ESAM94.93 ± 0.18 92.69 ± 0.48 86.42 ± 0.30 32.29 ± 4.67GSAM95.11", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Top-1 error (%) for WRN-28-10 trained on Cifar10.", "figure_data": "Vanilla+Adaptive+FisherSAM2.94 ± 0.08 2.84 ± 0.04 3.00 ± 0.13WSAM 2.74 ± 0.07 2.90 ± 0.08 3.45 ± 0.35", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "𝒎 𝑡 = 𝜙 𝑡 (𝒉 1 , . . . , 𝒉 𝑡 ) and 𝐵 𝑡 = 𝜓 𝑡 (𝒉 1 , . . . , 𝒉 𝑡 )", "figure_data": "1-𝛾 g𝑡7:8:", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The ablation study of WSAM for ResNet18, WRN-28-10 on Cifar10 and Cifar100.", "figure_data": "Cifar10Cifar100", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Differences in the minima found by different optimizers for ResNet18 on Cifar10. 𝜆 𝑚𝑎𝑥 is the dominant Hessian eigenvalue.", "figure_data": "optimizerlossaccuracy 𝜆 𝑚𝑎𝑥Vanilla(SGDM) 0.00260.957562.58SAM0.03390.962222.67WSAM0.00890.965423.97", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, SAM, is a method that the citing paper adopts to improve generalization in various fields such as CV, NLP, and bi-level learning."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work by Zhuang et al. provides a proof that the loss function L(\ud835\udc98) is an approximation to the dominant eigenvalue of the Hessian at local minima, which the citing paper adopts as a basis for their research on sharpness metrics."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work introduces the GSAM method, which the citing paper adopts to minimize both the surrogate loss and the L(w) of Eq. (1) by employing the gradient projection technique."}, {"Category": "Extension or Continuation", "Citation": "[8]", "Explanation": "The cited work improves the efficiency of SAM by introducing the ESAM method, which selectively applies SAM update with stochastic weight perturbation and sharpness-sensitivity data selection, building upon the original SAM method."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work introduces the ASAM method, which improves the geometrical structure of the exploration area of L(w) by introducing adaptive sharpness, building upon the original SAM method."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work introduces the Fisher SAM method, which also aims to improve the geometrical structure of the exploration area of L(w), building upon the original SAM method."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work provides the Sgd optimization method that the citing paper incorporates into the generic framework of SAM."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work provides the SgdM optimization method that the citing paper incorporates into the generic framework of SAM."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work provides the Adam optimization method that the citing paper incorporates into the generic framework of SAM."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work by Foret et al. provides the original framework of SAM, which the citing paper uses as a basis for their own research on the generic framework of SAM."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work by Kim et al. 
provides a similar toy example to illustrate the effect and benefit of \ud835\udefe in WSAM, which supports the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work, ResNet18, serves as the model used in the study of WSAM performance for training models from scratch on the Cifar10 and Cifar100 datasets."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work, WideResNet-28-10, is the model used in the study of WSAM performance for training models from scratch on the Cifar10 and Cifar100 datasets."}, {"Category": "Data Source", "Citation": "[11]", "Explanation": "The cited work, SAM, is the source of the training settings used in the study of WSAM performance for training models from scratch on the Cifar10 and Cifar100 datasets."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work, ImageNet dataset, provides the data and training framework for the experiment conducted in the citing paper to test the performance of the DeiTbase model."}, {"Category": "Data Source", "Citation": "[30]", "Explanation": "The cited work, Data-Efficient Image Transformers, serves as the pre-trained checkpoint for the DeiTbase model used in the experiment conducted in the citing paper to test the performance of the model."}, {"Category": "Methodological Basis", "Citation": "[11,20,22]", "Explanation": "The cited works provide a basis for the comparison of WSAM to label noise robustness in the training set, as they have previously shown the effectiveness of SAM-like optimizers in this regard."}, {"Category": "Extension or Continuation", "Citation": "https://github.com/facebookresearch/deit", "Explanation": "The cited work introduces the use of SgdM with momentum 0.9 as the base optimizer, which the citing paper extends by using it in the training of a ResNet18 for 200 epochs on the Cifar10 dataset."}, {"Category": "Supporting Evidence", "Citation": "[2]", "Explanation": "The cited work by Arazo et al. provides the methodology for injecting symmetric label noise of different levels in the training set, which the citing paper uses to compare the robustness of WSAM to label noise."}, {"Category": "Methodological Basis", "Citation": "[20,22]", "Explanation": "The cited works provide the configuration of parameters for the adaptive and Fisher information methods used in the study, which serves as a methodological basis for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "Sec. 6.1", "Explanation": "The cited section in the text is the data source for the parameters used in the study, as the configuration is reused from previous studies to ensure accurate and reliable results."}, {"Category": "Extension or Continuation", "Citation": "Tab. 7", "Explanation": "The cited table in the text is an extension of the study conducted in the paper, as it presents the results of the search for the optimal parameters in the adaptive and Fisher information methods used in the research."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work by Kim et al. 
provides a method for reducing the number of parameters in a model, which the citing paper adopts to improve the performance of their model."}, {"Category": "Methodological Basis", "Citation": "[11,18,34]", "Explanation": "The cited works provide a metric for describing the sharpness at the minima found by the WSAM and SAM optimizers, which the citing paper adopts in its research to compare the differences in the minima found by the two optimizers."}, {"Category": "Data Source", "Citation": "Power Iteration algorithm", "Explanation": "The Power Iteration algorithm is used in the cited work to calculate the maximum eigenvalue, which the citing paper utilizes in its research to understand the differences in the minima found by the WSAM and SAM optimizers."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11" ], "table_ref": [], "text": "Graph is a foundational data structure, denoting pairwise relationships between entities. It finds applications across a range of domains, such as social networks, transportation, and biology. [Wu et al., 2020, Ma andTang, 2021] Among these diverse applications, semi-supervised node classification has emerged as a crucial and challenging task, attracting significant attention from researchers. Given the graph structure, node features, and a subset of labels, the semi-supervised node classification task aims to predict the labels of unlabeled nodes. In recent years, Graph Neural Networks (GNNs) have demonstrated remarkable success in addressing this task due to their exceptional ability to model both the graph structure and node features [Zhou et al., 2020]. A typical GNN model usually follows the message-passing scheme [Gilmer et al., 2017], which mainly contains two operators, i.e., feature transformation and feature propagation, to exploit node features, graph structure, and label information.\nDespite the great success, recent studies have shown that GNNs could introduce various biases from the perspectives of node features and graph topology. In terms of node features, Jiang et al. [Jiang et al., 2022] demonstrated that the message-passing scheme could amplify sensitive node attribute bias. A series of studies [Agarwal et al., 2021, Kose and Shen, 2022, Dai and Wang, 2021] have endeavored to mitigate this sensitive attribute bias in GNNs and ensure fair classification. In terms of graph topology, Tang et al. [Tang et al., 2020] investigated the degree bias in GNNs, signifying that high-degree nodes typically outperform low-degree nodes. This degree bias has also been addressed by several recent studies [Kang et al., 2022, Liu et al., 2023, Liang et al., 2022].\nPreprint. Under review." }, { "figure_ref": [], "heading": "arXiv:2305.15822v1 [cs.LG] 25 May 2023", "publication_ref": [ "b12", "b13", "b8" ], "table_ref": [], "text": "In addition to node features and graph topology, the label information, especially the position of labeled nodes, also plays a crucial role in GNNs. However, the potential bias in label information has been largely overlooked. In practice, with an equal number of training nodes, different labeling can result in significant discrepancies in test performance [Ma et al., 2022, Cai et al., 2017, Hu et al., 2020a]. For instance, Ma et al. [Ma et al., 2021a] study the subgroup generalization of GNNs and find that the shortest path distance to labeled nodes can also affect the GNNs' performance, but they haven't provided deep understanding or solutions. The investigation of the influence of labeled nodes' position on unlabeled nodes remains under-explored.\nIn this work, we discover the presence of a new bias in GNNs, namely the label position bias, which indicates that the nodes \"closer\" to the labeled nodes tend to receive better prediction accuracy. We propose a novel metric called Label Proximity Score (LPS) to quantify and measure this bias. Our study shows that different node groups with varied LPSs can result in a significant performance gap, which showcases the existence of label position bias. 
More importantly, this new metric has a much stronger correlation with performance disparity than existing metrics such as degree [Tang et al., 2020] and shortest path distance [Ma et al., 2021a], which suggests that the proposed Label Proximity Score might be a more intrinsic measurement of label position bias.\nAddressing the label position bias in GNNs is greatly desired. First, the label position bias would cause the fairness issue to nodes that are distant from the labeled nodes. For instance, in a financial system, label position bias could result in unfair assessments for individuals far from labeled ones, potentially denying them access to financial resources. Second, mitigating this bias has the potential to enhance the performance of GNNs, especially if nodes that are distant can be correctly classified. In this work, we propose a Label Position unbiased Structure Learning method (LPSL) to derive a graph structure that mitigates the label position bias. Specifically, our goal is to learn a new graph structure in which each node exhibits similar Label Proximity Scores. The learned graph structure can then be applied across various GNNs. Extensive experiments demonstrate that our proposed LPSL not only outperforms backbone methods but also significantly mitigates the issue of label position bias in GNNs." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Label Position Bias", "publication_ref": [ "b8", "b17", "b18" ], "table_ref": [], "text": "In this section, we provide an insightful preliminary study to reveal the existence of label position bias in GNNs. Before that, we first define the notations used in this paper.\nNotations. We use bold upper-case letters such as X to denote matrices. X i denotes its i-th row and X ij indicates the i-th row and j-th column element. We use bold lower-case letters such as x to denote vectors. 1 n ∈ R n×1 is all-ones column vector. The Frobenius norm and the trace of a matrix X are defined as ∥X∥ F = ij X 2 ij and tr(X) = i X ii , respectively. Let G = (V, E) be a graph, where V is the node set and E is the edge set. N i denotes the neighborhood node set for node v i . The graph can be represented by an adjacency matrix A ∈ R n×n , where A ij > 0 indices that there exists an edge between nodes v i and v j in G, or otherwise A ij = 0. Let D = diag(d 1 , d 2 , . . . , d n ) be the degree matrix, where d i = j A ij is the degree of node v i . The graph Laplacian matrix is defined as L = D -A. We define the normalized adjacency matrix as à = D -1 2 AD -1 2 and the normalized Laplacian matrix as L = I -Ã. Furthermore, suppose that each node is associated with a d-dimensional feature x and we use X = [x 1 , . . . , x n ] ⊤ ∈ R n×d to denote the feature matrix. In this work, we focus on the node classification task on graphs. Given a graph G = {A, X} and a partial set of labels Y L = {y 1 , . . . , y l } for node set V L = {v 1 , . . . , v l }, where y i ∈ R c is a one-hot vector with c classes, our goal is to predict labels of unlabeled nodes. For convenience, we reorder the index of nodes and use a mask matrix T = I l 0 0 0 to represent the indices of labeled nodes.\nLabel Proximity Score. In this study, we aim to study the bias caused by label positions. When studying prediction bias, we first need to define the sensitive groups based on certain attributes or metrics. Therefore, we propose a novel metric, namely the Label Proximity Score, to quantify the closeness between test nodes and training nodes with label information. 
Specifically, the proposed Label Proximity Score (LPS) is defined as follows:\nLPS = PT1 n , and\nP = I -(1 -α) à -1 ,(1)\nwhere P represents the Personalized PageRank matrix, T is the label mask matrix, 1 n is an all-ones column vector, and α ∈ (0, 1] stands for the teleport probability. P ij represents the pairwise node proximity between node i and node j. For each test node i, its LPS represents the sum of its node proximity values to all labeled nodes, i.e., (PT1 n ) i = P i,: T1 n = j∈V L P ij .\nSensitive Groups. In addition to the proposed LPS, we also explore two existing metrics such as node degree [Tang et al., 2020] and shortest path distance to label nodes [Ma et al., 2021a] for comparison since they could be related to the label position bias. For instance, the node with a high degree is more likely to connect with labeled nodes, and the node with a small shortest path to a labeled node is also likely \"closer\" to all labeled nodes if the number of labeled nodes is small. According to these metrics, we split test nodes into different sensitive groups. Specifically, for node degree and shortest path distance to label nodes, we use their actual values to split them into seven sensitive groups, as there are only very few nodes whose degrees or shortest path distances are larger than seven. For the proposed LPS, we first calculate its value and subsequently partition the test nodes evenly into seven sensitive groups, each having an identical range of LPS values.\nExperimental Setup. We conduct the experiments on three representative datasets used in semisupervised node classification tasks, namely Cora, CiteSeer, and PubMed. We also experiment with three different labeling rates: 5 labels per class, 20 labels per class, and 60% labels per class. The experiments are performed using two representative GNN models, GCN [Kipf and Welling, 2016] and APPNP [Gasteiger et al., 2018], which cover both coupled and decoupled architectures. We also provide the evaluation on Label Propagation (LP) [Zhou et al., 2003] to exclude the potential bias caused by node features. For GCN and APPNP, we adopt the same hyperparameter setting with their original papers. The node classification accuracy on different sensitive groups {1, 2, 3, 4, 5, 6, 7} with the labeling rate of 20 labeled nodes per class under APPNP, GCN, and LP models is illustrated in Figure 1, 2, and 3 respectively. Due to the space limitation, we put more details and results of other datasets and labeling rates into Appendix A. Observations. From the results presented in Figure 1, 2, and 3, we can observe the following:\n• Label Position bias is prevalent across all GNN models and datasets. The classification accuracy can notably vary between different sensitive groups, and certain trends are discernible. To ensure fairness and improve performance, addressing this bias is a crucial step in improving GNN models. • While Degree and Shortest Path Distance (SPD) can somewhat reflect disparate performance, indicating that nodes with higher degrees and shorter SPDs tend to perform better, these trends lack consistency, and they can't fully reflect the Label Position bias. For instance, degree bias is not pronounced in the APPNP model as shown in Figure 1, as APPNP can capture the global structure. Moreover, SPD fails to effectively evaluate relatively low homophily graphs, such as CiteSeer [Ma et al., 2021b]. Consequently, there is a need to identify a more reliable metric. 
• The Label Proximity Score (LPS) consistently exhibits a strong correlation with performance disparity across all datasets and models. Typically, nodes with higher LPS scores perform better.\nIn addition, nodes with high degrees and low Shortest Path Distance (SPD) often have higher LPS, as previously analyzed. Therefore, LPS is highly correlated with label position bias. • The Label Propagation, which solely relies on the graph structure, demonstrates a stronger label position bias compared to GNNs as shown in Figure 3. Moreover, the label position bias becomes less noticeable in all models when the labeling rate is high, as there typically exist labeled nodes within the two-hop neighborhood of each test node (detailed in Appendix A). These observations suggest that the label position bias is predominantly influenced by the graph structure. Consequently, this insight motivates us to address the Label Position bias from the perspective of the graph structure.\nIn conclusion, label position bias is indeed present in GNN models, and the proposed Label Proximity Score accurately and consistently reflects the performance disparity over different sensitive groups for different models across various datasets. Overall, the label proximity score exhibits more consistent and stronger correlations with performance disparity compared with node degree and shortest path distance, which suggests that LPS serves as a better metric for label position bias. Further, through the analysis of the Label Propagation method and the effects of different labeling rates, we deduce that the label position bias is primarily influenced by the graph structure. This understanding paves us a way to mitigate label position bias." }, { "figure_ref": [], "heading": "The Proposed Framework", "publication_ref": [], "table_ref": [], "text": "The studies in Section 2 suggest that Label Position bias is a prevalent issue in GNNs. In other words, nodes far away from labeled nodes tend to yield subpar performance. Such unfairness could be problematic, especially in real-world applications where decisions based on these predictions can have substantial implications. As a result, mitigating label position bias has the potential to enhance the fairness of GNNs in real-world applications, as well as improve overall model performance.\nTypically, there are two ways to address this problem, i.e., from a model-centric or a data-centric perspective. In this work, we opt for a data-centric perspective for two primary reasons: (1) The wide variety of GNN models in use in real-world scenarios, each with its unique architecture, makes it challenging to design a universal component that can be seamlessly integrated into all GNNs to mitigate the label position bias. Instead, the graph structure is universal and can be applied to any existing GNNs. (2) Our preliminary studies indicate that the graph structure is the primary factor contributing to the label position bias. Therefore, it is more rational to address the bias by learning a label position unbiased graph structure.\nHowever, there are mainly two challenges: (1) How can we define a label position unbiased graph structure, and how can we learn this structure based on the original graph? (2) Given that existing graphs are typically sparse, how can we ensure that the learned data structure is also sparse to avoid excessive memory consumption? In the following subsections, we aim to address these challenges." 
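Before turning to the structure learning objective, the following is a minimal SciPy/NumPy sketch of how the Label Proximity Score of Eq. (1) can be computed. The dense matrix inverse, the leading α scaling (which only rescales all scores uniformly), and the toy path graph are illustrative assumptions; on large graphs a truncated personalized-PageRank iteration would replace the dense solve.

```python
import numpy as np
import scipy.sparse as sp

def label_proximity_scores(adj, labeled_idx, alpha=0.1):
    """Sketch of LPS = P T 1_n with P the personalized PageRank matrix of Eq. (1).

    adj: scipy.sparse adjacency matrix (n x n); labeled_idx: iterable of labeled
    node indices; alpha: teleport probability. Dense inversion is for clarity only.
    """
    n = adj.shape[0]
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    a_norm = sp.diags(d_inv_sqrt) @ adj @ sp.diags(d_inv_sqrt)   # D^{-1/2} A D^{-1/2}
    ppr = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * a_norm.toarray())
    mask = np.zeros(n)
    mask[list(labeled_idx)] = 1.0                                 # T 1_n
    return ppr @ mask                                             # one LPS value per node

# Illustrative usage on a 4-node path graph with node 0 labeled (toy data).
adj = sp.csr_matrix(np.array([[0., 1., 0., 0.],
                              [1., 0., 1., 0.],
                              [0., 1., 0., 1.],
                              [0., 0., 1., 0.]]))
print(label_proximity_scores(adj, [0]))
```

Test nodes can then be bucketed by their LPS values to form the sensitive groups used in the preliminary study.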
}, { "figure_ref": [], "heading": "Label Position Unbiased Graph Structure Learning", "publication_ref": [ "b20", "b21" ], "table_ref": [], "text": "Based on our preliminary studies, the Label Proximity Score (LPS) can consistently reflect performance disparity across various GNNs and indicate the label position bias. Therefore, to mitigate the label position bias from the structural perspective, our objective is to learn a new graph structure in which each node exhibits similar LPSs. Meanwhile, this learned unbiased graph structure should maintain certain properties of the original graph. To achieve this goal, we formulate the Label Position Unbiased Structure Learning (LPSL) problem as follows:\narg min B ∥I -B∥ 2 F + λtr(B ⊤ LB) s.t. BT1 n = c1 n ,(2)\nwhere B ∈ R n×n represents the debiased graph structure matrix.\ntr(B ⊤ LB) = (vi,vj )∈E ∥B i / √ d i -B j / d j ∥ 2\n2 measures the smoothness of the new structure based on the original graph structure. The proximity to identity matrix I ∈ R n×n encourages self-loops and avoids trivial over-smoothed structures. λ is a hyperparameter that controls the balance between smoothness and self-loop. T is the mask matrix indicating the labeled nodes, 1 n is the all-ones vector, and c is a hyperparameter serving as the uniform Label Proximity Score for all nodes.\nNotably, if we ignore the constraint, then the optimal solution for this primary problem is given by\nB = (I + λL) -1 = α(I -(1 -α Ã)) -1 , where α = 1\n1+λ . This solution recovers the Personalized PageRank (PPR) matrix which measures pairwise node proximity. Furthermore, the constraint in Eq. ( 2) ensures that all nodes have the same Label Proximity Score, denoted as c. The constraint encourages fair label proximity scores for all nodes so that the learned graph structure mitigates the label position bias.\nThe constrained optimization problem in Eq. ( 2) is a convex optimization problem, and it can be solved by the Lagrange Multiplier method [Boyd and Vandenberghe, 2004]. The augmented Lagrange function can be written as:\nL ρ (B, y) = ∥I -B∥ 2 F + λtr(B ⊤ LB) + y ⊤ (BT1 n -c1 n ) + ρ 2 ∥BT1 n -c1 n ∥ 2 2 ,(3)\nwhere y ∈ R n×1 is the introduced Lagrange multiplier, and ρ > 0 represents the augmented Lagrangian parameter. The gradient of L ρ (B, y) to B can be represented as:\n∂L ρ ∂B = 2(B -I) + 2λ LB + y(T1 n ) ⊤ + ρ(BT1 n -c1 n )(T1 n ) ⊤ .(4)\nThen, the problem can be solved by dual ascent algorithm [Boyd et al., 2011] as follows:\nB k+1 = arg min B L ρ (B k , y k ) y k+1 = y k + ρ(B k T1 n -c1 n ),\nwhere k is the current optimization step, and B k+1 can be obtained by multiple steps of gradient descent using the gradient in Eq. (4)." }, { "figure_ref": [], "heading": "Understandings", "publication_ref": [ "b22" ], "table_ref": [], "text": "In this subsection, we provide the understanding and interpretation of our proposed LPSL, establishing its connections with the message passing in GNNs.\nRemark 3.1. The feature aggregation using the learned graph structure B directly as a propagation matrix, i.e., F = BX, is equivalent to applying the message passing in GNNs using the original graph if B is the approximate or exact solution to the primary problem defined in Eq. 2 without constraints.\nThe detailed proof can be found in Appendix B. Remark 3.1 suggests that we can directly substitute the propagation matrix in GNNs with the learned structure B. 
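For concreteness, below is a dense, illustrative sketch of the dual ascent procedure described above, alternating inner gradient steps that use the gradient in Eq. (4) with outer updates of the multiplier y. The fixed step sizes, iteration counts, and dense matrices are simplifying assumptions; the ℓ1-regularized block-coordinate variant introduced later is what makes the computation practical at scale.

```python
import numpy as np

def lpsl_dual_ascent(a_norm, labeled_mask, lam=1.0, c=1.0, rho=1.0,
                     outer_steps=100, inner_steps=20, lr=1e-2):
    """Sketch of solving Eq. (2) by dual ascent, using the gradient of Eq. (4).

    a_norm: dense normalized adjacency (n x n); labeled_mask: 0/1 vector T 1_n.
    Dense matrices and fixed step sizes are simplifying assumptions.
    """
    n = a_norm.shape[0]
    lap = np.eye(n) - a_norm                    # normalized Laplacian
    t1 = labeled_mask.reshape(-1, 1)            # T 1_n as a column vector
    B = np.eye(n)                               # initialize at the identity
    y = np.zeros((n, 1))                        # Lagrange multiplier
    for _ in range(outer_steps):
        for _ in range(inner_steps):            # approximate argmin_B of L_rho(B, y)
            residual = B @ t1 - c               # B T 1_n - c 1_n
            grad = (2.0 * (B - np.eye(n)) + 2.0 * lam * (lap @ B)
                    + y @ t1.T + rho * residual @ t1.T)      # gradient of Eq. (4)
            B -= lr * grad
        y += rho * (B @ t1 - c)                 # dual ascent step on y
    return B

# Illustrative usage on a tiny graph (toy data).
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
deg = np.maximum(A.sum(1), 1e-12)
A_norm = A / np.sqrt(np.outer(deg, deg))
B = lpsl_dual_ascent(A_norm, labeled_mask=np.array([1., 0., 0.]))
```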
The GNNs are trained based on the labeled nodes, and the labeled nodes influence the predictions of unlabeled nodes through the message-passing scheme. Following the definition in [Xu et al., 2018], the influence of node j on node i can be represented by I i (j) = sum ∂hi ∂xj , where h i is the representation of node i, x j is the input feature of node j, and ∂hi ∂xj represents the Jacobian matrix. Afterward, we have the following Proposition based on the influence scores: Proposition 3.1. The influence scores from all labeled nodes to any unlabeled node i will be equal, i.e., j∈V L I i (j) = c, when using the unbiased graph structure B obtained from the optimization problem in Eq. (2) as the propagation matrix in GNNs.\nThe proof can be found in Appendix B. Proposition 3.1 suggests that by using the unbiased graph structure for feature propagation, each node receives an equivalent influence from all the labeled nodes, thereby mitigating the label position bias issue." }, { "figure_ref": [], "heading": "ℓ 1 -regularized Label Position Unbiased Sparse Structure Learning", "publication_ref": [ "b23", "b24", "b25" ], "table_ref": [], "text": "One challenge of solving the graph structure learning problem in Eq. ( 2) is that it could result in a dense structure matrix B ∈ R n×n . This is a memory-intensive outcome, especially when the number of nodes n is large. Furthermore, applying this dense matrix to GNNs can be time-consuming for downstream tasks, which makes it less practical for real-world applications. To make the learned graph structure sparse, we propose the following ℓ 1 -regularized Label Position Unbiased Sparse Structure Learning optimization problem:\narg min B ∥I -B∥ 2 F + λtr(B ⊤ LB) + β∥B∥ 1 s.t. BT1 n = c1 n ,(5)\nwhere ∥B∥ 1 represents the ℓ 1 regularization that encourages zero values in B, and β > 0 is a hyperparameter that controls the sparsity of B. The problem in Eq. ( 5) has been shown to possess a strong localization property that guarantees sparsity [Ha et al., 2021, Hu, 2020]. The problem in Eq. ( 5) can also be solved by the Lagrange Multiplier method. However, when the number of nodes n is large, solving it with conventional gradient descent becomes computationally challenging. Therefore, we solve Eq. ( 5) efficiently by the Block Coordinate Descent (BCD) method [Tseng, 2001] in conjunction with the proximal gradient approach, which handles the ℓ 1 regularization. Specifically, we split B into column blocks, and B :,j represents the j-th block. The gradient of L ρ with respect to B :,j can be written as:\n∂L ρ ∂B :,j = 2(B :,j -I :,j ) + 2λ LB :,j + y(T1 n ) ⊤ j + ρ(BT1 n -c1 n )(T1 n ) ⊤ j ,(6)\nwhere (T1 n ) j ∈ R d×1 is the corresponding block part with block size d. After updating the current block B :,j with several gradient steps using Eq. ( 6), we apply the soft thresholding operator S β/ρ (•) based on the proximal mapping, and the Lagrange multiplier y is then updated by dual ascent. The full algorithm is detailed in Algorithm 1." }, { "figure_ref": [], "heading": "The Model", "publication_ref": [ "b17" ], "table_ref": [], "text": "The proposed LPSL learns an unbiased graph structure with respect to the labeled nodes. Therefore, the learned graph structure can be applied to various GNN models to mitigate the Label Position bias.
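Before describing how the learned structure is used inside GNNs, here is a hedged sketch of one block update of the ℓ1-regularized variant (Eqs. (5)-(6) and Algorithm 1): a few gradient steps on a single column block followed by the soft-thresholding operator S_{β/ρ}(·). Function names, the learning rate, and the block-indexing convention are our assumptions; the released code may differ.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def lpsl_sparse_block_update(B, y, L_tilde, t1, block, lam, c, beta, rho, lr, steps):
    """One BCD pass over a column block of B for Eq. (5), using the block gradient of Eq. (6).
    After sweeping all blocks, y is updated by dual ascent exactly as in Algorithm 1."""
    n = B.shape[0]
    I_block = np.eye(n)[:, block]
    for _ in range(steps):
        residual = B @ t1 - c                                   # BT1_n - c1_n
        grad = (2.0 * (B[:, block] - I_block)
                + 2.0 * lam * (L_tilde @ B[:, block])
                + np.outer(y, t1[block]) + rho * np.outer(residual, t1[block]))
        B[:, block] -= lr * grad
    B[:, block] = soft_threshold(B[:, block], beta / rho)       # S_{beta/rho}(.)
    return B
```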
In this work, we test LPSL on two widely used GNN models, i.e., GCN [Kipf and Welling, 2016] and APPNP [Gasteiger et al., 2018]. For the GCN model, each layer can be represented by:\nH l+1 = σ B λ H l W l ,\nwhere H 0 = X, σ is the non-linear activation function, B λ is the unbiased structure with parameter λ, and W l is the weight matrix in the l-th layer. We refer to this model as LPSL GCN . For the APPNP model, we directly use the learned B λ as the propagation matrix, and the prediction can be written as:\nY pred = B λ f θ (X),\nwhere f θ (•) is any machine learning model parameterized by the learnable parameters θ. We name this model LPSL APPNP . The parameter λ provides high flexibility when applying B λ to different GNN architectures. For decoupled GNNs such as APPNP, which propagate only once, a large λ is necessary to encode the global graph structure, which yields a denser B λ . In contrast, for coupled GNNs such as GCN, which apply propagation multiple times, a smaller λ can be used to encode a more local structure with higher sparsity." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [ "b26", "b27", "b29", "b30", "b17", "b18", "b31", "b33" ], "table_ref": [], "text": "In this section, we conduct comprehensive experiments to verify the effectiveness of the proposed LPSL. In particular, we try to answer the following questions:\n• Q1: Can the proposed LPSL improve the performance of different GNNs? (Section 4.2)\n• Q2: Can the proposed LPSL mitigate the label position bias? (Section 4.3)\n• Q3: How do different hyperparameters affect the proposed LPSL? (Section 4.4)\n4.1 Experimental Settings\nDatasets. We conduct experiments on 8 real-world graph datasets for the semi-supervised node classification task, including three citation datasets, i.e., Cora, Citeseer, and Pubmed [Sen et al., 2008], two co-authorship datasets, i.e., Coauthor CS and Coauthor Physics, two co-purchase datasets, i.e., Amazon Computers and Amazon Photo [Shchur et al., 2018], and one OGB dataset, i.e., ogbn-arxiv [Hu et al., 2020b]. The details about these datasets are shown in Appendix C.\nWe employ the fixed data split for the ogbn-arxiv dataset, while using ten random data splits for all other datasets to ensure more reliable results [Liu et al., 2021]. Additionally, for the Cora, CiteSeer, and PubMed datasets, we experiment with various labeling rates: low labeling rates with 5, 10, and 20 labeled nodes per class, and a high labeling rate with 60% labeled nodes per class. Each model is run three times for every data split, and we report the average performance along with the standard deviation.\nBaselines. To the best of our knowledge, there are no previous works that aim to address the label position bias. In this work, we select three GNNs, namely GCN [Kipf and Welling, 2016], GAT [Veličković et al., 2017], and APPNP [Gasteiger et al., 2018], and two non-GNN methods, MLP and Label Propagation [Zhou et al., 2003], as baselines. Furthermore, we also include GRADE [Wang et al., 2022], a method designed to mitigate degree bias. Notably, SRGNN [Zhu et al., 2021a] demonstrates that if labeled nodes are gathered locally, it could lead to an issue of feature distribution shift. SRGNN aims to mitigate the feature distribution shift issue and is also included as a baseline.\nHyperparameter Settings. We follow the best hyperparameter settings in their original papers for all baselines. For the proposed LPSL GCN , we set λ in the range [1, 8]. For LPSL APPNP , we set λ in the range [8, 15].
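As a minimal illustration of how the learned B λ slots into the two backbones described above (H^{l+1} = σ(B λ H^l W^l) for LPSL GCN and Y = B λ f θ(X) for LPSL APPNP), here is a PyTorch-style sketch. Class names, layer widths, and the dense matrix multiplication are illustrative assumptions, not the authors' code; B is assumed to be a precomputed dense torch tensor.

```python
import torch
import torch.nn as nn

class LPSL_APPNP(nn.Module):
    """Sketch of the decoupled variant: Y = B_lambda * f_theta(X), with B precomputed by LPSL."""
    def __init__(self, B, in_dim, hid_dim, n_classes, dropout=0.5):
        super().__init__()
        self.register_buffer("B", B)              # learned unbiased structure (n x n)
        self.mlp = nn.Sequential(
            nn.Dropout(dropout), nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Dropout(dropout), nn.Linear(hid_dim, n_classes))

    def forward(self, X):
        return self.B @ self.mlp(X)               # propagate predictions once with B

class LPSL_GCNLayer(nn.Module):
    """Sketch of one coupled layer: H^{l+1} = relu(B_lambda H^l W^l)."""
    def __init__(self, B, in_dim, out_dim):
        super().__init__()
        self.register_buffer("B", B)
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, H):
        return torch.relu(self.B @ self.lin(H))
```

Stacking two such layers with a final linear classifier mirrors the 2-layer backbones used in the experiments below.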
For both methods, c is set in the range [0.5, 1.5]. We fix the learning rate to 0.01, the dropout to 0.5 or 0.8, the hidden dimension to 64, and the weight decay to 0.0005, except for the ogbn-arxiv dataset. The Adam optimizer [Kingma and Ba, 2014] is used in all experiments. More details about the hyperparameter settings for all methods can be found in Appendix D." }, { "figure_ref": [], "heading": "Performance Comparison on Benchmark Datasets", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "In this subsection, we test the unbiased graph structure learned by the proposed LPSL on both GCN and APPNP models. We then compare these results with seven baseline methods across all eight datasets. The primary results are presented in Table 1. Due to space limitations, we have included the results of the other baselines in Appendix E. From these results, we can make several key observations:\n• Integrating our proposed LPSL into both GCN and APPNP consistently improves their performance on almost all datasets. This indicates that a label position unbiased graph structure can significantly aid semi-supervised node classification tasks.\n• Concerning the different labeling rates for the first three datasets, our proposed LPSL shows greater performance improvement with a low labeling rate. This aligns with our preliminary study that label position bias is more pronounced when the labeling rate is low.\n• SRGNN, designed to address the feature distribution shift issue, does not perform well on most datasets with random splits instead of locally distributed labels. Only when the labeling rate is very low does SRGNN outperform GCN. Hence, the label position bias cannot be solved solely by addressing the feature distribution shift.\n• The GRADE method, aimed at mitigating the degree-bias issue, also fails to improve overall performance with randomly split datasets. " }, { "figure_ref": [], "heading": "Evaluating Bias Mitigation Performance", "publication_ref": [], "table_ref": [], "text": "In this subsection, we aim to investigate whether the proposed LPSL can mitigate the label position bias. We employ all three aforementioned bias metrics, namely label proximity score, degree, and shortest path distance, on the Cora and CiteSeer datasets. We first group test nodes into different sensitive groups according to the metrics, and then use three representative group bias measurements -Weighted Demographic Parity (WDP), Weighted Standard Deviation (WSD), and Weighted Coefficient of Variation (WCV) -to quantify the bias. These are defined as follows:\nWDP = (1/N total ) Σ D i=1 N i · |A i -A avg |, WSD = sqrt( (1/N total ) Σ D i=1 N i · (A i -A avg ) 2 ), WCV = WSD / A avg ,\nwhere D is the number of groups, N i is the number of nodes in group i, A i is the accuracy of group i, A avg is the weighted average accuracy over all groups, i.e., the overall accuracy, and N total is the total number of nodes. We choose six representative models, i.e., Label Propagation (LP), GRADE, GCN, APPNP, LPSL GCN , and LPSL APPNP , in this experiment. The results for the label proximity score, degree, and shortest path distance on the Cora and CiteSeer datasets are shown in Tables 2, 3, and 4, respectively. It can be observed from the tables:\n• The Label Propagation method, which solely utilizes the graph structure information, exhibits the most significant label position bias across all measurements and datasets.
This evidence suggests that label position bias primarily stems from the biased graph structure, thereby validating our strategy of learning an unbiased graph structure with LPSL.\n• The proposed LPSL not only enhances the classification accuracy of the backbone models, but also alleviates the bias concerning Label Proximity Score, degree, and Shortest distance.\n• The GRADE method, designed to mitigate degree bias, does exhibit a lesser degree bias than GCN and APPNP. However, it still falls short when compared to the proposed LPSL. Furthermore, GRADE may inadvertently heighten the bias evaluated by other metrics. For instance, it significantly increases the label proximity score bias on the CiteSeer dataset. " }, { "figure_ref": [ "fig_5" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "In this subsection, we explore the impact of different hyperparameters, specifically the smoothing term λ and the constraint c, on our model. We conducted experiments on the Cora and CiteSeer datasets using ten random data splits with 20 labels per class. The accuracy of different λ values for LPSL APPNP and LPSL GCN on the Cora and CiteSeer datasets are illustrated in Figure 4.4.\nFrom our analysis, we note that the proposed LPSL is not highly sensitive to the λ within the selected regions. Moreover, for the APPNP model, the best λ is higher than that for the GCN model, which aligns with our discussion in Section 3 that the decoupled GNNs require a larger λ to encode the global graph structure. The results for hyperparameter c can be found in Appendix F with similar observations. " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b34", "b30", "b17", "b29", "b24", "b36", "b0", "b38", "b39", "b7", "b40", "b41", "b5", "b43", "b8", "b9", "b10", "b11" ], "table_ref": [], "text": "Graph Neural Networks (GNNs) serve as an effective framework for representing graph-structured data, primarily employing two operators: feature transformation and propagation. The ordering of these operators classifies most GNNs into two categories: Coupled and Decoupled GNNs. Coupled GNNs, such as GCN [Kipf and Welling, 2016], GraphSAGE [Hamilton et al., 2017], and GAT [Veličković et al., 2017], entwine feature transformation and propagation within each layer. In contrast, recent models like APPNP [Gasteiger et al., 2018] represent Decoupled GNNs [Liu et al., 2021, 2020, Zhou et al., 2021] that separate transformation and propagation. While Graph Neural Networks (GNNs) have achieved notable success across a range of domains [Wu et al., 2020], they often harbor various biases tied to node features and graph topology [Dai et al., 2022]. For example, GNNs may generate predictions skewed by sensitive node features [Dai andWang, 2021, Agarwal et al., 2021], leading to potential unfairness in diverse tasks such as recommendations [Buyl and De Bie, 2020] and loan fraud detection [Xu et al., 2021]. Numerous studies have proposed different methods to address feature bias, including adversarial training [Dai and Wang, 2021, Dong et al., 2022, Masrour et al., 2020], and fairness constraints [Agarwal et al., 2021, Dai and Wang, 2022, Kang et al., 2020]. Structural bias is another significant concern, where low-degree nodes are more likely to be falsely predicted by GNNs [Tang et al., 2020]. Recently, there are several works aimed to mitigate the degree bias issue [Kang et al., 2022, Liu et al., 2023, Liang et al., 2022]. 
Distinct from these previous studies, our work identifies a new form of bias -label position bias, which is prevalent in GNNs. To address this, we propose a novel method, LPSL, specifically designed to alleviate the label position bias." }, { "figure_ref": [], "heading": "Conclusion and Limitation", "publication_ref": [], "table_ref": [], "text": "In this study, we shed light on a previously unexplored bias in GNNs, the label position bias, which suggests that nodes closer to labeled nodes typically yield superior performance. To quantify this bias, we introduce a new metric, the Label Proximity Score, which proves to be a more intrinsic measure. To combat this prevalent issue, we propose a novel optimization framework, LPSL, to learn an unbiased graph structure. Our extensive experimental evaluation shows that LPSL not only outperforms standard methods but also significantly alleviates the label position bias in GNNs.\nIn our current work, we address the label position bias only from a structure learning perspective.\nFuture research could incorporate feature information, which might lead to improved performance. Besides, we have primarily examined homophily graphs. It would be interesting to investigate how label position bias affects heterophily graphs. We hope this work will stimulate further research and development of methods aimed at enhancing label position fairness in GNNs. Our analysis of the results above leads us to the following key observations:\n• Label Position Bias is a widespread phenomenon across all GNN models and datasets. Classification accuracy exhibits substantial variation between different sensitive groups, with discernible patterns.\n• When contrasted with Degree and Shortest Path Distance, the proposed Label Proximity Score consistently shows a robust correlation with performance disparity across all datasets and models. This underscores its efficacy as a measure of Label Position Bias.\n• The severity of Label Position Bias is more prominent when the labeling rate is low, such as with 5 or 20 labeled nodes per class. However, with a labeling rate of 60% labeled nodes per class, the bias becomes less noticeable. This is evident from the fact that the shortest path distance is either 1 or 2 for all datasets, implying that all test nodes have at least one labeled node within their two-hop neighbors.\nThese findings highlight the importance of further exploring Label Position Bias and developing strategies to mitigate its impact on GNN performance." }, { "figure_ref": [], "heading": "B Understandings", "publication_ref": [ "b47", "b30", "b17", "b22" ], "table_ref": [], "text": "Remark 3.1 The feature aggregation using the learned graph structure B directly as a propagation matrix, i.e., F = BX, is equivalent to applying the message passing in GNNs using the original graph if B is the approximate or exact solution to the primary problem defined in Eq. ( 2) without constraints.\nProof. There are several recent studies [Zhu et al., 2021b, Ma et al., 2021c, Yang et al., 2021] unified the message passing in different GNNs under an optimization framework. For instance, Ma et al. 
[Ma et al., 2021c] demonstrated that the message-passing scheme of GNNs, such as GCN [Kipf and Welling, 2016], GAT [Veličković et al., 2017], PPNP and APPNP [Gasteiger et al., 2018], can be viewed as optimizing the following graph signal denoising problem:\narg min\nF∈R n×d L = ∥X -F∥ 2 F + λ tr(F ⊤ LF),(7)\nwhere X ∈ R n×d is the node features, F is the optimal node representations after applying GNNs, and λ are used to control the feature smoothness. The gradient of ∂L ∂F can be represented as:\n∂L ∂F = 2(F -X) + 2λ LF.\nHere, we provide two examples of using the Eq. ( 7) to derive APPNP and GCN. For APPNP, we can adopt multiple-step gradient descent to solve the Eq. ( 7):\nF k = F k-1 -γ ∂L ∂F = (1 -2λ -2λγ)F k-1 + 2λγ ÃF k-1 + 2γX.\nIf we set the stepsize γ = 1 2(1+λ) , then we have the following update steps:\nF k = λ 1 + λ ÃF k-1 + 1 1 + λ X\nwhich is the message passing scheme of APPNP. Then, if we propagate K layers, then\nF K = λ 1 + λ ÃF K-1 + 1 1 + λ X = λ 1 + λ Ã( λ 1 + λ ÃF K-2 + 1 1 + λ X) + 1 1 + λ X = λ 1 + λ K ÃK + K-1 i=0 1 1 + λ λ 1 + λ i Ãi X.(8)\nFor GCN, we can use one step gradient to solve the Eq. ( 7):\nF = X -γ ∂L ∂F F=X = (1 -2γλ)X + 2γλ ÃX.\nIf we set the step size γ = 1 2λ , then the F = ÃX, which is the message passing of GCN. The primary problem defined in Eq. ( 2) without constraints can be represented as:\narg min B L = ∥I -B∥ 2 F + λtr(B ⊤ LB).(9)\nComparing Eq. ( 7) with Eq. ( 9), the only difference lies in the first term, where the feature matrix X is set to be identity matrix I. Then, we can follow the same steps to solve the Eq. ( 9).\nIf we use the multiple-step gradient descent with the stepsize γ = 1 2(1+λ) , then we have the following update steps:\nB k = λ 1 + λ ÃB k-1 + 1 1 + λ I.\nThen, for K steps iteration, B K will be:\nB K = λ 1 + λ K ÃK + K-1 i=0 1 1 + λ λ 1 + λ i Ãi ,(10)\nwhich is the propagation matrix of APPNP in Eq. ( 8). As a result, the message passing of APPNP can be written as F = BX.\nIf we use one-step gradient descent to solve Eq. ( 9), then B can be represented as:\nB = I -γ ∂L ∂B B=I = (1 -2γλ)I + 2γλ Ã.\nIf we set the step size γ = 1 2λ , then the B = Ã. As a result, the aggregation in GCN can also be represented by F = BX.\nProposition B.1. The influence scores from all labeled nodes to any unlabeled node i will be the equal, i.e., j∈V L I i (j) = c, when using the unbiased graph structure B obtained from the optimization problem in Eq. (2) as the propagation matrix in GNNs.\nProof. Following the definition in [Xu et al., 2018], the influence of node j on node i can be represented by I i (j) = sum ∂hi ∂xj , where h i is the representation of node i, x j is the input feature of node j, and ∂hi ∂xj represents the Jacobian matrix.\nIf we use the unbiased graph B as the propagation matrix, then H = BX. Thus, h ij = n k=0 B ik x kj . The Jacobian matrix ∂hi ∂xj can be written as:\n∂h i ∂x j = Diag([B ij , B ij , . . . , B ij ]),(11)\nwhere Diag represents the diagonal matrix. As a result, I i (j) = sum ∂hi ∂xj = nB ij . Suppose the constraint BT1 n = c n is satisfied, then the influence scores from all labeled nodes to the unlabeled node i can be represented as:\nj∈V L I i (j) = j∈V L nB ij = nBT1 n = c.(12)\nFinally, the influence scores from all labeled nodes to any unlabeled node i are equal." }, { "figure_ref": [], "heading": "C Datasets Statistics", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In the experiments, the data statistics used in Section 4 are summarized in Table 5. 
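Returning to the equivalence established in Remark 3.1 above, a quick numerical sanity check is sketched below: iterating B k = (λ/(1+λ)) Ã B k-1 + (1/(1+λ)) I as in Eq. (10) should converge to the closed-form solution (I + λ L̃)^{-1} of the unconstrained problem in Eq. (9). The helper name, default λ, and iteration count are our own assumptions for illustration.

```python
import numpy as np

def check_remark_3_1(a_hat, lam=8.0, K=200):
    """Compare the K-step iteration of Eq. (10) with the closed form (I + lam * L_tilde)^{-1}.
    a_hat: symmetrically normalized adjacency with self-loops (n x n)."""
    n = a_hat.shape[0]
    L_tilde = np.eye(n) - a_hat
    closed_form = np.linalg.inv(np.eye(n) + lam * L_tilde)
    B = np.eye(n)
    for _ in range(K):
        B = (lam / (1 + lam)) * (a_hat @ B) + np.eye(n) / (1 + lam)
    return np.max(np.abs(B - closed_form))   # should be close to 0 for large K
```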
For the Cora, CiteSeer, and PubMed datasets, we adopt different label rates, i.e., 5, 10, 20, and 60% labeled nodes per class, to obtain a more comprehensive comparison. For label rates 5, 10, and 20, we use 500 nodes for validation and 1000 nodes for testing. For the label rate of 60% labeled nodes per class, we use half of the remaining nodes for validation and the other half for testing. For each labeling rate and dataset, we adopt 10 random splits. For the ogbn-arxiv dataset, we follow the original data split. " }, { "figure_ref": [], "heading": "D Hyperparameters Setting", "publication_ref": [], "table_ref": [], "text": "In this section, we describe in detail the search space for the parameters of the different experiments.\nFor all deep models, we use 3 transformation layers with 256 hidden units for the ogbn-arxiv dataset and 2 transformation layers with 64 hidden units for the other datasets. For all methods, the following common hyperparameters are tuned based on the loss and validation accuracy from the following search space:\n• Learning Rate: {0.01, 0.05} • Dropout Rate: {0, 0.5, 0.8} • Weight Decay: {0, 5e-5, 5e-4}\nFor APPNP and Label Propagation, we tune the teleport probability α in {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. For GRADE, we set the hidden dimension to 256 and tune the temperature in {0.2, 0.5, 0.8, 1, 1.1, 1.5, 1.7, 2}, which covers all the best values in their original paper. For SRGNN, we set the weight of the CMD loss in {0.1, 0.5, 1, 1.5, 2}.\nFor the proposed LPSL, we set c in the range [0.7, 1.3], ρ in {0.01, 0.001}, γ in {0.01, 0.001}, and β in {1e-4, 1e-5, 5e-5, 1e-6, 5e-6, 1e-7}. For LPSL APPNP , we set λ in {8, 9, 10, 11, 12, 13, 14, 15}.\nFor LPSL GCN , we set λ in {1, 2, 3, 4, 5, 6, 7, 8}.\nOur code is available at: https://anonymous.4open.science/r/LPSL-8187" }, { "figure_ref": [], "heading": "E Node Classification Results", "publication_ref": [ "b26", "b27", "b30", "b17", "b18", "b31" ], "table_ref": [ "tab_5" ], "text": "For the semi-supervised node classification task, we choose eight commonly used datasets, including three citation datasets, i.e., Cora, Citeseer, and Pubmed [Sen et al., 2008], two co-authorship datasets, i.e., CS and Physics, two Amazon datasets, i.e., Computers and Photo [Shchur et al., 2018], and one OGB dataset, i.e., ogbn-arxiv [Hu et al., 2020b].\nTo the best of our knowledge, there are no previous works that aim to address the label position bias.\nIn this work, we select three GNNs, namely GCN [Kipf and Welling, 2016], GAT [Veličković et al., 2017], and APPNP [Gasteiger et al., 2018], and two non-GNN methods, MLP and Label Propagation [Zhou et al., 2003], as baselines. Furthermore, we also include GRADE [Wang et al., 2022], a method designed to mitigate degree bias. Notably, SRGNN [Zhu et al., 2021a] demonstrates that if labeled nodes are gathered locally, it could lead to an issue of feature distribution shift. SRGNN aims to mitigate the feature distribution shift issue and is also included as a baseline. The overall performance is shown in Table 6. " }, { "figure_ref": [], "heading": "F Ablation Study", "publication_ref": [], "table_ref": [], "text": "In this subsection, we explore the impact of different hyperparameters, specifically the smoothing term λ and the constraint c, on our model.
We conducted experiments on the Cora and CiteSeer datasets using ten random data splits with 20 labels per class. The accuracy of different λ and c values for LPSL APPNP and LPSL GCN on the Cora and CiteSeer datasets is illustrated in Figure 14 and Figure 15, respectively. From the results, we find that both LPSL APPNP and LPSL GCN are not very sensitive to λ and c in the chosen regions. " }, { "figure_ref": [], "heading": "Appendix A Preliminary Study", "publication_ref": [ "b44", "b17", "b18" ], "table_ref": [], "text": "In this section, we present a comprehensive set of experimental results, showcasing the performance disparity across various GNNs concerning three metrics related to label position bias: Degree, Shortest Path Distance, and Label Proximity Score.\nDatasets. We selected three representative datasets for our experiments: Cora, CiteSeer, and PubMed. For each of these datasets, we worked with three different labeling rates: 5 labels per class, 20 labels per class, and 60% labels per class. For the data splits consisting of 5 and 20 labels per class, we adopted a commonly used setting [Fey and Lenssen, 2019] that randomly selects 500 nodes for validation and 1000 nodes for testing. When dealing with a labeling rate of 60%, we randomly selected 20% of nodes for validation and another 20% for testing.\nModels. Our study incorporates three representative models: APPNP [Gasteiger et al., 2018], GCN [Kipf and Welling, 2016], and Label Propagation [Zhou et al., 2003]. APPNP, a decoupled GNN, directly leverages the PPR matrix for feature propagation. On the other hand, GCN, a coupled GNN, uses the original adjacency matrix for feature propagation across each layer. Label Propagation relies solely on the graph structure and labeled nodes for prediction. For all the models, we select their best hyperparameters based on the search space in their original papers.\nExperimental Setup. For both the Degree and Shortest Path Distance metrics, we use their actual values, [1, 2, 3, 4, 5, 6, 7], to segregate nodes into separate sensitive groups, since only a handful of nodes possess a degree or shortest path distance greater than seven. We discard the groups with only a few nodes. For the Label Proximity Score (LPS), we divided the test nodes evenly into seven sensitive groups based on their LPS, each group possessing a uniform range. It is important to note that we also filtered out outliers, particularly those with significantly larger LPS than the rest. Additionally, if a group contained only a few nodes, we merged it with adjacent groups. " }, { "figure_ref": [], "heading": "A.1 Results with 20 labeled nodes per class", "publication_ref": [], "table_ref": [], "text": "" } ]
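The grouping and bias measurements used throughout the evaluation can be reproduced along the following lines. This sketch combines the equal-width LPS binning described in the experimental setup above with the WDP/WSD/WCV measurements from Section 4.3; the outlier percentile, minimum group size, and function names are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np

def lps_sensitive_groups(lps, correct, n_groups=7, min_size=10):
    """Equal-width LPS bins over test nodes (extreme outliers clipped, tiny bins merged left)."""
    lps, correct = np.asarray(lps, float), np.asarray(correct, float)
    keep = lps <= np.percentile(lps, 99)              # illustrative outlier cut-off
    lps, correct = lps[keep], correct[keep]
    edges = np.linspace(lps.min(), lps.max(), n_groups + 1)
    bins = np.clip(np.digitize(lps, edges[1:-1]), 0, n_groups - 1)
    sizes, accs = [], []
    for g in range(n_groups):
        mask = bins == g
        if mask.sum() == 0:
            continue
        if mask.sum() < min_size and sizes:           # merge a tiny group into its left neighbor
            old = sizes[-1]
            sizes[-1] = old + int(mask.sum())
            accs[-1] = (accs[-1] * old + correct[mask].sum()) / sizes[-1]
        else:
            sizes.append(int(mask.sum()))
            accs.append(float(correct[mask].mean()))
    return np.array(sizes), np.array(accs)

def group_bias_metrics(sizes, accs):
    """Weighted Demographic Parity, Weighted Standard Deviation, Weighted Coefficient of Variation."""
    n_total = sizes.sum()
    a_avg = (sizes * accs).sum() / n_total            # overall (weighted) accuracy
    wdp = (sizes * np.abs(accs - a_avg)).sum() / n_total
    wsd = np.sqrt((sizes * (accs - a_avg) ** 2).sum() / n_total)
    return wdp, wsd, wsd / a_avg
```

Feeding the per-group sizes and accuracies of any trained model into group_bias_metrics yields numbers of the kind reported in Tables 2-4.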
[ { "authors": "Zonghan Wu; Shirui Pan; Fengwen Chen; Guodong Long; Chengqi Zhang; S Yu; Philip ", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b0", "title": "A comprehensive survey on graph neural networks", "year": "2020" }, { "authors": "Yao Ma; Jiliang Tang", "journal": "Cambridge University Press", "ref_id": "b1", "title": "Deep learning on graphs", "year": "2021" }, { "authors": "Jie Zhou; Ganqu Cui; Shengding Hu; Zhengyan Zhang; Cheng Yang; Zhiyuan Liu; Lifeng Wang; Changcheng Li; Maosong Sun", "journal": "AI open", "ref_id": "b2", "title": "Graph neural networks: A review of methods and applications", "year": "2020" }, { "authors": "Justin Gilmer; S Samuel; Patrick F Schoenholz; Oriol Riley; George E Vinyals; Dahl", "journal": "PMLR", "ref_id": "b3", "title": "Neural message passing for quantum chemistry", "year": "2017" }, { "authors": "Zhimeng Jiang; Xiaotian Han; Chao Fan; Zirui Liu; Na Zou; Ali Mostafavi; Xia Hu", "journal": "", "ref_id": "b4", "title": "Fmp: Toward fair graph message passing against topology bias", "year": "2022" }, { "authors": "Chirag Agarwal; Himabindu Lakkaraju; Marinka Zitnik", "journal": "PMLR", "ref_id": "b5", "title": "Towards a unified framework for fair and stable graph representation learning", "year": "2021" }, { "authors": "Kose Deniz; Yanning Shen", "journal": "", "ref_id": "b6", "title": "Fair node representation learning via adaptive data augmentation", "year": "2022" }, { "authors": "Enyan Dai; Suhang Wang", "journal": "", "ref_id": "b7", "title": "Say no to the discrimination: Learning fair graph neural networks with limited sensitive attribute information", "year": "2021" }, { "authors": "Xianfeng Tang; Huaxiu Yao; Yiwei Sun; Yiqi Wang; Jiliang Tang; Charu Aggarwal; Prasenjit Mitra; Suhang Wang", "journal": "", "ref_id": "b8", "title": "Investigating and mitigating degree-related biases in graph convoltuional networks", "year": "2020" }, { "authors": "Jian Kang; Yan Zhu; Yinglong Xia; Jiebo Luo; Hanghang Tong", "journal": "", "ref_id": "b9", "title": "Rawlsgcn: Towards rawlsian difference principle on graph convolutional network", "year": "2022" }, { "authors": "Zemin Liu; Trung-Kien Nguyen; Yuan Fang", "journal": "", "ref_id": "b10", "title": "On generalized degree fairness in graph neural networks", "year": "2023" }, { "authors": "Langzhang Liang; Zenglin Xu; Zixing Song; Irwin King; Jieping Ye", "journal": "", "ref_id": "b11", "title": "Resnorm: Tackling long-tailed degree distribution issue in graph neural networks via normalization", "year": "2022" }, { "authors": "Jiaqi Ma; Ziqiao Ma; Joyce Chai; Qiaozhu Mei", "journal": "", "ref_id": "b12", "title": "Partition-based active learning for graph neural networks", "year": "2022" }, { "authors": "Hongyun Cai; Vincent W Zheng; Kevin Chen; -Chuan Chang", "journal": "", "ref_id": "b13", "title": "Active learning for graph embedding", "year": "2017" }, { "authors": "Shengding Hu; Zheng Xiong; Meng Qu; Xingdi Yuan; Marc-Alexandre Côté; Zhiyuan Liu; Jian Tang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b14", "title": "Graph policy network for transferable active learning on graphs", "year": "2020" }, { "authors": "Jiaqi Ma; Junwei Deng; Qiaozhu Mei", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b15", "title": "Subgroup generalization and fairness of graph neural networks", "year": "2021" }, { "authors": "N Thomas; Max Kipf; Welling", "journal": "", "ref_id": "b16", "title": "Semi-supervised 
classification with graph convolutional networks", "year": "2016" }, { "authors": "Johannes Gasteiger; Aleksandar Bojchevski; Stephan Günnemann", "journal": "", "ref_id": "b17", "title": "Predict then propagate: Graph neural networks meet personalized pagerank", "year": "2018" }, { "authors": "Dengyong Zhou; Olivier Bousquet; Thomas Lal; Jason Weston; Bernhard Schölkopf", "journal": "Advances in neural information processing systems", "ref_id": "b18", "title": "Learning with local and global consistency", "year": "2003" }, { "authors": "Yao Ma; Xiaorui Liu; Neil Shah; Jiliang Tang", "journal": "", "ref_id": "b19", "title": "Is homophily a necessity for graph neural networks?", "year": "2021" }, { "authors": "P Stephen; Lieven Boyd; Vandenberghe", "journal": "Cambridge university press", "ref_id": "b20", "title": "Convex optimization", "year": "2004" }, { "authors": "Stephen Boyd; Neal Parikh; Eric Chu; Borja Peleato; Jonathan Eckstein", "journal": "Foundations and Trends® in Machine learning", "ref_id": "b21", "title": "Distributed optimization and statistical learning via the alternating direction method of multipliers", "year": "2011" }, { "authors": "Keyulu Xu; Chengtao Li; Yonglong Tian; Tomohiro Sonobe; Ken-Ichi Kawarabayashi; Stefanie Jegelka", "journal": "PMLR", "ref_id": "b22", "title": "Representation learning on graphs with jumping knowledge networks", "year": "2018" }, { "authors": "Wooseok Ha; Kimon Fountoulakis; Michael W Mahoney", "journal": "The Journal of Machine Learning Research", "ref_id": "b23", "title": "Statistical guarantees for local graph clustering", "year": "2021" }, { "authors": "Chufeng Hu", "journal": "", "ref_id": "b24", "title": "Local graph clustering using l1-regularized pagerank algorithms", "year": "2020" }, { "authors": "Paul Tseng", "journal": "Journal of optimization theory and applications", "ref_id": "b25", "title": "Convergence of a block coordinate descent method for nondifferentiable minimization", "year": "2001" }, { "authors": "Prithviraj Sen; Galileo Namata; Mustafa Bilgic; Lise Getoor; Brian Galligher; Tina Eliassi-Rad", "journal": "AI magazine", "ref_id": "b26", "title": "Collective classification in network data", "year": "2008" }, { "authors": "Oleksandr Shchur; Maximilian Mumme; Aleksandar Bojchevski; Stephan Günnemann", "journal": "", "ref_id": "b27", "title": "Pitfalls of graph neural network evaluation", "year": "2018" }, { "authors": "Weihua Hu; Matthias Fey; Marinka Zitnik; Yuxiao Dong; Hongyu Ren; Bowen Liu; Michele Catasta; Jure Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b28", "title": "Open graph benchmark: Datasets for machine learning on graphs", "year": "2020" }, { "authors": "Xiaorui Liu; Wei Jin; Yao Ma; Yaxin Li; Hua Liu; Yiqi Wang; Ming Yan; Jiliang Tang", "journal": "PMLR", "ref_id": "b29", "title": "Elastic graph neural networks", "year": "2021" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Lio; Yoshua Bengio", "journal": "", "ref_id": "b30", "title": "Graph attention networks", "year": "2017" }, { "authors": "Ruijia Wang; Xiao Wang; Chuan Shi; Le Song", "journal": "", "ref_id": "b31", "title": "Uncovering the structural fairness in graph contrastive learning", "year": "2022" }, { "authors": "Qi Zhu; Natalia Ponomareva; Jiawei Han; Bryan Perozzi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b32", "title": "Shift-robust gnns: Overcoming the limitations of localized graph training data", "year": "2021" 
}, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b33", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Will Hamilton; Zhitao Ying; Jure Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b34", "title": "Inductive representation learning on large graphs", "year": "2017" }, { "authors": "Meng Liu; Hongyang Gao; Shuiwang Ji", "journal": "", "ref_id": "b35", "title": "Towards deeper graph neural networks", "year": "2020" }, { "authors": "Kaixiong Zhou; Xiao Huang; Daochen Zha; Rui Chen; Li Li; Soo-Hyun Choi; Xia Hu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b36", "title": "Dirichlet energy constrained learning for deep graph neural networks", "year": "2021" }, { "authors": "Enyan Dai; Tianxiang Zhao; Huaisheng Zhu; Junjie Xu; Zhimeng Guo; Hui Liu; Jiliang Tang; Suhang Wang", "journal": "", "ref_id": "b37", "title": "A comprehensive survey on trustworthy graph neural networks: Privacy, robustness, fairness, and explainability", "year": "2022" }, { "authors": "Maarten Buyl; Tijl De; Bie ", "journal": "PMLR", "ref_id": "b38", "title": "Debayes: a bayesian method for debiasing network embeddings", "year": "2020" }, { "authors": "Bingbing Xu; Huawei Shen; Bingjie Sun; Rong An; Qi Cao; Xueqi Cheng", "journal": "", "ref_id": "b39", "title": "Towards consumer loan fraud detection: Graph neural networks with role-constrained conditional random field", "year": "2021" }, { "authors": "Yushun Dong; Ninghao Liu; Brian Jalaian; Jundong Li", "journal": "", "ref_id": "b40", "title": "Edits: Modeling and mitigating data bias for graph neural networks", "year": "2022" }, { "authors": "Farzan Masrour; Tyler Wilson; Heng Yan; Pang-Ning Tan; Abdol Esfahanian", "journal": "", "ref_id": "b41", "title": "Bursting the filter bubble: Fairness-aware network link prediction", "year": "2020" }, { "authors": "Enyan Dai; Suhang Wang", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b42", "title": "Learning fair graph neural networks with limited and private sensitive attribute information", "year": "2022" }, { "authors": "Jian Kang; Jingrui He; Ross Maciejewski; Hanghang Tong", "journal": "", "ref_id": "b43", "title": "Inform: Individual fairness on graph mining", "year": "2020" }, { "authors": "Matthias Fey; Jan Eric Lenssen", "journal": "", "ref_id": "b44", "title": "Fast graph representation learning with pytorch geometric", "year": "2019" }, { "authors": "Meiqi Zhu; Xiao Wang; Chuan Shi; Houye Ji; Peng Cui", "journal": "", "ref_id": "b45", "title": "Interpreting and unifying graph neural networks with an optimization framework", "year": "2021" }, { "authors": "Yao Ma; Xiaorui Liu; Tong Zhao; Yozen Liu; Jiliang Tang; Neil Shah", "journal": "", "ref_id": "b46", "title": "A unified view on graph neural networks as graph signal denoising", "year": "2021" }, { "authors": "Liang Yang; Chuan Wang; Junhua Gu; Xiaochun Cao; Bingxin Niu", "journal": "", "ref_id": "b47", "title": "Why do attributes propagate in graph convolutional neural networks", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 296.53, 699.77, 208.13, 15.91 ], "formula_id": "formula_0", "formula_text": "P = I -(1 -α) Ã -1 ,(1)" }, { "formula_coordinates": [ 5, 237.52, 101.4, 267.14, 33.2 ], "formula_id": "formula_1", "formula_text": "arg min B ∥I -B∥ 2 F + λtr(B ⊤ LB) s.t. BT1 n = c1 n ,(2)" }, { "formula_coordinates": [ 5, 118.52, 141.39, 385.48, 26.04 ], "formula_id": "formula_2", "formula_text": "tr(B ⊤ LB) = (vi,vj )∈E ∥B i / √ d i -B j / d j ∥ 2" }, { "formula_coordinates": [ 5, 108, 229.09, 226.13, 11.5 ], "formula_id": "formula_3", "formula_text": "B = (I + λL) -1 = α(I -(1 -α Ã)) -1 , where α = 1" }, { "formula_coordinates": [ 5, 138.25, 325.5, 366.42, 22.31 ], "formula_id": "formula_4", "formula_text": "L ρ (B, y) = ∥I -B∥ 2 F + λtr(B ⊤ LB) + y ⊤ (BT1 n -c1 n ) + ρ 2 ∥BT1 n -c1 n ∥ 2 2 ,(3)" }, { "formula_coordinates": [ 5, 168.86, 380.9, 335.81, 22.31 ], "formula_id": "formula_5", "formula_text": "∂L ρ ∂B = 2(B -I) + 2λ LB + y(T1 n ) ⊤ + ρ(BT1 n -c1 n )(T1 n ) ⊤ .(4)" }, { "formula_coordinates": [ 5, 238.41, 423.32, 135.19, 34.04 ], "formula_id": "formula_6", "formula_text": "B k+1 = arg min B L ρ (B k , y k ) y k+1 = y k + ρ(B k T1 n -c1 n )," }, { "formula_coordinates": [ 6, 217.06, 202.4, 287.6, 32.7 ], "formula_id": "formula_7", "formula_text": "arg min B ∥I -B∥ 2 F + λtr(B ⊤ LB) + β∥B∥ 1 s.t. BT1 n = c1 n ,(5)" }, { "formula_coordinates": [ 6, 152.85, 343.47, 351.81, 23.23 ], "formula_id": "formula_8", "formula_text": "∂L ρ ∂B :,j = 2(B :,j -I :,j ) + 2λ LB :,j + y(T1 n ) ⊤ j + ρ(BT1 n -c1 n )(T1 n ) ⊤ j ,(6)" }, { "formula_coordinates": [ 6, 153.46, 544.35, 97.13, 11.72 ], "formula_id": "formula_9", "formula_text": "H l+1 = σ B λ H l W l ," }, { "formula_coordinates": [ 6, 163.63, 643.59, 76.77, 9.84 ], "formula_id": "formula_10", "formula_text": "Y pred = B λ f θ (X)," }, { "formula_coordinates": [ 8, 126.24, 364.68, 359.53, 30.32 ], "formula_id": "formula_11", "formula_text": "WDP = D i=1 N i • |A i -A avg | N total , WSD = 1 N total D i=1 N i • (A i -A avg ) 2 , WCV = WSD A avg ," }, { "formula_coordinates": [ 16, 224.37, 168.61, 280.3, 19.75 ], "formula_id": "formula_12", "formula_text": "F∈R n×d L = ∥X -F∥ 2 F + λ tr(F ⊤ LF),(7)" }, { "formula_coordinates": [ 16, 252.53, 226.14, 108.15, 22.31 ], "formula_id": "formula_13", "formula_text": "∂L ∂F = 2(F -X) + 2λ LF." }, { "formula_coordinates": [ 16, 168.52, 280.14, 274.96, 22.31 ], "formula_id": "formula_14", "formula_text": "F k = F k-1 -γ ∂L ∂F = (1 -2λ -2λγ)F k-1 + 2λγ ÃF k-1 + 2γX." }, { "formula_coordinates": [ 16, 242.04, 327.92, 127.92, 22.31 ], "formula_id": "formula_15", "formula_text": "F k = λ 1 + λ ÃF k-1 + 1 1 + λ X" }, { "formula_coordinates": [ 16, 193.66, 386.62, 311.01, 81.82 ], "formula_id": "formula_16", "formula_text": "F K = λ 1 + λ ÃF K-1 + 1 1 + λ X = λ 1 + λ Ã( λ 1 + λ ÃF K-2 + 1 1 + λ X) + 1 1 + λ X = λ 1 + λ K ÃK + K-1 i=0 1 1 + λ λ 1 + λ i Ãi X.(8)" }, { "formula_coordinates": [ 16, 206.93, 497.06, 198.13, 24.86 ], "formula_id": "formula_17", "formula_text": "F = X -γ ∂L ∂F F=X = (1 -2γλ)X + 2γλ ÃX." }, { "formula_coordinates": [ 16, 225.23, 561.85, 279.44, 19.09 ], "formula_id": "formula_18", "formula_text": "arg min B L = ∥I -B∥ 2 F + λtr(B ⊤ LB).(9)" }, { "formula_coordinates": [ 16, 241.87, 643.6, 128.26, 22.31 ], "formula_id": "formula_19", "formula_text": "B k = λ 1 + λ ÃB k-1 + 1 1 + λ I." 
}, { "formula_coordinates": [ 16, 197.52, 690.03, 307.15, 30.32 ], "formula_id": "formula_20", "formula_text": "B K = λ 1 + λ K ÃK + K-1 i=0 1 1 + λ λ 1 + λ i Ãi ,(10)" }, { "formula_coordinates": [ 17, 215.98, 121.41, 180.05, 24.86 ], "formula_id": "formula_21", "formula_text": "B = I -γ ∂L ∂B B=I = (1 -2γλ)I + 2γλ Ã." }, { "formula_coordinates": [ 17, 237.02, 332.11, 267.65, 23.22 ], "formula_id": "formula_22", "formula_text": "∂h i ∂x j = Diag([B ij , B ij , . . . , B ij ]),(11)" }, { "formula_coordinates": [ 17, 224.27, 421.58, 280.4, 20.68 ], "formula_id": "formula_23", "formula_text": "j∈V L I i (j) = j∈V L nB ij = nBT1 n = c.(12)" } ]
Towards Label Position Bias in Graph Neural Networks
Graph Neural Networks (GNNs) have emerged as a powerful tool for semisupervised node classification tasks. However, recent studies have revealed various biases in GNNs stemming from both node features and graph topology. In this work, we uncover a new bias -label position bias, which indicates that the node closer to the labeled nodes tends to perform better. We introduce a new metric, the Label Proximity Score, to quantify this bias, and find that it is closely related to performance disparities. To address the label position bias, we propose a novel optimization framework for learning a label position unbiased graph structure, which can be applied to existing GNNs. Extensive experiments demonstrate that our proposed method not only outperforms backbone methods but also significantly mitigates the issue of label position bias in GNNs.
Haoyu Han; Xiaorui Liu; Feng Shi; Charu C. Aggarwal (IBM T. J. Watson Research Center); Jiliang Tang
[ { "figure_caption": "Figure 1 :1Figure 1: APPNP with 20 labeled nodes per class on Cora and CiteSeer datasets.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: GCN with 20 labeled nodes per class on Cora and CiteSeer datasets.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3: LP with 20 labeled nodes per class on Cora and CiteSeer datasets.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Architecture Algorithm 11Algorithm of LPSL 1: Input: Laplacian matrix L, Label mask matrix T, Hyperparamters λ, c, β, ρ, learning rate γ 2: Output: Label position unbiased graph structure B 3: Initialization: B 0 = I and y 0 = 0 4: while Not converge do 5:", "figure_data": "", "figure_id": "fig_3", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "y= y + ρ(BT1 n -c1 n ) 12: end while 13: return B", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The accuracy of different λ for LPSL APPNP and LPSL GCN on Cora and CiteSeer datasets.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 7: LP with 20 labeled nodes per class.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: GCN with 5 labeled nodes per class.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: LP with 5 labeled nodes per class.", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: APPNP with 60% labeled nodes per class.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12: GCN with 60% labeled nodes per class.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: LP with 60% labeled nodes per class.", "figure_data": "", "figure_id": "fig_11", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "datasets using ten random data splits with 20 labels per class. The accuracy of different λ and c values for LPSL APPNP and LPSL GCN on the Cora and CiteSeer datasets are illustrated inFigure F and Figure F, respectively. 
From the results, we can find both LPSL APPNP and LPSL GCN are not very sensitive to λ and c at the chosen regions.", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: The accuracy of different λ for LPSL APPNP and LPSL GCN on Cora and CiteSeer datasets.", "figure_data": "", "figure_id": "fig_13", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: The accuracy of different c for LPSL APPNP and LPSL GCN on Cora and CiteSeer datasets.", "figure_data": "", "figure_id": "fig_14", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Semi-supervised node classification accuracy (%) on benchmark datasets.", "figure_data": "DatasetLabel RateGCNAPPNPGRADESRGNNLPSL GCNLPSL APPNP570.68 ± 2.17 75.86 ± 2.34 69.51 ± 6.79 70.77 ± 1.82 76.58 ± 2.37 77.24 ± 2.18Cora10 2076.50 ± 1.42 80.29 ± 1.00 74.95 ± 2.46 75.42 ± 1.57 80.39 ± 1.17 81.59 ± 0.98 79.41 ± 1.30 82.34 ± 0.67 77.41 ± 1.49 78.42 ± 1.75 82.74 ± 1.01 83.24 ± 0.7560%88.60 ± 1.19 88.49 ± 1.28 86.84 ± 0.99 87.17 ± 0.95 88.75 ± 1.21 88.62 ± 1.69561.27 ± 3.85 63.92 ± 3.39 63.03 ± 3.61 64.84 ± 3.41 65.65 ± 2.47 65.70 ± 2.18CiteSeer10 2066.28 ± 2.14 67.57 ± 2.05 64.20 ± 3.23 67.83 ± 2.19 67.73 ± 2.57 68.76 ± 1.77 69.60 ± 1.67 70.85 ± 1.45 67.50 ± 1.76 69.13 ± 1.99 70.73 ± 1.32 71.25 ± 1.1460%76.88 ± 1.78 77.42 ± 1.47 74.00 ± 1.87 74.57 ± 1.57 77.18 ± 1.64 77.56 ± 1.44569.76 ± 6.46 72.68 ± 5.68 66.90 ± 6.49 69.38 ± 6.48 73.46 ± 4.64 73.57 ± 5.30PubMed10 2072.79 ± 3.58 75.53 ± 3.85 73.31 ± 3.75 72.69 ± 3.49 75.67 ± 4.42 76.18 ± 4.05 77.43 ± 2.66 78.93 ± 2.11 75.12 ± 2.37 77.09 ± 1.68 78.75 ± 2.45 79.26 ± 2.3260%88.48 ± 0.46 87.56 ± 0.52 86.90 ± 0.46 88.32 ± 0.55 87.75 ± 0.57 87.96 ± 0.57CS2091.73 ± 0.49 92.38 ± 0.38 89.43 ± 0.67 89.43 ± 0.67 91.94 ± 0.54 92.44 ± 0.36Physics2093.29 ± 0.80 93.49 ± 0.67 91.44 ± 1.41 93.16 ± 0.64 93.56 ± 0.51 93.65 ± 0.50Computers2079.17 ± 1.92 79.07 ± 2.34 79.01 ± 2.36 78.54 ± 2.15 80.05 ± 2.92 79.58 ± 2.31Photo2089.94 ± 1.22 90.87 ± 1.14 90.17 ± 0.93 89.36 ± 1.02 90.85 ± 1.16 90.93 ± 1.40ogbn-arxiv54%71.91 ± 0.15 71.61 ± 0.30OOM68.01 ± 0.35 72.04 ± 0.12 69.20 ± 0.26", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of Methods in Addressing Label Proximity Score Bias.", "figure_data": "DatasetCoraCiteSeerMethodWDP ↓ WSD ↓ WCV ↓ WDP ↓ WSD ↓ WCV ↓LP0.1079 0.1378 0.1941 0.2282 0.2336 0.4692GRADE0.0372 0.0489 0.0615 0.0376 0.0467 0.0658GCN0.0494 0.0618 0.0758 0.0233 0.0376 0.0524LPSL GCN0.0361 0.0438 0.0518 0.0229 0.0346 0.0476APPNP0.0497 0.0616 0.0732 0.0344 0.0426 0.0594LPSL APPNP 0.0390 0.0476 0.0562 0.0275 0.0349 0.0474", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison of Methods in Addressing Degree Bias.", "figure_data": "DatasetCoraCiteSeerMethodWDP ↓ WSD ↓ WCV ↓ WDP ↓ WSD ↓ WCV ↓LP0.0893 0.1019 0.1447 0.1202 0.1367 0.2773GRADE0.0386 0.0471 0.0594 0.0342 0.0529 0.0744GCN0.0503 0.0566 0.0696 0.0466 0.0643 0.0901LPSL GCN0.0407 0.0468 0.0554 0.0378 0.0538 0.0742APPNP0.0408 0.0442 0.0527 0.0499 0.0688 0.0964LPSL APPNP 0.0349 0.0395 0.0467 0.0316 0.0487 0.0665", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison of Methods in Addressing Shortest Path Distance Bias.", "figure_data": "DataSetCoraCiteSeerMethodWDP ↓ WSD ↓ WCV ↓ WDP ↓ WSD ↓ WCV ↓LP0.0562 0.0632 0.0841 0.0508 0.07350.109GRADE0.0292 0.0369 0.0459 0.0282 0.0517 
0.0707GCN0.0237 0.0444 0.0533 0.0296 0.0553 0.0752LPSL GCN0.0150 0.0248 0.0289 0.0246 0.0526 0.0714APPNP0.0218 0.0316 0.0369 0.0321 0.0495 0.0668LPSL APPNP 0.0166 0.0253 0.0295 0.0265 0.0490 0.0654", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Dataset Statistics.", "figure_data": "DatasetNodesEdgesFeatures ClassesCora2,7085,2781,4337CiteSeer3,3274,5523,7036PubMed19,71744,3245003Coauthor CS18,33381,8946,80515Coauthor Physics34,493247,9628,4155Amazon Computer 13,381245,77876710Amazon Photo7,487119,0437458Ogbn-Arxiv169,343 1,166,24312840", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The overall results of the node classification task. .31 57.60 ± 5.71 70.68 ± 2.17 75.86 ± 2.34 72.97 ± 2.23 69.51 ± 6.79 70.77 ± 1.82 76.58 ± 2.37 77.24 ± 2.18 10 51.34 ± 3.37 63.76 ± 3.60 76.50 ± 1.42 80.29 ± 1.00 78.03 ± 1.17 74.95 ± 2.46 75.42 ± 1.57 80.39 ± 1.17 81.59 ± 0.98 20 59.23 ± 2.52 67.87 ± 1.43 79.41 ± 1.30 82.34 ± 0.67 81.39 ± 1.41 77.41 ± 1.49 78.42 ± 1.75 82.74 ± 1.01 83.24 ± 0.75 60% 76.49 ± 1.13 86.05 ± 1.35 88.60 ± 1.19 88.49 ± 1.28 88.68 ± 1.13 86.84 ± 0.99 87.17 ± 0.95 88.75 ± 1.21 88.62 ± 1.69 CiteSeer 5 41.05 ± 2.84 39.06 ± 3.53 61.27 ± 3.85 63.92 ± 3.39 62.60 ± 3.34 63.03 ± 3.61 64.84 ± 3.41 65.65 ± 2.47 65.70 ± 2.18 10 47.99 ± 2.71 42.29 ± 3.26 66.28 ± 2.14 67.57 ± 2.05 66.81 ± 2.10 64.20 ± 3.23 67.83 ± 2.19 67.73 ± 2.57 68.76 ± 1.77 20 56.96 ± 1.80 46.15 ± 2.31 69.60 ± 1.67 70.85 ± 1.45 69.66 ± 1.47 67.50 ± 1.76 69.13 ± 1.99 70.73 ± 1.32 71.25 ± 1.14 60% 73.15 ± 1.36 69.39 ± 2.01 76.88 ± 1.78 77.42 ± 1.47 76.70 ± 1.81 74.00 ± 1.87 74.57 ± 1.57 77.18 ± 1.64 77.56 ± 1.44 PubMed 5 58.48 ± 4.06 65.52 ± 6.42 69.76 ± 6.46 72.68 ± 5.68 70.42 ± 5.36 66.90 ± 6.49 69.38 ± 6.48 73.46 ± 4.64 73.57 ± 5.30 10 65.36 ± 2.08 68.39 ± 4.88 72.79 ± 3.58 75.53 ± 3.85 73.35 ± 3.83 73.31 ± 3.75 72.69 ± 3.49 75.67 ± 4.42 76.18 ± 4.05 20 69.07 ± 2.10 71.88 ± 1.72 77.43 ± 2.66 78.93 ± 2.11 77.43 ± 2.66 75.12 ± 2.37 77.09 ± 1.68 78.75 ± 2.45 79.26 ± 2.32 60% 86.14 ± 0.64 83.38 ± 0.64 88.48 ± 0.46 87.56 ± 0.52 86.52 ± 0.56 86.90 ± 0.46 88.32 ± 0.55 87.75 ± 0.57 87.96 ± 0.57 CS 20 88.12 ± 0.78 77.45 ± 1.80 91.73 ± 0.49 92.38 ± 0.38 90.96 ± 0.46 89.43 ± 0.67 89.43 ± 0.67 91.94 ± 0.54 92.44 ± 0.36", "figure_data": "DatasetLabel RateMLPLPGCNAPPNPGATGRADESRGNNLPSLGCNLPSLAPPNPCora 42.34 ± 3Physics 5 20 88.30 ± 1.59 86.70 ± 1.03 93.29 ± 0.80 93.49 ± 0.67 92.81 ± 1.03 91.44 ± 1.41 93.16 ± 0.64 93.56 ± 0.51 93.65 ± 0.50Computers2060.66 ± 2.98 72.44 ± 2.87 79.17 ± 1.92 79.07 ± 2.34 78.38 ± 2.27 79.01 ± 2.36 78.54 ± 2.15 80.05 ± 2.92 79.58 ± 2.31Photo2075.33 ± 1.91 81.58 ± 4.69 89.94 ± 1.22 90.87 ± 1.14 89.24 ± 1.42 90.17 ± 0.93 89.36 ± 1.02 90.85 ± 1.16 90.93 ± 1.40ogbn-arxiv54%61.17 ± 0.20 74.08 ± 0.00 71.91 ± 0.15 71.61 ± 0.30OOMOOM68.01 ± 0.35 72.04 ± 0.12 69.20 ± 0.26", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[Wu et al., 2020, Ma and Tang, 2021]", "Explanation": "The cited works provide a range of applications for graph data structure, including social networks, transportation, and biology, which serve as a methodological basis for the citing paper to focus on the specific application of semi-supervised node classification."}, {"Category": "Supporting Evidence", "Citation": "[Zhou et al., 2020]", "Explanation": "The cited work highlights the success of GNNs in addressing the semi-supervised node classification task, providing supporting evidence for the citing paper to further explore the application of GNNs in this area."}, {"Category": "Data Source", "Citation": "[Gilmer et al., 2017]", "Explanation": "The cited work introduces the message-passing scheme for GNN models, which is a foundational data source for the citing paper to build upon in their study of GNN models and the semi-supervised node classification task."}, {"Category": "Extension or Continuation", "Citation": "Jiang et al.", "Explanation": "The cited work by Jiang et al. is an extension or continuation of the study on GNN biases, providing insights into the potential biases introduced by GNN models in terms of node features and graph topology, which the citing paper can build upon in their own research."}, {"Category": "Supporting Evidence", "Citation": "[Ma et al., 2022]", "Explanation": "The cited work by Ma et al. provides evidence of the potential bias in label information in GNNs, which the citing paper builds upon to further explore the impact of label information on GNN performance."}, {"Category": "Supporting Evidence", "Citation": "[Cai et al., 2017]", "Explanation": "The cited work by Cai et al. also highlights the importance of label information in GNNs, which the citing paper uses to support the claim that label information can affect GNN performance."}, {"Category": "Supporting Evidence", "Citation": "[Hu et al., 2020a]", "Explanation": "The cited work by Hu et al. provides further evidence of the impact of label information in GNNs, which the citing paper uses to support the claim that label information can affect GNN performance."}, {"Category": "Extension or Continuation", "Citation": "[Ma et al., 2021a]", "Explanation": "The cited work by Ma et al. studies the subgroup generalization of GNNs and finds that the shortest path distance to labeled nodes can affect GNN performance. The citing paper extends this research by providing a deeper understanding of the impact of labeled nodes' position on unlabeled nodes."}, {"Category": "Supporting Evidence", "Citation": "[Tang et al., 2020]", "Explanation": "The cited work by Tang et al. provides a method of measuring label position bias in GNNs using degree as a metric, which the citing paper builds upon to further explore the issue of label position bias in GNNs."}, {"Category": "Supporting Evidence", "Citation": "[Ma et al., 2021a]", "Explanation": "The cited work by Ma et al. 
introduces a method of measuring label position bias in GNNs using shortest path distance as a metric, which the citing paper uses to further highlight the importance of label position bias in GNNs and the need to address it."}, {"Category": "Extension or Continuation", "Citation": "Our study", "Explanation": "The citing paper extends the research on label position bias in GNNs by conducting a study to show the existence of label position bias and the need to address it in GNNs."}, {"Category": "Extension or Continuation", "Citation": "First", "Explanation": "The citing paper extends the discussion on the importance of mitigating label position bias in GNNs by highlighting the potential fairness and performance issues that can arise from it."}, {"Category": "Methodological Basis", "Citation": "We propose a Label Position unbiased Structure Learning method (LPSL)", "Explanation": "The citing paper proposes a new method for learning a graph structure that mitigates label position bias in GNNs, which serves as the methodological basis for the rest of the study."}, {"Category": "Data Source", "Citation": "[Tang et al., 2020]", "Explanation": "The cited work provides the node degree metric used in the study conducted in the citing paper to split test nodes into different sensitive groups."}, {"Category": "Data Source", "Citation": "[Ma et al., 2021a]", "Explanation": "The cited work provides the shortest path distance to label nodes metric used in the study conducted in the citing paper to split test nodes into different sensitive groups."}, {"Category": "Extension or Continuation", "Citation": "The proposed LPS metric is used in the study conducted in the citing paper to further split test nodes into different sensitive groups based on the LPS values."}, {"Category": "Data Source", "Citation": "[Zhou et al., 2003]", "Explanation": "The cited work by Zhou et al. provides the evaluation on Label Propagation (LP), which is used in the citing paper to exclude the potential bias caused by node features in the analysis of node classification accuracy on different sensitive groups."}, {"Category": "Supporting Evidence", "Citation": "[Ma et al., 2021b]", "Explanation": "The cited work provides a dataset with relatively low homophily that serves as a benchmark for evaluating the performance of GNN models in addressing label position bias."}, {"Category": "Methodological Basis", "Citation": "[Boyd et al., 2011]", "Explanation": "The cited work provides the dual ascent algorithm that the citing paper adopts in solving the problem of minimizing the loss function in a specific optimization step."}, {"Category": "Methodological Basis", "Citation": "[Xu et al., 2018]", "Explanation": "The cited work provides a definition of the influence of node j on node i in GNNs, which the citing paper uses to understand the message-passing scheme and the way labeled nodes can influence the prediction of unlabeled nodes."}, {"Category": "Methodological Basis", "Citation": "(Ha et al., 2021)", "Explanation": "The cited work by Ha et al. 
is used to provide a strong localization property and guarantee the sparsity of the learned graph structure in the citing paper."}, {"Category": "Data Source", "Citation": "(Hu, 2020)", "Explanation": "The cited work by Hu is referenced to acknowledge the use of a specific data source in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[Kipf and Welling, 2016]", "Explanation": "The GCN model is used as a basis for the proposed LPSL GCN model, which is a variant of the GCN model that incorporates the proposed label position structure learning method."}, {"Category": "Methodological Basis", "Citation": "[Gasteiger et al., 2018]", "Explanation": "The APPNP model is used as a basis for the proposed LPSL APPNP model, which is a variant of the APPNP model that incorporates the proposed label position structure learning method."}, {"Category": "Methodological Basis", "Citation": "[Sen et al., 2008]", "Explanation": "The cited work provides the datasets used in the experiments conducted in the citing paper, which serve as a methodological basis for the research conducted."}, {"Category": "Data Source", "Citation": "[Shchur et al., 2018]", "Explanation": "The cited work provides the co-purchase datasets used in the experiments conducted in the citing paper, which serve as a data source for the research."}, {"Category": "Data Source", "Citation": "[Hu et al., 2020b]", "Explanation": "The cited work provides the OGB dataset used in the experiments conducted in the citing paper, which serves as a data source for the research."}, {"Category": "Data Source", "Citation": "[Liu et al., 2021]", "Explanation": "The cited work provides the data split for the ogbn-arxiv dataset, which the citing paper uses in its research."}, {"Category": "Methodological Basis", "Citation": "[Gasteiger et al., 2018]", "Explanation": "The cited work introduces the APPNP GNN model, which the citing paper adopts in its research to address the label position bias."}, {"Category": "Extension or Continuation", "Citation": "[Veli\u010dkovi\u0107 et al., 2017]", "Explanation": "The cited work presents the GAT GNN model, which the citing paper extends by using it as a baseline to compare the performance of the proposed method in addressing the label position bias."}, {"Category": "Data Source", "Citation": "[Zhou et al., 2003]", "Explanation": "The cited work introduces the Label Propagation method, which the citing paper uses as a baseline to compare the performance of the proposed method in addressing the label position bias."}, {"Category": "Methodological Basis", "Citation": "[Wang et al., 2022]", "Explanation": "The cited work, GRADE, is a method designed to mitigate degree bias, which the citing paper adopts in their research to address a specific issue in the field."}, {"Category": "Extension or Continuation", "Citation": "[Zhu et al., 2021a]", "Explanation": "The cited work, SRGNN, demonstrates the issue of feature distribution shift in local data gathering, which the citing paper extends by proposing a method to mitigate the issue and include it as a baseline in their research."}, {"Category": "Methodological Basis", "Citation": "[Kipf and Welling, 2016]", "Explanation": "The cited work GCN serves as a methodological basis for the feature transformation and propagation operators in GNNs."}, {"Category": "Methodological Basis", "Citation": "[Hamilton et al., 2017]", "Explanation": "The cited work GraphSAGE is a methodological basis for the feature transformation and propagation 
operators in GNNs."}, {"Category": "Methodological Basis", "Citation": "[Veli\u010dkovi\u0107 et al., 2017]", "Explanation": "The cited work GAT is a methodological basis for the feature transformation and propagation operators in GNNs."}, {"Category": "Methodological Basis", "Citation": "[Gasteiger et al., 2018]", "Explanation": "The cited work APPNP represents a methodological basis for the decoupled GNNs in the field of graph neural networks."}, {"Category": "Methodological Basis", "Citation": "[Liu et al., 2021]", "Explanation": "The cited work represents a methodological basis for the decoupled GNNs in the field of graph neural networks."}, {"Category": "Methodological Basis", "Citation": "[Liu et al., 2020]", "Explanation": "The cited work represents a methodological basis for the decoupled GNNs in the field of graph neural networks."}, {"Category": "Methodological Basis", "Citation": "[Zhou et al., 2021]", "Explanation": "The cited work represents a methodological basis for the decoupled GNNs in the field of graph neural networks."}, {"Category": "Data Source", "Citation": "[Wu et al., 2020]", "Explanation": "The cited work provides a data source for the evaluation of the success of GNNs in various domains."}, {"Category": "Supporting Evidence", "Citation": "[Dai et al., 2022]", "Explanation": "The cited work provides supporting evidence for the biases tied to node features and graph topology in GNNs."}, {"Category": "Supporting Evidence", "Citation": "[Dai and Wang, 2021]", "Explanation": "The cited work by Dai and Wang highlights the issue of GNNs generating predictions skewed by sensitive node features, which supports the claim in the citing paper that GNNs can lead to unfairness in various tasks due to feature bias."}, {"Category": "Supporting Evidence", "Citation": "[Agarwal et al., 2021]", "Explanation": "The work by Agarwal et al. also addresses the issue of feature bias in GNNs, which further supports the claim in the citing paper that GNNs can lead to unfairness in various tasks due to feature bias."}, {"Category": "Supporting Evidence", "Citation": "[Buyl and De Bie, 2020]", "Explanation": "The work by Buyl and De Bie highlights the issue of GNNs generating predictions skewed by sensitive node features, which supports the claim in the citing paper that GNNs can lead to unfairness in various tasks due to feature bias."}, {"Category": "Supporting Evidence", "Citation": "[Xu et al., 2021]", "Explanation": "The work by Xu et al. discusses the issue of GNNs leading to potential unfairness in loan fraud detection due to feature bias, which further supports the claim in the citing paper that GNNs can lead to unfairness in various tasks due to feature bias."}, {"Category": "Extension or Continuation", "Citation": "[Masrour et al., 2020]", "Explanation": "The work by Masrour et al. focuses on addressing feature bias in GNNs through adversarial training, which is a method that the citing paper may consider in their research to mitigate the issue of GNNs generating predictions skewed by sensitive node features."}, {"Category": "Extension or Continuation", "Citation": "[Kang et al., 2020]", "Explanation": "The work by Kang et al. discusses the use of fairness constraints to address feature bias in GNNs, which the citing paper may consider in their research to mitigate the issue of GNNs leading to unfairness in various tasks due to feature bias."}, {"Category": "Extension or Continuation", "Citation": "[Tang et al., 2020]", "Explanation": "The work by Tang et al. 
highlights the issue of low-degree nodes being more likely to be falsely predicted by GNNs, which the citing paper may consider in their research to address the issue of structural bias in GNNs."}, {"Category": "Extension or Continuation", "Citation": "[Kang et al., 2022]", "Explanation": "The work by Kang et al. addresses the issue of degree bias in GNNs, which the citing paper may consider in their research to build upon the existing methods to mitigate the issue of low-degree nodes being more likely to be falsely predicted by GNNs."}, {"Category": "Extension or Continuation", "Citation": "[Liu et al., 2023]", "Explanation": "The work by Liu et al. focuses on addressing the issue of degree bias in GNNs, which the citing paper may consider in their research to build upon the existing methods to mitigate the issue of low-degree nodes being more likely to be falsely predicted by GNNs."}, {"Category": "Extension or Continuation", "Citation": "[Liang et al., 2022]", "Explanation": "The work by Liang et al. discusses the issue of degree bias in GNNs, which the citing paper may consider in their research to build upon the existing methods to mitigate the issue of low-degree nodes being more likely to be falsely predicted by GNNs."}, {"Category": "Methodological Basis", "Citation": "[Zhu et al., 2021b]", "Explanation": "The cited work provides a unified framework for message passing in GNNs, which the citing paper adopts to develop a new method for feature aggregation using a learned graph structure."}, {"Category": "Methodological Basis", "Citation": "[Ma et al., 2021c]", "Explanation": "The cited work offers a framework for message passing in GNNs that the citing paper uses to develop a new method for feature aggregation using a learned graph structure."}, {"Category": "Methodological Basis", "Citation": "[Yang et al., 2021]", "Explanation": "The cited work provides a framework for message passing in GNNs that the citing paper adopts to develop a new method for feature aggregation using a learned graph structure."}, {"Category": "Methodological Basis", "Citation": "[Xu et al., 2018]", "Explanation": "The cited work by Xu et al. 
provides the definition of influence in GNNs, which the citing paper uses to derive the influence score of node j on node i in their research."}, {"Category": "Data Source", "Citation": "[Sen et al., 2008]", "Explanation": "The cited work provides the three citation datasets (Cora, Citeseer, and Pubmed) that the citing paper uses in their research for the semi-supervised node classification task."}, {"Category": "Data Source", "Citation": "[Shchur et al., 2018]", "Explanation": "The cited work provides the two Amazon datasets (Computers and Photo) that the citing paper uses in their research for the semi-supervised node classification task."}, {"Category": "Data Source", "Citation": "[Hu et al., 2020b]", "Explanation": "The cited work provides the OGB dataset (ogbn-arxiv) that the citing paper uses in their research for the semi-supervised node classification task."}, {"Category": "Methodological Basis", "Citation": "[Wang et al., 2022]", "Explanation": "The cited work introduces the GRADE method, which the citing paper adopts to mitigate degree bias in their research for the semi-supervised node classification task."}, {"Category": "Methodological Basis", "Citation": "[Zhu et al., 2021a]", "Explanation": "The cited work, SRGNN, is used as a baseline in the citing paper to demonstrate the issue of feature distribution shift in the local gathering of labeled nodes and to mitigate this issue in the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[Fey and Lenssen, 2019]", "Explanation": "The cited work is the source of the data splits used in the experiments conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[Gasteiger et al., 2018]", "Explanation": "The cited work is the model used in the experiments to showcase the performance disparity across GNNs in terms of label distance bias."}, {"Category": "Data Source", "Citation": "[Kipf and Welling, 2016]", "Explanation": "The cited work is the model used in the experiments to highlight the performance disparity across GNNs in terms of label distance bias."}, {"Category": "Data Source", "Citation": "[Zhou et al., 2003]", "Explanation": "The cited work is the model used in the experiments to demonstrate the performance disparity across GNNs in terms of label distance bias."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work provides the actual values for the Degree and Shortest Path Distance metrics, which the citing paper uses to segregate nodes into separate sensitive groups in their experimental setup."}, {"Category": "Data Source", "Citation": "[2]", "Explanation": "The cited work is acknowledged for its role in providing the data used in the experimental setup of the citing paper, which is based on the search space in the original paper."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work is cited for its contribution to the experimental setup of the citing paper, as the data used in the study is based on the information provided in the original work."}, {"Category": "Data Source", "Citation": "[4]", "Explanation": "The cited work is cited for its contribution to the experimental setup of the citing paper, as the data used in the study is based on the information provided in the original work."}, {"Category": "Data Source", "Citation": "[5]", "Explanation": "The cited work is cited for its contribution to the experimental setup of the citing paper, as the data used in the study is based on the information provided in 
the original work."}, {"Category": "Data Source", "Citation": "[6]", "Explanation": "The cited work is cited for its contribution to the experimental setup of the citing paper, as the data used in the study is based on the information provided in the original work."}, {"Category": "Data Source", "Citation": "[7]", "Explanation": "The cited work is cited for its contribution to the experimental setup of the citing paper, as the data used in the study is based on the information provided in the original work."}]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b20", "b32", "b1", "b46", "b88", "b20", "b60", "b101", "b15", "b32" ], "table_ref": [], "text": "Point cloud semantic segmentation is a crucial task for 3D scene understanding and has various applications [27,83], such as autonomous driving, unmanned aerial vehicles, and augmented reality. Current state-of-the-art approaches heavily rely on large-scale and densely annotated 3D datasets, which are costly to obtain [21,33]. To avoid demanding exhaustive annotation, weakly-supervised point cloud segmentation has emerged as a promising alternative. It aims to leverage a small set of annotated points while leaving the majority of points unlabeled in a large point cloud dataset for learning. Although current weakly supervised methods can offer a practical and cost-effective way to perform point cloud segmentation, their performance is still sub-optimal compared to fully-supervised approaches.\nDuring the exploration of weak supervision [85], a significant challenge is the insufficient training signals provided by the highly sparse labels. To tackle this issue, pseudo-labeling methods [85,94,32] have been proposed, which leverage predictions on unlabeled points as labels to facilitate the learning of the segmentation network. Despite some promising results, these pseudo-label methods have been outperformed by some recent methods based on consistency regularization [47,87]. We tend to attribute this to the use of label selection on pseudo-labels, such as confidence thresholding, which simultaneously optimizes the pseudo-labels p and predictions q taking the same and simple form of crossentropy. By reducing the noise via entropy regularization and bridging their distributional discrepancies, ERDA produces informative pseudo-labels that neglect the need for label selection. As in (c), it thus enables the model to consistently benefit from more pseudo-labels, surpasses other methods and its fully-supervised baseline, and can be extended to advance the fully-supervised performance. could lead to unlabeled points being wasted and under-explored. We hypothesize that the need for label selection arises from the low-confidence pseudo-labels assigned to unlabeled points, which are known for their noises [21] and potential unintended bias [60,100]. These less reliable and noisy pseudo-labels could contribute to discrepancies between the pseudo-labels and the model predictions, which might confuse and impede the learning process to a great extent. By addressing the above problem for label selection in weakly supervised 3D segmentation, we propose a novel learning-based approach in this study. Our method aims to leverage the information from all unlabeled points by mitigating the negative effects of the noisy pseudo-labels and the distributional discrepancy. Specifically, we introduce two learning objectives for the pseudo-label generation process. Firstly, we introduce an entropy regularization (ER) objective to reduce the noise and uncertainty in the pseudo-labels. This regularization promotes more informative, reliable, and confident pseudo-labels, which helps alleviate the limitations of noisy and uncertain pseudo-labels. Secondly, we propose a distribution alignment (DA) loss that minimizes statistical distances between pseudo-labels and model predictions. 
This ensures that the distribution of generated pseudo-labels remains close to the distribution of model predictions when regularizing their entropy.\nIn particular, we discover that formulating the distribution alignment loss using KL distance enables a simplification of our method into a cross-entropy-style learning objective that optimizes both the pseudo-label generator and the 3D segmentation network simultaneously. This makes our method straightforward to implement and apply. By integrating the entropy regularization and distribution alignment, we achieve the ERDA learning strategy, as shown in Fig. 1.\nEmpirically, we comprehensively experiment with three baselines and different weak supervision settings, including 0.02% (1-point, or 1pt), 1%, and 10%. Despite its concise design, our ERDA outperforms existing weakly-supervised methods on large-scale point cloud datasets such as S3DIS [2], ScanNet [16], and SensatUrban [33]. Notably, our ERDA can surpass the fully-supervised baselines using only 1% labels, demonstrating its significant effectiveness in leveraging pseudo-labels. Furthermore, we validate the scalability of our method by successfully generalizing it to more other settings, which illustrates the benefits of utilizing dense pseudo-label supervision with ERDA." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b38", "b11", "b3", "b72", "b27", "b21", "b22", "b56", "b57", "b2", "b47", "b69", "b51", "b58", "b33", "b25", "b41", "b59", "b87", "b89", "b54", "b34", "b67", "b8", "b49", "b0", "b64", "b78", "b13", "b39", "b70", "b96", "b81", "b82", "b42", "b6", "b46", "b88", "b28", "b9", "b1", "b12", "b96", "b88", "b81", "b82", "b1", "b88", "b1", "b23", "b60", "b64", "b99", "b74", "b73", "b92", "b19", "b60", "b99", "b74", "b101", "b64", "b101", "b19", "b93", "b101", "b30", "b40", "b101", "b30", "b19", "b93", "b40", "b19", "b30", "b101", "b30", "b64", "b97", "b4" ], "table_ref": [], "text": "Point cloud segmentation. Point cloud semantic segmentation aims to assign semantic labels to 3D points. The cutting-edge methods are deep-learning-based and can be classified into projection-based and point-based approaches. Projection-based methods project 3D points to grid-like structures, such as 2D image [84,55,39,12,4,45] or 3D voxels [15,71,67,28,22,23,76]. Alternatively, point-based methods directly operate on 3D points [56,57]. Recent efforts have focused on novel modules and backbones to enhance point features, such as 3D convolution [3,48,78,68,51,58], attentions [34,26,97,79,42,59], graph-based methods [74,44], and other modules such as sampling [18,86,88,7] and post-processing [54,35,66]. Although these methods have made significant progress, they rely on large-scale datasets with point-wise annotation and struggle with few labels [85]. To address the demanding requirement of point-wise annotation, our work explores weakly-supervised learning for 3D point cloud segmentation.\nWeakly-supervised point cloud segmentation. Compared to weakly-supervised 2D image segmentation [99,49,75,1,64], weakly-supervised 3D point cloud segmentation is less explored. In general, weakly-supervised 3D segmentation task focus on highly sparse labels: only a few scattered points are annotated in large point cloud scenes. Xu and Lee [85] first propose to use 10x fewer labels to achieve performance on par with a fully-supervised point cloud segmentation model. 
Later studies have explored more advanced ways to exploit different forms of weak supervision [77,14,40] and human annotations [53,69]. Recent methods tend to introduce perturbed self-distillation [95], consistency regularization [85,62,80,81,43], and leverage self-supervised learning [62, 37,47,87] based on contrastive learning [29,10]. Pseudo-labels are another approach to leverage unlabeled data, with methods such as pre-training networks on colorization tasks [94], using iterative training [32], employing separate networks to iterate between learning pseudo-labels and training 3D segmentation networks [53], or using super-point graph [44] with graph attentional module to propagate the limited labels over super-points [13]. However, these existing methods often require expensive training due to hand-crafted 3D data augmentations [95,87,80,81], iterative training [53,32], or additional modules [87,32], complicating the adaptation of backbone models from fully-supervised to weakly-supervised learning. In contrast, our work aims to achieve effective weakly supervised learning for the 3D segmentation task with straightforward motivations and simple implementation.\nPseudo-label refinement. Pseudo-labeling [46], a versatile method for entropy minimization [24], has been extensively studied in various tasks, including semi-supervised 2D classification [82,60], segmentation [64,89], and domain adaptation [98,73]. To generate high-quality supervision, various label selection strategies have been proposed based on learning status [72,91,20], label uncertainty [60,98,73,50], class balancing [100], and data augmentations [64,89,100]. Our method is most closely related to the works addressing bias in supervision, where mutual learning [20,70,92] and distribution alignment [100,31,41] have been discussed. However, these works typically focus on class imbalance [100,31] and rely on iterative training [70,20,92,41], label selection [20,31], and strong data augmentations [100,31], which might not be directly applicable to 3D point clouds. For instance, common image augmentations [64] like cropping and resizing may translate to point cloud upsampling [96], which remains an open question in the related research area. Rather than introducing complicated mechanisms, we argue that proper regularization on pseudo-labels and its alignment with model prediction can provide significant benefits using a very concise learning approach designed for the weakly supervised 3D point cloud segmentation task.\nBesides, it is shown that the data augmentations and repeated training in mutual learning [70,38] are important to avoid the feature collapse, i.e., the resulting pseudo-labels being uniform or the same as model predictions. We suspect the cause may originate from the entropy term in their use of raw statistical distance by empirical results, which potentially matches the pseudo-labels to noisy and confusing model prediction, as would be discussed in Sec. 3.2. Moreover, in self-supervised learning based on clustering [5] and distillation [6], it has also been shown that it would lead to feature collapse if matching to a cluster assignment or teacher output of a close-uniform distribution with high entropy, which agrees with the intuition in our ER term." 
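This entropy intuition can be made concrete with a small, self-contained illustration (purely illustrative and not taken from any of the cited implementations); the class count K = 13 is chosen only to match the S3DIS label set used in the experiments.

```python
# Illustration only: Shannon entropy H(p) = -sum_i p_i * log(p_i) of a K-class
# assignment. A near-uniform target has close-to-maximal entropy and carries
# almost no class information, while a near one-hot target has entropy close to 0.
import numpy as np

K = 13  # matches the S3DIS label set; any K illustrates the same point

def entropy(p):
    p = np.asarray(p, dtype=np.float64)
    return float(-(p * np.log(p + 1e-12)).sum())

uniform = np.full(K, 1.0 / K)             # maximally confused assignment
confident = np.full(K, 0.01 / (K - 1))    # almost one-hot assignment
confident[3] = 0.99

print(entropy(uniform))    # ~log(K) = 2.565..., uninformative target
print(entropy(confident))  # ~0.08, informative and confident target
```

Matching pseudo-labels to a target close to the first case is precisely the collapse discussed above, and discouraging such high-entropy labels is the role of the ER term introduced in the next section.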
}, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Formulation of ERDA", "publication_ref": [ "b23", "b61", "b101", "b93", "b19" ], "table_ref": [], "text": "As previously mentioned, we propose the ERDA approach to alleviate noise in the generated pseudolabels and reduce the distribution gaps between them and the segmentation network predictions. In general, our ERDA introduces two loss functions, including the entropy regularization loss and the distribution alignment loss for the learning on pseudo-labels. We denote the two loss functions as L ER and L DA , respectively. Then, we have the overall loss of ERDA as follows:\nL p = λL ER + L DA ,(1)\nwhere the λ > 0 modulates the entropy regularization which is similar to the studies [46,24].\nBefore detailing the formulation of L ER and L DA , we first introduce the notation. While the losses are calculated over all unlabeled points, we focus on one single unlabeled point for ease of discussion. We denote the pseudo-label assigned to this unlabeled point as p and the corresponding segmentation network prediction as q. Each p and q is a 1D vector representing the probability over classes.\nEntropy Regularization loss. We hypothesize that the quality of pseudo-labels can be hindered by noise, which in turn affects model learning. Specifically, we consider that the pseudo-label could be more susceptible to containing noise when it fails to provide a confident pseudo-labeling result, which leads to the presence of a high-entropy distribution in p.\nTo mitigate this, for the p, we propose to reduce its noise level by minimizing its Shannon entropy, which also encourages a more informative labeling result [61]. Therefore, we have:\nL ER = H(p),(2)\nwhere H(p) = i -p i log p i and i iterates over the vector. By minimizing the entropy of the pseudo-label as defined above, we promote more confident labeling results to help resist noise in the labeling process 1 .\nDistribution Alignment loss. In addition to the noise in pseudo-labels, we propose that significant discrepancies between the pseudo-labels and the segmentation network predictions could also confuse the learning process and lead to unreliable segmentation results. In general, the discrepancies can stem from multiple sources, including the noise-induced unreliability of pseudo-labels, differences between labeled and unlabeled data [100], and variations in pseudo-labeling methods and segmentation methods [92,20]. Although entropy regularization could mitigate the impact of noise in pseudolabels, significant discrepancies may still persist between the pseudo-labels and the predictions of the segmentation network. To mitigate this issue, we propose that the pseudo-labels and network can be jointly optimized to narrow such discrepancies, making generated pseudo-labels not diverge too far from the segmentation predictions. Therefore, we introduce the distribution alignment loss.\nTo properly define the distribution alignment loss (L DA ), we measure the KL divergence between the pseudo-labels (p) and the segmentation network predictions (q) and aim to minimize this divergence. Specifically, we define the distribution alignment loss as follows:\nL DA = KL(p||q),(3)\nwhere KL(p||q) refers to the KL divergence. Using the above formulation has several benefits. 
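Before turning to these benefits, the per-point objective defined by Eqs. (1)-(3) can be sketched in a few lines. The snippet below is a minimal PyTorch-style illustration with assumed names (erda_loss, lam) rather than a released implementation; the final check relies only on the identity KL(p||q) = H(p, q) - H(p).

```python
# Minimal sketch (assumed names, not the released code) of the per-point ERDA
# objective L_p = lambda * L_ER + L_DA, with L_ER = H(p) and L_DA = KL(p || q),
# where p is the pseudo-label and q the segmentation prediction for one point.
import torch

def erda_loss(p, q, lam=1.0, eps=1e-12):
    """p, q: 1D probability vectors over classes; both receive gradients."""
    l_er = -(p * torch.log(p + eps)).sum()                        # H(p)
    l_da = (p * (torch.log(p + eps) - torch.log(q + eps))).sum()  # KL(p || q)
    return lam * l_er + l_da

p = torch.softmax(torch.randn(13, requires_grad=True), dim=0)  # pseudo-label branch
q = torch.softmax(torch.randn(13, requires_grad=True), dim=0)  # prediction branch
loss = erda_loss(p, q, lam=1.0)

# With lam = 1, KL(p||q) = H(p, q) - H(p) collapses L_p to the cross entropy
# H(p, q) = -sum_i p_i * log(q_i), which optimizes p and q jointly.
assert torch.allclose(loss, -(p * torch.log(q + 1e-12)).sum(), atol=1e-5)
```

The λ = 1 case anticipated by this check is exactly the simplification derived next.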
For example, the KL divergence can simplify the overall loss L p into a deceptively simple form that demonstrates desirable properties and also performs better than other distance measurements. More details will be presented in the following sections.\nSimplified ERDA. With the L ER and L DA formulated as above, given that KL(p||q) = H(p, q) -H(p) where H(p, q) is the cross entropy between p and q, we can have a simplified ERDA formulation as:\nL p = H(p, q) + (λ -1)H(p).(4)\nIn particular, when λ = 1, we obtain the final ERDA loss2 :\nL p = H(p, q) = i -p i log q i(5)\nThe above simplified ERDA loss describes that the entropy regularization loss and distribution alignment loss can be represented by a single cross-entropy-based loss that optimizes both p and q.\nWe would like to emphasize that Eq. ( 5) is distinct from the conventional cross-entropy loss. The conventional cross-entropy loss utilizes a fixed label and only optimizes the term within the logarithm function, whereas the proposed loss in Eq. ( 5) optimizes both p and q simultaneously.\nLDA KL(p||q) KL(q||p) JS(p, q) M SE(p, q) Lp H(p, q) -(1 -λ)H(p) H(q, p) -H(q) + λH(p) H( p + q 2 ) -( 1 2 -λ)H(p) - 1 2 H(q) 1 2 i (pi -qi) 2 + λH(p) S1 0 qi -1k=i 0 0 S2 (λ -1)pi j pj log pi pj 1 K -pi + λpi j pj log pi pj pi j̸ =i pj( 1 2 log Kpi + 1 Kpj + 1 + (λ - 1 2 ) log pi pj ) -p 2 i + pi j p 2 j + λpi j pj log pi pj\nTable 1. The formulation of Lp using different functions to formulate LDA. We study the gradient update on si, i.e., -∂Lp ∂s i under different situations. S1: update given confident pseudo-label, p being one-hot with ∃p k ∈ p, p k → 1. S2: update given confusing prediction, q being uniform with q1 = ... = qK = 1 K . More analysis as well as visualization can be found in the Sec. 3.2 and the supplementary Appendix C." }, { "figure_ref": [], "heading": "Delving into the Benefits of ERDA", "publication_ref": [ "b19", "b93", "b40" ], "table_ref": [], "text": "To formulate the distribution alignment loss, different functions can be employed to measure the differences between p and q. In addition to the KL divergence, there are other distance measurements like mean squared error (MSE) or Jensen-Shannon (JS) divergence for replacement. Although many mutual learning methods [20,92,41,38] have proven the effectiveness of KL divergence, a detailed comparison of KL divergence against other measurements is currently lacking in the literature. In this section, under the proposed ERDA learning framework, we show by comparison that KL(p||q) is a better choice and ER is necessary for weakly-supervised 3D segmentation.\nTo examine the characteristics of different distance measurements, including KL(p||q), KL(q||p), JS(p||q), and M SE(p||q), we investigate the form of our ERDA loss L p and its impact on the learning for pseudo-label generation network given two situations during training.\nMore formally, we shall assume a total of K classes and define that a pseudo-label p = [p 1 , ..., p K ] is based on the confidence scores s = [s 1 , ..., s K ], and that p = softmax(s). Similarly, we have a segmentation network prediction q = [q 1 , ..., q K ] for the same point. We re-write the ERDA loss L p in various forms and investigate the learning from the perspective of gradient update, as in Tab. 1.\nSituation 1: Gradient update given confident pseudo-label p. We first specifically study the case when p is very certain and confident, i.e., p approaching a one-hot vector. As in Tab. 
1, most distances yield the desired zero gradients, which thus retain the information of a confident and reliable p. In this situation, however, the KL(q||p), rather than KL(p||q) in our method, produces non-zero gradients that would actually increase the noise among pseudo-labels during its learning, which is not favorable according to our motivation.
Situation 2: Gradient update given confusing prediction q. In addition, we are also interested in how different choices of distance and λ would impact the learning on pseudo-labels if the segmentation model produces confusing outputs, i.e., q tends to be uniform. In line with the motivation of ERDA learning, we aim to regularize the pseudo-labels to mitigate potential noise and bias, while discouraging uncertain labels with little information. However, as in Tab. 1, most implementations yield non-zero gradient updates to the pseudo-label generation network. This update would make p closer to the confused q, thus increasing the noise and degrading the training performance. Conversely, only KL(p||q) can produce a zero gradient when integrated with the entropy regularization with λ = 1. That is, only ERDA in Eq. (5) would not update the pseudo-label generation network when q is not reliable, which avoids confusing p. Furthermore, when q is less noisy but still close to a uniform vector, there is a large close-to-zero plateau on the gradient surface of ERDA, which benefits the learning on p by resisting the influence of noise in q.
In addition to the above cases, the gradients of ERDA in Eq. (5) can be generally regarded as being aware of the noise level and the confidence of both the pseudo-label p and the corresponding prediction q. In particular, ERDA produces larger gradient updates on noisy pseudo-labels, and smaller updates on confident and reliable pseudo-labels or given noisy segmentation predictions. Therefore, our formulation demonstrates its superiority in fulfilling our motivation of simultaneous noise reduction and distribution alignment, where both L ER and the KL-based L DA are necessary. We provide more empirical studies in ablation (Sec. 4.3) and detailed analysis in the supplementary." }, { "figure_ref": [], "heading": "Implementation Details on Pseudo-Labels", "publication_ref": [], "table_ref": [], "text": "In our study, we use a prototypical pseudo-label generation process due to its popularity as well as simplicity [94]. Specifically, prototypes [63] denote the class centroids in the feature space, which are estimated from the labeled points and updated by a momentum scheme during training. Based on the momentum-updated prototypes, we attach an MLP-based projection network to help generate pseudo-labels and learn with our method. Aligned with our motivation, we do not introduce thresholding-based label selection or one-hot conversion [46,94] to process generated pseudo-labels. More details are in the supplementary.
More formally, we take as input a point cloud X, where the labeled points are X_l and the unlabeled points are X_u. For a labeled point x ∈ X_l, we denote its label by y. The pseudo-label generation process can be described as follows:
$\hat{C}_k = \frac{1}{N^l_k} \sum_{x \in X_l \wedge y=k} g \circ f(x)\,, \quad C_k \leftarrow m C_k + (1-m)\hat{C}_k\,, \quad \forall x \in X_u,\; s_k = d\big(g \circ f(x), C_k\big)\,, \quad p = \mathrm{softmax}(s),$
where C_k denotes the global class centroid for the k-th class, N^l_k is the number of labeled points of the k-th class, g ∘ f = g(f(·)) is the transformation through the backbone network f and the projection network g, m is the momentum coefficient, and we use cosine similarity for d(·,·) to generate the score s.
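A compact sketch of this prototype-based generation step is given below; the tensor shapes, function names, and the choice to update centroids without gradient tracking are illustrative assumptions rather than a description of the released code.

```python
# Sketch of the prototypical pseudo-label generation described above.
# Names, shapes, and the use of PyTorch are assumptions for illustration only.
import torch
import torch.nn.functional as F

def update_prototypes(C, feats_l, labels_l, num_classes, m=0.999):
    """C: (K, D) non-trainable prototype buffer; feats_l: (N_l, D) projected
    features g(f(x)) of labeled points; labels_l: (N_l,) their classes y."""
    with torch.no_grad():  # centroid bookkeeping, assumed not to carry gradients
        for k in range(num_classes):
            mask = labels_l == k
            if mask.any():
                c_hat = feats_l[mask].mean(dim=0)    # per-class centroid \hat{C}_k
                C[k] = m * C[k] + (1.0 - m) * c_hat  # momentum update of C_k
    return C

def generate_pseudo_labels(C, feats_u):
    """feats_u: (N_u, D) projected features of unlabeled points -> (N_u, K) soft p."""
    s = F.cosine_similarity(feats_u.unsqueeze(1), C.unsqueeze(0), dim=-1)  # scores s_k
    return F.softmax(s, dim=-1)  # kept soft: no thresholding or one-hot conversion
```

Keeping the output soft, with no top-k selection or one-hot conversion, is what lets every unlabeled point contribute to the ERDA loss.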
By default, we use 2-layer MLPs for the projection network g and set m = 0.999.\nBesides, due to the simplicity of ERDA, we are able to follow the setup of the baselines for training, which enables straightforward implementation and easy adaptation on various backbone models with little overhead.\nOverall objective. Finally, with ERDA learning in Eq. ( 5), we maximize the same loss for both labeled and unlabeled points, segmentation task, and pseudo-label generation, where we allow the gradient to back-propagate through the (pseudo-)labels. The final loss is given as\nL = 1 N l x∈X l L ce (q, y) + α 1 N u x∈X u L p (q, p),(6)\nwhere L p (q, p) = L ce (q, p) = H(q, p) is the typical cross-entropy loss used for point cloud segmentation, N l and N u are the numbers of labeled and unlabeled points, and α is the loss weight." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b15", "b32", "b18" ], "table_ref": [], "text": "We present the benefits of our proposed ERDA by experimenting with multiple large-scale datasets, including S3DIS [2], ScanNet [16], SensatUrban [33] and Pascal [19]. We also provide ablation studies for better investigation. Table 2. The results are obtained on the S3DIS datasets Area 5. For all baseline methods, we follow their official instructions in evaluation. The bold denotes the best performance in each setting." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b33", "b51", "b16", "b52", "b41", "b16", "b96", "b1" ], "table_ref": [], "text": "We choose RandLA-Net [34] and CloserLook3D [51] as our primary baselines following previous works. Additionally, while transformer models [17,52] have revolutionized the field of computer vision as well as 3D point cloud segmentation [97,42], none of the existing works have addressed the training of transformer for point cloud segmentation with weak supervision, even though these models are known to be data-hungry [17]. We thus further incorporate the PointTransformer (PT) [97] as our baseline to study the amount of supervision demanded for effective training of transformer.\nFor training, we follow the setup of the baselines and set the loss weight α = 0.1. For a fair comparison, we follow previous works [94, 95,32] and experiment with different settings, including the 0.02% (1pt), 1% and 10% settings, where the available labels are randomly sampled according to the ratio 3 . More details are given in the supplementary." }, { "figure_ref": [], "heading": "Performance Comparison", "publication_ref": [ "b16" ], "table_ref": [], "text": "Results on S3DIS. S3DIS [2] is a large-scale point cloud segmentation dataset that covers 6 large indoor areas with 272 rooms and 13 semantic categories. As shown in Tab. 2, ERDA significantly improves over different baselines on all settings and almost all classes. In particular, for confusing classes such as column, window, door, and board, our method provides noticeable and consistent improvements in all weak supervision settings. We also note that PT suffers from severe over-fitting and feature collapsing under the supervision of extremely sparse labels of \"1pt\" setting; whereas it is alleviated with ERDA, though not achieving a satisfactory performance. Such observation agrees with the understanding that transformer is data-hungry [17]." 
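All of the configurations above are trained with the combined objective in Eq. (6). The batched sketch below uses assumed names and assumes that both the predictions and the pseudo-labels are already normalized probabilities; the loss weight α = 0.1 follows the setup described earlier.

```python
# Batched sketch (assumed names and shapes) of the overall objective in Eq. (6):
# cross entropy on labeled points plus the weighted ERDA term on unlabeled points.
import torch

def total_loss(q, p, labels, labeled_mask, alpha=0.1, eps=1e-12):
    """q: (N, K) predicted class probabilities; p: (N, K) soft pseudo-labels;
    labels: (N,) ground truth, valid where labeled_mask; labeled_mask: (N,) bool."""
    q_l = q[labeled_mask]
    y_l = labels[labeled_mask]
    ce_labeled = -torch.log(q_l[torch.arange(q_l.size(0)), y_l] + eps).mean()

    q_u, p_u = q[~labeled_mask], p[~labeled_mask]
    erda_unlabeled = -(p_u * torch.log(q_u + eps)).sum(dim=-1).mean()  # H(p, q), Eq. (5)

    # p is intentionally not detached, so gradients also reach the pseudo-labels.
    return ce_labeled + alpha * erda_unlabeled
```

Note that p is not detached, matching the choice above of letting gradients back-propagate through the (pseudo-)labels.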
}, { "figure_ref": [], "heading": "Ground Truth", "publication_ref": [ "b33", "b69", "b46", "b58", "b33", "b69", "b5", "b91", "b66", "b15", "b88", "b32", "b1", "b73", "b64" ], "table_ref": [], "text": "Baseline ERDA Improvement Input Figure 3. We show obvious improvement of our ERDA over baseline (RandLA-Net) on different scenes from S3DIS Area 5. In the office and hallway (top 2), ERDA produces more detailed and complete segmentation for windows and doors, and avoids over-expansion of the board and bookcase on the wall, thanks to the informative pseudo-labels. In more cluttered scenes (bottom 2), ERDA tends to make cleaner predictions by avoiding improper situations such as desk inside clutter and preserving important semantic classes such as columns. 47.6 RandLA-Net [34] 70.0 KPConv [68] 70.6 HybridCR [47] 70.7 PT [97] 73.5 PointNeXt -XL [58] 74.9 RandLA-Net + ERDA 32.9 RandLA-Net [34] 52.7 KPConv [68] 57.6 LCPFormer [36] 63. Impressively, ERDA yields competitive performance against most supervised methods. For instance, with only 1% of labels, it achieves performance better than its stand-alone baselines with full supervision. Such result indicates that the ERDA is more successful than expected in alleviating the lack of training signals, as also demonstrated qualitatively in Fig. 3.\nTherefore, we further extend the proposed method to fully-supervised training, i.e., in setting \"Fully\" in Tab. 2. More specifically, we generate pseudo-labels for all points and regard the ERDA as an auxiliary loss for fully-supervised learning. Surprisingly, we observe non-trivial improvements (+3.7 for RandLA-Net and +3.4 for CloserLook3D) and achieve the state-of-the-art performance of 72.6 (+2.2) in mIoU with PT. We suggest that the improvements are due to the noise-aware learning from ERDA, which gradually reduces the noise during the model learning and demonstrates to be generally effective. Moreover, considering that the ground-truth labels could suffer from the problem of label noise [38,90,65], we also hypothesize that pseudo-labels from ERDA learning could stabilize fully-supervised learning and provide unexpected benefits.\nWe also conduct the 6-fold cross-validation, as reported in Tab. 3. We find our method achieves a leading performance among both weakly and fully-supervised methods, which further validates the effectiveness of our method.\nResults on ScanNet. ScanNet [16] is an indoor point cloud dataset that covers 1513 training scenes and 100 test scenes with 20 classes. In addition to the common settings, e.g., 1% labels, it also provides official data efficient settings, such as 20 points, where for each scene there are a pre-defined set of 20 points with the ground truth label. We evaluate on both settings and report the results in Tab. 4. We largely improve the performance under 0.1% and 1% labels. In 20pts setting, we also employ a convolutional baseline (CloserLook3D) for a fair comparison. With no modification on the model, we surpass MIL-transformer [87] that additionally augments the backbone with transformer modules and multi-scale inference. Besides, we apply ERDA to baseline under fully-supervised setting and achieve competitive performance. These results also validate the ability of ERDA in providing effective supervision signals.\nResults on SensatUrban. SensatUrban [33] is an urban-scale outdoor point cloud dataset that covers the landscape from three UK cities. In Tab. 
5, ERDA surpasses SQN [32] under the same 0.1% setting as well as its fully-supervised baseline, and also largely improves under full supervision. It suggests that our method can be robust to different types of datasets and effectively exploits the limited annotations as well as the unlabeled points.\nGeneralizing to 2D Pascal. As our ERDA does not make specific assumptions on the 3D data, we explore its potential in generalizing to similar 2D settings. Specifically, we study an important task of semi-supervised segmentation on image [72,89,8] and implement our method to the popular baseline, FixMatch [64], which combines the pseudo-labels with weak-to-strong consistency and is shown to benefit from stronger augmentation [89]. We use DeepLabv3+ [9] with ResNet-101 [30].\nAs in Tab. 6, we show that ERDA brings consistent improvement from low to high data regime, despite the existence of strong data augmentation and the very different data as well as setting 4 . It thus indicates the strong generalization of our method. We also see that the improvement is less significant than the 3D cases. It might be because 2D data are more structured (e.g., pixels on a 2D grid) and are thus less noisy than the 3D point cloud." }, { "figure_ref": [ "fig_0" ], "heading": "Ablations and Analysis", "publication_ref": [], "table_ref": [], "text": "We mainly consider the 1% setting and ablates in Tab. 7 to better investigate ERDA and make a more thorough comparison with the current pseudo-label generation paradigm. For more studies on hyper-parameters, please refer to the supplementary.\nIndividual effectiveness of ER and DA. To validate our initial hypothesis, we study the individual effectiveness of L ER and L DA in Tab. 7a. While the pseudo-labels essentially improve the baseline performance, we remove its label selection and one-hot conversion when adding the ER or DA term. We find that using ER alone can already be superior to the common pseudo-labels and largely reduce the entropy of pseudo-labels (ent.) as expected, which verifies that the pseudo-labels are noisy, and reducing these noises could be beneficial. The improvement with the DA term alone is even more significant, indicating that a large discrepancy is indeed existing between the pseudo-labels and model prediction and is hindering the model training. Lastly, by combining the two terms, we obtain the ERDA that reaches the best performance but with the entropy of its pseudo-labels larger than ER only and smaller than DA only. It thus also verifies that the DA term could be biased to uniform distribution and that the ER term is necessary.\nDifferent choices of ER and DA. Aside from the analysis in Sec. 3.2, we empirically compare the results under different choices of distance for L DA and λ for L ER . As in Tab. 7b, the outstanding result justifies the choice of KL(q||p) with λ = 1. Additionally, all different choices and combina-tions of ER and DA terms improve over the common pseudo-labels (63.3), which also validates the general motivation for ERDA.\nAblating label selection. We explore in more detail how the model performance is influenced by the amount of exploitation on unlabeled points, as ERDA learning aims to enable full utilization of the unlabeled points. In particular, we consider three pseudo-labels types: common one-hot pseudo-labels, soft pseudo-labels (p), and soft pseudo-labels with ERDA learning. 
To reduce the number of pseudo-labels, we select sparse but high-confidence pseudo-labels following a common top-k strategy [94,53] with various values of k to study its influence. As in Tab. 7c, ERDA learning significantly improves the model performance under all cases, enables the model to consistently benefit from more pseudo-labels, and thus neglects the need for label selection such as top-k strategy, as also revealed in Fig. 1. Besides, using soft pseudo-labels alone can not improve but generally hinders the model performance, as one-hot conversion may also reduce the noise in pseudo-labels, which is also not required with ERDA." }, { "figure_ref": [], "heading": "Limitation and Future Work", "publication_ref": [], "table_ref": [], "text": "While we mostly focus on weak supervision in this paper, our method also brings improvements under fully-supervised setting. We would then like to further explore the effectiveness and relationship of L ER and L DA under full supervision as a future work. Besides, despite promising results, our method, like other weak supervision approaches, assumes complete coverage of semantic classes in the available labels, which may not always hold in real-world cases. Point cloud segmentation with missing or novel classes should be explored as an important future direction." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we study the weakly-supervised 3D point cloud semantic segmentation, which imposes the challenge of highly sparse labels. Though pseudo-labels are widely used, label selection is commonly employed to overcome the noise, but it also prevents the full utilization of unlabeled data. By addressing this, we propose a new learning scheme on pseudo-labels, ERDA, that reduces the noise, aligns to the model prediction, and thus enables comprehensive exploitation of unlabeled data for effective training. Experimental results show that ERDA outperforms previous methods in various settings and datasets. Notably, it surpasses its fully-supervised baselines and can be further generalized to full supervision as well as 2D images. " }, { "figure_ref": [], "heading": "LDA", "publication_ref": [], "table_ref": [], "text": "KL(p||q) KL(q||p) JS(p, q) M SE(p, q) Lp H(p, q) -(1 -λ)H(p) H(q, p) -H(q) + λH(p) H( p + q 2 ) -( 1 2 -λ)H(p) - 1 2 H(q) 1 2 i (pi -qi) 2 + λH(p) gi pi j pj(-log qi qj + (1 -λ) log pi pj ) pi -qi -λpi j pj log pi pj pi j pj( -1 2 log pi + qi pj + qj + ( 1 2 -λ) log pi pj ) pi(pi -qi) -pi j pj(pj -qj) -λpi j pj log pi pj Situations ∆ = -gi pk → 1 0 qi -1k=i 0 0 q1 = ... = qK (λ -1)pi j pj log pi pj 1 K -pi + λpi j pj log pi pj pi j̸ =i pj( 1 2 log Kpi + 1 Kpj + 1 + (λ - 1 2 ) log pi pj ) -p 2 i + pi j p 2 j + λpi j pj log pi pj qi → 1 + inf 1 -pi + λpi j pj log pi pj pi j̸ =i pj( 1 2 log pi + 1 pj + (λ - 1 2 ) log pi pj ) -p 2 i + pi(1 -pi) + pi j p 2 j + λpi j pj log pi pj qk̸ =i → 1 -inf -pi + λpi j pj log pi pj pi j̸ =i pj( 1 2 log pi pj + 1j=k + (λ - 1 2 ) log pi pj ) -p 2 i -pipk + pi j p 2 j + λpi j pj log pi pj" }, { "figure_ref": [], "heading": "E Supplementary: Ablations and Parameter Study", "publication_ref": [ "b46", "b9", "b10", "b24" ], "table_ref": [], "text": "We further study the hyper-parameters involved in the implementation of ERDA with the prototypical pseudo-label generation, including loss weight α, momentum coefficient m, and the use of projection network. As shown in Tab. 
9, the proposed method acquires decent performance (mIoUs are all > 65 and mostly > 66) in a wide range of different hyper-parameter settings, compared with its fully-supervised baseline (64.7 mIoU) and previous state-of-the-art performance (65.3 mIoU by HybridCR [47]).\nAdditionally, we suggest that the projection network could be effective in facilitating the ERDA learning, which can be learned to specialize in the pseudo-label generation task. This could also be related to the advances in contrastive learning. Many works [10,11,25] suggest that a further projection on feature representation can largely boost the performance because such projection decouples the learned features from the pretext task. We share a similar motivation in decoupling the features for ERDA learning on the pseudo-label generation task from the features for the segmentation task. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement. This project is supported in part by ARC FL-170100117, and IC-190100031." }, { "figure_ref": [], "heading": "A Supplementary: Introduction", "publication_ref": [], "table_ref": [], "text": "In this supplementary material, we provide more details regarding implementation details in Appendix B, more analysis of ERDA in Appendix C, full experimental results in Appendix D, and studies on parameters in Appendix E." }, { "figure_ref": [], "heading": "B Supplementary: Implementation and Training Details", "publication_ref": [ "b33", "b51", "b51", "b69", "b64" ], "table_ref": [], "text": "For the RandLA-Net [34] and CloserLook3D [51] baselines, we follow the instructions in their released code for training and evaluation, which are here (RandLA-Net) and here (CloserLook3D), respectively. Especially, in CloserLook3D [51], there are several local aggregation operations and we use the \"Pseudo Grid\" (KPConv-like) one, which provides a neat re-implementation of the popular KPConv [68] network (rigid version). For point transformer (PT) [97], we follow their paper and the instructions in the code base that claims to have the official code (here). For FixMatch [64], we use the publicly available implementation here.\nOur code and pre-trained models will be released." }, { "figure_ref": [], "heading": "C Supplementary: Delving into ERDA with More Analysis", "publication_ref": [], "table_ref": [], "text": "Following the discussion in Sec. 3, we study the impact of entropy regularization as well as different distance measurements from the perspective of gradient updates.\nIn particular, we study the gradient on the score of the i-th class i.e., s i , and denote it as g i = ∂Lp ∂si . Given that ∂pj ∂si = 1 (i=j) p i -p i p j , we have g i = p i j p j ( ∂Lp ∂pi -∂Lp ∂pj ). As shown in Tab. 8, we demonstrate the gradient update ∆ = -g i under different situations.\nIn addition to the analysis in Sec. 3.2, we find that, when q is certain, i.e., q approaching a one-hot vector, the update of our choice KL(p||q) would approach the infinity. We note that this could be hardly encountered since q is typically also the output of a softmax function. Instead, we would rather regard it as a benefit because it would generate effective supervision on those model predictions with high certainty, and the problem of gradient explosion could also be prevented by common operations such as gradient clipping.\nIn Fig. 4, we provide visualization for a more intuitive understanding on the impact of different formulations for L DA as well as their combination with L ER . 
Specifically, we consider a simplified case of binary classification and visualize their gradient updates when λ takes different values. We also visualize the gradient updates of L ER . By comparing the gradient updates, we observe that only KL(p||q) with λ = 1 can achieve small updates when q is close to uniform (q = 1 2 under the binary case), and that there is a close-0 plateau as indicated by the sparse contour lines.\nAdditionally, we also find that, when increasing the λ, all distances, except the KL(p||q), are modulated to be similar to the updates of having L ER alone; whereas KL(p||q) can still produce effective updates, which may indicate that KL(p||q) is more robust to the λ." }, { "figure_ref": [], "heading": "D Supplementary: Full Results", "publication_ref": [ "b15", "b32" ], "table_ref": [], "text": "We provide full results for the experiments reported in the main paper. For S3DIS [2], we provide the full results of S3DIS with 6-fold cross-validation in Tab. 10. For ScanNet [16] and SensatUrban [33], we evaluate on their online test servers, which are here and here, and provide the full results in Tab. 11 and Tab. 12, respectively." } ]
2023-10-20
[ { "authors": "Jiwoon Ahn; Suha Kwak", "journal": "", "ref_id": "b0", "title": "Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation", "year": "2018" }, { "authors": "Iro Armeni; Sasha Sax; Silvio Amir R Zamir; Savarese", "journal": "", "ref_id": "b1", "title": "Joint 2d-3d-semantic data for indoor scene understanding", "year": "2017" }, { "authors": "Alexandre Boulch; Gilles Puy; Renaud Marlet", "journal": "", "ref_id": "b2", "title": "Fkaconv: Feature-kernel alignment for point cloud convolution", "year": "2020" }, { "authors": "A Boulch; B Le Saux; N Audebert", "journal": "Eurographics Association", "ref_id": "b3", "title": "Unstructured point cloud semantic labeling using deep segmentation networks", "year": "2017" }, { "authors": "Mathilde Caron; Ishan Misra; Julien Mairal; Priya Goyal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b4", "title": "Unsupervised learning of visual features by contrasting cluster assignments", "year": "2020" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Herve Jegou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b5", "title": "Emerging properties in self-supervised vision transformers", "year": "2021-10" }, { "authors": "Chen Chen; Zhe Chen; Jing Zhang; Dacheng Tao", "journal": "", "ref_id": "b6", "title": "Sasa: Semantics-augmented set abstraction for point-based 3d object detection", "year": "2022" }, { "authors": "Ran Hao Chen; Yue Tao; Yidong Fan; Jindong Wang; Bernt Wang; Xing Schiele; Bhiksha Xie; Marios Raj; Savvides", "journal": "", "ref_id": "b7", "title": "Softmatch: Addressing the quantity-quality trade-off in semi-supervised learning", "year": "2023" }, { "authors": "Liang-Chieh Chen; Yukun Zhu; George Papandreou; Florian Schroff; Hartwig Adam", "journal": "", "ref_id": "b8", "title": "Encoderdecoder with atrous separable convolution for semantic image segmentation", "year": "2018" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "", "ref_id": "b9", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Ting Chen; Simon Kornblith; Kevin Swersky; Mohammad Norouzi; Geoffrey Hinton", "journal": "", "ref_id": "b10", "title": "Big selfsupervised models are strong semi-supervised learners", "year": "2020" }, { "authors": "Zhe Chen; Jing Zhang; Dacheng Tao", "journal": "IEEE/CAA Journal of Automatica Sinica", "ref_id": "b11", "title": "Progressive lidar adaptation for road detection", "year": "2019-05" }, { "authors": "Mingmei Cheng; Le Hui; Jin Xie; Jian Yang", "journal": "", "ref_id": "b12", "title": "Sspc-net: Semi-supervised semantic 3d point cloud segmentation network", "year": "2021" }, { "authors": "Julian Chibane; Francis Engelmann; Tuan Anh Tran; Gerard Pons-Moll", "journal": "", "ref_id": "b13", "title": "Box2mask: Weakly supervised 3d semantic instance segmentation using bounding boxes", "year": "2022" }, { "authors": "Christopher Choy; Junyoung Gwak; Silvio Savarese", "journal": "", "ref_id": "b14", "title": "4d spatio-temporal convnets: Minkowski convolutional neural networks", "year": "2019" }, { "authors": "Angela Dai; Angel X Chang; Manolis Savva; Maciej Halber; Thomas Funkhouser; Matthias Nießner", "journal": "IEEE", "ref_id": "b15", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua 
Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "ICLR", "ref_id": "b16", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Oren Dovrat; Itai Lang; Shai Avidan", "journal": "", "ref_id": "b17", "title": "Learning to sample", "year": "2019-06" }, { "authors": "M Everingham; S M A Eslami; L Van Gool; C K I Williams; J Winn; A Zisserman", "journal": "International Journal of Computer Vision", "ref_id": "b18", "title": "The pascal visual object classes challenge: A retrospective", "year": "2006" }, { "authors": "Zhengyang Feng; Qianyu Zhou; Qiqi Gu; Xin Tan; Guangliang Cheng; Xuequan Lu; Jianping Shi; Lizhuang Ma", "journal": "Pattern Recognition", "ref_id": "b19", "title": "Dmt: Dynamic mutual training for semi-supervised learning", "year": "2005" }, { "authors": "Biao Gao; Yancheng Pan; Chengkun Li; Sibo Geng; Huijing Zhao", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b20", "title": "Are we hungry for 3d lidar data for semantic segmentation? a survey of datasets and methods", "year": "2022-07-01" }, { "authors": "Benjamin Graham; Martin Engelcke; Laurens Van Der Maaten", "journal": "", "ref_id": "b21", "title": "3d semantic segmentation with submanifold sparse convolutional networks", "year": "2018-06" }, { "authors": "Benjamin Graham; Laurens Van Der Maaten", "journal": "", "ref_id": "b22", "title": "Submanifold sparse convolutional networks", "year": "2017" }, { "authors": "Yves Grandvalet; Yoshua Bengio", "journal": "MIT Press", "ref_id": "b23", "title": "Semi-supervised learning by entropy minimization", "year": "2004" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altch'e; Corentin Tallec; Pierre H Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Ávila Pires; Zhaohan Daniel Guo; Mohammad Gheshlaghi Azar; Bilal Piot; Koray Kavukcuoglu; Rémi Munos; Michal Valko", "journal": "", "ref_id": "b24", "title": "Bootstrap your own latent: A new approach to self-supervised learning", "year": "2020" }, { "authors": "Meng-Hao Guo; Junxiong Cai; Zheng-Ning Liu; Tai-Jiang Mu; Ralph R Martin; Shi-Min Hu", "journal": "", "ref_id": "b25", "title": "PCT: point cloud transformer", "year": "2020" }, { "authors": "Yulan Guo; Hanyun Wang; Qingyong Hu; Hao Liu; Li Liu; Mohammed Bennamoun", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b26", "title": "Deep learning for 3d point clouds: A survey", "year": "2020" }, { "authors": "Lei Han; Tian Zheng; Lan Xu; Lu Fang", "journal": "", "ref_id": "b27", "title": "Occuseg: Occupancy-aware 3d instance segmentation", "year": "2020" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b28", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2019" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "IEEE Computer Society", "ref_id": "b29", "title": "Deep residual learning for image recognition", "year": "2016-06" }, { "authors": "Ruifei He; Jihan Yang; Xiaojuan Qi", "journal": "", "ref_id": "b30", "title": "Re-distributing biased pseudo labels for semi-supervised semantic segmentation: A baseline investigation", "year": "2021" }, { "authors": "Qingyong Hu; Bo Yang; Guangchi Fang; Yulan Guo; Ales Leonardis; Niki Trigoni; Andrew Markham", "journal": "", "ref_id": "b31", "title": "Sqn: Weakly-supervised semantic segmentation of 
large-scale 3d point clouds", "year": "2022" }, { "authors": "Qingyong Hu; Bo Yang; Sheikh Khalid; Wen Xiao; Niki Trigoni; Andrew Markham", "journal": "International Journal of Computer Vision", "ref_id": "b32", "title": "Sensaturban: Learning semantics from urban-scale photogrammetric point clouds", "year": "2022" }, { "authors": "Qingyong Hu; Bo Yang; Linhai Xie; Stefano Rosa; Yulan Guo; Zhihua Wang; Niki Trigoni; Andrew Markham", "journal": "", "ref_id": "b33", "title": "Randla-net: Efficient semantic segmentation of large-scale point clouds", "year": "2019" }, { "authors": "Zeyu Hu; Mingmin Zhen; Xuyang Bai; Hongbo Fu; Chiew-Lan Tai", "journal": "", "ref_id": "b34", "title": "Jsenet: Joint semantic segmentation and edge detection network for 3d point clouds", "year": "2020" }, { "authors": "Zhuo Huang; Zhiyou Zhao; Banghuai Li; Jungong Han", "journal": "", "ref_id": "b35", "title": "Lcpformer: Towards effective 3d point cloud analysis via local context propagation in transformers", "year": "2022" }, { "authors": "Li Jiang; Shaoshuai Shi; Zhuotao Tian; Xin Lai; Shu Liu; Chi-Wing Fu; Jiaya Jia", "journal": "", "ref_id": "b36", "title": "Guided point contrastive learning for semi-supervised point cloud semantic segmentation", "year": "2021" }, { "authors": "Yi Kun; Wu Jianxin", "journal": "", "ref_id": "b37", "title": "Probabilistic End-to-end Noise Correction for Learning with Noisy Labels", "year": "2019" }, { "authors": "Abhijit Kundu; Xiaoqi Yin; Alireza Fathi; David A Ross; Brian Brewington; Thomas A Funkhouser; Caroline Pantofaru", "journal": "", "ref_id": "b38", "title": "Virtual multi-view fusion for 3d semantic segmentation", "year": "2020" }, { "authors": "Hyeokjun Kweon; Kuk-Jin Yoon", "journal": "", "ref_id": "b39", "title": "Joint learning of 2d-3d weakly supervised semantic segmentation", "year": "2022" }, { "authors": "Donghyeon Kwon; Suha Kwak", "journal": "", "ref_id": "b40", "title": "Semi-supervised semantic segmentation with error localization network", "year": "2022-06" }, { "authors": "Xin Lai; Jianhui Liu; Li Jiang; Liwei Wang; Hengshuang Zhao; Shu Liu; Xiaojuan Qi; Jiaya Jia", "journal": "", "ref_id": "b41", "title": "Stratified transformer for 3d point cloud segmentation", "year": "2022" }, { "authors": "Yuxiang Lan; Yachao Zhang; Yanyun Qu; Cong Wang; Chengyang Li; Jia Cai; Yuan Xie; Zongze Wu", "journal": "", "ref_id": "b42", "title": "Weakly supervised 3d segmentation via receptive-driven pseudo label consistency and structural consistency", "year": "2023-06" }, { "authors": "L Landrieu; M Simonovsky", "journal": "", "ref_id": "b43", "title": "Large-scale point cloud semantic segmentation with superpoint graphs", "year": "2018" }, { "authors": "Felix Järemo Lawin; Martin Danelljan; Patrik Tosteberg; Goutam Bhat; Fahad Shahbaz Khan; Michael Felsberg", "journal": "", "ref_id": "b44", "title": "Deep projective 3d semantic segmentation", "year": "2017" }, { "authors": "Dong-Hyun Lee", "journal": "", "ref_id": "b45", "title": "Pseudo-label : The simple and efficient semi-supervised learning method for deep neural networks", "year": "2013" }, { "authors": "Mengtian Li; Yuan Xie; Yunhang Shen; Bo Ke; Ruizhi Qiao; Bo Ren; Shaohui Lin; Lizhuang Ma", "journal": "", "ref_id": "b46", "title": "Hybridcr: Weakly-supervised 3d point cloud semantic segmentation via hybrid contrastive regularization", "year": "2022-06-01" }, { "authors": "Yangyan Li; Rui Bu; Mingchao Sun; Wei Wu; Xinhan Di; Baoquan Chen", "journal": "", "ref_id": "b47", "title": "Pointcnn: Convolution on 
x-transformed points", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b48", "title": "", "year": "2018" }, { "authors": "Di Lin; Jifeng Dai; Jiaya Jia; Kaiming He; Jian Sun", "journal": "", "ref_id": "b49", "title": "Scribblesup: Scribble-supervised convolutional networks for semantic segmentation", "year": "2016" }, { "authors": "Yuyuan Liu; Yu Tian; Yuanhong Chen; Fengbei Liu; Vasileios Belagiannis; Gustavo Carneiro", "journal": "", "ref_id": "b50", "title": "Perturbed and strict mean teachers for semi-supervised semantic segmentation", "year": "2022-06" }, { "authors": "Ze Liu; Han Hu; Yue Cao; Zheng Zhang; Xin Tong", "journal": "ECCV", "ref_id": "b51", "title": "A closer look at local aggregation operators in point cloud analysis", "year": "2020" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b52", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Zhengzhe Liu; Xiaojuan Qi; Chi-Wing Fu", "journal": "", "ref_id": "b53", "title": "One thing one click: A self-training approach for weakly supervised 3d semantic segmentation", "year": "2021" }, { "authors": "Tao Lu; Limin Wang; Gangshan Wu", "journal": "", "ref_id": "b54", "title": "Cga-net: Category guided aggregation for point cloud semantic segmentation", "year": "2021-06" }, { "authors": "A Milioto; I Vizzo; J Behley; C Stachniss", "journal": "", "ref_id": "b55", "title": "Rangenet++: Fast and accurate lidar semantic segmentation", "year": "2019" }, { "authors": "Charles Ruizhongtai; Qi ; Hao Su; Kaichun Mo; Leonidas J Guibas", "journal": "", "ref_id": "b56", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2016" }, { "authors": "Charles Ruizhongtai; Qi ; Li Yi; Hao Su; Leonidas J Guibas", "journal": "", "ref_id": "b57", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Guocheng Qian; Yuchen Li; Houwen Peng; Jinjie Mai; Hasan Hammoud; Mohamed Elhoseiny; Bernard Ghanem", "journal": "", "ref_id": "b58", "title": "Pointnext: Revisiting pointnet++ with improved training and scaling strategies", "year": "2022" }, { "authors": "Haibo Qiu; Baosheng Yu; Dacheng Tao", "journal": "", "ref_id": "b59", "title": "Collect-and-distribute transformer for 3d point cloud analysis", "year": "" }, { "authors": "Mamshad Nayeem Rizve; Kevin Duarte; Yogesh S Rawat; Mubarak Shah", "journal": "", "ref_id": "b60", "title": "In defense of pseudolabeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning", "year": "2021" }, { "authors": "Claude Elwood; Shannon ", "journal": "The Bell System Technical Journal", "ref_id": "b61", "title": "A mathematical theory of communication", "year": "1948" }, { "authors": "Hanyu Shi; Jiacheng Wei; Ruibo Li; Fayao Liu; Guosheng Lin", "journal": "", "ref_id": "b62", "title": "Weakly supervised segmentation on outdoor 4d point clouds with temporal matching and spatial graph propagation", "year": "2022" }, { "authors": "Jake Snell; Kevin Swersky; Richard Zemel", "journal": "", "ref_id": "b63", "title": "Prototypical networks for few-shot learning", "year": "2017" }, { "authors": "Kihyuk Sohn; David Berthelot; Nicholas Carlini; Zizhao Zhang; Han Zhang; Colin A Raffel; Ekin Dogus Cubuk; Alexey Kurakin; Chun-Liang Li", "journal": "", "ref_id": "b64", "title": "Fixmatch: Simplifying semi-supervised 
learning with consistency and confidence", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b65", "title": "", "year": "2020" }, { "authors": "Hwanjun Song; Minseok Kim; Dongmin Park; Yooju Shin; Jae-Gil Lee", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b66", "title": "Learning from noisy labels with deep neural networks: A survey", "year": "2022" }, { "authors": "Liyao Tang; Yibing Zhan; Zhe Chen; Baosheng Yu; Dacheng Tao", "journal": "", "ref_id": "b67", "title": "Contrastive boundary learning for point cloud segmentation", "year": "2022" }, { "authors": "Lyne Tchapmi; Christopher Choy; Iro Armeni; Junyoung Gwak; Silvio Savarese", "journal": "", "ref_id": "b68", "title": "Segcloud: Semantic segmentation of 3d point clouds", "year": "2017-10" }, { "authors": "Hugues Thomas; Charles R Qi; Jean-Emmanuel Deschaud; Beatriz Marcotegui; François Goulette; Leonidas J Guibas", "journal": "", "ref_id": "b69", "title": "Kpconv: Flexible and deformable convolution for point clouds", "year": "2019" }, { "authors": "Ozan Unal; Dengxin Dai; Luc Van Gool", "journal": "", "ref_id": "b70", "title": "Scribble-supervised lidar semantic segmentation", "year": "2022" }, { "authors": "Guo-Hua Wang; Jianxin Wu", "journal": "", "ref_id": "b71", "title": "Repetitive reprediction deep decipher for semi-supervised learning", "year": "2003" }, { "authors": "Peng-Shuai Wang; Yang Liu; Yu-Xiao Guo; Chun-Yu Sun; Xin Tong", "journal": "ACM Transactions on Graphics (SIGGRAPH)", "ref_id": "b72", "title": "O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis", "year": "2017" }, { "authors": "Yidong Wang; Hao Chen; Qiang Heng; Wenxin Hou; Zhen Yue Fan; Jindong Wu; Marios Wang; Takahiro Savvides; Bhiksha Shinozaki; Bernt Raj; Xing Schiele; Xie", "journal": "", "ref_id": "b73", "title": "Freematch: Self-adaptive thresholding for semi-supervised learning", "year": "2023" }, { "authors": "Yuxi Wang; Junran Peng; Zhaoxiang Zhang", "journal": "", "ref_id": "b74", "title": "Uncertainty-aware pseudo label refinery for domain adaptive semantic segmentation", "year": "2021" }, { "authors": "Yue Wang; Yongbin Sun; Ziwei Liu; Sanjay E Sarma; Michael M Bronstein; Justin M Solomon", "journal": "", "ref_id": "b75", "title": "Dynamic graph CNN for learning on point clouds", "year": "2018" }, { "authors": "Yuchao Wang; Haochen Wang; Yujun Shen; Jingjing Fei; Wei Li; Guoqiang Jin; Liwei Wu; Rui Zhao; Xinyi Le", "journal": "", "ref_id": "b76", "title": "Semi-supervised semantic segmentation using unreliable pseudo-labels", "year": "2022" }, { "authors": "Zongji Wang; Feng Lu", "journal": "IEEE transactions on visualization and computer graphics", "ref_id": "b77", "title": "Voxsegnet: Volumetric cnns for semantic part segmentation of 3d shapes", "year": "2018" }, { "authors": "Jiacheng Wei; Guosheng Lin; Kim-Hui Yap; Tzu-Yi Hung; Lihua Xie", "journal": "", "ref_id": "b78", "title": "Multi-path region mining for weakly supervised 3d semantic segmentation on point clouds", "year": "2020" }, { "authors": "Wenxuan Wu; Zhongang Qi; Fuxin Li", "journal": "", "ref_id": "b79", "title": "Pointconv: Deep convolutional networks on 3d point clouds", "year": "2018" }, { "authors": "Xiaoyang Wu; Yixing Lao; Li Jiang; Xihui Liu; Hengshuang Zhao", "journal": "NeurIPS", "ref_id": "b80", "title": "Point transformer v2: Grouped vector attention and partition-based pooling", "year": "2022" }, { "authors": "Zhonghua Wu; Yicheng Wu; Guosheng Lin; Jianfei Cai", "journal": "", 
"ref_id": "b81", "title": "Reliability-adaptive consistency regularization for weakly-supervised point cloud segmentation", "year": "2023" }, { "authors": "Zhonghua Wu; Yicheng Wu; Guosheng Lin; Jianfei Cai; Chen Qian", "journal": "", "ref_id": "b82", "title": "Dual adaptive transformations for weakly supervised point cloud segmentation", "year": "2022" }, { "authors": "Qizhe Xie; Minh-Thang Luong; Eduard Hovy; Quoc V Le", "journal": "", "ref_id": "b83", "title": "Self-training with noisy student improves imagenet classification", "year": "2019" }, { "authors": "Y Xie; J Tian; X X Zhu", "journal": "IEEE Geoscience and Remote Sensing Magazine", "ref_id": "b84", "title": "Linking points with labels in 3d: A review of point cloud semantic segmentation", "year": "2020" }, { "authors": "Chenfeng Xu; Bichen Wu; Zining Wang; Wei Zhan; Peter Vajda; Kurt Keutzer; Masayoshi Tomizuka", "journal": "", "ref_id": "b85", "title": "Squeezesegv3: Spatially-adaptive convolution for efficient point-cloud segmentation", "year": "2020" }, { "authors": "Xun Xu; Gim Hee; Lee ", "journal": "", "ref_id": "b86", "title": "Weakly supervised semantic point cloud segmentation: Towards 10x fewer labels", "year": "2020" }, { "authors": "Chaoda Xu Yan; Zhen Zheng; Sheng Li; Shuguang Wang; Cui", "journal": "", "ref_id": "b87", "title": "Pointasnl: Robust point clouds processing using nonlocal neural networks with adaptive sampling", "year": "2020" }, { "authors": "Cheng-Kun Yang; Ji-Jia Wu; Kai-Syun Chen; Yung-Yu Chuang; Yen-Yu Lin", "journal": "", "ref_id": "b88", "title": "An mil-derived transformer for weakly supervised point cloud segmentation", "year": "2009" }, { "authors": "Jiancheng Yang; Qiang Zhang; Bingbing Ni; Linguo Li; Jinxian Liu; Mengdie Zhou; Qi Tian", "journal": "", "ref_id": "b89", "title": "Modeling point clouds with self-attention and gumbel subset sampling", "year": "2019" }, { "authors": "Lihe Yang; Lei Qi; Litong Feng; Wayne Zhang; Yinghuan Shi", "journal": "", "ref_id": "b90", "title": "Revisiting weak-to-strong consistency in semi-supervised semantic segmentation", "year": "2023" }, { "authors": "Shuquan Ye; Dongdong Chen; Songfang Han; Jing Liao", "journal": "", "ref_id": "b91", "title": "Learning with noisy labels for robust point cloud segmentation", "year": "2021" }, { "authors": "Bowen Zhang; Yidong Wang; Wenxin Hou; Hao Wu; Jindong Wang; Manabu Okumura; Takahiro Shinozaki", "journal": "", "ref_id": "b92", "title": "Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling", "year": "2021" }, { "authors": "Pan Zhang; Bo Zhang; Ting Zhang; Dong Chen; Fang Wen", "journal": "", "ref_id": "b93", "title": "Robust mutual learning for semisupervised semantic segmentation", "year": "2021" }, { "authors": "Xiaolong Zhang; Zuqiang Su; Xiaolin Hu; Yan Han; Shuxian Wang", "journal": "IEEE Transactions on Industrial Informatics", "ref_id": "b94", "title": "Semisupervised momentum prototype network for gearbox fault diagnosis under limited labeled samples", "year": "2022" }, { "authors": "Yachao Zhang; Zonghao Li; Yuan Xie; Yanyun Qu; Cuihua Li; Tao Mei", "journal": "", "ref_id": "b95", "title": "Weakly supervised semantic segmentation for large-scale point cloud", "year": "2021" }, { "authors": "Yachao Zhang; Yanyun Qu; Yuan Xie; Zonghao Li; Shanshan Zheng; Cuihua Li", "journal": "", "ref_id": "b96", "title": "Perturbed selfdistillation: Weakly supervised large-scale point cloud semantic segmentation", "year": "2021" }, { "authors": "Yan Zhang; Wenhan Zhao; Bo Sun; Ying Zhang; Wen 
Wen", "journal": "Algorithms", "ref_id": "b97", "title": "Point cloud upsampling algorithm: A systematic review", "year": "2022" }, { "authors": "Hengshuang Zhao; Li Jiang; Jiaya Jia; Philip Torr; Vladlen Koltun", "journal": "", "ref_id": "b98", "title": "Point transformer", "year": "2021" }, { "authors": "Zhedong Zheng; Yi Yang", "journal": "International Journal of Computer Vision", "ref_id": "b99", "title": "Rectifying pseudo label learning via uncertainty estimation for domain adaptive semantic segmentation", "year": "2021-01" }, { "authors": "Bolei Zhou; Aditya Khosla; Àgata Lapedriza; Aude Oliva; Antonio Torralba", "journal": "", "ref_id": "b100", "title": "Learning deep features for discriminative localization", "year": "2016" }, { "authors": "Yanning Zhou; Hang Xu; Wei Zhang; Bin Gao; Pheng-Ann Heng", "journal": "", "ref_id": "b101", "title": "C3-semiseg: Contrastive semisupervised segmentation via cross-set learning and dynamic class-balancing", "year": "2021" }, { "authors": "", "journal": "", "ref_id": "b102", "title": "The full results of ERDA with different baselines on ScanNet [test set, obtained from its online benchmark site by the time of submission. settings methods mIoU OA Ground Vegetation Buildings", "year": "" } ]
[ { "formula_coordinates": [ 3, 263.52, 713.2, 241.15, 9.65 ], "formula_id": "formula_0", "formula_text": "L p = λL ER + L DA ,(1)" }, { "formula_coordinates": [ 4, 276.6, 219, 228.07, 9.65 ], "formula_id": "formula_1", "formula_text": "L ER = H(p),(2)" }, { "formula_coordinates": [ 4, 267.15, 429.78, 237.52, 9.65 ], "formula_id": "formula_2", "formula_text": "L DA = KL(p||q),(3)" }, { "formula_coordinates": [ 4, 242.39, 529.9, 262.27, 9.68 ], "formula_id": "formula_3", "formula_text": "L p = H(p, q) + (λ -1)H(p).(4)" }, { "formula_coordinates": [ 4, 242.14, 566.67, 262.52, 19.94 ], "formula_id": "formula_4", "formula_text": "L p = H(p, q) = i -p i log q i(5)" }, { "formula_coordinates": [ 5, 111.81, 74.89, 385.21, 55.95 ], "formula_id": "formula_5", "formula_text": "LDA KL(p||q) KL(q||p) JS(p, q) M SE(p, q) Lp H(p, q) -(1 -λ)H(p) H(q, p) -H(q) + λH(p) H( p + q 2 ) -( 1 2 -λ)H(p) - 1 2 H(q) 1 2 i (pi -qi) 2 + λH(p) S1 0 qi -1k=i 0 0 S2 (λ -1)pi j pj log pi pj 1 K -pi + λpi j pj log pi pj pi j̸ =i pj( 1 2 log Kpi + 1 Kpj + 1 + (λ - 1 2 ) log pi pj ) -p 2 i + pi j p 2 j + λpi j pj log pi pj" }, { "formula_coordinates": [ 6, 187.78, 417.71, 236.44, 42.96 ], "formula_id": "formula_6", "formula_text": "Ĉk = 1 N l k x∈X l ∧y=k g • f (x) , C k ← mC k + (1 -m) Ĉk , ∀x ∈ X u , s k = d(g • f (x), C k ) , p = softmax(s)," }, { "formula_coordinates": [ 6, 208.66, 593.52, 296, 27.49 ], "formula_id": "formula_7", "formula_text": "L = 1 N l x∈X l L ce (q, y) + α 1 N u x∈X u L p (q, p),(6)" }, { "formula_coordinates": [ 16, 111.06, 74.31, 387.34, 110.01 ], "formula_id": "formula_8", "formula_text": "KL(p||q) KL(q||p) JS(p, q) M SE(p, q) Lp H(p, q) -(1 -λ)H(p) H(q, p) -H(q) + λH(p) H( p + q 2 ) -( 1 2 -λ)H(p) - 1 2 H(q) 1 2 i (pi -qi) 2 + λH(p) gi pi j pj(-log qi qj + (1 -λ) log pi pj ) pi -qi -λpi j pj log pi pj pi j pj( -1 2 log pi + qi pj + qj + ( 1 2 -λ) log pi pj ) pi(pi -qi) -pi j pj(pj -qj) -λpi j pj log pi pj Situations ∆ = -gi pk → 1 0 qi -1k=i 0 0 q1 = ... = qK (λ -1)pi j pj log pi pj 1 K -pi + λpi j pj log pi pj pi j̸ =i pj( 1 2 log Kpi + 1 Kpj + 1 + (λ - 1 2 ) log pi pj ) -p 2 i + pi j p 2 j + λpi j pj log pi pj qi → 1 + inf 1 -pi + λpi j pj log pi pj pi j̸ =i pj( 1 2 log pi + 1 pj + (λ - 1 2 ) log pi pj ) -p 2 i + pi(1 -pi) + pi j p 2 j + λpi j pj log pi pj qk̸ =i → 1 -inf -pi + λpi j pj log pi pj pi j̸ =i pj( 1 2 log pi pj + 1j=k + (λ - 1 2 ) log pi pj ) -p 2 i -pipk + pi j p 2 j + λpi j pj log pi pj" } ]
All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning. Existing methods often rely on empirical label selection strategies, such as confidence thresholding, to generate beneficial pseudo-labels for model training. This approach may, however, hinder the comprehensive exploitation of unlabeled data points. We hypothesize that this selective usage arises from the noise in pseudo-labels generated on unlabeled data. The noise in pseudo-labels may result in significant discrepancies between pseudo-labels and model predictions, which can greatly confuse and disrupt model training. To address this issue, we propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions. More specifically, our method introduces an Entropy Regularization loss and a Distribution Alignment loss for weakly supervised learning in 3D segmentation tasks, resulting in an ERDA learning strategy. Interestingly, when the distribution alignment loss is formulated with the KL distance, ERDA reduces to a deceptively simple cross-entropy-based loss that optimizes both the pseudo-label generation network and the 3D segmentation network simultaneously. Despite its simplicity, our method delivers promising performance improvements. We validate its effectiveness through extensive experiments on various baselines and large-scale datasets. Results show that ERDA enables the effective use of all unlabeled data points for learning and achieves state-of-the-art performance under different settings. Remarkably, our method can outperform fully-supervised baselines using only 1% of true annotations.
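To make the reduction described above concrete, here is a minimal per-point sketch of the ERDA term, assuming the KL(p||q) alignment of formulas (1)-(5); the function name and tensor layout are ours, not the authors':

```python
import numpy as np

def erda_pseudo_label_loss(p, q, lam=1.0, eps=1e-12):
    """ERDA term L_p for a single point.

    p : pseudo-label class distribution (pseudo-label generation branch)
    q : predicted class distribution (3D segmentation branch)
    With L_ER = H(p) and L_DA = KL(p||q):
        L_p = lam * H(p) + KL(p||q) = H(p, q) + (lam - 1) * H(p),
    which for lam = 1 is exactly the cross-entropy H(p, q).
    """
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    h_p = -np.sum(p * np.log(p))    # entropy of the pseudo-label, H(p)
    h_pq = -np.sum(p * np.log(q))   # cross-entropy H(p, q)
    return h_pq + (lam - 1.0) * h_p

# toy check: with lam = 1 the term equals the plain cross-entropy between p and q
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
assert np.isclose(erda_pseudo_label_loss(p, q), -np.sum(p * np.log(q)))
```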
Liyao Tang; Zhe Chen; Shanshan Zhao; Chaoyue Wang; Dacheng Tao
[ { "figure_caption": "Figure 1 .1Figure 1. While existing pseudo-labels (a) are limited in the exploitation of unlabeled points, ERDA (b)simultaneously optimizes the pseudo-labels p and predictions q taking the same and simple form of crossentropy. By reducing the noise via entropy regularization and bridging their distributional discrepancies, ERDA produces informative pseudo-labels that neglect the need for label selection. As in (c), it thus enables the model to consistently benefit from more pseudo-labels, surpasses other methods and its fully-supervised baseline, and can be extended to advance the fully-supervised performance.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Detailed illustration of our ERDA with the prototypical pseudo-label generation process, which is shared for both (a) and (b) in Fig. 1. are calculated based on labeled data, and pseudo-labels are estimated based on the feature distances between unlabeled points and class centroids. As shown in Fig. 2, we use a momentum-based prototypical pseudo-label generation process due to its popularity as well as simplicity [94, 85, 93]. Specifically, prototypes [63] denote the class centroids in the feature space, which are calculated based on labeled data, and pseudo-labels are estimated based on the feature distances between unlabeled points and class centroids. To avoid expensive computational costs and compromised representations for each semantic class [94, 87, 47], momentum update is utilized as an approximation for global class centroids.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Contour visualization of the gradient update with binary classes for better understanding. For a clearer view, we use red for positive updates and blue for negative updates, the darker indicates larger absolute values and the lighter indicates smaller absolute values.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Results on S3DIS 6-fold.", "figure_data": "settingsmethods mIoUPointCNN [48]45.8RandLA-Net [34]64.5FullyKPConv [68]68.4HybridCR [47]59.9CloserLook3D + ERDA70.471.020ptsMIL-Trans [87] CloserLook3D + ERDA54.4 57.0CloserLook3D + ERDA PT + ERDA73.7 76.30.1%SQN [32] RandLA-Net + ERDA56.9 62.0zhang et al. [94]65.9zhang et al. [94]51.11%PSD [95] HybridCR [47] RandLA-Net + ERDA68.0 69.2 69.41%PSD [95] HybridCR [47] RandLA-Net + ERDA54.7 56.8 63.0CloserLook3D + ERDA72.3PT + ERDA73.5", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results on ScanNet test.", "figure_data": "settingsmethods mIoUPointNet [56]23.7PointNet++ [57]Fully", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results on SensatUrban test.", "figure_data": "4RandLA-Net + ERDA64.70.1%SQN [32] RandLA-Net + ERDA54.0 56.4methods92183 366 732 1464FixMatch [64] 63.9 73.0 75.5 77.8 79.2+ ERDA 73.5 74.9 78.0 78.5 80.1", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results on Pascal dataset.", "figure_data": "", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Ablations on ERDA. If not specified, the model is RandLA-Net trained with ERDA as well as dense pseudo-labels on S3DIS under the 1% setting and reports in mIoU. 
Default settings are marked in gray .", "figure_data": "ER DA mIoU ent.LDA \\ λ012kone-hot soft ERDAbaseline59.8 ---65.1 66.36463.1 62.8 64.5+ pseudo-labels63.3 2.49KL(p||q) 66.1 67.2 66.650063.0 62.3 64.1✓65.1 2.26KL(q||p) 66.1 65.9 65.21e363.3 63.5 65.5+ ERDA✓ 66.1 2.44JS65.2 65.4 65.11e463.0 62.9 65.6✓ ✓ 67.2 2.40M SE66.0 66.2 66.1dense 62.7 62.6 67.2(a) ERDA improves the results and reduces(b) ER and DA provide better results(c) ERDA consistently benefits thethe entropy (ent.), individually and jointly.when taking KL(p||q) with λ = 1.model with more pseudo-labels (k).", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "The formulation of Lp using different functions to formulate LDA. We present the gradients gi = ∂Lp ∂s i , and the corresponding update ∆ = -gi under different situations. Analysis can be found in the Sec. 3.2 and Appendix C.", "figure_data": "", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Parameter study on ERDA. If not specified, the model is RandLA-Net with ERDA trained with loss weight α = 0.1, momentum m = 0.999, and 2-layer MLPs as projection networks under 1% setting on S3DIS. Default settings are marked in gray .", "figure_data": "", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "settings methods mIoU ceiling floor wall beam column window door table chair sofa bookcase board clutter The full results of ERDA with different baselines on S3DIS 6-fold cross-validation. settings methods mIoU bathtub bed books. cabinet chair counter curtain desk door floor other pic fridge shower sink sofa table toilet wall wndw Fully", "figure_data": "RandLA-Net + ERDA71.094.096.1 83.7 59.248.362.773.6 65.6 78.6 71.566.865.457.9FullyCloserLook3D + ERDA73.794.193.6 85.8 65.550.258.779.2 71.8 79.6 74.873.072.059.5PT + ERDA76.394.997.8 86.2 65.455.264.180.9 84.8 79.3 74.074.069.366.2RandLA-Net + ERDA69.493.892.5 81.7 60.943.060.670.8 65.1 76.4 71.165.365.355.01%CloserLook3D + ERDA72.394.297.5 84.1 62.946.259.273.0 71.5 77.0 73.671.067.761.2PT + ERDA73.594.997.7 85.3 66.753.260.980.8 69.2 78.4 73.367.765.962.1", "figure_id": "tab_9", "figure_label": "10", "figure_type": "table" } ]
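The Figure 2 caption above describes a momentum-based prototypical pseudo-label generator: class centroids are computed from labeled features, updated with momentum, and pseudo-labels come from a softmax over feature distances to the centroids. A minimal NumPy sketch of that process, assuming cosine similarity as the distance d(.,.) (an assumption of this sketch) and the momentum m = 0.999 reported in the parameter study; all function names are ours:

```python
import numpy as np

def update_prototypes(prototypes, labeled_feats, labels, m=0.999):
    """Momentum update of class prototypes (centroids) from labeled point features.

    prototypes    : [K, C] current class centroids C_k
    labeled_feats : [N_l, C] projected features g(f(x)) of labeled points
    labels        : [N_l] ground-truth class indices
    """
    for k in range(prototypes.shape[0]):
        mask = labels == k
        if mask.any():
            c_hat = labeled_feats[mask].mean(axis=0)            # \hat{C}_k
            prototypes[k] = m * prototypes[k] + (1.0 - m) * c_hat
    return prototypes

def generate_pseudo_labels(prototypes, unlabeled_feats, eps=1e-12):
    """Soft pseudo-labels p = softmax(s), with s_k the similarity to prototype C_k.

    Cosine similarity stands in for d(., .) here -- an assumption of this sketch.
    """
    f = unlabeled_feats / (np.linalg.norm(unlabeled_feats, axis=1, keepdims=True) + eps)
    c = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + eps)
    s = f @ c.T                                # [N_u, K] similarity scores
    s = s - s.max(axis=1, keepdims=True)       # numerical stability
    e = np.exp(s)
    return e / e.sum(axis=1, keepdims=True)    # soft pseudo-label distribution p
```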
[{"Category": "Methodological Basis", "Citation": "[27,83]", "Explanation": "The cited works provide a list of applications for point cloud semantic segmentation, which the citing paper uses to highlight the importance of the task in various fields."}, {"Category": "Data Source", "Citation": "[21,33]", "Explanation": "The cited works are used to acknowledge the cost of obtaining large-scale and densely annotated 3D datasets, which the citing paper uses to emphasize the need for alternative methods in point cloud segmentation."}, {"Category": "Extension or Continuation", "Citation": "[27,83]", "Explanation": "The cited works are used to build upon the discussion of point cloud semantic segmentation and its applications, further exploring the field and its potential for future research."}, {"Category": "Supporting Evidence", "Citation": "[21,33]", "Explanation": "The cited works provide evidence of the cost of obtaining large-scale and densely annotated 3D datasets, which the citing paper uses to support the need for alternative methods in point cloud segmentation."}, {"Category": "Methodological Basis", "Citation": "[85,94,32]", "Explanation": "The cited works are used to discuss the use of pseudo-labeling methods in point cloud segmentation, which the citing paper builds upon to highlight the challenges and potential solutions in the field."}, {"Category": "Methodological Basis", "Citation": "[47,87]", "Explanation": "The cited works are used to discuss the use of consistency regularization in point cloud segmentation, which the citing paper builds upon to highlight the recent advancements in the field."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work highlights the need for label selection in pseudo-labels due to the low confidence and potential bias in the assigned values, which the citing paper addresses by proposing a novel learning-based approach to address the problem."}, {"Category": "Extension or Continuation", "Citation": "[2]", "Explanation": "The cited work, S3DIS, is a large-scale point cloud dataset that the citing paper uses to test the effectiveness of their method. The citing paper extends the research by using S3DIS to evaluate the performance of their method in a real-world setting."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work, ScanNet, is a dataset that the citing paper uses to test the performance of their method. The citing paper relies on the data from ScanNet to conduct their research and analyze the results."}, {"Category": "Data Source", "Citation": "[33]", "Explanation": "The cited work, SensatUrban, is another dataset that the citing paper uses to evaluate the effectiveness of their method. 
The citing paper leverages the data from SensatUrban to test the scalability of their method in different settings."}, {"Category": "Data Source", "Citation": "[85]", "Explanation": "The cited work is acknowledged for providing a large-scale dataset with point-wise annotation, which the citing paper uses to address the demand for point-wise annotation in the field of point cloud segmentation."}, {"Category": "Methodological Basis", "Citation": "[99,49,75,1,64]", "Explanation": "The cited works on weakly-supervised 2D image segmentation provide a methodological basis for the citing paper to explore weakly-supervised 3D point cloud segmentation by leveraging similar techniques and strategies."}, {"Category": "Data Source", "Citation": "[85]", "Explanation": "The cited work by Xu and Lee is the first to explore weakly-supervised point cloud segmentation with a focus on highly sparse labels, providing a data source for the citing paper to build upon in their research on the same topic."}, {"Category": "Extension or Continuation", "Citation": "[77,14,40]", "Explanation": "The cited works have explored more advanced ways to exploit weak supervision and human annotations in the context of weakly-supervised 3D segmentation, extending the research in the citing paper to further develop the field."}, {"Category": "Extension or Continuation", "Citation": "[53,69]", "Explanation": "The cited works have also focused on human annotations in the context of weakly-supervised 3D segmentation, providing a basis for the citing paper to further explore the use of human annotations in the field."}, {"Category": "Extension or Continuation", "Citation": "[95]", "Explanation": "The cited work on perturbed self-distillation has introduced a new method for weakly-supervised 3D segmentation, which the citing paper can build upon to further develop the field and explore new techniques."}, {"Category": "Extension or Continuation", "Citation": "[85,62,80,81,43]", "Explanation": "The cited works have explored the use of consistency regularization in the context of weakly-supervised 3D segmentation, providing a basis for the citing paper to further develop and expand upon this technique in the field."}, {"Category": "Extension or Continuation", "Citation": "[62,37,47,87]", "Explanation": "The cited works have leveraged self-supervised learning in the context of weakly-supervised 3D segmentation, which the citing paper can build upon to further explore the use of this technique in the field."}, {"Category": "Extension or Continuation", "Citation": "[29,10]", "Explanation": "The cited works on contrastive learning have been used in the context of weakly-supervised 3D segmentation, providing a basis for the citing paper to further develop and expand upon this technique in the field."}, {"Category": "Methodological Basis", "Citation": "[94]", "Explanation": "The cited work on colorization tasks pre-training networks is used as a methodological basis for the citing paper to explore the use of unlabeled data in 3D segmentation tasks."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work on iterative training is used as a methodological basis for the citing paper to discuss the use of iterative training in learning pseudo-labels in 3D segmentation tasks."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The cited work on using separate networks to iterate between learning pseudo-labels and training 3D segmentation networks is used as a methodological basis for the 
citing paper to discuss the use of separate networks in learning pseudo-labels in 3D segmentation tasks."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work on using super-point graph with graph attentional module to propagate the limited labels over super-points is used as a methodological basis for the citing paper to discuss the use of super-point graph in propagating limited labels in 3D segmentation tasks."}, {"Category": "Data Source", "Citation": "[95]", "Explanation": "The cited work on hand-crafted 3D data augmentations is used as a data source to highlight the need for expensive training in existing methods for 3D segmentation tasks."}, {"Category": "Data Source", "Citation": "[87]", "Explanation": "The cited work on hand-crafted 3D data augmentations is used as a data source to highlight the need for expensive training in existing methods for 3D segmentation tasks."}, {"Category": "Data Source", "Citation": "[80]", "Explanation": "The cited work on hand-crafted 3D data augmentations is used as a data source to highlight the need for expensive training in existing methods for 3D segmentation tasks."}, {"Category": "Data Source", "Citation": "[81]", "Explanation": "The cited work on hand-crafted 3D data augmentations is used as a data source to highlight the need for expensive training in existing methods for 3D segmentation tasks."}, {"Category": "Data Source", "Citation": "[98]", "Explanation": "The cited work on domain adaptation is used as a data source to highlight the need for additional modules in existing methods for 3D segmentation tasks."}, {"Category": "Data Source", "Citation": "[73]", "Explanation": "The cited work on domain adaptation is used as a data source to highlight the need for additional modules in existing methods for 3D segmentation tasks."}, {"Category": "Methodological Basis", "Citation": "[70,20,92]", "Explanation": "The cited works on mutual learning provide a basis for the method proposed in the citing paper to address bias in supervision by using iterative training to improve the quality of label selection."}, {"Category": "Data Source", "Citation": "[100]", "Explanation": "The cited work on class balancing serves as a data source for the method discussed in the citing paper, which aims to address bias in supervision by using class balancing techniques to improve the quality of label selection."}, {"Category": "Extension or Continuation", "Citation": "[31,41]", "Explanation": "The cited works on distribution alignment provide an extension to the method proposed in the citing paper for addressing bias in supervision by using distribution alignment techniques to improve the quality of label selection."}, {"Category": "Methodological Basis", "Citation": "[70,38]", "Explanation": "The cited works on mutual learning provide a method of data augmentation and repeated training that the citing paper adopts to avoid feature collapse in their own research."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work introduces the concept of entropy regularization loss, which the citing paper adopts in their proposed ERDA approach to improve the quality of pseudo-labels and reduce distribution gaps."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work also contributes to the ERDA approach by providing a modulatory parameter for the entropy regularization loss, which the citing paper uses to control the level of noise in the generated pseudo-labels."}, 
{"Category": "Methodological Basis", "Citation": "[61]", "Explanation": "The cited work provides a method for reducing the noise level in pseudo-labels by minimizing the Shannon entropy, which the citing paper adopts to improve the labeling results in their research."}, {"Category": "Extension or Continuation", "Citation": "[100]", "Explanation": "The cited work highlights the potential for discrepancies between labeled and unlabeled data, which the citing paper extends by proposing a method to address this issue in their research."}, {"Category": "Extension or Continuation", "Citation": "[92,20]", "Explanation": "The cited works discuss variations in pseudo-labeling methods and segmentation methods, which the citing paper extends by proposing a method to mitigate the impact of these variations in their research."}, {"Category": "Supporting Evidence", "Citation": "[20,92,41,38]", "Explanation": "The cited works have proven the effectiveness of KL divergence in mutual learning methods, which supports the use of KL divergence in the distribution alignment loss formulation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work, S3DIS, provides a dataset that the citing paper uses in their experiments to evaluate the performance of their proposed method."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work, ScanNet, is a dataset that the citing paper utilizes in their research to gather data and perform experiments."}, {"Category": "Data Source", "Citation": "[33]", "Explanation": "The cited work, SensatUrban, is a dataset that the citing paper uses in their research to gather data and perform experiments."}, {"Category": "Data Source", "Citation": "[19]", "Explanation": "The cited work, Pascal, is a dataset that the citing paper uses in their research to gather data and perform experiments."}, {"Category": "Extension or Continuation", "Citation": "[2]", "Explanation": "The citing paper extends the research of S3DIS by proposing a new method and conducting experiments on the dataset to improve performance."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work RandLA-Net is the primary baseline for the training of the point cloud segmentation model in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[51]", "Explanation": "The cited work CloserLook3D is also used as a baseline for the training of the point cloud segmentation model in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work on transformer models has revolutionized the field of computer vision and point cloud segmentation, and the citing paper incorporates the PointTransformer as a baseline to study the data-hungry nature of transformer training for point cloud segmentation with weak supervision."}, {"Category": "Supporting Evidence", "Citation": "[2]", "Explanation": "The cited work, S3DIS, is a large-scale point cloud segmentation dataset that is used in the citing paper to evaluate the performance of the proposed method, ERDA, in terms of improving the accuracy of indoor area segmentation across different classes."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work RandLA-Net serves as the baseline for the ERDA model, providing the foundational method for the improvement in segmentation performance shown in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[68]", "Explanation": 
"The cited work KPConv is used to compare the performance of the ERDA model, providing evidence of the improvement in segmentation results over the baseline method."}, {"Category": "Extension or Continuation", "Citation": "[47]", "Explanation": "The cited work HybridCR is used to further extend the research on the improvement of the ERDA model, exploring new dimensions and variables in the segmentation task."}, {"Category": "Data Source", "Citation": "[97]", "Explanation": "The cited work PT is used to acknowledge the data source for the performance comparison in the citing paper, highlighting the reliance on external data for the study conducted."}, {"Category": "Methodological Basis", "Citation": "[58]", "Explanation": "The cited work PointNeXt -XL is used to provide a methodological basis for the ERDA model, serving as a reference for the improvement in segmentation performance shown in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work LCPFormer is used to compare the performance of the ERDA model, providing evidence of the improvement in segmentation results over the baseline method."}, {"Category": "Extension or Continuation", "Citation": "[38,90,65]", "Explanation": "The cited works highlight the problem of label noise in ground-truth labels, which the citing paper extends to discuss the potential benefits of pseudo-labels generated from ERDA learning in stabilizing fully-supervised learning."}, {"Category": "Extension or Continuation", "Citation": "[16]", "Explanation": "The cited work, ScanNet, provides a dataset for indoor point cloud analysis that the citing paper extends by evaluating performance on both common and data efficient settings, including 1% labels and 20 points with ground truth labels."}, {"Category": "Supporting Evidence", "Citation": "[87]", "Explanation": "The cited work, MIL-transformer, provides a model that the citing paper compares to in the 20pts setting, demonstrating the effectiveness of the ERDA method in providing effective supervision signals for improving performance in point cloud analysis."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work, SQN, serves as a baseline for the performance comparison in the citing paper and is used to evaluate the effectiveness of the method presented in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[33]", "Explanation": "The cited work, SensatUrban, is a dataset that the citing paper uses to demonstrate the robustness of the method presented in the paper to different types of datasets and the ability to effectively exploit limited annotations and unlabeled points."}, {"Category": "Data Source", "Citation": "[9]", "Explanation": "The cited work, DeepLabv3+, is a model that the citing paper uses in the study of semi-supervised segmentation on image data, demonstrating the method presented in the paper's potential in generalizing to similar 2D settings."}, {"Category": "Methodological Basis", "Citation": "[10,11,25]", "Explanation": "The cited works suggest that a further projection on feature representation can improve performance by decoupling the features for ERDA learning on the pseudo-label generation task from the features for the segmentation task, which aligns with the proposed method in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work, RandLA-Net, provides a training and evaluation method that the citing paper follows in their 
research on point cloud analysis."}, {"Category": "Data Source", "Citation": "[51]", "Explanation": "The cited work, CloserLook3D, is the source of a local aggregation operation that the citing paper uses in their research on point cloud analysis."}, {"Category": "Data Source", "Citation": "[97]", "Explanation": "The cited work, point transformer (PT), is the source of a method that the citing paper follows in their research on point cloud analysis."}, {"Category": "Data Source", "Citation": "[64]", "Explanation": "The cited work, FixMatch, is the source of a publicly available implementation that the citing paper utilizes in their research on point cloud analysis."}, {"Category": "Data Source", "Citation": "[2]", "Explanation": "The cited work, S3DIS, is a dataset that the citing paper uses for cross-validation in their experiments."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work, ScanNet, is a dataset that the citing paper evaluates on in their online test server."}, {"Category": "Data Source", "Citation": "[33]", "Explanation": "The cited work, SensatUrban, is a dataset that the citing paper evaluates on in their online test server."}]
[ { "figure_ref": [], "heading": "INTRODUCTION AND RELATED WORK", "publication_ref": [ "b11", "b39", "b7", "b23", "b32", "b31", "b41", "b0", "b7", "b8", "b23", "b41", "b10", "b32", "b38", "b46", "b20", "b45", "b4", "b37", "b39", "b38", "b3", "b30", "b17", "b9", "b27", "b26", "b35", "b12", "b21", "b22", "b40", "b49", "b19", "b34", "b14", "b15", "b33", "b43", "b46", "b47" ], "table_ref": [], "text": "Pose-estimation methods [12] can detect 3D human-body keypoints in a single RGB video stream. The keypoints detected in individual frames constitute a simplified spatio-temporal representation of human motion in the form of a so-called skeleton sequence. As indicated in [40], the analysis of such representation opens unprecedented application potential in many domains, ranging from virtual reality, through robotics and security, to sports and medicine. The ever-increasing popularity of skeleton data calls for technologies able to effectively and efficiently access large volumes of such spatio-temporal data based on its content.\nResearch in skeleton-data processing mainly focuses on designing deep-learning architectures for classification of labeled actions [8,24,33] or detection of such actions in continuous streams [32,42]. The proposed architectures are often learned in a supervised way based on transformers [1,8,9], convolutional [24], recurrent [42], or graph-convolutional [11,33] networks. Recently, self-supervised methods are becoming increasingly popular as they can learn motion semantics without knowledge of labels using reconstructionbased [39,47] or contrastive-based learning [21,46].\nThe trained architectures can serve as motion encoders that express the motion semantics by a high-dimensional feature vector extracted from the last hidden network layer. This concept can be transferred to the motion retrieval task to support content-based access based on the query-by-example paradigm [5,38,40], which aims at identifying the database motions that are the most similar to a user-defined query motion. Besides balancing descriptiveness and indexability of the motion features, the most critical issue is to specify a convenient query motion example. The example can be selected from available skeleton sequences [39], drawn in a visualization-driven graphical user interface [4], modeled by puppet interfaces [31], specified as a set of logical constraints [18], or artificially generated [10]. However, such a query example may not ever exist, or its construction requires professional modeling skills. This paper focuses on motion retrieval but simplifies query specification by enabling users to formulate a query by free text.\nWith the current advances in cross-modal learning, especially in the field of textual-visual processing, the trend is to learn common multi-modal spaces [28] so that similar images can be described and searched with textual descriptions [27]. A representative example is the CLIP model [36], which learns an effective common space for the visual and textual modalities. This allows the use of open vocabularies or complex textual queries for searching images.\nOur work has many analogies with the text-to-video retrieval task [13,22,23,41,50], given that the moving skeleton also evolves in space and time. Despite the popularity of such powerful and versatile text-vision models, no effort has been made for the skeletondata modality. Differently from video data, the skeleton is anonymized and avoids learning many common biases present in video datasets. 
To the best of our knowledge, there is only one approach [20] that relates to text-to-motion matching. However, it uses pre-training and tackles only the classification task. A few available datasets providing the training data for text-to-motion retrieval -e.g., the KIT Motion Language [35] and recently-released HumanML3D [15] datasets -are primarily used for motion generation from a textual description [16,34,44,47,48], where the idea is to align text and motion embeddings into a common space, but never explicitly handling the text-to-motion retrieval task." }, { "figure_ref": [], "heading": "Contributions of this Paper", "publication_ref": [ "b35", "b29", "b2", "b10" ], "table_ref": [], "text": "We tackle the above-mentioned gap by introducing a novel text-tomotion retrieval task, which aims at searching databases of skeleton sequences and retrieving those that are the most relevant to a detailed textual query. For this task, we define evaluation metrics, establish new qualitative baselines, and propose the first text-tomotion retrieval approach. These initial contributions can be employed for future studies on this challenging yet unexplored task.\nSpecifically, one of the main paper contributions is the proposal of a fair baseline by adopting promising (1) motion encoders already employed as backbones in other motion-related tasks and (2) text encoders successfully applied in natural language processing (NLP) and text-to-image retrieval. The core of this baseline is a two-stream pipeline where the motion and text modalities are processed by separate encoders. The obtained representations are then projected into the same common space, for which a metric is learned in a similar way as in CLIP [36] or ALADIN [30] in the text-to-image scenario. The choice of a two-stream pipeline is strategic to make the approach scalable to large motion collections, as feature vectors extracted from both modalities can be easily stored in off-the-shelf indexes implementing efficient similarity search access.\nInspired by recent advances in video processing [3], we also propose a transformer-based motion encoder -the Motion Transformer (MoT) -that employs divided space-time attention on skeleton joints. We show that MoT reaches competitive results with respect to a state-of-the-art motion encoder, DG-STGCN [11], on both KIT Motion Language and HumanML3D datasets. " }, { "figure_ref": [ "fig_0" ], "heading": "TEXT-TO-MOTION RETRIEVAL PIPELINE", "publication_ref": [], "table_ref": [], "text": "The main idea of our approach is to rely on a two-stream pipeline, where motion and text features are first extracted through adhoc encoders and then projected into the same common space, as schematically illustrated in Figure 2. In this section, we sketch the whole pipeline which consists of the: (i) text encoder, (ii) motion encoder, and (iii) loss function used to optimize the common space." }, { "figure_ref": [], "heading": "Text Encoders", "publication_ref": [ "b18", "b35", "b13", "b13", "b35", "b44", "b36" ], "table_ref": [], "text": "Inspired by recent works in NLP, we rely on two pre-trained textual models, namely BERT [19] and the textual encoder from CLIP [36].\nBERT. We use the implementation from [14], which performed the task of motion synthesis conditioned on a natural language prompt. This model stacks together a BERT pre-trained module and an LSTM model composed of two layers for aggregating the BERT output tokens, producing the final text embedding. 
We take the final hidden state of the LSTM model as our final sentence representation. As in [14], the BERT model is fixed. At training time, we only update the LSTM weights.\nCLIP. It is a recently-introduced vision-language model trained in a contrastive manner for projecting images and natural language descriptions in the same common space [36]. Here, we use the textual encoder of CLIP, which is composed of a transformer encoder [45] with modifications introduced in [37], and employs lower-cased byte pair encoding (BPE) representation of the text. We then stack an affine projection on top of the CLIP representation, which -similarly to the BERT+LSTM case -is the only layer to be trained." }, { "figure_ref": [], "heading": "Motion Encoders", "publication_ref": [ "b5", "b13", "b10", "b10", "b2" ], "table_ref": [], "text": "Differently from the textual pipeline, which takes as input an unstructured natural language sentence, the input to motion encoder models is a vector x ∈ R^{𝑇 × 𝐽 × 𝐷}, where 𝑇 is the time length of the motion, 𝐽 is the number of joints of the human-body skeleton, and 𝐷 is the number of features used to encode each joint.\nBidirectional GRU. This architecture is widely adopted in time-series processing, and an early variant that used LSTM was applied to frame-level action detection in continuous motion data [6]. In particular, we first increase the dimensionality of the input -which is 𝐷 = 9 in our case -by using a two-layer feed-forward network (FFN) before feeding it into the GRU: \overrightarrow{z}, \overleftarrow{z} = \overleftrightarrow{\mathrm{GRU}}(\mathrm{FFN}(x)). Then, we compute the final motion embedding by concatenating the representations \overrightarrow{z} and \overleftarrow{z}.\nUpper-Lower GRU. To better learn semantics of different body parts, we adopt the model in [14] to independently process the upper and lower parts of the skeleton using two GRU layers.\nDG-STGCN. This architecture [11] recently reached state-of-the-art results in motion classification. Their GCN module features a spatial module, built of affinity matrices to capture dynamic graphical structures, and a temporal module that performs temporal aggregation using group-wise temporal convolutions. We refer the reader to the original formulation [11] for further details.\nMoT. Our proposed architecture is built on top of the successful transformer-based video processing network ViViT [3]. In the original implementation, which processes a sequence of frames, the dimension 𝐽 is the number of grid-arranged rectangular patches from each frame. In our case, instead, the spatial features come from the joints. Instead of using all individual skeleton joints as 𝐽, we first aggregate them, obtaining features for five different body parts, similar to the pre-processing performed in Upper-Lower GRU. In this way, 𝐽 = 5, which is far less than the total number of skeleton joints. This is beneficial from a computational point of view, and we found that this solution also reaches the best performance." }, { "figure_ref": [], "heading": "Optimization", "publication_ref": [ "b25", "b48", "b35", "b22" ], "table_ref": [], "text": "We explore two widely-adopted metric learning loss functions, namely the symmetric triplet loss widely used in text-to-image retrieval [26] and the InfoNCE loss, introduced for cross-modal matching in [49] and employed in CLIP [36] and recent cross-modal works [23].
We assume (m_𝑖, c_𝑖) is the 𝑖-th motion and caption embedding pair, 𝑆(•, •) is the cosine similarity, and 𝐵 is the batch size.\nThe symmetric triplet loss is defined as in [26], and both losses are sketched below." }, { "figure_ref": [], "heading": "EXPERIMENTAL EVALUATION 3.1 Metrics", "publication_ref": [ "b6", "b25", "b28", "b1" ], "table_ref": [], "text": "Exact-search. Exact-search metrics leverage the intrinsic ground truth available in the employed datasets, where motions come with one (or more) textual descriptions. We can consider motions associated with the given textual query as the exact solutions, while all the other ones are irrelevant by default. In this context, the recall@k measures the percentage of queries that find the correct result within the first k elements of the results list, while the median and mean ranks represent the median and mean rank of the exact result computed over all the queries.\nRelevance-based. There can exist motions relevant to a certain extent to the given textual query that are not paired in the dataset. In this context, the normalized Discounted Cumulative Gain (nDCG) metric is widely employed. The DCG takes into consideration the relevance a specific item has with respect to the query, discounting it with a logarithmic factor that depends on the rank of that item: DCG_p = \sum_{i=1}^{p} \frac{2^{rel_i} - 1}{\log_2(i+1)}. The nDCG normalizes DCG by its maximum theoretical value and thus returns values in the [0, 1] range. We define the relevance similarly to previous works in image-to-text retrieval [7,26,29], which use a proxy relevance between textual descriptions that is much easier to compute. In this work, we use two textual relevance functions: (i) the SPICE relevance [2] -a handcrafted relevance that exploits graphs associated with the syntactic parse trees of the sentences and has a certain degree of robustness against synonyms; and (ii) the spaCy relevance obtained from the spaCy Python tool, which implements a deep learning-powered similarity score for pairs of texts." }, { "figure_ref": [], "heading": "Datasets and Evaluation Protocol", "publication_ref": [ "b14", "b34", "b14", "b42", "b24", "b16" ], "table_ref": [], "text": "We employ two recently introduced datasets, HumanML3D [15] and KIT Motion Language [35]. Both datasets carry one or more human-written descriptions for each motion. We employ the same pre-processing pipeline for both datasets -the one developed in the codebase of the HumanML3D dataset [15]. We employ 𝐷 = 9 features to represent each joint: six features encoding a continuous rotation representation plus three features encoding rotation-invariant forward-kinematics joint positions.\nKIT Motion-Language Dataset contains 3,911 recordings of full-body motion in the Master Motor Map form [43], along with textual descriptions for each motion. It has a total of 6,278 annotations in English, where each motion recording has one or more annotations that explain the action, like \"A human walks two steps forwards, pivots 180 degrees, and walks two steps back\".\nHumanML3D is, in its essence, very similar to the KIT Motion Language Dataset. However, it is a more recent dataset developed by adding textual annotations to already-existing and widely-used motion-capture datasets -AMASS [25] and HumanAct12 [17]. It contains 14,616 motions annotated by 44,970 textual descriptions.\nThe results are reported on the test set of the respective datasets after removing possibly redundant queries.
In particular, we use 938 and 8,401 textual queries to search among 734 and 4,198 motions for the KIT and HumanML3D datasets, respectively. For HumanML3D, these motions are obtained by splitting the originally provided ones using the available segment annotations associating a motion subsequence with the text that describes it. In this sense, HumanML3D enables a finer retrieval, as texts are more likely to describe the correct subsequence instead of the whole motion." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We report text-to-motion retrieval results in Table 1, obtained with the InfoNCE loss (see Section 3.3.1 for a comparison of loss functions). The best results are competitively achieved by both DG-STGCN and our transformer-based MoT. The first remarkable insight is the superiority of CLIP over BERT+LSTM on all the metrics in both datasets. With CLIP, the effectiveness of DG-STGCN and MoT over GRU-based methods is evident, especially on the KIT dataset, where the mean rank is almost 30 % lower. The nDCG metric, through the highly-semantic text-based relevance scores, confirms the trend of the recall@k values, suggesting that the CLIP model paired with GCNs and Transformers can retrieve both exact and relevant results in earlier positions of the results list. Notably, from an absolute perspective, all the methods reach overall low performance on exact search, confirming the difficulty of the introduced text-to-motion retrieval task. This may be due to (i) some intrinsic limitations that are hard to eliminate -e.g., textual descriptions are written by annotators possibly looking at the original video, which the network has no access to -or (ii) difficulties in capturing high-level semantics in motion or text data. In Figure 1, we report two qualitative examples of text-to-motion retrieval using CLIP + MoT, on HumanML3D. We can notice the potential of such a natural-language-based approach to motion retrieval. Specifically, note how the approach is sensitive to asymmetries -in the first case, where the counterclockwise adjective is specified in the query, only the correctly-oriented motions are returned in the first positions; in the second case, where no right or left is specified, both the original and mirrored motions are returned (e.g., the 1st and 2nd results)." }, { "figure_ref": [], "heading": "Ablation Study on Loss Function and Space Dimensionality", "publication_ref": [], "table_ref": [], "text": "In Figure 3, we report performance when varying the dimensionality of the common space, for the two motion models DG-STGCN and MoT employing the CLIP text model. We can notice how, on both metrics in Figure 3a/3b, the effectiveness remains quite high even for very small dimensions of the common space, with a negligible improvement after 256 dimensions. Specifically, with only 16 dimensions instead of 256, the performance drops by only about 6 % on nDCG with SPICE relevance and on average 15 % on Recall@10, considering both motion encoders. This suggests that the intrinsic dimensionality of the learned space is quite small, opening the way for further studies and feature visualization in future works.\nIn Figure 4, we also report the remarkable performance gain achieved by the InfoNCE loss over the standard symmetric triplet loss. We can see how the InfoNCE loss induces the best results in basically all the configurations, confirming its power even in the under-explored text-motion joint domain.
Breaking down the contributions of this variation on the text and motion models in Figures 4a and 4b, respectively, we notice that the best gains are achieved when using the CLIP textual model and the MoT motion model." }, { "figure_ref": [], "heading": "CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduced the task of text-to-motion retrieval as an alternative to query-by-example search, inherently different from searching with a query label drawn from a fixed pool of labels. We employed two state-of-the-art text-encoder networks, as well as widely adopted motion-encoder networks, to learn a common space and produce the first baselines for this novel task. We demonstrated that the CLIP text encoder also works best for encoding domain-specific natural sentences that differ substantially from image-descriptive ones, and that Transformers and GCNs obtain better motion representations than GRU-based encoders. In future work, we plan to train the models jointly on the two datasets and perform cross-dataset evaluation to measure their generalization abilities and robustness. Other improvements include exploiting the video modality in addition to motion and adopting unsupervised pre-training methods to boost performance." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This research was supported by ERDF \"CyberSecurity, CyberCrime and Critical Information Infrastructures Center of Excellence\" (No. CZ.02.1.01/0.0/0.0/16_019/0000822), by AI4Media -A European Excellence Centre for Media, Society, and Democracy (EC, H2020 No. 951911), and by SUN -Social and hUman ceNtered XR (EC, Horizon Europe No. 101092612)." } ]
2023-10-04
10.1145/3539618.3592069
[ { "authors": "Emre Aksan; Manuel Kaufmann; Peng Cao; Otmar Hilliges", "journal": "", "ref_id": "b0", "title": "A Spatiotemporal Transformer for 3D Human Motion Prediction", "year": "2020" }, { "authors": "Peter Anderson; Basura Fernando; Mark Johnson; Stephen Gould", "journal": "Springer", "ref_id": "b1", "title": "Spice: Semantic propositional image caption evaluation", "year": "2016" }, { "authors": "Anurag Arnab; Mostafa Dehghani; Georg Heigold; Chen Sun; Mario Lučić; Cordelia Schmid", "journal": "", "ref_id": "b2", "title": "ViViT: A Video Vision Transformer", "year": "2021" }, { "authors": "J Bernard; N Wilhelm; B Krüger; T May; T Schreck; J Kohlhammer", "journal": "IEEE Transactions on Visualization and Computer Graphics", "ref_id": "b3", "title": "MotionExplorer: Exploratory Search in Human Motion Capture Data Based on Hierarchical Aggregation", "year": "2013" }, { "authors": "Petra Budikova; Jan Sedmidubsky; Pavel Zezula", "journal": "ACM", "ref_id": "b4", "title": "Efficient Indexing of 3D Human Motions", "year": "2021" }, { "authors": "Fabio Carrara; Petr Elias; Jan Sedmidubsky; Pavel Zezula", "journal": "Multimedia Tools and Applications", "ref_id": "b5", "title": "LSTM-based real-time action detection and prediction in human motion streams", "year": "2019" }, { "authors": "Fabio Carrara; Andrea Esuli; Tiziano Fagni; Fabrizio Falchi; Alejandro Moreo Fernández", "journal": "Information Retrieval Journal", "ref_id": "b6", "title": "Picture it in your mind: Generating high level visual representations from textual descriptions", "year": "2018" }, { "authors": "Yi-Bin Cheng; Xipeng Chen; Junhong Chen; Pengxu Wei; Dongyu Zhang; Liang Lin", "journal": "", "ref_id": "b7", "title": "Hierarchical Transformer: Unsupervised Representation Learning for Skeleton-Based Human Action Recognition", "year": "2021" }, { "authors": "Yi-Bin Cheng; Xipeng Chen; Dongyu Zhang; Liang Lin", "journal": "ACM", "ref_id": "b8", "title": "Motion-Transformer: Self-Supervised Pre-Training for Skeleton-Based Action Recognition", "year": "2021" }, { "authors": "Z Deng; Q Gu; Q Li", "journal": "ACM", "ref_id": "b9", "title": "Perceptually consistent example-based human motion retrieval", "year": "2009" }, { "authors": "Jiaqi Haodong Duan; Kai Wang; Dahua Chen; Lin", "journal": "", "ref_id": "b10", "title": "DG-STGCN: Dynamic Spatial-Temporal Modeling for Skeleton-based Action Recognition", "year": "2022" }, { "authors": "Shradha Dubey; Manish Dixit", "journal": "Multimedia Systems", "ref_id": "b11", "title": "A comprehensive survey on human pose estimation approaches", "year": "2022" }, { "authors": "Han Fang; Pengfei Xiong; Luhui Xu; Yu Chen", "journal": "", "ref_id": "b12", "title": "Clip2video: Mastering video-text retrieval via image clip", "year": "2021" }, { "authors": "Anindita Ghosh; Noshaba Cheema; Cennet Oguz; Christian Theobalt; Philipp Slusallek", "journal": "", "ref_id": "b13", "title": "Synthesis of compositional animations from textual descriptions", "year": "2021" }, { "authors": "Chuan Guo; Shihao Zou; Xinxin Zuo; Sen Wang; Wei Ji; Xingyu Li; Li Cheng", "journal": "", "ref_id": "b14", "title": "Generating Diverse and Natural 3D Human Motions From Text", "year": "2022" }, { "authors": "Chuan Guo; Xinxin Zuo; Sen Wang; Li Cheng", "journal": "Springer Nature Switzerland", "ref_id": "b15", "title": "TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts", "year": "2022" }, { "authors": "Chuan Guo; Xinxin Zuo; Sen Wang; Shihao Zou; Qingyao Sun; Annan Deng; 
Minglun Gong; Li Cheng", "journal": "", "ref_id": "b16", "title": "Action2motion: Conditioned generation of 3d human motions", "year": "2020" }, { "authors": "M Kapadia; I-K Chiang; T Thomas; N I Badler; J T Kider", "journal": "ACM", "ref_id": "b17", "title": "Efficient motion retrieval in large motion databases", "year": "2013" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton ; Lee Kristina; Toutanova ", "journal": "", "ref_id": "b18", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Jihoon Kim; Youngjae Yu; Seungyoun Shin; Taehyun Byun; Sungjoon Choi", "journal": "", "ref_id": "b19", "title": "Learning Joint Representation of Human Motion and Language", "year": "2022" }, { "authors": "Lilang Lin; Sijie Song; Wenhan Yang; Jiaying Liu", "journal": "ACM", "ref_id": "b20", "title": "MS2L: Multi-Task Self-Supervised Learning for Skeleton Based Action Recognition", "year": "2020" }, { "authors": "Yu Liu; Huai Chen; Lianghua Huang; Di Chen; Bin Wang; Pan Pan; Lisheng Wang", "journal": "", "ref_id": "b21", "title": "Animating Images to Transfer CLIP for Video-Text Retrieval", "year": "1906" }, { "authors": "Huaishao Luo; Lei Ji; Ming Zhong; Yang Chen; Wen Lei; Nan Duan; Tianrui Li", "journal": "Neurocomputing", "ref_id": "b22", "title": "CLIP4Clip: An empirical study of CLIP for end to end video clip retrieval and captioning", "year": "2022" }, { "authors": "Na Lv; Ying Wang; Zhiquan Feng; Jingliang Peng", "journal": "", "ref_id": "b23", "title": "Deep Hashing for Motion Capture Data Retrieval", "year": "2021" }, { "authors": "Naureen Mahmood; Nima Ghorbani; Gerard Nikolaus F Troje; Michael J Pons-Moll; Black", "journal": "", "ref_id": "b24", "title": "AMASS: Archive of motion capture as surface shapes", "year": "2019" }, { "authors": "Nicola Messina; Giuseppe Amato; Andrea Esuli; Fabrizio Falchi; Claudio Gennaro; Stéphane Marchand-Maillet", "journal": "ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)", "ref_id": "b25", "title": "Fine-grained visual textual alignment for cross-modal retrieval using transformer encoders", "year": "2021" }, { "authors": "Nicola Messina; Giuseppe Amato; Fabrizio Falchi; Claudio Gennaro; Stéphane Marchand-Maillet", "journal": "IEEE", "ref_id": "b26", "title": "Towards efficient cross-modal visual textual retrieval using transformer-encoder deep features", "year": "2021" }, { "authors": "Nicola Messina; Davide Alessandro Coccomini; Andrea Esuli; Fabrizio Falchi", "journal": "", "ref_id": "b27", "title": "Transformer-Based Multi-modal Proposal and Re-Rank for Wikipedia Image-Caption Matching", "year": "2022" }, { "authors": "Nicola Messina; Fabrizio Falchi; Andrea Esuli; Giuseppe Amato", "journal": "IEEE", "ref_id": "b28", "title": "Transformer reasoning network for image-text matching and retrieval", "year": "2021" }, { "authors": "Nicola Messina; Matteo Stefanini; Marcella Cornia; Lorenzo Baraldi; Fabrizio Falchi; Giuseppe Amato; Rita Cucchiara", "journal": "", "ref_id": "b29", "title": "ALADIN: Distilling Finegrained Alignment Scores for Efficient Image-Text Matching and Retrieval", "year": "2022" }, { "authors": "N Numaguchi; A Nakazawa; T Shiratori; J K Hodgins", "journal": "", "ref_id": "b30", "title": "A Puppet Interface for Retrieval of Motion Capture Data", "year": "2011" }, { "authors": "Konstantinos Papadopoulos; Enjie Ghorbel; Renato Baptista; Djamila Aouada; Björn E Ottersten", "journal": "Springer", "ref_id": "b31", "title": "Two-Stage 
RGB-Based Action Detection Using Augmented 3D Poses", "year": "2019" }, { "authors": "Wei Peng; Xiaopeng Hong; Guoying Zhao", "journal": "Pattern Recognition", "ref_id": "b32", "title": "Tripool: Graph Triplet Pooling for 3D Skeleton-Based Action Recognition", "year": "2021" }, { "authors": "Mathis Petrovich; Michael J Black; Gül Varol", "journal": "", "ref_id": "b33", "title": "Action-Conditioned 3D Human Motion Synthesis With Transformer VAE", "year": "2021" }, { "authors": "Matthias Plappert; Christian Mandery; Tamim Asfour", "journal": "Big Data", "ref_id": "b34", "title": "The KIT Motion-Language Dataset", "year": "2016" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b35", "title": "Learning Transferable Visual Models From Natural Language Supervision", "year": "2021" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b36", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Jan Sedmidubsky; Petra Budikova; Vlastislav Dohnal; Pavel Zezula", "journal": "Springer", "ref_id": "b37", "title": "Motion Words: A Text-like Representation of 3D Skeleton Sequences", "year": "2020" }, { "authors": "Jan Sedmidubsky; Fabio Carrara; Giuseppe Amato", "journal": "Springer", "ref_id": "b38", "title": "SegmentCodeList: Unsupervised Representation Learning for Human Skeleton Data Retrieval", "year": "2023" }, { "authors": "Jan Sedmidubsky; Petr Elias; Petra Budikova; Pavel Zezula", "journal": "IEEE Access", "ref_id": "b39", "title": "Contentbased Management of Human Motion Data: Survey and Challenges", "year": "2021" }, { "authors": "Nina Shvetsova; Brian Chen; Andrew Rouditchenko; Samuel Thomas; Brian Kingsbury; S Rogerio; David Feris; James Harwath; Hilde Glass; Kuehne", "journal": "", "ref_id": "b40", "title": "Everything at once-multi-modal fusion transformer for video retrieval", "year": "2022" }, { "authors": "Sijie Song; Cuiling Lan; Junliang Xing; Wenjun Zeng; Jiaying Liu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b41", "title": "Spatio-Temporal Attention-Based LSTM Networks for 3D Action Recognition and Detection", "year": "2018" }, { "authors": "Ömer Terlemez; Stefan Ulbrich; Christian Mandery; Martin Do; Nikolaus Vahrenkamp; Tamim Asfour", "journal": "IEEE", "ref_id": "b42", "title": "Master Motor Map (MMM)-Framework and toolkit for capturing, representing, and reproducing human motion on humanoid robots", "year": "2014" }, { "authors": "Guy Tevet; Sigal Raab; Brian Gordon; Yonatan Shafir; Daniel Cohen-Or; Amit H Bermano", "journal": "", "ref_id": "b43", "title": "Human Motion Diffusion Model", "year": "2022" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b44", "title": "Attention is all you need", "year": "2017" }, { "authors": "Yang Yang; Guangjun Liu; Xuehao Gao", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b45", "title": "Motion Guided Attention Learning for Self-Supervised 3D Human Action Recognition", "year": "2022" }, { "authors": "Jianrong Zhang; Yangsong Zhang; Xiaodong Cun; Shaoli Huang; Yong Zhang; Hongwei Zhao; Hongtao Lu; Xi Shen", "journal": "", 
"ref_id": "b46", "title": "T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations", "year": "2023" }, { "authors": "Mingyuan Zhang; Zhongang Cai; Liang Pan; Fangzhou Hong; Xinying Guo; Lei Yang; Ziwei Liu", "journal": "", "ref_id": "b47", "title": "MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model", "year": "2022" }, { "authors": "Yuhao Zhang; Hang Jiang; Yasuhide Miura; Christopher D Manning; Curtis P Langlotz", "journal": "", "ref_id": "b48", "title": "Contrastive learning of medical visual representations from paired images and text", "year": "2020" }, { "authors": "Shuai Zhao; Linchao Zhu; Xiaohan Wang; Yi Yang", "journal": "", "ref_id": "b49", "title": "Centerclip: Token clustering for efficient text-video retrieval", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 452.2, 673.07, 79.56, 14.31 ], "formula_id": "formula_0", "formula_text": "- → z , ← - z = ← -→ GRU(FFN(x))" } ]
Text-to-Motion Retrieval: Towards Joint Understanding of Human Motion Data and Natural Language
Figure 1: Five motions retrieved for two different queries specified by free text (CLIP as text encoder, MoT as motion encoder). Example queries: "A person walks in a counterclockwise circle." and "The person is kneeling down on all fours to begin to crawl."
Nicola Messina; Jan Sedmidubsky; Fabrizio Falchi; Tomáš Rebok
[ { "figure_caption": "Figure 2 :2Figure 2: Schematic illustration of the learning process of the common space of both the text and motion modalities.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Performance varying space dimensionality.", "figure_data": "", "figure_id": "fig_1", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Text-to-motion retrieval results on both the KIT Motion Language Dataset and HumanML3D Dataset. We report the best and the second-best results with bold and underlined font, respectively.", "figure_data": "KIT Motion Language DatasetHumanML3D DatasetRecall@k ↑Rank ↓nDCG ↑Recall@k ↑Rank ↓nDCG ↑Text Model Motion Modelr1r5r10 mean med SPICE spaCy r1r5r10 mean med SPICE spaCyBiGRU3.7 15.2 23.8 72.3300.271 0.706 2.9 11.8 19.8 253.9 550.250 0.768BERT+LSTMUpperLowerGRU 3.2 15.7 25.3 90.2 DG-STGCN 6.2 24.5 38.2 40.634 170.263 0.697 2.4 10.5 17.7 285.7 68 0.339 0.740 2.0 8.4 14.4 242.0 730.242 0.763 0.231 0.767MoT5.3 21.3 32.0 51.1200.318 0.723 2.5 11.2 19.4 234.5 510.247 0.768BiGRU6.6 21.5 32.3 52.0220.316 0.729 3.4 14.3 23.1 201.9 430.272 0.780CLIPUpperLowerGRU 6.4 22.0 32.2 52.3 DG-STGCN 7.2 26.1 38.2 36.922 16 0.355 0.751 4.1 16.0 26.5 159.6 33 0.291 0.789 0.321 0.732 3.1 12.6 20.8 200.4 47 0.269 0.779MoT6.5 26.4 42.6 35.5 14 0.352 0.748 3.5 14.8 24.5 166.2 380.280 0.785Recall@1022 24 26DG-STGCN MoTnDCG (SPICE)0.27 0.28 0.29DG-STGCN MoT16 32 64 128 256 512 1024 Dimensionality16 32 64 128 256 512 1024 Dimensionality(a) Recall@10(b) nDCG (SPICE)", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "[12]", "Explanation": "The cited work introduces a method for detecting 3D human-body keypoints in a single RGB video stream, which the citing paper builds upon in its research on human motion analysis using skeleton sequences."}, {"Category": "Data Source", "Citation": "[40]", "Explanation": "The cited work provides the context of the study on human motion analysis using skeleton sequences, which the citing paper uses to underpin its research in this area."}, {"Category": "Extension or Continuation", "Citation": "[8,24,33]", "Explanation": "The cited works focus on designing deep-learning architectures for action classification and action detection in continuous streams, which the citing paper extends by exploring new dimensions in the field of skeleton-data processing."}, {"Category": "Extension or Continuation", "Citation": "[1,8,9]", "Explanation": "The cited works on transformers in action classification and action detection in continuous streams are further extended in the citing paper to explore the use of such methods in the field of skeleton-data processing."}, {"Category": "Extension or Continuation", "Citation": "[24]", "Explanation": "The cited work on action classification using convolutional networks is extended in the citing paper to explore the use of such methods in the field of skeleton-data processing."}, {"Category": "Extension or Continuation", "Citation": "[42]", "Explanation": "The cited work on action detection in continuous streams using recurrent networks is extended in the citing paper to explore the use of such methods in the field of skeleton-data processing."}, {"Category": "Extension or Continuation", "Citation": "[11,33]", "Explanation": "The cited works on action detection using graph-convolutional networks are further extended in the citing paper to explore the use of such methods in the field of skeleton-data processing."}, {"Category": "Extension or Continuation", "Citation": "[39,47]", "Explanation": "The cited works on action classification using reconstruction-based methods are extended in the citing paper to explore the use of such methods in the field of skeleton-data processing."}, {"Category": "Extension or Continuation", "Citation": "[21,46]", "Explanation": "The cited works on action detection using contrastive-based learning are further extended in the citing paper to explore the use of such methods in the field of skeleton-data processing."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work on cross-modal learning provides a basis for the citing paper to learn common multi-modal spaces for motion retrieval tasks."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work on textual-visual processing is a methodological basis for the citing paper to learn common multi-modal spaces for text-based query specification in motion retrieval tasks."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work, CLIP model, provides a method for learning an effective common space for the visual and textual modalities, which the citing paper adopts to use open vocabularies or complex textual queries for searching images."}, {"Category": "Extension or Continuation", "Citation": "[13,22,23,41,50]", "Explanation": "The cited works on text-to-video retrieval task provide a basis for the citing paper to explore new dimensions and contexts in the text-to-motion matching task, which is similar to the text-to-video 
retrieval task but with a focus on the skeleton data modality."}, {"Category": "Data Source", "Citation": "[15]", "Explanation": "The cited dataset, HumanML3D, is a source of training data for the text-to-motion retrieval task, which the citing paper utilizes in its research to improve the text-to-motion matching process."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work, CLIP, serves as a methodological basis for the text-tomotion retrieval approach proposed in the citing paper, as it is used to learn a metric in a similar way to the text-to-image retrieval task."}, {"Category": "Data Source", "Citation": "[30]", "Explanation": "The cited work, ALADIN, is a data source for the text-tomotion retrieval approach, as it is used to learn a metric in a similar way to the text-to-image retrieval task."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work inspires the use of a transformer-based motion encoder in the citing paper, which is used to process skeleton joints in a more efficient and effective manner."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work provides the implementation of a pre-trained text model for motion synthesis conditioned on a natural language prompt, which the citing paper adopts in their research to produce sentence representations."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work introduces the vision-language model CLIP, which the citing paper uses to project images and natural language descriptions in a common space for text representation."}, {"Category": "Extension or Continuation", "Citation": "[37]", "Explanation": "The cited work introduces modifications to the CLIP textual encoder, which the citing paper extends to create a new model for text representation."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work introduces the use of a GRU model for frame-level action detection in continuous motion data, which the citing paper adopts in their own research to process the input motion data."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work introduces a model for independently processing the upper and lower parts of the skeleton using GRU layers, which the citing paper adopts in their research to better learn semantics of different body parts in the motion data."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work [11] provides a GCN module that the citing paper adopts in their architecture for motion classification, featuring a spatial module and a temporal module for dynamic graphical structure capture and temporal aggregation using group-wise temporal convolutions."}, {"Category": "Extension or Continuation", "Citation": "[3]", "Explanation": "The cited work [3] introduces the successful transformer-based video processing network ViViT, which the citing paper builds upon to develop a new architecture for motion classification that leverages the capabilities of the original model."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work provides a widely-used text-to-image metric learning loss function that the citing paper adopts in their research on motion and caption embedding pairs."}, {"Category": "Methodological Basis", "Citation": "[49]", "Explanation": "The cited work introduces the InfoNCE Loss for cross-modal matching, which the citing paper employs 
in their research on motion and caption embedding pairs."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work, CLIP, utilizes the InfoNCE Loss in cross-modal matching, which the citing paper builds upon in their research on motion and caption embedding pairs."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work employs the InfoNCE Loss in cross-modal matching, which the citing paper builds upon in their research on motion and caption embedding pairs."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work by SPICE is used as a reference for the definition of relevance in the citing paper, which is a methodological basis for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The cited work by [26] is used as a data source for the image-to-text retrieval in the citing paper, providing a foundational element for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[29]", "Explanation": "The cited work by [29] is used as a data source for the image-to-text retrieval in the citing paper, providing a foundational element for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[15]", "Explanation": "The cited work is the codebase of the HumanML3D dataset, which provides the pre-processing pipeline used in the citing paper to represent each joint in the dataset."}, {"Category": "Data Source", "Citation": "[35]", "Explanation": "The cited work is the KIT Motion Language dataset, which is one of the two datasets employed in the citing paper to carry human-written descriptions for each motion."}, {"Category": "Data Source", "Citation": "[43]", "Explanation": "The cited work is the Master Motor Map form, which is the form used in the KIT Motion Language dataset to represent fullbody motion recordings."}, {"Category": "Data Source", "Citation": "[25]", "Explanation": "The cited work is the AMASS dataset, which is one of the motion-capture datasets used in the HumanML3D dataset to add textual annotations."}, {"Category": "Data Source", "Citation": "[17]", "Explanation": "The cited work is the HumanAct12 dataset, which is another motion-capture dataset used in the HumanML3D dataset to add textual annotations."}]
[ { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b0", "b1", "b7", "b8", "b6", "b4", "b10", "b12", "b13", "b14", "b15" ], "table_ref": [], "text": "Dropout is used with gradient-descent-based algorithms for training neural networks (NNs) (Hinton et al., 2012;Srivastava et al., 2014), which obtains the state-of-the-art test performance in deep learning (Tan and Le, 2019;Helmbold and Long, 2015). The key idea behind dropout is to randomly remove a subset of neurons during the training process, specifically, the output of each neuron is multiplied with a random variable that takes the value 1/p with probability p and zero otherwise. This random variable is independently sampled at each feedforward operation. In contrast to the widespread use and empirical success of dropout, the mechanism by which it helps generalization in deep learning remains an ongoing area of research.\nThe noise structure introduced by stochastic algorithms is important for understanding their training behaviors. A series of recent works reveal that the noise structure inherent in stochastic gradient descent (SGD) plays a crucial role in facilitating the exploration of flatter solutions (Keskar et al., 2016;Feng and Tu, 2021;Zhu et al., 2018). Analogously, training with dropout introduces some noise with a specific type of architecture, acting as an implicit regularizer that facilitates better generalization abilities (Hinton et al., 2012;Srivastava et al., 2014;Wei et al., 2020;Zhang and Xu, 2022;Zhu et al., 2018).\nIn this paper, we first employ the framework of stochastic modified equations (SMEs) (Li et al., 2017) to approximate in distribution the training dynamics of the dropout algorithm applied to two-layer NNs. By employing this approach, we are able to quantify the leading order dynamics of the dropout algorithm and its variants in a precise manner. Additionally, we calculate the covariance structure of the noise generated by the stochasticity incorporated in dropout. We then utilize the covariance structure to understand why NNs trained by dropout have the tendency to possess better generalization abilities from the perspective of flatness (Keskar et al., 2016;Neyshabur et al., 2017).\nWe hypothesize that the flatness-improving ability of dropout noise is attributed to its alignment with the structure of the loss landscape, based on the similarity between the explicit forms of the Hessian and the dropout covariance under intuitive approximations. To investigate this hypothesis, we conduct empirical studies using three different approaches (shown respectively in Fig. 1, Fig. 2(a,b), and Fig. 2(c,d)) to assess the similarity between the flatness of the loss landscape and the noise structure induced by dropout at the obtained minima, and all of them consistently demonstrate two important relationships: i) Inverse variance-flatness relation: The noise is larger at the sharper direction of the loss landscape; ii) Hessian-variance alignment relation: The Hessian of the loss landscape at the found minima aligns with the noise covariance matrix. These two relations are compatible with each other in that they collectively contribute to the ability of the training algorithm to effectively identify flatter minima. 
Our experiments are conducted on several representative datasets, i.e., MNIST (LeCun et al., 1998), CIFAR-100 (Krizhevsky et al., 2009) and Multi30k (Elliott et al., 2016), and also on distinct NN structures, i.e., fully-connected neural networks (FNNs), ResNet-20 (He et al., 2016) and transformer (Vaswani et al., 2017) to demonstrate the universality of our findings." }, { "figure_ref": [], "heading": "Related works", "publication_ref": [ "b16", "b17", "b18", "b19", "b20", "b8", "b21", "b22", "b23", "b25", "b26", "b25", "b26", "b27", "b28", "b29", "b5", "b6", "b30" ], "table_ref": [], "text": "A flurry of recent works aims to shed light on the regularization effect conferred by dropout. Wager et al. (2013) show that dropout performs a form of adaptive regularization in the context of linear regression and logistic problems. McAllester (2013) propose a PAC-Bayesian bound, whereas Wan et al. (2013); Mou et al. (2018) derive some Rademacher-complexity-type error bounds specifically tailored for dropout. Mianjy and Arora (2020) demonstrate that dropout training with logistic loss achieves ε-suboptimality in test error within O(1/ε) iterations. Finally, Zhang and Xu (2022) establish that dropout enhances the flatness of the loss landscape and facilitates condensation through an additional regularization term endowed by dropout.\nContinuous formulations have been extensively utilized to study the dynamical behavior of stochastic algorithms. Li et al. (2017Li et al. ( , 2019) ) present an entirely rigorous and self-contained mathematical formulation of the SME framework that applies to a wide class of stochastic algorithms. Furthermore, Feng et al. (2017) adopt a semigroup approach to investigate the dynamics of SGD and online PCA. Malladi et al. (2022) derive the SME approximations for the adaptive stochastic algorithms including RMSprop and Adam, additionally, they provide efficient experimental verification of the validity of square root scaling rules arising from the SMEs.\nOne noteworthy observation is the association between the flatness of minima and improved generalization ability (Li et al., 2017;Jastrzebski et al., 2017Jastrzebski et al., , 2018)). Specifically, SGD is shown to preferentially select flat minima, especially under conditions of large learning rates and small batch sizes (Jastrzebski et al., 2017(Jastrzebski et al., , 2018;;Wu et al., 2018). Papyan (2018Papyan ( , 2019) ) attribute such enhancement of flatness by SGD to the similarity between covariance of the noise and Hessian of the loss function. Furthermore, Feng and Tu (2021) reveal an inverse variance-flatness relation within the dynamics of SGD. Additionally, Zhu et al. (2018); Wu et al. (2022) unveil the Hessian-variance alignment property of SGD noise, shedding light on the role of SGD in escaping from sharper minima and locating flatter minima." }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [], "table_ref": [], "text": "In this section, we present the notations and definitions that are utilized in our theoretical analysis. We remark that our experimental settings are more general than the counterparts in the theoretical analysis." }, { "figure_ref": [], "heading": "Notations", "publication_ref": [], "table_ref": [], "text": "We set a special vector (1, 1, 1, . . . , 1) ⊺ by 1 := (1, 1, 1, . . . , 1) ⊺ whose dimension varies. We set n for the number of input samples and m for the width of the NN. We let [n] = {1, 2, . . . , n}. 
We denote ⊗ as the Kronecker tensor product, and ⟨•, •⟩ for standard inner product between two vectors. We denote vector L 2 norm as ∥•∥ 2 , vector or function L ∞ norm as ∥•∥ ∞ . Finally, we denote the set of continuous functions f (•) : R D → R possessing continuous derivatives of order up to and including r by C r (R D ), the space of bounded measurable functions by B b (R D ), and the space of bounded continuous functions by C b (R D )." }, { "figure_ref": [], "heading": "Two-layer neural networks and loss function", "publication_ref": [], "table_ref": [], "text": "We consider the empirical risk minimization problem given by the quadratic loss:\nmin θ R S (θ) = 1 2n n i=1 (f θ (x i ) -y i ) 2 ,(1)\nwhere S := {(x i , y i )} n i=1 is the training sample, f θ (x) is the prediction function, θ are the parameters, and their dependence is modeled by a two-layer NN with m hidden neurons\nf θ (x) := m r=1 a r σ(w ⊺ r x),(2)\nwhere x ∈ R d , θ = vec(θ a , θ w ) ∈ R D , where D := m(d+1) throughout this paper. We remark that θ is the set of parameters with θ a = vec({a r } m r=1 ), θ w = vec({w r } m r=1 ), and σ(•) is the activation function. More precisely, θ = vec({q r } m r=1 ), where for each r ∈ [m], q r := (a r , w ⊺ r ) ⊺ , and the bias term b r can be incorporated by expanding x and w r to (x ⊺ , 1) ⊺ and (w ⊺ r , b r ) ⊺ ." }, { "figure_ref": [], "heading": "Dropout", "publication_ref": [], "table_ref": [], "text": "Given fixed learning rate ε > 0, then at the N -th iteration where t N := N ε, a scaling vector η N ∈ R m is sampled with independent random coordinates:\nFor each k ∈ [m], (η N ) k = 1 p\nwith probability p, 0 with probability 1 -p,\n(3) and we observe that {η N } N ≥1 is an i.i.d. Bernoulli sequence with Eη N = 1. With slight abuse of notations, the σ-fields\nF N := {σ(η 1 , η 2 , • • • η N )\n} forms a natural filtration. We then apply dropout to the two-layer NNs by computing\nf θ (x; η) := m r=1 (η) r a r σ(w ⊺ r x),(4)\nand we denote the empirical risk associated with dropout by\nR drop S (θ; η) := 1 2n n i=1 (f θ (x i ; η) -y i ) 2 = 1 2n n i=1 m r=1 (η) r a r σ(w ⊺ r x i ) -y i 2 . (5\n)\nWe observe that the parameters at the N -th step are updated as follows:\nθ N = θ N -1 -ε∇ θ R drop S (θ N -1 ; η N ) ,(6)\nwhere θ 0 := θ(0). Finally, we denote hereafter that for all i ∈ [n],\ne N i := e i (θ N -1 ; η N ) := f θ N -1 (x i ; η N ) -y i ." }, { "figure_ref": [], "heading": "Stochastic modified equations for dropout", "publication_ref": [], "table_ref": [], "text": "In this section, we approximate the iterative process of dropout (6) in the weak sense (Definition 1)." }, { "figure_ref": [], "heading": "Modified loss", "publication_ref": [], "table_ref": [], "text": "As the dropout iteration (6) can be written into\nθ N -θ N -1 = -ε∇ θ R drop S (θ N -1 ; η N ) = - ε n n i=1 e N i ∇ θ e N i . Since θ = vec({q r } m r=1 ) = vec ({(a r , w r )} m r=1 ), then given θ N -1 , for each k ∈ [m],\nthe expectation of the increment restricted to q k reads\nE θ N -1 n i=1 e N i ∇ q k e N i = E θ N -1 n i=1 e N i (η N ) k ∇ q k (a k σ(w ⊺ k x i )) = n i=1 e i ∇ q k (a k σ(w ⊺ k x i )) + 1 -p p n i=1 a k σ(w ⊺ k x i )∇ q k (a k σ(w ⊺ k x i )) ,\nwhere we denote for simplicity that e i := e i (θ) := m r=1 a r σ(w ⊺ r x i ) -y i , and compared with e N i , e i does not depend on the random variable η N . 
Hence, the modified loss L S (•) : R D → R for dropout can be defined as:\nL S (θ) := 1 2n n i=1 e 2 i + 1 -p 2np n i=1 m r=1 a 2 r σ(w ⊺ r x i ) 2 , (7\n)\nin that as θ N -1 is given, by taking conditional expectation, its increment reads\nθ N -θ N -1 = -εE θ N -1 ∇ θ R drop S (θ N -1 ; η N ) = -ε∇ θ L S (θ) θ=θ N -1 ,\nthen in the sense of expectations, {θ N } N ≥0 follows close to the gradient descent (GD) trajectory of L S (θ) with fixed learning rate ε." }, { "figure_ref": [], "heading": "Stochastic modified equations", "publication_ref": [ "b31", "b23", "b21", "b7", "b8" ], "table_ref": [], "text": "Firstly, from the results in Section 4.1, we observe that given θ\nN -1 , θ N -θ N -1 = -ε∇ θ L S (θ) θ=θ N -1 + √ εV (θ N -1 ),(8)\nwhere L S (•) : R D → R is the modified loss defined in (7), and V (•) : R D → R D is a D-dimensional random vector, and when given θ N -1 , V (θ N -1 ) has mean 0 and covariance εΣ(θ N -1 ), where Σ(•) : R D → R D×D , whose expression is deferred to Section 5.1.\nConsider the stochastic differential equation (SDE),\ndΘ t = b (Θ t ) dt + σ (Θ t ) dW t , Θ 0 = Θ(0),(9)\nwhere W t is a standard D-dimensional Brownian motion, and its Euler-Maruyama discretization with step size ε > 0 at the N -th step reads\nΘ εN = Θ ε(N -1) + εb Θ ε(N -1) + √ εσ Θ ε(N -1) Z N ,\nwhere\nZ N ∼ N (0, I D ) and Θ 0 = Θ(0). Thus, if we set b (Θ) := -∇ Θ L S (Θ), σ (Θ) := √ ε (Σ (Θ)) 1 2 , Θ 0 := θ 0 ,(10)\nthen we would expect (9) to be a 'good' approximation of (8) with time identification t = εN . Based on the previous work (Li et al., 2017), we use approximations in the weak sense (Kloeden and Platen, 2011, Section 9.7) since the path of dropout and the corresponding SDE are driven by noises sampled in different spaces.\nTo compare different discrete time approximations, we need to take the rate of weak convergence into consideration, and we also need to choose an appropriate class of functions as the space of test functions. We introduce the following set of smooth functions:\nC M b R D =    f ∈ C M R D ∥f ∥ C M := |β|≤M D β f ∞ < ∞    , (11\n)\nwhere D is the usual differential operator. We remark that C M b (R D ) is a subset of G(R D ), the class of functions with polynomial growth, which is chosen to be the space of test functions in previous works (Li et al., 2017;Kloeden and Platen, 2011;Malladi et al., 2022). Before we proceed to the definition of weak approximation, to ensure the rigor and validity of our analysis, we assume that Assumption 1. There exists T * > 0, such that for any t ∈ [0, T * ], there exists a unique t-continuous solution Θ t to SDE (9). Furthermore, for each l ∈ [3], there exists C(T * , Θ 0 ) > 0, such that\nsup 0≤s≤T * E ∥Θ s (•)∥ 2l 2 ≤ C(T * , Θ 0 ). (12\n)\nMoreover, for the dropout iterations (6), let 0 < ε < 1, T > 0 and set N T,ε := ⌊ T ε ⌋. There exists ε 0 > 0, such that given any learning rate ε ≤ ε 0 , then for all N ∈ [0 : N T * ,ε ] and for each l ∈ [3], there exists\nC(T * , θ 0 , ε 0 ) > 0, such that sup 0≤N ≤[N T * ,ε ] E ∥θ N ∥ 2l 2 ≤ C(T * , θ 0 , ε 0 ). (13\n)\nWe remark that if G(R D ) is chosen to be the test functions in Li et al. (2019), then similar relations to ( 12) and ( 13) shall be imposed, except that in our cases, we only require the second, fourth and sixth moments to be uniformly bounded, while in their cases, all 2l-moments are required for l ≥ 1. Definition 1. 
The SDE (9) is an order α weak approximation to the dropout (6), if for every g ∈ C M b R D , there exists C > 0 and ε 0 > 0, such that given any ε ≤ ε 0 and T ≤ T * , then for all\nN ∈ [N T,ε ], |Eg(Θ εN ) -Eg(θ N )| ≤ C(T * , g, ε 0 )ε α . (14\n)\nWe now state informally our approximation theorem. Theorem 1*. Fix time T ≤ T * and learning rate ε > 0, then if we choose\nb(Θ) = -∇ Θ L S (Θ), σ(Θ) = √ ε (Σ (Θ)) 1 2 ,\nthen for all t ∈ [0, T ], the stochastic processes Θ t satisfying\ndΘ t = b (Θ t ) dt + σ (Θ t ) dW t ,\nis an order-1 approximation of dropout (6). If we choose instead\nb(Θ) = -∇ Θ L S (Θ) + ε 4 ∥∇ Θ L S (Θ)∥ 2 2 , σ(Θ) = √ ε (Σ (Θ)) 1 2 ,\nthen Θ t is an order-2 approximation.\nIt is noteworthy that our findings reproduce the explicit regularization effect attributed to dropout (Wei et al., 2020;Zhang and Xu, 2022). This regularization effect modifies the expected training objective from R S (θ) to L S (θ). The regularization effect stems from the stochasticity of dropout. Unlike SGD, where the noise arises from the stochasticity involved in the selection of training samples, dropout introduces noise through the stochastic removal of parameters. In the sequel, we focus on how such stochasticity exerts an impact on our learning results.\n5 The effect of the noise structure on flatness\nWe begin this section by examining the expression of the noise structure arising from dropout." }, { "figure_ref": [], "heading": "Explicit form of the dropout noise structure", "publication_ref": [], "table_ref": [], "text": "In this subsection, we present the expression for Σ. Once again, as\nθ = vec({q r } m r=1 ) = vec ({(a r , w r )} m r=1 ), then covariance of ∇ θ R drop S (θ N -1 ; η N ) equals to Σ(θ N -1 ). We denote Σ kr (θ N -1 ) := Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) , then Σ =     Σ 11 Σ 12 • • • Σ 1m Σ 21 Σ 22 • • • Σ 2m . . . . . . . . . . . . Σ m1 Σ m2 • • • Σ mm     .\nFor each k ∈ [m], we obtain that\nΣ kk (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ q k R drop S (θ N -1 ; η N ) = 1 p -1 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) + 1 p 2 - 1 p m k ′ =1,k ′ ̸ =k 1 n n i=1 a k ′ σ(w ⊺ k ′ x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a k ′ σ(w ⊺ k ′ x i )∇ q k (a k σ(w ⊺ k x i ))\n,\nwhere e i,\\k := e i,\\k (θ\n) := m l=1,l̸ =k a l σ(w ⊺ l x i ) -y i , and for each k, r ∈ [m] with k ̸ = r, Σ kr (θ N -1 ) =Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) = 1 p -1 m k ′ =1,k ′ ̸ =k,k ′ ̸ =r 1 n n i=1 a k ′ σ(w ⊺ k ′ x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a k ′ σ(w ⊺ k ′ x i )∇ qr (a r σ(w ⊺ r x i )) + 1 p -1 1 n n i=1 e i,\\k,\\r + 1 p a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a k σ(w ⊺ k x i )∇ qr (a r σ(w ⊺ r x i )) + 1 p -1 1 n n i=1 a r σ(w ⊺ r x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k,\\r + a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ qr (a r σ(w ⊺ r x i ))\n,\nwhere e i,\\k,\\r := e i,\\k,\\r (θ) := m l=1,l̸ =k,l̸ =r a l σ(w ⊺ l x i ) -y i . We remark that such expression is consistent in that for the extreme case where p = 1, dropout 'degenerates' to GD, hence the covariance matrix degenerates to a zero matrix, i.e., Σ = 0 D×D ." 
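Although the entries of Σ above have closed forms, the covariance can also be estimated numerically by resampling the dropout mask at fixed parameters, which mirrors the random gradient data used in the experiments below. The following is a minimal PyTorch-style sketch, assuming a quadratic loss and a model whose forward pass redraws its dropout mask at each call; the names are illustrative.

```python
import torch

def empirical_dropout_covariance(model, x, y, n_samples=1000):
    """Monte-Carlo estimate of Sigma(theta): covariance of grad_theta R_drop(theta; eta)
    over independent dropout masks eta, with the parameters theta held fixed."""
    model.train()                                   # keep dropout layers active
    grads = []
    for _ in range(n_samples):
        model.zero_grad()
        loss = 0.5 * ((model(x).squeeze() - y) ** 2).mean()   # R_drop(theta; eta) for one mask
        loss.backward()
        grads.append(torch.cat([q.grad.detach().flatten() for q in model.parameters()]).clone())
    G = torch.stack(grads)                          # (n_samples, D)
    G = G - G.mean(dim=0, keepdim=True)
    return G.t() @ G / (n_samples - 1)              # (D, D) sample covariance
```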
}, { "figure_ref": [], "heading": "Experimental results on the dropout noise structure", "publication_ref": [ "b6" ], "table_ref": [], "text": "In this subsection, we endeavor to show the structural similarity between the covariance and the Hessian in terms of both Hessian-variance alignment relations and Inverse variance-flatness relations.\nIntuitively, the structural similarity between the Hessian and covariance matrix is shown below:\nH(θ) ≈ 1 n n i=1 ∇ θ f θ (x i ) ⊗∇ θ f θ (x i ) + 1 -p p m r=1 ∇ qr (a r σ(w ⊺ r x i )) ⊗∇ qr (a r σ(w ⊺ r x i )) , Σ(θ) ≈ 1 n n i=1 l i,1 ∇ θ f θ (x i )⊗∇ θ f θ (x i ) + l i,2 1 -p p m r=1 ∇ qr (a r σ(w ⊺ r x i )) ⊗∇ qr (a r σ(w ⊺ r x i )) ,(15)\nwhere H(θ) := ∇ 2 θ L S (θ) , and l i,1 := (e i ) 2 + 1-p p m r=1 a 2 r σ(w ⊺ r x i ) 2 , l i,2 := (e i ) 2 , and the detailed derivation for (15) is deferred to the Appendix. We remark that the expression for the covariance matrix in (15) differs from the counterpart in Section 5.1 since some certain assumptions, as outlined in Zhu et al. (2018), have been imposed. With the established structural similarity through the aforementioned intuitive approximations shown in (15), we proceed to the empirical investigation concerning the intricate relationship between the Hessian and the covariance." }, { "figure_ref": [], "heading": "Random data collection methods", "publication_ref": [ "b5" ], "table_ref": [], "text": "We first introduce two types of dynamical datasets collected during dropout training to study the noise structure of dropout. These datasets are different from the training sample S.\nRandom trajectory data. The training process of NNs usually consists of two phases: the fast convergence phase and the exploration phase (Shwartz-Ziv and Tishby, 2017). In the exploration phase, the network is often considered to be near a minimum, and the movement of parameters is largely affected by the noise structure. Based on the previous work (Feng and Tu, 2021), we collect parameter sets D para := {θ i } N i=1 from N consecutive training steps in the exploration phase, where θ i is the network parameter set at i-th sample step. This sampling method requires a large number of training steps, so model parameters often have large fluctuations during the sampling process. To improve the sampling accuracy, we propose another type of random data to characterize the noise structure of dropout as follows.\nRandom gradient data. We train the network until the loss is near zero and then we freeze the training process, then we sample N realizations of the dropout variable to get the random gradient dataset, i.e., D grad := {g i } N i=1 . The i-th sample point g i is obtained as follows: i) Firstly, we generate a realization of the dropout variable η i under a given dropout rate; ii) Then, we compute the gradient of the loss function with respect to the parameters, denoted by g i (•) := ∇R drop S (•; η i ). Each element in D grad represents an evolution direction of network parameters, determined by the dropout variable. Therefore, studying the structure of D grad can help us understand how the dropout noise exerts an impact throughout the training process." 
}, { "figure_ref": [], "heading": "Hessian-Variance alignment", "publication_ref": [ "b6" ], "table_ref": [], "text": "In this subsection, we employ a metric Tr(H i Σ i ) established to be valuable (Zhu et al., 2018) in the assessment of the degree of alignment between the noise structure and curvature of the loss landscape, where Tr(•) stands for the trace of a square matrix, Σ i is the covariance matrix of D grad sampled at the ith-step, whose definition can be found in Section 5.2.1, and H i is the Hessian of the loss function at the ith-step.\nTo investigate the Hessian-Variance alignment relation, we construct an isotropic noise termed Σi by means of averaging, i.e., Σi = Tr(Σi) D I D×D , where D is the total number of parameters, I D×D is the identity matrix, and Σi is employed for comparative purposes. As shown in Fig. 1, under different learning rates and dropout rates, Tr(H i Σ i ) significantly exceeds Tr(H i Σi ) throughout the whole training process, thus indicating that dropout-induced noise possesses an anisotropic structure that aligns well with the Hessian across all directions. It should be acknowledged that due to computational limitations, this experiment limits the trace calculation of Σi to a subset of parameters, which can be regarded as the projection of the Hessian and the noise into some specific directions." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4", "fig_1" ], "heading": "Inverse variance-flatness relation", "publication_ref": [], "table_ref": [], "text": "The alignment relation studied above also implies the inverse variance-flatness relation, i.e., the noise variance is large along the sharp direction of the loss landscape, and small along the flat We then proceed to the definitions of noise variance and interval flatness. Definition 2 (noise variance). For dataset D and its covariance Σ, we denote λ i (Σ) as the ith eigenvalue of Σ and its corresponding eigen direction as v i (Σ). Then we term λ i (Σ) the noise variance of D at the eigen direction v i (Σ).\nThe interval flatness below characterizes the flatness of the landscape around a local minimum. Definition 3 (interval flatness4 ). For a a local minimum θ * 0 , the loss function profile R v along direction v reads: R v (δ) ≡ R S (θ * 0 + δv), where δ represents the distance moved in the v direction. The interval flatness F v is then defined as the width of the region within which R v (δ) ≤ 2R v (0). We determine F v by finding two closest points θ l v < 0 and θ r v > 0 on each side of the minimum that satisfy\nR v (θ l v ) = R v (θ r v ) = 2R v (0)\n. The interval flatness is defined as:\nF v ≡ θ r v -θ l v .\n(16) Remark. The experiments show that the result is not sensitive to the selection of the pre-factor 2. A larger value of F v means a flatter landscape in the direction v.\nWe use PCA to study the weight variations when the training accuracy is nearly 100%. The networks are trained with full-batch GD for different learning rates and dropout rates under the same random seed. When the loss is small enough, we sample the parameters or gradients of parameters N times (N = 3000 for this experiment) and study the relationship between {λ i (Σ)} N i=1 and {F vi(Σ) } N i=1 for both weight dataset D para and gradient dataset D grad .\nFor different learning rates and dropout rates, Fig. 2(a,b) reveal an inverse relationship between the interval flatness of the loss landscape denoted as {F vi(Σ) } N i=1 , and the noise variance represented by the PCA spectrum {λ i (Σ)} N i=1 . 
Notably, a power-law relationship can be established between {F vi(Σ) } N i=1 and {λ i (Σ)} N i=1 . Specifically, in the low flatness region, the dropout-induced noise exhibits a large variance. As the loss landscape transitions into the high flatness regime, the linear relationship between variance and flatness becomes more evident. Overall, These findings consistently demonstrate the inverse relation between variance and flatness, as exemplified in Fig. 2(a,b). Subsequently, we delve into the definitions of Projected variance and Hessian flatness. Var(Projv i (S)) slope = 1 p: 0.8, lr: 0.05 p: 0.9, lr: 0.05 p: 0.8, lr: 0.1 p: 0.9, lr: 0.1 Definition 4 (projected variance). For a given direction v ∈ R D and dataset D = {θ i } N i=1 , where θ i ∈ R D , the inner product of v and θ i is denoted by Proj v (θ i ) := ⟨θ i , v⟩, then we can define the projected variance for D at the direction v as follows,\n(d) FNN, D = D grad\nVar(Proj v (D)) = N i=1 (Proj v (θ i ) -µ) 2 N ,\nwhere µ is the mean value of {Proj v (θ i )} N i=1 . Definition 5 (Hessian flatness). For Hessian H, as we denote λ i (H) by the i-th eigenvalue of H corresponding to the eigenvector v i (H), we term λ i (H) the Hessian flatness along direction v i (H).\nThe eigenvalues of the Hessian evaluated at a local minimum often serve as indicators of the flatness of the loss landscape, and larger eigenvalues correspond to sharper directions. In our investigation, we analyze the interplay between the eigenvalues of Hessian H at the final stage of the training process and the projected variance of dropout at each of the corresponding eigen directions, i.e., λ i (H) v.s. {Var(Proj vi(H) (D))} N i=1 . Specifically, we sample the parameters or gradients of parameters N times (N = 1000 for this experiment), and examine the relationship between {λ i (H)} N i=1 and {Var(Proj vi(H) (D))} N i=1 for both the parameter dataset D para and the gradient dataset D grad . Under various dropout rates and learning rates, Fig. 2(c,d) presents establishes a consistent powerlaw relationship between {λ i (H)} N i=1 and {Var(Proj vi(H) (D))} N i=1 , and this relationship remains robust irrespective of the choice between parameter dataset D para or the gradient dataset D grad . The positive correlation observed between the Hessian flatness and the projection variance provides insights into the structural characteristics of the dropout-induced noise. Specifically, these characteristics have the potential to facilitate the escape from sharp minima and enhance the generalization capabilities of NNs. Additionally, Fig. 2 highlights the distinct linear structure exhibited by gradient sampling in comparison to parameter sampling, which corroborates the discussions outlined in Section 5.2.1. For detailed experimental evidence, including our investigations involving ResNet and Transformer models, one may refer to Appendix B." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Our main contribution is twofold. First, we derive the SMEs that provide a weak approximation for the dynamics of the dropout algorithm for two-layer NNs. Second, we demonstrate that dropout exhibits the inverse variance-flatness relation and the Hessian-variance alignment relation through extensive empirical analysis, which is consistent with SGD. 
These relations are widely recognized to be beneficial for finding flatter minima, thus implying that dropout acts as an implicit regularizer that enhances the generalization abilities.\nGiven the broad applicability of the methodologies employed in our proof, we aim to extend the formulations of SMEs to an even wider class of stochastic algorithms applied to NNs with different architectures. Such an extension could help us better understand the role of stochastic algorithms in NN training. Moreover, the SME framework could offer a promising approach to the examination of the underlying mechanisms that explain the observed inverse variance-flatness " }, { "figure_ref": [ "fig_1", "fig_2", "fig_2" ], "heading": "A Experimental setups", "publication_ref": [ "b15", "b15" ], "table_ref": [], "text": "For Fig. 1, Fig. 2, we use the FNN with size 784-50-50-10 for the MNIST classification task. We train the network using GD with the first 10000 images as the training set. We add a dropout layer behind the second layer. The dropout rate and learning rate are specified and unchanged in each experiment. We only consider the parameter matrix corresponding to the weight and the bias of the fully-connected layer between two hidden layers. Therefore, for experiments in Fig. 1, D = 2500.\nFor Fig. 3(a, c, e, g), we add dropout layers after the convolutional layers, and for each dropout layer, p = 0.8. We only consider the parameter matrix corresponding to the weight of the first convolutional layer of the first block of the ResNet-20. Models are trained using full-batch GD on the CIFAR100 classification task for 1200 epochs. The learning rate is initialized at 0.01. Since the Hessian calculation of ResNet takes much time, we only perform it at a specific dropout rate and learning rate.\nFor Fig. 3(b, d, f, h), we use transformer Vaswani et al. (2017) with\nd model = 50, d k = d v = 20, d ff = 256, h = 4, N = 3\n, the meaning of the parameters is consistent with the original paper. We only consider the parameter matrix corresponding to the weight of the fully-connected layer whose output is queried in the Multi-Head Attention layer of the first block of the decoder. We apply dropout to the output of each sub-layer before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For each dropout layer, p = 0.9. For the English-German translation problem, we use the cross-entropy loss with label smoothing trained by full-batch Adam based on the Multi30k dataset. The learning rate strategy is the same as that in Vaswani et al. (2017). The warm-up step is 4000 epochs, the training step is 10000 epochs. We only use the first 2048 examples for training to compromise with the computational burden." }, { "figure_ref": [ "fig_2" ], "heading": "B Extended experiments on verifying the inverse flatness", "publication_ref": [], "table_ref": [], "text": "In this section, we verify the inverse relation between the covariance matrix and the Hessian matrix of dropout through different data collection methods and projection methods on larger network structures, such as ResNet-20 and transformer, and more complex datasets, such as CIFAR-100 and Multi30k, as shown in Fig. 3. \n10 0 10 1 Fvi ( )" }, { "figure_ref": [], "heading": "C Preliminaries C.1 Notations", "publication_ref": [], "table_ref": [], "text": "We adhere wherever possible to the following notation. 
Dimensional indices are written as subscripts with a bracket to avoid confusion with other sequential indices (e.g. time, iteration number), which do not have brackets. When more than one indices are present, we separate them with a comma, e.g. x k,(i) is the i-th coordinate of the vector x k , the k th member of a sequence. We set a special vector (1, 1, 1, . . . , 1) ⊺ by 1 := (1, 1, 1, . . . , 1) ⊺ whose dimension varies. We set n for the number of input samples, m for the width of the neural network, and D := m(d + 1) hereafter in this paper. We let [n] = {1, 2, . . . , n}. We set N (µ, Σ) as the normal distribution with mean µ and covariance Σ. We denote ⊗ as the Kronecker tensor product, ⟨•, •⟩ for standard inner product between two vectors, and A : B for the Frobenius inner product between two matrices A and B. We denote vector L 2 norm as ∥•∥ 2 , vector or function L ∞ norm as ∥•∥ ∞ , function L 1 norm as ∥•∥ 1 , matrix infinity norm as ∥•∥ ∞→∞ , matrix spectral (operator) norm as ∥•∥ 2→2 , and matrix Frobenius norm as ∥•∥ F . Finally, we denote the set of continuous functions f (•) : R D → R possessing continuous derivatives of order up to and including r by C r (R D ), and for a Polish space X , we denote the space of bounded measurable functions by B b (X ), and the space of bounded continuous functions by C b (X ). In the mathematical discipline of general topology, a Polish space is a separable complete metric space." }, { "figure_ref": [], "heading": "C.2 Problem Setup", "publication_ref": [], "table_ref": [], "text": "For the empirical risk minimization problem given by the quadratic loss:\nmin θ R S (θ) = 1 2n n i=1 (f θ (x i ) -y i ) 2 ,(17)\nwhere\nS := {(x i , y i )} n i=1 is the training sample, f θ (x)\nis the prediction function, θ are the parameters to be optimized over, and their dependence is modeled by a two-layer neural network (NN) with m hidden neurons\nf θ (x) := m r=1 a r σ(w ⊺ r x),(18)\nwhere\nx ∈ R d , θ = vec(θ a , θ w ) with θ a = vec({a r } m r=1 ), θ w = vec({w r } m r=1\n) is the set of parameters, σ(•) is the activation function applied coordinate-wisely to its input, and σ is 1-Lipschitz with σ ∈ C ∞ (R). More precisely, θ = vec({q r } m r=1 ) whereas for each r ∈ [m], q r := (a r , w ⊺ r ) ⊺ . We remark that the bias term b r can be incorporated by expanding x and w r to (x ⊺ , 1) ⊺ and\n(w ⊺ r , b r ) ⊺ .\nGiven fixed learning rate ε > 0, then at the N -th iteration, where t N := N ε, and a scaling vector η N ∈ R m is sampled with independent random coordinates:\nFor each k ∈ [m], (η N ) k = 1 p with probability p, 0 with probability 1 -p,(19)\nand we observe that {η N } N ≥1 is an i.i.d. Bernulli sequence with Eη 1 = 1, and naturally, with slight abuse of notations, the σ-fields\nF N := {σ(η 1 , η 2 , • • • η N )} forms a filtration.\nWe then apply dropout to two-layer NNs by computing\nf θ (x; η) := m r=1 (η) r a r σ(w ⊺ r x),(20)\nand we denote the empirical risk associated with dropout by\nR drop S (θ; η) : = 1 2n n i=1 (f θ (x i ; η) -y i ) 2 = 1 2n n i=1 m r=1 (η) r a r σ(w ⊺ r x i ) -y i 2 . (21\n)\nWe observe that the parameters at the N -th step are updated via back propagation as follows:\nθ N = θ N -1 -ε∇ θ R drop S (θ N -1 ; η N ) ,(22)\nwhere θ 0 := θ(0). 
Finally, we denote hereafter that for all i ∈ [n],\ne N i := e i (θ N -1 ; η N ) := f θ N -1 (x i ; η N ) -y i ,\nhence the empirical risk associated with dropout R drop S (θ N -1 ; η N ) can be written into\nR drop S (θ N -1 ; η N ) = 1 2n n i=1 e N i 2 ,\nthus the dropout iteration ( 22) reads\nθ N -θ N -1 = -ε∇ θ R drop S (θ N -1 ; η N ) = - ε n n i=1 e N i ∇ θ e N i ,\nand we may proceed to the introduction of the stochastic modified equation (SME) approximation." }, { "figure_ref": [], "heading": "D Stochastic Modified Equations for Dropout D.1 Modified Loss", "publication_ref": [], "table_ref": [], "text": "Recall that the parameters at the N -th step are updated as follows:\nθ N = θ N -1 - ε n n i=1 e N i ∇ θ e N i ,(23)\nand since {η N } N ≥1 is an i.i.d. sequence, then the dropout iteration (23) updates the parameters in a recursion form of \nθ N = F (θ N -1 , η N ), (24\n) where F (•, •) : R D × R m → R D is a smooth (C ∞ )\nE θ N -1 n i=1 e N i ∇ q k e N i = E θ N -1 n i=1 e N i (η N ) k ∇ q k (a k σ(w ⊺ k x i )) ,\nand since\nE θ N -1 e N i (η N ) k = E θ N -1   m r=1,r̸ =k (η N ) r a r σ(w ⊺ r x i ) -y i   E θ N -1 [(η N ) k ] + E θ N -1 (η N ) 2 k a k σ(w ⊺ k x i ) =   m r=1,r̸ =k a r σ(w ⊺ r x i ) -y i   + 1 p a k σ(w ⊺ k x i ) = m r=1 a r σ(w ⊺ r x i ) -y i + 1 p -1 a k σ(w ⊺ k x i ).\nFor simplicity, given fixed k ∈ [m], for any i ∈ [n], we denote hereafter that\ne i := e i (θ) := m r=1 a r σ(w ⊺ r x i ) -y i , e i,\\k := e i,\\k (θ) := m r=1,r̸ =k a r σ(w ⊺ r x i ) -y i ,\nwe remark that compared with e N i , e i and e i,\\k do not depend on the random variable η N . Then E θ N -1 e N i (η N ) k can be written in short by\nE θ N -1 e N i (η N ) k = e i,\\k + 1 p a k σ(w ⊺ k x i ) = e i + 1 p -1 a k σ(w ⊺ k x i ).(25)\nHence for each k ∈ [m], expectation of the increment restricted to q k reads\nE θ N -1 n i=1 e N i (η N ) k ∇ q k (a k σ(w ⊺ k x i )) = n i=1 e i ∇ q k (a k σ(w ⊺ k x i )) + n i=1 1 p -1 a k σ(w ⊺ k x i )∇ q k (a k σ(w ⊺ k x i )) ,\nthen we define the modified loss L S (•) : R m(d+1) → R for dropout:\nL S (θ) := 1 2n n i=1 e 2 i + 1 -p 2np n i=1 m r=1 a 2 r σ(w ⊺ r x i ) 2 ,(26)\nsince as θ N -1 is given, then by taking the conditional expectation, increment of the dropout iteration (23) reads\nθ N -θ N -1 = -εE θ N -1 ∇ θ R drop S (θ N -1 ; η N ) = -ε∇ θ L S (θ) θ=θ N -1 ,\nwhich implies that in the sense of expectations, {θ N } N ≥0 follows close to the gradient descent trajectory of L S (θ) with fixed learning rate ε." }, { "figure_ref": [], "heading": "D.2 Stochastic Modified Equations", "publication_ref": [ "b31", "b23", "b21" ], "table_ref": [], "text": "We then follow the strategy of Li et al. (2017) to derive the stochastic modified equations (SME) for dropout. Firstly, from the results in Section D.1, we observe that given θ N -1 ,\nθ N -θ N -1 = -ε∇ θ L S (θ) θ=θ N -1 + √ εV (θ N -1 ),(27)\nwhere L S (•) : R m(d+1) → R is the modified loss defined in (26), and V (•) : R m(d+1) → R m(d+1) is a m(d + 1)-dimensional random vector, and when given θ N -1 , V (θ N -1 ) has mean 0 and covariance εΣ(θ N -1 ), where\nΣ(•) : R m(d+1) → R m(d+1)×m(d+1) is the covariance of ∇ θ R drop S (θ N -1 ; η N ). Recall that θ = vec({q r } m r=1 ) = vec ({(a r , w r )} m r=1\n), and for any k, r ∈ [m], we denote that\nΣ kr (θ N -1 ) := Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) , then Σ =     Σ 11 Σ 12 • • • Σ 1m Σ 21 Σ 22 • • • Σ 2m . . . . . . . . . . . . 
Σ m1 Σ m2 • • • Σ mm     .\nFor each k ∈ [m], we obtain that\nΣ kk (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ q k R drop S (θ N -1 ; η N ) = 1 p -1 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) + 1 p 2 - 1 p m l=1,l̸ =k 1 n n i=1 a l σ(w ⊺ l x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a l σ(w ⊺ l x i )∇ q k (a k σ(w ⊺ k x i ))\n,\nand for each k, r ∈ [m] with k ̸ = r, Σ kr (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) = 1 p -1 1 n n i=1 e i,\\k,\\r + 1 p a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a k σ(w ⊺ k x i )∇ qr (a r σ(w ⊺ r x i )) + 1 p -1 1 n n i=1 a r σ(w ⊺ r x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k,\\r + a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ qr (a r σ(w ⊺ r x i )) ,\nwhere we denote hereafter that e i,\\k,\\r := e i,\\k,\\r (θ) := m l=1,l̸ =k,l̸ =r\na l σ(w ⊺ l x i ) -y i ,\nand compared with e N i , e i,\\k,\\r still does not depend on the random variable η N . We remark that the expression above is consistent in that for the extreme case where p = 1, dropout 'degenerates' to gradient descent (GD), hence the covariance matrix degenerates to a zero matrix, i.e., Σ = 0 D×D . We remark that details for the derivation of Σ is deferred to Section G. Now, as we consider the stochastic differential equation (SDE),\ndΘ t = b (Θ t ) dt + σ (Θ t ) dW t , Θ 0 = Θ(0),(28)\nwhere W t is a standard m(d + 1)-dimensional standard Wiener process, whose Euler-Maruyama discretization with step size ε > 0 at the N -th step reads\nΘ εN = Θ ε(N -1) + εb Θ ε(N -1) + √ εσ Θ ε(N -1) Z N ,\nwhere Z N ∼ N (0, I m(d+1) ) and Θ 0 = Θ(0). Thus, if we set\nb (Θ) := -∇ Θ L S (Θ), σ (Θ) := √ ε (Σ (Θ)) 1 2 , Θ 0 := θ 0 ,(29)\nthen we would expect (28) to be a 'good' approximation of ( 27) with the time identification t = εN . Based on the earlier work of Li et al. (2017), since the path of dropout and the counterpart of SDE are driven by noises sampled in different spaces. Firstly, notice that the stochastic process {θ N } N ≥0 induces a probability measure on the product space\nR D × R D × • • • × R D × • • • , whereas {Θ t } t≥0 induces a probability measure on C [0, ∞), R D .\nTo compare them, one can form a piece-wise linear interpolation of the former. Alternatively, as we do in this work, we sample a discrete number of points from the latter. Secondly, the process {θ N } N ≥0 is adapted to the filtration generated by F N whereas the process {Θ t } t≥0 is adapted to an independent Wiener filtration F t . Hence, it is not appropriate to compare individual sample paths. Rather, we define below a sense of weak approximations (Kloeden and Platen, 2011, Section 9.7) by comparing the distributions of the two processes.\nTo compare different discrete time approximations, we need to take the rate of weak convergence into consideration, and we also need to choose an appropriate class of functions as the space of test functions. We introduce the following set of smooth functions:\nC M b R m(d+1) =    f ∈ C M R m(d+1) ∥f ∥ C M := |β|≤M D β f ∞ < ∞    ,\nwhere D is the usual differential operator. 
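To make the Euler-Maruyama scheme just described concrete, the following Python sketch integrates the SME with the drift and diffusion of (29). Instead of the closed-form expressions, it estimates -grad L_S and Sigma by Monte Carlo over dropout masks at the current iterate (which is valid since, by Section D.1, the mask-averaged dropout gradient equals grad L_S); the toy data, network size and sample counts are illustrative assumptions, not a reference implementation.

    import numpy as np

    # Euler-Maruyama integration of the SME (28) with the choice (29):
    #   Theta_{k+1} = Theta_k + eps * b(Theta_k) + sqrt(eps) * sigma(Theta_k) Z_k,
    # where b = -grad L_S and sigma = sqrt(eps) * Sigma^{1/2}. Both b and Sigma are
    # estimated here by Monte Carlo over dropout masks; all sizes are illustrative.
    rng = np.random.default_rng(1)

    def drop_grad(theta, X, y, p, rng):
        # gradient of R_S^drop for a two-layer tanh network; theta stacks (a, vec(W))
        n, d = X.shape
        m = theta.size // (d + 1)
        a, W = theta[:m], theta[m:].reshape(m, d)
        eta = (rng.random(m) < p) / p
        Z = X @ W.T
        S = np.tanh(Z)
        e = S @ (eta * a) - y
        ga = (S * eta).T @ e / n
        gW = ((np.outer(e, eta * a) * (1 - S**2)).T @ X) / n
        return np.concatenate([ga, gW.ravel()])

    def em_step(theta, X, y, p, eps, rng, n_mc=512):
        G = np.stack([drop_grad(theta, X, y, p, rng) for _ in range(n_mc)])
        b = -G.mean(axis=0)                    # Monte Carlo estimate of -grad L_S(theta)
        Sigma = np.cov(G, rowvar=False)        # covariance of grad R_S^drop at theta
        vals, vecs = np.linalg.eigh(Sigma)
        root = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T   # Sigma^{1/2}
        Z = rng.normal(size=theta.size)
        return theta + eps * b + eps * (root @ Z)   # sqrt(eps)*sigma*Z = eps * Sigma^{1/2} Z

    n, d, m = 32, 5, 20
    p, eps = 0.9, 0.01
    X = rng.normal(size=(n, d))
    y = rng.normal(size=n)
    theta = rng.normal(size=m * (d + 1)) / np.sqrt(d)
    for _ in range(200):
        theta = em_step(theta, X, y, p, eps, rng)

Re-estimating the drift and the square root of Sigma at every step keeps the scheme faithful to the state-dependent diffusion in (28), at the cost of the inner Monte Carlo loop.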
We remark that C M b (R D ) is a subset of G(R D ), the class of functions with polynomial growth, which is chosen to be the space of test functions in previous works (Li et al., 2017;Kloeden and Platen, 2011;Malladi et al., 2022).\nBefore we proceed to the definition of weak approximation, to ensure the rigor and validity of our analysis, we shall assert an assumption regarding the existence and uniqueness of solutions to the SDE (28). Assumption 2. There exists T * > 0, such that for any time t ∈ [0, T * ], there exists a unique t-continuous solution Θ t of the initial value problem:\ndΘ t = b (Θ t ) dt + σ (Θ t ) dW t , Θ 0 = Θ(0),\nwith the property that Θ t is adapted to the filtration F t generated by W s for all time s ≤ t. Furthermore, for any t ∈ [0, T * ],\nE t 0 ∥Θ s (•)∥ 2 2 ds < ∞.\nMoreover, we assume that the second, fourth and sixth moments of the solution to SDE (28) are uniformly bounded with respect to time t, i.e., for each l ∈ [3], there exists C(T * , Θ 0 ) > 0, such that\nsup 0≤s≤T * E ∥Θ s (•)∥ 2l 2 ≤ C(T * , Θ 0 ). (30\n)\nAs for the dropout iterations (23), we assume further that the second, fourth and sixth moments of the dropout iterations (23) are uniformly bounded with respect to the number of iterations N , i.e., let 0 < ε < 1, T > 0 and set N T,ε := ⌊ T ε ⌋, then for each l ∈ [3], there exists T * > 0 and ε 0 > 0, such that for any given learning rate ε ≤ ε 0 and all N ∈ [0 :\nN T * ,ε ], there exists C(T * , θ 0 , ε 0 ) > 0, such that sup 0≤N ≤[N T * ,ε ] E ∥θ N ∥ 2l 2 ≤ C(T * , θ 0 , ε 0 ).(31)\nWe remark that if G(R D ) is chosen to be the test functions in Li et al. (2019), then similar relations to (30) and (31) shall be imposed, except that in our cases, we only require the second, fourth and sixth moments to be uniformly bounded, while in their cases, all 2l-moments are required for l ≥ 1.Establishments of the validity of Assumption 2 regarding the existence and uniqueness of the SDE will be exhibited in Section F.\nThe definition of weak approximation is stated out as follows. Definition 6. The SDE (28) is an order α weak approximation to the dropout (23), if for every d+1) , there exists C > 0 and ε 0 > 0, such that given any ε ≤ ε 0 and\ng ∈ C M b R m(\nT ≤ T * , then for all N ∈ [N T,ε ], |Eg(Θ εN ) -Eg(θ N )| ≤ C(T, g, ε 0 )ε α .(32)" }, { "figure_ref": [], "heading": "E Semigroup and Proof Details for the Main Theorem", "publication_ref": [], "table_ref": [], "text": "In this section, we use a semigroup approach (Feng et al., 2018) to study the time-homogeneous Markov chains (processes) formed by dropout." }, { "figure_ref": [], "heading": "E.1 Discrete and Continuous Semigroup", "publication_ref": [], "table_ref": [], "text": "Definition 7. A Markov operator over a Polish space X is a bounded linear operator P :\nB b (X ) → B b (X ) satisfying • P1 = 1;\n• Pφ is positive whenever φ is positive;\n• If a sequence {φ n } ⊂ B b (X ) converges pointwise to an element φ ∈ B b (X ), then Pφ n converges pointwise to Pφ;\nTo demonstrate further inequalities that Markov operators satisfy, we offer the following proposition Proposition 1. 
A Markov operator P : B b (X ) → B b (X ) over a Polish space X satisfies\n• (Pf (x)) + ≤ Pf + (x); • (Pf (x)) -≤ Pf -(x); • |Pf (x)| ≤ P|f (x)|.\nMoreover, if the Polish space X is equipped with a measure µ, a function f :\nX → R is said to be an element of L 1 (X ) if X |f |dµ < ∞.\nThen for every f ∈ L 1 (X ), the following holds\n• ∥Pf ∥ 1 ≤ ∥f ∥ 1 .\nIn mathematics, the positive part of a real function is defined by the formula\nf + (x) = max(f (x), 0) = f (x) if f (x) > 0, 0 otherwise.\nSimilarly, the negative part of f is defined as\nf -(x) = max(-f (x), 0) = -min(f (x), 0) = -f (x) if f (x) < 0, 0 otherwise.\nWe proceed to the proof for Proposition 1\nProof. From the definition of f + and f -, it follows that\n(Pf ) + = Pf + -Pf -+ = max 0, Pf + -Pf - ≤ max 0, Pf + = Pf + .\nSimilarly, we obtain that\n(Pf ) -= Pf + -Pf --= max 0, Pf --Pf + ≤ max 0, Pf -= Pf -.\nHence for the last inequality\n|Pf | = (Pf ) + + (Pf ) - ≤ Pf + + Pf - = P f + + f -= P|f |.\nFinally, by integrating the above relation over X , we obtain that\n∥Pf ∥ 1 = X |Pf | dµ ≤ X P |f | dµ = X |f | dµ = ∥f ∥ 1 .(33)\nInequality ( 33) is extremely important, and any operator P that satisfies it is called a contraction. This relation is known as the contractive property of P. To illustrate its power, note that for any f ∈ L 1 (X ), we have\n∥P n f ∥ 1 = P • P n-1 f 1 ≤ P n-1 f 1 .\nAs we consider Markov processes with continuous time, it is natural to consider a family of Markov operators indexed by time. We call such a family a Markov semigroup (Hairer, 2008), provided that it satisfies the relation P t+s = P t • P s , for any time s, t > 0.\n(34) And if given A ∈ B(X ), where B(X ) is the Borel σ-algebra on X , and given any two times s < t, if the following holds almost surely\nP (X t ∈ A | X s ) = (P t-s 1 A ) (X s ) ,\nthen we call X t a time-homogeneous Markov process with semigroup {P t } t≥0 .\nIn our case for dropout, we set the Polish space X = R D , and since\nC M b (R D ) ⊂ B b (R D ), then WLOG we fix g ∈ C M b (R D ) and define P ε g( θ) := E g θ -ε∇ θ R drop S (θ; η) | θ= θ .(35)\nWe conclude that the dropout iterations ( 23) forms a time-homogeneous Markov chain with discrete Markov semigroup {P n ε } n≥0 . As for the SDE (28), based on Assumption 2 and combined with the results in (Hairer, 2008, Example 2.11), the Markov semigroup {P t } t≥0 associated to the solutions of the SDE reads: For any g ∈ B b (R D ), ∂ t P t g = LP t g, where L is termed the generator of the diffusion process (28), which reads\nLg := ⟨b, ∇ Θ g⟩ + 1 2 σσ ⊺ : ∇ 2 Θ g.(36)\nMoreover, for a fixed test function g ∈ C M b (R D ), then for any two times s, t ≥ 0, P t g(Θ s ) := exp(tL)g(Θ s ) := E Θs [g(Θ t+s )] ,\n(37) and {P t } t≥0 forms a continuous Markov semigroup for the SDE (28)." }, { "figure_ref": [], "heading": "E.2 Semigroup Expansion with Accuracy of Order One", "publication_ref": [ "b31" ], "table_ref": [], "text": "Our results are essentially based on Itô-Taylor expansions (Kloeden and Platen, 2011) or Taylor's theorem with the Lagrange form of the remainder (Li et al., 2019, Lemma 27). Theorem 1 (Order-1 accuracy). 
Fix time\nT ≤ T * , if we choose b (Θ) := -∇ Θ L S (Θ), σ (Θ) := √ ε (Σ (Θ)) 1 2 , then for all t ∈ [0, T ], the stochastic processes Θ t satisfying dΘ t = b (Θ t ) dt + σ (Θ t ) dW t , Θ 0 = Θ(0),(38\n) is an order-1 approximation of dropout (23), i.e., given any test function g ∈ C 4 b (R D ), there exists ε 0 > 0 and C(T, ∥g∥ C 4 , ε 0 ) > 0, such that for any ε ≤ ε 0 and T ≤ T * , and for all N ∈ [N T,ε ], the following holds:\n|Eg(θ N ) -Eg(Θ εN )| ≤ C(T, ∥g∥ C 4 , θ 0 , ε 0 )η,(39\n) where θ 0 = Θ 0 .\nProof. By application of Taylor's theorem with the Lagrange form of the remainder, we have that for some α ≥ 1,\ng(ϑ) -g( θ) = α s=1 1 s! D i1,...,ij =1 s j=1 ϑ (ij ) -θ(ij) ∂ s g ∂ϑ (i1) . . . ∂ϑ (ij ) ( θ) + 1 (α + 1)! D i1,...,ij =1 α+1 j=1 ϑ (ij ) -θ(ij) ∂ α+1 g ∂ϑ (i1) . . . ∂ϑ (ij ) (γϑ + (1 -γ) θ),\nfor some γ ∈ (0, 1). We adopt the Einstein's summation convention, where repeated (spatial) indices are summed, i.e.,\nx (i) x (i) := D i=1 x (i) x (i) .\nAs we choose ϑ := θ 1 , θ := θ 0 and α = 1, then we obtain that\ng(θ 1 ) -g(θ 0 ) = ⟨∇ θ g(θ 0 ), θ 1 -θ 0 ⟩ + 1 2 ∇ 2 θ g(γθ 1 + (1 -γ)θ 0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) = ⟨∇ θ g(θ 0 ), θ 1 -θ 0 ⟩ + 1 2 ∇ 2 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ),\nwhere θ0 := γθ 1 + (1 -γ)θ 0 , and we observe that since\nθ 1 -θ 0 = -ε∇ θ L S (θ) θ=θ0 + √ εV (θ 0 ), then Eg(θ 1 ) -Eg(θ 0 ) = ⟨∇ θ g(θ 0 ), Eθ 1 -Eθ 0 ⟩ + 1 2 E ∇ 2 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) = -ε ∇ θ g(θ 0 ), ∇ θ L S (θ) θ=θ0 + E 1 ε (θ 0 ),\nwhere the remainder term E 1 ε (•) : R D → R, whose expression reads\nE 1 ε (θ 0 ) := 1 2 E ∇ 2 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ,(40)\nand we remark that θ0 and θ 1 are implicitly defined by θ 0 . Then, directly from Assumption 2, we obtain that\nE 1 ε (θ 0 ) = 1 2 E ∇ 2 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ≤ 1 2 ∥g∥ C 4 E ∥θ 1 -θ 0 ∥ 2 2 = ε 2 ∥g∥ C 4 E ∇ θ R drop S (θ 0 ; η 1 ) 2 2 ≤ ε 2 ∥g∥ C 4 C(T * , θ 0 , ε 0 ), since ∇ θ L S (θ)\nand Σ (θ) can be bounded above by the second and fourth moments of the dropout iteration ( 23).\nWe observe that\nΘ ε -Θ 0 = ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s .\nAs we choose ϑ := Θ ε , θ := Θ 0 and α = 1, then we obtain that\ng(Θ ε ) -g(Θ 0 ) = ⟨∇ Θ g(Θ 0 ), Θ ε -Θ 0 ⟩ + 1 2 ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ),\nwhere\nΘ 0 := γΘ ε + (1 -γ)Θ 0 ,\nfor some γ ∈ (0, 1). Then\nEg(Θ ε ) -Eg(Θ 0 ) = ⟨∇ Θ g(Θ 0 ), EΘ ε -EΘ 0 ⟩ + 1 2 E ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) = ∇ Θ g(Θ 0 ), ε 0 E[b(Θ s )]ds + 1 2 E ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ,\nand since\n⟨∇ Θ g(Θ 0 ), E[b(Θ s )]⟩ = ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dv,\nthen we obtain that\nEg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 0 s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dvds + 1 2 E ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), b(Θ 0 )⟩ + ε 2 Ē1 ε (Θ 0 ), where the remainder term Ē1 ε (•) : R D → R, whose expression reads Ē1 ε (Θ 0 ) := ε 0 s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dvds + 1 2 E ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ,(41)\nand we remark that Θ 0 and Θ ε are implicitly defined by Θ 0 . 
As we choose b\n(Θ) = -∇ Θ L S (Θ), σ (Θ) = √ ε (Σ (Θ)) 1 2 , then we carry out the computation for L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v ), L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v ) = ⟨∇ Θ L S (Θ v ), ∇ Θ ⟨∇ Θ g(Θ 0 ), ∇ Θ L S (Θ)⟩ | Θ=Θv ⟩ + ε 2 Σ (Θ v ) : ∇ 2 Θ (⟨∇ Θ g(Θ 0 ), ∇ Θ L S (Θ)⟩) | Θ=Θv , since ∇ Θ L S (Θ), ∇ 2 Θ L S (Θ), ∇ 3 Θ L S (Θ)\nand Σ (Θ) can be bounded above by the second, fourth and sixth moments of the solution to SDE (28), hence we may apply the mean value theorem to (41) and obtain that\nĒ1 ε (Θ 0 ) = ε 0 sL ⟨∇ Θ g(Θ 0 ), b⟩ ( Θ s )ds + 1 2 E ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ≤ ε 0 s ∥g∥ C 4 C(T * , Θ 0 )ds + 1 2 ∥g∥ C 4 E ∥Θ ε -Θ 0 ∥ 2 2 ≤ ε 2 2 ∥g∥ C 4 C(T * , Θ 0 ) + ∥g∥ C 4 E ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s 2 2 ≤ ε 2 2 ∥g∥ C 4 C(T * , Θ 0 ) + 2 ∥g∥ C 4 E ε 0 b(Θ s )ds 2 2 + 2 ∥g∥ C 4 E ε 0 σ(Θ s )dW s 2 2 ≤ ε 2 2 ∥g∥ C 4 C(T * , Θ 0 ) + 2 ∥g∥ C 4 ε 2 E ∇ Θ L S ( Θ 0 ) 2 2 + 2 ∥g∥ C 4 E ε 0 ∥σ(Θ s )∥ 2 F ds ≤ ε 2 2 ∥g∥ C 4 C(T * , Θ 0 ) + 2 ∥g∥ C 4 ε 2 E ∇ Θ L S ( Θ 0 ) 2 2 + 2 ∥g∥ C 4 εE ε Σ( Θ 0 ) F ≤ ε 2 ∥g∥ C 4 C(T * , Θ 0 ).\nTo sum up for now,\n|Eg(θ 1 ) -Eg(Θ ε )| = Eg(θ 0 ) -ε ∇ θ g(θ 0 ), ∇ θ L S (θ) θ=θ0 + E 1 ε (θ 0 ) -Eg(Θ 0 ) -ε ⟨∇ Θ g(Θ 0 ), b(Θ 0 )⟩ + Ē1 ε (Θ 0 ) , since θ 0 = Θ 0 and b (Θ 0 ) = -∇ Θ L S (Θ) θ=θ0 , thus P 1 ε g(θ 0 ) -P ε g(Θ 0 ) = |Eg(θ 1 ) -Eg(Θ ε )| ≤ E 1 ε (θ 0 ) + Ē1 ε (Θ 0 ) ≤ ε 2 ∥g∥ C 4 C(T * , θ 0 , ε 0 ) + ε 2 ∥g∥ C 4 C(T * , Θ 0 ) = O(ε 2 ). (42) For the N -th step iteration, since |Eg(θ N ) -Eg(Θ εN )| = P N ε g(θ 0 ) -P εN g(Θ 0\n) , and the RHS of the above equation can be written into a telescoping sum as\nP N ε g(θ 0 ) -P εN g(Θ 0 ) = N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) ,\nhence by application of Proposition 1, we obtain that\n|Eg(θ N ) -Eg(Θ εN )| ≤ N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) ≤ N l=1 P N -l ε • P 1 ε • P (l-1)ε -P ε • P (l-1)ε g(Θ 0 ) , since P 1 ε • P (l-1)ε -P ε • P (l-1)ε g(Θ 0\n) can be regarded as L 1 (R D ) if we choose measure µ to be the delta measure concentrated on Θ 0 . i.e., µ := δ Θ0 , hence by the conctration property of Markov operators, we obtain further that\n|Eg(θ N ) -Eg(Θ εN )| ≤ N l=1 P 1 ε • P (l-1)ε -P ε • P (l-1)ε g(Θ 0 ) ≤ N l=1 P 1 ε g(Θ (l-1)ε ) -P ε g(Θ (l-1)ε ) .\nBy taking expectation conditioned on Θ (l-1)ε , then similar to the relation (42), the following holds\nP 1 ε g(Θ (l-1)ε ) -P ε g(Θ (l-1)ε ) = E |Eg(θ l ) -Eg(Θ ε l)| Θ (l-1)ε ≤ E E 1 ε (Θ (l-1)ε ) + E Ē1 ε (Θ (l-1)ε ) ≤ ε 2 ∥g∥ C 4 C(T * , θ 0 , ε 0 ) + ε 2 ∥g∥ C 4 C(T * , Θ 0 ) = O(ε 2 ).\nWe remark that the last line of the above relation is essentially based on Assumption 2, since E E 1 ε (Θ (l-1)ε ) and E Ē1 ε (Θ (l-1)ε ) can be bounded above by the second, fourth and sixth moments of the solution to SDE (28), hence we may apply dominated convergence theorem to obtain the last line of the above relation.\nTo sum up, as\nP N ε g(θ 0 ) -P εN g(Θ 0 ) ≤ N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) = N O(ε 2 ), hence for N = N T,ε , P N ε g(θ 0 ) -P εN g(Θ 0 ) = N O(ε 2 ) = N εO(ε) ≤ T O(ε) = O(ε)." }, { "figure_ref": [], "heading": "E.3 Semigroup Expansion with Accuracy of Order Two", "publication_ref": [], "table_ref": [], "text": "Theorem 2 (Order-2 accuracy). 
Fix time\nT ≤ T * , if we choose b(Θ) = -∇ Θ L S (Θ) + ε 4 ∥∇ Θ L S (Θ)∥ 2 2 , σ(Θ) = √ ε (Σ (Θ)) 1 2 ,\nthen for all t ∈ [0, T ], the stochastic processes Θ t satisfying\ndΘ t = b (Θ t ) dt + σ (Θ t ) dW t , Θ 0 = Θ(0),(43)\nis an order-2 approximation of dropout (23), i.e., given any test function g ∈ C 6 b (R D ), there exists ε 0 > 0 and C(T, ∥g∥ C 6 , ε 0 ) > 0, such that for any ε ≤ ε 0 and T ≤ T * , and for all N ∈ [N T,ε ], the following holds:\n|Eg(θ N ) -Eg(Θ εN )| ≤ C(T, ∥g∥ C 6 , θ 0 , ε 0 )η, (44\n) where θ 0 = Θ 0 .\nProof. By application of Taylor's theorem with the Lagrange form of the remainder, we have that for some α ≥ 1,\ng(ϑ) -g( θ) = α s=1 1 s! D i1,...,ij =1 s j=1 ϑ (ij ) -θ(ij) ∂ s g ∂ϑ (i1) . . . ∂ϑ (ij ) ( θ) + 1 (α + 1)! D i1,...,ij =1 α+1 j=1 ϑ (ij ) -θ(ij) ∂ α+1 g ∂ϑ (i1) . . . ∂ϑ (ij ) (γϑ + (1 -γ) θ),\nfor some γ ∈ (0, 1).\nAs we choose ϑ := θ 1 , θ := θ 0 and α = 2, with slight misuse of the Frobenius inner product notation, we obtain that\ng(θ 1 ) -g(θ 0 ) = ⟨∇ θ g(θ 0 ), θ 1 -θ 0 ⟩ + 1 2 ∇ 2 θ g(θ 0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) + 1 6 ∇ 3 θ g(γθ 1 + (1 -γ)θ 0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) = ⟨∇ θ g(θ 0 ), θ 1 -θ 0 ⟩ + 1 2 ∇ 2 θ g(θ 0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) + 1 6 ∇ 3 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ),\nwhere θ0 := γθ 1 + (1 -γ)θ 0 , and we observe that since\nθ 1 -θ 0 = -ε∇ θ L S (θ) θ=θ0 + √ εV (θ 0 ), then Eg(θ 1 ) -Eg(θ 0 ) = ⟨∇ θ g(θ 0 ), Eθ 1 -Eθ 0 ⟩ + 1 2 ∇ 2 θ g(θ 0 ) : E [(θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 )] + 1 6 E ∇ 3 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) = -ε ∇ θ g(θ 0 ), ∇ θ L S (θ) θ=θ0 + ε 2 2 ∇ 2 θ g(θ 0 ) : ∇ θ L S (θ) θ=θ0 ⊗ ∇ θ L S (θ) θ=θ0 + Σ(θ 0 ) + E 2 ε (θ 0 ),\nwhere the remainder term E 2 ε (•) : R D → R, whose expression reads\nE 2 ε (θ 0 ) := 1 6 E ∇ 3 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ,(45)\nand we remark that θ0 and θ 1 are implicitly defined by θ 0 . Then, directly from Assumption 2, we obtain that\nE 2 ε (θ 0 ) ≤ 1 6 ∥g∥ C 6 E ∥θ 1 -θ 0 ∥ 3 2 = ε 3 ∥g∥ C 6 E ∇ θ R drop S (θ 0 ; η 1 ) 3 2 ≤ ε 3 ∥g∥ C 6 C(T * , θ 0 , ε 0 ), since ∇ θ L S (θ)\nand Σ (θ) can be bounded above by the second and fourth moments of the dropout iteration (23).\nWe observe that\nΘ ε -Θ 0 = ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s .\nAs we choose ϑ := Θ ε , θ := Θ 0 and α = 3, then we obtain that\ng(Θ ε ) -g(Θ 0 ) = ⟨∇ Θ g(Θ 0 ), Θ ε -Θ 0 ⟩ + 1 2 ∇ 2 Θ g(Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) + 1 6 ∇ 3 Θ g(Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) + 1 24 ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ), where Θ 0 := γΘ ε + (1 -γ)Θ 0 , for some γ ∈ (0, 1). 
Then Eg(Θ ε ) -Eg(Θ 0 ) = ⟨∇ Θ g(Θ 0 ), EΘ ε -EΘ 0 ⟩ + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) = ∇ Θ g(Θ 0 ), ε 0 E[b(Θ s )]ds + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ,\nand since\n⟨∇ Θ g(Θ 0 ), E[b(Θ s )]⟩ = ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dv,\nthen we obtain that\nEg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 0 s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dvds + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ,\nand once again since\nL ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v ) = L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + v 0 L (L ⟨∇ Θ g(Θ 0 ), b⟩) (Θ u )du,\nthen we obtain that\nEg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 0 s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dvds + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 0 s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 )dvds + ε 0 s 0 v 0 L (L ⟨∇ Θ g(Θ 0 ), b⟩) (Θ u )dudvds + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + Ē2 ε (Θ 0 ),\nwhere the remainder term Ē2 ε (•) : R D → R, whose expression reads\nĒ2 ε (Θ 0 ) := ε 0 s 0 v 0 L (L ⟨∇ Θ g(Θ 0 ), b⟩) (Θ u )dudvds + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ,(46)\nand we remark that Θ 0 and Θ ε are implicitly defined by Θ 0 . As we choose\nb (Θ) = -∇ Θ L S (Θ) + ε 4 ∥∇ Θ L S (Θ)∥ 2 2 , σ (Θ) = √ ε (Σ (Θ)) 1 2 ,\nthen we carry out the computation for\nL (L ⟨∇ Θ g(Θ 0 ), b⟩) (Θ u ), L (L ⟨∇ Θ g(Θ 0 ), b⟩) (Θ u ) =L (⟨b, ∇ Θ (⟨∇ Θ g(Θ 0 ), b⟩)⟩) (Θ u ) + L ε 2 Σ : ∇ 2 Θ (⟨∇ Θ g(Θ 0 ), b⟩) (Θ u ) = ⟨b, ∇ Θ (⟨b, ∇ Θ (⟨∇ Θ g(Θ 0 ), b⟩)⟩)⟩ + ε 2 Σ : ∇ Θ b, ∇ 2 Θ (⟨∇ Θ g(Θ 0 ), b⟩) + ε 2 b, ∇ Θ Σ : ∇ 2 Θ (⟨∇ Θ g(Θ 0 ), b⟩) + ε 2 4 Σ : ∇ 2 Θ Σ : ∇ 2 Θ (⟨∇ Θ g(Θ 0 ), b⟩) =b ⊺ ∇ Θ (b ⊺ ∇ Θ b∇ Θ g(Θ 0 )) (Θ u ) + εR ε (Θ u ) = ∇ Θ L S (Θ u ), ∇ Θ 1 2 ∇ Θ ∥∇ Θ L S (Θ u )∥ 2 2 , ∇ Θ g(Θ 0 ) + εR ′ ε (Θ u ), since ∇ Θ L S (Θ), ∇ 2 Θ L S (Θ), ∇ 3 Θ L S (Θ), Σ (Θ), R ε (Θ u ) and R ′ ε (Θ u\n) can be bounded above by the second, fourth and sixth moments of the solution to SDE (28). Moreover, we observe that\nE [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] =E ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s ⊗ ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s ⊗ ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s ,\nand its entry can be categorized into four types. 
The first one is the pure drift part, i.e.,\nε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds,\nthen by application of the mean value theorem and the fact that\n∇ Θ L S (Θ), ∇ 2 Θ L S (Θ), ∇ 3 Θ L S (Θ)\n, and Σ (Θ) can be bounded above by the second, fourth and sixth moments of the solution to SDE (28), we obtain that\nE ε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds =ε 3 Eb( Θ s ) ⊗ b( Θ s ) ⊗ b( Θ s ) = O(ε 3 ).\nThe second one is the pure noise part, i.e.,\nε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s ,\nand as the odd moments of zero mean Gaussian variables are zero, hence we have\nE ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s = 0,\nthe third and fourth one are both of the mixed part, for the third one\nε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds ⊗ ε 0 σ(Θ s )dW s ,\nwhose expectation is of course zero since the drift part and the noise part is independent, and the fact the odd moments of zero mean Gaussian variables are zero, and for the fourth one\nε 0 b(Θ s )ds ⊗ ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s ,\nwe obtain that\nE ε 0 b(Θ s )ds ⊗ ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s =εEb( Θ s ) ⊗ E ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s = O(ε 3 ).\nAs we denote\nR3 (Θ 0 ) := E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] ,\nthen we obtain that\nvec( R3 (Θ 0 )) 2 ≤ ε 3 C(T * , Θ 0 ).\nHence we may apply the mean value theorem to ( 46) and obtain that\nĒ2 ε (Θ 0 ) = ε 0 s 0 vL (L ⟨∇ Θ g(Θ 0 ), b⟩) ( Θ u )dvds + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ≤ ε 0 s 0 v ∥g∥ C 6 C(T * , Θ 0 )dvds + 1 6 ∥g∥ C 6 ε 3 C(T * , Θ 0 ) + 1 24 ∥g∥ C 6 ∥Θ ε -Θ 0 ∥ 4 2 = ε 3 6 ∥g∥ C 6 C(T * , Θ 0 ) + 1 6 ∥g∥ C 6 ε 3 C(T * , Θ 0 ) + 1 24 ∥g∥ C 6 E ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s 4 2 ≤ ε 3 ∥g∥ C 6 C(T * , Θ 0 ) + 1 6 ∥g∥ C 6 ε 3 C(T * , Θ 0 ) + 4 24 ∥g∥ C 6 ε 3 E ∇ Θ L S ( Θ 0 ) 2 2 + 4 24 ∥g∥ C 6 E ε 0 σ(Θ s )dW s 4 2 ≤ ε 3 ∥g∥ C 6 C(T * , Θ 0 ) + 1 6 ∥g∥ C 6 ε 3 C(T * , Θ 0 ) + 4 24 ∥g∥ C 6 ε 3 E ∇ Θ L S ( Θ 0 ) 2 2 + C 24 ∥g∥ C 6 E ε 0 ∥σ(Θ s )∥ 4 F ds ≤ ε 3 ∥g∥ C 6 C(T * , Θ 0 ) + 1 6 ∥g∥ C 6 ε 3 C(T * , Θ 0 ) + ε 3 ∥g∥ C 6 E ∇ Θ L S ( Θ 0 ) 2 2 + C ∥g∥ C 6 εE ε 2 Σ( Θ 0 ) 2 F ≤ ε 3 ∥g∥ C 6 C(T * , Θ 0 ).\nWe remark that for the last but third line we apply the Burkholder-Davis-Gundy inequality.\nTo sum up for now,\nEg(θ 1 ) -Eg(θ 0 ) = -ε ∇ θ g(θ 0 ), ∇ θ L S (θ) θ=θ0 + ε 2 2 ∇ 2 θ g(θ 0 ) : ∇ θ L S (θ) θ=θ0 ⊗ ∇ θ L S (θ) θ=θ0 + Σ(θ 0 ) + E 2 ε (θ 0 ),and\nEg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + Ē2 ε (Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s ⊗ ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s + Ē2 ε (Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s + Ē2 ε (Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 ε 0 b(Θ s ) ⊗ b(Θ u )dsdu + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s + Ē2 ε (Θ 0 ),\nwe observe that\n1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s =E ε 0 1 2 ∇ 2 Θ g(Θ 0 ) : σσ ⊺ (Θ s )ds = ε 2 E ε 0 ∇ 2 Θ g(Θ 0 ) : Σ(Θ s )ds , thus Eg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 ε 0 b(Θ s ) ⊗ b(Θ u )dsdu + ε 2 E ε 0 ∇ 2 Θ g(Θ 0 ) : Σ(Θ s )ds + Ē2 ε (Θ 0 ). 
Since ∇ 2 Θ g(Θ 0 ) : E [b(Θ s ) ⊗ b(Θ u )] =∇ 2 Θ g(Θ 0 ) : E[b(Θ s ) ⊗ b(Θ 0 )] + u 0 L ∇ 2 Θ g(Θ 0 ) : b(Θ s ) ⊗ b(Θ v ) dv =∇ 2 Θ g(Θ 0 ) : E[b(Θ 0 ) ⊗ b(Θ 0 )] + s 0 L ∇ 2 Θ g(Θ 0 ) : b(Θ w ) ⊗ b(Θ 0 ) dw + u 0 L ∇ 2 Θ g(Θ 0 ) : b(Θ s ) ⊗ b(Θ v ) dv,\nand since\n∇ 2 Θ g(Θ 0 ) : E [Σ(Θ s )] =∇ 2 Θ g(Θ 0 ) : E [Σ(Θ 0 )] + s 0 L ∇ 2 Θ g(Θ 0 ) : Σ(Θ s ) dv,\nwe are one step away to finish our proof,\nEg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 ε 0 b(Θ 0 ) ⊗ b(Θ 0 )dsdu + ε 2 E ε 0 ∇ 2 Θ g(Θ 0 ) : Σ(Θ 0 )ds + Ē2 ε (Θ 0 ),\nwhere we misuse our notations for Ē2 ε (Θ 0 ), and the term\nε 0 ε 0 s 0 L ∇ 2 Θ g(Θ 0 ) : b(Θ w ) ⊗ b(Θ 0 ) dwdsdu + ε 0 ε 0 u 0 L ∇ 2 Θ g(Θ 0 ) : b(Θ s ) ⊗ b(Θ v ) dvdsdu + ε 0 s 0 L ∇ 2 Θ g(Θ 0 ) : Σ(Θ s ) dvds,\nis included, and Ē2 ε (Θ 0 ) is still of order O(ε 3 ) by similar reasoning and we omit its demonstration. Thus\nEg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 ⟨b(Θ 0 ), ∇ Θ ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 )⟩ + ε 3 2 Σ(Θ 0 ) : ∇ 2 Θ ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + ε 2 2 ∇ 2 Θ g(Θ 0 ) : E [b(Θ 0 ) ⊗ b(Θ 0 )] + ε 2 2 E ∇ 2 Θ g(Θ 0 ) : Σ(Θ 0 ) + Ē2 ε (Θ 0 ),\nand recall that since we choose\nb (Θ) = -∇ Θ L S (Θ) + ε 4 ∥∇ Θ L S (Θ)∥ 2 2 , σ (Θ) = √ ε (Σ (Θ)) 1 2 , then Eg(Θ ε ) -Eg(Θ 0 ) = -ε ⟨∇ Θ g(Θ 0 ), ∇ Θ (L S (Θ)) | Θ=Θ0 ⟩ - ε 2 4 ∇ Θ g(Θ 0 ), ∇ Θ ∥∇ Θ L S (Θ)∥ 2 2 | Θ=Θ0 + ε 2 2 ⟨∇ Θ (L S (Θ)) | Θ=Θ0 , ∇ Θ ⟨∇ Θ g(Θ 0 ), ∇ Θ (L S (Θ))⟩ | Θ=Θ0 ⟩ + ε 2 2 ∇ 2 Θ g(Θ 0 ) : (∇ Θ (L S (Θ)) | Θ=Θ0 ) ⊗ ∇ Θ (L S (Θ)) | Θ=Θ0 ) + ε 2 2 ∇ 2 Θ g(Θ 0 ) : Σ(Θ 0 ) + Ē2 ε (Θ 0 ) = -ε ⟨∇ Θ g(Θ 0 ), ∇ Θ (L S (Θ)) | Θ=Θ0 ⟩ + ε 2 2 ∇ 2 Θ g(Θ 0 ) : (∇ Θ (L S (Θ)) | Θ=Θ0 ) ⊗ ∇ Θ (L S (Θ)) | Θ=Θ0 ) + ε 2 2 ∇ 2 Θ g(Θ 0 ) : Σ(Θ 0 ) + Ē2 ε (Θ 0 ), thus, we have |Eg(θ 1 ) -Eg(Θ ε )| = Eg(θ 0 ) -ε ∇ θ g(θ 0 ), ∇ θ L S (θ) θ=θ0 + ε 2 2 ∇ 2 θ g(θ 0 ) : ∇ θ L S (θ) θ=θ0 ⊗ ∇ θ L S (θ) θ=θ0 + Σ(θ 0 ) + E 2 ε (θ 0 ) -Eg(Θ 0 ) + ε ⟨∇ Θ g(Θ 0 ), ∇ Θ (L S (Θ)) | Θ=Θ0 ⟩ - ε 2 2 ∇ 2 Θ g(Θ 0 ) : (∇ Θ (L S (Θ)) | Θ=Θ0 ) ⊗ ∇ Θ (L S (Θ)) | Θ=Θ0 ) - ε 2 2 ∇ 2 Θ g(Θ 0 ) : Σ(Θ 0 ) + Ē2 ε (Θ 0 ) ≤ E 2 ε (θ 0 ) + Ē2 ε (Θ 0 ) ≤ ε 3 ∥g∥ C 6 C(T * , θ 0 , ε 0 ) + ε 3 ∥g∥ C 6 C(T * , Θ 0 ) = O(ε 3 ).\nFor the N -th step iteration, since |Eg(θ N ) -Eg(Θ εN )| = P N ε g(θ 0 ) -P εN g(Θ 0 ) , and the RHS of the above equation can be written into a telescoping sum as\nP N ε g(θ 0 ) -P εN g(Θ 0 ) = N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) ,\nhence by application of Proposition 1, we obtain that\n|Eg(θ N ) -Eg(Θ εN )| ≤ N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) ≤ N l=1 P N -l ε • P 1 ε • P (l-1)ε -P ε • P (l-1)ε g(Θ 0 ) ,\nsince P 1 ε • P (l-1)ε -P ε • P (l-1)ε g(Θ 0 ) can be regarded as L 1 (R D ) if we choose measure µ to be the delta measure concentrated on Θ 0 . 
i.e., µ := δ Θ0 , hence by the conctration property of Markov operators, we obtain further that\n|Eg(θ N ) -Eg(Θ εN )| ≤ N l=1 P 1 ε • P (l-1)ε -P ε • P (l-1)ε g(Θ 0 ) ≤ N l=1 P 1 ε g(Θ (l-1)ε ) -P ε g(Θ (l-1)ε ) .\nBy taking expectation conditioned on Θ (l-1)ε , then similar to the relation (42), the following holds\nP 1 ε g(Θ (l-1)ε ) -P ε g(Θ (l-1)ε ) = E |Eg(θ l ) -Eg(Θ ε l)| Θ (l-1)ε ≤ E E 2 ε (Θ (l-1)ε ) + E Ē2 ε (Θ (l-1)ε ) ≤ ε 3 ∥g∥ C 6 C(T * , θ 0 , ε 0 ) + ε 3 ∥g∥ C 6 C(T * , Θ 0 ) = O(ε 3 ).\nWe remark that the last line of the above relation is essentially based on Assumption 2, since E E 2 ε (Θ (l-1)ε ) and E Ē2 ε (Θ (l-1)ε ) can be bounded above by the second, fourth and sixth moments of the solution to SDE (28), hence we may apply dominated convergence theorem to obtain the last line of the above relation.\nTo sum up, as\nP N ε g(θ 0 ) -P εN g(Θ 0 ) ≤ N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) = N O(ε 3 ), hence for N = N T,ε , P N ε g(θ 0 ) -P εN g(Θ 0 ) = N O(ε 3 ) = N εO(ε) ≤ T O(ε 2 ) = O(ε 2 )." }, { "figure_ref": [], "heading": "F Validation for Assumption 1", "publication_ref": [], "table_ref": [], "text": "In this section, we endeavor to demonstrate the validity of Assumption 1. We begin this section by making some estimates on the modified loss L S and covariance Σ." }, { "figure_ref": [], "heading": "F.1 Estimates on Modified Loss and Covariance", "publication_ref": [], "table_ref": [], "text": "For the modified loss, recall that θ = vec({q r } m r=1 ) = vec ({(a r , w r )} m r=1 ), as we have\n∇ q k L S (Θ) = 1 n n i=1 e i ∇ q k (a k σ(w ⊺ k x i )) + 1 -p np n i=1 a k σ(w ⊺ k x i )∇ q k (a k σ(w ⊺ k x i )) ,\nand under the usual convention that for all i ∈\n[n], 1 c ≤ ∥x i ∥ 2 , |y i | ≤ c,\nwhere c is some universal constant, and that σ(0) = 0, we obtain that\n|e i | = m r=1 a r σ(w ⊺ r x i ) -y i ≤ 1 + m r=1 |a r | ∥w r ∥ 2 ≤ 1 + 1 2 m r=1 |a r | 2 + ∥w r ∥ 2 2 ≤ 1 + ∥Θ∥ 2 2 , hence ∥∇ q k L S (Θ)∥ 2 ≤ 1 + ∥Θ∥ 2 2 ∥q k ∥ 2 + 1 -p p ∥q k ∥ 3 2 , thus we have ∥∇ Θ L S (Θ)∥ 2 ≤ 1 + ∥Θ∥ 2 2 ∥Θ∥ 2 + 1 -p p ∥Θ∥ 3 2 ≤ C p (1 + ∥Θ∥ 3 2 ). 
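As a quick numerical sanity check of the expression for grad_{q_k} L_S(Theta) displayed at the beginning of this subsection (equivalently, of the identity that the mask-averaged dropout gradient equals grad L_S behind the modified loss (26)), one can compare the Monte Carlo average of the dropout gradient with the closed-form gradient on a tiny random instance; all sizes below are illustrative assumptions.

    import numpy as np

    # Tiny check that the mask-averaged dropout gradient matches grad L_S (sizes illustrative).
    rng = np.random.default_rng(2)
    n, d, m, p = 16, 3, 8, 0.8
    X = rng.normal(size=(n, d)); y = rng.normal(size=n)
    a = rng.normal(size=m); W = rng.normal(size=(m, d))
    Z = X @ W.T; S = np.tanh(Z); dS = 1 - S**2
    e = S @ a - y                                    # residuals e_i of the full (unmasked) network

    # closed-form gradient of the modified loss L_S in (26)
    ga = S.T @ e / n + (1 - p) / (n * p) * a * (S**2).sum(axis=0)
    gW = ((np.outer(e, a) * dS).T @ X) / n
    gW += (1 - p) / (n * p) * (a**2)[:, None] * ((S * dS).T @ X)

    # Monte Carlo average of grad R_S^drop over dropout masks
    K = 200000
    ETA = (rng.random((K, m)) < p) / p
    E = ETA @ (a[:, None] * S.T) - y                 # residuals e_i(theta; eta) for every mask
    A = ETA.T @ E / K                                # A[k, i] approximates E[(eta)_k e_i(theta; eta)]
    mc_ga = (A * S.T).sum(axis=1) / n
    mc_gW = a[:, None] * ((A * dS.T) @ X) / n
    print(np.abs(mc_ga - ga).max(), np.abs(mc_gW - gW).max())   # both shrink as K grows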
Moreover, since ∇ 2 Θ L S (Θ) = 1 n n i=1 ∇ Θ e i ⊗ ∇ Θ e i + e i ∇ 2 Θ e i + 1 -p np n i=1 diag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 ,\nas we denote only for now × as matrix multiplication,\n∇ 2 Θ L S (Θ)∇ Θ L S (Θ) = 1 n n i=1 ∇ Θ e i ⊗ ∇ Θ e i + e i ∇ 2 Θ e i + 1 -p np n i=1 diag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 × 1 n n i=1 e i ∇ Θ e i + 1 -p np n i=1 ∇ Θ a 2 k σ(w ⊺ k x i ) 2 , then the components in ∇ 2 Θ L S (Θ)∇ Θ L S (Θ) can be categorized into six different types: Firstly, ∥(∇ Θ e i ⊗ ∇ Θ e i ) e j ∇ Θ e j ∥ 2 ≤ |e j | ∥∇ Θ e i ∥ 2 2 ∥∇ Θ e j ∥ 2 ≤ 1 + ∥Θ∥ 2 2 ∥Θ∥ 3 2 ≤ 1 + ∥Θ∥ 5 2 .\nSecondly,\ne i ∇ 2 Θ e i e j ∇ Θ e j 2 ≤ 1 + ∥Θ∥ 2 2 2 ∇ 2 Θ e i 2→2 ∥∇ Θ e j ∥ 2 ≤ 1 + ∥Θ∥ 4 2 ∥Θ∥ 2 2 ≤ 1 + ∥Θ∥ 6 2 .\nThirdly,\ndiag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 e j ∇ Θ e j 2 ≤ 1 + ∥Θ∥ 2 2 diag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 2→2 ∥Θ∥ 2 ≤ 1 + ∥Θ∥ 2 2 1 + ∥Θ∥ 3 2 ∥Θ∥ 2 ≤ 1 + ∥Θ∥ 6 2 .\nFourthly,\n(∇ Θ e i ⊗ ∇ Θ e i ) ∇ Θ a 2 k σ(w ⊺ k x j ) 2 2 ≤ ∥∇ Θ e i ∥ 2 2 ∥Θ∥ 3 2 ≤ 1 + ∥Θ∥ 5 2 .\nFifthly,\ne i ∇ 2 Θ e i ∇ Θ a 2 k σ(w ⊺ k x j ) 2 2 ≤ 1 + ∥Θ∥ 2 2 ∇ 2 Θ e i 2→2 ∥Θ∥ 3 2 ≤ 1 + ∥Θ∥ 2 2 ∥Θ∥ 4 2 ≤ 1 + ∥Θ∥ 6 2 .\nFinally,\ndiag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 ∇ Θ a 2 k σ(w ⊺ k x j ) 2 2 ≤ diag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 2→2 ∥Θ∥ 3 2 ≤ 1 + ∥Θ∥ 3 2 ∥Θ∥ 3 2 ≤ 1 + ∥Θ∥ 6 2 .\nTo sum up, for the drift term b(Θ), regardless of the choice of first order or second order accuracy, we obtain that\n∥b(Θ)∥ 2 ≤ 1 + ∥Θ∥ 6 2 .\nAs for the covariance Σ, recall that θ = vec({q r } m r=1 ) = vec ({(a r , w r )} m r=1 ), then we obtain that the covariance Σ reads\nΣ =     Σ 11 Σ 12 • • • Σ 1m Σ 21 Σ 22 • • • Σ 2m . . . . . . . . . . . . Σ m1 Σ m2 • • • Σ mm     .\nFor each k ∈ [m], we obtain that\nΣ kk (Θ) = 1 p -1 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) + 1 p 2 - 1 p m l=1,l̸ =k 1 n n i=1 a l σ(w ⊺ l x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a l σ(w ⊺ l x i )∇ q k (a k σ(w ⊺ k x i ))\n,\nand for each k, r ∈ [m] with k ̸ = r, Σ kr (Θ) = 1 p -1 1 n n i=1 e i,\\k,\\r + 1 p a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a k σ(w ⊺ k x i )∇ qr (a r σ(w ⊺ r x i )) + 1 p -1 1 n n i=1 a r σ(w ⊺ r x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k,\\r + a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ qr (a r σ(w ⊺ r x i )) ,\nhence we obtain that\n∥Σ kk (Θ)∥ 2 F ≤ C p   e i,\\k + 1 p a k σ(w ⊺ k x i ) 2 + m l=1,l̸ =k a 2 l σ(w ⊺ l x i ) 2   ∥∇ Θ e i ∥ 2 2 ≤ C p (1 + ∥Θ∥ 2 2 ) 2 ∥Θ∥ 2 2 ≤ (1 + ∥Θ∥6\n2 ), and by similar reasoning \n∥Σ kr (Θ)∥ 2 F ≤ (1 + ∥Θ∥ 6 2 ).\nM (Θ) := b(Θ) if ∥Θ∥ 2 ≤ M, b(M Θ ∥Θ∥ 2 ) if ∥Θ∥ 2 > M.\nWe also perform similar truncation to σ(Θ) and obtain its truncation σ M (Θ). Then b M and σ M satisfy the Lipschitz condition and the linear growth condition, hence by application of the classical results (Oksendal, 2013, Theorem 5.2.1) in SDE, there exists a unique solution Θ\nM (•) to the truncated SDE dΘ t = b M (Θ t ) dt + σ M (Θ t ) dW t , Θ 0 = Θ(0).(47)\nWe may choose M large enough, such that ∥Θ 0 ∥ 2 < M, and the solution to SDE (28) coincides with the solution to SDE (47) at least for a period of time T * > 0 since ∥Θ 0 ∥ 2 < M . We remark that T * is the desired time in Assumption 2. 
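The truncation of the drift used above admits a one-line implementation; the sketch below, in which b stands for an arbitrary drift callable, only illustrates the definition of b_M, and sigma is truncated in the same way.

    import numpy as np

    # b_M(theta) = b(theta) if ||theta||_2 <= M, and b(M * theta / ||theta||_2) otherwise.
    def truncate(b, M):
        def b_M(theta):
            nrm = np.linalg.norm(theta)
            return b(theta) if nrm <= M else b(M * theta / nrm)
        return b_M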
We also remark that not only for any time t ∈ [0, T * ], the second, fourth and sixth moments of the solution to SDE (28) are uniformly bounded with respect to time t, but also that for any time t ∈ [0, T * ], all moments of the solution to SDE (28) are uniformly bounded with respect to time t.\nAt this point, it is important to discuss that we prove is that for fixed time T , we can take the learning rate ε > 0 small enough so that the SME is a good approximation of the distribution of the dropout iterates. What we did not prove is that for fixed ε, the approximations hold for arbitrary time T . In particular, it is not hard to construct systems where for fixed ε, both the SME and the asymptotic expansion fails when time T is large enough." }, { "figure_ref": [], "heading": "F.3 Moment Estimates of the Dropout Iteration", "publication_ref": [], "table_ref": [], "text": "Recall that the dropout iteration reads\nθ N = θ N -1 -ε∇ θ R drop S (θ N -1 ; η N ) , then we obtain that E ∥θ N ∥ 2l 2 = E ∥θ N -1 ∥ 2l 2 -2lεE ∥θ N -1 ∥ 2l-2 2 θ N -1 , ∇ θ R drop S (θ N -1 ; η N ) + O(ε 2 ),\nthen for learning rate ε small enough, we observe that {E ∥θ N ∥ 2l 2 } N ≥0 follows close to the trajectory of a ordinary differential equation (ODE). Moreover, from the estimates obtained in Section F.1,\n∥θ N -1 ∥ 2l-2 2 θ N -1 , ∇ θ R drop S (θ N -1 ; η N ) ≤ ∥θ N -1 ∥ 2l-1 2 ∇ θ R drop S (θ N -1 ; η N ) 2 = ∥θ N -1 ∥ 2l-1 2 e N i ∇ θ e N i 2 ≤ ∥θ N -1 ∥ 2l-1 2 C p (1 + ∥θ N -1 ∥ 2 2 ) ∥θ N -1 ∥ 2 ≤C p (1 + ∥θ N -1 ∥ 2l+2 2\n), we remark that as the above estimates hold almost surely, then for learning rate ε small enough, we may apply Gronwall inequality to {E ∥θ N ∥ 2l 2 } N ≥0 and shows that for some N * , all moments of the dropout iterations are uniformly bounded with respect to N , since for the ODE\ndu dt = 1 + u 1+λ , u 0 := u(0),(48)\nwith λ > 0. There exists time T * > 0, such that for any time t ∈ [0, T * ], its solution {u t } t≥0 is uniformly bounded with respect to time t. And since for small enough learning rate, all moments of the dropout iterations {E ∥θ N ∥ 2l 2 } N ≥0 follows close to the trajectory of ODEs of (48) type, hence all these moments are also uniformly bounded with respect to N ." }, { "figure_ref": [], "heading": "G Some Computations on the Covariance", "publication_ref": [], "table_ref": [], "text": "Once again, since θ = vec({q r } m r=1 ) = vec ({(a r , w r )} m r=1 ), then the covariance of ∇ θ R drop S (θ N -1 ; η N ) equals to the matrix Σ(θ N -1 ), and as we denote for any k, r ∈ [m],\nΣ kr (θ N -1 ) := Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) , then Σ =     Σ 11 Σ 12 • • • Σ 1m Σ 21 Σ 22 • • • Σ 2m . . . . . . . . . . . . Σ m1 Σ m2 • • • Σ mm     ." }, { "figure_ref": [], "heading": "G.1 Elements on the Diagonal", "publication_ref": [], "table_ref": [], "text": "In this part, we compute\nΣ kk for all k ∈ [m]. 
Σ kk (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ q k R drop S (θ N -1 ; η N ) = 1 n 2 n i,j=1 Cov e N i (η N ) k , e N j (η N ) k ∇ q k (a k σ(w ⊺ k x i )) ⊗ ∇ q k (a k σ(w ⊺ k x j )) , in order to compute Cov e N i (η N ) k , e N j (η N ) k , we need to compute firstly E e N i e N j (η N ) 2 k , and since E e N i e N j (η N ) 2 k consists of four parts, one of which is E     m k ′ =1,k ′ ̸ =k (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i     m l=1,l̸ =k (η N ) l a l σ(w ⊺ l x j ) -y j   (η N ) 2 k   =E     m k ′ =1,k ′ ̸ =k (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i     m l=1,l̸ =k (η N ) l a l σ(w ⊺ l x j ) -y j     E (η N ) 2 k = 1 p E   m k ′ =1,k ′ ̸ =k (η N ) 2 k ′ a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   + E   k ′ ̸ =l, k ′ ,l̸ =k (η N ) k ′ (η N ) l a k ′ a l σ(w ⊺ k ′ x i )σ(w ⊺ l x j )   -y i E   m k ′ =1,k ′ ̸ =k (η N ) k ′ a k ′ σ(w ⊺ k ′ x j )   -y j E   m k ′ =1,k ′ ̸ =k (η N ) k ′ a k ′ σ(w ⊺ k ′ x i )   + y i y j = 1 p 2 m k ′ =1,k ′ ̸ =k a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j ) + 1 p k ′ ̸ =l, k ′ ,l̸ =k a k ′ a l σ(w ⊺ k ′ x i )σ(w ⊺ l x j ) - y i p m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x j ) - y j p m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x i ) + y i y j p = 1 p   m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x i ) -y i     m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x j ) -y j   + 1 p 2 - 1 p   m k ′ =1,k ′ ̸ =k a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   ,\nand the second part reads\nE   (η N ) k a k σ(w ⊺ k x i )   m l=1,l̸ =k (η N ) l a l σ(w ⊺ l x j ) -y j   (η N ) 2 k   = a k σ(w ⊺ k x i ) p 2   m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x j ) -y j   ,\nand by symmetry, the third part reads\nE   (η N ) k a k σ(w ⊺ k x j )   m l=1,l̸ =k (η N ) l a l σ(w ⊺ l x i ) -y i   (η N ) 2 k   = a k σ(w ⊺ k x j ) p 2   m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x i ) -y i   ,\nand finally, the fourth part reads\nE (η N ) k a k σ(w ⊺ k x i )(η N ) k a k σ(w ⊺ k x j )(η N ) 2 k = 1 p 3 a 2 k σ(w ⊺ k x i )σ(w ⊺ k x j ).\nTo sum up, \nE e N i e N j (η N ) 2 k = 1 p 2 - 1 p   m k ′ =1,k ′ ̸ =k a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   + 1 p e i,\nx i )σ(w ⊺ k x j ), hence Cov e N i (η N ) k , e N j (η N ) k =E e N i e N j (η N ) 2 k -E e N i (η N ) k E e N i (η N ) k = 1 p 2 - 1 p   m k ′ =1,k ′ ̸ =k a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   + 1 p -1 e i,\nx i )σ(w ⊺ k x j ) = 1 p -1 E e N i (η N ) k E e N j (η N ) k + 1 p 2 - 1 p   m k ′ =1,k ′ ̸ =k a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   ,\nby summation over the indices i and j, for each k ∈ [m], the covariance matrix reads:\nΣ kk (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ q k R drop S (θ N -1 ; η N ) = 1 p -1 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) + 1 p 2 - 1 p m l=1,l̸ =k 1 n n i=1 a l σ(w ⊺ l x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1\na l σ(w ⊺ l x i )∇ q k (a k σ(w ⊺ k x i )) ." 
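The closed-form diagonal block Sigma_kk just obtained can be checked against the empirical covariance of grad_{q_k} R_S^drop over dropout masks. The sketch below does so for a tiny random instance; the sizes, the tanh activation and the number of mask samples are illustrative assumptions.

    import numpy as np

    # Monte Carlo check of the closed-form block Sigma_kk above on a tiny random instance.
    rng = np.random.default_rng(3)
    n, d, m, p, k = 12, 3, 6, 0.8, 2
    X = rng.normal(size=(n, d)); y = rng.normal(size=n)
    a = rng.normal(size=m); W = rng.normal(size=(m, d))
    Z = X @ W.T; S = np.tanh(Z); dS = 1 - S**2
    G = np.concatenate([S[:, [k]], a[k] * dS[:, [k]] * X], axis=1)  # row i: grad_{q_k}(a_k sigma(w_k^T x_i))
    e_wo_k = S @ a - y - a[k] * S[:, k]                             # e_{i,\k}

    # closed-form Sigma_kk from the display above
    u = ((e_wo_k + a[k] * S[:, k] / p)[:, None] * G).mean(axis=0)
    Sigma_kk = (1 / p - 1) * np.outer(u, u)
    for l in range(m):
        if l != k:
            v = ((a[l] * S[:, l])[:, None] * G).mean(axis=0)
            Sigma_kk += (1 / p**2 - 1 / p) * np.outer(v, v)

    # empirical covariance of grad_{q_k} R_S^drop over dropout masks
    K = 200000
    ETA = (rng.random((K, m)) < p) / p
    E = ETA @ (a[:, None] * S.T) - y                 # residuals for every mask, shape (K, n)
    samples = ETA[:, [k]] * (E @ G) / n              # grad_{q_k} R_S^drop for each mask, shape (K, d+1)
    print(np.abs(np.cov(samples, rowvar=False) - Sigma_kk).max())   # shrinks as K grows

The same comparison applies verbatim to the off-diagonal blocks computed next.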
}, { "figure_ref": [], "heading": "G.2 Elements off the Diagonal", "publication_ref": [], "table_ref": [], "text": "In this part, we compute Σ kr for all k, r ∈ [m],\nwhere k ̸ = r.\nΣ kr (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) = 1 n 2 n i,j=1\nCov e N i (η N ) k , e N j (η N ) r ∇ q k (a k σ(w ⊺ k x i )) ⊗ ∇ qr (a k σ(w ⊺ k x j )) , in order to compute Cov e N i (η N ) k , e N j (η N ) r , we need to compute firstly E e N i e N j (η N ) k (η N ) r , and since E e N i e N j (η N ) k (η N ) r consists of nine parts, one of which is\nE     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i     m l=1,l̸ =k,l̸ =r (η N ) l a l σ(w ⊺ l x j ) -y j   (η N ) k (η N ) r   =E     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i     m l=1,l̸ =k,l̸ =r (η N ) l a l σ(w ⊺ l x j ) -y j     E [(η N ) k (η N ) r ] = 1 p m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j ) + k ′ ̸ =l and k ′ ,l̸ =k,r a k ′ a l σ(w ⊺ k ′ x i )σ(w ⊺ l x j ) -y i m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a k ′ σ(w ⊺ k ′ x j ) -y j m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a k ′ σ(w ⊺ k ′ x i ) + y i y j =   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a k ′ σ(w ⊺ k ′ x i ) -y i     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a k ′ σ(w ⊺ k ′ x j ) -y j   + 1 p -1   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )  \n= e i,\\k,\\r e j,\\k,\\r\n+ 1 p -1   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   ,\nand the second part reads\nE     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i   (η N ) k a k σ(w ⊺ k x j )(η N ) k (η N ) r   =E   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i   a k σ(w ⊺ k x j )E (η N ) 2 k (η N ) r =\na k σ(w ⊺ k x j ) p e i,\\k,\\r , by similar reasoning and symmetry, the third part reads\nE     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i   (η N ) r a r σ(w ⊺ r x j )(η N ) k (η N ) r   =E   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i   a r σ(w ⊺ r x j )E (η N ) k (η N ) 2 r =\na r σ(w ⊺ r x j ) p e i,\\k,\\r , also by similar reasoning and symmetry, the fourth part reads\nE     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x j ) -y j   (η N ) k a k σ(w ⊺ k x i )(η N ) k (η N ) r   =E   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x j ) -y j   a k σ(w ⊺ k x i )E (η N ) 2 k (η N ) r = a k σ(w ⊺ k x i ) p\ne j,\\k,\\r , and the fifth part reads\nE [(η N ) k a k σ(w ⊺ k x i )(η N ) k a k σ(w ⊺ k x j )(η N ) k (η N ) r ] = E (η N ) 3 k (η N ) r a 2 k σ(w ⊺ k x i )σ(w ⊺ k x j ) = 1 p 2 a 2 k σ(w ⊺ k x i )σ(w ⊺ k x j ),\nand the sixth part reads\nE [(η N ) k a k σ(w ⊺ k x i )(η N ) r a r σ(w ⊺ r x j )(η N ) k (η N ) r ] =E (η N ) 2 k (η N ) 2 r a k a r σ(w ⊺ k x i )σ(w ⊺ r x j ) = 1 p 2 a k a r σ(w ⊺ k x i )σ(w ⊺ r x j ),\nalso by similar reasoning and symmetry, the seventh part reads\nE     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x j ) -y j   (η N ) r a r σ(w ⊺ r x i )(η N ) k (η N ) r   =E   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x j ) -y j   a r σ(w ⊺ r x i )E (η N ) k (η N ) 2 r =\na r σ(w ⊺ r x i ) p e j,\\k,\\r , and the eighth part reads\nE [(η N ) r a r σ(w ⊺ r x i )(η N ) k a k σ(w ⊺ k x j )(η N ) k (η N ) r ] = E (η N ) 2 k (η N ) 2 r a k a r σ(w ⊺ k x i )σ(w ⊺ r x j ) = 1 p 2 a k a r σ(w ⊺ k x i )σ(w ⊺ r x j ),\nand the ninth part reads E [(η N ) r a r σ(w ⊺ r x i )(η N ) r a r σ(w ⊺ r x j )(η N ) k (η N ) r ] =E (η N ) k (η N ) 3 r a 2 r 
σ(w ⊺ r x i )σ(w ⊺ r x j ) = 1 p 2 a 2 r σ(w ⊺ r x i )σ(w ⊺ r x j ). \nx i )σ(w ⊺ k x j ) + 1 p 2 -1 a r a k σ(w ⊺ r x i )σ(w ⊺ k x j ),\nby summation over the indices i and j, the covariance matrix reads\nΣ kr (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) = 1 p -1 1 n n i=1 e i,\\k,\\r + 1 p a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a k σ(w ⊺ k x i )∇ qr (a r σ(w ⊺ r x i )) + 1 p -1 1 n n i=1 a r σ(w ⊺ r x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1\ne i,\\k,\\r + a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ qr (a r σ(w ⊺ r x i )) ," }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work is sponsored by the National Key R&D Program of China Grant No. 2022YFA1008200 (Z. X., T. L.), the Shanghai Sailing Program, the Natural Science Foundation of Shanghai Grant No. 20ZR1429000 (Z. X.), the National Natural Science Foundation of China Grant No. 62002221 (Z. X.), the National Natural Science Foundation of China Grant No. 12101401 (T. L.), Shanghai Municipal Science and Technology Key Project No. 22JC1401500 (T. L.), Shanghai Municipal of Science and Technology Major Project No. 2021SHZDZX0102, and the HPC of School of Mathematical Sciences and the Student Innovation Center, and the Siyuan-1 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University." }, { "figure_ref": [], "heading": "H The structural similarity between Hessian and covariance", "publication_ref": [ "b7" ], "table_ref": [], "text": "We can derive the Hessian of the loss landscape in the expectation sense with respect to the dropout noise η and the covariance matrix of dropout noise under intuitive approximations. We first show our assumptions as follows: Assumption 1. The NN piece-wise linear activation. Assumption 2. The parameters of NN's output layer are fixed during training. Assumption 3. We study the loss landscape after training reaches a stable stage, i.e., the loss function in the sense of expectation is small enough,\nHessian matrix with dropout regularization Based on the Assumption 1, 2, the Hessian matrix of the loss function with respect to f drop θ,η (x) can be written in the mean sense as:\nwhere\nProof. We first compute the Hessian matrix after taking expectations with respect to the dropout variable,\nThe first and second terms on the RHS of the Eq. ( 49) are as follows,\nNote that for linear activate function,\nThus the Eq. ( 49) can be rewritten as\nCovariance matrix with dropout regularization Based on the Assumption 3, the covariance matrix of the loss function under the randomness of dropout variable η and data x can be written as:\n,\nProof. For simplicity, we approximate the loss function through Taylor expansion, which is also used in Wei et al. (2020),\nwhere ℓ(f θ (x i ; η),\n2 . The covariance matrix under dropout regularization is\nCombining the properties of the dropout variable η, we have,\nWe calculate the two terms on the RHS of the Eq. ( 50) separately:\nThus the Eq. ( 50) can be rewritten as\n(e i ) 2 • ∇ qr (a r σ(w ⊺ r x i )) ⊗ ∇ qr (a r σ(w ⊺ r x i )).\nNote that\n(a r σ(w ⊺ r x i )) 2 = E η 2ℓ(f θ (x i ; η), y i ),\nwe have\n(ℓ(f θ (x i ), y i )) • ∇ qr (a r σ(w ⊺ r x i )) ⊗ ∇ qr (a r σ(w ⊺ r x i ))." } ]
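The structural similarity between the Hessian and the covariance described in this section can also be probed numerically. The sketch below builds the Hessian-type approximation displayed above for a tiny random two-layer ReLU network (matching the piecewise-linear assumption), estimates the covariance of the full dropout gradient by Monte Carlo over masks, and reports their Frobenius cosine; the network, data, mask-sample count and the choice of alignment measure are our own illustrative assumptions, not the paper's experimental protocol.

    import numpy as np

    # Illustrative comparison of the Hessian-type approximation of this section with the
    # covariance of the dropout gradient on a tiny random two-layer ReLU network.
    rng = np.random.default_rng(4)
    n, d, m, p = 64, 4, 10, 0.8
    D = m * (d + 1)
    X = rng.normal(size=(n, d)); y = rng.normal(size=n)
    a = rng.normal(size=m); W = rng.normal(size=(m, d))
    Z = X @ W.T; S = np.maximum(Z, 0.0); dS = (Z > 0).astype(float)

    # T[i, r, :] = grad_{q_r}(a_r sigma(w_r^T x_i)) = (sigma(w_r^T x_i), a_r sigma'(w_r^T x_i) x_i)
    T = np.empty((n, m, d + 1))
    T[:, :, 0] = S
    T[:, :, 1:] = a[None, :, None] * dS[:, :, None] * X[:, None, :]
    Gflat = T.reshape(n, D)                          # rows: grad_theta f_theta(x_i)

    H = Gflat.T @ Gflat / n                          # (1/n) sum_i of outer products of grad f(x_i)
    Blk = np.zeros((D, D))
    for r in range(m):                               # block-diagonal correction term
        sl = slice(r * (d + 1), (r + 1) * (d + 1))
        Blk[sl, sl] = T[:, r, :].T @ T[:, r, :] / n
    H_approx = H + (1 - p) / p * Blk

    # empirical covariance of grad_theta R_S^drop over dropout masks
    K = 20000
    ETA = (rng.random((K, m)) < p) / p
    E = ETA @ (a[:, None] * S.T) - y                 # residuals for each mask, shape (K, n)
    M = np.einsum('si,irj->srj', E, T) / n           # (1/n) sum_i e_i grad_{q_r}(a_r sigma(w_r^T x_i))
    grads = (ETA[:, :, None] * M).reshape(K, D)      # per-mask grad_theta R_S^drop
    Sigma_emp = np.cov(grads, rowvar=False)

    align = np.sum(H_approx * Sigma_emp) / (np.linalg.norm(H_approx) * np.linalg.norm(Sigma_emp))
    print(align)                                     # Frobenius cosine between H_approx and Sigma_emp

Since the covariance also carries the loss-dependent factors noted above, the agreement measured this way is structural rather than exact.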
References

Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. R. Improving neural networks by preventing co-adaptation of feature detectors. 2012.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 2014.
Tan, M. and Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. PMLR, 2019.
Helmbold, D. P. and Long, P. M. On the inductive bias of dropout. The Journal of Machine Learning Research, 2015.
Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., and Tang, P. T. P. On large-batch training for deep learning: Generalization gap and sharp minima. 2016.
Feng, Y. and Tu, Y. The inverse variance-flatness relation in stochastic gradient descent is critical for finding flat minima. Proceedings of the National Academy of Sciences, 2021.
Zhu, Z., Wu, J., Yu, B., Wu, L., and Ma, J. The anisotropic noise in stochastic gradient descent: Its behavior of escaping from sharp minima and regularization effects. 2018.
Wei, C., Kakade, S., and Ma, T. The implicit and explicit regularization effects of dropout. PMLR, 2020.
Zhang, Z. and Xu, Z.-Q. J. Implicit regularization of dropout. 2022.
Li, Q., Tai, C., and Weinan, E. Stochastic modified equations and adaptive stochastic gradient algorithms. PMLR, 2017.
Neyshabur, B., Bhojanapalli, S., McAllester, D., and Srebro, N. Exploring generalization in deep learning. 2017.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. 1998.
Krizhevsky, A. Learning multiple layers of features from tiny images. 2009.
Elliott, D., Frank, S., Sima'an, K., and Specia, L. Multi30k: Multilingual English-German image descriptions. 2016.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. 2016.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. 2017.
Wager, S., Wang, S., and Liang, P. S. Dropout training as adaptive regularization. Advances in Neural Information Processing Systems, 2013.
McAllester, D. A PAC-Bayesian tutorial with a dropout bound. 2013.
Wan, L., Zeiler, M., Zhang, S., LeCun, Y., and Fergus, R. Regularization of neural networks using DropConnect. 2013.
Mou, W., Zhou, Y., Gao, J., and Wang, L. Dropout training, data-dependent regularization, and generalization bounds. PMLR, 2018.
Mianjy, P. and Arora, R. On convergence and generalization of dropout training. Advances in Neural Information Processing Systems, 2020.
Li, Q., Tai, C., and Weinan, E. Stochastic modified equations and dynamics of stochastic gradient algorithms I: Mathematical foundations. The Journal of Machine Learning Research, 2019.
Feng, Y., Li, L., and Liu, J.-G. Semi-groups of stochastic gradient descent and online principal component analysis: properties and diffusion approximations. 2017.
Malladi, S., Lyu, K., Panigrahi, A., and Arora, S. On the SDEs and scaling rules for adaptive gradient algorithms. 2022.
Li, H., Xu, Z., Taylor, G., Studer, C., and Goldstein, T. Visualizing the loss landscape of neural nets. 2017.
Jastrzebski, S., Kenton, Z., Arpit, D., Ballas, N., Fischer, A., Bengio, Y., and Storkey, A. Three factors influencing minima in SGD. 2017.
Jastrzebski, S., Kenton, Z., Ballas, N., Fischer, A., Bengio, Y., and Storkey, A. On the relation between the sharpest directions of DNN loss and the SGD step length. 2018.
Wu, L., Ma, C., and E, W. How SGD selects the global minima in over-parameterized learning: A dynamical stability perspective. Advances in Neural Information Processing Systems, 2018.
Papyan, V. The full spectrum of deepnet Hessians at scale: Dynamics with SGD training and sample size. 2018.
Papyan, V. Measurements of three-level hierarchical structure in the outliers in the spectrum of deepnet Hessians. 2019.
Wu, L., Wang, M., and Su, W. The alignment property of SGD noise and how it helps select flat minima: A stability analysis. Advances in Neural Information Processing Systems, 2022.
Kloeden, P. and Platen, E. Numerical Solution of Stochastic Differential Equations. Stochastic Modelling and Applied Probability, Springer, 2011.
Shwartz-Ziv, R. and Tishby, N. Opening the black box of deep neural networks via information. 2017.
Meyn, S. P. and Tweedie, R. L. Markov chains and stochastic stability. Springer Science & Business Media, 2012.
}, { "formula_coordinates": [ 15, 226.5, 528.81, 278.51, 39.32 ], "formula_id": "formula_44", "formula_text": "For each k ∈ [m], (η N ) k = 1 p with probability p, 0 with probability 1 -p,(19)" }, { "formula_coordinates": [ 15, 233.39, 586.2, 183.05, 9.68 ], "formula_id": "formula_45", "formula_text": "F N := {σ(η 1 , η 2 , • • • η N )} forms a filtration." }, { "formula_coordinates": [ 15, 240.4, 614.15, 264.27, 30.2 ], "formula_id": "formula_46", "formula_text": "f θ (x; η) := m r=1 (η) r a r σ(w ⊺ r x),(20)" }, { "formula_coordinates": [ 15, 193.56, 658.81, 306.96, 66.53 ], "formula_id": "formula_47", "formula_text": "R drop S (θ; η) : = 1 2n n i=1 (f θ (x i ; η) -y i ) 2 = 1 2n n i=1 m r=1 (η) r a r σ(w ⊺ r x i ) -y i 2 . (21" }, { "formula_coordinates": [ 15, 500.52, 688.28, 4.15, 8.64 ], "formula_id": "formula_48", "formula_text": ")" }, { "formula_coordinates": [ 16, 226.39, 92.17, 278.28, 13.83 ], "formula_id": "formula_49", "formula_text": "θ N = θ N -1 -ε∇ θ R drop S (θ N -1 ; η N ) ,(22)" }, { "formula_coordinates": [ 16, 213.11, 130.07, 185.78, 12.69 ], "formula_id": "formula_50", "formula_text": "e N i := e i (θ N -1 ; η N ) := f θ N -1 (x i ; η N ) -y i ," }, { "formula_coordinates": [ 16, 231.34, 170, 149.32, 30.32 ], "formula_id": "formula_51", "formula_text": "R drop S (θ N -1 ; η N ) = 1 2n n i=1 e N i 2 ," }, { "formula_coordinates": [ 16, 180.88, 224.06, 250.24, 30.32 ], "formula_id": "formula_52", "formula_text": "θ N -θ N -1 = -ε∇ θ R drop S (θ N -1 ; η N ) = - ε n n i=1 e N i ∇ θ e N i ," }, { "formula_coordinates": [ 17, 241.68, 133.44, 262.99, 30.32 ], "formula_id": "formula_53", "formula_text": "θ N = θ N -1 - ε n n i=1 e N i ∇ θ e N i ,(23)" }, { "formula_coordinates": [ 17, 262.64, 193.01, 237.88, 9.68 ], "formula_id": "formula_54", "formula_text": "θ N = F (θ N -1 , η N ), (24" }, { "formula_coordinates": [ 17, 107.64, 193.36, 397.03, 23.38 ], "formula_id": "formula_55", "formula_text": ") where F (•, •) : R D × R m → R D is a smooth (C ∞ )" }, { "formula_coordinates": [ 17, 163.88, 278.31, 284.24, 30.32 ], "formula_id": "formula_56", "formula_text": "E θ N -1 n i=1 e N i ∇ q k e N i = E θ N -1 n i=1 e N i (η N ) k ∇ q k (a k σ(w ⊺ k x i )) ," }, { "formula_coordinates": [ 17, 148.23, 329.42, 315.54, 126.2 ], "formula_id": "formula_57", "formula_text": "E θ N -1 e N i (η N ) k = E θ N -1   m r=1,r̸ =k (η N ) r a r σ(w ⊺ r x i ) -y i   E θ N -1 [(η N ) k ] + E θ N -1 (η N ) 2 k a k σ(w ⊺ k x i ) =   m r=1,r̸ =k a r σ(w ⊺ r x i ) -y i   + 1 p a k σ(w ⊺ k x i ) = m r=1 a r σ(w ⊺ r x i ) -y i + 1 p -1 a k σ(w ⊺ k x i )." 
}, { "formula_coordinates": [ 17, 213.04, 478.76, 185.92, 63.62 ], "formula_id": "formula_58", "formula_text": "e i := e i (θ) := m r=1 a r σ(w ⊺ r x i ) -y i , e i,\\k := e i,\\k (θ) := m r=1,r̸ =k a r σ(w ⊺ r x i ) -y i ," }, { "formula_coordinates": [ 17, 206.23, 582.16, 298.44, 49.51 ], "formula_id": "formula_59", "formula_text": "E θ N -1 e N i (η N ) k = e i,\\k + 1 p a k σ(w ⊺ k x i ) = e i + 1 p -1 a k σ(w ⊺ k x i ).(25)" }, { "formula_coordinates": [ 17, 154.56, 656.84, 302.88, 63.51 ], "formula_id": "formula_60", "formula_text": "E θ N -1 n i=1 e N i (η N ) k ∇ q k (a k σ(w ⊺ k x i )) = n i=1 e i ∇ q k (a k σ(w ⊺ k x i )) + n i=1 1 p -1 a k σ(w ⊺ k x i )∇ q k (a k σ(w ⊺ k x i )) ," }, { "formula_coordinates": [ 18, 197.38, 100.02, 307.29, 30.32 ], "formula_id": "formula_61", "formula_text": "L S (θ) := 1 2n n i=1 e 2 i + 1 -p 2np n i=1 m r=1 a 2 r σ(w ⊺ r x i ) 2 ,(26)" }, { "formula_coordinates": [ 18, 154.66, 184.16, 302.68, 15.5 ], "formula_id": "formula_62", "formula_text": "θ N -θ N -1 = -εE θ N -1 ∇ θ R drop S (θ N -1 ; η N ) = -ε∇ θ L S (θ) θ=θ N -1 ," }, { "formula_coordinates": [ 18, 197.72, 315.18, 306.95, 20.42 ], "formula_id": "formula_63", "formula_text": "θ N -θ N -1 = -ε∇ θ L S (θ) θ=θ N -1 + √ εV (θ N -1 ),(27)" }, { "formula_coordinates": [ 18, 108, 375.11, 397.74, 24.29 ], "formula_id": "formula_64", "formula_text": "Σ(•) : R m(d+1) → R m(d+1)×m(d+1) is the covariance of ∇ θ R drop S (θ N -1 ; η N ). Recall that θ = vec({q r } m r=1 ) = vec ({(a r , w r )} m r=1" }, { "formula_coordinates": [ 18, 108, 416.29, 342.81, 103.27 ], "formula_id": "formula_65", "formula_text": "Σ kr (θ N -1 ) := Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) , then Σ =     Σ 11 Σ 12 • • • Σ 1m Σ 21 Σ 22 • • • Σ 2m . . . . . . . . . . . . Σ m1 Σ m2 • • • Σ mm     ." 
}, { "formula_coordinates": [ 18, 150.94, 565.71, 302.23, 154.65 ], "formula_id": "formula_66", "formula_text": "Σ kk (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ q k R drop S (θ N -1 ; η N ) = 1 p -1 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) + 1 p 2 - 1 p m l=1,l̸ =k 1 n n i=1 a l σ(w ⊺ l x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a l σ(w ⊺ l x i )∇ q k (a k σ(w ⊺ k x i ))" }, { "formula_coordinates": [ 19, 108, 75.16, 372.16, 172.62 ], "formula_id": "formula_67", "formula_text": "and for each k, r ∈ [m] with k ̸ = r, Σ kr (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) = 1 p -1 1 n n i=1 e i,\\k,\\r + 1 p a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a k σ(w ⊺ k x i )∇ qr (a r σ(w ⊺ r x i )) + 1 p -1 1 n n i=1 a r σ(w ⊺ r x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k,\\r + a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ qr (a r σ(w ⊺ r x i )) ," }, { "formula_coordinates": [ 19, 346.05, 278.74, 68.72, 13.06 ], "formula_id": "formula_68", "formula_text": "a l σ(w ⊺ l x i ) -y i ," }, { "formula_coordinates": [ 19, 207.35, 379.07, 297.32, 9.68 ], "formula_id": "formula_69", "formula_text": "dΘ t = b (Θ t ) dt + σ (Θ t ) dW t , Θ 0 = Θ(0),(28)" }, { "formula_coordinates": [ 19, 186.46, 418.89, 239.09, 17.62 ], "formula_id": "formula_70", "formula_text": "Θ εN = Θ ε(N -1) + εb Θ ε(N -1) + √ εσ Θ ε(N -1) Z N ," }, { "formula_coordinates": [ 19, 255.38, 465.43, 249.28, 41.73 ], "formula_id": "formula_71", "formula_text": "b (Θ) := -∇ Θ L S (Θ), σ (Θ) := √ ε (Σ (Θ)) 1 2 , Θ 0 := θ 0 ,(29)" }, { "formula_coordinates": [ 19, 108, 549.85, 395.7, 24.62 ], "formula_id": "formula_72", "formula_text": "R D × R D × • • • × R D × • • • , whereas {Θ t } t≥0 induces a probability measure on C [0, ∞), R D ." }, { "formula_coordinates": [ 19, 146.4, 688.72, 319.2, 34.54 ], "formula_id": "formula_73", "formula_text": "C M b R m(d+1) =    f ∈ C M R m(d+1) ∥f ∥ C M := |β|≤M D β f ∞ < ∞    ," }, { "formula_coordinates": [ 20, 207.35, 178.84, 197.31, 9.68 ], "formula_id": "formula_74", "formula_text": "dΘ t = b (Θ t ) dt + σ (Θ t ) dW t , Θ 0 = Θ(0)," }, { "formula_coordinates": [ 20, 256.29, 224.45, 99.43, 26.29 ], "formula_id": "formula_75", "formula_text": "E t 0 ∥Θ s (•)∥ 2 2 ds < ∞." }, { "formula_coordinates": [ 20, 235, 285.56, 265.52, 19.5 ], "formula_id": "formula_76", "formula_text": "sup 0≤s≤T * E ∥Θ s (•)∥ 2l 2 ≤ C(T * , Θ 0 ). 
(30" }, { "formula_coordinates": [ 20, 500.52, 288.85, 4.15, 8.64 ], "formula_id": "formula_77", "formula_text": ")" }, { "formula_coordinates": [ 20, 108, 345.08, 396.67, 41.37 ], "formula_id": "formula_78", "formula_text": "N T * ,ε ], there exists C(T * , θ 0 , ε 0 ) > 0, such that sup 0≤N ≤[N T * ,ε ] E ∥θ N ∥ 2l 2 ≤ C(T * , θ 0 , ε 0 ).(31)" }, { "formula_coordinates": [ 20, 108, 483.74, 54.94, 12.55 ], "formula_id": "formula_79", "formula_text": "g ∈ C M b R m(" }, { "formula_coordinates": [ 20, 108, 483.74, 396.67, 34.09 ], "formula_id": "formula_80", "formula_text": "T ≤ T * , then for all N ∈ [N T,ε ], |Eg(Θ εN ) -Eg(θ N )| ≤ C(T, g, ε 0 )ε α .(32)" }, { "formula_coordinates": [ 21, 108, 153.35, 396, 39.94 ], "formula_id": "formula_81", "formula_text": "B b (X ) → B b (X ) satisfying • P1 = 1;" }, { "formula_coordinates": [ 21, 135.4, 220.95, 368.1, 19.65 ], "formula_id": "formula_82", "formula_text": "• If a sequence {φ n } ⊂ B b (X ) converges pointwise to an element φ ∈ B b (X ), then Pφ n converges pointwise to Pφ;" }, { "formula_coordinates": [ 21, 135.4, 285.03, 102.29, 47.14 ], "formula_id": "formula_83", "formula_text": "• (Pf (x)) + ≤ Pf + (x); • (Pf (x)) -≤ Pf -(x); • |Pf (x)| ≤ P|f (x)|." }, { "formula_coordinates": [ 21, 108, 343.3, 396, 45.19 ], "formula_id": "formula_84", "formula_text": "X → R is said to be an element of L 1 (X ) if X |f |dµ < ∞." }, { "formula_coordinates": [ 21, 135.4, 413.06, 72.76, 11.14 ], "formula_id": "formula_85", "formula_text": "• ∥Pf ∥ 1 ≤ ∥f ∥ 1 ." }, { "formula_coordinates": [ 21, 205.67, 451.68, 199.47, 22.05 ], "formula_id": "formula_86", "formula_text": "f + (x) = max(f (x), 0) = f (x) if f (x) > 0, 0 otherwise." }, { "formula_coordinates": [ 21, 159.5, 501.4, 291.8, 22.05 ], "formula_id": "formula_87", "formula_text": "f -(x) = max(-f (x), 0) = -min(f (x), 0) = -f (x) if f (x) < 0, 0 otherwise." }, { "formula_coordinates": [ 21, 196.65, 571.31, 218.71, 28.64 ], "formula_id": "formula_88", "formula_text": "(Pf ) + = Pf + -Pf -+ = max 0, Pf + -Pf - ≤ max 0, Pf + = Pf + ." }, { "formula_coordinates": [ 21, 196.43, 624.11, 219.15, 28.64 ], "formula_id": "formula_89", "formula_text": "(Pf ) -= Pf + -Pf --= max 0, Pf --Pf + ≤ max 0, Pf -= Pf -." }, { "formula_coordinates": [ 21, 244.12, 676.22, 123.76, 42.22 ], "formula_id": "formula_90", "formula_text": "|Pf | = (Pf ) + + (Pf ) - ≤ Pf + + Pf - = P f + + f -= P|f |." }, { "formula_coordinates": [ 22, 215.79, 95.84, 288.88, 43.86 ], "formula_id": "formula_91", "formula_text": "∥Pf ∥ 1 = X |Pf | dµ ≤ X P |f | dµ = X |f | dµ = ∥f ∥ 1 .(33)" }, { "formula_coordinates": [ 22, 220.29, 203.61, 171.42, 14.21 ], "formula_id": "formula_92", "formula_text": "∥P n f ∥ 1 = P • P n-1 f 1 ≤ P n-1 f 1 ." 
}, { "formula_coordinates": [ 22, 230.35, 290.34, 151.29, 9.68 ], "formula_id": "formula_93", "formula_text": "P (X t ∈ A | X s ) = (P t-s 1 A ) (X s ) ," }, { "formula_coordinates": [ 22, 107.53, 322.82, 397.14, 44.27 ], "formula_id": "formula_94", "formula_text": "C M b (R D ) ⊂ B b (R D ), then WLOG we fix g ∈ C M b (R D ) and define P ε g( θ) := E g θ -ε∇ θ R drop S (θ; η) | θ= θ .(35)" }, { "formula_coordinates": [ 22, 237.39, 464.55, 267.28, 22.31 ], "formula_id": "formula_95", "formula_text": "Lg := ⟨b, ∇ Θ g⟩ + 1 2 σσ ⊺ : ∇ 2 Θ g.(36)" }, { "formula_coordinates": [ 22, 108, 588.14, 392.52, 74.01 ], "formula_id": "formula_96", "formula_text": "T ≤ T * , if we choose b (Θ) := -∇ Θ L S (Θ), σ (Θ) := √ ε (Σ (Θ)) 1 2 , then for all t ∈ [0, T ], the stochastic processes Θ t satisfying dΘ t = b (Θ t ) dt + σ (Θ t ) dW t , Θ 0 = Θ(0),(38" }, { "formula_coordinates": [ 22, 209.26, 700.08, 291.25, 11.18 ], "formula_id": "formula_97", "formula_text": "|Eg(θ N ) -Eg(Θ εN )| ≤ C(T, ∥g∥ C 4 , θ 0 , ε 0 )η,(39" }, { "formula_coordinates": [ 23, 122.71, 102.87, 366.57, 67.24 ], "formula_id": "formula_98", "formula_text": "g(ϑ) -g( θ) = α s=1 1 s! D i1,...,ij =1 s j=1 ϑ (ij ) -θ(ij) ∂ s g ∂ϑ (i1) . . . ∂ϑ (ij ) ( θ) + 1 (α + 1)! D i1,...,ij =1 α+1 j=1 ϑ (ij ) -θ(ij) ∂ α+1 g ∂ϑ (i1) . . . ∂ϑ (ij ) (γϑ + (1 -γ) θ)," }, { "formula_coordinates": [ 23, 256.35, 200.59, 99.31, 30.32 ], "formula_id": "formula_99", "formula_text": "x (i) x (i) := D i=1 x (i) x (i) ." }, { "formula_coordinates": [ 23, 150.29, 259.9, 311.42, 60 ], "formula_id": "formula_100", "formula_text": "g(θ 1 ) -g(θ 0 ) = ⟨∇ θ g(θ 0 ), θ 1 -θ 0 ⟩ + 1 2 ∇ 2 θ g(γθ 1 + (1 -γ)θ 0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) = ⟨∇ θ g(θ 0 ), θ 1 -θ 0 ⟩ + 1 2 ∇ 2 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 )," }, { "formula_coordinates": [ 23, 108, 337.87, 369.76, 81.2 ], "formula_id": "formula_101", "formula_text": "θ 1 -θ 0 = -ε∇ θ L S (θ) θ=θ0 + √ εV (θ 0 ), then Eg(θ 1 ) -Eg(θ 0 ) = ⟨∇ θ g(θ 0 ), Eθ 1 -Eθ 0 ⟩ + 1 2 E ∇ 2 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) = -ε ∇ θ g(θ 0 ), ∇ θ L S (θ) θ=θ0 + E 1 ε (θ 0 )," }, { "formula_coordinates": [ 23, 199.64, 444.82, 305.03, 22.31 ], "formula_id": "formula_102", "formula_text": "E 1 ε (θ 0 ) := 1 2 E ∇ 2 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ,(40)" }, { "formula_coordinates": [ 23, 108, 499.22, 338.46, 80.69 ], "formula_id": "formula_103", "formula_text": "E 1 ε (θ 0 ) = 1 2 E ∇ 2 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ≤ 1 2 ∥g∥ C 4 E ∥θ 1 -θ 0 ∥ 2 2 = ε 2 ∥g∥ C 4 E ∇ θ R drop S (θ 0 ; η 1 ) 2 2 ≤ ε 2 ∥g∥ C 4 C(T * , θ 0 , ε 0 ), since ∇ θ L S (θ)" }, { "formula_coordinates": [ 23, 214.08, 609.76, 183.84, 26.29 ], "formula_id": "formula_104", "formula_text": "Θ ε -Θ 0 = ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s ." 
}, { "formula_coordinates": [ 23, 182.4, 661.19, 247.21, 36.02 ], "formula_id": "formula_105", "formula_text": "g(Θ ε ) -g(Θ 0 ) = ⟨∇ Θ g(Θ 0 ), Θ ε -Θ 0 ⟩ + 1 2 ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )," }, { "formula_coordinates": [ 23, 252.37, 713.17, 107.26, 9.68 ], "formula_id": "formula_106", "formula_text": "Θ 0 := γΘ ε + (1 -γ)Θ 0 ," }, { "formula_coordinates": [ 24, 139.83, 89.59, 332.34, 62.92 ], "formula_id": "formula_107", "formula_text": "Eg(Θ ε ) -Eg(Θ 0 ) = ⟨∇ Θ g(Θ 0 ), EΘ ε -EΘ 0 ⟩ + 1 2 E ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) = ∇ Θ g(Θ 0 ), ε 0 E[b(Θ s )]ds + 1 2 E ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ," }, { "formula_coordinates": [ 24, 140.5, 165.67, 331, 26.29 ], "formula_id": "formula_108", "formula_text": "⟨∇ Θ g(Θ 0 ), E[b(Θ s )]⟩ = ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dv," }, { "formula_coordinates": [ 24, 107.64, 204.73, 397.03, 133.78 ], "formula_id": "formula_109", "formula_text": "Eg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 0 s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dvds + 1 2 E ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), b(Θ 0 )⟩ + ε 2 Ē1 ε (Θ 0 ), where the remainder term Ē1 ε (•) : R D → R, whose expression reads Ē1 ε (Θ 0 ) := ε 0 s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dvds + 1 2 E ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ,(41)" }, { "formula_coordinates": [ 24, 108, 358.46, 354.75, 95.68 ], "formula_id": "formula_110", "formula_text": "(Θ) = -∇ Θ L S (Θ), σ (Θ) = √ ε (Σ (Θ)) 1 2 , then we carry out the computation for L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v ), L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v ) = ⟨∇ Θ L S (Θ v ), ∇ Θ ⟨∇ Θ g(Θ 0 ), ∇ Θ L S (Θ)⟩ | Θ=Θv ⟩ + ε 2 Σ (Θ v ) : ∇ 2 Θ (⟨∇ Θ g(Θ 0 ), ∇ Θ L S (Θ)⟩) | Θ=Θv , since ∇ Θ L S (Θ), ∇ 2 Θ L S (Θ), ∇ 3 Θ L S (Θ)" }, { "formula_coordinates": [ 24, 124.58, 475.04, 360.43, 246.31 ], "formula_id": "formula_111", "formula_text": "Ē1 ε (Θ 0 ) = ε 0 sL ⟨∇ Θ g(Θ 0 ), b⟩ ( Θ s )ds + 1 2 E ∇ 2 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ≤ ε 0 s ∥g∥ C 4 C(T * , Θ 0 )ds + 1 2 ∥g∥ C 4 E ∥Θ ε -Θ 0 ∥ 2 2 ≤ ε 2 2 ∥g∥ C 4 C(T * , Θ 0 ) + ∥g∥ C 4 E ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s 2 2 ≤ ε 2 2 ∥g∥ C 4 C(T * , Θ 0 ) + 2 ∥g∥ C 4 E ε 0 b(Θ s )ds 2 2 + 2 ∥g∥ C 4 E ε 0 σ(Θ s )dW s 2 2 ≤ ε 2 2 ∥g∥ C 4 C(T * , Θ 0 ) + 2 ∥g∥ C 4 ε 2 E ∇ Θ L S ( Θ 0 ) 2 2 + 2 ∥g∥ C 4 E ε 0 ∥σ(Θ s )∥ 2 F ds ≤ ε 2 2 ∥g∥ C 4 C(T * , Θ 0 ) + 2 ∥g∥ C 4 ε 2 E ∇ Θ L S ( Θ 0 ) 2 2 + 2 ∥g∥ C 4 εE ε Σ( Θ 0 ) F ≤ ε 2 ∥g∥ C 4 C(T * , Θ 0 )." }, { "formula_coordinates": [ 25, 108, 90.66, 396.67, 153.61 ], "formula_id": "formula_112", "formula_text": "|Eg(θ 1 ) -Eg(Θ ε )| = Eg(θ 0 ) -ε ∇ θ g(θ 0 ), ∇ θ L S (θ) θ=θ0 + E 1 ε (θ 0 ) -Eg(Θ 0 ) -ε ⟨∇ Θ g(Θ 0 ), b(Θ 0 )⟩ + Ē1 ε (Θ 0 ) , since θ 0 = Θ 0 and b (Θ 0 ) = -∇ Θ L S (Θ) θ=θ0 , thus P 1 ε g(θ 0 ) -P ε g(Θ 0 ) = |Eg(θ 1 ) -Eg(Θ ε )| ≤ E 1 ε (θ 0 ) + Ē1 ε (Θ 0 ) ≤ ε 2 ∥g∥ C 4 C(T * , θ 0 , ε 0 ) + ε 2 ∥g∥ C 4 C(T * , Θ 0 ) = O(ε 2 ). 
(42) For the N -th step iteration, since |Eg(θ N ) -Eg(Θ εN )| = P N ε g(θ 0 ) -P εN g(Θ 0" }, { "formula_coordinates": [ 25, 145.93, 261.44, 320.14, 30.55 ], "formula_id": "formula_113", "formula_text": "P N ε g(θ 0 ) -P εN g(Θ 0 ) = N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) ," }, { "formula_coordinates": [ 25, 108, 309.47, 355.89, 81.62 ], "formula_id": "formula_114", "formula_text": "|Eg(θ N ) -Eg(Θ εN )| ≤ N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) ≤ N l=1 P N -l ε • P 1 ε • P (l-1)ε -P ε • P (l-1)ε g(Θ 0 ) , since P 1 ε • P (l-1)ε -P ε • P (l-1)ε g(Θ 0" }, { "formula_coordinates": [ 25, 167.07, 433.05, 274.54, 65.73 ], "formula_id": "formula_115", "formula_text": "|Eg(θ N ) -Eg(Θ εN )| ≤ N l=1 P 1 ε • P (l-1)ε -P ε • P (l-1)ε g(Θ 0 ) ≤ N l=1 P 1 ε g(Θ (l-1)ε ) -P ε g(Θ (l-1)ε ) ." }, { "formula_coordinates": [ 25, 142.07, 519.3, 331.17, 61.88 ], "formula_id": "formula_116", "formula_text": "P 1 ε g(Θ (l-1)ε ) -P ε g(Θ (l-1)ε ) = E |Eg(θ l ) -Eg(Θ ε l)| Θ (l-1)ε ≤ E E 1 ε (Θ (l-1)ε ) + E Ē1 ε (Θ (l-1)ε ) ≤ ε 2 ∥g∥ C 4 C(T * , θ 0 , ε 0 ) + ε 2 ∥g∥ C 4 C(T * , Θ 0 ) = O(ε 2 )." }, { "formula_coordinates": [ 25, 108, 649.47, 383.04, 60.47 ], "formula_id": "formula_117", "formula_text": "P N ε g(θ 0 ) -P εN g(Θ 0 ) ≤ N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) = N O(ε 2 ), hence for N = N T,ε , P N ε g(θ 0 ) -P εN g(Θ 0 ) = N O(ε 2 ) = N εO(ε) ≤ T O(ε) = O(ε)." }, { "formula_coordinates": [ 26, 211.67, 93.57, 188.65, 53.63 ], "formula_id": "formula_118", "formula_text": "T ≤ T * , if we choose b(Θ) = -∇ Θ L S (Θ) + ε 4 ∥∇ Θ L S (Θ)∥ 2 2 , σ(Θ) = √ ε (Σ (Θ)) 1 2 ," }, { "formula_coordinates": [ 26, 207.35, 174.22, 297.32, 9.68 ], "formula_id": "formula_119", "formula_text": "dΘ t = b (Θ t ) dt + σ (Θ t ) dW t , Θ 0 = Θ(0),(43)" }, { "formula_coordinates": [ 26, 209.26, 224.88, 291.25, 11.18 ], "formula_id": "formula_120", "formula_text": "|Eg(θ N ) -Eg(Θ εN )| ≤ C(T, ∥g∥ C 6 , θ 0 , ε 0 )η, (44" }, { "formula_coordinates": [ 26, 108, 225.22, 396.67, 24.25 ], "formula_id": "formula_121", "formula_text": ") where θ 0 = Θ 0 ." }, { "formula_coordinates": [ 26, 122.71, 291.87, 366.57, 67.24 ], "formula_id": "formula_122", "formula_text": "g(ϑ) -g( θ) = α s=1 1 s! D i1,...,ij =1 s j=1 ϑ (ij ) -θ(ij) ∂ s g ∂ϑ (i1) . . . ∂ϑ (ij ) ( θ) + 1 (α + 1)! D i1,...,ij =1 α+1 j=1 ϑ (ij ) -θ(ij) ∂ α+1 g ∂ϑ (i1) . . . 
∂ϑ (ij ) (γϑ + (1 -γ) θ)," }, { "formula_coordinates": [ 26, 140.84, 414.38, 330.31, 94.25 ], "formula_id": "formula_123", "formula_text": "g(θ 1 ) -g(θ 0 ) = ⟨∇ θ g(θ 0 ), θ 1 -θ 0 ⟩ + 1 2 ∇ 2 θ g(θ 0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) + 1 6 ∇ 3 θ g(γθ 1 + (1 -γ)θ 0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) = ⟨∇ θ g(θ 0 ), θ 1 -θ 0 ⟩ + 1 2 ∇ 2 θ g(θ 0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) + 1 6 ∇ 3 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 )," }, { "formula_coordinates": [ 26, 108, 528.44, 372.53, 150.24 ], "formula_id": "formula_124", "formula_text": "θ 1 -θ 0 = -ε∇ θ L S (θ) θ=θ0 + √ εV (θ 0 ), then Eg(θ 1 ) -Eg(θ 0 ) = ⟨∇ θ g(θ 0 ), Eθ 1 -Eθ 0 ⟩ + 1 2 ∇ 2 θ g(θ 0 ) : E [(θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 )] + 1 6 E ∇ 3 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) = -ε ∇ θ g(θ 0 ), ∇ θ L S (θ) θ=θ0 + ε 2 2 ∇ 2 θ g(θ 0 ) : ∇ θ L S (θ) θ=θ0 ⊗ ∇ θ L S (θ) θ=θ0 + Σ(θ 0 ) + E 2 ε (θ 0 )," }, { "formula_coordinates": [ 26, 173.52, 704.6, 331.15, 22.31 ], "formula_id": "formula_125", "formula_text": "E 2 ε (θ 0 ) := 1 6 E ∇ 3 θ g( θ0 ) : (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ⊗ (θ 1 -θ 0 ) ,(45)" }, { "formula_coordinates": [ 27, 108, 99.75, 338.46, 54.66 ], "formula_id": "formula_126", "formula_text": "E 2 ε (θ 0 ) ≤ 1 6 ∥g∥ C 6 E ∥θ 1 -θ 0 ∥ 3 2 = ε 3 ∥g∥ C 6 E ∇ θ R drop S (θ 0 ; η 1 ) 3 2 ≤ ε 3 ∥g∥ C 6 C(T * , θ 0 , ε 0 ), since ∇ θ L S (θ)" }, { "formula_coordinates": [ 27, 214.08, 183.29, 183.84, 26.29 ], "formula_id": "formula_127", "formula_text": "Θ ε -Θ 0 = ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s ." }, { "formula_coordinates": [ 27, 107.64, 232.76, 383.11, 288.56 ], "formula_id": "formula_128", "formula_text": "g(Θ ε ) -g(Θ 0 ) = ⟨∇ Θ g(Θ 0 ), Θ ε -Θ 0 ⟩ + 1 2 ∇ 2 Θ g(Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) + 1 6 ∇ 3 Θ g(Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) + 1 24 ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ), where Θ 0 := γΘ ε + (1 -γ)Θ 0 , for some γ ∈ (0, 1). 
Then Eg(Θ ε ) -Eg(Θ 0 ) = ⟨∇ Θ g(Θ 0 ), EΘ ε -EΘ 0 ⟩ + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) = ∇ Θ g(Θ 0 ), ε 0 E[b(Θ s )]ds + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ," }, { "formula_coordinates": [ 27, 140.5, 536.65, 331, 26.29 ], "formula_id": "formula_129", "formula_text": "⟨∇ Θ g(Θ 0 ), E[b(Θ s )]⟩ = ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dv," }, { "formula_coordinates": [ 27, 108, 578.86, 402.16, 98.81 ], "formula_id": "formula_130", "formula_text": "Eg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 0 s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dvds + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ," }, { "formula_coordinates": [ 27, 133.37, 695.06, 345.25, 26.29 ], "formula_id": "formula_131", "formula_text": "L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v ) = L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + v 0 L (L ⟨∇ Θ g(Θ 0 ), b⟩) (Θ u )du," }, { "formula_coordinates": [ 28, 108, 95.18, 393.02, 274.72 ], "formula_id": "formula_132", "formula_text": "Eg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 0 s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ v )dvds + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 0 s 0 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 )dvds + ε 0 s 0 v 0 L (L ⟨∇ Θ g(Θ 0 ), b⟩) (Θ u )dudvds + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + Ē2 ε (Θ 0 )," }, { "formula_coordinates": [ 28, 121.53, 406.83, 383.14, 74.83 ], "formula_id": "formula_133", "formula_text": "Ē2 ε (Θ 0 ) := ε 0 s 0 v 0 L (L ⟨∇ Θ g(Θ 0 ), b⟩) (Θ u )dudvds + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ,(46)" }, { "formula_coordinates": [ 28, 210.84, 519.3, 190.31, 37.19 ], "formula_id": "formula_134", "formula_text": "b (Θ) = -∇ Θ L S (Θ) + ε 4 ∥∇ Θ L S (Θ)∥ 2 2 , σ (Θ) = √ ε (Σ (Θ)) 1 2 ," }, { "formula_coordinates": [ 28, 132.88, 572.52, 341.68, 146.78 ], "formula_id": "formula_135", "formula_text": "L (L ⟨∇ Θ g(Θ 0 ), b⟩) (Θ u ), L (L ⟨∇ Θ g(Θ 0 ), b⟩) (Θ u ) =L (⟨b, ∇ Θ (⟨∇ Θ g(Θ 0 ), b⟩)⟩) (Θ u ) + L ε 2 Σ : ∇ 2 Θ (⟨∇ Θ g(Θ 0 ), b⟩) (Θ u ) = ⟨b, ∇ Θ (⟨b, ∇ Θ (⟨∇ Θ g(Θ 0 ), b⟩)⟩)⟩ + ε 2 Σ : ∇ Θ b, ∇ 2 Θ (⟨∇ Θ g(Θ 0 ), b⟩) + ε 2 b, ∇ Θ Σ : ∇ 2 Θ (⟨∇ Θ g(Θ 0 ), b⟩) + ε 2 4 Σ : ∇ 2 Θ Σ : ∇ 2 Θ (⟨∇ Θ g(Θ 0 ), b⟩) =b ⊺ ∇ Θ (b ⊺ ∇ Θ b∇ Θ g(Θ 0 )) (Θ u ) + εR ε (Θ u ) = ∇ Θ L S (Θ u ), ∇ Θ 1 2 ∇ Θ ∥∇ Θ L S (Θ u )∥ 2 2 , ∇ Θ g(Θ 0 ) + εR ′ ε (Θ u ), since ∇ Θ L S (Θ), ∇ 2 Θ L S (Θ), ∇ 3 Θ L S (Θ), Σ (Θ), R ε (Θ u ) and R ′ ε (Θ u" }, { "formula_coordinates": [ 29, 145.25, 107.43, 313.66, 75.83 ], "formula_id": "formula_136", "formula_text": "E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] =E ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s ⊗ ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s ⊗ ε 0 b(Θ s )ds + ε 0 σ(Θ s 
)dW s ," }, { "formula_coordinates": [ 29, 219.68, 215.39, 178.18, 26.29 ], "formula_id": "formula_137", "formula_text": "ε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds," }, { "formula_coordinates": [ 29, 357.55, 251.54, 142.92, 12.5 ], "formula_id": "formula_138", "formula_text": "∇ Θ L S (Θ), ∇ 2 Θ L S (Θ), ∇ 3 Θ L S (Θ)" }, { "formula_coordinates": [ 29, 207.5, 293.24, 196.99, 41.51 ], "formula_id": "formula_139", "formula_text": "E ε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds =ε 3 Eb( Θ s ) ⊗ b( Θ s ) ⊗ b( Θ s ) = O(ε 3 )." }, { "formula_coordinates": [ 29, 185.45, 365.51, 253.98, 26.29 ], "formula_id": "formula_140", "formula_text": "ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s ," }, { "formula_coordinates": [ 29, 154.49, 420.57, 303.02, 26.29 ], "formula_id": "formula_141", "formula_text": "E ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s = 0," }, { "formula_coordinates": [ 29, 205.27, 476.35, 207, 26.29 ], "formula_id": "formula_142", "formula_text": "ε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds ⊗ ε 0 σ(Θ s )dW s ," }, { "formula_coordinates": [ 29, 191.69, 542.32, 234.15, 26.29 ], "formula_id": "formula_143", "formula_text": "ε 0 b(Θ s )ds ⊗ ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s ," }, { "formula_coordinates": [ 29, 161.93, 596.05, 288.15, 54.19 ], "formula_id": "formula_144", "formula_text": "E ε 0 b(Θ s )ds ⊗ ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s =εEb( Θ s ) ⊗ E ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s = O(ε 3 )." }, { "formula_coordinates": [ 29, 192.49, 674.35, 229.18, 12.17 ], "formula_id": "formula_145", "formula_text": "R3 (Θ 0 ) := E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] ," }, { "formula_coordinates": [ 29, 241.18, 710.68, 135.17, 14.66 ], "formula_id": "formula_146", "formula_text": "vec( R3 (Θ 0 )) 2 ≤ ε 3 C(T * , Θ 0 )." }, { "formula_coordinates": [ 30, 129.05, 154.17, 349.84, 354.49 ], "formula_id": "formula_147", "formula_text": "Ē2 ε (Θ 0 ) = ε 0 s 0 vL (L ⟨∇ Θ g(Θ 0 ), b⟩) ( Θ u )dvds + 1 6 ∇ 3 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + 1 24 E ∇ 4 Θ g( Θ 0 ) : (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 ) ≤ ε 0 s 0 v ∥g∥ C 6 C(T * , Θ 0 )dvds + 1 6 ∥g∥ C 6 ε 3 C(T * , Θ 0 ) + 1 24 ∥g∥ C 6 ∥Θ ε -Θ 0 ∥ 4 2 = ε 3 6 ∥g∥ C 6 C(T * , Θ 0 ) + 1 6 ∥g∥ C 6 ε 3 C(T * , Θ 0 ) + 1 24 ∥g∥ C 6 E ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s 4 2 ≤ ε 3 ∥g∥ C 6 C(T * , Θ 0 ) + 1 6 ∥g∥ C 6 ε 3 C(T * , Θ 0 ) + 4 24 ∥g∥ C 6 ε 3 E ∇ Θ L S ( Θ 0 ) 2 2 + 4 24 ∥g∥ C 6 E ε 0 σ(Θ s )dW s 4 2 ≤ ε 3 ∥g∥ C 6 C(T * , Θ 0 ) + 1 6 ∥g∥ C 6 ε 3 C(T * , Θ 0 ) + 4 24 ∥g∥ C 6 ε 3 E ∇ Θ L S ( Θ 0 ) 2 2 + C 24 ∥g∥ C 6 E ε 0 ∥σ(Θ s )∥ 4 F ds ≤ ε 3 ∥g∥ C 6 C(T * , Θ 0 ) + 1 6 ∥g∥ C 6 ε 3 C(T * , Θ 0 ) + ε 3 ∥g∥ C 6 E ∇ Θ L S ( Θ 0 ) 2 2 + C ∥g∥ C 6 εE ε 2 Σ( Θ 0 ) 2 F ≤ ε 3 ∥g∥ C 6 C(T * , Θ 0 )." 
}, { "formula_coordinates": [ 30, 117.32, 681.05, 377.36, 40.88 ], "formula_id": "formula_148", "formula_text": "Eg(θ 1 ) -Eg(θ 0 ) = -ε ∇ θ g(θ 0 ), ∇ θ L S (θ) θ=θ0 + ε 2 2 ∇ 2 θ g(θ 0 ) : ∇ θ L S (θ) θ=θ0 ⊗ ∇ θ L S (θ) θ=θ0 + Σ(θ 0 ) + E 2 ε (θ 0 ),and" }, { "formula_coordinates": [ 31, 125.28, 90.88, 361.43, 303.23 ], "formula_id": "formula_149", "formula_text": "Eg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E [(Θ ε -Θ 0 ) ⊗ (Θ ε -Θ 0 )] + Ē2 ε (Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s ⊗ ε 0 b(Θ s )ds + ε 0 σ(Θ s )dW s + Ē2 ε (Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 b(Θ s )ds ⊗ ε 0 b(Θ s )ds + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s + Ē2 ε (Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 ε 0 b(Θ s ) ⊗ b(Θ u )dsdu + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s + Ē2 ε (Θ 0 )," }, { "formula_coordinates": [ 31, 108, 416.77, 353.99, 304.59 ], "formula_id": "formula_150", "formula_text": "1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 σ(Θ s )dW s ⊗ ε 0 σ(Θ s )dW s =E ε 0 1 2 ∇ 2 Θ g(Θ 0 ) : σσ ⊺ (Θ s )ds = ε 2 E ε 0 ∇ 2 Θ g(Θ 0 ) : Σ(Θ s )ds , thus Eg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 ε 0 b(Θ s ) ⊗ b(Θ u )dsdu + ε 2 E ε 0 ∇ 2 Θ g(Θ 0 ) : Σ(Θ s )ds + Ē2 ε (Θ 0 ). Since ∇ 2 Θ g(Θ 0 ) : E [b(Θ s ) ⊗ b(Θ u )] =∇ 2 Θ g(Θ 0 ) : E[b(Θ s ) ⊗ b(Θ 0 )] + u 0 L ∇ 2 Θ g(Θ 0 ) : b(Θ s ) ⊗ b(Θ v ) dv =∇ 2 Θ g(Θ 0 ) : E[b(Θ 0 ) ⊗ b(Θ 0 )] + s 0 L ∇ 2 Θ g(Θ 0 ) : b(Θ w ) ⊗ b(Θ 0 ) dw + u 0 L ∇ 2 Θ g(Θ 0 ) : b(Θ s ) ⊗ b(Θ v ) dv," }, { "formula_coordinates": [ 32, 185.14, 89.66, 241.72, 40.66 ], "formula_id": "formula_151", "formula_text": "∇ 2 Θ g(Θ 0 ) : E [Σ(Θ s )] =∇ 2 Θ g(Θ 0 ) : E [Σ(Θ 0 )] + s 0 L ∇ 2 Θ g(Θ 0 ) : Σ(Θ s ) dv," }, { "formula_coordinates": [ 32, 153.79, 150.93, 304.42, 78.71 ], "formula_id": "formula_152", "formula_text": "Eg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 L ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + 1 2 ∇ 2 Θ g(Θ 0 ) : E ε 0 ε 0 b(Θ 0 ) ⊗ b(Θ 0 )dsdu + ε 2 E ε 0 ∇ 2 Θ g(Θ 0 ) : Σ(Θ 0 )ds + Ē2 ε (Θ 0 )," }, { "formula_coordinates": [ 32, 196.97, 250.89, 225.25, 80.61 ], "formula_id": "formula_153", "formula_text": "ε 0 ε 0 s 0 L ∇ 2 Θ g(Θ 0 ) : b(Θ w ) ⊗ b(Θ 0 ) dwdsdu + ε 0 ε 0 u 0 L ∇ 2 Θ g(Θ 0 ) : b(Θ s ) ⊗ b(Θ v ) dvdsdu + ε 0 s 0 L ∇ 2 Θ g(Θ 0 ) : Σ(Θ s ) dvds," }, { "formula_coordinates": [ 32, 129.25, 362.38, 353.49, 100.89 ], "formula_id": "formula_154", "formula_text": "Eg(Θ ε ) -Eg(Θ 0 ) = ε ⟨∇ Θ g(Θ 0 ), E[b(Θ 0 )]⟩ + ε 2 2 ⟨b(Θ 0 ), ∇ Θ ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 )⟩ + ε 3 2 Σ(Θ 0 ) : ∇ 2 Θ ⟨∇ Θ g(Θ 0 ), b⟩ (Θ 0 ) + ε 2 2 ∇ 2 Θ g(Θ 0 ) : E [b(Θ 0 ) ⊗ b(Θ 0 )] + ε 2 2 E ∇ 2 Θ g(Θ 0 ) : Σ(Θ 0 ) + Ē2 ε (Θ 0 )," }, { "formula_coordinates": [ 32, 108, 479.42, 378.5, 242.51 ], "formula_id": "formula_155", "formula_text": "b (Θ) = -∇ Θ L S (Θ) + ε 4 ∥∇ Θ L S (Θ)∥ 2 2 , σ (Θ) = √ ε (Σ (Θ)) 1 2 , then Eg(Θ ε ) -Eg(Θ 0 ) = -ε ⟨∇ Θ g(Θ 0 ), ∇ Θ (L S (Θ)) | Θ=Θ0 ⟩ - ε 2 4 ∇ Θ g(Θ 0 ), ∇ Θ ∥∇ Θ L S (Θ)∥ 2 2 | Θ=Θ0 + ε 2 2 ⟨∇ Θ (L S (Θ)) | Θ=Θ0 , ∇ Θ ⟨∇ Θ g(Θ 0 ), ∇ Θ (L S (Θ))⟩ | Θ=Θ0 ⟩ + ε 2 2 ∇ 2 Θ g(Θ 0 ) : (∇ Θ (L S (Θ)) | Θ=Θ0 ) ⊗ ∇ Θ (L S (Θ)) | Θ=Θ0 ) + ε 2 2 ∇ 2 Θ g(Θ 0 ) : Σ(Θ 0 ) + Ē2 ε (Θ 0 ) = -ε ⟨∇ Θ g(Θ 0 ), ∇ Θ (L S (Θ)) | Θ=Θ0 ⟩ + ε 2 2 ∇ 2 Θ g(Θ 0 ) : (∇ Θ (L S (Θ)) | Θ=Θ0 ) ⊗ ∇ Θ (L S (Θ)) | Θ=Θ0 ) + ε 2 2 ∇ 2 Θ g(Θ 0 ) : Σ(Θ 0 ) + Ē2 ε (Θ 0 ), 
thus, we have |Eg(θ 1 ) -Eg(Θ ε )| = Eg(θ 0 ) -ε ∇ θ g(θ 0 ), ∇ θ L S (θ) θ=θ0 + ε 2 2 ∇ 2 θ g(θ 0 ) : ∇ θ L S (θ) θ=θ0 ⊗ ∇ θ L S (θ) θ=θ0 + Σ(θ 0 ) + E 2 ε (θ 0 ) -Eg(Θ 0 ) + ε ⟨∇ Θ g(Θ 0 ), ∇ Θ (L S (Θ)) | Θ=Θ0 ⟩ - ε 2 2 ∇ 2 Θ g(Θ 0 ) : (∇ Θ (L S (Θ)) | Θ=Θ0 ) ⊗ ∇ Θ (L S (Θ)) | Θ=Θ0 ) - ε 2 2 ∇ 2 Θ g(Θ 0 ) : Σ(Θ 0 ) + Ē2 ε (Θ 0 ) ≤ E 2 ε (θ 0 ) + Ē2 ε (Θ 0 ) ≤ ε 3 ∥g∥ C 6 C(T * , θ 0 , ε 0 ) + ε 3 ∥g∥ C 6 C(T * , Θ 0 ) = O(ε 3 )." }, { "formula_coordinates": [ 33, 145.93, 327.49, 320.14, 30.55 ], "formula_id": "formula_156", "formula_text": "P N ε g(θ 0 ) -P εN g(Θ 0 ) = N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) ," }, { "formula_coordinates": [ 33, 148.11, 381.74, 315.79, 65.73 ], "formula_id": "formula_157", "formula_text": "|Eg(θ N ) -Eg(Θ εN )| ≤ N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) ≤ N l=1 P N -l ε • P 1 ε • P (l-1)ε -P ε • P (l-1)ε g(Θ 0 ) ," }, { "formula_coordinates": [ 33, 167.07, 517.75, 274.54, 65.73 ], "formula_id": "formula_158", "formula_text": "|Eg(θ N ) -Eg(Θ εN )| ≤ N l=1 P 1 ε • P (l-1)ε -P ε • P (l-1)ε g(Θ 0 ) ≤ N l=1 P 1 ε g(Θ (l-1)ε ) -P ε g(Θ (l-1)ε ) ." }, { "formula_coordinates": [ 33, 142.07, 610.23, 331.17, 61.88 ], "formula_id": "formula_159", "formula_text": "P 1 ε g(Θ (l-1)ε ) -P ε g(Θ (l-1)ε ) = E |Eg(θ l ) -Eg(Θ ε l)| Θ (l-1)ε ≤ E E 2 ε (Θ (l-1)ε ) + E Ē2 ε (Θ (l-1)ε ) ≤ ε 3 ∥g∥ C 6 C(T * , θ 0 , ε 0 ) + ε 3 ∥g∥ C 6 C(T * , Θ 0 ) = O(ε 3 )." }, { "formula_coordinates": [ 34, 108, 93.09, 383.04, 68.49 ], "formula_id": "formula_160", "formula_text": "P N ε g(θ 0 ) -P εN g(Θ 0 ) ≤ N l=1 P N -l+1 ε • P (l-1)ε g(θ 0 ) -P N -l ε • P lε g(Θ 0 ) = N O(ε 3 ), hence for N = N T,ε , P N ε g(θ 0 ) -P εN g(Θ 0 ) = N O(ε 3 ) = N εO(ε) ≤ T O(ε 2 ) = O(ε 2 )." }, { "formula_coordinates": [ 35, 131.63, 164.84, 348.75, 30.32 ], "formula_id": "formula_161", "formula_text": "∇ q k L S (Θ) = 1 n n i=1 e i ∇ q k (a k σ(w ⊺ k x i )) + 1 -p np n i=1 a k σ(w ⊺ k x i )∇ q k (a k σ(w ⊺ k x i )) ," }, { "formula_coordinates": [ 35, 259.64, 199.57, 93.92, 35.32 ], "formula_id": "formula_162", "formula_text": "[n], 1 c ≤ ∥x i ∥ 2 , |y i | ≤ c," }, { "formula_coordinates": [ 35, 108, 251.32, 308.35, 282.34 ], "formula_id": "formula_163", "formula_text": "|e i | = m r=1 a r σ(w ⊺ r x i ) -y i ≤ 1 + m r=1 |a r | ∥w r ∥ 2 ≤ 1 + 1 2 m r=1 |a r | 2 + ∥w r ∥ 2 2 ≤ 1 + ∥Θ∥ 2 2 , hence ∥∇ q k L S (Θ)∥ 2 ≤ 1 + ∥Θ∥ 2 2 ∥q k ∥ 2 + 1 -p p ∥q k ∥ 3 2 , thus we have ∥∇ Θ L S (Θ)∥ 2 ≤ 1 + ∥Θ∥ 2 2 ∥Θ∥ 2 + 1 -p p ∥Θ∥ 3 2 ≤ C p (1 + ∥Θ∥ 3 2 ). Moreover, since ∇ 2 Θ L S (Θ) = 1 n n i=1 ∇ Θ e i ⊗ ∇ Θ e i + e i ∇ 2 Θ e i + 1 -p np n i=1 diag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 ," }, { "formula_coordinates": [ 35, 108, 549.28, 387.57, 168.59 ], "formula_id": "formula_164", "formula_text": "∇ 2 Θ L S (Θ)∇ Θ L S (Θ) = 1 n n i=1 ∇ Θ e i ⊗ ∇ Θ e i + e i ∇ 2 Θ e i + 1 -p np n i=1 diag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 × 1 n n i=1 e i ∇ Θ e i + 1 -p np n i=1 ∇ Θ a 2 k σ(w ⊺ k x i ) 2 , then the components in ∇ 2 Θ L S (Θ)∇ Θ L S (Θ) can be categorized into six different types: Firstly, ∥(∇ Θ e i ⊗ ∇ Θ e i ) e j ∇ Θ e j ∥ 2 ≤ |e j | ∥∇ Θ e i ∥ 2 2 ∥∇ Θ e j ∥ 2 ≤ 1 + ∥Θ∥ 2 2 ∥Θ∥ 3 2 ≤ 1 + ∥Θ∥ 5 2 ." }, { "formula_coordinates": [ 36, 226.07, 94.69, 159.36, 78.51 ], "formula_id": "formula_165", "formula_text": "e i ∇ 2 Θ e i e j ∇ Θ e j 2 ≤ 1 + ∥Θ∥ 2 2 2 ∇ 2 Θ e i 2→2 ∥∇ Θ e j ∥ 2 ≤ 1 + ∥Θ∥ 4 2 ∥Θ∥ 2 2 ≤ 1 + ∥Θ∥ 6 2 ." 
}, { "formula_coordinates": [ 36, 192.14, 206.5, 227.22, 76.81 ], "formula_id": "formula_166", "formula_text": "diag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 e j ∇ Θ e j 2 ≤ 1 + ∥Θ∥ 2 2 diag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 2→2 ∥Θ∥ 2 ≤ 1 + ∥Θ∥ 2 2 1 + ∥Θ∥ 3 2 ∥Θ∥ 2 ≤ 1 + ∥Θ∥ 6 2 ." }, { "formula_coordinates": [ 36, 219.31, 316.61, 172.88, 49.41 ], "formula_id": "formula_167", "formula_text": "(∇ Θ e i ⊗ ∇ Θ e i ) ∇ Θ a 2 k σ(w ⊺ k x j ) 2 2 ≤ ∥∇ Θ e i ∥ 2 2 ∥Θ∥ 3 2 ≤ 1 + ∥Θ∥ 5 2 ." }, { "formula_coordinates": [ 36, 232.6, 399.32, 146.3, 76.48 ], "formula_id": "formula_168", "formula_text": "e i ∇ 2 Θ e i ∇ Θ a 2 k σ(w ⊺ k x j ) 2 2 ≤ 1 + ∥Θ∥ 2 2 ∇ 2 Θ e i 2→2 ∥Θ∥ 3 2 ≤ 1 + ∥Θ∥ 2 2 ∥Θ∥ 4 2 ≤ 1 + ∥Θ∥ 6 2 ." }, { "formula_coordinates": [ 36, 192.56, 509.11, 226.39, 72.7 ], "formula_id": "formula_169", "formula_text": "diag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 ∇ Θ a 2 k σ(w ⊺ k x j ) 2 2 ≤ diag ∇ 2 q k a 2 k σ(w ⊺ k x i ) 2 2→2 ∥Θ∥ 3 2 ≤ 1 + ∥Θ∥ 3 2 ∥Θ∥ 3 2 ≤ 1 + ∥Θ∥ 6 2 ." }, { "formula_coordinates": [ 36, 258.75, 620.72, 94.49, 14.11 ], "formula_id": "formula_170", "formula_text": "∥b(Θ)∥ 2 ≤ 1 + ∥Θ∥ 6 2 ." }, { "formula_coordinates": [ 36, 228.53, 674.83, 154.94, 49.73 ], "formula_id": "formula_171", "formula_text": "Σ =     Σ 11 Σ 12 • • • Σ 1m Σ 21 Σ 22 • • • Σ 2m . . . . . . . . . . . . Σ m1 Σ m2 • • • Σ mm     ." }, { "formula_coordinates": [ 37, 132.44, 92, 339.24, 134.42 ], "formula_id": "formula_172", "formula_text": "Σ kk (Θ) = 1 p -1 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) + 1 p 2 - 1 p m l=1,l̸ =k 1 n n i=1 a l σ(w ⊺ l x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a l σ(w ⊺ l x i )∇ q k (a k σ(w ⊺ k x i ))" }, { "formula_coordinates": [ 37, 108, 234.07, 390.42, 149.68 ], "formula_id": "formula_173", "formula_text": "and for each k, r ∈ [m] with k ̸ = r, Σ kr (Θ) = 1 p -1 1 n n i=1 e i,\\k,\\r + 1 p a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a k σ(w ⊺ k x i )∇ qr (a r σ(w ⊺ r x i )) + 1 p -1 1 n n i=1 a r σ(w ⊺ r x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k,\\r + a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ qr (a r σ(w ⊺ r x i )) ," }, { "formula_coordinates": [ 37, 143.96, 404.91, 323.59, 68.05 ], "formula_id": "formula_174", "formula_text": "∥Σ kk (Θ)∥ 2 F ≤ C p   e i,\\k + 1 p a k σ(w ⊺ k x i ) 2 + m l=1,l̸ =k a 2 l σ(w ⊺ l x i ) 2   ∥∇ Θ e i ∥ 2 2 ≤ C p (1 + ∥Θ∥ 2 2 ) 2 ∥Θ∥ 2 2 ≤ (1 + ∥Θ∥6" }, { "formula_coordinates": [ 37, 249.17, 497.49, 113.65, 14.11 ], "formula_id": "formula_175", "formula_text": "∥Σ kr (Θ)∥ 2 F ≤ (1 + ∥Θ∥ 6 2 )." }, { "formula_coordinates": [ 37, 224.5, 566.89, 166.98, 26.57 ], "formula_id": "formula_176", "formula_text": "M (Θ) := b(Θ) if ∥Θ∥ 2 ≤ M, b(M Θ ∥Θ∥ 2 ) if ∥Θ∥ 2 > M." 
}, { "formula_coordinates": [ 37, 108, 620.56, 396.67, 31.47 ], "formula_id": "formula_177", "formula_text": "M (•) to the truncated SDE dΘ t = b M (Θ t ) dt + σ M (Θ t ) dW t , Θ 0 = Θ(0).(47)" }, { "formula_coordinates": [ 38, 108, 207.54, 379.71, 51.3 ], "formula_id": "formula_178", "formula_text": "θ N = θ N -1 -ε∇ θ R drop S (θ N -1 ; η N ) , then we obtain that E ∥θ N ∥ 2l 2 = E ∥θ N -1 ∥ 2l 2 -2lεE ∥θ N -1 ∥ 2l-2 2 θ N -1 , ∇ θ R drop S (θ N -1 ; η N ) + O(ε 2 )," }, { "formula_coordinates": [ 38, 211.13, 301.52, 183.66, 91.77 ], "formula_id": "formula_179", "formula_text": "∥θ N -1 ∥ 2l-2 2 θ N -1 , ∇ θ R drop S (θ N -1 ; η N ) ≤ ∥θ N -1 ∥ 2l-1 2 ∇ θ R drop S (θ N -1 ; η N ) 2 = ∥θ N -1 ∥ 2l-1 2 e N i ∇ θ e N i 2 ≤ ∥θ N -1 ∥ 2l-1 2 C p (1 + ∥θ N -1 ∥ 2 2 ) ∥θ N -1 ∥ 2 ≤C p (1 + ∥θ N -1 ∥ 2l+2 2" }, { "formula_coordinates": [ 38, 243.75, 440.99, 260.92, 22.31 ], "formula_id": "formula_180", "formula_text": "du dt = 1 + u 1+λ , u 0 := u(0),(48)" }, { "formula_coordinates": [ 39, 108, 142.24, 342.81, 99.15 ], "formula_id": "formula_181", "formula_text": "Σ kr (θ N -1 ) := Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) , then Σ =     Σ 11 Σ 12 • • • Σ 1m Σ 21 Σ 22 • • • Σ 2m . . . . . . . . . . . . Σ m1 Σ m2 • • • Σ mm     ." }, { "formula_coordinates": [ 39, 108, 286.93, 432.58, 434.23 ], "formula_id": "formula_182", "formula_text": "Σ kk for all k ∈ [m]. Σ kk (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ q k R drop S (θ N -1 ; η N ) = 1 n 2 n i,j=1 Cov e N i (η N ) k , e N j (η N ) k ∇ q k (a k σ(w ⊺ k x i )) ⊗ ∇ q k (a k σ(w ⊺ k x j )) , in order to compute Cov e N i (η N ) k , e N j (η N ) k , we need to compute firstly E e N i e N j (η N ) 2 k , and since E e N i e N j (η N ) 2 k consists of four parts, one of which is E     m k ′ =1,k ′ ̸ =k (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i     m l=1,l̸ =k (η N ) l a l σ(w ⊺ l x j ) -y j   (η N ) 2 k   =E     m k ′ =1,k ′ ̸ =k (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i     m l=1,l̸ =k (η N ) l a l σ(w ⊺ l x j ) -y j     E (η N ) 2 k = 1 p E   m k ′ =1,k ′ ̸ =k (η N ) 2 k ′ a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   + E   k ′ ̸ =l, k ′ ,l̸ =k (η N ) k ′ (η N ) l a k ′ a l σ(w ⊺ k ′ x i )σ(w ⊺ l x j )   -y i E   m k ′ =1,k ′ ̸ =k (η N ) k ′ a k ′ σ(w ⊺ k ′ x j )   -y j E   m k ′ =1,k ′ ̸ =k (η N ) k ′ a k ′ σ(w ⊺ k ′ x i )   + y i y j = 1 p 2 m k ′ =1,k ′ ̸ =k a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j ) + 1 p k ′ ̸ =l, k ′ ,l̸ =k a k ′ a l σ(w ⊺ k ′ x i )σ(w ⊺ l x j ) - y i p m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x j ) - y j p m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x i ) + y i y j p = 1 p   m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x i ) -y i     m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x j ) -y j   + 1 p 2 - 1 p   m k ′ =1,k ′ ̸ =k a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   ," }, { "formula_coordinates": [ 40, 170.99, 93.07, 270.03, 73.61 ], "formula_id": "formula_183", "formula_text": "E   (η N ) k a k σ(w ⊺ k x i )   m l=1,l̸ =k (η N ) l a l σ(w ⊺ l x j ) -y j   (η N ) 2 k   = a k σ(w ⊺ k x i ) p 2   m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x j ) -y j   ," }, { "formula_coordinates": [ 40, 171.43, 195.59, 269.15, 73.61 ], "formula_id": "formula_184", "formula_text": "E   (η N ) k a k σ(w ⊺ k x j )   m l=1,l̸ =k (η N ) l a l σ(w ⊺ l x i ) -y i   (η N ) 2 k   = a k σ(w ⊺ k x j ) p 2   m k ′ =1,k ′ ̸ =k a k ′ σ(w ⊺ k ′ x i ) -y i   ," }, { "formula_coordinates": [ 40, 153.83, 297.72, 304.35, 22.31 ], "formula_id": "formula_185", "formula_text": "E (η N ) k a k σ(w ⊺ k x i )(η N ) 
k a k σ(w ⊺ k x j )(η N ) 2 k = 1 p 3 a 2 k σ(w ⊺ k x i )σ(w ⊺ k x j )." }, { "formula_coordinates": [ 40, 159.64, 346.58, 277.38, 63.27 ], "formula_id": "formula_186", "formula_text": "E e N i e N j (η N ) 2 k = 1 p 2 - 1 p   m k ′ =1,k ′ ̸ =k a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   + 1 p e i," }, { "formula_coordinates": [ 40, 108, 511.97, 358.55, 139.59 ], "formula_id": "formula_187", "formula_text": "x i )σ(w ⊺ k x j ), hence Cov e N i (η N ) k , e N j (η N ) k =E e N i e N j (η N ) 2 k -E e N i (η N ) k E e N i (η N ) k = 1 p 2 - 1 p   m k ′ =1,k ′ ̸ =k a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   + 1 p -1 e i," }, { "formula_coordinates": [ 40, 117.45, 661.98, 377.1, 55.91 ], "formula_id": "formula_188", "formula_text": "x i )σ(w ⊺ k x j ) = 1 p -1 E e N i (η N ) k E e N j (η N ) k + 1 p 2 - 1 p   m k ′ =1,k ′ ̸ =k a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   ," }, { "formula_coordinates": [ 41, 150.94, 105.76, 302.23, 154.65 ], "formula_id": "formula_189", "formula_text": "Σ kk (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ q k R drop S (θ N -1 ; η N ) = 1 p -1 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 e i,\\k + 1 p a k σ(w ⊺ k x i ) ∇ q k (a k σ(w ⊺ k x i )) + 1 p 2 - 1 p m l=1,l̸ =k 1 n n i=1 a l σ(w ⊺ l x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1" }, { "formula_coordinates": [ 41, 119.65, 343.25, 276.47, 49.57 ], "formula_id": "formula_190", "formula_text": "Σ kr (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) = 1 n 2 n i,j=1" }, { "formula_coordinates": [ 41, 108, 458.46, 426.69, 222.86 ], "formula_id": "formula_191", "formula_text": "E     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i     m l=1,l̸ =k,l̸ =r (η N ) l a l σ(w ⊺ l x j ) -y j   (η N ) k (η N ) r   =E     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i     m l=1,l̸ =k,l̸ =r (η N ) l a l σ(w ⊺ l x j ) -y j     E [(η N ) k (η N ) r ] = 1 p m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j ) + k ′ ̸ =l and k ′ ,l̸ =k,r a k ′ a l σ(w ⊺ k ′ x i )σ(w ⊺ l x j ) -y i m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a k ′ σ(w ⊺ k ′ x j ) -y j m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a k ′ σ(w ⊺ k ′ x i ) + y i y j =   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a k ′ σ(w ⊺ k ′ x i ) -y i     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a k ′ σ(w ⊺ k ′ x j ) -y j   + 1 p -1   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )  " }, { "formula_coordinates": [ 41, 190.04, 687.41, 223.91, 33.76 ], "formula_id": "formula_192", "formula_text": "+ 1 p -1   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r a 2 k ′ σ(w ⊺ k ′ x i )σ(w ⊺ k ′ x j )   ," }, { "formula_coordinates": [ 42, 108, 92.28, 333.14, 73.61 ], "formula_id": "formula_193", "formula_text": "E     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i   (η N ) k a k σ(w ⊺ k x j )(η N ) k (η N ) r   =E   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i   a k σ(w ⊺ k x j )E (η N ) 2 k (η N ) r =" }, { "formula_coordinates": [ 42, 108, 193.23, 332.16, 73.61 ], "formula_id": "formula_194", "formula_text": "E     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i   (η N ) r a r σ(w ⊺ r x j )(η N ) k (η N ) r   =E   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x i ) -y i   a r σ(w ⊺ r x j )E (η N ) k (η N ) 2 r =" }, { "formula_coordinates": [ 42, 108, 294.18, 368.9, 73.61 ], "formula_id": "formula_195", "formula_text": "E     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x j ) -y j   (η N ) k a k σ(w ⊺ 
k x i )(η N ) k (η N ) r   =E   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x j ) -y j   a k σ(w ⊺ k x i )E (η N ) 2 k (η N ) r = a k σ(w ⊺ k x i ) p" }, { "formula_coordinates": [ 42, 115.44, 394.01, 376.97, 38.28 ], "formula_id": "formula_196", "formula_text": "E [(η N ) k a k σ(w ⊺ k x i )(η N ) k a k σ(w ⊺ k x j )(η N ) k (η N ) r ] = E (η N ) 3 k (η N ) r a 2 k σ(w ⊺ k x i )σ(w ⊺ k x j ) = 1 p 2 a 2 k σ(w ⊺ k x i )σ(w ⊺ k x j )," }, { "formula_coordinates": [ 42, 158.67, 456.58, 294.67, 37.89 ], "formula_id": "formula_197", "formula_text": "E [(η N ) k a k σ(w ⊺ k x i )(η N ) r a r σ(w ⊺ r x j )(η N ) k (η N ) r ] =E (η N ) 2 k (η N ) 2 r a k a r σ(w ⊺ k x i )σ(w ⊺ r x j ) = 1 p 2 a k a r σ(w ⊺ k x i )σ(w ⊺ r x j )," }, { "formula_coordinates": [ 42, 108, 519.71, 333.04, 73.61 ], "formula_id": "formula_198", "formula_text": "E     m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x j ) -y j   (η N ) r a r σ(w ⊺ r x i )(η N ) k (η N ) r   =E   m k ′ =1,k ′ ̸ =k,k ′ ̸ =r (η N ) k ′ a k ′ σ(w ⊺ k ′ x j ) -y j   a r σ(w ⊺ r x i )E (η N ) k (η N ) 2 r =" }, { "formula_coordinates": [ 42, 111.06, 619.54, 385.73, 38.28 ], "formula_id": "formula_199", "formula_text": "E [(η N ) r a r σ(w ⊺ r x i )(η N ) k a k σ(w ⊺ k x j )(η N ) k (η N ) r ] = E (η N ) 2 k (η N ) 2 r a k a r σ(w ⊺ k x i )σ(w ⊺ r x j ) = 1 p 2 a k a r σ(w ⊺ k x i )σ(w ⊺ r x j )," }, { "formula_coordinates": [ 43, 235.13, 492.74, 207.42, 22.31 ], "formula_id": "formula_200", "formula_text": "x i )σ(w ⊺ k x j ) + 1 p 2 -1 a r a k σ(w ⊺ r x i )σ(w ⊺ k x j )," }, { "formula_coordinates": [ 43, 131.84, 542.51, 337.52, 153.07 ], "formula_id": "formula_201", "formula_text": "Σ kr (θ N -1 ) = Cov ∇ q k R drop S (θ N -1 ; η N ) , ∇ qr R drop S (θ N -1 ; η N ) = 1 p -1 1 n n i=1 e i,\\k,\\r + 1 p a k σ(w ⊺ k x i ) + 1 p a r σ(w ⊺ r x i ) ∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1 a k σ(w ⊺ k x i )∇ qr (a r σ(w ⊺ r x i )) + 1 p -1 1 n n i=1 a r σ(w ⊺ r x i )∇ q k (a k σ(w ⊺ k x i )) ⊗ 1 n n i=1" } ]
Stochastic Modified Equations and Dynamics of Dropout Algorithm
Dropout is a widely used regularization technique in the training of neural networks; nevertheless, its underlying mechanism and its impact on generalization remain poorly understood. In this work, we derive the stochastic modified equations for analyzing the dynamics of dropout, in which its discrete iteration process is approximated by a class of stochastic differential equations. To investigate the mechanism by which dropout facilitates the identification of flatter minima, we study the noise structure of the derived stochastic modified equation for dropout. By exploiting the structural resemblance between the Hessian and the covariance through several intuitive approximations, we empirically demonstrate the universal presence of the inverse variance-flatness relation and the Hessian-variance relation throughout the training process of dropout. These theoretical and empirical findings make a substantial contribution to our understanding of the inherent tendency of dropout to locate flatter minima.
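To make the setting concrete, the following is a minimal NumPy sketch of the dropout iteration studied in this record: a two-layer ReLU network trained on the mean-squared loss, with a fresh mask η drawn each step whose entries equal 1/p with probability p and 0 otherwise. This is an illustrative sketch only; the function name `dropout_sgd_step`, the toy data, and the hyperparameters are assumptions for demonstration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_sgd_step(a, W, X, Y, p, lr):
    """One dropout iteration: theta_N = theta_{N-1} - lr * grad R_drop(theta; eta),
    where f_theta(x; eta) = sum_r eta_r * a_r * relu(w_r^T x)."""
    m, n = a.shape[0], X.shape[0]
    # eta_r = 1/p with probability p, 0 otherwise (drawn once per iteration)
    eta = (rng.random(m) < p) / p
    grad_a = np.zeros_like(a)
    grad_W = np.zeros_like(W)
    for x, y in zip(X, Y):
        pre = W @ x                        # w_r^T x for every neuron r
        act = np.maximum(pre, 0.0)         # relu activations sigma(w_r^T x)
        err = np.sum(eta * a * act) - y    # e_i = f_theta(x_i; eta) - y_i
        grad_a += err * eta * act / n
        grad_W += np.outer(err * eta * a * (pre > 0.0), x) / n
    return a - lr * grad_a, W - lr * grad_W

# Toy usage (hypothetical data): 1-d regression with m = 50 hidden neurons.
d, m, n = 5, 50, 64
X = rng.standard_normal((n, d))
Y = np.sin(X[:, 0])
a = rng.standard_normal(m) / np.sqrt(m)
W = rng.standard_normal((m, d)) / np.sqrt(d)
for _ in range(200):
    a, W = dropout_sgd_step(a, W, X, Y, p=0.9, lr=0.05)
```

Averaging this update over the mask recovers gradient descent on the modified loss L_S(θ) with the extra penalty (1-p)/(2np) Σ_i Σ_r a_r² σ(w_r^⊺ x_i)² (Eq. (7) in the formula list below); the fluctuation around that mean is the noise whose covariance the stochastic modified equation models.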
Zhongwang Zhang; Yuqing Li; Tao Luo; Zhi-Qin John Xu
[ { "figure_caption": "Figure 2: (a, b) The inverse relation between the variance {λ_i(Σ)}_{i=1}^N and the interval flatness {F_{v_i(Σ)}}_{i=1}^N for different choices of p and learning rate lr with different network structures. The PCA is done for different datasets D sampled from parameters for the top line and sampled from gradients of parameters for the bottom line. The dashed lines give the approximate slope of the scatter. (c, d) The relation between the variance {Var(Proj_{v_i(H)}(D))}_{i=1}^N and the eigenvalue {λ_i(H)}_{i=1}^N for different choices of p and learning rate lr with different network structures. The projection is done for different datasets D sampled from parameters for the top line and sampled from gradients of parameters for the bottom line. The dashed lines give the approximate slope of the scatter. Refer to Appendix B for further experiments such as ResNet and Transformer.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3: (a, b, c, d) The inverse relation between the variance {λ_i(Σ)}_{i=1}^N and the interval flatness {F_{v_i(Σ)}}_{i=1}^N for different choices of p and learning rate lr with different network structures. The PCA is done for different datasets D sampled from parameters for the top line and sampled from gradients of parameters for the bottom line. The dashed lines give the approximate slope of the scatter. (e, f, g, h) The relation between the variance {Var(Proj_{v_i(H)}(D))}_{i=1}^N and the eigenvalue {λ_i(H)}_{i=1}^N for different choices of p and learning rate lr with different network structures. The projection is done for different datasets D sampled from parameters for the top line and sampled from gradients of parameters for the bottom line. The dashed lines give the approximate slope of the scatter.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "function, and {η_N}_{N≥1} is a disturbance sequence on R^m whose marginal distribution possesses a density supported on an open subset of R^m. Then, based on the results in Meyn and Tweedie (2012), the dropout iterations (23) form a time-homogeneous Markov chain. Thus, we may use E[· | F_N], the conditional expectation given F_N, interchangeably with E_{θ_{N-1}}[·], the conditional expectation given θ_{N-1}. Then, for each k ∈ [m], the conditional expectation of the increment restricted to q_k reads", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "F.2 Existence, Uniqueness and Moment Estimates of the Solution to SDE. Existence of the solution to SDE (28) is proved by a truncation procedure: For each M ≥ 1, define the truncation function b", "figure_data": "", "figure_id": "fig_4", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Comparison between Tr(H_i Σ_i) and Tr(H_i Σ̃_i) in each training epoch i for different choices of p and learning rate lr. The FNN is trained on the MNIST dataset using the first 10000 examples as the training dataset. The solid and the dotted lines represent the values of Tr(H_i Σ_i) and Tr(H_i Σ̃_i), respectively. In this subsection, we verify this relation by two sets of experiments. Firstly, we present two different approaches to characterize the flatness of the loss landscape and the covariance of the noise from the random trajectory data D_para and random gradient data D_grad; then we numerically demonstrate the inverse variance-flatness relation. 
Due to space limitations, we defer the experiments on ResNet and Transformer to Appendix B. For convenience, D refers to either the dataset D_para or the dataset D_grad depending on its context, as is the case for their corresponding covariance Σ and Hessian H.", "figure_data": "Figure 1 (plot): trace Tr(HΣ) on a log scale (roughly 10^1 to 10^7) versus training epoch (0 to 300), with one curve for each of p: 0.8, lr: 0.1; p: 0.8, lr: 0.05; p: 0.9, lr: 0.1; p: 0.9, lr: 0.05.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "relation and Hessian-variance relation and beyond. Y. Feng, L. Li, J.-G. Liu, Semi-groups of stochastic gradient descent and online principal component analysis: properties and diffusion approximations, Communications in Mathematical Sciences 16 (2018) 777-789.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Continuation of the computation of E[e_i^N e_j^N (η_N)_k^2] and E[e_i^N (η_N)_k] E[e_j^N (η_N)_k] (cf. Eq. (25)):", "figure_data": "… + (1/p) e_{i,\\k} e_{j,\\k} + (a_k σ(w_k^⊺ x_j)/p^2) e_{i,\\k} + (a_k σ(w_k^⊺ x_i)/p^2) e_{j,\\k} + (1/p^3) a_k^2 σ(w_k^⊺ x_i) σ(w_k^⊺ x_j), and E[e_i^N (η_N)_k] E[e_j^N (η_N)_k] = (e_{i,\\k} + (1/p) a_k σ(w_k^⊺ x_i)) (e_{j,\\k} + (1/p) a_k σ(w_k^⊺ x_j)) = e_{i,\\k} e_{j,\\k} + (a_k σ(w_k^⊺ x_j)/p) e_{i,\\k} + (a_k σ(w_k^⊺ x_i)/p) e_{j,\\k} + (1/p^2) a_k^2 σ(w_k^⊺ x_i) σ(w_k^⊺ x_j)", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
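The captions above rely on two diagnostics: the PCA variance λ_i(Σ) of trajectory data D along direction v_i, and the interval flatness F_v, the width θ_r^v − θ_l^v of the interval around the current minimizer on which the 1-d loss profile along v stays below twice its value at the center. Below is a schematic NumPy sketch of both quantities; the function names, the grid-search step, and the search range are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pca_spectrum(D):
    """PCA of trajectory data D (N x P array of flattened parameter or gradient
    snapshots; intended for modest P, e.g. one layer): returns eigenvalues
    lambda_i(Sigma) in descending order and unit directions v_i as columns."""
    cov = np.cov(np.asarray(D), rowvar=False)
    lam, V = np.linalg.eigh(cov)
    order = np.argsort(lam)[::-1]
    return lam[order], V[:, order]

def projection_variance(D, v):
    """Var(Proj_v(D)) = (1/N) * sum_i (v^T theta_i - mu)^2 along a fixed direction v."""
    proj = np.asarray(D) @ v
    return np.mean((proj - proj.mean()) ** 2)

def interval_flatness(loss_fn, theta, v, step=1e-3, max_t=10.0):
    """F_v = theta_r^v - theta_l^v, where the 1-d profile R_v(t) = loss(theta + t v)
    first reaches 2 * R_v(0) on either side of t = 0 (coarse grid search)."""
    base = loss_fn(theta)
    def reach_doubling(sign):
        t = step
        while t <= max_t:
            if loss_fn(theta + sign * t * v) >= 2.0 * base:
                return t
            t += step
        return max_t  # profile never doubled inside the search window
    return reach_doubling(+1.0) + reach_doubling(-1.0)
```

With D taken as D_para (parameter snapshots recorded after the loss plateaus) or D_grad (per-step dropout gradients), plotting λ_i(Σ) against F_{v_i(Σ)}, or Var(Proj_{v_i(H)}(D)) against the Hessian eigenvalue λ_i(H), yields the kind of scatter that the captions of Figures 2 and 3 describe.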
[{"Category": "Supporting Evidence", "Citation": "(Hinton et al., 2012)", "Explanation": "The cited work by Hinton et al. (2012) is mentioned as a foundational work that has been used in the training of neural networks (NNs) and has achieved state-of-the-art test performance in deep learning."}, {"Category": "Supporting Evidence", "Citation": "(Tan and Le, 2019)", "Explanation": "The cited work by Tan and Le (2019) is mentioned as a reference to the use of dropout in the training of NNs, which has been shown to be effective in deep learning."}, {"Category": "Supporting Evidence", "Citation": "(Helmbold and Long, 2015)", "Explanation": "The cited work by Helmbold and Long (2015) is mentioned as a reference to the use of dropout in the training of NNs, which has been shown to be effective in deep learning."}, {"Category": "Methodological Basis", "Citation": "(Srivastava et al., 2014)", "Explanation": "The cited work by Srivastava et al. (2014) is mentioned as a methodological basis for the use of dropout in the training of NNs, which involves randomly removing a subset of neurons during the training process."}, {"Category": "Extension or Continuation", "Citation": "(Keskar et al., 2016)", "Explanation": "The cited work by Keskar et al. (2016) is mentioned as an extension of the research on the noise structure in SGD, which has been shown to be important for understanding the training behaviors of stochastic algorithms."}, {"Category": "Extension or Continuation", "Citation": "(Feng and Tu, 2021)", "Explanation": "The cited work by Feng and Tu (2021) is mentioned as an extension of the research on the noise structure in SGD, which has been shown to play a crucial role in facilitating the exploration of flatter solutions in deep learning."}, {"Category": "Extension or Continuation", "Citation": "(Zhu et al., 2018)", "Explanation": "The cited work by Zhu et al. 
(2018) is mentioned as an extension of the research on the noise structure in SGD, which has been shown to be important for understanding the training behaviors of stochastic algorithms in deep learning."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2017)", "Explanation": "The cited work introduces the framework of stochastic modified equations (SMEs) to approximate the training dynamics of the dropout algorithm applied to two-layer NNs, which the citing paper adopts in their research to quantify the leading order dynamics of the dropout algorithm and its variants."}, {"Category": "Data Source", "Citation": "(LeCun et al., 1998)", "Explanation": "The cited work provides the MNIST dataset as a foundational element for the research conducted in the citing paper on fully-connected neural networks."}, {"Category": "Data Source", "Citation": "(Krizhevsky et al., 2009)", "Explanation": "The cited work provides the CIFAR-100 dataset as a foundational element for the research conducted in the citing paper on fully-connected neural networks."}, {"Category": "Data Source", "Citation": "(Elliott et al., 2016)", "Explanation": "The cited work provides the Multi30k dataset as a foundational element for the research conducted in the citing paper on fully-connected neural networks."}, {"Category": "Data Source", "Citation": "(He et al., 2016)", "Explanation": "The cited work provides the ResNet-20 structure as a foundational element for the research conducted in the citing paper on fully-connected neural networks."}, {"Category": "Data Source", "Citation": "(Vaswani et al., 2017)", "Explanation": "The cited work provides the transformer structure as a foundational element for the research conducted in the citing paper on fully-connected neural networks."}, {"Category": "Methodological Basis", "Citation": "(Wager et al., 2013)", "Explanation": "The cited work by Wager et al. (2013) provides a form of adaptive regularization in the context of linear regression and logistic problems, which the citing paper adopts to study the regularization effect conferred by dropout in a more general context."}, {"Category": "Data Source", "Citation": "(McAllester, 2013)", "Explanation": "The cited work by McAllester (2013) proposes a PAC-Bayesian bound that the citing paper utilizes in their study to understand the regularization effect of dropout in a more rigorous and self-contained manner."}, {"Category": "Extension or Continuation", "Citation": "(Wan et al., 2013; Mou et al., 2018)", "Explanation": "The cited works by Wan et al. (2013) and Mou et al. 
(2018) derive some Rademacher-complexity-type error bounds specifically tailored for dropout, which the citing paper extends to study the regularization effect of dropout in a more detailed and specific manner."}, {"Category": "Methodological Basis", "Citation": "(Mianjy and Arora, 2020)", "Explanation": "The cited work by Mianjy and Arora (2020) demonstrates that dropout training with logistic loss achieves \u03b5-suboptimality in test error within O(1/\u03b5) iterations, which the citing paper adopts to study the regularization effect of dropout in a more practical and efficient manner."}, {"Category": "Methodological Basis", "Citation": "(Zhang and Xu, 2022)", "Explanation": "The cited work by Zhang and Xu (2022) establishes that dropout enhances the flatness of the loss landscape and facilitates condensation through an additional regularization term endowed by dropout, which the citing paper builds upon to study the regularization effect of dropout in a more detailed and specific context."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2017; Li et al., 2019)", "Explanation": "The cited works by Li et al. (2017) and Li et al. (2019) present an entirely rigorous and self-contained mathematical formulation of the SME framework that applies to a wide class of stochastic algorithms, which the citing paper adopts to study the dynamics of stochastic algorithms in a more rigorous and self-contained manner."}, {"Category": "Methodological Basis", "Citation": "(Feng et al., 2017)", "Explanation": "The cited work by Feng et al. (2017) adopts a semigroup approach to investigate the dynamics of SGD and online PCA, which the citing paper builds upon to study the dynamics of stochastic algorithms in a more detailed and specific context."}, {"Category": "Methodological Basis", "Citation": "(Malladi et al., 2022)", "Explanation": "The cited work provides a method for deriving the SME approximations for adaptive stochastic algorithms, which the citing paper adopts in their research to improve the efficiency of their experimental verification."}, {"Category": "Supporting Evidence", "Citation": "(Li et al., 2017; Jastrzebski et al., 2017; Jastrzebski et al., 2018)", "Explanation": "The cited works provide evidence that the flatness of minima is associated with improved generalization ability, which the citing paper uses to support their research on the relationship between flatness and generalization ability."}, {"Category": "Supporting Evidence", "Citation": "(Jastrzebski et al., 2017)", "Explanation": "The cited work shows that SGD preferentially selects flat minima under certain conditions, which the citing paper uses to support their research on the effect of SGD on flatness of minima."}, {"Category": "Supporting Evidence", "Citation": "(Papyan, 2018)", "Explanation": "The cited work attributes the enhancement of flatness by SGD to the similarity between covariance of the noise and Hessian of the loss function, which the citing paper uses to support their research on the inverse variance-flatness relation within the dynamics of SGD."}, {"Category": "Supporting Evidence", "Citation": "(Feng and Tu, 2021)", "Explanation": "The cited work reveals an inverse variance-flatness relation within the dynamics of SGD, which the citing paper uses to support their research on the effect of SGD on flatness of minima."}, {"Category": "Supporting Evidence", "Citation": "(Zhu et al., 2018)", "Explanation": "The cited work shows that the dynamics of SGD is related to the flatness of minima, which the 
citing paper uses to support their research on the effect of SGD on flatness of minima."}, {"Category": "Supporting Evidence", "Citation": "(Wu et al., 2018)", "Explanation": "The cited work shows that SGD preferentially selects flat minima under certain conditions, which the citing paper uses to support their research on the effect of SGD on flatness of minima."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2017)", "Explanation": "The cited work provides a method for using approximations in the weak sense to compare discrete time approximations in the context of path of dropout and SDEs."}, {"Category": "Supporting Evidence", "Citation": "(Kloeden and Platen, 2011)", "Explanation": "The cited work provides a section in their work that discusses the use of weak sense approximations in the context of path of dropout and SDEs, which supports the use of this method in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Malladi et al., 2022)", "Explanation": "The cited work is mentioned as a previous work that has used the class of functions with polynomial growth as the space of test functions in the context of path of dropout and SDEs. The citing paper extends this work by introducing a new set of smooth functions to be used as the space of test functions."}, {"Category": "Methodological Basis", "Citation": "(12)", "Explanation": "The cited work provides a definition of weak approximation in the context of SDEs, which the citing paper adopts in its research to ensure the rigor and validity of the analysis."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2019)", "Explanation": "The cited work by Li et al. provides the test functions that the citing paper uses to impose similar relations in their research on learning rates and moments in neural networks."}, {"Category": "Methodological Basis", "Citation": "(Wei et al., 2020)", "Explanation": "The cited work by Wei et al. provides the method of using dropout to introduce noise in the training process, which the citing paper adopts to study the effect of stochasticity on learning results."}, {"Category": "Methodological Basis", "Citation": "(Zhang and Xu, 2022)", "Explanation": "The cited work by Zhang and Xu further builds upon the method of using dropout to study the effect of stochasticity on learning results, providing additional insights and data to the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Zhu et al., 2018)", "Explanation": "The cited work by Zhu et al. (2018) provides the assumptions and conditions that are used in the derivation of the covariance matrix in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Zhu et al., 2018)", "Explanation": "The cited work establishes a metric for assessing the degree of alignment between noise structure and curvature of the loss landscape, which the citing paper adopts in its research to evaluate the performance of a model in the context of training process."}, {"Category": "Methodological Basis", "Citation": "(Vaswani et al., 2017)", "Explanation": "The cited work by Vaswani et al. (2017) provides the transformer model architecture and training parameters that the citing paper uses in their research on the English-German translation problem."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2017)", "Explanation": "The cited work by Li et al. 
(2017) provides the strategy for deriving the stochastic modified equations (SME) for dropout, which the citing paper adopts in its research to derive the equations for the modified loss and random vector in the context of dropout."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2017)", "Explanation": "The cited work of Li et al. (2017) provides a method for comparing the stochastic processes in the path of dropout and the counterpart of SDE, which the citing paper adopts in their research to form a piece-wise linear interpolation and sample discrete points for comparison."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2017)", "Explanation": "The cited work by Li et al. (2017) provides a method of using polynomial growth as a test function in the class of functions, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "(Kloeden and Platen, 2011)", "Explanation": "The cited work by Kloeden and Platen (2011) offers a method of using polynomial growth as a test function in the class of functions, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Malladi et al., 2022)", "Explanation": "The cited work by Malladi et al. (2022) presents a method of using polynomial growth as a test function in the class of functions, which the citing paper adopts in their research."}, {"Category": "Supporting Evidence", "Citation": "(28)", "Explanation": "The cited work establishes the existence of a solution to the SDE (28), which is a foundational element for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(23)", "Explanation": "The cited work provides the specific iterations of dropout that the citing paper adopts in their research."}, {"Category": "Supporting Evidence", "Citation": "(30)", "Explanation": "The cited work offers a framework for understanding the relationship between the number of iterations and the second, fourth, and sixth moments of the dropout iterations."}, {"Category": "Extension or Continuation", "Citation": "(31)", "Explanation": "The citing paper extends the research by introducing a new learning rate and a specific range of values for the number of iterations to analyze the effect on the test functions."}, {"Category": "Methodological Basis", "Citation": "(Kloeden and Platen, 2011)", "Explanation": "The cited work by Kloeden and Platen provides the It\u00f4-Taylor expansions that the citing paper uses to conduct its research and analysis."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2019, Lemma 27)", "Explanation": "The cited work by Li et al. includes the Taylor's theorem with the Lagrange form of the remainder, which the citing paper leverages in its research to achieve order-1 accuracy in their results."}, {"Category": "Methodological Basis", "Citation": "( 50)", "Explanation": "The cited work provides the basis for the calculation of the covariance matrix under dropout regularization in the citing paper by presenting the properties of the dropout variable and the loss function."}]
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b0", "b15", "b16", "b12", "b11", "b8", "b1", "b17", "b10", "b0", "b11", "b13", "b12", "b18", "b19", "b20", "b21", "b8", "b22", "b23", "b12", "b13" ], "table_ref": [], "text": "I Mage fusion is a fundamental technique for visual per- ception and facilitates wide vision applications, e.g., visual enhancement [1], [2], [3], [4] and semantic understanding [5], [6], [7], [8]. Over the past few years, deep learning technology has increasingly energized image fusion approaches, achieving state-of-the-art performance. Unfortunately, three aspects of these approaches can be improved. (i) Most of them focus on promoting the visual effects of the fused images rather than considering the downstream vision tasks, placing an obstacle to scene understanding applications. (ii) Current fusion methods design handcrafted architectures with an increase of depth or width, which rely on verbose dedicated adjustments; thus, they inevitably cause time-consuming structural engineering. (iii) These methods are learned with specific training data, which cannot acquire the generalization ability for various fusion scenarios. In this part, we first briefly discuss their major shortcomings of learning-based methods and then propose our core contribution.\n• Risheng. Liu First, existing methods mostly concentrate on image fusion individually and few consider the underlying relationship of downstream vision tasks with fusion. There are two categories of learning-based fusion methods, including conventional framework with plugging learnable mechanisms and end-to-end learning. In detail, the first category of fusion first utilizes learnable auto-encoders to extract the features and leverages traditional fusion rules (e.g., ℓ 1 norm [9], weighted average [10] and maxing choose [11]) to fuse the features of different modalities. The fused images are reconstructed by the corresponding decoders. These handcrafted fusion rules actually realize the simple information aggregation. These methods are limited by handcrafted strategies, which cannot accomplish adaptive modal feature preservation. On the other hand, end-to-end learningbased schemes have been proposed to straightforwardly fuse diverse modalities by versatile architectures [12], [13], [14], [15] or generative adversarial networks [1], [16], [17]. Existing fusion schemes mostly focus on the improvement of fusion quality supervised by statistic measurements (e.g., modal gradients [13] and image quality assessment [12]), where these statistic measurements provide the guideline to fuse images containing the information as close as modal images. It is worth pointing out that without comprehensive modeling, existing fusion methods are easy to neglect the representative features for underlying vision tasks and deteriorate their performance.\nSecond, current methods, either plugging learnable modules or end-to-end learning widely rely on handcrafted architectures. However, the manual design is easy to induce feature redundancy, causing to generate edge artifacts, and cannot sufficiently utilize the distinct characteristics of modal information. Furthermore, designing highly performed architectures are with huge labor and ample hand-crafted experience. 
For instance, as for plugged learnable modules, dense blocks [9], multi-scale module [2], spatial attention [18] and feature decomposition [11] are utilized to cascade depth for extracting modal features. As for end-toend learning, dense connection [1], [12], [14], and residual modules [13], [19] are proposed to aggregate the modal feature jointly. Meanwhile, there are few works that leverage the differentiable architecture search [20] to discover the suitable architectures for image fusion [21], [22]. Although these methods achieve remarkable performance, the mainstream search strategies are always leveraged for the largescale dataset, sacrificing the accurate gradient estimation. This would cause the unstable architecture search procedure under small-scale fusion data, partially damaging the final performance of fusion.\nThird, most fusion methods are trained on specific training data. Unfortunately, due to the distribution of diverse fusion tasks varying significantly, these methods cannot acquire the fast adaptation ability and flexibly transfer these solutions to other fusion scenes. Specifically, schemes of plugged learnable modules [9], [23] are trained with a large dataset, such as MS-COCO [24], in order to sufficiently learn the ability of encoding and reconstructing features. However, these methods cannot effectively extract the salient and typical information from multi-modal images because of the difference in data distribution. As for end-to-end learning, there actually lacks an effective practice to investigate the intrinsic feature representative among diverse fusion tasks. Though several methods introduce versatile architectures [13], [14], the feature learning is still based on the specific data." }, { "figure_ref": [], "heading": "Our Contributions", "publication_ref": [], "table_ref": [], "text": "To partially overcome these critical limitations, we propose a Task-guided, Implicitly-searched, and Meta-initialized (TIM) image fusion model. Specifically, we first formulate the task-guided image fusion as a constrained strategy, to aggregate the information from downstream vision tasks to assist the unsupervised learning procedure of fusion. Then, rather than leveraging differentiable search directly, we develop an Implicit Architecture Search (IAS) to investigate the structure of the fusion model with high efficiency and stability. In order to acquire the generalization ability, we propose the Pretext Meta Initialization (PMI) to learn the general feature extraction, endowing the fast adaptation ability for fusion model to address various fusion scenarios. The main contributions of this work can be summarized as:" }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "Targeting to incorporate the task-related guidance into the learning of image fusion, we establish a constrained strategy to model image fusion with downstream tasks, in order to break down the bottleneck of ignoring vision tasks information of most fusion approaches." }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "As for architecture construction, we propose an implicit search strategy to automatically discover of fusion model with high efficiency, avoiding the verbose adjustment and huge structural engineering of mainstream design methodologies." 
}, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "As for parameter training, we develop the pretext meta initialization strategy to learn the intrinsic fea-ture extraction among different fusion data, thus makes the fusion model has the capability to realize the fast adaptation for various scenarios, only using few amounts of data." }, { "figure_ref": [], "heading": "•", "publication_ref": [], "table_ref": [], "text": "We successively apply our fusion method to various downstream vision perception tasks. Objective and subjective comparisons on enhancement and semantic understanding tasks with sufficient evaluations demonstrate our superiority and the effectiveness of proposed mechanisms." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b24", "b25", "b26", "b27", "b28", "b29", "b30", "b31", "b32", "b8", "b33", "b21", "b1", "b0", "b16", "b15", "b16", "b11", "b13", "b12", "b0", "b34", "b35" ], "table_ref": [], "text": "In this section, we briefly review the development of image fusion methods and discuss the limitations of current image fusion schemes.\nConventional fusion frameworks include three aspects, i.e., feature extraction and handcrafted rule-based fusion, feature reconstruction. Typically, multi-scale transform schemes are broadly used for multi-modality fusion tasks, such as discrete wavelet [25], [26], contourlet transform [27], non-subsampled transform [28], curvelet transform [29] and discrete cosine scheme [30]. These algorithms firstly transpose the images into various scales, and design suitable fusion rules to generate results. Moreover, methods based on sparse representation attempt to learn the comprehensive dictionary from different multi-modality images. For example, Liu et.al. [31] propose an adaptive sparse representation to learn the structural information from image patches. Furthermore, Principal Component Analysis (PCA) [32] and Independent Component Analysis (ICA) are two effective tools to preserve the intrinsic and compact features for subspace analysis-based fusion. Optimization models, e.g., total variation minimization [33] are used for IVIF task, which fuses images on the perspectives from pixel intensities and gradient distribution of different modalities. However, certain manually designed rules are not inapplicable for various multi-modality fusion scenes and tasks. On the other hand, classic image fusion methods cannot fully characterize the typical properties of each modality image. Complex strategies limit performance and decrease the inference efficiency.\nSince the strong feature fitting ability, plentiful learningbased image fusion methods are proposed, which are composited by two categories: conventional frameworks with learnable mechanisms and end-to-end learning. The schemes of introducing learnable mechanisms are basically composited by an encoder, fusion rules and the decoder. At the training phase, the encoder and decoder are jointly trained for each modality, aiming to fully characterize the representative features and reconstruct the source images. At the test phase, diverse fusion rules such as summation, weighted average and ℓ 1 norm operations are applied, based on the vital features from each encoder, to obtain the comprehensive fused feature. For instance, Li et. al. [9] propose a Densenet as encoder and the ℓ 1 norm as rules for IVIF task. Liu et. al. 
[34] utilize the coarse-to-fine fusion architecture as the encoder and the edge-attention mechanism as the fusion rules without well-aligned datasets. Liu et. al. [22] utilize architecture search to perform the construction of the encoder and decoder. Liu et. al. [2] integrate the flexible plugged priors and data-driven modules as the bi-level optimization based on different characteristics for Infrared-Visible Image Fusion (IVIF) and Medical Image Fusion (MIF) tasks. However, similar with the conventional fusion schemes, these hybrid methods are limited by the complicated strategies design, which is easy to induce artifacts and blurs and always ignore the salient modal features.\nIn recent years, designing end-to-end fusion models has been received widespread attention. Generative Adversarial Networks (GAN) [1], [17] attempt to control generator output the fused image with thermal radiation in infrared images and gradient details in visible images by the reinforcement from different discriminators. For the first time, Ma et. al. introduce several generative adversarial schemes [16], [17] to transfer diverse modal distribution for the fused images. Liu et. al. proposes the target-ware dual adversarial learning to preserve the structural information of infrared images and texture details from visible images, which is also benefit for subsequent detection task. Moreover, the universal frameworks (e.g., FusionDN [12], U2Fusion [14], SDNet [13]) based on image quality assessment, continuous learning and squeeze with decomposition mechanisms are widely performed on digital image (e.g., multi-focus and exposure), multi-spectral and medical fusion tasks. Though, these schemes can realize diverse fusion tasks based on information aggregation with statistic metrics, lacking a concrete fusion standard to define the meaningful information. Lastly, there are several works [1], [35], [36] to connect image fusion with semantic understanding tasks. However, these works lack the investigation of inner taskguided relationship, efficient architecture construction and rely on the complex training strategies, which are easy to focus on one task and neglect the optimization of others." }, { "figure_ref": [ "fig_0" ], "heading": "THE PROPOSED METHOD", "publication_ref": [ "b0", "b8", "b9", "b12" ], "table_ref": [], "text": "In essence, the mainstream deep learning-based methods for image fusion [1], [9], [10], [13] are to perform end-toend training with network, so as to establish the mapping between multi-modal inputs and fused images directly. In this part, we first propose a image fusion network, which can be formulated as I F = N F (I A , I B ; θ F ). I A , I B and I F denote the multi-modal inputs and fused images, respectively. Introducing the constraint of loss function ℓ F , we can utilize ℓ F (N F (I A , I B ; θ F )) to train the fusion network. 1 However, the straightforward training of fusion lacks the consideration of following vision tasks to integrate taskpreferred information, which cannot effectively promote the task performance. Thus, we aims to leverage the task guidance to establish task-oriented image fusion. The overview of paradigm is shown at the Fig. 1." 
}, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Image Fusion with Task Guidance", "publication_ref": [ "b36", "b37", "b38", "b39" ], "table_ref": [], "text": "In this part, we introduce a task-specific objective for image fusion from the nested optimization perspective, which decouples the while framework into two parts, including image fusion network N F and vision task network N T .\n1. In this part, we only introduce the formal representations of the fusion network and loss function. Concrete network construction and loss function are presented in the implementation details of Sec. 4.\nThus, the holistic network and parameters are decoupled as N = N T • N F and θ = {θ T , θ F }. The goal of vision task is to realize the vision perception with generating the taskoriented outputs y based on one fusion image I F . Similarly, the learning procedure can be defined as y = N T (I F ; θ T ). This framework can effectively transfer the general solution of single-modal vision task into our framework, which can composite a highly efficient N T . In this way, we bridge vision tasks into image fusion procedure, where the optimization of image fusion is constrained by losses of informative richness ℓ F and task-specific maintenance proportion ℓ T . Taking the effective feedback of task performance as the fusion standard, the task-oriented image fusion can be achieved. The whole task-guided objective to bridge visual perception task and image fusion is shown at Fig. 1 (a).\nmin θT ℓ T (N T (I F ; θ T )),(1)\ns.t.\nI F = N F (I A , I B ; θ * F ), θ * F = arg min θF ℓ F (N F (I A , I B ; θ F )).(2)\nThe constrained formulation is shown at Eq. ( 1) and Eq. (2). Specifically, as for the given vision task, we introduce the standard loss functions ℓ T to train N T based on a single fusion image I F . Meanwhile, we consider the image fusion process as a constraint, which is expressed at Eq. ( 2) and reveals the procedure of obtaining the fused image I F based on the optimal network parameters θ * F . Directly solving this nested optimization is challenging, due to the complex coupled formulation. Specifically, the gradient of task-specific objective can be formulated as\n∂ℓT ∂θT = ∂ℓT(θT;θT(θ * F )) ∂θT + G(θ T (θ * F ))\n, where G(θ T (θ * F )) represents the indirect gradient based the response from image fusion θ * F . Noting that, rather than providing vision tasks with more fusion response, we aims to strengthen the image fusion with task guidance. Thus, instead of straightforwardly addressing this task-specific objective using exact solutions [37], [38], [39], [40], we streamline a gradual stagewise procedure to aggregate task preference for fusion.\nIn order to investigate the relationship between image fusion and downstream vision tasks, a straightforward way is joint learning. Joint learning from scratch may leads to the hard convergence without well initialized I F . Thus, we firstly put more attention to addressing single image fusion constraint (i.e., Eq. ( 2)). In detail, one major obstacle is to obtain efficient architecture, which should be effective for feature extraction. We present the Implicit Architecture Search (IAS) to discover effective architectures to composite N F . Exploring further, facing with different data distribution of vision tasks, well initialized parameters of image fusion can realize the flexible adaptation. 
Thus, we propose the Pretext Meta Initialization (PMI) to learn generalizable parameters (denoted as θ 0 F ) to investigate the task-agnostic fusion ability. Based on IAS and PMI, we can utilize the gradient descent\nθ k F -∇ θF ℓ F (N F (I A , I B ; θ k F )\n) to obtain fundamental fused images, which is shown at the bottom of Fig. 1 (a).\nThen we put the constraint of image fusion into the optimization of vision tasks to jointly optimize the network of fusion and downstream tasks. The composited objective can be written as min θT,θF ℓ T (N\nT (I F ; θ T )) + ηℓ F (N F (I A , I B ; θ F )),\nwhere η is the balanced weight. Obviously, this formulation reveals that the gradient of θ F is composited by the measurement of information richness from ℓ F and task guidance from ℓ T . Noting that, this learning strategy is mutually beneficial for both two networks. On the one hand, the nested optimization of image fusion with I F can guide the learning of vision task. On the other hand, the backward feedback of specific vision with y can facilitate the taskrelated information into image fusion to finally realize the task-oriented learning, as shown by the cycled yellow arrows in Fig. 1 (a)." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Implicit Architecture Search", "publication_ref": [ "b19", "b20", "b19", "b21" ], "table_ref": [], "text": "As shown in Fig. 1 (a), we utilize architecture search to discover efficient architecture for image fusion. Currently, there are two popular methodologies to design the architecture of image fusion, i.e., manual design and general architecture search. However, handcrafted architectures of fusion are mostly based on existing mechanisms, limited in the heavy labor and design experiences. On the other hand, the mainstream differentiable search strategies [20], [21] have been introduced with large-scale datasets, which cannot estimate the accurate gradients due to the one-step approximation considering the efficiency. Thus, these methods are easy to generate unstable architectures, especially for the insufficient data of image fusion. Thus, we propose the implicit architecture search, which can effectively support the solving procedure of Eq. ( 2) towards stable architectures.\nThe whole procedure is plotted at Fig. 1 (b). Following with differentiable relaxation [20], [22], we introduce α F to denote the architecture weights of N F . Then we introduce the search objective ℓ αF to measure the influence of α F .\nThe goal of implicit strategy is to avoid the insufficient learning of θ F and large computation, which is more suitable for the unsupervised fusion tasks. Noting that, we omit the subscript F for brief representation. As for the solving procedure, by substituting θ, the concrete gradient G α of ℓ α can be generally written as:\nG α = ∇ α ℓ α (α; θ) + ∇ θ ℓ α (α; θ)∇ α θ(α).(3)\nBased on the assumption that lower-level sub-problem have one single optimal solution and referring to implicit function theory, the optimal parameters θ characterizes that ∇ θ ℓ(θ; α) = 0 and\n∇ α θ(α) = -∇ 2 α,θ ℓ(α; θ)∇ 2 θ,θ ℓ(α; θ) -1 .\nIn this way, we can obtain the preciser gradient estimation than general search strategies, avoiding the insufficiency of one-step update. Inspired by Gauss-Newton (GN) method, we leverage the outer product of first-order gradient to approximate second-order derivative. 
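To make the stage-wise procedure above concrete, the following sketch shows a fusion-only warm-up with ℓ_F followed by a joint update of {θ_T, θ_F} under the composite objective ℓ_T + η ℓ_F. It is a minimal illustration under stated assumptions, not the searched networks of this paper: the tiny convolutional stand-in for N_F, the per-pixel task head for N_T, the placeholder losses, and all hyper-parameters are illustrative choices.

```python
# Sketch of the stage-wise, task-guided training described above: first obtain
# fundamental fused images by descending the fusion loss ell_F alone, then
# jointly optimize theta_F and theta_T with ell_T + eta * ell_F.
import torch
import torch.nn as nn
import torch.nn.functional as F

fusion_net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())   # stand-in N_F
task_net = nn.Conv2d(1, 2, 3, padding=1)                                   # stand-in N_T (2-class per-pixel head)

def ell_f(i_f, i_a, i_b):
    # Placeholder informative-richness loss: stay close to both source images.
    return ((i_f - i_a) ** 2).mean() + ((i_f - i_b) ** 2).mean()

i_a, i_b = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
labels = torch.randint(0, 2, (1, 64, 64))
eta = 0.5                                      # balancing weight in the composite objective

# Stage 1: theta_F update theta_F - grad ell_F, i.e. fusion-only warm-up.
opt_f = torch.optim.Adam(fusion_net.parameters(), lr=1e-4)
for _ in range(10):
    i_f = fusion_net(torch.cat([i_a, i_b], dim=1))
    loss = ell_f(i_f, i_a, i_b)
    opt_f.zero_grad(); loss.backward(); opt_f.step()

# Stage 2: joint update of {theta_T, theta_F} with ell_T + eta * ell_F.
opt = torch.optim.Adam(list(fusion_net.parameters()) + list(task_net.parameters()), lr=1e-4)
for _ in range(10):
    i_f = fusion_net(torch.cat([i_a, i_b], dim=1))     # I_F = N_F(I_A, I_B; theta_F)
    loss_t = F.cross_entropy(task_net(i_f), labels)    # ell_T(N_T(I_F; theta_T))
    loss = loss_t + eta * ell_f(i_f, i_a, i_b)
    opt.zero_grad(); loss.backward(); opt.step()

print("final joint loss:", float(loss))
```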
Based on the least square method, the implicit approximation of architecture gradient can be formulated as:\nG α = ∇ α ℓ α (α; θ) - ∇ θ ℓ(α; θ) ⊤ ∇ θ ℓ α (α; θ) ∇ θ ℓ(α; θ) ⊤ ∇ θ ℓ(α; θ) ∇ α ℓ(α; θ).\n(4) Furthermore, we discuss the advantages of proposed methods. Firstly, this strategy is based on the requirement of sufficiently learned network parameters. The optimal parameters can provide the accurate gradient estimation. Secondly, compared with the general differentiable search, since it is not required to update once of each iteration, it has the search stability for architectures. Moreover, image fusion task is a unsupervised task without abundant data. IAS actually has more efficient for this task.\nThen we introduce the concrete search objective. We first propose a operation-sensitive regularization Reg into the search objective, in order to indicate the basic properties of operations (e.g., computational cost and compactness of architecture). For instance, Reg can be considered as the weighted summation of latency based on all operations, which is used to constrain the parameter volume. We also can control the compactness to define Reg with the total number of skip connections. Thus, the search objective is formulated as: ℓ αF = ℓ F + λ(Reg(α F )). where λ represents the trade-off coefficient to balance the fusion quality and operation-sensitive properties." }, { "figure_ref": [ "fig_0" ], "heading": "Pretext Meta Initialization", "publication_ref": [ "b40", "b41", "b42", "b13" ], "table_ref": [], "text": "Obviously, θ F plays a critical role to bridge the information aggregation of image fusion and following vision tasks. Well initialization θ F should reveal the intrinsic fusion principles and is as an intermediary for fast adaptation. On the other hand, θ F should merge the stylized domain information to strengthen the generalization ability for unseen fusion data. However, existing image fusion methods seldom digest the inhere fusion principles. These approaches design specific fusion rules with models for specific fusion tasks. More importantly, fusion tasks are varied widely and have distinct intensity distributions. It is untoward to obtain the generalizable θ 0 F by directly pre-training on the hybrid fusion datasets, which cannot sufficiently store meta knowledge of fusion tasks and is without consistent representations.\nTherefore, as shown at Fig. 1 (c), we present the pretext meta initialization strategy to learn the fast adaptation ability, which can assist the framework adapt to specific fusion task fast to learn task-oriented θ * F , associated with informative fusion and downstream vision perception tasks. We denote ω as the weights learning from the pretext task among diverse fusion scenes. In fact, we introduce an additional constraint into Eq. (1) and Eq. ( 2), which is defined as follows:\nθ 0 F = ω * , with ω * = arg min ω M i=1 f (ω; θ Fi (ω)),(5)\nwhere M denotes the fusion tasks. Thus, we construct a pretext meta initializationconstraint for image fusion-based vision optimization. It is actually another optimization problem based on image fusion constraint, i.e., Eq. ( 2), which brings challenging computation difficulties. Thus, we propose a hierarchical solving procedure [41], [42], [43]. We consider this solution under the solution of image fusion constraint. In details, we define f as the feature-level informative richness measurement, aiming to weight the generalization ability of ω, following with [14]. The solving procedure of pretext objective Eq. 
( 5) can be divided into two steps, i.e., optimizing θ Fi with specific fusion scenes and minimizing the meta objective among diverse scenes. As for each scene, we can obtain specific θ Fi by several gradient steps, which can be formulated as θ Fi ← ω -∇ ω ℓ F (N F (I A , I B )). Then we measure the performance of these task-specific weights θ Fi to learn the common latent distribution and essential fusion principles of image fusion tasks. The computation procedure is Updating parameters θ F using Eq. ( 2) with T steps.\nω ← ω -∇ ω M i=1 f (ω; θ Fi (ω))." }, { "figure_ref": [], "heading": "4:", "publication_ref": [ "b14" ], "table_ref": [], "text": "Using implicit approximation (i.e., Eq. ( 4)) to update architecture. α F ← α F -G α . 5: end while 6: % Pretext meta initialization. 7: while not converged do 8:\nfor each fusion task i with K steps do 9: \nθ Fi ← ω -∇ ω ℓ F (N F (I A , I B )).\nω ← ω -∇ ω M i=1 f (ω; θ Fi (ω)).\n12: end while 13: θ 0 F = ω * . 14: % Task-guided learning of whole network. 15\n: θ * T , θ * F = arg min θT,θF ℓ T (N T (θ T )) + ηℓ F (N F (θ 0 F )). 16: return α F , θ * T and θ * F .\ncan reflect the generalizable ability of ω. We perform two steps iteratively until achieving ω * . Then we assign the values of ω for θ 0 F and continue to solve other constraints of Eq. ( 1). The concrete details are reported in the Alg. 1. Related ablation studies to demonstrate the effectiveness of two strategies are performed on Section 5.3. It worth to point out that, based on the well initialization, we can utilize few training data and small iterations to achieve the significant results compared with direct training.\nTo sum up, we provide other two important supports to endow the effective architecture construction principle for N F and establish the pretext meta initialization to learn the adaptive parameters among different data. Thus, these techniques effectively support the optimization of image fusion constraints, i.e., Eq. ( 2). We summarize the complete scheme as Alg. 1. Noting that, in order to simplify the representation, we omit the concrete learning rates." }, { "figure_ref": [], "heading": "APPLICATIONS", "publication_ref": [], "table_ref": [], "text": "In this section, we will elaborately illustrate the implementation details for image fusion. Considering two vision tasks, including image fusion for visual enhancement and semantic understanding, we extend the architecture design for these tasks and report the necessary training details." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b20", "b20", "b20", "b21" ], "table_ref": [], "text": "In this part, implementation details for image fusion network N F including architecture construction and parameter training are introduced.\nSearch Configurations. The search space of image fusion is introduced from [21], which provide various image fusion-oriented cells and operators. In [21], it provides the details of cells (i.e., successive cell C SC , decomposition cell C DC and multi-scale fusion cell C MS ). The search space of operators including Channel Attention (CA), Spatial Attention (SA), Dilated Convolution (DC), Residual Block (RB), Dense Block (DB) and Separable Convolution (SC) with different kernel size (3 × 3 and 5 × 5) are provided at [21], [22]. We define the regularization Reg as the weighted summation of GPU latency, aiming to obtain the lightweight efficient architecture. It is computed by linear summation, i.e., Reg(α) = l o∈O α l LAT(o). l denotes the layer index. 
LAT(o) represents the latency of operation o. As for the search of N F , we set 20 and 80 epochs to optimize the weights for cells and weights for operators respectively. Simultaneously, we collect 200 training data from IVIF and MIF tasks respectively, and update each epoch using one kind task-specific dataset alternately. In detail, we divided the whole dataset equally into the network parameters updating and architecture optimizing. In search phase, N F is composed by two candidate cells, each cell has two blocks. Utilizing SGD optimizer with initial learning rate 1e -3 , T = 20 iterations and performing cosine annealing strategy, we search the whole architecture 100 epochs.\nTraining Configurations. At the pretext meta initialization stage of image fusion, we utilize 400 pairs from multiple tasks to optimize well initialization ω * . In details, we consider four fusion tasks including IVIF (e.g., TNO, Road-Scene) and MIF (e.g., MRI, CT, PET and SPECT fusion) tasks. The learning rate for single-task (step 9) and multi-tasks updating (step 11) are set as 1e -3 and 1e -4 respectively. As for single-task learning, we conduct 4 gradient updates, with the Adam optimizer. Furthermore, we prepare amounts of patches with size 64 × 64 and generate corresponding saliency maps. As for RGB images, we transform them to YCbCr channels and take the Y channel for fusion. Data augmentation, e.g., flip and rotation are utilized. All search and training experiments of image fusion are performed on a NVIDIA GeForce GTX 1070 GPU and 3.2 GHz Intel Core i7-8700 CPU." }, { "figure_ref": [], "heading": "Image Fusion for Visual Enhancement", "publication_ref": [ "b20", "b13", "b21", "b13" ], "table_ref": [], "text": "Designing suitable image fusion schemes to sufficiently incorporate different characteristics is a vital component. As analyzed in [21], the image fusion should preserve complete but discrepant information, i.e., structural target information and abundant texture details. Therefore, we formulate both two objects as the parallel fusion structure as N T to investigate these discrepancies, i.e., target extraction and detail enhancement. In this way, we conduct the implicit search for the whole framework. In order to constrain the computation burden, we utilize two successive cells (with two candidate operators) to composite the outer representation of this parallel enhancement network. Introducing different losses, the principled objects can be achieved. Finally, by introducing spatial attention and three 3 × 3 convolutions, the final fused images can be obtained, where the goal of spatial attention is to generate a map to fuse these hierarchical features.\nIn order to simplify the training procedure, we consider two kind of losses, intensity loss and SSIM metric. We utilize the Mean Square Error (MSE) loss ℓ int = ∥I 1 -I 2 ∥2 2 to measure the difference of pixel intensity. Structural similarity, denoted ℓ ssim , which is defined as ℓ ssim = 1 -SSIM(I 1 , I 2 ). Therefore, the whole loss is written as: ℓ = ℓ int + µℓ ssim . We introduce two weight formulations to measure the information preservation. On the one hand, targeting to extract rich features information in N F module, we introduce to estimate weight maps computed by the shallow and deeplevel features from VGG network, following with [14]. 
To simplify, we denote it as ℓ F .\nOn the other hand, focusing on the visual quality of specific fusion tasks, we introduce the spatial saliency map estimation to weight the proportional information based on pixel distribution. Firstly, we calculate the histogram map of source images, denoted as H his . Inspired by fusion rule [22], the contrast ratio of each pixel can be computed by M(i) = 255 j=0 H his (i)|j -i|, where i, j denote the intensity value. Then, we obtain the final estimation map by softmax function to constrain the range between 0 and 1. In this way, given two modality-based images I A , I B , fusion image I F and saliency-guided weights M A , M B , we can obtain the weighted loss function, i.e.,\nℓ V int = ∥M A ⊗ (I F -I A )∥ 2 2 + ∥M B ⊗ (I F -I B )∥ 2 2 and ℓ V ssim = 1 -SSIM(M A ⊗ I A , M F ⊗ I A ) + 1 -SSIM(M B ⊗ I F , M B ⊗ I B ).\nTo simplify, we denote this formulation as ℓ T = ℓ V int +µℓ V ssim . As for the parallel outputs of N T , ℓ int and ℓ ssim are utilized to constrain the similarity of different modalities to realize the target extraction and detail enhancement respectively.\nUtilizing the hybrid datasets in TNO 2 and Road-Scene [14], we search the parallel fusion structure based on the searched N F for IVIF. Moreover, collecting 150 pairs of multi-modal medical data from Harvard website 3 , we can search three task-specific networks for MRI-CT, MRI-PET and MRI-SPECT fusion tasks. SGD optimizer with learning rate 1e -3 and consine annealing strategy is utilized to train with 100 epochs. Then inserted subsequent N T , we train the whole network jointly. Furthermore, we also illustrate details about the enhancement of training strategy for visual fusion. We set the learning rate as 1e -4 and introduce the Adam Optimizer to train the whole network for 100 epochs for infrared-visible and medical fusion tasks." }, { "figure_ref": [], "heading": "Image Fusion for Semantic Understanding", "publication_ref": [ "b43" ], "table_ref": [], "text": "Based on the results from N F , we can strengthen diverse N T for semantic Understanding tasks (i.e., multi-spectral object detection and segmentation) by proposed architecture search. It should be emphasized that our goal is not to completely design the entire semantic perception network, but to search for the core feature expression to improve the performance of perception tasks.\nTargeting to obtain the efficient feature fusion for semantic perception, we improve the directed acyclic graphtype cell with feature distillation mechanism for flexible representations, which is denoted as C FD . The graph cell contains several nodes, where the edges represent the relaxation of operators. At the final node, this cell performs the feature distillation mechanisms [44] by concatenating the features from other nodes. To be specific, we utilize the cascaded feature distillation cell to constitute the modular feature fusion parts (e.g., neck part in object detection and feature decoding in segmentation), allowing a seamless way for changing different backbones. Considering the low-weighted and efficient feature representations for these high-level perception tasks, we introduce diverse singlelayer convolutions to constitute the search space, including normal convolution with k × k, k ∈ {1, 3, 5, 7}, dilated convolution with k × k, k ∈ {3, 5, 7} and dilate rate of 2, residual convolution with kernel size k × k, k ∈ {3, 5, 7} and skip connection." 
}, { "figure_ref": [], "heading": "Object Detection", "publication_ref": [ "b44", "b45", "b46", "b47", "b5", "b48" ], "table_ref": [], "text": "In this paper, we utilize RetinaNet [45] as the baseline scheme. Recently, a series of NAS-based detection schemes [46], [47], [48] are proposed to discover the neck part, including searching the connection modes from topdown and bottom-up perspectives, or operators for multiscale feature fusion. Following bottom-up principle, we utilize the feature distillation cell to fuse feature progressively. In detail, focusing on two features with diverse scales from backbones, we first resize the features with lower resolution and concatenate them into the cell at three-levels, where the cells contains four nodes.\nWe introduce the MultiSpectral dataset proposed by Takumi et.al [6] for experiments. This dataset is captured by RGB, FIR, MIR and NIR cameras. Due to the low resolution (256 × 256) and blurred imaging, we re-partitioned and filtered the dataset. In detail, we select 2550 pairs for training and 250 pairs for testing. Five categories of objects are consisted in this dataset, including color cone, car stop, car, person and bike. In order to impose the principles of detection, we adopt the widely-used RetinaNet [49] as the baseline for comparison. The major improvement comes from the FPN re-design by automatic search and pretext meta initialization. Utilizing MultiSpectral dataset and plugging the N F , we search the whole architecture with proposed search strategy from scratch progressively. More concretely, the batch size, architecture learning rate and search epochs are 1, 3×e -4 and 120 respectively. Targeting to fast convergence, we firstly train the fusion module with 40 epochs for well initialization. As for the training procedure, we train the whole architecture 160000 steps and set the learning rate as 2×e -3 and delay it with a cosine annealing to 1 ×e -8 ." }, { "figure_ref": [], "heading": "Semantic Segmentation", "publication_ref": [ "b49" ], "table_ref": [], "text": "As for semantic segmentation, we introduce ResNet18 as the encoder to conduct feature extraction. Compared with existing RGB-T segmentation schemes [50], which leverage two backbones to encoding different modal features, our segmentation scheme is lightweight based on the nested formulation with image fusion. As for the decoder part, we utilize the similar fusion strategy to integrate the features from high and low-level feature maps. We first utilize the residual upsampling mechanism to resize the lowresolution feature as large as high-level ones with same channels. Then we concatenate them as the inputs. A residual connection is utilized for the output of cells. Similarly, we also utilize three-level features and put forward two cells to fuse them. Each cell has two nodes. Finally, the estimated map is generated from 1 8 size. Coupling with N F and searched segmentation module, we further investigate the jointly learning between image fusion and semantic perception. As for segmentation task, we leverage the widely used MFNet dataset, including 1083 pairs for training and 361 pairs for testing. This dataset is composited by various scenarios (e.g, poor light, glare, daytime) with size 640×480 and nine categories (i.e., background, bump, color cone, guardrail, curve, bike, person and car). Plugged the pre-searched N F , we search the segmentation network specifically. 
Widely-used Cross-Entropy loss computed in 1 8 and 1 16 is introduced as the search and training loss ℓ T . With batch size of 2 and initial learning rate 1e -2 and data augmentation (random clip and rotation), we search the decoder part for 100 epochs. Utilizing SGD optimizer, we decay the learning rate from 1e -2 to 1e -8 within 240 epochs with all training data to train the network." }, { "figure_ref": [], "heading": "EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "In this section, we first perform the task-guided image fusion on two categories of applications including visual improving and semantic understanding. Then we conduct comprehensive technique analyses to illustrate the effectiveness of two mechanisms (i.e., IAS and PMI)." }, { "figure_ref": [], "heading": "Image Fusion for Visual Enhancement", "publication_ref": [], "table_ref": [], "text": "In this part, we performed comprehensive experiments to demonstrate our superiority based on objective and subjective evaluations on Infrared-Visible Image Fusion (IVIF). In order to verify the flexibility of our method, we extend the scheme to address Medical Image Fusion (MIF)." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_3" ], "heading": "Infrared-Visible Image Fusion", "publication_ref": [ "b16", "b50", "b8", "b15", "b10", "b22", "b21", "b0", "b12", "b13" ], "table_ref": [], "text": "We compared with ten state-of-the-art learning-based competitors, which include DDcGAN [17], RFN [51], DenseFuse [9], FGAN [16], DID [11], MFEIF [23], SMOA [22], TARDAL [1], SDNet [13] and U2Fusion [14]. The outer structure of our final searched architecture are C MS and C SC for fusion, C SC and C SC for enhancement. The inner operators are 3-RB, 3-DC, 3-DB, 3-DC, SA, 3-DC, CA and SA respectively.\nQualitative Comparisons. We perform the objective evaluation on two representation datasets TNO and Road-Scene in Fig. 2 and Fig. 3. From this intuitive view, three discriminative advantages can be concluded. Firstly, our scheme can highlight significant high-contrast and clear thermal objects, as shown in the first and third row of Fig. 2. However DenseFuse and U2Fusion maintain abundant texture features from different modalities, the remarkable targets of thermal radiation cannot be preserved well. Second, the proposed method effectively preserves ample texture and structure information in visible images. As shown in the first row of Fig. 2 and the last row of Fig. 3, the sky, ground material and the color of wall are sufficiently maintained in our results, which is consistent with human visual system. Suffering from the strong pixel intensity of infrared image, most of fusion schemes have color distortion, cannot preserve rich texture structure. Furthermore, the proposed scheme can efficiently remove artifacts coming from different modalities, such as the thermal blur and visible noise. For instance, MFEIF, DID and AUIF schemes contain obvious noises and artifacts shown in the second row of Fig. 2 and Fig. 3. In contrast, our scheme not only highlights the distinct infrared targets but also preserves textural details, accomplishing the comprehensive results. information (e.g., image edges) fused by source images. Q AB/F is utilized to measure the textures details by a statistical scheme (i.e., computing the amount of edge information transformed from source images). We report the numerical comparison in Table . 1 and Table . 2. Obviously, the general TIM achieves the best performance for IVIF. 
Moreover, the significant improvement of MI and Q AB/F compared with the newest fusion schemes (i.e., TARDAL and SDNet) indicates that the proposed TIM achieves excellent performance with visual-pleasant, distinct but complementary information and flourish texture details. Furthermore, TIM w/ L also obtains comparable performances in both datasets. On the other hand, we also compare the computation efficiency with these competitive fusion schemes, including parameters, FLOPs and inference time under the second row of tables. We utilize ten pairs with size of 448 × 620 to conduct these comparisons for TNO. Meanwhile, ten pairs with size of 560 × 304 are utilized to conduct these comparisons for RoadScene. DenseFuse network has few parameters and FLOPs but the inference time is limited by ℓ 1 -norm fusion rule. Similarly, fusion rule-based schemes (e.g., RFN, MFEIF and SMOA) also obtain sub-optimal inference time compared with end-toend networks. TIM w/ L achieves the fastest inference time and the lowest FLOPs between both datasets. Compared with the newest fusion scheme TARDAL, TIM w/ L reduces 57.23% parameters and 57.03% of FLOPs on TNO, which can be more easily delloyed on hardware to guarantee realtime inference. The comprehensive analysis between visual quality and inference time is shown at Fig. 4." }, { "figure_ref": [ "fig_4" ], "heading": "Image Fusion with Registration", "publication_ref": [ "b51", "b52", "b53" ], "table_ref": [], "text": "In real world, due to the diverse imaging pipelines and complex environments (e.g., temperature changes and mechanical stress), obtaining highly-accurate aligned multispectral images are challenging. The misalignment of source images is easy to generate fusion results with artifacts and ghosts [52]. Our method can effectively address the misaligned image fusion based on the flexible formulation. Considering the image fusion constraint (Eq. ( 2)) to connect vision task, we introduce another constraint to align the source images I B , I B respectively, which can be written as\nI B = N R (I A , I ′ B ; θ R ).\nWe denote the unaligned image as I ′ B and registration module as N R . By the effective nested formulation, we can introduce pretrained MRRN scheme [53] as N R to composite more general image fusion. For validating the robustness and flexibility of our scheme, we synthesize corrupted infrared images utilizing random deformation fields by affine and elastic transformations. Then we utilize the initialized parameters to learn a robuster fusion scheme, which can efficiently address unregistered multi-spectral image fusion scenarios. The numerical and visual results are reported in Table . 3 and Fig. 5 respectively. Other fusion schemes are based on the pairs registered by VoxelMorph [54]. Since corruption from distortions in infrared images cannot be recovered exactly, state-of-theart algorithms such as AUIF and SDNet still contain obvious ghosts, shown as the first row. We can conclude that our method can effectively persevere visible details and sufficient thermal information under the misaligned multispectral images." }, { "figure_ref": [ "fig_5" ], "heading": "Extension to Medical Image Fusion", "publication_ref": [ "b30", "b54", "b55", "b56", "b27", "b57", "b58" ], "table_ref": [], "text": "Since the flexible formulation, we can extend our method to address other challenging fusion tasks, e.g., medical image fusion. 
For medical image fusion, four typical modalities, i.e., MRI, CT, PET and SPECT, provide diverse structural and functional perception of physiological systems. Utilizing the Harvard dataset, we adopt the aforementioned search schemes and configurations to discover suitable architectures for three tasks. The hierarchical structure (i.e., N T ) of MRI-CT fusion is composed of 5-RB, 5-RB, 5-RB and SA operations. The operations for MRI-PET fusion include 3-SC, 3-RB, 3-RB and 5-RB. Furthermore, the architecture for MRI-SPECT fusion consists of 5-RB, 3-DB, 3-RB and SA. In this part, we conduct visual and numerical comparisons with six schemes, including ASR [31], CSMCA [55], CURVELET [56], DTCWT [57], NSCT [28] and PAPCNN [58]. Qualitative Comparisons. Intuitively, the qualitative results of MRI-PET/SPECT fusion are shown in Fig. 7 with various brain-hemispheric transaxial sections. Essentially, limited by the imaging equipment, PET/SPECT images have constrained resolution and exhibit mosaic degradation, whereas MRI provides ample structural details. The goal of these tasks is to maintain the structural details and the functional color expression. The proposed scheme improves the visual quality by removing the mosaic of PET/SPECT, as shown in the last row of Fig. 7. The fusion results of the other compared schemes still exhibit mosaic artifacts and noise. Furthermore, ASR and CSMCA can neither maintain the significant structure of MRI nor recover the color representation. Compared with these competitors, the proposed scheme suppresses the generation of noisy artifacts, highlights the informative soft-tissue structures (e.g., edges) and avoids color distortion. The high-contrast visual performance demonstrates the comprehensiveness of our method.
Quantitative Comparisons. Objective evaluations are also conducted to demonstrate the superiority of our fused results based on four metrics, i.e., MI, Entropy (EN), VIF and the Sum of the Correlations of Differences (SCD) [59]. Due to the diverse imaging quality of these medical modalities (e.g., mosaic factors), we utilize EN to measure the amount of information remaining in the fused images. Furthermore, since the edge details are not as dense as in visible images, we utilize SCD rather than edge-aware metrics (FMI edge and Q AB/F ). SCD is leveraged to measure the correlation between the difference images. We depict the numerical performance for the three medical image fusion tasks by plotting box-type figures in Fig. 6. Clearly, the proposed scheme consistently achieves the optimal mean values on these fusion tasks under the four numerical metrics." }, { "figure_ref": [], "heading": "Image Fusion for Semantic Understanding", "publication_ref": [], "table_ref": [], "text": "Benefiting from the nested optimization, we can facilitate the improvement of two semantic understanding tasks (i.e., object detection and semantic segmentation) based on image fusion." }, { "figure_ref": [], "heading": "Object Detection", "publication_ref": [ "b48" ], "table_ref": [], "text": "Quantitative Comparisons. As shown in Table 4, we report the quantitative results for object detection on the Multi-Spectral dataset. We illustrate the results of separate detection using RetinaNet [49] on single-modality inputs and on fused images generated by the different fusion schemes.
Qualitative Comparisons. As shown in the first row of Fig. 8, the DDcGAN scheme cannot preserve effective edge information of infrared targets and thus fails to detect any pedestrians. Our results preserve clear thermal targets (e.g., pedestrians) and ample details. For instance, the example in the last row is a challenging scenario, where the object is not salient in either the infrared or the visible modality. Our detector successfully detects this object, which demonstrates the superiority of our method."
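As a supplement to the no-reference metrics used in the medical fusion evaluation above, the following NumPy sketch shows one common way to compute EN (Shannon entropy of the fused image) and SCD (sum of the correlations of the difference images [59]). Exact implementation details (histogram bin count, normalization, dynamic range) vary across papers, so this should be read as an illustrative reference rather than the evaluation code used here.

```python
import numpy as np

def entropy(img, bins=256):
    # Shannon entropy (in bits) of a grayscale image histogram.
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 255))
    p = hist.astype(np.float64) / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def correlation(x, y):
    # Pearson correlation coefficient between two images (flattened).
    x = x.ravel().astype(np.float64) - x.mean()
    y = y.ravel().astype(np.float64) - y.mean()
    denom = np.sqrt((x ** 2).sum() * (y ** 2).sum()) + 1e-12
    return float((x * y).sum() / denom)

def scd(fused, src_a, src_b):
    # Sum of Correlations of Differences: the difference between the fused image
    # and one source should correlate with the other source.
    d_a = fused.astype(np.float64) - src_b.astype(np.float64)
    d_b = fused.astype(np.float64) - src_a.astype(np.float64)
    return correlation(d_a, src_a) + correlation(d_b, src_b)
```

Both functions operate on 8-bit grayscale arrays; color inputs (e.g., PET/SPECT) would typically be converted to a luminance channel first.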
}, { "figure_ref": [ "fig_7" ], "heading": "Semantic Segmentation", "publication_ref": [], "table_ref": [], "text": "Quantitative Comparisons. We utilize the searched semantic segmentation network to test the ten fusion methods and provide the detailed results in Table 5, which are measured by mean Intersection-over-Union (mIoU) and mean Accuracy (mACC). From this table, we can observe that our scheme, designed with the universal formulation, achieves the highest numerical performance on all eight categories. Moreover, the pre-trained fusion schemes that produce visually pleasant results or concentrate on statistical metrics are not consistently accompanied by remarkable segmentation performance. This also demonstrates that the goal of not only ensuring complementary, informative fusion but also assisting the improvement of semantic segmentation has been achieved uniformly.
Qualitative Comparisons. Furthermore, Fig. 9 depicts the visual comparisons with single-modality images and other competitive fusion schemes. The results of each modality fully reflect the complementary feature expression. Obviously, thermal-sensitive objects (e.g., pedestrians under poor lighting conditions) cannot be correctly predicted by the other fusion-based schemes. This illustrates that our scheme obtains sufficient infrared salient information for target classification. On the other hand, our scheme can also preserve texture details well to estimate thermal-insensitive objects, such as the bumps shown in the first row and the car in the second row. To sum up, these three representative scenarios effectively demonstrate our superior results compared with these advanced competitors." }, { "figure_ref": [ "fig_0", "fig_0", "fig_9", "fig_9" ], "heading": "Ablation Study", "publication_ref": [ "b19" ], "table_ref": [], "text": "To evaluate the effectiveness of the two proposed techniques, we perform the relevant ablation analyses. All ablation experiments are based on the infrared-visible fusion vision task. We first evaluate the fusion performance of the proposed search strategy against the mainstream search strategy (taking DARTS [20] as an example). Then we verify the importance of pretext meta initialization.
Search Strategy. We compare the search strategies (i.e., the mainstream search DARTS and the proposed IAS) on the TNO dataset in terms of objective losses (in log form) and numerical results in Fig. 10. As shown in subfigure (a), our proposed scheme achieves rapid convergence of the losses based on accurate gradient estimation, which supports finding a better solution for the architecture relaxation weights. In subfigure (b), we report the performance of ten architectures obtained by DARTS and IAS by randomly searching ten times. We can observe that the performance of the DARTS-based architectures is not consistent and fails to remain stable. Our IAS-based architectures not only achieve higher numerical results but also maintain stable performance. The qualitative and quantitative results of the final performance are reported in Fig. 11 and Table 6. Through these comparisons, we can conclude that IAS realizes high performance, efficiency and stability. Obviously, the network discovered by the proposed search improves the performance under these metrics, which also demonstrates the advantages of the proposed search strategy. To compare the single-operator composited architectures fairly, we only leverage the N F module to conduct experiments (training with ℓ T ). We leverage widely used inner operators to design heuristic structures with a fixed outer structure (using C MS and C SC ); the results are shown in Table 7.
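To make the search-strategy comparison above more concrete, the implicit architecture gradient used by IAS (the gradient expression reproduced in the formula list, with ℓ α playing the role of the upper-level task objective and ℓ the lower-level fusion objective) can be approximated without unrolling the inner optimization, in contrast to DARTS-style bi-level differentiation. The sketch below is an illustration of that update rule under assumed function and variable names, not the released search code.

```python
import torch

def _fill_none(grads, refs):
    # Replace None gradients (parameters unused by a loss) with zeros.
    return [torch.zeros_like(r) if g is None else g for g, r in zip(grads, refs)]

def implicit_arch_grad(upper_loss, lower_loss, alpha, theta):
    """One-step implicit approximation of the architecture gradient G_alpha.

    upper_loss: scalar task/validation objective (the role of ell_alpha),
    lower_loss: scalar fusion/training objective (the role of ell),
    alpha: list of architecture relaxation tensors, theta: list of network weights.
    """
    dU_da = _fill_none(torch.autograd.grad(upper_loss, alpha, retain_graph=True,
                                           allow_unused=True), alpha)
    dU_dt = torch.autograd.grad(upper_loss, theta, retain_graph=True)
    dL_da = _fill_none(torch.autograd.grad(lower_loss, alpha, retain_graph=True,
                                           allow_unused=True), alpha)
    dL_dt = torch.autograd.grad(lower_loss, theta, retain_graph=True)

    # Scalar coefficient <dL/dtheta, dU/dtheta> / <dL/dtheta, dL/dtheta>.
    num = sum((g1 * g2).sum() for g1, g2 in zip(dL_dt, dU_dt))
    den = sum((g * g).sum() for g in dL_dt) + 1e-12
    coeff = num / den

    # G_alpha = dU/dalpha - coeff * dL/dalpha, without second-order terms.
    return [gu - coeff * gl for gu, gl in zip(dU_da, dL_da)]
```

The hardware-latency regularization discussed in Table 8 would additionally enter the upper-level objective with weight λ, but its exact form is not spelled out in this section.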
Meta Initialization. The PMI strategy is proposed to obtain generalizable parameters for the fast adaptation of image fusion. We conduct experiments to verify the influence of the proposed training strategy and to discuss the optimal number of inner updates (i.e., K) for the initialization in Table 9. "w/o Initialization" denotes the version trained end-to-end directly on task-specific data. Evidently, a suitable K improves the final numerical performance significantly. When K = 4, we obtain the most comprehensive numerical results, benefiting from intrinsic fusion feature learning across multiple tasks and data distributions. Notably, increasing the number of inner updates does not always strengthen the performance. From this table, we can conclude that pretext meta initialization effectively learns the inherent fusion features, which improves the performance of image fusion remarkably. Furthermore, we plot the curves of the losses and the related fusion quality (measured by VIF) in Fig. 12. We can observe that the variant with PMI has a lower validation loss and converges faster to the stable stage than the "w/o PMI" version. On the other hand, our scheme with PMI quickly achieves the best VIF metric, which indicates robust visual quality. More importantly, we also demonstrate that, using only a small portion of task-specific training data, we can still obtain significant numerical results. As shown in Fig. 12, we also illustrate the results of PMI with different scales of training data. Dataset-L, Dataset-M and Dataset-S include 6195, 3097 and 1548 pairs of patches for IVIF, respectively. As shown in subfigure (b), PMI with the large dataset converges more slowly, while the variant with the small dataset cannot maintain a stable stage and tends to oscillate. Thus, considering the training efficiency and quality, we select 3097 pairs of patches to train the fusion network. To further evaluate the role of initialization, we compare with a direct-fusion version, which utilizes the original training strategy (only with ℓ F ) to generate fused images. The numerical comparison among four tasks is reported in Table 10. Obviously, task-oriented fusion can effectively improve the performance on different tasks. We can summarize that PMI benefits the task-guided fusion for both visual enhancement and semantic understanding." }, { "figure_ref": [], "heading": "CONCLUDING REMARKS", "publication_ref": [], "table_ref": [], "text": "In this paper, we developed a generic task-guided image fusion framework. Based on a constrained strategy, we realized a flexible learning paradigm that guides image fusion by incorporating information from downstream vision tasks. The implicit architecture search strategy was proposed to discover nimble and effective fusion networks. We also introduced the pretext meta initialization strategy to endow image fusion with fast adaptation to multiple fusion scenarios. Comprehensive qualitative and quantitative results on various visual enhancement and semantic understanding tasks demonstrate the superiority of our approach. Furthermore, the implicit search strategy is also applicable to architecture construction for other unsupervised vision tasks, and this constrained paradigm can be extended to other visual applications (e.g., image restoration and semantic understanding)."
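As a supplementary illustration of the pretext meta initialization examined in the ablation study (K inner updates of ℓ F on each fusion task or data distribution, followed by an outer update of the shared initialization ω), a first-order MAML-style sketch is given below. The first-order simplification, the choice of Adam as the outer optimizer, the loss signature and the task-sampling interface are assumptions made for readability; this is not the exact released procedure.

```python
import copy
import torch

def pretext_meta_init(model, tasks, fusion_loss, meta_steps=1000,
                      k_inner=4, inner_lr=1e-3, outer_lr=1e-4):
    """First-order, MAML-style sketch of pretext meta initialization (PMI).

    tasks: list of callables; tasks[i]() yields a batch (img_a, img_b) from the
           i-th fusion task / data distribution.
    fusion_loss(fused, img_a, img_b): unsupervised fusion objective (ell_F).
    All interfaces and hyper-parameters here are illustrative assumptions.
    """
    meta_params = list(model.parameters())
    outer_opt = torch.optim.Adam(meta_params, lr=outer_lr)

    for _ in range(meta_steps):
        outer_opt.zero_grad()
        for sample_task in tasks:
            # Adapt a copy of the shared initialization with K inner updates.
            adapted = copy.deepcopy(model)
            inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
            for _ in range(k_inner):
                img_a, img_b = sample_task()
                loss = fusion_loss(adapted(img_a, img_b), img_a, img_b)
                inner_opt.zero_grad()
                loss.backward()
                inner_opt.step()
            # First-order outer signal: the gradient of ell_F after adaptation is
            # copied back onto the shared initialization (accumulated over tasks).
            img_a, img_b = sample_task()
            meta_loss = fusion_loss(adapted(img_a, img_b), img_a, img_b)
            grads = torch.autograd.grad(meta_loss, list(adapted.parameters()))
            for p, g in zip(meta_params, grads):
                p.grad = g.detach() if p.grad is None else p.grad + g.detach()
        outer_opt.step()
    return model
```

The returned parameters would then serve as θ 0 F for the task-specific fine-tuning stage described earlier.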
}, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work is partially supported by the National Key R&D Program of China (2020YFB1313503), the National Natural Science Foundation of China (Nos. U22B2052 and 61922019), the Fundamental Research Funds for the Central Universities and the Major Key Project of PCL (PCL2021A12)." } ]
2023-05-25
[ { "authors": "J Liu; X Fan; Z Huang; G Wu; R Liu; W Zhong; Z Luo", "journal": "", "ref_id": "b0", "title": "Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection", "year": "2022" }, { "authors": "R Liu; J Liu; Z Jiang; X Fan; Z Luo", "journal": "IEEE Transactions on Image Processing", "ref_id": "b1", "title": "A bilevel integrated model with data-driven layer ensemble for multi-modality image fusion", "year": "2020" }, { "authors": "J Liu; G Wu; J Luan; Z Jiang; R Liu; X Fan", "journal": "Information Fusion", "ref_id": "b2", "title": "Holoco: Holistic and local contrastive learning network for multi-exposure image fusion", "year": "2023" }, { "authors": "Z Jiang; Z Zhang; X Fan; R Liu", "journal": "", "ref_id": "b3", "title": "Towards all weather and unobstructed multi-spectral image stitching: Algorithm and benchmark", "year": "2022" }, { "authors": "T Ma; L Ma; X Fan; Z Luo; R Liu", "journal": "", "ref_id": "b4", "title": "Pia: Parallel architecture with illumination allocator for joint enhancement and detection in low-light", "year": "2022" }, { "authors": "K Takumi; K Watanabe; Q Ha; A Tejero-De-Pablos; Y Ushiku; T Harada", "journal": "ACM MM", "ref_id": "b5", "title": "Multispectral object detection for autonomous vehicles", "year": "2017" }, { "authors": "W Zhou; S Dong; C Xu; Y Qian", "journal": "AAAI", "ref_id": "b6", "title": "Edge-aware guidance fusion network for rgb thermal scene parsing", "year": "2021" }, { "authors": "W Zhou; J Liu; J Lei; L Yu; J.-N Hwang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b7", "title": "Gmnet: gradedfeature multilabel-learning network for rgb-thermal urban scene semantic segmentation", "year": "2021" }, { "authors": "H Li; X.-J Wu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b8", "title": "Densefuse: A fusion approach to infrared and visible images", "year": "2018" }, { "authors": "Z Zhao; S Xu; J Zhang; C Liang; C Zhang; J Liu", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b9", "title": "Efficient and model-based infrared and visible image fusion via algorithm unrolling", "year": "2021" }, { "authors": "Z Zhao; S Xu; C Zhang; J Liu; J Zhang; P Li", "journal": "", "ref_id": "b10", "title": "Didfuse: Deep image decomposition for infrared and visible image fusion", "year": "2020" }, { "authors": "H Xu; J Ma; Z Le; J Jiang; X Guo", "journal": "AAAI", "ref_id": "b11", "title": "Fusiondn: A unified densely connected network for image fusion", "year": "2020" }, { "authors": "H Zhang; J Ma", "journal": "International Journal of Computer Vision", "ref_id": "b12", "title": "Sdnet: A versatile squeeze-anddecomposition network for real-time image fusion", "year": "2021" }, { "authors": "H Xu; J Ma; J Jiang; X Guo; H Ling", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b13", "title": "U2fusion: A unified unsupervised image fusion network", "year": "2020" }, { "authors": "Z Zhao; H Bai; J Zhang; Y Zhang; S Xu; Z Lin; R Timofte; L V Gool", "journal": "IEEE CVPR", "ref_id": "b14", "title": "Cddfuse: Correlation-driven dual-branch feature decomposition for multi-modality image fusion", "year": "2023" }, { "authors": "J Ma; W Yu; P Liang; C Li; J Jiang", "journal": "Information Fusion", "ref_id": "b15", "title": "Fusiongan: A generative adversarial network for infrared and visible image fusion", "year": "2019" }, { "authors": "J Ma; H Xu; J Jiang; X Mei; X.-P 
Zhang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b16", "title": "Ddcgan: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion", "year": "2020" }, { "authors": "H Li; X.-J Wu; T Durrani", "journal": "IEEE Transactions on Instrumentation and Measurement", "ref_id": "b17", "title": "Nestfuse: An infrared and visible image fusion architecture based on nest connection and spatial/channel attention models", "year": "2020" }, { "authors": "H Zhang; H Xu; Y Xiao; X Guo; J Ma", "journal": "AAAI", "ref_id": "b18", "title": "Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity", "year": "2020" }, { "authors": "H Liu; K Simonyan; Y Yang", "journal": "", "ref_id": "b19", "title": "Darts: Differentiable architecture search", "year": "2018" }, { "authors": "R Liu; Z Liu; J Liu; X Fan", "journal": "", "ref_id": "b20", "title": "Searching a hierarchically aggregated fusion architecture for fast multi-modality image fusion", "year": "2021" }, { "authors": "J Liu; Y Wu; Z Huang; R Liu; X Fan", "journal": "IEEE Signal Processing Letters", "ref_id": "b21", "title": "Smoa: Searching a modality-oriented architecture for infrared and visible image fusion", "year": "2021" }, { "authors": "J Liu; X Fan; J Jiang; R Liu; Z Luo", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b22", "title": "Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion", "year": "2021" }, { "authors": "T.-Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "", "ref_id": "b23", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "V S Petrovic; C S Xydeas", "journal": "IEEE Transactions on Image processing", "ref_id": "b24", "title": "Gradient-based multiresolution image fusion", "year": "2004" }, { "authors": "J J Lewis; R J O'callaghan; S G Nikolov; D R Bull; N Canagarajah", "journal": "Information fusion", "ref_id": "b25", "title": "Pixel-and region-based image fusion with complex wavelets", "year": "2007" }, { "authors": "A L Da Cunha; J Zhou; M N Do", "journal": "IEEE transactions on image processing", "ref_id": "b26", "title": "The nonsubsampled contourlet transform: theory, design, and applications", "year": "2006" }, { "authors": "G Bhatnagar; Q J Wu; Z Liu", "journal": "IEEE transactions on multimedia", "ref_id": "b27", "title": "Directive contrast based multimodal medical image fusion in nsct domain", "year": "2013" }, { "authors": "F Nencini; A Garzelli; S Baronti; L Alparone", "journal": "Information fusion", "ref_id": "b28", "title": "Remote sensing image fusion using the curvelet transform", "year": "2007" }, { "authors": "N Ahmed; T Natarajan; K R Rao", "journal": "IEEE transactions on Computers", "ref_id": "b29", "title": "Discrete cosine transform", "year": "1974" }, { "authors": "Y Liu; Z Wang", "journal": "IET Image Processing", "ref_id": "b30", "title": "Simultaneous image fusion and denoising with adaptive sparse representation", "year": "2014" }, { "authors": "C He; Q Liu; H Li; H Wang", "journal": "Procedia Engineering", "ref_id": "b31", "title": "Multimodal medical image fusion based on ihs and pca", "year": "2010" }, { "authors": "J Ma; C Chen; C Li; J Huang", "journal": "Information Fusion", "ref_id": "b32", "title": "Infrared and visible image fusion via gradient transfer and total variation minimization", "year": "2016" }, { 
"authors": "R Liu; Z Li; X Fan; C Zhao; H Huang; Z Luo", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b33", "title": "Learning deformable image registration from optimization: perspective, modules, bilevel training and beyond", "year": "2021" }, { "authors": "L Tang; J Yuan; J Ma", "journal": "Information Fusion", "ref_id": "b34", "title": "Image fusion in the loop of highlevel vision tasks: A semantic-aware real-time infrared and visible image fusion network", "year": "2022" }, { "authors": "Y Sun; B Cao; P Zhu; Q Hu", "journal": "", "ref_id": "b35", "title": "Detfusion: A detection-driven infrared and visible image fusion network", "year": "2022" }, { "authors": "R Liu; P Mu; X Yuan; S Zeng; J Zhang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b36", "title": "A general descent aggregation framework for gradient-based bi-level optimization", "year": "2022" }, { "authors": "R Liu; X Liu; X Yuan; S Zeng; J Zhang", "journal": "", "ref_id": "b37", "title": "A hessian-free interior-point method for non-convex bilevel optimization", "year": "2021" }, { "authors": "R Liu; X Liu; S Zeng; J Zhang; Y Zhang", "journal": "", "ref_id": "b38", "title": "Optimizationderived learning with essential convergence analysis of training and hyper-training", "year": "2022" }, { "authors": "R Liu; L Ma; X Yuan; S Zeng; J Zhang", "journal": "IEEE TIP", "ref_id": "b39", "title": "Task-oriented convex bilevel optimization with latent feasibility", "year": "2022" }, { "authors": "C Finn; P Abbeel; S Levine", "journal": "PMLR", "ref_id": "b40", "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "year": "2017" }, { "authors": "R Liu; Y Liu; S Zeng; J Zhang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b41", "title": "Towards gradient-based bilevel optimization with non-convex followers and beyond", "year": "2021" }, { "authors": "D Jin; L Ma; R Liu; X Fan", "journal": "", "ref_id": "b42", "title": "Bridging the gap between lowlight scenes: Bilevel learning for fast adaptation", "year": "2021" }, { "authors": "R Liu; L Ma; J Zhang; X Fan; Z Luo", "journal": "", "ref_id": "b43", "title": "Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement", "year": "2021" }, { "authors": "T.-Y Lin; P Goyal; R Girshick; K He; P Dollár", "journal": "", "ref_id": "b44", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "G Ghiasi; T.-Y Lin; Q V Le", "journal": "", "ref_id": "b45", "title": "Nas-fpn: Learning scalable feature pyramid architecture for object detection", "year": "2019" }, { "authors": "H Xu; L Yao; W Zhang; X Liang; Z Li", "journal": "", "ref_id": "b46", "title": "Auto-fpn: Automatic network architecture adaptation for object detection beyond classification", "year": "2019" }, { "authors": "N Wang; Y Gao; H Chen; P Wang; Z Tian; C Shen; Y Zhang", "journal": "International Journal of Computer Vision", "ref_id": "b47", "title": "Nas-fcos: Efficient search for object detection architectures", "year": "2021" }, { "authors": "T.-Y Ross; G Dollár", "journal": "", "ref_id": "b48", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Q Zhang; S Zhao; Y Luo; D Zhang; N Huang; J Han", "journal": "", "ref_id": "b49", "title": "Abmdrnet: Adaptive-weighted bi-directional modality difference reduction network for rgb-t semantic segmentation", "year": "2021" }, { "authors": "H Li; X.-J Wu; 
J Kittler", "journal": "Information Fusion", "ref_id": "b50", "title": "Rfn-nest: An end-to-end residual fusion network for infrared and visible images", "year": "2021" }, { "authors": "Z Huang; J Liu; X Fan; R Liu; W Zhong; Z Luo", "journal": "Springer", "ref_id": "b51", "title": "Reconet: Recurrent correction network for fast and efficient multi-modality image fusion", "year": "2022" }, { "authors": "W Di; L Jinyuan; F Xin; R Liu", "journal": "", "ref_id": "b52", "title": "Unsupervised misaligned infrared and visible image fusion via cross-modality image generation and registration", "year": "2022" }, { "authors": "G Balakrishnan; A Zhao; M R Sabuncu; J Guttag; A V Dalca", "journal": "", "ref_id": "b53", "title": "An unsupervised learning model for deformable medical image registration", "year": "2018" }, { "authors": "Y Liu; X Chen; R K Ward; Z J Wang", "journal": "IEEE Signal Processing Letters", "ref_id": "b54", "title": "Medical image fusion via convolutional sparsity based morphological component analysis", "year": "2019" }, { "authors": "L Yang; B Guo; W Ni", "journal": "Neurocomputing", "ref_id": "b55", "title": "Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform", "year": "2008" }, { "authors": "L Cao; L Jin; H Tao; G Li; Z Zhuang; Y Zhang", "journal": "IEEE signal processing letters", "ref_id": "b56", "title": "Multifocus image fusion based on spatial frequency in discrete cosine transform domain", "year": "2014" }, { "authors": "M Yin; X Liu; Y Liu; X Chen", "journal": "IEEE Transactions on Instrumentation and Measurement", "ref_id": "b57", "title": "Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain", "year": "2018" }, { "authors": "V Aslantas; E Bendes", "journal": "Aeuinternational Journal of electronics and communications", "ref_id": "b58", "title": "A new image quality metric for image fusion: The sum of the correlations of differences", "year": "2015" } ]
[ { "formula_coordinates": [ 3, 359.4, 226.61, 204.61, 14.69 ], "formula_id": "formula_0", "formula_text": "min θT ℓ T (N T (I F ; θ T )),(1)" }, { "formula_coordinates": [ 3, 384.62, 244.03, 179.39, 31.16 ], "formula_id": "formula_1", "formula_text": "I F = N F (I A , I B ; θ * F ), θ * F = arg min θF ℓ F (N F (I A , I B ; θ F )).(2)" }, { "formula_coordinates": [ 3, 313.2, 397.19, 140.05, 16.28 ], "formula_id": "formula_2", "formula_text": "∂ℓT ∂θT = ∂ℓT(θT;θT(θ * F )) ∂θT + G(θ T (θ * F ))" }, { "formula_coordinates": [ 3, 348.03, 666.14, 110.4, 12.39 ], "formula_id": "formula_3", "formula_text": "θ k F -∇ θF ℓ F (N F (I A , I B ; θ k F )" }, { "formula_coordinates": [ 3, 431.33, 725.8, 132.67, 9.88 ], "formula_id": "formula_4", "formula_text": "T (I F ; θ T )) + ηℓ F (N F (I A , I B ; θ F ))," }, { "formula_coordinates": [ 4, 347.61, 450.77, 216.39, 9.68 ], "formula_id": "formula_5", "formula_text": "G α = ∇ α ℓ α (α; θ) + ∇ θ ℓ α (α; θ)∇ α θ(α).(3)" }, { "formula_coordinates": [ 4, 312, 507.34, 252, 22.55 ], "formula_id": "formula_6", "formula_text": "∇ α θ(α) = -∇ 2 α,θ ℓ(α; θ)∇ 2 θ,θ ℓ(α; θ) -1 ." }, { "formula_coordinates": [ 4, 317.14, 608.12, 241.73, 24.8 ], "formula_id": "formula_7", "formula_text": "G α = ∇ α ℓ α (α; θ) - ∇ θ ℓ(α; θ) ⊤ ∇ θ ℓ α (α; θ) ∇ θ ℓ(α; θ) ⊤ ∇ θ ℓ(α; θ) ∇ α ℓ(α; θ)." }, { "formula_coordinates": [ 5, 74.83, 480.78, 225.17, 30.32 ], "formula_id": "formula_8", "formula_text": "θ 0 F = ω * , with ω * = arg min ω M i=1 f (ω; θ Fi (ω)),(5)" }, { "formula_coordinates": [ 5, 102.16, 734.4, 138.04, 14.11 ], "formula_id": "formula_9", "formula_text": "ω ← ω -∇ ω M i=1 f (ω; θ Fi (ω))." }, { "formula_coordinates": [ 5, 347.15, 195.58, 123, 9.88 ], "formula_id": "formula_10", "formula_text": "θ Fi ← ω -∇ ω ℓ F (N F (I A , I B ))." }, { "formula_coordinates": [ 5, 337.65, 215.72, 140.93, 14.11 ], "formula_id": "formula_11", "formula_text": "ω ← ω -∇ ω M i=1 f (ω; θ Fi (ω))." }, { "formula_coordinates": [ 5, 313.4, 263.27, 217.8, 23.93 ], "formula_id": "formula_12", "formula_text": ": θ * T , θ * F = arg min θT,θF ℓ T (N T (θ T )) + ηℓ F (N F (θ 0 F )). 16: return α F , θ * T and θ * F ." }, { "formula_coordinates": [ 6, 312, 229.08, 252, 22.96 ], "formula_id": "formula_13", "formula_text": "ℓ V int = ∥M A ⊗ (I F -I A )∥ 2 2 + ∥M B ⊗ (I F -I B )∥ 2 2 and ℓ V ssim = 1 -SSIM(M A ⊗ I A , M F ⊗ I A ) + 1 -SSIM(M B ⊗ I F , M B ⊗ I B )." }, { "formula_coordinates": [ 9, 360.93, 618.22, 86.41, 14.04 ], "formula_id": "formula_14", "formula_text": "I B = N R (I A , I ′ B ; θ R )." } ]
A Task-guided, Implicitly-searched and Metainitialized Deep Model for Image Fusion
Image fusion plays a key role in a variety of multi-sensor-based vision systems, especially for enhancing visual quality and/or extracting aggregated features for perception. However, most existing methods just consider image fusion as an individual task, thus ignoring its underlying relationship with these downstream vision problems. Furthermore, designing proper fusion architectures often requires huge engineering labor. It also lacks mechanisms to improve the flexibility and generalization ability of current fusion approaches. To mitigate these issues, we establish a Task-guided, Implicit-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario. Specifically, we first propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion. Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency. In addition, a pretext meta initialization technique is introduced to leverage divergence fusion data to support fast adaptation for different kinds of image fusion tasks. Qualitative and quantitative experimental results on different categories of image fusion problems and related downstream tasks (e.g., visual enhancement and semantic understanding) substantiate the flexibility and effectiveness of our TIM. The source code will be available at https://github.com/LiuZhu-CV/TIMFusion.
Risheng Liu; Zhu Liu; Jinyuan Liu; Zhongxuan Luo
[ { "figure_caption": "Fig. 1 .1Fig. 1. Schematic of the main components of the TIM scheme. We propose a task-guided image fusion, which introduces the task guidance for image fusion at (a). The concrete procedure of the implicit architecture search strategy for the fusion network construction is shown in (b). Pretext meta initialization to learn the inhere fusion principle for fast adaptation of fusion is shown at (c).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Qualitative comparison of our method with five state-of-the-arts fusion methods on RoadScene dataset.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Comprehensive analysis of the proposed scheme on fusion quality, computation efficiency, and parameters for infrared-visible image fusion. x-axis represents average run-time, testing by images with size of 448 × 620. Y-axis denotes the results of MI to reflect the information richness. Area of circles represents the parameter amounts.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Qualitative comparison of joint image registration and fusion.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Quantitative comparison results of the proposed method with six competitive methods on three medical image fusion tasks, i.e., MRI-CT, MRI-PET, MRI-SPECT fusion. X-axis denotes the fusion metrics. Y-axis represents the value of metric. The green triangle and orange line are the mean and medium value.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Object detection results based on image fusion compared with several state-of-the-art fusion methods.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Semantic segmentation results based on image fusion compared with several state-of-the-art fusion methods.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .Fig. 11 .1011Fig. 10. Comparison with DARTS and proposed IAS. (a) plots the objective losses. (b) depicts the performance stability of searched architectures by randomly searching 10 times.", "figure_data": "", "figure_id": "fig_8", "figure_label": "1011", "figure_type": "figure" }, { "figure_caption": "Fig. 12 .12Fig. 12. Illustration of the effectiveness of PMI with diverse scale of training data. The curves of losses and VIF metrics on the validation datasets are plotted at (a) and (b) respectively.", "figure_data": "", "figure_id": "fig_9", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "is with DUT-RU International School of Information Science & Engineering, Dalian University of Technology, Dalian, 116024, China and is also with Peng Cheng Laboratory, Shenzhen, 518055, China. (Corresponding author, e-mail: [email protected]).", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Task-Oriented Image Fusion Input: The search space O, loss functions ℓ F , ℓ T and other necessary hyper-parameters. 
Output: The optimal architectures and parameters.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Numerical results with representative image fusion methods on TNO dataset.", "figure_data": "MetricsDDcGAN RFN DenseFuse FGANDIDMFEIF SMOA TARDAL SDNet U2Fusion TIM w/o L TIM w/ LMI1.8612.1082.4022.1942.4392.6162.2732.7832.0921.9343.2853.416FMI0.7610.8090.8170.4010.7790.8170.8180.8120.8060.7960.8180.816VIF0.6930.8060.8020.6310.8280.8050.7320.8500.7530.7920.8610.884Q AB/F0.3550.3220.4160.2220.4060.4520.3690.4100.4130.4160.4580.456Parameters (M)1.0972.7330.0740.9250.3730.7050.2230.2970.0670.6591.2320.127FLOPs (G)896.84727.7448.96497.76 103.56 195.82 61.86982.3737.35366.34194.6335.39Time (s)0.2115.9120.2510.1240.1180.1418.0710.0010.0450.1230.1450.001TABLE 2Numerical comparison with representative image fusion methods on RoadScene dataset.MetricsDDcGAN RFN DenseFuse FGAN DID MFEIF SMOA TARDAL SDNet U2Fusion TIM w/o L TIM w/ LMI2.6312.8663.0932.859 3.1163.3033.0043.4803.4152.9173.6753.896FMI0.7800.7830.7880.774 0.7670.7920.7850.7690.7810.7770.7970.786VIF0.6190.7730.7920.614 0.8240.8110.7600.7790.8110.7650.8380.775Q AB/F0.3180.3060.5030.275 0.4810.4720.4610.4450.5150.5190.5260.430FLOPs (G)549.68432.1030.01301.96 63.48 120.0337.9250.4922.89224.5362.1021.69Time (s)0.4014.1150.0430.089 0.0770.0887.4210.0010.0410.1260.0280.001", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative comparison of joint image registration and fusion.", "figure_data": "Metrics DenseFuse AUIF DID SDNet U2Fusion TARDAL TIMMI2.9282.620 2.654 2.6222.2022.7962.954FMI0.7220.707 0.708 0.7110.7130.7100.719VIF0.5290.488 0.490 0.4340.4500.4900.560Q AB/F0.3020.284 0.290 0.2950.3010.2580.303", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Quantitative results of object detection on Multi-Spectral dataset.", "figure_data": "Methods Color Cone Car Stop Car Person Bike mAPVisible0.24540.4201 0.5782 0.4229 0.4539 0.4241Infrared0.23600.3671 0.5757 0.5596 0.4205 0.4318DDCGAN0.21050.3591 0.4629 0.3086 0.4790 0.3640RFN0.25870.4210 0.5258 0.4526 0.4470 0.4210DenseFuse0.26350.3957 0.5541 0.5185 0.4790 0.4422AUIF0.22590.3982 0.5318 0.4776 0.5138 0.4294DID0.23530.3855 0.5176 0.5050 0.4527 0.4192MFEIF0.25150.3999 0.5483 0.5300 0.4941 0.4447SMOA0.25920.4041 0.5561 0.5275 0.5289 0.4551TARDAL0.15040.2867 0.4724 0.5594 0.3075 0.3553SDNet0.23290.3857 0.5138 0.4470 0.4595 0.4078U2Fusion0.24060.4138 0.5629 0.5113 0.5396 0.4537TIM0.25760.4285 0.6253 0.5638 0.5080 0.4766results based on four metrics, MI, Entropy (EN), VIF andSum of Correlation of difference (SCD)", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Subsequently, we illustrate the visual comparisons in Fig.8. As shown in the first row,", "figure_data": "based on single inputs, generated by fusionnetworks. The RetinaNet is trained by fused images basedon simple average principle. Our framework, has shownsignificant improvements against fusion-based methods andsingle modality images. More specific, existing detectionschemes establish the training and testing on visible imagesdataset. Obviously, the network effectively detects visible-salient objects under the training of visible images. Incontrast, infrared imaging contains thermal information,which is benefit for the detection of car engines and humanbodies. 
However, this modality is insensitive for other weak-thermal objects such as bike and color cone. Comparedwith fusion-based methods, our method fully integratesthe complementary advantages, which achieves the bestprecision on person, car and stop.Qualitative Comparisons.", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Qualitative comparison on four groups of MRI-PET/SPECT images with various medical fusion algorithms.", "figure_data": "MI/SCD-/-2.122/1.3642.043/1.4812.066/1.3012.056/1.3112.225/1.3122.303/1.3642.660/1.680MI/SCD-/-2.058/1.3852.037/1.5412.049/0.3262.008/1.3302.407/1.3282.427/1.3852.739/1.712MRIPETASRCSMCACURVELETDTCWTNSCTPAPCNNTIMMI/SCD-/-1.598/1.3641.613/1.4811.685/1.3011.723/1.311 1.915/1.312 1.598/1.364 2.481/1.680MI/SCD-/-1.617/1.4461.657/1.6571.617/1.3821.667/1.3841.823/1.3751.617/1.4462.491/1.772MRISPECTASRCSMCACURVELETDTCWTNSCTPAPCNNTIMFig. 7.", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Evaluation of fusion performance of IAS with DARTS. MS and C SC ). The results are shown in Table.7. As for the effectiveness of operators, 3-DB obtains the highest numerical results under MI metric but has the second slowest inference time. Moreover, constrained by hardware latency and λ = 0.5, our scheme balances the inference time and performances. For the verification about effectiveness of trade-off parameter λ, we also provide two versions with different latency constraints. The results are reported in the Table.8. Obviously, the inference time and parameters are sensitive to the adjustment of λ. We can observe that with the increase of λ, the time is reduced with numerical performance.", "figure_data": "StrategyMIFMIVIFQ AB/FDARTS2.4140.8080.7590.416IAS3.2850.8180.8610.458operators to design heuristic structures with fixed outerstructure (using C", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "The performance of single operator-composited network on TNO.", "figure_data": "OperatorMIVIF Q AB/F Parameters (M) Time (s)3-DC3.130 0.8390.4120.9920.05855-DC3.087 0.8290.3830.9970.09633-RB3.207 0.8740.4220.9990.08215-RB3.398 0.8880.4581.0110.07743-DB3.465 0.8950.4280.9240.19515-DB3.415 0.8690.4280.9970.3622TABLE 8Effectiveness of hardware regularization on TNO.λ = 03.569 0.8940.4650.9240.1532λ = 0.5 3.438 0.8880.4410.9950.0769λ = 23.416 0.8160.4560.1270.0012", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Evaluation of update number K for pretext meta initialization.", "figure_data": "NumbersMIFMIVIFQ AB/Fw/o Initialization3.2150.8080.8490.443K = 23.2810.8100.8390.444K = 43.2850.8180.8610.458K = 63.0730.8050.7970.422K = 83.0930.8100.7950.434K = 103.1280.8090.8330.442", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Numerical comparison between image fusion with direct fusion and proposed task-guided fusion among four representative vision tasks.", "figure_data": "TaskVisual Enhancement IVIF MIFSemantic Understanding Detection SegmentationMetricsMIMImAPmIOUDirect Fusion 2.1411.8520.4470.575TIM3.2852.3590.4760.6851.81.61.41.210.80.60.40.250100150200250300350400450500", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work by Liu et al. provides a method for visual enhancement that serves as a foundational technique for image fusion in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The work by Liu et al. on visual enhancement serves as a methodological basis for the image fusion techniques discussed in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The work by Liu et al. on visual enhancement is further discussed in the citing paper to highlight the importance of the technique in image fusion."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The work by Liu et al. on visual enhancement is mentioned again in the citing paper to emphasize the significance of the technique in the field of image fusion."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The work by Liu et al. on visual enhancement is used as a reference in the citing paper to support the discussion on semantic understanding applications in image fusion."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The work by Liu et al. on visual enhancement is cited again in the citing paper to highlight the impact of the technique on the field of image fusion."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The work by Liu et al. on visual enhancement is mentioned in the citing paper to further discuss the application of the technique in the field of image fusion."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The work by Liu et al. on visual enhancement is cited in the citing paper to support the claim that the technique is a fundamental technique for visual perception in the field of image fusion."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work provides a method of end-to-end learning-based fusion schemes for diverse modalities, which the citing paper adopts in their research to improve fusion quality."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work introduces the use of modal gradients in end-to-end learning-based fusion schemes, which the citing paper utilizes to improve fusion quality in their research."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work provides a generative adversarial network for end-to-end learning-based fusion schemes, which the citing paper uses as a data source in their research to improve fusion quality."}, {"Category": "Extension or Continuation", "Citation": "[17]", "Explanation": "The cited work further extends the end-to-end learning-based fusion schemes by using generative adversarial networks, which the citing paper builds upon in their research to improve fusion quality in diverse modalities."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work introduces the use of dense blocks for feature extraction in image fusion methods, which the citing paper adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work introduces the use of multi-scale modules in image fusion methods, which the citing paper adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work introduces the use of spatial attention in image fusion methods, which the citing paper 
adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work introduces the use of feature decomposition in image fusion methods, which the citing paper adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces the use of dense connection in end-to-end learning for image fusion, which the citing paper adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work introduces the use of dense connection in end-to-end learning for image fusion, which the citing paper adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work introduces the use of residual modules in end-to-end learning for image fusion, which the citing paper adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work introduces the use of residual modules in end-to-end learning for image fusion, which the citing paper adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work introduces the use of differentiable architecture search in image fusion methods, which the citing paper adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work introduces the use of architecture search for image fusion in a large-scale dataset, which the citing paper adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work introduces the use of architecture search for image fusion in a large-scale dataset, which the citing paper adopts in their own research."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work, MS-COCO, is used as a large dataset to train the learnable modules in fusion methods. 
This is a methodological basis for the training of these methods, as it provides a specific dataset to work with."}, {"Category": "Methodological Basis", "Citation": "[25], [26]", "Explanation": "The cited works introduce the use of discrete wavelet transform for multi-modality fusion tasks, which the citing paper adopts in its research on image fusion."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work presents the contourlet transform as a method for multi-modality fusion tasks, which the citing paper may have used in its research on image fusion."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work introduces the non-subsampled transform as a method for multi-modality fusion tasks, which the citing paper may have used in its research on image fusion."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work presents the curvelet transform as a method for multi-modality fusion tasks, which the citing paper may have used in its research on image fusion."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work discusses the use of discrete cosine scheme for multi-modality fusion tasks, which the citing paper may have adopted in its research on image fusion."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work proposes an adaptive sparse representation to learn structural information from image patches, which the citing paper may have used in its research on image fusion."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work presents Principal Component Analysis (PCA) as a tool for subspace analysis-based fusion, which the citing paper may have used in its research on image fusion."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work discusses the use of total variation minimization for image fusion tasks, which the citing paper may have adopted in its research on image fusion."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work by Li et al. 
introduces a Densenet as an encoder and the \u2113 1 norm as rules for the image fusion task, which the citing paper adopts in their research to fully characterize the representative features and reconstruct the source images."}, {"Category": "Methodological Basis", "Citation": "[1], [17]", "Explanation": "The cited works on Generative Adversarial Networks (GAN) are used as a basis for the design of end-to-end fusion models in the citing paper, which aims to control the output of the fused image with thermal radiation in infrared images and gradient details in visible images through reinforcement from different discriminators."}, {"Category": "Methodological Basis", "Citation": "[16], [17]", "Explanation": "The cited works on generative adversarial schemes are utilized in the citing paper to transfer diverse modal distribution for the fused images, which is a key methodological element in the end-to-end fusion model design."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work, FusionDN, is used as a methodological basis for the citing paper in terms of image quality assessment and continuous learning techniques."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work, U2Fusion, is used as a methodological basis for the citing paper in terms of the squeeze with decomposition mechanism for image fusion tasks."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work, SDNet, is used as a data source for the citing paper in terms of the universal frameworks for image quality assessment and continuous learning in digital image fusion tasks."}, {"Category": "Extension or Continuation", "Citation": "[1]", "Explanation": "The cited work is an extension of the research on connecting image fusion with semantic understanding tasks, as the citing paper further investigates the inner task-guided relationship and efficient architecture construction in this area."}, {"Category": "Extension or Continuation", "Citation": "[35]", "Explanation": "The cited work is an extension of the research on image fusion with semantic understanding tasks, as the citing paper focuses on the investigation of the inner task-guided relationship in this area."}, {"Category": "Extension or Continuation", "Citation": "[36]", "Explanation": "The cited work is an extension of the research on image fusion with semantic understanding tasks, as the citing paper highlights the need for a concrete fusion standard to define meaningful information in this area."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work provides a method for end-to-end training of a network to establish a mapping between multi-modal inputs and fused images, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work is a method for image fusion that the citing paper may have used as a reference or basis for their own research."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work is another method for image fusion that the citing paper may have used as a reference or basis for their research."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work is a method for end-to-end training of a network for image fusion that the citing paper may have used as a reference or basis for their research."}, {"Category": "Methodological Basis", "Citation": "[37], [38], [39], [40]", 
"Explanation": "The cited works provide a method for addressing the task-specific objective in image fusion using a stagewise procedure to aggregate task preference."}, {"Category": "Methodological Basis", "Citation": "[20], [21]", "Explanation": "The cited works introduce mainstream differentiable search strategies that the citing paper adopts to design the architecture of image fusion in a more efficient and effective manner."}, {"Category": "Extension or Continuation", "Citation": "[22]", "Explanation": "The cited work introduces differentiable relaxation that the citing paper builds upon to develop the implicit architecture search strategy for image fusion."}, {"Category": "Methodological Basis", "Citation": "( 2)", "Explanation": "The cited work provides the definition of a fusion task, which the citing paper adopts in their research to construct a pretext meta initialization constraint for image fusion-based vision optimization."}, {"Category": "Methodological Basis", "Citation": "(5)", "Explanation": "The cited work defines the pretext objective for image fusion constraint, which the citing paper uses to develop a hierarchical solving procedure for the optimization problem based on image fusion constraint."}, {"Category": "Extension or Continuation", "Citation": "[41], [42], [43]", "Explanation": "The cited works are extensions of the research on the solution of image fusion constraint, as the citing paper proposes a hierarchical solving procedure that builds upon the work of others in the field."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work provides a search space for image fusion that the citing paper adopts in constructing the architecture of the image fusion network N F ."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work provides a detailed analysis of the image fusion process, which the citing paper uses to guide the design of a suitable image fusion scheme that incorporates different characteristics."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work provides a method for estimating weight maps based on the shallow and deep-level features from the VGG network, which the citing paper adopts in the N F module to measure the information preservation."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work introduces a fusion rule that the citing paper uses to calculate the contrast ratio of each pixel in the source images, which is then used to estimate the final weight map for proportional information based on pixel distribution."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work provides a hybrid dataset that the citing paper utilizes in the search for a parallel fusion structure in the IVIF model."}, {"Category": "Methodological Basis", "Citation": "[44]", "Explanation": "The cited work introduces the feature distillation mechanism, which the citing paper adopts to improve the feature fusion in semantic perception tasks by concatenating features from other nodes in the graph cell."}, {"Category": "Data Source", "Citation": "[45]", "Explanation": "The cited work, RetinaNet, is the baseline scheme used in the citing paper to conduct experiments and comparisons."}, {"Category": "Extension or Continuation", "Citation": "[46], [47], [48]", "Explanation": "The cited works propose a series of NAS-based detection schemes that the citing paper extends by utilizing the feature distillation 
cell to fuse features progressively in a bottom-up manner."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work, the MultiSpectral dataset, is used in the experiments of the citing paper to test the performance of the proposed method in detecting objects in different spectral bands."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The cited work provides a comparison of existing RGB-T segmentation schemes, which the citing paper uses to develop a lightweight and efficient segmentation scheme based on image fusion and nested formulation."}, {"Category": "Supporting Evidence", "Citation": "[17]", "Explanation": "The cited work, DDcGAN, is used as a comparison in the study conducted in the citing paper to evaluate the performance of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[51]", "Explanation": "The cited work, RFN, is mentioned as a state-of-the-art competitor in the study conducted in the citing paper, indicating a continuation of research in the field of image fusion."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The cited work, DenseFuse, is compared to the proposed method in the study conducted in the citing paper, indicating a continuation of research in the field of image fusion."}, {"Category": "Extension or Continuation", "Citation": "[16]", "Explanation": "The cited work, FGAN, is used as a comparison in the study conducted in the citing paper to evaluate the performance of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[11]", "Explanation": "The cited work, DID, is compared to the proposed method in the study conducted in the citing paper, indicating a continuation of research in the field of image fusion."}, {"Category": "Extension or Continuation", "Citation": "[23]", "Explanation": "The cited work, MFEIF, is used as a comparison in the study conducted in the citing paper to evaluate the performance of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[22]", "Explanation": "The cited work, SMOA, is compared to the proposed method in the study conducted in the citing paper, indicating a continuation of research in the field of image fusion."}, {"Category": "Extension or Continuation", "Citation": "[1]", "Explanation": "The cited work, TARDAL, is used as a comparison in the study conducted in the citing paper to evaluate the performance of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[13]", "Explanation": "The cited work, SDNet, is used as a comparison in the study conducted in the citing paper to evaluate the performance of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The cited work, U2Fusion, is compared to the proposed method in the study conducted in the citing paper, indicating a continuation of research in the field of image fusion."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work provides a method for generating fusion results with artifacts and ghosts, which the citing paper builds upon to address the issue of misaligned image fusion."}, {"Category": "Extension or Continuation", "Citation": "[53]", "Explanation": "The cited work introduces a pretrained MRRN scheme for image fusion, which the citing paper extends to a more general and flexible formulation for addressing unregistered multispectral image fusion scenarios."}, {"Category": "Methodological 
Basis", "Citation": "[54]", "Explanation": "The cited work, VoxelMorph, serves as the basis for the fusion schemes used in the citing paper, providing a method for registering pairs of multispectral images."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work, ASR, is used as a basis for the method proposed in the citing paper to address the task of medical image fusion."}, {"Category": "Extension or Continuation", "Citation": "[55]", "Explanation": "The cited work, CSMCA, is extended in the citing paper to further improve the performance of medical image fusion."}, {"Category": "Extension or Continuation", "Citation": "[56]", "Explanation": "The cited work, CURVELET, is extended in the citing paper to address the task of medical image fusion in a more effective way."}, {"Category": "Extension or Continuation", "Citation": "[57]", "Explanation": "The cited work, DTCWT, is extended in the citing paper to provide a new method for medical image fusion."}, {"Category": "Extension or Continuation", "Citation": "[28]", "Explanation": "The cited work, NSCT, is extended in the citing paper to improve the performance of medical image fusion."}, {"Category": "Extension or Continuation", "Citation": "[58]", "Explanation": "The cited work, PAPCNN, is extended in the citing paper to provide a new method for medical image fusion."}, {"Category": "Supporting Evidence", "Citation": "[59]", "Explanation": "The cited work provides a method for objective evaluations of the quality of medical image fusion, which the citing paper utilizes to measure the amount of information remaining in fused images."}, {"Category": "Supporting Evidence", "Citation": "[49]", "Explanation": "The cited work, RetinaNet, is used as a method for object detection in the citing paper. The results obtained from the use of this method are presented in the form of a table, indicating that the method is effective in detecting objects in the given scenario."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work (DARTS) serves as the basis for the comparison of search strategies in the proposed ablation analysis, providing a benchmark for evaluating the performance of the proposed search method (IAS). The results demonstrate the effectiveness of the proposed method in achieving better performance and stability in the search process."}]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b0", "b5", "b6", "b7", "b3", "b8", "b9", "b2", "b10", "b11", "b12", "b13", "b14", "b15", "b11", "b13", "b16", "b17", "b3" ], "table_ref": [], "text": "Estimating the six degrees of freedom (DoF) pose of objects from a single RGB image remains a formidable task, primarily due to the presence of ambiguity induced by symmetric objects and occlusions. Symmetric objects exhibit identical visual appearance from multiple viewpoints, which makes it difficult to differentiate among their various orientations. On the other hand, occlusions arise when part of an object is obstructed from view by another object or its own structure, which can obscure crucial details necessary to determine the complete shape and orientation of the object. Pose ambiguity presents a unique challenge as it transforms the direct one-to-one correspondence between an image and its associated object pose into a complex one-to-many scenario. As a result, a single observed image might correspond to several potential poses, which leads to significant performance degradation for methods reliant on one-to-one correspondence. Despite extensive exploration in the object pose estimation literature [1][2][3][4][5], pose ambiguity remains a persisting and unresolved issue.\nIn the face of these challenges, recent advancements in pose regression have introduced the use of symmetry-aware annotations to improve pose estimation accuracy [1,[6][7][8]. These methods typically employ symmetry-aware and surrogate losses that can tackle the pose ambiguity problem. The applicability and efficacy of these losses, nevertheless, depend on the provision of symmetry annotations that characterize the symmetrical properties of objects. Although the acquisition of these annotations for objects with distinct symmetry is relatively straightforward, it becomes a considerably complex task for objects with complex shapes or under occlusion. An example is a texture-less cup, where the true orientation becomes ambiguous if the handle is not visible. Furthermore, real-world applications usually present partially observed environments, where objects may appear in various orientations and with degrees of occlusion. The manual labor and time required to annotate the symmetry of each object under such circumstances is impractical. As a result, despite the potential of symmetry-aware annotations in addressing pose ambiguity, securing these annotations, particularly in complex and partially observed environments, presents an obstacle in object pose estimation.\nSeveral contemporary studies have sought to circumvent the difficulties associated with obtaining symmetry annotations by reframing the original pose estimation problem as a density estimation problem. By treating 'equivalent poses' as a multi-modal distribution, methods such as Implicit- PDF [4], HyperPose-PDF [9], and SpyroPose [10] leverage deep neural networks (DNNs) to implicitly characterize the non-parametric density on the rotation manifold SO (3). This enables them to estimate poses without explicit reliance on symmetry annotations. While these advances are noteworthy, they also introduce new complexities. For instance, the computation of the maximum likelihood loss during training requires exhaustive sampling across the whole SO(3) space. Moreover, the accuracy of inference is dependent on the resolution of the grid search, which necessitates a significant amount of grid sampling. 
These computationally demanding approaches may not only restrict the achievable precision of pose estimation, but also present obstacles when extending to larger spaces such as SE(3) due to the substantial memory requirements. Despite the innovative strides made in utilizing implicit density representation for pose estimation, the computational intensity and resource demands associated with these methods present significant impediments for their practical applications.\nGround Truth\nRecognizing these challenges, the research community is pivoting towards diffusion models (DMs) [11][12][13][14]. These models have demonstrated promise in managing complex and multi-modal problems. Specifically, diffusion models are adept at efficiently tackling multi-modal distributions, an attribute that could be advantageous in addressing pose ambiguity issues. Their effectiveness lies in the iterative sampling process, which incorporates noise and enables a more focused exploration of the pose space while reducing computational demands. Moreover, diffusion models excel in terms of scalability, providing the capacity for managing larger spaces involving considerable parameters. This scalability is primarily due to the ability of these models to learn complex and high-dimensional distributions without the need for explicit density estimation. Given these attributes, diffusion models offer a promising solution to the computational intensity and resource demands for pose estimation tasks. In previous endeavors, the authors [15,16] applied the denoising diffusion probabilistic model (DDPM) [12] and score-based generative model (SGM) [14] to the SO(3) space, and achieved superior results in recovering unknown densities on the SO(3) rotation manifold. Meanwhile, other research efforts [17,18] have extended the application of diffusion models to the more complex SE(3) space. Unfortunately, it is crucial to note that these studies concentrated on the vector space, without extending their techniques to the image space or specifically applying them to object poses.\nIn light of the above motivations, in this paper, we introduce a novel approach that applies diffusion models to the SE(3) group for object pose estimation tasks, specifically aimed at addressing the pose ambiguity problem. This method draws its inspiration from the correlation observed between rotation and translation distributions, a phenomenon often resultant from the perspective effect inherent in image projection. We propose that by jointly estimating the distribution of rotation and translation on SE(3), we may secure more accurate and reliable results as shown in Fig. 1. To the best of our knowledge, this is the first work to apply diffusion models to SE(3) within the context of image space. To substantiate our approach, we have developed a new synthetic dataset, called SYMSOL-T, based on the original SYMSOL dataset [4]. SYMSOL-T enriches the original SYMSOL dataset by incorporating randomly sampled translations, thus providing a more rigorous testbed for evaluating the effectiveness of our method in capturing the joint density of object rotations and translations.\nFollowing the motivations discussed, we have extensively evaluated our SE(3) diffusion model using the synthetic SYMSOL-T dataset. The experimental results affirm the model's competence in handling SE(3), successfully addressing the pose ambiguity problem in 6D object pose estimation. 
Moreover, the diffusion model has proven effective in mitigating ambiguities introduced by image perspective effects. Importantly, the surrogate Stein score formulation we propose on SE(3) exhibits improved convergence in Langevin dynamics compared to the score calculated via automatic differentiation. This not only underscores the robustness of our formulation, but also demonstrates its potential to handle complex dynamics in object pose estimation tasks." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Lie Groups and Their Application in Pose Estimation", "publication_ref": [], "table_ref": [], "text": "A Lie group, represented as G, serves as a cornerstone in the mathematical framework for pose estimation, as it combines the concepts of a group and a smooth (or differentiable) manifold, i.e., a topological space that locally resembles a linear space. Importantly, following the group axioms, the composition operation, denoted as • : G × G → G, and the inversion map both exhibit smoothness with respect to the group structure. For simplicity, we denote the composition of two group elements X, Y ∈ G as X • Y = XY in the following content. Every Lie group G has an associated Lie algebra, denoted as g. A Lie group and its Lie algebra are related through the following mappings: Exp : g → G and Log : G → g. In the realm of pose estimation, two Lie groups are commonly used: SO(3) and SE(3). The Lie group SO(3), together with its associated Lie algebra so(3), represents rotations in Euclidean space. On the other hand, the Lie group SE(3) and its corresponding Lie algebra se(3) describe rigid-body transformations, which include both rotations and translations in Euclidean space. Such group structures provide the mathematical basis for analyzing and solving pose estimation problems, particularly six degrees of freedom (6DoF) pose estimation." }, { "figure_ref": [], "heading": "Parametrization of SE(3)", "publication_ref": [ "b18", "b17", "b16" ], "table_ref": [], "text": "The special Euclidean group SE(3) is a fundamental group for studying rigid-body motions, as it encompasses both rotations and translations in three-dimensional Euclidean space. While various parametrizations for SE(3) are presented in [19], in this work we consider two parametrizations.\nThe first type of parametrization separates the rotations R ∈ SO(3) and translations T ∈ R 3 to form a composite manifold ⟨R 3 , SO(3)⟩ with its Lie algebra ⟨R 3 , so(3)⟩, namely R 3 SO(3). It has the composition rule defined as (R 2 , T 2 )(R 1 , T 1 ) = (R 2 R 1 , T 2 + T 1 ). This parametrization is widely used in previous diffusion models on SE(3) owing to its simplicity [18,17], and it induces a separate diffusion process for R and T.\nAnother parametrization is the well-known SE(3). We denote its element in the Lie algebra as τ = (ρ, ϕ) ∈ se(3) and its corresponding group element as (R, T ) = (Exp(ϕ), J l (ϕ)ρ) ∈ SE(3), where J l is the left Jacobian on SO(3). The composition rule for the SE(3) parametrization is defined as (R 2 , T 2 )(R 1 , T 1 ) = (R 2 R 1 , T 2 + R 2 T 1 ). This interconnection between rotations and translations in SE(3) results in a diffusion process that mimics the intricate dynamics of rigid-body motion, enhancing the fidelity of the diffusion model."
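To make the difference between these two composition rules concrete, the following minimal sketch contrasts them on a pair of example poses. The helper names and sample values are illustrative assumptions of ours, not part of the paper's implementation; only the two rules themselves are taken from the text above.

```python
import jax.numpy as jnp

def rot_z(theta):
    """Rotation about the z-axis, used here as a simple stand-in for a general R in SO(3)."""
    c, s = jnp.cos(theta), jnp.sin(theta)
    return jnp.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def compose_r3_so3(R2, T2, R1, T1):
    # Composite-manifold rule: rotation and translation are composed independently.
    return R2 @ R1, T2 + T1

def compose_se3(R2, T2, R1, T1):
    # SE(3) rule: the second rotation also acts on the first translation.
    return R2 @ R1, T2 + R2 @ T1

R1, T1 = rot_z(0.3), jnp.array([1.0, 0.0, 0.0])
R2, T2 = rot_z(-1.2), jnp.array([0.0, 0.5, 0.0])

_, T_composite = compose_r3_so3(R2, T2, R1, T1)   # T2 + T1
_, T_se3 = compose_se3(R2, T2, R1, T1)            # T2 + R2 @ T1
print(T_composite, T_se3)                          # the translations differ in general
```

Because the SE(3) rule rotates the first translation by the second rotation, the translation update depends on the rotation, which is precisely the coupling that the SE(3) diffusion process is meant to capture.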
}, { "figure_ref": [], "heading": "Score-Based Generative Modeling", "publication_ref": [ "b10", "b19", "b10" ], "table_ref": [], "text": "The (Stein) score of a probability density p(x) is defined as ∇ x log p(x). Consider an i.i.d samples {x i ∈ R D } N i=1 generated by a data distribution p data (x). In score-based generative models (SGMs), a primary formulation of diffusion models, data is incrementally transformed into a simple known prior distribution (e.g., Gaussian distribution) in the forward process. Consider a sequence of increasing positive noise scales {σ i } L i=1 , with σ min = σ 1 < σ 2 < ... < σ L = σ max . σ min and σ max are small and large enough to approximate p σmin (x) to p data (x), and p σmax (x) to N (x; 0, σ 2 max I) respectively. The forward process is formulated with a perturbation kernel p σ (x|x) := N (x; x, σ 2 I) and p σ (x) := p data (x)p σ (x|x)dx. In NCSN [11], a network s θ (x, σ) is trained using a Denoising Score Matching objective [20] as follow:\nθ * = arg min θ L(θ; σ) ≜ 1 2 E pdata(x) E x∼N (x,σ 2 I) ∥s θ (x, σ) -∇ x log p σ (x|x)∥ 2 2\n(1)\nIdeally, the optimal score-based model s θ * (x, σ) matches ∇ x log p(x) almost everywhere for σ ∈ {σ i } L i=1 . As for generating samples, SGMs apply the iterative reverse process. In NCSN [11], they leverage the Langevin MCMC to run M steps to produce a sample for each p σi (x) sequentially:\nxm i = xm-1 i + ϵ i s θ * (x m-1 i , σ i ) + √ 2ϵ i z m i , m = 1, 2, ..., M,(2)\nwhere ϵ i > 0 is the step size, and z m i is standard normal. Overall, diffusion models, and particularly SGMs, provide a robust framework for handling complex data distributions, and they serve as the foundation for the denoising procedure of our methodology. 3 Preliminaries\nUrain et al. [17] R 3 SO(3) N R 3 × N SO(3) ✓ Score / Autograd R 3 SO(3) Yim et al. [18] R 3 SO(3) N R 3 × IG SO(3) ✗ Score / Autograd ⟨R 3 , so(3)⟩ Ours SE(3) N SE(3) △ Score / Closed Form SE(3)" }, { "figure_ref": [ "fig_1" ], "heading": "Comparative Analysis of Diffusion Models in Pose Estimation", "publication_ref": [ "b14", "b15", "b16", "b17", "b14", "b15", "b20", "b17", "b16" ], "table_ref": [], "text": "Diffusion models have been successfully employed for pose estimation tasks [15][16][17][18]. However, the selection of distribution and computation methods varies across different implementations, leading to differing outcomes and computational efficiencies. Fig. 2 provides a comparison of prior diffusion model approaches used in pose estimation along with ours. It highlights the distinctive groups, distributions, methods, as well as diffusion spaces each employs. Several earlier studies [15,16] proposed approaches operating within the SO(3) space and employs normal distributions on SO(3) [21] (denoted as IG SO(3) ). A significant limitation of IG SO( 3) is the lack of a closed form, which imposes computational constraints. Similarly, the method proposed by [18] operates in the tangent space of SE(3) parameterized with R 3 SO(3) (represented as r3so3). Its distribution lacks a closed form as well, posing challenges to its computational efficiency. On the other hand, the authors in [17] employed a joint Gaussian distribution in the R 3 and SO(3) spaces, which does possess a closed form, potentially enhancing computational efficiency. Unfortunately, this method operates within the R 3 SO(3) space and might not fully exploit the benefits offered by the SE(3) space." 
}, { "figure_ref": [ "fig_1" ], "heading": "The Benefits of", "publication_ref": [], "table_ref": [], "text": "SE(3) over R 3 SO(3) in Perspective-Affected Pose Estimation\nIn the realm of pose estimation, the effect of image perspective present a notable challenge. It intertwines rotation and translation in the image space, leading to the phenomenon of pose ambiguity. Fig. 2 (right) exemplifies this through four cubes, each of which appears similarly oriented but actually differs in rotation degrees, complicating model predictions for accurate rotation angles. The parametrizations of R 3 SO3 and SE(3) offer different approaches to dealing with this problem. Specifically, R 3 SO3 does not factor in the relationship between rotation and translation, whereas SE(3) actively incorporates it into its structure. As a result, it is reasonable to hypothesize that SE(3) might be more capable of mitigating performance degradation stemming from the image perspective effect. This potential advantage of SE(3), further elaborated in Section 2.2, presents an intriguing avenue for future exploration in pose estimation research." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [ "b21", "b22", "b16", "b23" ], "table_ref": [], "text": "To apply score-based generative modeling to a Lie group G, we first establish a perturbation kernel on G that conforms to the Gaussian distribution, as proposed by [22,23]. The kernel is expressed as:\np Σ (Y |X) := N G (Y ; X, Σ) ≜ 1 ζ(Σ) exp - 1 2 Log(X -1 Y ) ⊤ Σ -1 Log(X -1 Y ) ,(3)\nwhere Σ represents a covariance matrix filled with σ on the diagonal, ζ(Σ) denotes a normalizing constant, and X, Y ∈ G are group elements. The score on G corresponds to the gradient of the log-density of the data distribution relative to the group element Y . It can be formulated as follows:\n∇ Y log p Σ (Y |X) = -J -⊤ r (Log(X -1 Y ))Σ -1 Log(X -1 Y ).(4)\nThis term can be expressed in a closed form if the inverse of the right-Jacobian J -1 r on G also has a closed form. However, an alternative approach suggested by [17] would be to compute this term using automatic differentiation. By substituting Y with X, where X = XExp(z), z ∼ N (0, σ 2 i I), \n∇ Y log p σ ( X|X) = - 1 σ 2 J -⊤ r (z)z.(5)\nA score model s θ ( X, σ), can then be trained using the Denoising Score Matching objective in Eq. ( 1):\nθ * = arg min θ L(θ; σ) ≜ 1 2 E pdata(X) E X∼N G (X,Σ) s θ ( X, σ) -∇ X log p σ ( X|X) 2 2 (6)\nTo complete the process, we employ a variant of Langevin Dynamics [24], tailored to the Lie group context, as a means to generate a sample from a noise. The procedure can be expressed as follows:\nXi+1 = Xi Exp(ϵ i s θ ( Xi , σ i ) + √ 2ϵ i z i ), z i ∼ N (0, I).(7)" }, { "figure_ref": [], "heading": "Efficient Computation of Stein Score", "publication_ref": [ "b20", "b18" ], "table_ref": [], "text": "In Section 4.1, we introduced the application of score-based generative modeling to a Lie group G.\nThe score can be computed via either automatic differentiation or a closed-form expression. However, obtaining the closed-form score presents a challenge due to its dependency on the distribution selection. For instance, deriving the closed-form score induced by IG SO(3) [21] can be challenging. Moreover, the computation of the score relies on the availability of a closed-form expression for the Jacobian matrix on G. 
Even if such an expression is available, it may not necessarily offer computational savings compared to automatic differentiation. Therefore, we next discuss a simplification method of the Stein score under certain conditions for reducing computational costs on the Lie group.\nThe Stein score, induced by N G , can be expressed in closed-form if the Jacobian matrix on G is invertible and if the left and right Jacobian matrices conform to the relation described as follows:\nJ l (z) = J ⊤ r (z), J -1 l (z) = J -⊤ r (z),(8)\nwhere z ∈ g. As pointed out by [19], SO(3) exhibits this property, and the closed-form score on SO(3) can be simplified by demonstrating the following property, which holds on any G:\nJ l (z)z = z. (9\n)\nThe derivation is in the supplementary material. The score on SO(3) can then be reformulated as:\n∇ Y log p σ ( X|X) = - 1 σ 2 J -1 l (z)z = - 1 σ 2 z. (10\n)\nThis demonstrates that the score on SO(3) can be simplified to the sampled Gaussian noise z scaled by -1/σ 2 , thus eliminating the need for both automatic differentiation and Jacobian calculations.\nSimilarly, the score on R 3 SO(3) has a closed-form as its Jacobians satisfy the relations of Eq. ( 8):\nJ l (z) = (I, J l (ϕ)) = (I, J ⊤ r (ϕ)) = J ⊤ r (z),(11)\nwhere z = (T, ϕ) ∈ ⟨R 3 , so(3)⟩. This implies that the score on R 3 SO(3) can also be simplified to the formulation represented by Eq. ( 10)." }, { "figure_ref": [ "fig_3" ], "heading": "Surrogate Stein Score Calculation on SE(3)", "publication_ref": [ "b18", "b24" ], "table_ref": [], "text": "The preceding sections have explained how the selection of an appropriate Gaussian kernel enables the simplification of the score on SO(3) and R 3 SO(3) to sampled Gaussian noise. While this insight may suggest the feasibility of a similar simplification of the score calculation on SE(3), we demonstrate that SE(3) does not conform to the property outlined in Eq. ( 8). The inverse of the left-Jacobian on SE(3) at z = (ρ, ϕ) ∈ se( 3) is given by J\n-1 l (z) = J -1 l (ϕ) Z(ρ,ϕ) 0 J -1 l (ϕ) where Z(ρ, ϕ) = -J -1 l (ϕ)Q(ρ, ϕ)J -1 l (ϕ).\nThe full form of Q(ρ, ϕ) can be found in [19,25] and our supplementary material. By showing Q ⊤ (-ρ, -θ) = Q(ρ, θ), we derive the following inequality:\nJ -⊤ r (z) = (J -1 l (-z)) ⊤ = J -1 l (ρ) 0 Z(ρ,ϕ) J -1 l (ρ) ̸ = J -1 l (z). (12\n)\nThis inequality indicates the potential discrepancy between the score vector and the denoising direction due to the curvature of the manifold, which may impede the convergence of Langevin dynamics and necessitate additional denoising steps. To address this problem, we turn to higher-order approximation methods by breaking one Langevin dynamics step into multiple smaller sub-steps. Fig. 3 (right) illustrates this one-step denoising process on SE(2) from a noisy sample X = XExp(z) to its cleaned counterpart X, with contour lines representing the distance to X in 2D Euclidean space. We observe that increasing the number of sub-steps eventually leads the integral of those small transformations approches the inverse transformation of z. As a result, we propose substituting the true score in Eq. ( 5) with a surrogate score in our training objective of Eq. ( 6) on SE(3), defined as:\nsX ( X, σ) ≜ - 1 σ 2 z.(13)\nThe detailed training and sampling procedures are available in our supplemental material." }, { "figure_ref": [ "fig_3" ], "heading": "Proposed Framework", "publication_ref": [ "b11", "b10", "b25", "b26", "b11", "b10", "b27", "b28" ], "table_ref": [], "text": "Fig. 
3 (left) presents an overview of our proposed framework, comprising a multi-layer perceptron (MLP) cascaded with multiple MLP blocks. This structure is inspired by recent conditional generative models [12,11], while we modifies their approach by substituting linear layers for convolutional ones to condition image generation. The score model processes noisy poses as input and provides estimated scores as output, both represented in a vector form of corresponding lie algebra space of the poses. To extend the score model's applicability to the image domain, we utilize a ResNet [26] for extracting feature embeddings from input images and encode the time index i using positional embedding [27]. Our score models then condition on image and time features in an interleaved fashion. We found that a single repetition of the MLP block was sufficient in our experimental settings. This architecture enables separation of image feature extractors from our score models, obviating the need for forwarding in every denoising step and thereby significantly enhancing inference speed.\nRegarding the design of the conditioning mechanism, a few previous works [12,11] employ scale-bias condition, which is formulated as f (x, c) = A(c)x + B(c). Nevertheless, our empirical observations suggest that this conditioning mechanism does not perform well on learning distributions on SO(3).\nWe hypothesize that this is attributable to the lack of expressivity of the neural networks. Inspired by [28,29], we propose a new Fourier-based conditioning mechanism, which is formulated as follows:\nf i (x, c) = d-1 j=0 W ij (A j (c) cos(πx j ) + B j (c) sin(πx j )) (14\n)\nwhere d is the dimension of our linear layer. This form bears similarity to the Fourier series\nf (t) = ∞ k=0 A k cos 2πkt P + B k sin 2πkt P .\nOur motivation stems from the fact that the pose distribution on SO(3) is circular, and can therefore be represented as periodic functions. By the definition of periodic functions, their derivatives are also periodic. It is worth noting that this conditioning does not introduce additional parameters in our neural network design, as W ij is provided by the subsequent linear layer. Our experimental findings suggest that this conditioning scheme enhances the ability of neural network to capture periodic features of score fields on SO(3). " }, { "figure_ref": [], "heading": "Experimental Hypotheses and Validation Objectives", "publication_ref": [], "table_ref": [], "text": "Before diving into our experimental findings, we outline our hypotheses. This prepares the ground for understanding the significance of our method and its implications for 6D object pose estimation.\n1. Applicability of Score-based Diffusion Model in the Image Domain: Our first hypothesis postulates that the score-based diffusion model can be effectively applied in the image domain to address the pose ambiguity issue prevalent in 6D object pose estimation tasks.\n2. Advantage of SE(3) parametrization over R 3 SO3: We hypothesize that the SE(3) parametrization can offer a comprehensive representation of the joint distribution of object rotation and translation, thus providing an advantage over the R 3 SO(3) parametrization." 
}, { "figure_ref": [], "heading": "Mitigation of Perspective Effect Ambiguity through SE(3) parametrization:", "publication_ref": [], "table_ref": [], "text": "We posit that the SE(3) parametrization may alleviate the ambiguity introduced by image perspective effects, a significant challenge in accurate pose estimation, as described in Section 1." }, { "figure_ref": [], "heading": "Efficacy of Surrogate Stein Score Formulation on SE(3):", "publication_ref": [], "table_ref": [], "text": "We hypothesize that our surrogate Stein score formulation on SE(3) exhibits improved convergence in Langevin Dynamics compared to the original Stein score computed using automatic differentiation. This aims to confirm the robustness and efficacy of our proposed method amidst complex dynamics." }, { "figure_ref": [], "heading": "Experimental Setups", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Baselines", "publication_ref": [ "b3", "b2", "b3", "b8", "b29", "b14", "b15" ], "table_ref": [], "text": "In our experiments, we utilize two synthetic datasets: SYMSOL and SYMSOL-T. These datasets serve to examine the effectiveness of our density estimators in different spaces and to benchmark our performance against existing baselines. Additional details are offered in the supplementary material.\nSYMSOL. SYMSOL is a dataset specifically designed for evaluating density estimators in the SO(3) space. This dataset, first introduced by [4], comprises 250k images of five texture-less, symmetric 3D objects, with each subject to random rotations. The objects include tetrahedron (tet.), cube, icosahedron (icosa.), cone, and cylinder (cyl.), and each exhibits unique symmetries which introduce various degrees of pose ambiguity. For this dataset, we compare our score model on the SO(3) space with several recent works [3,4,9]. The baseline models we compare with utilize a pre-trained ResNet50 [30] as their backbones. For the purpose of comparison, we additionally include prior diffusion models [15,16] in our evaluation within the SO(3) context of our framework. Please note that our work is the first to pioneer the exploration of diffusion models on the SYMSOL dataset." }, { "figure_ref": [], "heading": "SYMSOL-T.", "publication_ref": [], "table_ref": [], "text": "To extend our evaluation into the SE(3) space, we developed the SYMSOL-T dataset, which is derived from SYMSOL. This new dataset further incorporate random translations, and therefore introduces an additional layer of complexity due to perspective-induced ambiguity. Similar to SYMSOL, it features the same five symmetric shapes and the same number of random samples. For SYMSOL-T, we benchmark our proposed methods against two pose regression methods. These two methods are trained using a symmetry-aware loss function, but with different estimation strategies: one directly estimates the pose from an image, while the other employs iterative refinement." }, { "figure_ref": [], "heading": "Quantitative Results on SYMSOL", "publication_ref": [ "b3", "b8" ], "table_ref": [ "tab_1" ], "text": "In this section, we present the quantitative results evaluated on SYMSOL, and compare our diffusionbased methods with non-parametric ones. We assess the performance of our score model on SO(3) across various shapes using both ResNet34 and ResNet50 as the backbones, with the results reported in Table 1. Our model demonstrates promising performance, consistently surpassing the contemporary non-parametric baseline models. 
It is observed that our model, even when based on the less complex ResNet34 backbone, is still able to achieve results that exceed those of the other baselines using the more complex ResNet50 backbone. The average angular errors are consistently below 1 degree across all shape categories. The performance further improves when employing ResNet50, which emphasizes the potential robustness and scalability of using diffusion models for addressing the pose ambiguity problem. However, it is important to observe that our model with ResNet50 exhibits a slightly reduced performance for the cone shape compared to the ResNet34 variant. This discrepancy can be attributed to our practice of training a single model across all shapes, a strategy that parallels those adopted by Implicit-PDF [4] and HyperPosePDF [9]. Such an approach may lead to mutual influences among shapes with diverse pose distributions, and potentially compromise optimal performance for certain shapes. This observation highlights opportunities for future improvements to our model, specifically in enhancing its ability to effectively learn from data spanning various domains. The endeavors would potentially address the diverse complexities associated with distinct shape and pose characteristics." }, { "figure_ref": [], "heading": "Quantitative Results on SYMSOL-T", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "In this section, we report the quantitative results obtained from the SYMSOL-T dataset evaluation, as shown in Table 2. The results reveal that our SE(3) and R 3 SO(3) score models outperform the pose regression and iterative regression baselines in terms of estimation accuracy. However, the R 3 SO(3) score model encounters difficulty when learning the distribution of the icosahedron shape. In contrast, our SE(3) score model excels in estimating rotation across all shapes and achieves competitive results in translation compared to the R 3 SO(3) score model, thus demonstrating its ability to model the joint distribution of rotation and translation. Please note that the SE(3) and R 3 SO(3) score models do not rely on symmetry annotations, which distinguish them from the pose regression and iterative regression baselines that leverage symmetry supervision. This supports our initial hypothesis that score models are capable of addressing the pose ambiguity problem in the image domain. In the comparison between the R 3 SO(3) score model and iterative regression, both models employ iterative refinement. However, our R 3 SO(3) score model consistently outperforms iterative regression on tetrahedron, cube, cone, and cylinder shapes. The key difference is that iterative regression focuses on minimizing pose errors without explicitly learning the underlying true distributions. In contrast, our R 3 SO(3) score model captures different scales of noise, enabling it to learn the true distribution of pose uncertainty and achieve more accurate results. Regarding translation performance, the R 3 SO(3) score model takes the lead over the SE(3) score model. The former's performance can be credited to its assumption of independence between rotation and translation, which effectively eliminates mutual interference. On the other hand, the SE(3) score model learns the joint distribution of rotation and translation, which leads to more robust rotation estimations. The observations therefore support our second hypothesis that SE(3) can provide a more comprehensive pose estimation than R 3 SO(3)." 
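The rotation and translation errors reported above follow the evaluation metrics described in the supplementary material: the minimum geodesic angle over the set of ground-truth equivalent rotations, and the Euclidean distance between translations. A minimal sketch of these metrics is given below; the function names are ours and serve only as an illustration.

```python
import jax.numpy as jnp

def geodesic_angle_deg(R_est, R_gt):
    """Geodesic distance (in degrees) between two rotation matrices."""
    cos = 0.5 * (jnp.trace(R_est.T @ R_gt) - 1.0)
    return jnp.degrees(jnp.arccos(jnp.clip(cos, -1.0, 1.0)))

def min_angular_error_deg(R_est, R_gt_equivalents):
    """Minimum angular distance to any ground-truth-equivalent rotation (SYMSOL-style metric)."""
    return jnp.min(jnp.stack([geodesic_angle_deg(R_est, R) for R in R_gt_equivalents]))

def translation_error(t_est, t_gt):
    """Euclidean distance between estimated and ground-truth translations (SYMSOL-T metric)."""
    return jnp.linalg.norm(t_est - t_gt)
```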
}, { "figure_ref": [ "fig_4" ], "heading": "Analysis of SE(3) and R 3 SO(3) in the Presence of Image Perspective Ambiguity", "publication_ref": [], "table_ref": [], "text": "To delve deeper into the effects of image perspective on our pose estimation methods, we additionally synthesized three variants of the SYMSOL-T dataset: Uniform, Edge, and Centered. The Uniform variant consists of uniformly sampled translations, the Edge variant includes translations at the maximum distance from the center, and the Centered variant comprises zero translations. Fig. 4 showcases a comparison of the evaluation results for these three variants. We present the distributions of angular errors made by the SE(3) and R 3 SO(3) score models on these dataset variants and four shapes: tetrahedron, cube, cone, and cylinder. These distributions of angular errors depict the uncertainty of the pose estimations. In line with our hypothesis, the Edge variant, which is most influenced by image perspective, exhibits greater uncertainty compared to the Centered variant. The Uniform variant situates itself between these two. It is evident that both the R 3 SO3 and SE(3) score models demonstrate higher uncertainty on the Edge dataset across all shapes, with reduced uncertainty on the Centered dataset. The SE(3) score model demonstrates an impressive ability to counter the pose ambiguity introduced by image perspective, a capability that becomes evident when compared with the R 3 SO3 score model. The observation therefore confirms our third hypothesis that SE(3) does exhibit greater robustness to the ambiguity caused by the image perspective issue." }, { "figure_ref": [], "heading": "Performance Analysis: Surrogate Score versus Automatically Differentiated True Score", "publication_ref": [ "b14", "b15", "b14", "b11", "b15", "b10" ], "table_ref": [ "tab_3", "tab_4" ], "text": "To test our hypothesis concerning convergence speed, we compare two versions of our score model. The first version, termed SE(3)-surrogate, is trained with the surrogate score described in Eq. ( 13).\nThe second version, termed as SE(3)-autograd, is trained with the true score described in Eq. ( 5) and calculated by automatic differentiation. We trained both estimators and evaluated their performance using different steps of Langevin dynamics. The results are reported in Table 3. Our findings show that when a larger number of Langevin steps (e.g., 100 steps) are used, both score models produce comparable results. However, the performance of SE(3)-autograd significantly declines in comparison to SE(3)-surrogate when the number of sampling steps decreases from 50 to 10 and then to 5. This performance drop is due to the curved manifold represented by the SE(3) parametrization, which can result in the score vector not consistently pointing towards the noise-free data. These results substantiate our fourth hypothesis, and suggest that the application of the surrogate score can lead to faster convergence than the use of the true score calculated through automatic differentiation. In this experiment, we further compare our SO(3) score model with the diffusion models proposed by [15] and [16] using the SYMSOL dataset. While these studies do not specifically address object pose estimation, we have adapted their methods to fit within our framework. The authors of [15] extend the DDPM [12] to SO(3) using an analogy approach, while the authors of [16] reformulate the SGM [11] to apply it to the SO(3) space. The results of these comparisons are presented in Table 4. 
Our analysis shows that, when excluding Fourier-based conditioning, our diffusion models achieve comparable results across different shapes. However, when Fourier-based conditioning is incorporated, performance significantly improves for three out of the five shapes. This suggests that Fourier-based conditioning enhances the our model's ability to learn pose distributions." }, { "figure_ref": [], "heading": "Comparison with Other Diffusion Models", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we presented a novel approach that applies diffusion models to the SE(3) group for object pose estimation, effectively addressing the pose ambiguity issue. Inspired by the correlation between rotation and translation distributions caused by image projection effects, we jointly estimated their distributions on SE(3) for improved accuracy. This is the first work to apply diffusion models to SE(3) in the image domain. To validate it, we developed the SYMSOL-T dataset, which enriches the original SYMSOL dataset with randomly sampled translations. Our experiments confirmed the applicability of score-based diffusion models in the image domain, the advantage of SE(3) parametrization over R 3 SO(3), and the mitigation of perspective effect ambiguity through SE(3) parametrization. Moreover, our surrogate Stein score formulation on SE(3) exhibited improved convergence in Langevin Dynamics, validating its robustness and efficacy in complex dynamics.\nconnect, which gives rise to tilde shapes on the sphere. In the case of R 3 , a single circle is present due to the unique solution for the translation. The samples generated from our score model are tightly concentrated in the center of each circle. This evidence highlights the capability of our model to accurately capture equivalent object poses originating from either discrete or continuous symmetries." }, { "figure_ref": [], "heading": "B Proofs", "publication_ref": [ "b7", "b18" ], "table_ref": [], "text": "B.1 Closed-form of Stein scores\nIn this section, we present the derivation of the closed-form solution for the Stein scores. We begin with a revisitation of the Gaussian distribution on the Lie group G, which is formulated as follows:\np Σ (Y |X) := N G (Y ; X, Σ) ≜ 1 ζ(Σ) exp - 1 2 Log(X -1 Y ) ⊤ Σ -1 Log(X -1 Y ) .(16)\nTo derive Eq. ( 4), we utilize the definition of Stein scores, which is defined as the derivative of log-density of the data distribution with respect to the group element Y ∈ G, expressed as follows:\n∇ Y log p Σ (Y |X) ⊤ = ∂ ∂Y - 1 2 Log(X -1 Y ) ⊤ Σ -1 Log(X -1 Y ) = ∂ ∂Log(X -1 Y ) - 1 2 Log(X -1 Y ) ⊤ Σ -1 Log(X -1 Y ) ∂Log(X -1 Y ) ∂Y = -Log(X -1 Y ) ⊤ Σ -1 ∂Log(X -1 Y ) ∂(X -1 Y ) • ∂(X -1 Y ) ∂Y = -Log(X -1 Y ) ⊤ Σ -1 J -1 r (Log(X -1 Y )) • I = -Log(X -1 Y ) ⊤ Σ -1 J -1 r (Log(X -1 Y )).(17)\nBased on the above derivation, the closed-form solution for the Stein scores is obtained as follows:\n∇ Y log p Σ (Y |X) = -J -⊤ r (Log(X -1 Y ))Σ -1 Log(X -1 Y ).(18)\nB.2 Left and Right Jacobians on SO(3)\nIn this section, we present the derivation of Eq. (8). Let z = [z x , z y , z z ] ∈ so(3) and ϕ = ∥z∥ 2 2 . 
The skew-symmetric matrix induced by z can therefore be represented as follows:\nz × = 0 -z z z y z z 0 -z x -z y z x 0(19)\nAs demonstrated in [19], the left and the right Jacobian on SO(3) can be expressed as the following closed-form expressions:\nJ r (z) = I - 1 -cos ϕ ϕ 2 z × + ϕ -sin ϕ ϕ 3 z 2 × J -1 r (z) = I + 1 2 z × + 1 ϕ - 1 + cos ϕ 2ϕ sin ϕ z 2 × J l (z) = I + 1 -cos ϕ ϕ 2 z × + ϕ -sin ϕ ϕ 3 z 2 × J -1 l (z) = I - 1 2 z × + 1 ϕ - 1 + cos ϕ 2ϕ sin ϕ z 2 × .(20)\nAs a result, Eq. ( 8) of the main manuscript can be derived as follow:\nJ l (z) = J ⊤ r (z), J -1 l (z) = J -⊤ r (z).(21)" }, { "figure_ref": [], "heading": "B.3 Eigenvector of The Jacobians", "publication_ref": [], "table_ref": [], "text": "For the purpose of proving Eq. ( 9), we consider the derivative of exponential mapping on G, where k ∈ R and z ∈ g. More specifically, by applying the chain rule on the derivative of the small perturbation Exp(kz) on G with respect to k, we can obtain the resultant equation as follows:\n∂Exp(kz) ∂k = ∂Exp(kz) ∂(kz) ∂(kz) ∂k = J l (kz)z.(22)\nOn the other hand, by applying the differential rule, the following equations can be derived:\n∂Exp(kz) ∂k = lim h→0 Log(Exp((k + h)z)Exp(kz) -1 ) k = lim h→0 Log(Exp(hz)Exp(kz)Exp(kz) -1 ) h = z.(23)\nBy further combining Eqs. ( 22) and ( 23) and setting k = 1, the following equation can be derived:\nJ l (z)z = z.(24)\nThe resultant Eq. ( 24) suggests that z is an eigenvector of J l (z). Please note that the same rule can also be employed to provide a proof for the right-Jacobian as follows:\nJ r (z)z = z." }, { "figure_ref": [], "heading": "B.4 Closed-Form of Stein Scores on SE(3)", "publication_ref": [ "b18", "b24" ], "table_ref": [], "text": "In this section, we delve into the closed-form solution of Stein scores on SE(3), which is referenced in Section 4.3. Let z = (ρ, ϕ) ∈ se(3), where ρ represents the translational part and ϕ denotes the rotational part. We define φ = ∥ϕ∥ 2 2 and recall the inverse of the left-Jacobian on SE(3) as follows:\nJ -1 l (z) = J -1 l (ϕ) Z(ρ, ϕ) 0 J -1 l (ϕ) ,(26)\nwhere [19,25] as:\nZ(ρ, ϕ) = -J -1 l (ϕ)Q(ρ, ϕ)J -1 l (ϕ). The complete form of Q(ρ, ϕ) is defined in\nQ(ρ, ϕ) = 1 2 ρ × + φ -sin φ φ3 (ϕ × ρ × + ρ × ϕ × + ϕ × ρ × ϕ × ) - 1 - φ2 2 -cos φ φ4 (ϕ 2 × ρ × + ρ × ϕ 2 × -3ϕ × ρ × ϕ × ) - 1 2 1 - φ2 2 -cos φ φ4 -3 φ -sin φ - φ3 6 φ5 (ϕ × ρ × ϕ 2 × + ϕ 2 × ρ × ϕ × ) .(27)\nFrom the Eq. ( 27), an essential property can be observed and expressed as follows:\nQ ⊤ (-ρ, -ϕ) = Q(ρ, ϕ).(28\n) Based on the above derivation, the closed-form expression of the inverse transposed right-Jacobian on SE(3) combined with the property outlined in Eq. ( 28) can be derived as follows:\nJ -⊤ r (z) = J -1 l (-z) ⊤ = J -1 l (-ϕ) Z(-ρ, -ϕ) 0 J -1 l (-ϕ) ⊤ = J -1 r (ϕ) -J -1 r (ϕ)Q(-ρ, -ϕ)J -1 r (ϕ) 0 J -1 r (ϕ) ⊤ = J -⊤ r (ϕ) 0 -J -⊤ r (ϕ)Q ⊤ (-ρ, -ϕ)J -⊤ r (ϕ) J -⊤ r (ϕ) = J -1 l (ϕ) 0 -J -1 l (ϕ)Q(ρ, ϕ)J -1 l (ϕ) J -1 l (ϕ) = J -1 l (ϕ) 0 Z(ρ, ϕ) J -1 l (ϕ).(29)\nThe closed-form solution of Stein score on SE(3) can then be computed by the definition of Stein score as follows:\n∇ Y log p σ ( X|X) = - 1 σ 2 J -1 l (ϕ) 0 Z(ρ, ϕ) J -1 l (ϕ) z.(30)\nAfter examining the derivation process, it is clear that this computation involves the costly calculation of Jacobians, and does not confer any computational benefits when using automatic differentiation. However, by adopting the surrogate score presented in Eq. 
( 13), it is possible to reduce the computation of the Jacobian J -⊤ r (z), while simultaneously improving performance, as explained in Section 5.6." }, { "figure_ref": [], "heading": "C Related Works C.1 Methodologies for Dealing with Pose Ambiguity Issues", "publication_ref": [ "b33", "b34", "b35", "b6", "b36", "b7", "b37", "b38", "b5", "b39", "b40", "b41", "b42", "b2", "b43", "b0", "b3", "b8", "b44", "b9" ], "table_ref": [], "text": "Pose ambiguity remains a significant challenge within the realm of object pose estimation. A variety of strategies have been employed in the literature to explicitly tackle this issue, ranging from the application of symmetry supervisions to the use of surrogate losses [34,35]. Recent regression-based techniques, such as those presented in [36,7,37,8], strive to minimize the pose discrepancy by seeking the closest candidate within a set of ambiguous poses. Other methods, such as [38,39], apply constraints to the regression targets, particularly for rotation angles, to mitigate ambiguity. In addition, there are approaches like [6,40,41] that propose regressing to a predetermined set of geometric features derived from symmetry annotations. Although these previous methods are able to effectively handle pose ambiguity arising from symmetric objects, they often necessitate manual labeling of equivalent poses.\nOn the other hand, several studies have investigated methods to model the inherent uncertainty in pose ambiguity. This involves the quantification and representation of uncertainty associated with the estimated poses. Some works have employed parametric distributions such as Bingham distributions [42,43,3] and von-Mises distributions [44] to model the orientation uncertainty. There are also approaches, such as in [1], that estimate a Bingham distribution using an ensemble of pose hypotheses. A number of studies [4,9,45,10] have opted to employ non-parametric distributions to implicitly represent rotation uncertainty densities on SO(3). While these methods do not require symmetry annotations, they typically consider rotations and translations as independent factors." }, { "figure_ref": [], "heading": "C.2 Previous Diffusion Probabilistic Models and Their Application Domains", "publication_ref": [ "b45", "b11", "b12", "b13", "b10", "b46", "b47", "b48", "b49", "b50", "b51", "b52", "b53", "b54", "b55", "b56", "b57", "b58", "b59", "b60", "b61", "b62", "b23", "b63", "b15", "b14", "b11", "b13", "b10", "b16", "b17" ], "table_ref": [], "text": "Diffusion probabilistic models [46,[12][13][14]11] represent a class of generative models designed to learn the underlying probability distribution of data. They have been applied to various generative tasks, and have shown impressive results in several application domains, including image [47][48][49][50], video [51][52][53], audio [54,55], and natural language processing [56,57]. Recently, diffusion models have found broader applications in discriminative tasks [58], such as semantic segmentation [59,60] and object detection [61]. In the realm of human pose estimation, they have been useful in addressing joint location ambiguity stemming from the projection of 2D keypoints into 3D space [62,63]. Despite the presence of similar challenges in object pose estimation, this area remains relatively unexplored.\nWhile the aforementioned diffusion models primarily operate in Euclidean space, several authors have extended this concept to more complex spaces, such as manifolds, to accommodate data residing on these structures. 
For instance, the authors in [24] extended diffusion models to Riemannian manifolds, and leveraged Geodesic Random Walk [64] for sampling. Other studies [16,15] applied the Denoising Diffusion Probabilistic Models (DDPM) [12] and score-based generative models [14,11] to the SO(3) manifold to recover probability density of data on SO(3). Further extensions of diffusion models to SE(3) have been proposed for tasks such as unfolding protein structures and grasping objects [17,18]. These approaches typically used a straightforward parametrization on SE(3), and treated rotation and translation as separate entities for diffusion. On the contrary, our proposed approach advocates for the joint diffusion of rotation and translation. Moreover, our approach applies diffusion on SE(3) within the image space, specifically tailored for the task of object pose estimation." }, { "figure_ref": [], "heading": "D Limitations and Broader Impacts", "publication_ref": [ "b64" ], "table_ref": [], "text": "While our score model on SE(3) performs well on the synthetic datasets, there are still limitations in real-world applications. In the SYMSOL experiment, we observed that training a single score model across multiple shapes can be affected by the interactions between the distributions of different objects. This interaction leads to a slight decrease in performance and poses a challenge when extending the model to real-world datasets [65] featuring a wide variety of objects and occlusions.\nFrom an architectural perspective, the current design, which conditions image feature embeddings represented by low-dimensional vector values on the score model, can potentially create a bottleneck due to the limited amount of information it can carry. Therefore, future work could explore better architectures. This might involve integrating high-dimensional image features with the score model or devising improved methods for learning image feature representations. These improvements could allow our score models to be applied in more complex scenarios in the future research endeavors. " }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "In this supplementary material, we first present additional details of our experimental designs in Section A. This is followed by providing the proofs of our Stein score calculation in Section B. In Section C, we review the previous works relevant to our paper. Finally, in Section D, we discuss the limitations and broader impacts of our work." }, { "figure_ref": [], "heading": "A Additional Experimental Details", "publication_ref": [ "b15", "b30", "b31" ], "table_ref": [], "text": "A.1 Calculation of Stein Scores Using Automatic Differentiation in JAX As stated by [16], the Stein scores can be computed as follows:\nwhere k ∈ R, τ ∈ g, kτ indicates a small perturbation on G. In practice, this can be computed by automatic differentiation. In the following code snippet, we demonstrate our implementation based on JAX [31] and jaxlie [32]. Listing 1: Calculation of Stein scores using automatic differentiation" }, { "figure_ref": [], "heading": "A.2 Algorithms", "publication_ref": [], "table_ref": [], "text": "The algorithms used for our training and sampling procedures are presented in Algorithms 1 and 2, respectively. The notations employed conform to those detailed in the main manuscript." 
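To make the referenced procedures concrete, the sketch below illustrates, with JAX and jaxlie, an autodiff-based score in the spirit of Eq. (4) together with surrogate-score training and geodesic Langevin sampling in the spirit of Eqs. (6), (7), and (13). The function signatures, the score_model interface, the handling of the noise schedule, and the initialization from a broad prior are illustrative assumptions; this is a sketch, not the released implementation.

```python
import jax
import jax.numpy as jnp
import jaxlie

def log_kernel(Y, X, sigma):
    """log N_G(Y; X, sigma^2 I) up to its normalizing constant (Eq. 3)."""
    z = (X.inverse() @ Y).log()
    return -0.5 * jnp.sum(z ** 2) / sigma ** 2

def autograd_score(Y, X, sigma):
    """True Stein score via automatic differentiation: gradient of the log-kernel
    with respect to a small right-perturbation Exp(tau) applied at Y."""
    f = lambda tau: log_kernel(Y @ jaxlie.SE3.exp(tau), X, sigma)
    return jax.grad(f)(jnp.zeros(6))

def surrogate_dsm_loss(params, score_model, image_feat, pose, sigma, key):
    """One term of the training objective (Eq. 6) with the surrogate target of Eq. (13)."""
    z = sigma * jax.random.normal(key, (6,))
    noisy = pose @ jaxlie.SE3.exp(z)                       # X_tilde = X Exp(z)
    pred = score_model(params, image_feat, noisy.log(), sigma)
    return 0.5 * jnp.sum((pred + z / sigma ** 2) ** 2)     # target is -z / sigma^2

def sample_pose(params, score_model, image_feat, sigmas, eps0, key):
    """Geodesic Langevin sampling in the spirit of Eq. (7), annealed over the noise levels."""
    key, sub = jax.random.split(key)
    X = jaxlie.SE3.exp(sigmas[0] * jax.random.normal(sub, (6,)))   # broad initial pose
    for sigma in sigmas:                                           # ordered large to small
        eps = eps0 * (sigma / sigmas[-1]) ** 2
        key, sub = jax.random.split(key)
        z = jax.random.normal(sub, (6,))
        step = eps * score_model(params, image_feat, X.log(), sigma) + jnp.sqrt(2.0 * eps) * z
        X = X @ jaxlie.SE3.exp(step)
    return X
```

Under this sketch, the SE(3)-surrogate and SE(3)-autograd variants compared in the main paper would differ only in whether the regression target is the surrogate term -z / sigma^2 or the output of autograd_score.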
}, { "figure_ref": [], "heading": "A.3 Datasets", "publication_ref": [ "b3" ], "table_ref": [], "text": "The SYMSOL-T dataset contains 250k images of five symmetric, texture-less three-dimensional objects. Following the structure of SYMSOL [4], each shape has 45k training images and 5k testing images. The dataset ensures that translations over the x, y, and z axes are uniformly sampled within the range of [-1, 1]. In the experiments examining image perspective ambiguity in Section 5.5, each of the dataset variants (i.e., Uniform, Edge, and Centered) comprises 200 images per shape. Our analysis is performed based on 1k randomly generated poses from our score models for each image. " }, { "figure_ref": [], "heading": "Hyperparameters", "publication_ref": [ "b29", "b32" ], "table_ref": [], "text": "In our experiments, we utilize a pre-trained ResNet34 model [30] as the standard backbone across all methods, unless explicitly stated otherwise. In the SYMSOL experiments, we select 16 different clean samples in each iteration. Each of these samples is perturbed to generate 256 different noisy variants, resulting in a total of 4,096 noisy samples. The proposed score-based model is then trained for 400k steps to denoise these samples. In the SYMSOL-T experiments, the pose regression approach is trained for 400k steps. Meanwhile, the iterative regression and both our R 3 SO(3) and SE(3) score models are subjected to an extended training duration of 800k steps. We employ the Adam optimizer [33] with an initial learning rate set at 10 -4 . During the latter half of the training schedule, we apply an exponential decay, which lowers the learning rate to 10 -5 . For the diffusion process, we use a linear noise scheduling approach that ranges from 10 -4 to 1.0, divided into 100 discrete steps. " }, { "figure_ref": [], "heading": "A.5 Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "In our experiments, we employ two evaluation metrics to assess the accuracy of the estimated poses.\nIn the SYMSOL experiments, we adopt the minimum angular distance, measured in degrees, between a set of ground truth equivalent rotations and the estimated rotations as the evaluation metric. For the SYMSOL-T experiments, we incorporate the Euclidean distance between the ground truth and the estimated translations as our metric to evaluate the accuracy of translation. Each of these distance metrics is computed per sample, and we report their averages over all samples in our results. These two evaluation metrics allow us to verify the quality of the poses estimated by our proposed method." }, { "figure_ref": [], "heading": "A.6 Visualization of SYMSOL-T Results", "publication_ref": [ "b3", "b2" ], "table_ref": [], "text": "In Fig 5, we present the SYMSOL-T results obtained from our SE(3) score model for each shape. The model predictions are displayed in green and correlate to the corresponding original input images that are illustrated in gray. To visualize the density predictions, we adopt the strategy employed in [4] to represent the rotation and translation densities generated by our model in the SO(3) and R 3 spaces, respectively. Specifically, we use the Mollweide projection for visualizing the SO(3) space, with longitude and latitude values representing the yaw and pitch of the object's rotation, respectively. The color in the SO (3) space indicates the roll of the object's rotation. In the R 3 space, the (R, G, B) color channels are utilized to represent the 3D coordinates (x, y, z). 
Within the plots, the circles denote sets of equivalent poses, with each dot representing a single sample. For each plot, we generate a total of 1,000 random samples from our model. Please note that both the cone and the cylinder exhibit continuous symmetries. This causes the circles on SO(3) to overlap densely and" } ]
2023-05-25
[ { "authors": "Fabian Manhardt; Diego Martín Arroyo; Christian Rupprecht; Benjamin Busam; Tolga Birdal; Nassir Navab; Federico Tombari", "journal": "", "ref_id": "b0", "title": "Explaining the ambiguity of object detection and 6d pose from visual data", "year": "2019" }, { "authors": "Tomás Hodan; Dániel Baráth; Jiri Matas", "journal": "", "ref_id": "b1", "title": "EPOS: estimating 6d pose of objects with symmetries", "year": "2020" }, { "authors": "Haowen Deng; Mai Bui; Nassir Navab; Leonidas Guibas; Slobodan Ilic; Tolga Birdal", "journal": "", "ref_id": "b2", "title": "Deep bingham networks: Dealing with uncertainty and ambiguity in pose estimation", "year": "2020" }, { "authors": "Kieran A Murphy; Carlos Esteves; Varun Jampani; Srikumar Ramalingam; Ameesh Makadia", "journal": "", "ref_id": "b3", "title": "Implicit-pdf: Non-parametric representation of probability distributionson the rotation manifold", "year": "2021" }, { "authors": "Tomáš Hodan; Pavel Haluza; Štepán Obdržálek; Jiri Matas; Manolis Lourakis; Xenophon Zabulis", "journal": "IEEE", "ref_id": "b4", "title": "T-less: An rgb-d dataset for 6d pose estimation of texture-less objects", "year": "2017" }, { "authors": "Kiru Park; Timothy Patten; Markus Vincze", "journal": "", "ref_id": "b5", "title": "Pix2pose: Pixel-wise coordinate regression of objects for 6d pose estimation", "year": "2019" }, { "authors": "Gu Wang; Fabian Manhardt; Federico Tombari; Xiangyang Ji", "journal": "", "ref_id": "b6", "title": "Gdr-net: Geometry-guided direct regression network for monocular 6d object pose estimation", "year": "2021" }, { "authors": "Stefan Thalhammer; Timothy Patten; Markus Vincze", "journal": "", "ref_id": "b7", "title": "COPE: end-to-end trainable constant runtime object pose estimation", "year": "2023" }, { "authors": "Yixiao Guo; Jiawei Liu; Guo Li; Luo Mai; Hao Dong", "journal": "", "ref_id": "b8", "title": "Fast and flexible human pose estimation with hyperpose", "year": "2021" }, { "authors": "Frederik Rasmus Laurvig Haugaard; Thorbjørn Hagelskjaer; Mosekjaer Iversen", "journal": "", "ref_id": "b9", "title": "Spyropose: Importance sampling pyramids for object pose distribution estimation in SE(3)", "year": "2023" }, { "authors": "Yang Song; Stefano Ermon", "journal": "", "ref_id": "b10", "title": "Generative modeling by estimating gradients of the data distribution", "year": "2019" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b11", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b12", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; Diederik P Kingma; Abhishek Kumar; Stefano Ermon; Ben Poole", "journal": "", "ref_id": "b13", "title": "Score-based generative modeling through stochastic differential equations", "year": "2021" }, { "authors": "Adam Leach; Sebastian M Schmon; Matteo T Degiacomi; Chris G Willcocks", "journal": "", "ref_id": "b14", "title": "Denoising diffusion probabilistic models on so(3) for rotational alignment", "year": "2022" }, { "authors": "Yesukhei Jagvaral; Francois Lanusse; Rachel Mandelbaum", "journal": "", "ref_id": "b15", "title": "Diffusion generative models on so(3)", "year": "2023" }, { "authors": "Julen Urain; Niklas Funk; Jan Peters; Georgia Chalvatzaki", "journal": "", "ref_id": "b16", "title": "Se(3)-diffusionfields: Learning smooth cost functions for joint grasp and motion optimization 
through diffusion", "year": "2022" }, { "authors": "Jason Yim; Brian L Trippe; Valentin De Bortoli; Emile Mathieu; Arnaud Doucet; Regina Barzilay; Tommi S Jaakkola", "journal": "", "ref_id": "b17", "title": "SE(3) diffusion model with application to protein backbone generation", "year": "2023" }, { "authors": "Joan Solà; Jérémie Deray; Dinesh Atchuthan", "journal": "", "ref_id": "b18", "title": "A micro lie theory for state estimation in robotics", "year": "2018" }, { "authors": "Pascal Vincent", "journal": "Neural Comput", "ref_id": "b19", "title": "A connection between score matching and denoising autoencoders", "year": "2011" }, { "authors": "I Dmitry; Nikolayev; I Tatjana; Savyolov", "journal": "Textures and Microstructures", "ref_id": "b20", "title": "Normal distribution on the rotation group so (3)", "year": "1970" }, { "authors": "Salem Said; Lionel Bombrun; Yannick Berthoumieu; Jonathan H Manton", "journal": "IEEE Trans. Inf. Theory", "ref_id": "b21", "title": "Riemannian gaussian distributions on the space of symmetric positive definite matrices", "year": "2017" }, { "authors": "Gregory Chirikjian; Marin Kobilarov", "journal": "IEEE", "ref_id": "b22", "title": "Gaussian approximation of non-linear measurement models on lie groups", "year": "2014" }, { "authors": "Emile Valentin De Bortoli; Michael John Mathieu; James Hutchinson; Yee Whye Thornton; Arnaud Teh; Doucet", "journal": "", "ref_id": "b23", "title": "Riemannian score-based generative modelling", "year": "2022" }, { "authors": "Timothy D Barfoot; Paul Timothy; Furgale ", "journal": "IEEE Trans. Robotics", "ref_id": "b24", "title": "Associating uncertainty with three-dimensional poses for use in estimation problems", "year": "2014" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b25", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b26", "title": "Attention is all you need", "year": "2017" }, { "authors": "Tilman Liu Ziyin; Masahito Hartwig; Ueda", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Neural networks fail to learn periodic functions and how to fix it", "year": "2020" }, { "authors": "Jiyoung Lee; Wonjae Kim; Daehoon Gwak; Edward Choi", "journal": "", "ref_id": "b28", "title": "Conditional generation of periodic signals with fourier-based decoder", "year": "2021" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b29", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "James Bradbury; Roy Frostig; Peter Hawkins; Matthew James Johnson; Chris Leary; Dougal Maclaurin; George Necula; Adam Paszke; Jake Vanderplas; Skye Wanderman-Milne; Qiao Zhang", "journal": "", "ref_id": "b30", "title": "JAX: composable transformations of Python+NumPy programs", "year": "2018" }, { "authors": "Brent Yi; Michelle Lee; Alina Kloss; Roberto Martín-Martín; Jeannette Bohg", "journal": "", "ref_id": "b31", "title": "Differentiable factor graph optimization for learning smoothers", "year": "2021" }, { "authors": "P Diederick; Jimmy Kingma; Ba", "journal": "", "ref_id": "b32", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Yu Xiang; Tanner Schmidt; Venkatraman Narayanan; Dieter Fox", "journal": "", 
"ref_id": "b33", "title": "Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes", "year": "2018" }, { "authors": "Arash Amini; Arul Selvam Periyasamy; Sven Behnke", "journal": "", "ref_id": "b34", "title": "Yolopose: Transformer-based multi-object 6d pose estimation using keypoint regression", "year": "2022" }, { "authors": "Yann Labbé; Justin Carpentier; Aubry Mathieu; Josef Sivic", "journal": "Springer", "ref_id": "b35", "title": "Cosypose: Consistent multiview multi-object 6d pose estimation", "year": "2020" }, { "authors": "Yan Di; Fabian Manhardt; Gu Wang; Xiangyang Ji; Nassir Navab; Federico Tombari", "journal": "", "ref_id": "b36", "title": "So-pose: Exploiting self-occlusion for direct 6d pose estimation", "year": "2021" }, { "authors": "Sida Peng; Yuan Liu; Qixing Huang; Xiaowei Zhou; Hujun Bao", "journal": "", "ref_id": "b37", "title": "Pvnet: Pixel-wise voting network for 6dof pose estimation", "year": "2019" }, { "authors": "Mahdi Rad; Vincent Lepetit", "journal": "", "ref_id": "b38", "title": "BB8: A scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth", "year": "2017" }, { "authors": "He Wang; Srinath Sridhar; Jingwei Huang; Julien Valentin; Shuran Song; Leonidas J Guibas", "journal": "", "ref_id": "b39", "title": "Normalized object coordinate space for category-level 6d object pose and size estimation", "year": "2019" }, { "authors": "Lin Huang; Tomas Hodan; Lingni Ma; Linguang Zhang; Luan Tran; Christopher D Twigg; Po-Chen Wu; Junsong Yuan; Cem Keskin; Robert Wang", "journal": "", "ref_id": "b40", "title": "Neural correspondence field for object pose estimation", "year": "2022" }, { "authors": "Brian Okorn; Mengyun Xu; Martial Hebert; David Held", "journal": "IEEE", "ref_id": "b41", "title": "Learning orientation distributions for object pose estimation", "year": "2020" }, { "authors": "Igor Gilitschenski; Roshni Sahoo; Wilko Schwarting; Alexander Amini; Sertac Karaman; Daniela Rus", "journal": "", "ref_id": "b42", "title": "Deep orientation uncertainty learning based on a bingham loss", "year": "2020" }, { "authors": "Sergey Prokudin; Peter Gehler; Sebastian Nowozin", "journal": "", "ref_id": "b43", "title": "Deep directional statistics: Pose estimation with uncertainty quantification", "year": "2018" }, { "authors": "Ondrej David M Klee; Robert Biza; Robin Platt; Walters", "journal": "", "ref_id": "b44", "title": "Image to sphere: Learning equivariant features for efficient pose prediction", "year": "2023" }, { "authors": "Ling Yang; Zhilong Zhang; Yang Song; Shenda Hong; Runsheng Xu; Yue Zhao; Yingxia Shao; Wentao Zhang; Bin Cui; Ming-Hsuan Yang", "journal": "", "ref_id": "b45", "title": "Diffusion models: A comprehensive survey of methods and applications", "year": "2022" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b46", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b47", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b48", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Chitwan 
Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b49", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Ruihan Yang; Prakhar Srivastava; Stephan Mandt", "journal": "", "ref_id": "b50", "title": "Diffusion probabilistic modeling for video generation", "year": "2022" }, { "authors": "Jonathan Ho; Tim Salimans; Alexey Gritsenko; William Chan; Mohammad Norouzi; David J Fleet", "journal": "", "ref_id": "b51", "title": "Video diffusion models", "year": "2022" }, { "authors": "Jonathan Ho; William Chan; Chitwan Saharia; Jay Whang; Ruiqi Gao; Alexey Gritsenko; P Diederik; Ben Kingma; Mohammad Poole; David J Norouzi; Fleet", "journal": "", "ref_id": "b52", "title": "Imagen video: High definition video generation with diffusion models", "year": "2022" }, { "authors": "Rongjie Huang; Zhou Zhao; Huadai Liu; Jinglin Liu; Chenye Cui; Yi Ren", "journal": "", "ref_id": "b53", "title": "Prodiff: Progressive fast diffusion model for high-quality text-to-speech", "year": "2022" }, { "authors": "Dongchao Yang; Jianwei Yu; Helin Wang; Wen Wang; Chao Weng; Yuexian Zou; Dong Yu", "journal": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", "ref_id": "b54", "title": "Diffsound: Discrete diffusion model for text-to-sound generation", "year": "2023" }, { "authors": "Shansan Gong; Mukai Li; Jiangtao Feng; Zhiyong Wu; Lingpeng Kong", "journal": "", "ref_id": "b55", "title": "Diffuseq: Sequence to sequence text generation with diffusion models", "year": "2022" }, { "authors": "Xiang Li; John Thickstun; Ishaan Gulrajani; Percy S Liang; Tatsunori B Hashimoto", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b56", "title": "Diffusion-lm improves controllable text generation", "year": "2022" }, { "authors": " Florinel-Alin; Vlad Croitoru; Radu Hondru; Tudor Ionescu; Mubarak Shah", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b57", "title": "Diffusion models in vision: A survey", "year": "2023" }, { "authors": "Tomer Amit; Tal Shaharbany; Eliya Nachmani; Lior Wolf", "journal": "", "ref_id": "b58", "title": "Segdiff: Image segmentation with diffusion probabilistic models", "year": "2021" }, { "authors": "Dmitry Baranchuk; Ivan Rubachev; Andrey Voynov; Valentin Khrulkov; Artem Babenko", "journal": "", "ref_id": "b59", "title": "Label-efficient semantic segmentation with diffusion models", "year": "2021" }, { "authors": "Shoufa Chen; Peize Sun; Yibingimp Song; Ping Luo", "journal": "", "ref_id": "b60", "title": "Diffusiondet: Diffusion model for object detection", "year": "2022" }, { "authors": "Jeongjun Choi; Dongseok Shim; H ; Jin Kim", "journal": "", "ref_id": "b61", "title": "Diffupose: Monocular 3d human pose estimation via denoising diffusion probabilistic model", "year": "2022" }, { "authors": "Karl Holmquist; Bastian Wandt", "journal": "", "ref_id": "b62", "title": "Diffpose: Multi-hypothesis human pose estimation using diffusion models", "year": "2022" }, { "authors": "Erik Jørgensen", "journal": "Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete", "ref_id": "b63", "title": "The central limit problem for geodesic random walks", "year": "1975" }, { "authors": "Tomáš Hodaň; Martin Sundermeyer; Bertram Drost; Yann Labbé; Eric Brachmann; Frank Michel; Carsten Rother; Jiří 
Matas", "journal": "Springer", "ref_id": "b64", "title": "Bop challenge 2020 on 6d object localization", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 222.73, 361.55, 151.06, 9.65 ], "formula_id": "formula_0", "formula_text": "(R 2 , T 2 )(R 1 , T 1 ) = (R 2 R 1 , T 2 + T 1 )" }, { "formula_coordinates": [ 3, 106.83, 432.49, 164.04, 9.65 ], "formula_id": "formula_1", "formula_text": "(R 2 , T 2 )(R 1 , T 1 ) = (R 2 R 1 , T 2 + R 2 T 1 )" }, { "formula_coordinates": [ 3, 142.61, 596.45, 321.58, 23.34 ], "formula_id": "formula_2", "formula_text": "θ * = arg min θ L(θ; σ) ≜ 1 2 E pdata(x) E x∼N (x,σ 2 I) ∥s θ (x, σ) -∇ x log p σ (x|x)∥ 2 2" }, { "formula_coordinates": [ 3, 178.72, 660.41, 325.95, 18.91 ], "formula_id": "formula_3", "formula_text": "xm i = xm-1 i + ϵ i s θ * (x m-1 i , σ i ) + √ 2ϵ i z m i , m = 1, 2, ..., M,(2)" }, { "formula_coordinates": [ 4, 116.26, 99.73, 258.99, 24.85 ], "formula_id": "formula_4", "formula_text": "Urain et al. [17] R 3 SO(3) N R 3 × N SO(3) ✓ Score / Autograd R 3 SO(3) Yim et al. [18] R 3 SO(3) N R 3 × IG SO(3) ✗ Score / Autograd ⟨R 3 , so(3)⟩ Ours SE(3) N SE(3) △ Score / Closed Form SE(3)" }, { "formula_coordinates": [ 4, 196.56, 374.03, 261.83, 10.46 ], "formula_id": "formula_5", "formula_text": "SE(3) over R 3 SO(3) in Perspective-Affected Pose Estimation" }, { "formula_coordinates": [ 4, 143.46, 597.05, 361.21, 22.31 ], "formula_id": "formula_6", "formula_text": "p Σ (Y |X) := N G (Y ; X, Σ) ≜ 1 ζ(Σ) exp - 1 2 Log(X -1 Y ) ⊤ Σ -1 Log(X -1 Y ) ,(3)" }, { "formula_coordinates": [ 4, 185.25, 669.18, 319.42, 12.69 ], "formula_id": "formula_7", "formula_text": "∇ Y log p Σ (Y |X) = -J -⊤ r (Log(X -1 Y ))Σ -1 Log(X -1 Y ).(4)" }, { "formula_coordinates": [ 5, 234.09, 243.66, 270.58, 22.31 ], "formula_id": "formula_8", "formula_text": "∇ Y log p σ ( X|X) = - 1 σ 2 J -⊤ r (z)z.(5)" }, { "formula_coordinates": [ 5, 134.2, 294.07, 370.47, 23.55 ], "formula_id": "formula_9", "formula_text": "θ * = arg min θ L(θ; σ) ≜ 1 2 E pdata(X) E X∼N G (X,Σ) s θ ( X, σ) -∇ X log p σ ( X|X) 2 2 (6)" }, { "formula_coordinates": [ 5, 193.49, 349.19, 311.17, 17.63 ], "formula_id": "formula_10", "formula_text": "Xi+1 = Xi Exp(ϵ i s θ ( Xi , σ i ) + √ 2ϵ i z i ), z i ∼ N (0, I).(7)" }, { "formula_coordinates": [ 5, 229.94, 524.38, 274.73, 13.38 ], "formula_id": "formula_11", "formula_text": "J l (z) = J ⊤ r (z), J -1 l (z) = J -⊤ r (z),(8)" }, { "formula_coordinates": [ 5, 282, 574.99, 218.79, 9.68 ], "formula_id": "formula_12", "formula_text": "J l (z)z = z. (9" }, { "formula_coordinates": [ 5, 500.8, 575.34, 3.87, 8.64 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 5, 215.71, 611.53, 284.81, 22.31 ], "formula_id": "formula_14", "formula_text": "∇ Y log p σ ( X|X) = - 1 σ 2 J -1 l (z)z = - 1 σ 2 z. (10" }, { "formula_coordinates": [ 5, 500.52, 618.59, 4.15, 8.64 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 5, 219.22, 681.46, 285.45, 12.69 ], "formula_id": "formula_16", "formula_text": "J l (z) = (I, J l (ϕ)) = (I, J ⊤ r (ϕ)) = J ⊤ r (z),(11)" }, { "formula_coordinates": [ 6, 108, 139.43, 396, 33.62 ], "formula_id": "formula_17", "formula_text": "-1 l (z) = J -1 l (ϕ) Z(ρ,ϕ) 0 J -1 l (ϕ) where Z(ρ, ϕ) = -J -1 l (ϕ)Q(ρ, ϕ)J -1 l (ϕ)." }, { "formula_coordinates": [ 6, 196.26, 194.56, 304.26, 20.47 ], "formula_id": "formula_18", "formula_text": "J -⊤ r (z) = (J -1 l (-z)) ⊤ = J -1 l (ρ) 0 Z(ρ,ϕ) J -1 l (ρ) ̸ = J -1 l (z). 
(12" }, { "formula_coordinates": [ 6, 500.52, 200.36, 4.15, 8.64 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 6, 265.75, 333.61, 238.91, 22.31 ], "formula_id": "formula_20", "formula_text": "sX ( X, σ) ≜ - 1 σ 2 z.(13)" }, { "formula_coordinates": [ 6, 193.14, 604.3, 307.38, 30.32 ], "formula_id": "formula_21", "formula_text": "f i (x, c) = d-1 j=0 W ij (A j (c) cos(πx j ) + B j (c) sin(πx j )) (14" }, { "formula_coordinates": [ 6, 500.52, 615.03, 4.15, 8.64 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 6, 108, 655.68, 190.12, 14.56 ], "formula_id": "formula_23", "formula_text": "f (t) = ∞ k=0 A k cos 2πkt P + B k sin 2πkt P ." }, { "formula_coordinates": [ 16, 144.78, 213.36, 359.89, 22.31 ], "formula_id": "formula_24", "formula_text": "p Σ (Y |X) := N G (Y ; X, Σ) ≜ 1 ζ(Σ) exp - 1 2 Log(X -1 Y ) ⊤ Σ -1 Log(X -1 Y ) .(16)" }, { "formula_coordinates": [ 16, 120.78, 276.26, 383.89, 123.03 ], "formula_id": "formula_25", "formula_text": "∇ Y log p Σ (Y |X) ⊤ = ∂ ∂Y - 1 2 Log(X -1 Y ) ⊤ Σ -1 Log(X -1 Y ) = ∂ ∂Log(X -1 Y ) - 1 2 Log(X -1 Y ) ⊤ Σ -1 Log(X -1 Y ) ∂Log(X -1 Y ) ∂Y = -Log(X -1 Y ) ⊤ Σ -1 ∂Log(X -1 Y ) ∂(X -1 Y ) • ∂(X -1 Y ) ∂Y = -Log(X -1 Y ) ⊤ Σ -1 J -1 r (Log(X -1 Y )) • I = -Log(X -1 Y ) ⊤ Σ -1 J -1 r (Log(X -1 Y )).(17)" }, { "formula_coordinates": [ 16, 185.25, 427.38, 319.42, 12.69 ], "formula_id": "formula_26", "formula_text": "∇ Y log p Σ (Y |X) = -J -⊤ r (Log(X -1 Y ))Σ -1 Log(X -1 Y ).(18)" }, { "formula_coordinates": [ 16, 252.14, 506.72, 252.53, 31.47 ], "formula_id": "formula_27", "formula_text": "z × = 0 -z z z y z z 0 -z x -z y z x 0(19)" }, { "formula_coordinates": [ 16, 214.8, 578.44, 289.87, 103.82 ], "formula_id": "formula_28", "formula_text": "J r (z) = I - 1 -cos ϕ ϕ 2 z × + ϕ -sin ϕ ϕ 3 z 2 × J -1 r (z) = I + 1 2 z × + 1 ϕ - 1 + cos ϕ 2ϕ sin ϕ z 2 × J l (z) = I + 1 -cos ϕ ϕ 2 z × + ϕ -sin ϕ ϕ 3 z 2 × J -1 l (z) = I - 1 2 z × + 1 ϕ - 1 + cos ϕ 2ϕ sin ϕ z 2 × .(20)" }, { "formula_coordinates": [ 16, 224.96, 710.97, 279.71, 13.38 ], "formula_id": "formula_29", "formula_text": "J l (z) = J ⊤ r (z), J -1 l (z) = J -⊤ r (z).(21)" }, { "formula_coordinates": [ 17, 220.46, 131.84, 284.21, 22.31 ], "formula_id": "formula_30", "formula_text": "∂Exp(kz) ∂k = ∂Exp(kz) ∂(kz) ∂(kz) ∂k = J l (kz)z.(22)" }, { "formula_coordinates": [ 17, 189.45, 173.22, 315.21, 50.22 ], "formula_id": "formula_31", "formula_text": "∂Exp(kz) ∂k = lim h→0 Log(Exp((k + h)z)Exp(kz) -1 ) k = lim h→0 Log(Exp(hz)Exp(kz)Exp(kz) -1 ) h = z.(23)" }, { "formula_coordinates": [ 17, 282, 242.03, 222.66, 9.68 ], "formula_id": "formula_32", "formula_text": "J l (z)z = z.(24)" }, { "formula_coordinates": [ 17, 241.88, 364.44, 262.78, 25.41 ], "formula_id": "formula_34", "formula_text": "J -1 l (z) = J -1 l (ϕ) Z(ρ, ϕ) 0 J -1 l (ϕ) ,(26)" }, { "formula_coordinates": [ 17, 134.47, 393.43, 322.06, 13.38 ], "formula_id": "formula_35", "formula_text": "Z(ρ, ϕ) = -J -1 l (ϕ)Q(ρ, ϕ)J -1 l (ϕ). 
The complete form of Q(ρ, ϕ) is defined in" }, { "formula_coordinates": [ 17, 144.71, 410.5, 359.96, 90.58 ], "formula_id": "formula_36", "formula_text": "Q(ρ, ϕ) = 1 2 ρ × + φ -sin φ φ3 (ϕ × ρ × + ρ × ϕ × + ϕ × ρ × ϕ × ) - 1 - φ2 2 -cos φ φ4 (ϕ 2 × ρ × + ρ × ϕ 2 × -3ϕ × ρ × ϕ × ) - 1 2 1 - φ2 2 -cos φ φ4 -3 φ -sin φ - φ3 6 φ5 (ϕ × ρ × ϕ 2 × + ϕ 2 × ρ × ϕ × ) .(27)" }, { "formula_coordinates": [ 17, 254.99, 523.63, 245.53, 11.03 ], "formula_id": "formula_37", "formula_text": "Q ⊤ (-ρ, -ϕ) = Q(ρ, ϕ).(28" }, { "formula_coordinates": [ 17, 193.51, 565.76, 311.15, 160.31 ], "formula_id": "formula_38", "formula_text": "J -⊤ r (z) = J -1 l (-z) ⊤ = J -1 l (-ϕ) Z(-ρ, -ϕ) 0 J -1 l (-ϕ) ⊤ = J -1 r (ϕ) -J -1 r (ϕ)Q(-ρ, -ϕ)J -1 r (ϕ) 0 J -1 r (ϕ) ⊤ = J -⊤ r (ϕ) 0 -J -⊤ r (ϕ)Q ⊤ (-ρ, -ϕ)J -⊤ r (ϕ) J -⊤ r (ϕ) = J -1 l (ϕ) 0 -J -1 l (ϕ)Q(ρ, ϕ)J -1 l (ϕ) J -1 l (ϕ) = J -1 l (ϕ) 0 Z(ρ, ϕ) J -1 l (ϕ).(29)" }, { "formula_coordinates": [ 18, 207.73, 102.03, 296.94, 25.41 ], "formula_id": "formula_39", "formula_text": "∇ Y log p σ ( X|X) = - 1 σ 2 J -1 l (ϕ) 0 Z(ρ, ϕ) J -1 l (ϕ) z.(30)" } ]
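As a concrete illustration of the sampling formulas above (the Gaussian perturbation kernel, the surrogate score of Eq. (13), and the geodesic Langevin update of Eq. (7)), the following minimal numpy sketch runs the annealed update restricted to the SO(3) part. The function score_fn is a placeholder standing in for the trained score network s_theta, and the noise levels and step-size rule are arbitrary illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: map an axis-angle vector w in R^3 to a rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-8:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def score_fn(R, sigma):
    """Placeholder for the learned score s_theta(X, sigma); returns a tangent vector in R^3."""
    return np.zeros(3)

def annealed_langevin_so3(R, sigmas, n_steps=10, eps0=1e-4):
    """Geodesic Langevin sampling as in Eq. (7): X <- X Exp(eps * s + sqrt(2 * eps) * z)."""
    for sigma in sigmas:                          # anneal the noise level from coarse to fine
        eps = eps0 * (sigma / sigmas[-1]) ** 2    # simple step-size rescaling heuristic
        for _ in range(n_steps):
            z = np.random.randn(3)
            R = R @ so3_exp(eps * score_fn(R, sigma) + np.sqrt(2.0 * eps) * z)
    return R

R_hat = annealed_langevin_so3(so3_exp(np.random.randn(3)), np.geomspace(1.0, 0.01, num=10))
```

In the paper the update is carried out on full SE(3) elements, so the translation part is denoised jointly through the group exponential rather than being dropped as in this reduced sketch.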
Confronting Ambiguity in 6D Object Pose Estimation via Score-Based Diffusion on SE(3)
Addressing accuracy limitations and pose ambiguity in 6D object pose estimation from single RGB images presents a significant challenge, particularly due to object symmetries or occlusions. In response, we introduce a novel score-based diffusion method applied to the SE(3) group, marking the first application of diffusion models to SE(3) within the image domain, specifically tailored for pose estimation tasks. Extensive evaluations demonstrate the method's efficacy in handling pose ambiguity and mitigating perspective-induced ambiguity, and showcase the robustness of our surrogate Stein score formulation on SE(3). This formulation not only improves the convergence of Langevin dynamics but also enhances computational efficiency. Thus, we pioneer a promising strategy for 6D object pose estimation.
Tsu-Ching Hsiao; Hao-Wei Chen; Hsuan-Kung Yang; Chun-Yi Lee
[ { "figure_caption": "Figure 1 :1Figure 1: Visualization of the denoising process of our SE(3) score-based model for 6DoF pose estimation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Left: Comparison of different methods. △ means closed form but with approximation. N SE(3) please refer to Eq. (3). Right: Visualizing pose ambiguity caused by image perspective. The rotations between the four cubes differ by an angle of 15 degrees.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Left: An overview of the framework. Right: Visualization of a denoising step from a noisy sample X to its cleaned counterpart X on SE(2). The contours illustrate the distances to X in 2D Euclidean space, while each line represents a denoising path with varying sub-sampling steps. and integrating the above definition, the score on G can be reformulated as follows:", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The distribution of angular errors of the SE(3) and R 3 SO(3) score models with three configurations and four shapes, in which the width represents the density of data points at a particular range. Please note that the results of R 3 SO(3) on icosa. are not reported as this model fails to adequately handle this particular shape.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visualization of our SYMSOL-T results. Please refer to Section A.6 for the detailed descriptions.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Evaluation results on SYMSOL.", "figure_data": "MethodsAvg.SYMSOL (Spread ↓) tet. cube icosa. 
cone cyl.DBN [3]22.44 16.7 40.729.510.1 15.2Implicit-PDF [4]3.964.64.08.41.41.4HyperPosePDF [9]1.943.27 2.183.240.55 0.48Ours (ResNet34)0.420.43 0.440.520.35 0.35Ours (ResNet50)0.370.28 0.320.40.53 0.31", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation results on SYMSOL-T.", "figure_data": "SYMSOL-T (Spread ↓)Methodstet.cubeicosa.conecyl.RtRtRtRtRtRegression2.92 0.064 2.860.052.460.037 1.84 0.058 2.24 0.049Iterative regression 4.25 0.0484.20.037 29.33 0.026 1.63 0.037 2.34 0.032Ours (R 3 SO(3))1.38 0.017 1.93 0.010 29.35 0.009 1.33 0.016 0.86 0.010Ours (SE(3))0.59 0.016 0.58 0.0110.640.012 0.54 0.016 0.41 0.011", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Evaluation results for various denoising steps applied to score models on SE(3), trained using automatic differentiation and surrogate scores.", "figure_data": "SYMSOL-T (Spread ↓)MethodsStepstet.cubeicosa.conecyl.RtRtRtRtRt1000.60 0.019 0.59 0.012 0.67 0.012 0.58 0.018 0.41 0.012SE(3)-autograd50 100.61 0.019 0.61 0.013 0.66 0.013 0.58 0.019 0.41 0.013 2.89 0.102 3.21 0.113 3.24 0.113 3.12 0.104 3.16 0.108512.93 0.418 13.07 0.407 10.33 0.302 10.83 0.377 10.09 0.3451000.59 0.016 0.58 0.011 0.64 0.012 0.55 0.016 0.41 0.011SE(3)-surrogate50 100.56 0.017 0.58 0.011 0.65 0.012 0.54 0.017 0.41 0.011 0.63 0.017 0.70 0.012 1.71 0.015 0.56 0.019 0.43 0.01451.22 0.024 2.00 0.028 5.31 0.048 0.72 0.035 0.62 0.031", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison with diffusion-based approaches.", "figure_data": "MethodsSYMSOL (Spread ↓) Avg. tet. cube icosa. cone cyl.Leach et al. [15]0.57 0.63 0.540.770.51 0.38Jagvaral et al. [16] 1.18 0.52 0.773.970.32 0.32Ours w/o fourier0.89 0.48 0.462.860.33 0.34Ours0.42 0.43 0.440.520.35 0.35", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces the use of symmetry-aware annotations to improve pose estimation accuracy, which the citing paper adopts in their research to address the issue of pose ambiguity."}, {"Category": "Supporting Evidence", "Citation": "[6]", "Explanation": "The cited work provides a method for tackling the pose ambiguity problem through the use of symmetry-aware and surrogate losses, which the citing paper leverages in their research to address the issue of pose ambiguity."}, {"Category": "Extension or Continuation", "Citation": "[7]", "Explanation": "The cited work extends the research on pose estimation by introducing a new method for handling pose ambiguity, which the citing paper builds upon in their own research to address the same issue."}, {"Category": "Extension or Continuation", "Citation": "[8]", "Explanation": "The cited work further extends the research on pose estimation by introducing a new method for addressing pose ambiguity, which the citing paper builds upon in their own research to address the same issue."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work introduces the denoising diffusion probabilistic model (DDPM), which the citing paper adopts in their research to address the pose ambiguity issue in diffusion models."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work presents the score-based generative model (SGM), which the citing paper utilizes in their research to improve the performance of diffusion models in recovering unknown densities on the SO(3) rotation manifold."}, {"Category": "Methodological Basis", "Citation": "[15,16]", "Explanation": "The cited works apply the DDPM and SGM to the SO(3) space, which the citing paper builds upon in their research to achieve better results in recovering unknown densities on the SO(3) rotation manifold."}, {"Category": "Methodological Basis", "Citation": "[17,18]", "Explanation": "The cited works extend the application of diffusion models to the SE(3) space, which the citing paper builds upon in their research to address the complexity and high dimensionality of the problem in diffusion models."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work, SYMSOL dataset, serves as the basis for the development of a new synthetic dataset called SYMSOL-T, which is used in the study conducted in the citing paper to evaluate the effectiveness of the proposed method in capturing the joint density of object rotations and translations."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work provides a set of parametrizations for the special Euclidean group SE(3), which the citing paper adopts in their research to study rigid-body motions in three-dimensional Euclidean space."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, NCSN, provides a specific method for training a network using a Denoising Score Matching objective, which the citing paper adopts in its own research."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work, Denoising Score Matching objective, serves as a foundational method for the research conducted in the citing paper, as it is used to train the network s \u03b8 (x, \u03c3) in the Denoising Score Matching objective."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work 
introduces the Langevin MCMC method for running M steps in a sample production process, which the citing paper adopts in their own research to produce samples for each p \u03c3i (x) sequentially."}, {"Category": "Methodological Basis", "Citation": "[15][16][17][18]", "Explanation": "The cited works have successfully employed diffusion models for pose estimation tasks, which the citing paper builds upon by adopting similar methods and approaches in their research."}, {"Category": "Data Source", "Citation": "IG SO(3)", "Explanation": "The cited work proposes a normal distribution on SO(3), which the citing paper utilizes in their research to address the computational challenges in diffusion model approaches for pose estimation."}, {"Category": "Extension or Continuation", "Citation": "r3so3", "Explanation": "The method proposed in the cited work operates in the tangent space of SE(3) parameterized with R 3 SO(3), which the citing paper extends by exploring new dimensions and variables in their research on diffusion model approaches for pose estimation."}, {"Category": "Methodological Basis", "Citation": "Joint Gaussian distribution", "Explanation": "The cited work employs a joint Gaussian distribution in the R 3 and SO(3) spaces, which the citing paper leverages to enhance computational efficiency in their research on diffusion model approaches for pose estimation."}, {"Category": "Methodological Basis", "Citation": "[22,23]", "Explanation": "The cited works establish a perturbation kernel on a Lie group G that conforms to the Gaussian distribution, which the citing paper adopts in their research to apply score-based generative modeling to the group G."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work suggests an approach to compute a term using automatic differentiation, which the citing paper adopts in their research to improve the training of a score model."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work provides a closed-form expression for the score induced by IG SO(3), which the citing paper uses to compute the score in their research on Lie groups."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work provides the property of SO(3) that allows the closed-form score to be derived in the citing paper, which serves as a methodological basis for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[19,25]", "Explanation": "The cited work provides the full form of Q(\u03c1, \u03d5), which the citing paper uses in the calculation of the inverse of the left-Jacobian on SE(3). 
This information serves as a methodological basis for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[12,11]", "Explanation": "The cited works provide the inspiration for the design of the multi-layer perceptron (MLP) in the score model, which is used to process noisy poses and provide estimated scores in the image generation process."}, {"Category": "Data Source", "Citation": "[3,4,9]", "Explanation": "The cited works provide the SYMSOL dataset, which serves as the basis for evaluating the effectiveness of the score model in the SO(3) space in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[15,16]", "Explanation": "The cited works introduce prior diffusion models that are explored within the SO(3) context of the framework in the citing paper, indicating a continuation of research in this area."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work, Implicit-PDF, is used as a training strategy for the score model in the citing paper, which is employed to address the pose ambiguity problem."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work, HyperPosePDF, is also used as a training strategy for the score model in the citing paper, to address the pose ambiguity problem."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work by [15] extends the DDPM model to SO(3) using an analogy approach, which the citing paper adopts in their framework to perform object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work by [16] reformulates the SGM model to apply it to the SO(3) space, which the citing paper adapts to fit within their framework for object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work provides the closed-form expressions of the left and right Jacobians on SO(3), which the citing paper adopts in their research to express the skew-symmetric matrix induced by z."}, {"Category": "Methodological Basis", "Citation": "[19,25]", "Explanation": "The cited works provide the inverse of the left-Jacobian on SE(3), which the citing paper uses in the definition of the Z function in Equation (26) to calculate the Stein score on SE(3). 
This function is essential for the analysis of the Stein score in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[34,35]", "Explanation": "The cited works provide strategies for explicitly tackling pose ambiguity in object pose estimation, which the citing paper adopts in its research to address the issue."}, {"Category": "Extension or Continuation", "Citation": "[36,7,37,8]", "Explanation": "The cited works present regression-based techniques for minimizing pose discrepancy in object pose estimation, which the citing paper builds upon to further explore this approach."}, {"Category": "Methodological Basis", "Citation": "[38,39]", "Explanation": "The cited works apply constraints to regression targets to mitigate pose ambiguity in object pose estimation, which the citing paper adopts in its research to address the issue."}, {"Category": "Methodological Basis", "Citation": "[6,40,41]", "Explanation": "The cited works propose regressing to a predetermined set of geometric features derived from symmetry annotations to handle pose ambiguity in object pose estimation, which the citing paper builds upon to further explore this approach."}, {"Category": "Methodological Basis", "Citation": "[42,43,3]", "Explanation": "The cited works use parametric distributions such as Bingham distributions to model orientation uncertainty in object pose estimation, which the citing paper adopts in its research to model the inherent uncertainty in pose ambiguity."}, {"Category": "Methodological Basis", "Citation": "[44]", "Explanation": "The cited work uses a von-Mises distribution to model orientation uncertainty in object pose estimation, which the citing paper adopts in its research to model the inherent uncertainty in pose ambiguity."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work in [1] provides a method for estimating a Bingham distribution using an ensemble of pose hypotheses, which the citing paper adopts in their research to address the problem of rotation uncertainty densities on SO(3)."}, {"Category": "Data Source", "Citation": "[4,9,45,10]", "Explanation": "The cited works in [4,9,45,10] have employed non-parametric distributions to represent rotation uncertainty densities on SO(3), which the citing paper references to highlight the use of this approach in the field of rotation uncertainty estimation."}, {"Category": "Methodological Basis", "Citation": "[46,[12][13][14]11]", "Explanation": "The cited works provide a class of generative models that the citing paper builds upon to learn the underlying probability distribution of data in various application domains."}, {"Category": "Extension or Continuation", "Citation": "[58]", "Explanation": "The cited work extends the application of diffusion models to discriminative tasks, which the citing paper further explores in the context of semantic segmentation and object detection."}, {"Category": "Extension or Continuation", "Citation": "[62,63]", "Explanation": "The cited works on human pose estimation have provided useful insights for addressing joint location ambiguity in object pose estimation, which the citing paper builds upon to address similar challenges in the new area of object pose estimation."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work has extended the concept of diffusion models to Riemannian manifolds, which the citing paper leverages to sample data in more complex spaces such as manifolds."}, {"Category": "Methodological Basis", 
"Citation": "[12]", "Explanation": "The cited work, Denoising Diffusion Probabilistic Models (DDPM), provides the methodology for applying diffusion to the SO(3) manifold in the context of object pose estimation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[14,11]", "Explanation": "The cited works on score-based generative models contribute to the methodology of applying diffusion to the SO(3) manifold for the task of object pose estimation in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[16,15]", "Explanation": "The cited works on applying diffusion to the SO(3) manifold for data recovery tasks have been extended in the citing paper to the task of object pose estimation on SE(3)."}, {"Category": "Supporting Evidence", "Citation": "[65]", "Explanation": "The cited work features a wide variety of objects and occlusions, which presents a challenge in real-world applications and highlights the need for improvements in the score model design."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work provides a formula for computing Stein scores, which the citing paper adopts in their implementation using JAX and jaxlie for automatic differentiation."}, {"Category": "Data Source", "Citation": "[4]", "Explanation": "The cited work, SYMSOL, is the source of the SYMSOL-T dataset used in the citing paper for the analysis of image perspective ambiguity."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work by He et al. (2016) is the standard backbone used in the experiments of the citing paper, providing a methodological basis for the research conducted."}, {"Category": "Data Source", "Citation": "[30]", "Explanation": "The cited work by He et al. (2016) is the source of the pre-trained ResNet34 model used in the experiments of the citing paper, serving as a data source for the research conducted."}, {"Category": "Extension or Continuation", "Citation": "[30]", "Explanation": "The cited work by He et al. (2016) is the basis for the research conducted in the SYMSOL experiments, as the pre-trained ResNet34 model is used to generate the clean samples and their noisy variants."}, {"Category": "Extension or Continuation", "Citation": "[30]", "Explanation": "The cited work by He et al. (2016) is the basis for the research conducted in the SYMSOL-T experiments, as the pre-trained ResNet34 model is used to train the pose regression approach and the iterative regression models."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work provides a strategy for visualizing the rotation and translation densities generated by the model in the SO(3) and R3 spaces, which the citing paper adopts in its research to present the results in a more visual and informative manner."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b57", "b10", "b23", "b57", "b21", "b9", "b35", "b21", "b0", "b13", "b30", "b34", "b57", "b0", "b26", "b33", "b54", "b0" ], "table_ref": [], "text": "The development of advanced driver assistance systems (ADAS) and automated driving functions has made remarkable progress in recent years, resulting in increased safety and convenience for drivers. A robust environment perception is the key requirement for these systems, which rely on sensors such as radar, camera, or LiDAR to detect surrounding objects. Each sensor has its own advantages and disadvantages that must be considered when designing a perception system. An extensive review on multi-modular automotive object detection is presented in [4].\nRadar sensors are advantageous in that they are less affected by adverse environmental conditions such as rain, fog, or darkness, and they have a longer detection range when compared to cameras and LiDARs. However, they are Fig. 1: Overview of RC-BEVFusion network architecture. The block marked in grey is inherited from an exchangeable camera-only baseline, while the block marked in blue shows our proposed radar-camera fusion plug-in module.\nlimited in their ability to provide detailed information about the shape and texture of objects [58]. Cameras, on the other hand, provide rich visual information and can recognize objects based on their appearance, but their performance can be affected by changes in lighting conditions and inaccurate depth estimation [11,24]. LiDARs provide detailed 3D information and are less affected by lighting conditions, but they can be expensive and have limited range [58].\nSensor fusion has the potential to overcome these limitations of individual sensors. In particular, the combination of radar and camera sensors arguably offers the most complementary features. The main challenge is how to associate radar and camera features given that conventional radars provide data on the bird's eye view (BEV) plane, whereas cameras provide data on the image plane. Projecting radar points to the image discards too much geometric information, whereas projecting camera features to the sparse radar points discards too much semantic information [22].\nRecent advancements in camera-only networks using view transformers [10,36] have enabled a new type of fusion on the BEV plane, which is well suited for radar data. In this paper, we propose RC-BEVFusion, a novel radar-camera fusion architecture on the BEV plane inspired by [22] and illustrated in Figure 1. In contrast to previous radar-camera fusion techniques [14,31,35], our architecture allows radar and camera features to equally contribute to the final detections, enabling the network to detect obstacles that may be missed by one of the modalities. It is a flexible architecture that inherits several elements from an exchangable camera-only baseline: a camera encoder, a camera-to-BEV view transformer, a BEV encoder and a detection head. On top of these modules, we propose two radar encoder branches: RadarGridMap and BEVFeatureNet. Our results show that they can be used as a plug-in module in various camera-based architectures and significantly enhance their performance.\nTo train and evaluate our network, we need a large-scale automotive dataset with radar point clouds, unmasked camera images with a lot of variety in the scenes and 3D object annotations. A recent overview on radar datasets is given in [58]. 
First, there are datasets with conventional 2+1D radar sensors that provide a list of detections with measurements of the range, range rate, azimuth angle, and radar cross section (RCS). Out of these, the nuScenes dataset [1] is the only one that fulfils our requirements. Second, there are recent datasets with high-performance 3+1D radar sensors that provide denser point clouds and additionally measure the elevation angle. From this group, the Astyx [27] dataset is too small, while the View-of-Delft [34] dataset is medium-sized but has limited visual variety in the images due to the high annotation frequency. The recently presented TJ4DRadSet [55] may be a future option but the data is not fully released yet. In this work, we therefore choose to conduct our experiments with the nuScenes dataset [1].\nThe remaining paper is organized as follows. Section 2 provides an overview of related work on automotive object detection with camera-only, radar-only and radar-camera fusion. Section 3 describes the proposed radar-camera fusion architecture on the BEV plane. Section 4 presents extensive experimental results on the nuScenes dataset and demonstrates the effectiveness of the proposed architecture. Finally, Section 5 concludes the paper and outlines future work." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [], "table_ref": [], "text": "The task of 3D object detection is mostly conducted with the help of cameras, LiDARs and, less frequently, radars. In the following, we give an overview of recent advances on image-only and radar-only object detection, as well as LiDAR-camera and radar-camera sensor fusion." }, { "figure_ref": [], "heading": "Image-only object detection", "publication_ref": [ "b27", "b42", "b47", "b56", "b25", "b48", "b19", "b49", "b38", "b35", "b9", "b17", "b16", "b55" ], "table_ref": [], "text": "Image-based 3D object detection is a difficult task in the field of computer vision because it involves identifying and localizing objects in 3D space using only a single camera as a sensor. This is in contrast to LiDAR and radar systems, which provide depth measurements and can more accurately determine the 3D location of objects.\nEarly approaches to image-based 3D object detection focused on using known geometric information to estimate 3D bounding boxes from 2D detections [28]. More recent techniques have extended existing 2D object detection models with additional detection heads specifically designed for 3D object detection [43,48,57]. Some approaches have also used predicted depth maps as auxiliary features [26] or to create a pseudo-LiDAR point cloud, which is then processed using LiDAR-based object detectors [49].\nThe most recent research in this area has mainly followed two directions. The first is the use of transformer-based techniques, which leverage the ability of transformer models to process sequences of data and perform self-attention to learn more complex relationships between features [20,50]. The second direction is the development of BEV-based object detectors. To this end, the features need to be transformed from the image plane to the BEV plane. A pioneering work uses orthographic feature transform [39], where a voxel grid is projected to the image to extract features. To reduce memory consumption, the voxel grid is then collapsed along the vertical axis to create BEV features. 
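To make the project-and-collapse idea of the orthographic feature transform concrete, the sketch below projects voxel centers into the image with a pinhole model, gathers image features at the projected pixels, and sums over the grid's height axis to obtain BEV features. The single-camera setting, nearest-neighbour sampling, and all shapes are simplifying assumptions for illustration; this is not the reference implementation of [39].

```python
import numpy as np

def orthographic_feature_transform(img_feats, intrinsics, voxel_centers):
    """img_feats: (C, H, W) image features; intrinsics: (3, 3) pinhole matrix;
    voxel_centers: (Xg, Yg, Hg, 3) centers of a BEV-aligned voxel grid, given in the
    camera frame; Hg indexes the height bins of the grid."""
    C, H, W = img_feats.shape
    Xg, Yg, Hg, _ = voxel_centers.shape
    pts = voxel_centers.reshape(-1, 3).T                      # (3, N)
    proj = intrinsics @ pts                                   # project to the image plane
    depth = np.clip(proj[2], 1e-6, None)
    u = np.round(proj[0] / depth).astype(int)                 # nearest-neighbour pixel lookup
    v = np.round(proj[1] / depth).astype(int)
    valid = (proj[2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    voxel_feats = np.zeros((C, Xg * Yg * Hg))
    voxel_feats[:, valid] = img_feats[:, v[valid], u[valid]]  # gather a feature per voxel
    return voxel_feats.reshape(C, Xg, Yg, Hg).sum(axis=-1)    # collapse height -> (C, Xg, Yg)
```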
More recent methods are based on the Lift-Splat-Shoot (LSS) view transformer [36], which lifts image features into a 3D pseudo point cloud via dense depth prediction, before again collapsing the vertical dimension to create BEV features. The idea is first incorporated into a 3D object detection network in [10], and later refined with LiDAR-based depth supervision [18] and temporal stereo [17]. In [56], a more efficient view transformer in terms of memory and computations is introduced by formulating LSS into matrix operations with decomposed ring and ray matrices, and compressing image features and the estimated depth map along the vertical dimension. In this work, we build upon these BEV-based object detectors and integrate our radar-camera fusion as a plug-in module." }, { "figure_ref": [], "heading": "Radar-only object detection", "publication_ref": [ "b41", "b24", "b32", "b4", "b39", "b14", "b33", "b39", "b2", "b40", "b44" ], "table_ref": [], "text": "Automotive radars are a common sensor used in autonomous vehicles for detecting objects in the environment. These radars typically provide a preprocessed list of detections, but the sparsity and lack of semantic information make it difficult to use the data for stand-alone 3D object detection. As a result, much of the research in this area has focused on either semantic segmentation of radar point clouds [42] or experimental setups using the raw radar cube [25,33].\nRecently, there have been some point cloud-based techniques for object detection as well. These can be broadly divided into convolutional and graph-based approaches. Some convolutional approaches assign each point in the point cloud to a cell in a BEV grid and include feature layers such as the maximum RCS and Doppler values [5,40]. Since the BEV grid is similar in structure to an image, traditional convolutional networks can be applied for object detection. Other convolutional approaches use variants of PointPillars [15] to create a pillar grid automatically from the point cloud [34,40]. In contrast, graph neural network based approaches perform object detection directly on the radar point cloud [3,41]. Recent work combines both approaches by first extracting features with a graph neural network and then mapping the features to a BEV grid for further processing [45]. In this work, we examine radar feature encoders inspired by these ideas." }, { "figure_ref": [], "heading": "Sensor fusion object detection", "publication_ref": [ "b36", "b50", "b6", "b45", "b46", "b5", "b18", "b21", "b28", "b29", "b1", "b31", "b43", "b30", "b12", "b11", "b34" ], "table_ref": [], "text": "Sensor fusion aims at leveraging the strengths of a diverse sensor combination. Most sensor fusion research focuses on LiDAR-camera fusion, as LiDAR provides accurate 3D information and cameras provide high semantic value, while sharing the same optical propagation principles. There are several techniques for fusing LiDAR and camera data. Some approaches are based on projecting 2D detections from the camera into a frustum and matching them with LiDAR points to refine the 3D detections [37,51]. Other techniques augment the LiDAR points with semantics from the image and use LiDAR-based object detectors to perform detection [7,46,47]. 
The most recent approaches to LiDAR-camera fusion involve extracting BEV features from both the LiDAR and camera data and fusing them on the BEV plane before applying a joint BEV encoder to perform object detection [6,19,22].\nWhen compared to LiDAR, automotive radar uses a different wavelength and measurement principle. As a consequence, radar typically shows strong RCS fluctuations and reduced resolution in range and angle, resulting in less dense point clouds. Moreover, whereas modern radars also measure elevation, many conventional radars only provide detections on the BEV plane. These differences make the fusion especially challenging and prevent the simple replacement of LiDAR with radar processing. Early research commonly projected the radar detections onto the image plane to associate the data. This approach can be used to find regions of interest in the image [29,30] or to create additional image channels with radar data that can be used with image-based networks [2,32,44].\nMore recent methods have moved away from this 2D approach and instead focus on fusing based on 3D information. One approach is to refine image-based 3D detections with associated radar data [31]. This can be sub-optimal because it discards the possibility of radar-only detections. Another approach is to project 3D regions of interest (ROIs) to the image and BEV plane to extract features from each sensor [13]. Finally, cross-attention has been used to align and fuse the features in 3D [12,35]. In this work, we propose a novel architecture to fuse radar and camera data on the BEV plane." }, { "figure_ref": [], "heading": "RC-BEVFusion", "publication_ref": [], "table_ref": [], "text": "In this section, we describe our proposed model architectures. We start by giving an overview of the general fusion architecture, before providing more detail on the proposed radar encoders, the camera-only networks we use as baselines and the loss function." }, { "figure_ref": [], "heading": "Overview", "publication_ref": [], "table_ref": [], "text": "In this work, we introduce a novel radar branch and use it as a plug-in module on different camera-based 3D object detection networks to improve their performance. The prerequisite for our proposed radar-camera fusion is that the camera-only network uses BEV features as an intermediate representation. The general architecture is shown in Figure 1. The block marked in grey is inherited from an exchangeable camera-only baseline, while the block marked in blue shows our proposed radar plug-in module. First, BEV features are extracted from the images and the radar point cloud separately. To this end, a backbone network is used to extract features from each image before they are transformed into joint BEV features with a view transformer. We set up our radar encoder so that it creates BEV features in the same shape and geometric orientation as the camera BEV features. The features are then fused by concatenation followed by a 1×1 convolutional layer that reduces the embedding dimension to the original dimension of the camera BEV features. We can then use the same BEV encoder and 3D detection heads that are used in the respective camera-only network, introducing little overhead with our fusion." }, { "figure_ref": [], "heading": "Radar encoders", "publication_ref": [ "b4", "b7", "b14", "b55", "b0", "b14", "b37" ], "table_ref": [], "text": "We propose two radar encoders, RadarGridMap and BEVFeatureNet, which we explain in the following. 
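Before the two radar encoders are described in the following paragraphs, the BEV-level fusion step from the Overview can be sketched in a few lines of PyTorch: camera and radar BEV features of identical spatial size are concatenated along the channel axis and reduced back to the camera feature width with a 1×1 convolution, after which the baseline's BEV encoder and detection head are reused unchanged. The class name BEVFusionModule and the channel and grid sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BEVFusionModule(nn.Module):
    """Concatenate camera and radar BEV features and reduce channels with a 1x1 convolution."""
    def __init__(self, cam_channels=64, radar_channels=64):
        super().__init__()
        self.reduce = nn.Conv2d(cam_channels + radar_channels, cam_channels, kernel_size=1)

    def forward(self, cam_bev, radar_bev):
        # both inputs: (batch, channels, H_bev, W_bev) with matching spatial size
        return self.reduce(torch.cat([cam_bev, radar_bev], dim=1))

fusion = BEVFusionModule()
fused = fusion(torch.randn(2, 64, 128, 128), torch.randn(2, 64, 128, 128))  # -> (2, 64, 128, 128)
```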
They each consist of two stages: First, we create a regular, structured BEV grid from the sparse radar point cloud. Then, we apply a convolutional backbone that further encodes the BEV features.\nRadarGridMap Inspired by [5], we design a hand-crafted radar BEV grid. We map each detection to a cell on the grid and fill the cell with four channels: the number of detections per cell, the maximum RCS value, and the minimum and maximum signed compensated Doppler values.\nAfter the grid mapping, we use a small generalized ResNet [8] as our radar backbone. We use 16 layers grouped into residual blocks with BatchNorm and ReLU. We use two downsampling stages that double the channels but reduce the resolution of the BEV grid. We design the size of the BEV grid so that the output of the radar backbone has the same shape as the camera BEV features.\nBEVFeatureNet The BEVFeatureNet illustrated in Figure 2 is inspired by the pillar feature encoding of PointPillars [15], but adapted for radar data. First, we map each point of the radar point cloud to a cell in a predefined BEV grid. In the original pillar feature encoding, each point in the LiDAR point cloud has MatrixVT [56] ResNet-50 MatrixVT ResNet-18 CenterPoint 704x256 SECOND-FPN-128 80-0.8-0.8 SECOND-FPN-64 coordinates x, y, and z, and reflectivity r. Further, the point is augmented with the distances to the arithmetic mean of all points in its pillar, x c , y c , and z c and the offsets to the pillar center x p and y p . The 2+1D radar point cloud in nuScenes [1] does not have a z-coordinate or a reflectivity r, but instead it has a radial velocity v d , measured via the Doppler effect, and an RCS. We therefore discard the z-axis and its augmented feature, replace the reflectivity with the RCS, and include the radial velocity values. We further aggregate multiple radar sweeps and append the timestamp difference to the latest sweep t s to each point. Thus, we obtain the 9-dimensional feature set:\n⃗ F = [x, y, RCS, v d , t s , x c , y c , x p , y p ](1)\nAs in [15], we then use a set of non-empty BEV grid cells B with a fixed number of points per cell N p to create a dense tensor of size (F, B, N p ). If the number of non-empty BEV grid cells or the number of points per cell is lower or higher than the fixed number, we apply zero-padding or random sampling, respectively. For each point, we then apply a simplified PointNet [38] with a 1×1 convolutional layer followed by BatchNorm and ReLU resulting in a tensor of shape (C, B, N p ), before a max operation over the points per cell reduces the dimension to (C, B).\nWe then map the C-dimensional features back to their position on the BEV grid. Finally, the same convolutional backbone as for RadarGridMap is applied." }, { "figure_ref": [], "heading": "Camera-only baselines", "publication_ref": [ "b9", "b20", "b35", "b7", "b21", "b7", "b35", "b17", "b9", "b9", "b8", "b17", "b7", "b51", "b9", "b16", "b17", "b52", "b55", "b35", "b17", "b55" ], "table_ref": [ "tab_1" ], "text": "We selected various camera-only baselines to showcase the plug-in character of our radar fusion module. In this section, we list details for the camera baselines we examined. A compact overview of the modules is given in Table 1.\nBEVDet BEVDet [10] is the first network that uses BEV-based features for object detection on the nuScenes dataset. First, high-level features from each of the N i input images of shape (H i , W i ) are extracted separately. 
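Returning briefly to the BEVFeatureNet described above (the BEVDet details continue below), a minimal PyTorch sketch of the pillar-style radar encoding is given here: the gathered cell tensor of shape (F, B, Np) is passed through a shared 1×1 convolution with BatchNorm and ReLU, max-pooled over the points of each cell, and scattered back onto the BEV grid, where a convolutional backbone would follow. The class name BEVFeatureNetSketch, the grid size, and the random inputs are illustrative assumptions rather than the trained configuration.

```python
import torch
import torch.nn as nn

class BEVFeatureNetSketch(nn.Module):
    """Simplified pillar-style radar encoder (illustrative sizes: F=9 input features, C=32 channels)."""
    def __init__(self, in_feats=9, channels=32):
        super().__init__()
        self.pointnet = nn.Sequential(
            nn.Conv2d(in_feats, channels, kernel_size=1),  # shared per-point 1x1 convolution
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cell_feats, cell_indices, bev_hw):
        # cell_feats: (F, B, Np) zero-padded features, cell_indices: (B, 2) row/col of each cell
        x = self.pointnet(cell_feats.unsqueeze(0))   # (1, C, B, Np)
        x = x.max(dim=3).values.squeeze(0)           # max over the points of each cell -> (C, B)
        bev = x.new_zeros(x.shape[0], *bev_hw)       # empty BEV grid (C, H_bev, W_bev)
        bev[:, cell_indices[:, 0], cell_indices[:, 1]] = x  # duplicate cells simply overwrite here
        return bev

encoder = BEVFeatureNetSketch()
cells = torch.randn(9, 2000, 10)                     # B = 2000 non-empty cells, Np = 10 points
rows_cols = torch.randint(0, 128, (2000, 2))
radar_bev = encoder(cells, rows_cols, (128, 128))    # -> (32, 128, 128)
```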
To this end, a SwinTransformer-Tiny [21] backbone network outputs multi-scale feature maps, which are then processed using the feature pyramid network from [36], FPN-LSS, which upsamples the low resolution feature maps to match the high resolution map, concatenates them and runs them through a ResNet [8] block. This leads to a feature map of shape (N i , H i /8, W i /8, C) with C feature channels. Then, a 1x1 convolution followed by a Softmax is used to predict a depth classification into D pre-defined depth bins. An outer product across the feature and depth classification channels creates a very large tensor of shape (N i , H i /8, W i /8, D, C).\nUsing the intrinsic and extrinsic calibration matrices of each camera, this tensor can be unprojected into a pseudo-pointcloud. The vertical dimension of this pseudo-pointcloud is then collapsed by summing up the features from all points that fall into the same cell of a pre-defined BEV grid with shape (H bev , W bev ).\nWe follow the implementation of [22], which provides computationally efficient BEV pooling and uses a heavier view transformer to enable more accurate depth estimation, which is important to associate the features from radar and camera.\nTo further encode the BEV features, the generalized ResNet [8] structure from [36] followed again by FPN-LSS is used.\nBEVDepth BEVDepth [18] uses a similar structure to BEVDet [10] but aims to achieve a more accurate depth estimation. To this end, the single convolutional layer for depth estimation in BEVDet [10] is replaced with a camera-aware DepthNet module, that concatenates and flattens the camera's intrinsic and extrinsic calibration parameters and uses an MLP to rescale them to match the dimension of the image features C. This calibration vector is used to re-weight the image features using a Squeeze-and-Excitation module [9]. During training, BEVDepth [18] further uses the depth value from projected LiDAR points on the image to directly supervise the depth estimation via binary cross-entropy loss.\nWe use the released configuration, which encodes the images with a ResNet-50 [8] backbone followed by the feature pyramid net from [52], SECOND-FPN, which concatenates upsampled multi-scale feature maps. The BEV encoder uses a ResNet-18 again followed by SECOND-FPN. The view transformer has a lower resolution than the one we use for BEVDet [10]. It also uses a multi-frame fusion with one previous keyframe, thus two images from each camera taken 500 ms apart are used to create the BEV features, resulting in more accurate velocity estimation.\nBEVStereo BEVStereo [17] builds upon BEVDepth [18] and uses the same configuration for most modules. In addition to the monocular depth estimation with DepthNet, it introduces a temporal stereo depth estimation module, which is based on multi-view-stereo [53]. To this end, for each pixel in the current image feature map, several depth candidates are predicted and used to retrieve corresponding features from the previous image features using a homography warping. The confidence of each candidate is evaluated based on the current and previous feature's similarity and used to iteratively optimize the depth candidates. After three iterations, the depth candidates are used to construct the stereo depth. 
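As a short aside before the BEVStereo description continues, the depth-bin lift and BEV pooling of the BEVDet-style view transformer described above can be sketched as follows: per-pixel depth probabilities are combined with image features via an outer product, every pixel and depth-bin pair is treated as a pseudo point with a precomputed 3D position, and the features of all points that fall into the same BEV cell are summed. The function lift_splat, its shapes, and the grid extents are illustrative assumptions, not the released implementation.

```python
import torch

def lift_splat(img_feats, depth_probs, bin_xyz, bev_hw=(128, 128), resolution=0.8, bev_range=51.2):
    """img_feats: (C, H, W); depth_probs: (D, H, W) softmax over depth bins;
    bin_xyz: (D, H, W, 3) ego-frame 3D position of every pixel/depth-bin pair."""
    C = img_feats.shape[0]
    pseudo = depth_probs.unsqueeze(1) * img_feats.unsqueeze(0)       # outer product -> (D, C, H, W)
    feats = pseudo.permute(0, 2, 3, 1).reshape(-1, C)                # (D*H*W, C)
    pts = bin_xyz.reshape(-1, 3)
    Hb, Wb = bev_hw
    col = ((pts[:, 0] + bev_range) / resolution).long()              # x -> BEV column
    row = ((pts[:, 1] + bev_range) / resolution).long()              # y -> BEV row
    valid = (col >= 0) & (col < Wb) & (row >= 0) & (row < Hb)
    bev = torch.zeros(C, Hb * Wb)
    bev.index_add_(1, row[valid] * Wb + col[valid], feats[valid].t())  # sum features per cell
    return bev.view(C, Hb, Wb)

feats = torch.randn(64, 32, 88)                                      # features of a 704x256 image / 8
probs = torch.softmax(torch.randn(59, 32, 88), dim=0)                # D = 59 depth bins, illustrative
bev = lift_splat(feats, probs, torch.randn(59, 32, 88, 3) * 30.0)    # -> (64, 128, 128)
```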
Since the stereo depth is not viable for pixels that do not have corresponding pixels in the previous image, a convolutional WeightNet module is used to combine the monocular depth estimations from the current and previous image with the stereo depth estimation to produce the final depth estimates.\nMatrixVT In MatrixVT [56], an alternative view transformer is proposed. The view transformation step of LSS [36] is first generalized into matrix operations, which leads to a very large and sparse feature transportation tensor of shape (H bev , W bev , N i , H i /8, W i /8, D), which transforms the image feature tensor with depth estimates to the BEV features. To combat this, the image feature and the dense depth prediction are compressed along the vertical dimension, resulting in prime feature and prime depth matrices. This is feasible due to the low response variance in the height dimension of the image. To further reduce its sparsity, the feature transportation tensor is orthogonally decomposed into a ring tensor, which encodes distance information, and a ray tensor, which encodes directional information. Using efficient mathematical operations, the resulting view transformer achieves lower computational cost and memory requirements. We use a configuration based on BEVDepth [18], which only replaces the view transformer with MatrixVT [56] and does not use multi-frame fusion." }, { "figure_ref": [], "heading": "Detection Head and Loss", "publication_ref": [ "b53", "b53", "b15", "b53" ], "table_ref": [], "text": "All examined camera baselines use the CenterPoint [54] detection head, so we can apply the same loss in all architectures. For each class k, CenterPoint [54] predicts a BEV heatmap with peaks at object center locations. The heatmap is trained using a ground-truth heatmap y filled with Gaussian distributions around ground truth object centers. Given the heatmap score p kij at position i, j in the BEV grid and the ground truth y kij , we can compute the Gaussian focal loss [16] as:\nL hm = - 1 N o K k=1 H bev i=1 W bev j=1 (1 -p kij ) α log (p kij ) y kij = 1 (1 -y kij ) β (p kij ) α log (1 -p kij ) otherwise(2)\nwhere N o is the number of objects per image, K is the number of classes, H bev and W bev are the height and width of the BEV grid, and α and β are hyperparameters. In addition, CenterPoint [54] has regression heads that output all parameters needed to decode 3D bounding boxes: a sub-pixel location refinement, a height above ground, the dimensions, the velocity, and the sine and cosine of the yaw rotation angle. The regression heads are trained with L1 loss." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [ "b0" ], "table_ref": [], "text": "In this section, we present our experimental results. We first give some more information on the dataset, evaluation metrics and training settings. We then list quantitative results on the nuScenes validation set to ensure a fair comparison between our proposed fusion networks with their camera-only baselines. We also show results for the nuScenes [1] test benchmark and provide an inference example for a qualitative comparison. Ablative studies, detailed class-wise results, rain and night scene evaluation and more qualitative examples are provided in the supplementary material." 
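Referring back to Eq. (2) in the Detection Head and Loss subsection, the Gaussian focal heatmap loss can be written directly from that definition. The snippet below is a sketch rather than the reference implementation, and the example class count, grid size, and ground-truth heatmap are assumptions for illustration.

```python
import torch

def gaussian_focal_loss(pred, target, alpha=2.0, beta=4.0, eps=1e-6):
    """pred, target: (K, H_bev, W_bev) predicted heatmap and Gaussian-smoothed ground truth."""
    pred = pred.clamp(eps, 1.0 - eps)
    pos = target.eq(1.0).float()                              # cells at annotated object centers
    pos_term = (1.0 - pred).pow(alpha) * pred.log() * pos
    neg_term = (1.0 - target).pow(beta) * pred.pow(alpha) * (1.0 - pred).log() * (1.0 - pos)
    num_objects = pos.sum().clamp(min=1.0)                    # N_o in Eq. (2)
    return -(pos_term + neg_term).sum() / num_objects

heatmap = torch.rand(10, 128, 128)          # K = 10 classes on a 128x128 BEV grid
gt = torch.zeros(10, 128, 128)
gt[3, 40, 70] = 1.0                         # one object center (Gaussian tails omitted for brevity)
loss = gaussian_focal_loss(heatmap, gt)
```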
}, { "figure_ref": [], "heading": "Data and metrics", "publication_ref": [ "b0" ], "table_ref": [], "text": "For our experiments, we need a large-scale, real-world dataset with unmasked camera images, series-production automotive radars and 3D bounding box annotations. To the best of our knowledge, the nuScenes dataset [1] is currently the only dataset that fulfils these requirements. It covers data from six cameras at 12 Hz, five conventional 2+1D radar sensors at 13 Hz and one LiDAR at 20 Hz, as well as 1.4 million 3D bounding boxes annotated at 2 Hz. We follow the official splits into 700 training, 150 validation and 150 test scenes and reduce the 27 annotated classes to the 10 classes evaluated on the benchmark. We also use the official metrics: the mean average precision (mAP), the true positive metrics covering mean errors for translation (mATE), scale (mASE), orientation (mAOE), velocity (mAVE) and nuScenes attribute (mAAE), as well as the condensed nuScenes detection score (NDS)." }, { "figure_ref": [], "heading": "Quantitative Evaluation", "publication_ref": [ "b14", "b15", "b22" ], "table_ref": [ "tab_1" ], "text": "Training details For our radar-camera fusion networks, we adopt the configurations from the camera-only baselines listed in Table 1 to allow for a fair comparison. In addition, we design our radar encoder branch so that the BEV features have the same shape and orientation as the camera BEV features. To increase the point cloud density while limiting the use of outdated data, we choose to aggregate five radar sweeps, which corresponds to 300 ms of past data. The aggregated radar point cloud is still sparse when compared with LiDAR data, so that we can reduce the number of non-empty grid cells and points per grid cell of the BEVFeatureNet drastically with respect to the pillar feature encoding in [15]. We empirically find that setting B = 2000, N p = 10, and C = 32 is sufficient, which allows for little computational overhead. For the Gaussian focal loss, we follow [16] and set α = 2 and β = 4. We train for 20 epochs with an AdamW [23] optimizer, a base learning rate of 2e-4 and weight decay of 1e-2." }, { "figure_ref": [], "heading": "NuScenes validation results", "publication_ref": [], "table_ref": [ "tab_2", "tab_2" ], "text": "We show the results for our radar-camera fusion networks w.r.t. their camera-only baselines on the nuScenes validation split in Table 2. First, we compare the two radar encoders with our model based on BEVDet. In both cases, the proposed fusion offers significant performance increases, with the mAP increasing up to 24% and the NDS 28%. The up to 23% reduced translation error shows how the direct depth measurements provided by the radar can lead to more precise location predictions. The most significant improvement of 55% is achieved for the velocity error, which is enabled by the direct velocity measurement of the radar. This effect also helps determining whether an object is currently moving or stopped, thus reducing the attribute error. The two metrics that remain relatively unaffected by the fusion are scale and orientation error. This is as expected since the sparse radar detections do not help to determine an object's size or orientation. Overall, we observe similar results for the RadarGridMap and the BEVFeatureNet encoder. This demonstrates the effectiveness and modularity of the proposed BEV-based feature fusion. 
In general, we recommend using the BEVFeatureNet because it requires less hand-crafted input, is more scalable, and achieves slightly better results.\nIn the second part of Table 2, we use the BEVFeatureNet encoder as a plugin branch in different camera-only baselines. We observe significant performance increase for all examined architectures, again confirming the modularity of the proposed architecture. There are two potential reasons for the difference in relative performance increase between the camera-only baselines. First, BEVDepth and BEVStereo use temporal fusion and therefore achieve better velocity prediction and overall scores, leading to smaller margins for the radar-camera fusion. Second, we use a BEVDet variant with a heavier view transformer especially designed for fusion on the BEV space. This modification may explain the relatively high performance gains.\nFinally, we also measure the inference latency on an Nvidia RTX 2080 Ti GPU to demonstrate that our fusion approach introduces only small computational overhead due to the efficient radar encoder design. " }, { "figure_ref": [], "heading": "NuScenes test results", "publication_ref": [ "b9", "b13", "b30", "b34" ], "table_ref": [ "tab_3", "tab_3" ], "text": "For the nuScenes test benchmark, we show our results compared to other published radar-camera fusion models in Table 3. We note that many methods tune their models for the benchmark submission by enlarging their network and the input image resolution and by using test time augmentation. All of these techniques trade off smaller performance gains for large computational cost and are therefore not helpful in an automotive application in which fast decisions are required. For instance, the authors of BEVDet achieve 15.6 frames per second with the tiny configuration similar to ours, while the base configuration used for the benchmark achieves only 1.9 frames per second [10]. We therefore combat this trend and only retrain our model with scenes from the training and validation set for the test submission. As shown in Table 3, our proposed RC-BEVFusion with BEVFeatureNet and BEVDet significantly outperforms previously published radar-camera fusion networks in all metrics except the orientation error, even without tuning for the benchmark, while achieving 7.2 frames per second. A key advantage of our architecture compared to existing methods [14,31,35], is that radar and camera features can equally contribute to the final detections, allowing the model to detect objects that might be missed by each single modality." }, { "figure_ref": [ "fig_1" ], "heading": "Qualitative Evaluation", "publication_ref": [ "b9" ], "table_ref": [], "text": "To get a better impression of the difference in object detection performance, we present an inference example for BEVDet [10] and RC-BEVFusion based on BEVFeatureNet and BEVDet in Figure 3. We show the camera-only inference results and radar-camera fusion results in the left and right subfigure, respectively. We show the full surround view with all six cameras, the top row shows the front left, center and right camera, while the bottom row shows the back right, center and left camera. On the bottom, we show a BEV perspective with LiDAR points in black for reference and radar points in red for the fusion network only. 
In each perspective, we show the projected 3D bounding boxes predicted by the networks, where the color indicates the class: pedestrians are blue, cars are yellow and trucks are orange.\nIn the scene, we see a crowded intersection with lots of cars and pedestrians. At first, the visual impression when looking at the camera images is that most objects are well detected. However, comparing the dashed red ellipses in the BEV perspective on the right, we can see that the radar-camera fusion enables much more accurate detection of the pedestrians in the front and back right area, as well as the cars in the front left, front right and back area." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we have presented a novel radar-camera fusion architecture on the BEV plane. We propose two radar encoders and show that they can be integrated into several camera-based architectures that use BEV features. In our experiments, the proposed radar-camera fusion outperforms the camera-only baselines by a large margin, demonstrating its effectiveness. Without tuning the model for the test submission to avoid unrealistic computational cost, we outperform previously published radar-camera fusion networks. Our qualitative evaluation shows improved localization accuracy at daytime and higher recall at nighttime. In future work, we want to study the potential of BEV-based radar-camera fusion with the high-resolution, 3+1D radar sensors appearing in recently introduced datasets." }, { "figure_ref": [], "heading": "A Supplementary Material", "publication_ref": [], "table_ref": [], "text": "In the supplementary material, we present ablative studies regarding augmentation strategies in Section A.1. Further, we list additional quantitative results including class-wise evaluation results in Section A.2. To highlight the advantage of radar-camera fusion, we provide results of a filtered evaluation for rain and night scenes in Section A.3, in which radar is especially useful. Finally, additional qualitative examples are provided in Section A.4." }, { "figure_ref": [], "heading": "A.1 Ablations", "publication_ref": [ "b9", "b9", "b39", "b17" ], "table_ref": [ "tab_4", "tab_5", "tab_2", "tab_6" ], "text": "In Table 4 we show the impact of different augmentation strategies on our RC-BEVFusion with BEVFeatureNet and BEVDet. As in [10], we use a two-stage augmentation. First, 2D augmentation is applied by rotating, resizing and horizontally flipping the images. In the BEV view transformer, the 2D augmentations are reversed so that the original orientation is retained. Then, 3D augmentations are applied to the BEV features, radar points and ground truth bounding boxes. They include scaling, rotation, and horizontal and vertical flipping. Our results show that 3D augmentation is very important, while 2D augmentation helps to improve the results only slightly. This is due to the camera encoder branch being shared across the six cameras, leading to more variety in the images than in the BEV plane. Therefore, the BEV encoder is more prone to overfitting and requires more augmentation [10]. Further, we examine two potential augmentation strategies for our Radar-GridMap encoder, which have been proposed based on a similar setup in [40]. To combat the sparsity of the radar grid, a blurring filter spreads values from the reference cell to neighboring cells depending on the number of detections per reference cell.
Further, the authors find that the distribution of compensated Doppler values is heavy-sided towards zero and introduce a Doppler skewing function to spread the distribution for values close to zero. The results are shown in Table 5. The grid mapping variant without extra augmentation works quite well, confirming the effectiveness of the approach. The blurring filter has little impact on the results and thus is not deemed necessary. The Doppler skewing function leads to higher velocity errors and should therefore be omitted. A.2 Class-wise results\nIn addition to the quantitative evaluation in Section 4.2 and Table 2, we present some more detailed, class-wise results in Table 6. To reduce the amount of data, we only list values of the three most common dynamic classes: car, pedestrian, and truck. The results show that the average precision for important classes increased across the board, with an even greater increase for the larger classes that are more easily detectable by radar: car and especially truck. Translation errors decreased most notably for cars, while scale errors remained relatively unchanged, except for a slight decrease for our model based on BEVDepth [18], which may be due to statistical effects. Orientation errors improved for models without temporal fusion, but increased for our model based on BEVDepth, possibly indicating difficulty in estimating the orientation of some additionally detected cars and trucks. Velocity errors saw a significant reduction, especially for cars and trucks and for models without temporal fusion. Even with temporal fusion, there was a considerable decrease in velocity error. Finally, the attribute error decreased most for pedestrians, with radar making it easier to determine if a pedestrian is moving or standing." }, { "figure_ref": [], "heading": "A.3 Results for rain and night scenes", "publication_ref": [ "b0", "b9" ], "table_ref": [ "tab_7", "tab_6" ], "text": "In this section, we want to evaluate the performance of our radar-camera fusion in adverse conditions for the camera. We therefore filter the scene descriptions in the nuScenes [1] validation set for the terms \"rain\" and \"night\", respectively, to obtain 27 rain and 15 night scenes, on which we run the evaluation for BEVDet [10] and our corresponding RC-BEVFusion algorithm with BEVFeatureNet. Since not all classes are represented in the rain and night scenes, the averaged metrics across all classes are less meaningful. We therefore again present class-wise results of the three most common dynamic classes: car, pedestrian, and truck. The results in Table 7 show that compared to the overall AP increase across all scenes listed in Table 6, there were higher improvements for rain and especially for night scenes. We conclude that the camera-only model struggles in these adverse conditions, particularly in detecting pedestrians and trucks at night. This is where the proposed radar-camera fusion can add the most value. Note that the true positive metrics for pedestrians and trucks at night should be treated with caution due to the low number of matches for the camera-only model. As discussed above, we again observe significant decreases in translation, velocity, and attribute errors, while the scale error remains relatively unchanged. This time, there is also a considerable improvement in orientation error. 
This finding suggests that the camera-only model struggles to accurately predict orientation in these adverse conditions, and further emphasizes the potential of radar-camera fusion in such scenarios." }, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "A.4 Additional qualitative evaluation", "publication_ref": [ "b9" ], "table_ref": [], "text": "We provide additional selected qualitative examples for the camera-only baseline BEVDet [10] in comparison with our proposed radar-camera fusion model with BEVFeatureNet and BEVDet. Figure 4 shows another example at daytime. Our fusion network achieves better performance for long ranges as can be seen with the distant cars in the front and back right area. It also detects an occluded car two vehicles ahead of the ego-vehicle. Figure 5 shows an example during rain. As indicated by the quantitative evaluation in the previous section, the network shows a much better overall scene understanding with the barriers in the back area as well as the vehicles in the front area. In addition, the orientation estimation for the truck on the left is improved and an additional vehicle is detected on the right. However, this frame also shows some failure cases that still exist. The two pedestrians on the left are only detected as one by both networks, which are possibly confused by the umbrellas. Also, the fusion network detects an additional parked car in the front right due to matching radar detections, which is a false positive. Figure 6 shows another example at night with several cars and one pedestrian. We can observe that the camera fails to detect several vehicles, which are difficult to spot given the low light conditions. With our radar-camera fusion, we are able to accurately detect the objects even in difficult lighting conditions. Fig. 6: Inference example at nighttime. Our network detects several cars that were missed by the camera-only network due to difficult lighting." }, { "figure_ref": [], "heading": "Acknowledgment", "publication_ref": [], "table_ref": [], "text": "This work is partly funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) and partly financed by the European Union in the frame of NextGenerationEU within the project \"Solutions and Technologies for Automated Driving in Town\" (FKZ 19A22006P) and partly funded by the Federal Ministry of Education and Research Germany in the funding program Photonics Research Germany under the project FUMOS (FKZ 13N16302). The authors would like to thank the consortia for the successful cooperation." } ]
2023-09-28
[ { "authors": "H Caesar; V Bankiti; A H Lang; S Vora; V E Liong; Q Xu; A Krishnan; Y Pan; G Baldan; O Beijbom", "journal": "", "ref_id": "b0", "title": "nuscenes: A multimodal dataset for autonomous driving", "year": "2019" }, { "authors": "S Chadwick; W Maddern; P Newman", "journal": "IEEE", "ref_id": "b1", "title": "Distant vehicle detection using radar and vision", "year": "2019" }, { "authors": "A Danzer; T Griebel; M Bach; K Dietmayer", "journal": "IEEE", "ref_id": "b2", "title": "2d car detection in radar data with pointnets", "year": "2019" }, { "authors": "Di Feng; C Haase-Schütz; L Rosenbaum; H Hertlein; C Glaeser; F Timm; W Wiesbeck; K Dietmayer", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b3", "title": "Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges", "year": "2020" }, { "authors": "M Dreher; E Erçelik; T Bänziger; A Knoll", "journal": "IEEE", "ref_id": "b4", "title": "Radar-based 2d car detection using deep neural networks", "year": "2020" }, { "authors": "F Drews; Di Feng; F Faion; L Rosenbaum; M Ulrich; C Gläser", "journal": "IEEE", "ref_id": "b5", "title": "Deepfusion: A robust and modular 3d object detector for lidars, cameras and radars", "year": "2022" }, { "authors": "J Fei; W Chen; P Heidenreich; S Wirges; C Stiller", "journal": "IEEE", "ref_id": "b6", "title": "Semanticvoxels: Sequential fusion for 3d pedestrian detection using lidar point cloud and semantic segmentation", "year": "2020" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b7", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "J Hu; L Shen; G Sun", "journal": "", "ref_id": "b8", "title": "Squeeze-and-excitation networks", "year": "2018" }, { "authors": "J Huang; G Huang; Z Zhu; D Du", "journal": "", "ref_id": "b9", "title": "Bevdet: High-performance multi-camera 3d object detection in bird-eye-view", "year": "2021" }, { "authors": "W C Hung; H Kretzschmar; V Casser; J J Hwang; D Anguelov", "journal": "", "ref_id": "b10", "title": "Let-3dap: Longitudinal error tolerant 3d average precision for camera-only 3d detection", "year": "2022" }, { "authors": "J J Hwang; H Kretzschmar; J Manela; S Rafferty; N Armstrong-Crews; T Chen; D Anguelov", "journal": "Springer", "ref_id": "b11", "title": "Cramnet: Camera-radar fusion with ray-constrained crossattention for robust 3d object detection", "year": "2022" }, { "authors": "Y Kim; J W Choi; D Kum", "journal": "IEEE", "ref_id": "b12", "title": "Grif net: Gated region of interest fusion network for robust 3d object detection from radar point cloud and monocular image", "year": "2020" }, { "authors": "Y Kim; S Kim; J W Choi; D Kum", "journal": "", "ref_id": "b13", "title": "Craft: Camera-radar 3d object detection with spatio-contextual fusion transformer", "year": "2022" }, { "authors": "A H Lang; S Vora; H Caesar; L Zhou; J Yang; O Beijbom", "journal": "", "ref_id": "b14", "title": "Pointpillars: Fast encoders for object detection from point clouds", "year": "2019" }, { "authors": "H Law; J Deng", "journal": "", "ref_id": "b15", "title": "Cornernet: Detecting objects as paired keypoints", "year": "2018" }, { "authors": "Y Li; H Bao; Z Ge; J Yang; J Sun; Z Li", "journal": "", "ref_id": "b16", "title": "Bevstereo: Enhancing depth estimation in multi-view 3d object detection with dynamic temporal stereo", "year": "2022" }, { "authors": "Y Li; Z Ge; G Yu; J Yang; Z Wang; Y Shi; J Sun; Z Li", "journal": 
"", "ref_id": "b17", "title": "Bevdepth: Acquisition of reliable depth for multi-view 3d object detection", "year": "2022" }, { "authors": "T Liang; H Xie; K Yu; Z Xia; Z Lin; Y Wang; T Tang; B Wang; Z Tang", "journal": "", "ref_id": "b18", "title": "Bevfusion: A simple and robust lidar-camera fusion framework", "year": "2022" }, { "authors": "Y Liu; T Wang; X Zhang; J Sun", "journal": "", "ref_id": "b19", "title": "Petr: Position embedding transformation for multi-view 3d object detection", "year": "2022" }, { "authors": "Z Liu; Y Lin; Y Cao; H Hu; Y Wei; Z Zhang; S Lin; B Guo", "journal": "", "ref_id": "b20", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Z Liu; H Tang; A Amini; X Yang; H Mao; D Rus; S Han", "journal": "", "ref_id": "b21", "title": "Bevfusion: Multi-task multi-sensor fusion with unified bird's-eye view representation", "year": "2022" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b22", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "X Ma; Y Zhang; D Xu; D Zhou; S Yi; H Li; W Ouyang", "journal": "", "ref_id": "b23", "title": "Delving into localization errors for monocular 3d object detection", "year": "2021" }, { "authors": "B Major; D Fontijne; A Ansari; R Teja Sukhavasi; R Gowaikar; M Hamilton; S Lee; S Grzechnik; S Subramanian", "journal": "", "ref_id": "b24", "title": "Vehicle detection with automotive radar using deep learning on range-azimuth-doppler tensors", "year": "2019" }, { "authors": "F Manhardt; W Kehl; A Gaidon", "journal": "", "ref_id": "b25", "title": "Roi-10d: Monocular lifting of 2d detection to 6d pose and metric shape", "year": "2019" }, { "authors": "M Meyer; G Kuschk", "journal": "IEEE", "ref_id": "b26", "title": "Automotive radar dataset for deep learning based 3d object detection", "year": "2019" }, { "authors": "A Mousavian; D Anguelov; J Flynn; J Kosecka", "journal": "", "ref_id": "b27", "title": "3d bounding box estimation using deep learning and geometry", "year": "2017" }, { "authors": "R Nabati; H Qi", "journal": "IEEE", "ref_id": "b28", "title": "Rrpn: Radar region proposal network for object detection in autonomous vehicles", "year": "2019" }, { "authors": "R Nabati; H Qi", "journal": "", "ref_id": "b29", "title": "Radar-camera sensor fusion for joint object detection and distance estimation in autonomous vehicles", "year": "2020" }, { "authors": "R Nabati; H Qi", "journal": "", "ref_id": "b30", "title": "Centerfusion: Center-based radar and camera fusion for 3d object detection", "year": "2021" }, { "authors": "F Nobis; M Geisslinger; M Weber; J Betz; M Lienkamp", "journal": "IEEE", "ref_id": "b31", "title": "A deep learningbased radar and camera sensor fusion architecture for object detection", "year": "2019" }, { "authors": "A Palffy; J Dong; J F P Kooij; D M Gavrila", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b32", "title": "Cnn based road user detection using the 3d radar cube", "year": "2020" }, { "authors": "A Palffy; E Pool; S Baratam; J F P Kooij; D M Gavrila", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b33", "title": "Multi-class road user detection with 3+ 1d radar in the view-of-delft dataset", "year": "2022" }, { "authors": "S Pang; D Morris; H Radha", "journal": "", "ref_id": "b34", "title": "Transcar: Transformer-based camera-and-radar fusion for 3d object detection", "year": "2023" }, { "authors": "J Philion; S Fidler", "journal": "Springer", "ref_id": "b35", 
"title": "Lift, splat, shoot: Encoding images from arbitrary camera rigs by implicitly unprojecting to 3d", "year": "2020" }, { "authors": "C R Qi; W Liu; C Wu; H Su; L J Guibas", "journal": "", "ref_id": "b36", "title": "Frustum pointnets for 3d object detection from rgb-d data", "year": "2018" }, { "authors": "C R Qi; H Su; K Mo; L J Guibas", "journal": "", "ref_id": "b37", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "T Roddick; A Kendall; R Cipolla", "journal": "", "ref_id": "b38", "title": "Orthographic feature transform for monocular 3d object detection", "year": "2018" }, { "authors": "N Scheiner; F Kraus; N Appenrodt; J Dickmann; B Sick", "journal": "AI Perspectives", "ref_id": "b39", "title": "Object detection for automotive radar point clouds-a comparison", "year": "2021" }, { "authors": "N Scheiner; O Schumann; F Kraus; N Appenrodt; J Dickmann; B Sick", "journal": "", "ref_id": "b40", "title": "Off-the-shelf sensor vs. experimental radar-how much resolution is necessary in automotive radar classification?", "year": "2020" }, { "authors": "O Schumann; M Hahn; J Dickmann; C Wöhler", "journal": "IEEE", "ref_id": "b41", "title": "Semantic segmentation on radar point clouds", "year": "2018" }, { "authors": "A Simonelli; S R Bulo; L Porzi; M López-Antequera; P Kontschieder", "journal": "", "ref_id": "b42", "title": "Disentangling monocular 3d object detection", "year": "2019" }, { "authors": "L Stäcker; P Heidenreich; J Rambach; D Stricker", "journal": "", "ref_id": "b43", "title": "Fusion point pruning for optimized 2d object detection with radar-camera fusion", "year": "2022" }, { "authors": "M Ulrich; S Braun; D Köhler; D Niederlöhner; F Faion; C Gläser; H Blume", "journal": "", "ref_id": "b44", "title": "Improved orientation estimation and detection with hybrid object detection networks for automotive radar", "year": "2022" }, { "authors": "S Vora; A H Lang; B Helou; O Beijbom", "journal": "", "ref_id": "b45", "title": "Pointpainting: Sequential fusion for 3d object detection", "year": "2020" }, { "authors": "C Wang; C Ma; M Zhu; X Yang", "journal": "", "ref_id": "b46", "title": "Pointaugmenting: Cross-modal augmentation for 3d object detection", "year": "2021" }, { "authors": "T Wang; X Zhu; J Pang; D Lin", "journal": "", "ref_id": "b47", "title": "Fcos3d: Fully convolutional one-stage monocular 3d object detection", "year": "2021" }, { "authors": "Y Wang; W L Chao; D Garg; B Hariharan; M Campbell; K Q Weinberger", "journal": "", "ref_id": "b48", "title": "Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving", "year": "2019" }, { "authors": "Y Wang; V C Guizilini; T Zhang; Y Wang; H Zhao; J Solomon", "journal": "PMLR", "ref_id": "b49", "title": "Detr3d: 3d object detection from multi-view images via 3d-to-2d queries", "year": "2022" }, { "authors": "Z Wang; K Jia", "journal": "IEEE", "ref_id": "b50", "title": "Frustum convnet: Sliding frustums to aggregate local point-wise features for amodal 3d object detection", "year": "2019" }, { "authors": "Y Yan; Y Mao; B Li", "journal": "Sensors", "ref_id": "b51", "title": "Second: Sparsely embedded convolutional detection", "year": "2018" }, { "authors": "Y Yao; Z Luo; S Li; T Fang; L Quan", "journal": "", "ref_id": "b52", "title": "Mvsnet: Depth inference for unstructured multi-view stereo", "year": "2018" }, { "authors": "T Yin; X Zhou; P Krahenbuhl", "journal": "", "ref_id": "b53", "title": "Center-based 3d 
object detection and tracking", "year": "2021" }, { "authors": "L Zheng; Z Ma; X Zhu; B Tan; S Li; K Long; W Sun; S Chen; L Zhang; M Wan", "journal": "", "ref_id": "b54", "title": "Tj4dradset: A 4d radar dataset for autonomous driving", "year": "2022" }, { "authors": "H Zhou; Z Ge; Z Li; X Zhang", "journal": "", "ref_id": "b55", "title": "Matrixvt: Efficient multi-camera to bev transformation for 3d perception", "year": "2022" }, { "authors": "X Zhou; D Wang; P Krähenbühl", "journal": "", "ref_id": "b56", "title": "Objects as points", "year": "2019" }, { "authors": "Y Zhou; L Liu; H Zhao; M López-Benítez; L Yu; Y Yue", "journal": "Sensors", "ref_id": "b57", "title": "Towards deep radar perception for autonomous driving: Datasets, methods, and challenges", "year": "2022" } ]
[ { "formula_coordinates": [ 7, 233.39, 384.62, 247.2, 12.17 ], "formula_id": "formula_0", "formula_text": "\vec{F} = [x, y, RCS, v_d, t_s, x_c, y_c, x_p, y_p] \quad (1)" }, { "formula_coordinates": [ 9, 147.58, 460.88, 333.01, 30.72 ], "formula_id": "formula_1", "formula_text": "L_{hm} = -\frac{1}{N_o} \sum_{k=1}^{K} \sum_{i=1}^{H_{bev}} \sum_{j=1}^{W_{bev}} \begin{cases} (1-p_{kij})^{\alpha} \log(p_{kij}) & y_{kij} = 1 \\ (1-y_{kij})^{\beta} (p_{kij})^{\alpha} \log(1-p_{kij}) & \text{otherwise} \end{cases} \quad (2)" } ]
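Formula (1) above is the augmented per-point radar feature vector that the BEVFeatureNet consumes. As a rough illustration of the pillar-style encoding described in the paper (grouping augmented points into at most B non-empty BEV cells with up to N_p points each, applying a simplified PointNet of linear layer, batch norm, ReLU and max-pooling, and remapping the C-dimensional cell features onto the BEV grid), here is a hypothetical PyTorch sketch. The class name, default grid size, tensor layouts, masking scheme and scatter indexing are our own assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class BEVFeatureNetSketch(nn.Module):
    """Simplified PointNet over radar points grouped into BEV grid cells (sketch)."""

    def __init__(self, in_dim=9, out_dim=32, h_bev=128, w_bev=128):
        super().__init__()
        self.h_bev, self.w_bev = h_bev, w_bev   # BEV grid size is illustrative
        # Shared per-point encoder: linear layer + batch norm + ReLU.
        self.linear = nn.Linear(in_dim, out_dim)
        self.bn = nn.BatchNorm1d(out_dim)

    def forward(self, cell_points, cell_mask, cell_rows, cell_cols):
        # cell_points: (B, N_p, in_dim) augmented radar features per non-empty cell,
        #              e.g. [x, y, RCS, v_d, t_s, x_c, y_c, x_p, y_p] as in Eq. (1),
        #              zero-padded to N_p points per cell.
        # cell_mask:   (B, N_p) boolean, True where a real (non-padded) point exists.
        # cell_rows, cell_cols: (B,) integer BEV grid indices of each non-empty cell.
        b, n_p, _ = cell_points.shape
        x = self.linear(cell_points.reshape(b * n_p, -1))
        x = torch.relu(self.bn(x)).reshape(b, n_p, -1)
        # Max-pool over the points of each cell, ignoring padded entries.
        x = x.masked_fill(~cell_mask.unsqueeze(-1), float("-inf"))
        cell_feat = x.max(dim=1).values                     # (B, out_dim)
        # Remap the encoded cell features onto a dense BEV grid.
        bev = cell_points.new_zeros(cell_feat.shape[1], self.h_bev, self.w_bev)
        bev[:, cell_rows, cell_cols] = cell_feat.t()
        return bev                                          # (out_dim, H_bev, W_bev)
```

With the setting reported in the training details (B = 2000 non-empty cells, N_p = 10 points per cell, C = 32 channels), such an encoder adds only little computational overhead on top of the camera branch.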
RC-BEVFusion: A Plug-In Module for Radar-Camera Bird's Eye View Feature Fusion
Radars and cameras are among the most frequently used sensors for advanced driver assistance systems and automated driving research. However, there has been surprisingly little research on radar-camera fusion with neural networks. One of the reasons is a lack of large-scale automotive datasets with radar and unmasked camera data, with the exception of the nuScenes dataset. Another reason is the difficulty of effectively fusing the sparse radar point cloud on the bird's eye view (BEV) plane with the dense images on the perspective plane. The recent trend of camera-based 3D object detection using BEV features has enabled a new type of fusion, which is better suited for radars. In this work, we present RC-BEVFusion, a modular radar-camera fusion network on the BEV plane. We propose two novel radar encoder branches and show that they can be incorporated into several state-of-the-art camera-based architectures. We show significant performance gains of up to a 28% increase in the nuScenes detection score, an important step in radar-camera fusion research. Without tuning our model for the nuScenes benchmark, we achieve the best result among all published methods in the radar-camera fusion category.
Lukas Stäcker; Shashank Mishra; Philipp Heidenreich; Jason Rambach; Didier Stricker
[ { "figure_caption": "Fig. 3 :3Fig.3: Inference example at daytime. Predicted 3D bounding boxes projected to all six cameras (top) and BEV plane (bottom) with LiDAR points (black) and radar points (red) for reference. Our proposed fusion network more accurately detects pedestrians and vehicles (s. dashed red ellipses).", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig.4: Inference example at daytime. Our network more accurately detects distant cars in the front and back right area as well as an occluded car directly in front.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: Inference example during rain. Our network has much better overall scene understanding with the road barriers in the back and the cars in front and on the right.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "BEVFeatureNet radar encoder. The radar detections are mapped to BEV grid cells for augmentation and restructuring into a dense tensor. After applying a simplified PointNet, the encoded features are remapped to the BEV grid.", "figure_data": "Point AugmentationPointNet FeatureFeature Remappingand RestructuringEncodingto BEV GridRadar Points in BEV GridEncoded BEV FeaturesDx,y,rcs,vd,ts,xc,yc,xp,ypx,y,rcs,vd,ts,xc,yc,xp,yp x,y,rcs,vd,ts,xc,yc,xp,ypCx,y,rcs,vd,ts,xc,yc,xp,ypx,y,rcs,vd,ts,xc,yc,xp,ypBxNBx,y,rcs,vd,ts,xc,yc,xp,ypFig. 2:", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Camera-only network configurations. For FPN-LSS, SECOND-FPN and Generalized ResNet, we indicate the number of output channels of the module. For LSS and MatrixVT, we indicate the number of channels and the resolution of the output BEV feature in meters. MF denotes multi-frame temporal fusion.", "figure_data": "ModelCamera Encoder View Transf. BEV EncoderHeadInput Res. MFBEVDet [10]Swin-Tiny FPN-LSS-256LSS 80-0.4-0.4Gen. ResNet-512 CenterPoint 704x256 FPN-LSS-256BEVDepth [18]ResNet-50 SECOND-FPN-128 80-0.8-0.8 SECOND-FPN-64 LSS ResNet-18CenterPoint 704x256 ✓BEVStereo [17]ResNet-50 SECOND-FPN-128 80-0.8-0.8 SECOND-FPN-64 LSS ResNet-18CenterPoint 704x256 ✓", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experimental results for our radar-camera fusion used in different architectures on the nuScenes val split. The inference latency T is measured on a Nvidia RTX 2080 Ti. *We use the implementation of BEVDet-Tiny with a heavier view transformer from[22]. † We list the results as reported by the authors.", "figure_data": "Cam. 
model Radar modelmAP↑ NDS↑ mATE↓ mASE↓ mAOE↓ mAVE↓ mAAE↓ T [ms][10] BEVDet*None0.350 0.4110.6600.2750.5320.9180.260122Ours BEVDet*RadarGridMap∆r0.429 0.525 23% 28%0.523 -21%0.272 -1%0.507 -5%0.412 -55%0.183 -30%132Ours BEVDet*BEVFeatureNet∆r0.434 0.525 24% 28%0.511 -23%0.270 -2%0.527 -1%0.421 -54%0.182 -30%139[18] † BEVDepth None0.359 0.4800.6120.2690.5070.4090.201132Ours BEVDepth BEVFeatureNet∆r0.405 0.521 13% 9%0.542 -11%0.274 2%0.512 1%0.309 -24%0.181 -10%146[17] † BEVStereo None0.372 0.5000.5980.2700.4380.3670.190308Ours BEVStereo BEVFeatureNet∆r0.423 0.545 14% 9%0.504 -16%0.268 -1%0.453 3%0.270 -26%0.174 -8%322[56] MatrixVT None0.319 0.4000.6690.2810.4940.9120.23854Ours MatrixVT BEVFeatureNet∆r0.386 0.495 21% 24%0.549 -18%0.275 -2%0.539 9%0.423 -54%0.193 -19%64", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Experimental results for published radar-camera fusion models on the nuScenes test benchmark.", "figure_data": "ModelmAP↑ NDS↑ mATE↓ mASE↓ mAOE↓ mAVE↓ mAAE↓CenterFusion [31] 0.326 0.449 0.6310.2610.5160.6140.115CRAFT [14]0.411 0.523 0.4670.2680.4560.5190.114TransCAR [35]0.422 0.522 0.6300.260 0.3830.4950.121Ours (BEVDet) 0.476 0.567 0.444 0.244 0.4610.4390.128", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study for 2D and 3D augmentations. All experiments conducted with the model based on BEVFeatureNet and BEVDet.", "figure_data": "3D 2D mAP↑ NDS↑ mATE↓ mAVE↓0.3800.4530.5950.676✓0.4190.5130.5200.478✓✓ 0.434 0.5250.5110.421", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study for augmentation strategies of the RadarGridMap encoder: a blurring filter (BF) and a Doppler skewing (DS) technique. All experiments conducted with the model based on BEVDet.", "figure_data": "BF DS mAP↑ NDS↑ mATE↓ mAVE↓0.429 0.5250.5230.412✓0.4240.5240.5170.439✓0.4250.5080.5230.482✓✓0.4230.4870.5160.650", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Experimental class-wise results for our radar-camera fusion used in different architectures on the three most common dynamic classes of the nuScenes val split. *We use the implementation of BEVDet-Tiny with a heavier view transformer from[22]. † We list the results as reported by the authors.", "figure_data": "Cam. 
model Radar modelClassAP↑ ATE↓ ASE↓ AOE↓ AVE↓ AAE↓car0.538 0.529 0.157 0.128 0.992 0.232[10] BEVDet*Noneped.0.377 0.700 0.304 1.383 0.872 0.759truck0.290 0.679 0.209 0.165 0.911 0.252car0.700 0.315 0.156 0.106 0.395 0.196 ∆r 30% -40% -1% -17% -60% -16%Ours BEVDet*BEVFeatureNetped.0.468 0.528 0.300 1.016 0.701 0.329 ∆r 24% -25% -1% -27% -20% -57%truck0.405 0.493 0.206 0.131 0.329 0.202 ∆r 40% -27% -1% -21% -64% -20%car0.559 0.475 0.157 0.112 0.370 0.205[18] † BEVDepth Noneped.0.363 0.690 0.297 0.831 0.491 0.244truck0.270 0.659 0.196 0.103 0.356 0.181car0.661 0.356 0.162 0.134 0.289 0.193 ∆r 18% -25% 3% 20% -22% -6%Ours BEVDepth BEVFeatureNetped.0.410 0.585 0.295 0.732 0.434 0.208 ∆r 13% -15% -1% -12% -12% -15%truck0.332 0.563 0.214 0.162 0.254 0.198 ∆r 23% -15% 9% 57% -29% 9%car0.567 0.457 0.156 0.104 0.343 0.204[17] † BEVStereo Noneped.0.402 0.653 0.297 0.803 0.479 0.249truck0.299 0.650 0.205 0.103 0.321 0.197car0.687 0.324 0.159 0.106 0.250 0.192 ∆r 21% -29% 2% 2% -27% -6%Ours BEVStereo BEVFeatureNetped.0.469 0.530 0.295 0.694 0.413 0.197 ∆r 17% -19% -1% -14% -14% -21%truck0.364 0.516 0.208 0.106 0.214 0.184 ∆r 22% -21% 1% 3% -33% -7%car0.517 0.529 0.162 0.155 1.049 0.221[56] MatrixVTNoneped.0.309 0.746 0.300 1.204 0.813 0.465truck0.244 0.713 0.213 0.154 0.917 0.219car0.658 0.346 0.162 0.141 0.400 0.190 ∆r 27% -35% 0% -9% -62% -14%Ours MatrixVTBEVFeatureNetped.0.386 0.618 0.298 1.071 0.695 0.335 ∆r 25% -17% -1% -11% -15% -28%truck0.320 0.547 0.214 0.133 0.337 0.201 ∆r 31% -23% 0% -14% -63% -8%", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Experimental class-wise results for our radar-camera fusion based on BEVDet on the three most common dynamic classes of the nuScenes val split, filtered by rain and night scenes, respectively. *We use the implementation of BEVDet-Tiny with a heavier view transformer from[22].", "figure_data": "SplitCam. model Radar modelClassAP↑ ATE↓ ASE↓ AOE↓ AVE↓ AAE↓car0.517 0.548 0.158 0.133 0.693 0.151[10] BEVDet*Noneped.0.218 0.748 0.360 1.679 1.043 0.725truck0.276 0.751 0.216 0.153 0.479 0.120Raincar0.723 0.304 0.160 0.121 0.277 0.141 ∆r 40% -45% 1% -9% -60% -7%Ours BEVDet*BEVFeatureNetped.0.338 0.516 0.360 1.011 0.849 0.332 ∆r 55% -31% 0% -40% -19% -54%truck0.449 0.539 0.200 0.120 0.235 0.108 ∆r 63% -28% -7% -22% -51% -10%car0.403 0.527 0.137 0.111 1.619 0.485[10] BEVDet*Noneped.0.045 0.664 0.296 1.509 0.675 0.469truck0.057 0.630 0.221 0.151 2.795 0.582Nightcar0.611 0.310 0.137 0.092 0.538 0.469 ∆r 52% -41% 0% -17% -67% -3%Ours BEVDet*BEVFeatureNetped.0.191 0.262 0.283 0.810 0.592 0.037 ∆r 324% -61% -4% -46% -12% -92%truck0.265 0.304 0.181 0.127 0.616 0.697 ∆r 365% -52% -18% -16% -78% 20%", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work provides a review on multi-modular automotive object detection, which the citing paper builds upon in the development of advanced driver assistance systems (ADAS) and automated driving functions."}, {"Category": "Data Source", "Citation": "[58]", "Explanation": "The cited work highlights the limitations of radar sensors in providing detailed information about the shape and texture of objects, which the citing paper considers in the design of a perception system for ADAS and automated driving functions."}, {"Category": "Methodological Basis", "Citation": "[11,24]", "Explanation": "The cited works discuss the limitations of cameras in providing accurate depth estimation and the effect of changes in lighting conditions on their performance, which the citing paper takes into account in the design of a perception system for ADAS and automated driving functions."}, {"Category": "Methodological Basis", "Citation": "[10,36]", "Explanation": "The cited works on camera-only networks using view transformers provide a new type of fusion on the BEV plane that the citing paper leverages in the development of the proposed RC-BEVFusion architecture."}, {"Category": "Extension or Continuation", "Citation": "[14,31,35]", "Explanation": "The cited works on radar-camera fusion techniques provide a basis for the development of the proposed RC-BEVFusion architecture, which extends the research by allowing radar and camera features to equally contribute to the final detections."}, {"Category": "Data Source", "Citation": "[1]", "Explanation": "The nuScenes dataset is cited as the only one that fulfils the requirements of the citing paper for a large-scale automotive dataset with radar point clouds, unmasked camera images, and 3D object annotations."}, {"Category": "Data Source", "Citation": "[1]", "Explanation": "The nuScenes dataset is used as the primary data source for the experiments conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The Astyx dataset is mentioned as a small dataset that the citing paper does not use in its research due to its size."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The View-of-Delft dataset is mentioned as a medium-sized dataset with limited visual variety in the images, which the citing paper does not use in its research due to its limitations."}, {"Category": "Methodological Basis", "Citation": "[55]", "Explanation": "The TJ4DRadSet dataset is mentioned as a potential future option for the citing paper but is not fully released yet, indicating that the citing paper may consider using it in future research."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work provides early approaches to image-based 3D object detection that focus on using known geometric information to estimate bounding boxes from 2D detections, which the citing paper builds upon in its research on the topic."}, {"Category": "Methodological Basis", "Citation": "[43,48,57]", "Explanation": "The cited works extend existing 2D object detection models with additional detection heads specifically designed for 3D object detection, which the citing paper adopts in its research to improve the performance of image-based 3D object detection."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The cited work uses predicted depth maps as auxiliary features, which the citing paper acknowledges as a 
data source in its research on image-based 3D object detection."}, {"Category": "Data Source", "Citation": "[49]", "Explanation": "The cited work processes a pseudo-LiDAR point cloud using LiDAR-based object detectors, which the citing paper utilizes as a data source in its research to improve the accuracy of image-based 3D object detection."}, {"Category": "Extension or Continuation", "Citation": "[20,50]", "Explanation": "The cited works use transformer-based techniques to process sequences of data and perform self-attention to learn more complex relationships between features, which the citing paper extends in its research to further improve the performance of image-based 3D object detection."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work uses predicted depth maps as auxiliary features to create a pseudo-LiDAR point cloud, which the citing paper extends in its research to develop BEV-based object detectors for image-based 3D object detection."}, {"Category": "Methodological Basis", "Citation": "[39]", "Explanation": "The cited work introduces the concept of orthographic feature transform, which the citing paper adopts to transform features from the image plane to the BEV plane in their research on object detection."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work introduces the LSS view transformer, which the citing paper uses to lift image features into a 3D pseudo point cloud and then collapse the vertical dimension to create BEV features in their research on object detection."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work is the first to incorporate the LSS view transformer into a 3D object detection network, which the citing paper builds upon in their research on object detection."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work refines the LSS view transformer with LiDAR-based depth supervision, which the citing paper may have considered in their research on object detection."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work refines the LSS view transformer with temporal stereo, which the citing paper may have considered in their research on object detection."}, {"Category": "Methodological Basis", "Citation": "[56]", "Explanation": "The cited work introduces a more efficient view transformer in terms of memory and computations, which the citing paper may have considered in their research on object detection by formulating LSS into matrix operations and compressing features along the vertical dimension."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work PointPillars is used as a basis for creating a pillar grid in the citing paper, which is a key method for object detection in the point cloud."}, {"Category": "Extension or Continuation", "Citation": "[3,41]", "Explanation": "The cited works on graph neural network based approaches for object detection in the radar point cloud are extended in the citing paper to further explore the use of this method in the field of autonomous vehicles."}, {"Category": "Data Source", "Citation": "[42]", "Explanation": "The cited work on semantic segmentation of radar point clouds is used as a data source in the citing paper to provide a foundational understanding of the techniques used in the field of radar point cloud analysis."}, {"Category": "Methodological Basis", "Citation": "[25,33]", 
"Explanation": "The cited works on experimental setups using the raw radar cube are used as a methodological basis in the citing paper to provide a reference for the development of new techniques in the field of radar point cloud analysis."}, {"Category": "Methodological Basis", "Citation": "[45]", "Explanation": "The cited work provides a method of combining graph neural networks and BEV grids for feature extraction, which the citing paper adopts in their research on radar feature encoders."}, {"Category": "Data Source", "Citation": "[37,51]", "Explanation": "The cited works provide the techniques for projecting 2D detections from the camera into a frustum and matching them with LiDAR points to refine 3D detections, which the citing paper uses as a method for sensor fusion."}, {"Category": "Data Source", "Citation": "[7,46,47]", "Explanation": "The cited works present techniques for augmenting LiDAR points with semantics from the image and using LiDAR-based object detectors for detection, which the citing paper leverages for sensor fusion."}, {"Category": "Data Source", "Citation": "[6,19,22]", "Explanation": "The cited works extract BEV features from both LiDAR and camera data and fuse them on the BEV plane before applying a joint BEV encoder for object detection, which the citing paper adopts as a method for sensor fusion."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work provides a comparison of LiDAR and radar sensors, highlighting the differences in wavelength and measurement principles, which the citing paper uses to discuss the challenges in sensor fusion with radar data."}, {"Category": "Methodological Basis", "Citation": "[29,30]", "Explanation": "The cited works provide early research on projecting radar detections onto the image plane, which the citing paper uses to find regions of interest in the image."}, {"Category": "Methodological Basis", "Citation": "[12,35]", "Explanation": "The cited works propose the use of cross-attention to align and fuse features in 3D, which the citing paper adopts in their novel architecture to fuse radar and camera data on the BEV plane."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work provides a hand-crafted radar BEV grid mapping method that the citing paper adopts in the design of the radar encoder."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work introduces the generalized ResNet model, which the citing paper uses as the radar backbone in the design of the radar encoder."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work provides a pillar feature encoding method for radar data that the citing paper adapts in the design of the BEVFeatureNet radar encoder."}, {"Category": "Data Source", "Citation": "[56]", "Explanation": "The cited work provides the MatrixVT ResNet-50 and MatrixVT ResNet-18 models that are used in the point feature encoding process in the citing paper."}, {"Category": "Data Source", "Citation": "[1]", "Explanation": "The cited work is the nuScenes dataset, which is used in the radar point cloud mapping process in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work provides a set of non-empty BEV grid cells and a fixed number of points per cell, which the citing paper adopts in their research to create a dense tensor and apply a PointNet model for feature extraction."}, {"Category": "Methodological Basis", "Citation": 
"[10]", "Explanation": "BEVDet is a method that uses BEV-based features for object detection on the nuScenes dataset, which the citing paper adopts to build upon in their research on camera-only baselines for radar fusion."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work provides a computationally efficient method for BEV pooling, which the citing paper adopts in their implementation of the BEV features."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work provides a generalized ResNet structure for encoding the BEV features, which the citing paper uses in their implementation."}, {"Category": "Supporting Evidence", "Citation": "[18]", "Explanation": "The cited work, BEVDepth, uses a similar structure to BEVDet to achieve accurate depth estimation, which the citing paper references to support the use of a more accurate depth estimation in their research."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work BEVDet is used as a methodological basis for the camera-aware DepthNet module in the citing paper, which replaces the single convolutional layer for depth estimation and uses a camera calibration vector to re-weight the image features."}, {"Category": "Data Source", "Citation": "[18]", "Explanation": "The cited work BEVDepth is used as a data source for the training of the depth estimation module in the citing paper, as it provides the depth value from projected LiDAR points on the image to directly supervise the depth estimation via binary cross-entropy loss."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "BEVStereo builds upon BEVDepth and adopts the same configuration for most modules, indicating a methodological basis for the research conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[18]", "Explanation": "The citing paper extends the research of BEVDepth by introducing a temporal stereo depth estimation module based on multi-view-stereo, exploring new dimensions in the field of monocular depth estimation."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The citing paper adopts the homography warping technique from multi-view-stereo to retrieve features from the previous image in the temporal stereo depth estimation module."}, {"Category": "Methodological Basis", "Citation": "[56]", "Explanation": "MatrixVT proposes an alternative view transformer that the citing paper may consider for future research in the field of view transformation."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The view transformation step of LSS is generalized into matrix operations, which serves as the methodological basis for the view transformer in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[18]", "Explanation": "The configuration used in the citing paper is based on BEVDepth, which is a continuation of the research in the cited work."}, {"Category": "Data Source", "Citation": "[56]", "Explanation": "The view transformer in the citing paper is replaced with MatrixVT, which is a data source for the new configuration used in the study."}, {"Category": "Methodological Basis", "Citation": "[54]", "Explanation": "The cited work CenterPoint is used as the detection head in the camera baselines, providing a methodological basis for the detection process in the citing paper."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The 
cited work Gaussian focal loss is used in the heatmap training process, serving as a data source for the loss function in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[54]", "Explanation": "The cited work, CenterPoint, provides a method for regressing 3D bounding boxes in a sub-pixel location refinement, height above ground, dimensions, velocity, and yaw rotation angle using L1 loss, which the citing paper adopts in their research."}, {"Category": "Data Source", "Citation": "[1]", "Explanation": "The cited work, nuScenes, is the source of the dataset used in the citing paper for evaluation and analysis."}, {"Category": "Data Source", "Citation": "[1]", "Explanation": "The nuScenes dataset is the primary data source for the experiments conducted in the citing paper, providing the necessary information for camera images, radar sensors, and 3D bounding box annotations."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work provides a pillar feature encoding method that the citing paper adopts in their radar encoder branch to increase the point cloud density and reduce the number of non-empty grid cells and points per grid cell in the BEV features."}, {"Category": "Supporting Evidence", "Citation": "[16]", "Explanation": "The cited work provides a Gaussian focal loss configuration that the citing paper follows in their training process to set the alpha and beta values for the loss function."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work by BEVDet is used as a reference for the computational cost of their model in the context of fast decision-making in automotive applications. The comparison highlights the tradeoff between performance gains and computational cost, which is important for the design of efficient models in this field."}, {"Category": "Methodological Basis", "Citation": "[14,31,35]", "Explanation": "The cited works provide a methodological basis for the architecture presented in the citing paper, as it allows for the integration of radar and camera features in the final detections, which is a key feature of the proposed architecture."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work, BEVDet, serves as the basis for the inference example presented in the citing paper, providing the framework and methods for object detection in BEV (bird's eye view) images."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work provides a two-stage augmentation strategy that the citing paper adopts in their research to improve the results in the BEV view transformer and the BEV features."}, {"Category": "Extension or Continuation", "Citation": "[40]", "Explanation": "The cited work provides a similar setup for the Radar-GridMap encoder that the citing paper uses to examine potential augmentation strategies for their research."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work, BEVDepth, is used as a basis for the model in the citing paper, contributing to the improved translation and velocity errors observed in the study."}, {"Category": "Extension or Continuation", "Citation": "[10]", "Explanation": "The cited work BEVDet is the camera-only baseline used in the citing paper for comparison with the proposed radar-camera fusion model. The citing paper extends the research by exploring the benefits of fusion in terms of long-range detection and occlusion handling."}]
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b8", "b15", "b52", "b9", "b12", "b36", "b44", "b12", "b0", "b3", "b34", "b35", "b47", "b21", "b49", "b68", "b7", "b7", "b30", "b55" ], "table_ref": [], "text": "Machine learning has exhibited extraordinary and super-human performance on various tasks [9,16]. Its common paradigm, Empirical Risk Minimization (ERM) [53], trains models by minimizing the learning error on training data under the independent and identically distributed (i.i.d.) assumption. However, distribution shifts, i.e., out-of-distribution (OOD) problems, usually exist between training and real-world testing data, which leads to the performance degradation of ERM-based methods [10,13,37,45]. In order to solve the OOD generalization problem, researchers turn to developing methods for domain generalization (DG), a task that aims to generalize models learned in multiple source domains to other unseen target domains. In DG, the features of samples can be divided into two parts: invariant (a.k.a. semantic or stable) features and variant (a.k.a. non-semantic or unstable) features. Invariant features are those having a domain-invariant relation with true class labels, while variant features shift with the change of domains. Thus, DG methods tend to perform invariant learning: identify the stable relations among all source domains and predict class labels with the invariant parts. ERM-based approaches may easily learn and rely on variant features for prediction [13] because they do not even distinguish which domain the training samples come from. Assuming that all data come from one distribution, they achieve high accuracies in seen domains but might degrade considerably in unseen target domains.\nTo tackle this issue, some methods turn to using the gap between different domains to surpass ERM. They utilize the original domain labels as extra supervision signals [1,4,35,36,48]. Data are treated separately according to which domain they are sampled from. Among them, some fine-grained methods try to minimize statistical metrics (e.g., MMD [22,50] or Wasserstein distance [69]) among the feature representations of different domains. Models trained by these methods are more robust to complex domains than ERM and perform well in bridging the domain gap. However, these methods neglect the quality of the original domain labels they use, i.e., the risk of lacking heterogeneity. We illustrate this phenomenon with Figure 1. Here we aim to predict classes by shapes, which are invariant features, while the colors are domain-variant. In realistic scenarios, we may follow a seemingly sensible prior of making the samples within a domain diverse, as on the left of Figure 1. However, when performing DG, the left pattern will easily make the model predict labels with domain-variant features because the two domains have very similar color distributions. In fact, the optimal pattern is shown on the right of Figure 1, which makes the domain-variant features heterogeneous across domains and homogeneous within each domain. As illustrated above, sub-optimal domain labels will introduce bias during training and mislead generalization learning. To reduce this bias, enlarging domain heterogeneity, i.e., the variety across domains, is an effective approach. From this view, domain labels can be regarded as a heterogeneity-focused division made according to prior knowledge.
If the source domains are heterogeneous enough, models may be more robust to unseen domains because the source sub-populations expose more kinds of data distribution during training. However, if the source domains are homogeneous, the latent data distribution and predictive strategy within every source domain tend to be similar. Models then tend to rely on a common predictive path during training, which is unstable under unseen distributions. Therefore, mining domain heterogeneity is important for reducing bias in DG. Recently, some methods have noticed the importance of data heterogeneity and turned to exploiting it to boost generalization learning. A more heterogeneous dataset is believed to help disentangle features better and cut off the shortcut from neural networks to variant features [8]. These methods mostly generate a dividing pattern (a.k.a. inferring environments or splitting groups) to re-separate training samples into newly generated domains. By applying their patterns, they achieve favorable accuracies [8,31,56]. However, there is still no precise definition of, or metric for, heterogeneity in the DG task. Without such a metric during domain label generation, the chosen dividing pattern may be sub-optimal, which might even introduce new noise and disturb generalization learning. In addition, their experiments are mainly based on synthetic or low-dimensional data, which is insufficient to verify their ability to divide domains and to generalize in real-world scenarios. In this paper, we propose a quantitative, learning potential-guided heterogeneity metric and introduce a heterogeneity-based two-stage DG algorithm built on contrastive learning. We first point out that domain heterogeneity mainly lies in the variant features under the invariant learning framework. Our metric is computed as the ratio of the average distances between same-class representations within and across domains, which requires constructing and contrasting same-class pairs within each domain and across domains. When measuring the heterogeneity of a pattern, we apply the metric to the features of a variance-focused model, which also indicates the potential to obtain heterogeneity. Our method comprises generating heterogeneous patterns and enhancing generalization with the selected pattern. In the first stage, we select the most heterogeneous dividing pattern from the generated ones, as measured quantitatively by our heterogeneity metric. The first stage contains two interactive modules, the heterogeneity exploration module and the pattern generation module, which are performed iteratively and boost each other. In the second stage, with the domain labels generated by the first stage, we construct positive pairs from same-class, different-domain data, while negative pairs consist of different-class data within the same domain. An invariance-aimed contrastive learning is then employed to help train a well-generalized model for the DG task. To summarize, our contributions include: • We point out the heterogeneity issue in DG, i.e., the original domain labels may not be optimal when treated as supervision signals. Sub-optimal domain labels introduce bias and disturb generalization learning. 
" }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b61", "b69", "b0", "b54", "b57", "b58", "b60", "b65", "b69", "b47", "b0", "b28", "b29", "b71", "b1", "b48", "b72", "b27", "b25", "b32", "b50", "b7", "b30", "b31", "b55", "b33", "b37", "b38", "b62", "b73", "b23", "b39", "b41", "b43", "b56", "b64", "b70", "b29", "b42", "b5", "b14", "b22", "b24", "b63", "b67" ], "table_ref": [], "text": "We first provide an overview of the related work involved with our setting and method.\nDomain generalization. Domain generalization (DG) aims to train a well-generalized model which can avoid the performance degradation in unseen domains [62]. Domains are given as natural input in DG, which means models can access more information. However, sub-optimal signals will also induce distribution shift [70] or make the model rely on spurious features [1]. That's why DG may be more hard to solve compared to ordinary classification tasks. Several different approaches are proposed to tackle this problem. Some works use data augmentation to enlarge source data space [55,58,59,61,66,70]. Methods based on GroupDRO [48] or IRM [1] turn to optimize one or some specific groups' risk to achieve higher generalization [29,30,72]. There are also some methods employing feature alignment [2,49,73] or decorrelation [28] to guide the learning procedure.\nHeterogeneity in domain generalization. Data heterogeneity broadly exists in various fields. In DG, heterogeneity mainly refers to the diversity of each domain. It is caused by the distribution shifts of data when dividing the implicit whole data distribution into different sub-populations, i.e., domains. Though there are some formulations for data heterogeneity in other machine learning problems [26,33,51], domain heterogeneity has no precise and uniform metric till now. EIIL [8] is one of the first methods that notice the gain brought by re-dividing domains in DG. It infers domain labels with a biased model. HRM [31] designs a framework where dividing domains and performing invariant learning are optimized jointly. KerHRM [32] develops HRM by integrating the procedure with kernel methods to better capture features. IP-IRM [56] also takes the strategy of dividing domains to better disentangle features from a group-theoretic view. Above novel methods achieve favorable performances on synthetic or low-dimension data. However, they all treat generating dividing patterns as a middle process and haven't proposed the metric for domain heterogeneity. In addition, their potentials in high-dimension data, which are less decomposed on the raw feature level, are not fully verified. These are the issues we aim to make up in this paper.\nInvariant learning. Invariant learning actually can be seen as one of the effective approaches in DG. Its core is differentiating invariant and variant parts in an image and making the model tend to learn the invariant features across different domains. Following this line, some works [34,38,39,63,74] achieve the goal by the pre-defined causal graph to learn the key features involving class labels. Some works also consider the disentangled method [24,40,42,44,57,65,71] aiming to split the features completely. Besides, ZIN [30] proposes to use explicit auxiliary information to help learn stable and invariant features, which is also a promising direction.\nContrastive learning. 
Contrastive learning (CL) contrasts semantically similar and dissimilar pairs of samples, which aims to map positive sample pairs closer while pushing apart negative ones in the feature space. The prototype of CL is the architecture of Contrastive Predictive Coding (CPC) [43]. It contains InfoNCE loss which can be optimized to maximize a lower bound on mutual information in the theoretical guarantee. Contrastive learning has already achieved great success in self-supervised learning [6,15] due to the need of modeling unlabeled data. It has been applied in various tasks due to the ability to make the hidden representations of the samples from the same class close to each other [23,25,64,68]." }, { "figure_ref": [], "heading": "PROBLEM SETTING", "publication_ref": [ "b52", "b0" ], "table_ref": [], "text": "We formalize the problem setting in domain generalization (DG) task. Suppose that we have source data D = X × Y for a classification task, every sample 𝑥 𝑖 follows 𝑥 𝑖 ∈ X and its label 𝑦 𝑖 follows 𝑦 𝑖 ∈ Y. Typically, we need a non-linear feature extractor 𝜙 and a classifier 𝑤. 𝜙 will map the sample from input space to representation space, which can be formalized as 𝜙 : X → H . H denotes the representation space here. Then 𝜙 (𝑥 𝑖 ) ∈ R 𝑑 denotes the 𝑑-dimension feature representation of 𝑥 𝑖 . 𝑤 (𝜙 (𝑥 𝑖 )) ∈ R | Y | denotes the predicted possibilities for all labels. In DG, every sample has its initial domain label. Let 𝜀 𝑡𝑟 be the training domains (a.k.a. environments or groups). Then we use 𝑑 𝑥 𝑖 ∈ 𝜀 𝑡𝑟 to denote the domain label of the training sample 𝑥 𝑖 . Some solutions don't rely on domain labels. For example, Empirical Risk Minimization (ERM) [53] minimizes the loss over all the samples no matter which domain label they have:\nL 𝐸𝑅𝑀 = 𝐸 (𝑥 𝑖 ,𝑦 𝑖 ) ∈ D [ℓ (𝑤 (𝜙 (𝑥 𝑖 )), 𝑦 𝑖 )],(1)\nwhere ℓ is the Cross Entropy loss function for classification. For some other methods which utilize domain labels, the risk will have an extra penalty term. Take Invariant Risk Minimization (IRM) [1] as an example,\nL 𝐼𝑅𝑀 = ∑︁ 𝜖 ∈𝜀 𝑡𝑟 R 𝜖 (𝜙, 𝑤) + 𝜆 ∇ w R 𝜖 ( w • 𝜙) .(2)\nHere R 𝜖 (𝜙, 𝑤) = 𝐸 {(𝑥𝑖,𝑦𝑖 " }, { "figure_ref": [], "heading": "DOMAIN HETEROGENEITY 4.1 Variance-based Heterogeneity", "publication_ref": [], "table_ref": [], "text": "In domain generalization (DG), all the samples are always given with their specific domain labels due to the need of differentiating source domains during training. Therefore, domain labels reflect the process of splitting data into several sub-populations, which denotes domain heterogeneity. Domain heterogeneity has a close connection with training models. Suppose that all the training and testing data form an implicit overall distribution. After splitting them into several domains, the data points in each domain will form a complete data distribution and have their specific predictive mechanisms. If the source domains are homogeneous, the latent distribution and predictive strategy within every source domain may tend to be similar. Models trained in these domains will also tend to learn a common predictive mechanism because they can hardly access novel distributions. Therefore, when facing the target unseen domains, the robustness of the model may not be favorable. Therefore, pursuing domain heterogeneity is important for DG.\nHere we explain where domain heterogeneity lies. Following the idea of invariant learning, the feature representation obtained from supervised learning is composed of invariant features and variant features. 
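For reference, the two baseline objectives formalized in the problem setting above, ERM in Equation (1) and the IRM objective in Equation (2) with R^ε denoting the per-domain classification risk, can be sketched in PyTorch as follows. The `phi`, `w`, and `domain_batches` arguments are placeholders supplied by the caller, and the penalty follows the common IRMv1-style implementation; this is an illustrative rendition rather than any specific released code.

```python
# Illustrative PyTorch rendition of Eq. (1) and Eq. (2); `phi`, `w`
# and the per-domain batches are assumed to be provided by the caller.
import torch
import torch.nn.functional as F

def erm_loss(phi, w, x, y):
    """Eq. (1): plain cross-entropy over all samples, domains ignored."""
    return F.cross_entropy(w(phi(x)), y)

def irm_penalty(logits, y):
    """IRMv1-style penalty: squared gradient of the risk w.r.t. a dummy scale."""
    scale = torch.tensor(1.0, requires_grad=True, device=logits.device)
    risk = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def irm_loss(phi, w, domain_batches, lam=1.0):
    """Eq. (2): per-domain classification risk plus the invariance penalty."""
    total = 0.0
    for x_e, y_e in domain_batches:          # one (x, y) batch per source domain
        logits = w(phi(x_e))
        total = total + F.cross_entropy(logits, y_e) \
                      + lam * irm_penalty(logits, y_e)
    return total
```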
The invariant parts will help predict labels while the variant parts may harm training. Therefore, common methods use techniques to make the model sensitive to the invariant parts in latent space as much as possible instead of the variant ones. For a pair of same-label samples, their invariant parts should be naturally close because they own the same class label, which highly involves invariant features. While their variant parts can show arbitrary relations. For example, two samples whose labels are both dogs may have similar invariant parts like dogs' noses. But as for variant parts, they can be various. They can be similar if they both have grass as the background while they can be totally different if one is in the grass and the other is in the water. 1 An ideal heterogeneous dividing pattern focuses more on the variant parts. The reason is that: the target unseen domain will share the same invariant information (because they are highly involved with the label information, which is shared among every domain) with the source domains, while the variant parts from the target domain may be unseen during training. Thus, if we make the variant parts of source domains homogeneous within a single domain and heterogeneous among different domains, we will maximally imitate the situation of facing unseen domains during training. Therefore, our goal comes to: re-divide domain labels for every sample to form a more heterogeneous dividing pattern than original one." }, { "figure_ref": [], "heading": "The Metric of Heterogeneity", "publication_ref": [], "table_ref": [], "text": "Since the heterogeneity among source domains counts, how to measure it quantitatively becomes the main problem. Obviously, different dividing patterns (different allocation of domain labels for all samples) have different heterogeneity because of the bias in the variant parts. Suppose that we let a model with fixed architecture learn the representation with a dividing pattern. If the dividing pattern is consistent with the assumptions in section 4.1, i.e., the variant parts are homogeneous enough within each domain and heterogeneous enough across different domains, then the following requirement should be fulfilled: for any pair of same-class samples, the distance between their representations should be as small as possible if they are from the same domain while the distance should be as large as possible if they are from different domains.\nConsidering that the heterogeneity lies in the variant features as stated above, we characterize the heterogeneity quantitatively with the proportion between the distances of different groups of the same-class features:\nL 𝐻 = ∑︁ (𝑥,𝑦) 𝑙𝑜𝑔( 𝐸 { (𝑥 ′ ,𝑦 ′ ) |𝑑 𝑥 ′ =𝑑 𝑥 ,𝑦 ′ =𝑦 } [||𝜙 (𝑥) -𝜙 (𝑥 ′ )|| 2 ] 𝐸 { (𝑥 ′′ ,𝑦 ′′ ) |𝑑 𝑥 ′′ ≠𝑑 𝑥 ,𝑦 ′′ =𝑦 } [||𝜙 (𝑥) -𝜙 (𝑥 ′′ )|| 2 ]\n).\nFor a sample (𝑥, 𝑦), we collect all its same-class sample (𝑦 = 𝑦 ′ = 𝑦 ′′ ). Then we set 𝑥's positive (negative) pair samples 𝑥 ′ (𝑥 ′′ ) as the ones which are from the same (different) domain\n𝑑 𝑥 ′ = 𝑑 𝑥 (𝑑 𝑥 ′′ ≠ 𝑑 𝑥 ).\nThe more heterogeneous the dividing pattern is, the less will the numerators become and the more will the denominators become.\nL 𝐻 thus will become less. We use Equation (3) to judge whether the current dividing pattern is more heterogeneous in the iterative process. We calculate the distance between the features instead of the original data here. Recall section 4.1, we can have such intuition because we can differentiate the variant or invariant part just with original images. 
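A direct PyTorch transcription of the metric in Equation (3) reads as follows; `feats`, `labels`, and `domains` are assumed to be the representations produced by an already-trained variance-focused extractor together with the corresponding class and domain assignments, and the small `eps` guard is an added assumption to avoid division by zero.

```python
# Sketch of the heterogeneity metric in Eq. (3); assumes `feats` come
# from a trained variance-focused feature extractor phi.
import torch

def heterogeneity_metric(feats, labels, domains, eps=1e-8):
    """L_H: sum over samples of log(mean same-class same-domain distance /
    mean same-class cross-domain distance). Lower = more heterogeneous."""
    dist = torch.cdist(feats, feats, p=2)                 # (N, N) L2 distances
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    same_domain = domains.unsqueeze(0) == domains.unsqueeze(1)
    not_self = ~torch.eye(len(feats), dtype=torch.bool, device=feats.device)

    loss = feats.new_zeros(())
    for i in range(len(feats)):
        pos = same_class[i] & same_domain[i] & not_self[i]   # x' pairs
        neg = same_class[i] & ~same_domain[i]                # x'' pairs
        if pos.any() and neg.any():
            loss = loss + torch.log(
                (dist[i][pos].mean() + eps) / (dist[i][neg].mean() + eps))
    return loss
```

Lower values indicate that, under the given dividing pattern, same-class representations are tight within each domain but spread across domains, which is exactly the notion of heterogeneity pursued here.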
But as for a fixed model, the representation it learns will lose information because of the reduction of the dimensions.\nFrom the view of neural networks, it is the similarity between lowdimension representation mainly works. As a result, if we want to measure heterogeneity, we should concentrate on the similarity between full-trained representations instead of the original data. Therefore, we put stress on the potential for learning heterogeneous features with the given dividing pattern. We then use an alreadytrained model 𝜙 to generate features instead of using other methods (e.g., kernel tricks) only performing on the data itself.\nThen it comes to how to train 𝜙. Note that our metric for heterogeneity stresses the potential to learn the best variant representation in a given dividing pattern. Therefore, training variance-based 𝜙 should minimize the loss containing classification error and the guidance for exploring heterogeneity, i.e., variant features in DG:\nL 𝑣𝑎𝑟 = ∑︁ (𝑥,𝑦) ℓ (𝑤 (𝜙 (𝑥)), 𝑦) + 𝜆 1 L 𝐻 .(4)\nThe first term ℓ makes sure 𝜙 focus on the feature which carries helpful information for classification (no matter if it is an invariant feature or variant feature). The second term is our metric for heterogeneity. Minimizing it promotes the variant features becoming different across different domains and becoming similar in the same domain. Hyperparameter 𝜆 1 balances the two terms." }, { "figure_ref": [], "heading": "The Utilization of Heterogeneity", "publication_ref": [], "table_ref": [], "text": "Given an ideal heterogeneous dividing pattern following our metric, it comes to training feature extractor 𝜙 and classifier 𝑤 for the classification task. Here we aim to make feature extractor 𝜙 focus more on the invariant parts in each image. With the reorganized domain labels from the last stage, the variant parts of the samples from the same domain would be homogeneous. Therefore, we transfer our target to other two kinds of sample pairs: the sample pairs with the same domain label and different class labels, and the sample pairs with different domain labels and the same class label. It is obvious that in representation space, the features should follow the strategy of making intra-class distance small and inter-class distance large. We continue using the dog's example in section 4.1 to explain the motivation. Here we have already obtained a heterogeneous dividing pattern by reorganizing domain labels. The model should learn the invariant features (e.g., animals' noses or hairs) which will help distinguish whether the sample is a dog or cat, not the variant features (e.g., the background or the style of the image) which tend to be homogeneous within a single domain and heterogeneous across different domains. Suppose that there are two domains that are heterogeneous, the first domain has images of cats or dogs in the grass and the second domain has same-label images whose background is water. 𝜙 fulfilling 𝑑𝑖𝑠𝑡 (𝜙 (𝑥 𝑑𝑜𝑔,𝑔𝑟𝑎𝑠𝑠 ), 𝜙 (𝑥 𝑑𝑜𝑔,𝑤𝑎𝑡𝑒𝑟 )) ≪ 𝑑𝑖𝑠𝑡 (𝜙 (𝑥 𝑑𝑜𝑔,𝑔𝑟𝑎𝑠𝑠 ), 𝜙 (𝑥 𝑐𝑎𝑡,𝑔𝑟𝑎𝑠𝑠 )) would be more appropriate for predicting the label. To achieve the goal of encoding more invariant features rather than variant features, we leverage two specific kinds of sample pairs mentioned above. 
For (𝑥, 𝑦) ∈ D, its negative pair samples are images having different class labels from 𝑦 and the same domain label as 𝑑 𝑥 :\n𝑃 - 𝑥 = {𝑥 ′ |𝑑 𝑥 ′ = 𝑑 𝑥 , 𝑦 ′ ≠ 𝑦}.\nThe positive pair samples are images having the same class label as 𝑦 and different domain labels from 𝑑 𝑥 :\n𝑃 + 𝑥 = {𝑥 ′ |𝑑 𝑥 ′ ≠ 𝑑 𝑥 , 𝑦 ′ = 𝑦}.\nWe use Maximum Mean Discrepancy (MMD) as an estimation of the similarity of a pair of features. Then we write a penalty term, which contrasts the pairs to promote utilizing domain heterogeneity, as: .\nHere we compare the metric in Equation ( 3) and the regularization term in Equation ( 5) again. Their aims are totally different. In Equation ( 3), the core lies in utilizing the relations among variant features, which in turn helps train 𝜙 for digging the potential of enlarging heterogeneity in Equation ( 4). However, when it involves utilizing heterogeneity, the ultimate goal is training the best feature extractor 𝜙 and classifier 𝑤 for classification. In DG, the ideal 𝜙 will only encode invariant features and not encode variant features to representation space at all. So our aim is to make the feature extractor focus more on the invariant features." }, { "figure_ref": [], "heading": "PROPOSED METHODS", "publication_ref": [ "b7", "b30", "b31", "b55", "b0" ], "table_ref": [], "text": "Our heterogeneity-based two-stage contrastive learning (HTCL) method is based on the problem setting in section 3 and the metric in section 4.2. Following the analysis above, we treat the procedure of exploring heterogeneity and the one of training models with newly generated domain labels separately. The first stage aims to generate a heterogeneous dividing pattern. It can be seen as a preprocessing stage that only re-divides domain labels and outputs the most heterogeneous ones, which corresponds to section 4.2. The second stage receives the domain labels from the first stage as input. During its training process, it utilizes the contrastive loss mentioned in section 4.3 to help learn invariant features. We detail these two stages in section 5.1 and 5.2. Note that the two stages have no iteration process. They are performed only once in a consistent order while the modules within the first stage have an iterative process. That's a difference from existing methods [8,31,32,56] mentioned in section 1, which also aim to generate domain labels by themselves but incorporate dividing domains and invariant learning together. Algorithm (1) shows the whole process of HTCL." }, { "figure_ref": [], "heading": "Heterogeneous Dividing Pattern Generation", "publication_ref": [ "b3" ], "table_ref": [], "text": "With the training set and original domain labels, we explore heterogeneity by dividing images into |𝜀 𝑡𝑟 | disjoint domains. However, exploring heterogeneity with variant features and measuring heterogeneity can't be optimized simultaneously. As a result, we design two interactive modules and make them work alternately. For the first goal, we can just update 𝜙 and 𝑤 through Equation (4). When it refers to constructing the corresponding positive and negative pairs for L 𝑣𝑎𝑟 , we only construct them within every batch instead of the whole dataset.\nHere we re-write L 𝐻 which has been mentioned in Equation ( 3). L 𝐻 is our heterogeneity metric, and we also apply it in Equation ( 4) to guide heterogeneity exploration. 
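The pair relations introduced in Section 4.3 and reused in the per-batch construction above can be written compactly as boolean masks; the helpers below are a hypothetical sketch (the names `positive_mask` and `negative_mask` are ours) of the P+ and P- sets that the MMD-based penalty in Equation (5) contrasts.

```python
# Hypothetical helpers for the pair definitions above: for each anchor x,
# positives share the class but not the domain, negatives share the domain
# but not the class.
import torch

def positive_mask(labels, domains):
    """P+_x: same class label, different domain label."""
    return (labels.unsqueeze(0) == labels.unsqueeze(1)) & \
           (domains.unsqueeze(0) != domains.unsqueeze(1))

def negative_mask(labels, domains):
    """P-_x: different class label, same domain label."""
    return (labels.unsqueeze(0) != labels.unsqueeze(1)) & \
           (domains.unsqueeze(0) == domains.unsqueeze(1))

# Example: row i of each mask indexes the positive / negative partners of
# sample i inside the current batch.
labels = torch.tensor([0, 0, 1, 1])
domains = torch.tensor([0, 1, 0, 1])
print(positive_mask(labels, domains))
print(negative_mask(labels, domains))
```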
Considering the calculation is conducted per batch, we re-formalize L 𝐻 more detailedly as:\nL 𝐻 = - ∑︁ (𝑋 𝑖 ,𝑦 𝑖 ) ∑︁ (𝑑 𝑚 ,𝑑 𝑛 ) log 𝑑𝑖𝑠𝑡 (𝜙 (𝑋 𝑑 𝑚 𝑖 ), 𝜙 (𝑋 𝑑 𝑛 𝑖 )) 𝑑𝑖𝑠𝑡 (𝜙 (𝑋 𝑑 𝑚 𝑖 )) + 𝑑𝑖𝑠𝑡 (𝜙 (𝑋 𝑑 𝑛 𝑖 ))\n.\nWe denote a single batch by 𝐵 = {(𝑥, 𝑦) " }, { "figure_ref": [], "heading": "𝑖", "publication_ref": [], "table_ref": [], "text": "and 𝑋 𝑑 𝑛 𝑖 denotes the samples whose domain labels are 𝑑 𝑚 and 𝑑 𝑛 separately. For convenience, here we set the input of feature extractor 𝜙 as a set of images and every dimension of the features denotes the predicted probability of each label for a single image. Then we set function 𝑑𝑖𝑠𝑡 (•, •) to measure the average distance between different kinds of representation:\n𝑑𝑖𝑠𝑡 (𝜙 1 , 𝜙 2 ) = |𝜙 1 | -1 𝑖=0 |𝜙 2 | -1 𝑗=0 ∥𝜙 1 [𝑖] -𝜙 2 [ 𝑗] ∥ 2 |𝜙 1 | × |𝜙 2 | .(7)\nIn above equation, 𝜙 1 has a shape of |𝜙 1 | × 𝑑 and the entry 𝜙 1 [𝑖] denotes the representation of 𝑖-th image in 𝜙 1 . 𝜙 2 has a similar form. We then calculate the average distance between different representations. Similarly, we can measure the average distance among same-domain and same-label samples as follows:\n𝑑𝑖𝑠𝑡 (𝜙) = |𝜙 | -2 𝑖=0 |𝜙 | -1 𝑗=𝑖+1 ∥𝜙 [𝑖] -𝜙 [ 𝑗] ∥ 2 |𝜙 | 2 -|𝜙 | . (8\n)\nAfter training for several epochs, we obtain the representation with the final 𝜙 * as the optimal feature extractor. Then it comes to the second goal: measuring the heterogeneity. Similarly, we calculate the ratio batchwisely and finally take the expectation among all batches' counted pairs to avoid the problem brought by batches' random shuffle:\n𝐻 = 𝐸 𝐵 ⊂ D [L 𝐻 ].(9)\nThus, 𝐻 specifies the quantity of the heterogeneity in a given dividing pattern. Note that when performing Equation ( 9), we use a trained feature extractor 𝜙 * to replace 𝜙 in L 𝐻 , which makes sure Equation ( 9) reflect the maximum potential of the variance-based model in such a dividing pattern.\nRemark. When it refers to heterogeneity, the subjective is feature representation rather than raw data itself. A fully-trained representation reflects the learning potential of a fixed model with a given dividing pattern. So we choose to train 𝜙 in such above Equation ( 4) manner before measuring heterogeneity instead of directly performing clustering methods (e.g., density-based) on raw data. share the same class while 𝑥 3 doesn't, the distance between 𝜙 * (𝑥 1 ) and 𝜙 * (𝑥 2 ) will naturally be less than the distance between 𝜙 * (𝑥 1 ) and 𝜙 * (𝑥 3 ) no matter whether 𝑥 1 and 𝑥 3 share the similar variant features or not. In a word, same-label samples tend to be close to each other. Then they may be divided into one domain when invariant features dominate a main position in the representation space. There are two drawbacks. On the one hand, when feeding this kind of dividing pattern into heterogeneity exploration module, we can't construct effective pairs to boost the process of learning variant features and measuring heterogeneity. On the other hand, if domain labels are highly involved with class labels, the existence of domain labels will be of no help to the final process of learning semantic invariant features. Commonly, the number of domains is less than the number of labels 2 . From the view of information theory, the information brought by domain labels is the subset of the information brought by class labels. Then our final training process will degrade to ERM. Therefore, we should avoid this module generating dividing patterns by actual classes.\nThe numbers of samples contained in each domain differ a lot. 
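A sketch of the batch-wise distance helpers in Equations (7) and (8) and of the per-batch metric they combine into is given below; the grouping logic is an illustrative simplification of the actual training code, and averaging the returned value over all batches yields the score H of Equation (9).

```python
# Sketch of Eq. (7)/(8) and the per-batch version of L_H (Eq. (6));
# an illustrative simplification, not the exact training code.
import itertools
import torch

def dist_between(phi1, phi2):
    """Eq. (7): average pairwise L2 distance between two feature sets."""
    return torch.cdist(phi1, phi2, p=2).mean()

def dist_within(phi):
    """Eq. (8): sum over unordered same-set pairs divided by |phi|^2 - |phi|."""
    n = phi.size(0)
    d = torch.cdist(phi, phi, p=2)
    return torch.triu(d, diagonal=1).sum() / (n * n - n)

def batch_heterogeneity(feats, labels, domains):
    """Per-batch L_H: -sum over classes and domain pairs of
    log(cross-domain distance / sum of within-domain distances)."""
    loss = feats.new_zeros(())
    for y in labels.unique():
        present = domains[labels == y].unique().tolist()
        for dm, dn in itertools.combinations(present, 2):
            fm = feats[(labels == y) & (domains == dm)]
            fn = feats[(labels == y) & (domains == dn)]
            if len(fm) > 1 and len(fn) > 1:
                loss = loss - torch.log(
                    dist_between(fm, fn) / (dist_within(fm) + dist_within(fn)))
    return loss
```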
If a domain contains few samples while another domain contains a lot, it will cause similar problems mentioned in the last point. Thus, we should also balance the numbers in all the domains.\nThe overall predictive probability of mapped domains, i.e., 𝑓 (𝜙 * (X)), has a shape of R | X | × |𝜀 𝑡𝑟 | . We calculate along the first dimension to obtain all samples' average probability distribution: 𝑓 (𝜙 * (X)) 𝑎𝑣𝑔 ∈ R 1× |𝜀 𝑡𝑟 | . Then we guide 𝑓 (•) which generates candidate domain labels as follows:\nL 𝑑𝑖𝑣𝑖𝑑𝑒 = 𝐻 (𝑓 (𝜙 * (X)) 𝑎𝑣𝑔 ) + ( 1 |𝜖 𝑡𝑟 | -min(𝑓 (𝜙 * (X)) 𝑎𝑣𝑔 )). (10\n)\nNote that 𝜙 * is fixed here and we only update 𝑓 (•). 𝐻 (𝑝) denotes the entropy of the distribution 𝑝. The first term uses entropy on classes to encourage the same-label samples not to shrink to a single domain, which reduces the influence of invariant features.\nThe second term is a penalty term to avoid the minimum sample number in one domain becoming too small." }, { "figure_ref": [ "fig_3" ], "heading": "Summary of generating a heterogeneous dividing pattern.", "publication_ref": [ "b9" ], "table_ref": [], "text": "Generating a heterogeneous dividing pattern is the core stage in our method. We design two interactive modules to iteratively learn the optimal dividing pattern which contains more variant information.\nAt the start, we use original domain labels as the initial dividing pattern and send them to the heterogeneity exploration module. Then we update 𝜙 with Equation ( 4) and measure the heterogeneity with Equation ( 9). After obtaining the optimal feature representation, we send it into the pattern generation module to generate a new dividing pattern with Equation (10). The newly generated dividing pattern is then sent as input to the heterogeneity exploration module again. We repeat this procedure several times to learn variant features as possible and select the best dividing pattern which obtains the minimum value in Equation ( 9). Figure 2 illustrates the whole framework. " }, { "figure_ref": [], "heading": "Prediction with Heterogeneous Dividing Pattern", "publication_ref": [ "b49" ], "table_ref": [], "text": "In this part, our aim changes from learning a heterogeneous dividing pattern to learning invariant features from the samples to help predict the labels in the unseen target domain. Apart from the standard classification loss, we add a distance-based term to prevent the representations between different domains too far. We measure the distance between two different representations as:\n𝑚𝑚𝑑 (𝐷 1 , 𝐷 2 ) = 1 4𝑑 2 ∥𝐶𝑜𝑣 (𝐷 1 ) -𝐶𝑜𝑣 (𝐷 2 )∥ 2 𝐹 .(11)\n𝐶𝑜𝑣 (•) means covariances and ∥•∥ 2 𝐹 denotes the squared matrix Frobenius norm. 𝑑 still denotes the number of dimensions of the feature representation. The invariance-based contrastive loss for better utilizing the heterogeneous domain labels takes the form as:\nL 𝑐𝑜𝑛𝑡 = ∑︁ 𝜖 ∈𝜀 𝑡𝑟 ∑︁ 𝑦 ∈ Y log(1 + 𝑚𝑚𝑑 (𝜙 (𝑋 𝑠 ), 𝜙 (𝑋 𝑝𝑜𝑠 )) 𝑚𝑚𝑑 (𝜙 (𝑋 𝑠 ), 𝜙 (𝑋 𝑛𝑒𝑔 ))\n).\nConsistent to section 4.3, here 𝑋 𝑠 denotes all the source samples belonging to domain 𝜖 and owning label 𝑦. 𝑋 𝑝𝑜𝑠 contains all the postive pair samples for 𝑋 𝑠 , which are not from domain 𝜖 but own the same class as 𝑦. 𝑋 𝑛𝑒𝑔 are from 𝜖 while have different classes from 𝑦. Then we calculate all the distances between different domain representations like the method in CORAL [50]:\nL 𝑚𝑚𝑑 = ∑︁ 𝜖 1 ,𝜖 2 ∈𝜀 𝑡𝑟 , 𝜖 1 ≠𝜖 2 𝑚𝑚𝑑 (𝜙 (𝑋 𝜖 1 ), 𝜙 (𝑋 𝜖 2 )).(13)\nThen the ultimate objective function for DG turns to:\nL 𝑝𝑟𝑒𝑑𝑖𝑐𝑡 = ∑︁ (𝑥,𝑦) ℓ (𝑤 (𝜙 (𝑥)), 𝑦) + 𝜆 𝑐𝑜𝑛𝑡 L 𝑐𝑜𝑛𝑡 + 𝜆 𝑚𝑚𝑑 L 𝑚𝑚𝑑 . 
(14\n)\n𝜆 𝑐𝑜𝑛𝑡 and 𝜆 𝑚𝑚𝑑 are the hyperparameters to balance each guidance." }, { "figure_ref": [], "heading": "EXPERIMENTS 6.1 Experimental Settings", "publication_ref": [ "b20", "b53", "b10", "b2", "b13", "b13", "b15", "b46", "b15", "b46", "b4" ], "table_ref": [], "text": "We first introduce the common settings of our experiments. Datasets. We use four kinds of domain generalization (DG) datasets to evaluate our proposed HTCL method:\n• PACS [21] contains 9991 images shared by 7 classes and 4 domains {art, cartoon, photo, and sketch}. • OfficeHome [54] contains 15588 images shared by 65 classes and 4 domains {art, clipart, product, real}. • VLCS [11] contains 10729 images shared by 5 classes and 4 domains {VOC2007, LabelMe, Caltech101, SUN09}. • TerraIncognita [3] comprises photos of wild animals taken by cameras at different locations. Following [14], we use 4 domains {L100, L38, L43, L46}, which contains 24330 images shared by 10 classes.\nThese four datasets show different kinds of shift in DG. PACS and OfficeHome are mainly distinguished by images' style by human eyes, while VLCS and TerraIncognita divide domains by different backgrounds, which involves spurious correlation with actual labels. Note that our method stresses exploring the heterogeneity in representation space. The method therefore can be fit for both two kinds of shift because representation is needed for both learning processes, which gains an advantage over only style-based or only causality-based DG methods. Evaluation metric. We use the training and evaluation protocol presented by DomainBed benchmark [14]. In DG, a domain is chosen as unseen target domain and other domains can be seen as source domains for training. Following the instruction of the benchmark, we split each source domain into 8:2 training/validation splits and integrate the validation subsets of each source domain to create an overall validation set, which is used for validation. The ultimate chosen model is tested on the unseen target domain, and we record the mean and standard deviation of out-of-domain classification accuracies from three different runs with different train-validation splits. For one dataset, we set its every domain as test domain once to record the average accuracy and integrate the prediction accuracy of every domain by a second averaging to stand for the performance. Implementation details. Our method comprises two stages: generating a heterogeneous dividing pattern and training a prediction model. In the first phase, we use ResNet-18 [16] pre-trained on Ima-geNet [47] as the backbone feature extractor 𝜙. We change ResNet-18's last FC layer's output to a low 64-dimension for saving computation time. We additionally set a classifier whose input dimension is 64 to help classification. For generating new dividing patterns, we use a multilayer perceptron (MLP) to divide samples into domains. The MLP has 3 hidden layers with 256 hidden units. Finally, we send the optimal domain labels for all the training samples to the ultimate training stage and the networks trained in heterogeneity generation stage are dropped. As for the hyper-paramters referred in Algorithm 1, we set 𝑇 1 = 5, 𝜆 1 = 0.01, 𝜆 𝑐𝑜𝑛𝑡 = 1, 𝜆 𝑚𝑚𝑑 = 1 from HTCL. The value of 𝑇 2 follows the default value in DomainBed.\nWhen training the ultimate model for predicting, we follow Do-mainBed's strategy. We use ResNet-50 [16] pre-trained on ImageNet [47] as the backbone network for all datasets, and we use an additional FC layer to map features to classes as a classifier. 
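The objective used in this ultimate training stage, i.e., Equations (11)-(14) of Section 5.2, can be sketched as follows; the group gathering is a simplified assumption (each (domain, class) group is assumed to contain enough samples for a covariance estimate), and the small `eps` in the denominator is an added safeguard rather than part of the formulation.

```python
# Sketch of Eq. (11)-(14); an illustrative rendition, not the exact code.
import itertools
import torch
import torch.nn.functional as F

def mmd(f1, f2, eps=1e-8):
    """Eq. (11): squared Frobenius distance between feature covariances."""
    d = f1.size(1)
    c1 = torch.cov(f1.T)                  # (d, d) covariance of the features
    c2 = torch.cov(f2.T)
    return ((c1 - c2) ** 2).sum() / (4 * d * d) + eps

def contrastive_term(feats, labels, domains):
    """Eq. (12): each (domain, class) anchor group is pulled toward its
    cross-domain same-class positives and pushed from same-domain negatives."""
    loss = feats.new_zeros(())
    for e in domains.unique():
        for y in labels.unique():
            anchor = feats[(domains == e) & (labels == y)]
            pos = feats[(domains != e) & (labels == y)]
            neg = feats[(domains == e) & (labels != y)]
            if min(len(anchor), len(pos), len(neg)) > 1:
                loss = loss + torch.log(1 + mmd(anchor, pos) / mmd(anchor, neg))
    return loss

def mmd_term(feats, domains):
    """Eq. (13): covariance alignment across all pairs of source domains."""
    loss = feats.new_zeros(())
    for e1, e2 in itertools.combinations(domains.unique().tolist(), 2):
        loss = loss + mmd(feats[domains == e1], feats[domains == e2])
    return loss

def prediction_loss(logits, feats, labels, domains, lam_cont, lam_mmd):
    """Eq. (14): classification + heterogeneity-aware contrast + alignment."""
    return (F.cross_entropy(logits, labels)
            + lam_cont * contrastive_term(feats, labels, domains)
            + lam_mmd * mmd_term(feats, domains))
```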
As for model selection, we turn to SWAD [5] for weight averaging." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b49", "b40" ], "table_ref": [ "tab_4" ], "text": "We first demonstrate our results and compare HTCL with various domain generalization methods in Table 1. Our method achieves the best performance on most of the datasets. The performance of our method exceeds that of CORAL [50] by 1.7% on average. Specifically, the performance gain on TerraIncognita, which is the hardest to predict among all these four dataset, is delightful, reaching 2.3% over SagNet [41] method. All the above comparisons reveal the effect of our method and further demonstrate the improvement brought by seeking and utilizing heterogeneity." }, { "figure_ref": [ "fig_4" ], "heading": "Comparison with Other Similar Methods", "publication_ref": [ "b31", "b51" ], "table_ref": [ "tab_6" ], "text": "Our method aims for utilizing heterogeneity to help prediction and we implement this goal by generating a heterogeneous dividing pattern, i.e., domain labels. This strategy of re-allocating domain labels is shared by several previous methods. Thus, we compare the performances of these methods under the DomainBed framework. These previous methods have no official version for the DG datasets we used and they mainly report their performances on lowdimension data. So we re-implement them with the help of their code and integrate them into the benchmark framework. As for the hyperparameters, we mainly follow their recommendation and finetune some of them for better performances. The comparison results on the PACS dataset, which is one of the most fashioned DG datasets, are shown in Table 3. We report both their performances on target seen domains (In-domain) and performances on target unseen domains (Out-of-domain). We can find that our method is more suitable for DG tasks than other previous methods. All these methods enjoy a rather high In-domain accuracy. However, as for out-of-domain performances, our method shows better perthan KerHRM [32] by 4.7%. In addition, our method's standard deviation on testing accuracies is also lower than other methods, which indicates the robustness of HTCL. The results confirm the problem mentioned in section 1: though the novel ideas of these methods are effective in their experiments on synthetic or low-dimension data, the representation in DG datasets is more complex and hard to disentangle. Therefore, some tricks in the feature level will degrade in bigger DG datasets. That's why we design our method with the help of supervision information to obtain a better representation for classification as mentioned in section 4.2.\nTo further explore these methods' process of dividing domain labels, we use t-SNE [52] to visualize the feature representations of three methods: EIIL, KerHRM, and our HTCL. All these three methods generate new dividing patterns based on the learned features in an interactive module. In other words, the features they learn will influence the process of domain label generation. We obtain the features which decide the final domain labels in these methods and assess their quality. Figure 3 illustrates the scattering results of their features with class labels and domain labels separately by different colors. It can be seen that the features of the three methods all form clusters with class labels. 
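A Figure 3-style comparison can be produced with a short scikit-learn/matplotlib snippet along the following lines; `feats`, `labels`, and `domains` are assumed to be the arrays of features and assignments extracted from each method, and the exact plotting options differ from those used for the figure.

```python
# Sketch of the Figure-3 style visualization: one t-SNE embedding,
# colored once by class labels and once by generated domain labels.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(feats, labels, domains, out_path="tsne.png"):
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)
    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    axes[0].scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=5)
    axes[0].set_title("colored by class label")
    axes[1].scatter(emb[:, 0], emb[:, 1], c=domains, cmap="tab10", s=5)
    axes[1].set_title("colored by generated domain label")
    fig.tight_layout()
    fig.savefig(out_path, dpi=200)
```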
While as for the right figures which are differentiated with domain labels, it can be seen that EIIL tends to separate the same-class samples into different domains too fairly. The domains split by KerHRM follow the strategy to lower intra-domain distances and enlarge inter-domain distances better than EIIL. However, the same-class samples may tend to be divided " }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b17" ], "table_ref": [ "tab_5" ], "text": "Considering that the process of generating dividing patterns has no interaction with ultimate training, it is necessary to split the phases to evaluate our methods. We conduct ablation studies to investigate every module's role in helping enhance generalization. We consider three factors: totally dropping the first stage that generates heterogeneous dividing patterns, replacing the candidate pattern generating guidance (Equation ( 10)) with simple K-Means cluster algorithm [18] in the first stage, and dropping the contrastive term in the ultimate training stage which aims for utilizing heterogeneity. The results of performances on PACS are listed in Table 2. It confirms that every lack of the module will lower the generalization ability of the model compared to the original HTCL method. There is another observation. In PACS, when setting target unseen domain as C (cartoon) and S (sketch), models' testing accuracies are worse than setting that as A (art) and P (photo), which means samples from C or S are more hard to perform prediction when being set as target domain. Obviously, enhancing the testing accuracies of these domains is more valuable. We note that maintaining the original HTCL framework outperforms in these domains, which indicates our method achieve robustness by pursuing heterogeneity." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we comprehensively consider the role of domain labels in the domain generalization (DG) task and explain why domain-heterogeneous datasets can help model obtain better performances on unseen domains. By building a connection between variant features and heterogeneity, we propose a metric for measuring heterogeneity quantitatively with contrastive learning. Besides, we notice that an invariance-aimed contrastive learning performed in the ultimate training will make model better utilize the information brought by heterogeneous domains. Thus, we integrate both digging heterogeneity and utilizing heterogeneity in one framework to help train well-generalized models. We denote this integrated method as heterogeneity-based two-stage contrastive learning (HTCL) for DG. Extensive experimental results show the effectiveness of HTCL on complex DG datasets." }, { "figure_ref": [], "heading": "A THE EFFECTIVENESS OF HETEROGENEOUS DIVIDING PATTERN", "publication_ref": [ "b47", "b49", "b21", "b3" ], "table_ref": [ "tab_8" ], "text": "Since generating heterogeneous dividing pattern does not require any modification on ultimate training and model architecture, we combine this procedure with other four methods to verify the effectiveness of heterogeneous domain labels. These methods (Group-DRO [48], CORAL [50], MMD [22], MTL [4]) need to distinguish different source domains during their training procedures. We generate new heterogeneous domain labels by the first stage of HTCL.\nThen we apply these labels to those DG methods and don't change their original backbones. The comparison is shown in Table 4. 
By applying the procedure, all methods' performances are improved.\nThe gain comes to at least 2%, which shows the huge potential brought by generating heterogeneous dividing pattern. " }, { "figure_ref": [ "fig_5" ], "heading": "B SENSITIVITY ANALYSIS", "publication_ref": [ "b13" ], "table_ref": [ "tab_9" ], "text": "In this subsection, we study the model sensitivity with respect to the hyper-parameters referred in Algorithm (1). Table 5 demonstrates the comparison results. The changes of all hyper-parameters don't affect the performances too much and they can all surpass current baselines. As for 𝜆 𝑚𝑚𝑑 and 𝑇 2 in Algorithm (1), we set them to fixed 0.5 and 5000 (iterations) respectively as DomainBed [14] pre-defined to achieve a fair comparison. The visualization for the sensitivity analysis is in Figure 4. To demonstrate the results of 𝜆 1 intuitively, we take the logarithm of its selected values and then map them on the x-axis. It can be seen that our method is robust to the value change of all hyper-parameters. " }, { "figure_ref": [], "heading": "C FURTHER DISCUSSION ON THE MAIN RESULTS", "publication_ref": [ "b10", "b10" ], "table_ref": [ "tab_4", "tab_4" ], "text": "Here we further analyze the main results in Table 1. As Table 1 illustrates, HTCL doesn't show the superior performance on VLCS [11] dataset. We think it is the original data heterogeneity that limits the performance of our method in specific datasets. Different from other three datasets, data of each domain of VLCS are collected from a specific dataset, which indirectly increases the original heterogeneity of the whole dataset. In the first stage of HTCL, we aim to replace original dividing pattern with our newly generated heterogeneous pattern. However, when the given pattern is already heterogeneous enough like VLCS [11], The performance gain brought by heterogeneous dividing pattern generation will naturally be reduced. Therefore, considering the influence of original data heterogeneity before applying our method may be our future improvement." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by National Natural Science Foundation of China (62006207, 62037001, U20A20387), Young Elite Scientists Sponsorship Program by CAST (2021QNRC001), Zhejiang Province Natural Science Foundation (LQ21F020020), Project by Shanghai AI Laboratory (P22KS00111), Program of Zhejiang Province Science and Technology (2022C01044), the StarryNight Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SN-ZJU-SIAS-0010), and the Fundamental Research Funds for the Central Universities (226-2022-00142, 226-2022-00051)." } ]
2023-11-11
10.1145/3580305.3599481
[ { "authors": "Martin Arjovsky; Léon Bottou; Ishaan Gulrajani; David Lopez-Paz", "journal": "", "ref_id": "b0", "title": "Invariant risk minimization", "year": "2019" }, { "authors": "Haoyue Bai; Rui Sun; Lanqing Hong; Fengwei Zhou; Nanyang Ye; Han-Jia Ye; S-H Gary Chan; Zhenguo Li", "journal": "", "ref_id": "b1", "title": "Decaug: Out-of-distribution generalization via decomposed feature representation and semantic augmentation", "year": "2021" }, { "authors": "Sara Beery; Grant Van Horn; Pietro Perona", "journal": "", "ref_id": "b2", "title": "Recognition in terra incognita", "year": "2018" }, { "authors": "Gilles Blanchard; Aniket Anand Deshmukh; Urun Dogan; Gyemin Lee; Clayton Scott", "journal": "Journal of Machine Learning Research", "ref_id": "b3", "title": "Domain Generalization by Marginal Transfer Learning", "year": "2021" }, { "authors": "Junbum Cha; Sanghyuk Chun; Kyungjae Lee; Han-Cheol Cho; Seunghyun Park; Yunsung Lee; Sungrae Park", "journal": "", "ref_id": "b4", "title": "SWAD: Domain Generalization by Seeking Flat Minima", "year": "2021" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "PMLR", "ref_id": "b5", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Mathieu Chevalley; Charlotte Bunne; Andreas Krause; Stefan Bauer", "journal": "", "ref_id": "b6", "title": "Invariant causal mechanisms through distribution matching", "year": "2022" }, { "authors": "Elliot Creager; Jörn-Henrik Jacobsen; Richard Zemel", "journal": "", "ref_id": "b7", "title": "Environment Inference for Invariant Learning", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b8", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Logan Engstrom; Andrew Ilyas; Shibani Santurkar; Dimitris Tsipras; Jacob Steinhardt; Aleksander Madry", "journal": "PMLR", "ref_id": "b9", "title": "Identifying statistical bias in dataset replication", "year": "2020" }, { "authors": "Chen Fang; Ye Xu; Daniel N Rockmore", "journal": "", "ref_id": "b10", "title": "Unbiased metric learning: On the utilization of multiple datasets and web images for softening bias", "year": "2013" }, { "authors": "Yaroslav Ganin; Evgeniya Ustinova; Hana Ajakan; Pascal Germain; Hugo Larochelle; François Laviolette; Mario Marchand; Victor Lempitsky", "journal": "Journal of machine learning research", "ref_id": "b11", "title": "Domain-adversarial training of neural networks", "year": "2016" }, { "authors": "Robert Geirhos; Jörn-Henrik Jacobsen; Claudio Michaelis; Richard Zemel; Wieland Brendel; Matthias Bethge; Felix A Wichmann", "journal": "Nature Machine Intelligence", "ref_id": "b12", "title": "Shortcut learning in deep neural networks", "year": "2020" }, { "authors": "Ishaan Gulrajani; David Lopez-Paz", "journal": "", "ref_id": "b13", "title": "In Search of Lost Domain Generalization", "year": "2021" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b14", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b15", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Zeyi Huang; Haohan Wang; Eric P Xing; Dong Huang", "journal": "European Conference on Computer Vision", "ref_id": 
"b16", "title": "Self-challenging improves cross-domain generalization", "year": "2020" }, { "authors": "K Krishna; M Narasimha Murty", "journal": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)", "ref_id": "b17", "title": "Genetic K-means algorithm", "year": "1999" }, { "authors": "David Krueger; Ethan Caballero; Joern-Henrik Jacobsen; Amy Zhang; Jonathan Binas; Remi Le Priol; Aaron Courville", "journal": "", "ref_id": "b18", "title": "Out-of-distribution generalization via risk extrapolation (rex)", "year": "2020" }, { "authors": "Da Li; Yongxin Yang; Yi-Zhe Song; Timothy Hospedales", "journal": "", "ref_id": "b19", "title": "Learning to generalize: Meta-learning for domain generalization", "year": "2018" }, { "authors": "Da Li; Yongxin Yang; Yi-Zhe Song; Timothy M Hospedales", "journal": "", "ref_id": "b20", "title": "Deeper, broader and artier domain generalization", "year": "2017" }, { "authors": "Haoliang Li; Sinno Jialin Pan; Shiqi Wang; Alex C Kot", "journal": "", "ref_id": "b21", "title": "Domain generalization with adversarial feature learning", "year": "2018" }, { "authors": "Mengze Li; Han Wang; Wenqiao Zhang; Jiaxu Miao; Wei Ji; Zhou Zhao; Shengyu Zhang; Fei Wu", "journal": "", "ref_id": "b22", "title": "WINNER: Weakly-supervised hIerarchical decomposi-tioN and aligNment for spatio-tEmporal video gRounding", "year": "2023" }, { "authors": "Mengze Li; Tianbao Wang; Jiahe Xu; Kairong Han; Shengyu Zhang; Zhou Zhao; Jiaxu Miao; Wenqiao Zhang; Shiliang Pu; Fei Wu", "journal": "", "ref_id": "b23", "title": "Multi-modal Action Chain Abductive Reasoning", "year": "2023" }, { "authors": "Mengze Li; Tianbao Wang; Haoyu Zhang; Shengyu Zhang; Zhou Zhao; Wenqiao Zhang; Jiaxu Miao; Shiliang Pu; Fei Wu", "journal": "", "ref_id": "b24", "title": "Hero: Hierarchical spatiotemporal reasoning with contrastive action correspondence for end-to-end video object grounding", "year": "2022" }, { "authors": "Tian Li; Anit Kumar Sahu; Manzil Zaheer; Maziar Sanjabi; Ameet Talwalkar; Virginia Smith", "journal": "Proceedings of Machine Learning and Systems", "ref_id": "b25", "title": "Federated optimization in heterogeneous networks", "year": "2020" }, { "authors": "Ya Li; Mingming Gong; Xinmei Tian; Tongliang Liu; Dacheng Tao", "journal": "", "ref_id": "b26", "title": "Domain generalization via conditional invariant representations", "year": "2018" }, { "authors": "Yufan Liao; Qi Wu; Xing Yan", "journal": "", "ref_id": "b27", "title": "Decorr: Environment Partitioning for Invariant Learning and OOD Generalization", "year": "2022" }, { "authors": "Yong Lin; Hanze Dong; Hao Wang; Tong Zhang", "journal": "", "ref_id": "b28", "title": "Bayesian invariant risk minimization", "year": "2022" }, { "authors": "Yong Lin; Shengyu Zhu; Lu Tan; Peng Cui", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "ZIN: When and How to Learn Invariance Without Environment Partition", "year": "2022" }, { "authors": "Jiashuo Liu; Zheyuan Hu; Peng Cui; Bo Li; Zheyan Shen", "journal": "PMLR", "ref_id": "b30", "title": "Heterogeneous risk minimization", "year": "2021" }, { "authors": "Jiashuo Liu; Zheyuan Hu; Peng Cui; Bo Li; Zheyan Shen", "journal": "", "ref_id": "b31", "title": "Kernelized heterogeneous risk minimization", "year": "2021" }, { "authors": "Jiashuo Liu; Jiayun Wu; Renjie Pi; Renzhe Xu; Xingxuan Zhang; Bo Li; Peng Cui", "journal": "", "ref_id": "b32", "title": "Measure the Predictive Heterogeneity", "year": "2023" }, { "authors": "Fangrui Lv; Jian 
Liang; Shuang Li; Bin Zang; Chi Harold Liu; Ziteng Wang; Di Liu", "journal": "", "ref_id": "b33", "title": "Causality Inspired Representation Learning for Domain Generalization", "year": "2022" }, { "authors": "Zheqi Lv; Zhengyu Chen; Shengyu Zhang; Kun Kuang; Wenqiao Zhang; Mengze Li; Beng ; Chin Ooi; Fei Wu", "journal": "", "ref_id": "b34", "title": "IDEAL: Toward High-efficiency Device-Cloud Collaborative and Dynamic Recommendation System", "year": "2023" }, { "authors": "Zheqi Lv; Feng Wang; Shengyu Zhang; Kun Kuang; Hongxia Yang; Fei Wu", "journal": "", "ref_id": "b35", "title": "Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling", "year": "2022" }, { "authors": "Zheqi Lv; Wenqiao Zhang; Shengyu Zhang; Kun Kuang; Feng Wang; Yongwei Wang; Zhengyu Chen; Tao Shen; Hongxia Yang; Beng ; Chin Ooi; Fei Wu", "journal": "", "ref_id": "b36", "title": "DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization", "year": "2023" }, { "authors": "Divyat Mahajan; Shruti Tople; Amit Sharma", "journal": "PMLR", "ref_id": "b37", "title": "Domain generalization using causal matching", "year": "2021" }, { "authors": "Qiaowei Miao; Junkun Yuan; Kun Kuang", "journal": "", "ref_id": "b38", "title": "Domain Generalization via Contrastive Causal Learning", "year": "2022" }, { "authors": "Milton Llera Montero; J H Casimir; Rui Ponte Ludwig; Gaurav Costa; Jeffrey Malhotra; Bowers", "journal": "", "ref_id": "b39", "title": "The role of disentanglement in generalisation", "year": "2020" }, { "authors": "Hyeonseob Nam; Hyunjae Lee; Jongchan Park; Wonjun Yoon; Donggeun Yoo", "journal": "", "ref_id": "b40", "title": "Reducing Domain Gap by Reducing Style Bias", "year": "2021" }, { "authors": "Ziwei Niu; Junkun Yuan; Xu Ma; Yingying Xu; Jing Liu; Yen-Wei Chen; Ruofeng Tong; Lanfen Lin", "journal": "IEEE Transactions on Multimedia", "ref_id": "b41", "title": "Knowledge Distillation-based Domain-invariant Representation Learning for Domain Generalization", "year": "2023" }, { "authors": "Aaron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b42", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Praneeth Vihari Piratla; Sunita Netrapalli; Sarawagi", "journal": "PMLR", "ref_id": "b43", "title": "Efficient domain generalization via common-specific low-rank decomposition", "year": "2020" }, { "authors": "Benjamin Recht; Rebecca Roelofs; Ludwig Schmidt; Vaishaal Shankar", "journal": "PMLR", "ref_id": "b44", "title": "Do imagenet classifiers generalize to imagenet?", "year": "2019" }, { "authors": "Yangjun Ruan; Yann Dubois; Chris J Maddison", "journal": "", "ref_id": "b45", "title": "Optimal representations for covariate shift", "year": "2021" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev Satheesh; Sean Ma; Zhiheng Huang; Andrej Karpathy; Aditya Khosla; Michael Bernstein", "journal": "International journal of computer vision", "ref_id": "b46", "title": "Imagenet large scale visual recognition challenge", "year": "2015" }, { "authors": "Shiori Sagawa; * ; Pang Wei Koh; * Tatsunori; B Hashimoto; Percy Liang", "journal": "", "ref_id": "b47", "title": "Distributionally Robust Neural Networks", "year": "2020" }, { "authors": "Yuge Shi; Jeffrey Seely; Philip Torr; N Siddharth; Awni Hannun; Nicolas Usunier; Gabriel Synnaeve", "journal": "", "ref_id": "b48", "title": "Gradient Matching for Domain Generalization", "year": "2022" }, { 
"authors": "Baochen Sun; Kate Saenko", "journal": "Springer", "ref_id": "b49", "title": "Deep coral: Correlation alignment for deep domain adaptation", "year": "2016" }, { "authors": "Christopher Tran; Elena Zheleva", "journal": "", "ref_id": "b50", "title": "Improving Data-driven Heterogeneous Treatment Effect Estimation Under Structure Uncertainty", "year": "2022" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b51", "title": "Visualizing data using t-SNE", "year": "2008" }, { "authors": "N Vladimir; Vapnik", "journal": "IEEE transactions on neural networks", "ref_id": "b52", "title": "An overview of statistical learning theory", "year": "1999" }, { "authors": "Hemanth Venkateswara; Jose Eusebio; Shayok Chakraborty; Sethuraman Panchanathan", "journal": "", "ref_id": "b53", "title": "Deep hashing network for unsupervised domain adaptation", "year": "2017" }, { "authors": "Riccardo Volpi; Hongseok Namkoong; Ozan Sener; John C Duchi; Vittorio Murino; Silvio Savarese", "journal": "Advances in neural information processing systems", "ref_id": "b54", "title": "Generalizing to unseen domains via adversarial data augmentation", "year": "2018" }, { "authors": "Tan Wang; Zhongqi Yue; Jianqiang Huang; Qianru Sun; Hanwang Zhang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b55", "title": "Self-supervised learning disentangled group representation as feature", "year": "2021" }, { "authors": "Anpeng Wu; Junkun Yuan; Kun Kuang; Bo Li; Runze Wu; Qiang Zhu; Yueting Zhuang; Fei Wu", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b56", "title": "Learning Decomposed Representations for Treatment Effect Estimation", "year": "2023" }, { "authors": "Jun Wu; Jingrui He", "journal": "", "ref_id": "b57", "title": "Indirect Invisible Poisoning Attacks on Domain Adaptation", "year": "2021" }, { "authors": "Qinwei Xu; Ruipeng Zhang; Ya Zhang; Yanfeng Wang; Qi Tian", "journal": "", "ref_id": "b58", "title": "A fourier-based framework for domain generalization", "year": "2021" }, { "authors": "Huan Shen Yan; Nanxiang Song; Lincan Li; Liu Zou; Ren", "journal": "", "ref_id": "b59", "title": "Improve unsupervised domain adaptation with mixup training", "year": "2020" }, { "authors": "Junkun Yuan; Xu Ma; Defang Chen; Kun Kuang; Fei Wu; Lanfen Lin", "journal": "", "ref_id": "b60", "title": "Label-Efficient Domain Generalization via Collaborative Exploration and Generalization", "year": "2022" }, { "authors": "Junkun Yuan; Xu Ma; Defang Chen; Kun Kuang; Fei Wu; Lanfen Lin", "journal": "International Journal of Computer Vision", "ref_id": "b61", "title": "Domain-specific bias filtering for single labeled domain generalization", "year": "2023" }, { "authors": "Junkun Yuan; Xu Ma; Ruoxuan Xiong; Mingming Gong; Xiangyu Liu; Fei Wu; Lanfen Lin; Kun Kuang", "journal": "ACM Transactions on Knowledge Discovery from Data", "ref_id": "b62", "title": "Instrumental Variable-Driven Domain Generalization with Unobserved Confounders", "year": "2023" }, { "authors": "Fengda Zhang; Kun Kuang; Long Chen; Yuxuan Liu; Chao Wu; Jun Xiao", "journal": "", "ref_id": "b63", "title": "Fairness-aware Contrastive Learning with Partially Annotated Sensitive Attributes", "year": "2023" }, { "authors": "Hanlin Zhang; Yi-Fan Zhang; Weiyang Liu; Adrian Weller; Bernhard Schölkopf; Eric P Xing", "journal": "", "ref_id": "b64", "title": "Towards principled disentanglement for domain generalization", "year": "2022" }, { 
"authors": "Min Zhang; Siteng Huang; Wenbin Li; Donglin Wang", "journal": "Springer", "ref_id": "b65", "title": "Tree Structure-Aware Few-Shot Image Classification via Hierarchical Aggregation", "year": "2022-10-23" }, { "authors": "Marvin Zhang; Henrik Marklund; Abhishek Gupta; Sergey Levine; Chelsea Finn", "journal": "", "ref_id": "b66", "title": "Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift", "year": "2020" }, { "authors": "Shengyu Zhang; Dong Yao; Zhou Zhao; Tat-Seng Chua; Fei Wu", "journal": "", "ref_id": "b67", "title": "Causerec: Counterfactual user sequence synthesis for sequential recommendation", "year": "2021" }, { "authors": "Fan Zhou; Zhuqing Jiang; Changjian Shui; Boyu Wang; Brahim Chaib-Draa", "journal": "", "ref_id": "b68", "title": "Domain generalization with optimal transport and metric learning", "year": "2020" }, { "authors": "Kaiyang Zhou; Yongxin Yang; Yu Qiao; Tao Xiang", "journal": "", "ref_id": "b69", "title": "Domain Generalization with MixStyle", "year": "2021" }, { "authors": "Xiao Zhou; Yong Lin; Renjie Pi; Weizhong Zhang; Renzhe Xu; Peng Cui; Tong Zhang", "journal": "PMLR", "ref_id": "b70", "title": "Model agnostic sample reweighting for out-of-distribution learning", "year": "2022" }, { "authors": "Xiao Zhou; Yong Lin; Weizhong Zhang; Tong Zhang", "journal": "PMLR", "ref_id": "b71", "title": "Sparse invariant risk minimization", "year": "2022" }, { "authors": "Didi Zhu; Yincuan Li; Junkun Yuan; Zexi Li; Yunfeng Shao; Kun Kuang; Chao Wu", "journal": "", "ref_id": "b72", "title": "Universal Domain Adaptation via Compressive Attention Matching", "year": "2023" }, { "authors": "Zhao Ziyu; Kun Kuang; Bo Li; Peng Cui; Runze Wu; Jun Xiao; Fei Wu", "journal": "Data Mining and Knowledge Discovery", "ref_id": "b73", "title": "Differentiated matching for individual and average treatment effect estimation", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 369.72, 271.7, 189.02, 9.47 ], "formula_id": "formula_0", "formula_text": "L 𝐸𝑅𝑀 = 𝐸 (𝑥 𝑖 ,𝑦 𝑖 ) ∈ D [ℓ (𝑤 (𝜙 (𝑥 𝑖 )), 𝑦 𝑖 )],(1)" }, { "formula_coordinates": [ 3, 358.82, 334.91, 199.92, 22.02 ], "formula_id": "formula_1", "formula_text": "L 𝐼𝑅𝑀 = ∑︁ 𝜖 ∈𝜀 𝑡𝑟 R 𝜖 (𝜙, 𝑤) + 𝜆 ∇ w R 𝜖 ( w • 𝜙) .(2)" }, { "formula_coordinates": [ 4, 58.58, 528.1, 214.73, 28.45 ], "formula_id": "formula_2", "formula_text": "L 𝐻 = ∑︁ (𝑥,𝑦) 𝑙𝑜𝑔( 𝐸 { (𝑥 ′ ,𝑦 ′ ) |𝑑 𝑥 ′ =𝑑 𝑥 ,𝑦 ′ =𝑦 } [||𝜙 (𝑥) -𝜙 (𝑥 ′ )|| 2 ] 𝐸 { (𝑥 ′′ ,𝑦 ′′ ) |𝑑 𝑥 ′′ ≠𝑑 𝑥 ,𝑦 ′′ =𝑦 } [||𝜙 (𝑥) -𝜙 (𝑥 ′′ )|| 2 ]" }, { "formula_coordinates": [ 4, 53.35, 586.5, 240.69, 19.2 ], "formula_id": "formula_4", "formula_text": "𝑑 𝑥 ′ = 𝑑 𝑥 (𝑑 𝑥 ′′ ≠ 𝑑 𝑥 )." }, { "formula_coordinates": [ 4, 373.14, 251.02, 185.6, 21.99 ], "formula_id": "formula_5", "formula_text": "L 𝑣𝑎𝑟 = ∑︁ (𝑥,𝑦) ℓ (𝑤 (𝜙 (𝑥)), 𝑦) + 𝜆 1 L 𝐻 .(4)" }, { "formula_coordinates": [ 4, 388.52, 698.5, 98.83, 11.14 ], "formula_id": "formula_6", "formula_text": "𝑃 - 𝑥 = {𝑥 ′ |𝑑 𝑥 ′ = 𝑑 𝑥 , 𝑦 ′ ≠ 𝑦}." }, { "formula_coordinates": [ 5, 124.92, 113.07, 97.72, 11.14 ], "formula_id": "formula_7", "formula_text": "𝑃 + 𝑥 = {𝑥 ′ |𝑑 𝑥 ′ ≠ 𝑑 𝑥 , 𝑦 ′ = 𝑦}." }, { "formula_coordinates": [ 5, 329.73, 420.62, 204.26, 27.21 ], "formula_id": "formula_9", "formula_text": "L 𝐻 = - ∑︁ (𝑋 𝑖 ,𝑦 𝑖 ) ∑︁ (𝑑 𝑚 ,𝑑 𝑛 ) log 𝑑𝑖𝑠𝑡 (𝜙 (𝑋 𝑑 𝑚 𝑖 ), 𝜙 (𝑋 𝑑 𝑛 𝑖 )) 𝑑𝑖𝑠𝑡 (𝜙 (𝑋 𝑑 𝑚 𝑖 )) + 𝑑𝑖𝑠𝑡 (𝜙 (𝑋 𝑑 𝑛 𝑖 ))" }, { "formula_coordinates": [ 5, 355.98, 573.25, 202.76, 37.33 ], "formula_id": "formula_11", "formula_text": "𝑑𝑖𝑠𝑡 (𝜙 1 , 𝜙 2 ) = |𝜙 1 | -1 𝑖=0 |𝜙 2 | -1 𝑗=0 ∥𝜙 1 [𝑖] -𝜙 2 [ 𝑗] ∥ 2 |𝜙 1 | × |𝜙 2 | .(7)" }, { "formula_coordinates": [ 5, 370.04, 674.58, 185.53, 36.02 ], "formula_id": "formula_12", "formula_text": "𝑑𝑖𝑠𝑡 (𝜙) = |𝜙 | -2 𝑖=0 |𝜙 | -1 𝑗=𝑖+1 ∥𝜙 [𝑖] -𝜙 [ 𝑗] ∥ 2 |𝜙 | 2 -|𝜙 | . (8" }, { "formula_coordinates": [ 5, 555.57, 696.86, 3.17, 7.94 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 6, 142.2, 154.9, 152.38, 9.19 ], "formula_id": "formula_14", "formula_text": "𝐻 = 𝐸 𝐵 ⊂ D [L 𝐻 ].(9)" }, { "formula_coordinates": [ 6, 324.65, 413.86, 230.67, 19.44 ], "formula_id": "formula_15", "formula_text": "L 𝑑𝑖𝑣𝑖𝑑𝑒 = 𝐻 (𝑓 (𝜙 * (X)) 𝑎𝑣𝑔 ) + ( 1 |𝜖 𝑡𝑟 | -min(𝑓 (𝜙 * (X)) 𝑎𝑣𝑔 )). (10" }, { "formula_coordinates": [ 6, 555.32, 419.93, 3.42, 7.94 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 7, 90.39, 456.98, 204.19, 20.54 ], "formula_id": "formula_17", "formula_text": "𝑚𝑚𝑑 (𝐷 1 , 𝐷 2 ) = 1 4𝑑 2 ∥𝐶𝑜𝑣 (𝐷 1 ) -𝐶𝑜𝑣 (𝐷 2 )∥ 2 𝐹 .(11)" }, { "formula_coordinates": [ 7, 81.31, 529.54, 178.07, 25.79 ], "formula_id": "formula_18", "formula_text": "L 𝑐𝑜𝑛𝑡 = ∑︁ 𝜖 ∈𝜀 𝑡𝑟 ∑︁ 𝑦 ∈ Y log(1 + 𝑚𝑚𝑑 (𝜙 (𝑋 𝑠 ), 𝜙 (𝑋 𝑝𝑜𝑠 )) 𝑚𝑚𝑑 (𝜙 (𝑋 𝑠 ), 𝜙 (𝑋 𝑛𝑒𝑔 ))" }, { "formula_coordinates": [ 7, 91.17, 630.4, 203.41, 23.28 ], "formula_id": "formula_20", "formula_text": "L 𝑚𝑚𝑑 = ∑︁ 𝜖 1 ,𝜖 2 ∈𝜀 𝑡𝑟 , 𝜖 1 ≠𝜖 2 𝑚𝑚𝑑 (𝜙 (𝑋 𝜖 1 ), 𝜙 (𝑋 𝜖 2 )).(13)" }, { "formula_coordinates": [ 7, 59.2, 672.02, 231.97, 21.99 ], "formula_id": "formula_21", "formula_text": "L 𝑝𝑟𝑒𝑑𝑖𝑐𝑡 = ∑︁ (𝑥,𝑦) ℓ (𝑤 (𝜙 (𝑥)), 𝑦) + 𝜆 𝑐𝑜𝑛𝑡 L 𝑐𝑜𝑛𝑡 + 𝜆 𝑚𝑚𝑑 L 𝑚𝑚𝑑 . (14" }, { "formula_coordinates": [ 7, 291.16, 675.84, 3.42, 7.94 ], "formula_id": "formula_22", "formula_text": ")" } ]
Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization
Domain generalization (DG) is a prevalent problem in real-world applications, which aims to train well-generalized models for unseen target domains by utilizing several source domains. Since domain labels, i.e., which domain each data point is sampled from, naturally exist, most DG algorithms treat them as a kind of supervision information to improve generalization performance. However, the original domain labels may not be the optimal supervision signal due to the lack of domain heterogeneity, i.e., the diversity among domains. For example, a sample in one domain may be closer to another domain; its original domain label can thus act as noise that disturbs generalization learning. Although some methods try to solve this by re-dividing domains and applying the newly generated dividing pattern, the pattern they choose may not be the most heterogeneous one due to the lack of a metric for heterogeneity. In this paper, we point out that domain heterogeneity mainly lies in variant features under the invariant learning framework. With contrastive learning, we propose a learning potential-guided metric for domain heterogeneity by promoting the learning of variant features. We then note the difference between seeking variance-based heterogeneity and training an invariance-based generalizable model. We thus propose a novel method called Heterogeneity-based Two-stage Contrastive Learning (HTCL) for the DG task. In the first stage, we generate the most heterogeneous dividing pattern with our contrastive metric. In the second stage, we employ an invariance-aimed contrastive learning that utilizes the generated heterogeneous dividing pattern to train the final generalizable model. Experiments on standard domain generalization benchmarks demonstrate the effectiveness of HTCL.
Yunze Tong; Junkun Yuan; Min Zhang; Keli Zhang; Kun Kuang
[ { "figure_caption": "Figure 1 :1Figure 1: The toy example for illustrating the problem of lack in heterogeneity. See main text for details.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "- 𝑥 𝑚𝑚𝑑 (𝜙 (𝑥 ), 𝜙 (𝑥 -) ) 𝑥-∈𝑃 - 𝑥 𝑚𝑚𝑑 (𝜙 (𝑥 ), 𝜙 (𝑥 -) ) + 𝑥+ ∈𝑃 + 𝑥 𝑚𝑚𝑑 (𝜙 (𝑥 ), 𝜙 (𝑥 + ) )", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "5. 1 . 111Heterogeneity exploration module. A fixed dividing pattern and raw samples are given as input. Our goal includes:• learning an adequate 𝜙 guided by Equation (4) and generating the low-dimension representation𝑥 𝑖 𝜙 (𝑥 𝑖 ) of all samplesfor the next pattern generation module. • measuring the heterogeneity of the input dividing pattern with the metric L 𝐻 quantitatively.", "figure_data": "", "figure_id": "fig_2", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The framework for the first stage of HTCL. M 1 denotes the heterogeneity exploration module, and M 2 refers to the pattern generation module. M 1 outputs a well-trained feature representation for M 2 . M 2 returns a newly generated dividing pattern to M 1 . The input and output of each module are shown in blue boxes. The final optimal output 𝑑 𝑥 𝑖 * , which will be used as the domain labels in ultimate invariant learning, is shown in green box. It updates according to the heterogeneity metric during the iterations.For the first goal, we can just update 𝜙 and 𝑤 through Equation(4). When it refers to constructing the corresponding positive and negative pairs for L 𝑣𝑎𝑟 , we only construct them within every batch instead of the whole dataset.Here we re-write L 𝐻 which has been mentioned in Equation (3). L 𝐻 is our heterogeneity metric, and we also apply it in Equation (4) to guide heterogeneity exploration. Considering the calculation is conducted per batch, we re-formalize L 𝐻 more detailedly as:", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: T-SNE[52] of the feature representations during generating domain labels in three methods. All these three methods generate dividing patterns with the help of learned feature representations. We obtain the features and compare samples' relative distances with t-SNE. The colors of left column reflect classes while the ones of right denote domains.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Results of the sensitivity analysis with respect to different hyper-parameters.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": ") |𝑑 𝑥 𝑖 =𝜖 } [ℓ (𝑤 (𝜙 (𝑥 𝑖 )), 𝑦 𝑖 )] denotes the per-", "figure_data": "domain classification loss. The second term in Equation (2) enforcessimultaneous optimality of the 𝜙 and 𝑤 in every domain. w is aconstant scalar multiplier of 1.0 for each output dimension.No matter whether to use domain labels, the aims of DG methodsare consistent: training a robust model which can generalize tounseen domains with only several source domains.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "|𝐵 | } ⊂ D. For each kind of label 𝑦 𝑖 ∈ Y, 𝑋 𝑖 denotes the set of all images which own class label 𝑦 𝑖 in the batch 𝐵. 𝑑 𝑋 𝑖 denotes the set of all domains which contain at least one sample in 𝑋 𝑖 . ∀(𝑑 𝑚 , 𝑑 𝑛 ) s.t. 
𝑑 𝑚 ∈ 𝑑 𝑋 𝑖 , 𝑑 𝑛 ∈ 𝑑 𝑋 𝑖 , 𝑑 𝑚 ≠ 𝑑 𝑛 ,", "figure_data": "𝑋 𝑑 𝑚", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Considering that heterogeneity exploration module inevitably uses class-label as supervision information, the input of Algorithm 1 The pseudo code of HTCL Input: training samples D = X × Y, original domain labels 𝑑 𝑥 𝑖 𝑖𝑛𝑖𝑡 , hyperparamters 𝜆 1 , 𝜆 𝑐𝑜𝑛𝑡 , 𝜆 𝑚𝑚𝑑 ,𝑇 1 ,𝑇 2 // The first stage: heterogeneous dividing pattern generation Initialization: 𝜙, 𝑤 initialized with pretrained ResNet, 𝑑 𝑥 𝑖 ← 𝑑 𝑥 𝑖 𝑖𝑛𝑖𝑡 , 𝐻 𝜙, 𝑤 this module will naturally carry invariant features. Suppose that 𝜙 * can mainly extract invariant features of samples 𝑥 1 , 𝑥 2 , 𝑥 3 in the early stage. If 𝑥 1 and 𝑥 2", "figure_data": "end ifGenerate a new dividing pattern 𝑑 𝑥 𝑖 with Equation (10)until reach 𝑇 1 iterations// The second stage: prediction with heterogeneous divid-ing patternInitialization: 𝜙, 𝑤 initialized with pretrained ResNet again, 𝑑 𝑥 𝑖", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The comparison with other DG methods in standard DomainBed[14] benchmark. We report average accuracy on all target domains under three runs. The results of baselines are from their original corresponding literatures or DomainBed. We highlight the best results with boldface.", "figure_data": "AlgorithmPACSVLCSOfficeHome TerraIncognita Avg.ERM [53]85.5±0.2 77.5±0.466.5±0.346.1±1.868.9IRM [1]83.5±0.8 78.5±0.564.3±2.247.6±0.868.4GroupDRO [48]84.4±0.8 76.7±0.666.0±0.743.2±1.167.5Mixup [60]84.6±0.6 77.4±0.668.1±0.347.9±0.869.5MLDG [20]84.9±1.0 77.2±0.466.8±0.647.7±0.969.2CORAL [50]86.2±0.3 78.8±0.668.7±0.347.6±1.070.4MMD [22]84.6±0.5 77.5±0.966.3±0.142.2±1.667.7DANN [12]83.7±0.4 78.6±0.465.9±0.646.7±0.568.7CDANN [27]82.6±0.9 77.5±0.165.7±1.345.8±1.667.9MTL [4]84.6±0.5 77.2±0.466.4±0.545.6±1.268.5SagNet [41]86.3±0.2 77.8±0.568.1±0.148.6±1.070.2ARM [67]85.1±0.4 77.6±0.364.8±0.345.5±0.368.3VREx [19]84.9±0.6 78.3±0.266.4±0.646.4±0.669.0RSC [17]85.2±0.9 77.1±0.565.5±0.946.6±1.068.6CAD [46]85.2±0.9 78.0±0.567.4±0.247.3±2.269.5CausalRL-CORAL [7] 85.8±0.1 77.5±0.668.6±0.347.3±0.869.8CausalRL-MMD [7]84.0±0.8 77.6±0.465.7±0.646.3±0.968.4HTCL (Ours)88.6±0.3 77.6±0.571.3±0.650.9±1.972.1", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Summary of the ablation study on PACS. The second row block contains the ablation on the modules of the first stage (generating heterogeneous dividing pattern), while the third row block denotes the ablation on the second stage (predicting with heterogeneous dividing pattern). We record the degradation compared with original method in the last column.", "figure_data": "ACPSAvg. (Δ)", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The comparison results among the methods which also generate new dividing pattern for DG on PACS[21] dataset. In-domain results denote the testing performance on the target seen domains. Out-of-domain results denote that on the target unseen domain. The standard deviation of testing runs is reported after the average testing accuracies.", "figure_data": "Out-of-domain In-domainEIIL [8]81.7 ±0.693.2 ±0.3IP-IRM [56]81.7 ±0.497.1 ±0.6KerHRM [32]83.9 ±2.397.5 ±0.1HTCL (Ours)88.6 ±0.397.8 ±0.1", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "6.4.1 Effects of applying heterogeneous dividing pattern. We first totally drop the stage of re-allocating domain labels. 
Instead, we use the original domain labels, which turns into the common setting of DomainBed. As shown in Table2, dropping the procedure of digging heterogeneity lowers the predictive accuracy by 0.4%, which shows the significance of creating domain-heterogeneous dataset in DG. Effects of the contrastive term for utilizing heterogeneity. As for the ultimate training, we add an extra contrastive term in the loss function to improve model's generalization ability. We conduct the experiments without this term. As shown in Table2, the average accuracy reduces from 88.6% to 88.3% when dropping this term.6.4.4 Summary of the ablation study. Above ablation study confirms that every part of HTCL plays a role in helping generalization.", "figure_data": "dividing patterns has similar results with totally dropping the firstmodule (both of them obtain 88.2% testing accuracy on PACS).6.4.36.4.2 Effects of our guidance to generate specific candidate patterns.Generating candidate dividing patterns is necessary for learningvariant features. In pattern generation module, we design Equa-tion (10) to guide the split of domains. Here we replace this mod-ule with a common K-Means cluster algorithm on the given low-dimension representation. As seen in Table 2, our original patterngeneration method outperforms the K-Means method by 0.4%. Infact, simply using the K-means algorithm to generate candidate", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The results of adding our newly generated dividing pattern to common DG methods on PACS[21] dataset. The heterogeneous domain labels are generated by HTCL's first stage. We then apply them on common DG methods. The gain of accuracies is reported in the last column.", "figure_data": "Original With new domain labelsΔGroupDRO [48] 84.4 ±0.888.1 ±0.2+3.7CORAL [50]86.2 ±0.388.3 ±0.4+2.1MMD [22]84.7 ±0.587.4 ±0.1+2.7MTL [4]84.6 ±0.588.4 ±0.4+3.8", "figure_id": "tab_8", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The sensitivity analysis for 𝑇 1 , 𝜆 1 and 𝜆 𝑐𝑜𝑛𝑡 on PACS[21] dataset.", "figure_data": "𝑇 1135 (default)79Test Acc. (%) 88.287.788.687.8 88.5𝜆 101e-31e-2 (default) 1e-11Test Acc. (%) 87.787.388.688.1 87.7𝜆 𝑐𝑜𝑛𝑡0.5 1.0 (default)1.52.02.5Test Acc. (%) 87.588.687.287.6 87.4", "figure_id": "tab_9", "figure_label": "5", "figure_type": "table" } ]
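Building on the notation in the table captions above (X_i: all samples of class y_i in a batch; d_m, d_n: two different domains that both contain samples of that class), the batch-level heterogeneity metric L_H reconstructed in the formulas field earlier can be sketched as follows. This is a self-contained, minimal PyTorch sketch under those assumptions; the helper names are hypothetical and not the authors' code.

```python
import itertools
import torch

def _pair_dist(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Average pairwise L2 distance between two feature sets (the dist(phi_1, phi_2) term above).
    return torch.cdist(a, b, p=2).mean()

def _self_dist(a: torch.Tensor) -> torch.Tensor:
    # Within-set spread: sum over i < j pairs divided by |phi|^2 - |phi| (the dist(phi) term above).
    n = a.size(0)
    return torch.triu(torch.cdist(a, a, p=2), diagonal=1).sum() / (n * n - n)

def heterogeneity_metric(features: torch.Tensor, labels: torch.Tensor,
                         domains: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """For every class in the batch and every pair of domains containing that class,
    accumulate -log( cross-domain distance / sum of within-domain spreads )."""
    loss = features.new_zeros(())
    for y in labels.unique():
        cls = labels == y
        doms = domains[cls].unique().tolist()
        for d_m, d_n in itertools.combinations(doms, 2):
            f_m = features[cls & (domains == d_m)]
            f_n = features[cls & (domains == d_n)]
            if f_m.size(0) < 2 or f_n.size(0) < 2:
                continue  # the within-domain spread needs at least two samples per group
            ratio = _pair_dist(f_m, f_n) / (_self_dist(f_m) + _self_dist(f_n) + eps)
            loss = loss - torch.log(ratio + eps)
    return loss
```

As described above, this quantity is computed per batch and averaged over batches to score how heterogeneous a candidate dividing pattern is.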
[{"Category": "Methodological Basis", "Citation": "[9,16]", "Explanation": "The cited works provide evidence of the extraordinary performance of machine learning on various tasks, which serves as a basis for the discussion of the common paradigm of ERM in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[10,13,37,45]", "Explanation": "The cited works highlight the issue of distribution shifts and OOD problems in training and testing data, which the citing paper further explores in the context of developing methods for domain generalization."}, {"Category": "Supporting Evidence", "Citation": "[13]", "Explanation": "The cited work highlights the issue of learning variant features for prediction in ERM-based approaches, which the citing paper uses to support the need for a more robust approach to domain adaptation."}, {"Category": "Methodological Basis", "Citation": "[1,4,35,36,48]", "Explanation": "The cited works provide a basis for the use of original domain labels as extra supervision signals in the citing paper to improve domain adaptation performance."}, {"Category": "Extension or Continuation", "Citation": "[22,50,69]", "Explanation": "The cited works extend the use of statistical metrics such as MMD and Wasserstein distance in fine-grained methods for domain adaptation, which the citing paper further builds upon to improve robustness to complex domains."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work highlights the importance of data heterogeneity in boosting generalization learning, which the citing paper builds upon by generating a dividing pattern to re-separate training samples into newly generated domains."}, {"Category": "Extension or Continuation", "Citation": "[31]", "Explanation": "The cited work is mentioned in the context of generating a dividing pattern to re-separate training samples into newly generated domains, which the citing paper expands upon by applying the pattern to achieve favorable accuracies in their experiments."}, {"Category": "Extension or Continuation", "Citation": "[56]", "Explanation": "The cited work is also mentioned in the context of generating a dividing pattern to re-separate training samples into newly generated domains, which the citing paper further extends by applying the pattern to achieve favorable accuracies in their experiments."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The cited work on GroupDRO is used as a method to optimize the risk of specific groups in the context of domain generalization in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work on IRM is used as a method to optimize the risk in the context of domain generalization in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work on feature alignment is used as a method to guide the learning process in the context of domain generalization in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work on decorrelation is used as a method to guide the learning process in the context of domain generalization in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[26,33,51]", "Explanation": "The cited works provide formulations for data heterogeneity in other machine learning problems, which the citing paper uses to support the discussion on domain heterogeneity in the context of data re-dividing."}, 
{"Category": "Extension or Continuation", "Citation": "[8]", "Explanation": "The cited work, EIIL, is one of the first methods to notice the gain brought by re-dividing domains in DG. The citing paper extends this work by further exploring the potential of re-dividing domains in DG."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work, HRM, designs a framework where dividing domains and performing invariant learning are optimized jointly. The citing paper adopts this framework as a methodological basis for its own research on domain heterogeneity."}, {"Category": "Data Source", "Citation": "[32]", "Explanation": "The cited work, KerHRM, develops the framework of HRM by integrating the procedure with kernel methods. The citing paper uses this work as a data source to better capture features in the context of domain heterogeneity."}, {"Category": "Extension or Continuation", "Citation": "[56]", "Explanation": "The cited work, IP-IRM, also takes the strategy of dividing domains to better disentangle features from a group-theoretic view. The citing paper extends this work by further exploring the potential of re-dividing domains in DG from a group-theoretic perspective."}, {"Category": "Methodological Basis", "Citation": "[34,38,39,63,74]", "Explanation": "The cited works provide a pre-defined causal graph to learn key features involving class labels, which the citing paper adopts to achieve the goal of invariant learning in DG."}, {"Category": "Extension or Continuation", "Citation": "[24,40,42,44,57,65,71]", "Explanation": "The cited works consider the disentangled method to split features completely, which the citing paper extends by exploring new dimensions and variables in the same research area of invariant learning in DG."}, {"Category": "Data Source", "Citation": "[30]", "Explanation": "The cited work ZIN uses explicit auxiliary information to help learn stable and invariant features, which the citing paper acknowledges as a promising direction in the field of invariant learning in DG."}, {"Category": "Supporting Evidence", "Citation": "[43]", "Explanation": "The cited work Contrastive Predictive Coding (CPC) is the prototype of contrastive learning, which the citing paper uses to support the theoretical guarantee of InfoNCE loss in the field of contrastive learning."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work provides a foundation for the use of contrastive learning in self-supervised learning, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work is another example of the use of contrastive learning in self-supervised learning, further supporting the method used in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[23]", "Explanation": "The cited work demonstrates the effectiveness of contrastive learning in modeling unlabeled data, providing evidence for the use of this method in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[25]", "Explanation": "The cited work further supports the use of contrastive learning in various tasks, highlighting its ability to make hidden representations of samples from the same class close to each other."}, {"Category": "Extension or Continuation", "Citation": "[64]", "Explanation": "The cited work extends the use of contrastive learning to a new task, which the citing paper may build upon in their research to explore additional 
applications of the method."}, {"Category": "Extension or Continuation", "Citation": "[68]", "Explanation": "The cited work provides another example of the use of contrastive learning in a new context, which the citing paper may build upon to further expand the range of tasks where the method can be applied."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The cited work on Empirical Risk Minimization (ERM) provides a method for minimizing the loss over all samples in the training data, which the citing paper adopts in their research to minimize the loss in their classification task."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work [8] is used as a reference for the method of generating domain labels in the first stage of the HTCL method proposed in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work [31] is referenced for the process of dividing domains and learning invariant features in the second stage of the HTCL method."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work [32] is used as a reference for the process of generating domain labels and learning invariant features in the second stage of the HTCL method."}, {"Category": "Methodological Basis", "Citation": "[56]", "Explanation": "The cited work [56] is mentioned in the context of existing methods for generating domain labels and learning invariant features, which the HTCL method proposed in the citing paper is compared to."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The cited work CORAL is used as a method to calculate the distances between different domain representations in the citing paper, providing a specific technique for data analysis and model training."}, {"Category": "Data Source", "Citation": "[21]", "Explanation": "The dataset PACS is used as a source of images for the evaluation of the proposed HTCL method in the citing paper."}, {"Category": "Data Source", "Citation": "[54]", "Explanation": "The dataset OfficeHome is used as a source of images for the evaluation of the proposed HTCL method in the citing paper."}, {"Category": "Data Source", "Citation": "[11]", "Explanation": "The dataset VLCS is used as a source of images for the evaluation of the proposed HTCL method in the citing paper."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The dataset TerraIncognita is used as a source of images for the evaluation of the proposed HTCL method in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The evaluation metric used in the cited work, DomainBed benchmark, serves as the basis for the training and evaluation protocol followed in the citing paper for the domain generalization task."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work, ResNet-18, serves as the backbone feature extractor in the first phase of the research conducted in the citing paper. The authors adapt the pre-trained model to change the last FC layer output to a low dimension for efficient computation."}, {"Category": "Extension or Continuation", "Citation": "[47]", "Explanation": "The cited work, ImageNet, is used as a pre-training dataset in the first phase of the research conducted in the citing paper. 
The authors build upon the pre-training by changing the last FC layer output to a low dimension for efficient computation."}, {"Category": "Data Source", "Citation": "The cited work in the last sentence of the input is not provided in the format of a number or author name. However, it is worth noting that the authors of the citing paper refer to a specific training stage and a set of networks that are used in the research conducted in the cited work. This indicates a reliance on external data and pre-existing models as a foundational element for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work provides the backbone network architecture (ResNet-50) that the citing paper uses in their research, serving as the methodological basis for the model selection process."}, {"Category": "Data Source", "Citation": "[47]", "Explanation": "The cited work (ImageNet) is the data source for the pre-training of the backbone network in the citing paper, providing the necessary data for the model to be trained and used in the research."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work (SWAD) is used for weight averaging in the model selection process, serving as a methodological basis for the model training and selection in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[50]", "Explanation": "The cited work, CORAL, is used to compare the performance of the citing paper (HTCL) and show that HTCL achieves better results on most datasets, including a performance gain of 1.7% on average."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work, KerHRM, provides a method for domain generalization that the citing paper builds upon in the implementation of the heterogeneous dividing pattern for domain labels in the benchmark framework."}, {"Category": "Supporting Evidence", "Citation": "[52]", "Explanation": "The cited work, t-SNE, is used to visualize the feature representations of three methods in the citing paper, providing a visual representation of the features learned in the process of domain label generation."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work introduces the K-Means cluster algorithm, which the citing paper replaces the candidate pattern generating guidance in the first stage of the process of generating dividing patterns to improve the performance of the model in enhancing generalization."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The cited work, Group-DRO, is a method that the citing paper adopts in the first stage of HTCL to generate new heterogeneous domain labels. The citing paper uses the method to improve the performance of DG methods by applying the generated labels to distinguish different source domains during training."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The cited work, CORAL, is a method that the citing paper adopts in the first stage of HTCL to generate new heterogeneous domain labels. The citing paper uses the method to improve the performance of DG methods by applying the generated labels to distinguish different source domains during training."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work, MMD, is a method that the citing paper adopts in the first stage of HTCL to generate new heterogeneous domain labels. 
The citing paper uses the method to improve the performance of DG methods by applying the generated labels to distinguish different source domains during training."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work, MTL, is a method that the citing paper adopts in the first stage of HTCL to generate new heterogeneous domain labels. The citing paper uses the method to improve the performance of DG methods by applying the generated labels to distinguish different source domains during training."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work, DomainBed, is used as a reference for the hyper-parameters in the citing paper, specifically the values of \ud835\udf06 \ud835\udc5a\ud835\udc5a\ud835\udc51 and \ud835\udc47 2, to ensure a fair comparison in the study of model sensitivity."}, {"Category": "Data Source", "Citation": "[11]", "Explanation": "The cited work, VLCS dataset, is the source of the data used in the analysis conducted in the citing paper."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b26", "b19", "b7", "b11", "b14", "b22" ], "table_ref": [], "text": "Given the database, the text-to-SQL task (Zhong et al., 2017;Xu et al., 2017) aims to convert the natural language question into the corresponding SQL to complete complicated querying. As the wild usage of relational database, this task attract great attention and has been widely studied in both academic and industrial communities.\nRecently, text-to-SQL researches (Hui et al., 2022;Lin et al., 2020;Qi et al., 2022) mainly focus on building a parser under a cross-domain setup (Yu et al., 2018;Wang et al., 2020b), where the databases of the training set and the evaluation set do not overlap. It aims to construct a universal parser that can automatically adapt different domains to inhibit the problem of data scarcity. However, domain-specific knowledge, especially domain convention, is crucial but difficult to transform across different domains under cross-domain setup. Another line of research focuses on the experiment environment where the training data and the evaluation data are based on the same database, which is known as a single-domain setup. A single-domain text-to-SQL system can parse domain knowledge more easily and also has more wide applications in the real world. However, the problem of data scarcity always comes up when security issues and privacy issues exist. Therefore, both of these setups will face particular difficulties when it comes to the real world.\nTo this end, we introduce the cross-schema setup in this work. The cross-schema text-to-SQL tasks aim to build a text-to-SQL parser that can automatically adapt different databases from the same domain, which can avoid the aforementioned problems. Actually, the cross-schema text-to-SQL also has broad applications in the real world. For example, all the hospital store the information of patients and medical resources in databases with different structures. Most information categories are identical across these databases, for instance, the patient name and the treatment date. Moreover, domain-specific representations such as medicine names in databases and user questions are also commonly used. In this case, we can build a universal in-domain text-to-SQL parser that can be deployed on the new database from the given domain. Compared with the cross-domain setup, a crossschema parser will not always confront completely unseen domain knowledge. On the other hand, compared with the single-domain setup, the problem of data scarcity can also be inhibited because the data from other in-domain databases can be used to train the model. However, a cross-schema text-to-SQL parser need to automatically adapt different database schema structure. Unfortunately, this issue is less investigated before. Therefore, how to construct a structural-general parser is the mainly challenge of cross-domain text-to-SQL.\nIn this paper, we propose a large-scale CrosS-Schema Chinese text-to-SQL dataset (CSS), containing 33,620 question/SQL pairs across 21 databases. We generate (question, SQL) pairs with templates and manually paraphrase the question by crowd-sourced. For the databases, we collect 2 real-world database schemas involving medical insurance and medical treatment. As the privacy issues, we are not allowed to use the original data. Therefore, we fill the databases with pseudo values. Based on these 2 seed databases, we alter the schema and expand 19 databases with different structures. 
Hence, CSS can be used to develop cross-schema text-to-SQL systems. On the other hand, the original 2 databases correspond 4,340 samples, which construct the largest Chinese singledomain corpus. This corpus also allows researchers to carry on related studies. Our main contributions can be summarized as follows:\n1. We present the cross-schema text-to-SQL task and propose a large-scale dataset, CSS, for corresponding studies. The dataset and baseline models will be available if accepted.\n2. We provide a real-world Chinese corpus for single-domain text-to-SQL researches.\n3. To show the potential and usefulness of CSS, we conducted and reported the baselines of cross-schema text-to-SQL and Chinese singledomain text-to-SQL." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b13", "b2", "b24", "b12", "b4", "b8", "b16", "b12", "b8", "b9", "b20", "b3", "b6", "b0", "b25", "b26", "b22", "b5" ], "table_ref": [], "text": "Single-domain text-to-SQL datasets Earliest semantic parsing models are designed for singledomain systems to answer complex questions. ATIS (Price, 1990;Dahl et al., 1994) contains manually annotated questions for the flight-booking task. GeoQuery (Zelle and Mooney, 1996) contains manually annotated questions about US geography. Popescu et al. (2003); Giordani and Moschitti (2012); Iyer et al. (2017) convert GeoQuery into the SQL version. Restaurants (Tang and Mooney, 2000;Popescu et al., 2003) is a dataset including questions about restaurants and their food types etc. Scholar (Iyer et al., 2017) includes questions about academic publications and corresponding automatically generated SQL queries. Academic (Li and Jagadish, 2014) enumerates all query logics supported by the Microsoft Academic Search (MAS) website and writes corresponding question utterances. Yelp and IMDB (Yaghmazadeh et al., 2017) consists of questions about the Yelp website and the Internet Movie Database. Advising (Finegan-Dollak et al., 2018) consists of questions about the course information database at the University of Michigan along with artificial data records.\nSingle-domain text-to-SQL datasets contain only one database. Although text-to-SQL models trained with single-domain datasets are applied in corresponding specific domains, different systems with the same domain but different backgrounds have diverse databases, which means that models should have the generalization ability to be transferred among different systems. Existing singledomain datasets do not own the feature that requires models to improve cross-schema generalization ability. On the contrary, our cross-schema setup is raised for this issue.\nCross-domain text-to-SQL datasets Recent researches expect text-to-SQL models (Guo et al., 2019;Bogin et al., 2019;Zhang et al., 2019) to generalize to unseen databases. Thus cross-domain text-to-SQL datasets are released. Zhong et al. (2017) releases WikiSQL, a dataset of 80,654 manually annotated question/SQL pairs distributed across more than 20k tables from Wikipedia. Although WikiSQL is a large-scale dataset, each database schema merely consists of one table and each SQL query merely consists of SELECT, FROM, WHERE clauses. Yu et al. (2018) releases Spider, a large-scale complex cross-domain text-to-SQL dataset. Comparing with previous datasets, Spider owns much more complex databases for various domains and complex SQL queries with advanced SQL clauses and nested SQL structures. Wang et al. 
(2020b) releases DuSQL, yet another large-scale cross-domain text-to-SQL dataset but in Chinese. Having similar form with Spider, DuSQL has become a popular Chinese text-to-SQL dataset. There are also some conversational cross-domain text-to-SQL datasets, including SParC (Yu et al., 2019b), CoSQL (Yu et al., 2019a), CHASE (Guo et al., 2021), DIR (Li et al., 2023b) etc.\nAlthough our cross-schema dataset owns more than one databases, it is different from crossdomain datasets. It concentrates on model generalization ability across different databases which share the similar structure since they are in the same domain." }, { "figure_ref": [ "fig_0" ], "heading": "Dataset Collection", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce our method of constructing the medical dataset CSS in detail. The dataset construction method mainly consists of five steps: 1) initial databases creation, 2) question/SQL templates creation, 3) values filling, 4) questions rewriting, and 5) database schema extension.\nWe discuss five steps of constructing the dataset in Section 3.1-3.5 respectively. Figure 1 shows the overview of the complete process. " }, { "figure_ref": [], "heading": "Initial Databases Creation", "publication_ref": [], "table_ref": [], "text": "To construct the dataset, the first step is to create initial databases. We collect two databases from the real world scenario, i.e. the insurance database and the medical database. The insurance database mainly stores medical consumption records of many different patients. The medical database mainly stores records of medical diagnostic and examination results.\nIt is obvious that records data in medical databases are usually sensitive, since the issue of patients privacy is involved in these data. It is not feasible to use data from the real world directly in our dataset. To protect privacy of users involved in the medical system, we generate database cellvalues with certain rules and ensure that generated data are reasonable." }, { "figure_ref": [], "heading": "Question/SQL Templates Creation", "publication_ref": [], "table_ref": [], "text": "Creating abundant and diverse question/SQL templates is an important step for constructing the dataset, which influences the quality of the generated dataset a lot. A question/SQL template can be regarded as an example of the dataset, which consists of a question template and a SQL query template answering the question. The only difference between the question/SQL template and the real dataset example is that values carrying information (e.g. ID, name, time) in the question/SQL template are replaced with special tokens. In the subsequent steps, values can be generated and filled into corresponding question/SQL templates with certain rules, which means that all question/SQL templates can be transformed into real dataset examples eventually.\nIn general, we use three methods to create various question/SQL templates. Firstly, given medical databases, we enumerate all columns and attempt to raise a question for each column as far as possible. Sometimes we put several columns with close lexical relations into one question/SQL template, since the diversity of the SELECT clause can get increased. It is obvious that question/SQL templates written by this method are relatively simple.\nSecondly, we raise a few medical query scenarios and create question/SQL templates based on them. 
In the real world, different people with different occupations and social roles will ask different types of questions. For instance, patients may care their medical consumption records and doctors may care medical examination results. Based on different real-world scenarios, we can raise various questions that meet needs of people with different social roles (e.g. doctor, patient). Furthermore, these question/SQL templates are usually more challenge since their SQL skeletons are usually more complex and diverse.\nThirdly, we add question/SQL templates which include SQL keywords and SQL skeletons that never occur in previous templates. We count occurrence frequencies for all SQL grammar rules and SQL skeletons that occur in dataset examples. Referring to statistical results, we create questions and corresponding SQL queries which consist of SQL grammar rules that occur in few dataset examples. Detailed statistical results are shown in Section 4.2. By creating question/SQL templates with this method, the SQL diversity of the dataset can get improved.\nWe eventually raise 434 different question/SQL templates totally. All these templates will get processed in subsequent steps." }, { "figure_ref": [], "heading": "Values Filling", "publication_ref": [], "table_ref": [], "text": "In Actually one unique question/SQL template can be used to generate several different dataset examples, since the template can be completed with various random values. We generate 10 dataset examples for each question/SQL template. Consequently there are totally 4,340 question/SQL pairs which are directly generated from 434 question/SQL templates." }, { "figure_ref": [], "heading": "Questions Rewriting", "publication_ref": [], "table_ref": [], "text": "Although 4,340 question/SQL pairs directly generated from templates can already be used to train and test text-to-SQL models, they cannot be directly added into the eventual medical dataset. Question sentences generated from question templates are usually unnatural. Moreover, 10 question sentences generated from the same one question template share the same sentence pattern. which means lack of natural language diversity.\nTo tackle the issue of language naturalness and diversity, we recruit annotators to rewrite dataset examples. All questions directly derived from question templates are rewritten by annotators. In this process, lexical and syntactic patterns of question sentences get changed, which leads to improvement of natural language diversity of the dataset.\nTo ensure the diversity of rewritten question sentences, we design a specific metric to evaluate the rewriting quality. We recruit two groups of annotators and request them to rewrite question sentences with metric scores as high as possible. Finally we merge two rewriting results from different annotating groups with some rules and acquire all rewritten questions. Detailed explanation of the metric is shown in Appendix A.\nThe correctness of rewritten questions is also an important issue. We use the automatic method to examine rewritten questions and make sure that key information are always maintained after the rewriting process.\nPayment. All annotators were paid based on their annotations. Annotators would get paid 0.58 RMB for each annotation example." }, { "figure_ref": [], "heading": "Database Schema Extension", "publication_ref": [], "table_ref": [], "text": "Database schema extension is a key feature of CSS. 
Text-to-SQL models with good performance should have the ability to be used in various medical systems. In the real world application, different medical systems may use different databases. However, these databases may share the similar structure, since all of them are designed for the medical domain. Consequently, we believe that cross-schema generalization ability for text-to-SQL models is significant and add this challenge task in CSS.\nCSS originally contains 2 databases. Based on them, we follow Li et al. (2023a) " }, { "figure_ref": [], "heading": "Dataset Statistics and Comparison", "publication_ref": [], "table_ref": [], "text": "In this section, we list some statistical information of CSS and existing datasets and do comparison. We mainly discuss scale statistics and SQL statistics with various datasets, including single-domain datasets, cross-domain datasets and CSS. Databases in CSS have a great number of columns, composite primary keys, and foreign keys, which indicates that databases in CSS commonly possess complex structures. This is also a challenge feature of CSS. It requires models to find out effective information from complex database structures." }, { "figure_ref": [], "heading": "Scale Statistics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "SQL Statistics", "publication_ref": [], "table_ref": [], "text": "First of all, we clarify the concept named SQL skeleton. For a certain SQL query, it is feasible to remove detailed schema items and values from the SQL query. Concretely, we replace tables used in the SQL query with the special token \"tab\". Columns and values are processed with the similar method. Columns are replaced with the special token \"col\" and values are replaced with the special token \"value\". Then the result is defined as the SQL skeleton, which retains the basic structure of the original SQL query.\nTable 2 shows SQL statistics of existing datasets. CSS totally possesses 562 different SQL skeletons, which is comparable with ATIS and surpasses other single-domain datasets. Note that SQL queries in CSS are commonly very long. The average and maximum number of SQL query tokens are 55.41 and 243 respectively, which has surpassed almost all existing datasets except ATIS. The statistical result indicates that SQL queries in CSS are diverse and complex. This is still a challenge for text-to-SQL models." }, { "figure_ref": [], "heading": "Tasks and Models", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset Splitting", "publication_ref": [], "table_ref": [], "text": "We provide three methods to split the dataset into train/dev/test sets. Different dataset splitting methods correspond to different tasks and raise different challenges for models. For the first method, 4,340 original dataset examples are shuffled at random and then are split with the ratio 0.8/0.1/0.1. This sub-task is an ordinary text-to-SQL task setting and requires models to generalize well on natural language.\nFor the second method, 434 question/SQL templates are shuffled at random and then are split with the ratio 0.8/0.1/0.1. Then 4,340 original question/SQL pairs fall into corresponding dataset subsets. Comparing with other dataset splitting methods, larger language gap and SQL gap exist among train/dev/test sets, since different question/SQL templates generally express different meanings. 
Models are required to have stronger SQL pattern generalization ability under this sub-task.

For the third method, we add the extended dataset examples and split all 33,620 examples according to their databases. All databases are split with the ratio 0.6/0.2/0.2, and no overlap of databases exists among the train/dev/test sets. This dataset splitting method provides a challenging task, which requires models to possess stronger generalization ability across diverse databases sharing similar structures." }, { "figure_ref": [ "fig_2" ], "heading": "Syntactic Role Prediction", "publication_ref": [ "b1", "b1" ], "table_ref": [], "text": "How to improve the cross-schema generalization ability of text-to-SQL models is a key challenge raised in CSS. In this section, we introduce a simple method to tackle the issue of model generalization across different databases.

The text-to-SQL model LGESQL (Cao et al., 2021) adds an auxiliary task named graph pruning in order to improve model performance: given the natural language question and the database schema, the model is required to predict whether each schema item occurs in the SQL query. Following Cao et al. (2021), we design an auxiliary task named syntactic role prediction (SRP). In this task, the model is required to predict in which SQL clause each question token occurs.

The SQL query structure may change as the database schema changes. Figure 3 shows an instance where two databases share a similar structure, but the key information "doctor" in the question is used in the FROM clause and the WHERE clause respectively. We hypothesize that a model with strong cross-schema generalization ability should distinguish the syntactic role of every question token under different databases.

Concretely, following LGESQL, the model input is a graph G = (V, E) constructed from the given question and the database schema. Graph nodes V include question tokens and schema items (i.e., tables and columns), and graph edges E indicate relations among them. The model encodes each node i into an embedding vector x_i. Then the context vector \tilde{x}_i for each node i is computed with multi-head attention:

\alpha_{ij}^{h} = \mathrm{softmax}_{j \in N_i} \Big( \frac{(x_i W_q^h)(x_j W_k^h)^{T}}{\sqrt{d/H}} \Big), \qquad \tilde{x}_i = \Big( \mathrm{concat}_{h=1}^{H} \sum_{j \in N_i} \alpha_{ij}^{h} x_j W_v^h \Big) W_o,

where d is the dimension of the embedding vectors, H is the number of heads, N_i is the neighborhood of node i, and W_q^h, W_k^h, W_v^h \in R^{d \times d/H}, W_o \in R^{d \times d} are network parameters.

For each question node q_i, the model predicts in which SQL clause it occurs from x_{q_i} and \tilde{x}_{q_i}. Specifically, we divide the SQL query into 16 different parts, which are discussed in detail in Appendix B. Thus the auxiliary task is a binary classification task for each question token and each SQL part:

P(y_{q_i} \mid x_{q_i}, \tilde{x}_{q_i}) = \sigma([x_{q_i}; \tilde{x}_{q_i}] W + b),

where W \in R^{2d \times 16}, b \in R^{1 \times 16} are network parameters and y_{q_i} is the probability vector. The ground truth y_{q_i,j}^{g} is 1 when the question token q_i occurs in the j-th SQL part. The training objective is

L = - \sum_{q_i} \sum_{j} \big[ y_{q_i,j}^{g} \log P(y_{q_i,j} \mid x_{q_i}, \tilde{x}_{q_i}) + (1 - y_{q_i,j}^{g}) \log(1 - P(y_{q_i,j} \mid x_{q_i}, \tilde{x}_{q_i})) \big].

The syntactic role prediction task is combined with the main task in a multitasking way. In addition, SRP can also be added to the RATSQL model directly, since RATSQL and LGESQL both encode graph nodes into embedding vectors and SRP only takes these vectors as input. 
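To make the SRP head concrete, the sketch below mirrors the equations above, with PyTorch's built-in multi-head attention standing in for the per-head projections W_q^h, W_k^h, W_v^h (structurally equivalent); module and argument names are illustrative, not taken from the RATSQL/LGESQL code bases.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntacticRolePredictor(nn.Module):
    """From each question node's embedding x_i and its attention context x~_i over its graph
    neighborhood, predict (multi-label) which of the 16 SQL parts the token appears in."""

    def __init__(self, d_model: int, num_heads: int = 8, num_sql_parts: int = 16):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * d_model, num_sql_parts)

    def forward(self, question_nodes: torch.Tensor, all_nodes: torch.Tensor,
                neighbor_mask: torch.Tensor) -> torch.Tensor:
        # question_nodes: (B, Lq, d); all_nodes: (B, Lv, d)
        # neighbor_mask: (B, Lq, Lv) bool, True where node j is NOT in N_i (masked out);
        # assumes every question node has at least one neighbor.
        mask = neighbor_mask.repeat_interleave(self.attn.num_heads, dim=0)
        context, _ = self.attn(question_nodes, all_nodes, all_nodes, attn_mask=mask)
        logits = self.classifier(torch.cat([question_nodes, context], dim=-1))
        return logits  # the sigmoid is folded into the BCE-with-logits loss below

def srp_loss(logits: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    # gold: (B, Lq, 16), 1 where question token q_i occurs in the j-th SQL part.
    return F.binary_cross_entropy_with_logits(logits, gold.float())
```

In the multitasking setup described above, this auxiliary loss would simply be added to the main text-to-SQL decoding loss during training.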
6 Experiments" }, { "figure_ref": [], "heading": "Experiment Setup", "publication_ref": [ "b1", "b15" ], "table_ref": [], "text": "Baseline approaches We adopt three competitive text-to-SQL models as the baseline approaches, i.e. RATSQL (Wang et al., 2020a), LGESQL (Cao et al., 2021), and PICARD (Scholak et al., 2021).\nRATSQL and LGESQL process given information with graph encoding and decode the abstract syntax tree (AST) of the result SQL query. PICARD is a sequence-to-sequence approach and is different from the other two approaches. RATSQL constructs a graph with question tokens and schema items (i.e. tables and columns) and encodes the graph with the relation-aware selfattention mechanism. With the unified framework, RATSQL can easily establish and handle relations among graph nodes and then encode elements with various categories jointly.\nComparing with RATSQL, LGESQL improves the model performance by utilizing the line graph.\nLGESQL pays more attention to the topological structure of graph edges and distinguishes local and non-local relations for graph nodes. Besides the original graph used in RATSQL, LGESQL also constructs the corresponding line graph, since the line graph can help facilitate propagating encoding messages among nodes and edges. Different from RATSQL and LGESQL, PI-CARD is a sequence-to-sequence model. Nowadays large pretrained language models have possessed the strong ability for handling and processing natural language with unconstrained output space. However, SQL is a formal language with strict grammar rules. Invalid SQL queries are very likely to be generated if pretrained models are directly finetuned with text-to-SQL datasets. PI-CARD provides an approach, which can help reject invalid tokens during each decoding step and generate sequences in the constrained output space." }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "For each baseline model, we use pretrained language models (PLMs) within the encoding module.\nIn our experiments, the PLM longformer-chinese-base-4096 is applied in RATSQL and LGESQL and the PLM mbart-large-50 is applied in PICARD.\nEvaluation metrics There are several metrics to evaluate text-to-SQL model performances, including exact match and execution accuracy etc. The exact match metric requires the predicted SQL query to be equivalent to the gold SQL query. The execution accuracy metric requires the execution result of the predicted SQL query to be correct.\nWe mainly use the exact match (EM) metric in our experiments. Concretely, we present model performances with (w) and without (w/o) value evaluation respectively." }, { "figure_ref": [], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [ "tab_7", "tab_9", "tab_8", "tab_10" ], "text": "According to 3 different dataset splitting methods, we test baseline models under 3 sub-task settings. Table 3 shows model performances under dataset splitting method according to examples. LGESQL achieves the best performance under this sub-task, i.e. 90.8% EM(w/o) accuracy and 81.1% EM(w) accuracy on the test set. This indicates that existing text-to-SQL parsing models have had the ability to perform very well if all databases and possible SQL structures have appeared in the train set. Models merely need to generalize on natural language, which is simple when utilizing strong PLMs.\nTable 5 shows model performances under the template-splitting sub-task. 
Comparing with the previous sub-task, performances of three baseline models decrease a lot. Although RATSQL achieves the best performance under this sub-task, the EM(w/o) accuracy and the EM(w) accuracy on the test set are only 58.9% and 53.0% respectively. Question/SQL templates in dev/test sets do not appear in the train set. Thus models have to predict unseen SQL patterns when testing. The experiment result indicates that there is still a large room for the improvement of model generalization ability across SQL patterns. We believe that CSS can also help facilitate researches on improving model ability of predicting unseen SQL patterns. template-splitting sub-task. There is a room of model performances between PICARD and ASTbased approaches, especially when values in SQL queries are concerned in evaluation. Table 4 shows two instances from the test set in the templatesplitting sub-task, where the PICARD model successfully generates the structure of the SQL query but predicts the wrong value. As shown in Table 2, SQL queries in CSS are commonly very long and complex, which leads to great difficulty for PICARD decoding. The decoding error would accumulate as the decoding step increases. According to our statistical results, during the decoding process of AST-based approaches, the average number of AST nodes is 56.95. Although the average number of tokens in the SQL query is 55.41, PLM used in PICARD would split tokens into many subwords. Consequently, decoding steps of PICARD is actually much more than AST-based approaches. Furthermore, table and column names in CSS are commonly consisted of unnatural tokens, which improves the decoding difficulty of PICARD a lot.\nTable 6 shows model performances under dataset splitting method according to different databases. Under this sub-task, we use RATSQL as the baseline model and attempt to add the auxiliary task SRP, expecting to improve the model performance across different databases. The experiment result shows that the model performance increases about 1.7% on the dev set and increases about 3.3%-3.8% on the test set when SRP is applied into RATSQL. This proves that SRP can help improve the crossschema generalization ability of the model when using SRP as a simple baseline method. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper presents CSS, a large-scale crossschema Chinese text-to-SQL dataset designed for the medical domain. We illustrate the detailed process of dataset construction and also present statistical information comparing with existing datasets. We raise a challenge task in CSS, which requires models to generalize across various databases but in the same domain. To tackle the above task, we designed a baseline method named syntactic role prediction as an auxiliary task for model training. We conduct benchmark experiments with three competitive baseline models and prove that future researches on CSS is valuable." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "We raise a new challenge task in our medical dataset CCS. Comparing with existing datasets, CCS requires text-to-SQL models to generalize to different databases with the similar structure in the same domain. To tackle this problem, we provide a baseline method named syntactic role prediction, which is an auxiliary task and can be combined with the main task in a multitasking way. 
Our experiments prove that SRP can help improve the cross-schema generalization ability of models. However, the improvement is not that large. How to generalize models across different databases sharing the similar structure is still a challenge issue. We expect that future works can solve this difficult problem." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "We collect two original medical databases from the real world. However, cell-values in medical databases are commonly sensitive, since the information of patients and doctors are involved in these values. Thus we only retain the database schema and generate sufficient cell-values with certain rules. We ensure that generated values are reasonable and that privacy of medical system users can get protected." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We thank Xiaowen Li, Kunrui Zhu and Ruihui Zhao from Tencent Jarvis Lab for providing necessary initial data. We also thank all the anonymous reviewers for their thoughtful comments. This work has been supported by the China NSFC Project (No.62106142 and No.62120106006), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), CCF-Tencent Open Fund and Startup Fund for Youngman Research at SJTU (SFYR at SJTU)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Our dataset is publicly available at https://huggingface. co/datasets/zhanghanchong/css." }, { "figure_ref": [], "heading": "A Rewriting Metric", "publication_ref": [], "table_ref": [], "text": "First of all, we define the rewriting ratio (RR) between two different sentences s 1 and s 2 , i.e." }, { "figure_ref": [], "heading": "RR(s", "publication_ref": [], "table_ref": [], "text": "where EditDistance(s 1 , s 2 ) represents the edit distance between s 1 and s 2 . Assume that s i,1 , s i,2 , • • • , s i,10 are ten rewritten question sentences derived from the same question/SQL template i. In order to improve the language diversity, we expect ten rewritten sentences to differ from each other. Thus we request annotators to maximize\nwhen rewriting, where N is the number of question/SQL templates.\nWhen merging rewriting results from two groups of annotators, for each example with the original question sentence s o , we need to decide between two rewritten sentences s r 1 and s r 2 . Here we choose s r 1 only if" }, { "figure_ref": [], "heading": "B Syntactic Role Prediction", "publication_ref": [], "table_ref": [], "text": "We divide the SQL query into 16 different parts.\nTable 7 shows detailed situations. For each question token q i , we find out all schema items which have schema linking relations with q i . Then for each SQL part, we label that q i appears in this part if q i itself or one of those schema items appears in this part. " }, { "figure_ref": [], "heading": "Name", "publication_ref": [], "table_ref": [], "text": "" } ]
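Appendix A above lost the exact RR formula and the merging condition during extraction. The sketch below implements the edit distance it references and one plausible normalized form of RR, plus the average-pairwise diversity objective the annotators are asked to maximize; the normalization and the helper names are assumptions, not the authors' definition.

```python
def edit_distance(s1: str, s2: str) -> int:
    """Standard Levenshtein edit distance, as referenced in Appendix A."""
    m, n = len(s1), len(s2)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (s1[i - 1] != s2[j - 1]))    # substitution
            prev = cur
    return dp[n]

def rewriting_ratio(s1: str, s2: str) -> float:
    """Assumed form of RR(s1, s2): edit distance normalized by the longer sentence."""
    return edit_distance(s1, s2) / max(len(s1), len(s2), 1)

def diversity_score(rewrites: list[str]) -> float:
    """Average pairwise RR over the rewrites of one template -- the quantity annotators
    maximize (up to the normalization constant lost in extraction)."""
    pairs = [(i, j) for i in range(len(rewrites)) for j in range(i + 1, len(rewrites))]
    return sum(rewriting_ratio(rewrites[i], rewrites[j]) for i, j in pairs) / max(len(pairs), 1)
```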
2023-05-25
10.18653/v1/P19-1448
[ { "authors": "Ben Bogin; Jonathan Berant; Matt Gardner", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Representing schema structure with graph neural networks for text-to-SQL parsing", "year": "2019" }, { "authors": "Ruisheng Cao; Lu Chen; Zhi Chen; Yanbin Zhao; Su Zhu; Kai Yu", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "LGESQL: Line graph enhanced text-to-SQL model with mixed local and non-local relations", "year": "2021" }, { "authors": "Deborah A Dahl; Madeleine Bates; Michael Brown; William Fisher; Kate Hunicke-Smith; David Pallett; Christine Pao; Alexander Rudnicky; Elizabeth Shriberg", "journal": "", "ref_id": "b2", "title": "Expanding the scope of the ATIS task: The ATIS-3 corpus", "year": "1994-03-08" }, { "authors": "Catherine Finegan-Dollak; Jonathan K Kummerfeld; Li Zhang; Karthik Ramanathan; Sesh Sadasivam; Rui Zhang; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Improving textto-SQL evaluation methodology", "year": "2018" }, { "authors": "Alessandra Giordani; Alessandro Moschitti", "journal": "", "ref_id": "b4", "title": "Translating questions to SQL queries with generative parsers discriminatively reranked", "year": "2012" }, { "authors": "Jiaqi Guo; Ziliang Si; Yu Wang; Qian Liu; Ming Fan; Jian-Guang Lou; Zijiang Yang; Ting Liu", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Chase: A large-scale and pragmatic Chinese dataset for cross-database context-dependent text-to-SQL", "year": "2021" }, { "authors": "Jiaqi Guo; Zecheng Zhan; Yan Gao; Yan Xiao; Jian-Guang Lou; Ting Liu; Dongmei Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Towards complex text-to-SQL in cross-domain database with intermediate representation", "year": "2019" }, { "authors": "Binyuan Hui; Ruiying Geng; Lihan Wang; Bowen Qin; Yanyang Li; Bowen Li; Jian Sun; Yongbin Li", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "S 2 SQL: Injecting syntax to question-schema interaction graph encoder for text-to-SQL parsers", "year": "2022" }, { "authors": "Srinivasan Iyer; Ioannis Konstas; Alvin Cheung; Jayant Krishnamurthy; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Learning a neural semantic parser from user feedback", "year": "2017" }, { "authors": "Fei Li; H V Jagadish", "journal": "Proc. 
VLDB Endow", "ref_id": "b9", "title": "Constructing an interactive natural language interface for relational databases", "year": "2014" }, { "authors": "Jieyu Li; Lu Chen; Ruisheng Cao; Su Zhu; Hongshen Xu; Zhi Chen; Hanchong Zhang; Kai Yu; ; A ; Jieyu Li; Zhi Chen; Lu Chen; Zichen Zhu; Hanqi Li; Ruisheng Cao; Kai ", "journal": "Applied Sciences", "ref_id": "b10", "title": "Dir: A large-scale dialogue rewrite dataset for cross-domain conversational text-to-sql", "year": "2023" }, { "authors": "Victoria Xi; Richard Lin; Caiming Socher; Xiong", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Bridging textual and tabular data for crossdomain text-to-SQL semantic parsing", "year": "2020" }, { "authors": "Ana-Maria Popescu; Oren Etzioni; Henry Kautz", "journal": "Association for Computing Machinery", "ref_id": "b12", "title": "Towards a theory of natural language interfaces to databases", "year": "2003" }, { "authors": "P J Price", "journal": "", "ref_id": "b13", "title": "Evaluation of spoken language systems: the ATIS domain", "year": "1990-06-24" }, { "authors": "Jiexing Qi; Jingyao Tang; Ziwei He; Xiangpeng Wan; Yu Cheng; Chenghu Zhou; Xinbing Wang; Quanshi Zhang; Zhouhan Lin", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "RASAT: Integrating relational structures into pretrained Seq2Seq model for text-to-SQL", "year": "2022" }, { "authors": "Torsten Scholak; Nathan Schucher; Dzmitry Bahdanau", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "PICARD: Parsing incrementally for constrained auto-regressive decoding from language models", "year": "2021" }, { "authors": "R Lappoon; Raymond J Tang; Mooney", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Automated construction of database interfaces: Intergrating statistical and relational learning for semantic parsing", "year": "2000" }, { "authors": "Bailin Wang; Richard Shin; Xiaodong Liu; Oleksandr Polozov; Matthew Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers", "year": "2020" }, { "authors": "Lijie Wang; Ao Zhang; Kun Wu; Ke Sun; Zhenghua Li; Hua Wu; Min Zhang; Haifeng Wang", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "DuSQL: A large-scale and pragmatic Chinese text-to-SQL dataset", "year": "2020" }, { "authors": "Xiaojun Xu; Chang Liu; Dawn Song", "journal": "", "ref_id": "b19", "title": "Sqlnet: Generating structured queries from natural language without reinforcement learning", "year": "2017" }, { "authors": "Navid Yaghmazadeh; Yuepeng Wang; Isil Dillig; Thomas Dillig", "journal": "", "ref_id": "b20", "title": "Type-and content-driven synthesis of SQL queries from natural language", "year": "2017" }, { "authors": "Tao Yu; Rui Zhang; Heyang Er; Suyi Li; Eric Xue; Bo Pang; Victoria Xi; Yi Lin; Tianze Chern Tan; Zihan Shi; Youxuan Li; Michihiro Jiang; Sungrok Yasunaga; Tao Shim; Alexander Chen; Zifan Fabbri; Luyao Li; Yuwen Chen; Shreya Zhang; Vincent Dixit; Caiming Zhang; Richard Xiong; Walter Socher; Dragomir Lasecki; ; Radev", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "CoSQL: A conversational text-to-SQL challenge towards crossdomain natural language interfaces to databases", "year": "2019" }, { "authors": "Tao Yu; Rui Zhang; Kai Yang; Michihiro Yasunaga; Dongxu Wang; Zifan Li; James Ma; Irene Li; 
Qingning Yao; Shanelle Roman; Zilin Zhang; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task", "year": "2018" }, { "authors": "Tao Yu; Rui Zhang; Michihiro Yasunaga; Yi Chern Tan; Xi Victoria Lin; Suyi Li; Heyang Er; Irene Li; Bo Pang; Tao Chen; Emily Ji; Shreya Dixit; David Proctor; Sungrok Shim; Jonathan Kraft; Vincent Zhang; Caiming Xiong; Richard Socher; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "SParC: Cross-domain semantic parsing in context", "year": "2019" }, { "authors": "John M Zelle; Raymond J Mooney", "journal": "AAAI Press", "ref_id": "b24", "title": "Learning to parse database queries using inductive logic programming", "year": "1996" }, { "authors": "Rui Zhang; Tao Yu; Heyang Er; Sungrok Shim; Eric Xue; Xi Victoria Lin; Tianze Shi; Caiming Xiong; Richard Socher; Dragomir Radev", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Editingbased SQL query generation for cross-domain context-dependent questions", "year": "2019" }, { "authors": "Victor Zhong; Caiming Xiong; Richard Socher", "journal": "", "ref_id": "b26", "title": "Seq2sql: Generating structured queries from natural language using reinforcement learning", "year": "2017" } ]
[ { "formula_coordinates": [ 6, 328.48, 431.12, 173.6, 61.31 ], "formula_id": "formula_0", "formula_text": "α h ij = softmax j∈N i (x i W h q )(x j W h k ) T d/H , xi = (concat H h=1 j∈N i α h ij x j W h v )W o ," }, { "formula_coordinates": [ 6, 306.14, 526.48, 218.27, 26.87 ], "formula_id": "formula_1", "formula_text": "W h q , W h k , W h v ∈ R d×d/H , W o ∈ R d×d are network parameters." }, { "formula_coordinates": [ 6, 329.16, 645.38, 172.23, 11.61 ], "formula_id": "formula_2", "formula_text": "P (y q i |x q i , xq i ) = σ([x q i ; xq i ]W + b)," }, { "formula_coordinates": [ 6, 323.18, 729.57, 59.31, 22.37 ], "formula_id": "formula_3", "formula_text": "L = - q i j" } ]
CSS: A Large-scale Cross-schema Chinese Text-to-SQL Medical Dataset
The cross-domain text-to-SQL task aims to build a system that can parse user questions into SQL on completely unseen databases, while the single-domain text-to-SQL task evaluates performance on databases identical to those seen in training. Both setups face unavoidable difficulties in real-world applications. To this end, we introduce the cross-schema text-to-SQL task, where the databases used for evaluation differ from those in the training data but come from the same domain. Furthermore, we present CSS 1 , a large-scale CrosS-Schema Chinese text-to-SQL dataset, to support corresponding studies. CSS originally consisted of 4,340 question/SQL pairs across 2 databases. To help models generalize to different medical systems, we extend CSS by creating 19 new databases along with 29,280 corresponding dataset examples. Moreover, CSS also serves as a large corpus for single-domain Chinese text-to-SQL studies. We present the data collection approach and a series of analyses of the data statistics. To show the potential and usefulness of CSS, benchmark experiments have been conducted and reported.
Hanchong Zhang; Jieyu Li; Lu Chen; Ruisheng Cao; Yunyan Zhang; Yu Huang; Yefeng Zheng; Kai Yu
[ { "figure_caption": "Figure 1 :1Figure 1: Overview of the dataset collection process.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An instance of database schema extension.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Given the question, the corresponding SQL query differs among various but similar databases.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "List the records of patient Tiangan Shui admitted to hospital 7539997, including the records with the department name containing Otolaryngology. Gold: SELECT * FROM person_info JOIN hz_info JOIN zyjzjlb WHERE person_info.XM = \"水天干\" AND hz_info.YLJGDM = \"7539997\" AND zyjzjlb.JZKSMC LIKE \"%耳鼻喉科%\" Pred: SELECT * FROM person_info JOIN hz_info JOIN zyjzjlb WHERE person_info.XM = \"水天干\" AND hz_info.YLJGDM = \"7539997\" AND zyjzjlb.JZKSMC LIKE \"%耳鼻炎%\" Q: 从01年1月31日 一 直 到09年8月12日 内 患 者80476579被开出盐酸多奈哌齐片(薄膜)的总 次数一共有多少? Q: How many times has patient 80476579 been prescribed donepezil hydrochloride tablets (thin film) from 2001-01-31 to 2009-08-12? Gold: SELECT COUNT(*) FROM t_kc21 JOIN t_kc22 WHERE t_kc21.PERSON_ID == \"80476579\" AND t_kc22.STA_DATE BETWEEN \"2001-01-31\" AND \"2009-08-12\" AND t_kc22.SOC_SRT_DIRE_NM == \"盐酸多奈哌 齐片(薄膜)\" Pred: SELECT COUNT(*) FROM t_kc21 JOIN t_kc22 WHERE t_kc21.PERSON_ID == \"80476579\" AND t_kc22.STA_DATE BETWEEN \"2001-01-31\" AND \"2009-08-12\" AND t_kc22.SOC_SRT_DIRE_NM == \"盐酸多奈\"", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Case study for SRP. JOIN conditions in the FROM clause are omitted for brevity.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Question template: 门诊就诊$1中患者的体温是多少? What is the temperature of the outpatient where visit ID is $门诊就诊记录(outpatient_record) 体温(temperature) 患者姓名(name) 就诊流水号(ID)", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "table namecolumn namesQuestion: 门诊就诊123中患者的体温是多少?What is the temperature of the outpatient where visit ID is 123 ?SQL: SELECT temperature FROM outpatient_record WHERE ID = 123Paraphrase: 门诊就诊123中患者测了体温,数值是多少?What is the patient's temperature reading during the outpatient visit 123?门诊就诊记录(outpatient_record)就诊流水号(ID)患者姓名(name)患者年龄(age)体温(temperature)SQL: SELECT temperature FROM outpatient_record WHERE ID = 123", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "order to generate real dataset examples from question/SQL templates, values should be generated and filled into all templates. Different types of values are replaced with different special tokens in question/SQL templates. In this step, we use certain rules to generate random values for various special tokens. Concretely, special tokens indicating number or time are filled with reasonable and suitable random values. Special tokens indicating ID (e.g. person ID, hospital ID) are filled with random strings, which consist of numbers and letters. Other special tokens basically indicate specialized and professional words like disease names. 
To generate these values, we firstly collect sufficient disease names, medicine names, medical test names, etc. Then these special tokens are filled with values chosen at random from corresponding candidate value lists.", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "and create 19 new databases. Firstly for two tables linked with foreign keys, we create a new relation table between the original two tables and create new foreign keys respectively pointing to them. Secondly for two tables linked with foreign keys, we merge them by putting their columns together in a merged table. Thirdly for a table with a special column which only contains a few different kinds of values (e.g. gender), we split the table into several tables according to those limited values.", "figure_data": "", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Table 1 shows scale statistics of existing datasets, including single-domain datasets, cross-domain datasets, and the medical dataset CSS. For singledomain datasets listed in the table and WikiSQL, we use the standardized version from Finegan-Dollak et al. (2018). CSS contains 33,620 examples generated from scratch across 21 databases. Comparing with previous single-domain datasets, CSS has the largest scale and various databases. We extend original databases with several certain rules. Therefore, CSS can help text-to-SQL models generalize to different medical systems, where databases are different but share the similar structure.", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": ", we raise a similar auxiliary task named syntactic role prediction (SRP). Under Scale statistics of existing datasets. \"Avg T/DB\" represents the average number of tables per database schema. \"Avg C/T\" represents the average number of columns per table. \"Avg P/T\" represents the average number of columns in the composite primary key per table. \"Avg F/T\" represents the average number of foreign keys per table.", "figure_data": "DatasetLanguage ExamplesDBsAvg T/DB Avg C/T Avg P/T Avg F/TATISEnglish19,2011255.240.161.56GeoQueryEnglish920183.881.751.12RestaurantsEnglish378134.001.001.33ScholarEnglish1,8581122.330.580.75AcademicEnglish2001152.800.470.00YelpEnglish141175.431.000.00IMDBEnglish1471164.061.000.19AdvisingEnglish4,7441186.891.395.39WikiSQLEnglish80,65426,5311.006.340.000.00SpiderEnglish9,6931665.285.140.890.91DuSQLChinese25,0032084.045.290.510.71CSSChinese33,620215.6228.491.681.65Dataset# SQL Avg Len Max LenATIS82897.96474GeoQuery12026.0892Restaurants1229.2261Scholar15837.0765Academic7636.30116Yelp6228.9256IMDB3027.4855Advising16947.49169WikiSQL3912.4823Spider1,11617.9987DuSQL2,32320.2337CSS56255.41243Table 2: SQL statistics of existing datasets. \"# SQL\"represents the number of SQL skeletons. \"Avg Len\"represents the average number of tokens in one SQLquery. \"Max Len\" represents the maximum number oftokens in one SQL query.", "figure_id": "tab_6", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Model performances under dataset splitting method according to examples.", "figure_data": "DevTestw/oww/owRATSQL 90.2 81.1 89.0 79.1LGESQL 91.7 82.2 90.8 81.1PICARD 93.8 53.7 70.3 58.3", "figure_id": "tab_7", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Case study for the PICARD model when predicting values. 
FROM conditions are omitted for clarity.", "figure_data": "Note that as a sequence-to-sequence approach,PICARD cannot perform as well as the two AST-based approaches (RATSQL and LGESQL) in the", "figure_id": "tab_8", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Model performances under dataset splitting method according to templates.", "figure_data": "ModelDev w/owTest w/owRATSQL36.6 35.7 43.4 42.0RATSQL + SRP 38.3 37.4 47.2 45.3", "figure_id": "tab_9", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Model performances under dataset splitting method according to databases. \"SRP\" represents the auxiliary task named syntactic role prediction.", "figure_data": "", "figure_id": "tab_10", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Figure 4 is an instance from the test set, where RATSQL predicts the wrong SQL but RATSQL with SRP predicts the correct result. After database schema extension, a new relation table is created. However, RATSQL does not understand the change and misses the relation table in the FROM clause. On the contrary, the auxiliary task SRP helps the model utilize the relation table and eventually predict the correct SQL.", "figure_data": "", "figure_id": "tab_11", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "(Hui et al., 2022)", "Explanation": "The cited work by Hui et al. provides foundational data and research on text-to-SQL tasks, which supports the discussion in the citing paper on the importance of domain-specific knowledge in the field."}, {"Category": "Methodological Basis", "Citation": "(Lin et al., 2020)", "Explanation": "The cited work by Lin et al. serves as a methodological basis for the citing paper, as it discusses the development of a universal parser for text-to-SQL tasks in a cross-domain setup."}, {"Category": "Data Source", "Citation": "(Yu et al., 2018)", "Explanation": "The cited work by Yu et al. is acknowledged as a data source for the text-to-SQL research, as it provides a database for the training and evaluation of the task."}, {"Category": "Extension or Continuation", "Citation": "(Wang et al., 2020b)", "Explanation": "The cited work by Wang et al. extends the research on text-to-SQL tasks by focusing on the construction of a universal parser in a cross-domain setup."}, {"Category": "Data Source", "Citation": "(Price, 1990)", "Explanation": "The cited work provides the ATIS dataset, which is a manually annotated question set for the flight-booking task and serves as a data source for the citing paper."}, {"Category": "Data Source", "Citation": "(Dahl et al., 1994)", "Explanation": "The cited work provides the ATIS dataset, which is a manually annotated question set for the flight-booking task and serves as a data source for the citing paper."}, {"Category": "Data Source", "Citation": "(Zelle and Mooney, 1996)", "Explanation": "The cited work provides the GeoQuery dataset, which is a manually annotated question set about US geography and serves as a data source for the citing paper."}, {"Category": "Data Source", "Citation": "(Popescu et al., 2003)", "Explanation": "The cited work provides the conversion of the GeoQuery dataset into the SQL version and serves as a data source for the citing paper."}, {"Category": "Data Source", "Citation": "(Giordani and Moschitti, 2012)", "Explanation": "The cited work provides the conversion of the GeoQuery dataset into the SQL version and serves as a data source for the citing paper."}, {"Category": "Data Source", "Citation": "(Iyer et al., 2017)", "Explanation": "The cited work provides the conversion of the GeoQuery dataset into the SQL version and the creation of the Scholar and Academic datasets, which serve as data sources for the citing paper."}, {"Category": "Data Source", "Citation": "(Li and Jagadish, 2014)", "Explanation": "The cited work provides the creation of the Academic dataset, which is a query logics set supported by the Microsoft Academic Search website and serves as a data source for the citing paper."}, {"Category": "Data Source", "Citation": "(Yaghmazadeh et al., 2017)", "Explanation": "The cited work provides the creation of the Yelp and IMDB datasets, which are question sets about the Yelp website and the Internet Movie Database and serve as data sources for the citing paper."}, {"Category": "Data Source", "Citation": "(Finegan-Dollak et al., 2018)", "Explanation": "The cited work is the source of the Advising dataset, which the citing paper uses to train a text-to-SQL model for the University of Michigan course information database."}, {"Category": "Extension or Continuation", "Citation": "(Guo et al., 2019)", "Explanation": "The cited work is a text-to-SQL model that the citing paper builds upon to improve the generalization ability of models in different 
systems with the same domain but different backgrounds."}, {"Category": "Extension or Continuation", "Citation": "(Bogin et al., 2019)", "Explanation": "The cited work is a text-to-SQL model that the citing paper extends to address the issue of cross-schema generalization ability in different systems with the same domain but different backgrounds."}, {"Category": "Extension or Continuation", "Citation": "(Zhang et al., 2019)", "Explanation": "The cited work is a text-to-SQL model that the citing paper further extends to improve the generalization ability of models in different systems with the same domain but different backgrounds."}, {"Category": "Data Source", "Citation": "(Zhong et al., 2017)", "Explanation": "The cited work is the source of the WikiSQL dataset, which the citing paper uses to evaluate the generalization ability of text-to-SQL models in different systems with the same domain but different backgrounds."}, {"Category": "Supporting Evidence", "Citation": "(Yu et al., 2018)", "Explanation": "The cited work by Yu et al. (2018) releases Spider, a large-scale complex cross-domain text-to-SQL dataset, which serves as a foundational dataset for the study of text-to-SQL generation in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Wang et al., 2020b)", "Explanation": "The cited work by Wang et al. (2020b) releases DuSQL, a large-scale cross-domain text-to-SQL dataset in Chinese, which extends the research on text-to-SQL generation to the Chinese language in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Yu et al., 2019b)", "Explanation": "The cited work by Yu et al. (2019b) releases SParC, a conversational cross-domain text-to-SQL dataset, which provides a new perspective on text-to-SQL generation in the context of conversational systems in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Yu et al., 2019a)", "Explanation": "The cited work by Yu et al. (2019a) releases CoSQL, another conversational cross-domain text-to-SQL dataset, which further highlights the importance of conversational systems in text-to-SQL generation in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Guo et al., 2021)", "Explanation": "The cited work by Guo et al. (2021) releases CHASE, a conversational cross-domain text-to-SQL dataset, which provides a comprehensive evaluation of the performance of text-to-SQL generation models in the context of conversational systems in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Li et al., 2023b)", "Explanation": "The cited work by Li et al. (2023b) releases DIR, a conversational cross-domain text-to-SQL dataset, which further demonstrates the need for text-to-SQL generation models to handle complex conversational scenarios in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Cao et al., 2021)", "Explanation": "The cited work by Cao et al. introduces the graph pruning auxiliary task in the text-to-SQL model LGESQL, which the citing paper adopts in order to improve the model performance in predicting the SQL query."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2020a)", "Explanation": "The cited work by Wang et al. (2020a) provides the method of using a graph encoding approach to process given information in text-to-SQL models, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "(Cao et al., 2021)", "Explanation": "The cited work by Cao et al. 
(2021) introduces the use of line graph in text-to-SQL models, which the citing paper builds upon in their research to improve model performance."}, {"Category": "Methodological Basis", "Citation": "(Scholak et al., 2021)", "Explanation": "The cited work by Scholak et al. (2021) presents the sequence-to-sequence approach in text-to-SQL models, which the citing paper uses as a baseline approach in their research."}]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b3", "b8", "b9", "b14", "b5", "b1", "b18", "b14" ], "table_ref": [], "text": "Knowledge Graphs (KGs) are often viewed as large-scale semantic networks that store facts as triples in the form of (subject entity, relation, object entity). KGs have been widely adopted in many industrial applications because they capture the multi-relational nature between real-world entities well. However, real-world KGs are usually incomplete (Dong et al., 2014). To tackle the incompleteness problem, there has been a surge of research interest in the KGC task in recent years. KGC and many other KG-based tasks are usually based on knowledge representation learning (KRL), in which entities and relations in a KG are encoded into low-dimensional vectors. With the recent advances in Graph Neural Network(GNN) (Scarselli et al., 2009), many recently published methods like R-GCN (Schlichtkrull et al., 2018) and CompGCN (Vashishth et al., 2020) all employed an encoder-decoder mechanism to tackle the KGC problem: variations of Graph Convolutional Networks (GCN) (Kipf and Welling, 2017) are used as encoders to generate embeddings for entities and relations in a KG, and traditional shallow KG embedding methods like TransE (Bordes et al., 2013) and DistMult (Yang et al., 2015) are used as decoders for the KGC task. With the additional message propagation and aggregation mechanism of graph convolution operation in the encoding stage, these methods have shown more promising results on the KGC task than traditional knowledge 36th Conference on Neural Information Processing Systems (NeurIPS 2022). graph embedding methods. However, not all the missing information can be inferred based on the information inside a KG. Even with a better encoding mechanism of GCNs, the expressiveness and quality of trained models can still be limited by the sparseness and incompleteness of the individual KG the model is trained on. At the same time, real-world entities are usually captured in more than one KG from either different sources or languages. The common entities in the disjoint real-world KGs can potentially serve as bridges to better connect them and transfer additional knowledge to one another to alleviate the sparseness problem faced by almost all real-world KGs. The common entities across different KGs are often known as seed alignments, which usually originate from the manual annotation of human annotators. Because of the scale and size of KGs nowadays, seed alignments are usually relatively scarce.\nIn this paper, we focus on the multi-KG completion problem and aim to collectively utilize multiple KGs and seed alignments between them to maximize the KGC task performance on each individual KG. We propose a novel method that concurrently trains CompGCN-based (Vashishth et al., 2020) encoders on each individual KGs and a fused KG in which seed alignments are regarded as edges for connecting KGs together and augmented message propagation for \"knowledge transfer\". During the concurrent training, we also employ the mutual knowledge distillation mechanism. CompGCN-based encoders on individual KGs and the fused KG are trained to learn potentially complementary features from each other. 
The intuition behind the mutual knowledge distillation process is that the small encoders trained on individual KGs capture local semantic relationships while the large encoder trained on the large fused KG captures the global semantic relationships better because of the intra-KG message propagation. In the mutual knowledge distillation process, the small and large encoders take turns to become \"teachers\" in the knowledge distillation to encourage mutual knowledge transfer between them. Lastly, we use an ensemble to combine the predictions from the individual KG models and the fused KG model to produce the KGC predictions on the test set for each individual KG. Figure 1 provides an illustrative figure of the overall structure of CKGC-CKD.\nThe main contribution of this paper can be summarized as follows: 1) we propose a novel augmented CompGCN encoder to facilitate intra-KG knowledge transfer and tackle the multi-KG completion task; 2) we propose a novel mutual knowledge distillation mechanism to encourage collaborative knowledge transfer between the models trained on individual KGs and globally fused KG. Experimental results on popular multilingual datasets show that our proposed method outperforms all state-of-the-art models. Extensive ablation studies are conducted on monolingual and multilingual datasets to demonstrate the contribution of each component in the proposed method.\n2 Related work" }, { "figure_ref": [], "heading": "Knowledge graph embeddings", "publication_ref": [ "b1", "b16", "b6", "b7", "b18", "b9", "b14", "b19" ], "table_ref": [], "text": "The research on knowledge graph embeddings has gained significant attention in recent years. This task aims to encode entities and relations of a KG into low-dimensional vectors. Traditional translation-based methods like TransE (Bordes et al., 2013), TransH (Wang et al., 2014), TransR (Lin et al., 2015), as well as the semantic matching models like RESCAL (Nickel et al., 2011) and DistMult (Yang et al., 2015), all achieved promising results on the KGC task. These methods all served as decoders on shallow embeddings to exploit the local structural information within triples and capture the semantics of relation triples. Other than the triple-based embedding models, another stream of recent works (Schlichtkrull et al., 2018;Vashishth et al., 2020;Yu et al., 2021) all employed the graph structure to propagate information between adjacent entities and encode them into embeddings. Specifically, variants of the GCN model are used as encoders to embed entities and relations in the heterogeneous graphs into vectors. Traditional knowledge graph embedding methods like TransE are then used as decoders for various downstream tasks like KGC and node classification." }, { "figure_ref": [], "heading": "KGC across multiple Knowledge graphs", "publication_ref": [ "b15", "b2", "b10", "b4", "b11" ], "table_ref": [], "text": "Compared to KGC on a single KG, KGC across multiple KGs is a relatively under-explored area. Wang et al. (2021) proposed ATransN, an adversarial embedding transfer network which aims to facilitate the knowledge transfer from a pre-trained embedding of a teacher KG to a student KG with a set of seed alignments. Chen et al. (2020) was the first to propose multilingual KGC problem setting and tackled the problem by generating ensemble results on shared triples between KGs in different languages. On the same multilingual problem setting, Singh et al. 
(2021) proposed AlignKGC, which employs a multi-task strategy to jointly trains KGC, entity alignment and relation alignment tasks. Most recently, Huang et al. (2022) proposed SS-AGA, which models seed alignments as edges to fuse multiple knowledge graphs while using a generator model to dynamically capture more potential alignments between entities and iteratively add more edges to the graph. Additionally, Sourty et al. (2020) proposed KD-MKB, which assumes the existence of both shared relations and shared entities across individual KGs, and therefore tackles multi-KG completion tasks from a knowledge distillation perspective." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "The framework of a multi-KG completion task involves two or more KGs. Without loss of generality, we assume there are a total of m KGs in the problem setting. We formalize the i-th heterogeneous KG in the task as KG i = {E i , R i , T i }, where E i , R i , T i respectively represent the entity set, relation set, and fact triple set of KG i . A set of seed alignments between KGs, known before training, is denoted by S KGi,KGj = {(e i , ∼, e j ) : (e i , e j ) ∈ E i × E j }, where ∼ denotes the equivalence relation. The complete set of seed alignments can then be denoted by S align = ∪ m i=1 ∪ m j=i+1 S KGi,KGj . We can then formalize the large fused KG connected by seed alignments as\nKG f = {E f , R f , T f |E f = ∪ m i=1 E i , R f = ∪ m i=1 R i , T f = (∪ m i=1 T i ) ∪ S align }.\nLet M i and M f denote the encoder models for the i-th individual KG and the fused KG, respectively." }, { "figure_ref": [], "heading": "Augmented CompGCN Link Prediction", "publication_ref": [ "b14", "b15", "b10" ], "table_ref": [], "text": "We use CompGCN (Vashishth et al., 2020) as our encoders for the knowledge graph embeddings. In the method, CompGCN encoders with non-parametric composition operations are trained on each individual KGs and the fused KG concurrently. In standard CompGCN, the update equation of CompGCN node embeddings can be written as:\nh t v = f (Σ (u,r)∈N (v) M e(u, r)) (1) M e(u, r) = W λ(r) ϕ(h t-1 u , h t-1 r )(2)\n, where h t v denotes the updated embedding for node v at t-th layer, N (v) denotes the set of neighboring entities and relations of node v, h t-1 u and h t-1 r denotes the embeddings for node u and relation r at (t-1)-th layer respectively, ϕ denotes the non-parametric composition operation and W λ(r) denotes the direction specific transformation matrix where λ denotes the direction of relation. In our method, the vanilla CompGCN encoder is used without modification on individual KGs, while we decide to use an augmented CompGCN encoder for better knowledge transfer on the fused KG f . Specifically, although seed alignments are viewed as regular relations in the fused KG, we remove the composition operator for message propagation between the seed alignments and instead use the standard nonrelation-specific message passing between KGs. The augmented message function in the fused KG can then be written as:\nM e(u, r) = W align h t-1 u if(u, ∼, v) ∈ S align W λ(r) ϕ(h t-1 u , h t-1 r ) otherwise(3)\n, where W align denotes the transformation matrix specifically for intra-KG message propagation. The composition operation is removed because we view the cross-KG equivalence as a bi-directional relationship different from the triples inside KGs. 
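To make the message computation above concrete, the following is a minimal sketch of Eqs. (1)–(3), assuming a subtraction composition operator and omitting batching, normalisation, and the direction-specific handling of inverse relations; all names are illustrative and the code is not taken from any released implementation.

```python
import numpy as np

def message(h_u, h_r, w_rel, w_align, is_alignment_edge):
    """One incoming message in the augmented encoder (Eqs. 1-3).

    h_u, h_r : neighbour entity and relation embeddings, shape (d,)
    w_rel    : direction-specific transform for ordinary relations, shape (d, d)
    w_align  : transform used only for seed-alignment edges, shape (d, d)
    """
    if is_alignment_edge:
        # alignment edges propagate the entity embedding without composition (Eq. 3)
        return w_align @ h_u
    # ordinary edges compose entity and relation; subtraction is one
    # standard non-parametric choice for the operator phi in Eq. 2
    return w_rel @ (h_u - h_r)

def update_node(incoming_messages):
    """Sum incoming messages and apply a nonlinearity (Eq. 1)."""
    return np.tanh(np.sum(incoming_messages, axis=0))

# toy usage with a 4-dimensional embedding space
rng = np.random.default_rng(0)
d = 4
h_u, h_r = rng.normal(size=d), rng.normal(size=d)
w_rel, w_align = rng.normal(size=(d, d)), rng.normal(size=(d, d))
msgs = [message(h_u, h_r, w_rel, w_align, False),
        message(h_u, h_r, w_rel, w_align, True)]
print(update_node(np.stack(msgs)))
```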
Additionally, many existing methods (Wang et al., 2021;Singh et al., 2021) use a loss regularization to ensure the equivalent entities in each KG have similar embeddings with or without transformation. However, instead of imposing the regularization directly on the training loss term, we impose a softer regularization in the message passing augmentation, where the contextualized node embeddings of entities in each knowledge graph are passed to their counterparts in other KGs during encoding. As a result, the contextualized embedding of entities in each KG can be shared across the KGs by the augmented message propagation in the encoding phase and optimized during the training of the KGC task on fact triples.\nThe encoded entity and relation embeddings are then passed to the decoder, which performs the link prediction task on triples in KG, and computes the knowledge representation loss. The margin-based knowledge representation loss can be written as:\nL T = Σ ti∈Ti,t ′ i ∈T ′ i f (t i ) -f (t ′ i ) + γ(4)\n, where T ′ i denotes the negative samples created from corrupting head or tail entity in triple t i ; f (t i ) denotes the scoring function for TransE or similar knowledge embedding model; and γ denotes the margin, which is a hyperparameter describing the ideal distance between positive triples and negative triples." }, { "figure_ref": [], "heading": "Mutual Knowledge Distillation", "publication_ref": [ "b11", "b17", "b17" ], "table_ref": [], "text": "We employ the mutual knowledge distillation mechanism between each model on individual KGs M i and the model on the fused KG M f . At each training step, each M i pairs with M f to conduct mutual knowledge distillation, where M i and M f learn simultaneously from each other via a mimicry loss that measures the difference between the posterior predictions of each other on the KGC task on triples T i in the corresponding KG i . Three different KGC tasks are used for mutual knowledge distillation: for a triple t = (s, r, o), the task is to predict the missing component given the other two in the triple, i.e., head prediction, tail prediction and relation prediction. The distillation loss can be written as:\nL i D = ti∈Ti β∈T ask d KL (t i , β) (5) d KL (t i , β) = D KL (P β i (t i ), P β f (t i ))(6)\n, where T ask denotes three KGC tasks, D KL denotes the Kullback Leibler (KL) Divergence, and P denotes the categorical distribution predicted by the knowledge graph embedding scoring function on task β. As an example, for tail prediction of triple t i = (s i , r i , o i ), the categorical distribution can be written as softmax of tail prediction across all candidates:\nP i (t i ) = exp(f (M i (s i ), M i (r i ), M i (o i ))) oj ∈Ei exp(f (M i (s i ), M i (r i ), M i (o j )))(7)\n, where M i () denotes the embedding look-up operation for entities and relations from encoder model M i . In practice, predicting across all candidates E i and comparing the categorical distribution across all entities can be inefficient due to the size of KG. Therefore, we employ the top-k sampling technique used in the work of Sourty et al. (2020) by using the \"teacher\" model to select the top-k most confident candidates for the categorical distribution comparison in the mutual distillation.\nAdditionally, we enabled the mutual distillation process to be performance-aware as the performance of individual model M i and the fused model M f could differ by some margin. 
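Before turning to how this performance gap is handled, a minimal sketch of the distillation term in Eqs. (5)–(7) for a single triple and task is given below; the value of k, the convention that higher scores are more confident, and the teacher/student naming are illustrative assumptions rather than the exact training code.

```python
import numpy as np

def softmax(scores):
    z = scores - scores.max()
    e = np.exp(z)
    return e / e.sum()

def topk_kl_distillation(teacher_scores, student_scores, k=8, eps=1e-12):
    """Distillation term for one triple and one task (cf. Eqs. 5-7).

    Scores are plausibility scores over all candidate entities (higher is
    better).  The teacher keeps its top-k most confident candidates and the
    KL divergence is taken between the two categorical distributions
    restricted to those candidates, with the teacher distribution first.
    """
    top = np.argsort(teacher_scores)[-k:]
    p_t = softmax(teacher_scores[top])
    p_s = softmax(student_scores[top])
    return float(np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps))))

# toy usage: 100 candidate tails for one query
rng = np.random.default_rng(1)
teacher = rng.normal(size=100)
student = rng.normal(size=100)
print(topk_kl_distillation(teacher, student, k=8))
```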
A recent work (Xie and Du, 2022) on knowledge distillation pointed out that a worse-performing model could potentially generate negative knowledge transfer and lead to collective failure. Therefore, we adopted a softer restriction than Xie and Du (2022) and designed a hyperparameter θ such that the worse-performing model is allowed to become a teacher and generate soft training target only if its mean reciprocal rank metric on the validation set is less θ away from the better-performing model. EL JA ES FR EN H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR IT H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR " }, { "figure_ref": [], "heading": "Training", "publication_ref": [], "table_ref": [], "text": "The overall loss term combines the knowledge representation and mutual knowledge distillation loss:\nL = L T + αL D(8)\n, where α is a hyperparameter controlling the contribution of mutual distillation loss to the overall loss term. The models M i and M f are trained concurrently on KGC tasks while learning from the best-performing model of each other via the mutual distillation process. In practice, the training process is separated into two stages for better convergence and faster training. In the first stage, individual and fused models are trained independently with only knowledge representation loss, while in the second stage, knowledge distillation losses are introduced so that models can mutually learn from each other." }, { "figure_ref": [], "heading": "Ensemble Prediction", "publication_ref": [], "table_ref": [], "text": "In the end, the output for KGC tasks is generated by combining predictions from models M i and M f using an ensemble. Concretely, the for triple t i ∈ T i , the final scoring function becomes:\nF (t i ) = f (M i (t i )) + f (M f (t i ))\n. The ensemble scores are then used for further ranking and evaluation." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Basic settings", "publication_ref": [ "b2", "b4", "b0", "b2", "b10", "b4", "b16", "b18" ], "table_ref": [], "text": "All of our experiments are conducted on a Linux server running Ubuntu 20.04 with 64GB memory and a Nvidia A100 GPU.\nWe perform experiments and compare the performance of the proposed CKGC-CKD method with the state-of-the-art models on the existing multilingual dataset DBP-5L (Chen et al., 2020) and E-PKG (Huang et al., 2022). DBP-5L is a dataset sampled from DBpedia (Auer et al., 2007); it contains five KGs from different languages: English (EN), French (FR), Spanish (ES), Japanese (JA) and Greek (EL). E-PKG is an industrial multilingual E-commerce product KG dataset, which describes phone-related products across six different languages: English (EN), German (DE), French(FR), Japanese (JA), Spanish (ES) and Italian (IT). In this work, we follow the evaluation scheme of previous works (Chen et al., 2020;Singh et al., 2021;Huang et al., 2022): for a test triple (h, r, t), use the embedding model to rank all possible answers to tail prediction query (h, r, ?); and apply the MRR(mean reciprocal ranks), Hit@1 and Hit@10 metrics under filtered settings (Wang et al., 2014;Yang et al., 2015) to evaluate the performance of the models. The reported CKGC-CKD model uses a 1-layer encoder, with TransE as a knowledge embedding decoder and an embedding dimension of 100. 
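As a concrete illustration of the TransE decoder and the filtered ranking protocol used for evaluation, the sketch below scores candidate tails and computes the filtered rank, MRR, and Hits@k; the embedding dimension, the toy data, and the function names are illustrative only.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility score: negative distance, so higher is better."""
    return -np.linalg.norm(h + r - t)

def filtered_rank(gold_score, cand_scores, cand_is_known_true):
    """Rank of the gold answer under the filtered setting: candidates that
    are themselves known true answers are ignored when counting how many
    score higher than the gold one."""
    better = sum(1 for s, known in zip(cand_scores, cand_is_known_true)
                 if s > gold_score and not known)
    return better + 1

def mrr_hits(ranks, ks=(1, 10)):
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    return mrr, {k: sum(r <= k for r in ranks) / len(ranks) for k in ks}

# toy usage: five candidate tails for one (h, r, ?) query
rng = np.random.default_rng(0)
h, r = rng.normal(size=4), rng.normal(size=4)
tails = rng.normal(size=(5, 4))
scores = [transe_score(h, r, t) for t in tails]
print(filtered_rank(scores[0], scores, [False, False, True, False, False]))
print(mrr_hits([1, 2, 5, 12]))
```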
However, CKGC-CKD can be easily extended to use other knowledge-embedding decoders.\nDeeper encoder can also be explored if more hardware resources become available. " }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b2", "b4" ], "table_ref": [ "tab_0" ], "text": "In table 1 and 2, we present the experiment results on the DBP-5L dataset (Chen et al., 2020) and E-PKG dataset (Huang et al., 2022) respectively1 . In the tables, performances KGC-I refers to the standard CompGCN encoder model trained on individual KG, and KGC-A refers to the augmented message propagation encoder trained the fused KG. It can be observed that the proposed CKGC-CKD method outperforms all baseline and state-of-the-art models on both datasets. Compared to the previous cross-lingual models, the individually trained KGC-I model on each language can already achieve similar performance in most languages, which verifies the effectiveness of the CompGCN encoder. The KGC-A model trained on the fused KG provided a large margin over the KGC-I and the previous models. This implies that the inclusion of multiple KGs truly helps the KGC task of each other and also verifies the benefit of the augmented cross-KG message propagation even without explicit data leakage (direct swapping of relations between aligned entities). In the end, with mutual knowledge distillation between KGC-I and KGC-A enabled, the CKGC-CKD model uses the ensemble predictions from both distilled models. This achieves the best performances in the table across almost all of the metrics. Overall, compared to the KGC-I model trained on monolingual KG, CKGC-CKD achieves a significant gain on the lower-resource languages like Greek(EL) in DBL-5L and Japanese(JA) in E-PKG, which verifies the common belief that lower-resource KGs would typically benefit from the multi-KG learning setting. Moreover, we also observed performance improvement on relatively rich languages like French(FR) and English(EN) also in the experiments." }, { "figure_ref": [], "heading": "Ablation study", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Contribution of each component", "publication_ref": [ "b13", "b12" ], "table_ref": [], "text": "In table 3, we report the results of our ablation studies to analyze how each of the components in the proposed method affects the results. We report the ablation study results on the multilingual DBP-5L dataset and a monolingual self-generated D-W-15K-LP dataset. D-W-15K-LP is a dataset generated from the entity alignment benchmarking datasets D-W-15K (Sun et al., 2020). To mimic a more real-life setting, we employed the sampling strategy proposed in the work of Sun et al. (2021) to create dangling entities (entities without alignment across KGs) in the KGs. In the sampling process, Table 4: Results of CKGC-CKD models w/ and w/o extra triples introduced by alignment-informed meta-paths, metrics reported on link prediction task under standard filtered settings." }, { "figure_ref": [], "heading": "DBP-5L", "publication_ref": [ "b16", "b18" ], "table_ref": [], "text": "EL JA ES FR EN H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR w/ ES IT H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR H@1/H@10/MRR w/ triples containing removed entities are excluded by removing part of the alignments from KGs. This results in a more sparse dataset with dangling entities in each individual KG. 
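The sampling strategy is only summarised here, so the sketch below should be read as one plausible reconstruction rather than the exact procedure of Sun et al. (2021): a fraction of the seed alignments is dropped, the dropped counterpart entities and their triples are removed from one KG, and the remaining alignments are kept, which leaves the other side dangling. The names and the drop ratio are illustrative.

```python
import random

def make_dangling(kg2_triples, alignments, drop_ratio=0.3, seed=0):
    """Create dangling entities by dropping part of the seed alignments.

    For every dropped alignment (e1, e2), entity e2 and all of its triples
    are removed from KG2, so its counterpart e1 becomes a dangling entity.
    This is only one plausible reading of the sampling described above.
    """
    rng = random.Random(seed)
    dropped = {pair for pair in alignments if rng.random() < drop_ratio}
    kept_alignments = [pair for pair in alignments if pair not in dropped]
    removed = {e2 for _, e2 in dropped}
    kept_triples = [t for t in kg2_triples
                    if t[0] not in removed and t[2] not in removed]
    return kept_triples, kept_alignments

# toy usage
kg2 = [("a2", "r", "b2"), ("b2", "r", "c2")]
print(make_dangling(kg2, [("a1", "a2"), ("b1", "b2")], drop_ratio=0.5))
```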
In addition to the KGC-I and the KGC-A models reported in the section 4, we additionally report the performance of several ablation models: KGC-C refers to the ablation model trained on fused KG without augmented message propagation, KGC-I-D and KGC-A-D respectively represent the ablation models with mutual distillation enabled for KGC-I and KGC-A. Therefore, the reported CKGC-CKD is the ensemble results of KGC-A-D and KGC-I-D. For a more complete and universal comparison, in the ablation study, we use the traditional \"link prediction\" task that includes both head prediction and tail prediction with the traditional filtered setting used in the works of Wang et al. (2014) and Yang et al. (2015).\nOn both datasets, we can observe a clear margin that the KGC-A model created over the KGC-C model, which verifies the effectiveness of augmented message propagation. Additionally, on both datasets, the distillation-enabled KGC-I-D and KGC-A-D models have shown superior performance in almost all metrics over the KGC-I and KGC-A models, respectively. This has shown that the mutual knowledge distillation process benefits individual and fused models. Lastly, CKGC-CKD achieves the best performances in most metrics, which verifies the gains provided by the ensemble technique. An interesting observation is that even after the mutual knowledge distillation, the KGC-I-D models still perform slightly worse than the fused model KGC-A-D, and the difference in performance also varies across different KG. One possible reason behind this observation is that we used a constant α for all KGs in one dataset to control the trade-off between knowledge distillation loss and knowledge representation loss. Limited by the hardware resources, we did not explore possibilities of assigning different α for each KG and decided to leave that for future work that possibly explores a fine-tuning stage of the model to better reconcile the difference and imbalance of resources in each KG." }, { "figure_ref": [], "heading": "Discussion on the alignment-informed meta-path", "publication_ref": [ "b13", "b0", "b4" ], "table_ref": [], "text": "With multilingual datasets sharing the same relation schema, there are obvious mechanisms to introduce additional triples information via an alignment-informed meta-path. For example, a metapath of (E a SAM EAS\n-------→ E b Ra --→ E c SAM EAS -------→ E d ) implies a new triple (E a Ra --→ E d ),\nthis is usually referred as parameter-swapping (Sun et al., 2020). When the dataset involves more than two KGs, additional alignments can potentially also be inferred via meta-path: -------→ E c ). Additional triples information can be generated or shared using the two alignment-informed meta-paths above. However, this relies heavily on the quality annotation of seed alignments between the KGs, which is not usually guaranteed, especially in scenarios involving multiple KGs. In table 4, we report the link prediction performances (incl. both head prediction and tail prediction) of our method with and without alignment-informed meta-path included in training on the two multilingual datasets, in which we have observed fairly contradictory phenomenons: DBP5L dataset benefits massively from the additional triples from alignment-informed meta-path, while on the E-PKG dataset we have observed a decrease in performance when triples swapping are enabled. 
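For reference, the two alignment-informed meta-path rules above can be stated compactly in code; the sketch below enumerates triples inferred by parameter swapping and alignments inferred by one-hop transitivity, with toy set-based inputs and illustrative function names.

```python
def swap_parameters(triples, alignments):
    """Parameter swapping: (E_a ~ E_b), (E_b, r, E_c), (E_c ~ E_d) => (E_a, r, E_d).

    `alignments` is a set of equivalent-entity pairs; each entity is also
    treated as equivalent to itself, so swapping only one endpoint of a
    triple is covered as well.
    """
    same = {}
    for e1, e2 in alignments:
        same.setdefault(e1, {e1}).add(e2)
        same.setdefault(e2, {e2}).add(e1)
    inferred = set()
    for h, r, t in triples:
        for h2 in same.get(h, {h}):
            for t2 in same.get(t, {t}):
                if (h2, r, t2) != (h, r, t):
                    inferred.add((h2, r, t2))
    return inferred

def infer_alignments(alignments):
    """One-hop transitivity: (E_a ~ E_b), (E_b ~ E_c) => (E_a ~ E_c)."""
    neighbours = {}
    for a, b in alignments:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    inferred = set()
    for partners in neighbours.values():
        for a in partners:
            for c in partners:
                if a != c:
                    inferred.add(tuple(sorted((a, c))))
    return inferred - {tuple(sorted(p)) for p in alignments}

# toy usage
print(swap_parameters({("b", "r", "c")}, {("a", "b"), ("c", "d")}))
print(infer_alignments({("a", "b"), ("b", "c")}))
```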
From further analysis of the dataset, we hypothesized that the inconsistent observations on both datasets mainly originate from the quality of seed alignment annotations. Specifically, when we connect all entities with seed alignments annotations to form an \"alignment graph\", we discovered several connected components of size over 1000 in such a graph on the E-PKG dataset, suggesting that over 1000 entities across the six KGs are annotated to be aligned to each other. This could either be due to incorrect and noisy annotations or potentially duplicate entities within monolingual KGs. A similar effect was not observed in DBP-5L. The effect of lower quality KGs and seed alignments was enlarged when triples generated from alignment-informed meta-path were introduced, generating a prohibitively large number of inferred triples, resulting in the inconsistent observations in table 4. Compared to DBP-5L, which was generated from the sampling of the widely adopted and verified DBpedia (Auer et al., 2007), E-PKG was a recently constructed industrial E-commerce product KG dataset (Huang et al., 2022), which represents the quality of real-life industrial dataset to some extent. Therefore, we have decided not to make any assumptions about the dataset's quality in our settings and report the main results without including triples generated from alignment-informed meta-paths in our training.\n(E a SAM EAS -------→ E b SAM EAS -------→ E c ) implies (E a SAM EAS" }, { "figure_ref": [ "fig_1" ], "heading": "Knowledge transferred across KG", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss the knowledge being transferred across KGs. Compared to the individual models trained on monolingual KGs, the model on fused KG implicitly captures the transferable triples via alignment information discussed previously while avoiding the error propagation of the potential noisy seed alignments to some extent (as shown in table 4). Additionally, relations embeddings are shared across individual KGs within the connected model. As a result, relationships between the KG relations are also better captured. In figure 2, we visualize the correlations of relation embeddings learnt on Greek KG and the fused KG on the DBP-5L dataset. The correlations of learnt KG relations in the Greek KG differ from those discovered on the fused KG. Specifically, pairs of relatable connections like (\"birthplace\", \"nationality\"), (\"birthplace\", \"deathplace\"), as well as (\"homearena\", \"homestadium\"), all exhibit a higher correlation in the fused KG compared to the individual Greek KG. This implies that the connected model can capture more accurate dynamics between relations with more triples information and shared relation embeddings. At the same time, the low-resource Greek KG converged to a suboptimal region mainly due to the lack of information. The learnt knowledge is then transferred to the models on individual KGs via a knowledge distillation mechanism and further optimized during training." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we proposed a novel method CKGC-CKD that focuses on the KGC task across multiple KGs. The proposed method uses an augmented CompGCN encoder for message propagation across different KGs via seed alignments in a fused KG. In addition, the proposed model employs additional mutual knowledge distillations between individual KGs and the fused KG to maximize knowledge transfer. 
CKGC-CKD outperforms the state-of-the-art models by a significant margin on the KGC task on multilingual datasets DBP-5L and E-PKG. We also demonstrate the performance gains provided by each component of the proposed method. One limitation of this work lies in the assumption of sufficient seed alignments between KGs. When seed alignments are more limited than current experiment settings, it is fairly obvious that our method has potentials to generate probalistic predictions of alignments aross KGs. As a result, we have planned future work to specifically focus on including more probabilistic/noisy alignments predictions iteratively while limiting the propagation of error in the iterative process.\nAlgorithm 1 Pseudocode of the training process of CKGC-CKD.\n▷ " } ]
2023-05-25
10.1007/978-3-540-76298-0_52
[ { "authors": "Sören Auer; Christian Bizer; Georgi Kobilarov; Jens Lehmann; Richard Cyganiak; Zachary Ives", "journal": "Springer", "ref_id": "b0", "title": "DBpedia: A Nucleus for a Web of Open Data", "year": "2007" }, { "authors": "Antoine Bordes; Nicolas Usunier; Alberto Garcia-Duran; Jason Weston; Oksana Yakhnenko", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b1", "title": "Translating Embeddings for Modeling Multi-relational Data", "year": "2013" }, { "authors": "Xuelu Chen; Muhao Chen; Changjun Fan; Ankith Uppunda; Yizhou Sun; Carlo Zaniolo", "journal": "", "ref_id": "b2", "title": "Multilingual Knowledge Graph Completion via Ensemble Knowledge Transfer", "year": "2020" }, { "authors": "Xin Dong; Evgeniy Gabrilovich; Geremy Heitz; Wilko Horn; Ni Lao; Kevin Murphy; Thomas Strohmann; Shaohua Sun; Wei Zhang", "journal": "ACM", "ref_id": "b3", "title": "Knowledge vault: a web-scale approach to probabilistic knowledge fusion", "year": "2014" }, { "authors": "Zijie Huang; Zheng Li; Haoming Jiang; Tianyu Cao; Hanqing Lu; Bing Yin; Karthik Subbian; Yizhou Sun; Wei Wang", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Multilingual Knowledge Graph Completion with Self-Supervised Adaptive Graph Alignment", "year": "2022" }, { "authors": "Thomas N Kipf; Max Welling", "journal": "", "ref_id": "b5", "title": "Semi-Supervised Classification with Graph Convolutional Networks", "year": "2017-02" }, { "authors": "Yankai Lin; Zhiyuan Liu; Maosong Sun; Yang Liu; Xuan Zhu", "journal": "", "ref_id": "b6", "title": "Learning Entity and Relation Embeddings for Knowledge Graph Completion", "year": "2015" }, { "authors": "Maximilian Nickel; Hans-Peter Volker Tresp; Kriegel", "journal": "", "ref_id": "b7", "title": "A Three-Way Model for Collective Learning on Multi-Relational Data", "year": "2011" }, { "authors": "Franco Scarselli; Marco Gori; Ah Chung Tsoi; Markus Hagenbuchner; Gabriele Monfardini", "journal": "", "ref_id": "b8", "title": "The Graph Neural Network Model", "year": "2009" }, { "authors": "Michael Schlichtkrull; Thomas N Kipf; Peter Bloem; Rianne Van Den; Ivan Berg; Max Titov; Welling", "journal": "Springer International Publishing", "ref_id": "b9", "title": "Modeling Relational Data with Graph Convolutional Networks", "year": "2018" }, { "authors": "Harkanwar Singh; Prachi Jain; Mausam Mausam; Soumen Chakrabarti", "journal": "", "ref_id": "b10", "title": "Multilingual Knowledge Graph Completion with Joint Relation and Entity Alignment", "year": "2021" }, { "authors": "Raphaël Sourty; Jose G Moreno; François-Paul Servant; Lynda Tamine-Lechani", "journal": "", "ref_id": "b11", "title": "Knowledge Base Embedding By Cooperative Knowledge Distillation", "year": "2020" }, { "authors": "Zequn Sun; Muhao Chen; Wei Hu", "journal": "", "ref_id": "b12", "title": "Knowing the No-match: Entity Alignment with Dangling Cases", "year": "2021-08" }, { "authors": "Zequn Sun; Qingheng Zhang; Wei Hu; Chengming Wang; Muhao Chen; Farahnaz Akrami; Chengkai Li", "journal": "", "ref_id": "b13", "title": "A Benchmarking Study of Embedding-based Entity Alignment for Knowledge Graphs", "year": "2020-08" }, { "authors": "Shikhar Vashishth; Soumya Sanyal; Nitin Vikram; Partha Talukdar", "journal": "", "ref_id": "b14", "title": "COMPOSITION-BASED MULTI-RELATIONAL GRAPH CONVOLUTIONAL NETWORKS", "year": "2020" }, { "authors": "Huijuan Wang; Shuangyin Li; Rong Pan", "journal": "ACM", "ref_id": "b15", "title": "An Adversarial Transfer Network for Knowledge 
Representation Learning", "year": "2021" }, { "authors": "Zhen Wang; Jianwen Zhang; Jianlin Feng; Zheng Chen", "journal": "", "ref_id": "b16", "title": "Knowledge Graph Embedding by Translating on Hyperplanes", "year": "2014" }, { "authors": "Pengtao Xie; Xuefeng Du", "journal": "IEEE", "ref_id": "b17", "title": "Performance-Aware Mutual Knowledge Distillation for Improving Neural Architecture Search", "year": "2022-06" }, { "authors": "Bishan Yang; Wen-Tau Yih; Xiaodong He; Jianfeng Gao; Li Deng", "journal": "", "ref_id": "b18", "title": "EMBEDDING ENTITIES AND RELATIONS FOR LEARN-ING AND INFERENCE IN KNOWLEDGE BASES", "year": "2015" }, { "authors": "Donghan Yu; Yiming Yang; Ruohong Zhang; Yuexin Wu", "journal": "Association for Computing Machinery", "ref_id": "b19", "title": "Knowledge Embedding Based Graph Convolutional Network", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 108, 479.53, 396, 21.66 ], "formula_id": "formula_0", "formula_text": "KG f = {E f , R f , T f |E f = ∪ m i=1 E i , R f = ∪ m i=1 R i , T f = (∪ m i=1 T i ) ∪ S align }." }, { "formula_coordinates": [ 3, 239.45, 596.56, 265.22, 41.52 ], "formula_id": "formula_1", "formula_text": "h t v = f (Σ (u,r)∈N (v) M e(u, r)) (1) M e(u, r) = W λ(r) ϕ(h t-1 u , h t-1 r )(2)" }, { "formula_coordinates": [ 4, 192.58, 108.13, 312.08, 25.29 ], "formula_id": "formula_2", "formula_text": "M e(u, r) = W align h t-1 u if(u, ∼, v) ∈ S align W λ(r) ϕ(h t-1 u , h t-1 r ) otherwise(3)" }, { "formula_coordinates": [ 4, 232.45, 284.68, 272.21, 13.85 ], "formula_id": "formula_3", "formula_text": "L T = Σ ti∈Ti,t ′ i ∈T ′ i f (t i ) -f (t ′ i ) + γ(4)" }, { "formula_coordinates": [ 4, 232.96, 465.73, 271.7, 43.13 ], "formula_id": "formula_4", "formula_text": "L i D = ti∈Ti β∈T ask d KL (t i , β) (5) d KL (t i , β) = D KL (P β i (t i ), P β f (t i ))(6)" }, { "formula_coordinates": [ 4, 202.29, 558.25, 302.38, 24.72 ], "formula_id": "formula_5", "formula_text": "P i (t i ) = exp(f (M i (s i ), M i (r i ), M i (o i ))) oj ∈Ei exp(f (M i (s i ), M i (r i ), M i (o j )))(7)" }, { "formula_coordinates": [ 5, 273.14, 316.55, 231.52, 9.65 ], "formula_id": "formula_6", "formula_text": "L = L T + αL D(8)" }, { "formula_coordinates": [ 5, 108, 462.73, 138.25, 9.65 ], "formula_id": "formula_7", "formula_text": "F (t i ) = f (M i (t i )) + f (M f (t i ))" }, { "formula_coordinates": [ 7, 155.65, 505.47, 293.07, 13.37 ], "formula_id": "formula_8", "formula_text": "-------→ E b Ra --→ E c SAM EAS -------→ E d ) implies a new triple (E a Ra --→ E d )," }, { "formula_coordinates": [ 7, 108, 531.06, 396, 28.15 ], "formula_id": "formula_9", "formula_text": "(E a SAM EAS -------→ E b SAM EAS -------→ E c ) implies (E a SAM EAS" } ]
Collective Knowledge Graph Completion with Mutual Knowledge Distillation
Knowledge graph completion (KGC), the task of predicting missing information based on the existing relational data inside a knowledge graph (KG), has drawn significant attention in recent years. However, the predictive power of KGC methods is often limited by the completeness of the existing knowledge graphs from different sources and languages. In both monolingual and multilingual settings, KGs are potentially complementary to each other. In this paper, we study the problem of multi-KG completion, where we focus on maximizing the collective knowledge from different KGs to alleviate the incompleteness of individual KGs. Specifically, we propose a novel method called CKGC-CKD that uses relation-aware graph convolutional network encoder models on both the individual KGs and a large fused KG, in which seed alignments between KGs are regarded as edges for message propagation. An additional mutual knowledge distillation mechanism is employed to maximize the knowledge transfer between the model of the "global" fused KG and the models of the "local" individual KGs. Experimental results on multilingual datasets show that our method outperforms all state-of-the-art models in the KGC task.
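To make the mutual distillation described in the abstract concrete, the snippet below sketches the paired KL terms that let a "local" KG model and the "global" fused-KG model teach each other over a shared set of candidate tail entities, in the spirit of the softmax-based candidate distribution of Equation (7) in the formula records above. The function names and the absence of a temperature are assumptions for illustration only.

import torch.nn.functional as F

def candidate_distribution(scores):
    # P(o | s, r): softmax over the scores of candidate tail entities
    # (Equation (7) in the formula records above).
    return F.softmax(scores, dim=-1)

def mutual_kd_losses(scores_local, scores_global):
    # Each model is distilled toward the other's (detached) prediction, so knowledge
    # flows from the fused "global" model to the "local" KG model and back.
    p_local = candidate_distribution(scores_local)
    p_global = candidate_distribution(scores_global)
    loss_for_local = F.kl_div(p_local.log(), p_global.detach(), reduction="batchmean")
    loss_for_global = F.kl_div(p_global.log(), p_local.detach(), reduction="batchmean")
    return loss_for_local, loss_for_global

In training, each KL term would be weighted by a coefficient (the alpha of Equation (8) above) and added to the corresponding model's knowledge-representation loss.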
Weihang Zhang; Ovidiu Serban; Jiahao Sun; Yi-Ke Guo
[ { "figure_caption": "Figure 1 :1Figure 1: An illustrative figure of the proposed CKGC-CKD with 2 KGs.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Relation correlation heatmap of Greek KG v.s. fused KG.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Results on DBP-5L dataset.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on E-PKG dataset.", "figure_data": "KenS28.1 / 56.9 / -32.1 / 65.3 / -23.6 / 60.1 / -25.5 / 62.9 / -15.1 / 39.8 / -CG-MuA21.5 / 44.8 / 32.827.3 / 61.1 / 40.122.3 / 55.4 / 34.324.2 / 57.1 / 36.113.1 / 33.5 / 22.2AlignKGC27.6 / 56.3 / 33.831.6 / 64.3 / 41.624.2 / 60.9 / 35.124.1 / 62.3 / 37.415.5 / 39.2 / 22.3SS-AGA30.8 / 58.6 / 35.334.6 / 66.9 / 42.925.5 / 61.9 / 36.627.1 / 65.5 / 38.416.3 / 41.3 / 23.1KGC-I28.9 / 66.8 / 41.630.3 / 61.7 / 41.424.8 / 61.2 / 37.525.8 / 64.1 / 39.120.5 / 58.6 / 33.5KGC-A43.4 / 89.2 / 60.142.7 / 83.6 / 57.232.8 / 77.2 / 48.435.1 / 81.4 / 51.325.1 / 67.2 / 39.6CKGC-CKD49.2 / 88.5 / 63.948.4 / 84.6 / 60.937.5 / 77.6 / 51.840.9 / 83.0 / 55.828.8 / 67.1 / 42.1ENDEFRJAES", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study results on DBP-5L and D-W-15K-LP.", "figure_data": "DBP-5LD-W-15K-LPMetric ELJAESFREN DBpedia [email protected]@1049.144.744.145.345.554.249.2MRR31.330.727.929.526.938.434.2H@136.432.727.028.720.229.926.8KGC-CH@1069.563.657.159.750.855.350.4MRR47.943.637.639.530.738.835.4H@137.935.528.029.421.030.727.5KGC-AH@1071.567.059.562.253.855.650.8MRR49.946.539.140.932.339.335.8H@137.835.428.630.522.331.229.2KGC-I-DH@1066.062.455.458.850.354.949.8MRR48.144.938.040.631.839.436.5H@140.237.830.231.923.031.729.2KGC-A-DH@1071.866.859.563.152.755.750.7MRR51.347.940.642.833.340.036.9H@141.438.731.332.723.231.729.3CKGC-CKDH@1070.567.259.563.453.455.850.8MRR52.148.641.243.633.640.137.0", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Stage 1: Train each model M i and M f with the Knowledge Representation loss. for i ∈ 1..m + 1 do while M i not converged do L i ← T i M i ← Update w.r.t L i end while end for ▷ Stage 2: Train each model M i and M f with the Knowledge Representation and the Knowledge Distillation loss. while not converged do batch f ← sample from triple set T f L f T ← calculate loss of batch f base on equation 4 for i ∈ 1..m do batch i ← sample from triple set T i L i T ← calculate loss of batch i based on equation 4 L i D , L f D ← calculate distillation losses between M i and M f on batch i based on equation 5 with top-k sampling to select candidates space of distillation L i ← L i T + αL i Update w.r.t L i end for M f ← Update w.r.t L f end while", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Dong et al., 2014)", "Explanation": "The cited work by Dong et al. provides a method for tackling the incompleteness problem in KGs, which the citing paper adopts in their research on KGC and other KG-based tasks."}, {"Category": "Methodological Basis", "Citation": "(Schlichtkrull et al., 2018)", "Explanation": "The cited work introduces the R-GCN method, which is used as a variation of GCN in the encoding stage of the KGC problem, providing a methodological basis for the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Vashishth et al., 2020)", "Explanation": "The cited work presents the CompGCN method, which is also a variation of GCN used in the encoding stage of the KGC problem, providing another methodological basis for the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Kipf and Welling, 2017)", "Explanation": "The cited work introduces the GCN method, which is a key component in the encoding stage of the KGC problem, providing a methodological basis for the citing paper."}, {"Category": "Data Source", "Citation": "(Bordes et al., 2013)", "Explanation": "The cited work presents the TransE method, which is a traditional knowledge graph embedding method used as a decoder in the KGC task, providing a data source for the citing paper."}, {"Category": "Data Source", "Citation": "(Yang et al., 2015)", "Explanation": "The cited work introduces the DistMult method, which is also a traditional knowledge graph embedding method used as a decoder in the KGC task, providing another data source for the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Vashishth et al., 2020)", "Explanation": "The cited work by Vashishth et al. (2020) provides the CompGCN-based encoders that the citing paper adopts in their research to train encoders on individual KGs and a fused KG for knowledge transfer."}, {"Category": "Methodological Basis", "Citation": "(Bordes et al., 2013)", "Explanation": "The cited work, TransE, serves as a traditional translation-based method for knowledge graph embeddings, which the citing paper adopts in its research on the KGC task."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2014)", "Explanation": "The cited work, TransH, is another traditional translation-based method for knowledge graph embeddings that the citing paper uses in its research on the KGC task."}, {"Category": "Methodological Basis", "Citation": "(Lin et al., 2015)", "Explanation": "The cited work, TransR, is a traditional translation-based method for knowledge graph embeddings that the citing paper utilizes in its research on the KGC task."}, {"Category": "Methodological Basis", "Citation": "(Nickel et al., 2011)", "Explanation": "The cited work, RESCAL, is a semantic matching model for knowledge graph embeddings that the citing paper references in its research on the KGC task."}, {"Category": "Methodological Basis", "Citation": "(Yang et al., 2015)", "Explanation": "The cited work, DistMult, is another semantic matching model for knowledge graph embeddings that the citing paper mentions in its research on the KGC task."}, {"Category": "Methodological Basis", "Citation": "(Schlichtkrull et al., 2018)", "Explanation": "The cited work by Schlichtkrull et al. 
(2018) employs the graph structure to propagate information between adjacent entities and encode them into embeddings, which the citing paper builds upon in its research."}, {"Category": "Methodological Basis", "Citation": "(Vashishth et al., 2020)", "Explanation": "The cited work by Vashishth et al. (2020) also uses the graph structure to encode entities and relations in the heterogeneous graphs into vectors, which the citing paper further builds upon in its research."}, {"Category": "Methodological Basis", "Citation": "(Yu et al., 2021)", "Explanation": "The cited work by Yu et al. (2021) utilizes the GCN model as an encoder to embed entities and relations in the heterogeneous graphs into vectors, which the citing paper adopts in its research."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2021)", "Explanation": "The cited work proposes ATransN, an adversarial embedding transfer network that facilitates knowledge transfer from a pre-trained embedding of a teacher KG to a student KG with a set of seed alignments, which the citing paper adopts in their research on KGC across multiple KGs."}, {"Category": "Extension or Continuation", "Citation": "(Chen et al., 2020)", "Explanation": "The cited work was the first to propose the multilingual KGC problem setting and generated ensemble results on shared triples between KGs in different languages, which the citing paper extends by exploring the same problem setting in their research on KGC across multiple KGs."}, {"Category": "Extension or Continuation", "Citation": "(Singh et al., 2021)", "Explanation": "The cited work proposed AlignKGC, which employs a multi-task strategy to jointly train KGC, entity alignment, and relation alignment tasks, building upon the research of the cited work on multilingual KGC in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work proposed SS-AGA, which models seed alignments as edges to fuse multiple knowledge graphs and uses a generator model to dynamically capture more potential alignments between entities, continuing the research on KGC across multiple KGs in the citing paper."}, {"Category": "Data Source", "Citation": "(Sourty et al., 2020)", "Explanation": "The cited work proposed KD-MKB, which assumes the existence of both shared relations and shared entities across individual KGs, providing a data source for the multi-KG completion tasks in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Vashishth et al., 2020)", "Explanation": "The cited work provides the encoders for the knowledge graph embeddings used in the citing paper, serving as the methodological basis for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2021)", "Explanation": "The cited work by Wang et al. provides a method for ensuring the equivalent entities in each KG have similar embeddings, which the citing paper adopts in their research to improve the knowledge transfer on the fused KG f."}, {"Category": "Methodological Basis", "Citation": "(Singh et al., 2021)", "Explanation": "The cited work by Singh et al. also provides a method for ensuring the equivalent entities in each KG have similar embeddings, which the citing paper may have considered in their research to improve the knowledge transfer on the fused KG f."}, {"Category": "Methodological Basis", "Citation": "(Sourty et al., 2020)", "Explanation": "The cited work by Sourty et al. 
(2020) introduces the top-k sampling technique that the citing paper adopts in the mutual distillation process to select the most confident candidates for the categorical distribution comparison."}, {"Category": "Extension or Continuation", "Citation": "(Xie and Du, 2022)", "Explanation": "The cited work by Xie and Du (2022) on knowledge distillation provides a new perspective on the issue of negative knowledge transfer in the mutual distillation process. The citing paper extends the research by adopting a softer restriction to allow for a more flexible approach in generating soft training targets."}, {"Category": "Data Source", "Citation": "(Chen et al., 2020)", "Explanation": "The dataset DBP-5L is used as a benchmark for evaluating the performance of the proposed method in the citing paper."}, {"Category": "Data Source", "Citation": "(Huang et al., 2022)", "Explanation": "The E-PKG dataset is also used as a benchmark for evaluating the performance of the proposed method in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2014;Yang et al., 2015)", "Explanation": "The cited works provide the evaluation metrics of MRR, Hit@1, and Hit@10 for the performance evaluation of the models in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2020)", "Explanation": "The DBP-5L dataset is used as a benchmark to evaluate the performance of the proposed model, providing a methodological basis for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Huang et al., 2022)", "Explanation": "The E-PKG dataset is used in the experiment to test the performance of the model, serving as a data source for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Sun et al., 2020)", "Explanation": "The cited work provides a dataset (D-W-15K) that the citing paper uses to generate a new dataset (D-W-15K-LP) for the study of entity alignment in knowledge graphs."}, {"Category": "Extension or Continuation", "Citation": "(Sun et al., 2021)", "Explanation": "The cited work introduces a sampling strategy to create dangling entities in knowledge graphs, which the citing paper adopts to create a new dataset (D-W-15K-LP) for the study of entity alignment in knowledge graphs."}, {"Category": "Supporting Evidence", "Citation": "(Wang et al., 2014)", "Explanation": "The cited work by Wang et al. provides a traditional filtered setting for the link prediction task, which serves as a reference point for the comparison in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Yang et al., 2015)", "Explanation": "The cited work by Yang et al. also contributes to the traditional filtered setting for the link prediction task, providing additional context and data for the comparison in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Wang et al., 2014)", "Explanation": "The cited work by Wang et al. serves as a foundation for the more complete and universal comparison conducted in the citing paper, which expands upon the original research by including both head prediction and tail prediction in the link prediction task."}, {"Category": "Extension or Continuation", "Citation": "(Yang et al., 2015)", "Explanation": "The cited work by Yang et al. 
also contributes to the extension of the link prediction task in the citing paper, providing additional insights and data for the comparison."}, {"Category": "Supporting Evidence", "Citation": "(Wang et al., 2014)", "Explanation": "The cited work by Wang et al. provides a traditional filtered setting for the link prediction task, which serves as a reference point for the comparison in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Yang et al., 2015)", "Explanation": "The cited work by Yang et al. also contributes to the traditional filtered setting for the link prediction task, providing additional context and data for the comparison in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Wang et al., 2014)", "Explanation": "The cited work by Wang et al. serves as a foundation for the more complete and universal comparison conducted in the citing paper, which expands upon the original research by including both head prediction and tail prediction in the link prediction task."}, {"Category": "Extension or Continuation", "Citation": "(Yang et al., 2015)", "Explanation": "The cited work by Yang et al. also contributes to the extension of the link prediction task in the citing paper, providing additional insights and data for the comparison."}, {"Category": "Extension or Continuation", "Citation": "(Wang et al., 2014)", "Explanation": "The cited work by Wang et al. serves as a foundation for the more complete and universal comparison conducted in the citing paper, which expands upon the original research by including both head prediction and tail prediction in the link prediction task."}, {"Category": "Extension or Continuation", "Citation": "(Yang et al., 2015)", "Explanation": "The cited work by Yang et al. also contributes to the extension of the link prediction task in the citing paper, providing additional insights and data for the comparison."}, {"Category": "Methodological Basis", "Citation": "(Sun et al., 2020)", "Explanation": "The cited work introduces the concept of parameter-swapping in the context of meta-path alignment, which the citing paper adopts in their research to generate additional triples information in multilingual datasets."}, {"Category": "Data Source", "Citation": "(Auer et al., 2007)", "Explanation": "The cited work by Auer et al. (2007) is the source of the widely adopted and verified DBpedia dataset, which is used in the citing paper to generate the DBP-5L dataset."}, {"Category": "Extension or Continuation", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work by Huang et al. (2022) constructed the E-PKG industrial E-commerce product KG dataset, which the citing paper further uses to represent the quality of real-life industrial datasets in their research."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b13", "b54", "b1", "b9", "b10", "b17", "b23", "b42", "b14", "b13", "b3", "b4" ], "table_ref": [], "text": "Visual object tracking has been a fundamental and long-standing task in computer vision, which aims to locate the object throughout a video sequence, given its initial bounding box. It has a wide range of practical applications, which often require for low computational latency. So it is important to design an efficient tracking framework while maintaining high accuracy.\nRecently, the Transformer-based one-stream trackers [7,14,55] achieve excellent tracking accuracy than the previous Siamese trackers [2,10,11], due to the unified modeling of feature extraction and target integration within a transformer block, which allows both components to benefit from the transformer development (e.g. ViT [18], self-supervised pre-training [24] or contrastive pretraining [43]). However for these trackers, inference efficiency, especially on CPU, is still the main obstacle to practical deployment. Taking the state-of-the-art tracker MixViT [15] as an instance, its pipeline contains i) transformer backbone on the token sequence from target template and search area, ii) dense corner head on the 2D search region for regression, and iii) extra complex score prediction module for classification (i.e., estimating the box quality for reliable online samples selection). To achieve a high-efficiency tracker, there are still several issues on the design of MixViT. First, the dense convolutional corner head still exhibits a time-consuming design, as implied in Tab 1. This is because it densely estimates the probability distribution of the box corners through a total of † Equal contribution. * Corresponding author ([email protected]). Table 2: Efficiency analysis on MixViT with different backbone settings. The employed prediction head is plain corner head [14] for the analysis. The circle diameter is in proportion to model FLOPs. MixFormerV2-B surpasses existing trackers by a large margin in terms of both accuracy and inference speed. MixFormerV2-S achieves extremely high tracking speed of over 300 FPS while obtaining competitive accuracy compared with other efficient trackers [4,5].\nten convolutional layers on the high-resolution 2D feature maps. Second, to deal with online template updating, an extra complex score prediction module composed of precise RoI pooling layer, two attention blocks, and a three-layer MLP is required for improving online samples quality, which largely hinders its efficiency and simplicity of MixViT.\nTo avoid the dense corner head and complicated score prediction module, we propose a new fully transformer tracking framework-MixFormerV2 without any dense convolutional operation. Our MixFormerV2 yields a very simple and efficient architecture, which is composed of a transformer backbone on the mixed token sequence and two simple MLP heads on the learnable prediction tokens. Specifically, we introduce four special learnable prediction tokens and concate them with the original tokens from target template and search area. Like the CLS token in the standard ViT, these prediction tokens are able to capture the complex relation between target template and search area, serving as a compact representation for subsequent regression and classification. Based on them, we can easily predict the target box and confidence score through simple MLP heads, which results in an efficient fully transformer tracker. 
Our MLP heads directly regress the probability distribution of four box coordinates, which can improve the regression accuracy without increasing overhead.\nTo further improve efficiency of MixFormerV2, we present a new model reduction paradigm based on distillation, including dense-to-sparse distillation and deep-to-shallow distillation. The denseto-sparse distillation aims to transfer knowledge from the dense-head based MixViT to our fully transformer tracker. Thanks to the distribution-based regression design in our MLP head, we can easily adopt logits mimicking strategy for distilling MixViT trackers to our MixFormerV2. Based on the observation in Tab. 2, we also exploit the deep-to-shallow distillation to prune our MixFormerV2. We devise a new progressive depth pruning strategy by following a critical principle that constraining the initial distribution of student and teacher trackers to be as similar as possible, which can augment the capacity of knowledge transfer. Specifically, instructed by the frozen teacher model, some certain layers of a copied teacher model are progressively dropped and we use the pruned model as our student initialization. For realtime tracking on CPU, we further introduce an intermediate teacher model to bridge the gap between the large teacher and small student, and prune hidden dim of MLP based on the proposed distillation paradigm.\nBased on the proposed model reduction paradigm, we instantiate two types of MixFormerV2 trackers, MixFormerV2-B and MixFormerV2-S. As shown in Fig. " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b19", "b27", "b40", "b41", "b47", "b1", "b30", "b31", "b49", "b2", "b15", "b9", "b36", "b51", "b6", "b13", "b54", "b52", "b4", "b8", "b3", "b26", "b0", "b22", "b32", "b56", "b21", "b46", "b26", "b35", "b24", "b18", "b43", "b50", "b7", "b20", "b5", "b53", "b55" ], "table_ref": [], "text": "Efficient Visual Object Tracking. In recent decades, the visual object tracking task has witnessed rapid development due to the emergence of new benchmark datasets [20,28,41,42,48] and better trackers [2, 10, 12-14, 32, 52, 55]. Researchers have tried to explore efficient and effective tracking architectures for practical applications, such as siamese-based trackers [2,31,32,50], online trackers [3,16], and transformer-based trackers [10,37,52]. Benefiting from transformer structure and attention mechanism, recent works [7,14,55] on visual tracking are gradually abandoning traditional three-stage model paradigm, i.e., feature extraction, information interaction and box localization. They adopted a more unified one-stream model structure to jointly perform feature extraction and interaction, which turned out to be effective for visual object tracking task. However, some modern tracking architectures are too heavy and computational expensive, making it hard to deploy in practical applications. LightTrack [53] employed NAS to search a light Siamese network, but its speed was not extremely fast on powerful GPUs. FEAR [5], HCAT [9], and E.T.Track [4] designed more efficient frameworks, which were not suitable for one-stream trackers. We are the first to design efficient one-stream tracker so as to achieve good accuracy and speed trade-off.\nKnowledge Distillation. Knowledge Distillation (KD) [27] was proposed to learn more effective student models with teacher model supervision. 
In the beginning, KD is applied in classification problem, where KL divergence is used for measuring the similarity of teacher and student predictions.\nFor regression problem like object detection, feature mimicking [1,23,33] is frequently employed. LD [57] operated logits distillation on bounding box location by converting Dirac delta distribution representation to probability distribution representation of bounding box, which well unified logits distillation and location distillation. In this work, we exploit some customized strategies to make knowledge distillation more suitable for our tracking framework.\nVision Transformer Compression. There exist many general techniques for the purpose of speeding up model inference, including model quantization [22,47], knowledge distillation [27,36], pruning [25], and neural architecture search [19]. Recently many works also focus on compressing vision transformer models. For example, Dynamic ViT [44] and Evo-ViT [51] tried to prune tokens in attention mechanism. AutoFormer [8], NASViT [21], and SlimmingViT [6] employed NAS technique to explore delicate ViT architectures. ViTKD [54] provided several ViT feature distillation guidelines but focused on compressing the feature dimension instead of model depth. MiniViT [56] applied weights sharing and multiplexing to reduce model parameters. Since one-stream trackers highly rely on expensive pre-training, we resort to directly prune the layers of our tracker." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we first present the MixFormerV2, which is an efficient and fully transformer tracking framework. Then we describe the proposed distillation-based model reduction, including dense-tosparse distillation and deep-to-shallow distillation. Finally, we describe the training of MixFormerV2." }, { "figure_ref": [ "fig_1" ], "heading": "Fully Transformer Tracking: MixFormerV2", "publication_ref": [ "b9", "b51", "b13", "b54", "b6", "b14" ], "table_ref": [], "text": "The proposed MixFormerV2 is a fully transformer tracking framework without any convolutional operation or complex score prediction module. Its backbone is a plain vision transformer on the mixed token sequence of three types: target template tokens, search area tokens, and learnable prediction tokens. Then, simple MLP heads are placed on top for predicting probability distribution of the box coordinates and the corresponding target quality score. Compared with other transformer-based trackers (e.g., TransT [10], STARK [52], MixFormer [14], OSTrack [55] and SimTrack [7]), our MixFormerV2 streamlines the tracking pipeline by effectively removing the customized convolutional classification and regression heads for the first time, which yields a more unified, efficient and flexible tracker. The overall architecture is depicted in Fig. 2. With inputting the template tokens, the search area tokens and the learnable prediction tokens, MixFormerV2 directly predicts the target bounding boxes and quality score in an end-to-end manner. Prediction-Token-Involved Mixed Attention. Compared to the original slimming mixed attention [15] in MixViT, the key difference lies in the introduction of the special learnable prediction tokens, which are used to capture the correlation between the target template and search area. These prediction tokens can progressively extract the target information and used as a compact representations for subsequent regression and classification. 
Specifically, given the concatenated tokens of multiple templates, the search area, and the four learnable prediction tokens, we pass them into N layers of prediction-token-involved mixed attention modules (P-MAM). We use $q_t$, $k_t$ and $v_t$ to denote the template elements of attention (i.e., query, key and value), $q_s$, $k_s$ and $v_s$ those of the search region, and $q_e$, $k_e$ and $v_e$ those of the learnable prediction tokens. The P-MAM can be defined as:
$$k_{tse} = \mathrm{Concat}(k_t, k_s, k_e), \quad v_{tse} = \mathrm{Concat}(v_t, v_s, v_e),$$
$$\mathrm{Atten}_t = \mathrm{Softmax}\left(\frac{q_t k_t^{T}}{\sqrt{d}}\right) v_t, \quad \mathrm{Atten}_s = \mathrm{Softmax}\left(\frac{q_s k_{tse}^{T}}{\sqrt{d}}\right) v_{tse}, \quad \mathrm{Atten}_e = \mathrm{Softmax}\left(\frac{q_e k_{tse}^{T}}{\sqrt{d}}\right) v_{tse}, \quad (1)$$
where $d$ is the dimension of each element, and $\mathrm{Atten}_t$, $\mathrm{Atten}_s$ and $\mathrm{Atten}_e$ are the attention outputs of the template, search and learnable prediction tokens respectively. Similar to the original MixFormer, we use the asymmetric mixed attention scheme for efficient online inference. Like the CLS token in the standard ViT, the learnable prediction tokens are automatically trained on the tracking dataset to integrate the template and search area information.
Direct Prediction Based on Tokens. After the transformer backbone, we directly use the prediction tokens to regress the target location and estimate its reliability score. Specifically, we exploit distribution-based regression based on the four special learnable prediction tokens. In this sense, we regress the probability distribution of the four bounding box coordinates rather than their absolute positions. Experimental results in Section 4.2 also validate the effectiveness of this design. As the prediction tokens compress target-aware information via the prediction-token-involved mixed attention modules, we can simply predict the four box coordinates with a weight-sharing MLP head as follows:
$$\hat{P}_X(x) = \mathrm{MLP}(\mathrm{token}_X), \quad X \in \{T, L, B, R\}. \quad (2)$$
In implementation, we share the MLP weights among the four prediction tokens. For predicted target quality assessment, the Score Head is a simple MLP composed of two linear layers. Specifically, we first average the four prediction tokens to gather the target information, and then feed the result into the MLP-based Score Head to directly predict the confidence score $s$, which is a real number. Formally, $s = \mathrm{MLP}(\mathrm{mean}(\mathrm{token}_X)),\ X \in \{T, L, B, R\}$. These token-based heads largely reduce the complexity of both box estimation and quality score estimation, which leads to a simpler and more unified tracking architecture. " }, 
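To make the token-level computation above concrete, the snippet below sketches one P-MAM attention step and the shared coordinate head: template tokens attend only to themselves (the asymmetric scheme), while search and prediction tokens attend to the full concatenation, and each prediction token is decoded into a discretized distribution over one box coordinate. The bin count, head width, and all names are illustrative assumptions rather than the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def p_mam_attention(q, k, v, n_t):
    # Asymmetric mixed attention of Eq. (1): template queries attend only to template
    # keys/values, while search-region and prediction-token queries attend to the full
    # concatenation. q, k, v: [B, N, d]; the first n_t tokens are template tokens.
    d = q.size(-1)
    att_t = F.softmax(q[:, :n_t] @ k[:, :n_t].transpose(-2, -1) / d ** 0.5, dim=-1) @ v[:, :n_t]
    att_se = F.softmax(q[:, n_t:] @ k.transpose(-2, -1) / d ** 0.5, dim=-1) @ v
    return torch.cat([att_t, att_se], dim=1)

class CoordinateHead(nn.Module):
    # One weight-shared MLP maps each of the four prediction tokens to a discretized
    # distribution over one box coordinate (Eq. 2); the bin count is an assumption.
    def __init__(self, dim=768, n_bins=72):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_bins))

    def forward(self, pred_tokens):                       # [B, 4, dim]
        probs = F.softmax(self.mlp(pred_tokens), dim=-1)  # [B, 4, n_bins]
        bins = torch.linspace(0, 1, probs.size(-1), device=probs.device)
        coords = (probs * bins).sum(-1)                   # expectation -> (T, L, B, R)
        return probs, coords

The expectation in the last line anticipates the coordinate recovery of Equation (3) in the following section.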
{ "figure_ref": [], "heading": "Distillation-Based Model Reduction", "publication_ref": [], "table_ref": [], "text": "To further improve the efficiency of MixFormerV2, we present a distillation-based model reduction paradigm as shown in Fig. 3, which first performs dense-to-sparse distillation for better token-based prediction and then deep-to-shallow distillation for model pruning." }, { "figure_ref": [], "heading": "Dense-to-Sparse Distillation", "publication_ref": [], "table_ref": [], "text": "In MixFormerV2, we directly regress the target bounding box, based on the prediction tokens, as the distributions of four random variables $T, L, B, R \in \mathbb{R}$, which represent the box's top, left, bottom and right coordinates respectively. In detail, we predict the probability density function of each coordinate: $X \sim \hat{P}_X(x)$, where $X \in \{T, L, B, R\}$. The final bounding box coordinate is derived as the expectation over the regressed probability distribution:
$$B_X = \mathbb{E}_{\hat{P}_X}[X] = \int_{\mathbb{R}} x \, \hat{P}_X(x) \, dx. \quad (3)$$
Since the original MixViT dense convolutional corner heads predict two-dimensional probability maps, namely the joint distributions $P_{TL}(x, y)$ and $P_{BR}(x, y)$ of the top-left and bottom-right corners, the one-dimensional box coordinate distributions can be deduced easily as marginal distributions:
$$P_T(x) = \int_{\mathbb{R}} P_{TL}(x, y) \, dy, \quad P_L(y) = \int_{\mathbb{R}} P_{TL}(x, y) \, dx, \quad P_B(x) = \int_{\mathbb{R}} P_{BR}(x, y) \, dy, \quad P_R(y) = \int_{\mathbb{R}} P_{BR}(x, y) \, dx. \quad (4)$$
This formulation bridges the gap between the dense corner prediction and our sparse token-based prediction, so the regression outputs of the original MixViT can be regarded as soft labels for dense-to-sparse distillation. Specifically, we use the MixViT outputs $P_X$ of Equation (4) to supervise the four coordinate estimates $\hat{P}_X$ of MixFormerV2, applying a KL-divergence loss:
$$L_{log} = \sum_{X \in \{T, L, B, R\}} L_{KL}(\hat{P}_X, P_X). \quad (5)$$
In this way, the localization knowledge is transferred from the dense corner head of MixViT to the simple token-based regression head of MixFormerV2." }, 
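A small sketch of this dense-to-sparse transfer is given below: the teacher's two-dimensional corner probability maps are marginalized into four one-dimensional coordinate distributions (Eq. 4), which then supervise the student's token-based distributions with a KL term (Eq. 5). Map resolutions, the axis convention, and the resampling step are assumptions for illustration.

import torch.nn.functional as F

def corners_to_marginals(p_tl, p_br):
    # p_tl, p_br: [B, H, W] joint corner probability maps from the dense teacher head.
    # Marginalize them (Eq. 4) into 1-D distributions for top, left, bottom and right;
    # the axis convention (rows = vertical coordinate) is an assumption.
    return p_tl.sum(dim=2), p_tl.sum(dim=1), p_br.sum(dim=2), p_br.sum(dim=1)

def dense_to_sparse_kd(student_logits, teacher_maps):
    # student_logits: [B, 4, n_bins] from the token-based head, ordered (T, L, B, R).
    loss = 0.0
    for s_logit, t_prob in zip(student_logits.unbind(1), corners_to_marginals(*teacher_maps)):
        if t_prob.size(-1) != s_logit.size(-1):
            # Resample the teacher marginal if the resolutions differ (assumption).
            t_prob = F.interpolate(t_prob[:, None], size=s_logit.size(-1), mode="linear").squeeze(1)
            t_prob = t_prob / t_prob.sum(dim=-1, keepdim=True)
        loss = loss + F.kl_div(F.log_softmax(s_logit, dim=-1), t_prob, reduction="batchmean")
    return loss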
{ "figure_ref": [], "heading": "Deep-to-Shallow Distillation", "publication_ref": [ "b4" ], "table_ref": [], "text": "To further improve efficiency, we focus on pruning the transformer backbone. However, designing a new lightweight backbone is not suitable for fast one-stream tracking, since a new backbone for one-stream trackers usually relies on large-scale pre-training to achieve good performance, which requires huge amounts of computation. Therefore, we resort to directly cutting down some layers of the MixFormerV2 backbone based on both feature mimicking and logits distillation, as can be seen in Fig. 3: Stage 2. Let $F^S_i, F^T_j \in \mathbb{R}^{h \times w \times c}$ denote the feature maps of the student and the teacher, where the subscripts index the layers. For logits distillation, we use the same KL-divergence loss $L_{log}$ as in Equation (5). For feature imitation, we apply an $L_2$ loss:
$$L_{feat} = \sum_{(i,j) \in M} L_2(F^S_i, F^T_j), \quad (6)$$
where $M$ is the set of matched layer pairs to be supervised. Specifically, we design a progressive model depth pruning strategy for distillation.
Progressive Model Depth Pruning. Progressive model depth pruning aims to compress the MixFormerV2 backbone by reducing the number of transformer layers. Since directly removing some layers could lead to inconsistency between teacher and student, we explore a progressive method for model depth pruning based on feature and logits distillation. Specifically, instead of letting the teacher supervise a smaller student model from scratch, we make the original student model a complete copy of the teacher model, and then progressively eliminate certain layers of the student while making the remaining layers mimic the teacher representation under the teacher's supervision. This design keeps the initial representations of student and teacher as consistent as possible, providing a smooth transition scheme and reducing the difficulty of feature mimicking.
Formally, let $x_i$ denote the output of the $i$-th layer of the MixFormerV2 backbone. The calculation of an attention block can be represented as below (Layer-Normalization is omitted):
$$x'_i = \mathrm{ATTN}(x_{i-1}) + x_{i-1}, \qquad x_i = \mathrm{FFN}(x'_i) + x'_i = \mathrm{FFN}(\mathrm{ATTN}(x_{i-1}) + x_{i-1}) + \mathrm{ATTN}(x_{i-1}) + x_{i-1}. \quad (7)$$
Let $E$ be the set of layers to be eliminated in our student network; we apply a decay rate $\gamma$ to the weights of these layers:
$$x_i = \gamma \big( \mathrm{FFN}(\mathrm{ATTN}(x_{i-1}) + x_{i-1}) + \mathrm{ATTN}(x_{i-1}) \big) + x_{i-1}, \quad i \in E. \quad (8)$$
During the first $m$ epochs of student training, $\gamma$ gradually decreases from 1 to 0 following a cosine schedule:
$$\gamma(t) = \begin{cases} 0.5 \times \left( 1 + \cos \frac{t}{m} \pi \right), & t \le m, \\ 0, & t > m. \end{cases} \quad (9)$$
This means these layers of the student network are gradually eliminated and finally turn into identity transformations, as depicted in Fig. 4. The pruned student model is then obtained by simply removing the layers in $E$ and keeping the remaining blocks.
Intermediate Teacher. For distillation of an extremely shallow model (the 4-layer MixFormerV2), we introduce an intermediate teacher (the 8-layer MixFormerV2) to bridge the deep teacher (the 12-layer MixFormerV2) and the shallow student. Typically, the knowledge of the teacher may be too complex for a small student model to learn, so we introduce an intermediate role serving as a teaching assistant to relieve the difficulty of this extreme knowledge distillation. In this sense, we divide the problem of knowledge distillation between the teacher and the small student into several distillation sub-problems.
MLP Reduction. As shown in Tab. 2, one key factor affecting the inference latency of the tracker on CPU devices is the hidden feature dimension of the MLP in each transformer block; it becomes the bottleneck that limits real-time speed on CPU. To alleviate this issue, we further prune the hidden dimension of the MLP based on the proposed distillation paradigm, i.e., feature mimicking and logits distillation. Specifically, let the shape of a linear weight in the original model be $w \in \mathbb{R}^{d_1 \times d_2}$ and the corresponding shape in the pruned student model be $w' \in \mathbb{R}^{d'_1 \times d'_2}$, with $d'_1 \le d_1$ and $d'_2 \le d_2$. We initialize the student weights as $w' = w[:d'_1, :d'_2]$ and then apply the distillation supervision during training, letting the pruned MLP mimic the original heavy MLP." }, 
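The progressive elimination schedule described in this section can be implemented with a handful of lines: each block selected for removal scales its residual branch by the cosine-decayed gamma(t) of Eq. (9), so it smoothly degenerates into an identity mapping before being dropped. The block layout and names below are simplified assumptions, not the released code.

import math
import torch.nn as nn

def gamma(t, m):
    # Eq. (9): cosine decay from 1 to 0 during the first m eliminating epochs.
    return 0.5 * (1 + math.cos(t / m * math.pi)) if t <= m else 0.0

class PrunableBlock(nn.Module):
    # A transformer block whose residual branch is scaled by gamma when it is scheduled
    # for elimination (Eq. 8); with gamma = 1 it reproduces the ordinary block of Eq. (7).
    def __init__(self, attn, ffn, to_eliminate=False):
        super().__init__()
        self.attn, self.ffn = attn, ffn
        self.to_eliminate, self.gamma = to_eliminate, 1.0

    def set_epoch(self, t, m):
        if self.to_eliminate:
            self.gamma = gamma(t, m)

    def forward(self, x):
        a = self.attn(x)
        branch = self.ffn(a + x) + a        # FFN(ATTN(x) + x) + ATTN(x)
        return self.gamma * branch + x      # pure identity once gamma reaches 0

After the eliminating epochs, the blocks flagged for removal have become identities and can simply be deleted; the same recipe, combined with the intermediate teacher and MLP reduction, is repeated to reach the 4-layer MixFormerV2-S.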
{ "figure_ref": [], "heading": "Training of MixFormerV2", "publication_ref": [], "table_ref": [], "text": "The overall training pipeline is demonstrated in Fig. 3, performing dense-to-sparse distillation and then deep-to-shallow distillation to yield our final efficient MixFormerV2 tracker. We then train the MLP-based score head for 50 epochs. In particular, for CPU real-time tracking, we employ the intermediate teacher to generate a shallower model (the 4-layer MixFormerV2) based on the proposed distillation, and we also use the designed MLP reduction strategy to further prune the CPU real-time tracker. The total loss of distillation training with student $S$ and teacher $T$ is calculated as:
$$L = \lambda_1 L_1(B_S, B_{gt}) + \lambda_2 L_{ciou}(B_S, B_{gt}) + \lambda_3 L_{log}(S, T) + \lambda_4 L_{feat}(S, T), \quad (10)$$
where the first two terms are exactly the same as the original MixFormer location loss supervised by the ground-truth bounding boxes, and the remaining terms are the aforementioned distillation losses, i.e., logits distillation and feature distillation." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implemented Details", "publication_ref": [ "b41", "b19", "b27", "b34", "b13", "b13", "b14" ], "table_ref": [], "text": "Training and Inference. Our trackers are implemented using Python 3.6 and PyTorch 1.7. The distillation training is conducted on 8 NVidia Quadro RTX 8000 GPUs. The inference process runs on one NVidia Quadro RTX 8000 GPU and an Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz. The training datasets include the TrackingNet [42], LaSOT [20], GOT-10k [28] and COCO [35] training splits, which are the same as MixFormer [14]. Each distillation training stage takes 500 epochs, of which m = 40 epochs are used for progressively eliminating layers. We train the score prediction MLP for an additional 50 epochs. The batch size is 256, with each GPU holding 32 samples. We use the AdamW optimizer with a weight decay of $10^{-4}$. The initial learning rate is $10^{-4}$ and decreases to $10^{-5}$ after 400 epochs. The loss weights in Equation (10) are set as $\lambda_1 = 5$, $\lambda_2 = 2$, $\lambda_3 = 1$, $\lambda_4 = 0.2$. We instantiate two types of MixFormerV2: MixFormerV2-B with 8 P-MAM layers and MixFormerV2-S with 4 P-MAM layers and an MLP ratio of 1.0. Their numbers of parameters are 58.8M and 16.2M respectively. The resolutions of the search and template images for MixFormerV2-B are 288 × 288 and 128 × 128 respectively, while for MixFormerV2-S they are 224 × 224 and 112 × 112 for real-time tracking on the CPU platform. The inference pipeline is the same as MixFormer [14]. We use the first template together with the current search region as the input of MixFormerV2. The dynamic templates are updated when the update interval of 200 is reached by default, where the template with the highest score is selected as an online sample.
Distillation-Based Reduction. For dense-to-sparse distillation, we use MixViT-L [15] with the pyramidal corner head as the teacher for MixFormerV2-B by default. We also try MixViT-B with the pyramidal corner head as the teacher in Tab. 3 and 5. We employ MixViT-B with the plain corner head and a search input size of 224 × 224 as the teacher for MixFormerV2-S. For deep-to-shallow distillation, we use the progressive model depth pruning strategy to produce the 8-layer MixFormerV2-B from a 12-layer one. For MixFormerV2-S, we additionally employ the intermediate teacher and MLP reduction strategies, and the process is '12-layer MixFormerV2 to 8-layer MixFormerV2-B, 8-layer MixFormerV2-B to 4-layer MixFormerV2, 4-layer MLP-ratio-4.0 MixFormerV2 to 4-layer MLP-ratio-1.0 MixFormerV2-S'." }, { "figure_ref": [], "heading": "Exploration Studies", "publication_ref": [ "b19" ], "table_ref": [], "text": "To verify the effectiveness of our proposed framework and training paradigm, we investigate different components of MixFormerV2 and perform detailed exploration studies on the LaSOT [20] dataset." }, { "figure_ref": [], "heading": "Analysis on MixFormerV2 Framework", "publication_ref": [], "table_ref": [], "text": "Token-based Distribution Regression. 
The design of distribution-based regression with special learnable prediction tokens is the core of our MixFormerV2. We conduct experiments on different regression methods in Tab. 3a. All models employ ViT-B as backbone and are deployed without distillation and online score prediction. Although the pyramidal corner head obtains the best performance, the running speed is largely decreased compared with our token-based regression head in MixFormerV2. MixFormerV2 with four prediction tokens achieves a good trade-off between tracking accuracy and inference latency. Besides, compared to the direct box prediction with one token in the first row of Tab. 3a, which estimates the absolute target position instead of the probability distribution of four coordinates, the proposed distribution-based regression obtains a better accuracy. Besides, this design allows to perform dense-to-sparse distillation to further boost tracking accuracy.\nToken-based Quality Score Prediction. The design of the prediction tokens also allows to perform more efficient quality score prediction via a simple MLP head. As shown in Tab. 3b, the token-based score prediction component improves the baseline MixFormerV2-B by 1.7% with almost no inference latency increase. Compared to ours, the score prediction module in MixViT-B further decreases the running speed by 13.0%, which is inefficient. Besides, the SPM in MixViT requires precise RoI pooling, which hinders the migration to various platforms." }, { "figure_ref": [], "heading": "Analysis on Dense-to-Sparse Distillation", "publication_ref": [], "table_ref": [], "text": "We verify the effectiveness of dense-to-sparse distillation in Tab. 3c. When use MixViT-B without its SPM (69.0% AUC) as the teacher model, the MixFormerV2 of 12 P-MAM layers achieves an AUC score of 68.9%, increasing the baseline by 1.4%. This further demonstrates the significance of the design of four special prediction tokens, which allows to perform dense-to-sparse distillation. The setting of using MixViT-L (71.5% AUC) as the teacher model increases the baseline by an AUC score of 2.2%, which implies the good distillation capacity of the large model. " }, { "figure_ref": [], "heading": "Analysis on Deep-to-Shallow Distillation", "publication_ref": [], "table_ref": [ "tab_3", "tab_3" ], "text": "In the following analysis on deep-to-shallow distillation, we use the MixViT-B of 12 layers with plain corner head as the teacher, and MixViT of 4 layers with the same corner head as the student.\nFeature Mimicking & Logits Distillation. To give detailed analysis on different distillation methods for tracking, we conduct experiments in Tab. 3d. The models are all initialized with the first 4-layers MAE pre-trained ViT-B weights. It can be seen that logits distillation can increase the baseline by 1.7% AUC, and adding feature mimicking further improves by 0.4% AUC, which indicates the effectiveness of both feature mimicking and logits distillation for tracking.\nProgressive Model Depth Pruning. We study the effectiveness of the progressive model depth pruning (PMDP) for the student initialization in Tab. 3e. It can be observed that the PMDP improves the traditional initialization method of using MAE pre-trained first 4-layers ViT-B by 1.9%. This demonstrates that it is critical for constraining the initial distribution of student and teacher trackers to be as similar as possible, which can make the feature mimicking easier. 
Surprisingly, we find that even the initial weights of the four layers are not continuous, i.e., using the skipped layers (the 3,6,9,12-th) of the teacher for initialization, the performance is better than the baseline (62.9% vs. 64.4%), which further verifies the importance of representation similarity between the two ones.\nIntermediate Teacher. Intermediate teacher is introduced to promote the transferring capacity from a deep model to a shallow one. We conduct experiment as in Table 3f. We can observe that the intermediate teacher can bring a gain of 0.7% AUC score which can verify that.\nDetermination of Eliminating Epochs. We conduct experiments as shown in the Table 3g to choose the best number of epochs m in the progressive eliminating period. We find that when the epoch m greater than 40, the choice of m seems hardly affect the performance. Accordingly we determine the epoch to be 40." }, { "figure_ref": [], "heading": "Model Pruning Route", "publication_ref": [], "table_ref": [], "text": "We present the model pruning route from the teacher model to MixFormerV2-B and MixFormerV2-S in Tab. 3h and Tab. 3i, respectively. The models in the first row are the corresponding teacher models. We can see that, through the dense-to-sparse distillation, our token-based MixFormerV2-B obtains comparable accuracy with the dense-corner-based MixViT-B with higher running speed. Through the progressive model depth pruning based on the feature and logits distillation, MixFormerV2-B with 8 layers only decreases little accuracy compared to the 12-layers one." }, { "figure_ref": [ "fig_5" ], "heading": "Comparison with the Previous Methods", "publication_ref": [ "b19", "b19", "b41", "b40", "b47", "b29", "b41", "b19", "b19", "b40", "b47" ], "table_ref": [], "text": "Comparison with the State-of-The-Art Trackers. We evaluate the performance of our proposed trackers on 6 benchmark datasets: including the large-scale LaSOT [20], LaSOT ext [20], Track-ingNet [42], UAV123 [41], TNL2K [48], and VOT2022 [30]. LaSOT is a large-scale dataset with 1400 long videos in total and its test set contains 280 sequences. TrackingNet provides over 30K videos with more than 14 million dense bounding box annotations. UAV123 is a large dataset containing 123 aerial videos which is captured from low-altitude UAVs. VOT2022 benchmark has 60 sequences, which measures the Expected Average Overlap (EAO), Accuracy (A) and Robustness (R) metrics. Among them, LaSOT ext and TNL2K are two relatively recent benchmarks. LaSOT ext is a released extension of LaSOT, which consists of 150 extra videos from 15 object classes. TNL2K consists of 2000 sequences, with natural language description for each. We evaluate our MixFormerV2 on the test set with 700 videos. The results are presented in Tab. 4 and Tab. 5. More results on other datasets will be present in supplementary materials. Only the trackers of similar complexity are included, i.e., the trackers with large-scale backbone or large input resolution are excluded. Our Table 6: Comparison with CPU-realtime trackers on TrackingNet [42], LaSOT [20], LaSOT ext [20], UAV123 [41] and TNL2k [48]. The best results are shown in bold fonts.\nMixFormerV2-B achieves state-of-the-art performance among these trackers with a very fast speed, especially compared to transformer-based one-stream tracker. For example, MixFormerV2-B without post-processing strategies surpasses OSTrack by 1.5% AUC on LaSOT and 2.4% AUC on TNL2k, running with quite faster speed (165 FPS vs. 105 FPS). 
Even the MixFormerV2-B with MixViT-B as the teacher model obtains better performance than existing SOTA trackers, such as MixFormer, OSTrack, ToMP101 and SimTrack, with much faster running speed on GPU.\nComparison with Efficient Trackers. For real-time running requirements on limited computing resources such as CPU, we explore a lightweight model, i.e., MixFormerV2-S, which still reaches strong performance. And it is worth noting that this is the first time that transformer-based onestream tracker is able to run on CPU device with a real-time speed. As demonstrated in Figure 6, MixFormerV2-S surpasses all other architectures of CPU-real-time trackers by a large margin. We make a comparison with other prevailing efficient trackers on multiple datasets, including LaSOT, LaSOT ext , TrackingNet, UAV123, and TNL2k, in Tab 6. Our MixFormerV2-S outperforms FEAR-L by an AUC score of 2.7% and STARK-Lightning by an AUC score of 2.0% on LaSOT." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "In this paper, we have proposed a fully transformer tracking framework MixFormerV2, composed of standard ViT backbones on the mixed token sequence and simple MLP heads for box regression and quality score estimation. Our MixFormerV2 streamlines the tracking pipeline by removing the dense convolutional head and the complex score prediction modules. We also present a distillation based model reduction paradigm for MixFormerV2 to further improve its efficiency. Our MixFormerV2 obtains a good trade-off between tracking accuracy and speed on both GPU and CPU platforms. We hope our MixFormerV2 can facilitate the development of efficient transformer trackers in the future. More Exploration of PMDP. Tea-skip4 is a special initialization method, which chooses the skiped four layers (layer-3/6/9/12) of the teacher (MixViT-B) for initialization. In other words, Tea-skip4 is an extreme case of ours PMDP when the eliminating epoch m equal to 0. So it is reasonable that Tea-skip4 performs better than the baseline Tea-fir4, which employs the first four layers of the teacher (MixViT-B) to initialize the student backbone. In Table 8b, we further evaluate the performance on more benchmarks. It can be seen that ours PMDP surpasses Tea-skip4 by 1.0% on LaSOT_ext, which demonstrate its effectiveness.\nComputation Loads of Different Localization Head. We showcase the FLOPs of different heads as follows. Formally, we denote C in as input feature dimension, C out as output feature dimension, H in , W in as input feature map shape of convolution layer, H out , W out as output feature map shape, and K as the convolution kernel size. The computational complexity of one linear layer is O(C in C out ), and that of one convolutional layer is O(C in C out H out W out K 2 ).\nIn our situation, for T4, the Localization Head contains four MLP to predict four coordinates. Each MLP contains two linear layer, whose input and output dimensions are all 768. The loads can be calculated as: For simplicity, we do not include some operations such as bias terms and Layer/Batch-Normalization, which does not affect the overall calculation load level. Besides, the Pyramid Corner Head utilize additional ten interpolation operations. 
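As a rough worked example of the load comparison just described (a sketch: only the token head's 768-dimensional two-layer MLPs are specified above, so the corner-head channel width and the 18 × 18 map size are assumed figures):
$$\mathrm{Load}_{T4} \approx 4 \times (2 \times 768 \times 768) \approx 4.7 \times 10^{6}, \qquad \mathrm{Load}_{conv} \approx \underbrace{128 \times 128}_{C_{in} \times C_{out}\ (\text{assumed})} \times \underbrace{18 \times 18}_{H_{out} \times W_{out}\ (\text{assumed})} \times 3^{2} \approx 4.8 \times 10^{7}\ \text{per convolutional layer},$$
so ten such layers already exceed the token-based head by roughly two orders of magnitude.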
Obviously the calculation load of Py-Corner is still hundreds of times of T4.\nLoad T 4 =" }, { "figure_ref": [ "fig_4", "fig_5", "fig_6", "fig_6", "fig_6", "fig_6", "fig_6" ], "heading": "S.6 Visualization Results", "publication_ref": [], "table_ref": [], "text": "Visualization of Attention Map To explore how the introduced learnable prediction tokens work within the P-MAM, we visualize the attention maps of prediction-token-to-search and predictiontoken-to-template in Fig. 5 and Fig. 6, where the prediction tokens are served as query and the others as key/val of the attention operation. From the visualization results, we can arrive that the four prediction tokens are sensitive to corresponding part of the targets and thus yielding a compact object bounding box. We suspect that the performance gap between the dense corner head based MixViT-B and our fully transformer MixFormerV2-B without distillation lies in the lack of holistic target modeling capability. Besides, the prediction tokens tend to extract partial target information in both the template and the search so as to relate the two ones.\nVisualization of Predicted Probability Distribution We show two good cases and bad cases in Figure 7. In Figure 7a MixFormerV2 deals with occlusion well and locate the bottom edge correctly. As show in Figure 7b, the probability distribution of box representation can effectively alleviate issue of ambiguous boundaries. There still exist problems like strong occlusion and similar objects which will lead distribution shift, as demonstrated in Figure 7c and7d. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement This work is supported by National Key R&D Program of China (No. 2022ZD0160900), National Natural Science Foundation of China (No. 62076119, No. 61921006), Fundamental Research Funds for the Central Universities (No. 020214380099), and Collaborative Innovation Center of Novel Software Technology and Industrialization." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b41", "b19", "b19", "b40", "b47" ], "table_ref": [], "text": "Table 5: State-of-the-art comparison on TrackingNet [42], LaSOT [20], LaSOT ext [20], UAV123 [41] and TNL2K [48]. " }, { "figure_ref": [], "heading": "Appendix S.1 Broader Impact", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce MixFormerV2, a fully transformer tracking approach for efficiently and effectively estimating the state of an arbitrary target in a video. Generic object tracking is one of the fundamental computer vision problems with numerous applications. For example, object tracking (and hence MixFormerV2) could be applied to human-machine interaction, visual surveillance and unmanned vehicles. Our research could be used to improve the tracking performance while maintaining a high running speed. Of particular concern is the use of the tracker by those wishing to position and surveil others illegally. Besides, if the tracker is used in unmanned vehicles, it may be a challenge when facing the complex real-world scenarios. To mitigate the risks associated with using MixFormerV2, we encourage researchers to understand the impacts of using the trackers in particular real-world scenarios." }, { "figure_ref": [], "heading": "S.2 Limitations", "publication_ref": [], "table_ref": [], "text": "The main limitation lies in the training overhead of MixFormerV2-S, which performs multiple model pruning based on the dense-to-sparse distillation and deep-to-shallow distillation. 
In detail, we first perform distillation from MixViT with 12 layers and plain corner head to MixFormerV2 of 12 layers. The 12-layers MixFormerV2 is pruned to 8-layers and then to 4-layers MixFormerV2 based on the deep-to-shallow distillation. Finally, the MLP-ratio-4.0 4-layers MixFormerV2 is pruned to the MLP-ratio-4.0 4-layers MixFormerV2-S for real-time tracking on CPU. For each step, it requires training for 500 epochs which is time-consuming." }, { "figure_ref": [], "heading": "S.3 Details of Training Time", "publication_ref": [], "table_ref": [], "text": "The models are trained on 8 Nvidia RTX8000 GPUs. The dense-to-sparse stage takes about 43 hours.\nThe deep-to-shallow stage1 (12-to-8 layers) takes about 42 hours, and stage2 (8-to-4 layers) takes about 35 hours." }, { "figure_ref": [], "heading": "S.4 More Results on VOT2020 and GOT10k", "publication_ref": [ "b28", "b28", "b27", "b19", "b41", "b27", "b34" ], "table_ref": [], "text": "VOT2020. We evaluate our tracker on VOT2020 [29] benchmark, which consists of 60 videos with several challenges including fast motion, occlusion, etc. The results is reported in [29] and GOT10k [28]. * denotes training with four datasets including LaSOT [20], TrackingNet [42], GOT10k [28] and COCO [35]. The best results are shown in bold font." }, { "figure_ref": [], "heading": "S.5 More Ablation Studies", "publication_ref": [], "table_ref": [], "text": "Design of Prediction Tokens. We practice three different designs of prediction tokens for the target localization in Tab. 8a. All the three methods use the formulation of estimating the probability" } ]
[ { "authors": "Ballas Romero Adriana; Samira Nicolas; Chassang Ebrahimi; Gatta Antoine; Carlo; Yoshua", "journal": "Proc. ICLR", "ref_id": "b0", "title": "Fitnets: Hints for thin deep nets", "year": "2015" }, { "authors": "Luca Bertinetto; Jack Valmadre; Joao F Henriques; Andrea Vedaldi; Philip Hs Torr", "journal": "", "ref_id": "b1", "title": "Fully-convolutional siamese networks for object tracking", "year": "2016" }, { "authors": "Goutam Bhat; Martin Danelljan; Luc Van Gool; Radu Timofte", "journal": "", "ref_id": "b2", "title": "Learning discriminative model prediction for tracking", "year": "2019" }, { "authors": "Philippe Blatter; Menelaos Kanakis; Martin Danelljan; Luc Van Gool", "journal": "", "ref_id": "b3", "title": "Efficient visual tracking with exemplar transformers", "year": "2021" }, { "authors": "Vasyl Borsuk; Roman Vei; Orest Kupyn; Tetiana Martyniuk; Igor Krashenyi; Jiři Matas", "journal": "", "ref_id": "b4", "title": "Fear: Fast, efficient, accurate and robust visual tracker", "year": "2021" }, { "authors": "Arnav Chavan; Zhiqiang Shen; Zhuang Liu; Zechun Liu; Kwang-Ting Cheng; Eric P Xing", "journal": "", "ref_id": "b5", "title": "Vision transformer slimming: Multi-dimension searching in continuous optimization space", "year": "2022" }, { "authors": "Boyu Chen; Peixia Li; Lei Bai; Lei Qiao; Qiuhong Shen; Bo Li; Weihao Gan; Wei Wu; Wanli Ouyang", "journal": "", "ref_id": "b6", "title": "Backbone is all your need: A simplified architecture for visual object tracking", "year": "2022" }, { "authors": "Minghao Chen; Houwen Peng; Jianlong Fu; Haibin Ling", "journal": "", "ref_id": "b7", "title": "Autoformer: Searching transformers for visual recognition", "year": "2021" }, { "authors": "Xin Chen; Dong Wang; Dongdong Li; Huchuan Lu", "journal": "", "ref_id": "b8", "title": "Efficient visual tracking via hierarchical crossattention transformer", "year": "2022" }, { "authors": "Xin Chen; Bin Yan; Jiawen Zhu; Dong Wang; Xiaoyun Yang; Huchuan Lu", "journal": "CVPR", "ref_id": "b9", "title": "Transformer tracking", "year": "2021" }, { "authors": "Zedu Chen; Bineng Zhong; Guorong Li; Shengping Zhang; Rongrong Ji", "journal": "CVPR", "ref_id": "b10", "title": "Siamese box adaptive network for visual tracking", "year": "2020" }, { "authors": "Yutao Cui; Cheng Jiang; Limin Wang; Gangshan Wu", "journal": "CoRR", "ref_id": "b11", "title": "Target transformed regression for accurate tracking", "year": "2021" }, { "authors": "Yutao Cui; Cheng Jiang; Limin Wang; Gangshan Wu", "journal": "Computer Vision and Image Understanding", "ref_id": "b12", "title": "Fully convolutional online tracking", "year": "2022" }, { "authors": "Yutao Cui; Cheng Jiang; Limin Wang; Gangshan Wu", "journal": "CVPR", "ref_id": "b13", "title": "Mixformer: End-to-end tracking with iterative mixed attention", "year": "2022" }, { "authors": "Yutao Cui; Cheng Jiang; Gangshan Wu; Limin Wang", "journal": "IEEE Trans. Pattern Anal. Mach. 
Intell", "ref_id": "b14", "title": "Mixformer: End-to-end tracking with iterative mixed attention", "year": "2024" }, { "authors": "Martin Danelljan; Goutam Bhat; Fahad Shahbaz Khan; Michael Felsberg", "journal": "CVPR", "ref_id": "b15", "title": "ATOM: accurate tracking by overlap maximization", "year": "2019" }, { "authors": "Martin Danelljan; Luc Van Gool; Radu Timofte", "journal": "CVPR", "ref_id": "b16", "title": "Probabilistic regression for visual tracking", "year": "2020" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "ICLR", "ref_id": "b17", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Thomas Elsken; Jan Hendrik Metzen; Frank Hutter", "journal": "The Journal of Machine Learning Research", "ref_id": "b18", "title": "Neural architecture search: A survey", "year": "2019" }, { "authors": "Liting Heng Fan; Fan Lin; Peng Yang; Ge Chu; Sijia Deng; Hexin Yu; Yong Bai; Chunyuan Xu; Haibin Liao; Ling", "journal": "CVPR", "ref_id": "b19", "title": "Lasot: A high-quality benchmark for large-scale single object tracking", "year": "2019" }, { "authors": "Chengyue Gong; Dilin Wang; Meng Li; Xinlei Chen; Zhicheng Yan; Yuandong Tian; Vikas Chandra", "journal": "", "ref_id": "b20", "title": "Nasvit: Neural architecture search for efficient vision transformers with gradient conflict aware supernet training", "year": "2021" }, { "authors": "Yunchao Gong; Liu Liu; Ming Yang; Lubomir Bourdev", "journal": "", "ref_id": "b21", "title": "Compressing deep convolutional networks using vector quantization", "year": "2014" }, { "authors": "Jianyuan Guo; Kai Han; Yunhe Wang; Han Wu; Xinghao Chen; Chunjing Xu; Chang Xu", "journal": "", "ref_id": "b22", "title": "Distilling object detectors via decoupled features", "year": "2021" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "CVPR", "ref_id": "b23", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "Yihui He; Xiangyu Zhang; Jian Sun", "journal": "", "ref_id": "b24", "title": "Channel pruning for accelerating very deep neural networks", "year": "2017" }, { "authors": "F João; Rui Henriques; Pedro Caseiro; Jorge Martins; Batista", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b25", "title": "High-speed tracking with kernelized correlation filters", "year": "2015" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b26", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Lianghua Huang; Xin Zhao; Kaiqi Huang", "journal": "IEEE Trans. Pattern Anal. Mach. 
Intell", "ref_id": "b27", "title": "Got-10k: A large high-diversity benchmark for generic object tracking in the wild", "year": "2021" }, { "authors": "Matej Kristan; Ales Leonardis", "journal": "", "ref_id": "b28", "title": "The eighth visual object tracking VOT2020 challenge results", "year": "2020" }, { "authors": "Matej Kristan; Aleš Leonardis; Jiří Matas; Michael Felsberg; Roman Pflugfelder; Joni-Kristian Kämäräinen; Jin Hyung; Martin Chang; Danelljan; Čehovin Luka; Alan Zajc; Lukežič", "journal": "", "ref_id": "b29", "title": "The tenth visual object tracking vot2022 challenge results", "year": "2023" }, { "authors": "Bo Li; Wei Wu; Qiang Wang; Fangyi Zhang; Junliang Xing; Junjie Yan", "journal": "CVPR", "ref_id": "b30", "title": "Siamrpn++: Evolution of siamese visual tracking with very deep networks", "year": "2019" }, { "authors": "Bo Li; Junjie Yan; Wei Wu; Zheng Zhu; Xiaolin Hu", "journal": "CVPR", "ref_id": "b31", "title": "High performance visual tracking with siamese region proposal network", "year": "2018" }, { "authors": "Quanquan Li; Shengying Jin; Junjie Yan", "journal": "", "ref_id": "b32", "title": "Mimicking very efficient network for object detection", "year": "2017" }, { "authors": "Liting Lin; Heng Fan; Yong Xu; Haibin Ling", "journal": "NIPS", "ref_id": "b33", "title": "Swintrack: A simple and strong baseline for transformer tracking", "year": "2022" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge J Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence Zitnick", "journal": "", "ref_id": "b34", "title": "Microsoft COCO: common objects in context", "year": "2014" }, { "authors": "Benlin Liu; Yongming Rao; Jiwen Lu; Jie Zhou; Cho-Jui Hsieh", "journal": "Springer", "ref_id": "b35", "title": "Metadistiller: Network self-boosting via meta-learned top-down distillation", "year": "2020" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b36", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Alan Lukezic; Jiri Matas; Matej Kristan", "journal": "CVPR", "ref_id": "b37", "title": "D3S -A discriminative single shot segmentation tracker", "year": "2020" }, { "authors": "Christoph Mayer; Martin Danelljan; Goutam Bhat; Matthieu Paul; Danda Pani Paudel; Fisher Yu; Luc Van Gool", "journal": "CVPR", "ref_id": "b38", "title": "Transforming model prediction for tracking", "year": "2022" }, { "authors": "Christoph Mayer; Martin Danelljan; Danda Pani Paudel; Luc Van Gool", "journal": "", "ref_id": "b39", "title": "Learning target candidate association to keep track of what not to track", "year": "2021" }, { "authors": "Matthias Mueller; Neil Smith; Bernard Ghanem", "journal": "", "ref_id": "b40", "title": "A benchmark and simulator for UAV tracking", "year": "2016" }, { "authors": "Matthias Müller; Adel Bibi; Silvio Giancola; Salman Al-Subaihi; Bernard Ghanem", "journal": "", "ref_id": "b41", "title": "Trackingnet: A large-scale dataset and benchmark for object tracking in the wild", "year": "2018" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b42", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Yongming Rao; Wenliang Zhao; Benlin Liu; Jiwen Lu; Jie Zhou; Cho-Jui Hsieh", "journal": "Advances in neural 
information processing systems", "ref_id": "b43", "title": "Dynamicvit: Efficient vision transformers with dynamic token sparsification", "year": "2021" }, { "authors": "Zikai Song; Run Luo; Junqing Yu; Yi-Ping Phoebe Chen; Wei Yang", "journal": "", "ref_id": "b44", "title": "Compact transformer tracker with correlative masked modeling", "year": "2023-02" }, { "authors": "Zikai Song; Junqing Yu; Yi-Ping Phoebe Chen; Wei Yang", "journal": "CVPR", "ref_id": "b45", "title": "Transformer tracking with cyclic shifting window attention", "year": "2022" }, { "authors": "Kuan Wang; Zhijian Liu; Yujun Lin; Ji Lin; Song Han", "journal": "", "ref_id": "b46", "title": "Haq: Hardware-aware automated quantization with mixed precision", "year": "2019" }, { "authors": "Xiao Wang; Xiujun Shu; Zhipeng Zhang; Bo Jiang; Yaowei Wang; Yonghong Tian; Feng Wu", "journal": "", "ref_id": "b47", "title": "Towards more flexible and accurate object tracking with natural language: Algorithms and benchmark", "year": "2021" }, { "authors": "Fei Xie; Chunyu Wang; Guangting Wang; Yue Cao; Wankou Yang; Wenjun Zeng", "journal": "CVPR", "ref_id": "b48", "title": "Correlation-aware deep tracking", "year": "2022" }, { "authors": "Yinda Xu; Zeyu Wang; Zuoxin Li; Ye Yuan; Gang Yu", "journal": "AAAI", "ref_id": "b49", "title": "Siamfc++: Towards robust and accurate visual tracking with target estimation guidelines", "year": "2020" }, { "authors": "Yifan Xu; Zhijie Zhang; Mengdan Zhang; Kekai Sheng; Ke Li; Weiming Dong; Liqing Zhang; Changsheng Xu; Xing Sun", "journal": "", "ref_id": "b50", "title": "Evo-vit: Slow-fast token evolution for dynamic vision transformer", "year": "2022" }, { "authors": "Bin Yan; Houwen Peng; Jianlong Fu; Dong Wang; Huchuan Lu", "journal": "", "ref_id": "b51", "title": "Learning spatio-temporal transformer for visual tracking", "year": "2021" }, { "authors": "Bin Yan; Houwen Peng; Kan Wu; Dong Wang; Jianlong Fu; Huchuan Lu", "journal": "", "ref_id": "b52", "title": "Lighttrack: Finding lightweight neural networks for object tracking via one-shot architecture search", "year": "2021" }, { "authors": "Zhendong Yang; Zhe Li; Ailing Zeng; Zexian Li; Chun Yuan; Yu Li", "journal": "", "ref_id": "b53", "title": "Vitkd: Practical guidelines for vit feature knowledge distillation", "year": "2022" }, { "authors": "Botao Ye; Hong Chang; Bingpeng Ma; Shiguang Shan", "journal": "", "ref_id": "b54", "title": "Joint feature learning and relation modeling for tracking: A one-stream framework", "year": "2022" }, { "authors": "Jinnian Zhang; Houwen Peng; Kan Wu; Mengchen Liu; Bin Xiao; Jianlong Fu; Lu Yuan", "journal": "", "ref_id": "b55", "title": "Minivit: Compressing vision transformers with weight multiplexing", "year": "2022" }, { "authors": "Zhaohui Zheng; Rongguang Ye; Qibin Hou; Dongwei Ren; Ping Wang; Wangmeng Zuo; Ming-Ming Cheng", "journal": "", "ref_id": "b56", "title": "Localization distillation for object detection", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 114.18, 413.91, 383.15, 39.31 ], "formula_id": "formula_0", "formula_text": "k tse = Concat(k t , k s , k e ), v tse = Concat(v t , v s , v e ), Atten t = Softmax( q t k T t √ d )v t , Atten s = Softmax( q s k T tse √ d )v tse , Atten e = Softmax( q e k T tse √ d )v tse" }, { "formula_coordinates": [ 4, 216.36, 615.05, 288.3, 12.17 ], "formula_id": "formula_1", "formula_text": "PX (x) = MLP(token X ), X ∈ {T , L, B, R}.(2)" }, { "formula_coordinates": [ 5, 238.63, 422.04, 266.04, 20.14 ], "formula_id": "formula_2", "formula_text": "B X = E PX [X] = R x PX (x)dx.(3)" }, { "formula_coordinates": [ 5, 192.45, 496.2, 312.22, 44.25 ], "formula_id": "formula_3", "formula_text": "P T (x) = R P T L (x, y)dy, P L (y) = R P T L (x, y)dx, P B (x) = R P BR (x, y)dy, P R (y) = R P BR (x, y)dx.(4)" }, { "formula_coordinates": [ 5, 234.78, 597.89, 269.89, 23.05 ], "formula_id": "formula_4", "formula_text": "L log = X∈{T ,L,B,R} L KL ( PX , P X ).(5)" }, { "formula_coordinates": [ 6, 242.85, 128.89, 257.95, 22.6 ], "formula_id": "formula_5", "formula_text": "L feat = (i,j)∈M L 2 (F S i , F T j ), (6" }, { "formula_coordinates": [ 6, 500.8, 136.1, 3.87, 8.64 ], "formula_id": "formula_6", "formula_text": ")" }, { "formula_coordinates": [ 6, 187.83, 331.11, 316.83, 40.87 ], "formula_id": "formula_7", "formula_text": "x ′ i = ATTN(x i-1 ) + x i-1 , x i = FFN(x ′ i ) + x ′ i = FFN(ATTN(x i-1 ) + x i-1 ) + ATTN(x i-1 ) + x i-1 ,(7)" }, { "formula_coordinates": [ 6, 163.03, 412.46, 341.64, 9.65 ], "formula_id": "formula_8", "formula_text": "x i = γ(FFN(ATTN(x i-1 ) + x i-1 ) + ATTN(x i-1 )) + x i-1 , i ∈ E.(8)" }, { "formula_coordinates": [ 6, 212.62, 460.94, 292.04, 36.56 ], "formula_id": "formula_9", "formula_text": "γ(t) =    0.5 × 1 + cos t m π , t ≤ m, 0, t > m.(9)" }, { "formula_coordinates": [ 6, 108, 688.16, 396, 24.76 ], "formula_id": "formula_10", "formula_text": "w ′ ∈ R d ′ 1 ×d ′ 2 , in which d ′ 1 ≤ d 1 , d ′ 2 ≤ d 2 ," }, { "formula_coordinates": [ 6, 369.37, 700.71, 71.06, 12.2 ], "formula_id": "formula_11", "formula_text": "w ′ = w[: d ′ 1 , : d ′ 2 ]" }, { "formula_coordinates": [ 7, 151.33, 166.38, 353.34, 11.88 ], "formula_id": "formula_12", "formula_text": "L = λ 1 L 1 (B S , B gt ) + λ 2 L ciou (B S , B gt ) + λ 3 L log (S, T ) + λ 4 L feat (S, T ),(10)" }, { "formula_coordinates": [ 7, 352.16, 365.05, 153.58, 9.65 ], "formula_id": "formula_13", "formula_text": "λ 1 = 5, λ 2 = 2, λ 3 = 1, λ 4 = 0.2." }, { "formula_coordinates": [ 15, 182.63, 449.32, 42.82, 9.65 ], "formula_id": "formula_14", "formula_text": "Load T 4 =" } ]
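For reference, the progressive layer-elimination schedule of Eqs. (7)-(9) above can be restated compactly in code. This is a schematic only: attn and ffn stand for a block's attention and feed-forward sublayers (layer normalisation is omitted, as in the equations), and "eliminated" marks whether the block belongs to the set E of layers being removed over the first m epochs.

```python
import math

def gamma(t, m):
    """Decay applied to the residual branch of an eliminated layer, Eq. (9)."""
    return 0.5 * (1.0 + math.cos(math.pi * t / m)) if t <= m else 0.0

def block_forward(x, attn, ffn, eliminated, t, m):
    """One backbone block during deep-to-shallow distillation."""
    a = attn(x)
    if not eliminated:
        return ffn(a + x) + a + x            # the regular block, Eq. (7)
    # Eq. (8): the whole branch is scaled by gamma(t); once gamma reaches 0
    # (t >= m) the block reduces to an identity mapping and can be removed.
    return gamma(t, m) * (ffn(a + x) + a) + x
```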
MixFormerV2: Efficient Fully Transformer Tracking
Transformer-based trackers have achieved high accuracy on standard benchmarks. However, their efficiency remains an obstacle to practical deployment on both GPU and CPU platforms. In this paper, to mitigate this issue, we propose a fully transformer tracking framework based on the successful MixFormer tracker [14], coined MixFormerV2, without any dense convolutional operations or complex score prediction modules. We introduce four special prediction tokens and concatenate them with the tokens from the target template and search area. Then, we apply a simple transformer backbone on this mixed token sequence. These prediction tokens are able to capture the complex correlation between the target template and search area via mixed attentions. Based on them, we can easily predict the tracking box and estimate its confidence score through simple MLP heads. To further improve the efficiency of MixFormerV2, we present a new distillation-based model reduction paradigm, including dense-to-sparse distillation and deep-to-shallow distillation. The former aims to transfer knowledge from the dense-head-based MixViT to our fully transformer tracker, while the latter prunes the backbone layers. We instantiate two MixFormerV2 trackers: MixFormerV2-B achieves an AUC of 70.6% on LaSOT and an AUC of 56.7% on TNL2k with a high GPU speed of 165 FPS, and MixFormerV2-S surpasses FEAR-L by 2.7% AUC on LaSOT with a real-time CPU speed.
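For illustration, the asymmetric mixed attention over this mixed token sequence (the attention equations listed among the formulas above) can be sketched in a few lines of numpy. The token counts, the single attention head, and the omission of the query/key/value projections and normalisation are simplifications for readability, not the actual model configuration.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v, d):
    return softmax(q @ k.T / np.sqrt(d)) @ v

d = 768                                   # token dimension
rng = np.random.default_rng(0)
q_t, k_t, v_t = (rng.standard_normal((64, d)) for _ in range(3))   # template tokens (count assumed)
q_s, k_s, v_s = (rng.standard_normal((324, d)) for _ in range(3))  # search tokens (18x18 patches)
q_e, k_e, v_e = (rng.standard_normal((4, d)) for _ in range(3))    # four learnable prediction tokens

k_tse = np.concatenate([k_t, k_s, k_e])   # keys/values of all tokens, shared by the
v_tse = np.concatenate([v_t, v_s, v_e])   # search and prediction queries

atten_t = attend(q_t, k_t,   v_t,   d)    # template attends only to itself
atten_s = attend(q_s, k_tse, v_tse, d)    # search attends to template + search + prediction tokens
atten_e = attend(q_e, k_tse, v_tse, d)    # prediction tokens attend to everything
```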
Yutao Cui; Tianhui Song; Gangshan Wu; Limin Wang
[ { "figure_caption": "Figure 1 :1Figure 1: Comparison with the state-of-theart trackers in terms of AUC performance, model FLOPs and GPU Speed on LaSOT.The circle diameter is in proportion to model FLOPs. MixFormerV2-B surpasses existing trackers by a large margin in terms of both accuracy and inference speed. MixFormerV2-S achieves extremely high tracking speed of over 300 FPS while obtaining competitive accuracy compared with other efficient trackers[4,5].", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: MixFormerV2 Framework. MixFormerV2 is a fully transformer tracking framework, composed of a transformer backbone and two simple MLP heads on the learnable prediction tokens.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Distillation-Based Model Reduction for Mix-FormerV2. The 'Stage1' represents for the dense-to-sparse distillation, while the 'Stage2' is the deep-to-shallow distillation. The blocks with orange arrows are to be supervised and blocks with dotted line are to be eliminated.", "figure_data": "", "figure_id": "fig_2", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visualization of prediction-token-tosearch attention maps, where the prediction tokens are served as query of attention operation.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Visualization of prediction-token-totemplate attention maps, where the prediction tokens are served as query of attention operation.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: In each figure, the left one is plot of the probability distribution of predicted box (red), which demonstrates how our algorithm works. The right one is heatmap of attention weights in the backbone. The examples are from LaSOT test subset and the green boxes are ground truths.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Efficiency analysis on 8-layer MixViT with different heads. 'Pyram. Corner' represents for the pyramidal corner head[15].", "figure_data": "arXiv:2305.15896v2 [cs.CV] 7 Feb 2024", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation studies on LaSOT.The default choice for our model is colored in gray .", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "State-of-the-art comparison on the VOT2022[30]. The best results are shown in bold font.", "figure_data": "KCF SiamFC ATOM D3Sv2 DiMP ToMP TransT SBT SwinTrack MixV2-S MixV2-B[26][2][16][38][3][39][10][49][34]EAO0.2390.2550.3860.3560.430 0.5110.5120.5220.5240.4310.556Accuracy0.5420.5620.6680.5210.689 0.7520.7810.7910.7880.7150.795Robustness 0.5320.5430.7160.8110.760 0.8180.8000.8130.8030.7570.851", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "More ablation studies. The default choice for our model is colored in gray . distribution of the four coordinates of the bounding box. The model on the first line denotes using one prediction token and then predicting coordinates distribution with four independent MLP heads. 
It can be observed that adopting separate prediction tokens for the four coordinates and a same MLP head retains the best accuracy.", "figure_data": "", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "4 × (768 × 768 + 768 × 72) = 2580480 ∼ 2.5M For Py-Corner, totally 24 convolution layers are used. The loads can be calculated as: Load P y-Corner = 2 * (768 * 384 * 18 * 18 * 3 * 3+ 384 * 192 * 18 * 18 * 3 * 3+ 384 * 192 * 18 * 18 * 3 * 3+ 192 * 96 * 36 * 36 * 3 * 3+ 384 * 96 * 18 * 18 * 3 * 3+ 96 * 48 * 72 * 72 * 3 * 3+ 48 * 1 * 72 * 72 * 3 * 3+ 192 * 96 * 18 * 18 * 3 * 3+ 96 * 48 * 18 * 18 * 3 * 3+ 48 * 1 * 18 * 18 * 3 * 3+ 96 * 48 * 36 * 36 * 3 * 3+ 48 * 1 * 36 * 36 * 3 * 3) =3902587776 ∼ 3.9B", "figure_data": "", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" } ]
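To make the distribution-based box representation in the captions above concrete, here is a minimal sketch of Eqs. (2)-(3): each prediction token is mapped by a shared MLP head to a probability vector over discretised coordinate positions, and each box coordinate is read out as the expectation of that distribution. The 72-bin discretisation and the normalised bin centres are illustrative assumptions (the 72-dimensional output size is suggested by the head-FLOPs caption above), not necessarily the exact training-time parameterisation.

```python
import numpy as np

def softmax(a):
    a = a - a.max()
    e = np.exp(a)
    return e / e.sum()

rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, 768))        # the T, L, B, R prediction tokens
W1 = rng.standard_normal((768, 768))          # shared two-layer MLP head
W2 = rng.standard_normal((768, 72))           # 72 output bins (assumed)

bin_centres = (np.arange(72) + 0.5) / 72      # assumed: normalised coordinates in [0, 1]
coords = []
for tok in tokens:
    p = softmax(np.maximum(tok @ W1, 0.0) @ W2)    # P_X = MLP(token_X), Eq. (2)
    coords.append(float((p * bin_centres).sum()))  # B_X = E_{P_X}[X],   Eq. (3)

top, left, bottom, right = coords
```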
[{"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work MixViT serves as a methodological basis for the citing paper, as it is used as a benchmark to evaluate the performance of the proposed tracking framework in terms of inference efficiency and accuracy."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work provides a plain corner head for the design of the MixViT tracker, which the citing paper adopts to improve the efficiency of the tracker by reducing the time-consuming design of the dense convolutional corner head."}, {"Category": "Methodological Basis", "Citation": "[2, 10, 12-14, 32, 52, 55]", "Explanation": "The cited works provide a range of effective trackers that the citing paper adopts or adapts in its research to improve the efficiency and effectiveness of visual object tracking."}, {"Category": "Extension or Continuation", "Citation": "[7,14,55]", "Explanation": "The cited works on visual tracking are discussed in the context of abandoning the traditional three-stage model paradigm in favor of a more unified one-stream model structure. The citing paper extends this research by exploring the use of a more efficient and effective model for visual object tracking."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work on Knowledge Distillation (KD) is used as a basis for the design of a more effective student model in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[1,23,33]", "Explanation": "The cited works on feature mimicking are extended in the citing paper to address the regression problem in object detection."}, {"Category": "Data Source", "Citation": "[57]", "Explanation": "The cited work on LD (Logit Distillation) is used as a data source for bounding box location in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[22,47]", "Explanation": "The cited works on model quantization are used as a methodological basis for speeding up model inference in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[27,36]", "Explanation": "The cited works on knowledge distillation are used as a methodological basis for improving the efficiency of student models in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[25]", "Explanation": "The cited work on pruning is used as a methodological basis for compressing vision transformer models in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work on neural architecture search is used as a methodological basis for compressing vision transformer models in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[44]", "Explanation": "The cited work Dynamic ViT provides a method of pruning tokens in attention mechanism, which the citing paper adopts in their research to improve the performance of their tracker."}, {"Category": "Methodological Basis", "Citation": "[51]", "Explanation": "The cited work Evo-ViT also employs a method of pruning tokens in attention mechanism, which the citing paper may have considered in their research to improve the performance of their tracker."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work AutoFormer uses NAS technique to explore delicate ViT architectures, which the citing paper may have considered in their research to improve the performance of their tracker."}, {"Category": "Methodological Basis", 
"Citation": "[21]", "Explanation": "The cited work NASViT also employs NAS technique to explore delicate ViT architectures, which the citing paper may have considered in their research to improve the performance of their tracker."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work SlimmingViT also uses NAS technique to explore delicate ViT architectures, which the citing paper may have considered in their research to improve the performance of their tracker."}, {"Category": "Data Source", "Citation": "[54]", "Explanation": "The cited work ViTKD provides several ViT feature distillation guidelines, which the citing paper may have used as a data source to improve the performance of their tracker by compressing the feature dimension."}, {"Category": "Methodological Basis", "Citation": "[56]", "Explanation": "The cited work MiniViT applies weights sharing and multiplexing to reduce model parameters, which the citing paper may have considered in their research to improve the performance of their tracker."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work, TransT, is used as a reference for the design of the backbone in the proposed MixFormerV2 tracking framework."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work, STARK, is mentioned as a reference for the design of the tracking framework in MixFormerV2."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work, MixFormer, is referenced in the design of the tracking framework in MixFormerV2."}, {"Category": "Methodological Basis", "Citation": "[55]", "Explanation": "The cited work, OSTrack, is mentioned as a reference for the design of the tracking framework in MixFormerV2."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work, SimTrack, is referenced in the design of the tracking framework in MixFormerV2."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work introduces the concept of special learnable prediction tokens, which the citing paper adopts in the design of the P-MAM module for capturing target information in a more efficient manner."}, {"Category": "Methodological Basis", "Citation": "[42]", "Explanation": "The cited work, TrackingNet, provides the training dataset used in the distillation training process of the citing paper, which serves as a foundational element for the research conducted."}, {"Category": "Data Source", "Citation": "[20]", "Explanation": "The cited work, LaSOT, is mentioned as a part of the training dataset used in the distillation training process, indicating the reliance on external data for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[28]", "Explanation": "The cited work, GOT-10k, is mentioned as a part of the training dataset used in the distillation training process, highlighting the reliance on external data for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[35]", "Explanation": "The cited work, COCO, is mentioned as a part of the training dataset used in the distillation training process, indicating the reliance on external data for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work provides the inference pipeline and template update process for MixFormerV2, which the citing paper adopts in their research to conduct 
real-time tracking on CPU platform."}, {"Category": "Data Source", "Citation": "[15]", "Explanation": "The cited work, MixViT-L with pyramidal corner head, is used as the teacher model for MixFormerV2-B in the citing paper, providing the data source for the distillation process."}, {"Category": "Methodological Basis", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with pyramidal corner head as the teacher in the distillation process, providing a methodological basis for the experiments conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, providing a methodological basis for the experiments conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with pyramidal corner head as the teacher in the distillation process, which is an extension of the research on MixViT-L with pyramidal corner head as the teacher model in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with pyramidal corner head as the teacher model in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 
3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 
3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Extension or Continuation", "Citation": "Tab. 3 and 5", "Explanation": "The cited work is used to test the performance of MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher for MixFormer-S, which is an extension of the research on MixViT-B with plain corner head and search input size of 224 \u00d7 224 as the teacher in the cited work."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work, LaSOT dataset, serves as the basis for the investigation of the proposed framework and training paradigm in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work, LaSOT, provides a large-scale dataset with long videos that serves as the basis for the performance evaluation of the proposed trackers in the citing paper."}, {"Category": "Data Source", "Citation": "[20]", "Explanation": "The cited work, LaSOT ext, is a released extension of the LaSOT dataset and provides an additional 150 videos from 15 object classes for the performance evaluation of the proposed trackers in the citing paper."}, {"Category": "Data Source", "Citation": "[41]", "Explanation": "The cited work, UAV123, is a large dataset with 123 aerial videos captured from low-altitude UAVs that serves as a data source for the performance evaluation of the proposed trackers in the citing paper."}, {"Category": "Data Source", "Citation": "[42]", "Explanation": "The cited work, TrackingNet, provides over 30K videos with more than 14 million dense bounding box annotations that serve as a data source for the performance evaluation of the proposed trackers in the citing paper."}, {"Category": "Data Source", "Citation": "[48]", "Explanation": "The cited work, TNL2K, consists of 2000 sequences with natural language description for each and serves as a data source for the performance evaluation of the proposed trackers in the citing paper."}, {"Category": "Data Source", "Citation": "[30]", "Explanation": "The cited work, VOT2022 benchmark, has 60 sequences that are used to measure the Expected Average Overlap (EAO), Accuracy (A) and Robustness (R) metrics in the performance evaluation of the proposed trackers in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work provides the LaSOT dataset, which the citing paper uses to compare the performance of their MixFormerV2-B tracker with other state-of-the-art trackers."}, {"Category": "Data Source", "Citation": "[41]", "Explanation": "The cited work provides the UAV123 dataset, which the citing paper uses to compare the performance of their MixFormerV2-B tracker with other state-of-the-art trackers."}, {"Category": "Data Source", "Citation": "[42]", "Explanation": "The cited work provides the TrackingNet dataset, which the citing paper uses to compare the performance of their MixFormerV2-B tracker with other state-of-the-art trackers."}, {"Category": "Data Source", "Citation": "[48]", "Explanation": "The cited work provides the TNL2k dataset, which the citing paper uses to compare the performance of their MixFormerV2-B tracker with other state-of-the-art trackers."}, {"Category": "Supporting 
Evidence", "Citation": "[42]", "Explanation": "The cited work, TrackingNet, provides a benchmark dataset for tracking and evaluation, which the citing paper uses to compare the performance of their proposed method with state-of-the-art methods."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work, LaSOT, is a large-scale tracking dataset that the citing paper uses to evaluate the performance of their method in a real-world setting."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work, LaSOT ext, is a larger version of the LaSOT dataset that the citing paper uses to further test the performance of their method in a more challenging environment."}, {"Category": "Supporting Evidence", "Citation": "[41]", "Explanation": "The cited work, UAV123, is a tracking dataset collected from UAV videos that the citing paper uses to assess the performance of their method in a real-world setting with a more complex background."}, {"Category": "Supporting Evidence", "Citation": "[48]", "Explanation": "The cited work, TNL2K, is a large-scale tracking dataset that the citing paper uses to test the performance of their method in a more diverse and challenging environment."}, {"Category": "Data Source", "Citation": "[29]", "Explanation": "The cited work is the VOT2020 benchmark, which provides the data used in the citing paper for evaluation purposes."}, {"Category": "Data Source", "Citation": "[28]", "Explanation": "The cited work is the GOT10k dataset, which is used in the citing paper for evaluation purposes."}, {"Category": "Extension or Continuation", "Citation": "[20]", "Explanation": "The cited work is the LaSOT dataset, which the citing paper uses to train the tracker in addition to other datasets."}, {"Category": "Extension or Continuation", "Citation": "[42]", "Explanation": "The cited work is the TrackingNet dataset, which the citing paper uses to train the tracker in addition to other datasets."}, {"Category": "Extension or Continuation", "Citation": "[35]", "Explanation": "The cited work is the COCO dataset, which the citing paper uses to train the tracker in addition to other datasets."}]
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "In such settings, standard OT variants cannot be employed, and novel estimation techniques are necessary. Since the main challenge is that the conditional distributions are not explicitly available, the key idea in our OT formulation is to employ kernelized-leastsquares terms computed over the joint samples, which implicitly match the transport plan's marginals with the empirical conditionals. Under mild conditions, we prove that our estimated transport plans, as a function of the conditioned variable, are asymptotically optimal. For finite samples, we show that the deviation in terms of our regularized objective is bounded by O(1/m1/4 ), where m is the number of samples. We also discuss how the conditional transport plan could be modelled using explicit probabilistic models as well as using implicit generative ones. We empirically verify the consistency of our estimator on synthetic datasets, where the optimal plan is analytically known. When employed in applications like prompt learning for few-shot classification and conditionalgeneration in the context of predicting cell responses to treatment, our methodology improves upon state-of-the-art methods." }, { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b55", "b41", "b25", "b67", "b27", "b11" ], "table_ref": [], "text": "Optimal Transport (OT) [Kantorovich, 1942] serves as a powerful tool for comparing distributions.\nOT has been instrumental in diverse ML applications [Peyré and Cuturi, 2019, Liu et al., 2020, Fatras et al., 2021, Cao et al., 2022, Chen et al., 2023] that involve matching distributions. The need for comparing conditional distributions also frequently arises in machine learning. For instance, in the supervised learning of (probabilistic) discriminative models, one needs to compare the model's label posterior with the label posterior of the training data. Learning implicit conditional-generative models is another such application. Typically, the observed input covariates in these applications are continuous rather than discrete. Consequently, one may only assume access to samples from the input-label joint distribution rather than having multiple samples for a given input. It is well known that estimating conditionals is a significantly more challenging problem than estimating joints (e.g. refer to Section (2) in [Li et al., 2022]). Hence, it is not straightforward to apply OT between the relevant conditionals, as the conditionals are implicitly given via samples from the joint distribution.\nThis issue becomes more pronounced when the distributions of input covariates in the two joints are not the same, e.g. in medical applications [Hahn et al., 2019] where the distributions of treated and untreated patients differ. In such cases, merely performing an OT between the joint distributions of input and label is not the same as comparing the corresponding conditionals.\nIn this paper, we address this challenging problem of estimating OT plan between two conditionals, say s Y |X (•|x) and t Y ′ |X ′ (•|x), when samples from the joint distributions, s X,Y , t X ′ ,Y ′ , are given. As motivated above, we do not restrict the conditioned variable to be discrete, nor do we assume that the marginals of the common variable, s X and t X ′ , are the same. 
As we discuss in our work, the key challenge in estimating OT between conditionals comes in enforcing the marginal constraints involving the conditionals, because the samples provided are not from the conditionals, but from the joints s X,Y and t X ′ ,Y ′ . Our formulation employs kernelized-least-squares terms, computed over the joint samples, to address this issue. These regularizer terms implicitly match the transport plan's marginals with the empirical conditionals. Under mild assumptions, we prove that our conditional transport plan is indeed an optimal one, asymptotically. Hence, the corresponding transport cost will match the true Wasserstein between the conditionals. For finite samples, m, we show that the deviation in our regularized objective is upper bounded by O(1/m 1/4 ).\nFew prior works have considered special cases of this problem and have focused on learning conditional optimal transport maps [Tabak et al., 2021, Bunne et al., 2022]. To the best of our knowledge, our work is the first to formulate OT between conditionals in a general setting that also leads to provably consistent estimators for the optimal transport cost as well as the transport plan as a function of the conditioned variable's value, x. Further, instead of directly modelling the transport plan, π Y,Y ′ |X , we instead propose modelling it's factors: π Y ′ |Y,X , π Y |X . This gives a three-fold advantage: (i) The models for the factors are much simpler than for the joint (ii) when dealing with discriminative/conditional-generative models we can directly choose π Y |X (•|x) as the discriminative model being learnt. (ii) When implicit generative models are used for the factors, π Y ′ |Y,X (•|y, x) can be readily be used for inference in applications like cell population dynamics (e.g., section 5.2).\nWe empirically show the utility of our approach in the conditional generative task for modelling cell population dynamics, where we consistently outperform the baselines. Furthermore, we pose the task of learning prompts for few-shot classification as a conditional optimal transport problem. We argue that this is advantageous than posing it as a classical optimal transport problem, which is the approach existing works employ. We test this novel approach on the benchmark EuroSAT [Helber et al., 2019] dataset and show improvements over [Chen et al., 2023], a state-of-theart prompt learning method.\nIn Table 1, we highlight some of the key features of COT, comparing it with the related works. Our main contributions are summarized below." }, { "figure_ref": [], "heading": "Contributions", "publication_ref": [ "b67", "b3" ], "table_ref": [], "text": "• We propose novel estimators for optimal transport between conditionals in a general setting where the conditioned variable may be continuous, and its marginals in the two joint distributions may differ.\n• We prove the consistency of the proposed estimators. To the best of our knowledge, we are the first to present a consistent estimator for conditional optimal transport in the general setting.\n• While recent approaches model the optimal transport map [Tabak et al., 2021], [Bunne et al., 2022], we model the transport plan, which enables more general inferences.\n• We empirically verify the correctness of the proposed estimator on synthetic datasets. 
We further evaluate the proposed approach on downstream applications of conditional generation for modelling cell population dynamics and prompt learning for few-shot classification, showing its utility over some of the state-of-the-art baselines." }, { "figure_ref": [], "heading": "PRELIMINARIES", "publication_ref": [], "table_ref": [], "text": "Let X , Y be two sets (domains) that form compact Hausdorff spaces. Let P(X ) be the set of all probability measures over X .\nOptimal Transport (OT) Given a cost function, c : Y × Y → R, OT compares two measures s, t ∈ P(Y) by finding a plan to transport mass from one to the other that incurs the least expected cost. More formally, Kantorovich's OT formulation [Kantorovich, 1942] is given by:\nW c (s, t) ≡ min π∈P(Y×Y) ∫ c dπ, s.t. π 1 = s, π 2 = t, (1)\nwhere π 1 , π 2 are the marginals of π. A valid cost metric over Y × Y defines the 1-Wasserstein metric, W c (s, t), over distributions s, t ∈ P(Y). The cost metric is referred to as the ground metric." }, { "figure_ref": [], "heading": "Maximum Mean Discrepancy (MMD)", "publication_ref": [ "b61" ], "table_ref": [], "text": "Given a characteristic kernel function [Sriperumbudur et al., 2011], k : Y × Y → R, MMD defines a metric over probability measures given by: MMD 2 (s, t) ≡ E X∼s,X ′ ∼s [k(X, X ′ )] + E Y ∼t,Y ′ ∼t [k(Y, Y ′ )] - 2E X∼s,Y ∼t [k(X, Y )].\nWith H k as the RKHS associated with the characteristic kernel k, the dual-norm definition of MMD is given by MMD(s, t) = max f ∈H k ;∥f ∥≤1 E s [f (X)] - E t [f (Y )]." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b17", "b67", "b46", "b47", "b47", "b3", "b67", "b67", "b67", "b67", "b65", "b64", "b67" ], "table_ref": [], "text": "Few prior works have attempted to solve the conditional OT problem in some special cases, which we discuss below. Table 1 summarizes how [Tabak et al., 2021], [Luo and Ren, 2021] and [Bunne et al., 2022] compare with the proposed COT method along four axes: whether a consistent estimator is provided (N/A for all three prior works), whether the OT plan is modelled with the flexibility of implicit modelling, flexibility with respect to the ground cost, and whether a single sample per conditioned variable suffices. [Frogner et al., 2015] presents an estimator for the case when the marginals, s X and t X ′ , are the same and y takes discrete values. Their estimator does not generalize to the case where y is continuous. Further, they solve individual OT problems at each x rather than modelling the transport map/plan as a function of x. [Luo and Ren, 2021] characterizes the conditional distribution discrepancy using the Conditional Kernel Bures (CKB). With the assumption that the kernel embeddings for the source and target are jointly Gaussian, CKB defines a metric between conditionals. [Luo and Ren, 2021] does not discuss any (sufficient) conditions for this assumption to hold. Moreover, CKB only estimates the discrepancy between the two conditionals, and it is unclear how to retrieve an optimal transport plan/map with CKB, limiting its applications. [Bunne et al., 2022] learns a conditional optimal transport map, but assumes access to multiple samples from s Y |X (•|x), t Y ′ |X ′ (•|x) individually for each sample x. Also, their approach additionally assumes the ground cost is squared Euclidean. In contrast, we neither assume access to multiple samples from s Y |X (•|x), t Y ′ |X ′ (•|x) at each x nor make restrictive assumptions on the ground cost. Further, we estimate the transport plan rather than the transport map. The work closest to ours is [Tabak et al., 2021]. However, there are critical differences between the two approaches, which we highlight below.
[Tabak et al., 2021] formulates a min-max adversarial formulation with a KL divergence-based regularization to learn a transport map. Such adversarial formulations are often unstable, and [Tabak et al., 2021] does not present any convergence results. Their empirical evaluation is also limited to small-scale qualitative experiments. Moreover, unlike the estimation bounds we prove, [Tabak et al., 2021] does not discuss any learning theory bounds or consistency results. It is expected that such bounds would be cursed with dimensions [Séjourné et al., 2023b, Séjourné et al., 2023a]. Additionally, the proposed formulation allows us to learn transport plans using implicit models ( § 4.2.2). Such an approach may not be possible with KLregularized formulation in [Tabak et al., 2021] due to non-overlapping support of the distributions. Owing to these differences, our proposed method is more widely applicable." }, { "figure_ref": [], "heading": "PROBLEM FORMULATION", "publication_ref": [ "b41", "b21", "b59", "b51" ], "table_ref": [], "text": "This section formally defines the Conditional Optimal Transport (COT) problem and presents a consistent estimator for it in the general setting. We begin by recalling the definition of OT between two given measures s\nY |X (•|x) and t Y ′ |X ′ (•|x) for a given x. W c s Y |X (•|x), t Y ′ |X ′ (•|x) is defined as follows. min π Y,Y ′ |X (•,•|x)∈P(Y×Y) Y×Y c dπ Y,Y ′ (•, •|x),(2)\ns.t. π Y |X (•|x) = s Y |X (•|x), π Y ′ |X (•|x) = t Y ′ |X ′ (•|x), where π Y |X (•|x) and π Y ′ |X (•|x) denotes the marginals of π Y,Y ′ |X (•, •|x). If the cost is a valid met- ric, then W c s Y |X (•|x), t Y ′ |X ′ (•|x) is nothing but the Wasserstein distance between s Y |X (•|x) and t Y ′ |X ′ (•|x). While W c s Y |X (•|x), t Y ′ |X ′ (•|x\n) helps comparing/transporting measures given a specific x ∈ X , in typical learning applications, one needs a comparison in an expected sense rather than at a specific x ∈ X . Accordingly, we consider\nE X ′′ ∼a W c s Y |X (•|X ′′ ), t Y ′ |X ′ (•|X ′′ )\n, where a is a given auxiliary measure:\nX min π Y,Y ′ |X (•,•|x)∈P(Y×Y) ∀x∈X Y×Y c dπ Y,Y ′ |X (•, •|x) da(x), s.t. π Y |X (•|x) = s Y |X (•|x), π Y ′ |X (•|x) = t Y ′ |X ′ (•|x) ∀x ∈ X ≡ min π Y,Y ′ |X :X →P(Y×Y) X Y×Y c dπ Y,Y ′ |X (•, •|x) da(x), s.t. π Y |X (•|x) = s Y |X (•|x), π Y ′ |X (•|x) = t Y ′ |X ′ (•|x) ∀x ∈ X .(3)\nIn the special case where the auxiliary measure, a, is degenerate, (3) gives back (2). Henceforth, we analyze the proposed COT formulation defined in (3). Now, in typical machine learning applications, the conditionals are not explicitly given, and only samples from the joints are available. Estimation of COT from samples seems challenging because the problem of estimating conditional densities itself has been acknowledged to be a significantly difficult one with known impossibility results (e.g., refer to Section 2 in [Li et al., 2022]). Hence, some regularity assumptions are necessary for consistent estimation. Further, even after making appropriate assumptions, the typical estimation errors are cursed with dimensions (e.g., Theorem 2.1 in [Graham et al., 2020]).\nOn the other hand, estimation of RKHS embeddings of conditional measures can be performed at rates O(1/m 1/4 ), where m is the number of samples [Song et al., 2009] [Grünewälder et al., 2012]. This motivates us to enforce the constraints in COT (3) by penalizing the distance between their RKHS embeddings. 
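For reference, these squared-MMD penalties are computed directly from samples; a minimal numpy sketch with a Gaussian (RBF) kernel — the kernel we also use in our experiments, with an arbitrary bandwidth here — is given below.

```python
import numpy as np

def rbf_gram(a, b, sigma=1.0):
    """Gram matrix of k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for a (n, d) and b (m, d)."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def mmd_sq(a, b, sigma=1.0):
    """Biased (V-statistic) estimator of MMD^2 between the empirical measures of a and b."""
    return (rbf_gram(a, a, sigma).mean()
            + rbf_gram(b, b, sigma).mean()
            - 2.0 * rbf_gram(a, b, sigma).mean())
```

In particular, when the second argument is a single point (a Dirac δ_y), mmd_sq reduces to the per-sample terms MMD²(π Y |X (•|x i ), δ y i ) used in the estimator developed next.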
More specifically, we exploit the equivalence:\nπ Y |X (•|x) = s Y |X (•|x) ∀x ∈ X ⇐⇒ X MMD 2 π Y |X (•|x), s Y |X (•|x) ds X (x) = 0\n. This is true because MMD is a valid metric and we assume s X (x) > 0, t X ′ (x) > 0 ∀ x ∈ X . Using this, COT (3) can be relaxed as:\nmin π Y,Y ′ |X :X →P(Y×Y) X Y×Y c dπ Y,Y ′ |X (•, •|x) da(x), + λ 1 X MMD 2 π Y |X (•|x), s Y |X (•|x) ds X (x) + λ 2 X MMD 2 π Y ′ |X (•|x), t Y ′ |X ′ (•|x) dt X ′ (x),(4)\nwhere λ 1 , λ 2 > 0 are regularization hyperparameters. Note that (4) is exactly the same as (3\n) if λ 1 , λ 2 → ∞. Now, we use a standard result, E ∥G -h(X)∥ 2 = E ∥G -E[G|X]∥ 2 + E ∥E[G|X] -h(X)∥ 2\nwith G taken as the kernel mean embedding of δ Y and h(X) taken as the kernel mean embedding of π Y |X (•|X) [Muandet et al., 2017]. This gives us\nX ×Y MMD 2 π Y |X (•|x), δ y ds X,Y (x, y) = X MMD 2 π Y |X (•|x), s Y |X (•|x) ds X (x)+v(s)\n, where v(s) ≥ 0. Here, ϕ is the feature map corresponding to the kernel defining the MMD. This leads to the following formulation:\nmin π Y,Y ′ |X :X →P(Y×Y) X Y×Y c dπ Y,Y ′ |X (•, •|x) da(x), + λ 1 X ×Y MMD 2 π Y |X (•|x), δ y ds X,Y (x, y) + λ 2 X ×Y MMD 2 π Y ′ |X (•|x), δ y dt X ′ ,Y ′ (x, y).\n(5) Since v(s), v(t) are independent of π, the solutions of (5) are exactly the same as those of COT (3) as λ 1 , λ 2 → ∞. The advantage of this reformulation is that it can be efficiently estimated using samples from the joints, as we detail below." }, { "figure_ref": [], "heading": "Sample-Based Estimation", "publication_ref": [ "b69", "b43", "b33" ], "table_ref": [], "text": "In our set-up, in order to solve (5) and perform estimation, we are only provided with samples D s m = {(x 1 , y 1 ), . . . , (x m , y m )} and\nD t m = {(x ′ 1 , y ′ 1 ), . . . , (x ′ m , y ′ m )} from s X,Y and t X ′ ,Y ′ , respectively.\nHence, we employ a sample-based estimator for the regularizer terms:\nX ×Y MMD 2 π Y |X (•|x), δ y ds X,Y (x, y) ≈ 1 m m i=1 MMD 2 π Y |X (•|x i ), δ yi .\nThe following lemma shows that this regularizer estimator is statistically consistent.\nLemma 1. Assuming k is a normalized characteristic kernel, with probability at least 1 -δ, we have\nX ×Y MMD 2 (πY |X (•|x),δy) ds X,Y (x,y) -1 m m i=1 MMD 2 (πY |X (•|xi),δy i ) ≤2 2 m log( 2 δ ).\nUsing this result for the regularization terms, (5) can in-turn be estimated as:\nmin π Y,Y ′ |X :X →P(Y×Y) X Y×Y c dπ Y,Y ′ |X (•,•|x)da(x) +λ1 1 m m i=1 MMD 2 (πY |X (•|xi),δy i ) +λ2 1 m m i=1 MMD 2 π Y ′ |X (•|x ′ i ),δ y ′ i .(6)\nWe choose not to estimate the first term with empirical average as a is a known distribution. In the following theorem, we prove the consistency of our COT estimator.\nTheorem 1. Let Π be a given model for the conditional transport plans, π Y,Y ′ |X : X → P(Y × Y). Assume λ 1 = λ 2 = λ. Let πm , π * denote optimal solutions over the restricted model Π corresponding to (6),( 5) respectively. Let Ûm [π], U[π] denote the objectives as a function of π ∈ Π in ( 6),( 5) respectively. Then, we prove the following:\n1. With probability at least 1 -δ, U[π m ] -U[π * ] ≤ 2λ 1 R m (Π) + 2λ 2 R ′ m (Π) + 6(λ 1 + λ 2 ) 2 m log 3 δ , where the Rademacher based com- plexity term, R m (Π),\nis defined as:\n1 m E max π∈Π m i=1 ϵ i MMD 2 π Y |X (•|X i ), δ Yi ; (X i , Y i ) are IID samples from s X,Y and ϵ i denotes the Rademacher random vari- able.\nR ′ m (Π), is analogously defined as:\n1 m E max π∈Π m i=1 ϵ i MMD 2 π Y ′ |X (•|X ′ i ), δ Y ′ i ,\nwhere\n(X ′ i , Y ′ i ) are IID samples from t X ′ ,Y ′ and ϵ i denotes the Rademacher random variable. 
Recall that π Y |X (•|x) and π Y ′ |X (•|x) denote the marginals of π Y,Y ′ |X (•, •|x).\n2. In the special case Π is a neural network based conditional generative model, the kernel employed is universal, normalized, and non-expansive [Waarde and Sepulchre, 2022], and λ = O(m 1/4 ), with high probability we have that\nU[π m ] -U[π * ] ≤ O(1/m 1/4 ). More importantly, when m → ∞, πm is an optimal solution to the original COT prob- lem (3) whenever Π is rich enough such that ∃π * ∈ Π ∋ π * Y |X (•|x) = s Y |X (•|x) and π * Y ′ |X (•|x) = t Y ′ |X ′ (•|x) ∀x ∈ X .\nThe proof is presented in Supplementary § (S1.2).\nThe conditions for consistency are indeed mild because (i) neural conditional generators are known to be universal (Lemma 2.1 in [Liu et al., 2021], [Kidger and Lyons, 2020]) (ii) the popularly used Gaussian kernel is indeed universal, normalized, and non-expansive (for a large range of hyperparameters).\nThe proof for the first part of the theorem is an adaptation of classical uniform convergence based arguments; however, further bounding the complexity terms in the case of neural conditional generative models is novel and we derive this using vector contraction inequalities along with various properties of the kernel." }, { "figure_ref": [], "heading": "Modelling the Transport Plan", "publication_ref": [], "table_ref": [], "text": "We now provide details of modelling the transport plan function, i.e., choices for Π, from a pragmatic perspective. Firstly, we model the transport plan π Y,Y ′ |X (y, y ′ |x) by modelling its factors: π Y ′ |Y,X (y ′ |y, x) and π Y |X (y|x). Since the factors can be modelled using simpler models, this brings us computational benefits, among other advantages that we discuss. Secondly, employing COT with such a factorization enables us to directly choose π Y |X (•|x) as the label posterior of the model to be learnt in discriminative modelling applications. Moreover, the other factor π Y ′ |Y,X (•|y, x) can be readily used for inference (see § 5.1.2, § 5.2)." }, { "figure_ref": [], "heading": "Transport Plan with Explicit Models", "publication_ref": [], "table_ref": [], "text": "Here, we discuss our modelling choice with explicit probabilistic models when Y = {l 1 , . . . , l n } is a finite set. Accordingly, we model the factors π Y ′ |Y,X (y ′ |y, x), π Y |X (y|x) with fixed-architecture neural networks, parameterized by ψ and ϕ respectively, with the output layer as softmax over |Y| labels.\nThe COT estimator 6 in this case simplifies as:\nmin ψ,θ X i=n,j=n i=1,j=1 c(li,lj )π ψ (li|lj ,x)π θ (lj |x)da(x) +λ1 1 m m i=1 MMD 2 ( n j=1 π ψ (•|lj , xi)π θ (lj |xi), δy i ) +λ2 1 m m i=1 MMD 2 π θ (•|x ′ i ), δ y ′ i ,(7)\nwhere ψ, θ are the network parameters we wish to learn. In discrminative learning applications, the factor π θ (•|x) can be readily used as a probabilistic classifier (e.g., section 5.3)." }, { "figure_ref": [], "heading": "Transport Plan with Implicit Models", "publication_ref": [], "table_ref": [], "text": "As mentioned earlier, in applications such as § 5.1.2, § 5.2, it is required to generate samples from π Y ′ |Y,X (•|y, x) for inference. In such applications, one would prefer modelling these transport plan factors using implicit generative models.\nSince the MMD metric, unlike KL-divergence, can be employed to compare measures with non-overlapping support, implicit generative models can be readily employed for modelling our transport plan. 
More specifically, we model the factors π\nY ′ |Y,X (y ′ |y, x), π Y |X (y|x)\nwith fixed-architecture generative neural networks, π ψ and π θ , respectively. We use η, η ′ ∼ N (0, 1) to denote the noise random variables. The π θ network takes as input x and random η ′ to produce (random) y, to be distributed as π Y |X (•|x). Like-wise, the π ψ network takes as input y, x and random η to produce (random) y ′ , to be distributed as π Y ′ |Y,X (•|y, x). We denote the outputs of π θ by y(x, η ′ i ; θ) i = 1, . . . , m (i.e., samples from π Y /X (•|x). And, we denote outputs of π ψ by y (x, η i , η ′ i ; θ, ψ) i = 1, . . . , m, when inputs are y(x, η ′ i ; θ), x, η i . We illustrate the overall model in figure 1. Then, the COT estimator, with implicit modelling, reads as:\nmin θ,ψ X 1 m m i=1 c(y(x,η ′ i ;θ),y(x,ηi,η ′ i ;θ,ψ))da(x) +λ1 1 m m i=1 MMD 2 1 m m j=1 δ y ( x i ,η j ,η ′ j ;θ,ψ ) ,δy i +λ2 1 m m i=1 MMD 2 1 m m j=1 δ y(x ′ i ,η ′ j ;θ) ,δ y ′ i . (8\n)\nWe note that solving the COT problem, then readily provides us with the factors π Y ′ |Y,X (y ′ |y, x) and π Y |X (y|x), which can be used for inference purposes. This is in contrast to a typical implicit modelling approach, where one would require samples of (x, y, y ′ ) for learning such a model. The unavailability of such triplets (as in § 5.2) often limits such typical approaches. However, as we can see, COT now allows us to learn such a model without the availability of such triplets, only using samples from s X,Y and t X ′ ,Y ′ . This clearly shows the benefits of the proposed approach. " }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we showcase the utility of the proposed estimator 5 in various applications. We choose the auxiliary distribution a as the empirical distribution over the training covariates and use λ 1 = λ 2 = λ in all our experiments. More experimental details and results are in Supplementary § (S2.2). 1" }, { "figure_ref": [], "heading": "Verifying Correctness of Estimator", "publication_ref": [], "table_ref": [], "text": "We empirically verify the correctness of the proposed estimator in synthetically constructed settings where the closed-form solutions are known." }, { "figure_ref": [], "heading": "Convergence to the True Wasserstein", "publication_ref": [ "b55" ], "table_ref": [], "text": "We learn the implicit networks with the proposed COT loss 8, keeping λ high enough. With the learnt networks, we draw samples y(x, η ′\ni ; θ) ∼ π θ (•|x) and y(x, η i , η ′ i ; θ, ψ) ∼ π ψ (•|y(x, η ′ i ; θ), x) , for i = 1, • • • , m\n, and compute the transport cost (first term in 8) and compare it with\nW c (s Y |X (•|x), t Y ′ |X ′ (•|x)).\nIn order to verify that our estimate converges to the true Wasserstein, we consider a case where the analytical solution for the Wasserstein distance W c is known and compare it with our estimate.\nExperimental Setup We consider two distributions y ∼ N (4(x -0.5), 1) and y ′ ∼ N (-2(x ′ -0.5), 8x ′ + 1) where x ∼ β(2, 4) and x ∼ β(4, 2) generate m samples from each them. The true Wasserstein distance between them at x turns out to be (6(x -0.5)) 2 + ( √ 8x + 1 -1) 2 (see Equation (2.39) in [Peyré and Cuturi, 2019]), which we compare against. We use the RBF kernel and squared Euclidean distance as our ground cost. The factors π θ (•|x) and π ψ (•|y, x) are modelled using two 2-layer MLP neural networks.\n1 The code for reproducing our experiments is publicly available at https://github.com/atmlr-lab/COT. 
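Before turning to the results, here is a minimal PyTorch sketch of how the implicit-model estimator (8) can be set up for the 1-D synthetic experiment just described, with π_θ and π_ψ as small noise-conditioned MLPs and the auxiliary measure a taken as the empirical distribution over the training covariates (as in § 5). The network sizes, the single λ, the kernel bandwidth and the number of noise draws are illustrative assumptions, and the marginal-matching terms follow the convention of § 4.2.2 (π_θ samples matched to the y-samples from s, composed π_ψ∘π_θ samples matched to the y′-samples from t); this is a sketch, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

pi_theta = mlp(2, 1)   # input (x, eta')  -> sample y  ~ pi_theta(.|x)
pi_psi   = mlp(3, 1)   # input (y, x, eta) -> sample y' ~ pi_psi(.|y, x)

def rbf_mmd2(A, B, sigma2=1.0):
    """Biased (V-statistic) MMD^2 between two sample sets (rows are samples)."""
    def gram(U, V):
        return torch.exp(-torch.cdist(U, V) ** 2 / (2.0 * sigma2))
    return gram(A, A).mean() + gram(B, B).mean() - 2.0 * gram(A, B).mean()

def cot_loss(xs, ys, xt, yt, lam=1000.0, n_draws=32):
    """(xs, ys) is a minibatch from s_{X,Y}, (xt, yt) from t_{X',Y'}; shapes (m, 1)."""
    # Transport-cost term, with a taken as the empirical distribution over the
    # training covariates (one noise draw per covariate, for brevity).
    xa = torch.cat([xs, xt], dim=0)
    y_hat = pi_theta(torch.cat([xa, torch.randn_like(xa)], dim=1))
    y_prime_hat = pi_psi(torch.cat([y_hat, xa, torch.randn_like(xa)], dim=1))
    cost = ((y_hat - y_prime_hat) ** 2).sum(dim=1).mean()   # squared Euclidean cost

    # First regularizer: marginal of pi_theta(.|x_i) matched to delta_{y_i}.
    reg_s = 0.0
    for i in range(xs.shape[0]):
        xi = xs[i:i + 1].expand(n_draws, 1)
        samples = pi_theta(torch.cat([xi, torch.randn(n_draws, 1)], dim=1))
        reg_s = reg_s + rbf_mmd2(samples, ys[i:i + 1])

    # Second regularizer: composed marginal at x'_i matched to delta_{y'_i}.
    reg_t = 0.0
    for i in range(xt.shape[0]):
        xi = xt[i:i + 1].expand(n_draws, 1)
        mid = pi_theta(torch.cat([xi, torch.randn(n_draws, 1)], dim=1))
        samples = pi_psi(torch.cat([mid, xi, torch.randn(n_draws, 1)], dim=1))
        reg_t = reg_t + rbf_mmd2(samples, yt[i:i + 1])

    return cost + lam * (reg_s / xs.shape[0] + reg_t / xt.shape[0])

# One Adam step on a toy minibatch drawn from the Beta/Gaussian setup above.
opt = torch.optim.Adam(list(pi_theta.parameters()) + list(pi_psi.parameters()), lr=5e-3)
xs = torch.distributions.Beta(2.0, 4.0).sample((64, 1)); ys = 4 * (xs - 0.5) + torch.randn(64, 1)
xt = torch.distributions.Beta(4.0, 2.0).sample((64, 1)); yt = -2 * (xt - 0.5) + torch.sqrt(8 * xt + 1) * torch.randn(64, 1)
loss = cot_loss(xs, ys, xt, yt); opt.zero_grad(); loss.backward(); opt.step()
```

With the learnt networks, samples from π_θ(·|x) and π_ψ(·|y, x) can then be drawn exactly as in the evaluation procedure described next.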
" }, { "figure_ref": [ "fig_1" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows the convergence to the true Wasserstein as m increases. The variance of the estimated values and the MSEs decrease as the number of samples increases. The quadratic nature of the function is also captured with our estimator." }, { "figure_ref": [], "heading": "Convergence to the True Barycenter", "publication_ref": [ "b55", "b19", "b67", "b67" ], "table_ref": [], "text": "For further verification of our estimator, we show that the barycenter estimated using our transport plan and the true barycenter converge in Wasserstein distance.\nExperimental Setup Two independent Gaussian distributions are taken y ∼ N (2(x -0.5), 1) and y ′ ∼ N (-4(x ′ -0.5), 4) where x ∼ β(2, 4) and x ′ ∼ β(4, 2). The analytical solution of the barycenter is calculated as y c ∼ N (-x + 0.5, 2.5) [Peyré and Cuturi, 2019]. Recall that the barycenter can also be computed using the optimal transport map (Remark 3.1 in [Gordaliza et al., 2019]) using the expression: B x = ρS x +(1-ρ)T x , where ρ ∈ [0, 1] and B x , S x , T x denote the random variables corresponding to the barycenter, source measure and the transported sample, conditioned on x, respectively. Accordingly, samples from the barycenter, B xi , are obtained using: The corresponding MSEs are {22.399, 3.408, 3.964, 2.534, 1.687} for [Tabak et al., 2021] and {4.441, 0.654, 0.353, 0.099, 0.058} for the proposed COT estimator. It can be seen that the proposed COT-based barycenter converges to the true solution faster than [Tabak et al., 2021].\nρy i +(1-ρ)y, where y ∼ π ψ (•|y i , x i ).\nResults For evaluation, we generate 500 samples from our transport plan based barycenter and the true barycenter. We use kernel density estimation to plot the barycenters. Figures 3 and4 show that the proposed estimate of barycenter closely resembles the analytical barycenter and converges on increasing m." }, { "figure_ref": [], "heading": "Cell Population Dynamics", "publication_ref": [ "b5", "b3", "b3", "b5", "b3" ], "table_ref": [], "text": "The study of single-cell molecular responses to treatment drugs is a major problem in biology. Existing single-cell-sequencing methods allow one to observe gene expressions of the cells, but do so by destroying them. As a result, one ends up with cells from control (unperturbed) and target (perturbed) distributions without a correspondence between them. Optimal transport has emerged as a natural method [Bunne et al., 2021] to obtain a mapping between the source and the target cells, which can then be used for predictions on unseen cells. As the drug dosage is highly correlated with the predicted cell populations, [Bunne et al., 2022] learns such optimal trans-port maps conditioned on the drug dosage. We apply the proposed COT formulation to generate samples from the distributions over perturbed cells conditioned on the drug dosage given to an unperturbed cell.\nDataset We consider the dataset used by [Bunne et al., 2022] and [Bunne et al., 2021] corresponding to the cancer drug Givinostat applied at different dosage levels, {x 1 = 10nM, x 2 = 100nM, x 3 = 1000nM, x 4 = 10000nM }. At each dosage level, x i , samples of perturbed cells are given: y i1 , . . . , y imi . The total perturbed cells are 3541. Samples of unperturbed cells are also provided: y ′ 1 , . . . , y ′ m , m = 17, 565. Each of these cells is described by gene-expression levels of n = 1000 highly variable genes, i.e., y ij , y ′ i ∈ R 1000 . 
Following [Bunne et al., 2022], the representations of cells are brought down to 50 dimensions with PCA.\nCOT-Based Generative Modelling Our goal is to perform OT between the distribution of the unperturbed cells and the distribution of the perturbed cell conditioned on the drug dosage. As the representations of the cells lie in Y = R 50 , we choose implicit modelling ( § 4.2.2) for learning the conditional transport plans. The factor π θ is taken as the empirical distribution over the unperturbed cells. With this notation, our COT estimator, (8), simplifies as follows. where y (x, η i ; ψ) i = 1, . . . , m are samples from the network π ψ (•|y ′ i , x)." }, { "figure_ref": [], "heading": "Experimental", "publication_ref": [ "b3", "b3", "b63", "b5", "b3", "b3", "b5" ], "table_ref": [ "tab_1", "tab_2" ], "text": "Setup Similar to [Bunne et al., 2022], we take the cost function, c, as squared Euclidean. For the MMD regularization, we use the characteristic inverse multi-quadratic (IMQ) kernel.\nResults Following [Bunne et al., 2022], we evaluate the performance of COT by comparing samples from the predicted and ground truth perturbed distributions. We report the l 2 norm between the Perturbation Signatures [Stathias et al., 2018], for 50 marker genes for various dosage levels. We also report the MMD distances between the predicted and target distributions on various dosage levels. The distances are reported for in-sample settings, i.e. the dosage levels are seen during training. We compare our performance to the reproduced CellOT [Bunne et al., 2021] and CondOT [Bunne et al., 2022] baselines.\nWe summarize our results in Tables 2 and3. We observe that COT consistently outperforms state-of-the- art baselines CondOT [Bunne et al., 2022] and Cel-lOT [Bunne et al., 2021] in terms of l 2 (PS) as well as the MMD distances." }, { "figure_ref": [], "heading": "Prompt Learning", "publication_ref": [ "b75", "b57", "b11", "b11" ], "table_ref": [], "text": "In order to show the versatility of our framework, we adapt our estimator for learning prompts for largescale vision-language models and evaluate the performance in a limited supervision setting.\nThe success of vision-language models in open-world visual understanding has motivated efforts which aim to learn prompts [Zhou et al., 2022a, Zhang et al., 2022, Zhou et al., 2022b, Chen et al., 2023] to adapt the knowledge from pre-trained models like CLIP [Radford et al., 2021] for downstream tasks since it is infeasible to fine-tune such models due to a large number of parameters. Typically, these approaches rely on learning class-specific prompts for each category to better adapt the vision-language model for downstream tasks without the need for finetuning. A recent approach, PLOT [Chen et al., 2023], achieved state-of-the-art results by incorporating an OT-based loss between distributions over the set of local visual features and the set of textual prompt features, each of 1024 dimensions, to learn the downstream classifier. 
For each image, PLOT computes an OT-based loss between M (49) visual features of the image and N (4) textual prompt features per class.\nAs prompts are shared across images of a class [Chen et al., 2023], learning optimal transport plans conditioned on class-level information is expected to improve the downstream performance " }, { "figure_ref": [], "heading": "Validating the Proposed Explicit Modelling", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Before working on the challenging few-shot classification task, we evaluate the proposed explicit modellingbased COT estimator on a simpler multi-class classification task. Let the discriminative model to be learnt be f θ . The idea is to match this conditional to that in the training data using COT. We choose the transport plan factor π θ ≡ f θ and a as the marginal of input covariates in the training data, simplifying our COT estimator, (7), as:\nmin ψ,θ 1 m m q=1 i=n,j=n i=1,j=1 c(li,lj )π ψ (li|lj ,xq)f θ (lj |xq) +λ1 1 m m i=1 MMD 2 ( n j=1 π ψ (•|lj ,xi)f θ (lj |xi),δy i ), (9\n)\nwhere ψ, θ are the network parameters we wish to learn. Table 4 validates the performance with the proposed explicit modelling." }, { "figure_ref": [], "heading": "COT Formulation for Prompt Learning", "publication_ref": [ "b75", "b11" ], "table_ref": [], "text": "We learn an explicit model π ψr (•|l jqr , x qr ) over the N textual prompt features l 1r , . . . , l N r for each class. Here, x qr is the q th image from class r and l jqr is the j th visual feature for image x qr . Following PLOT, the distribution over image features given an image is considered uniform and, hence, not modelled as the other (10)\nFollowing the PLOT setup, we take v, u as uniform distributions over the M (49) visual features and the N (4) prompt features, respectively. As the prompts are shared across the images of a class, our MMDregularization term matches the cumulative marginals to the distribution over prompt features.\nExperimental Setup We take the same experimental setup used in CoOp [Zhou et al., 2022b] and PLOT [Chen et al., 2023] for learning prompts and only change the training loss to 10. The kernel employed is the characteristic inverse multi-quadratic, and the ground cost is the cosine cost. We follow the common training/evaluation protocol used in CoOp and PLOT and report the mean and standard deviation of the accuracies obtained with 3 seeds." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b27" ], "table_ref": [ "tab_4" ], "text": "In Table 5, we report the accuracies on the EuroSAT benchmark dataset [Helber et al., 2019] for the number of shots K as 1, 2, 4 and 8. As the number of shots represents the number of training images per class, learning with lesser K is more difficult. The advantage of class-level context brought by the proposed COT formulation is evident in this setting." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "Often, machine learning applications need to compare conditional distributions. Remarkably, our framework enables such a comparison solely using samples from (observational) joint distributions. To the best of our knowledge, the proposed method is the first work that consistently estimates the conditional transport plan in the general setting. The cornerstone of our work lies in the theoretical analysis of its convergence properties, demonstrating different modelling choices for learning and empirically validating its correctness. 
We further showcase the utility of the proposed method in downstream applications of cell population dynamics and prompt learning for few-shot classification. A possible future work would be to extend the proposed approach of generating conditional barycenters ( § 5.1.2) to work with more than two conditionals." }, { "figure_ref": [], "heading": "Checklist", "publication_ref": [], "table_ref": [], "text": "1. For all models and algorithms presented, check if you include:\n(a) A clear description of the mathematical setting, assumptions, algorithm, and/or model.\n[Yes] (b) An analysis of the properties and complexity (time, space, sample size) of any algorithm.\n[Yes] (c) (Optional) Anonymized source code, with specification of all dependencies, including external libraries.\n[Yes]\n2. For any theoretical claim, check if you include:\n(a) Statements of the full set of assumptions of all theoretical results. [Yes] (b) Complete proofs of all theoretical results.\n[Yes] (c) Clear explanations of any assumptions. [Yes] 3. For all figures and tables that present empirical results, check if you include:\n(a) The code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL).\n[Yes] (b) All the training details (e.g., data splits, hyperparameters, how they were chosen In continuation to the main paper, we present theoretical proofs, more details on the experiments and some additional experimental results. Our key sections are listed as follows.\n• Theoretical proofs S1.\n• Visualizing predictions of our conditional generator S2.1.\n• More experimental details and additional results S2.2." }, { "figure_ref": [], "heading": "S1 THEORETICAL PROOFS", "publication_ref": [ "b51" ], "table_ref": [], "text": "S1.1 Proof of Lemma 1\nLemma1. Assuming k is a normalized characteristic kernel, with probability at least 1 -δ, we have:\nX ×Y MMD 2 π Y |X (•|x), δ y ds X,Y (x, y) - 1 m m i=1 MMD 2 π Y |X (•|x i ), δ yi ≤ 2 2 m log 2 δ .\nProof. Recall that MMD is nothing but the RKHS norm-induced distance between the corresponding kernel embeddings i.e., MMD(s, t) = ∥µ k (s) -µ k (t) ∥, where µ k (s) ≡ ϕ k (x) ds X , is the kernel mean embedding of s [Muandet et al., 2017], ϕ k is the canonical feature map associated with the characteristic kernel k. Let H k denote the RKHS associated with the kernel k. Since our kernel is normalized we have that\n∥µ k (b)∥ ≤ 1 ∀ b ∈ P(Y). Hence, 0 ≤ MMD 2 π Y |X (•|x), s Y |X (•|x) = ∥µ k π Y |X (•|x) -µ k s Y |X (•|x) ∥ 2 ≤ ∥µ k π Y |X (•|x) + µ k s Y |X (•|x) ∥ 2 ≤ ∥µ k π Y |X (•|x) ∥ + ∥µ k s Y |X (•|x) ∥ 2 ≤ 4\n, where the second last step uses the triangle inequality. From Chernoff-Hoeffding bound, we have that: with probability at least 1 -δ,\nX ×Y MMD 2 π Y |X (•|x), δ y ds X,Y (x, y) -1 m m i=1 MMD 2 π Y |X (•|x i ), δ yi ≤ 2 2 m log 2 δ ." }, { "figure_ref": [], "heading": "S1.2 Proof of Theorem 1", "publication_ref": [], "table_ref": [], "text": "We first restate Corollary (4) from the result of vector-contraction inequality for Rademacher in [Maurer, 2016], which we later use in our proof.\nCorollary (Restated from [Maurer, 2016]). Let H denote a Hilbert space and let f be a class of functions\nf : X → H, let h i : H → R have Lipschitz norm L. 
Then E sup f ∈F i ϵ i h i (f (x i )) ≤ √ 2L i,k ϵ i,k f k (x i ),\nwhere ϵ ik is an independent doubly indexed Rademacher sequence, and\nf k (x i ) is the k-th component of f (x i ).\nOur consistency theorem from the main paper is presented below, followed by its proof.\nProof of Theorem 1.\nProof. From the definition of U[π m ] and\nU[π * ], it follows that 0 ≤ U[π m ] -U[π * ]. 0 ≤ U[π m ] -U[π * ] = U[π m ] -Ûm [π m ] + Ûm [π m ] -Ûm [π * ] + Ûm [π * ] -U[π * ] ≤ U[π m ] -Ûm [π m ] + Ûm [π * ] -U[π * ] (∵ πm is the solution of 6) ≤ max π∈Π (U[π] -Ûm [π]) + Ûm [π * ] -U[π * ] (S11)\nWe now separately upper bound the two terms in S11 : ( Ûm [π * ] -U[π * ]) and max π∈Π (U[π] -Ûm [π]). From Lemma 1, with probability at least 1 -δ,\nÛm [π * ] -U[π * ] ≤ 2(λ 1 + λ 2 ) 2 m log 2 δ (S12)\nWe now turn to the second term. We show that max π∈Π U[π] -Ûm [π] satisfies the bounded difference property.\nLet Z i denote the random variable (X i , Y i ). Let Z = {Z 1 , • • • , Z i , • • • , Z m }\nbe a set of independent random variables. Consider another such set that differs only at the i th position :\nZ ′ = {Z 1 , • • • , Z i ′ , • • • , Z m }. Let Ûm [π]\nand Û′ m [π] be the corresponding objectives in 6.\nmax π∈Π U[π] -Ûm [π] -max π∈Π U[π] -Û′ m [π] ≤ max π∈Π -Ûm [π] + Û′ m [π] ≤ λ 1 m max π∈Π MMD 2 (π Y |X (•|x i ), δ yi ) -MMD 2 (π Y |X (•|x ′ i ), δ y ′ i ) + λ 2 m max π∈Π MMD 2 (π Y ′ |X (•|x i ), δ yi ) -MMD 2 (π Y ′ |X (•|x ′ i ), δ y ′ i ) (Using triangle inequality) ≤ 8(λ 1 + λ 2 ) m ,(S13)\nwhere for the last step, we use that, with a normalized kernel, MMD(π\nȲ (•|x i ), δ yi ) + MMD(π Ȳ (•|x ′ i ), δ y ′ i ) ≤ 4 and MMD(π Ȳ (•|x i ), δ yi ) -MMD(π Ȳ (•|x ′ i ), δ y ′ i ) ≤ 2 for Ȳ ∈ {Y, Y ′ }. Using the above in McDiarmid's inequality, max π∈Π U[π] -Ûm [π] ≤ E max π∈Π U[π] -Ûm [π] + 4(λ 1 + λ 2 ) 2 m log 1 δ . (S14) Let Z i ≡ (X i , Y i ) ∼ s X,Y and Z = {Z 1 , • • • , Z m }. Let Z ′ i ≡ (X ′ i , Y ′ i ) ∼ t X,Y and Z ′ = {Z ′ 1 , • • • , Z ′ m }. Let (ϵ i ) i∈{1,••• ,\nm} be IID Rademacher random variables. We now follow the standard symmetrization trick and introduce the Rademacher random variables to get the following.\nE[maxπ∈Π U [π]-Ûm[π]]≤2λ1 1 m E Z,ϵ[ maxπ∈Π m i=1 ϵi∥µ k (π Y |X (•|Xi))-ϕ(Yi)∥ 2 ] Rm(Π) +2λ2 1 m E Z ′ ,ϵ [maxπ∈Π m i=1 ϵi∥µ k (π Y ′ |X (•|Xi))-ϕ(Yi)∥ 2 ] R ′ m (Π)\n.\n(S15)\nRecall that µ k (s) is the kernel mean embedding of the measure s. Hence, using S12, S14 and S15, we prove that with probability at least 1 -δ,\nU[π m ] -U[π * ] ≤ 2λ 1 R m (Π) + 2λ 2 R ′ m (Π) + 6(λ 1 + λ 2 ) 2 m log 3 δ . (S16)\nBounding Rademacher in the Special Case: We now upper-bound R m (Π) for the special case where π(•|x) is implicitly defined using neural conditional generative models. More specifically, let d be the dimensionality of Y and let\ng w (x, N ) ∈ R 2d ∼ π(•|x)\n, where g w is a neural network function parameterized by w, N denotes the noise random variable. We make a mild assumption on the weights of the neural network to be bounded. The first d outputs, denoted by g w,1 (x, N ) will be distributed as π Y |X (•|x) and the last d outputs, denoted by g w,2 (x, N ) will be distributed as\nπ Y ′ |X (•|x). 
Let ζ i (π Y |X ) ≡ ∥µ k (π Y |X (•|x i )) -ϕ(y i )∥ 2 .\nWe now compute the Lipschitz constant for ζ i , used in our bound next.\nζ i (π Y |X ) -ζ i (π ′ Y |X ) ≤ 4 ∥µ k π Y |X (•|x i ) -ϕ(y i )∥ -∥µ k π ′ Y |X (•|x i ) -ϕ(y i )∥ (With a normalized kernel) ≤ 4∥µ k (π Y |X (•|x i )) -µ k (π ′ Y |X (•|x i ))∥ (Using triangle inequality) = 4∥E [ϕ(g w,1 (x i , N ))] -E [ϕ(g w ′ ,1 (x i , N ))] ∥ ≤ 4E [∥ϕ(g w,1 (x i , N )) -ϕ(g w ′ ,1 (x i , N ))∥] ∵ (Jensen's inequality) ≤ 4E [∥g w,1 (x i , N ) -g w ′ ,1 (x i , N )∥] ∵ (non-expansive kernel) ≤ 4 [∥g w,1 (x i , n i,1 ) -g w ′ ,1 (x i , n i,1 )∥] ∵ n i,j ≡ arg max n [∥g w,j (x i , n) -g w ′ ,j (x i , n)∥] . (S17)\nWe next use a vector-contraction inequality for Rademacher given in Corollary (4) from [Maurer, 2016]. This gives\nR m (Π) ≤ 4 √ 2 m E Z,ϵ max w m i=1 d j=1 r ij g j w,1 (x i , n i,1 ) and R ′ m (Π) ≤ 4 √ 2 m E Z ′ ,ϵ max w m i=1 d j=1 r ij g j w,2 (x i , n i,2\n). Here, g j w,1 , g j w,2 denote the j th output in the first and the second blocks; r ij denotes an independent doubly indexed Rademacher variable. Thus, we have upper bounded the complexity of Π in terms of that of the neural networks. Now, applying standard bounds (e.g. refer to §5 in [Neyshabur, 2017]) on Rademacher complexity of neural networks, we obtain\nR m (Π) ≤ O(1/ √ m) and R ′ m (Π) ≤ O(1/ √ m).\nIf λ 1 , λ 2 are chosen to be O(m 1/4 ), then from (S16), we have:\nU[π m ] -U[π * ] ≤ O(1/m 1/4\n). When m → ∞, this shows that πm is also an optimal solution of (6), in which case it is also an optimal solution of the original COT problem (when restricted to Π) because λ → ∞ too." }, { "figure_ref": [], "heading": "S2 MORE ON EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "This section contains more experimental details along with some additional results." }, { "figure_ref": [ "fig_8" ], "heading": "S2.1 Visualizing Predictions of the Conditional Generator", "publication_ref": [], "table_ref": [], "text": "We visualize the predictions learnt by the implicit conditional generator trained with the COT loss 8 and the alternate formulation S18 described below. The COT formulation 4 employs a clever choice of MMD regularization over the conditionals, which is then computed using the samples from the joints 5. One may think of alternatively employing an MMD regularization over joints as follows.\nmin π Y,Y ′ |X :X →P(Y×Y) X Y×Y c dπ Y,Y ′ |X (•, •|x)da(x)+λ 1 MMD 2 π Y |X (•|x)s(x), s(x, y) +λ 2 MMD 2 π Y ′ |X (•|x)t(y), t(x, y) .(S18)\nWe argue that this choice is sub-optimal. We first note that as we only have samples from the joints and not the marginal distributions (s X and t X ), matching conditionals through the above formulation is not straightforward.\nComputing the above formulation also incurs more memory because for computing the Gram matrix over (x, y), we need to keep Gram matrices over the samples of x, y separately. Further, in this case, each of the Gram matrices is larger than the ones needed with the proposed formulation 5. We compared the performances of the two formulations in a toy regression case and found the proposed COT formulation better.\nThe training algorithm for learning with the proposed COT loss is presented in Algorithm S1. The per-epoch computational complexity is O(m 2 ), where m. We fix λ to 500, noise dimension to 10. We use Adam optimizer with a learning rate of 5e -3 and train for 1000 epochs. 
We use squared Euclidean distance and RBF kernel.\nFigure 6 shows we obtain a good fit for σ 2 = 10, 100.\nIn Table 6, we also show the per-epoch computation time taken (on an RTX 4090 GPU) by the COT loss as a function of the size of the minibatch, which shows the computational efficiency of the COT loss. On the other hand, the computation time for the alternate formulation discussed in S18 (with MMD regularization over joints) is 0.245 ± 0.0012 s with minibatch-size 16 and resulted in the out-of-memory error for higher batchsizes. \ny i (x i ; θ) = π θ (•|x i , z i )∀i ∈ [m]. 4: Sample z ′ i ∼ η ∀i ∈ [m].\n5:\ny i (x i ; θ, ψ) = π ψ (•|y i (x i ; θ), x i , z ′ i ); ∀i ∈ [m]\n." }, { "figure_ref": [], "heading": "6:", "publication_ref": [], "table_ref": [], "text": "Compute the COT loss (Simplified case of Equation 8)\nmin θ,ψ 1 m m i=1 c (y i (x i ; θ) , y i (x i ; θ, ψ)) + λ 1 m m i=1 MMD 2   1 m m j=1 δ yj (xi;θ,ψ) , δ yi   . 7:\nUpdate θ, ψ using gradient descent. 8: end while" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "S2.2 More Experimental Details", "publication_ref": [ "b7", "b7", "b3", "b71", "b3", "b3", "b3", "b3", "b3", "b3", "b35", "b37", "b15", "b17", "b29", "b29", "b1", "b29", "b29", "b17", "b11", "b11", "b75" ], "table_ref": [ "tab_1", "tab_2", "tab_6", "tab_7", "tab_8", "tab_10", "tab_11", "tab_12", "tab_11" ], "text": "We provide more details for the experiments shown in § 5 of the main paper, along with some additional results.\nVerifying the Correctness of Estimator We use Adam optimizer and jointly optimize π θ and π ψ . We choose λ from the set {1, 200, 500, 800, 1000} and σ 2 used in the RBF kernel from the set {1e-2, 1e-1, 1, 10}. We found λ as 1000 and σ 2 as 1 to perform the best.\nIn Figure 8, we also show the OT plans. We draw 500 samples from the implicit maps learnt with the COT loss 8 and use kernel density estimation (KDE) to plot the distributions.\nCell Population Dynamics Dataset: We use the preprocessed dataset provided by [Bunne et al., 2023]. The dataset is publicly available for download using the following link https://polybox.ethz.ch/index.php/s/RAykIMfDl0qCJaM.\nFrom this dataset, we extracted unperturbed cells and cells treated with Givinostat. This led to a total of 17565 control cells and a total of 3541 cells treated with Givinostat. We take the same data splits as in [Bunne et al., 2023].\nMore on evaluation: Following [Bunne et al., 2022], we use scanpy's [Wolf et al., 2018] rank genes groups function for ranking and obtaining 50 marker genes for the drug, in this case Givinostat. The perturbed cells are grouped by drug, and the ranking is computed by keeping the unperturbed (i.e. control) cells as reference. We fix the architecture of our implicit model (ψ) as a 5-layer MLP and train it for 1000 epochs. Similar to [Bunne et al., 2022], we train on the 50-dimensional representation after applying PCA on the 1000-dimensional original representation. It is worth noting that training our MLP models is much stabler than the Partial Input Convex Neural Networks (PICNN) used in [Bunne et al., 2022], which needs carefully chosen initialization schemes. Following the evaluation scheme in [Bunne et al., 2022], we get back to the original 1000 dimensions, and then 50 marker genes are computed for the evaluation metrics.\nFollowing the in-sample experiment done in [Bunne et al., 2022], we tune our hyperparameters on the training data split. 
Based on the scale of the terms in the COT objective, we chose λ from the set {400, 2000, 10000} and found λ = 400 to be the optimal choice. For the IMQ kernel, we chose the hyperparameter from the set {1, 10, 50, 100} and found 100 to be the optimal choice. Since we model the transport plan and not the transport map, the following procedure is used for inference: we generate one sample for each (source sample, condition) pair through our implicit model and measure the required metrics on the generated distributions. This procedure is repeated n = 50 times, and the average metric is reported.

Following [Bunne et al., 2022], we quantitatively evaluate our performance using the MMD distance and the l2 distance between perturbation signatures, the l2(PS) metric. Let µ be the set of observed unperturbed cells, ν the set of observed perturbed cells (of size m1), and ν′ the set of predicted perturbed states of the population µ (of size m2). The perturbation signature PS(ν, µ) is defined as the difference between the mean of the perturbed set ν and the mean of the unperturbed set µ, and the l2(PS) metric is the l2 distance between PS(ν, µ) and PS(ν′, µ). Following [Bunne et al., 2022], we report MMD (§ 2) with the RBF kernel averaged over the kernel widths {2, 1, 0.5, 0.1, 0.01, 0.005}.

[Displaced figure caption, toy-regression comparison of § S2.1] The plots show the effect of the σ2 hyperparameter of the RBF kernel set to 1, 10 and 100, respectively. We quantitatively evaluate the methods using Explained Variance (between -∞ and 1; higher is better). With the proposed COT loss, the explained variance scores are 0.94, 0.94 and 0.95, respectively; with the alternate formulation S18, the scores are 0.63, 0.73 and 0.85. This shows the superiority of the proposed COT formulation 8.

Additional Results: In addition to the results reported in Tables 2 and 3, where the marker genes are computed at a per-drug level, Tables 7 and 8 show results where the marker genes are computed at a per-dosage level. Further, we present results for the out-of-sample setting, i.e., the dosage levels we predict are not seen during training. Tables 9 and 10 show the results when marker genes are computed at a per-drug level, and Tables 11 and 12 show the results when they are computed at a per-dose level. In Figures 9 and 10, we also show how closely the marginals of the proposed conditional optimal transport plan match the target distribution. The plots for COT correspond to the generated distribution having the median value of the metrics among all the (n = 50) generated distributions.

Classification We consider the task of multi-class classification and experiment on three benchmark datasets: MNIST [LeCun and Cortes, 2010], CIFAR-10 [Krizhevsky et al., 2009] and Animals with Attributes (AWA) [Lampert et al., 2009]. Following the popular minibatch-OT approaches [Fatras et al., 2020, Fatras et al., 2021], we perform minibatch training. We use the implementation of [Frogner et al., 2015] open-sourced by [Jawanpuria et al., 2021] and maintain the same experimental setup as [Jawanpuria et al., 2021]. The classifier is a single-layer neural network with softmax activation trained for 200 epochs. The cost function c between labels is the squared l2 distance between the fastText embeddings [Bojanowski et al., 2017] of the label names. The kernel function used in COT is k(x, y) = 1/(σ2 + c(x, y))^0.5.
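As a concrete reference for the label cost and kernel just described, a minimal NumPy sketch is given below; the function name and the assumption that the fastText vectors of the label names have already been stacked into an array are ours.

```python
import numpy as np

def label_cost_and_kernel(label_embeddings, sigma2=0.1):
    """label_embeddings: (n_classes, d) array, e.g. fastText vectors of the label names.
    Returns the ground cost c(l_i, l_j) = ||e_i - e_j||^2 and the kernel 1/sqrt(sigma2 + c)."""
    diff = label_embeddings[:, None, :] - label_embeddings[None, :, :]
    C = (diff ** 2).sum(-1)
    K = 1.0 / np.sqrt(sigma2 + C)
    return C, K

# Example with random stand-in embeddings for a 10-class problem.
C, K = label_cost_and_kernel(np.random.default_rng(0).normal(size=(10, 300)))
```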
For MNIST and CIFAR-10, we use the standard splits for training and testing and choose a random subset of size 10,000 from the training set for validation. For AWA, we use the train and test splits provided by [Jawanpuria et al., 2021] and randomly take 30% of the training data for validation.

[Figure 8 caption] The OT plans computed with the COT formulation 8 for the case of source and target as the conditional Gaussian distributions. For each value of the conditioned variable, we show the corresponding source, target and the obtained OT plan in a given color.

[Figure 9 caption] Marginals for selected genes 'ENSG00000165092.12', 'ENSG00000175175.5', 'ENSG00000173727.12', where the dosage is 100nM, in the in-sample setting.

[Figure 10 caption] Marginals for selected genes 'ENSG00000165092.12', 'ENSG00000175175.5', 'ENSG00000173727.12', where the dosage is 100nM, in the out-of-sample setting.

Following [Jawanpuria et al., 2021], we compare all methods using the Area Under Curve (AUC) score of the classifier on the test data after finding the best hyperparameters on the validation data. Based on the validation phase, the best Sinkhorn regularization hyperparameter in ϵ-OT [Frogner et al., 2015] is 0.2. For COT, we choose the hyperparameters (λ, σ2) based on the validation set: (0.1, 0.1) for MNIST, (0.1, 0.1) for CIFAR-10 and (10, 0.1) for AWA.

In Table 13, we also show the per-epoch computation time taken (on an RTX 4090 GPU) by the COT loss as a function of the minibatch size, which shows the computational efficiency of the COT loss.

Prompt Learning Let F = {f_m}_{m=1}^{M} denote the set of visual features for a given image and G_r = {g_n}_{n=1}^{N} denote the set of textual prompt features for class r. PLOT [Chen et al., 2023] learns the prompt features by performing an alternate optimization in which the inner optimization solves an OT problem between the empirical measure over the image features (of size 49) and that over the prompt features (of size 4). We denote the OT distance between the visual features of image x and the textual prompt features of class r by d_OT(x, r). The probability of assigning image x to class r is then computed as p(y = r | x) = exp((1 − d_OT(x, r))/τ) / Σ_{r′=1}^{T} exp((1 − d_OT(x, r′))/τ), where T denotes the total number of classes and τ is the softmax temperature. These prediction probabilities are then used in the cross-entropy loss of the outer optimization.

Following [Chen et al., 2023] and [Zhou et al., 2022b], we choose the model from the last training epoch. The PLOT baseline empirically found 4 to be the optimal number of prompt features, and we follow the same for our experiment. We also keep the neural network architecture and hyperparameters the same as in PLOT. For our experiment, we tune λ, the kernel type and the kernel hyperparameter used in COT. We choose the featurizer in Figure 5 as the same image encoder used for obtaining the visual features, and we use a 3-layer MLP architecture for ψ_r in equation 10. We choose λ from {1, 10, 100}; the kernel type from k(x, y) = exp(−‖x − y‖²/(2σ²)) (referred to as RBF), k(x, y) = (σ² + ‖x − y‖²)^(−0.5) (referred to as IMQ) and k(x, y) = (1 + ‖x − y‖²/σ²)^(−0.5) (referred to as IMQ2); and the kernel hyperparameter (σ²) from {median, 0.01, 0.1, 1, 10, 100}. The chosen hyperparameters (λ, kernel type, kernel hyperparameter) for the increasing number of shots (1 to 8) are (100, RBF, 10), (100, IMQ2, 1), (10, IMQ, 1) and (1, IMQ, 0.01).

Figure 11 shows attention maps corresponding to each of the prompts learnt by COT.
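For concreteness, the kernel grid listed above and the step converting per-class OT distances into class probabilities can be sketched as follows; the function names and the temperature value are illustrative assumptions, not values taken from the released code.

```python
import torch

def rbf(X, Y, sigma2):
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma2))
    return torch.exp(-torch.cdist(X, Y) ** 2 / (2.0 * sigma2))

def imq(X, Y, sigma2):
    # k(x, y) = (sigma2 + ||x - y||^2)^(-0.5)
    return (sigma2 + torch.cdist(X, Y) ** 2) ** -0.5

def imq2(X, Y, sigma2):
    # k(x, y) = (1 + ||x - y||^2 / sigma2)^(-0.5)
    return (1.0 + torch.cdist(X, Y) ** 2 / sigma2) ** -0.5

def class_probs(d_ot, tau=0.01):
    """d_ot: (batch, T) matrix of OT distances d_OT(x, r); returns p(y = r | x)
    via the softmax over (1 - d_OT(x, r)) / tau described above (tau illustrative)."""
    return torch.softmax((1.0 - d_ot) / tau, dim=-1)
```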
Table 12 presents an ablation study.\nTable 9: Out-of-sample setting: l 2 (PS) distances (lower is better) between predicted and ground truth distributions where the marker genes are computed at a per-drug level. As the abstract motivates, the main challenge in formulating OT over conditionals is the unavailability of the conditional distributions, which is handled by COT using MMD-based kernelized-least-squares terms computed over the joint samples that implicitly match the transport plan's marginals with the empirical conditionals. This results in the equivalence between Eqn (4) and Eqn (5). Furthermore, the statistical efficiency of MMD (Lemma 1) helps derive the consistency result (Thm. 1). Moreover, as discussed in § 4.2, the MMD metric is meaningful even for distributions with potentially non-overlapping support, enabling us to model the transport plan with implicit models for applications like those in § 5.2. Finally, the closed-form expression for MMD (discussed in § 2) helps in computational efficiency." }, { "figure_ref": [], "heading": "S3.2 The Choice of Baselines", "publication_ref": [ "b67" ], "table_ref": [ "tab_1", "tab_2" ], "text": "Other baselines for § 5.1.1 and 5.1.2: CKB and CondOT are inapplicable to Fig 2 . CondOT requires multiple samples for each conditioned variable ( § 3 and Table 1). Using CKB, Wasserstein distance conditioned at an x can't be computed, which is needed for § 5.1.1. Also, it does not provide an OT plan/map needed for § 5.1.2. Hence, these are inapplicable. We will add this clarification in § 5. For the downstream applications in § 5.2 and § 5.3, we compare with the state-of-the-art baselines. However, for completeness's sake, we extended other baselines to these applications. The results obtained by [Tabak et al., 2021] for Table 2 are ( 7.1758, 56.682, 559.42, 5588.14), for Table 3 (32.10±0.49, 27.3±4.61, 21.67±3.09) for K = 8. The results in the manuscript ( § 5) can be seen better than the above newly added. " }, { "figure_ref": [], "heading": "S4 NEGATIVE SOCIETAL IMPACT", "publication_ref": [], "table_ref": [], "text": "We present a formulation for solving optimal transport between conditional distributions. This problem has many socially beneficial applications, like predicting cell responses to cancer treatment, as shown in our paper. However, if a malicious task is selected, the proposed COT formulation may have a negative societal impact, similar to most other methods in machine learning." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The first author is supported by the Google PhD Fellowship. JSN would like to thank Fujitsu Limited, Japan, for the generous research grant. We thank Charlotte Bunne for the clarifying discussions on reproducing the CondOT method. We also thank Dr Pratik Jawanpuria, Kusampudi Venkata Datta Sri Harsha, Shivam Chandhok, Aditya Saibewar, Amit Chandhak and the anonymous reviewers who helped us improve our work. PM thanks Suvodip Dey and Sai Srinivas Kancheti for the support." }, { "figure_ref": [], "heading": "Proposed COT-based", "publication_ref": [], "table_ref": [], "text": "Figure 11: The leftmost is an image from the EuroSAT satellite dataset followed by visualization maps corresponding to each of the 4 prompts learnt (using COT loss 10). We can see that the 4 prompts diversely capture different visual features of the image. " } ]
[ { "authors": " Bojanowski", "journal": "", "ref_id": "b0", "title": "", "year": "2017" }, { "authors": "P Bojanowski; E Grave; A Joulin; T Mikolov", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b1", "title": "Enriching word vectors with subword information", "year": "2017" }, { "authors": " Bunne", "journal": "", "ref_id": "b2", "title": "", "year": "2022" }, { "authors": "C Bunne; A Krause; M Cuturi", "journal": "", "ref_id": "b3", "title": "Supervised training of conditional monge maps", "year": "2022" }, { "authors": " Bunne", "journal": "", "ref_id": "b4", "title": "", "year": "2021" }, { "authors": "C Bunne; S G Stark; G Gut; J S Del Castillo; K.-V Lehmann; L Pelkmans; A Krause; G Rätsch", "journal": "bioRxiv", "ref_id": "b5", "title": "Learning single-cell perturbation responses using neural optimal transport", "year": "2021" }, { "authors": " Bunne", "journal": "", "ref_id": "b6", "title": "", "year": "2023" }, { "authors": "C Bunne; S G Stark; G Gut; J S Del Castillo; M Levesque; K.-V Lehmann; L Pelkmans; A Krause; G Ratsch", "journal": "Nature Methods", "ref_id": "b7", "title": "Learning single-cell perturbation responses using neural optimal transport", "year": "2023" }, { "authors": " Cao", "journal": "", "ref_id": "b8", "title": "", "year": "2022" }, { "authors": "Z Cao; Q Xu; Z Yang; Y He; X Cao; Q Huang", "journal": "", "ref_id": "b9", "title": "Otkge: Multi-modal knowledge graph embeddings via optimal transport", "year": "2022" }, { "authors": "Chen ", "journal": "", "ref_id": "b10", "title": "", "year": "2023" }, { "authors": "G Chen; W Yao; X Song; X Li; Y Rao; K Zhang", "journal": "", "ref_id": "b11", "title": "Prompt learning with optimal transport for vision-language models", "year": "2023" }, { "authors": " Fatras", "journal": "", "ref_id": "b12", "title": "", "year": "2021" }, { "authors": "K Fatras; T Séjourné; N Courty; R Flamary", "journal": "", "ref_id": "b13", "title": "Unbalanced minibatch optimal transport; applications to domain adaptation", "year": "2021" }, { "authors": " Fatras", "journal": "", "ref_id": "b14", "title": "", "year": "2020" }, { "authors": "K Fatras; Y Zine; R Flamary; R Gribonval; N Courty", "journal": "", "ref_id": "b15", "title": "Learning with minibatch wasserstein: asymptotic and gradient properties", "year": "2020" }, { "authors": " Frogner", "journal": "", "ref_id": "b16", "title": "", "year": "2015" }, { "authors": "C Frogner; C Zhang; H Mobahi; M Araya; T A Poggio", "journal": "", "ref_id": "b17", "title": "Learning with a wasserstein loss", "year": "2015" }, { "authors": " Gordaliza", "journal": "", "ref_id": "b18", "title": "", "year": "2019" }, { "authors": "P Gordaliza; E D Barrio; G Fabrice; J.-M Loubes", "journal": "PMLR", "ref_id": "b19", "title": "Obtaining fairness using optimal transport theory", "year": "2019" }, { "authors": " Graham", "journal": "", "ref_id": "b20", "title": "", "year": "2020" }, { "authors": "B S Graham; F Niu; J L Powell", "journal": "", "ref_id": "b21", "title": "Minimax risk and uniform convergence rates for nonparametric dyadic regression", "year": "2020" }, { "authors": " Grünewälder", "journal": "", "ref_id": "b22", "title": "", "year": "2012" }, { "authors": "S Grünewälder; G Lever; A Gretton; L Baldassarre; S Patterson; M Pontil", "journal": "", "ref_id": "b23", "title": "Conditional mean embeddings as regressors", "year": "2012" }, { "authors": " Hahn", "journal": "", "ref_id": "b24", "title": "", "year": "2019" }, { "authors": "P R Hahn; V Dorie; J S 
Murray", "journal": "", "ref_id": "b25", "title": "Atlantic causal inference conference (ACIC) data analysis challenge", "year": "2019" }, { "authors": " Helber", "journal": "", "ref_id": "b26", "title": "", "year": "2019" }, { "authors": "P Helber; B Bischke; A Dengel; D Borth", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b27", "title": "Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification", "year": "2019" }, { "authors": " Jawanpuria", "journal": "", "ref_id": "b28", "title": "", "year": "2021" }, { "authors": "P Jawanpuria; N Satyadev; B Mishra", "journal": "", "ref_id": "b29", "title": "Efficient robust optimal transport with application to multi-label classification", "year": "2021" }, { "authors": " Kantorovich", "journal": "", "ref_id": "b30", "title": "", "year": "1942" }, { "authors": "L Kantorovich", "journal": "Doklady Akademii Nauk", "ref_id": "b31", "title": "On the transfer of masses (in russian)", "year": "1942" }, { "authors": "Lyons Kidger", "journal": "", "ref_id": "b32", "title": "", "year": "2020" }, { "authors": "P Kidger; T Lyons", "journal": "", "ref_id": "b33", "title": "Universal Approximation with Deep Narrow Networks", "year": "2020" }, { "authors": " Krizhevsky", "journal": "", "ref_id": "b34", "title": "", "year": "2009" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b35", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": " Lampert", "journal": "", "ref_id": "b36", "title": "", "year": "2009" }, { "authors": "C H Lampert; H Nickisch; S Harmeling", "journal": "", "ref_id": "b37", "title": "Learning to detect unseen object classes by between-class attribute transfer", "year": "2009" }, { "authors": "Cortes Lecun", "journal": "", "ref_id": "b38", "title": "", "year": "2010" }, { "authors": "Y Lecun; C Cortes", "journal": "", "ref_id": "b39", "title": "MNIST handwritten digit database", "year": "2010" }, { "authors": " Li", "journal": "", "ref_id": "b40", "title": "", "year": "2022" }, { "authors": "M Li; M Neykov; S Balakrishnan", "journal": "Electronic Journal of Statistics", "ref_id": "b41", "title": "Minimax optimal conditional density estimation under total variation smoothness", "year": "2022" }, { "authors": " Liu", "journal": "", "ref_id": "b42", "title": "", "year": "2021" }, { "authors": "S Liu; X Zhou; Y Jiao; J Huang", "journal": "", "ref_id": "b43", "title": "Wasserstein generative learning of conditional distribution", "year": "2021" }, { "authors": " Liu", "journal": "", "ref_id": "b44", "title": "", "year": "2020" }, { "authors": "Y Liu; L Zhu; M Yamada; Y Yang", "journal": "", "ref_id": "b45", "title": "Semantic correspondence as an optimal transport problem", "year": "2020" }, { "authors": "Ren Luo", "journal": "", "ref_id": "b46", "title": "", "year": "2021" }, { "authors": "Y.-W Luo; C.-X Ren", "journal": "", "ref_id": "b47", "title": "Conditional bures metric for domain adaptation", "year": "2021" }, { "authors": " Maurer", "journal": "", "ref_id": "b48", "title": "", "year": "2016" }, { "authors": "A Maurer", "journal": "", "ref_id": "b49", "title": "A vectorcontraction inequality for rademacher complexities", "year": "2016" }, { "authors": " Muandet", "journal": "", "ref_id": "b50", "title": "", "year": "2017" }, { "authors": "K Muandet; K Fukumizu; B Sriperumbudur; B Schölkopf", "journal": "Foundations and Trends® in Machine Learning", "ref_id": "b51", "title": "Kernel 
mean embedding of distributions: A review and beyond", "year": "2017" }, { "authors": " Neyshabur", "journal": "", "ref_id": "b52", "title": "", "year": "2017" }, { "authors": "B Neyshabur", "journal": "", "ref_id": "b53", "title": "Implicit regularization in deep learning", "year": "2017" }, { "authors": "Cuturi Peyré", "journal": "", "ref_id": "b54", "title": "", "year": "2019" }, { "authors": "G Peyré; M Cuturi", "journal": "Foundations and Trends® in Machine Learning", "ref_id": "b55", "title": "Computational optimal transport", "year": "2019" }, { "authors": " Radford", "journal": "", "ref_id": "b56", "title": "", "year": "2021" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark; G Krueger; I Sutskever", "journal": "", "ref_id": "b57", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": " Song", "journal": "", "ref_id": "b58", "title": "", "year": "2009" }, { "authors": "L Song; J Huang; A Smola; K Fukumizu", "journal": "", "ref_id": "b59", "title": "Hilbert space embeddings of conditional distributions with applications to dynamical systems", "year": "2009" }, { "authors": " Sriperumbudur", "journal": "", "ref_id": "b60", "title": "", "year": "2011" }, { "authors": "B K Sriperumbudur; K Fukumizu; G R G Lanckriet", "journal": "Journal of Machine Learning Research", "ref_id": "b61", "title": "Universality, characteristic kernels and RKHS embedding of measures", "year": "2011" }, { "authors": " Stathias", "journal": "", "ref_id": "b62", "title": "", "year": "2018" }, { "authors": "V Stathias; A M Jermakowicz; M E Maloof; M Forlin; W M Walters; R K Suter; M A Durante; S L Williams; J W Harbour; C.-H Volmar; N J Lyons; C Wahlestedt; R M Graham; M E Ivan; R J Komotar; J N Sarkaria; A Subramanian; T R Golub; S C Schürer; N G Ayad", "journal": "Nature Communications", "ref_id": "b63", "title": "Drug and disease signature integration identifies synergistic combinations in glioblastoma", "year": "2018" }, { "authors": " Séjourné", "journal": "", "ref_id": "b64", "title": "Unbalanced optimal transport meets sliced-wasserstein", "year": "2023" }, { "authors": " Séjourné", "journal": "", "ref_id": "b65", "title": "Sinkhorn divergences for unbalanced optimal transport", "year": "2023" }, { "authors": " Tabak", "journal": "", "ref_id": "b66", "title": "", "year": "2021" }, { "authors": "E G Tabak; G Trigila; W Zhao", "journal": "Machine Learning", "ref_id": "b67", "title": "Data driven conditional optimal transport", "year": "2021" }, { "authors": "Sepulchre Waarde", "journal": "", "ref_id": "b68", "title": "", "year": "2022" }, { "authors": "H V Waarde; R Sepulchre", "journal": "", "ref_id": "b69", "title": "Training lipschitz continuous operators using reproducing kernels", "year": "2022" }, { "authors": " Wolf", "journal": "", "ref_id": "b70", "title": "", "year": "2018" }, { "authors": "F A Wolf; P Angerer; F J Theis", "journal": "Genome Biology", "ref_id": "b71", "title": "Scanpy: large-scale single-cell gene expression data analysis", "year": "2018" }, { "authors": " Zhang", "journal": "", "ref_id": "b72", "title": "", "year": "2022" }, { "authors": "R Zhang; W Zhang; R Fang; P Gao; K Li; J Dai; Y Qiao; H Li", "journal": "", "ref_id": "b73", "title": "Tip-Adapter: Training-free adaption of clip for fewshot classification", "year": "2022" }, { "authors": " Zhou", "journal": "", "ref_id": "b74", "title": "Conditional prompt learning for vision-language models", "year": 
"2022" }, { "authors": " Zhou", "journal": "International Journal of Computer Vision", "ref_id": "b75", "title": "Learning to prompt for visionlanguage models", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 319.98, 459.79, 229.02, 15.05 ], "formula_id": "formula_0", "formula_text": "W c (s, t) ≡ min π∈P(Y×Y) c dπ, s.t. π 1 = s, π 2 = t,(1)" }, { "formula_coordinates": [ 2, 315, 589.59, 234, 28.71 ], "formula_id": "formula_1", "formula_text": "2 (s, t) ≡ E X∼s,X ′ ∼s [k(X, X ′ )] + E Y ∼t,Y ′ ∼t [k(Y, Y ′ )] -2E X∼s,Y ∼t [k(X, Y )]." }, { "formula_coordinates": [ 2, 347.43, 643.21, 199.71, 10.63 ], "formula_id": "formula_2", "formula_text": "(s, t) = max f ∈H k ;∥f ∥≤1 E s [f (X)] -E t [f (Y )]." }, { "formula_coordinates": [ 3, 63, 399.86, 234, 20.69 ], "formula_id": "formula_3", "formula_text": "(•|x), t Y ′ |X ′ (•|x) individually for each sample x." }, { "formula_coordinates": [ 3, 315, 284.89, 234, 56.95 ], "formula_id": "formula_4", "formula_text": "Y |X (•|x) and t Y ′ |X ′ (•|x) for a given x. W c s Y |X (•|x), t Y ′ |X ′ (•|x) is defined as follows. min π Y,Y ′ |X (•,•|x)∈P(Y×Y) Y×Y c dπ Y,Y ′ (•, •|x),(2)" }, { "formula_coordinates": [ 3, 315, 347.92, 234, 80.44 ], "formula_id": "formula_5", "formula_text": "s.t. π Y |X (•|x) = s Y |X (•|x), π Y ′ |X (•|x) = t Y ′ |X ′ (•|x), where π Y |X (•|x) and π Y ′ |X (•|x) denotes the marginals of π Y,Y ′ |X (•, •|x). If the cost is a valid met- ric, then W c s Y |X (•|x), t Y ′ |X ′ (•|x) is nothing but the Wasserstein distance between s Y |X (•|x) and t Y ′ |X ′ (•|x). While W c s Y |X (•|x), t Y ′ |X ′ (•|x" }, { "formula_coordinates": [ 3, 315, 477.4, 167.1, 14.17 ], "formula_id": "formula_6", "formula_text": "E X ′′ ∼a W c s Y |X (•|X ′′ ), t Y ′ |X ′ (•|X ′′ )" }, { "formula_coordinates": [ 3, 320.02, 520.12, 233.02, 133.95 ], "formula_id": "formula_7", "formula_text": "X min π Y,Y ′ |X (•,•|x)∈P(Y×Y) ∀x∈X Y×Y c dπ Y,Y ′ |X (•, •|x) da(x), s.t. π Y |X (•|x) = s Y |X (•|x), π Y ′ |X (•|x) = t Y ′ |X ′ (•|x) ∀x ∈ X ≡ min π Y,Y ′ |X :X →P(Y×Y) X Y×Y c dπ Y,Y ′ |X (•, •|x) da(x), s.t. π Y |X (•|x) = s Y |X (•|x), π Y ′ |X (•|x) = t Y ′ |X ′ (•|x) ∀x ∈ X .(3)" }, { "formula_coordinates": [ 4, 67.71, 272.42, 229.3, 25.08 ], "formula_id": "formula_8", "formula_text": "π Y |X (•|x) = s Y |X (•|x) ∀x ∈ X ⇐⇒ X MMD 2 π Y |X (•|x), s Y |X (•|x) ds X (x) = 0" }, { "formula_coordinates": [ 4, 71.12, 346.23, 225.88, 82.31 ], "formula_id": "formula_9", "formula_text": "min π Y,Y ′ |X :X →P(Y×Y) X Y×Y c dπ Y,Y ′ |X (•, •|x) da(x), + λ 1 X MMD 2 π Y |X (•|x), s Y |X (•|x) ds X (x) + λ 2 X MMD 2 π Y ′ |X (•|x), t Y ′ |X ′ (•|x) dt X ′ (x),(4)" }, { "formula_coordinates": [ 4, 63, 453.28, 234, 40.18 ], "formula_id": "formula_10", "formula_text": ") if λ 1 , λ 2 → ∞. Now, we use a standard result, E ∥G -h(X)∥ 2 = E ∥G -E[G|X]∥ 2 + E ∥E[G|X] -h(X)∥ 2" }, { "formula_coordinates": [ 4, 67.7, 531.1, 229.29, 28.21 ], "formula_id": "formula_11", "formula_text": "X ×Y MMD 2 π Y |X (•|x), δ y ds X,Y (x, y) = X MMD 2 π Y |X (•|x), s Y |X (•|x) ds X (x)+v(s)" }, { "formula_coordinates": [ 4, 71.12, 609.98, 219.42, 73.2 ], "formula_id": "formula_12", "formula_text": "min π Y,Y ′ |X :X →P(Y×Y) X Y×Y c dπ Y,Y ′ |X (•, •|x) da(x), + λ 1 X ×Y MMD 2 π Y |X (•|x), δ y ds X,Y (x, y) + λ 2 X ×Y MMD 2 π Y ′ |X (•|x), δ y dt X ′ ,Y ′ (x, y)." }, { "formula_coordinates": [ 4, 315, 168.8, 231.78, 23.18 ], "formula_id": "formula_13", "formula_text": "D t m = {(x ′ 1 , y ′ 1 ), . . . , (x ′ m , y ′ m )} from s X,Y and t X ′ ,Y ′ , respectively." }, { "formula_coordinates": [ 4, 316.2, 203.94, 232.8, 28.1 ], "formula_id": "formula_14", "formula_text": "X ×Y MMD 2 π Y |X (•|x), δ y ds X,Y (x, y) ≈ 1 m m i=1 MMD 2 π Y |X (•|x i ), δ yi ." 
}, { "formula_coordinates": [ 4, 347.59, 297.39, 168.81, 35.5 ], "formula_id": "formula_15", "formula_text": "X ×Y MMD 2 (πY |X (•|x),δy) ds X,Y (x,y) -1 m m i=1 MMD 2 (πY |X (•|xi),δy i ) ≤2 2 m log( 2 δ )." }, { "formula_coordinates": [ 4, 323.5, 386.95, 225.5, 47.34 ], "formula_id": "formula_16", "formula_text": "min π Y,Y ′ |X :X →P(Y×Y) X Y×Y c dπ Y,Y ′ |X (•,•|x)da(x) +λ1 1 m m i=1 MMD 2 (πY |X (•|xi),δy i ) +λ2 1 m m i=1 MMD 2 π Y ′ |X (•|x ′ i ),δ y ′ i .(6)" }, { "formula_coordinates": [ 4, 321.8, 592.27, 227.2, 52.91 ], "formula_id": "formula_17", "formula_text": "1. With probability at least 1 -δ, U[π m ] -U[π * ] ≤ 2λ 1 R m (Π) + 2λ 2 R ′ m (Π) + 6(λ 1 + λ 2 ) 2 m log 3 δ , where the Rademacher based com- plexity term, R m (Π)," }, { "formula_coordinates": [ 4, 334.93, 650.5, 214.07, 53.55 ], "formula_id": "formula_18", "formula_text": "1 m E max π∈Π m i=1 ϵ i MMD 2 π Y |X (•|X i ), δ Yi ; (X i , Y i ) are IID samples from s X,Y and ϵ i denotes the Rademacher random vari- able." }, { "formula_coordinates": [ 4, 336.12, 710.28, 192.69, 17.55 ], "formula_id": "formula_19", "formula_text": "1 m E max π∈Π m i=1 ϵ i MMD 2 π Y ′ |X (•|X ′ i ), δ Y ′ i ," }, { "formula_coordinates": [ 4, 363.98, 729.62, 185.02, 12.32 ], "formula_id": "formula_20", "formula_text": "(X ′ i , Y ′ i ) are IID samples from t X ′ ,Y ′ and ϵ i denotes the Rademacher random variable. Recall that π Y |X (•|x) and π Y ′ |X (•|x) denote the marginals of π Y,Y ′ |X (•, •|x)." }, { "formula_coordinates": [ 5, 82.93, 185.73, 214.08, 74.18 ], "formula_id": "formula_21", "formula_text": "U[π m ] -U[π * ] ≤ O(1/m 1/4 ). More importantly, when m → ∞, πm is an optimal solution to the original COT prob- lem (3) whenever Π is rich enough such that ∃π * ∈ Π ∋ π * Y |X (•|x) = s Y |X (•|x) and π * Y ′ |X (•|x) = t Y ′ |X ′ (•|x) ∀x ∈ X ." }, { "formula_coordinates": [ 5, 327.9, 99.19, 221.1, 46.27 ], "formula_id": "formula_22", "formula_text": "min ψ,θ X i=n,j=n i=1,j=1 c(li,lj )π ψ (li|lj ,x)π θ (lj |x)da(x) +λ1 1 m m i=1 MMD 2 ( n j=1 π ψ (•|lj , xi)π θ (lj |xi), δy i ) +λ2 1 m m i=1 MMD 2 π θ (•|x ′ i ), δ y ′ i ,(7)" }, { "formula_coordinates": [ 5, 439.16, 352.5, 109.84, 11.53 ], "formula_id": "formula_23", "formula_text": "Y ′ |Y,X (y ′ |y, x), π Y |X (y|x)" }, { "formula_coordinates": [ 5, 330.61, 533.88, 214.15, 57.62 ], "formula_id": "formula_24", "formula_text": "min θ,ψ X 1 m m i=1 c(y(x,η ′ i ;θ),y(x,ηi,η ′ i ;θ,ψ))da(x) +λ1 1 m m i=1 MMD 2 1 m m j=1 δ y ( x i ,η j ,η ′ j ;θ,ψ ) ,δy i +λ2 1 m m i=1 MMD 2 1 m m j=1 δ y(x ′ i ,η ′ j ;θ) ,δ y ′ i . (8" }, { "formula_coordinates": [ 5, 544.76, 579.08, 4.24, 8.74 ], "formula_id": "formula_25", "formula_text": ")" }, { "formula_coordinates": [ 6, 63, 470.53, 234, 22.7 ], "formula_id": "formula_26", "formula_text": "i ; θ) ∼ π θ (•|x) and y(x, η i , η ′ i ; θ, ψ) ∼ π ψ (•|y(x, η ′ i ; θ), x) , for i = 1, • • • , m" }, { "formula_coordinates": [ 6, 133.82, 506.4, 112.7, 9.96 ], "formula_id": "formula_27", "formula_text": "W c (s Y |X (•|x), t Y ′ |X ′ (•|x))." }, { "formula_coordinates": [ 6, 315, 719.24, 234, 21.61 ], "formula_id": "formula_28", "formula_text": "ρy i +(1-ρ)y, where y ∼ π ψ (•|y i , x i )." 
}, { "formula_coordinates": [ 8, 327.42, 570.98, 217.33, 29.28 ], "formula_id": "formula_29", "formula_text": "min ψ,θ 1 m m q=1 i=n,j=n i=1,j=1 c(li,lj )π ψ (li|lj ,xq)f θ (lj |xq) +λ1 1 m m i=1 MMD 2 ( n j=1 π ψ (•|lj ,xi)f θ (lj |xi),δy i ), (9" }, { "formula_coordinates": [ 8, 544.76, 589.89, 4.24, 8.74 ], "formula_id": "formula_30", "formula_text": ")" }, { "formula_coordinates": [ 12, 111.57, 248.26, 397.73, 30.32 ], "formula_id": "formula_31", "formula_text": "X ×Y MMD 2 π Y |X (•|x), δ y ds X,Y (x, y) - 1 m m i=1 MMD 2 π Y |X (•|x i ), δ yi ≤ 2 2 m log 2 δ ." }, { "formula_coordinates": [ 12, 63, 337.8, 486, 27.3 ], "formula_id": "formula_32", "formula_text": "∥µ k (b)∥ ≤ 1 ∀ b ∈ P(Y). Hence, 0 ≤ MMD 2 π Y |X (•|x), s Y |X (•|x) = ∥µ k π Y |X (•|x) -µ k s Y |X (•|x) ∥ 2 ≤ ∥µ k π Y |X (•|x) + µ k s Y |X (•|x) ∥ 2 ≤ ∥µ k π Y |X (•|x) ∥ + ∥µ k s Y |X (•|x) ∥ 2 ≤ 4" }, { "formula_coordinates": [ 12, 71.03, 379.37, 394.03, 14.67 ], "formula_id": "formula_33", "formula_text": "X ×Y MMD 2 π Y |X (•|x), δ y ds X,Y (x, y) -1 m m i=1 MMD 2 π Y |X (•|x i ), δ yi ≤ 2 2 m log 2 δ ." }, { "formula_coordinates": [ 12, 63, 471.62, 334.77, 43.46 ], "formula_id": "formula_34", "formula_text": "f : X → H, let h i : H → R have Lipschitz norm L. Then E sup f ∈F i ϵ i h i (f (x i )) ≤ √ 2L i,k ϵ i,k f k (x i )," }, { "formula_coordinates": [ 12, 373.08, 527.68, 165.5, 9.65 ], "formula_id": "formula_35", "formula_text": "f k (x i ) is the k-th component of f (x i )." }, { "formula_coordinates": [ 12, 126.1, 596.28, 422.9, 72.06 ], "formula_id": "formula_36", "formula_text": "U[π * ], it follows that 0 ≤ U[π m ] -U[π * ]. 0 ≤ U[π m ] -U[π * ] = U[π m ] -Ûm [π m ] + Ûm [π m ] -Ûm [π * ] + Ûm [π * ] -U[π * ] ≤ U[π m ] -Ûm [π m ] + Ûm [π * ] -U[π * ] (∵ πm is the solution of 6) ≤ max π∈Π (U[π] -Ûm [π]) + Ûm [π * ] -U[π * ] (S11)" }, { "formula_coordinates": [ 12, 223.49, 716.6, 325.51, 22.31 ], "formula_id": "formula_37", "formula_text": "Ûm [π * ] -U[π * ] ≤ 2(λ 1 + λ 2 ) 2 m log 2 δ (S12)" }, { "formula_coordinates": [ 13, 63, 87.11, 338.73, 9.65 ], "formula_id": "formula_38", "formula_text": "Let Z i denote the random variable (X i , Y i ). Let Z = {Z 1 , • • • , Z i , • • • , Z m }" }, { "formula_coordinates": [ 13, 64.95, 97.49, 484.05, 22.27 ], "formula_id": "formula_39", "formula_text": "Z ′ = {Z 1 , • • • , Z i ′ , • • • , Z m }. Let Ûm [π]" }, { "formula_coordinates": [ 13, 113.71, 149.79, 435.3, 129.44 ], "formula_id": "formula_40", "formula_text": "max π∈Π U[π] -Ûm [π] -max π∈Π U[π] -Û′ m [π] ≤ max π∈Π -Ûm [π] + Û′ m [π] ≤ λ 1 m max π∈Π MMD 2 (π Y |X (•|x i ), δ yi ) -MMD 2 (π Y |X (•|x ′ i ), δ y ′ i ) + λ 2 m max π∈Π MMD 2 (π Y ′ |X (•|x i ), δ yi ) -MMD 2 (π Y ′ |X (•|x ′ i ), δ y ′ i ) (Using triangle inequality) ≤ 8(λ 1 + λ 2 ) m ,(S13)" }, { "formula_coordinates": [ 13, 63, 286.29, 486, 108.05 ], "formula_id": "formula_41", "formula_text": "Ȳ (•|x i ), δ yi ) + MMD(π Ȳ (•|x ′ i ), δ y ′ i ) ≤ 4 and MMD(π Ȳ (•|x i ), δ yi ) -MMD(π Ȳ (•|x ′ i ), δ y ′ i ) ≤ 2 for Ȳ ∈ {Y, Y ′ }. Using the above in McDiarmid's inequality, max π∈Π U[π] -Ûm [π] ≤ E max π∈Π U[π] -Ûm [π] + 4(λ 1 + λ 2 ) 2 m log 1 δ . (S14) Let Z i ≡ (X i , Y i ) ∼ s X,Y and Z = {Z 1 , • • • , Z m }. Let Z ′ i ≡ (X ′ i , Y ′ i ) ∼ t X,Y and Z ′ = {Z ′ 1 , • • • , Z ′ m }. 
Let (ϵ i ) i∈{1,••• ," }, { "formula_coordinates": [ 13, 63, 416.48, 489.64, 25.29 ], "formula_id": "formula_42", "formula_text": "E[maxπ∈Π U [π]-Ûm[π]]≤2λ1 1 m E Z,ϵ[ maxπ∈Π m i=1 ϵi∥µ k (π Y |X (•|Xi))-ϕ(Yi)∥ 2 ] Rm(Π) +2λ2 1 m E Z ′ ,ϵ [maxπ∈Π m i=1 ϵi∥µ k (π Y ′ |X (•|Xi))-ϕ(Yi)∥ 2 ] R ′ m (Π)" }, { "formula_coordinates": [ 13, 163.46, 496.83, 385.54, 22.31 ], "formula_id": "formula_43", "formula_text": "U[π m ] -U[π * ] ≤ 2λ 1 R m (Π) + 2λ 2 R ′ m (Π) + 6(λ 1 + λ 2 ) 2 m log 3 δ . (S16)" }, { "formula_coordinates": [ 13, 120.78, 556.12, 107.04, 11.23 ], "formula_id": "formula_44", "formula_text": "g w (x, N ) ∈ R 2d ∼ π(•|x)" }, { "formula_coordinates": [ 13, 210.06, 591.98, 238.81, 11.53 ], "formula_id": "formula_45", "formula_text": "π Y ′ |X (•|x). Let ζ i (π Y |X ) ≡ ∥µ k (π Y |X (•|x i )) -ϕ(y i )∥ 2 ." }, { "formula_coordinates": [ 13, 67.17, 625.74, 481.83, 110.61 ], "formula_id": "formula_46", "formula_text": "ζ i (π Y |X ) -ζ i (π ′ Y |X ) ≤ 4 ∥µ k π Y |X (•|x i ) -ϕ(y i )∥ -∥µ k π ′ Y |X (•|x i ) -ϕ(y i )∥ (With a normalized kernel) ≤ 4∥µ k (π Y |X (•|x i )) -µ k (π ′ Y |X (•|x i ))∥ (Using triangle inequality) = 4∥E [ϕ(g w,1 (x i , N ))] -E [ϕ(g w ′ ,1 (x i , N ))] ∥ ≤ 4E [∥ϕ(g w,1 (x i , N )) -ϕ(g w ′ ,1 (x i , N ))∥] ∵ (Jensen's inequality) ≤ 4E [∥g w,1 (x i , N ) -g w ′ ,1 (x i , N )∥] ∵ (non-expansive kernel) ≤ 4 [∥g w,1 (x i , n i,1 ) -g w ′ ,1 (x i , n i,1 )∥] ∵ n i,j ≡ arg max n [∥g w,j (x i , n) -g w ′ ,j (x i , n)∥] . (S17)" }, { "formula_coordinates": [ 14, 63, 81.36, 478.86, 21.82 ], "formula_id": "formula_47", "formula_text": "R m (Π) ≤ 4 √ 2 m E Z,ϵ max w m i=1 d j=1 r ij g j w,1 (x i , n i,1 ) and R ′ m (Π) ≤ 4 √ 2 m E Z ′ ,ϵ max w m i=1 d j=1 r ij g j w,2 (x i , n i,2" }, { "formula_coordinates": [ 14, 155.74, 153.04, 206.27, 17.8 ], "formula_id": "formula_48", "formula_text": "R m (Π) ≤ O(1/ √ m) and R ′ m (Π) ≤ O(1/ √ m)." }, { "formula_coordinates": [ 14, 155.96, 170.6, 115.59, 11.23 ], "formula_id": "formula_49", "formula_text": "U[π m ] -U[π * ] ≤ O(1/m 1/4" }, { "formula_coordinates": [ 14, 121.83, 369.48, 427.17, 36.05 ], "formula_id": "formula_50", "formula_text": "min π Y,Y ′ |X :X →P(Y×Y) X Y×Y c dπ Y,Y ′ |X (•, •|x)da(x)+λ 1 MMD 2 π Y |X (•|x)s(x), s(x, y) +λ 2 MMD 2 π Y ′ |X (•|x)t(y), t(x, y) .(S18)" }, { "formula_coordinates": [ 15, 67.79, 138.51, 157.77, 22.71 ], "formula_id": "formula_51", "formula_text": "y i (x i ; θ) = π θ (•|x i , z i )∀i ∈ [m]. 4: Sample z ′ i ∼ η ∀i ∈ [m]." }, { "formula_coordinates": [ 15, 94.88, 160.85, 185.72, 12.32 ], "formula_id": "formula_52", "formula_text": "y i (x i ; θ, ψ) = π ψ (•|y i (x i ; θ), x i , z ′ i ); ∀i ∈ [m]" }, { "formula_coordinates": [ 15, 67.79, 194.63, 414.38, 55.48 ], "formula_id": "formula_53", "formula_text": "min θ,ψ 1 m m i=1 c (y i (x i ; θ) , y i (x i ; θ, ψ)) + λ 1 m m i=1 MMD 2   1 m m j=1 δ yj (xi;θ,ψ) , δ yi   . 7:" }, { "formula_coordinates": [ 18, 293.71, 547.57, 157.02, 17.08 ], "formula_id": "formula_54", "formula_text": "(y = r|x) = exp ((1-d OT (x,r)/τ )) T r=1 exp ((1-d OT (x,r)/τ ))" } ]
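The sample-based objectives above (e.g., (6)–(8)) combine a transport-cost term with squared-MMD penalties of the form MMD²(π(·|x_i), δ_{y_i}) that push the plan's conditional marginals towards the observed targets. The following is a minimal, illustrative PyTorch sketch of such a penalty — not the authors' implementation; the generator `gen`, its noise dimension, and the RBF bandwidth are placeholder assumptions.

```python
import torch

def mmd2(x, y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between sample sets
    x: (n, d) and y: (m, d), using an RBF kernel with bandwidth sigma."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def marginal_penalty(gen, xs, ys, noise_dim=4, n_samples=32, sigma=1.0):
    """(1/m) * sum_i MMD^2( empirical law of gen(x_i, eta), delta_{y_i} ):
    the sample-based marginal-matching term used in estimators like (6).
    `gen` is a hypothetical implicit conditional generator: gen(x, eta) -> y."""
    total = 0.0
    for x, y in zip(xs, ys):
        eta = torch.randn(n_samples, noise_dim)            # noise input to the implicit model
        x_rep = x.unsqueeze(0).expand(n_samples, -1)        # repeat the conditioned variable
        total = total + mmd2(gen(x_rep, eta), y.unsqueeze(0), sigma)
    return total / len(xs)
```

In a full objective this penalty would be added, with weights λ1 and λ2, to a Monte Carlo estimate of the conditional transport cost; only the single-marginal building block is shown here.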
Consistent Optimal Transport with Empirical Conditional Measures
Given samples from two joint distributions, we consider the problem of Optimal Transportation (OT) between them when conditioned on a common variable. We focus on the general setting where the conditioned variable may be continuous, and the marginals of this variable in the two joint distributions may not be the same.
Piyushi Manupriya; Rachit Keerti Das; Sayantan Biswas; J. Saketha Nath
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the proposed factorization and implicit modelling for learning the transport plan π Y,Y ′ |X (y, y ′ |x) through the factors π θ (y|x)π ψ (y ′ |y, x), parameterized by fixed-architecture neural networks 4.2.2. η, η ′ ∼ N (0, 1) denotes the noise input to the implicit models.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: As m ∈ {100, 200, 400, 800} increases from left to right, we plot the true Wasserstein distance in red and mark the means (in orange) and medians (in green) of the distances estimated using[Tabak et al., 2021] and the proposed COT estimator. The statistics are obtained from runs over multiple seeds. The corresponding MSEs are {245.530, 290.458, 89.715, 27.687} and {22.711, 6.725, 8.052, 1.580} respectively. It can be seen that the proposed COT objective converges to the true Wasserstein faster than[Tabak et al., 2021].", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3: Barycenters shown on varying ρ ∈ [0, 1] with colors interpolated between red and blue. Left: Conditional barycenter learnt by the proposed COT method. Right: Analytical barycenter.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FeaturizerFigure 5 :5Figure 5: We pose learning prompts in few-shot classification as the conditional optimal transport problem. The figure shows our neural network diagram for learning conditional optimal transport plans.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The objective over the training epochs curve.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure6: Predictions of the implicit conditional generator trained with the COT loss 8 and the alternate formulation S18 (with MMD regularization over joints). The plots show the effect of different σ 2 hyperparameters used in the RBF kernel as 1, 10 and 100, respectively. We quantitatively evaluate the methods using Explained Variance (between -∞ and 1; higher is better). With the proposed COT loss, the explained variance scores are 0.94, 0.94 and 0.95, respectively. With the alternate formulation S18, the explained variance scores are 0.63, 0.73 and 0.85. This shows the superiority of the proposed COT formulation 8.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "considers special applications where multiple samples from s Y |X (•|x), t Y ′ |X ′ (•|x) are available at each x. 
They learn a transport map as a function of x by solving standard OT problems between s Y |X", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "l 2 (PS) distances (lower is better) between predicted and ground truth distributions", "figure_data": "DosageCellOT CondOT Proposed10nM1.22820.37890.3046100nM1.27080.25150.24211000nM0.86530.72900.364710000nM4.90350.38190.2607Average2.0670.43530.2930", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "MMD distances (lower is better) between predicted and ground truth distributions", "figure_data": "DosageCellOT CondOT Proposed10nM0.018110.006540.00577100nM0.01700.005550.004641000nM0.01540.012900.0064710000nM0.16020.010340.00840Average0.05260.008830.00632", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "AUC on test data (higher is better). We compare the performance of COT against other OTbased losses ϵ-OT ([Frogner et al., 2015]) and CKB([Luo and Ren, 2021]).", "figure_data": "Datasetϵ-OT CKB ProposedMNIST0.890.990.99CIFAR100.660.730.79Animals with Attribute0.680.640.86compared to solving an OT problem separatelyfor each (image, class) pair. Hence, we pose thisprompt learning task as a COT problem, where theconditional transport plans are modelled explicitly( § 4.2.1).", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Prompt Learning experiment: Average accuracy (higher is better) on EuroSAT dataset. The classlevel context brought by the proposed COT method allows it to outperform the state-of-the-art PLOT baseline, especially in the challenging case of lesser K.", "figure_data": "CoOpPLOTProposedK = 1 52.12 ± 5.46 54.05 ± 5.95 61.20 ± 3.65K = 2 59.00 ± 3.48 64.21 ± 1.90 64.67 ± 2.37K = 4 68.61 ± 3.54 72.36 ± 2.29 72.53 ± 2.60K = 8 77.08 ± 2.42 78.15 ± 2.65 78.57 ± 2.38factor in the transport plan. Figure (5) depicts theproposed setup. 
Our formulation for prompt learningfor K-shot classification (only K training images perclass) is as follows.min ψr1 KK q=1i=N,j=M i=1,j=1c(lir,ljqr)π ψr (lir|ljqr,xqr)vj+λ1MMD 2 ( K q=1M j=1 π", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Insample setting: l 2 (PS) distances (lower is better) between predicted and ground truth distributions where the marker genes are computed at a per-dose level.", "figure_data": "DosageCellOT CondOT Proposed10nM0.71640.47180.3682100nM0.51980.32670.30511000nM0.70750.69820.391710000nM4.81310.34570.2488Average1.68920.46060.3284", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Insample setting: MMD distances (lower is better) between predicted and ground truth distributions where the marker genes are computed at a per-dose level.", "figure_data": "DosageCellOT CondOT Proposed10nM0.00890.00640.00549100nM0.00690.00540.004941000nM0.01170.010380.0058610000nM 0.169400.010510.01011Average 0.049220.008170.00660", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Out-of-sample setting: MMD distances (lower is better) between predicted and ground truth distributions where the marker genes are computed at a per-drug level.", "figure_data": "DosageCellOT CondOT Proposed10nM2.08890.37890.3376100nM2.00240.21690.19141000nM1.25960.99281.00210000nM 5.970134.90168.2417DosageCellOT CondOT Proposed10nM0.03690.00650.0071100nM0.03420.00610.00701000nM0.02150.01780.015110000nM 0.23040.39170.3591S3 MORE DETAILS", "figure_id": "tab_8", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "are (0.2438, 0.587, 0.582, 0.600) and for Table 4 are 0.49(MNIST), 0.52(CIFAR10), 0.52(AWA). As Tables 2 and 3 need an OT map, CKB can't be applied. CondOT doesn't apply to Table 4 as they need multiple samples for each conditioned variable. Table 5 results with ([Tabak et al., 2021], CKB, CondOT) are: (29.13±0.90, 29.7± 2.41, 23.97±0.98) for K = 1, (38.87±2.00, 26.1±5.31, 21.8±6.39) for K = 2, (33.07±1.94, 28.87±1.58, 22.3±6.98) for K = 4 and", "figure_data": "", "figure_id": "tab_9", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Out-of-sample setting: l 2 (PS) distances (lower is better) between predicted and ground truth distributions where the marker genes are computed at a per-dose level.", "figure_data": "DosageCellOT CondOT Proposed10nM1.21300.47180.3950100nM0.85610.28460.25221000nM0.97070.99541.077510000nM 5.873733.52117.1487", "figure_id": "tab_10", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Out-of-sample setting: MMD distances (lower is better) between predicted and ground truth distributions where the marker genes are computed at a per-dose level.", "figure_data": "DosageCellOTCondOT Proposed10nM0.016480.006410.00638100nM0.011330.0063250.005711000nM0.016070.014960.0146210000nM 0.242340.418450.34246", "figure_id": "tab_11", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Time (in s) for COT loss 7 computation shown for increasing minibatch size. The computation time reported is based on 3 independent runs on the CIFAR-10 dataset.", "figure_data": "16645121024Time (s) 0.229±0.0013 0.229±0.0006 0.227±0.0004 0.225±0.0021", "figure_id": "tab_12", "figure_label": "13", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "[Kantorovich, 1942]", "Explanation": "The cited work by Kantorovich serves as a foundational tool for comparing distributions, which the citing paper leverages in its research on machine learning applications involving distribution matching."}, {"Category": "Supporting Evidence", "Citation": "[Liu et al., 2020]", "Explanation": "The cited work by Liu et al. provides a specific example of the use of OT in machine learning applications, which the citing paper builds upon in its research on the need for comparing conditional distributions."}, {"Category": "Extension or Continuation", "Citation": "[Li et al., 2022]", "Explanation": "The cited work by Li et al. discusses the challenges of estimating conditionals in machine learning, which the citing paper expands upon in its research on the need for comparing conditional distributions in such applications."}, {"Category": "Methodological Basis", "Citation": "[Hahn et al., 2019]", "Explanation": "The cited work by Hahn et al. (2019) is used to highlight the issue of applying OT between the relevant conditionals in medical applications, where the distributions of input covariates in the two joints differ. The citing paper adopts this example to demonstrate the challenge of comparing conditionals in such cases."}, {"Category": "Supporting Evidence", "Citation": "[Tabak et al., 2021, Bunne et al., 2022]", "Explanation": "The cited works have considered special cases of the problem of estimating OT between conditionals and have focused on learning conditional optimal transport maps, which provide a basis for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[Chen et al., 2023]", "Explanation": "The citing paper adopts the prompt learning method proposed in the cited work to test the novel approach of posing the problem as a classical optimal transport problem."}, {"Category": "Methodological Basis", "Citation": "[Tabak et al., 2021], [Bunne et al., 2022]", "Explanation": "The cited works model the optimal transport map, which the citing paper adopts in their research to model the transport plan for more general inferences in the general setting of continuous conditioned variables and different marginals in the two joint distributions."}, {"Category": "Methodological Basis", "Citation": "[Sriperumbudur et al., 2011]", "Explanation": "The cited work provides the definition of a characteristic kernel function, which serves as the basis for the MMD metric used in the citing paper to measure the distance between probability measures."}, {"Category": "Supporting Evidence", "Citation": "[Frogner et al., 2015]", "Explanation": "The cited work presents an estimator for the case of discrete y values, which is a special case of the problem discussed in the citing paper. The cited work provides a foundational method for solving the conditional OT problem in this particular scenario."}, {"Category": "Supporting Evidence", "Citation": "[Tabak et al., 2021]", "Explanation": "The cited work presents a model for the conditional OT problem that allows for flexibility in the ground cost and the ability to work with a single sample per conditioned variable. 
This work provides a useful approach for solving the problem in a more general context."}, {"Category": "Supporting Evidence", "Citation": "[Luo and Ren, 2021]", "Explanation": "The cited work characterizes the conditional distribution discrepancy using the CKB metric, which is a useful method for understanding the problem of conditional OT. However, the cited work does not discuss sufficient conditions for the joint Gaussian assumption to hold, which is a limitation in the approach."}, {"Category": "Supporting Evidence", "Citation": "[Bunne et al., 2022]", "Explanation": "The cited work allows for a single sample per conditioned variable in the conditional OT problem, which is a useful method for solving the problem in a more general context."}, {"Category": "Supporting Evidence", "Citation": "[Tabak et al., 2021]", "Explanation": "The cited work by Tabak et al. provides a min-max adversarial formulation with a KL divergence-based regularization to learn a transport map, which the citing paper uses as a basis for their own research on estimating transport plans in a more stable and convergent manner."}, {"Category": "Methodological Basis", "Citation": "[Tabak et al., 2021]", "Explanation": "The cited work provides a formulation for learning transport plans, which the citing paper adopts in their own research to develop a more widely applicable method for learning transport plans using implicit models."}, {"Category": "Methodological Basis", "Citation": "[Li et al., 2022]", "Explanation": "The cited work provides a discussion on the difficulty of estimating conditional densities, which serves as a methodological basis for the citing paper in understanding the challenges of estimating COT from samples."}, {"Category": "Methodological Basis", "Citation": "[Graham et al., 2020]", "Explanation": "The cited work presents a theorem on the estimation errors in conditional density estimation, which the citing paper uses to highlight the limitations of estimating COT from samples."}, {"Category": "Methodological Basis", "Citation": "[Song et al., 2009]", "Explanation": "The cited work provides a method for estimating RKHS embeddings of conditional measures at a certain rate, which the citing paper uses to motivate the use of distance penalization in COT constraints."}, {"Category": "Methodological Basis", "Citation": "[Gr\u00fcnew\u00e4lder et al., 2012]", "Explanation": "The cited work presents a method for estimating RKHS embeddings of conditional measures, which the citing paper uses to support the enforcement of constraints in COT (3) by penalizing the distance between their RKHS embeddings."}, {"Category": "Methodological Basis", "Citation": "[Muandet et al., 2017]", "Explanation": "The cited work provides the standard result used in the citing paper to calculate the MMD between the kernel mean embedding of two distributions, which forms the basis for the analysis conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[Liu et al., 2021], [Kidger and Lyons, 2020]", "Explanation": "The cited works provide the basis for the use of neural conditional generators in the citing paper, as they establish the universality of these models and the properties of the Gaussian kernel that are necessary for the study conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[Peyr\u00e9 and Cuturi, 2019]", "Explanation": "The cited work provides the analytical solution of the barycenter, which is used in the citing paper to calculate the barycenter and 
verify the estimator."}, {"Category": "Extension or Continuation", "Citation": "[Gordaliza et al., 2019]", "Explanation": "The cited work provides the expression for the optimal transport map, which the citing paper uses to compute the barycenter and further extend the research on barycenter estimation."}, {"Category": "Supporting Evidence", "Citation": "[Tabak et al., 2021]", "Explanation": "The cited work provides the baseline for comparison in terms of the MSE values obtained for the barycenter estimation, which the citing paper uses to highlight the improved performance of the proposed COT estimator."}, {"Category": "Extension or Continuation", "Citation": "The proposed COT estimator", "Explanation": "The citing paper extends the research by introducing a new estimator for barycenter estimation that is shown to converge faster than the method presented in the cited work."}, {"Category": "Methodological Basis", "Citation": "[Bunne et al., 2021]", "Explanation": "The cited work introduces optimal transport as a method for obtaining a mapping between control and target cell distributions, which the citing paper adopts in their study of single-cell molecular responses to treatment drugs."}, {"Category": "Methodological Basis", "Citation": "[Bunne et al., 2022]", "Explanation": "The cited work learns optimal transport maps conditioned on drug dosage, which the citing paper uses to generate samples from perturbed cell distributions based on the drug dosage given to an unperturbed cell."}, {"Category": "Data Source", "Citation": "[Bunne et al., 2022]", "Explanation": "The cited work provides the dataset used in the study conducted in the citing paper, which is the cancer drug Givinostat applied at different dosage levels and samples of perturbed and unperturbed cells with gene-expression levels of highly variable genes."}, {"Category": "Data Source", "Citation": "[Bunne et al., 2021]", "Explanation": "The cited work also provides a dataset that is used in the study conducted in the citing paper, which is the cancer drug Givinostat applied at different dosage levels and samples of perturbed and unperturbed cells with gene-expression levels of highly variable genes."}, {"Category": "Supporting Evidence", "Citation": "[Bunne et al., 2022]", "Explanation": "The cited work by Bunne et al. provides the cost function and the MMD regularization used in the citing paper, which serves as a foundational element for the research conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[Bunne et al., 2021]", "Explanation": "The cited work by Bunne et al. serves as a baseline for the performance evaluation in the citing paper, indicating an extension or continuation of the research in the field of cell-type inference."}, {"Category": "Extension or Continuation", "Citation": "[Bunne et al., 2022]", "Explanation": "The cited work by Bunne et al. is also used as a baseline for performance evaluation in the citing paper, further extending the research in the field of cell-type inference."}, {"Category": "Data Source", "Citation": "[Stathias et al., 2018]", "Explanation": "The cited work by Stathias et al. 
provides the Perturbation Signatures used in the citing paper for performance evaluation, indicating a reliance on external data for the study conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[Bunne et al., 2022]", "Explanation": "The cited work CondOT is used to support the claim that COT outperforms the state-of-the-art baselines in terms of l2 (PS) and MMD distances."}, {"Category": "Supporting Evidence", "Citation": "[Bunne et al., 2021]", "Explanation": "The cited work CellOT is also used to support the claim that COT outperforms the state-of-the-art baselines in terms of l2 (PS) and MMD distances."}, {"Category": "Methodological Basis", "Citation": "[Chen et al., 2023]", "Explanation": "The cited work PLOT is used as a methodological basis for the citing paper, as it incorporates an OT-based loss to learn a downstream classifier for vision-language models in a limited supervision setting."}, {"Category": "Methodological Basis", "Citation": "[Chen et al., 2023]", "Explanation": "The cited work by Chen et al. provides the class-level information that the citing paper uses to condition the learning of optimal transport plans, which is a key methodological element in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2023)", "Explanation": "The cited work, PLOT, provides the setup and methodology for learning the explicit model over textual prompt features in the citing paper."}, {"Category": "Data Source", "Citation": "(Zhou et al., 2022b)", "Explanation": "The experimental setup used in the cited work, CoOp, is employed in the citing paper to learn prompts and train the model."}, {"Category": "Supporting Evidence", "Citation": "[Helber et al., 2019]", "Explanation": "The cited work on the EuroSAT benchmark dataset provides a benchmark dataset for evaluating the performance of the proposed COT formulation in a more difficult setting with fewer training images per class."}, {"Category": "Methodological Basis", "Citation": "[Muandet et al., 2017]", "Explanation": "The cited work provides the concept of kernel mean embedding and the associated canonical feature map, which the citing paper uses in its research to compute the MMD distance between kernel embeddings."}, {"Category": "Data Source", "Citation": "[Bunne et al., 2023]", "Explanation": "The cited work provides the preprocessed dataset used in the experiments conducted in the citing paper. 
The dataset is publicly available for download via a link provided in the citation."}, {"Category": "Data Source", "Citation": "[Wolf et al., 2018]", "Explanation": "The cited work provides the scanpy library and the rank genes groups function used in the citing paper to rank and obtain marker genes for the drug Givinostat."}, {"Category": "Methodological Basis", "Citation": "[Bunne et al., 2022]", "Explanation": "The cited work provides the evaluation scheme and the process of computing marker genes, which the citing paper follows in their research."}, {"Category": "Extension or Continuation", "Citation": "[Bunne et al., 2022]", "Explanation": "The citing paper extends the in-sample experiment done in the cited work by tuning the hyperparameters on the training data split and finding the optimal choices for the \u03bb and IMQ kernel parameters."}, {"Category": "Data Source", "Citation": "The scale of terms in the COT objective", "Explanation": "The cited work is the source of the scale of terms in the COT objective, which the citing paper uses in their research."}, {"Category": "Data Source", "Citation": "The transport plan and the transport map", "Explanation": "The cited work is the source of the transport plan and transport map, which the citing paper models in their inference procedure."}, {"Category": "Supporting Evidence", "Citation": "The procedure for generating samples and measuring metrics", "Explanation": "The cited work provides the procedure for generating samples and measuring metrics, which the citing paper follows in their inference process."}, {"Category": "Methodological Basis", "Citation": "[Bunne et al., 2022]", "Explanation": "The cited work by Bunne et al. provides a quantitative evaluation method using the MMD distance and l2 distance metrics, which the citing paper adopts to evaluate the performance of their own research."}, {"Category": "Data Source", "Citation": "(of size m1), and \u03bd \u2032 be the set of The plots show the effect of different \u03c32 hyperparameters used in the RBF kernel as 1, 10 and 100, respectively.", "Explanation": "The cited work provides the data sets of unperturbed and perturbed cell populations (of size m1 and m2), which the citing paper uses in their research to evaluate the performance of their method."}, {"Category": "Extension or Continuation", "Citation": "With the proposed COT loss, the explained variance scores are 0.94, 0.94 and 0.95, respectively. With the alternate formulation S18, the explained variance scores are 0.63, 0.73 and 0.85.", "Explanation": "The citing paper extends the research by proposing a new COT loss formulation (S18) and evaluating its performance using the explained variance scores, which are higher than the scores obtained with the proposed COT loss."}, {"Category": "Methodological Basis", "Citation": "[Bunne et al., 2022]", "Explanation": "The cited work provides the definition of the l 2 (PS) metric and the methodology for reporting MMD ( \u00a7 2) with RBF kernel averaged over the kernel widths, which the citing paper adopts in their research to measure the distance between perturbation signatures."}, {"Category": "Supporting Evidence", "Citation": "[LeCun and Cortes, 2010]", "Explanation": "The cited work by LeCun and Cortes provides the benchmark dataset MNIST for the task of multi-class classification in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[Krizhevsky et al., 2009]", "Explanation": "The cited work by Krizhevsky et al. 
provides the benchmark dataset CIFAR-10 for the task of multi-class classification in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[Lampert et al., 2009]", "Explanation": "The cited work by Lampert et al. provides the benchmark dataset Animals with Attribute (AWA) for the task of multi-class classification in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[Fatras et al., 2020, Fatras et al., 2021]", "Explanation": "The cited works by Fatras et al. provide the minibatch OT training method used in the citing paper for the task of multi-class classification."}, {"Category": "Data Source", "Citation": "[Frogner et al., 2015]", "Explanation": "The cited work by Frogner et al. provides the implementation of the multi-class classification classifier used in the citing paper, which is open-sourced by Jawanpuria et al."}, {"Category": "Data Source", "Citation": "[Jawanpuria et al., 2021]", "Explanation": "The cited work by Jawanpuria et al. provides the open-sourced implementation of the multi-class classification classifier used in the citing paper."}, {"Category": "Data Source", "Citation": "[Bojanowski et al., 2017]", "Explanation": "The cited work provides the fastText embeddings used in the cost function of the OT plan in the COT formulation for the MNIST and CIFAR-10 datasets."}, {"Category": "Data Source", "Citation": "[Jawanpuria et al., 2021]", "Explanation": "The cited work provides the train and test splits for the AWA dataset, which the citing paper uses to compare the performance of different methods in the outsample setting."}, {"Category": "Methodological Basis", "Citation": "[Frogner et al., 2015]", "Explanation": "The cited work introduces the Sinkhorn regularization hyperparameter in \u03f5-OT, which the citing paper uses to find the best value for the hyperparameter in the outsample setting."}, {"Category": "Extension or Continuation", "Citation": "[Jawanpuria et al., 2021]", "Explanation": "The cited work provides a comparison of different methods using the Area Under Curve (AUC) score on the test data, which the citing paper extends by adding the outsample setting to the analysis."}, {"Category": "Methodological Basis", "Citation": "[Chen et al., 2023]", "Explanation": "The cited work introduces the PLOT model, which the citing paper adopts to perform an alternate optimization and solve the OT problem in prompt learning."}, {"Category": "Methodological Basis", "Citation": "[Chen et al., 2023]", "Explanation": "The cited work provides a method for choosing the last training epoch model, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[Zhou et al., 2022b]", "Explanation": "The cited work also contributes a method for choosing the last training epoch model, which the citing paper follows in their experiment."}, {"Category": "Data Source", "Citation": "The cited work is used to acknowledge the origin of a dataset or specific information that the citing paper utilizes in their research or analysis."}, {"Category": "Extension or Continuation", "Citation": "The cited work is extended in the citing paper by choosing the optimal number of prompt features and keeping the neural network architecture and hyperparameters the same as in PLOT."}, {"Category": "Methodological Basis", "Citation": "The cited work contributes a method for choosing the featurizer in Figure ( 5), which the citing paper uses in their experiment."}, {"Category": "Methodological Basis", "Citation": "The cited work 
provides a method for using a 3-layer MLP architecture for \u03c8 r in equation 10, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[Tabak et al., 2021]", "Explanation": "The cited work by Tabak et al. provides the results for downstream applications in Table 2 and Table 3, which the citing paper extends to complete the analysis and comparison with state-of-the-art baselines."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "The methodology of observing phenomena, formulating hypotheses, designing experiments, and drawing conclusions is fundamental to scientific progress. This phenomenon-driven paradigm has led to breakthroughs in fields ranging from physics to biology that have reshaped our understanding of the world. However, this paradigm is less observed in the field of deep learning.\nModern deep learning has achieved remarkable practical successes, yet our theoretical understanding of deep neural networks (DNNs) remains limited. As deep learning continues its rapid progress, applying a scientific, phenomenon-driven approach is crucial to gaining a deeper understanding of the field. Rather than relying solely on preconceived theories, phenomenon-driven approach allows the models to speak for themselves, revealing new insights that often yield surprises. Since phenomenon-driven discoveries originate from real observations, their results also tend to be more informative to practice.\nThe significance of phenomenon-driven approach is amplified as DNN models grow increasingly complex. For massive models like Large Language Models with billions of parameters, understanding from theoretical principles alone is implausible. However, by observing phenomena, formulating hypotheses, and test them through designed experiments, we can obtain some solid conclusions that can serve as the basis for future theories.\nThere have been some works that embody this approach. Here we introduce two notable examples: double descent and frequency principles. Double descent. As reported in [1; 2], the \"double descent\" phenomenon refers to the observation that as model size increases, model generalization ability first gets worse but then gets better, contradicting the usual belief that overparameterization leads to overfitting. This phenomenon provides a new perspective on understanding the generalization ability of overparameterized DNNs [3; 4; 5; 6]. It also provides a useful guidance on how to balance data size and model size.\nFrequency principles. According to [7; 8], the \"frequency principle\" or \"spectral bias\" refers to the observation that DNNs often learn target functions from low to high frequencies during training. This bias is contrary to many conventional iterative numerical schemes, where high frequencies are learned first. These findings have motivated researchers to apply Fourier analysis to deep learning [9; 10; 11] and provide justification for previous common belief of NN's simplicity bias.\nThese phenomenon-driven works share the following two key characteristics. First, the phenomena they observed are prevalent across various tasks, datasets, and model architectures, indicating that they manifest general patterns, not isolated occurrences. Second, these phenomena differentiate DNNs from conventional models or schemes, highlighting the uniqueness of DNN models.\nThese two characteristics ensure that these phenomena are prevalent for DNNs, but DNNs alone. They point to fundamental workings of DNNs that can inform us of their strengths and limitations, facilitating more principled designs and applications of DNNs. We consider these characteristics crucial for a phenomenon-driven approach to systematically studying DNNs.\nIn this paper, we have discovered and reported a phenomenon with these characteristics. This phenomenon differentiates complex neural networks from linear models and is counter-intuitive. 
We have conducted extensive experiments to demonstrate that this phenomenon is widespread across different tasks, datasets, and network architectures. We have also found that it is closely related to other properties of DNNs, including early stopping and network generalization ability.\nHere, we give a brief description of this phenomenon, which we term the \"double descent of discrepancy\" phenomenon, or the D^3 phenomenon for short. Consider two identically-trained, overparameterized networks. Eventually, they will perfectly fit the same training data, which means that their discrepancy on the training set tends to zero. However, contrary to intuition, this trend towards zero is not always monotonic. For various tasks, datasets, and network architectures, there exists a double descent phenomenon, in which the discrepancy between identically-trained networks first decreases, then increases, and then decreases again.\nIn order to better explain the D^3 phenomenon, we first define some notation used in this paper and then illustrate it with an example." }, { "figure_ref": [], "heading": "Notations", "publication_ref": [], "table_ref": [], "text": "Supervised learning aims to use parameterized models to approximate a ground-truth function f_{clean}: \mathcal{X} \to \mathcal{Y}. However, in most circumstances, only a finite set of noisy samples of f_{clean} is available, which we denote as S_N:\nS_N = \{(x_i, y_i) \mid y_i = f_{clean}(x_i) + \epsilon_i\}_{i=1}^{N}.\nWe define f_{noisy} as the function that interpolates the noisy data, f_{noisy}(x_i) = y_i, on S_{N,X} = \{x_i\}_{i=1}^{N}. Let f(x; \theta) be a neural network model with parameters \theta. Training this network involves optimizing \theta with respect to a loss function L:\nL(f) = \frac{1}{N}\sum_{i=1}^{N} l(f(x_i; \theta), y_i).\nIn most cases, \theta_0 is randomly initialized and trained with methods such as SGD or Adam. We define identically-trained neural networks \{f^{(j)}\} as multiple networks with the same architecture, trained on the same dataset with the same algorithm, but with different random initializations indexed by j.\nAny metric d(\cdot, \cdot) on \mathcal{Y} induces a pseudo-metric d_N(\cdot, \cdot) on the function space:\nd_N(f, g) = \frac{1}{N}\sum_{i=1}^{N} d(f(x_i), g(x_i)).\nIf l(\cdot, \cdot) in the loss function is itself a metric, we can simply take d = l, which means L(f) = d_N(f, f_{noisy}). Otherwise, we can choose common metrics such as the l_2 or l_\infty metric.\nGiven two identically-trained networks f^{(1)}, f^{(2)}, we define their discrepancy at time step t as:\nD_t = d_N(f_t^{(1)}, f_t^{(2)}).  (1)\nNote that calculating D_t requires only S_{N,X} and no extra samples.\nRemark. To avoid confusion, we specify the notation used here. Subscripts denote the time step, usually t, while superscripts denote the network index, usually j or numbers. For example, \theta_t^{(j)} represents the parameters of network j at time t, and f_t^{(j)} = f(\cdot; \theta_t^{(j)})." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "The phenomenon", "publication_ref": [], "table_ref": [], "text": "Gradient descent guarantees a monotonic decrease in the loss L(f_t^{(j)}). Therefore, one might expect that D_t would also decrease monotonically. This can be easily proven for linear feature models of the form f(x; \theta) = \sum_i \theta_i \phi_i(x); see Appendix A for the proof.\nHowever, for more complicated neural networks and for training datasets with a certain level of noise, this is not the case. Figure 1(a) provides an example of a D_t curve, where the training dataset is CIFAR-10 with 20% label corruption and the network architecture is ResNet. 
For more detailed experimental settings, please refer to Section 2.1. It is evident from the figure that D_t does not follow a monotonic trend, but instead exhibits the D^3 phenomenon. This trend is so clear that it cannot be attributed to randomness in training. This phenomenon is counter-intuitive! It implies that even though identically-trained networks are approaching the same target function f_{noisy}, at some point they diverge from each other. Figure 1(b) illustrates the dynamics of f^{(1)}, f^{(2)} in function space. At time step t, their training errors still decrease, but the discrepancy between them increases. This strange dynamic means that there is a fundamental non-linearity in the training process of DNNs.\nRemark. Although the \"double descent of discrepancy\" phenomenon shares a similar name with the \"double descent\" phenomenon, the two are distinct and unrelated. The D^3 phenomenon characterizes the discrepancy between two identically-trained networks on the training dataset, whereas the \"double descent\" phenomenon concerns a single network's generalization ability." }, { "figure_ref": [], "heading": "Our contributions", "publication_ref": [], "table_ref": [], "text": "Our main contributions in this work are:\n1. We discover and report the \"double descent of discrepancy\" phenomenon in neural network training. We find that, if there is a certain level of noise in the training dataset, the discrepancy between identically-trained networks will increase at some point in the training process. This counter-intuitive phenomenon provides new insights into the complex behaviors of DNNs.\n2. In Section 2, we conduct extensive experiments to demonstrate the prevalence of the D^3 phenomenon. We show that it occurs across different tasks (e.g., classification, implicit neural representation), datasets (e.g., CIFAR-10, Mini-ImageNet), and network architectures (e.g., VGG, ResNet, DenseNet). These experiments empirically show that this phenomenon appears commonly in DNN training processes.\n3. In Section 3, we propose an early stopping criterion based on the D^3 phenomenon. We evaluate its performance on image denoising tasks and compare it with an existing early stopping criterion, demonstrating that our criterion outperforms it. Furthermore, we prove a theorem that describes the relationship between the early stopping time and the increase in discrepancy.\n4. In Section 4, we develop a new method for data quality assessment. We empirically show that the D^3 phenomenon is related to the data quality of the training dataset, with the maximum degree of discrepancy linearly related to the noise level. Based on this insight, we propose that the degree of discrepancy can serve as an effective proxy for data quality.\nIn summary, this work practices the phenomenon-driven approach introduced above. We observe a prevalent yet counter-intuitive phenomenon in DNN training. Through extensive experiments, we demonstrate that this phenomenon is widespread across different experimental settings. Based on insights gained from it, we propose an early stopping criterion and a data quality assessment method. We believe that discovering and understanding more phenomena like this can provide fundamental insights into complex DNN models and guide the development of deep learning to a more scientific level." }, 
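To make the tracked quantity concrete before the experiments, the following is a minimal sketch — not the authors' released code — of how the discrepancy D_t of Eq. (1) can be recorded while training two identically configured networks that differ only in their random initialization. The helpers make_model, loader, loss_fn, and metric are placeholders for the reader's own architecture, data pipeline, training loss, and per-sample distance d.

```python
import torch

def train_pair_and_track(make_model, loader, loss_fn, metric, epochs, lr=0.01):
    """Train two identically configured networks on the same data, in the same
    batch order, with the same optimizer settings -- only their random
    initializations differ -- and record the discrepancy D_t (Eq. 1) per epoch."""
    torch.manual_seed(0); f1 = make_model()
    torch.manual_seed(1); f2 = make_model()
    opt1 = torch.optim.SGD(f1.parameters(), lr=lr, momentum=0.9)
    opt2 = torch.optim.SGD(f2.parameters(), lr=lr, momentum=0.9)
    history = []
    for _ in range(epochs):
        for x, y in loader:
            for f, opt in ((f1, opt1), (f2, opt2)):
                opt.zero_grad()
                loss_fn(f(x), y).backward()
                opt.step()
        with torch.no_grad():  # D_t uses only the training inputs, no labels
            vals = [metric(f1(x), f2(x)).mean().item() for x, _ in loader]
        history.append(sum(vals) / len(vals))
    return f1, f2, history
```

For classification, metric can be the 0-1 disagreement between predicted labels (as in Section 2.1); for image or regression tasks, a squared l_2 distance.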
}, { "figure_ref": [], "heading": "Double descent of discrepancy", "publication_ref": [], "table_ref": [], "text": "In this section, we demonstrate that the D 3 phenomenon is widespread across various tasks, datasets, and network architectures. As training progresses, D t first decreases, then increases, and finally decreases to zero. We also provide a brief discussion of this phenomenon at the end of the section." }, { "figure_ref": [ "fig_2" ], "heading": "Classification", "publication_ref": [ "b11", "b12", "b13", "b14" ], "table_ref": [], "text": "Experimental setup. For classification tasks, we run experiments on CIFAR-10, CIFAR-100, and Mini-ImageNet [12]. The network architectures include Visual Geometry Group (VGG) [13],\nResidual Networks (ResNet) [14], Densely Connected Convolutional Networks (DenseNet) [15] and some more updated architectures such as Vision Transformer [16; 17; 18]. For each dataset, we corrupt a fraction of labels by replacing them with random labels to introduce noise. Networks are trained on these corrupted datasets with momentum SGD. The level of corruption and training hyper-parameters can also be modified. See Appendix B.1 for setting details.\nSince in classification the cross-entropy loss function l(•, •) is not symmetric, we defined the discrepancy function as\nd(y 1 , y 2 ) = ∥y 1 -y 2 ∥ ∞ = I y1=y2 .\nDuring training, identically-trained networks undergo exactly the same procedure. For instance, they process batches in the same order. This allows us to calculate their discrepancy by using networks' forward propagation results, thus minimizing the computational cost.\nResult. Figure 2 shows some examples of D t curves. Due to space limitation, here we only present results for the CIFAR-10 and Mini-ImageNet datasets, and the VGG, ResNet, and DenseNet network architectures. Each dataset is corrupted by 0% (clean), 20%, and 50%. More results are provided in Appendix B.1.\nIn all plots, when a certain portion of the labels are corrupted, the D 3 phenomenon emerges. While the shapes of the D t curves differ, they exhibit the same pattern. These results demonstrate that the D 3 phenomenon is data-and model-agnostic. It can also be observed from the plots that the D 3 phenomenon becomes more pronounced as the corruption rate in the dataset increases." }, { "figure_ref": [ "fig_3" ], "heading": "Implicit neural representation", "publication_ref": [ "b18", "b19", "b20" ], "table_ref": [], "text": "Experimental setup. For implicit neural representation tasks, we use neural networks to represent images in the classical 9-image dataset [19]. The network architectures include fully connected neural networks with periodic activations (SIREN) [20] and deep image prior (DIP) [21]. Here, we treat DIP as a special kind of neural representation architecture. We add different levels of Gaussian noise on these images to create their noisy versions. The networks are trained on noisy images using Adam. The corruption level and hyper-parameters in training are also adjustable. For more details, see Appendix B.2.\nThe loss function used here is the l-2 loss, so we simply take d(y 1 , y 2 ) = l(y 1 , y 2 ) = ∥y 1 -y 2 ∥ 2 2 . Results. Figure 3 shows some examples of D t curves. For the same reason, here we only present SIREN and DIP trained on the \"House\" image corrupted by Gaussian noise with zero mean and standard deviations σ = 0, 25, 50. 
{ "figure_ref": [ "fig_3" ], "heading": "Implicit neural representation", "publication_ref": [ "b18", "b19", "b20" ], "table_ref": [], "text": "Experimental setup. For implicit neural representation tasks, we use neural networks to represent the images in the classical 9-image dataset [19]. The network architectures include fully connected neural networks with periodic activations (SIREN) [20] and the deep image prior (DIP) [21]; here, we treat DIP as a special kind of neural representation architecture. We add different levels of Gaussian noise to these images to create their noisy versions. The networks are trained on the noisy images using Adam. The corruption level and training hyper-parameters are also adjustable; for more details, see Appendix B.2.\nThe loss function used here is the l_2 loss, so we simply take d(y_1, y_2) = l(y_1, y_2) = \|y_1 - y_2\|_2^2.\nResults. Figure 3 shows some examples of D_t curves. For the same reason, here we only present SIREN and DIP trained on the \"House\" image corrupted by Gaussian noise with zero mean and standard deviations \sigma = 0, 25, 50. More results are provided in Appendix B.2.\nWe can see that the D^3 phenomenon also emerges in neural representation tasks, demonstrating that it is task-agnostic. Furthermore, even though SIREN and DIP differ dramatically in network architecture, the patterns of their D_t curves are quite similar." }, { "figure_ref": [], "heading": "Other tasks", "publication_ref": [], "table_ref": [], "text": "We have also conducted experiments on regression tasks and graph-related tasks. Due to space limitations, we provide their results in Appendices B.3 and B.4. In all these tasks, the D^3 phenomenon emerges, further demonstrating that it is task-agnostic." }, { "figure_ref": [], "heading": "Brief discussion", "publication_ref": [ "b21" ], "table_ref": [], "text": "Based on these experimental results, we are confident in saying that the double descent of discrepancy is a prevalent phenomenon in DNN training. However, it does not appear in linear feature models or in any model that exhibits linear properties during training, such as the infinitely wide networks discussed in neural tangent kernel (NTK) theory [22]. This is rigorously proved in Appendix A. This difference may help us understand how DNNs differ from conventional parametric models. Explaining this phenomenon is challenging, as it involves fundamentally non-linear behavior of DNNs during their training process. We have partly explained it in Section 3, but our understanding remains elementary." }, { "figure_ref": [], "heading": "Early stopping criterion", "publication_ref": [ "b22" ], "table_ref": [], "text": "In machine learning, early stopping is a common technique used to avoid overfitting. By stopping the training process at an appropriate time, models can achieve good generalization performance even when trained on very noisy datasets [23].\nThe key factor in early stopping is the stopping criterion, which determines when to stop training. The most common criteria are validation-based: they monitor the model's generalization performance on a validation set and stop training when the validation error starts increasing. However, as pointed out in [24; 25], validation-based criteria have several drawbacks: they bring extra computational costs, reduce the number of training samples, and have high variability in performance. In some cases, it may not even be possible to construct a validation set. These drawbacks have motivated researchers to develop criteria that do not require validation sets [24; 26; 27].\nIn this section, we demonstrate how the D^3 phenomenon can be used to construct an early stopping criterion. We evaluate its performance on image denoising tasks and compare it with an existing criterion. Furthermore, we prove a theorem that formally establishes the relationship between the early stopping time and the increase in discrepancy." }, { "figure_ref": [], "heading": "Our criterion", "publication_ref": [], "table_ref": [], "text": "The optimal stopping time for the j-th network is defined as \tau^{(j)} = \arg\min_t d_N(f_t^{(j)}, f_{clean}). Our criterion stops training when D_t begins to increase. More precisely, the stopping time \tau_\alpha given by our criterion is:\n\tau_\alpha = \inf\{ t \mid \frac{d}{dt} D_t > \alpha \},  (2)\nwhere \alpha is a hyper-parameter.\nSince the time step t is discrete, \frac{d}{dt} D_t is approximated by the discrete difference (\bar{D}_{t+1} - \bar{D}_t)/\Delta t. To minimize fluctuations caused by randomness, here we use the moving average \bar{D}_t = \frac{1}{w}\sum_{i=0}^{w-1} D_{t+i} instead of D_t, where w is the window size.\nSimply setting \alpha = 0 already gives a fairly good criterion. 
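A minimal sketch of this criterion, assuming the discrepancy values D_0, D_1, ... have been recorded once per epoch; the window size and Δt here are illustrative defaults, not values prescribed by the paper.

```python
def stopping_time(D, alpha=0.0, window=5, dt=1.0):
    """tau_alpha = inf{ t : (Dbar_{t+1} - Dbar_t)/dt > alpha }, where Dbar is the
    moving average of the recorded discrepancies D (cf. Eq. 2). Returns None if
    the smoothed discrepancy never starts increasing."""
    if len(D) <= window:
        return None
    Dbar = [sum(D[t:t + window]) / window for t in range(len(D) - window + 1)]
    for t in range(len(Dbar) - 1):
        if (Dbar[t + 1] - Dbar[t]) / dt > alpha:
            return t
    return None
```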
However, with more information about the model and dataset, one could choose a better \alpha that improves performance; in Section 3.3 we explain how to choose such an \alpha." }, { "figure_ref": [ "fig_4" ], "heading": "Image denoising", "publication_ref": [ "b27" ], "table_ref": [ "tab_0" ], "text": "For image denoising tasks, f_{clean} is the clean image we want to recover and f_{noisy} is the noisy image. Here, x represents the pixel position and f(x) represents the RGB value at that position. If we stop the training at a proper time \tau, f_\tau will be close to f_{clean}, thus filtering out the noise.\nExperimental setup. We use DIP as our DNN model and evaluate the performance of our criterion on the 9-image dataset. We compare our criterion with ES-WMV [28], a stopping criterion specifically designed for DIP. We adopt their experimental settings and use the PSNR gap (the difference in PSNR values between f^{(j)}_\tau and f^{(j)}_{\tau^{(j)}}) to measure criterion performance.\nResults. Table 1 lists the performance of ES-WMV and our criterion. Here, the noise is Gaussian with zero mean and standard deviation \sigma = 25. As shown in the table, our criterion outperforms ES-WMV on seven out of nine images, is not as good on one, and both perform poorly on one. Additionally, we present some examples of the stopping time \tau given by our criterion compared to the optimal stopping time \tau^{(j)} in Figure 4. As shown in the figure, they are very close to each other. More results are provided in Appendix C. To ensure fairness, here we set \alpha = 0 in our criterion.\nIt is worth pointing out that our criterion is not task-specific but rather a general criterion, yet here it works better than a specifically designed one. Furthermore, its definition shows that it is an adaptive criterion, which means it is robust to changes in network architecture or learning algorithm. We expect that these performances do not represent the limit of our criterion and that better results can be achieved through hyperparameter tuning." }, 
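The PSNR gap used as the evaluation metric above compares the reconstruction quality at the detected stopping time with the best quality reached during the run. A minimal sketch, assuming images are scaled to [0, 1] and a per-step PSNR curve has been recorded; the helper names are illustrative and not taken from the ES-WMV codebase.

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between an estimate and a reference image."""
    mse = np.mean((np.asarray(img, dtype=np.float64) - np.asarray(ref, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def psnr_gap(psnr_curve, tau_detected):
    """Gap between the best PSNR reached during training (optimal stopping) and
    the PSNR at the detected stopping time; smaller is better."""
    return max(psnr_curve) - psnr_curve[tau_detected]
```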
{ "figure_ref": [], "heading": "Mathematical explanation", "publication_ref": [], "table_ref": [], "text": "In this subsection, we establish the connection between the optimal stopping time and the increase of the discrepancy. For simplicity, we assume that l(y_1, y_2) = d(y_1, y_2) = \|y_1 - y_2\|^2 and approximate gradient descent by the gradient flow:\n\frac{d}{dt}\theta = -\nabla_\theta\, d_N(f_t, f_{noisy}).\nGiven a neural network f(x; \theta), we define the neural kernel as K = \nabla_\theta f \otimes \nabla_\theta f and define \langle g, h \rangle_K as the inner product induced by K:\n\langle g, h \rangle_K = \frac{1}{N^2} \sum_{x_i, x'_j \in S_{N,X}} g^T(x_i)\, K(x_i, x'_j)\, h(x'_j).\nNotice that for t near the optimal stopping time \tau^{(j)}, t > \tau^{(j)} is equivalent to \frac{d}{dt} d_N(f_t^{(j)}, f_{clean}) > 0. Meanwhile, \frac{d}{dt} D_t > 0 is equivalent to \frac{d}{dt} d_N(f_t^{(1)}, f_t^{(2)}) > 0. The theorem below states the relationship between these two inequalities. It shows that, under certain conditions, they are almost equivalent.\nTheorem. If at time step t, f_t^{(j)} satisfies that\n\forall j, \ |\langle f_t^{(-j)} - f_{clean}, f_t^{(j)} - f_{clean} \rangle_{K_t^{(j)}}| < \delta/2,  (3)\n|\langle f_t^{(-j)} - f_{clean}, f_{noisy} - f_{clean} \rangle_{K_t^{(j)}}| < \epsilon/2,  (4)\nthen we have the following two results:\n1. \frac{d}{dt} d_N(f_t^{(1)}, f_t^{(2)}) > \delta + \epsilon implies \exists j, \frac{d}{dt} d_N(f_t^{(j)}, f_{clean}) > 0;\n2. \forall j, \frac{d}{dt} d_N(f_t^{(j)}, f_{clean}) > 0 implies \frac{d}{dt} d_N(f_t^{(1)}, f_t^{(2)}) > -(\delta + \epsilon).\nHere, K_t^{(j)} represents the neural kernel of f_t^{(j)}.\nProof. See Appendix C for the proof.\nFor any \delta and \epsilon, there exists a set of time steps T_{\delta,\epsilon} = \{t \mid f_t^{(j)} satisfies the condition\}. At these time steps, our theorem shows that the two inequalities are equivalent up to a difference of \delta + \epsilon. The smaller the sum \delta + \epsilon, the tighter this equivalence. However, note that smaller \delta and \epsilon values lead to a condition that is harder to satisfy, and thus to a smaller set T_{\delta,\epsilon}.\nWe argue that conditions (3) and (4) of the theorem are relatively mild. We demonstrate this by showing that small \delta and \epsilon are sufficient for T_{\delta,\epsilon} to be non-empty.\nCondition (3) is automatically satisfied if \|f_t^{(k)} - f_{clean}\|^2 \lesssim \delta / \|K_t^{(j)}\| for all j, k. So the smallest \delta for T_{\delta,\epsilon} to be non-empty is \delta^* \sim \|K_{\tau^{(j)}}\| \, \|f_{\tau^{(j)}} - f_{clean}\|^2. The better the generalization performance of the early-stopped model f_{\tau^{(j)}}, the smaller \delta^* is. Estimating the generalization error \|f_{\tau^{(j)}} - f_{clean}\| requires considering the dataset, the network architecture, and the training algorithm, which is far beyond the scope of this work. However, the effectiveness of early stopping methods gives us confidence that a relatively small \delta^* can be achieved.\nCondition (4) can be justified by Fourier analysis. Notice that f_{noisy} - f_{clean} is pure noise, which means it primarily comprises high-frequency components, while K_t(f_t - f_{clean}) primarily comprises low-frequency components. This means that they are almost orthogonal in function space, and their inner product can be controlled by a small constant \epsilon^*. These analyses show that \delta^* + \epsilon^* is relatively small, which means the conditions of this theorem are relatively mild.\nOne may note that the early stopping times given by our criterion are always ahead of the optimal stopping times. This can be avoided by choosing some \alpha > 0 in the stopping criterion. In fact, to achieve better performance, one could take \alpha \sim \delta^* + \epsilon^*. More discussion on setting \alpha is provided in Appendix C." }, { "figure_ref": [], "heading": "Data quality assessment", "publication_ref": [], "table_ref": [], "text": "As machine learning models rely heavily on large amounts of data for training, the quality of the datasets used is crucial. However, high-quality datasets can be expensive and difficult to obtain, so cheaper or more accessible datasets are often used as an alternative [29; 30]. These datasets may lack guarantees on data quality and integrity, which can negatively impact model performance. It is therefore important to have methods for assessing the quality of datasets in order to understand their potential issues and limitations. By vetting the quality of datasets, we can produce more reliable machine learning models.\nData quality assessment involves many aspects of evaluation. Here, we focus on the accuracy of labels. As mentioned in Section 2, the greater the noise level, the more pronounced the D^3 phenomenon. In this section, we quantify this relationship and show how it can be used for data quality assessment. We first clarify some definitions, then establish our method and use the CIFAR-10 dataset as an example to demonstrate it." }, { "figure_ref": [], "heading": "Definitions", "publication_ref": [ "b1" ], "table_ref": [], "text": "We define the noise level of the training dataset as E = d_N(f_{noisy}, f_{clean}). For example, in classification tasks, E represents the label corruption rate.\nFor the D^3 phenomenon, we define the maximum discrepancy between two networks as:\nD^* = \max_{t > \tau_0} D_t,  (5)\nwhere \tau_0 is the time step at which D_t begins to increase, as defined in (2). Intuitively, D^* quantifies the height of the peak in a D_t curve." }, 
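Both quantities can be read directly off a recorded discrepancy curve; a minimal sketch, reusing the hypothetical stopping_time helper sketched earlier for τ_0:

```python
def max_discrepancy(D, tau0):
    """D* = max_{t > tau0} D_t (Eq. 5): the height of the post-onset peak of the
    discrepancy curve, with tau0 the step at which D_t begins to increase."""
    return max(D[tau0 + 1:])

# Example usage (assuming a recorded curve D and the earlier sketch):
# tau0 = stopping_time(D, alpha=0.0)
# d_star = max_discrepancy(D, tau0)
```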
For example, in classification tasks, E represents the label corruption rate.\nFor the D 3 phenomenon, we define the maximum discrepancy between two networks as:\nD * = max t>τ0 D t ,(5)\nwhere τ 0 is the time step where D t begins to increase, as defined in (2). Intuitively, D * quantifies the height of the peak in a D t curve." }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "Our method", "publication_ref": [], "table_ref": [], "text": "We demonstrate our method using the CIFAR-10 dataset and the ResNet model. The experimental setups are basically the same as in Section 2.1. Here we corrupt CIFAR-10 by 10%, ...,90%, 100% (pure noise) and use it as our noisy datasets. We compute the values of D * on these datasets and plot its relationship with noise level E in Figure 5. As shown in the plots, D * vs E can be well approximated by linear functions, with R 2 = 0.991523. This indicates a strong correlation between the maximum discrepancy D * and the noise level E.\nSuch an accurate fit means we can use it to evaluate the noise level of other similar datasets. For example, if we want to evaluate a new noisy dataset that is similar to CIFAR-10, then we could compute D * and use Figure 5 to get a rough estimation of noise level E. However, we have to point out that differences in the dataset, such as size or sample distribution, may affect these linear relationships and make our estimation inaccurate. Thus, only for datasets that are very similar with the original dataset, such as a new dataset generate from the same distribution, will this estimation approach be accurate.\nThe underlying cause of this linear relationship remains a mystery. Our hypothesis is that, for time steps τ 0 < t < τ 0 + ∆t where networks begin overfitting to noise, different networks fit different components of the pure noise f noisy -f clean that are nearly orthogonal. Since identically-trained networks are similar to one another near τ 0 , new orthogonal increments would cause D t to increase. Therefore, the maximum discrepancy D * is linearly related to the maximum length of the orthogonal components of pure noise f noisy -f clean , which is linearly related to the noise level E. This explanation is rough and lacks mathematical rigor. We aim to prove it mathematically in future works." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we discovered a counter-intuitive phenomenon that the discrepancy of identically-trained networks does not decrease monotonically, but exhibits the D 3 phenomenon. This phenomenon differentiates simple linear models and complex DNNs. We conducted extensive experiments to demonstrate that it is task-, data-, and model-agnostic. Leveraging insights from this phenomenon, we proposed a new early stopping criterion and a new data quality assessment method.\nWhile this paper reveals new insights into complex DNN behaviors, our understanding remains limited. There are many aspects of this phenomenon left to be discovered and explained, such as identifying the necessary conditions for this phenomenon to emerge. Additionally, many of the findings presented in this paper lack rigorous mathematical proofs and formal analyses. These are all possible directions for future works.\nIn summary, through observing and analyzing the D 3 phenomenon, we gain new insights into DNNs that were previously not well understood. 
This work showcases the power of a phenomenon-driven approach in facilitating progress in deep learning theory and practice. We believe discovering and understanding more such phenomena is crucial for developing a systematic and principled understanding of DNNs." }, { "figure_ref": [], "heading": "B Experimental settings and results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.1 Classification", "publication_ref": [ "b15" ], "table_ref": [], "text": "For each classification dataset, we corrupt a fraction of labels by replacing them with randomly generated labels to introduce noise. The random labels are uniformly distributed across all possible labels, including the correct label. This means that even when all labels are corrupted, some labels will remain correct due to randomness. For example, in a 100% corrupted CIFAR-10 dataset, around 10% of the labels will remain correct.\nThe network architectures we used include: 6. Vision Transformer (ViT-B) in [16].\nAll networks are trained with SGD with a momentum of 0.9 and weight decay of 1E-4. Learning rate is 0.01 without decay (since we want the networks to overfit). We perform data augmentation and use a minibatch size of 100 for CIFAR-10 and CIFAR-100, and a size of 50 for Mini-ImageNet. As for noise level, CIFAR-10 is corrupted by 0%, 20%, and 50%, where CIFAR-100 and Mini-ImageNet are corrupted by 0%, 30% and 50%.\nIt should be pointed out that in order to maintain a consistent experimental setting, many of these networks are not trained to state-of-the-art accuracy. However, the D 3 phenomenon is not sensitive to specific training methods. Therefore, differences in training method are not a key factor for this phenomenon.\nThe results are presented in Figure 6, 7, and 8. In all plots, the D 3 phenomenon emerges." }, { "figure_ref": [ "fig_11" ], "heading": "B.2 Implicit neural representation", "publication_ref": [ "b19", "b20", "b20" ], "table_ref": [], "text": "For each image in image-9, we add Gaussian noises with zero mean and standard deviation σ = 0, 25, 50 to create a noisy image.\nFor SIREN, we use the model given in [20]'s demo1 , which has 3 hidden layers and 256 hidden features. For DIP, we use the model given in [21]'s demo2 . DIP represents images with a generative deep network, i.e. f θ = G θ (z), where z is an input noise. Here, we use the same z between identically-trained networks. Also, following the original setup, we perturb z during the training process. In Section 2.2, Section 3, and Appendix C, at each step we perturb z with additive normal noise with zero mean and standard deviation σ p = 0.05, which follows the setting of [21]. Here, in order to better illustrate the D 3 phenomenon, we took σ p = 0.02.\nSIRENs and DIPs are trained with Adam. For SIREN, we use PyTorch's default Adam hyperparameters. For DIP, we take a learning rate of 0.01 while keeping other hyperparameters unchanged.\nResults for image \"Peppers\", \"F16\", and \"Kodak12\" are presented in Figure 9 and 10. In all plots, the D 3 phenomenon emerges." }, { "figure_ref": [ "fig_0" ], "heading": "B.3 Regression", "publication_ref": [], "table_ref": [], "text": "For regression tasks, we manually construct some analytical functions to serve as f clean . To generate the training dataset, we sample x i uniformly in a bound set Ω ⊂ X , and calculate y i = f clean (x i )+ϵ i , where ϵ i ∼ i.i.d. N (0, σ). 
More specifically, here we take f clean as the 1-dimensional sigmoid function f clean (x) = 2/(1 -e -x ) -1 and generate 100 samples (x i , y i ) where x i ∼ U[-2, 2]. The network architecture we chose for this task is a 4-layer deep, 512-unit wide fully connected neural network with ReLU activation function. We train these networks with momentum GD. The hyperparameters are: learning rate 1E-3, momentum 0.9, and weight decay 1E-4.\nThe results for σ = 0, 0.5, 1 are presented in Figure 11(a). Again, the D 3 phenomenon emerges.\nIt should be noted that the D 3 phenomenon does not occur every time under this setting. Our understanding is that the 4-layer FNN we used here is simple and does not have as many parameters as the networks used in the previous two tasks." }, { "figure_ref": [ "fig_0" ], "heading": "B.4 Graph related tasks", "publication_ref": [ "b30", "b31" ], "table_ref": [], "text": "We have also conducted experiments on the classification tasks of nodes in a graph. We use the citation network dataset Cora [31] as our basic dataset, and corrupt its labels by 0%, 30%, and 50%.\nThe network architecture we use is a 4-layer deep, 256-unit wide graph convolution network (GCN) given in [32]. We train these networks with momentum GD. The hyperparameters are: learning rate 0.01, momentum 0.9, and weight decay 1E-4.\nThe results are presented in Figure 11(b). Again, the D 3 phenomenon emerges." }, { "figure_ref": [ "fig_2" ], "heading": "C Early stopping criterion C.1 Image denoising", "publication_ref": [], "table_ref": [], "text": "Here, we we adopt the same experimental setup as in Appendix B.2.\nThe strict definition of PSNR gap in Section 3 is: ∆PSNR = PSNR(f τ (1) ; f clean ) -PSNR(f τα ; f clean ) where PSNR(f ; f clean ) is the peak signal-to-noise ratio of output f . We present more examples of early stopping times τ 0 given by our criterion in Figure 12. As one can see, the problem with our criterion is that it always stops the training too early. As we discussed in the paper, this problem can be avoided by choosing an appropriate hyperparameter α." }, { "figure_ref": [], "heading": "C.2 Theorem and proof", "publication_ref": [], "table_ref": [], "text": "With the definitions given in Section 3, we have the theorem below.\nTheorem. If at time step t, f (j) t satisfies that ∀j, |⟨f\n(-j) t -f clean , f (j) t -f clean ⟩ K (j) t | < δ/2, |⟨f (-j) t -f clean , f noisy -f clean ⟩ K (j) t | < ϵ/2." }, { "figure_ref": [], "heading": "Appendix A Results for linear models", "publication_ref": [], "table_ref": [], "text": "In this section, we strictly state and proof that the discrepancy between identically-trained linear feature models decreases monotonically. Thus, no matter how noisy the training set is, it does not exhibit the D 3 phenomenon.\nBy the term \"linear feature models\", we refer to models with the form below:\nwhere ϕ i : X → Y are the features.\nLike what we did in Section 3, here we assume that d(y 1 , y 2 ) = l(y 1 , y 2 ) = ∥y 1 -y 2 ∥ 2 and approximate the gradient descent by the gradient flow:\nThen, we have the proposition below.\nProposition. For identically-trained linear feature models f\n(1) t and f\n(2)\nt , their discrepancy on the training dataset\nProof. 
For linear feature models, gradient flow can be specified as:\nwhere ⟨•, •⟩ represents the inner product on S N,X :\nThis gives:\nNotice that df (j)\nt /dt depend linearly on f (j) t , which means:\nThus gives the result of the proposition:\nIt is worth noting that for any model that exhibits a linear training dynamic, the l-2 discrepancy between identically-trained networks decreases monotonically. By \"linear training dynamic\", we refer to dynamic with the form of:\nwhere G is a semi-definite linear operator.\nThis means that our proposition can be generated to include more network architectures, includes the infinite wide neural networks studied in NTK. However, as demonstrated in our work, complicated neural networks do not behave like this. Figure 12: Different stopping times. Red: optimal. Green: our criterion.\nthen we have the following two results:\nt , f\nHere, K (j) t\nrepresents the neural kernel of f (j) t . Proof. Here, we only prove result 2 since the proof for these two results are quite similar." }, { "figure_ref": [], "heading": "Take the full differential of", "publication_ref": [], "table_ref": [], "text": "t ):\nt )\nt )\nt , f noisy )\nThis leads to:\nt , f\nt ) > -(δ + ϵ). It is easy to see that result 1 can be proved similarly. □\nWe have empirically observed that in most circumstances, for t near τ 0 :\nt , f\nt , f clean ).\nThis means that when the discrepancy began to increase, the networks could still be heading towards f clean , which means τ 0 is always ahead of τ (j) , i.e. τ 0 < τ (j) .\nThe reason for this is still unclear, but we believe an important factor is that:\nThis inequality means there exist some components of f clean that are difficult for all identicallytrained networks to learn. Take this inequality as an assumption, it is easy to see that the condition in Result 1 can be weakened to d dt d N (f\nt ) > ϵ. This is also why we suggest in Section 3 that one should choose α > 0 instead of α < 0." } ]
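A minimal sketch of how the quantities used throughout the paper could be computed in practice is given below: the training-set discrepancy D_t between two identically-trained networks, the stopping time τ_α defined in (2), and the maximum discrepancy D* defined in (5). The finite-difference estimate of dD_t/dt, the optional moving-average smoothing, and the generic PyTorch interface are illustrative assumptions rather than the exact implementation used in the experiments.

```python
import torch

@torch.no_grad()
def discrepancy(model_a, model_b, inputs, metric):
    """Training-set discrepancy D_t: mean of d(f^(1)(x_i), f^(2)(x_i)) over the inputs."""
    ya, yb = model_a(inputs), model_b(inputs)
    return metric(ya, yb).float().mean().item()

def zero_one_metric(ya, yb):
    # classification discrepancy d(y1, y2) = I[y1 != y2], applied to predicted labels
    return (ya.argmax(dim=-1) != yb.argmax(dim=-1)).float()

def stopping_time(D_curve, alpha=0.0, smooth=1):
    """tau_alpha of (2): first step whose (optionally smoothed) increase in D_t
    exceeds alpha. Returns None if the discrepancy never rises that fast."""
    D = list(D_curve)
    if smooth > 1:  # moving-average smoothing is an extra assumption, not part of (2)
        D = [sum(D[max(0, i - smooth + 1): i + 1]) / len(D[max(0, i - smooth + 1): i + 1])
             for i in range(len(D))]
    for t in range(1, len(D)):
        if D[t] - D[t - 1] > alpha:  # finite-difference surrogate for dD_t/dt > alpha
            return t
    return None

def max_discrepancy(D_curve, tau0):
    """D* of (5): the peak of D_t from the step tau_0 where the discrepancy starts to rise."""
    return max(D_curve[tau0:]) if tau0 is not None else None
```

In a training loop one would append `discrepancy(f1, f2, x_train, zero_one_metric)` to `D_curve` after every epoch, stop once `stopping_time(D_curve, alpha)` returns a step, and reuse the recorded curve to read off D* for the noise-level estimate of Section 4.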
[ { "authors": "Mikhail Belkin; Daniel J Hsu; Siyuan Ma; Soumik Mandal", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b0", "title": "Reconciling modern machinelearning practice and the classical bias-variance trade-off", "year": "2018" }, { "authors": "Preetum Nakkiran; Gal Kaplun; Yamini Bansal; Tristan Yang; Boaz Barak; Ilya Sutskever", "journal": "Journal of Statistical Mechanics: Theory and Experiment", "ref_id": "b1", "title": "Deep double descent: where bigger models and more data hurt", "year": "2019" }, { "authors": "Reinhard Heckel; Fatih Yilmaz", "journal": "", "ref_id": "b2", "title": "Early stopping in deep networks: Double descent and how to eliminate it", "year": "2020" }, { "authors": "Zitong Yang; Yaodong Yu; Chong You; Jacob Steinhardt; Yi Ma", "journal": "", "ref_id": "b3", "title": "Rethinking bias-variance trade-off for generalization of neural networks", "year": "2020" }, { "authors": "Maria Stéphane D'ascoli; Giulio Refinetti; Florent Biroli; Krzakala", "journal": "", "ref_id": "b4", "title": "Double trouble in double descent : Bias and variance(s) in the lazy regime", "year": "2020" }, { "authors": "Cory Stephenson; Tyler Lee", "journal": "", "ref_id": "b5", "title": "When and how epochwise double descent happens", "year": "2021" }, { "authors": "John Zhi-Qin; Yaoyu Xu; Yan Zhang; Xiao", "journal": "", "ref_id": "b6", "title": "Training behavior of deep neural network in frequency domain", "year": "2018" }, { "authors": "Aristide Nasim Rahaman; Devansh Baratin; Felix Arpit; Min Dräxler; Fred A Lin; Yoshua Hamprecht; Aaron C Bengio; Courville", "journal": "", "ref_id": "b7", "title": "On the spectral bias of neural networks", "year": "2018" }, { "authors": "John Zhi-Qin; Yaoyu Xu; Tao Zhang; Yan Luo; Zheng Xiao; Ma", "journal": "", "ref_id": "b8", "title": "Frequency principle: Fourier analysis sheds light on deep neural networks", "year": "2019" }, { "authors": "Ronen Basri; David W Jacobs; Yoni Kasten; Shira Kritchman", "journal": "", "ref_id": "b9", "title": "The convergence rate of neural networks for learned functions of different frequencies", "year": "2019" }, { "authors": "Ronen Basri; Meirav Galun; Amnon Geifman; David W Jacobs; Yoni Kasten; Shira Kritchman", "journal": "", "ref_id": "b10", "title": "Frequency bias in neural networks for input of non-uniform density", "year": "2020" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; K Li; Li Fei-Fei", "journal": "", "ref_id": "b11", "title": "Imagenet: A largescale hierarchical image database", "year": "2009" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "", "ref_id": "b12", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "X Kaiming He; Shaoqing Zhang; Jian Ren; Sun", "journal": "", "ref_id": "b13", "title": "Deep residual learning for image recognition", "year": "2015" }, { "authors": "Gao Huang; Zhuang Liu; Kilian Q Weinberger", "journal": "", "ref_id": "b14", "title": "Densely connected convolutional networks", "year": "2016" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b15", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Fisher Yu; Dequan Wang; Trevor Darrell", "journal": "", "ref_id": "b16", "title": "Deep layer 
aggregation", "year": "2017" }, { "authors": "Jie Hu; Li Shen; Samuel Albanie; Gang Sun; Enhua Wu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b17", "title": "Squeeze-and-excitation networks", "year": "2017" }, { "authors": "Kostadin Dabov; Alessandro Foi; Vladimir Katkovnik; Karen O Egiazarian", "journal": "", "ref_id": "b18", "title": "Image restoration by sparse 3d transform-domain collaborative filtering", "year": "2008" }, { "authors": " Vincent Sitzmann; N P Julien; Alexander W Martel; David B Bergman; Gordon Lindell; Wetzstein", "journal": "", "ref_id": "b19", "title": "Implicit neural representations with periodic activation functions", "year": "2020" }, { "authors": "Dmitry Ulyanov; Andrea Vedaldi; S Victor", "journal": "International Journal of Computer Vision", "ref_id": "b20", "title": "Lempitsky. Deep image prior", "year": "2017" }, { "authors": "Arthur Jacot; Franck Gabriel; Clément Hongler", "journal": "", "ref_id": "b21", "title": "Neural tangent kernel: convergence and generalization in neural networks", "year": "2018" }, { "authors": "Mingchen Li; Mahdi Soltanolkotabi; Samet Oymak", "journal": "", "ref_id": "b22", "title": "Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks", "year": "2019" }, { "authors": "Maren Mahsereci; Lukas Balles; Christoph Lassner; Philipp Hennig", "journal": "", "ref_id": "b23", "title": "Early stopping without a validation set", "year": "2017" }, { "authors": "David Bonet; Antonio Ortega; Javier Ruiz-Hidalgo; Sarath Shekkizhar", "journal": "", "ref_id": "b24", "title": "Channel-wise early stopping without a validation set via nnk polytope interpolation", "year": "2021" }, { "authors": "Ali Vardasbi; M De Rijke; Mostafa Dehghani", "journal": "", "ref_id": "b25", "title": "Intersection of parallels as an early stopping criterion", "year": "2022" }, { "authors": "Mahsa Forouzesh; Patrick Thiran", "journal": "", "ref_id": "b26", "title": "Disparity between batches as a signal for early stopping", "year": "2021" }, { "authors": "Hengkang Wang; Taihui Li; Zhong Zhuang; Tiancong Chen; Hengyue Liang; Ju Sun", "journal": "", "ref_id": "b27", "title": "Early stopping for deep image prior", "year": "2021" }, { "authors": "Xiaohui Xie; Jiaxin Mao; Yiqun Liu; M De Rijke; Qingyao Ai; Yufei Huang; Min Zhang; Shaoping Ma", "journal": "", "ref_id": "b28", "title": "Improving web image search with contextual information", "year": "2019" }, { "authors": "Xiyu Yu; Tongliang Liu; Mingming Gong; Dacheng Tao", "journal": "", "ref_id": "b29", "title": "Learning with biased complementary labels", "year": "2017" }, { "authors": "Prithviraj Sen; Galileo Namata; Mustafa Bilgic; Lise Getoor; Brian Gallagher; Tina Eliassi-Rad", "journal": "The AI Magazine", "ref_id": "b30", "title": "Collective classification in network data", "year": "2008" }, { "authors": "Thomas Kipf; Max Welling", "journal": "", "ref_id": "b31", "title": "Semi-supervised classification with graph convolutional networks", "year": "2016" } ]
[ { "formula_coordinates": [ 2, 219.74, 538.06, 172.52, 12.69 ], "formula_id": "formula_0", "formula_text": "S N = {(x i , y i ) | y i = f clean (x i ) + ϵ i } N i=1 ." }, { "formula_coordinates": [ 2, 245.32, 601.32, 121.36, 30.32 ], "formula_id": "formula_1", "formula_text": "L(f ) = 1 N N i=1 l(f (x i ; θ), y i )." }, { "formula_coordinates": [ 2, 107.64, 677.71, 248.01, 9.65 ], "formula_id": "formula_2", "formula_text": "Any metric d(•, •) on Y can induce a new pseudo-metric d N (•," }, { "formula_coordinates": [ 2, 234.7, 695.01, 142.61, 30.32 ], "formula_id": "formula_3", "formula_text": "d N (f, g) = 1 N N i=1 d(f (x i ), g(x i ))." }, { "formula_coordinates": [ 3, 262.99, 116.74, 78.88, 13.74 ], "formula_id": "formula_4", "formula_text": "D t = d N (f (1) t , f(2)" }, { "formula_coordinates": [ 3, 267.41, 173.52, 63.54, 13.74 ], "formula_id": "formula_5", "formula_text": "(j) t = f (•; θ (j) t )." }, { "formula_coordinates": [ 4, 175.64, 521.63, 140.76, 9.65 ], "formula_id": "formula_6", "formula_text": "d(y 1 , y 2 ) = ∥y 1 -y 2 ∥ ∞ = I y1=y2 ." }, { "formula_coordinates": [ 6, 439.75, 479.74, 9.93, 6.12 ], "formula_id": "formula_7", "formula_text": "(j)" }, { "formula_coordinates": [ 6, 250.78, 523.07, 253.89, 22.31 ], "formula_id": "formula_8", "formula_text": "τ α = inf t | d dt D t > α ,(2)" }, { "formula_coordinates": [ 7, 251.62, 581.69, 109.95, 22.31 ], "formula_id": "formula_9", "formula_text": "d dt θ = -∇ θ d N (f t , f noisy )," }, { "formula_coordinates": [ 7, 207.19, 642.27, 197.61, 30.56 ], "formula_id": "formula_10", "formula_text": "⟨g, h⟩ K = 1 N 2 xi,x ′ j ∈S N,X g T (x i )K(x i , x ′ j )h(x ′ j )" }, { "formula_coordinates": [ 7, 108, 684.32, 397.74, 29.56 ], "formula_id": "formula_11", "formula_text": "(j) t , f clean ) > 0. Meanwhile, d dt D t > 0 equals with d dt d N (f (1) t , f(2)" }, { "formula_coordinates": [ 8, 213.17, 72.47, 291.5, 44.14 ], "formula_id": "formula_12", "formula_text": "(j) t satisfies that ∀j, |⟨f (-j) t -f clean , f (j) t -f clean ⟩ K (j) t | < δ/2,(3)" }, { "formula_coordinates": [ 8, 224.69, 120.09, 276.11, 16.87 ], "formula_id": "formula_13", "formula_text": "(-j) t -f clean , f noisy -f clean ⟩ K (j) t | < ϵ/2. (4" }, { "formula_coordinates": [ 8, 500.8, 123.54, 3.87, 8.64 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 8, 131.41, 173.46, 75.84, 14.73 ], "formula_id": "formula_15", "formula_text": "1. d dt d N (f (1) t , f(2)" }, { "formula_coordinates": [ 8, 131.41, 173.46, 261.81, 39.98 ], "formula_id": "formula_16", "formula_text": "t ) > δ + ϵ implies ∃j, d dt d N (f (j) t , f clean ) > 0; 2. ∀j, d dt d N (f (j) t , f clean ) > 0 implies d dt d N (f (1) t , f(2)" }, { "formula_coordinates": [ 8, 340.5, 201.85, 65.45, 10.61 ], "formula_id": "formula_17", "formula_text": "t ) > -(δ + ϵ)" }, { "formula_coordinates": [ 8, 108, 341.42, 396.17, 24.5 ], "formula_id": "formula_18", "formula_text": "Condition (3) is automatically satisfied if ∥f (k) t -f clean ∥ 2 ≲ δ/∥K (j) t ∥, ∀j, k. So the smallest δ for T δ,ϵ to be non-empty is δ * ∼ ∥K τ (j) ∥∥f τ (j) -f clean ∥ 2 ." }, { "formula_coordinates": [ 9, 275.44, 338.82, 229.22, 16.21 ], "formula_id": "formula_19", "formula_text": "D * = max t>τ0 D t ,(5)" }, { "formula_coordinates": [ 16, 213.17, 683.73, 185.67, 37.23 ], "formula_id": "formula_20", "formula_text": "(-j) t -f clean , f (j) t -f clean ⟩ K (j) t | < δ/2, |⟨f (-j) t -f clean , f noisy -f clean ⟩ K (j) t | < ϵ/2." } ]
Double Descent of Discrepancy: A Task-, Data-, and Model-Agnostic Phenomenon
In this paper, we studied two identically-trained neural networks (i.e. networks with the same architecture, trained on the same dataset using the same algorithm, but with different initializations) and found that the discrepancy between their outputs on the training dataset exhibits a "double descent" phenomenon. We demonstrated through extensive experiments across various tasks, datasets, and network architectures that this phenomenon is prevalent. Leveraging this phenomenon, we proposed a new early stopping criterion and developed a new method for data quality assessment. Our results show that a phenomenon-driven approach can benefit deep learning research in both theoretical understanding and practical applications.
Yifan Luo
[ { "figure_caption": "Figure 1 :1Figure 1: Double descent of discrepancy", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "(a) CIFAR-10, VGG (b) CIFAR-10, ResNet (c) CIFAR-10, DenseNet (d) Mini-ImageNet, VGG (e) Mini-ImageNet, ResNet (f) Mini-ImageNet, DenseNet", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: D t curves, classification.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: log 10 D t curves, implicit neural representation.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Different stopping times. Red: optimal. Green: our criterion.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(a) Dt curves (b) E vs D *", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Noise level vs max discrepancy. CIFAR-10, ResNet.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 : 10 (610Figure 6: D t curves, classification, CIFAR-10", "figure_data": "", "figure_id": "fig_8", "figure_label": "610", "figure_type": "figure" }, { "figure_caption": "Figure 7 :(7Figure 7: D t curves, classification, CIFAR-100", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: D t curves, classification, Mini-ImageNet", "figure_data": "", "figure_id": "fig_10", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: D t curves, implicit neural representation, SIREN", "figure_data": "", "figure_id": "fig_11", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :Figure 11 :1011Figure 10: D t curves, implicit neural representation, DIP", "figure_data": "", "figure_id": "fig_12", "figure_label": "1011", "figure_type": "figure" }, { "figure_caption": "PSNR gaps, Gaussian noise, σ = 25", "figure_data": "House Pep. Lena Bab. F16 K01 K02 K03 K12ES-WMV1.421.02 0.39 3.87 0.72 0.40 1.62 1.39 1.63Ours0.300.25 0.31 4.26 0.30 0.76 0.76 0.93 0.56", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work on Mini-ImageNet provides a dataset for the classification tasks conducted in the citing paper, serving as a methodological basis for the research."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work on Visual Geometry Group (VGG) provides a network architecture that the citing paper uses in their research for classification tasks."}, {"Category": "Data Source", "Citation": "[14]", "Explanation": "The cited work on Residual Networks (ResNet) provides a network architecture that the citing paper uses in their research for classification tasks."}, {"Category": "Data Source", "Citation": "[15]", "Explanation": "The cited work on Densely Connected Convolutional Networks (DenseNet) provides a network architecture that the citing paper uses in their research for classification tasks."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work on Vision Transformer provides network architectures that the citing paper uses in their research for classification tasks."}, {"Category": "Data Source", "Citation": "[17]", "Explanation": "The cited work on Vision Transformer provides network architectures that the citing paper uses in their research for classification tasks."}, {"Category": "Data Source", "Citation": "[18]", "Explanation": "The cited work on Vision Transformer provides network architectures that the citing paper uses in their research for classification tasks."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work provides the dataset used in the neural network training process for implicit neural representation tasks, which serves as the basis for the research conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work introduces the neural network architecture of SIREN, which is used in the research conducted in the citing paper to represent images in the dataset."}, {"Category": "Supporting Evidence", "Citation": "[21]", "Explanation": "The cited work presents the deep image prior (DIP) architecture, which is also used in the research conducted in the citing paper to represent images in the dataset."}, {"Category": "Data Source", "Citation": "Appendix B.2", "Explanation": "The appendix provides additional information on the dataset and the training process used in the research conducted in the citing paper, which is sourced from the cited works."}, {"Category": "Supporting Evidence", "Citation": "[22]", "Explanation": "The cited work, neural tangent kernel (NTK), is used to support the claim that the double descent of discrepancy phenomenon does not occur in linear feature models or models with linear properties during training. 
The category of the relationship is Supporting Evidence, as the cited work provides foundational data or theories that support the claims made in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[23]", "Explanation": "The cited work provides the concept of early stopping in machine learning to avoid overfitting, which the citing paper builds upon in the context of image denoising tasks."}, {"Category": "Methodological Basis", "Citation": "[24; 25]", "Explanation": "The cited works point out the drawbacks of validation-based early stopping criteria, which the citing paper adopts in the development of a new criterion without validation sets for image denoising tasks."}, {"Category": "Extension or Continuation", "Citation": "[24; 26; 27]", "Explanation": "The cited works have motivated the development of early stopping criteria without validation sets in the field of machine learning, which the citing paper extends by applying the D 3 phenomenon to construct a new criterion for image denoising tasks."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work, ES-WMV, serves as a stopping criterion specifically designed for DIP, which the citing paper adopts in their experimental setup to measure the performance of the proposed criterion."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work is the Vision Transformer (ViT-B) model, which the citing paper uses as a network architecture in their experiments."}, {"Category": "Data Source", "Citation": "[20]", "Explanation": "The cited work provides the model used in the demo1 of the cited work for SIREN, which the citing paper adopts in their research."}, {"Category": "Data Source", "Citation": "[21]", "Explanation": "The cited work provides the model used in the demo2 of the cited work for DIP, which the citing paper uses in their research."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work provides the basic dataset used in the experiments conducted in the citing paper, which serves as the foundation for the analysis and results presented."}, {"Category": "Data Source", "Citation": "[32]", "Explanation": "The cited work provides the network architecture and training details for the GCN used in the experiments, which the citing paper adopts in its research."}]
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b29", "b0", "b2", "b16", "b19" ], "table_ref": [], "text": "The state-of-the-art dialogue systems are designed for assisting the user to execute a task, holding limited chit-chat conversations with shallow user engagement, or information retrieval over a finite set of topics. The personalization in these systems is limited to a stereotypical user model. This user model is implicitly inferred from conversations with many users, or is limited to a superficial list of persona statements (e.g., \"He likes dogs\") (Zhang et al., 2018). The dialogue sessions are disconnected and the shared information across sessions is negligible and close to none.\nLongitudinal Dialogue (LD) is one of the most challenging types of conversation for human-machine dialogue systems. LDs are multi-session interactions that encompass user-specific situations, thoughts, and emotions. Dialogue systems designed for LDs should interact uniquely with each user about personal life events and emotions over multiple sessions and long periods of time (e.g. weeks). Through each session in LDs, the dialogue system must learn about the user's personal space of events and participants and social interactions, and engage the user in personal dialogues regarding their thoughts, feelings, and personal and world events.\nFigure 1 shows an example of three types of human-machine dialogues: task-based, opendomain chit-chat and LD. The user dialogues with the tasked-based dialogue system consists of either independent command-and-control exchanges such as on Day 1, or a task-driven dialogue such as on Day 2. The user model in this system is not personal as it adopts a stereotypical model -implicitlyinferred from dialogue corpora with multiple users. In the open-domain chit-chat dialogue, the dialogue does not include the execution of any explicit task, and the model engages the user in a conversation about movies and news. A common characteristic of task-based and open-domain dialogues is the fact that there is no personal information carried to the next dialogue session. The system does not update/modify the user model with each dialogue session and the level of personalization is intact from one interaction to the other (Personalization in the natural language processing and dialogue models could be added based on the voice user interface requirements and could include the exploitation of personal information such as contact directory, preferences, etc.).\nIn contrast, the model designed for the LD must account for three main differences compared to the other two systems; A) the contents of the LD are not about general information or knowledge matters as LDs encompass personal emotions, user and time-specific situations, and participants; B) the sessions are not disconnected dialogues and we can not model them as stand-alone interactions. In contrast, they belong to a multi-session interaction unique to the individual user, where the information shared in each interaction creates a common ground between the machine and the user. For each interaction, the system must engage the user in a dialogue respecting the common ground based on the information shared in the previous interactions, as well as the novel information in the new dialogue history; C) the machine has to extract the personal information presented in the user responses to construct and update the user model and respond coherently. 
Similar to a natural interaction between human speakers, the model has to gradually become acquainted with the user throughout the dialogues and not from a superficial list of sentence-based persona descriptions.\nThere has been limited research on personal conversations with users over a long period of time.\nEngaging the user to elaborate on personal situations and emotions is a challenging task and designing appropriate collection/elicitation methodologies is not straightforward. As a result, research on multi-session dialogues resorts to crowd-sourcing datasets with superficial persona statements and pretended longitudinality (Xu et al., 2022a,b;Bae et al., 2022). Meanwhile, studies on LDs have been limited to inferring user's attributes such as age and gender (Welch et al., 2019b), or next quickresponse selection from a candidate set of \"yes,\" \"haha,\" \"okay,\" \"oh,\" and \"nice\" (Welch et al., 2019a).\nIn this work, we study the task of response generation in LDs. Response generation in LDs is subject to appropriateness and accuracy as well as personalization and engagement of the user. The level of personalization in LDs is beyond a set of personal preferences and can not be learned from a limited set of persona statements (\"I like cars\" does not necessarily imply that I like to talk about cars in my interactions). The generated response needs to respect individuals' states, profiles, and experiences that vary among users and dialogue sessions. Therefore, we can not collect a massive knowledge base of user models that can suit all individuals and scenarios. The dialogue system should learn about each user and derive the individual user model through/from the previous dialogue sessions to generate a personal response that is coherent with respect to the dialogue context as well as the previous dialogue sessions.\nWe investigate the applicability of generalpurpose Pre-trained Language Models (PLM) for grounded response generation in LDs. We study whether PLMs can generate a response that is coherent with respect to the dialogue history and grounded on the personal knowledge the user has shared in previous interactions. We conversation- ally fine-tuned two recent PLMs, GePpeTto (GPT-2) (De Mattei et al., 2020) and iT5 (Sarti and Nissim, 2022), using a dataset of LDs about real-life events, feelings, and situations that the user has experienced. We use the responses each individual user shared in the previous dialogue sessions with the system as personal knowledge and evaluate whether grounding the generation on such knowledge results in more appropriate and personal responses. In previously published research on grounded generation, the knowledge sequence is provided to the model as-is. In this work, we experiment with three different representations of the knowledge piece; A) Raw as unprocessed text, similar to the previously published research; B) bag of head nouns as a distilled syntactic representation of the knowledge; C) graph representation of the events and participants mentioned in the user responses (Mousavi et al., 2021b). An example of a dialogue and different representations of the corresponding personal knowledge is shown in Figure 2.\nWe evaluate the performance of the models and the impact of different knowledge representations through automatic and human evaluations, as well as explainability studies using the Integrated Gradients technique (Sundararajan et al., 2017). 
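As an illustration of the setup described above, a minimal sketch of how the three knowledge representations could be assembled from a user's previous-session responses, and how the grounded input to the PLM could be formed, is given below. The spaCy Italian pipeline, the head-noun heuristic, the triple-based linearization of the event graph, and the separator tokens are all illustrative assumptions and not the exact pipeline used in this work.

```python
import spacy  # an Italian pipeline such as "it_core_news_sm" is assumed to be installed

nlp = spacy.load("it_core_news_sm")

def raw_knowledge(user_turns):
    # (A) RAW: the user's previous-session responses, concatenated as unprocessed text
    return " ".join(user_turns)

def boh_knowledge(user_turns):
    # (B) BOH: bag of head nouns; approximated here as nouns whose syntactic head
    # is not itself a noun (i.e. the heads of noun phrases)
    heads = set()
    for doc in nlp.pipe(user_turns):
        for tok in doc:
            if tok.pos_ in {"NOUN", "PROPN"} and tok.head.pos_ not in {"NOUN", "PROPN"}:
                heads.add(tok.lemma_.lower())
    return " ".join(sorted(heads))

def psg_knowledge(event_graph):
    # (C) PSG: linearization of the personal graph of events and participants;
    # the (event, relation, participant) triples and tags are an assumed format
    return " ".join(f"<event> {e} <rel> {r} <arg> {p}" for e, r, p in event_graph)

def build_model_input(knowledge, history, k_sep="<knowledge>", h_sep="<history>"):
    # Grounded input to the fine-tuned PLM: knowledge segment followed by the
    # last turns of the ongoing dialogue (separator tokens are illustrative)
    return f"{k_sep} {knowledge} {h_sep} " + " ".join(history)
```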
Our contributions can be summarised as follows:\n• To the best of our knowledge this is the first study on the task of response generation in LDs. • We conversationally fine-tune two PLMs with and without grounded response generation on personal knowledge. We study the performance of the models and how different representations of knowledge can affect generation quality. • We evaluate and compare the performance of the models using automatic evaluation, including explainability studies, and human evaluations, including studying the sub-dimensional errors made by each model." }, { "figure_ref": [], "heading": "Literature Review", "publication_ref": [ "b30", "b22", "b31", "b30", "b3", "b31", "b5", "b29", "b26", "b7", "b9", "b4", "b0" ], "table_ref": [], "text": "Grounded Response Generation PLMs have achieved comparably well performance for opendomain chit-chats (Zhang et al., 2020), goaloriented agents (Thulke et al., 2021) and question answering (Zhao et al., 2020). However, such models can generate inappropriate and/or generic responses which can lead to ethical problems and low user engagement (Zhang et al., 2020). Research to address this problem and improve the generation quality includes grounding the generation on external knowledge content. The selection of the knowledge source to ground the generation has been studied as an individual component (Hedayatnia et al., 2020), as well as a joint task along with response generation (Zhao et al., 2020;Huang et al., 2021).\nPersonal Dialogue Research on personalized response generation has focused on persona descriptions and synthetic sets of user preferences and profiles. Zhang et al. (2018) collected Persona-Chat dataset of open-domain dialogues using crowd workers, where the workers were instructed to impersonate as speakers with synthetic personas of 5 sentences. This dataset has been studied for personal response generation by fine-tuning PLMs (Wolf et al., 2019;Kasahara et al., 2022), by learning the users' persona from the dialogues samples rather than the persona descriptions (Madotto et al., 2019), or investigating different representations of persona statements (Huang et al., 2022) Multi-session Dialogue Studies on multisession dialogues have been limited to simulated longitudinality and superficial persona. Xu et al. (2022a) extended the Persona-Chat dataset to a multi-session chat dataset with 4 to 5 sessions, by instructing crowd-workers to impersonate the role of returning dialogue partners in the first session (extracted from the Persona-Chat dataset) after a random amount of time. The workers were explicitly asked not to discuss any personal and real-life matters but play the role defined by the persona statements. This approach was further used by Bae et al. (2022) to extend an existing dataset of persona chats in Korean to multi-session dialogues. Xu et al. (2022b) proposed a framework for persona memory in multi-session dialogues and collected a dataset of persona chats in Chinese via crowd workers." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Dataset", "publication_ref": [], "table_ref": [], "text": "The dataset of LDs used in this work (Mousavi et al., 2021a) consists of two dialogue sessions for each individual user. The first dialogue session is a set of personal human-machine conversations with real users encompassing their personal life events and emotions. 
These dialogues are collected from a group of 20 Italian native speakers receiving therapy to handle their distress more effectively. Throughout the interaction, the machine prompts the user to engage her in the recollection of daily life events the user has experienced, while the user shares details about the events and participants that have activated her emotions by answering a set of questions.\nFor each user, the first session is then followed by a follow-up dialogue. These dialogues were elicited from 4 psychotherapists and 4 trained annotators supervised by the psychotherapists. In the second dialogue session, the user tends to share more details about her feelings and the possible evolution of the previously mentioned events. Meanwhile, the listener provides personal suggestions and asks questions to expand or disambiguate previously stated facts or feelings. A mock-up example of a second dialogue session and the corresponding user response in the previous dialogue is shown in Figure 2. This dataset consists of 800 2-session LDs in the mental health domain with an average of 5 turns per dialogue." }, { "figure_ref": [], "heading": "Models", "publication_ref": [ "b2", "b13", "b8", "b16", "b14", "b23" ], "table_ref": [], "text": "We fine-tuned two state-of-the-art PLMs using the dataset of LDs.\nGePpeTto: Italian GPT-2 The first model we experimented with is GePpeTto (De Mattei et al., 2020), a PLM based on GPT-2 small (12 layers of decoder, 117M parameters) (Radford et al., 2019), trained for the Italian language (13 GB corpus size). We fine-tuned the model using AdamW optimizer (Loshchilov and Hutter, 2017) with an early-stopping wait counter equal to 3 and a history window of 2 last turns.\niT5: Italian T5 The second PLM in our experiments is iT5 (Sarti and Nissim, 2022), a PLM based on T5 (Raffel et al., 2020), trained on the Italian portion of mC4 corpus (275 GB corpus size). We experimented with iT5-Small (12 layers, 60M parameters) and iT5-Base (24 layers, 220M parameters)1 . We fine-tuned this model class using AdaFactor optimizer (Vaswani et al., 2017) with early stopping wait counter equal to 3 and a history window of 4 last turns." }, { "figure_ref": [], "heading": "Grounded Response Generation", "publication_ref": [], "table_ref": [], "text": "For each user, we extracted her responses in the first dialogue session as personal knowledge to ground the response generation for the second dialogue session. We experimented with three representations of the knowledge piece:\n• (A) RAW: We provide the responses of the user in the previous dialogue as an unprocessed knowledge piece. The average length of knowledge with this representation is 126.7 tokens.\n• " }, { "figure_ref": [], "heading": "Evaluations", "publication_ref": [], "table_ref": [], "text": "The fine-tuning of the models was done using 80% of the dialogues (640 second-session dialogues, 1284 samples with different turn levels), while the remaining data was split into 10% (80 dialogues, 160 samples with different turn levels) as the validation set for parameter engineering and earlystopping, and 10% as unseen test set. Each split was sampled at the dialogue level to guarantee no history overlap among splits. An example of a second dialogue session and the generated responses are presented in Appendix Table 5." }, { "figure_ref": [ "fig_4" ], "heading": "Automatic Evaluation", "publication_ref": [ "b1" ], "table_ref": [], "text": "The results of the automatic evaluation of the models is presented in Table 1. 
The perplexity scores cannot be used to compare the performance between GePpeTto and iT-5 model classes as the vocabulary distributions in the pre-training phase of the two PLMs are not identical. However, the scores are comparable among iT5 variations as the same model class pre-trained using the same data.\nIn fact, the perplexity scores indicate that iT5-Base demonstrates a better performance than iT5-Small in all combinations with knowledge representations. Therefore, we select iT5-Base among the iT5 models and focus the rest of the analysis on GePpeTto and iT5-Base. 2.04 7.70 +BOHKnowl.\n2.12 8.40\n+P SGKnowl.\n2.09 8.07\nTable 1: Automatic evaluation of the models indicates that incorporating the knowledge slightly increases the models' perplexity (Perplexity scores can not be compared among models since the vocabulary distributions of pre-training data are not identical). of the train set, thus the fine-tuning has been more effective. However, in the second half of the data, both models show a steady trend while iT5-Base achieves a gradual improvement.\nTo investigate the impact of grounding on the response lexicalization of the models, we measured the diversity in the generated responses for the test set samples via BLEU-4 score, Figure 4. We observed that there is a higher similarity among responses generated by iT5 models, while the responses generated by GePpeTto variations are more diverse. A similar finding has been observed in the literature about the performance of autoregressive models compared to encoder-decoder architectures regarding novelty in sequence generation (Tekiroglu et al., 2022;Bonaldi et al., 2022). Further, responses generated by iT5-Base with BOH and PSG representations have the lowest lexical similarity. The responses with the highest lexical similarity are generated by iT5-Base with no grounding and RAW representation. Nevertheless, there is a negligible lexical similarity between the generated responses and the ground truth." }, { "figure_ref": [ "fig_5" ], "heading": "Human Evaluation", "publication_ref": [], "table_ref": [ "tab_3", "tab_3" ], "text": "We sampled 50% of the unseen test set (42 dialogue histories, 80 samples with different turn levels) and evaluated the generated responses via human judges. We evaluated the responses according to four criteria using the protocol proposed by Mousavi et al. ( 2022):\n• Correctness: evaluating grammatical and syn-tactical structure of the response.\n• Appropriateness: evaluating the response to be a proper and coherent continuation with respect to the dialogue history. • Contextualization: evaluating whether the response refers to the context of the dialogue (not generic) or it consists of nonexisting/contradicting information (hallucination cases). • Listening: whether the generated response shows that the speaker is following the dialogue with attention.\nThe annotators were asked to evaluate the response candidates and select a decision for each criterion from a 3-point Likert scale as positive (eg. Correct, Appropriate), negative (eg. Not Correct, Not Appropriate), and \"I don't know\". We recruited 35 native Italian crowd-workers through Prolific crowd-sourcing platform3 . The workers were asked to perform a qualification task consisting of evaluating 5 samples (sampled from the validation set) in an identical setting to the main task. For the main evaluation, each crowd-worker annotated 3 response candidates for 10 dialogue histories, and each sample was annotated by 7 crowd-workers. 
We also asked the annotators to motivate their decisions for appropriateness and contextualization criteria by providing an explanation to point out possible errors in the generated response. Moreover, the ground truth was also included in the candidate set to be evaluated.\nThe Inter Annotator Agreement (IAA) level measured by Fleiss' κ, presented in Appendix Table 4, indicates high levels of subjectivity and complexity in Contextualization criterion, suggesting that it has been difficult for the annotators to assess this aspect of the responses.\nThe results of the human evaluation of responses are presented in Table 2 (the scores are obtained by majority voting). The evaluation of GePpeTto models shows that grounding generally worsens the performance of GePpeTto, regardless of the representation format, as the best performance is achieved by GePpeTto with no knowledge grounding. Nevertheless, BOH and PSG representations slightly improve the grammatical correctness of this model. To gain better insight into the errors made by each model, we investigated the reasons provided by the annotators for their judgments. These results, presented in Figure 5, are complementary to the evaluation decisions, Table 2, and point out the errors that resulted in the negative evaluation of a response by the annotators. The analysis shows that grounding reduces the cases of genericness in rejected responses by GePpeTto while it slightly escalates this issue in iT5-Base rejected responses. Moreover, the rejected responses of iT5-Base with RAW representation were more hallucinated than other representations. Nevertheless, grounding does have any positive impact on the cases of incoherence in rejected responses of the PLMs." }, { "figure_ref": [], "heading": "Generation Explainability", "publication_ref": [ "b19", "b17" ], "table_ref": [], "text": "According to the human evaluation results, iT5-Base with knowledge grounding achieves the best performance among PLMs. We investigated the contribution of personal knowledge and different representations on the performance of the model at inference time. We studied the attribution scores of the input tokens using the Integrated Gradients technique (Sundararajan et al., 2017;Sarti et al., 2023) based on backward gradient analysis. We experimented with two thresholds for the attribution scores:\n• Positive Contribution: Based on the assumption that elements with positive scores have a positive influence on the model's performance, we investigated the tokens with positive attribution scores, However, tokens with small attribution scores have negligible contributions and thus this analysis can be noisy. • Significant Contribution: To identify the tokens with significant contributions to the generation, we selected the top-25% of the tokens in the input sequence (knowledge and history) according to their attribution score. We then investigated what portion of these tokens belong to each segment of the input vector. For a fair comparison, the values are normalized over the segment length.\nAccording to Positive Contribution analysis, 74% of the tokens in the RAW representation have a positive contribution to the generation with the majority (30%) of tokens being verbs and nouns. This percentage for BOH (Bag of Head Nouns) representation changes to 79.0%. This result suggests the importance of nouns for the model inference. 
Regarding the PSG representation, 55.6% of the tokens have a positive contribution to the generation (excluding the tags used for linearization), with the " }, { "figure_ref": [], "heading": "Models", "publication_ref": [], "table_ref": [], "text": "Knowl. History" }, { "figure_ref": [], "heading": "iT5-Base", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "+RAW Knowl.\n44.6% 55.4% +BOHKnowl.\n39.5% 60.5%\n+P SGKnowl.\n38.7% 61.3% majority (68%) of tokens being events rather than participants.\nThe analysis of the tokens with significant contributions is presented in Table 3. Regarding the model with RAW representation, the percentage of tokens with high attribution scores is almost balanced between the knowledge and history segments. However, for the models with refined representations of knowledge (BOH and PSG), the dialogue history contains moderately more significantly contributing tokens." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We studied the task of response generation in Longitudinal Dialogues (LD), where the model should learn about the user's thoughts and emotions from the previous dialogue sessions and generate a personal response that is coherent with respect to the user profile and state, the dialogue context, as well as the previous dialogue sessions. We finetuned two state-of-the-art PLMs for Italian, using a dataset of LDs in the mental health domain. We experimented with grounded generation using user responses in the previous dialogue session as userspecific knowledge. We investigated the impact of different representations of the knowledge, including a graph representation of personal life events and participants mentioned previously by the user.\nOur evaluations showed there is still a huge gap between the performance of the general-purpose PLMs with knowledge grounding and the ground truth. Nevertheless, we observed that a) refined representations of the knowledge (such as BOH and PSG) can be more informative and less noisy for a grounded generation; b) the encoder-decoder model exhibited more diversity in the outputs compared to the auto-regressive model; c) knowledge grounding reduces the cases of genericness in response, though it can result in more hallucinated responses." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The dataset used in this work is in Italian and there may be language-specific limitations in the model performance. GePpeTto is the only candidate for auto-regressive models for the Italian language at the time of this research. Therefore, its performance may be limited due to the small number of parameters. We were unable to experiment with iT5-Large model due to computation power limitations." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "I think that working in the morning and in the afternoon was not tiring, actually it was pleasant. I was also able to go to bed early enough, and I am well rested." }, { "figure_ref": [], "heading": "Response Candidates", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ground Truth", "publication_ref": [], "table_ref": [], "text": "Good! Did you even manage to spend time with your daughter? " }, { "figure_ref": [], "heading": "GePpeTto", "publication_ref": [], "table_ref": [], "text": "" } ]
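To make the attribution analysis concrete, a minimal sketch of the token-level bookkeeping is given below, assuming per-token attribution scores (e.g. from Integrated Gradients) have already been computed and that the knowledge segment precedes the dialogue history in the input vector. The proportional length normalization is one reading of the procedure behind Table 3 and may differ from the exact computation used.

```python
import numpy as np

def positive_contribution_rate(scores):
    """Fraction of input tokens with a positive attribution score."""
    return float((np.asarray(scores) > 0).mean())

def segment_attribution_shares(scores, knowledge_len, top_frac=0.25):
    """Split of the top-`top_frac` attribution tokens between the knowledge and
    history segments, normalized by segment length and rescaled to percentages."""
    scores = np.asarray(scores)            # one attribution score per input token
    history_len = len(scores) - knowledge_len
    k = max(1, int(round(top_frac * len(scores))))
    top_idx = np.argsort(scores)[-k:]      # indices of the k highest-scoring tokens
    in_knowledge = int(np.sum(top_idx < knowledge_len))  # knowledge tokens come first
    in_history = k - in_knowledge
    normalized = np.array([in_knowledge / knowledge_len, in_history / history_len])
    return 100 * normalized / normalized.sum()
```

Called on the scores of a single test sample, the second function returns knowledge/history percentages of the kind reported in Table 3; averaging over the test set would give the per-model figures.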
2023-05-25
10.18653/v1/2021.nlp4convai-1.14
[ { "authors": "Sanghwan Bae; Donghyun Kwak; Soyoung Kang; Min Young Lee; Sungdong Kim; Yuin Jeong; Hyeri Kim; Sang-Woo Lee; Woomyoung Park; Nako Sung", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Keep me updated! memory management in long-term conversations", "year": "2022" }, { "authors": "Helena Bonaldi; Sara Dellantonio; Serra Sinem Tekiroglu; Marco Guerini", "journal": "", "ref_id": "b1", "title": "Humanmachine collaboration approaches to build a dialogue dataset for hate speech countering", "year": "2022" }, { "authors": "Lorenzo De Mattei; Michele Cafagna; Felice Dell'orletta; Malvina Nissim; Marco Guerini", "journal": "", "ref_id": "b2", "title": "Geppetto carves italian into a language model", "year": "2020" }, { "authors": "Karthik Behnam Hedayatnia; Seokhwan Gopalakrishnan; Yang Kim; Mihail Liu; Dilek Eric; Hakkani-Tur", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Policy-driven neural response generation for knowledge-grounded dialog systems", "year": "2020" }, { "authors": "Qiushi Huang; Yu Zhang; Tom Ko; Xubo Liu; Bo Wu; Wenwu Wang; Lilian Tang", "journal": "", "ref_id": "b4", "title": "Personalized dialogue generation with persona-adaptive attention", "year": "2022" }, { "authors": "Xinxian Huang; Huang He; Siqi Bao; Fan Wang; Hua Wu; Haifeng Wang", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "PLATO-KAG: Unsupervised knowledge-grounded conversation via joint modeling", "year": "2021" }, { "authors": "K Chaitanya; Fei Joshi; Boi Mi; Faltings", "journal": "", "ref_id": "b6", "title": "Personalization in goal-oriented dialog", "year": "2017" }, { "authors": "Tomohito Kasahara; Daisuke Kawahara; Nguyen Tung; Shengzhe Li; Kenta Shinzato; Toshinori Sato", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Building a personalized dialogue system with prompt-tuning", "year": "2022" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b8", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Andrea Madotto; Zhaojiang Lin; Chien-Sheng Wu; Pascale Fung", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Personalizing dialogue agents via meta-learning", "year": "2019" }, { "authors": "Mahed Seyed; Alessandra Mousavi; Morena Cervone; Giuseppe Danieli; Riccardi", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "a. Would you like to tell me more? generating a corpus of psychotherapy dialogues", "year": "2021" }, { "authors": "Mahed Seyed; Roberto Mousavi; Giuseppe Negro; Riccardi", "journal": "", "ref_id": "b11", "title": "An unsupervised approach to extract life-events from personal narratives in the mental health domain", "year": "2021" }, { "authors": "Mahed Seyed; Gabriel Mousavi; Michela Roccabruna; Simone Lorandi; Giuseppe Caldarella; Riccardi", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Evaluation of response generation models: Shouldn't it be shareable and replicable?", "year": "2022" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b13", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. 
Res", "ref_id": "b14", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "F R Leonardo; Martin Ribeiro; Hinrich Schmitt; Iryna Schütze; Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "Investigating pretrained language models for graph-to-text generation", "year": "2021" }, { "authors": "Gabriele Sarti; Malvina Nissim", "journal": "", "ref_id": "b16", "title": "It5: Largescale text-to-text pretraining for italian language understanding and generation", "year": "2022" }, { "authors": "Gabriele Sarti; Ludwig Sickert; Nils Feldhus; Oskar Van Der Wal", "journal": "", "ref_id": "b17", "title": "Inseq: An interpretability toolkit for sequence generation models", "year": "2023" }, { "authors": " Ab Siddique; Kshitija Maqbool; Hassan Taywade; Foroosh", "journal": "", "ref_id": "b18", "title": "Personalizing task-oriented dialog systems via zero-shot generalizable reward function", "year": "2022" }, { "authors": "Mukund Sundararajan; Ankur Taly; Qiqi Yan", "journal": "", "ref_id": "b19", "title": "Axiomatic attribution for deep networks", "year": "2017" }, { "authors": " Pmlr", "journal": "", "ref_id": "b20", "title": "", "year": "" }, { "authors": "Serra Sinem Tekiroglu; Helena Bonaldi; Margherita Fanton; Marco Guerini", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Using pre-trained language models for producing counter narratives against hate speech: a comparative study", "year": "2022" }, { "authors": "David Thulke; Nico Daheim; Christian Dugast; Hermann Ney", "journal": "", "ref_id": "b22", "title": "Adapting document-grounded dialog systems to spoken conversations using data augmentation and a noisy channel model", "year": "2021" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b23", "title": "Attention is all you need", "year": "2017" }, { "authors": "Charles Welch; Verónica Pérez-Rosas; Jonathan K Kummerfeld; Rada Mihalcea", "journal": "IEEE Intelligent systems", "ref_id": "b24", "title": "Learning from personal longitudinal dialog data", "year": "2019" }, { "authors": "Charles Welch; Verónica Pérez-Rosas; Jonathan K Kummerfeld; Rada Mihalcea", "journal": "", "ref_id": "b25", "title": "Look who's talking: Inferring speaker attributes from personal longitudinal dialog", "year": "2019" }, { "authors": "Thomas Wolf; Victor Sanh; Julien Chaumond; Clement Delangue", "journal": "", "ref_id": "b26", "title": "Transfertransfo: A transfer learning approach for neural network based conversational agents", "year": "2019" }, { "authors": "Jing Xu; Arthur Szlam; Jason Weston; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Beyond goldfish memory: Long-term open-domain conversation", "year": "2022" }, { "authors": "Xinchao Xu; Zhibin Gou; Wenquan Wu; Zheng-Yu Niu; Hua Wu; Haifeng Wang; Shihang Wang", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Long time no see! 
open-domain conversation with long-term persona memory", "year": "2022" }, { "authors": "Saizheng Zhang; Emily Dinan; Jack Urbanek; Arthur Szlam; Douwe Kiela; Jason Weston", "journal": "", "ref_id": "b29", "title": "Personalizing dialogue agents: I have a dog, do you have pets too?", "year": "2018" }, { "authors": "Yizhe Zhang; Siqi Sun; Michel Galley; Yen-Chun Chen; Chris Brockett; Xiang Gao; Jianfeng Gao; Jingjing Liu; Bill Dolan", "journal": "", "ref_id": "b30", "title": "DIALOGPT : Large-scale generative pre-training for conversational response generation", "year": "2020" }, { "authors": "Xueliang Zhao; Wei Wu; Can Xu; Chongyang Tao; Dongyan Zhao; Rui Yan", "journal": "", "ref_id": "b31", "title": "Knowledgegrounded dialogue generation with pre-trained language models", "year": "2020" } ]
[]
Response Generation in Longitudinal Dialogues: Which Knowledge Representation Helps?
Longitudinal Dialogues (LD) are the most challenging type of conversation for human-machine dialogue systems. LDs include the recollections of events, personal thoughts, and emotions specific to each individual in a sparse sequence of dialogue sessions. Dialogue systems designed for LDs should uniquely interact with the users over multiple sessions and long periods of time (e.g. weeks), and engage them in personal dialogues to elaborate on their feelings, thoughts, and real-life events. In this paper, we study the task of response generation in LDs. We evaluate whether general-purpose Pre-trained Language Models (PLM) are appropriate for this purpose. We fine-tune two PLMs, GePpeTto (GPT-2) and iT5, using a dataset of LDs. We experiment with different representations of the personal knowledge extracted from LDs for grounded response generation, including the graph representation of the mentioned events and participants. We evaluate the performance of the models via automatic metrics and the contribution of the knowledge via the Integrated Gradients technique. We categorize the natural language generation errors via human evaluations of contextualization, appropriateness and engagement of the user.
Seyed Mahed Mousavi; Simone Caldarella; Giuseppe Riccardi
[ { "figure_caption": "Figure 1 :1Figure 1: Examples of a task-based dialogue, a chat-chit, and a Longitudinal Dialogue (LD) in two different sessions. The dialogue system for LDs needs to learn about the user in a timely manner and engage her in a personal conversation encompassing her life events, thoughts, and emotions.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An example of a longitudinal dialogue. The user responses in the previous dialogue session are used as personal knowledge for grounded response generation. The knowledge is presented to the model as A) Unprocessed text (RAW); B) Bag of Head nouns (BOH); and C) Personal Space Graph (PSG) of events and their participants in linearized format. The model then encodes the dialogue history and the knowledge piece and generates a response candidate (the last agent turn in the dialogue example).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "B) Bag of Head nouns (BOH): We automatically parse the user responses 2 and extract the head nouns as a distilled syntactic representation of the knowledge. • (C) Personal Space Graph (PSG): We represent the knowledge by the personal graph of the events and participants mentioned by the user Mousavi et al. (2021b). The predicates in a sentence represent an event, and its corresponding noun dependencies (subject, object) represent the participants. In this graph, the participants are the nodes while the predicates are the relations (edges) among the participants. We obtain a linear representation of the graph using an approach inspired by Ribeiro et al. (2021) in which the authors observed that providing a linearized representation of the graph to the PLMs results in outperforming the models with a graph-specific structural bias for the task of graph-to-text generation.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Perplexity score trends of the models over increasing size of the training set. The performance of GePpeTto variations is considerably improved after observing 50% of the fine-tuning training set.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Lexical similarity among generated responses measured by BLEU-4 score. The results indicate a higher similarity among the responses generated by iT5-Base models.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Explanations provided by the crowd-workers to motivate their negative judgments in Appropriateness and Contextualization criteria, represented by the percentage of the times the error category (x-axis) was selected.The figure is obtained by considering all the votes (i.e. not majority voting). Note that the labels are not mutually exclusive.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": ". While the mentioned work focused on personalization in open-domain dialogues, Joshi et al. 
(2017) generated profiles consisting of gender, age, and food preference permutations for the user side in restaurant booking dialogues, which was used in another work (Siddique et al., 2022) to generate personalized responses in a task-based dialogue.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The highest level of Contextualization among grounded GePpeTto models is achieved by PSG representation. Regarding iT5-Base varia-", "figure_data": "Human EvaluationModelsnllpplCorrectness Appropriateness Contextualization ListeningGround Truth--97.62%100.0%97.62%97.62%GePpeTto2.76 15.8483.33%66.67%69.05%64.29%+RAW Knowl.2.79 16.3383.33%59.52%57.14%57.14%+BOHKnowl.2.85 17.3892.86%45.24%52.38%42.86%+P SGKnowl.2.77 16.0690.48%54.76%64.29%50.00%iT5-Base2.057.79100.0%66.67%73.81%66.67%+RAW Knowl.2.047.7085.71%80.95%80.95%76.19%+BOHKnowl.2.128.4092.86%80.95%85.71%83.33%+P SGKnowl.2.098.0795.24%73.81%90.48%83.33%", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Human Evaluation of the fine-tuned models. The results show the impact of different representations of the knowledge source for grounded response generation in LDs. Refined representations of the knowledge (BOH and PSG) generally result in better performances than RAW representation.", "figure_data": "tions, the results indicate that grounding improvesthe models' performance considerably with respectto Appropriateness, Contextualization, and Listen-ing. However, it decreases the model's Correct-ness with the highest decrease caused by RAWrepresentation. PSG representation achieves thehighest level of Contextualization and Listeningoverall, besides the highest level of Correctnessamong grounded models. Therefore, refined repre-sentations of the knowledge (BOH and PSG) gen-erally result in better performances compared toRAW representation. Nevertheless, there is stilla huge gap between the performance of the best-performing model and the ground truth, suggestingthe grounded PLMs are not suitable dialogue mod-els for LDs in the mental health domain.", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Percentage of tokens with significant contribution to the generation (top-25%) in knowledge and history segments of the input vector for each model.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" } ]
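The caption of Figure 2 above describes three representations of the personal knowledge extracted from a previous session: the unprocessed user turns (RAW), a Bag of Head nouns (BOH), and a linearized Personal Space Graph (PSG) whose nodes are the noun dependencies (participants) and whose edges are the predicates (events). As a rough illustration of how such representations might be built, the sketch below is a minimal, hypothetical pipeline using spaCy; the pipeline name, the dependency labels, and the linearization format are assumptions for illustration, not the authors' implementation.

```python
# Minimal, hypothetical sketch (not the authors' code) of the BOH and PSG
# representations described in the Figure 2 caption. Assumes a spaCy Italian
# pipeline is installed; labels and output formats are illustrative choices.
import spacy

nlp = spacy.load("it_core_news_sm")  # assumed pipeline name

def bag_of_head_nouns(turn: str) -> list:
    """BOH (rough approximation): keep nouns that head their own noun phrase."""
    doc = nlp(turn)
    return [t.lemma_.lower() for t in doc
            if t.pos_ in ("NOUN", "PROPN") and t.head.pos_ not in ("NOUN", "PROPN")]

def personal_space_triples(turn: str) -> list:
    """PSG edges: each predicate (verb) links its subject/object dependencies."""
    doc = nlp(turn)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c.text for c in token.children if c.dep_.startswith("nsubj")]
            objects = [c.text for c in token.children if c.dep_ in ("obj", "dobj", "iobj")]
            for s in subjects:
                for o in objects or ["<none>"]:
                    triples.append((s, token.lemma_, o))
    return triples

def linearize_psg(triples) -> str:
    """Flatten the graph into a string that a PLM can consume as grounding."""
    return " | ".join(f"{s} <rel> {p} <obj> {o}" for s, p, o in triples)
```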
[{"Category": "Methodological Basis", "Citation": "(Zhang et al., 2018)", "Explanation": "The cited work provides a list of persona statements that the citing paper uses to limit the personalization in their dialogue systems to a superficial level."}, {"Category": "Methodological Basis", "Citation": "(Welch et al., 2019a)", "Explanation": "The cited work by Welch et al. (2019a) has been used to study the task of response generation in LDs, which involves the selection of next quick-response from a candidate set of options."}, {"Category": "Methodological Basis", "Citation": "(De Mattei et al., 2020)", "Explanation": "The cited work, GePpeTto (GPT-2), is a pre-trained language model that the citing paper uses to fine-tune for conversationally generating responses in dialogue systems."}, {"Category": "Methodological Basis", "Citation": "(Sarti and Nissim, 2022)", "Explanation": "The cited work, iT5, is another pre-trained language model that the citing paper uses to fine-tune for conversationally generating responses in dialogue systems."}, {"Category": "Data Source", "Citation": "(De Mattei et al., 2020)", "Explanation": "The cited work provides a dataset of real-life events, feelings, and situations that the user has experienced, which the citing paper uses to fine-tune the pre-trained language models for generating personal responses in dialogue systems."}, {"Category": "Methodological Basis", "Citation": "(Sundararajan et al., 2017)", "Explanation": "The cited work by Sundararajan et al. (2017) provides the methodology of using the Integrated Gradients technique for explainability studies in the citing paper on response generation in LDs."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2018)", "Explanation": "The cited work by Zhang et al. (2018) provides a dataset of open-domain dialogues that serves as a methodological basis for the study conducted in the citing paper on personalized response generation."}, {"Category": "Supporting Evidence", "Citation": "(Wolf et al., 2019)", "Explanation": "The cited work by Wolf et al. provides foundational data and methods for personal response generation in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Kasahara et al., 2022)", "Explanation": "The cited work by Kasahara et al. contributes to the study of personal response generation in the citing paper by fine-tuning PLMs."}, {"Category": "Supporting Evidence", "Citation": "(Madotto et al., 2019)", "Explanation": "The cited work by Madotto et al. offers a new approach to learning user personas in dialogues samples, which the citing paper builds upon."}, {"Category": "Supporting Evidence", "Citation": "(Huang et al., 2022)", "Explanation": "The cited work by Huang et al. investigates the use of different representations of persona statements, which the citing paper leverages in its research."}, {"Category": "Extension or Continuation", "Citation": "(Xu et al., 2022a)", "Explanation": "The cited work by Xu et al. extends the Persona-Chat dataset to multi-session chats with multiple sessions, which the citing paper further builds upon in its study of multi-session dialogues."}, {"Category": "Extension or Continuation", "Citation": "(Bae et al., 2022)", "Explanation": "The cited work by Bae et al. extends the study of multi-session dialogues in Korean by building upon the approach of Xu et al. 
in the cited work."}, {"Category": "Methodological Basis", "Citation": "(De Mattei et al., 2020)", "Explanation": "The cited work, GePpeTto, is a PLM that the citing paper fine-tuned to use in their research on the Italian language."}, {"Category": "Methodological Basis", "Citation": "(Sarti and Nissim, 2022)", "Explanation": "The cited work, iT5, is a PLM that the citing paper experimented with in their research on the Italian language, using iT5-Small and iT5-Base models."}, {"Category": "Methodological Basis", "Citation": "(Vaswani et al., 2017)", "Explanation": "The cited work by Vaswani et al. (2017) provides the AdaFactor optimizer that the citing paper uses in their fine-tuning process for the model class."}, {"Category": "Methodological Basis", "Citation": "(Tekiroglu et al., 2022)", "Explanation": "The cited work by Tekiroglu et al. (2022) has observed a similar finding in the literature about the performance of autoregressive models compared to encoder-decoder architectures regarding novelty in sequence generation, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Bonaldi et al., 2022)", "Explanation": "The cited work by Bonaldi et al. (2022) has also observed a similar finding in the literature about the performance of autoregressive models compared to encoder-decoder architectures regarding novelty in sequence generation, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Sundararajan et al., 2017)", "Explanation": "The cited work introduces the Integrated Gradients technique, which the citing paper adopts to study the attribution scores of input tokens in the model."}, {"Category": "Methodological Basis", "Citation": "(Sarti et al., 2023)", "Explanation": "The cited work provides a method for backward gradient analysis based on the Integrated Gradients technique, which the citing paper uses to investigate the contribution of personal knowledge and different representations in the model."}]
[ { "figure_ref": [ "fig_0", "fig_1", "fig_1" ], "heading": "INTRODUCTION", "publication_ref": [ "b56", "b29", "b1", "b22", "b25", "b48", "b37", "b44", "b53", "b37" ], "table_ref": [], "text": "Object Re-identification (ReID), such as person ReID [57] and vehicle ReID [30], aims to search the probe object from the many gallery images captured from different camera views, attracting increasing attention recently. Most of the existing object ReID methods can be divided into supervised object ReID [4-6, 16, 18, 19, 37, 46, 52, 53, 59] and unsupervised object ReID [7, 9, 13-15, 29, 50, 51, 58], where the supervised object ReID aims to infer a discriminative ReID model with the annotated labels, and the unsupervised object ReID applies the pseudo-labels or unsupervised consistency constraint for representation learning. The existing object ReID assumes a camera-free stationary dataset with the images and identities all available during training, also called camera-free ReID. However, using the camera-free stationary dataset to infer a ReID model has the following shortcomings when deploying the ReID system in the real-world. Firstly, since each camera would generate millions of images every day, merging the images captured from all camera views into a unified dataset needs massive storage, making the ReID model difficult and time-consuming to train. Secondly, the images captured from each camera view may not be allowed to be stored elsewhere for privacy constraints. Thirdly, the ReID model inferred on the stationary datasets cannot self-grow and quickly adapt to real-world situations where cameras are dynamically increased. Finally, it is hard to associate the camera-independent identities, easily generated or annotated, across different camera views to obtain camera-free identities. The above aspects limit the camera-free ReID mechanism not to be better deployed in the real world, restricting its generalization and expansion.\nIn this work, we introduce a novel ReID task named Camera-Incremental Object Re-Identification (CIOR) by continually updating the ReID model with the incoming data of each camera without access to the other cameras. Unlike the traditional camera-free object ReID, CIOR treats each camera's data separately as a sequential learning problem for different camera views. An intuitive description of CIOR consisting of eight camera views is shown in Figure 1, where all the camera datasets are not available during training but encountered sequentially one after the other, and the ReID model trained after each camera view can be used for deployment. Since Based on the identity knowledge of the historical camera, the identities of current cameras can be divided into two sets: common identities and unique identitis. 
For incremental learning, the common identities can be applied to remember the historical knolwdge, while the unique identities can be used to infer the newly knowledge.\nCIOR merely considers the identities and images belonging to the current camera and does not access ones from the previous (other) camera views, it thus has two challenges: 1) Less Discriminative: as each camera contains a limited number of identities, it is difficult to infer a discriminative ReID model without considering the identities from other camera views; 2) Catastrophic Forgetting: without access to the dataset of historical cameras, solely training on the current camera dataset has severe catastrophic forgetting of identities knowledge gained from the historical cameras.\nAlthough some class incremental learning [2,23,26,49] and object incremental ReID [38,45,54] exist, CIOR is a more reasonable and challenging task than existing work for real-world object ReID. Among existing methods, the most related ones to CIOR are class-incremental learning and domain-incremental learning. The class-incremental learning assumes that the newly coming images have the disjoint class labels and similar data distribution with the history datasets. The domain-incremental learning assumes that each new image has the same class label with different data distribution. For example, Adaptive Knowledge Accumulation (AKA) [38] defines a domain-incremental person ReID by treating existing ReID benchmarks as an incremental scenario. By treating the images captured by each camera as domain and the identity as the class, CIOR is the combination of the class-incremental learning and domain-incremental learning, i.e., different camera views contain the different identities (class) with different distributions. Because the same object might have different identities across different camera views, the different identities under different camera views might represent the same object. That is to say, different camera views might contain a subset of common identities, as shown in Figure 2. In conclusion, the identity of the current camera is implicitly associated with the historical identities instead of being explicitly associated in traditional class incremental learning, making CIOR to be a more appropriate and challenging setting for object ReID.\nBased on the fact that the identities under different camera views might describe the same object, associating and distilling the common identities between the current camera and historical identities would boost the discrimination and benefit to alleviate the catastrophic forgetting. As shown in Figure 2, with the identity knowledge association, we can divide the identities of current camera as Common Identities and Unique Identities. As the Common Identities describe the same object from the historical camera views, they can be used to review the historical identity knowledge for alleviating forgetting, and also enhance the discrimination of the ReID model. Different from the Common Identities, the Unique Identities is used to adapt the historical ReID model to the current camera for inferring the new knowledge. Due to the above issue, we propose an Identity Knowledge Evolution (IKE) framework for CIOR, consisting of the Identity Knowledge Association (IKA), Identity Knowledge Distillation (IKD), and Identity Knowledge Update (IKU). 
Identity Knowledge Association(IKA) is proposed to discover the common identities between the current camera and historical camera views with the cyclic-matching strategy. Then, IKD has applied to distillate historical identity knowledge from common identities and quickly adapt the historical model to the current camera view. After each camera has been trained, IKU is applied to continually expand the identity knowledge by combining historical and current identity embeddings.\nThe major contribution can be summarized as follows: 1) To overcome the shortcoming of existing camera-free object ReID, we introduce a novel ReID task named Camera-Incremental Object Re-Identification (CIOR) by continually updating the ReID model merely based on the sequence of camera's dataset;\n2) By associating and distilling the common identities between the current camera and historical camera views, we introduce a novel Identity Knowledge Evolution (IKE) framework for CIOR;\n3) We adapt the existing Market-1501 and Veri-776 datasets for CIOR, where Market-CL and Veri-CL, consisting of six cameras and fourteen camera views. The evaluation of two datasets proves the effectiveness of the proposed IKE for CIOR." }, { "figure_ref": [], "heading": "RELATED WORK 2.1 Object Re-Identification", "publication_ref": [ "b3", "b4", "b45", "b51", "b52", "b12", "b54", "b18", "b23", "b6", "b28", "b43", "b57", "b7", "b20", "b30", "b28", "b11", "b27", "b46", "b47", "b55", "b27" ], "table_ref": [], "text": "Object ReID aims to infer a discriminative ReID model used for retrieving the same objects from the gallery images. Based on whether using the human annotated labels, object ReID can be divided into supervised object ReID and unsupervised object ReID. For the supervised object ReID always apply the attention-mechism [4,5,46,52,53], part-based description model [13,55], and transform-based description [19,24] for infer a discrimiantive object description.\nAs manual annotations are expensive to collect, unsupervised object ReID has attracted much more attention. Some researchers use extra labeled images to assist the unsupervised training on unlabeled person ReID by transferring labeled images to the unlabeled domains with GAN-based models [7,29,44,58] or narrowing the distribution gap in feature space [8,21,31]. For example, Liu et al. [29] use three GAN models to reduce the discrepancy between different domains in illumination, resolution, and camera-view, respectively. To handle the lack of annotation, many methods have been proposed to acquire reliable pseudo labels [12,28,47,48,56]. For example, Lin et al. [28] propose a bottom-up unsupervised clustering method that simultaneously considers both diversity and similarity. Although the above methods can achieve better performance, they all depend on the pre-collected camera-free dataset, limiting the generalization and expansion of the ReID algorithm." }, { "figure_ref": [], "heading": "Incremental Learning", "publication_ref": [ "b9", "b9", "b2", "b31", "b38", "b40", "b1", "b22", "b25", "b33", "b34", "b39", "b2", "b31", "b38", "b40", "b1", "b22", "b25", "b33", "b34", "b39" ], "table_ref": [], "text": "Although existing machine learning algorithms have obtained excellent performance for most computer vision tasks, they are all inferred based on the statistic dataset. They have the severe catastrophic forgetting of the historical knowledge when adapting to the new dataset. 
To address the above shortcomings, incremental learning or lifelong learning [10] has attracted ever-increasing attention recently. Based on how and what type of task specific information is used during the sequential training process, existing incremental learning methods can be divided into three classes [10]: Replay methods [3,32,39,41], Regularization-based methods [2,23,26], and Parameter isolation methods [34,35,40]. Replay methods [3,32,39,41] always store the historical samples after train, which are replayed to train with the current sample to alleviate forgetting. The shortcoming of replay methods is that they need additional memory space to store the historical samples. Unlike the replay methods, regularization-based methods [2,23,26] add a regularization term between the weight of the current model and the historical model to consolidate the previous knowledge. Parameter isolation methods [34,35,40] maintain the independent model parameters for the different tasks to prevent any possible forgetting during the sequential training." }, { "figure_ref": [], "heading": "Incremental Object Re-identification", "publication_ref": [ "b35", "b37", "b37", "b41", "b42", "b44", "b37", "b42", "b44" ], "table_ref": [], "text": "From the type of incremental tasks, incremental learning can be divided into: class-incremental learning [36], domain-incremental learning [38], and task-incremental learning. The class-incremental learning assumes that the newly coming images have disjoint class labels with the history classes. The domain-incremental assumes that the newly coming images have the same class distribution and a large domain gap with the history images. Inspired by the existing incremental learning methods, a lot of incremental settings are proposed for the object Re-identification [38,42,43,45]. For example, Adaptive Knowledge Accumulation (AKA) [38] firstly defines a domain-incremental person ReID by treating existing ReID benchmarks as a incremental scenario, and proposes an Adaptive Knowledge Accumulation by exchange the knowledge between the historical knowledge graph and current knowledge graph. Using the same incremental setting as AKA, Sun et al. [43] propose a Patch-based Knowledge Distillation to reduce the data distribution discrepancy between the historical and current data. Furthermore, Wu et al. [45] design a comprehensive learning objective that accounts for classification coherence, distribution coherence and representation coherence in a unified framework.\nThe above-mentioned methods all focus on the domain-incremental person ReID by treating existing ReID benchmarks as different domains for incremental learning. However, the incremental object re-identification setting based on different datasets has significant difference from the real setting. A more reasonable incremental setting is to treat the images of each camera as a domain for incremental learning, named Camera-Incremental Object Re-Identification(CIOR). Specially, by treating the images captured by each camera as domain and the identity as the class, CIOR is the combination of the class-incremental learning and domain-incremental learning, i.e., different cameras contain the different identity (class) and different data distribution, which is more challenge and reasonable than existing incremental object ReID settings." 
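To make the regularization-based family concrete, the sketch below shows an EWC-style penalty that discourages the current model's weights from drifting away from the weights learned on previous data, scaled by a per-parameter importance estimate (the diagonal Fisher information in EWC). This is a generic illustration of the idea under assumed helper names, not the exact loss used by any of the cited methods.

```python
# Generic sketch of a regularization-based method (EWC-style): penalize
# deviation of the current weights from those learned on earlier data,
# weighted by a per-parameter importance estimate. `hist_params` and
# `importance` are assumed snapshots taken after the previous task.
import torch

def regularization_penalty(model, hist_params, importance, strength=1.0):
    penalty = torch.tensor(0.0)
    for name, param in model.named_parameters():
        if name in hist_params:
            diff = param - hist_params[name]
            penalty = penalty + (importance[name] * diff.pow(2)).sum()
    return 0.5 * strength * penalty

# During training on the new data, the total loss would then be, e.g.:
#   loss = task_loss + regularization_penalty(model, hist_params, importance)
```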
}, { "figure_ref": [], "heading": "CAMERA-INCREMENTAL OBJECT RE-IDENTIFICATION 3.1 Problem Formulation", "publication_ref": [], "table_ref": [], "text": "We assume that there are 𝐶 camera views, and the whole dataset is defined as\n𝐷 = {𝐷 1 , 𝐷 2 , ..., 𝐷 𝐶 }, where 𝐷 𝑐 = {𝑥 𝑐 𝑖 , 𝑦 𝑐 𝑖 }(𝑦 𝑐 𝑖 ∈ [1, 𝑛 𝑐 ], 𝑐 ∈ [1, 𝐶]\n) denotes the dataset captured from the 𝑐-th camera view, 𝑛 𝑐 is the number of identities, 𝑥 𝑐 𝑖 and 𝑦 𝑐 𝑖 denote the image and its identity label. Camera-Incremental Object Re-Identification(CIOR) treats each camera's images and identities separately as a sequentially learning problem for different cameras, which leads to a severe catastrophic forgetting for the historical cameras when inferring the current camera. Take the 𝑐-th camera view as an example, CIOR aims to infer a robust ReID model Φ 𝑐 based on the dataset 𝐷 𝑐 , and the historical model Φ ℎ , where Φ ℎ denotes the historical model inferred from the (𝑐 -1)-th camera view,\nΦ 𝑐 = A (Φ ℎ , 𝐷 𝑐 ),(1)\nwhere A (•) denotes the ReID algorithm for optimization. A simple solution of A is to treat the historical model Φ ℎ as an initial model for applying the fine-tune strategy on the dataset 𝐷 𝑐 , which is the Baseline for CIOR. As dataset 𝐷 𝑐 cannot describe the identities from other camera views, simply applying to fine-tune would forget the identity knowledge inferred from the previous (𝑐 -1) camera views, leading to the inferred model Φ 𝑐 a weak descriptive ability. As the different identity labels under different camera views could describe the same object, associating and distilling the common identities between the current camera and historical identities would boost the discrimination and benefit from alleviating the catastrophic forgetting. We thus propose a novel Identity Knowledge Evolution framework for CIOR." }, { "figure_ref": [ "fig_1" ], "heading": "Identity Knowledge Evolution framework", "publication_ref": [], "table_ref": [], "text": "After training on the (𝑐 -1)-th camera view, we can obtain the updated historical identity memory M ℎ ∈ R 𝑛 ℎ ×𝑁 𝑑 , and a historical model Φ ℎ , where 𝑛 ℎ is the number of historical identity. Identity Knowledge Evolution aims to infer a robustness ReID model Φ 𝑐 based on the historical identity memory M ℎ , the historical model Φ ℎ , and the dataset\n𝐷 𝑐 = {𝑥 𝑐 𝑖 , 𝑦 𝑐 𝑖 }(𝑦 𝑐 𝑖 ∈ [1, 𝑛 𝑐 ]\n) of the 𝑐-th camera view. Therefore, Eq. ( 1) can be reformualted as follows:\nΦ 𝑐 = A (Φ ℎ , M ℎ , M 𝑐 , 𝐷 𝑐 ),(2)\nwhere M 𝑐 ∈ R 𝑛 𝑐 ×𝑁 𝑑 denotes the identity embedding of current 𝑐-th camera view, which is initialized with the mean feature of each identity. Specially, given the datasets 𝐷 𝑐 , we firstly apply the model Φ ℎ to extract the feature of each image, and then generate the identity's embedding with the mean of the feature belonging to the same identity. Formally, we propose the Identity Knowledge Evolution(IKE) to implement the A in Eq. ( 2). As shown in Figure 3, IKE consists of the IKA is applied for selecting the common identities between the historical identities and identities of the current camera views. Based on the selected common identities, Identity Knowledge Distillation (IKD) is applied to distillate historical identity knowledge from common identities and quickly adapt the historical model to the current cameras view. After that, IKU is used to expand historical identity memory by combining current and historical identity memories.\nThe framework of the Identity Knowledge Evolution is shown in Figure 3. 
Identity Knowledge Association: Given the historical identity embedding obtained by the previous camera views, IKA is proposed to select the common identities between the current camera and the historical camera views. After that, the identities of the current camera can be divided into Common Identities and Unique Identities. The Common Identities can be used to review the historical identity knowledge for alleviating forgetting, and the Unique Identities can be applied to adapt the previous ReID model to the current dataset. An intuitive motivation of the IKA is shown in Figure 2.\nWith the historical identity memory M ℎ and the current identity memory M 𝑐 , IKA employs the cycle-consisteny to discover the matching identities between M 𝑐 and M ℎ , e.g., (𝑦 ′ 𝑖 , 𝑦 𝑐 𝑖 ) denotes an matching pair for the identities 𝑦 𝑐 𝑖 , where 𝑦 ′ 𝑖 represents the discovered identities of the identities 𝑦 𝑐 𝑖 from the historical identity space M ℎ . For an identity 𝑦 𝑐 𝑖 , its matched identity 𝑦 ′ 𝑖 is discovered with the cycle-matching between M 𝑐 and M ℎ . Formally, if the identity embedding M 𝑐 ) is a matching pair, which is formulated as follows:\n(𝑦 ′ 𝑖 , 𝑦 𝑐 𝑖 ) ⇐⇒        arg max 𝑑 (M 𝑐 𝑦 𝑐 𝑖 ; M ℎ ) = 𝑦 ′ 𝑖 , arg max 𝑑 (M ℎ 𝑦 ′ 𝑖 ; M 𝑐 ) = 𝑦 𝑐 𝑖 ,(3)\nwhere 𝑑 (f, M) denotes the cosine distance between the identity embedding f and the identity memory M, in which a higher score represents a higher similarity. If not finding the maching identity to 𝑦 𝑐 𝑖 , the corresponding identity 𝑦 ′ 𝑖 = -1. After that, two identity labels 𝑦 𝑐 𝑖 and 𝑦 ′ 𝑖 can be assigned to the image 𝑥 𝑐 𝑖 for optimization. For the incremental learning, the identity label 𝑦 𝑐 𝑖 is used to adapt the ReID model to the current dataset 𝐷 𝑐 , and 𝑦 ′ 𝑖 can be applied to reduce the forgetting of the historical knowledge. With the discovered matching pairs, we can produce a new dataset 𝐷 ′ 𝑐 = {𝑥 𝑐 𝑖 , 𝑦 𝑐 𝑖 , 𝑦 ′ 𝑖 }. Identity Knowledge Distillation: After obtaining the dataset 𝐷 ′ 𝑐 = {𝑥 𝑐 𝑖 , 𝑦 𝑐 𝑖 , 𝑦 ′ 𝑖 } and the historical model Φ ℎ , IKD is applied to distillate historical identity knowledge with the identity 𝑦 ′ 𝑖 and quickly adapt the historical model to the current camera view with the identity 𝑦 𝑐 𝑖 . Given the image 𝑥 𝑐 𝑖 , we firstly apply the model Φ 𝑐 , which is initialized with the historical model Φ ℎ , to extract its description f 𝑐 𝑖 = Φ 𝑐 (𝑥 𝑐 𝑖 ). Next, the cluster contrastive loss based on the identity memory M 𝑐 is used for identity classification with Eq. ( 4),\nL 𝑖𝑑 = 𝑁 ′ ∑︁ 𝑖 -log exp(f 𝑐 𝑖 • M 𝑐 𝑦 𝑐 𝑖 )/𝜏 𝑛 𝑐 𝑗=1 exp(f 𝑐 𝑖 • M 𝑐 𝑗 )/𝜏 ,(4)\nwhere 𝑦 𝑐 𝑖 is the corresponding label for the image feature f 𝑐 𝑖 , 𝜏 is a temperature hyper-parameter, and 𝑁 ′ is the number of images.\nInspired by the cluster contrastive learning, the image feature is applied to momentum update the identity memory M 𝑐 during backward propagation with Eq. ( 5),\nM 𝑐 𝑦 𝑐 𝑖 = 𝜔M 𝑐 𝑦 𝑐 𝑖 + (1 -𝜔) • f 𝑐 𝑖 ,(5)\nwhere M 𝑐 𝑦 𝑐 𝑖 is the 𝑦 𝑐 𝑖 -th identity's embedding in identity memory M 𝑐 , and 𝜔=0.1 is the updating factor.\nTo reduce the forgetting of the historical identity knowledge during training, the historical identity memory M ℎ ∈ R 𝑛 ℎ ×𝑁 𝑑 and the identity label 𝑦 ′ 𝑖 are used to optimize the model Φ 𝑐 by computing the cluster contrastive loss with the feature f 𝑐 𝑖 ,\nL 𝑖𝑑 ′ = 𝑁 ′ ∑︁ 𝑖 -𝑠𝑔𝑛(𝑦 ′ 𝑖 ) log exp(f 𝑐 𝑖 • M ℎ 𝑦 ′ 𝑖 )/𝜏 𝑛 ℎ 𝑗=1 exp(f 𝑐 𝑖 • M ℎ 𝑗 )/𝜏 ,(6)\nwhere 𝑠𝑔𝑛(𝑦) is the sign function. 𝑠𝑔𝑛(𝑦) = 0 if 𝑦 = -1. Otherwise 𝑠𝑔𝑛(𝑦)=1. 
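A minimal sketch of the operations described above, assuming that memory rows and image features are L2-normalized: `associate_identities` implements the cycle-consistent matching of Eq. (3), `cluster_contrastive_loss` corresponds to the contrastive objective of Eq. (4), and `momentum_update` follows the memory update of Eq. (5). Function names and tensor layouts are assumptions, not the authors' code.

```python
# Minimal sketch of IKA (Eq. (3)) and the core IKD memory operations
# (Eqs. (4)-(5)); memories and features are assumed to be L2-normalized.
import torch
import torch.nn.functional as F

def associate_identities(cur_memory, hist_memory):
    """Cycle-consistent matching between M^c and M^h. Returns, for each current
    identity, the index of its matched historical identity, or -1 if unmatched."""
    n_c = cur_memory.size(0)
    if hist_memory is None:                       # first camera: nothing to match
        return torch.full((n_c,), -1, dtype=torch.long)
    sim = cur_memory @ hist_memory.t()            # cosine similarity, (n_c, n_h)
    fwd = sim.argmax(dim=1)                       # nearest historical id per current id
    bwd = sim.argmax(dim=0)                       # nearest current id per historical id
    y_prime = torch.full((n_c,), -1, dtype=torch.long)
    for i in range(n_c):
        if bwd[fwd[i]] == i:                      # the match is mutual (cycle-consistent)
            y_prime[i] = fwd[i]
    return y_prime

def cluster_contrastive_loss(feats, labels, memory, tau=0.05):
    """Eq. (4)-style objective: classify each feature against the identity memory."""
    logits = feats @ memory.t() / tau
    return F.cross_entropy(logits, labels)

@torch.no_grad()
def momentum_update(memory, feats, labels, omega=0.1):
    """Eq. (5): momentum update of the hit identity embeddings
    (the final re-normalization is an added assumption)."""
    memory[labels] = omega * memory[labels] + (1 - omega) * feats
    memory[labels] = F.normalize(memory[labels], dim=1)
```

Because the matching must agree in both directions, an identity that appears only in the current camera typically fails the check and is kept as a unique identity, which is exactly the split that IKD relies on.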
As the historical identity memory M ℎ is generated based on the historical identities, M ℎ is merely used for computing the loss and not updating. Furthermore, we apply the knowledge distillation to transfer the knowledge from the historical model to the current model, which is formulated as follows:\nL 𝑘𝑑 = 𝑁 ∑︁ 𝑖=0 𝑠𝑔𝑛(𝑦 ′ 𝑖 )||Φ 𝑐 (𝑥 𝑐 𝑖 ) -Φ ℎ (𝑥 𝑐 𝑖 )|| 2 2 .(7)\nAs Eq.( 6) and Eq. ( 7) merely reducing the catastrophic forgetting from the final feature of the model, it has few affect for the middle parameters of the model. For example, given an image, although Eq.( 6) and Eq. ( 7) can constrain the historical model and current model to generate the same descriptions, the generated middle features of those two models would have seriously feature gap. To reduce the forgetting of the historical knowledge of middle layers during training, we further construct the knowledge distillation among the middle layers with Eq.( 8),\nL 𝑚𝑘𝑑 = 1 2 𝑁 ∑︁ 𝑖=0 3 ∑︁ 𝑙=2 𝑠𝑔𝑛(𝑦 ′ 𝑖 )||Φ 𝑐 𝑙 (𝑥 𝑐 𝑖 ) -Φ ℎ 𝑙 (𝑥 𝑐 𝑖 )|| 2 2 ,(8)\nwhere Φ * 𝑙 denotes the 𝑙-th middle features of the model Φ * . Specially, Φ * 2 and Φ * 3 are the output features of 2-th and 3-th residual blocks of ResNet50, respectively.\nFinally, the total loss is of IKD is:\nL 𝑖𝑘𝑑 = L 𝑖𝑑 + L 𝑖𝑑 ′ + L 𝑘𝑑 + L 𝑚𝑘𝑑 .(9)\nIdentity Knowledge Update: After training the 𝑐-th camera view, we can apply the trained model Φ 𝑐 to generate its newly identity memory M 𝑐 by applying the average of the features belonging to the same identity. Next, IKU is applied to expand the historical identity memory M ℎ , i.e., it updates the historical identity memory M ℎ with M 𝑐 based on the updating and expansion rules.\nUpdaing rules: For the identity 𝑦 𝑐 𝑗 , if it can find a matched identity 𝑦 ′ 𝑗 in the history identity memory M ℎ , then updating the\nM ℎ 𝑦 ′ 𝑗 with M 𝑐 𝑦 𝑐 𝑗 : M ℎ 𝑦 ′ 𝑗 = 𝜆 × M ℎ 𝑦 ′ 𝑗 + (1 -𝜆) × M 𝑐 𝑦 𝑐 𝑗 . (10\n)\nExpansion rules: Otherwise, if it does not discover a matching identity, M 𝑐 𝑦 𝑐 𝑗 is inserted into the historical identity memory M ℎ .\nM ℎ = [M ℎ ; M 𝑐 𝑦 𝑐 𝑗 ],(11)\nwhere [; ] denotes the concatenaton. After that, M ℎ is the updated identity memory used for the next incoming camera dataset. Furthermore, the historical model Φ ℎ is updated with Φ 𝑐 : Φ ℎ =Φ 𝑐 ." }, { "figure_ref": [], "heading": "EXPERIMENTS 4.1 Experimental Setting", "publication_ref": [ "b56", "b29", "b8", "b16", "b10", "b32", "b21" ], "table_ref": [ "tab_5", "tab_6" ], "text": "Datasets: To evaluate the effectiveness of the proposed method, we adopted the existing ReID dataset Market-1501 [57] and Veri-776 [30] as Market-CL and Veri-CL for Camera-Incremental Object ReID, where Market-CL and Veri-CL consist of six cameras and fourteen cameras, respectively. Specifically, we adapt the original identity label by independently annotating their identity labels in each camera view. We discard the cameras whose identities are smaller than 250 for Veri-776. We feed each camera view dataset sequentially for training, and the standard testing setting is used for evaluation. The number of identities (#ids) of each camera for Market-CL and Veri-CL are shown in Table 4 and Table 5, respectively.\nImplementation Details: The proposd framework is adopted based on the existing cluster contrastive framework [9] 1 , which adopts the ResNet-50 [17] pretrained on ImageNet [11] as the backbone. Inspired by [33], all sub-module layers after layer4-1 are removed, and a GEM pooling followed by batch normalization layer [22] and L2-normalization layer is added. 
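A minimal sketch of the IKU step described above: matched identities are blended into their historical rows following the updating rule of Eq. (10), and unmatched identities are appended as new rows following the expansion rule of Eq. (11). The final re-normalization and the function name are assumptions; lam=0.25 follows the value reported as best in the ablation.

```python
# Minimal sketch of IKU (Eqs. (10)-(11)): blend matched identities into their
# historical rows and append unmatched identities as new rows.
import torch
import torch.nn.functional as F

def update_memory(hist_memory, cur_memory, y_prime, lam=0.25):
    if hist_memory is None:                       # first camera: M^h := M^c
        return cur_memory.clone()
    hist = hist_memory.clone()
    new_rows = []
    for j in range(cur_memory.size(0)):
        h = int(y_prime[j])
        if h >= 0:                                # updating rule, Eq. (10)
            hist[h] = lam * hist[h] + (1 - lam) * cur_memory[j]
        else:                                     # expansion rule, Eq. (11)
            new_rows.append(cur_memory[j])
    if new_rows:
        hist = torch.cat([hist, torch.stack(new_rows)], dim=0)
    return F.normalize(hist, dim=1)
```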
Therefore, the feature dimension 𝑁 𝑑 is 2,048. For the person ReID, all input images are resized to 256×128 for training and evaluation with the batchsize of 128. For the vehicle ReID, all input images are resized to 224×224 for training and evaluation with the batchsize of 96. The temperature coefficient 𝜏 is set to 0.05. The adam optimizer sets the weight decay as 0.0005, and the learning rate is initially set as 0.00035 and decreased to one-tenth of every 15 epochs up to 30 epochs.\nEvaluation metrics: The metrics 𝑚𝐴𝑃 and 𝑓 𝑚𝐴𝑃 are used to evaluate the performance of the Camera-Incremental Object ReID task. 𝑚𝐴𝑃 = 𝐶 𝑐=1 𝑚𝐴𝑃 𝑐 denotes the average of the mean average precision (mAP) obtained by each cameras, where 𝑚𝐴𝑃 𝑐 denotes the mAP after training of the 𝑐-th camera view, and 𝐶 is the number of cameras. 𝑓 𝑚𝐴𝑃 is the mAP after training of all cameras, i.e., 𝑓 𝑚𝐴𝑃=𝑚𝐴𝑃 𝐶 . Note that the CMC is a popular evaluation metric for the standard Object Re-identification. However, mAP is a more reasonable and robust evaluation metric than CMC for incremental learning. We thus report 𝑚𝐴𝑃 and 𝑓 𝑚𝐴𝑃 in this work." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [ "tab_1", "tab_1", "tab_1", "tab_4" ], "text": "This section gives a series of ablation studies to evaluate the influence of the critical components proposed in Identity Knowledge Evolution on the Market-CL datasets.\nEffect of Identity Knowledge Association: IKA is proposed to discover the common identities used for identity knowledge distillation. To verify the effectiveness of the IKA, we also conduct the comparison by using the whole identities for the identity knowledge distillation. For example, 'IKE*' uses the whole identities for knowledge distillation in Eq. ( 7) and Eq. ( 8), and 'IKE-A' applies the whole identitis to distillate the historical identity knowledge with Eq. ( 6). As shown in Especially for the 'IKE-A', it exists an obvious performance gap with IKE. The lower performance demonstrates that using the selected common identities can effectively reduce the catastrophic forgetting of the historical identity knowledge. Effect of Multiple Knowledge Distillation: Eq. ( 8) is used to transfer the identity knowledge of multiple middle layers from the historical model to the current model. As shown in Table 1, 'IKE-D' without considering the constraint L 𝑚𝑘𝑑 obtains the fmAP of 42.1%, which is lower than 48.3% with using the additional constraint L 𝑚𝑘𝑑 . The reason is that using the mutiple knowledge distillation can reduce the catastrophic forgetting caused by the middle paramers of each model, which is also complementary to IKD. Therefore, combining the Multiple Knowledge Distillation and the Identity Knowledge Distillation obtains a superior performance.\nEffect of Identity Knowledge Update: IKU is used to expand the historical identity memory at the end of training of each camera. We thus make a comparison between the models with/without using IKU, and summarize the related results in Table 1, where 'IKE-U' denotes that IKE uses the identity memory of current camera as the historical identity memory without IKU. As shown in Table 1, 'IKE-U' obtains the fmAP of 47.1%, which is lower than 48.3% obtained by the IKE. The superior performance demonstrates the necessity and importance of IKU for CIOR.\nEffect of 𝜆 on IKU: For IKU, 𝜆 is used to merge the historical identity memory M ℎ and the current identity memory M 𝑐 . We thus analyze the effect of 𝜆 on the Market-CL dataset. 
From Figure 4, we observe that setting the higher and lower 𝜆 both obtain a worse performance, e.g., 𝜆=0.0 and 𝜆=1.0 obtain the 𝑚𝐴𝑃 of 46.8% and 47.4%, respectively. The worse performance demonstrates that merely considering the limited historical identity knowledge or the current identity knowledge is not suitable for Camera-Incremental Object ReID. Furthermore, we observe that setting 𝜆 as 0.25 obtains the best performance, which means that IKU needs to pay more attention to the current identity knowledge most related to the past incoming camera dataset. Number of historical identities 𝑁 ℎ : IKU merges the historical identity memory M ℎ and the current identity memory M 𝑐 for expanding the identity knowledge, where M ℎ ∈ R 𝑁 ℎ ×𝑁 𝑑 , and 𝑁 ℎ the number of historical identities after the incremental learning. We thus describe the process changing of 𝑁 ℎ during training, and summarize the related results in Figure 5. 'GT' denotes the ground-truth number of identities for the previous 𝑡-th (camera index) camera views, e.g., 751 is the training identities of the Market-1501. From Figure 5, we observe that the first three cameras can cover the whole identities, e.g., the identities captured by the previous three camera views is 856. As shown in Figure 5, IKU generates the more identities by considering more camera datasets, e.g., IKU generates the final historical identities of 946.\nAccuracy of Identity Association: The critical of Identity Knowledge Evolution is to discover and associate the common identities between current and historical identities with IKA. We thus analyze the robustness of IKA by computing the ratio of positive matching samples among all discovered matching samples (prec.). Because there are the ground-truth matchings between the first two camera views, we thus compute the precision (prec.) for all pair of camera views to generate the precision matrix 𝑃, where 𝑃 [𝑖, 𝑗] denotes that the ReID model is first trained on the 𝑐 𝑖 -th camera view and then trained on the 𝑐 𝑗 -th camera view to compute the identity association precision on the 𝑐 𝑗 -th camera view. As shown in Figure 6, IKA obtains a higher precision (prec.), e.g., most of the precision is higher than 80%, which means that the discovered common identities can effectively associate the identities across the historical and current identities space. The higher precision also demonstrates that the common identities can be used to review identical historical knowledge to alleviate catastrophic forgetting.\nFrom Figure 6, we can observe that the 'c4' camera obtains the lowest association precision, e.g. treating camera 'c4' as the first and second cameras obtain the average precision of 73.1% and 74.1%, respectively. The reason is that the camera 'c4' contains a few of 241 identities, smaller than other camera views.\nEffect of the mulitiple cameras's orders: For CIOR, the cameras' dataset always consists of several cameras, e.g, Market-CL and Veri-CL contains six and fourteen camera views, respectively. As each camera view consists of the different identities and images, the order of cameras may affect the performance of CIOR. 
To further evaluate whether the proposed methods are insensitive to the camera's order, we randomly generate five different CIOR settings for Market-CL with the different orders of cameras:\n1) Task1: c1→c2→c3→c4→c5→c6; 2) Task2: c1→c6→c5→c2→c4→c3; 3) Task3: c6→c3→c4→c5→c1→c2; 4) Task4: c4→c2→c6→c5→c3→c1; 5) Task5: c3→c1→c4→c5→c2→c6; where Task1 is the standard order of cameras in the original dataset. From Table 3, we observe that different camera orders affect the performance. For example, IKE, EWC, and MAS all obtains higher performance on Task5 than the rest four tasks, indicating that the order of the incoming camera's dataset having obvious effect on CIOR. Compared with the existing methods, the proposed IKE obtains the best performance in three of the five tasks, e.g., Task1, Task4, and Task5. However, the proposed IKE obtains the average performance of 39.5%/47.3% for 𝑚𝐴𝑃/fmAP, which is higher than the second best performance of 38.5%/ 45.4% obtained by the EWC.\nForgetting trend: The criticial of the incremental learning is to reduce the forgetting ratio of existing object ReID methods. We thus analyze the forgetting ratio of the compared methods, where the Forgetting is the performance gap between each method and the upbound performance. As shown in Figure . 7, the proposed IKE method achieves the lowest forgetting ratio during the incremental learning." }, { "figure_ref": [], "heading": "Comparison with existing methods", "publication_ref": [ "b53", "b25", "b22", "b1", "b0", "b37", "b53", "b26", "b1" ], "table_ref": [ "tab_5", "tab_6", "tab_5", "tab_6" ], "text": "In this section, we compare the proposed IKE and the existing incremental learning methods on Market-CL and Veri-CL datasets, and summarize the related results in Table 4, and Table 5. Existing compared methods can be divided into: class-incremental(CL) methods( CRL [54], LWF [26],EWC [23], MAS [2], and SS-IL [1]), domainincremental(DL) methods(AKA [38]), exempler-based method(CRL [54]), and domain-and class-incremental methods(PGCA [27]).\nAs shown in Table 4, 'Baseline' model obtains the fMAP of 36.3% by merely conducting the fine-tune on each camera, having a large gap with the upbound of 54.6%. The large gap demonstrates that there exists a severe catastrophic forgetting by simply fine-tuning each camera dataset. Compared with the 'Baseline', the proposed IKE obtains large improvement on two type of evaluation metrics, e.g., the 𝑓 𝑚𝐴𝑃 and 𝑚𝐴𝑃 are improved from the 36.3%/32.4% to 48.3%/39.8% for Market-CL dataset.\nAmong the class-incremental methods, MAS [2] obtains a best performance, e.g., 44.9%/37.5% for fmAP/𝑚𝐴𝑃. Compared with MAS, IKE obtains an improvement of 3.4%/2.3% for fmAP/𝑚𝐴𝑃. We also observed that the domain-incremental method AKA obtains a similar performance with EWC, lower than MAS and the proposed IKE. Among all existing methods, PGCA is the most related work to ours, which is proposed for Class-Incremetal Unsupervised Domain Adaptation, consisting of domain-incremental and class-incremental. PGCA discovers the novel class by computing the cumulative probability (CP) of target samples regarding source classes. Different As shown in Table 5, the proposed IKE still obtains the superior performance than existing methods on Veri-CL datasets. The superior performance demonstrates the effectiveness of the proposed IKE for Camera-Incremental Object ReID." 
}, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we introduce a novel ReID task named Camera-Incremental Object Re-identification (CIOR) by continually updating the ReID model with the incoming data of the current camera. Unlike the traditional camera-free object ReID, CIOR treats each camera's data separately as a sequentially learning problem for different cameras, leading to severe catastrophic forgetting for the historical cameras when inferring the current cameras. Furthermore, we propose a novel Identity Knowledge Evolution to associate and distillate the historical identity knowledge for alleviating forgetting. The evaluation of two adapted benchmarks, Market-CL and Veri-CL, validated the effectiveness of the IKE for CIOR.\nAlthough the proposed IKE is an effective method for CIOR, it performs slightly worse than the regularization-based methods on some causes, e.g., EWC obtains higher performance than IKE on Task2 and Task3. As the IKE can be treated as the knowledge distillation-based methods, it might be complement to the regularization-based methods. In the future, we will try to combine the advantages of these two types of algorithms to propose a more robust algorithm for CIOR." } ]
2023-05-25
10.1109/ICCV48922.2021.00088
[ { "authors": "Jihwan Hongjoon Ahn; Subin Kwak; Hyeonsu Lim; Hyojun Bang; Taesup Kim; Moon", "journal": "IEEE", "ref_id": "b0", "title": "SS-IL: Separated Softmax for Incremental Learning", "year": "2021-10-10" }, { "authors": "Rahaf Aljundi; Francesca Babiloni; Mohamed Elhoseiny; Marcus Rohrbach; Tinne Tuytelaars", "journal": "Springer", "ref_id": "b1", "title": "Memory Aware Synapses: Learning What (not) to Forget", "year": "2018-09-08" }, { "authors": "Arslan Chaudhry; Marc'aurelio Ranzato; Marcus Rohrbach; Mohamed Elhoseiny", "journal": "", "ref_id": "b2", "title": "Efficient Lifelong Learning with A-GEM", "year": "2019-05-06" }, { "authors": "Binghui Chen; Weihong Deng; Jiani Hu", "journal": "", "ref_id": "b3", "title": "Mixed high-order attention network for person re-identification", "year": "2019" }, { "authors": "Peixian Chen; Wenfeng Liu; Pingyang Dai; Jianzhuang Liu; Qixiang Ye; Mingliang Xu; Rongrong Qi'an Chen; Ji", "journal": "", "ref_id": "b4", "title": "Occlude Them All: Occlusion-Aware Attention Network for Occluded Person Re-ID", "year": "2021" }, { "authors": "Weihua Chen; Xiaotang Chen; Jianguo Zhang; Kaiqi Huang", "journal": "", "ref_id": "b5", "title": "Beyond Triplet Loss: A Deep Quadruplet Network for Person Re-Identification", "year": "2017" }, { "authors": "Yanbei Chen; Xiatian Zhu; Shaogang Gong", "journal": "", "ref_id": "b6", "title": "Instance-Guided Context Rendering for Cross-Domain Person Re-Identification", "year": "2019" }, { "authors": "Yongxing Dai; Jun Liu; Yifan Sun; Zekun Tong; Chi Zhang; Ling-Yu Duan", "journal": "", "ref_id": "b7", "title": "IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID", "year": "2021" }, { "authors": "Zuozhuo Dai; Guangyuan Wang; Weihao Yuan; Siyu Zhu; Ping Tan", "journal": "", "ref_id": "b8", "title": "Cluster Contrast for Unsupervised Person Re-Identification", "year": "2021" }, { "authors": "Matthias Delange; Rahaf Aljundi; Marc Masana; Sarah Parisot; Xu Jia; Ales Leonardis; Greg Slabaugh; Tinne Tuytelaars", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b9", "title": "A continual learning survey: Defying forgetting in classification tasks", "year": "2021" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b10", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Guodong Ding; Salman Khan; Zhenmin Tang", "journal": "", "ref_id": "b11", "title": "Dispersion based Clustering for Unsupervised Person Re-identification", "year": "2019" }, { "authors": "Yang Fu; Yunchao Wei; Guanshuo Wang; Yuqian Zhou; Honghui Shi; Thomas S Huang", "journal": "", "ref_id": "b12", "title": "Self-similarity grouping: A simple unsupervised cross domain adaptation approach for person re-identification", "year": "2019" }, { "authors": "Yixiao Ge; Dapeng Chen; Hongsheng Li", "journal": "", "ref_id": "b13", "title": "Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person re-identification", "year": "2020" }, { "authors": "Yixiao Ge; Feng Zhu; Dapeng Chen; Rui Zhao; Hongsheng Li", "journal": "", "ref_id": "b14", "title": "Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID", "year": "2020" }, { "authors": "Bing He; Jia Li; Yifan Zhao; Yonghong Tian", "journal": "", "ref_id": "b15", "title": "Part-regularized nearduplicate vehicle re-identification", "year": "2019" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian 
Sun", "journal": "", "ref_id": "b16", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Shuting He; Hao Luo; Weihua Chen; Miao Zhang; Yuqi Zhang; Fan Wang; Hao Li; Wei Jiang", "journal": "", "ref_id": "b17", "title": "Multi-domain learning and identity mining for vehicle re-identification", "year": "2020" }, { "authors": "Shuting He; Hao Luo; Pichao Wang; Fan Wang; Hao Li; Wei Jiang", "journal": "", "ref_id": "b18", "title": "TransReID: Transformer-Based Object Re-Identification", "year": "2021" }, { "authors": "Geoffrey E Hinton; Oriol Vinyals; Jeffrey Dean", "journal": "", "ref_id": "b19", "title": "Distilling the Knowledge in a Neural Network", "year": "2015" }, { "authors": "Yangru Huang; Peixi Peng; Yi Jin; Yidong Li; Junliang Xing", "journal": "", "ref_id": "b20", "title": "Domain Adaptive Attention Learning for Unsupervised Person Re-Identification", "year": "2020" }, { "authors": "Sergey Ioffe; Christian Szegedy", "journal": "", "ref_id": "b21", "title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", "year": "2015" }, { "authors": "James Kirkpatrick; Razvan Pascanu; Neil C Rabinowitz; Joel Veness; Guillaume Desjardins; Andrei A Rusu; Kieran Milan; John Quan; Tiago Ramalho; Agnieszka Grabska-Barwinska; Demis Hassabis; Claudia Clopath; Dharshan Kumaran; Raia Hadsell", "journal": "", "ref_id": "b22", "title": "Overcoming catastrophic forgetting in neural networks", "year": "2016" }, { "authors": "Shenqi Lai; Zhenhua Chai; Xiaolin Wei", "journal": "", "ref_id": "b23", "title": "Transformer Meets Part Model: Adaptive Part Division for Person Re-Identification", "year": "2021" }, { "authors": "Zhizhong Li; Derek Hoiem", "journal": "Springer", "ref_id": "b24", "title": "Learning Without Forgetting", "year": "2016-10-11" }, { "authors": "Zhizhong Li; Derek Hoiem", "journal": "IEEE Trans. Pattern Anal. Mach. 
Intell", "ref_id": "b25", "title": "Learning without Forgetting", "year": "2018" }, { "authors": "Hongbin Lin; Yifan Zhang; Zhen Qiu; Shuaicheng Niu; Chuang Gan; Yanxia Liu; Mingkui Tan", "journal": "Springer", "ref_id": "b26", "title": "Prototype-Guided Continual Adaptation for Class-Incremental Unsupervised Domain Adaptation", "year": "2022-10-23" }, { "authors": "Yutian Lin; Xuanyi Dong; Liang Zheng; Yan Yan; Yi Yang", "journal": "", "ref_id": "b27", "title": "A bottom-up clustering approach to unsupervised person re-identification", "year": "2019" }, { "authors": "Jiawei Liu; Zheng-Jun Zha; Di Chen; Richang Hong; Meng Wang", "journal": "", "ref_id": "b28", "title": "Adaptive Transfer Network for Cross-Domain Person Re-Identification", "year": "2019" }, { "authors": "Xinchen Liu; Wu Liu; Tao Mei; Huadong Ma", "journal": "Springer", "ref_id": "b29", "title": "A Deep Learning-Based Approach to Progressive Vehicle Re-identification for Urban Surveillance", "year": "2016-10-11" }, { "authors": "Xiaobin Liu; Shiliang Zhang", "journal": "ACM MM", "ref_id": "b30", "title": "Domain adaptive person re-identification via coupling optimization", "year": "2020" }, { "authors": "David Lopez; - Paz; Marc'aurelio Ranzato", "journal": "", "ref_id": "b31", "title": "Gradient Episodic Memory for Continual Learning", "year": "2017-09" }, { "authors": "Youzhi Hao Luo; Xingyu Gu; Shenqi Liao; Wei Lai; Jiang", "journal": "", "ref_id": "b32", "title": "Bag of Tricks and a Strong Baseline for Deep Person Re-Identification", "year": "2019" }, { "authors": "Arun Mallya; Dillon Davis; Svetlana Lazebnik", "journal": "Springer", "ref_id": "b33", "title": "Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights", "year": "2018-09-08" }, { "authors": "Arun Mallya; Svetlana Lazebnik", "journal": "Computer Vision Foundation / IEEE Computer Society", "ref_id": "b34", "title": "PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning", "year": "2018-06-18" }, { "authors": "Marc Masana; Xialei Liu; Bartlomiej Twardowski; Mikel Menta; Andrew D Bagdanov; Joost Van De Weijer", "journal": "", "ref_id": "b35", "title": "Class-incremental learning: survey and performance evaluation", "year": "2020" }, { "authors": "Dechao Meng; Liang Li; Xuejing Liu; Yadong Li; Shijie Yang; Zheng-Jun Zha; Xingyu Gao; Shuhui Wang; Qingming Huang", "journal": "", "ref_id": "b36", "title": "Parsing-based viewaware embedding network for vehicle re-identification", "year": "2020" }, { "authors": "Nan Pu; Wei Chen; Yu Liu; Erwin M Bakker; Michael S Lew", "journal": "Computer Vision Foundation / IEEE", "ref_id": "b37", "title": "Lifelong Person Re-Identification via Adaptive Knowledge Accumulation", "year": "2021-06-19" }, { "authors": " Sylvestre-Alvise; Alexander Rebuffi; Georg Kolesnikov; Christoph H Sperl; Lampert", "journal": "IEEE Computer Society", "ref_id": "b38", "title": "iCaRL: Incremental Classifier and Representation Learning", "year": "2017-07-21" }, { "authors": "Joan Serrà; Didac Suris; Marius Miron; Alexandros Karatzoglou", "journal": "PMLR", "ref_id": "b39", "title": "Overcoming Catastrophic Forgetting with Hard Attention to the Task", "year": "2018-07-10" }, { "authors": "Hanul Shin; Jung Kwon Lee; Jaehong Kim; Jiwon Kim", "journal": "", "ref_id": "b40", "title": "Continual Learning with Deep Generative Replay", "year": "2017-09" }, { "authors": "Nehemia Sugianto; Dian Tjondronegoro; Golam Sorwar; Raj Prithwi; Elizabeth Irenne Chakraborty; Yuwono", "journal": "IEEE", "ref_id": "b41", 
"title": "Continuous Learning without Forgetting for Person Re-Identification", "year": "2019-09-18" }, { "authors": "Zhicheng Sun; Yadong Mu", "journal": "ACM", "ref_id": "b42", "title": "Patch-based Knowledge Distillation for Lifelong Person Re-Identification", "year": "2022-10-10" }, { "authors": "Longhui Wei; Shiliang Zhang; Wen Gao; Qi Tian", "journal": "", "ref_id": "b43", "title": "Person transfer gan to bridge domain gap for person re-identification", "year": "2018" }, { "authors": "Guile Wu; Shaogang Gong", "journal": "AAAI Press", "ref_id": "b44", "title": "Generalising without Forgetting for Lifelong Person Re-Identification", "year": "2021-02-02" }, { "authors": "Bryan Ning Xia; Yuan Gong; Yizhe Zhang; Christian Poellabauer", "journal": "", "ref_id": "b45", "title": "Second-order non-local attention networks for person re-identification", "year": "2019" }, { "authors": "Hong-Xing Yu; Wei-Shi Zheng; Ancong Wu; Xiaowei Guo; Shaogang Gong; Jian-Huang Lai", "journal": "", "ref_id": "b46", "title": "Unsupervised Person Re-identification by Soft Multilabel Learning", "year": "2019" }, { "authors": "Kaiwei Zeng; Munan Ning; Yaohua Wang; Yang Guo", "journal": "", "ref_id": "b47", "title": "Hierarchical clustering with hard-batch triplet loss for person re-identification", "year": "2020" }, { "authors": "Friedemann Zenke; Ben Poole; Surya Ganguli", "journal": "PMLR", "ref_id": "b48", "title": "Continual learning through synaptic intelligence", "year": "2017" }, { "authors": "Xinyu Zhang; Jiewei Cao; Chunhua Shen; Mingyu You", "journal": "", "ref_id": "b49", "title": "Self-Training With Progressive Augmentation for Unsupervised Cross-Domain Person Re-Identification", "year": "2019" }, { "authors": "Xiao Zhang; Yixiao Ge; Yu Qiao; Hongsheng Li", "journal": "Computer Vision Foundation / IEEE", "ref_id": "b50", "title": "Refining Pseudo Labels With Clustering Consensus Over Generations for Unsupervised Object Re-Identification", "year": "2021-06-19" }, { "authors": "Zhizheng Zhang; Cuiling Lan; Wenjun Zeng; Xin Jin; Zhibo Chen", "journal": "", "ref_id": "b51", "title": "Relation-Aware Global Attention for Person Re-Identification", "year": "2020" }, { "authors": "Zhong Zhang; Haijia Zhang; Shuang Liu", "journal": "", "ref_id": "b52", "title": "Person Re-Identification Using Heterogeneous Local Graph Attention Networks", "year": "2021" }, { "authors": "Bo Zhao; Shixiang Tang; Dapeng Chen; Hakan Bilen; Rui Zhao", "journal": "IEEE", "ref_id": "b53", "title": "Continual Representation Learning for Biometric Identification", "year": "2021-01-03" }, { "authors": "Feng Zheng; Cheng Deng; Xing Sun; Xinyang Jiang; Xiaowei Guo; Zongqiao Yu; Feiyue Huang; Rongrong Ji", "journal": "", "ref_id": "b54", "title": "Pyramidal person re-identification via multi-loss dynamic training", "year": "2019" }, { "authors": "Kecheng Zheng; Wu Liu; Lingxiao He; Tao Mei; Jiebo Luo; Zheng-Jun Zha", "journal": "", "ref_id": "b55", "title": "Group-aware label transfer for domain adaptive person re-identification", "year": "2021" }, { "authors": "Liang Zheng; Liyue Shen; Lu Tian; Shengjin Wang; Jingdong Wang; Qi Tian", "journal": "", "ref_id": "b56", "title": "Scalable person re-identification: A benchmark", "year": "2015" }, { "authors": "Zhun Zhong; Liang Zheng; Shaozi Li; Yi Yang", "journal": "", "ref_id": "b57", "title": "Generalizing a person retrieval model hetero-and homogeneously", "year": "2018" }, { "authors": "Sanping Zhou; Jinjun Wang; Rui Shi; Qiqi Hou; Yihong Gong; Nanning Zheng", "journal": "IEEE TMM", 
"ref_id": "b58", "title": "Large margin learning in set-to-set similarity comparison for person reidentification", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 318.49, 206.34, 239.53, 18.8 ], "formula_id": "formula_0", "formula_text": "𝐷 = {𝐷 1 , 𝐷 2 , ..., 𝐷 𝐶 }, where 𝐷 𝑐 = {𝑥 𝑐 𝑖 , 𝑦 𝑐 𝑖 }(𝑦 𝑐 𝑖 ∈ [1, 𝑛 𝑐 ], 𝑐 ∈ [1, 𝐶]" }, { "formula_coordinates": [ 3, 407.55, 339.56, 151.19, 9.11 ], "formula_id": "formula_1", "formula_text": "Φ 𝑐 = A (Φ ℎ , 𝐷 𝑐 ),(1)" }, { "formula_coordinates": [ 3, 390.77, 579.04, 94.31, 9.75 ], "formula_id": "formula_2", "formula_text": "𝐷 𝑐 = {𝑥 𝑐 𝑖 , 𝑦 𝑐 𝑖 }(𝑦 𝑐 𝑖 ∈ [1, 𝑛 𝑐 ]" }, { "formula_coordinates": [ 3, 392.43, 606.88, 166.31, 9.11 ], "formula_id": "formula_3", "formula_text": "Φ 𝑐 = A (Φ ℎ , M ℎ , M 𝑐 , 𝐷 𝑐 ),(2)" }, { "formula_coordinates": [ 4, 363.92, 373.87, 194.82, 38.27 ], "formula_id": "formula_4", "formula_text": "(𝑦 ′ 𝑖 , 𝑦 𝑐 𝑖 ) ⇐⇒        arg max 𝑑 (M 𝑐 𝑦 𝑐 𝑖 ; M ℎ ) = 𝑦 ′ 𝑖 , arg max 𝑑 (M ℎ 𝑦 ′ 𝑖 ; M 𝑐 ) = 𝑦 𝑐 𝑖 ,(3)" }, { "formula_coordinates": [ 4, 370.07, 607.92, 188.67, 28.08 ], "formula_id": "formula_5", "formula_text": "L 𝑖𝑑 = 𝑁 ′ ∑︁ 𝑖 -log exp(f 𝑐 𝑖 • M 𝑐 𝑦 𝑐 𝑖 )/𝜏 𝑛 𝑐 𝑗=1 exp(f 𝑐 𝑖 • M 𝑐 𝑗 )/𝜏 ,(4)" }, { "formula_coordinates": [ 4, 387.81, 698.66, 170.93, 12.71 ], "formula_id": "formula_6", "formula_text": "M 𝑐 𝑦 𝑐 𝑖 = 𝜔M 𝑐 𝑦 𝑐 𝑖 + (1 -𝜔) • f 𝑐 𝑖 ,(5)" }, { "formula_coordinates": [ 5, 90.84, 164, 203.75, 28.67 ], "formula_id": "formula_7", "formula_text": "L 𝑖𝑑 ′ = 𝑁 ′ ∑︁ 𝑖 -𝑠𝑔𝑛(𝑦 ′ 𝑖 ) log exp(f 𝑐 𝑖 • M ℎ 𝑦 ′ 𝑖 )/𝜏 𝑛 ℎ 𝑗=1 exp(f 𝑐 𝑖 • M ℎ 𝑗 )/𝜏 ,(6)" }, { "formula_coordinates": [ 5, 102.12, 285.28, 192.46, 24.75 ], "formula_id": "formula_8", "formula_text": "L 𝑘𝑑 = 𝑁 ∑︁ 𝑖=0 𝑠𝑔𝑛(𝑦 ′ 𝑖 )||Φ 𝑐 (𝑥 𝑐 𝑖 ) -Φ ℎ (𝑥 𝑐 𝑖 )|| 2 2 .(7)" }, { "formula_coordinates": [ 5, 88.61, 423.44, 205.98, 27 ], "formula_id": "formula_9", "formula_text": "L 𝑚𝑘𝑑 = 1 2 𝑁 ∑︁ 𝑖=0 3 ∑︁ 𝑙=2 𝑠𝑔𝑛(𝑦 ′ 𝑖 )||Φ 𝑐 𝑙 (𝑥 𝑐 𝑖 ) -Φ ℎ 𝑙 (𝑥 𝑐 𝑖 )|| 2 2 ,(8)" }, { "formula_coordinates": [ 5, 110.96, 510.36, 183.63, 8.43 ], "formula_id": "formula_10", "formula_text": "L 𝑖𝑘𝑑 = L 𝑖𝑑 + L 𝑖𝑑 ′ + L 𝑘𝑑 + L 𝑚𝑘𝑑 .(9)" }, { "formula_coordinates": [ 5, 53.8, 608.87, 240.25, 47.13 ], "formula_id": "formula_11", "formula_text": "M ℎ 𝑦 ′ 𝑗 with M 𝑐 𝑦 𝑐 𝑗 : M ℎ 𝑦 ′ 𝑗 = 𝜆 × M ℎ 𝑦 ′ 𝑗 + (1 -𝜆) × M 𝑐 𝑦 𝑐 𝑗 . (10" }, { "formula_coordinates": [ 5, 291.16, 644.24, 3.42, 7.95 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 5, 142.21, 697.61, 152.37, 12.71 ], "formula_id": "formula_13", "formula_text": "M ℎ = [M ℎ ; M 𝑐 𝑦 𝑐 𝑗 ],(11)" } ]
Camera-Incremental Object Re-Identification with Identity Knowledge Evolution
Object Re-identification (ReID) aims to retrieve the probe object from many gallery images, using a ReID model inferred from a stationary, camera-free dataset in which identities are associated and collected across all camera views. When deploying a ReID algorithm in real-world scenarios, storage and privacy constraints and the dynamic addition of cameras degrade its generalizability and applicability. Treating each camera's data independently, we introduce a novel ReID task named Camera-Incremental Object Re-identification (CIOR), which continually optimizes the ReID model from an incoming stream of camera datasets. Since the identities observed under different camera views may describe the same object, associating and distilling the knowledge of these common identities boosts discrimination and helps alleviate catastrophic forgetting. In this paper, we propose a novel Identity Knowledge Evolution (IKE) framework for CIOR, consisting of Identity Knowledge Association (IKA), Identity Knowledge Distillation (IKD), and Identity Knowledge Update (IKU). IKA discovers the common identities between the current and historical identities. IKD distils historical identity knowledge from these common identities and quickly adapts the historical model to the current camera view. After each camera has been trained, IKU continually expands the identity knowledge by combining the historical and current identity memories. Evaluation on Market-CL and Veri-CL shows the effectiveness of Identity Knowledge Evolution (IKE) for CIOR. code:https://github.com/htyao89/
Hantao Yao; Lu Yu; Jifei Luo; Changsheng Xu
[ { "figure_caption": "Figure 1 :1Figure 1: An intuitive description of Camera-Incremental Object Re-Identification (CIOR) with eight camera views, where all camera datasets trained sequentially one after the other.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: An intuitive description of Identity Knowledge Association for CIOR.Based on the identity knowledge of the historical camera, the identities of current cameras can be divided into two sets: common identities and unique identitis. For incremental learning, the common identities can be applied to remember the historical knolwdge, while the unique identities can be used to infer the newly knowledge.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "𝑦 𝑐 𝑖 has the best matching score with the identity embedding M ℎ 𝑦 ′ 𝑖 , and M ℎ 𝑦 ′ 𝑖 also has the best matching score with M 𝑐 𝑦 𝑐 𝑖 , (𝑦 ′ 𝑖 , 𝑦 𝑐 𝑖", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :Figure 6 :56Figure 5: The changing of historical identities' number 𝐷 ℎ during training on Market-CL.", "figure_data": "", "figure_id": "fig_3", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Forgetting trend on Market-CL. The lower forgetting ratio, the better.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "'IKE*' and 'IKE-A' obtain the fmAP", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation studies for the Identity Knowledge Evolution. '*' denotes that the whole identities are used for knowledge distillation in IKD (Eq. (7)) and MKD(Eq. (8)).", "figure_data": "√√√√48.339.848.548.348.147.947.747.547.347.146.946.746.500.250.50.751𝜆", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of IKE on five different CIOR tasks on Market-CL dataset: 𝑚𝐴𝑃.", "figure_data": "TasksT1T2T3T4T5Avg.Baselines 32.3 31.7 32.7 33.3 32.332.5KD [20]35.6 36.2 36.5 32.3 38.735.9CRL [54]32.7 32.6 33.1 30.7 32.332.5LWF [26] 35.3 35.7 35.8 33.5 36.935.4EWC [23] 35.7 39.8 40.7 33.8 42.438.5MAS [2]35.7 38.8 38.8 34.1 42.037.9IKE39.8 39.3 39.6 34.3 44.7 39.5", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison of IKE on five different CIOR tasks on Market-CL dataset: fmAP.", "figure_data": "TasksT1T2T3T4T5Avg.Baselines 43.2 41.9 42.7 39.6 46.442.7KD [20]40.8 39.6 38.3 40.9 39.839.9CRL [54]35.8 38.23237.4 34.135.5LWF [26] 48.9 39.6 36.4 42.3 38.141.1EWC [23] 41.5 45.4 47.3 44.1 48.845.4MAS [2]44.9 44.3 42.9 44.3 45.144.3IKE48.3 44.8 47.2 44.4 51.8 47.3", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparion with existing methods on Market-1501 datasets. 'CI' and 'DI' denote 'Class-Incremental' and 'Domain-Incremental', respectively. 
'M=2' denotes that storing two images of each historical identity.", "figure_data": "SettingsMethodsC1C2C3C4C5C6 fmAP 𝑚𝐴𝑃#ids652 541694241576558---Baseline [9]29.7 27.2 37.6 30.0 33.5 36.336.332.4-KD [20]29.73037.8 37.43840.840.835.6EWC [23]29.7 30.3 39.7 35.3 37.1 41.541.535.7CIMAS [2] LWF [25]29.7 33.9 38.8 39.5 38.5 44.9 29.7 30.4 39.7 36.3 36.5 38.844.9 38.837.5 35.3CRL [54]29.7 28.33830.5 33.6 35.835.8 32.65SS-IL [1]29.7 29.8 40.2 39.1 39.9 41.441.435.0DIAKA [38]25.5 29.3 36.2 37.4 35.3 41.541.534.2Exempler CRL(M=2) [54] 29.7 32.3 39.3 39.7 40.2 41.641.637.1CI+ DIPGCA [27] IKE29.7 26.8 38.4 38.1 38.9 41.9 29.7 34.3 41.2 41.9 43.3 48.3 48.3 41.935.6 39.8-UpBound29.7 35.4 45.9 46.4 48.8 54.654.6 43.15", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison on Veri-CL datasets. .4 26.4 25.3 28.1 28.1 26.3 23.7 24.9 27.7 26.7 26.7 28.8 24.1 .6 26.3 28.5 31.1 31.8 32.5 32.9 32.6 33.2 34.5 35 35.3 33 33 30.5 from PGCA, IKE applies an Identity Knowledge Association to discover the the common identities apart from the new identitis. As shown in Table 4, the proposed IKE obtains a significant improvement upon the PGCA, e.g., improving fmAP/𝑚𝐴𝑃 from 41.9%/35.6% to 48.3%/39.8%. The superior performance demonstrates the effectiveness of the proposed IKE.", "figure_data": "Methodsc1c2c3c4c5c6c7c8c9c10c11 c12 c13 c14 fmAP 𝑚𝐴𝑃#ids316273296294268267267313314465356 350 340 335Baseline16.7 21.7 26.3 27.2 30.2 28.5 27.3 26.7 25.7 27.4 27.4 27.82924.224.226.1AKA [38]14.12 2124.125.1CRL [54]16.7 21.2 26.9 26.5 29.8 26.9 28.5 25.8 27.4 26.9 27.3 27.6 30.4 22.822.826.0KD [20]16.7 22.3 26.6 28.7 30.4 31.13130.83131.1 31.6 31.9 32.8 29.129.128.9LWF [25]16.7 22.4 28.1 30.2 32.2 30.6 29.4 28.5 28.5 29.6 29.1 29.7 31.4 27.527.528.1EWC [23]16.7 21.7 29.7 30.7 33.3 34.2 30.8 31.2 30.3 31.3 32.6 33.1 33.1252529.6MAS [2]16.7 22.6 27.8 30.2 31.7 32.9 30.5 30.9 30.4 32.4 32.9 32.7 32.6 26.326.329.3SS-IL [1]16.7 24.6 27.5 29.8 30.4 29.1 28.6 27.6 28.4 27.8 27.7 26.8 28.6 25.325.327.1PGCA [27] 16.7 24.2 32.1 30.43329.4 31.4 27.5 30.2 30.5 29.6 33.2 31.8 25.125.128.9IKE16.7 23", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[57]", "Explanation": "The cited work on person ReID provides a method for object re-identification that the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work on vehicle ReID offers a method for object re-identification that the citing paper adopts in their research."}, {"Category": "Supporting Evidence", "Citation": "[4-6, 16, 18, 19, 37, 46, 52, 53, 59]", "Explanation": "The cited works on supervised object ReID provide evidence of the effectiveness of using annotated labels in inferring a discriminative ReID model, which the citing paper may reference in their research."}, {"Category": "Supporting Evidence", "Citation": "[7, 9, 13-15, 29, 50, 51, 58]", "Explanation": "The cited works on unsupervised object ReID offer evidence of the usefulness of pseudo-labels or unsupervised consistency constraints in representation learning for object re-identification, which the citing paper may consider in their research."}, {"Category": "Extension or Continuation", "Citation": "[2,23,26,49]", "Explanation": "The cited works on class incremental learning provide a foundation for the citing paper to explore the more challenging task of CIOR in real-world object ReID."}, {"Category": "Methodological Basis", "Citation": "[38,45,54]", "Explanation": "The cited works on object incremental ReID provide methods and techniques that the citing paper adopts to study the CIOR task in real-world object ReID."}, {"Category": "Data Source", "Citation": "[38]", "Explanation": "The cited work on Adaptive Knowledge Accumulation defines a domain-incremental person ReID scenario that the citing paper utilizes as a data source for the CIOR task in real-world object ReID."}, {"Category": "Methodological Basis", "Citation": "[4,5,46,52,53]", "Explanation": "The cited works provide attention-based mechanisms that the citing paper adopts to infer a discriminative ReID model for retrieving the same objects from gallery images."}, {"Category": "Methodological Basis", "Citation": "[13,55]", "Explanation": "The cited works present part-based description models that the citing paper uses to improve the discriminative power of the ReID model for object retrieval."}, {"Category": "Methodological Basis", "Citation": "[19,24]", "Explanation": "The cited works introduce transform-based description methods that the citing paper employs to enhance the ReID model for more accurate object retrieval."}, {"Category": "Data Source", "Citation": "[7,29,44,58]", "Explanation": "The cited works provide extra labeled images that the citing paper uses to assist the unsupervised training on unlabeled person ReID by transferring labeled images to the unlabeled domains with GAN-based models."}, {"Category": "Data Source", "Citation": "[8,21,31]", "Explanation": "The cited works present methods to narrow the distribution gap in feature space, which the citing paper leverages to improve the ReID model for unlabeled person ReID training."}, {"Category": "Data Source", "Citation": "[12,28,47,48,56]", "Explanation": "The cited works propose techniques to acquire reliable pseudo labels, which the citing paper adopts to address the lack of annotation in the ReID model training process."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work introduces a bottom-up unsupervised clustering method that the citing paper adopts to consider both diversity and similarity in the ReID 
algorithm."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work provides a classification of existing incremental learning methods based on the use of task-specific information in the sequential training process, which the citing paper adopts to structure its own research on the topic."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work provides a classification of incremental learning tasks, which the citing paper adopts to structure its own research on incremental learning."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work defines a domain-incremental person ReID setting and proposes the Adaptive Knowledge Accumulation method, which the citing paper builds upon to explore the topic of incremental learning in person ReID."}, {"Category": "Methodological Basis", "Citation": "[42]", "Explanation": "The cited work proposes incremental settings for object Re-identification, which the citing paper uses to further develop the field of incremental learning in the context of object Re-identification."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work introduces a Patch-based Knowledge Distillation method to address the data distribution discrepancy in incremental learning, which the citing paper builds upon to improve the data processing in incremental learning."}, {"Category": "Methodological Basis", "Citation": "[45]", "Explanation": "The cited work designs a comprehensive learning objective that accounts for various aspects of incremental learning, which the citing paper adopts to develop a unified framework for incremental learning."}, {"Category": "Data Source", "Citation": "[57]", "Explanation": "The cited work, Market-1501, is used as a dataset for evaluating the effectiveness of the proposed method in the citing paper."}, {"Category": "Data Source", "Citation": "[30]", "Explanation": "The cited work, Veri-776, is used as a dataset for evaluating the effectiveness of the proposed method in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work is the basis for the proposed framework, as the framework is adopted based on the existing cluster contrastive framework from the cited work."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work, ResNet-50, is used as the backbone in the proposed framework, as the framework is adopted based on the existing cluster contrastive framework that uses the ResNet-50 as the backbone."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work, batch normalization layer, is used in the proposed framework to add a GEM pooling followed by a batch normalization layer and L2-normalization layer after layer4-1."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work, EWC, provides a method for reducing catastrophic forgetting in incremental learning, which the citing paper adopts in their research to address the issue of forgetting in their own study."}, {"Category": "Extension or Continuation", "Citation": "[2]", "Explanation": "The cited work, MAS, extends the research on incremental learning by exploring the use of memory in the process, which the citing paper builds upon in their own study to further improve the performance of their model."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work, SS-IL, provides a 
method for incremental learning that the citing paper adopts in their research to address the issue of forgetting in their study by using a more effective approach to learning and memory management."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work, MAS, is the basis for the proposed IKE method in the citing paper. The IKE method builds upon the MAS method to achieve a better performance in class-incremental methods."}, {"Category": "Extension or Continuation", "Citation": "[2]", "Explanation": "The proposed IKE method in the citing paper is an extension of the MAS method, exploring new dimensions and variables to improve the class-incremental performance."}, {"Category": "Supporting Evidence", "Citation": "[2]", "Explanation": "The cited work, MAS, provides a baseline for the class-incremental methods in the citing paper, demonstrating the need for improvement in this area."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b75", "b38", "b81", "b42", "b17", "b50", "b3", "b52", "b16", "b10", "b65", "b41", "b42", "b26", "b69", "b0", "b34", "b33", "b23", "b77", "b16", "b17", "b31", "b7", "b61", "b78", "b9", "b73", "b72", "b25", "b51", "b35", "b36", "b51", "b36", "b51", "b44", "b70" ], "table_ref": [], "text": "Human languages undergo constant change as a result of innovations in form and meaning, and competition between new and old forms of expression. For example, a phoneme may start being pronounced in a different way, or a new word order may be introduced. A wide range of factors may be responsible for innovation [76]. These include expressivity, for example, a desire to be noticed, recognisable, amusing, or charming [39], and economy, minimizing the effort needed to communicate without compromising the listener's understanding, which may lead to the development of novel, simpler forms [82]. Meanwhile, competition mediated by interactions with other components of the language may favour a more consistent mapping between form and function. Alongside these, social factors like prestige or taboo [43], may make certain variant forms more or less attractive to certain language users. In this work, our aim is to quantify the competition between linguistic variants that are available to a speech community, and therewith gain insights into its origins. We achieve this by viewing language change as a cultural evolutionary process [18,51,4,53]. When modelling cultural evolution [17,11], it has long been recognised that changes in variant frequencies may arise both from systematic biases (which we refer to generically as selection) and random drift. While drift may refer to directional change in linguistics following [66], we use it here in the cultural evolutionary sense, denoting unbiased stochastic change. Typically one is most interested in identifying the selective forces that cause one variant to be favoured over another, including linguistic [42] or social [43] factors. Eliminating the possibility that changes may be entirely due to drift is a necessary first step in this endeavour. Initial attempts to achieve this in the context of cultural evolution involved establishing statistical properties of drift and comparing with the corresponding features of empirical data. For example, the distributions of baby names [27] and Hittite ceramic bowl types [70], as measured at a single point in time, were found to be consistent with the predictions of drift. Under closer examination, however, deviations from drift were found in both cases, for example, by appealing to the rate at which the most abundant types are replaced [1].\nCultural and linguistic datasets provide a potentially rich source of data to constrain parameters in a model of the evolutionary process. In particular, by combining observations of token frequencies at multiple time points, one should achieve greater inferential power than can be achieved by considering only a single point in time. Although such analyses are challenging to construct, a number of forward steps have been made in recent years. For example, the evolution of pottery styles was investigated by appealing to predictions for the number of types remaining after a given time under drift [35] and by using simulated trajectories of variant frequencies in an Approximate Bayesian Computation scheme [34].\nHere, we analyse changes in linguistic corpus data with a method based on the Wright-Fisher model of evolution [24,78]. 
Although introduced as a model for changes in gene frequencies through biological reproduction, the Wright-Fisher model is also relevant to cultural evolution [17]. In the specific context of language change, the Wright-Fisher model has been shown to be equivalent to a variety of different conceptual approaches. For example, a mathematical formulation of Croft's descriptive theory of utterance selection [18], itself grounded in [32]'s generalised analysis of selection, was shown to have the same structure as the Wright-Fisher model [8]. Moreover, [62] showed that a version of the Wright-Fisher model that includes innovation and drift is equivalent to a model of iterated learning where language learners apply Bayesian inference to estimate a variant's frequency in their linguistic input. Other theories of language change, for example those that invoke a competition between multiple candidate grammars [79], can also be viewed as instances of Hull's generalised analysis of selection, and it has been argued that these may also be represented as a Wright-Fisher model [10].\nThe essence of the analysis presented below is to determine the values of parameters in the Wright-Fisher model that maximise the probability that the model generates the series of variant frequencies obtained from a historical corpus. As we set out in Section 2 below, one of these parameters quantifies the strength of selection, and the other the scale of fluctuations arising from random contributions to language change. A difficulty with the Wright-Fisher model is that the mathematical formulae that describe the evolution are difficult to work with. In genetics, a great deal of effort has been invested in devising reliable approximations that facilitate application to empirical time series [74], an effort that we utilise here in the cultural evolutionary context. Specifically we build on a Beta-with-spikes approximation [73] in a way that facilitates an efficient and reliable estimation of model parameters, as judged by benchmarking with both real and synthetic data [26].\nIn Section 3 we apply this method to historical corpus data in three separate investigations. First, we revisit the set of English verbs with irregular past tense forms that were previously examined by [52], [36] and [37], showing that our method is more reliable than that based on a normal approximation of the Wright-Fisher model [52] while offering greater interpretability than a neural-network based time-series classifier [37]. In common with [52], we find that some verbs appear to be irregularising over time.\nBy itself, the inferred strength of selection is not necessarily informative as to its underlying cause. Our second investigation demonstrates one approach by which such information can be gleaned. Specifically, we divide English verbs into two sets: those whose regular past tense form contains a repeated consonant, and those that do not. The former set is then subject to a conflict between the greater grammatical simplicity that would be gained by following the regular pattern and the greater phonological simplicity afforded by omitting the repeated consonant [45,71]. By comparing the selection strengths between the two sets, we can show that the latter constraint tends to override the former in the context of English verbs.\nFinally, we turn to a set of Spanish words that were affected by orthographical reforms in the 18 th and 19 th centuries. 
Here, we demonstrate that an unsupervised maximum-likelihood analysis can pinpoint with good accuracy the time at which the reforms were introduced and furthermore quantify the impact of the reform on the linguistic behaviour of the speech community. These last results illustrate that, even with time-series comprising a few measurement points, we can uncover social changes that might not otherwise be apparent. We discuss such opportunities, along with limitations of our method, further in Section 4." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Maximum likelihood estimation methods", "publication_ref": [ "b66", "b67", "b40", "b61", "b37", "b9", "b31", "b17", "b7", "b61", "b18", "b53", "b73", "b25", "b25" ], "table_ref": [], "text": "Maximum likelihood estimation is a conceptually simple yet powerful technique for estimating parameter values in a model and selecting between multiple candidate models. The basic setup is that we have both some empirical data, denoted X, and a probabilistic model that tells us how likely the observation X is given some choice of parameters Θ. We then estimate the values of the parameters by determining the combination that maximises the likelihood of the data. This procedure lies at the heart of many statistical methods, including linear regression. In such a model, parameters are chosen to maximise the likelihood of the data given a statistical model of the residuals [67,68], for example, that the residuals are drawn from a normal distribution. It can also be viewed as a special case of Bayesian inference with a uniform prior.\nIn this work we are concerned with frequency time series, that is, a sequence of measurements X = {(x t , t)} = {(x 1 , t 1 ), (x 2 , t 2 ), ..., (x m , t m )} where x i is the fraction of instances of use of a linguistic variable (e.g., all past-tense forms of a specific verb) in which a particular variant (e.g., the regular form) was used during a short time window centred on time t i . Thus, the dataset X = {(0.2, 1), (0.5, 2), (0.75, 5)} would imply a proportion of usage of the regular form of 20% at time 1, 50% at time 2 and 75% at time 5 (and no frequency data at any other time).\nThe underlying evolutionary model of language change determines a set of transition probabilities, Prob(x i+1 , t i+1 |x i , t i , Θ), that tell us how likely it is that, given a proportion x i at time t i and parameters Θ, the proportion will be x i+1 at time t i+1 . In the previous example, the dataset X would determine the transition probabilities Prob(0.5, 2|0.2, 1, Θ) and Prob(0.75, 5|0.5, 2, Θ), whose exact numerical values would depend on the choice of model parameters Θ. We assume that contributions to changes in variant frequencies at different points in time are uncorrelated, which means that we can write the likelihood of the entire frequency time series as the product of the transition probabilities for each interval:\nL(X|\Theta) = \prod_{i=1}^{m-1} \mathrm{Prob}\left(x_{i+1}, t_{i+1} \mid x_i, t_i, \Theta\right). \quad (1)\nIt is this likelihood function that we will maximise to determine the set of parameters Θ that best describes the cultural evolutionary dynamics, and that we will use to compare different models.\nThere are two main ways to choose the form of the transition probabilities Prob(•|•, Θ), a choice that is crucial to parameter estimation and subsequent interpretation. 
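Whichever form is eventually adopted for these transition probabilities, the likelihood (1) is assembled in the same way, by accumulating the (log-)probabilities of successive increments along the series. The following minimal Python sketch illustrates that structure; the placeholder Gaussian transition probability and all function names are illustrative assumptions, not the implementation used in this work.

```python
import math
from typing import Callable, Sequence, Tuple

# A frequency time series: (proportion, time) pairs, e.g. [(0.2, 1), (0.5, 2), (0.75, 5)].
Series = Sequence[Tuple[float, float]]


def log_likelihood(series: Series,
                   trans_prob: Callable[[float, float, float, float], float]) -> float:
    """Log of Equation (1): sum over intervals of log Prob(x_{i+1}, t_{i+1} | x_i, t_i).

    `trans_prob(x_next, t_next, x_prev, t_prev)` is any transition probability
    (or density) supplied by the chosen evolutionary model and its parameters.
    """
    total = 0.0
    for (x_prev, t_prev), (x_next, t_next) in zip(series[:-1], series[1:]):
        total += math.log(trans_prob(x_next, t_next, x_prev, t_prev))
    return total


# Illustrative placeholder: a Gaussian density on the frequency change, standing in
# for whichever transition probability is adopted.
def toy_transition(x_next, t_next, x_prev, t_prev, sigma=0.1):
    var = sigma ** 2 * (t_next - t_prev)
    return math.exp(-(x_next - x_prev) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)


if __name__ == "__main__":
    X = [(0.2, 1), (0.5, 2), (0.75, 5)]
    print(log_likelihood(X, toy_transition))
```

The two ways of choosing this transition probability are discussed next.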
One could simply assume that the frequencies follow a prescribed trajectory between the two points, subject to some fluctuations around them. For example, in linear regression, frequencies would be assumed to vary linearly with time with normally-distributed residuals. Logistic regression is similar, but instead assumes that frequencies vary following a nonlinear logistic function that is commonly used as a model for S-shaped language change [41,62,38]. A weakness of this approach is that without an underlying model of language production and transmission that may be operationalised as frequency time series, it is difficult to relate the parameters obtained to the behaviour of individuals or speech communities.\nThe alternative is to derive the transition probabilities from an explicit agent-based model of language change, many of which can be understood as a variant of the Wright-Fisher model of evolution [10]. As noted in the introduction, the transfer of a model from genetics is justified on theoretical grounds [32,18] and one can interpret the parameters by appealing to models of language use [8] or iterated Bayesian learning [62]. A drawback of the Wright-Fisher model is that exact expressions for the transition probabilities [19] are complex and difficult to work with computationally, as their associated transition matrices may become numerically intractable [54]. This has motivated many different approximation schemes [74]. In this work, we apply a self-contained Beta-with-Spikes approximation scheme that was developed and tested by [26] and found to provide reliable estimates for parameter values without incurring undue computational cost. In the following we overview the conceptual components of this approach that are most relevant to linguistic applications, directing the reader to [26] for technical details." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "The Wright-Fisher model", "publication_ref": [ "b73", "b18", "b7", "b9", "b11", "b76", "b9", "b7", "b61", "b41", "b42", "b43", "b17", "b37", "b79", "b18", "b73" ], "table_ref": [], "text": "The Wright-Fisher model describes a population of N replicating individuals of different types, each of which directly corresponds to a different variant of a linguistic variable. While any number of mutually exclusive types can be included in the model, we will focus on the case with two distinct types. Its extension to three or more distinct types is straightforward [74]. The quantity x t is the proportion of individuals of a specific variant in generation t, as described above. The process of replication has the effect that, in generation t + 1, each individual is of the variant of interest with probability g(x t , s), which depends both on the composition of the population and a measure of selection strength that we denote s. It is assumed that each of the N individuals in the new generation has its type assigned independently. This replication process is illustrated in Figure 1.\nThe two evolutionary parameters, N and s, can be afforded a linguistic interpretation as follows. The effective population size, N , quantifies the scale of fluctuations in the variant frequencies around a smooth trajectory of change. The smaller the effective population size, N , the larger the fluctuations. When selection is weak (s is close to zero), the time taken for a variant to go extinct is proportional to N [19]. Its interpretation as a population size, effortless in population genetics, does not work as well when studying language change. 
Through agent-based models of language learning and use [8,10], we understand that N generically correlates with the size of the speech community. However, heterogeneous social network structures can result in N correlating only weakly with the size of the human population [12,77,10]. The population size N is also related to the total usage of all variants of a linguistic variable in a given generation, and how long it is retained in memory [8,62]. Intuitively, speakers will be more consistent in their usage of specific variants the more they encounter them, and the longer they recall these encounters. This increased individual consistency will be reflected as smaller fluctuations in time series data. Although our analysis will not allow these different contributions to N to be distinguished, we will be able to determine which linguemes are subject to greater or lesser uncertainty in transmission between speakers.\nThe selection strength parameter, s, represents a tendency for the variant of interest to increase in relative usage (s > 0) or decrease (s < 0). Here, s subsumes all factors that could lead to a variant systematically increasing or decreasing in frequency over time, whether they originate in cultural, cognitive or language internal factors [42,43,44,18]. Similarly to the various factors that may influence the effective population size, we will not be able to distinguish them from the value of s alone. However, as we show below, we can gain useful information by looking for common features of variants which are found to have similar selection strengths.\nThe parameter s specifies the probability g(x, s) that an individual in generation t + 1 is an offspring of an individual with frequency x and selection strength s in the previous generation. Here, we take\ng(x, s) = \frac{1}{1 + \frac{1-x}{x}\, e^{-s}}, \quad (2)\nwhich has been commonly used in the theoretical characterisation of language change [38,80]. In Figure 2 we plot the transition probability Prob(x|x 0 , s) that results from this definition for the case where x 0 = 1/2 and for different values of s. We see that larger values of s shift the peak of this distribution towards higher values of this frequency x, consistent with the notion of a bias towards the corresponding variant.\nIn the literature, one can find relationships between the selection strength s and the probability g(x, s) different to that specified above [19,74]. Our chosen formula has the useful property that g(x, s) + g(1 -x, -s) = 1, which means that if one of two variants in a population has a selection strength s, the other one implicitly has a selection strength -s. This choice thus lends a symmetry between positive and negative selection strengths of the same magnitude, which aids the interpretation of the results. The strength s = 0 represents pure drift, where any changes in usage over time are due to the stochasticity of replication alone, and not the presence of selective forces. Under pure drift, g(x, s) reduces to g(x, 0) = x." }, { "figure_ref": [ "fig_2" ], "heading": "Beta-with-Spikes approximation", "publication_ref": [ "b72", "b25", "b25", "b22", "b51", "b72", "b25", "b25" ], "table_ref": [], "text": "For a single generation of evolution in the Wright-Fisher model, the transition probability is the binomial distribution because there are N individuals and a success probability of g(x t , s):\nProb(x_{t+1} \mid x_t, N, s) = \binom{N}{N x_{t+1}}\, g(x_t, s)^{N x_{t+1}} \left(1 - g(x_t, s)\right)^{N(1 - x_{t+1})}. \quad (3)
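To make the replication rule concrete, the following sketch simulates frequency trajectories under the binomial transition (3) with the bias function g(x, s) of Equation (2). It is an illustrative aid written for this summary (the function names and parameter values are our own choices), not the analysis code released with the paper.

```python
import numpy as np


def g(x: float, s: float) -> float:
    """Offspring probability of Equation (2); reduces to x when s = 0 (pure drift)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return 1.0 / (1.0 + ((1.0 - x) / x) * np.exp(-s))


def simulate_wf(x0: float, N: int, s: float, generations: int,
                rng: np.random.Generator) -> np.ndarray:
    """One Wright-Fisher trajectory: each generation draws Binomial(N, g(x, s)) / N."""
    traj = np.empty(generations + 1)
    traj[0] = x = x0
    for t in range(1, generations + 1):
        x = rng.binomial(N, g(x, s)) / N
        traj[t] = x
    return traj


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    drift = simulate_wf(0.5, N=1000, s=0.0, generations=50, rng=rng)
    selected = simulate_wf(0.5, N=1000, s=0.1, generations=50, rng=rng)
    print("final frequency under drift:     ", drift[-1])
    print("final frequency under selection: ", selected[-1])
```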
If the effective population size N is known, and the time between two frequency measurements corresponds to a single generation, one can use expression (3) for the transition probability in Equation (1) to construct the overall likelihood of a series of measurements. In the present application to linguistic corpus data, neither of these requirements holds. N is a parameter that we need to estimate, and measurement times are not in general separated by a fixed interval that constitutes a single Wright-Fisher generation. The Beta-with-Spikes approximation, introduced by [73] and extended by [26], is designed to deal with these complexities.\nFor two observations x t and x t+k made at times separated by k generations, the BwS approximation is\nProb(x_{t+k} \mid x_t, N, s) = P_{0,k}\,\delta(x_{t+k}) + P_{1,k}\,\delta(1 - x_{t+k}) + (1 - P_{0,k} - P_{1,k})\, \frac{x_{t+k}^{\alpha_k - 1}\,(1 - x_{t+k})^{\beta_k - 1}}{B(\alpha_k, \beta_k)}. \quad (4)\nHere, P 0,k , P 1,k , α k and β k are parameters that determine the shape of the distribution. These parameters have the following interpretation. P 0,k is the probability that the variant has gone extinct by the k th generation, and P 1,k is the probability that it has driven the other variant to extinction in that time. α k and β k control the shape of the distribution of variant frequencies, conditioned on neither of them having gone extinct. These parameters can be determined from the mean and variance of this conditional distribution (see [26]). Note that all four parameters depend on N and s, as well as the sequence of observed frequencies x ti . Therefore, they need to be recalculated for each time series and combination of model parameters.\nA crucial advantage of the BwS approximation is that it accounts for the fact that changes in variant frequencies cannot be arbitrarily large. If a variant has a low frequency (x close to zero), then a downward fluctuation should cause it to become extinct, rather than attain a negative frequency. It is the spikes (represented by the delta functions) in the Beta-with-Spikes expression (4) that incorporate this constraint. By contrast, a normal approximation to the same transition rates (as used by [23,52]) allows, in principle, arbitrarily large or negative x, instead of being constrained to the range 0 ≤ x ≤ 1. This difference is illustrated in Figure 3 which shows the statistical distances between the BwS and normal approximation and the exact WF transition probability, for different values of the initial frequency x 0 and two values of the selection strength s. We see that for both pure drift (s = 0) and strong selection (s = 0.5), the BwS approximation stays consistently closer to the exact distribution for all values of x 0 , which is reflected in lower values of the statistical distance. In particular, the BwS approximation is significantly better than the normal approximation for initial frequencies x 0 close to the edges of the interval, and for strong selection.\nThe main task in applying the BwS approximation is to estimate the parameters P 0,k , P 1,k , α k and β k for successive generations k = 1, 2, 3, . . .. The strategy of [73] is to match up the moments of the BwS distribution to those of the Wright-Fisher model after k generations have elapsed. This method works well when the selection strength s is small, but less so when it is large. 
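To make the role of these four parameters concrete, the sketch below evaluates the Beta-with-Spikes transition law (4) for given values of P 0,k, P 1,k, α k and β k. The numerical values used are arbitrary illustrations, since in the actual analysis the parameters must be computed from N, s and the observed series; this sketch is not the released implementation.

```python
import math


def beta_pdf(x: float, a: float, b: float) -> float:
    """Density of the Beta(a, b) distribution; B(a, b) is computed via the Gamma function."""
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_B)


def bws_density(x_new: float, p0: float, p1: float, a: float, b: float) -> float:
    """Beta-with-Spikes transition law of Equation (4).

    p0: probability mass at x = 0 (the variant has gone extinct after k generations)
    p1: probability mass at x = 1 (the variant has driven its competitor to extinction)
    a, b: shape parameters of the Beta part, conditioned on neither extinction nor fixation
    """
    if x_new == 0.0:
        return p0
    if x_new == 1.0:
        return p1
    return (1.0 - p0 - p1) * beta_pdf(x_new, a, b)


if __name__ == "__main__":
    # Purely illustrative parameter values; in practice they depend on N, s and
    # the number of elapsed generations k.
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(x, round(bws_density(x, p0=0.05, p1=0.02, a=8.0, b=6.0), 4))
```

Obtaining the four parameters accurately is the more delicate part of the calculation.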
[26] have improved on the method, particularly in the large s regime, by iterating (3) one generation at a time, and reading off the extinction probabilities, mean and variance at each. The code that implements this procedure and generates parameter estimates is available here. In the context of cultural evolution, it is not obvious what period of time counts as a generation in the Wright-Fisher model. In principle, this is a free parameter which would also need to be optimised by maximum likelihood estimation (and furthermore demand an interpretation). Fortunately, this is unnecessary. [26] further show that the optimum values of 1/N and s are both proportional to the chosen generation time. In other words, the generation time serves only to set the units in which the parameters N and s are measured. It is however important to use the same generation time across multiple time series when one wishes to compare the values of N and s that are obtained: otherwise, they would be in different units and not comparable. In this work we generally take the shortest time between successive observation points as the generation time. If one makes it shorter than this, the computational effort increases without any improvement in the quality of the estimates obtained. If one makes it longer, one must then aggregate multiple data points which then entails a loss of temporal resolution. However, as we discuss below, it is sometimes beneficial to combine data points to reduce sampling error that is not accounted for in the present maximum likelihood analysis." }, { "figure_ref": [], "heading": "Distinguishing selection from drift", "publication_ref": [ "b8", "b51", "b66", "b67", "b22" ], "table_ref": [], "text": "As established in the introduction, the social, linguistic and cognitive forces driving language change are very diverse. Still, their measurable effects can be broadly characterised as belonging to one of two types. Systematic biases drive the evolutionary process in a specific direction, and can be modelled as selective forces. Frequency effects and stochasticity in transmission produce random, unbiased drift whose effects are always present, albeit not always sufficient to explain the behaviour of the data. Quantitative, empirical analyses benefit from the simple yet powerful and flexible characterisation of language change afforded by this binary description.\nBy using the transition probabilities (4) in the likelihood function (1), we can find the maximum likelihood values of the effective population size N * and selection strength s * via\n(N^*, s^*) = \arg\max_{N,\,s} L(X \mid N, s). \quad (5)\nIn practice, we find that the likelihood function L(X|N, s) has a single maximum, which can be located by successively optimising on N at fixed s and vice versa.\nIt is important to establish whether the selection strength s * is significantly different to zero: otherwise, the null model of stochastic drift (s = 0) would be sufficient to explain the behaviour of the data without the need for selection [9,52].\nIn order to do this, we compare the maximal likelihood under selection, L(X|N * , s * ), with the maximal likelihood under pure drift. That is, we first restrict to s = 0 and determine the optimal effective population size N * 0 :\nN^*_0 = \arg\max_{N} L(X \mid N, 0). \quad (6)\nThen we compare the models with and without selection by computing the likelihood-ratio\n\lambda = 2 \ln \frac{L(X \mid N^*, s^*)}{L(X \mid N^*_0, 0)} . 
(7\n)\nThis quantity can be compared to a reference distribution to find a p-value, an estimation of the probability that the observed time series could have arisen from drift alone1 [67,68]. To achieve this, we follow the procedure outlined by [23] and generate 1,000 artificial time series spanning the same time period as the empirical data X with parameter values s = 0 and N = N * 0 . For each of these we compute the maximum likelihood values N * , s * and N * 0 , using the same sequence of steps as for the original empirical time series. We then compute the likelihood ratio λ and determine what fraction of the artificial time series has a larger λ than the one that was observed. This provides an empirical p-value for the null hypothesis of drift." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b20", "b51", "b36", "b51", "b36", "b49", "b54" ], "table_ref": [], "text": "We now apply the methods set out above in three separate tasks, each with a distinct purpose. First, we revisit the set of verb time series from the Corpus of Historical American English (COHA, [21]) to benchmark our approach against those of [52] and [37]. These results demonstrate that the BwS method is both more robust than a similar likelihood-based approach [52] and more informative than a neural network trained to perform a binary classification [37]. We also introduce a method for assessing the variability of parameter values under different binning strategies, thereby facilitating a judgement as to which results are more robust.\nWe then perform similar analyses to understand the direction of selection in the context of English irregular verbs, this time using the English 2019 1-grams and 2-grams datasets from the Google Books corpus [50]. This larger corpus contains more instances of verbs that appear to be irregularising over time. We find that a phonological constraint that disfavours repeated consonants can override a general preference for regularity. Finally, we use data from the 2019 Spanish 1-gram Google Books corpus to show that the dates at which Spanish spelling reforms were introduced can be detected using the unsupervised maximum-likelihood analysis.\nThe validity of using frequency data from the Google Books corpus to draw conclusions on cultural evolution and language change has been questioned by [55] due to the over-representation of scientific literature in the English sub-corpus throughout the 20th and 21st centuries. While they propose restricting studies of cultural and language change to the fiction sub-corpus, we believe that using frequency data from the general English sub-corpus is justified for the purposes of our study. First, our work rests on the comparison between two data sets of English verbs differing only in their phonology. It is reasonable to assume that, if any bias exists in scientific texts regarding the use of irregular or regular forms of verbs, this bias will not be phonologically conditioned, thus maintaining the validity of the comparison between both data sets. Secondly, we have chosen verbs that are reasonably present in both the general English corpus and the English Fiction corpus, so a potential biases towards uncommon verbs in scientific literature should not be an issue. Thirdly, the general English sub-corpus will contain more words than the restricted fiction sub-corpus, thus reducing the effect of sampling noise on our results." 
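Before turning to the individual case studies, the following sketch summarises how the estimation and testing pipeline of Section 2.4 (Equations (5)-(7) together with the simulated reference distribution) can be organised. It is a self-contained illustration only: the Gaussian-increment log-likelihood is a crude stand-in for the Beta-with-Spikes likelihood, the grid search is a simple substitute for the alternating optimisation described above, and none of the function names correspond to the released code.

```python
import numpy as np


def log_likelihood(series, N, s):
    """Stand-in log-likelihood with Gaussian increments; a placeholder for the
    Beta-with-Spikes likelihood, used only to make the sketch self-contained."""
    ll = 0.0
    for (x0, t0), (x1, t1) in zip(series[:-1], series[1:]):
        dt = t1 - t0
        mean = x0 + s * x0 * (1 - x0) * dt
        var = max(x0 * (1 - x0), 1e-6) * dt / N
        ll += -0.5 * ((x1 - mean) ** 2 / var + np.log(2 * np.pi * var))
    return ll


def fit(series, N_grid, s_grid):
    """Equations (5)/(6): maximise the likelihood, here by a simple grid search."""
    best = (-np.inf, None, None)
    for N in N_grid:
        for s in s_grid:
            ll = log_likelihood(series, N, s)
            if ll > best[0]:
                best = (ll, N, s)
    return best  # (max log-likelihood, N*, s*)


def g(x, s):
    """Offspring probability of Equation (2)."""
    return x if x <= 0 or x >= 1 else 1.0 / (1.0 + ((1.0 - x) / x) * np.exp(-s))


def simulate(x0, times, N, s, rng):
    """Crude surrogate trajectories (one Wright-Fisher step per observation interval),
    used to build the reference distribution for the likelihood-ratio."""
    xs, x = [x0], x0
    for _ in times[1:]:
        x = rng.binomial(int(N), g(x, s)) / int(N)
        xs.append(x)
    return list(zip(xs, times))


def drift_p_value(series, N_grid, s_grid, n_boot=100, seed=0):
    """Likelihood-ratio lambda of Equation (7) and an empirical p-value for drift."""
    rng = np.random.default_rng(seed)
    ll_sel, N_star, s_star = fit(series, N_grid, s_grid)
    ll_null, N0_star, _ = fit(series, N_grid, [0.0])
    lam_obs = 2 * (ll_sel - ll_null)

    times = [t for _, t in series]
    exceed = 0
    for _ in range(n_boot):
        fake = simulate(series[0][0], times, N0_star, 0.0, rng)
        lam = 2 * (fit(fake, N_grid, s_grid)[0] - fit(fake, N_grid, [0.0])[0])
        exceed += lam >= lam_obs
    return (N_star, s_star), lam_obs, exceed / n_boot


if __name__ == "__main__":
    # A short artificial frequency series; times are measured in generations.
    X = [(0.20, 0), (0.35, 1), (0.55, 2), (0.70, 3), (0.90, 4)]
    N_grid = np.linspace(50, 5000, 25)
    s_grid = np.linspace(-1.0, 1.0, 41)      # must include s = 0
    (N_star, s_star), lam, p = drift_p_value(X, N_grid, s_grid)
    print(f"N* = {N_star:.0f}  s* = {s_star:.2f}  lambda = {lam:.2f}  p(drift) = {p:.2f}")
```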
}, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Drift vs selection in past-tense English verbs", "publication_ref": [ "b45", "b19", "b51", "b35", "b36", "b62", "b22", "b20", "b51", "b35", "b36", "b36", "b51", "b51", "b35", "b36", "b51", "b73", "b25", "b25", "b35", "b71", "b32", "b51", "b73" ], "table_ref": [], "text": "A simple example of competition between two variants is provided by English verbs with an irregular past tense form which in many cases coexists with a regular form. This competition has been studied from a variety of quantitative perspectives [46,20,52,36,37,63]. Of greatest relevance to the present work are those studies that aimed to distinguish drift from selection as the mechanism behind changes in the relative frequencies of the regular and irregular forms over time.\n[52] applied the Frequency Increment Test (FIT, [23]) to a set of verbs from the Corpus of Historical American English (COHA, [21]). This is a maximum-likelihood method that rests on a normal approximation to the Wright-Fisher transition probabilities. Like the Beta-with-Spikes maximum-likelihood method in Section 2.4, this method yields estimates of the effective population size and selection strength, along with a p-value for the null hypothesis of pure drift. However, there are situations where results are flagged as unreliable due to the frequency increments failing a normality test [52]. [36] further noted that the results can also be sensitive to the size of the window over which frequencies are estimated.\n[37] avoid these issues by taking the rather different approach of training a neural network on simulated time series generated by the Wright-Fisher transition probabilities (3) for different values of s (but fixed N = 1, 000). Each time series in the training set is labelled according to whether it was generated purely by drift (s = 0) or if selection was operating (s ̸ = 0). Once trained, the network yields a binary classification of empirical time series, according to whether they are more similar to the examples of drift or selection in the training data. We refer to this as the Time Series Classification (TSC) approach. The advantage of TSC is that no approximation to the Wright-Fisher transition Figure 4: Results for the detection of selective forces in 36 COHA verbs, with three different methods and for three different temporal binnings of 10, 20 and 40 years. Results for both the FIT and BwS likelihood-ratio algorithms produce a p-value for the pure drift hypothesis. Blue shades represent higher p-values (i.e., higher likelihood of the data under drift), while red shades represent p-values under the traditional 0.05 threshold of significance for selection. Time series where the normal approximation that FIT relies on is inaccurate are crossed out. Results for the TSC method from [37] are classified in a binary way as either drift or selection. The average p-value across the three bins widths obtained through the BwS algorithm is shown along the horizontal axis. We note that the BwS method gives results consistent with TSC when FIT is unreliable.\nprobabilities is made. Moreover, one can manipulate the training data so that it displays artifacts of binning or finite sample sizes that are features of real time series, which in turn should improve the reliability of the classification. This approach does however come with some drawbacks. Whilst the output from the classification algorithm is a value between 0 and 1, it does not have an obvious interpretation as a probability. 
[37] used a threshold of 0.5 to label timeseries as arising from drift or selection. The method further does not provide an estimate of the strength of s, and since N was fixed in the training set, this amounts to an assumption that this single value of N was appropriate for all empirical time series. This could be an issue since [52] report a wide range of values of N for this data set (from around 80 to around 22, 500).\nIn Section A of the appendix we report the maximum likelihood estimates of N and s, along with the p-value for the drift hypothesis, obtained using the BwS method for the same set of verbs that were considered by [52] and [36] using FIT and by [37] using TSC. We perform the analysis by extracting annual frequency data of the variants of interest from COHA and aggregating it into 10-, 20-and 40-year bins. The reason for this is a trade-off between the more precise frequency estimates that derive from larger bins and the greater temporal resolution obtained from a larger number of bins over the relevant historical period. By employing different binning strategies, we can gain insights into the consequences of this trade-off. Variable-width binning strategies have also been successfully applied in previous studies [52]. In these, the number of tokens per bin is kept roughly constant at an arbitrarily chosen value, at the expense of varying their temporal width. For the purpose of comparing the different methods, we have chosen to look only at fixed-binning strategies, although the BwS method could be combined with variable-width binning.\nWe focus first on the role played by selective forces, which we quantify by appealing to the p values associated with the null hypothesis of pure drift as described in Section 2.4. In Figure 4 we compare the results obtained from the three different methods by ordering the verbs from left to right by decreasing BwS p-value, averaged over the three temporal binnings. Each panel corresponds to a different analysis method, and indicates the p-value for the hypothesis of pure drift for each verb and binning protocol. We recall that higher p-values are more suggestive of the historical changes being due to drift: these are represented with colours ranging from light to dark blue, with darker colours representing higher p-values. Meanwhile, low p-values point towards other forces (such as selection) being present and are represented with different shades of red. While we use the standard p-value threshold of 0.05 in the transition between blue (drift) and red (selection) in this representation, we acknowledge that these mechanisms lie in a continuum by making the transition between these extremes smooth.\nWe see from Figure 4 that the three distinct methods give broadly consistent results, with those verbs towards the left being more compatible with change through pure drift, and those to the right with change from selection. More precisely, the correlation coefficients between the p-values obtained with different methods are 0.63 (Pearson) between FIT and BwS, 0.68 (biserial) between TSC and FIT, and 0.62 (biserial) between BwS and TSC. Analyses producing high p-values for selection (i.e. implying that drift alone can explain the behaviour of the data) are indicated with blue colours, whereas those where selection is more significant are red. Results obtained through the FIT method are generally consistent with those obtained with the BwS method. 
However, 30 of the FIT results (27.8% of the total) are flagged as 'unreliable' due to a failed normality test. These reliability issues are designed out of the BwS method, as it does not require normally-distributed increments [74,26]. Confidence in the method's reliability is also gained by benchmarking with synthetic and genetic data [26] and through the consistency with the independent TSC results. The higher precision of the BwS at high selection strengths leads to higher significance (lower p-values) in its detection of selective forces when compared to the normal approximation, leading to redder colours in Figure 3.\nThe TSC appears to give a cleaner classification of verbs according to drift and selection, and greater consistency with different choices of bin size. This is likely due in part to the training data being subjected to the same binning protocol as the empirical time-series, but also because a strict threshold was applied to the neural network's output value to partition into the two classes. While the TSC neural networks produce a value between 0 and 1 as their output, making it more nuanced than this binary classification would suggest, this number is not a probability or a p-value like those produced by BwS or FIT. Thus, an arbitrary threshold is necessary in order to classify time series as driven by drift or selection. A higher or lower threshold would put the boundary between the two classes in a different place. This hinders the interpretability of the result and the estimation of significance levels.\nOur results further demonstrate that variation in p-values under different binning strategies, previously observed within the FIT analysis [36], remains evident under the less restrictive BwS analysis presented here. We consequently regard this variability as an inherent feature of the time series data: that is, some changes are harder to classify than others. That is, this uncertainty need not be a failure of the method, but a reflection of linguistic reality. For example, it could reflect different variants being used less predictably by speakers, or by the constraints on variation changing over short timescales [72].\nSuch observations motivate a more detailed investigation of the classifiability of individual time series. A time series that shows limited variation in parameter values under different temporal binnings is more classifiable than one that shows more variation. With our interest in selection, the two most relevant parameters are s, the selection strength, and the p-value associated with the drift hypothesis. We can visualise the variation in these parameters by performing a Principal Components Analysis [33] on combinations obtained through different binning strategies (in this case, bins of 10, 20 and 40 years). The interior of the resulting ellipses indicates the range of variation of the two parameters over different binning strategies. This way, they provide a visualization of not just the average, but the uncertainty and covariance of s and the p-value under different binning strategies. We show these ellipses for the COHA verbs in Figure 5. The upper panel contains the full range of p and s values obtained through the analysis, while the lower panel zooms in on the region where the drift p-value is smaller than 0.05 (i.e., the conventional threshold for rejecting the null hypothesis). 
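A sketch of how such an ellipse can be constructed for a single verb is given below; it assumes that the (s, p-value) pairs for the 10-, 20- and 40-year binnings have already been obtained, and the numerical values shown are placeholders rather than results from the corpus.

```python
import numpy as np


def pca_ellipse(points: np.ndarray):
    """Principal components of a small set of (s, p-value) points.

    Returns the centre, the principal axis directions (as columns) and the
    standard deviations along them, which together define the ellipse.
    """
    centre = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]
    return centre, eigvecs[:, order], np.sqrt(np.maximum(eigvals[order], 0.0))


if __name__ == "__main__":
    # Placeholder (s, p-value) estimates for one verb under 10-, 20- and 40-year bins.
    estimates = np.array([
        [-0.012, 0.30],
        [-0.020, 0.18],
        [-0.031, 0.08],
    ])
    centre, axes, radii = pca_ellipse(estimates)
    print("centre:", centre)
    print("major axis direction:", axes[:, 0], "with spread", radii[0])
```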
We see a correlation between the maximum likelihood value of s and the p-value (both through the positions and rotation angles of each ellipse).\nThe ellipses that lie entirely within the lower panel correspond to the verbs that are most likely to be driven by selection. We see a clear split between four verbs with positive selection (catch, light, wake and quit), which corresponds to them becoming more irregular over time, and six verbs (learn, lean, burn, smell, dwell and spill) with negative selection, and thus regularising over time. In this analysis, the frequency x is the fraction of irregular forms used in the relevant context. Across the entire plane, there is evidence of both regularisation and irregularisation, although in most cases it is difficult to rule out drift as an explanation for the changes, as was observed by [52].\nIn interpreting these results, it is important to recognise that the presence of fluctuations around a smooth change trajectory will tend towards a higher drift p-value, since in the analysis drift is the sole source of fluctuations. It is possible that fluctuations in the corpus derive from other sources, such as sampling effects associated with a finite corpus. Some methods for estimating parameters in the Wright-Fisher model attempt to account for such fluctuations separately to drift [74]. These are, however, typically difficult to implement, and instead we sidestep the issue by ensuring sufficiently many tokens in each temporal bin that the frequency is well estimated. As such, we might expect to see stronger evidence for selection as the bin width is increased, which appears to be true for some (but not all) of the verbs with intermediate p-values. This suggests that some language changes may be dominated by the random effects of drift and therefore exhibit strong fluctuations even in very large corpora.\nTo summarise, we have shown in this section that the BwS method can be readily applied to historical corpus data for changes in the frequencies of linguistic variants. It provides estimates of parameters in the Wright-Fisher model that do not rest on an assumption that frequency increments are drawn from a normal distribution, and we find broad consistency in the strength of support for a drift hypothesis with complementary methods." }, { "figure_ref": [ "fig_3", "fig_4", "fig_4", "fig_4", "fig_4" ], "heading": "Competing linguistic motivations in English verbs", "publication_ref": [ "b15", "b68", "b19", "b62", "b14", "b56", "b19", "b51", "b57", "b29", "b21", "b5", "b6", "b27", "b39", "b28", "b49", "b47", "b13", "b44", "b70", "b46", "b24", "b55" ], "table_ref": [], "text": "In the previous section we observed a split between some verbs that were regularising and some that were irregularising. While the extension of regular inflection at the expense of irregulars seems to be the norm (e.g. [16,69]), irregularisation is however an attested phenomenon. [20] found that the processes of regularisation and irregularisation tend to take place with similar frequency, something that is also perhaps suggested by Figure 5, which shows a similar density of verbs along the branch with positive s (towards irregularity) and negative s (towards regularity). [63] suggest that irregularisation may occur if the number of verbs within an irregularity class is high enough to surpass a productivity threshold. Following [15,57], both [20] and [52] propose phonological analogy as a potential mechanism for irregularisation. 
Couched in the terms of the present work, this would correspond to the general rule (adding -ed) contributing a negative value to s whilst rules that apply only to a specific subset of verbs contribute a positive value to s. Note that we do not necessarily imply that these contributions are additive: for example, in optimality theory [58,30], higher-ranked rules take precedence over lower-ranked rules. In general, we may regard opposing forces on linguistic variation as arising from competing motivations which have been discussed in a variety of language change contexts (e.g., [22,6,7,28,40,29]). By whatever mechanism this opposition is resolved, an overall positive s value here indicates that the irregularising rule is dominant.\nIn this section, we investigate a distinct motivation that may favour irregularisation, namely the phonological simplicity that is afforded by omitting a sound repetition that would occur under application of the regular rule. Specifically, we consider verbs whose infinitives end in alveolar stops (/d/ or /t/) and have an irregular past form where the regular -ed termination is omitted. Examples include I bled instead of I bleeded or she bet instead of she betted. Verbs where devoicing of final /d/ or changes in the root vowel take place on top of the omission of the termination are also considered. Thus, we hypothesise that the regular form is preferred from the point of view of inflectional simplicity (i.e. using the regular everywhere leads to a simpler inflectional system), while the irregular form is favoured by phonological simplicity. By applying the BwS algorithm to estimate the s parameter (and in particular, its sign), we can assess how these competing motivations play out.\nFor this investigation we switch to the 2019 English Google Books corpus [50], as the number of verbs falling into this category and whose past tense forms are both sufficiently frequent and can be reliably identified is relatively small. The larger size of Google Books relative to COHA allows more examples to be included. We identified 19 English verbs whose irregular and regular forms both show usage above 1% at least in one 5-year bin in the Google Books corpus in the considered time frame (1809 to 2009). These verbs are: bend, bet, bite, blend, build, fit, glide, knit, light, pat, plead, quit, slide, speed, spit, thrust, tread, wed, and wet. A difficulty in the analysis is that the irregular past-tense form can coincide with certain present-tense forms. A major exception is when the verb is preceded by a third-person singular pronoun (e.g., the present he bets versus the irregular past she bet), which can easily be distinguished in the bigram dataset. We recognise that this separation is not perfect: for example, certain English varieties do not use the third person marker -s, but we consider the effect of these contributions to be negligible in the corpus. We also kept only those cases where the pronoun was judged to appear at the start of a sentence (by virtue of capitalisation), so as to exclude contexts where the pronoun is followed by the infinitive in a question or an inversion. Again there are situations where capitalised pronouns can appear mid-sentence, but these are also rare. 
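A sketch of this filtering step is given below. It assumes a simplified tab-separated bigram file with (year, word1, word2, count) rows, which is not the exact Google Books n-gram format, and the verb forms listed are an example; the routine is illustrative of the selection criteria just described rather than the extraction pipeline actually used.

```python
import csv
from collections import defaultdict

PRONOUNS = {"He", "She"}     # capitalised forms, as a proxy for sentence-initial position
FORMS = {"bet": "irregular", "betted": "regular"}    # example verb: bet

def past_tense_counts(bigram_file):
    """Aggregate counts of 'He/She + past-tense form' bigrams by year.

    Assumes a simplified tab-separated format with rows
    (year, word1, word2, match_count); real n-gram files need extra parsing."""
    counts = defaultdict(lambda: {"irregular": 0, "regular": 0})
    with open(bigram_file, newline="", encoding="utf-8") as fh:
        for year, w1, w2, n in csv.reader(fh, delimiter="\t"):
            if w1 in PRONOUNS and w2 in FORMS:   # keep third-person singular contexts only
                counts[int(year)][FORMS[w2]] += int(n)
    return counts

def irregular_fraction(counts):
    """Yearly frequency x of the irregular variant among the two competing forms."""
    return {year: c["irregular"] / (c["irregular"] + c["regular"])
            for year, c in sorted(counts.items())
            if c["irregular"] + c["regular"] > 0}
```

The yearly fractions produced this way would then be binned and analysed with the BwS method in the same way as the COHA frequencies.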
With this filtering in place, total counts of usage for verbs whose irregular past-tense form is not distinct from their base form range roughly between 2,600 (knit) and 120,000 (pat), while counts for verbs whose irregular past tense is distinct from their base form range between 600,000 (glide) and 40,000,000 (build).
In order to formally test whether a potential bias towards irregularisation is significant, a similar analysis was carried out on a baseline set of 34 English verbs whose base form does not end in /d/ or /t/. Data was extracted from Google Books, and all verbs satisfy the same conditions on minimal usage in the time frame of interest (1809-2009). The chosen verbs are: awake, blow, burn, catch, cleave, creep, dive, dream, dwell, freeze, grow, hang, heave, hew, kneel, lean, leap, learn, shake, shear, shine, slay, slink, smell, sneak, spell, spill, spoil, strew, string, strive, swell, wake and weave. Total usage for these verbs in the Google Books corpus for the specified period ranges between 211,000 tokens (slink) and 31,900,000 (learn), in the same orders of magnitude as the /d/, /t/ set. The maximum likelihood parameters for these 53 verbs are given in Section B of the appendix. Here, we visualise our findings by plotting ellipses in the plane spanned by the selection strength and the indicator of selection, following the same procedure as previously described for the COHA verbs, albeit with the addition of a 5-year temporal binning strategy. With this, each ellipse in the s-p plane for each verb is produced by averaging the results of the analyses of at most four temporal binnings. We recall that these ellipses characterise the variability in these parameters as the temporal binning is varied. The upper panel in Figure 6 shows the results for all 53 verbs.
Table 1: Contingency table for the comparison of irregularising behaviour between the set of verbs ending in alveolar stops and the baseline set. Irregularisation is significantly more common amongst verbs ending in alveolar stops, with a p-value of 0.031 as provided by the G-test.
For the purpose of comparing the two sets of verbs, we partition the s-p plane into four regions: those with positive or negative selection strengths; and those where the p-value falls above or below 0.05. The lower panel of Figure 6 zooms in on this latter region, which we may regard as showing evidence of selection. In both panels, red crosses and ellipses correspond to verbs ending in alveolar stops, while blue crosses and ellipses correspond to verbs in the baseline set. Given our interest in irregularisation, three groups of verbs can be identified. 16 verbs (awake, bend, bet, bite, catch, fit, hang, light, quit, shake, slide, sneak, spit, strew, wake, wed) have their confidence regions (ellipses) completely contained in the region of likely selection of the irregular form (p < 0.05 and s > 0, i.e. the right-hand side of the lower panel). Of those, 9 are in the alveolar stop set and 7 are in the baseline set. Six verbs (freeze, kneel, leap, plead, swell, thrust) have confidence regions only partially contained in this region of the s-p plane, indicating that, while selective forces towards the irregular form are a plausible explanation for their dynamics, the pure drift hypothesis cannot be confidently ruled out.
The remaining 31 verbs (8 in the alveolar stop set, 23 in the baseline set) have confidence regions contained entirely outside this region of likely irregularisation.
These results suggest that verbs in the alveolar stop set are more likely to be selected towards their phonologically simpler irregular form than their counterparts in the baseline set. To test the significance of these findings, we construct the 2 × 3 contingency table shown in Table 1, where one dimension expresses membership of the alveolar stop or the baseline set, while the other expresses whether a verb's ellipse falls entirely, partially or not at all within the irregularisation region in the bottom panel of Figure 6. The p-value for the null hypothesis that the baseline and alveolar stop verbs are drawn from the same distribution is 0.031, as obtained by applying the G-test of goodness-of-fit to the contingency table [48]. This indicates that the specific rule favouring phonological simplicity likely outcompetes a general tendency towards regularity.
It is possible that other effects may be responsible for this subset of verbs tending to irregularise. For example, it is well understood that higher-frequency items tend to tolerate greater irregularity [14]. Given the selection criteria imposed to arrive at the sets of verbs in this analysis, it is possible that the sample is skewed towards higher-frequency and more irregular forms. However, as noted, the total token counts for both the baseline set and the alveolar stop set span similar ranges, and also have similar averages (of around 5 million for both sets). Therefore we consider this alternative explanation unlikely. This is not the only phonological conditioning on irregularisation that can be inferred from Figure 6. The subset of verbs ending in a short vowel plus a lateral (dwell, smell, spell, spill, swell) seems considerably more likely to regularise under selection than other verbs in the study. A G-test analogous to the one performed on the alveolar stop set in Table 1 reveals that this tendency is significant, with p < 0.003. The origin of this tendency is, however, unclear.
To summarise, in this section we have shown that by focussing on a subset of verbs that are subject to a specific combination of competing motivations, the Wright-Fisher model combined with the BwS approximation can be used to determine the net effect of this competition. Specifically, we have acquired evidence that phonological simplicity dominates inflectional simplicity in this competition, suggesting perhaps that this is an instance of an OCP constraint (Obligatory Contour Principle, [45,71]). OCP constraints disfavour pairs of identical or near-identical consonants in close proximity to each other. In particular, the constraint here appears to be an OCP-place constraint ([47,25,56]), meaning that it does not just affect identical consonants, but all alveolar stops independently of voicing." }, { "figure_ref": [], "heading": "Spanish spelling reforms", "publication_ref": [ "b30", "b42", "b48", "b2", "b63", "b74", "b25", "b4", "b58", "b59", "b60", "b25", "b1", "b25", "b58", "b59", "b60" ], "table_ref": [], "text": "So far we have assumed that the evolutionary parameters (the effective population size N and the selection strength s) have been constant over time. In the case of competition between regular and irregular verbs this is a reasonable assumption, since the factors favouring one over the other are likely to be cognitive or linguistic in origin.
By contrast, social pressures like prestige, taboo, or language contact [31,43,49] are inherently time-dependent, and we may expect the selection strength in particular to change over time. Here, we investigate this possibility in the context of a purposeful change made by a regulating institution through prescriptive grammar and spelling rules [3,64], the acceptance or rejection of which we expect to be reflected by a change in the value of s. While well-established algorithms like change-point analysis [75] exist for the detection of change in time series, these suffer from shortcomings that make them inadequate for a more nuanced analysis of change in language and culture. First, change-point analysis assumes that the data fluctuate around a constant average before and after a change point, at which the value of that average shifts instantaneously. This makes the methodology suitable only for detecting rapidly occurring S-shaped curves of language change, where the usage frequency of a variant quickly changes and stabilises. Secondly, change-point analysis provides no extra linguistic information, as it does not assume a model of the underlying evolutionary dynamics. [26] address these issues by setting out a procedure for estimating times at which the parameters N and s change, thus measuring changes in the evolutionary dynamics of the data rather than in its average. We briefly recapitulate and then apply this method below.
The specific changes of interest are spelling reforms in Spanish that were introduced by the Real Academia Española (RAE), the central regulatory institution of the standard Spanish language. Since its creation in 1713, the RAE has regulated Spanish orthography following the phonemic principle over etymological or conservative approaches [5]. We study words affected by one of the following reforms: (A) The simplification of the <ss> digraph to a single <s> in 1763, due to the different sounds that the two spellings represented having merged in the 16th century [59]; (B) The replacement in 1815 of etymological <x> with <j> in all non-word-final contexts where it represented the phoneme /x/ [60]; (C) The replacement, also in 1815, of <y> with <i> in all non-word-final closing diphthongs; (D) The reversal of accentuation rules for words ending in <n>, introduced in 1881. This reform stipulated that words ending in <n> with a tonic last syllable had to be accentuated, while words ending in <n> with a tonic penultimate syllable lost their previously prescribed accent [61]. We treat words that gain an accent and words that lose an accent as independent sets (D.1 and D.2, respectively).
We now seek to estimate the time at which each reform occurred by appealing only to the time series data and no external information. The basic idea (see also [26]) is to allow different parameter combinations (N, s) to apply before and after a time T. That is, for t < T the Wright-Fisher model with parameters (N_1, s_1) applies, and for t > T the parameters (N_2, s_2) apply. The data likelihood, obtained by combining Eqs. (1) and (4), is then maximised with respect to all five parameters (i.e., N and s each before and after the change, and the time T of the change itself).
After identifying the time T that maximises the data likelihood, one needs to determine whether the additional complexity of the five-parameter model is compensated by a sufficiently improved description of the data.
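The following Python sketch shows the overall shape of this optimisation. The function wf_log_likelihood stands in for the BwS likelihood of Eqs. (1) and (4) and is assumed to be supplied by the analysis code; the grid search over N and s is a deliberately crude stand-in for the numerical maximisation actually performed, and all names here are illustrative rather than taken from the paper's implementation.

```python
def split_fit(x, times, wf_log_likelihood, N_grid, s_grid):
    """Maximise the five-parameter change-point likelihood by brute force.

    `wf_log_likelihood(x, times, N, s)` is assumed to return the log of the
    data likelihood of Eq. (1) under the Beta-with-Spikes approximation.
    Returns (log-likelihood, T, (N1, s1), (N2, s2)) for the best split."""
    def best_single(xs, ts):
        # best (N, s) pair for one segment under the two-parameter model
        return max((wf_log_likelihood(xs, ts, N, s), N, s)
                   for N in N_grid for s in s_grid)

    best = None
    for k in range(2, len(x) - 1):     # candidate change points; >= 2 points per segment
        logL1, N1, s1 = best_single(x[:k], times[:k])
        logL2, N2, s2 = best_single(x[k - 1:], times[k - 1:])
        candidate = (logL1 + logL2, times[k - 1], (N1, s1), (N2, s2))
        if best is None or candidate[0] > best[0]:
            best = candidate
    return best
```

Whether the improvement in fit obtained this way justifies the two additional parameters still has to be assessed.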
To make this determination, we obtain an empirical p-value for the null hypothesis that the selection strength s was constant over the entire time period, following a procedure similar to that described in Section 2.4. Specifically, we determine the maximum likelihood values of N and s without a change point, and use these parameter values to generate 500 synthetic time series that match the length of the observed series. For each of these time series, we then optimise the five-parameter likelihood that applies when the selection strength changes at a single point in time. An empirical p-value is then given by the fraction of such time series whose five-parameter likelihood exceeds that of the real trajectory. Although computational constraints limit the number of synthetic time series that can be analysed this way, we find that situations where the five-parameter fit has a high likelihood are extremely rare, and there is little to be gained by estimating their rarity to greater precision. One can then apply a threshold, e.g., p < 0.05, to decide whether to accept the more complex model. Having split the time series once, one can apply the method again to each sub-series, thereby identifying secondary change points. This procedure terminates when none of the sub-series admits a subdivision that yields a sufficiently improved description of the data according to the threshold that has been imposed.
To apply this method to the Spanish spelling reforms, we identify a set of commonly used words that are affected by each one, and average the relative frequencies of usage of their old spellings over all members of each set. The number of words in each set ranges from 16 to 27. The exact sets are specified in Section C of the appendix. This procedure generates a single effective time series for each of the reforms, and has been found effective in related corpus analyses [2].
While this averaging over sets of words decreases the sampling noise in the data and increases the inferential power of the analysis, cultural data still suffers from issues that may affect the applicability of the method. In particular, corpora tend to contain lower token counts in earlier time periods. When translated into frequency time series, this leads to greater sampling noise that may be misidentified as changes in the effective population size parameter N. This issue can be remedied by applying a sampling error equalisation algorithm, as laid out by [26]. This method creates subsamples of the larger token counts in the data set, in such a way that sampling effects are of equal magnitude throughout the data. In this way, any significant changes in N detected by the method must be due to changes in the effective population size parameter, and not a consequence of unequal sampling noise.
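Before turning to the results, the significance test described at the start of this section can be sketched as a parametric bootstrap, as below. The helpers single_fit, split_fit and simulate_wf are assumed to implement, respectively, the two-parameter BwS fit, the five-parameter change-point fit sketched above, and simulation of a Wright-Fisher trajectory under the no-change null; none of these names are taken from the paper's code.

```python
import numpy as np

def change_point_p_value(x, times, single_fit, split_fit, simulate_wf,
                         n_boot=500, rng=None):
    """Empirical p-value for the null hypothesis of a constant selection strength.

    single_fit(x, times)              -> (logL, N, s) under the two-parameter model
    split_fit(x, times)               -> (logL, T, (N1, s1), (N2, s2)) under the
                                         five-parameter change-point model
    simulate_wf(x0, times, N, s, rng) -> synthetic trajectory of the same length,
                                         generated under drift plus constant selection
    The p-value is the fraction of synthetic series whose five-parameter
    likelihood exceeds that of the real trajectory."""
    rng = rng or np.random.default_rng()
    _, N0, s0 = single_fit(x, times)        # null-model fit to the observed data
    observed = split_fit(x, times)[0]       # five-parameter log-likelihood, real data
    exceed = sum(split_fit(simulate_wf(x[0], times, N0, s0, rng), times)[0] >= observed
                 for _ in range(n_boot))
    return exceed / n_boot
```

In practice, single_fit and split_fit could be obtained by fixing the likelihood function and parameter grids of the routine above, for example with functools.partial; a threshold such as p < 0.05 then decides whether the subdivision is retained, and the same test is applied recursively to each sub-series.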
" }, { "figure_ref": [ "fig_5", "fig_5", "fig_5" ], "heading": "Reform", "publication_ref": [ "b64" ], "table_ref": [], "text": "Reform | Detected year | s before division | s after division
(A) <ss> to <s> | 1775 | -0.008 | -0.[…]
Table 2: Maximum likelihood estimates of the first detected time at which the selection strength changed, and its values before and after the change, for the five Spanish reform categories. All changes are significant with p < 0.002.
Our results are shown in Figure 7. Despite the aggregation of words within each category (to improve the inferential power) and the use of 5-year bins (to reduce computational effort), we find that the resulting trajectories are still subject to considerable fluctuations. The frequency plotted is that of the deprecated variant, which we find is eliminated in all five cases; this highlights the acceptance and influence of the Real Academia Española amongst the literate population. We show with a red line and dot the time at which the reform was introduced, and with a black dot and solid line the first time T at which subdividing the time series improves the fit to the data, with a p-value threshold of p = 0.05 applied.
In all five cases, we find evidence that the selection strength s changed significantly over time. In each case, the first detected change point falls within twelve years of the reform being introduced, even when the trajectory is strongly fluctuating. We note that the algorithm does have a tendency to detect the reform after it has occurred, rather than at its inception. This is because the algorithm does not distinguish past from present, so that the beginning and the end of the sharp decline that follows a reform are treated as equivalent.
By iterating the algorithm, we can further subdivide the time series, as described above. In doing so, we detect secondary time divisions (dashed lines in Figure 7), whose p-values are below 0.05. In time series (B), the earlier secondary point detects the beginning of the rapid decline in usage that was deemed less significant than its end by the first application of the algorithm. The later secondary point in (B) and the secondary point in (A) are not associated with documented reforms, and may reflect slight changes in social attitudes or simply be quirks of the data.
Table 2 further records the s-values before and after the main change point. All s, N and p-values for every main and secondary point detected by the algorithm can be found in Section D of the appendix. For categories (A), (D.1) and (D.2), we find that the s value decreases after the detected year of the reform, corresponding to an acceptance of the reform by the speech community. The other two categories, however, show the opposite trend, with the s value becoming less negative across the reform. We note from Figure 7 that both categories (B) and (C) feature a rapid elimination of the deprecated form, and that this change was already in progress before the reform was introduced. It has been suggested that in many cases, language reforms tend to reflect pre-existing trends, as opposed to actuating the change [65].
Our analysis provides further evidence of this, and suggests that the impact of the reform on the speech community may be limited in such cases.
In summary, this analysis indicates that the BwS method can be used successfully to characterise, from time series data alone, evolutionary forces that change over time. As an unsupervised method, it does not rely on any prior knowledge as to when the change may have occurred, although it does benefit from a large sample size being available, obtained here by aggregating multiple instances of a change together. We have found that the estimated time at which the selection strength changed corresponds well with the time at which the corresponding reform was introduced, and comparing these strengths before and after the reform allows us to assess its impact on the speech community." }, { "figure_ref": [ "fig_3" ], "heading": "Discussion", "publication_ref": [ "b25", "b31", "b17", "b7", "b61", "b9", "b51", "b72", "b25", "b51", "b35", "b36", "b51", "b35", "b36", "b51", "b51", "b80", "b12" ], "table_ref": [], "text": "In this work we have applied an algorithm for the quantitative study of evolutionary time series [26] to instances of competition in language change. This algorithm is based on likelihood-maximisation methods and the Beta-with-Spikes (BwS) approximation to the Wright-Fisher model. The applicability of the Wright-Fisher model was justified through both theoretical considerations [32,18] and its manifestation as an agent-based model of language change from various starting points [8,62,10].
In Section 2.3, we demonstrated that the BwS method captured the statistical properties of the Wright-Fisher model better than the normal approximation that has been used elsewhere [52]. In particular, it deals better with situations where variant frequencies are close to 0 or 1, which arise when selection serves to eliminate linguistic variation across the speech community. Through refinements to the original BwS method of [73] that are detailed by [26], we further gain accuracy in regimes where the selection strength is large.
Our first application was to the set of 36 COHA verbs previously investigated by other methods [52,36,37]. In particular, we found that even when the Frequency Increment Test (FIT, [52,36]) delivered unreliable results due to shortcomings of the normal approximation that it relies on, we obtained evidence of selection that was broadly consistent with that obtained within a Time Series Classification (TSC, [37]), which took the complementary approach of training neural networks with artificial time series. The present method further delivered a graded measure of the extent to which the historical changes are consistent with drift (in the form of a null-hypothesis p-value), along with maximum likelihood estimates of parameters in the Wright-Fisher model.
A degree of care is needed when interpreting this p-value. All evolutionary trajectories are likely to be the product of some combination of drift and selection. The key question is whether their respective contributions can be distinguished. For example, a variant could be strongly selected for (large s) but subject to sufficiently large fluctuations (small N) that the systematic effects of selection are masked. The p-value is therefore a measure of the extent to which fluctuations alone could account for the changes that have been observed.
If one chooses to apply the conventional significance threshold for rejecting this null hypothesis (p < 0.05), we find consistency with [52]'s observation that the evolution of many verbs appears to be dominated by drift.
A second important question is whether these fluctuations are a consequence of the finite number of tokens available for analysis in historical corpora, or an intrinsic property of the language dynamics within the speech community. One way to gain insight into this question is to compare results obtained with different temporal binnings (Figures 4 and 5), since wider bins contain more tokens and should reduce fluctuations due to sampling. If sampling effects were dominant, we would expect the p-value for the drift hypothesis to decrease as the bins are widened (i.e., increasing darkness in Figure 4). This happens for some, but not all, of the verbs in the intermediate region, suggesting that drift may be the dominant factor in the evolution of a substantial fraction of the COHA verbs (again, consistent with [52]). A more rigorous answer to this question could be obtained by incorporating finite sample-size effects into the data likelihood function used in the analysis. This is, however, likely to be computationally demanding, and we leave this possibility for future work.
In this work, we found it helpful to plot ellipses indicating the variation in estimates of the selection strength and the drift p-value, as a way of understanding which variants are more likely to have been selected for. A comparison between a baseline set of verbs from the Google Books corpus and a set where the past tense is formed by deletion of a repeated consonant reveals that they are distributed differently across the space of selection strengths s and drift p-values. Specifically, we found that the phonological simplicity arising from coalescence or omission of the /Id/ termination tended to be favoured over the inflectional simplicity of the regular form. In principle, the method we have set out here could be used to determine the relative importance of other pairs of constraints that correspond to opposing selective forces.
Finally, we showed that the method can also be applied to changes that do not have a cognitive origin and that manifest as the selection strength changing over time. We studied the dynamics of word spellings in Spanish before and after reforms introduced by the language's central regulatory institution, the Real Academia Española. We found that each of the changes was much better described by a model in which the selection strength changed at one or more points in time, and that the primary change point corresponded well with the time at which the reform was introduced. This is despite the presence of noise in the time series data. Since changes in selection strength could derive from a variety of social and cultural factors, and indeed apply to cultural evolutionary processes beyond language, this method for the automated detection of societal trends and shifts could have broad applicability.
Despite these promising results, there are inevitably some limitations. Chief among these is an inability to separate different contributions to the selective pressures acting on the system. Therefore, although it is possible to use these data to determine that selection has favoured one variant over another, and to estimate the strength of the effect, we have had to appeal to additional information to identify the likely causes of selection.
This, however, is a problem intrinsic to the Wright-Fisher model with selection and not specific to the BwS method: the Wright-Fisher model contains only a single parameter s that characterises all systematic contributions to changes in variant frequencies. This oversimplification of the contributing factors to language change stems, at least in part, from the underlying assumption that the competition between forms (e.g. irregular and regular verbal forms) occurs in isolation, uninfluenced by the competition dynamics of related forms (e.g. irregular and regular forms of other verbs). [81] and [13] argue, in the context of cultural evolution, that cultural change may arise as an emergent phenomenon when cultural traits are interconnected. It is possible, then, that emergent system-level effects may account for significant changes in usage frequency of variants that are not favoured individually by any social or inductive bias. More refined models, ones that account for the complex web of interconnected forms and functions present in language, may be able to differentiate between these systemic effects and those affecting individual variants. Such models might allow more information to be extracted from corpora without the need for additional information.\nNevertheless, we have shown that it is possible to draw inferences about contributions to selection from different sources (as was done in the analysis of competition between regular and irregular forms in English verbs) and quantify the impact of social factors (as was done in the language reform example). By appealing to a wider range of corpora and instances of change, it may become possible to identify general mechanisms that are invariant over time and operate cross-linguistically, and are thus informative about language universals in general. Furthermore, the method is not specific to linguistic variation, and could be used to address similar questions in other instances of cultural evolution. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Johanna Basnak (University of Edinburgh) for helpful discussions." }, { "figure_ref": [], "heading": "Data availability", "publication_ref": [], "table_ref": [], "text": "The code and data are available here: link" }, { "figure_ref": [], "heading": "Funding information", "publication_ref": [], "table_ref": [], "text": "Juan Guerrero Montero holds a Principal's Career Development scholarship awarded by the University of Edinburgh." }, { "figure_ref": [], "heading": "Competing interests", "publication_ref": [], "table_ref": [], "text": "The authors declare no competing interests." }, { "figure_ref": [], "heading": "A Maximum likelihood parameters for the COHA verbs", "publication_ref": [], "table_ref": [], "text": "In the following tables we quote the maximum-likelihood estimates of the parameters in the Wright-Fisher model obtained by applying the Beta-with-Spikes method outlined in the main text to frequency counts derived from the COHA corpus. Each table corresponds to a different binning strategy: for example, in the first table, frequency counts from each period of 10 consecutive years are aggregated to form a single frequency estimate for the corresponding time period.\nTwo different effective population sizes N are quoted: one ('for drift') under the assumption that s = 0, and the other ('for selection') that is obtained when both N and s are optimised via the maximum likelihood analysis. 
The p-value is the empirical p-value for the drift hypothesis, obtained as described in Section 2 of the main text. The maximum likelihood values are all quoted to three significant figures, and the p-values to two significant figures." }, { "figure_ref": [], "heading": "B Maximum likelihood parameters for verbs in the study of competing motivations", "publication_ref": [], "table_ref": [], "text": "In this appendix, we provide the corresponding tables for the set of verbs ending in alveolar stops drawn from the Google Books corpus. Dashes mean that the corresponding time series did not have enough data points per time bin in the corresponding binning for it to be included in the study." }, { "figure_ref": [], "heading": "5-year bins", "publication_ref": [], "table_ref": [], "text": "Verb " } ]
2023-08-22
10.1016/j.evolhumbehav.2014.02.003
[ { "authors": "Alberto Acerbi; R Alexander; Bentley ", "journal": "Evolution and Human Behavior", "ref_id": "b0", "title": "Biases in cultural transmission shape the turnover of popular traits", "year": "2014" }, { "authors": "R Amato", "journal": "Proceedings of the National Academy of Sciences of the United States of America", "ref_id": "b1", "title": "The dynamics of norm change in the cultural evolution of language", "year": "2018" }, { "authors": "Lieselotte Anderwald", "journal": "American Speech", "ref_id": "b2", "title": "Variable past-tense forms in nineteenth-century American English: Linking Normative Grammars and language change", "year": "2012" }, { "authors": "Quentin D Atkinson; Russell D Gray", "journal": "Systematic biology", "ref_id": "b3", "title": "Curious parallels and curious connections-Phylogenetic thinking in biology and historical linguistics", "year": "2005" }, { "authors": "Susan Baddeley; Anja Voeste", "journal": "De Gruyter Mouton", "ref_id": "b4", "title": "Orthographies in early modern Europe", "year": "2012" }, { "authors": "Elizabeth Bates; Brian Macwhinney", "journal": "Lawrence Erlbaum", "ref_id": "b5", "title": "Competition, variation and language learning", "year": "1987" }, { "authors": "Elizabeth Bates; Brian Macwhinney", "journal": "Cambridge University Press", "ref_id": "b6", "title": "Functionalism and the competition model", "year": "1989" }, { "authors": "G J Baxter", "journal": "Phys. Rev. E", "ref_id": "b7", "title": "Utterance selection model of language change", "year": "2006" }, { "authors": "R Blythe", "journal": "Advances in complex systems", "ref_id": "b8", "title": "Neutral evoltuion: A null model for language dynamics", "year": "2012" }, { "authors": "Richard Blythe; William Croft", "journal": "PLOS ONE", "ref_id": "b9", "title": "How individuals change language", "year": "2021-06" }, { "authors": "Robert Boyd; Peter J Richerson", "journal": "University of Chicago Press", "ref_id": "b10", "title": "Culture and the evolutionary process", "year": "1988" }, { "authors": " Bromham", "journal": "PNAS", "ref_id": "b11", "title": "Rate of language evolution is affected by population size", "year": "2015" }, { "authors": "A Buskell; M Enquist; F Jansson", "journal": "Palgrave Commun", "ref_id": "b12", "title": "A systems approach to cultural evolution", "year": "2019" }, { "authors": "Joan Bybee", "journal": "Oxford University Press", "ref_id": "b13", "title": "Frequency of use and the organization of language", "year": "2007" }, { "authors": "Joan Bybee", "journal": "Cambridge University Press", "ref_id": "b14", "title": "Phonology and Language Use", "year": "2001" }, { "authors": "Joan Bybee", "journal": "", "ref_id": "b15", "title": "Regular morphology and the lexicon", "year": "1995" }, { "authors": "Luigi Luca; Cavalli-Sforza ; Marcus W Feldman", "journal": "Princeton University Press", "ref_id": "b16", "title": "Cultural transmission and evolution: A quantitative approach", "year": "1981" }, { "authors": "W Croft", "journal": "Pearson Education", "ref_id": "b17", "title": "Explaining language change: An evolutionary approach", "year": "2000" }, { "authors": "J F Crow; M Kimura", "journal": "Harper and Row", "ref_id": "b18", "title": "An introduction in Population Genetics Theory", "year": "1970" }, { "authors": "Christine F Cuskley", "journal": "PloS ONE", "ref_id": "b19", "title": "Internal and external dynamics in language: Evidence from verb regularity in a historical corpus of English", "year": "2014" }, { "authors": "Mark Davies", 
"journal": "", "ref_id": "b20", "title": "The Corpus of Historical American English", "year": "2010" }, { "authors": "John W Dubois", "journal": "John Benjamins", "ref_id": "b21", "title": "Competing motivations", "year": "1985" }, { "authors": "A F Feder; S Kryazhimskiy; J B Plotkin", "journal": "Genetics", "ref_id": "b22", "title": "Identifying signatures of selection in genetic time series", "year": "2014" }, { "authors": "R A Fisher", "journal": "Clarendon Press", "ref_id": "b23", "title": "The Genetical Theory of Natural Selection", "year": "1930" }, { "authors": "S A Frisch; J B Pierrehumbert; M B Broe", "journal": "Natural Language & Linguistic Theory", "ref_id": "b24", "title": "Similarity Avoidance and the OCP", "year": "2004" }, { "authors": "Juan Guerrero; Montero ; Richard A Blythe", "journal": "", "ref_id": "b25", "title": "Self-contained Beta-with-Spikes Approximation for Inference Under a Wright-Fisher Model", "year": "" }, { "authors": "M W Hahn; R A Bentley", "journal": "Proceedings of the Royal Society of London B: Biological Sciences", "ref_id": "b26", "title": "Drift as a mechanism for cultural change: An example from baby names", "year": "2003" }, { "authors": "John Haiman", "journal": "Language", "ref_id": "b27", "title": "Iconic and economic motivation", "year": "1983" }, { "authors": "John A Hawkins", "journal": "OUP Oxford", "ref_id": "b28", "title": "Efficiency and complexity in grammars", "year": "2004" }, { "authors": "P Bruce; Hayes", "journal": "", "ref_id": "b29", "title": "Phonetically driven phonology", "year": "1999" }, { "authors": "Juan Manuel; Hernández-Campoy ; Juan Camilo Conde-Silvestre", "journal": "John Wiley & Sons", "ref_id": "b30", "title": "The handbook of historical sociolinguistics", "year": "2012" }, { "authors": "L David; Hull", "journal": "University of Chicago Press", "ref_id": "b31", "title": "Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science", "year": "2010" }, { "authors": "Ian T Jolliffe", "journal": "Springer", "ref_id": "b32", "title": "Principal component analysis", "year": "2002" }, { "authors": "Anne Kandler; Adam Powell", "journal": "Springer Japan", "ref_id": "b33", "title": "Inferring Learning Strategies from Cultural Frequency Data", "year": "2015" }, { "authors": "Anne Kandler; Stephen Shennan", "journal": "Journal of Theoretical Biology", "ref_id": "b34", "title": "A non-equilibrium neutral model for analysing cultural change", "year": "2013" }, { "authors": "A Karjus", "journal": "Glossa", "ref_id": "b35", "title": "Challenges in detecting evolutionary forces in language change using diachronic corpora", "year": "2020" }, { "authors": "F Karsdorp", "journal": "Evolutionary Human Sciences", "ref_id": "b36", "title": "Classifying evolutionary forces in language change using neural networks", "year": "2020" }, { "authors": "H Kauhanen; G Walkden", "journal": "Nat Lang Linguistic Theory", "ref_id": "b37", "title": "Deriving the Constant Rate Effect", "year": "2018" }, { "authors": "R Keller; P L R Keller; B Nerlich", "journal": "Routledge", "ref_id": "b38", "title": "On Language Change: The Invisible Hand in Language", "year": "1994" }, { "authors": "Simon Kirby", "journal": "Linguistic Typology", "ref_id": "b39", "title": "Competing motivations and emergence: explaining implicational hierarchies", "year": "1997" }, { "authors": "Anthony S Kroch", "journal": "Language Variation and Change", "ref_id": "b40", "title": "Reflexes of grammar in patterns of language change", "year": 
"1989" }, { "authors": "W Labov", "journal": "Blackwell", "ref_id": "b41", "title": "Principles of linguistic change", "year": "1994" }, { "authors": "W Labov", "journal": "Blackwell", "ref_id": "b42", "title": "Principles of linguistic change", "year": "2001" }, { "authors": "W Labov", "journal": "Blackwell", "ref_id": "b43", "title": "Principles of linguistic change", "year": "2010" }, { "authors": "R William; Leben", "journal": "", "ref_id": "b44", "title": "Suprasegmental Phonology", "year": "1973" }, { "authors": "Erez Lieberman", "journal": "Nature", "ref_id": "b45", "title": "Quantifying the evolutionary dynamics of language", "year": "2007" }, { "authors": "J J Mccarthy", "journal": "Linguistic Inquiry", "ref_id": "b46", "title": "OCP effects: Gemination and antigemination", "year": "1986" }, { "authors": "J H Mcdonald", "journal": "Sparky House Publishing", "ref_id": "b47", "title": "Handbook of Bological Statistics", "year": "2014" }, { "authors": "M S Mcmahon", "journal": "Cambridge University Press", "ref_id": "b48", "title": "Understanding Language Change", "year": "1994" }, { "authors": "J B Michel", "journal": "Science", "ref_id": "b49", "title": "Quantitative analysis of culture using millions of digitized books", "year": "2011" }, { "authors": " Salikoko S Mufwene", "journal": "Cambridge University Press", "ref_id": "b50", "title": "The ecology of language evolution", "year": "2001" }, { "authors": "M Newberry", "journal": "Nature", "ref_id": "b51", "title": "Detecting evolutionary forces in language change", "year": "2017" }, { "authors": "Mark Pagel", "journal": "Nature Reviews Genetics", "ref_id": "b52", "title": "Human language as a culturally transmitted replicator", "year": "2009" }, { "authors": "Cyriel Paris; Bertrand Servin; Simon Boitard", "journal": "G3 Genes|Genomes|Genetics", "ref_id": "b53", "title": "Inference of Selection from Genetic Time Series Using Various Parametric Approximations to the Wright-Fisher Model", "year": "2019" }, { "authors": "Eitan Adam Pechenick; Christopher M Danforth; Peter Sheridan Dodds", "journal": "PLOS ONE", "ref_id": "b54", "title": "Characterizing the Google Books Corpus: Strong Limits to Inferences of Socio-Cultural and Linguistic Evolution", "year": "2015-10" }, { "authors": "Konstantin Pozdniakov; Guillaume Segerer", "journal": "", "ref_id": "b55", "title": "Similar place avoidance: A statistical universal", "year": "2007" }, { "authors": "Sandeep Prasada; Steven Pinker", "journal": "Language and Cognitive Processes", "ref_id": "b56", "title": "Generalization of regular and irregular morphological patterns", "year": "1993" }, { "authors": "Alan Prince; Paul Smolensky", "journal": "Science", "ref_id": "b57", "title": "Optimality: From neural networks to universal grammar", "year": "1997" }, { "authors": "", "journal": "Real Academia Española", "ref_id": "b58", "title": "Ortografía de la lengua castellana", "year": "1763" }, { "authors": "", "journal": "Real Academia Española", "ref_id": "b59", "title": "Ortografía de la lengua castellana", "year": "1815" }, { "authors": "", "journal": "Gregorio Hernando", "ref_id": "b60", "title": "Prontuario de ortografía castellana en preguntas y respuestas", "year": "1881" }, { "authors": "F Reali; T L Griffiths", "journal": "", "ref_id": "b61", "title": "Words as alleles: connecting language evolution with Bayesian learners to models of genetic drift", "year": "2010" }, { "authors": "Don Ringe; Charles Yang", "journal": "John Benjamins", "ref_id": "b62", "title": "The threshold of 
productivity and the 'irregularization' of verbs in Early Modern English", "year": "2022" }, { "authors": "", "journal": "De Gruyter Mouton", "ref_id": "b63", "title": "Language Planning Processes", "year": "2013" }, { "authors": "Gijsbert Rutten; Rik Vosters", "journal": "Cambridge University Press", "ref_id": "b64", "title": "Language standardization 'from above", "year": "2021" }, { "authors": "E Sapir", "journal": "Harcourt", "ref_id": "b65", "title": "Language: An introduction to the study of speech", "year": "1921" }, { "authors": "T A Severini", "journal": "Oxford University Press", "ref_id": "b66", "title": "Likelihood Methods in Statistics", "year": "2000" }, { "authors": "S D Silvey", "journal": "Chapman & Hall", "ref_id": "b67", "title": "Statistical Inference", "year": "1970" }, { "authors": "Helen Sims-Williams", "journal": "Transactions of the Philological Society", "ref_id": "b68", "title": "Analogical Levelling and Optimisation: The Treatment of Pointless Lexical Allomorphy in Greek", "year": "2016" }, { "authors": "James Steele; Claudia Glatz; Anne Kandler", "journal": "Journal of Archaeological Science", "ref_id": "b69", "title": "Ceramic diversity, random copying, and tests for selectivity in ceramic production", "year": "2010" }, { "authors": "Joseph Paul; Stemberger ", "journal": "Language", "ref_id": "b70", "title": "Morphological Haplology", "year": "1981" }, { "authors": "S A Tagliamonte", "journal": "Wiley", "ref_id": "b71", "title": "Variationist Sociolinguistics : Change, Observation, Interpretation", "year": "2011" }, { "authors": "T Tataru; A Bataillon; Hobolth", "journal": "Genetics", "ref_id": "b72", "title": "Inference under a Wright-Fisher model using an accurate Beta approximation", "year": "2015" }, { "authors": "Paula Tataru", "journal": "Systematic Biology", "ref_id": "b73", "title": "Statistical Inference in the Wright-Fisher Model Using Allele Frequency Data", "year": "2016" }, { "authors": "A Wayne; Taylor", "journal": "", "ref_id": "b74", "title": "Change-point analysis: a powerful new tool for detecting changes", "year": "2000" }, { "authors": "J A Walker", "journal": "", "ref_id": "b75", "title": "Variation in Linguistic Systems", "year": "2010" }, { "authors": " Wichmann", "journal": "Advances in Complex Systems", "ref_id": "b76", "title": "Do language change rates depend on population size?", "year": "2008" }, { "authors": "S Wright", "journal": "Genetics", "ref_id": "b77", "title": "Evolution in Mendelian populations", "year": "1931" }, { "authors": "C Yang", "journal": "Oxford University Press", "ref_id": "b78", "title": "Grammar competition and Language change", "year": "2002" }, { "authors": "C Yang", "journal": "Language Variation and Change", "ref_id": "b79", "title": "Internal and external forces in language change", "year": "2000" }, { "authors": "Justin D Yeh; Laurel Fogarty; Anne Kandler", "journal": "", "ref_id": "b80", "title": "Cultural linkage: the influence of package transmission on cultural dynamics", "year": "2019" }, { "authors": "G K Zipf", "journal": "Addison-Wesley Press", "ref_id": "b81", "title": "Human behavior and the principle of least effort", "year": "1949" } ]
[ { "formula_coordinates": [ 3, 216.62, 355.89, 324.05, 30.32 ], "formula_id": "formula_0", "formula_text": "L(X|Θ) = m-1 i=1 Prob (x i+1 , t i+1 |x i , t i , Θ) .(1)" }, { "formula_coordinates": [ 4, 258.26, 493.93, 282.41, 25.65 ], "formula_id": "formula_1", "formula_text": "g(x, s) = 1 1 + 1-x x e -s ,(2)" }, { "formula_coordinates": [ 4, 158.86, 702.95, 381.81, 20.56 ], "formula_id": "formula_2", "formula_text": "Prob(x t+1 |x t , N, s) = N N x t+1 g (x t , s) N xt+1 (1 -g (x t , s)) N (1-xt+1)(3)" }, { "formula_coordinates": [ 5, 256.81, 418.94, 283.86, 26.65 ], "formula_id": "formula_3", "formula_text": "+ (1 -P 1,k -P 0,k ) x α k -1 t+k (1 -x t+k ) β k -1 B(α k , β k ) .(4)" }, { "formula_coordinates": [ 6, 241.52, 558.28, 299.15, 11.03 ], "formula_id": "formula_4", "formula_text": "(N * , s * ) = arg max L(X|N, s).(5)" }, { "formula_coordinates": [ 6, 251.25, 659.12, 289.42, 12.69 ], "formula_id": "formula_5", "formula_text": "N * 0 = arg max L(X|N, 0) .(6)" }, { "formula_coordinates": [ 6, 249.74, 700.37, 287.06, 25.96 ], "formula_id": "formula_6", "formula_text": "λ = 2 ln L(X|N * , s * ) L(X|N * 0 , 0) . (7" }, { "formula_coordinates": [ 6, 536.8, 709.01, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" } ]
RELIABLE DETECTION AND QUANTIFICATION OF SELECTIVE FORCES IN LANGUAGE CHANGE
Language change is a cultural evolutionary process in which variants of linguistic variables change in frequency through processes analogous to mutation, selection and genetic drift. In this work, we apply a recently-introduced method to corpus data to quantify the strength of selection in specific instances of historical language change. We first demonstrate, in the context of English irregular verbs, that this method is more reliable and interpretable than similar methods that have previously been applied. We further extend this study to demonstrate that a bias towards phonological simplicity overrides that favouring grammatical simplicity when these are in conflict. Finally, with reference to Spanish spelling reforms, we show that the method can also detect points in time at which selection strengths change, a feature that is generically expected for socially-motivated language change. Together, these results indicate how hypotheses for mechanisms of language change can be tested quantitatively using historical corpus data.
Juan Guerrero Montero; Andres Karjus; Kenny Smith; Richard A Blythe
[ { "figure_caption": "Figure 1 :1Figure 1: Schematic representation of the transition from generation t to generation t + 1 in a Wright-Fisher process with N = 10.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Probability distribution of a variant frequency x after one generation of evolution in the Wright-Fisher model, starting from a frequency x 0 = 1 2 . As the selection strength s increases, the distribution becomes more sharply peaked on larger values of x.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Comparison of the statistical distances of the BwS and normal approximations to the exact WF distribution as a function of the initial frequency x 0 . Left: statistical distance for pure drift (s = 0). Right: statistical distance for strong selection (s = 0.5). The Beta-with-Spikes approximation has lower statistical distance to the exact distribution (meaning it approximates it more accurately) for every value of s and x 0 , but especially for extreme values of x 0 close to 0.0 or 1.0 and for strong selection.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Variability in the selection strengths s and p-values for the null hypothesis pure drift for the COHA verbs. Each cross shows the mean value of the two parameters for each verb obtained when aggregating frequencies into temporal bins of different lengths. Each ellipse indicates the variability in the parameters at the level of one standard deviation. The vertical axis is an indicator of selection, defined as one minus the p-value associated with the drift hypothesis. The lower panel shows those verbs that fall within the range of p-values that is conventionally used to reject the null hypothesis for a single observation. In this panel we see a clear split into those that are regularising (negative s) and are irregularising (positive s).", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Parameter estimates for verbs ending in alveolar stops (red) and verbs in the baseline set (blue) in the Google Books data set. The top panel shows the entire range of drift p-values and includes all 53 verbs. The bottom panel is restricted to p < 0.05, thus focusing on verbs that are likely to be undergoing directed selection. The distribution of verbs in the alveolar stop set seems to be skewed to the region where s > 0 and p < 0.05, suggesting they are more likely to be irregularising than the other verbs.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Application of the BwS algorithm for the detection of changing forces to the reference data set of Spanish spelling reforms in the 2019 Spanish 1-gram Google Books corpus, with temporal binning of the frequency data of 5 years.For each set of words that undergo a rule change, the ratio of usage of the old form is plotted over time. The ratio of usage of all old forms converges to zero after each reform. Red dots with solid vertical lines represent the year of publication of the RAE spelling reforms[59,60,61]. Dark blue dots with solid vertical lines represent the year at which selection strengths changed as detected by the maximum likelihood method with a p-value below 0.05. 
These fall within a period ∆T of 12 years or less relative to the date of the reform. Note that the temporal resolution of the time series is of 5 years, so an error of 10 years is equivalent to just two data points. Dashed vertical lines represent secondary points of change in evolutionary parameters, also detected with a p-value below 0.05. The number of such secondary points depends on the time series.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" } ]
[{"Category": "Methodological Basis", "Citation": "[76]", "Explanation": "The cited work provides a range of factors that may be responsible for language innovation, which the citing paper adopts to study the competition between linguistic variants in a speech community."}, {"Category": "Supporting Evidence", "Citation": "[39]", "Explanation": "The cited work highlights the role of expressivity in language innovation, which the citing paper uses to support the claim that a desire to be noticed or recognized may lead to the development of novel forms."}, {"Category": "Supporting Evidence", "Citation": "[82]", "Explanation": "The cited work emphasizes the importance of economy in language innovation, which the citing paper uses to support the claim that simplicity may lead to the development of new forms."}, {"Category": "Supporting Evidence", "Citation": "[43]", "Explanation": "The cited work mentions the social factors of prestige and taboo in language change, which the citing paper uses to support the claim that these factors may influence the popularity of certain variant forms in a speech community."}, {"Category": "Methodological Basis", "Citation": "[18,51,4,53]", "Explanation": "The cited works provide a cultural evolutionary perspective on language change, which the citing paper adopts to study the competition between linguistic variants in a speech community."}, {"Category": "Methodological Basis", "Citation": "[17,11]", "Explanation": "The cited works are used to model cultural evolution, providing a basis for the citing paper to study the changes in variant frequencies and identify selective forces in the process of cultural evolution."}, {"Category": "Data Source", "Citation": "[27]", "Explanation": "The distribution of baby names is used as a data source in the study of cultural evolution, providing a specific example of the changes in variant frequencies that the citing paper is interested in."}, {"Category": "Data Source", "Citation": "[70]", "Explanation": "The distribution of Hittite ceramic bowl types is also used as a data source in the study of cultural evolution, providing another example of the changes in variant frequencies that the citing paper is interested in."}, {"Category": "Supporting Evidence", "Citation": "[1]", "Explanation": "The deviations from drift in the distributions of baby names and Hittite ceramic bowl types are used as supporting evidence in the study of cultural evolution, showing that the process is more complex than previously thought."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work provides a method for investigating the evolution of pottery styles by appealing to predictions for the number of types remaining after a given time under drift, which the citing paper builds upon in its analysis of linguistic corpus data."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work uses a simulated trajectory of variant frequencies in an Approximate Bayesian Computation scheme to investigate the evolution of pottery styles, which the citing paper adopts in its analysis of linguistic corpus data."}, {"Category": "Methodological Basis", "Citation": "[24,78]", "Explanation": "The cited works introduce the Wright-Fisher model of evolution, which the citing paper uses as a basis for analysing changes in linguistic corpus data in the context of language change."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work highlights the relevance of the 
Wright-Fisher model to cultural evolution, which the citing paper leverages in its analysis of linguistic corpus data."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work shows that a mathematical formulation of Croft's descriptive theory of utterance selection is equivalent to the Wright-Fisher model, which the citing paper uses in its analysis of linguistic corpus data."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work provides a generalised analysis of selection that forms the basis for the mathematical formulation of Croft's descriptive theory of utterance selection, which the citing paper uses in its analysis of linguistic corpus data."}, {"Category": "Methodological Basis", "Citation": "[62]", "Explanation": "The cited work provides a model of iterated learning that the citing paper adopts in their research on language change, using Bayesian inference to estimate variant frequencies in linguistic input."}, {"Category": "Extension or Continuation", "Citation": "[79]", "Explanation": "The cited work on theories of language change is discussed as a possible instance of the generalised analysis of selection in the cited work, and the citing paper further explores the relationship between these theories and the Wright-Fisher model."}, {"Category": "Data Source", "Citation": "[10]", "Explanation": "The cited work is mentioned as a reference for the discussion of the Wright-Fisher model in the context of theories of language change, providing a foundational element for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[73]", "Explanation": "The cited work provides a Beta-with-spikes approximation method that the citing paper builds upon to facilitate efficient and reliable estimation of model parameters."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work serves as a benchmark for the citing paper in terms of data analysis and model evaluation, leading to the extension of the method to real and synthetic data."}, {"Category": "Supporting Evidence", "Citation": "[52]", "Explanation": "The cited work provides a set of English verbs with irregular past tense forms that the citing paper uses to validate the method and compare it to other approaches."}, {"Category": "Extension or Continuation", "Citation": "[36]", "Explanation": "The cited work is revisited in the citing paper to further explore the set of English verbs with irregular past tense forms and demonstrate the reliability of the method."}, {"Category": "Extension or Continuation", "Citation": "[37]", "Explanation": "The cited work is used in the citing paper to compare the method with a neural-network based time-series classifier and highlight the advantages of the method in terms of interpretability."}, {"Category": "Data Source", "Citation": "[37]", "Explanation": "The data from the cited work is used in the citing paper to demonstrate the time-series classification method and its performance in the context of English verbs with irregular past tense forms."}, {"Category": "Supporting Evidence", "Citation": "[45,71]", "Explanation": "The cited works provide evidence of a conflict between grammatical and phonological simplicity in the context of English verbs, which the citing paper uses to support its discussion of the selection strengths between two sets of words."}, {"Category": "Extension or Continuation", "Citation": "Spanish words affected by orthographical 
reforms", "Explanation": "The citing paper extends the discussion of linguistic changes by applying the maximum-likelihood analysis to Spanish words affected by orthographical reforms in the 18th and 19th centuries, demonstrating the potential of the method to uncover social changes in a time-series with a few measurement points."}, {"Category": "Methodological Basis", "Citation": "[67,68]", "Explanation": "The cited works provide a statistical model of the residuals that the citing paper adopts in its research on maximum likelihood estimation and linear regression."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work provides a model of language change that the citing paper uses to derive the transition probabilities in the cultural evolutionary dynamics."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work provides a self-contained Beta-with-Spikes approximation scheme that the citing paper adopts to approximate the transition probabilities in the Wright-Fisher model, which is a computationally efficient method for working with the model in linguistic applications."}, {"Category": "Methodological Basis", "Citation": "[74]", "Explanation": "The cited work provides a straightforward extension of the Wright-Fisher model to include more than two distinct types, which the citing paper adopts in its research on linguistic variable variants."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work provides the foundational theory that the time taken for a variant to go extinct is proportional to the population size, which the citing paper adopts in their study of language change."}, {"Category": "Extension or Continuation", "Citation": "[8,10]", "Explanation": "The cited works on agent-based models of language learning and use provide a basis for understanding the correlation between population size and the size of the speech community, which the citing paper further explores in their study of language change."}, {"Category": "Data Source", "Citation": "[12,77,10]", "Explanation": "The cited works on heterogeneous social network structures provide data on the correlation between population size and the size of the human population, which the citing paper uses in their study of language change."}, {"Category": "Supporting Evidence", "Citation": "[62]", "Explanation": "The cited work on the relationship between population size and the total usage of all variants of a linguistic variable in a given generation provides supporting evidence for the study of language change in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[38,80]", "Explanation": "The cited works provide a theoretical characterisation of language change, which the citing paper adopts in its research to model the probability of offspring generation in a given context."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work provides a specific formula for the relationship between selection strength and probability, which the citing paper adopts in its research to model the selection process in a population."}, {"Category": "Supporting Evidence", "Citation": "[74]", "Explanation": "The cited work offers additional relationships between selection strength and probability that the citing paper can use to support its research on the selection process in a population."}, {"Category": "Methodological Basis", "Citation": "[73]", "Explanation": "The cited work introduces the 
Beta-with-Spikes approximation, which the citing paper adopts to address the complexities in estimating the effective population size and time intervals in the Wright-Fisher model for linguistic corpus data analysis."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work provides the methodology for determining the parameters \u03b1 k and \u03b2 k in the BwS approximation, which the citing paper adopts to control the shape of the distribution of variant frequencies."}, {"Category": "Supporting Evidence", "Citation": "[73]", "Explanation": "The cited work by [73] provides a method for estimating the parameters of the BwS approximation, which the citing paper uses to match the moments of the BwS distribution to those of the Wright-Fisher model in successive generations."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work by [26] has improved the method for iterating the Wright-Fisher model by generating parameter estimates through maximum likelihood estimation, which the citing paper adopts in its research on cultural evolution."}, {"Category": "Methodological Basis", "Citation": "[9,52]", "Explanation": "The cited works provide the basis for the selection strength s * comparison in the citing paper, as they establish the need to test whether the strength is significantly different from zero to avoid the null model of stochastic drift."}, {"Category": "Supporting Evidence", "Citation": "[67,68]", "Explanation": "The cited works provide the reference distribution for the likelihood-ratio comparison in the citing paper, as they establish the methodology for computing the p-value to assess the probability of the observed time series arising from drift alone."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work provides a procedure for generating artificial time series and computing maximum likelihood values, which the citing paper adopts in their research to analyze the null hypothesis of drift in empirical data."}, {"Category": "Supporting Evidence", "Citation": "[21]", "Explanation": "The cited work, the Corpus of Historical American English, serves as the data source for the set of verb time series used in the citing paper to benchmark the BwS method against other approaches."}, {"Category": "Extension or Continuation", "Citation": "[52]", "Explanation": "The cited work by [52] provides a similar likelihood-based approach to the BwS method, which the citing paper extends by demonstrating that the BwS method is more robust in the context of verb time series analysis."}, {"Category": "Extension or Continuation", "Citation": "[37]", "Explanation": "The cited work by [37] introduces a neural network approach to perform a binary classification in the context of verb time series analysis. The citing paper extends this work by demonstrating that the BwS method is more informative than the neural network approach in this context."}, {"Category": "Data Source", "Citation": "[50]", "Explanation": "The cited work by [50] provides the English 2019 1-grams and 2-grams datasets from the Google Books corpus, which the citing paper uses to perform analyses on the direction of selection in the context of English irregular verbs."}, {"Category": "Data Source", "Citation": "[55]", "Explanation": "The cited work raises concerns about the validity of using frequency data from the Google Books corpus to study cultural evolution and language change. 
The citing paper acknowledges this criticism and justifies its use of the general English sub-corpus for the purposes of its study."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work introduces the Frequency Increment Test (FIT), which the citing paper adopts in their study of the relative frequencies of regular and irregular forms of English verbs in the Corpus of Historical American English (COHA). The FIT method is used to estimate the effective population size and selection strength, along with p-values for the null hypothesis of pure drift."}, {"Category": "Data Source", "Citation": "[21]", "Explanation": "The cited work, the Corpus of Historical American English (COHA), serves as the data source for the study conducted in the citing paper on the relative frequencies of regular and irregular forms of English verbs. The COHA provides a dataset for the analysis and results presented in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work provides a training method for a neural network that is used in the citing paper to classify time series data into either drift or selection categories."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work by [37] provides a method for classifying time series data into either drift or selection categories, which the citing paper adopts in their research to analyze the data and draw conclusions."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work reports a range of values of N for a data set, which the citing paper uses to inform the training set in the method."}, {"Category": "Extension or Continuation", "Citation": "[36]", "Explanation": "The cited work uses FIT to analyze a set of verbs, and the citing paper extends this analysis by also considering the same set of verbs using a different method."}, {"Category": "Extension or Continuation", "Citation": "[37]", "Explanation": "The cited work uses TSC to analyze a set of verbs, and the citing paper extends this analysis by also considering the same set of verbs using a different method."}, {"Category": "Data Source", "Citation": "COHA", "Explanation": "The cited work is a data source for the analysis performed in the citing paper, providing annual frequency data of variants of interest for a historical period."}, {"Category": "Methodological Basis", "Citation": "[74,26]", "Explanation": "The cited works provide the BwS method that the citing paper uses to design out reliability issues and gain confidence in the method's reliability."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The cited work is used to benchmark the BwS method with synthetic and genetic data, which contributes to the data source for the study conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "Figure 3", "Explanation": "The cited work is extended in the citing paper to show the higher precision of the BwS at high selection strengths, leading to higher significance in the detection of selective forces."}, {"Category": "Methodological Basis", "Citation": "TSC neural networks", "Explanation": "The cited work is the training data for the neural networks used in the TSC method, which contributes to the methodological basis for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "binning protocol", "Explanation": "The cited work is the binning protocol used in the empirical 
time-series data, which contributes to the methodological basis for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "threshold applied to neural network output", "Explanation": "The cited work is the application of a strict threshold to the neural network output to partition time series into classes, which contributes to the methodological basis for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work by [36] is the source of the observation of variation in p-values under different binning strategies, which the citing paper builds upon in their analysis of the time series data."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work provides the methodology of performing a Principal Components Analysis, which the citing paper adopts to visualize the variation in parameters through different binning strategies."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work by [52] is used to support the analysis conducted in the citing paper, as the authors acknowledge the evidence of both regularisation and irregularisation in the changes observed in the corpus."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work provides a reference for the observation of a split between regular and irregular verbs in the context of regular inflection and irregularisation."}, {"Category": "Methodological Basis", "Citation": "[69]", "Explanation": "The cited work is mentioned as a reference for the norm of extension of regular inflection at the expense of irregulars."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work is cited for the finding that the processes of regularisation and irregularisation occur with similar frequency, which is in line with the observations made in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[63]", "Explanation": "The cited work is mentioned for the suggestion that irregularisation may occur if the number of verbs within an irregularity class surpasses a productivity threshold."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work is cited for the proposal of phonological analogy as a potential mechanism for irregularisation, which is in line with the discussion in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[57]", "Explanation": "The cited work is mentioned for the proposal of phonological analogy as a potential mechanism for irregularisation, which is in line with the discussion in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work is cited for the proposal of phonological analogy as a potential mechanism for irregularisation, which is in line with the discussion in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[22,6,7,28,40,29]", "Explanation": "The cited works provide a discussion of the various motivations for linguistic variation, which the citing paper uses as a basis for understanding the forces that drive the resolution of oppositional forces in language change contexts."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The cited work provides the Google Books corpus as a data source for the analysis conducted in the citing paper, which is used to identify and count the frequency of verb forms in a given time 
frame."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The cited work provides the G-test of goodness-of-fit as a statistical method to test the null hypothesis in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[14]", "Explanation": "The cited work by [14] is used to support the claim that higher frequency items tend to tolerate greater irregularity, which is relevant to the discussion of the selection criteria for the set of 12 verbs in the analysis."}, {"Category": "Supporting Evidence", "Citation": "[45,71]", "Explanation": "The cited works provide the theoretical basis for the OCP constraint discussed in the citing paper, which is used to explain the net effect of competition between phonological and inflectional simplicity in a subset of verbs."}, {"Category": "Methodological Basis", "Citation": "[3,64]", "Explanation": "The cited works provide the basis for the study of the time-dependent social pressures in language change, as the acceptance or rejection of prescriptive grammar and spelling rules is expected to influence the selection strength s in a time-dependent manner."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work provides a procedure for estimating times at which the parameters N and s change, which the citing paper adopts to measure changes in the evolutionary dynamics of the data in Spanish spelling reforms."}, {"Category": "Methodological Basis", "Citation": "[59]", "Explanation": "The cited work provides the historical context and rationale for the simplification of the <ss> digraph to a single <s>, which the citing paper adopts in their study of word changes in English."}, {"Category": "Methodological Basis", "Citation": "[60]", "Explanation": "The cited work serves as a reference for the replacement of etymological <x> with <j> in non word-final contexts, which the citing paper incorporates into their analysis of word changes in English."}, {"Category": "Methodological Basis", "Citation": "[61]", "Explanation": "The cited work is the source of the reversal of accentuation rules for words ending in <n>, which the citing paper utilizes in their study of word changes in English by treating words that gain an accent and words that lose an accent as independent sets."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work provides the basic idea of allowing different parameter combinations to apply before and after a time T in the Wright-Fisher model, which the citing paper adopts in their research on data likelihood maximization."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work provides a procedure for generating effective time series for reforms, which the citing paper adopts in its analysis of cultural data."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The cited work presents a sampling error equalisation algorithm that the citing paper uses to create subsamples in the data set and address issues of unequal sampling noise in cultural data analysis."}, {"Category": "Supporting Evidence", "Citation": "[59,60,61]", "Explanation": "The cited works are the RAE spelling reforms that serve as the basis for the analysis of the ratio of usage of old forms in the citing paper. 
The reforms are used to understand the changes in the usage of old forms and the impact of the reforms on the language evolution."}, {"Category": "Methodological Basis", "Citation": "[65]", "Explanation": "The cited work provides a suggestion that language reforms may reflect pre-existing trends, which the citing paper uses to support the analysis of the impact of reforms on the speech community."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work provides the algorithm and method for quantitative study of evolutionary time series, which the citing paper adopts to analyze instances of competition in language change."}, {"Category": "Extension or Continuation", "Citation": "[32,18]", "Explanation": "The cited works provide theoretical considerations that justify the applicability of the Wright-Fisher model, which the citing paper extends to an agent-based model of language change."}, {"Category": "Extension or Continuation", "Citation": "[8,62,10]", "Explanation": "The cited works demonstrate the manifestation of the Wright-Fisher model as an agent-based model of language change from various starting points, which the citing paper further expands upon."}, {"Category": "Supporting Evidence", "Citation": "[52]", "Explanation": "The cited work has used the normal approximation in the study of language change, but the citing paper demonstrates that the BwS method is a better approximation to the Wright-Fisher model in this context."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The cited work provides refinements to the BwS method for the study of language change, which the citing paper utilizes in its research."}, {"Category": "Methodological Basis", "Citation": "[52,36]", "Explanation": "The cited work provides the Frequency Increment Test (FIT) that the citing paper adopts to measure the reliability of results in the context of evolutionary changes."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work presents the Time Series Classification (TSC) approach that the citing paper uses to train neural networks for artificial time series in a complementary manner to the Frequency Increment Test."}, {"Category": "Supporting Evidence", "Citation": "The present method", "Explanation": "The cited method provides a graded measure of the extent to which historical changes are consistent with drift in the form of a null hypothesis p-value and maximum likelihood estimates of parameters in the Wright-Fisher model, which supports the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "All evolutionary trajectories", "Explanation": "The cited evolutionary trajectories are the data source for the present method, which the citing paper uses to measure the extent of drift and selection in evolutionary changes."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work provides a method for analyzing the evolution of verbs in historical corpora, which the citing paper adopts in its own research to study the dynamics of language within a speech community."}, {"Category": "Methodological Basis", "Citation": "[81], [13]", "Explanation": "The cited works argue for the importance of cultural change as an emergent phenomenon in the context of language evolution, which the citing paper adopts in their research to understand the complex web of interconnected forms and functions in language."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b47", "b39", "b7", "b26", "b26", "b57", "b21", "b61", "b3", "b2", "b46", "b22", "b25", "b15", "b56", "b39", "b23", "b14", "b29", "b45", "b17", "b27", "b1", "b14", "b19", "b33", "b24", "b30", "b8", "b26", "b57", "b51", "b61", "b21", "b3", "b4", "b36", "b35", "b54", "b55", "b61", "b3" ], "table_ref": [], "text": "One of the main challenges for modern learning frameworks in tackling new tasks is the lack of high-quality real-world labeled data. Unfortunately, labeling massive amounts of data is a time-consuming process that typically requires expert knowledge. Unsupervised learning is a modeling paradigm for learning without labels, and thus it gained increased attention in recent years (Sohl-Dickstein et al., 2015). Recent approaches utilize the inputs as supervisory signals (Chen et al., 2020a) and use pretext tasks (Misra & Maaten, 2020), yielding highly-competitive self-supervised learning (SSL) frameworks (Caron et al., 2020). The goal of this paper is to study the effect of a novel SSL approach on sequential disentanglement problems.\nData disentanglement is related to representation learning, where semantic latent representations are sought, to be used in various downstream tasks. A common goal in sequential disentanglement is the factorization of data to time-invariant (i.e., static) and time-variant (i.e., dynamic) features (Hsu et al., 2017). Most sequential disentanglement approaches for arbitrary data modalities such as video, audio, and time series are unsupervised, modeling the task via variational autoencoders (VAE) (Hsu et al., 2017;Yingzhen & Mandt, 2018;Han et al., 2021). Effectively, the static and dynamic factors are obtained via separate posterior distributions.\nSelf-supervised learning appeared only recently in sequence disentanglement works via supervisory signals, pretext tasks, and contrastive estimation. However, existing SSL introduces several shortcomings as it depends on the underlying modality. In this work, modality refers to the properties of the data or task. For instance, (Zhu et al., 2020) design auxiliary tasks per data type, e.g., predict silence in audio segments or detect a face in an image. Similarly, (Bai et al., 2021) require positive and negative samples with respect to the input, i.e., same-class and different-class examples, respectively. In practice, positive views are obtained via data-dependent data augmentation transformations such as rotations and cropping, whereas negative views are selected randomly from the batch. To increase the variability in the batch, common solutions address these issues by increasing the batch or creating a memory bank, resulting in high memory costs. In this work, we refer to the above approaches as modality-based supervision methods, and we argue that they can be avoided if the underlying model is generative.\nTo alleviate the above disadvantages, we design a novel sampling technique, yielding a new contrastive learning framework for disentanglement tasks of arbitrary sequential data that is based on the following insights. First, variational autoencoders naturally support the comparison of empirical distributions and their sampling. Second, we observe that a sample may be contrasted with its subsequent VAE prediction, leading to an increase in batch variability while keeping its size fixed. Based on these observations, we will show that we generate good positive and negative views. 
We evaluate our method on several challenging disentanglement problems and downstream tasks, and we achieve beyond state-of-the-art (SOTA) performance in comparison to several strong baseline methods.\nAs mentioned above, the majority of approaches use data augmentation for positive sampling (Bachman et al., 2019;Chen et al., 2020a). In (Sermanet et al., 2018), positive examples are obtained from video frames of different views for the same action, e.g., pouring coffee into a cup. A related idea appeared in (Han et al., 2019), where given a collection of videos, they generate samples by considering different videos, and different locations/times within a video. (Ho & Vasconcelos, 2020) use positive adversarial samples to further enhance the effect of contrastive learning.\nWhile significant attention was given to positive sampling techniques, recent approaches focus on negative sampling that goes beyond random selection from the batch (Doersch & Zisserman, 2017) or from a memory bank (Wu et al., 2018;Misra & Maaten, 2020;He et al., 2020). The main issue with random sampling is that it may yield negative examples which are actually positive, an issue known as sampling bias (Chuang et al., 2020). To address the latter problem, Kalantidis et al. (2020); Robinson et al. (2020) construct negative samples by measuring their similarity to the current sample. Recently, Ge et al. (2021) generate negative examples with superfluous features, and similarly, Huynh et al. (2022) aim at discarding semantic information from negative samples. Ash et al. (2021) studied the effect of the number of negative samples on performance. Finally, a few techniques showed impressive results with no negative examples at all (Chuang et al., 2020;Grill et al., 2020).\nDisentanglement methods. Separating the underlying factors of variation is a well-established research problem on static image data (Kulkarni et al., 2015;Higgins et al., 2016;Kim & Mnih, 2018;Chen et al., 2018). Disentanglement of sequential data is an emerging field, and it focuses on data factorization to static and dynamic factors. Hsu et al. (2017) introduced unsupervised disentanglement of sequential data via an LSTM-based model on audio data. Later, Yingzhen & Mandt (2018) suggested DSVAE using a similar LSTM architecture while adding a heuristic in which the dynamic features' dimension is small compared to the static features' size. Further, Tulyakov et al. (2018) proposed an adversarial setup. S3VAE (Zhu et al., 2020) improves DSVAE by adding mutual information penalties on the relation between the static and dynamic features and the input, and in addition, they used auxiliary signals. Han et al. (2021) also suggested improving DSVAE by replacing the Euclidean distance with a Wasserstein distance. Tonekaboni et al. ( 2022) use a VAE model to disentangle arbitrary time series data. C-DSVAE (Bai et al., 2021) includes a contrastive estimation of the mutual information losses introduced by S3VAE. They employ data augmentations for contrastive loss estimation, using a similar architecture as S3VAE and DSVAE. Recent work by Berman et al. (2023) developed structured Koopman autoencoders to promote multifactor disentanglement of the sequential data to more than two semantic components. Our work builds on the architecture and objective of C-DSVAE while overcoming some of its shortcomings. Specifically, we design a simple framework for sampling good positive and negative samples. 
Our approach is modality-free, i.e., it does not depend on the data domain (video, audio, or time series), nor does it depend on the task (e.g., images of faces or letter images).\nContrastive disentanglement. Several works considered contrastive estimation in the context of disentanglement of latent factors (Lin et al., 2020;Li et al., 2021;Wang et al., 2021). Here, we focus on disentanglement of sequential data. For instance, Wei et al. (2022) employ a contrastive triplet loss for unsupervised video domain adaptation. Selfsupervision in sequential disentanglement of arbitrary data appeared only recently. Zhu et al. (2020) utilize auxiliary tasks and supervisory signals, whereas Bai et al. (2021) use contrastive estimation, following the standard augmentation and random sampling for constructing positive and negative examples, respectively, and using the infoNCE loss." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b57", "b3", "b32", "b13", "b5", "b57", "b37", "b61", "b21", "b3", "b3", "b3", "b8", "b3", "b40", "b3" ], "table_ref": [], "text": "Problem formulation. Given a dataset D = {x j 1:T } N j=1 of time series sequences x 1:T = {x 1 , . . . , x T } where the index j is omitted for brevity, our goal is to find a posterior distribution p(s, d 1:T | x 1:T ) of disentangled static and dynamic latent representations, s and d 1:T respectively, such that x 1:T ∼ p(x 1:T | s, d 1:T ). We elaborate below on the constraints and assumptions related to the factors and data.\nProbabilistic modeling. Our discussion follows closely existing works such as (Yingzhen & Mandt, 2018;Bai et al., 2021). The static factor s and dynamic factors d 1:T are assumed to be independent, x i depends only on s and d i , and d i depends on the previous dynamic factors d <i = {d 1 , . . . , d i-1 }. Under these assumptions, we consider the following joint distribution p(x 1:T , z) = p(s)\nT i=1 p(d i | d <i ) • T i=1 p(x i | s, d i ) ,(1)\nwhere z = (s, d 1:T ). The prior distributions p(s) and p(d i | d <i ) are taken to be Gaussian with p(s) := N (0, I) and p(\nd i | d <i ) := N (µ(d <i ), σ 2 (d <i )).\nThe posterior distribution p(s, d 1:T | x 1:T ) disentangles static from dynamic, and it is approximated via\nq(z | x 1:T ) = q(s | x 1:T ) T i=1 q(d i | d <i , x ≤i ) ,(2)\ni.e., s is conditioned on the entire sequence, whereas d i depends on previous d j and inputs, and current inputs.\nThe variational autoencoder (VAE) (Kingma & Welling, 2014) relates the prior and approximate posterior distributions in a regularized reconstruction loss. For mutually independent s and d 1:T this loss takes the following form,\nL VAE = λ 1 E q(z | x 1:T ) log p(x 1:T | z) -λ 2 KL[q(s | x 1:T ) ∥ p(s)] -λ 3 KL[q(d 1:T | x 1:T ) ∥ p(d 1:T )] ,(3)\nwhere KL[q ∥ p] is the Kullback-Leibler divergence that computes the distance between distributions q and p, and λ 1 , λ 2 , λ 3 ∈ R + are weight hyperparameters.\nIn practice, the likelihood p(\nx i | s, d i ) in Eq. (1), p(d i | d <i )\nand the terms q(s | x 1:T ) and q(d i | d <i , x ≤i ) in Eq. ( 2) are all obtained via separate LSTM modules. Sampling from the sequential distribution p(d i | d <i ) is achieved by using the mean and variance the LSTM outputs when feeding d i-1 , and similarly for q(d i | d <i , x ≤i ). Finally, we use the mean squared error (MSE) for reconstruction in Eq. ( 3) and the KL terms are computed analytically. Further network architectural details are given in App. A.3.\nMutual information disentanglement. 
Similar to the catastrophic collapse observed in (Chopra et al., 2005), VAE models may produce non-informative latent factors (Bowman et al., 2016). In sequential disentanglement tasks, this issue manifests itself by condensing the static and dynamic information into d 1:T . An empirical heuristic has been partially successful in mitigating this issue, where a lowdimensional d i and a high-dimensional s are used (Yingzhen & Mandt, 2018), thus d i is less expressive by construction. However, a recent theoretical result (Locatello et al., 2019) shows that unsupervised disentanglement is impossible if no inductive biases are imposed on models and data. Thus, to alleviate these challenges, several existing works (Zhu et al., 2020;Han et al., 2021;Bai et al., 2021) augmented model (3) with mutual information terms.\nThe main idea in introducing mutual information (MI) terms is to separately maximize the relation in pairs (s, x 1:T ) and (d 1:T , x 1:T ), while minimizing the relation of (s, d 1:T ). This idea is realized formally as follows (Bai et al., 2021),\nL MI = λ 4 I q (s; x 1:T ) + λ 4 I q (d 1:T ; x 1:T ) -λ 5 I q (s; d 1:T ) , (4) where I q (u; v) = E q(u,v) log q(u | v) q(u) . Combining the above losses (3) and ( 4), the disentanglement model reads\nmax p,q E x 1:T ∼p D L VAE + L MI ,(5)\nwhere p D is the empirical distribution of the dataset D. Bai et al. (2021) shows that problem (5) is a proper evidence lower bound (ELBO) of the log-likelihood of (1).\nEstimating the MI terms is not straightforward. A standard approach uses mini-batch weighted sampling (MWS) (Chen et al., 2018). In contrast, Bai et al. (2021) approximated MI terms via a contrastive estimation known as infoNCE,\nL iNCE = log ϕ(u, v + ) ϕ(u, v + ) + M j=1 ϕ(u, v j ) , (6\n)\nwhere u is either the static factor or the dynamic features, i.e., u ∈ {s, d 1:T }. The samples v + and v j correspond to positive and negative views with respect to u. For instance, if u := s, then the static features v + := s + are similar to s. The function ϕ(u, v) = exp(u T v/τ |u||v|) measures the similarity between examples u and v, with τ = 0.5 being a temperature parameter (Chen et al., 2020a). It has been shown that I q (u; x 1:T ) ≈ L iNCE (u) under relatively mild conditions (Oord et al., 2018). Our model architecture and objective function follow C-DSVAE (Bai et al., 2021), while significantly improving their contrastive estimation by proposing a novel sampling procedure as we detail below." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Method", "publication_ref": [ "b61", "b3", "b48", "b48", "b58", "b15", "b14", "b56", "b29", "b45", "b57" ], "table_ref": [], "text": "Employing contrastive estimation (6) within a sequential disentanglement framework requires positive and negative views of s and d 1:T for a given input x 1:T . In practice, the positive samples are obtained via modality-based data augmentations such as cropping and color distortion for images, voice conversion for audio data, and shuffling of frames for general static augmentation. Negative views are obtained by randomly sampling from the batch. See, for instance, (Zhu et al., 2020;Bai et al., 2021). In this work, we argue that while random sampling and data augmentations with the infoNCE loss are popular tools for unsupervised learning (Chen et al., 2020a;Tian et al., 2020), one should revisit the core components of sequential contrastive learning. 
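For concreteness, the estimator in Eq. (6) is log[ϕ(u, v+) / (ϕ(u, v+) + Σ_{j=1}^{M} ϕ(u, v_j))], with ϕ the exponentiated cosine similarity at temperature τ = 0.5. The snippet below is a minimal PyTorch-style sketch of this standard infoNCE quantity; the function name and tensor shapes are illustrative assumptions and not the implementation used in this paper.

```python
import torch
import torch.nn.functional as F

def info_nce(u, v_pos, v_negs, tau=0.5):
    """Sketch of the infoNCE estimator in Eq. (6) for a single anchor u.

    u:      (D,)   anchor code, e.g., the static factor s
    v_pos:  (D,)   positive view, e.g., s^+
    v_negs: (M, D) M negative views
    """
    # phi(u, v) = exp(u^T v / (tau * |u| * |v|)) = exp(cosine(u, v) / tau)
    phi_pos = torch.exp(F.cosine_similarity(u, v_pos, dim=0) / tau)
    phi_negs = torch.exp(F.cosine_similarity(u.unsqueeze(0), v_negs, dim=1) / tau)
    return torch.log(phi_pos / (phi_pos + phi_negs.sum()))

# Usage: maximizing info_nce(...) approximates I_q(u; x_{1:T}); as a training
# objective one would minimize its negation, e.g.,
# loss = -info_nce(s, s_pos, s_negs)
```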
We will show that existing practices for sampling of views and for increasing the batch variation can be improved.\nShortcomings of views' sampling. Creating semantically similar and dissimilar examples is a challenging problem. We distinguish between domain-modality and task-modality sampling. In general, we will use the following definition of X-modality. A method Y is X-modality-dependent if Y depends on the characteristics of X. For instance, image rotation (Y ) is domain-modality-dependent (X) as it will probably be less effective for audio sequences. Similarly, cropping of images (Y ) may not be effective for images of letters, and thus it is task-modality-dependent (X). Namely, task-modality-dependent approaches may require separate sampling methods for the same data domain. In summary, modality may mean multiple concepts depending on the particular context, including the format of the data or its statistical features. In general, we argue below that existing disentanglement approaches are modality-based.\nExisting studies show that the particular choice of DA can significantly affect results (Chen et al., 2020b;Tian et al., 2020;Zhang & Ma, 2022). Even shuffling of frames which may seem robust, can yield wrong views in critical healthcare applications involving data with the vital measurements of a patient. In conclusion, DA may heavily depend on domain knowledge and task expertise. DA which falls into one of the categories above is referred to as modality-based augmentations. To the best of our knowledge, the majority of data augmentation tools are modality-based.\nConstructing negative views may seem conceptually simpler in comparison to positive views, however, it bears its own challenges. Common methods select randomly from the dataset (Doersch & Zisserman, 2017). To reduce the sampling bias of false negative views (Chuang et al., 2020), existing works suggest increasing batch sizes (Chen et al., 2020a) or using a memory bank (Wu et al., 2018). Yet, the memory footprint of these methods is limiting. To conclude, both data augmentation and randomness should be avoided in the construction of positive and negative views.\nVAE-based sampling. Motivated by the above discussion, we opt for an efficient modality-free sampling approach. We make the following key observation: variational autoencoders inherently support the formation, comparison, and sampling of empirical distributions\nEssentially, given a dataset D = {x j 1:T } N j=1 of time series sequences and model (3), we can generate the individual posterior distributions {q(z j | x j 1:T )} N j=1 , compare them via the Kullback-Leibler divergence, and sample z j ∼ q(z j | x j 1:T ). We denote by x 1:T the input for which we seek a positive x + 1:T and several negative x -,j 1:T , j = 1, . . . , M , examples. Our discussion focuses on sampling static views {s + , s -,j }, however, a similar process can be performed for sampling dynamic features. Intuitively, s + is the factor such that the distance KL[q(s | x 1:T ) ∥ q(s + | x + 1:T )] is minimal, where x + 1:T ∼ p D . Similarly, s -,j are the features with maximal KL value. However, a subtle yet important aspect of views is the distinction between soft and hard samples. Soft negative examples are those which contribute less to learning as they are too dissimilar to the current sample, whereas hard views are the semantically-dissimilar examples that are close in latent space to x 1:T (Kalantidis et al., 2020;Robinson et al., 2020). 
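Since the static posteriors in this model family are diagonal Gaussians parameterized by a mean and a log-variance, the KL comparisons used below have a simple closed form. The following is an illustrative sketch under that assumption, not the authors' code.

```python
import torch

def kl_diag_gaussians(mu1, logvar1, mu2, logvar2):
    """KL[ N(mu1, diag(exp(logvar1))) || N(mu2, diag(exp(logvar2))) ] for (D,) tensors."""
    var1, var2 = logvar1.exp(), logvar2.exp()
    return 0.5 * torch.sum(
        logvar2 - logvar1                      # log-determinant ratio
        + (var1 + (mu1 - mu2) ** 2) / var2     # trace and mean-difference terms
        - 1.0
    )
```

Small values of this quantity indicate semantically close (candidate positive) posteriors, while large values indicate distant (candidate negative) ones.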
How would one obtain good views with large batch variability given the above observation?
To increase variation while avoiding memory banks and large batch sizes, we suggest using the (non-increased) batch itself. Let qk (s | x 1:T ) denote the partially-trained posterior after k epochs of training. We denote by D ∈ R n×n the pairwise KL divergence distances matrix for a batch of size n, {x j 1:T } n j=1 . Namely,
D ij := KL[q(s i | x i 1:T ) ∥ q(s j | x j 1:T )] ,(7)
where i, j ∈ {1, . . . , n} and we omit the training epoch for brevity. For a particular example in the batch x i 1:T , we generate good views based on the following heuristic. We sort the row D i: in ascending order, and we sample positive views from the first third of distributions, whereas negative views are sampled from the last third. We denote by S + (i) = {q(s j | x j 1:T )} the set of positive distributions, and similarly, S -(i) holds the negative distributions. See Fig. 1 for an illustration of these definitions.
[Figure 1 caption (fragment): We sample from these distributions using the reparameterization trick. B) Unfortunately, samples from the batch have limited variation (dashed red rectangle), and thus we use our predictive sampling trick, generating samples by using the posterior of the sampled prior.]
Predictive sampling trick. Unfortunately, as typical batch sizes are relatively small, it may occur that variability is limited in the original batch. Notice that soft positive views always exist via the posterior of the sample itself, and soft negatives probably exist as well for moderate batch sizes, e.g., for n = 16, 32. However, it is not clear whether hard views exist in the batch, and thus its variability may need to be increased. To improve variability, we introduce our predictive sampling trick.
Again, w.l.o.g. we focus on the setting of sampling static views of a given example x 1:T with its static and dynamic features s and d 1:T . To increase the variability in the views, our predictive sampling trick generates these examples from the posterior of the sampled prior. For instance, to produce a positive static view, we denote s+ ∼ S + . The dynamic features can be arbitrary, and thus we sample from the prior d1:T ∼ p(d 1:T ). The positive instance x + 1:T is defined via x + 1:T ∼ p(x + 1:T | s+ , d1:T ). We obtain the positive static view by sampling the posterior, i.e.,
s + ∼ q(s + | x + 1:T ) .(8)
A similar process is used to compute s -,j , j = 1, . . . , M . These views {s + , s -,j } are utilized in L iNCE (s, s + , s -,j ), see the diagram of our predictive sampling in Fig. 1B. We find that our views' heuristic and predictive sampling trick yield soft to semi-soft positive examples and semi-hard to hard negative examples, see Sec. 5.6. For additional implementation details, see App. B.
Our approach is based on the implicit assumption that the underlying model (3) encourages similar examples to be close and dissimilar views to be farther apart. Indeed, previous work on this model (Yingzhen & Mandt, 2018) showed this tendency when using large s and small d i . Thus, our approach can be viewed as promoting the natural tendency of the model to separate positive and negative views.
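Putting the pieces of this section together, the batchwise selection heuristic of Eq. (7) and the predictive sampling trick of Eq. (8) can be sketched as follows. The interface names (sample_dynamic_prior, decode, encode_static) are hypothetical placeholders for the model's LSTM modules, so this is only an illustrative sketch under those assumptions, not the reference implementation.

```python
import torch

def pairwise_kl(mu, logvar):
    """D_ij = KL[q(s_i | x^i) || q(s_j | x^j)] for a batch of diagonal Gaussians.
    mu, logvar: (n, D); returns the (n, n) matrix of Eq. (7)."""
    var = logvar.exp()
    log_ratio = logvar.unsqueeze(0) - logvar.unsqueeze(1)        # logvar_j - logvar_i
    quad = (var.unsqueeze(1) + (mu.unsqueeze(1) - mu.unsqueeze(0)) ** 2) / var.unsqueeze(0)
    return 0.5 * (log_ratio + quad - 1.0).sum(dim=-1)

def sample_static_views(mu_s, logvar_s, model, i, n_neg=8):
    """Positive/negative static views for batch element i via predictive sampling."""
    n = mu_s.shape[0]
    order = torch.argsort(pairwise_kl(mu_s, logvar_s)[i])        # ascending KL to element i
    pos_pool, neg_pool = order[: n // 3], order[-(n // 3):]      # closest / farthest thirds

    def predictive_sample(j):
        # reparameterized draw from the chosen posterior q(s_j | x^j_{1:T}) ...
        s_tilde = mu_s[j] + (0.5 * logvar_s[j]).exp() * torch.randn_like(mu_s[j])
        d_tilde = model.sample_dynamic_prior()                   # arbitrary dynamics from p(d_{1:T})
        x_new = model.decode(s_tilde, d_tilde)                   # generate a new sequence ...
        mu, logvar = model.encode_static(x_new)                  # ... and re-encode it, as in Eq. (8)
        return mu + (0.5 * logvar).exp() * torch.randn_like(mu)

    s_pos = predictive_sample(pos_pool[torch.randint(len(pos_pool), (1,))].item())
    s_negs = torch.stack([predictive_sample(j.item())
                          for j in neg_pool[torch.randint(len(neg_pool), (n_neg,))]])
    return s_pos, s_negs   # plug into the contrastive term, e.g., -info_nce(s, s_pos, s_negs)
```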
}, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Methods", "publication_ref": [ "b44", "b0", "b38", "b28", "b16", "b18", "b59", "b51", "b26", "b57", "b21", "b61", "b4", "b3", "b49" ], "table_ref": [], "text": "In our evaluation, we consider several datasets of different modalities, and we compare our results with several state-ofthe-art approaches. Specifically, we test on video datasets such as Sprites (Reed et al., 2015) and MUG (Aifanti et al., 2010) containing animated cartoon characters and subjects performing facial expressions, respectively. Moreover, we also use the Jester dataset (Materzynska et al., 2019) with videos of hand gestures, and the Letters corpus (Ibrahim et al., 2019) with handwritten text. For audio, we experiment with TIMIT (Garofolo et al., 1992). Finally, we also explore time series datasets including Physionet (Goldberger et al., 2000) with individual medical records and Air Quality (Zhang et al., 2017) with measurements of multiple air pollutants. We compare our results to sequential disentanglement frameworks including MoCoGan (Tulyakov et al., 2018), FHVAE (Hsu et al., 2017), DSVAE (Yingzhen & Mandt, 2018), R-WAE (Han et al., 2021), S3VAE (Zhu et al., 2020), SKD (Berman et al., 2023), C-DSVAE (Bai et al., 2021), and GLR (Tonekaboni et al., 2022). See App. A for details." }, { "figure_ref": [], "heading": "Hyperparameters", "publication_ref": [ "b31" ], "table_ref": [], "text": "To control the contribution of each loss component we add a λ 1 coefficient to the reconstruction loss, a λ 2 to the static KL term, a λ 3 to the dynamic KL term, and finally λ 4 , λ 5 to the contrastive terms. The hyperparameter λ 1 is tuned over {1, 2.5, 5, 10}, λ 2 is tuned over {1, 3, 5, 7, 9}, and λ 4 and λ 5 are tuned over {0.1, 0.5, 1, 2.5, 5} while λ 3 is fixed to 1. We used Adam optimizer (Kingma & Ba, 2014) with the learning rate chosen from {0.001, 0.0015, 0.002}. The static and dynamic features' dimensions are selected from {128, 256} and {32, 64}, respectively. These dimensions are similar or sometimes smaller in comparison to all other benchmark models such as C-DSVAE, S3VAE, DSVAE, R-WAE. We highlight that tuning multiple hyperparameters is often challenging. Hence, we utilize automatic tuning tools, using 5 to 10 runs for each dataset. The hyperparameters for each task and dataset are given in Tab. 6 in the Appendix. All the tasks were trained for at most 600 epochs." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "We begin our evaluation with qualitative examples showing the disentanglement capabilities of our approach in Fig. 2. Specifically, given source and target samples, x src 1:T , x tgt 1:T , we swap the static and dynamic features between the source and the target. In practice, swapping the content (static) information corresponds to generating an image with factors (s tgt , d src 1:T ), i.e., fix the source dynamics and use the static factor of the target. For instance, a perfect content swap in Sprites yields different characters with the same pose. The opposite swap of pose (dynamic) information is obtained with (s src , d tgt 1:T ). Fig. 2 shows two separate examples of Sprites and MUG (rows 1, 2 and rows 3, 4), where each example is organized in blocks of four panels. For instance on the Sprites example, panel no. 1 is the source and panel no. 2 is the target. Panel no. 
3 shows a content (static) swap, and Panel no. 4 shows a pose (dynamic) swap." }, { "figure_ref": [], "heading": "Quantitative Results: Common Benchmarks", "publication_ref": [ "b61", "b3", "b26", "b57", "b12", "b49", "b49", "b49" ], "table_ref": [], "text": "Image data. Similar to previous work (Zhu et al., 2020;Bai et al., 2021), we test our model disentanglement and generative abilities on the Sprites and MUG datasets, and we compare our results with state-of-the-art (SOTA) methods. The evaluation protocol takes a sample x 1:T with its static and dynamic factors, s and d 1:T , and generates a new sample x1:T with the original dynamic features, and a new static component sampled from the prior, s ∼ p(s). Ideally, we expect that x 1:T , x1:T share the dynamic labels, e.g., happy in MUG, whereas, their static classes match with probability close to random guess. To verify that this is indeed the case, we use a pre-trained classifier to predict the dynamic label of x1:T , and we compare it to the true label of x 1:T .\nWe utilize these labels on several different error metrics: label accuracy (Acc), inception score (IS) that estimates the generator performance, intra-entropy H(y|x) that shows how confident the classifier is regarding its prediction, and inter-entropy H(y) that measures diversity in generated samples, see also App. A. Our results are provided in Tab. 1, alongside the results of previous SOTA approaches. The arrows ↑, ↓ next to the metrics denote which metric is expected to be higher or lower in value, respectively. Notably, our method outperforms existing work on Sprites and MUG datasets with respect to all metrics. Finally, one may also consider the opposite test where the static factor is fixed and the dynamic features are sampled. However, the SOTA methods achieve near perfect accuracy in this setting, and thus, we do not show these results here.\nAudio data. Another common benchmark demonstrates the effectiveness of sequential disentanglement frameworks on a different data modality (Hsu et al., 2017;Yingzhen & Mandt, 2018). Specifically, we consider speaker verification on the TIMIT dataset. The main objective is to distinguish between different speakers, independently of the text they read. For a sample x 1:T , we expect that its static factor s represents the speaker identity, whereas d 1:T should not be related to that information. We use the Equal Error Rate (EER) metric where we compute the cosine similarity between all s instances and independently for d 1:T instances. Two static vectors encode the same speaker if their cosine similarity is higher than a threshold ϵ ∈ [0, 1], and different speakers otherwise. The threshold ϵ needs to be calibrated to receive the EER (Chenafa et al., 2008). Tab. 1 shows that our approach improves SOTA results by a margin of 0.62% and 1.41% for the static and dynamic EER. For additional info on this benchmark, see App. B.5. Time series data. Recently, (Tonekaboni et al., 2022) explored their approach on downstream tasks with time series data. Sequential information different from image and audio is an ideal test case for our framework as we lift the dependency on data augmentation (DA) techniques. Indeed, while DA is common for image/audio data, it is less available for arbitrary time series data. We follow the evaluation setup in (Tonekaboni et al., 2022) to study the latent representations learned by our method. 
Specifically, we used an encoder and a decoder to compute codes of consecutive time series windows, and we extract these codes on non-stationary datasets such as Physionet and Air Quality.
We consider the following tasks: 1. prediction of the risk of in-hospital mortality, and 2. estimation of the average daily rain level. For each task, we train a simple RNN classifier in which we utilize the latent representations from the above autoencoder. For comparison, Tonekaboni et al. (2022) used C-DSVAE without data augmentation and thus with no contrastive estimation losses. However, as noted in the above paragraph, our approach does not have this limitation, and thus we can utilize the entire model (5). Tab. 2 shows the results on the mortality rate and daily rain downstream tasks.
Our method performs on par with GLR on the mortality rate task, and it comes second for daily rain estimation. However, it is important to emphasize that GLR was designed specifically for time series data with statistical properties as in the Physionet and Air Quality datasets. In contrast, our method is not tuned to specifically handle time series data, and it can work on multiple data modalities such as video, audio, and time series data. Further, in our experimental setup, we were not able to reproduce the baseline results for the daily rain task. We leave this direction for further exploration. For more details regarding the evaluation setup and tasks, we refer to (Tonekaboni et al., 2022)." }, { "figure_ref": [], "heading": "Quantitative Results: New Benchmarks", "publication_ref": [ "b43", "b52" ], "table_ref": [], "text": "The standard sequential disentanglement benchmark tests include classification of conditionally generated images and speaker verification. Here, we propose a new benchmark that quantifies the quality of the learned representations.
For this evaluation, we consider the MUG dataset, and two challenging video datasets with handwriting (Letters) and hand gestures (Jesters). In this experiment, we explore a common framework to evaluate the disentangled codes. First, we compute the static {s j } and dynamic {d j 1:T } codes of the test set. Then, we define train and test sets via an 80-20 split of the test set, and we train four classifiers. The first classifier takes s vectors as inputs, and it tries to predict the static label. The second classifier takes s and it predicts the dynamic label. Similarly, the third classifier takes d 1:T and predicts the static label, and the fourth classifier takes d 1:T and predicts the dynamic label. An ideal result with input s is a perfect classification score in the first classifier, and a random guess in the second classifier. Additional details on this experiment appear in B.9. Our results are summarized in Tab. 3, where we outperform C-DSVAE often by a large gap in accuracy. The Jesters dataset does not include static labels, and thus we only have partial results. Further, this dataset is extremely challenging due to low-quality images and complex gestures, and currently, C-DSVAE and our approach obtain low scores, where our approach attains > 7% improvement over a random guess. These results can be improved by integrating recent VAEs (Razavi et al., 2019;Vahdat & Kautz, 2020), as we observe low-quality reconstruction, which may affect disentanglement abilities." }, { "figure_ref": [], "heading": "Analysis of Positive and Negative Views", "publication_ref": [ "b48", "b48" ], "table_ref": [], "text": "Sec.
3 details how to incorporate contrastive learning in a sequential disentanglement setting, and in Sec. 4, we list some of the challenges such as sampling wrong positive and negative views. Here, we would like to empirically compare the views generated by C-DSVAE and our approach. For instance, we show a qualitative example in Fig. 3A of views used in C-DSVAE and obtained with SimCLR (Chen et al., 2020a), where positive dynamic examples are generated via e.g., color distortion while supposedly keeping the dynamic features fixed. Unfortunately, not all DA preserve the facial expressions. Beyond these qualitative examples, we also adapt the analysis of (Tian et al., 2020) as detailed below. A positive static sample is generated using the pair (s + , d1:T ) where s + is similar to s, and d1:T is different from d 1:T . In the opposite case of a positive dynamic sample, the dynamic features d + 1:T are similar to d 1:T and s is different from s. Following Tian et al. (2020), a good view is such that the mutual information I q (s + ; y) is high, whereas I q ( d1:T ; y) is low, where y is the task label. For example, the identity of the person is kept, while its facial expression has changed. To estimate these MI terms, we use the latent codes in classification tasks as in Sec. 5.5 where the static and dynamic factors are predicted. Namely, we use s + to predict the static labels, and similarly, we use d1:T to predict the dynamic labels. Good views will yield high static classification scores and low dynamic classification scores.
We show in Fig. 3B the classification results when using (s + , d1:T ) with C-DSVAE and with our method, and Fig. 3C shows the opposite case, i.e., (s, d + 1:T ). For both plots, blue curves are related to our approach and red curves correspond to C-DSVAE. We focus on the test which uses (s + , d1:T ), Fig. 3B. The blue and red curves show the classification accuracy when using s + to predict the static label, and thus, they should be high. In contrast, the light blue and orange curves arise from using d1:T to predict the dynamic labels, and thus, they should be close to a random guess for semi-hard views (16.66% in MUG). However, the orange curve is around 70%, whereas the light blue attains ≈ 30%. These results indicate that our views are semi-hard as they yield accuracy results close to a random guess. In the opposite scenario, Fig. 3C, we use the pair (s, d + 1:T ) with different static and similar dynamic factors. Here, the blue and red curves should be close to a random guess (1.92%), and the light blue and orange plots should present high accuracy values. However, the orange curve presents ≈ 25% accuracy, whereas ours is around 70%. We conclude that our dynamic features better preserve the underlying action. Additional analysis and results are provided in App. B.4." }, { "figure_ref": [], "heading": "Ablation Study: Negative Views Sampling", "publication_ref": [], "table_ref": [], "text": "The previous evaluation in Sec. 5.6 focused on the quality of positive views. Here, we explore the effect of utilizing various negative sampling rules. Ultimately, we would like to empirically motivate the heuristic we introduce in Sec. 4 where we propose to only consider 33.3% of the farthest distributions as measured by the KL divergence distance.
An inferior heuristic is one that produces confusing negative views, i.e., examples that are semantically similar to the current data, instead of being dissimilar.
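The classification probes used in this kind of analysis (and in Sec. 5.5) can be as simple as a linear classifier trained on frozen latent codes; the sketch below is an illustrative stand-in, not the evaluation code used here, and the arrays in the usage comment are placeholders.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(codes, labels, seed=0):
    """Accuracy of a linear probe on frozen codes (static or dynamic)."""
    x_tr, x_te, y_tr, y_te = train_test_split(
        codes, labels, test_size=0.2, random_state=seed)
    clf = LogisticRegression(max_iter=2000).fit(x_tr, y_tr)
    return clf.score(x_te, y_te)

# For good positive static views s^+: probe_accuracy(s_plus_codes, static_labels)
# should be high, while probe_accuracy(d_tilde_codes, dynamic_labels) should be
# near chance.
```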
However, choosing "right" negative views is important to the overall behavior of the approach, and thus we explore other sampling policies in the following ablation study. For a meaningful comparison, we fix the hyperparameters of the approach, and we train several models that only differ in their selection strategy of the negative views. Let n be the number of inputs in the batch; we define a pool of size ⌊n/3⌋ of negative distributions taken from: 1) the middle third, 2) the farthest third, 3) both the middle and farthest thirds, and 4) random sampling. From these distributions we sample 2n views. We present in Tab. 4 the results of our ablation study on the MUG and TIMIT datasets. For MUG, we show the accuracy score, and for TIMIT we display the EER gap. Notably, all sampling strategies attain SOTA results on these tasks, cf. Tab. 1. However, the farthest third yields the best results consistently across tasks, and thus, these results support our heuristic. In App. B.2, we conduct an analysis that motivates and justifies our heuristic by showing the similarity distribution of the thirds.

Figure 3. We present randomly selected views of SimCLR on MUG (A). In addition, we compare the quality of views obtained with C-DSVAE and our approach when classifying the static and dynamic labels (B, C). See the text for more details.

Limitations

Our model achieves SOTA results on several sequential disentanglement benchmarks. While the method relies on heuristics such as initial disentanglement by restricting the dimensions of s and d_{1:T}, and the methodology of selecting negative and positive samples, it is backed up with extensive empirical results that show the significance of each component and the robustness of the method to different modalities. Our model uses a similar number of hyperparameters as existing work. Tuning several hyperparameters may be challenging in practice. Nevertheless, we utilized automatic tuning tools, such as hyperopt, to search for the best parameter values within the predefined hyperparameter space. Finally, similar to existing disentanglement works, we used pre-trained classifiers to evaluate our approach. In general, we believe that the sequential disentanglement community will benefit from new challenging benchmarks that depend on improved evaluation metrics.

Discussion

In this work, we investigated the problem of unsupervised disentanglement of sequential data. Recent SOTA methods employ self-supervision via auxiliary tasks and data augmentations which, unfortunately, are modality-based. Namely, they depend on the domain-modality (e.g., videos), on the task-modality (e.g., classifying expressions), or on both. In contrast, we propose a contrastive estimation framework that is free of external signals, and thus is applicable to arbitrary sequential data and tasks. Key to our approach is the observation that VAEs naturally support the generation, comparison, and sampling of distributions. Therefore, effective sampling strategies for generating positive and negative views can be devised based solely on the batch inputs. Our method is easy to code, efficient, and it uniformly treats similar and dissimilar views.
Our extensive evaluation shows new SOTA results on multiple datasets, including video, audio, and arbitrary time series, and on downstream tasks such as speaker verification, unconditional generation, and prediction.
In the future, we would like to explore the interplay between the mutual information loss components. Essentially, these terms are contradictory in nature, and thus, it motivates us to find improved formulations. Moreover, we would like to investigate whether sampling strategies such as ours can be effective for non-sequential contrastive estimation on, e.g., static images. We believe that this is a very interesting direction for future research and that, with some adaptations, our method can contribute to contrastive learning of static information as well. Finally, we aim to tackle challenging datasets such as Jesters using improved VAE pipelines.

A. Experimental Setup

A.1. Datasets

Sprites. A dataset introduced by (Reed et al., 2015) that includes animated cartoon characters that have both static and dynamic attributes. The static attributes include variations in skin, tops, pants, and hair color, each of which has six possible options. The dynamic attributes consist of three different types of motion (walking, casting spells, and slashing) that can be performed in three different orientations (left, right, and forward). In total, there are 1296 unique characters that can perform nine different motions. Each sequence in the dataset consists of eight RGB images with a size of 64 × 64 pixels. In our experiments, we use 9000 samples for training and 2664 samples for testing.

MUG. A facial expression dataset created by (Aifanti et al., 2010) that includes image sequences of 52 subjects displaying six different facial expressions (anger, fear, disgust, happiness, sadness, and surprise). Each video in the dataset consists of between 50 and 160 frames. In order to create sequences of length 15, as was done in previous work (Bai et al., 2021), we randomly select 15 frames from the original sequences. We then use Haar cascade face detection to crop the faces and resize them to 64 × 64 pixels, resulting in sequences x ∈ R^{15×3×64×64}. The final dataset consists of 3429 samples.

TIMIT. A dataset introduced by (Garofolo et al., 1992) which consists of read speech that is used for acoustic-phonetic research and other speech tasks. It contains 6300 utterances (5.4 hours of audio). There are 10 sentences per speaker, for a total of 630 speakers. The dataset includes adult men and women. For the data pre-processing, we follow the same procedure as in prior work (Yingzhen & Mandt, 2018). We extract spectrogram features (10ms frame shift) from the audio, and we sample segments of 200ms duration (20 frames), which are used as independent samples.

Jester. A dataset introduced by (Materzynska et al., 2019). The Jester dataset comprises 148,092 labeled video segments of more than 1300 unique individuals making 27 simple, predefined hand gestures in front of a laptop camera or webcam. The gestures are labeled whereas the subjects are not, and thus the dataset contains only dynamic labels.
This dataset is significantly more complex than MUG since there are variations in the background, lighting, and pose, and, moreover, the elements in the images are much bigger. We used five gestures (Pushing Hand Away, Rolling Hand Forward, Shaking Hand, Sliding Two Fingers Left, Sliding Two Fingers Right). We extracted videos with 10 frames, where the gap between two consecutive frames was selected by dividing the total sequence length by 10.

Letters. The Letters dataset (Ibrahim et al., 2019) comprises English letters and numbers written by 66 individuals, including both offline and online handwritten letters. In our setup, we utilized only lowercase English letters (a-z) from the offline subset of the dataset. We created a lexicon of 100 words, each consisting of seven letters, and then we generated word sequences using images of the letters. As an example, a sequence may appear as "science". We excluded subject number '61' due to missing data and generated 100 word sequences using the handwriting of the remaining 65 subjects. Each subject has two samples for each letter, which were randomly selected.

Physionet. The Physionet ICU Dataset (Goldberger et al., 2000) is a medical time series corpus of 12,000 adult patients' stays in the Intensive Care Unit (ICU). The data includes time-dependent measurements such as physiological signals and lab measurements, as well as general information about the patients, such as their age and the reason for their ICU admission. Additionally, the dataset includes labels that indicate in-hospital mortality. For pre-processing we follow (Tonekaboni et al., 2022).

Air Quality. The UCI Beijing Multi-site Air Quality dataset (Zhang et al., 2017) was collected over four years, from March 1st, 2013 to February 28th, 2017. It includes hourly measurements of multiple air pollutants from 12 nationally controlled monitoring sites. The meteorological data in each air-quality site are matched with the nearest weather station from the China Meteorological Administration. For our experiments we follow (Tonekaboni et al., 2022) and pre-process the data by dividing it into samples from different stations and different months of the year.

A.2. Disentanglement Metrics

Accuracy (Acc). This metric measures the ability of a model to preserve fixed features while generating others; for instance, freeze the dynamic features and sample the static features. The metric is computed using a pre-trained classifier (called C, or the "judge"). The classifier is trained on the same train set as the model and tested on the same test set as the model.

A.3. Model Architecture

All the models have been implemented using PyTorch (Paszke et al., 2019). Conv2D and Conv2DT denote a 2D convolution layer and its transpose, and BN2D is a 2D batch normalization layer.

Image Datasets. Our image model architecture follows the implementation of (Zhu et al., 2020). The static latent distribution variables s_µ, s_log(σ) are parameterized by taking the last hidden state of a bi-directional LSTM and propagating it through linear layers. The dynamic latent distribution variables d^µ_{1:T}, d^{log(σ)}_{1:T} are given by propagating the hidden states of the bi-directional LSTM through an RNN and then linear layers. In Tab. 5 we describe the encoder and the decoder of our model.
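The following minimal PyTorch sketch illustrates this parameterization of the static and dynamic posteriors. The module name, the feature dimensions, and the way the bi-directional LSTM state is summarized are illustrative assumptions on our part and do not reproduce the exact configuration of Tab. 5.

```python
import torch
import torch.nn as nn

class LatentHeads(nn.Module):
    """Sketch of the static (s) and dynamic (d_{1:T}) posterior heads."""
    def __init__(self, feat_dim=128, hidden=256, d_s=256, d_d=32):
        super().__init__()
        self.bi_lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        # static head: summary of the bi-LSTM states -> linear layers
        self.s_mu = nn.Linear(2 * hidden, d_s)
        self.s_logvar = nn.Linear(2 * hidden, d_s)
        # dynamic head: bi-LSTM states -> RNN -> linear layers (per time step)
        self.rnn = nn.RNN(2 * hidden, hidden, batch_first=True)
        self.d_mu = nn.Linear(hidden, d_d)
        self.d_logvar = nn.Linear(hidden, d_d)

    def forward(self, feats):                  # feats: (B, T, feat_dim) frame features
        h, _ = self.bi_lstm(feats)             # (B, T, 2 * hidden)
        half = h.size(-1) // 2
        # last state of the forward pass and first state of the backward pass
        summary = torch.cat([h[:, -1, :half], h[:, 0, half:]], dim=-1)
        s_mu, s_logvar = self.s_mu(summary), self.s_logvar(summary)
        g, _ = self.rnn(h)                     # (B, T, hidden)
        d_mu, d_logvar = self.d_mu(g), self.d_logvar(g)
        return s_mu, s_logvar, d_mu, d_logvar
```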
Sprites, MUG, Letters, and Jesters share the same architecture. We denote the dimension of d_{1:T} by d_d, and the dimension of s by d_s. The values are chosen per dataset and reported in Tab. 6.

Audio Datasets. The architecture of the TIMIT model follows (Yingzhen & Mandt, 2018) and was used by the previous methods (Zhu et al., 2020; Bai et al., 2021). The only difference from the image architecture is the removal of the convolutions from the encoder and the replacement of the decoder with two linear layers. The first linear layer has input dimension d_z and output dimension 256, followed by a LeakyReLU activation. Finally, we feed the second linear layer, followed by a LeakyReLU activation, whose output dimension is 200.

Time Series Datasets. The architecture for the time series datasets is simpler. The encoder is composed of three linear layers, Linear(10, 32) → Linear(32, 64) → Linear(64, 32), with a ReLU activation after each linear layer, followed by a similar architecture to the image models (bi-directional LSTM, etc.) to model s_µ, s_log(σ), d^µ_{1:T}, d^{log(σ)}_{1:T}. The decoder is composed of a linear layer that projects the latent codes onto a dimension of size 32, followed by a tanh activation. Then, the output is propagated through an LSTM with a hidden size of 32. We feed the output of the LSTM to two linear layers, each followed by a ReLU activation, Linear(32, 64) and Linear(64, 32). Finally, we project the output onto two linear layers to produce the mean and covariance from which we sample the final output. This architecture follows (Tonekaboni et al., 2022).

A.4. Hyperparameters

We estimate the following objective function:

$$\max_{p,q}\ \mathbb{E}_{x_{1:T}\sim p_D}\Big[\lambda_1\,\mathbb{E}_{q(z\mid x_{1:T})}\log p(x_{1:T}\mid z)-\lambda_2\,\mathrm{KL}\big[q(s\mid x_{1:T})\,\|\,p(s)\big]-\lambda_3\,\mathrm{KL}\big[q(d_{1:T}\mid x_{1:T})\,\|\,p(d_{1:T})\big]+\lambda_4\,I_q(d_{1:T};x_{1:T})+\lambda_4\,I_q(s;x_{1:T})-\lambda_5\,I_q(s;d_{1:T})\Big]\qquad(9)$$

To control the contribution of each loss component, we add the coefficient λ_1 to the reconstruction loss, λ_2 to the static KL term, λ_3 to the dynamic KL term, and finally λ_4, λ_5 to the contrastive terms. The hyperparameter λ_1 is tuned over {1, 2.5, 5, 10}; we do not divide the MSE loss by the batch size. λ_2 is tuned over {1, 3, 5, 7, 9}, and λ_4 and λ_5 are tuned over {0.1, 0.5, 1, 2.5, 5}, while λ_3 is fixed to 1. We used the Adam optimizer (Kingma & Ba, 2014) with the learning rate chosen from {0.001, 0.0015, 0.002}. The dimensions of the static and dynamic features were chosen among {128, 256} for the static and {32, 64} for the dynamic factors. Our optimal hyperparameters for each task and dataset are given in Tab. 6. All the tasks were trained for at most 600 epochs.

Table 6. Hyperparameters for all datasets; lr and bsz are abbreviations for learning rate and batch size, respectively.

Dataset      λ1   λ2  λ3  λ4   λ5   lr     bsz  d_s  d_d
Sprites      10   5   1   5    1    2e-3   100  256  32
MUG          5    9   1   0.5  2.5  15e-4  16   256  64
Letters      2.5  1   1   5    5    2e-3   64   256  32
Jesters      5    1   1   1    1    1e-3   16   256  64
TIMIT        5    1   1   0.5  1    1e-3   10   256  64
Physionet    2.5  7   1   0.1  2.5  1e-3   10   12   4
Air Quality  2.5  5   1   0.1  2.5  1e-3   10   12   4

As described in Alg. 1 (App. B.1), negative candidates {s̃^-}_{j=1}^{2n} are first drawn from the pool S^-, which is obtained from the KL distance matrix D.
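A minimal sketch of this drawing step is given below. It assumes diagonal Gaussian posteriors given as (mu, logvar) tensors and a precomputed index set of the farthest third per batch element (e.g., as produced by the `split_pools` helper sketched in Sec. 5.7); the function name and signature are our own illustrative choices, not the released code.

```python
import torch

def sample_negative_candidates(mu, logvar, s_minus_idx, num=None):
    """Draw candidate static codes from the farthest-third pool S^- of each sample.

    mu, logvar:  (n, d) posterior parameters of the batch statics.
    s_minus_idx: (n, third) indices of each sample's farthest-third distributions.
    Returns a tensor of shape (n, num, d) of reparameterized candidates (num = 2n by default).
    """
    n, third = s_minus_idx.shape
    num = 2 * n if num is None else num
    # pick `num` pool members per batch element (with replacement)
    pick = torch.randint(third, (n, num), device=mu.device)
    idx = torch.gather(s_minus_idx, 1, pick)              # (n, num)
    mu_c, logvar_c = mu[idx], logvar[idx]                 # (n, num, d)
    eps = torch.randn_like(mu_c)
    return mu_c + eps * (0.5 * logvar_c).exp()            # reparameterization trick
```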
Next, the negative instances {x^-_{1:T}}_{j=1}^{2n} are defined via

$$\{x^-_{1:T}\}_{j=1}^{2n} \sim p\big(\{x^-_{1:T}\}_{j=1}^{2n} \,\big|\, \{\tilde{s}^-\}_{j=1}^{2n}, \{\tilde{d}_{1:T}\}_{j=1}^{2n}\big)\ .$$

Finally, the negative static views are obtained by sampling the posterior

$$\{s^-\}_{j=1}^{2n} \sim q\big(\{s^-\}_{j=1}^{2n} \,\big|\, \{x^-_{1:T}\}_{j=1}^{2n}\big)\ .$$

The computational complexity of the D_KL matrix is an important aspect of this stage. In practice, we never construct this matrix. Instead, we exploit the parallel capabilities of PyTorch. Notice that this computation is parallel at the level of the cell, as each matrix cell is independent of the others. Thus, PyTorch can utilize its full parallelization capabilities on this task. In particular, since the computation per cell is constant in time (and memory), the entire computation of D_KL can be done in constant time if every cell is calculated by a separate compute node. Therefore, the first for loop in Alg. 1 utilizes the full parallel compute capabilities of PyTorch. Similarly, the second for loop in Alg. 1 is re-phrased in a tensorial form such that the for loop is avoided completely.

Calculating the Contrastive Loss: We can compute the contrastive loss L_iNCE given the positive s^+ and the negative samples {s^-}_{j=1}^{2n}.

B.2. A Thirds Similarity Distributions Experiment

Throughout our studies, we observed that taking the negatives from the last third obtains the best results, as seen in the ablation study in Sec. 5.7. One possible explanation is that the last third consists of fewer positive points, i.e., samples we consider to be negative but are in fact positive. Since our negative sampling process is random, it might be that using negative samples from the last third avoids positive samples more often than taking such samples from the middle third, which yields better overall results in practice. To strengthen this hypothesis, we calculated the distribution of the similarity ϕ(u, v_j) for each third. We used our trained model to get the latent space vectors and calculated their similarity scores. We conduct the experiment both for the static and the dynamic latent vectors. The results are very similar, thus we show here the dynamic vector similarity distribution. We show the similarity histogram of the various thirds in Fig. 4. The histogram shows an intuitive ordering of the thirds in the sense that the first third yields the most similar samples, and the last third yields the most dissimilar samples. Thus, we believe that the governing factor that makes the last third better is that wrong samples are probably sampled less often in comparison to the middle third. In addition, notice that the last third does contain several examples with similarity ϕ(u, v_j) higher than 0.5, and these samples may be hard negative examples.

B.3. The Predictive Sampling Trick vs. a Reparametrization Trick

To further analyze the contribution of the predictive sampling trick, we re-trained two neural networks with the same hyperparameters on Sprites and MUG without the predictive sampling trick, and instead we used a simple reparametrization trick. We report the results of the comparison between the two in Tab. 7. One can observe that the results of the reparametrization trick models are inferior to those of the predictive sampling trick.
For instance, while the accuracy metric for Sprites is saturated, a reduction in the other metrics is noticeable, e.g., 8.942 using our method vs. 8.865 using the reparametrization trick on the IS metric. The difference is even more noticeable on the MUG dataset, where the reparametrization trick suffers an almost two percent loss in accuracy. These results further motivate and reinforce our choice and design of the predictive sampling trick.

Figure 6. We plot the t-SNE embedding of the dynamic features of the inputs and their negative samples as computed by C-DSVAE (top) and our approach (bottom). In this setting, the blue and orange embeddings should be as separate as possible.

B.6. Standard Deviation Measures for Tab. 1

Here, we report the standard deviation measures related to Tab. 1 in the main text. Notice that the audio experiment is deterministic due to the EER metric definition, and thus it does not have standard deviation measures. The extended results are provided in Tab. 8. These results indicate that not only does our method achieve superior results in comparison to SOTA approaches, but also that it is well within the statistical significance regime, given the standard deviation measures.

B.7. Data Generation

We qualitatively evaluate our model's ability to generate static and dynamic features. Specifically, let x_{1:T} ∼ p_D denote a sample from the data with its static and dynamic latent representations (s, d_{1:T}) given by s ∼ q(s | x_{1:T}) and d_{1:T} ∼ q(d_{1:T} | x_{1:T}). We generate new static features by sampling from the static prior distribution p(s), namely, s̃ ∼ p(s), and fixing the dynamics d_{1:T}. Then, we concatenate (s̃, d_{1:T}), and we generate a new sample x̃_{1:T} ∼ p(x_{1:T} | s̃, d_{1:T}). Finally, we perform a similar process in order to generate the dynamics, where we sample from the dynamic prior distribution and the static features are fixed. The results of static and dynamic feature generation for the Sprites and MUG datasets are given in Fig. 7, Fig. 8, Fig. 9, and Fig. 10. The left column in each figure contains the original samples and the right column contains the generated samples. If the model disentangles the features well and has high generation performance, then the fixed features should be preserved perfectly and the generated features should be random (independent of the original class).

B.8. Swaps

In this section we perform another qualitative experiment. Specifically, given source and target samples, x^src_{1:T}, x^tgt_{1:T} ∼ p_D, we swap the static and dynamic features between the source and the target. In practice, we feed the encoder with the samples to extract their static and dynamic latent representations, (s^src, d^src_{1:T}) s.t. s^src ∼ q(s^src | x^src_{1:T}) and d^src_{1:T} ∼ q(d^src_{1:T} | x^src_{1:T}) for the source, and s^tgt ∼ q(s^tgt | x^tgt_{1:T}) and d^tgt_{1:T} ∼ q(d^tgt_{1:T} | x^tgt_{1:T}) for the target. Then, we generate swapped samples by feeding the decoder: x̃^src_{1:T} ∼ p(x^src_{1:T} | s^tgt, d^src_{1:T}) and x̃^tgt_{1:T} ∼ p(x^tgt_{1:T} | s^src, d^tgt_{1:T}). If the representation is well disentangled, x̃^src_{1:T} should preserve its original dynamics but have the target's static features, and vice versa for x̃^tgt_{1:T}. Fig. 11 and Fig. 12 show four separate examples for Sprites and MUG, where the length of the MUG sequences is shortened to T = 10 for clarity. The first row of each pair shows the original samples x^src_{1:T}, x^tgt_{1:T} ∼ p_D. The row below shows the swapping results. Namely, rows 1, 3, 5, 7 show the original samples and rows 2, 4, 6, 8 show the swapped samples.
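For completeness, the swap experiment can be sketched as follows. The helpers `model.encode` (returning sampled static and dynamic codes) and `model.decode` (decoding a static/dynamic pair) are assumptions made for illustration and are not the released API.

```python
import torch

@torch.no_grad()
def swap_static(model, x_src, x_tgt):
    """Swap the static codes between a source and a target sequence (B.8 sketch)."""
    s_src, d_src = model.encode(x_src)           # sampled (s, d_{1:T}) of the source
    s_tgt, d_tgt = model.encode(x_tgt)           # sampled (s, d_{1:T}) of the target
    x_src_swapped = model.decode(s_tgt, d_src)   # target appearance, source motion
    x_tgt_swapped = model.decode(s_src, d_tgt)   # source appearance, target motion
    return x_src_swapped, x_tgt_swapped
```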
B.9. Latent Classification Experiments with Different Classifiers and Standard Deviation

Here, we elaborate on the experiment we reported in Sec. 5.5. We extracted the static s and dynamic d_{1:T} features using a trained model and trained four different classifiers. All trained classifiers are Support Vector Machines. We used the default Support Vector Classifier (SVC) of the sklearn package without changing any hyperparameter. To strengthen the statistical significance of Tab. 3 from the main text, we conduct the same experiment with different classifiers and report their results in Tab. 9 (SVC), Tab. 10 (Random Forest Classifier), and Tab. 11 (KNN). These tables show that our results are robust to the choice of the classifier. We repeated the experiments per classifier 10 times with different seeds for data splitting and report their means and standard deviations. We used the default sklearn Random Forest Classifier and KNN with the sklearn default hyperparameters and conducted exactly the same experimental procedure. In the Jesters dataset, we follow the exact same process, just without the static features and their classifiers, since the subjects in this data are not labeled. In the Letters dataset, there is one difference: for each d_i, i = 1, ..., T in d_{1:T}, we try to predict its corresponding letter label in the sequence instead of trying to predict the whole word. Briefly, our model maintains its superior performance in comparison to C-DSVAE (Bai et al., 2021) with respect to the gap metric among all classifiers.

B.10. Robustness with Respect to the Seed Choice

In our work, we based our evaluation on existing state-of-the-art models, their evaluation protocols, and benchmark datasets. Following these approaches, the sensitivity to hyperparameters and randomness is typically not considered. Nevertheless, we re-trained our model with five different seeds in total to test its robustness with respect to the particular choice of seed. We report the results in Tab. 12. These results indicate that our method is statistically significant with respect to previous SOTA approaches.
Ours (Tab. 12): 100% ± 0%, 8.942 ± 3.3e-5, 0.006 ± 4e-6, 2.197 ± 0, 85.06% ± 1.06, 5.517 ± 0.034, 0.073 ± 4e-3, 1.782 ± 3e-3.

Acknowledgements

This research was partially supported by the Lynn and William Frankel Center of the Computer Science Department, Ben-Gurion University of the Negev, an ISF grant 668/21, an ISF equipment grant, and by the Israeli Council for Higher Education (CHE) via the Data Science Research Center, Ben-Gurion University of the Negev, Israel.

A.2. Disentanglement Metrics (cont.)

For example, for the MUG dataset, the classifier would output the facial expression, and we check that it did not change during the static feature sampling.

Inception Score (IS). This is a metric for generator performance. First, we apply the judge to all the generated sequences x_{1:T}, obtaining p(y | x_{1:T}), the conditional predicted label distribution. Second, we take p(y), the marginal predicted label distribution, and we calculate the KL divergence KL[p(y | x_{1:T}) ∥ p(y)]. Finally, we compute IS = exp(E_x KL[p(y | x_{1:T}) ∥ p(y)]).

Intra-Entropy (H(y|x)).
This metric reflects the confidence of the classifier C regarding label prediction; low intra-entropy means high confidence. We measure it by feeding k generated sequences to the classifier and computing the average entropy of its predicted label distributions.

Inter-Entropy (H(y)). This metric reflects the diversity among the generated sequences; a high inter-entropy score means high diversity. It is computed by generating samples from the learned prior and then computing the entropy of the judge's marginal predicted label distribution p(y) over the predicted labels y.

Equal Error Rate (EER). This metric is used on the TIMIT dataset for the speaker verification task. It measures the value of the false negative rate or, equally, the value of the false positive rate of a model on the speaker verification task; the EER is measured at the point where these two rates are equal.

Latent Accuracy (L-Acc). This metric measures the ability of a model to generate meaningful latent features for a downstream classification task. For instance, for the MUG dataset, we take the static latent factor s of a sample x and try to predict the subject label or the facial expression label. In such a case, a meaningful and disentangled model will produce static features that contain information about the subject label but not about the facial expression label of x. We compute the prediction accuracy by training a Support Vector Machine classifier for the static and dynamic features. We flatten the dynamic features d_{1:T} into one vector d, that is, assuming d_i ∈ R^k and i = 1, ..., T, the dimension of the flattened vector d is k × T. The exact dimension changes between dataset types. We split the test set data into two parts (80-20) and use the first part to train the different classifiers and the second one to evaluate the prediction accuracy. Finally, we also train a Random Forest classifier and a KNN classifier to show the robustness of the benchmark to the classifier choice.

B. More Experiments, Analyses and Information

B.1. Method Implementation and Pseudocode

In what follows, we explain in detail the implementation of our sampling procedure. In addition, we provide pseudocode in Alg. 1, which describes the process and shows how our framework can be implemented.

1. Producing static (s) and dynamic (d_{1:T}) distributions: Let x_{1:T} ∼ p_D. Using the model architecture elaborated in the previous section, we can compute the mean and log-variance vectors that represent the s and d_{1:T} posterior distributions, i.e., s_µ, s_log(σ) and d^µ_{1:T}, d^{log(σ)}_{1:T}.

Producing Positive and Negative Views: W.l.o.g., we focus on how to produce positive and negative static views; the process for the positive and negative dynamic views is similar. To produce the positive view, we need a vector s̃^+ that represents a positive view of the static factor (potentially from the same class) and vectors d̃_{1:T} that represent arbitrary dynamics. The vectors d̃_{1:T} can simply be sampled from the prior distribution, d̃_{1:T} ∼ p(d_{1:T}). To produce s̃^+, we compute the pairwise KL divergence distance matrix D ∈ R^{n×n} for the batch, as described in the main text in Eq. 7. We then sort each row D_{i,:} in ascending order, and we sample positive views from the first third of distributions, denoted by S^+, whereas negative views are sampled from the last third, denoted by S^-. To increase the variability of the views, our predictive sampling trick generates these examples from the posterior of the sampled prior. To achieve this, we sample s̃^+ ∼ S^+.
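The full positive branch, including the steps described next, can be sketched as follows. The helpers `model.sample_dynamic_prior`, `model.decode`, and `model.encode_static`, as well as the tensor names, are assumptions made for illustration and are not the released API.

```python
import torch

def predictive_positive_view(model, mu_s, logvar_s, s_plus_idx, T):
    """Predictive sampling trick for one positive static view per batch element (sketch).

    mu_s, logvar_s: (n, d_s) posterior parameters of the batch statics.
    s_plus_idx:     (n, third) indices of the closest-third pool S^+.
    """
    n, third = s_plus_idx.shape
    # 1) draw a candidate static code from the pool S^+ via the reparameterization trick
    rows = torch.arange(n, device=mu_s.device)
    pick = s_plus_idx[rows, torch.randint(third, (n,), device=mu_s.device)]
    mu_c, logvar_c = mu_s[pick], logvar_s[pick]
    s_tilde = mu_c + torch.randn_like(mu_c) * (0.5 * logvar_c).exp()
    # 2) draw arbitrary dynamics from the prior p(d_{1:T})
    d_tilde = model.sample_dynamic_prior(n, T)
    # 3) decode to x^+ and re-encode to obtain the posterior q(s^+ | x^+_{1:T})
    x_plus = model.decode(s_tilde, d_tilde)
    mu_p, logvar_p = model.encode_static(x_plus)
    # 4) the positive view s^+ is a sample from that posterior
    return mu_p + torch.randn_like(mu_p) * (0.5 * logvar_p).exp()
```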
Then, the positive instance x^+_{1:T} is defined via x^+_{1:T} ∼ p(x^+_{1:T} | s̃^+, d̃_{1:T}). Finally, the positive static view is obtained by sampling the posterior: s^+ ∼ q(s^+ | x^+_{1:T}). A similar process is performed to obtain negative views: we first sample 2n examples {s̃^-}_{j=1}^{2n} ∼ S^-, where S^- is obtained from D, and we then decode and re-encode them in the same manner to produce the negative views {s^-}_{j=1}^{2n}.

Figure 5. The t-SNE embedding of the dynamic features of the inputs and their positive samples as computed by C-DSVAE (top) and our approach (bottom). In this setting, the blue and orange embeddings should be as close as possible.

B.4. Qualitative Evaluations

Here, we propose an additional qualitative evaluation of our contrastive estimation using the MUG dataset. Specifically, we evaluated the positive and negative samples of two trained models, C-DSVAE and ours. We collect the dynamic latent representations d^i_{1:T} for every sample i in the test set, and we compute the mean value d̄ = (1/T) Σ_{j=1}^{T} d_j, where the index i is omitted for brevity. For each of those samples, we extract a subset of their positive d^+ and negative d^- samples. To visualize these latent features, we project the original representation d̄ and the new samples d^+ and d^- using t-SNE (Van der Maaten & Hinton, 2008). We anticipate that the pair (d̄, d^+) will be close in latent space, as contrastive learning attracts positive data points closer. In contrast, contrastive learning repels negative samples, and thus (d̄, d^-) should be far from each other. We present the results in Figs. 5 and 6. For the positive samples, our method shows an impressive similarity between d̄ and d^+. In comparison, C-DSVAE presents a much bigger distance between d̄ and d^+. On the negative samples, our method shows good discrimination between d̄ and d^-, and in addition, our samples are much more concentrated. In comparison, discriminating between the input and negative samples in C-DSVAE is more challenging. We obtained similar results for the static setting, i.e., when we studied the t-SNE embeddings of s and its positive s^+ and negative s^- samples.

B.5. Additional Information on the TIMIT Speaker Verification Task

Here, we provide more details on the audio experiment with TIMIT described in Sec. 5.4. First, the TIMIT test set contains eight different sentences for each of 24 unique speakers; in total, there are 192 audio clips. For all these clips, we extract their s and d_{1:T} latent representations. Then, to prepare a single vector representation, we calculate the identity representation vector by the same procedure described in (Yingzhen & Mandt, 2018). Last, the EER is calculated separately for all combinations of the 192 vectors; in total, there are 18,336 pairs. We repeat this process independently, once for the s features and once for the d_{1:T} features.
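As an illustration, a common recipe for this evaluation is to score every pair of identity vectors with cosine similarity and report the error rate at which the false positive and false negative rates coincide. The sketch below follows this recipe; the choice of cosine similarity and the function signature are assumptions for illustration, not the exact evaluation script.

```python
import itertools
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(identity_vecs, speaker_ids):
    """EER over all pairs of identity vectors, using cosine similarity as the score.

    identity_vecs: (192, d) array, one identity vector per test utterance.
    speaker_ids:   (192,) array of speaker labels.
    """
    v = identity_vecs / np.linalg.norm(identity_vecs, axis=1, keepdims=True)
    scores, labels = [], []
    for i, j in itertools.combinations(range(len(v)), 2):   # 18,336 pairs for 192 clips
        scores.append(float(v[i] @ v[j]))
        labels.append(int(speaker_ids[i] == speaker_ids[j]))
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))    # threshold where FPR and FNR are closest
    return 0.5 * (fpr[idx] + fnr[idx])
```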
References

Aifanti, N., Papachristou, C., and Delopoulos, A. The MUG facial expression database. 2010.
Ash, J. T., Goel, S., Krishnamurthy, A., and Misra, D. Investigating the role of negatives in contrastive representation learning. 2021.
Bachman, P., Hjelm, R. D., and Buchwalter, W. Learning representations by maximizing mutual information across views. Advances in Neural Information Processing Systems, 2019.
Bai, J., Wang, W., and Gomes, C. P. Contrastively disentangled sequential variational autoencoder. Advances in Neural Information Processing Systems, 2021.
Berman, N., Naiman, I., and Azencot, O. Multifactor sequential disentanglement via structured Koopman autoencoders. ICLR, 2023.
Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Józefowicz, R., and Bengio, S. Generating sentences from a continuous space. ACL, 2016.
Bromley, J., Guyon, I., LeCun, Y., Säckinger, E., and Shah, R. Signature verification using a "Siamese" time delay neural network. 1993.
Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., and Joulin, A. Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 2020.
Chen, R. T., Li, X., Grosse, R. B., and Duvenaud, D. K. Isolating sources of disentanglement in variational autoencoders. Advances in Neural Information Processing Systems, 2018.
Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations. PMLR, 2020.
Chen, X., Fan, H., Girshick, R., and He, K. Improved baselines with momentum contrastive learning. 2020.
Chenafa, M., Istrate, D., Vrabie, V., and Herbin, M. Biometric system based on voice recognition using multiclassifiers. Springer, 2008.
Chopra, S., Hadsell, R., and LeCun, Y. Learning a similarity metric discriminatively, with application to face verification. IEEE, 2005.
Chuang, C.-Y., Robinson, J., Lin, Y.-C., Torralba, A., and Jegelka, S. Debiased contrastive learning. Advances in Neural Information Processing Systems, 2020.
Doersch, C. and Zisserman, A. Multi-task self-supervised visual learning. 2017.
Garofolo, J., Lamel, L., Fisher, W., Fiscus, J., Pallett, D., Dahlgren, N., and Zue, V. TIMIT acoustic-phonetic continuous speech corpus. 1992.
Ge, S., Mishra, S., Li, C.-L., Wang, H., and Jacobs, D. Robust contrastive learning using negative samples with diminished semantics. Advances in Neural Information Processing Systems, 2021.
Goldberger, A. L., Amaral, L. A., Glass, L., Hausdorff, J. M., Ivanov, P. C., Mark, R. G., Mietus, J. E., Moody, G. B., Peng, C.-K., and Stanley, H. E. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation, 2000.
Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P., Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z., Gheshlaghi Azar, M., et al. Bootstrap your own latent: a new approach to self-supervised learning. Advances in Neural Information Processing Systems, 2020.
Gutmann, M. and Hyvärinen, A. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. 2010.
Han, J., Min, M. R., Han, L., Li, L. E., and Zhang, X. Disentangled recurrent Wasserstein autoencoder. ICLR, 2021.
Han, T., Xie, W., and Zisserman, A. Video representation learning by dense predictive coding. 2019.
He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Momentum contrast for unsupervised visual representation learning. 2020.
Higgins, I., Matthey, L., Pal, A., Burgess, C., Glorot, X., Botvinick, M., Mohamed, S., and Lerchner, A. beta-VAE: Learning basic visual concepts with a constrained variational framework. 2016.
Ho, C.-H. and Vasconcelos, N. Contrastive learning with adversarial examples. Advances in Neural Information Processing Systems, 2020.
Hsu, W.-N., Zhang, Y., and Glass, J. Unsupervised learning of disentangled and interpretable representations from sequential data. Advances in Neural Information Processing Systems, 2017.
Huynh, T., Kornblith, S., Walter, M. R., Maire, M., and Khademi, M. Boosting contrastive self-supervised learning with false negative cancellation. 2022.
Ibrahim, A., Elijah, O., Olusayo, F., Omodolapo, B., and Opeyemi, D. ISGL online and offline character recognition dataset. 2019.
Kalantidis, Y., Sariyildiz, M. B., Pion, N., Weinzaepfel, P., and Larlus, D. Hard negative mixing for contrastive learning. Advances in Neural Information Processing Systems, 2020.
Kim, H. and Mnih, A. Disentangling by factorising. PMLR, 2018.
Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. 2014.
Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. ICLR, 2014.
Kulkarni, T. D., Whitney, W. F., Kohli, P., and Tenenbaum, J. Deep convolutional inverse graphics network. Advances in Neural Information Processing Systems, 2015.
Le-Khac, P. H., Healy, G., and Smeaton, A. F. Contrastive representation learning: A framework and review. IEEE Access, 2020.
Li, H., Wang, X., Zhang, Z., Yuan, Z., Li, H., and Zhu, W. Disentangled contrastive learning on graphs. Advances in Neural Information Processing Systems, 2021.
Lin, Z., Thekumparampil, K., Fanti, G., and Oh, S. InfoGAN-CR and ModelCentrality: Self-supervised model training and selection for disentangling GANs. PMLR, 2020.
Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B., and Bachem, O. Challenging common assumptions in the unsupervised learning of disentangled representations. PMLR, 2019.
Materzynska, J., Berger, G., Bax, I., and Memisevic, R. The Jester dataset: A large-scale video dataset of human gestures. 2019.
Misra, I. and van der Maaten, L. Self-supervised learning of pretext-invariant representations. 2020.
van den Oord, A., Li, Y., and Vinyals, O. Representation learning with contrastive predictive coding. 2018.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. PyTorch: An imperative style, high-performance deep learning library. Curran Associates, Inc., 2019.
Razavi, A., van den Oord, A., and Vinyals, O. Generating diverse high-fidelity images with VQ-VAE-2. Advances in Neural Information Processing Systems, 2019.
Reed, S. E., Zhang, Y., Zhang, Y., and Lee, H. Deep visual analogy-making. Advances in Neural Information Processing Systems, 2015.
Robinson, J., Chuang, C.-Y., Sra, S., and Jegelka, S. Contrastive learning with hard negative samples. 2020.
Sermanet, P., Lynch, C., Chebotar, Y., Hsu, J., Jang, E., Schaal, S., and Levine, S. Time-contrastive networks: Self-supervised learning from video. IEEE, 2018.
Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. PMLR, 2015.
Tian, Y., Sun, C., Poole, B., Krishnan, D., Schmid, C., and Isola, P. What makes for good views for contrastive learning? Advances in Neural Information Processing Systems, 2020.
Tonekaboni, S., Li, C.-L., Arik, S. O., Goldenberg, A., and Pfister, T. Decoupling local and global representations of time series. PMLR, 2022.
Tsai, Y.-H. H., Ma, M. Q., Yang, M., Zhao, H., Morency, L.-P., and Salakhutdinov, R. Self-supervised representation learning with relative predictive coding. 2021.
Tulyakov, S., Liu, M.-Y., Yang, X., and Kautz, J. MoCoGAN: Decomposing motion and content for video generation. 2018.
Vahdat, A. and Kautz, J. NVAE: A deep hierarchical variational autoencoder. Advances in Neural Information Processing Systems, 2020.
Van der Maaten, L. and Hinton, G. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008.
Wang, T., Yue, Z., Huang, J., Sun, Q., and Zhang, H. Self-supervised learning disentangled group representation as feature. Advances in Neural Information Processing Systems, 2021.
Wei, P., Kong, L., Qu, X., Yin, X., Xu, Z., Jiang, J., and Ma, Z. Unsupervised video domain adaptation: A disentanglement perspective. 2022.
Wu, Z., Xiong, Y., Yu, S. X., and Lin, D. Unsupervised feature learning via non-parametric instance discrimination. 2018.
Yingzhen, L. and Mandt, S. Disentangled sequential autoencoder. PMLR, 2018.
Zhang, J. and Ma, K. Rethinking the augmentation module in contrastive learning: Learning hierarchical augmentation invariance with expanded views. 2022.
Zhang, S., Guo, B., Dong, A., He, J., Xu, Z., and Chen, S. X. Cautionary tales on air-quality improvement in Beijing. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2017.
Zhu, R., Zhao, B., Liu, J., Sun, Z., and Chen, C. W. Improving contrastive learning by visualizing feature transformation. 2021.
Zhu, Y., Min, M. R., Kadav, A., and Graf, H. P. S3VAE: Self-supervised sequential VAE for representation disentanglement and data generation. 2020.
Sample and Predict Your Latent: Modality-free Sequential Disentanglement via Contrastive Estimation
Ilan Naiman, Nimrod Berman, Omri Azencot

Abstract: Unsupervised disentanglement is a long-standing challenge in representation learning. Recently, self-supervised techniques achieved impressive results in the sequential setting, where data is time-dependent. However, the latter methods employ modality-based data augmentations and random sampling or solve auxiliary tasks. In this work, we propose to avoid that by generating, sampling, and comparing empirical distributions from the underlying variational model. Unlike existing work, we introduce a self-supervised sequential disentanglement framework based on contrastive estimation with no external signals, while using common batch sizes and samples from the latent space itself. In practice, we propose a unified, efficient, and easy-to-code sampling strategy for semantically similar and dissimilar views of the data. We evaluate our approach on video, audio, and time series benchmarks. Our method presents state-of-the-art results in comparison to existing techniques. The code is available at GitHub.
Figures and Tables

Figure 1. A) To generate positive and negative static views of s, we collect the closest S^+ and farthest S^- distributions in the batch. We sample from these distributions using the reparameterization trick. B) Unfortunately, samples from the batch have limited variation (dashed red rectangle), and thus we use our predictive sampling trick, generating samples by using the posterior of the sampled prior.
Figure 2. Content and pose swap results in Sprites (A) and MUG (B) datasets. See the text for additional details.
Figure 6. We plot the t-SNE embedding of the dynamic features of the inputs and their negative samples as computed by C-DSVAE (top) and our approach (bottom). In this setting, the blue and orange embeddings should be as separate as possible.
Figure 7. Content generation results in Sprites dataset. See the text for additional details.
Figure 8. Dynamics generation results in Sprites dataset. See the text for additional details.
Figure 9. Content generation results in MUG dataset. See the text for additional details.
Figure 10. Dynamics generation results in MUG dataset. See the text for additional details.
Figure 11. Swapping results in Sprites dataset. See the text for additional details.

Table 1. Disentanglement metrics on Sprites, MUG, and TIMIT. Results with standard deviation appear in B.6.

Sprites / MUG:
Method     Acc↑ (Sprites)  IS↑    H(y|x)↓  H(y)↑  Acc↑ (MUG)  IS↑    H(y|x)↓  H(y)↑
MoCoGAN    92.89%          8.461  0.090    2.192  63.12%      4.332  0.183    1.721
DSVAE      90.73%          8.384  0.072    2.192  54.29%      3.608  0.374    1.657
R-WAE      98.98%          8.516  0.055    2.197  71.25%      5.149  0.131    1.771
S3VAE      99.49%          8.637  0.041    2.197  70.51%      5.136  0.135    1.760
SKD        100%            8.999  1.6e-7   2.197  77.45%      5.569  0.052    1.769
C-DSVAE    99.99%          8.871  0.014    2.197  81.16%      5.341  0.092    1.775
Ours       100%            8.942  0.006    2.197  85.71%      5.548  0.066    1.779

TIMIT:
Method     static EER↓  dynamic EER↑
FHVAE      5.06%        22.77%
DSVAE      5.64%        19.20%
R-WAE      4.73%        23.41%
S3VAE      5.02%        25.51%
SKD        4.46%        26.78%
C-DSVAE    4.03%        31.81%
Ours       3.41%        33.22%

Table 2. Error metrics on Physionet and Air Quality datasets.

Method     ICU Mortality AUPRC  ICU Mortality AUROC  Avg. Daily Rain MAE
VAE        0.157 ± 0.053        0.564 ± 0.044        1.831 ± 0.005
GP-VAE     0.282 ± 0.086        0.699 ± 0.018        1.826 ± 0.001
C-DSVAE    0.158 ± 0.005        0.565 ± 0.007        1.806 ± 0.012
GLR        0.365 ± 0.092        0.752 ± 0.011        1.824 ± 0.001
Ours       0.367 ± 0.015        0.764 ± 0.040        1.823 ± 0.001

Table 3. Downstream classification task on latent static and dynamic features. Results with standard deviation appear in B.9.

Dataset   Method    Static input: Static L-Acc↑  Dynamic L-Acc↓  Gap↑    Dynamic input: Static L-Acc↓  Dynamic L-Acc↑  Gap↑
MUG       random    1.92%                        16.66%          -       1.92%                         16.66%          -
MUG       C-DSVAE   98.75%                       76.25%          22.25%  26.25%                        82.50%          56.25%
MUG       Ours      98.12%                       68.75%          29.37%  10.00%                        85.62%          75.62%
Letters   random    1.65%                        3.84%           -       1.65%                         3.84%           -
Letters   C-DSVAE   95.47%                       13.0%           82.47%  2.79%                         66.35%          63.56%
Letters   Ours      100%                         12.16%          87.84%  3.06%                         69.75%          66.69%
Jesters   random    -                            -               -       -                             20%             -
Jesters   C-DSVAE   -                            -               -       -                             21.88%          -
Jesters   Ours      -                            -               -       -                             27.70%          -

Table 4. Negatives ablation study.

Negatives Mode    Acc MUG↑  EER gap TIMIT↑
Random            84.18%    28.53%
Middle Third      84.43%    29.11%
Middle+Farthest   84.96%    29.53%
Farthest Third    85.71%    29.81%

Table 7. Disentanglement metrics on Sprites and MUG using only a reparametrization trick (repar. trick) vs. using our predictive sampling trick (Ours). Our results are better overall across all metrics.

Method        Acc↑ (Sprites)  IS↑              H(y|x)↓           H(y)↑      Acc↑ (MUG)     IS↑            H(y|x)↓        H(y)↑
repar. trick  100% ± 0%       8.865 ± 9.98e-4  0.015 ± 1.13e-4   2.197 ± 0  83.93% ± 0.96  5.495 ± 0.048  0.092 ± 7.9e-3  1.775 ± 4.2e-3
Ours          100% ± 0%       8.942 ± 3.3e-5   0.006 ± 4e-6      2.197 ± 0  85.71% ± 0.9   5.548 ± 0.039  0.066 ± 4e-3    1.779 ± 6e-3

Table 8. We augment Tab. 1 for the Sprites and MUG datasets with standard deviation measures.
Table 9. Downstream classification task on latent static and dynamic features using SVC.
Table 10. Downstream classification task on latent static and dynamic features using the Random Forest Classifier.
Table 11. Downstream classification task on latent static and dynamic features using KNN.
Table 12. We augment Tab. 1 for the MUG dataset with the mean and standard deviation measures of five models trained with different seed values.
is not specified in the context provided, but it is likely a data source that the citing paper uses in their research or analysis."}, {"Category": "Methodological Basis", "Citation": "(Lin et al., 2020)", "Explanation": "The cited work introduces the concept of contrastive estimation in the context of disentanglement of latent factors, which the citing paper builds upon in the design of a simple framework for sampling good positive and negative samples in the context of sequential data."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2021)", "Explanation": "The cited work also contributes to the field of contrastive estimation in the context of disentanglement of latent factors, which the citing paper leverages in the design of a simple framework for sampling good positive and negative samples in the context of sequential data."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2021)", "Explanation": "The cited work further builds on the concept of contrastive estimation in the context of disentanglement of latent factors, providing insights that the citing paper utilizes in the design of a simple framework for sampling good positive and negative samples in the context of sequential data."}, {"Category": "Methodological Basis", "Citation": "(Wei et al., 2022)", "Explanation": "The cited work employs a contrastive triplet loss for unsupervised video domain adaptation, which the citing paper builds upon in the design of a simple framework for sampling good positive and negative samples in the context of sequential data."}, {"Category": "Methodological Basis", "Citation": "(Zhu et al., 2020)", "Explanation": "The cited work utilizes auxiliary tasks and supervisory signals in the context of self-supervision in sequential disentanglement of arbitrary data, which the citing paper leverages in the design of a simple framework for sampling good positive and negative samples in the context of sequential data."}, {"Category": "Methodological Basis", "Citation": "(Bai et al., n.d.)", "Explanation": "The cited work is not specified in the context of the citing paper, but it is likely to have contributed to the field of self-supervision in sequential disentanglement of arbitrary data, which the citing paper may have built upon in the design of a simple framework for sampling good positive and negative samples in the context of sequential data."}, {"Category": "Methodological Basis", "Citation": "(Yingzhen & Mandt, 2018)", "Explanation": "The cited work provides a framework for probabilistic modeling that the citing paper adopts to structure the research on finding posterior distributions of static and dynamic latent representations."}, {"Category": "Methodological Basis", "Citation": "(Bai et al., 2021)", "Explanation": "The cited work offers a method for modeling the relationship between static and dynamic factors in time series data, which the citing paper builds upon to develop a framework for finding posterior distributions of these factors."}, {"Category": "Supporting Evidence", "Citation": "(Locatello et al., 2019)", "Explanation": "The cited work provides a theoretical result that shows the impossibility of unsupervised disentanglement in models without inductive biases, which supports the claim in the citing paper that the issue of non-informative latent factors in VAE models is a real challenge in sequential disentanglement tasks."}, {"Category": "Methodological Basis", "Citation": "(Zhu et al., 2020)", "Explanation": "The cited work by Zhu et al. 
provides a method for augmenting a model with mutual information terms to improve the relation in pairs of data and minimize the relation of (s, d 1:T ) in the disentanglement model."}, {"Category": "Extension or Continuation", "Citation": "(Han et al., 2021)", "Explanation": "The cited work by Han et al. extends the method of augmenting a model with mutual information terms to further improve the relation in pairs of data and minimize the relation of (s, d 1:T ) in the disentanglement model."}, {"Category": "Methodological Basis", "Citation": "(Bai et al., 2021)", "Explanation": "The cited work by Bai et al. provides a formal way of introducing mutual information terms to maximize the relation in pairs of data and minimize the relation of (s, d 1:T ) in the disentanglement model."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2018)", "Explanation": "The cited work introduces the mini-batch weighted sampling (MWS) approach, which the citing paper adopts as a standard method for estimating MI terms in the log-likelihood of the model."}, {"Category": "Extension or Continuation", "Citation": "(Chen et al., 2020a)", "Explanation": "The cited work provides a temperature parameter (\u03c4 = 0.5) for measuring the similarity between examples in the function \u03d5(u, v), which the citing paper extends by using this parameter in the infoNCE estimation of MI terms."}, {"Category": "Methodological Basis", "Citation": "(Oord et al., 2018)", "Explanation": "The cited work by Oord et al. provides a theoretical basis for the contrastive estimation method used in the citing paper, which is important for understanding the model architecture and objective function."}, {"Category": "Methodological Basis", "Citation": "(Zhu et al., 2020)", "Explanation": "The cited work by Zhu et al. provides a method for obtaining positive and negative views of s and d 1:T for a given input x 1:T, which the citing paper adopts in their sequential disentanglement framework for contrastive estimation."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2020a)", "Explanation": "The cited work by Chen et al. discusses the use of data augmentations and random sampling for unsupervised learning, which the citing paper builds upon in their sequential contrastive learning framework."}, {"Category": "Methodological Basis", "Citation": "(Tian et al., 2020)", "Explanation": "The cited work by Tian et al. highlights the importance of data augmentations and random sampling in unsupervised learning, which the citing paper further explores in their sequential contrastive learning approach."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2020b)", "Explanation": "The cited work by Chen et al. (2020b) provides a study that shows the impact of the particular choice of data augmentation (DA) on the results, which the citing paper adopts to support the claim that the choice of DA can significantly affect the results."}, {"Category": "Methodological Basis", "Citation": "(Tian et al., 2020)", "Explanation": "The cited work by Tian et al. 
(2020) also contributes to the study of the impact of DA on the results, which the citing paper uses to further support the claim of the significant effect of DA on the results."}, {"Category": "Methodological Basis", "Citation": "(Zhang & Ma, 2022)", "Explanation": "The cited work by Zhang and Ma (2022) provides a study on the effect of frame shuffling in healthcare applications involving data with vital measurements of a patient, which the citing paper uses to highlight the potential issues with data augmentation in critical healthcare applications."}, {"Category": "Methodological Basis", "Citation": "(Doersch & Zisserman, 2017)", "Explanation": "The cited work by Doersch and Zisserman (2017) is used as a reference for the common method of selecting random samples from a dataset in the context of data augmentation and negative view construction."}, {"Category": "Methodological Basis", "Citation": "(Chuang et al., 2020)", "Explanation": "The cited work by Chuang et al. (2020) is mentioned to highlight the need to address sampling bias in false negative views in the context of data augmentation and negative view construction."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2020a)", "Explanation": "The cited work by Chen et al. (2020a) is discussed in the context of increasing batch sizes to reduce sampling bias in false negative views in the context of data augmentation and negative view construction."}, {"Category": "Methodological Basis", "Citation": "(Wu et al., 2018)", "Explanation": "The cited work by Wu et al. (2018) is mentioned to discuss the use of a memory bank to address sampling bias in false negative views in the context of data augmentation and negative view construction."}, {"Category": "Methodological Basis", "Citation": "(Kalantidis et al., 2020)", "Explanation": "The cited work by Kalantidis et al. (2020) provides a framework for understanding the distinction between soft and hard samples in the context of views, which the citing paper leverages to improve the variation in the training process."}, {"Category": "Methodological Basis", "Citation": "(Robinson et al., 2020)", "Explanation": "The cited work by Robinson et al. 
provides a benchmark for testing the effectiveness of sequential disentanglement frameworks on a different data modality, which the citing paper adopts in their research to evaluate the performance of their method."}, {"Category": "Methodological Basis", "Citation": "(Yingzhen & Mandt, 2018)", "Explanation": "The cited work by Yingzhen and Mandt contributes a benchmark for testing the effectiveness of sequential disentanglement frameworks on speaker verification tasks, which the citing paper uses to assess the performance of their method in this context."}, {"Category": "Methodological Basis", "Citation": "(Chenafa et al., 2008)", "Explanation": "The cited work by Chenafa et al. provides a method for calibrating the threshold \u03f5 in the EER metric, which the citing paper uses to assess the performance of their approach in distinguishing between different speakers."}, {"Category": "Methodological Basis", "Citation": "(Tonekaboni et al., 2022)", "Explanation": "The cited work provides a setup for evaluating the latent representations learned by the citing paper, which the citing paper adopts to study the performance of their method in a non-stationary dataset context."}, {"Category": "Methodological Basis", "Citation": "(Tonekaboni et al., 2022)", "Explanation": "The cited work provides a detailed evaluation setup and tasks for the experimental setup in the citing paper, which serves as a methodological basis for the research conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Razavi et al., 2019)", "Explanation": "The cited work by Razavi et al. provides a recent VAE model that can be integrated into the citing paper to improve the quality of reconstruction and enhance disentanglement abilities."}, {"Category": "Supporting Evidence", "Citation": "(Vahdat & Kautz, 2020)", "Explanation": "The cited work by Vahdat and Kautz also provides a VAE model that can be used to improve the quality of reconstruction and enhance disentanglement abilities in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2020a)", "Explanation": "The cited work by Chen et al. 
provides a method for generating positive dynamic examples in C-DSVAE, which the citing paper adopts to improve the quality of the views generated in the sequential disentanglement setting."}, {"Category": "Data Source", "Citation": "(Reed et al., 2015)", "Explanation": "The Sprites dataset is introduced in this work and serves as the primary data source for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Aifanti et al., 2010)", "Explanation": "The cited work provides a facial expression dataset that the citing paper uses in their research, including image sequences of subjects displaying different facial expressions and a method for creating sequences of length 15."}, {"Category": "Data Source", "Citation": "(Garofolo et al., 1992)", "Explanation": "The cited work introduces the TIMIT dataset, which the citing paper uses for acoustic-phonetic research and other speech tasks, including read speech and a total of 630 speakers."}, {"Category": "Data Source", "Citation": "(Yingzhen & Mandt, 2018)", "Explanation": "The cited work provides the pre-processing procedure for extracting features from the audio data used in the citing paper."}, {"Category": "Data Source", "Citation": "(Materzynska et al., 2019)", "Explanation": "The Jester dataset is cited as a source of labeled video segments for the hand gesture recognition task in the citing paper."}, {"Category": "Data Source", "Citation": "(Materzynska et al., 2019)", "Explanation": "The Jester dataset is cited again to highlight the use of specific hand gestures in the video segments for the hand gesture recognition task in the citing paper."}, {"Category": "Data Source", "Citation": "(Goldberger et al., 2000)", "Explanation": "The cited work provides the Physionet ICU Dataset, which the citing paper utilizes as a medical time series corpus for their research on in-hospital mortality."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2017)", "Explanation": "The cited work provides the UCI Beijing Multi-site Air Quality dataset, which is used in the citing paper for pre-processing and analysis of air quality measurements from multiple sites in Beijing."}, {"Category": "Methodological Basis", "Citation": "(Paszke et al., 2019)", "Explanation": "The cited work by Paszke et al. (2019) provides the Pytorch framework that the citing paper uses to implement the models in their research."}, {"Category": "Data Source", "Citation": "(Zhu et al., 2020)", "Explanation": "The cited work by Zhu et al. (2020) serves as the source of the image model architecture that the citing paper adopts in their research."}, {"Category": "Extension or Continuation", "Citation": "(Tab. 5)", "Explanation": "The cited work in Tab. 5 extends the research by providing a description of the encoder and decoder architecture for the image model used in the study."}, {"Category": "Data Source", "Citation": "(Tab. 6)", "Explanation": "The cited work in Tab. 6 provides the dimension values for the dynamic latent distribution variables in the image model, which the citing paper uses in their research."}, {"Category": "Methodological Basis", "Citation": "(Yingzhen & Mandt, 2018)", "Explanation": "The architecture of the TIMIT dataset model is adopted from a previous work, which provides a methodological basis for the research conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Zhu et al., 2020)", "Explanation": "The research conducted in the citing paper extends the work of Zhu et al. 
(2020) by exploring new dimensions, contexts, or variables in the field of time series datasets."}, {"Category": "Extension or Continuation", "Citation": "(Bai et al., 2021)", "Explanation": "The research in the citing paper further builds upon the work of Bai et al. (2021) by continuing to study the use of time series datasets in various contexts and applications."}, {"Category": "Methodological Basis", "Citation": "(Tonekaboni et al., 2022)", "Explanation": "The cited work provides the architecture of the model used in the citing paper, including the use of an LSTM with a hidden size of 32, followed by two linear layers and a ReLU activation. The citing paper adopts this model as the basis for their own research."}, {"Category": "Supporting Evidence", "Citation": "(Bai et al., 2021)", "Explanation": "The cited work (Bai et al., 2021) is used to establish a baseline for comparison in the citing paper. The comparison is made in terms of the gap metric between the two models, and the results show that the model in the citing paper performs better than the baseline model."}]
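The citation_data field above is stored as a JSON-encoded string of {Category, Citation, Explanation} records (see the schema at the head of this dump). A small sketch for loading one such string and tallying its category labels is shown below; the function name and the use of json.loads are assumptions about how the field would be consumed, not part of the dataset itself.

```python
import json
from collections import Counter

def citation_category_counts(citation_data_str: str) -> Counter:
    """Parse a citation_data string into records and count its Category labels."""
    records = json.loads(citation_data_str)          # list of {Category, Citation, Explanation} dicts
    return Counter(rec["Category"] for rec in records)

# toy usage with a miniature record in the same format as above
example = '[{"Category": "Methodological Basis", "Citation": "(Doe, 2020)", ' \
          '"Explanation": "Provides the estimator adopted here."}]'
print(citation_category_counts(example))   # Counter({'Methodological Basis': 1})
```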
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b8", "b9", "b10", "b12", "b13", "b14", "b16", "b10", "b13", "b18", "b19", "b20", "b21", "b22" ], "table_ref": [], "text": "Bayesian optimisation (BO) techniques are sample-efficient sequential model-based solvers optimising expensive-to-evaluate black-box objectives. Traditionally, BO methods operate in isolation focusing on one task at a time, a setting that led to numerous successful applications, including but not limited to hyperparameter tuning [1], drug and chip design [2], and robotics [3]. Although successful, focusing on only one objective in isolation may increase sample complexities on new tasks since standard BO algorithms work tabula-rasa as they start optimising from scratch, ignoring previously observed black-box functions.\nTo improve sample efficiency on new target tasks, meta-BO makes use of data collected on related source tasks [4] and attempts to transfer knowledge in between. Those methods are primarily composed of two parts: 1) a (meta) surrogate model predicting the function to optimise and 2) a (meta) acquisition function (AF) estimating how good a queried location is. Regarding surrogate modelling, prior work relies on Gaussian processes (GPs) to perform transfer across tasks due to their data efficiency [5][6][7][8][9], or focuses on deep neural networks [10][11][12] gaining from their representation flexibility. As for acquisition functions, previous works assumed a fixed GP and trained a neural network to perform transfer [13,14].\nAlthough successful, those methods rely on the assumption that the two steps described above can be handled independently, potentially missing benefits from considering both steps together. Therefore, this paper advocates for an end-to-end training protocol for meta-BO where we train a model predicting AF values directly from observed data, without relying on a GP surrogate.\nAs shown in [10,12,15,16], Neural processes (NP) are a good candidate for meta-learning due to their structural properties, thus we propose to use a new transformer-based NP to model the acquisition function directly [17].\nNevertheless, the lack of labelled AF data prohibits a supervised learning protocol. Here, we rely on reward functions to assess the goodness of AFs and formulate a new reinforcement learning (RL) problem. Our RL formulation attempts to learn an optimal policy that selects new evaluation probes by minimising per-task cumulative regrets.\nEarly on, we faced very unstable training with minimal learning due to the combination of RL with transformers [18]. Furthermore, we notice that this problem amplifies when the reward function is sparse, which is the case in our setting, as we formally show in Section 3.2. To mitigate this difficulty, we introduce an inductive bias in our transformer architecture via an auxiliary loss [19] that guides a part of the network to learn a valid probabilistic model, effectively specialising as a neural process with gradient updates. Having developed our framework, we demonstrate state-of-the-art regret performance in hyperparameter optimisation of real-world problems and combinatorial sequence optimisation for antibody and chip design (see Section 4). 
Those solid empirical results show the value of end-to-end training in meta-BO to further improve sample complexity.\nContributions We summarise our contributions as follows: i) developing the first end-to-end transformer-based architecture for meta-BO predicting acquisition function values, ii) identifying logarithmic reward sparsity patterns hindering end-to-end learning with RL, iii) introducing NP-based inductive biases for successful model updates, and iv) demonstrating new state-ofthe-art regret results on a wide range of traditional (hyperparameter tuning) and non-traditional (chip and antibody design) real-world benchmarks. Our official code repository can be found at https://github.com/huawei-noah/HEBO/tree/master/NAP." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Bayesian Optimisation", "publication_ref": [ "b23", "b24" ], "table_ref": [], "text": "In BO, we sequentially maximise an expensive-to-evaluate black-box function f (x) for variables x ∈ X . BO techniques operate in two steps. First, based on previously collected evaluation data, we fit a probabilistic surrogate model to emulate f (x) allowing us to make probabilistic predictions of the function's behaviour on unobserved input points. Given the first step's probabilistic predictions, the second step in BO optimises an AF that trades off exploration and exploitation to find a new input query, x new ∈ X , for evaluation. Both the model and acquisition choices play a critical role in the success of BO. Many works adopt GPs as surrogate models due to their sample efficiency and practical uncertainty estimation [20]. Moreover, on the acquisition side, the common practice is to optimise one from a set of widely adopted AFs such as Expected Improvement (EI) [21]." }, { "figure_ref": [], "heading": "Transfer in Bayesian Optimisation", "publication_ref": [ "b3", "b14", "b3", "b4", "b14", "b25", "b26", "b27", "b28", "b29", "b4", "b14", "b16", "b23", "b30", "b31", "b5", "b4", "b14", "b18", "b10", "b33", "b35", "b37", "b20", "b12", "b13", "b12" ], "table_ref": [], "text": "Transfer techniques in BO [4,13] aim to improve the optimisation of a newly observed target blackbox function by reusing knowledge from past experiences gathered in related source domains, i.e. solve max x∈X f Target (x) by leveraging information from K source black-boxes f 1 (x), . . . , f K (x). We assume this information is available to our agent via D 1 , . . . , D K such that each dataset\nD k = {⟨x (k) i , y (k) i ⟩} n (k) i=1 consists of n (k) (noisy) evaluations of f k (x) for all k ∈ [1 : K].\nFollowing the meta-learning literature [4,5,13], we impose no additional assumptions on the process that collects D 1 , . . . , D K , allowing for a diverse set of source optimisation algorithms, including but not limited to BO, genetic and evolutionary algorithms [22], and sampling-based strategies [23]. Many algorithms that use source data to improve performance on target domains exist. We categorise those based on the part they customise within the BO pipeline, i.e., by registering if they affect initial points [24,25], search spaces [26], surrogate models [5] or AFs [13,14]. 
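As a concrete reference for the acquisition step described in the background above, a minimal sketch of the Expected Improvement criterion under a Gaussian posterior is given below. The closed form assumes a maximisation problem with posterior mean mu and standard deviation sigma; it is a generic textbook formulation rather than the implementation used in the paper.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y, xi=0.0):
    """EI(x) = E[max(f(x) - best_y - xi, 0)] under a Gaussian posterior N(mu, sigma^2)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    improve = mu - best_y - xi
    z = np.where(sigma > 0, improve / np.maximum(sigma, 1e-12), 0.0)
    ei = improve * norm.cdf(z) + sigma * norm.pdf(z)
    # with zero predictive variance, EI collapses to the plain improvement
    return np.where(sigma > 0, ei, np.maximum(improve, 0.0))

# toy usage: the candidate with higher uncertainty receives extra exploration credit
print(expected_improvement(mu=[0.4, 0.4], sigma=[0.01, 0.30], best_y=0.5))
```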
Since our work devises end-to-end meta-BO pipelines, we now elaborate on critical surrogate models needed for the rest of the paper and leave a detailed presentation of related work to Section 5.\nAlthough GPs [20] are a prominent tool for single-task BO, high computational complexity [27], among other drawbacks (e.g. smoothness assumptions or limited scalability in terms of dimensions), can limit their application to transfer scenarios. Of course, one can generalise GPs to support transfer using multi-task kernels [28] or GP ensembles [6], for example. However, recent trends demonstrated superior performance to such extensions when using deep networks (e.g., neural processes) to meta-learn probabilistic surrogates [5,13].\nNeural Processes A popular method for effective meta-learning consists of directly inputting context points (i.e., available data to adjust to a target task) for a model to adapt to unseen tasks [15]. We can accomplish such a goal by relying on neural processes (NPs), a class of models that combine the flexibility of deep neural networks with the properties of stochastic processes [10]. Given an observation set of input-output pairs D obs and a set of n pred locations x (pred) at which we desire to make predictions, an NP parameterised by θ outputs a distribution p θ (•|x (pred) , D obs ) that approximates the true posterior over the labels y (pred) . Among the different parameterisations of p θ (•), e.g., in [29][30][31], transformer architectures [17] recently emerged as compelling choices of NPs [11,12]. In this work, we also adopt a transformer architecture inspired by this architecture's broad software support and state-of-the-art supervised learning results as reported in [11]." }, { "figure_ref": [], "heading": "Reinforcement Learning", "publication_ref": [ "b39", "b40" ], "table_ref": [], "text": "In Section 3, we attempt to learn an end-to-end model that predicts acquisition values. Of course, we cannot fit the parameters of such a model with supervised learning due to the lack of labelled acquisition data. RL [32] is a viable alternative for learning from delayed and non-differentiable reward signals in those cases. In RL, we formalise problems as Markov decision processes (MDP): M = ⟨S, A, P, R, γ⟩, where S and A denote the state and action spaces, P : S × A × S → [0, 1] the state transition model, R the reward function dictating the optimisation goal, and γ ∈ [0, 1) a discount factor specifying the degree to which rewards are discounted over time. A policy π : S × A → [0, 1] is an action-selection rule that is defined as a probability distribution over state-action pairs, where π(a t |s t ) represents the probability of selecting action a t in state s t . An RL agent aims to find an optimal policy π ⋆ that maximises (discounted) expected returns. For determining π ⋆ , we use the state-of-the-art proximal policy optimisation (PPO) algorithm [33].\n3 End-to-End Meta-Bayesian Optimisation While transfer techniques in BO saw varying degrees of success in many applications, current approaches lack end-to-end differentiability, where model learning and acquisition discovery arise as two separate steps. 
Specifically, the surrogate model gradients hardly affect the acquisition network, and the acquisition network's gradients fail to back-propagate to the surrogate's updates.
Enabling end-to-end transfer frameworks in which we learn the surrogate and acquisition jointly holds the promise of more scalable and easier-to-deploy algorithms that are more robust to input data or task changes. Following such a framework, we also expect more accurate predictions that can lead to better regret results (see Section 4) since we optimise the entire transfer pipeline, including the intermediate probabilistic model and AF representations. Additionally, end-to-end training techniques allow us to mitigate the need for domain-specific expertise and permit stable implementations that benefit fully from GPU and computing hardware.
The most straightforward way to enable end-to-end training in Bayesian optimisation is to introduce a deep network that acquires search variables and historical evaluations of black-box functions as inputs and outputs acquisition values after a set of nonlinear transformations. Of course, it is challenging to fit the weights of such a network due, in part, to a lack of labelled acquisition data where search variables and history of evaluations are inputs and acquisition values are labels." }, { "figure_ref": [], "heading": "Reinforcement Learning for End-to-End Training", "publication_ref": [ "b41", "b14", "b16", "b14", "b14", "b14" ], "table_ref": [], "text": "Our approach utilises RL to fit the network's parameters θ from minimal supervision, circumventing the need for labelled acquisition data. To formalise the RL problem, we introduce an MDP where:
State: s_t = [H_t, t, T] (history, BO time-step & budget),  Action: a_t = x_t (choice of new probe),
with H_t = {⟨x_1, y_1⟩, . . . , ⟨x_{t-1}, y_{t-1}⟩} denoting the history of black-box evaluations up to the current time-step t. Adding the current BO step t and the maximum budget T in our state variable s_t helps balance exploration versus exploitation trade-offs as noted in [34]. Our MDP's transition function is straightforward, updating H_t by appending newly evaluated points, i.e., H_{t+1} = H_t ∪ {⟨x_t, y_t⟩}, and incrementing the time variable t. Regarding rewards, we follow the well-established literature [13,14] and define r_t = max_{1≤ℓ≤t} y_ℓ to correspond to simple regret. Given such an MDP, our agent attempts to find a parameterised policy π_θ which, when conditioned on s_t, proposes a new probe x_t that minimises cumulative regret (i.e., the sum of total discounted simple regrets).
Extensions to Multi-Task Reinforcement Learning The above MDP describes an RL configuration that learns θ in a single BO task. We now extend this formulation to multi-task scenarios allowing for a meta-learning setup. To do so, we introduce a set of MDPs M_1, . . . , M_K with K being the total number of available tasks. This paper considers same-domain multi-task learning scenarios. As such, we assume that all MDPs share the same state and action spaces and leave cross-domain extensions as an interesting avenue for future work. We define each MDP, M_k, as previously introduced such that:
States: s_t^{(k)} = [H_t^{(k)}, t^{(k)}, T^{(k)}],   Actions: a_t^{(k)} = x_t^{(k)},   Rewards: r_t^{(k)} = max_{1≤ℓ≤t} y_ℓ^{(k)}   ∀k.
Moreover, for each task k, the transition model updates task-specific histories with H_{t+1}^{(k)} = H_t^{(k)} ⊔ {⟨x_t^{(k)}, y_t^{(k)}⟩} and increments t^{(k)}.
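To make the MDP above concrete, the sketch below implements the per-task environment dynamics just described (identical for every task k): the state carries the history and budget, the transition appends the new evaluation, and the reward is the best value observed so far. The gym-free interface and the class name BOTaskEnv are illustrative assumptions, not the paper's code.

```python
class BOTaskEnv:
    """Single-task BO environment: state (H_t, t, T), reward r_t = max_{l<=t} y_l."""

    def __init__(self, black_box, budget):
        self.f, self.T = black_box, budget

    def reset(self):
        self.history, self.t = [], 0
        return (tuple(self.history), self.t, self.T)           # s_0

    def step(self, x):
        y = self.f(x)                                          # evaluate the black box
        self.history.append((x, y))                            # H_{t+1} = H_t U {(x, y)}
        self.t += 1
        reward = max(yi for _, yi in self.history)             # best-so-far (simple-regret) reward
        done = self.t >= self.T
        return (tuple(self.history), self.t, self.T), reward, done

# toy usage on a 1-D quadratic with budget 3
env = BOTaskEnv(black_box=lambda x: 1.0 - (x - 0.3) ** 2, budget=3)
state = env.reset()
for x in (0.0, 0.5, 0.3):
    state, reward, done = env.step(x)
print(reward)   # best value found so far
```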
Contrary to the single task setup, we now seek a policy π θ which performs well on average across K tasks:\narg max π θ J(π θ ) = arg max π θ E k∼ptasks   E H (k) T (k) ∼pπ θ   T (k) t=1 γ t-1 r (k) t     ,(1)\nwhere p tasks denotes the task distribution. Furthermore, the per-task history distribution p π θ (H\n(k) T (k)\n) is jointly parameterised by θ and defined as:\np π θ H (k) T (k) = p y (k) T (k) -1 x (k) T (k) -1 π θ x (k) T (k) -1 |H (k) T (k) -1 . . . p y (k) 1 |x (k) 1 µ 0 x (k) 1 , (2) with µ 0 x (k) 1\ndenoting an initial (action) distribution from which we sample the first decision\nx (k)\n1 . While it appears that off-the-shelf PPO can tackle the problem in Equation 1, Section 3.2 details difficulties associated with such an RL formulation, noting that in BO situations, the sparsity of reward signals can impede the learning of the network's weights θ. Before presenting those arguments, we now differentiate from the closest prior art, clarifying critical MDP differences.\nConnection & Differences to [13] Although we are the first to propose end-to-end multi-task MDPs for meta-BO, others have also used RL to discover AFs, for example. Our parameterisation architecture and MDP definition significantly differ from the prior art, particularly from [13], the closest method to our work. Our approach uses a transformer-based deep network model to parameterise the whole pipeline (see Section 3.4). In contrast, the work in [13] assumes a pre-trained fixed Gaussian process surrogate and uses multi-layer perceptrons only to discover acquisitions. Our state variable requires historical information from which we jointly learn a probabilistic model and an acquisition end-to-end instead of requiring posterior means and variances of a Gaussian process model. Hence, our framework is more flexible, allowing us to model non-smooth black-box functions while overcoming some drawbacks of GP surrogates, like training and inference times." }, { "figure_ref": [], "heading": "Limitations of Regret Rewards in End-to-End Training", "publication_ref": [ "b3", "b14", "b21" ], "table_ref": [], "text": "To define Equation 1, we followed the well-established literature of meta-BO [4,13] and utilised simple regret reward functions. Although this choice is reasonable, we face challenges in applying such rewards in end-to-end training. Apart from difficulties associated with end-to-end training of deep architectures [18], our RL algorithm is subject to additional complexities when estimating gradients from Equation 1 due to the sparsity of the reward function. To better understand this problem, we start by noticing that for a reward component r (k) t to contribute to the cumulative summation t γ t-1 r (k) t , we need to observe a function value y (k) t that outperforms all values we have seen so far, i.e., y\n(k) t > max 1≤ℓ<t y (k)\nℓ . During the early training stages of RL, we can quantify the average number of such informative events (when y\n(k) t > max 1≤ℓ<t y (k)\nℓ ) by a combinatorial argument that frames this calculation as a calculation of the number of cycles in a permutation of T (k) elements, leading us to the following lemma. Lemma 3.1. Consider a task with a horizon length (budget) T , and define r t = max 1≤ℓ≤t y t the simple regret as introduced in Equation 1. For a history H T , let m H denote the total number of informative rewards, i.e. the number of steps t at which y t > max 1≤ℓ<t y ℓ . 
Under a random policy π_θ, the number of informative events is logarithmic in T such that:

\mathbb{E}_{H \sim p_{\pi_\theta}}[m_H] = \mathcal{O}(\log T),

where p_{π_θ} is induced by π_θ as in Equation 2.
We defer the proof of Lemma 3.1 to Appendix A due to space constraints. Here, we note that this result implies that the information contained in one sampled trajectory is sparse at the beginning of RL training when the policy acts randomly. Of course, this increases the difficulty of estimating informative gradients of Equation 1 when updating θ. One can argue that the sparsity described in Lemma 3.1 only holds under random policies during the early stages of RL training and that sparsity patterns decrease as policies improve. Interestingly, simple regret rewards do not necessarily confirm this intuition. To realise this, consider the other end of the RL training spectrum in which policies have improved to near optimality such that π_θ → π_{θ*}. Because π_θ has been trained to minimise regret, it will seek to suggest the optimal point of the current task as early as possible in the BO trajectory. Consequently, the policy is encouraged to produce trajectories with even sparser rewards during later training stages, further complicating the problem of informative gradient estimates of Equation 1." }, { "figure_ref": [], "heading": "Inductive Biases and Auxiliary Tasks", "publication_ref": [ "b39", "b42", "b43", "b44", "b45", "b22", "b46", "b47", "b12", "b48" ], "table_ref": [], "text": "Learning from sparse reward signals is a well-known difficulty in the reinforcement learning literature [32]. Many solutions, from imitation learning [35] to exploration bonuses [36], improve reward signals to reduce agent-environment interactions and enhance gradient updates. Others [37] attempt to define more informative rewards from prior knowledge or via human interactions [38]. Unfortunately, both of those approaches are hard to use in BO. Indeed, manually engineering black-box-specific rewards is notoriously difficult and requires domain expertise and extensive knowledge of the source and target black-box functions we wish to optimise. Furthermore, learning from human feedback is data-intensive, conflicting with the goal of sample-efficient optimisation.
Another prominent direction demonstrating significant gains is the introduction of auxiliary tasks (losses) within RL that allow agents to discover relevant inductive biases, leading to impressive empirical successes [19,39,40]. Inspired by those results, we propose introducing an inductive bias in our method via an auxiliary supervised loss. Since we have at our disposal the collected source-task datasets on which we are training our architecture, D^{(1)}, . . . , D^{(K)}, we augment our objective such that our RL agent maximises not only rewards but also the likelihood of making correct predictions on these labelled datasets.
Supervised Auxiliary Loss Consider a source task k and suppose that we split its corresponding dataset into an observed set and a prediction set, D^{(k)} = D^{(k)}_{obs} ⊔ D^{(k)}_{pred}. We define the auxiliary loss to be exactly the log-likelihood of the function values y^{(pred)} at the prediction locations x^{(pred)} given observations D_{obs}:

L(\theta) = \mathbb{E}_{k \sim p_{\text{tasks}},\, D^{(k)}_{\text{obs}},\, D^{(k)}_{\text{pred}}} \Big[ \log p_\theta\big(y^{(\text{pred})}_k \mid x^{(\text{pred})}_k, D^{(k)}_{\text{obs}}\big) \Big].    (3)

Interestingly, this part of our model specialises as a neural process (Section 2.2) with D^{(k)}_{obs} being the history H^{(k)}_t and x^{(pred)} being the input points at which we wish to make predictions. 
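A minimal sketch of how the auxiliary objective in Equation 3 could be estimated on one source task is given below: the dataset is split uniformly at random into observed and prediction parts, and the model is scored by its log-likelihood on the held-out part. The model.log_prob interface, the PyTorch framing and the 50/50 split are assumptions used for illustration, not the paper's implementation.

```python
import torch

def auxiliary_nll(model, xs, ys, obs_fraction=0.5):
    """Monte-Carlo estimate of -L(theta) (Eq. 3) on one source-task dataset (xs, ys)."""
    n = xs.shape[0]
    perm = torch.randperm(n)                       # random i.i.d. split of the task data
    n_obs = max(1, int(obs_fraction * n))
    obs_idx, pred_idx = perm[:n_obs], perm[n_obs:]
    # assumed interface: log p_theta(y_pred | x_pred, D_obs), one value per prediction point
    log_probs = model.log_prob(
        x_pred=xs[pred_idx], y_pred=ys[pred_idx],
        x_obs=xs[obs_idx], y_obs=ys[obs_idx],
    )
    return -log_probs.mean()                       # minimise the negative log-likelihood

# schematic usage alongside the RL objective:
#   loss = rl_loss + lam * auxiliary_nll(model, xs, ys); loss.backward()
```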
To represent p_θ(·), we use a head of our transformer to predict multi-modal Riemannian (or bar-plot probability density function) posteriors as in [11,41]. We compute Equation 3 on random i.i.d. splits of D^{(k)} and not directly on trajectories generated by the policy. This is because, as the policy improves, the trajectories it generates are composed of less diverse (non-i.i.d.) points. Indeed, as training progresses, π_θ becomes better and therefore finds the optimal point in D^{(k)} more rapidly. To fully take advantage of the labelled data at our disposal, we evaluate this auxiliary loss on i.i.d. data, i.e. splits of D^{(k)} into D^{(k)}_{obs} and D^{(k)}_{pred} sampled uniformly at random. It is sensible from a BO standpoint as well, since we want the introduced inductive bias to encourage our network to maximise the likelihood not only in a neighbourhood of the optimiser but also in the rest of the dataset." }, { "figure_ref": [], "heading": "Neural Acquisition Processes", "publication_ref": [ "b12", "b13" ], "table_ref": [], "text": "We now introduce our transformer architecture in more detail and note some of its important properties. We call this architecture the neural acquisition process (NAP) because it is a new type of NP that jointly predicts AF values and a distribution over actions. Similarly to other NPs [11,12], it takes a context and queried locations as input. We parameterise it by θ and denote the acquisition prediction by α_θ(x^{(pred)}, H, t, T)." }, { "figure_ref": [], "heading": "Algorithm 1 Neural Acquisition Process training.", "publication_ref": [ "b14", "b50", "b12", "b51" ], "table_ref": [ "tab_0" ], "text": "Require: source-task training data {D^{(k)}}_{k=1}^{K}, initial parameters θ, budgets T^{(k)} ≡ T, discount factor γ, learning rate η
for each epoch do
    select task k and dataset D^{(k)}, set H_0 = {∅}
    for t = 1, . . . , T do
        x_t ∼ π_θ(·|s_t)                         ▷ predict action
        y_t = f^{(k)}(x_t)                       ▷ execute action
        r_t = y*_{≤t}                            ▷ collect reward
        H_{t+1} ← H_t ∪ {(x_t, y_t)}             ▷ update history
    end for
    R = Σ_{t=1}^{T} γ^t r_t                      ▷ cumulative reward
    D^{(k)} → D_{obs} ⊔ D_{pred}                 ▷ split source data
    L = p_θ(y^{(pred)} | x^{(pred)}, D_{obs})    ▷ auxiliary loss
    θ ← θ + η(∇_θ R + ∇_θ L)                     ▷ update θ
end for
Since α_θ outputs real numbers, we still need to define how we can obtain a valid probability distribution over the action space to form a policy π_θ(·|s_t). Defining such a distribution over the whole action space is hard if A is continuous. Hence, similarly to Volpp et al. [13], we evaluate the policy π_θ only on the finite set of locations x^{(pred)} for a given task. Therefore, we have:

\pi_\theta\big(x^{(\text{pred})}_t \mid s_t\big) \propto \frac{e^{\alpha_\theta(x^{(\text{pred})}_t,\, H_t,\, t,\, T)}}{\sum_{i=1}^{n_{\text{pred}}} e^{\alpha_\theta(x^{(\text{pred})}_i,\, H_t,\, t,\, T)}}.

We now have all the necessary components to define the full objective combining Equations 1 and 3: J(θ) = J(π_θ) + λL(θ), where λ is a hyperparameter balancing the two objectives. We summarise the full training algorithm in Alg. 1, detailing each of its components.
Finally, we study some desirable properties of NAPs, explain how we can achieve them, and why they are important in the context of BO.
Property 3.2 (History-order invariance). An NP g is history-order invariant if for any choice of permutation function ψ that changes the order of the points in the history, g(x, ψ(H)) = g(x, H).
Unlike vanilla transformers, we do not use a positional encoding. It allows NAP to treat the history H as a set instead of an ordered sequence [42] and to be invariant to the order of the history. 
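A lightweight way to sanity-check Property 3.2 for a trained model is to shuffle the history and compare outputs, as sketched below; the nap.acquisition call mirrors alpha_theta(x_pred, H, t, T) in signature but is an assumed interface, not the released code.

```python
import random

def check_history_order_invariance(nap, x_pred, history, t, T, tol=1e-5):
    """Empirical check of Property 3.2: alpha(x, H) should match alpha(x, shuffle(H))."""
    shuffled = list(history)
    random.shuffle(shuffled)                          # permute the ⟨x, y⟩ pairs
    a_ref = nap.acquisition(x_pred, history, t, T)    # assumed interface returning AF values
    a_perm = nap.acquisition(x_pred, shuffled, t, T)
    return all(abs(u - v) <= tol for u, v in zip(a_ref, a_perm))
```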
To form an ⟨x, y⟩ pair in this set, we sum the embeddings of x and y [11], whereas for queried locations we only use the embedding of x for a token (see Fig. 1, left). This is important for BO since the order in which we collect the points is not relevant for assessing how promising a new queried location is. Previous meta-RL algorithms [43] have also shown the importance of relying on order-invariance.
Property 3.3 (Query independence). An NP g is query independent if, for any set of queried locations x^{(pred)} = (x^{(pred)}_1, . . . , x^{(pred)}_n), we have g(x^{(pred)}, H) = (g(x^{(pred)}_1, H), . . . , g(x^{(pred)}_n, H)).
In NAP, every token in the history can attend to every other history token through the self-attention mask, whereas the elements of x^{(pred)} can only attend to themselves and to tokens in the history H (see Fig. 1, right). Because the tokens inside x^{(pred)} cannot access each other and we do not use positional encoding, NAP is query independent, which is important for making consistent predictions of AF values in BO, as they should not depend on the other queried locations. Additionally, NAP is fully differentiable, enabling end-to-end training as well as optimisation of the queried locations via gradient ascent for continuous action spaces. In Table 1, we highlight the differences with other state-of-the-art models regarding those properties.
Figure 1: Our proposed NAP architecture (left) and an example of the masks applied during inference (right). We apply independent embeddings on x_i, y_i, t and T. The colored squares mean that the tokens on the left can attend to the tokens on the top in the self-attention layer." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b52", "b4", "b14", "b48", "b12", "b14", "b4" ], "table_ref": [], "text": "We conduct experiments on hyperparameter optimisation (HPO) and sequence optimisation tasks to assess NAP's efficiency. Regarding HPO, we run our algorithm on datasets from the HPO-B benchmark [44] and in real-world settings of tuning hyperparameters for Mixed-Integer Programming (MIP) solvers. For sequence optimisation, we test NAP on combinatorial black-box problems from antibody design and synthesis flow optimisation for electronic design automation (EDA).
Baselines We compare our method against popular meta-learning baselines, including few-shot Bayesian optimisation (FSBO) [5] and MetaBO [13], as well as OptFormer [41] when applicable. Moreover, we show how classical GP-based BO, equipped with an adequate kernel and an EI acquisition function (which we title GP-EI), performs across domains. To enable a fair comparison, we first fit a GP on the meta-training datasets and initialise the GP model with those learnt kernel parameters at test time. This way, the GP baseline can benefit from the information provided in the source tasks. We also explore training a neural process directly on source task datasets and then using a fixed EI acquisition. For that, we introduce NP-EI, which combines the same base architecture from [11] with EI. Additionally, we contrast NAP against random search (RS).
Following the standard practice in meta-BO, we report our results in terms of normalised regrets for easier comparison across all tasks. 
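The sketch below shows one common way such a normalised regret curve is computed from the incumbent (best-so-far) values of a run, rescaled by the best and worst objective values known for the task; the exact normalisation used in the paper may differ, so treat this as an assumed convention rather than its evaluation code.

```python
import numpy as np

def normalised_regret(ys, y_best, y_worst):
    """Normalised simple regret after each BO step for a maximisation task.

    ys: observed objective values in query order.
    y_best / y_worst: best and worst attainable values for this task.
    """
    incumbents = np.maximum.accumulate(np.asarray(ys, dtype=float))   # best-so-far trace
    return (y_best - incumbents) / (y_best - y_worst)

# toy usage: regret shrinks to 0 once the optimum (1.0 here) is found
print(normalised_regret([0.2, 0.7, 1.0, 0.9], y_best=1.0, y_worst=0.0))
```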
We attempt to re-implement all baselines across all empirical domains, extending previous implementations as needed, e.g., developing MetaBO [13] and FSBO [5] versions for combinatorial and mixed spaces to enable a fair comparison in MIP solver tuning, antibody and EDA design tasks." }, { "figure_ref": [], "heading": "Remark on OptFormer", "publication_ref": [ "b48" ], "table_ref": [], "text": "We re-run OptFormer with an EI AF on hyperparameter tuning tasks by extending the implementation in [41] to the discrete search space version of HPO-B. However, the lack of a complete open-source implementation and interpretable meta-data in the other benchmarks (e.g., in antibody design and EDA) prohibited successful execution." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Hyperparameter Optimisation Results", "publication_ref": [ "b52", "b53", "b54", "b4" ], "table_ref": [], "text": "Hyperparameter Optimisation Benchmarks We experiment on the HPO-B benchmark [44], which contains datasets of (classification) model hyperparameters and the models' corresponding accuracies across multiple types and search spaces. Due to resource constraints, we selected six representative search spaces. Nonetheless, we chose the search spaces to represent all underlying classification models in the experiment. We also always picked the ones with the least points to focus on the low data regime performance; see Appendix C.1 for more details. The results of our tests in Figure 2 demonstrate that NAP and OptFormer outperform all other baselines. Surprisingly, although NAP uses a much smaller architecture than OptFormer (around 15 million parameters vs 250 million for OptFormer) and trains on much less data (around 80k original points on average versus more than 3 million points for OptFormer), its regret performance is statistically similar to that of the OptFormer after 100 steps. Moreover, on the same GPU, to perform the same inference, NAP only uses 2% of OptFormer's compute time and around 40% of its memory usage.\nTuning MIP Solvers Apart from HPO-B, we consider another real-world example of hyperparameter tuning that requires finding the optimal parameters of MIP solvers. We use the open-source SCIP solver [45] and the Benchmark suite from the MIPLib2017 [46] that consists of a collection of 240 problems. The objective is to find a set of hyperparameters so that SCIP can solve MIP instances in minimal time. Our high-dimensional search space comprises 135 hyperparameters with mixed types, including boolean, integers, categories and real numbers. We train our model on data collected from BO traces on 103 MIPs and test on a held-out set of 42 instances. Our results in Figure 2 demonstrate that NAP is capable of outperforming all other baselines reaching low regret about an order of magnitude faster than FSBO [5]. Figure 2 further demonstrates the importance of end-to-end training where NAP again outperforms NP-EI." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Sequence Optimisation Experiments", "publication_ref": [ "b55", "b56", "b57", "b58", "b59", "b58" ], "table_ref": [], "text": "Now, we demonstrate NAP's abilities beyond hyperparameter tuning tasks in two real-world combinatorial black-box optimisation problems.\nAntibody CDRH3-Sequence Optimisation This experiment focuses on finding antibodies that can bind to target antigens. Antigens are proteins, i.e., sequences of amino acids that fold into a 3D shape giving them specific chemical properties. 
A protein region called CDRH3 is decisive in the antibody's ability to bind to a target antigen. Following the work in [47], we represent CDRH3s as a string of 11 characters, each character being the code for a different amino acid in an alphabet of cardinality 22. The goal is to find the optimal CDRH3 that minimises the binding energy towards a specific antigen. Binding energies can be computed using state-of-the-art simulation software like Absolut! [48]. We collected datasets of CDRH3 sequences and their respective binding energies (with Absolut!) across various antigens from the protein database bank [49]. We then formed a transfer scenario across antigens where we meta-learn on 109 datasets, validate on 16, and test NAP on 32 new antigens. Our results in Figure 2 indicate that NAP is not limited to hyperparameter tuning tasks but can also outperform all other baselines in combinatorial domains.\nElectronic Design Automation (EDA) Logic synthesis (LS) is an essential step in the EDA pipeline of the chip design process. At the beginning of LS, we represent the circuit as an AIG (an And-Inverter-Graph representation of Boolean functions) and seek to map it to a netlist of technology-dependent gates (e.g., 6-input logic gates in FPGA mapping). The goal in LS is to find a sequence of graph transformations such that the resulting netlist meets an objective that trades off the number of gates (area) and the size of the longest directed path (delay) in the netlist. We perform a sequence of logic synthesis operators dubbed a synthesis flow to optimise the AIG.\nFollowing [50], we consider length 20 LS flows and allow an alphabet of 11 such operators, e.g., {refactor, resub, . . . , balance} as implemented in the open-source ABC library [51]. We collected datasets for 43 different circuits. Each dataset consisted of 500 sequences (collected via a Genetic algorithm optimizer) and their associated area and delay. Additionally, we applied the well-known heuristic sequence resyn2 on each circuit to get a reference area and delay. For this task, the black-box takes a sequence as input and returns the sum of area and delay ratios with respect to the reference ones, as detailed in Appendix B.1. We train all methods on 30 circuits from OpenABC [50], validate on 4 and test on 9. Our results in Figure 2 again demonstrate that NAP outperforms all other baselines by a significant margin. " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b18", "b60", "b10", "b62", "b19", "b13", "b12", "b63", "b16", "b64", "b65", "b48", "b4", "b6", "b8", "b9", "b5", "b66", "b4", "b16", "b14", "b3" ], "table_ref": [], "text": "Meta-Learning Paradigms Meta-learning aims to learn how to quickly solve a new task based on the experience gained from solving or observing data from related tasks [15]. To achieve this goal, we can follow two main directions. In the first one, the meta-learner uses source data to learn a good initialisation of its model parameters, such that it is able to make accurate predictions after only few optimisation steps on the new target task [52]. The second one is more ambitious as it endeavours to directly learn, from the related tasks, a general rule to predict a helpful quantity to suggest the next point (black-box value, acquisition function value, etc.) from a context of observations (i.e. our history H (k) ). 
Works following that direction, which include ours, can rely on different types of models to learn this general rule, such as NPs [10], recurrent neural networks [53], HyperNetworks [16] or transformers [12,11]. Orthogonal to these two directions is the learning of a hyper-posterior on the datasets to meta-train on. In that line of research, Rothfuss et al. [54] suggest the use of Stein Variational Gradient Descent (SVGD) to better estimate uncertainty with multiple models, and Hsieh et al. [14] extend the use of SVGD to meta-learn AFs. We consider those last two works as an orthogonal direction to ours, as NAP could also benefit from SVGD updates.
Learning a Meta-Model Chen et al. [55] and TV et al. [56] train an RNN to predict what the next suggestion in BO should be instead of predicting the acquisition function value. We do not compare to these methods given the lack of available implementations. Still, we compare to the outperforming approach developed by Chen et al. [41], based on a transformer architecture designed to do meta-BO on hyperparameter tuning problems. Their OptFormer is trained on related tasks over different search spaces to output the next point to evaluate and to predict its value. The next point is sequentially decoded, one token at a time, and exploits the hyperparameters' names or descriptions to improve generalisation across tasks. Contrary to NAP, OptFormer is not designed to predict the acquisition value at any point and does not meet Properties 3.2 and 3.3, and therefore needs much more training data to predict the proper sequence of tokens. We note that NAP does not rely on variable descriptions, making it easily deployable on various tasks while remaining very competitive in the hyperparameter optimisation context.
Rather than learning an entire predictive model from scratch, prior works [5,[7][8][9]] learn deep kernel surrogates, i.e. warping functions that map the input space with a neural network before it is given to a GP. Learning only the input transformation allows the authors to rely on the closed-form posterior prediction capacity of standard GP models. To perform the transfer, Feurer et al. [6] rely on an ensemble of GPs. The approach of Iwata [57] is close to FSBO [5] as it learns a meta-model by meta-training a Deep Kernel GP. Notably, it does so in an end-to-end fashion, using RL to propagate gradients through the acquisition function and the GP back to the deep kernel. It does not, however, learn a meta-acquisition.
Learning a Meta-Acquisition Function Hsieh et al. [14] and Volpp et al. [13] choose to perform transfer through the acquisition function. They directly use GP surrogates and define the acquisition as a neural network that is meta-trained on related source tasks. They first pre-train GP surrogates on all source tasks and fix their kernel parameters. They then rely on RL training to meta-learn the neural acquisition function, which takes as inputs the posterior mean and variance of the GP surrogate (itself trained online at test time). At test time, they allow the GP to be updated but keep the weights of the neural acquisition fixed.
In summary, the methods meta-learning AFs do not do so in an end-to-end fashion and still rely on trained GP surrogates. While these methods are principled and competitive, they suffer from the cost of inverting the GP kernel matrix (cubic in the number of observations). In comparison, NAP can make predictions through a simple forward pass.
On the other hand, the methods that learn a meta-model either use a Deep Kernel GP (suffering the same cost) or have to learn a large model from scratch, which costs a large budget for collecting data beforehand as well as pre-training time.
Both families use standard acquisition functions, missing the potential benefits of transferring the acquisition between tasks.
The performance of some of those algorithms on HPO-B is presented in Appendix C, showing that despite learning an architecture from scratch, NAP achieves a lower regret. For a more detailed survey on transfer and meta-learning in BO, we refer the reader to Bai et al. [4]." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b67", "b68" ], "table_ref": [], "text": "We proposed the first end-to-end training protocol to meta-learn acquisition functions in meta-BO. Our method predicts acquisition values directly from a history of points with meta-reinforcement learning and auxiliary losses. We demonstrated new state-of-the-art results compared to popular baselines in a wide range of benchmarks.
Limitations & Future Work Our architecture suffers from the usual quadratic complexity of the transformer in the number of tokens, which limits the budget of BO steps. Nevertheless, it can still handle around 5000 steps, which is enough for most BO scenarios. Another limitation of our architecture is that we need to train a new model for each search space. In future work, we plan to enable our method to leverage meta-training from multiple search spaces and to investigate how we could design data-driven, BO-specific augmentations to further mitigate meta-overfitting [58].
A Proof of Lemma 3.1
Let D = {(x i , y i )} n i=1 denote the set of points associated to a task, and let π be an untrained policy that collects a trajectory of length T by iteratively drawing points in D uniformly without replacement. We denote by (z 1 , . . . , z T ) the sequence of function values observed during the trajectory, and by (r 1 , . . . , r T ) the sequence of rewards obtained. We recall that r t = max 1≤ℓ≤t z ℓ , and we consider that r t (and z t ) is informative if z t = max 1≤ℓ≤t z ℓ . We want to compute the probability of obtaining exactly m informative rewards by sampling a trajectory from π. As each sequence is equiprobable, we do so by counting the number of sequences leading to m informative rewards. We assume for now that the values in {y i } n i=1 are pairwise distinct. We first note that there are $\binom{n}{T}$ ways to choose the T ≤ n points composing a trajectory of length T from D. We now consider that the set of T sampled function values is fixed (without loss of generality we denote this set V = {v ℓ } 1≤ℓ≤T ). For 1 ≤ k ≤ T , we denote by C(k, T ) the number of ways to order them such that the resulting trajectory (z 1 , . . . , z T ) contains exactly k informative values, and we will give a recurrence formula for C(k, T ).
We can see that k = 1 necessarily implies that z 1 = max 1≤ℓ≤T v ℓ . Thus there are (T -1)! ways to order the remaining elements of the trajectory {z ℓ } 2≤ℓ≤T , hence C(1, T ) = (T -1)!. On the other hand, k = T informative rewards are only obtained when the elements of V are sorted in increasing order, i.e. with (z 1 < z 2 < · · · < z T ), and therefore C(T, T ) = 1.
Finally, for 2 ≤ k < T we can establish a recurrence relation by reasoning on z T , the last element of the sequence:
• If z T = max 1≤ℓ≤T v ℓ , then z T is informative, and there remains to count the number of ways to order the first T -1 elements V \{z T } to get k -1 informative steps, which is by definition C(k -1, T -1).
• If z T = v j < max 1≤ℓ≤T v ℓ , then z T is not informative. We note that there are T -1 choices for such a v j , and for each choice of v j there remains to order V \{v j } such that the resulting sub-trajectory (z 1 , . . . , z T -1 ) has exactly k informative values. There are therefore (T -1)C(k, T -1) trajectories with k informative rewards such that z T ≠ max 1≤ℓ≤T v ℓ .
From this analysis we get that C(k, T ) = C(k -1, T -1) + (T -1)C(k, T -1). This relation, along with the boundary values C(1, T ) = (T -1)! and C(T, T ) = 1, allows us to identify C(k, T ) as the (unsigned) Stirling number of the first kind, which notably counts the number of permutations in S T made of exactly k cycles.
Finally, the probability that a trajectory of length T obtained from D has exactly k informative rewards is given by
$$\frac{\binom{n}{T}\, C(k,T)}{\binom{n}{T}\, T!} = \frac{1}{T!}\left[{T \atop k}\right],$$
which is equal to the probability of getting k cycles in a permutation sampled uniformly at random in S T . Therefore the expected number of informative rewards in a trajectory of length T is equal to the average number of cycles in a permutation sampled uniformly in S T , which is a known result [59]; it is thus equal to the T-th harmonic number H T = log T + O(1).
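As an illustrative (non-essential) numerical sanity check of Lemma 3.1, the short Python sketch below estimates, by Monte Carlo, the expected number of informative rewards in uniformly drawn trajectories and compares it to the harmonic number H T. All names, the use of integer function values and the chosen sizes are hypothetical and only serve to exercise the statement.

import random

def count_informative(values):
    # Number of steps where the reward strictly improves, i.e. z_t = max_{l<=t} z_l.
    best, count = float("-inf"), 0
    for z in values:
        if z > best:
            best, count = z, count + 1
    return count

def expected_informative(n=1000, T=50, trials=20000, seed=0):
    rng = random.Random(seed)
    pool = list(range(n))              # n pairwise-distinct function values
    total = 0
    for _ in range(trials):
        traj = rng.sample(pool, T)     # uniform draw without replacement
        total += count_informative(traj)
    return total / trials

harmonic = sum(1.0 / k for k in range(1, 51))
# expected_informative() should be close to `harmonic` (H_50 ≈ 4.5),
# in line with the logarithmic sparsity of the reward discussed in the main text.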
" }, { "figure_ref": [], "heading": "B Experimental setup", "publication_ref": [], "table_ref": [], "text": "In this section we give more details about the experimental setup. In particular, we add details about the experiment environments and baselines where needed. We also give some more intuition on the NAP training and the data augmentation scheme used. Finally, we list all important hyperparameters as well as the hardware used in our experiments." }, { "figure_ref": [], "heading": "B.1 Tasks", "publication_ref": [ "b52" ], "table_ref": [], "text": "HPO-B As mentioned in Section 4, for this experiment we choose a subset of tasks from the HPO-B benchmark set. HPO-B is a collection of HPO datasets first grouped by search space. Each search space corresponds to the hyperparameters of a particular model, e.g. SVM, XGBoost, etc. Each such search space then has multiple associated datasets split into sets for training, validation and testing. The multi-task RL setting from Section 3.1 states that we limit ourselves to MDPs sharing state and action spaces across tasks, hence we do not train NAP on multiple search spaces at the same time. We train one model per search space, being careful to choose a search space for each type of underlying model. When multiple search spaces relate to the same underlying model, we choose the search space with the least amount of total data in order to focus on the low-data regime as much as possible. We pick the following search spaces: 5860 (glmnet), 4796 (rpart.preproc), 5906 (xgboost), 5889 (ranger), 5859 (rpart), 5527 (svm). Refer to Table 3 in Pineda-Arango et al. [44] for more details.
Electronic Design Automation Following the description in Section 4, we search for the sequence of operators that optimises an objective combining two metrics associated with circuit performance, the area and the delay. The area is the number of gates in the mapped netlist, while the delay corresponds to the length of the longest directed path in the mapped netlist. As these two metrics are not directly commensurable, we normalise them by the area and delay obtained when running twice the reference sequence resyn2 [51], made of 10 operators, as follows:
$$f_{\mathrm{EDA}}(\mathrm{seq}) = \frac{\mathrm{Area}(\mathrm{seq})}{\mathrm{Area}(2 \times \mathrm{resyn2})} + \frac{\mathrm{Delay}(\mathrm{seq})}{\mathrm{Delay}(2 \times \mathrm{resyn2})},$$
where seq is a sequence of operators from ABC.
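To make this objective concrete, here is a hedged sketch of how f EDA could be wrapped in code. The evaluate callable is a placeholder for whatever interface maps a synthesis flow to the (area, delay) of the mapped netlist (e.g. a wrapper around the ABC library); its name and signature are assumptions, not an existing API.

from typing import Callable, Sequence, Tuple

def make_eda_objective(
    evaluate: Callable[[Sequence[str]], Tuple[float, float]],
    reference_flow: Sequence[str],
) -> Callable[[Sequence[str]], float]:
    """Build f_EDA(seq) = Area(seq)/Area(ref) + Delay(seq)/Delay(ref),
    where the reference flow would be resyn2 applied twice, as in the text."""
    ref_area, ref_delay = evaluate(reference_flow)

    def f_eda(seq: Sequence[str]) -> float:
        area, delay = evaluate(seq)
        return area / ref_area + delay / ref_delay

    return f_eda

# Example with a fake evaluator (real use would call into the ABC toolchain):
fake_eval = lambda seq: (1000.0 - 5.0 * len(seq), 60.0 - 0.5 * len(seq))
objective = make_eda_objective(fake_eval, reference_flow=["resyn2"] * 2)
score = objective(["refactor", "resub", "balance"])   # lower is better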
" }, { "figure_ref": [], "heading": "B.2 Baselines", "publication_ref": [ "b48" ], "table_ref": [], "text": "OptFormer The results reported by Chen et al. [41] are for the continuous HPO-B benchmark, where XGBoost models approximate the black-box functions from the discrete points. To evaluate every approach in a fairer and more robust manner, we instead focused on the original discrete setup of HPO-B, which uses only true values from the black-boxes. We relied on the shared OptFormer checkpoint trained and validated on HPO-B. We adapted the inference code for the discrete setting and noticed that the default parameters were set to NaPolicy.DROP, which drops the missing values in the HPO-B benchmark and removes the additional "na" columns. We switched it to NaPolicy.CONTINUOUS to keep every column, leading to better performance. We also had to increase the maximum number of tokens possible in a trajectory from 1024 to 2048." }, { "figure_ref": [], "heading": "B.3 NAP training", "publication_ref": [], "table_ref": [], "text": "We conduct our experiments in a consistent manner. We define training, validation and test sets that are kept the same for each method. We also ensure reproducibility by using the same random seeds and initial points across experiments. Finally, we run the tests on 10 different random seeds in all experiments, and on 5 seeds for HPO-B, as only 5 seeds are available for the other baselines." }, { "figure_ref": [], "heading": "Data augmentation", "publication_ref": [], "table_ref": [], "text": "When training on tasks with a low number of points in each dataset, we perform data augmentation using Gaussian processes. On each dataset we fit an exact GP, and during training we sample a new dataset directly from the posterior of that GP. This has an interpolating effect such that, between original data points, the GP can make predictions that are roughly realistic. Training on these augmented datasets sampled from GP posteriors is handy in practice because it helps to create datasets of the desired size (we can sample as many data points as we like) that still resemble the original one, provided we do not sample from the GP posterior too far from the original inputs. In practice, we sample input points from the original dataset, add a random uniform perturbation to them, and sample from the GP posterior at those new points (see the illustrative sketch below)." }, { "figure_ref": [], "heading": "Value function", "publication_ref": [], "table_ref": [], "text": "The value function takes as input t/T and the best y value observed in H t .
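The following sketch illustrates the GP-based augmentation described above. It uses scikit-learn's GaussianProcessRegressor as a stand-in surrogate; the kernel choice, perturbation scale and function names are illustrative assumptions, not the authors' exact implementation.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def augment_dataset(X, y, n_points, perturb=0.05, seed=0):
    """Fit an exact GP on (X, y), jitter inputs resampled from the original
    dataset, and draw new targets from the GP posterior at those inputs."""
    rng = np.random.default_rng(seed)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    idx = rng.integers(0, len(X), size=n_points)          # resample original inputs
    X_new = X[idx] + rng.uniform(-perturb, perturb, size=X[idx].shape)
    y_new = gp.sample_y(X_new, random_state=int(rng.integers(1 << 31)))
    return X_new, y_new.ravel()

# Example on a toy 2-D dataset of 30 points, augmented to 200 points.
X = np.random.rand(30, 2)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
X_aug, y_aug = augment_dataset(X, y, n_points=200)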
" }, { "figure_ref": [], "heading": "B.4 Hyperparameters", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We share in Table 2 a comprehensive list of the hyperparameters used during training and inference. More details can be found in the associated code repository. We want to underline that none of these hyperparameters were tuned. This is only fair, as we also did not optimise any hyperparameters from the other baseline methods. Hyperparameters that are shared between our method and previous baselines are simply taken as is from their respective codebases, as we assume the authors have already tuned them. We use the same hyperparameters for all experiments. Note that the ablation study in C.1 can be seen as a simple tuning of the parameter λ introduced to weight both losses in NAP.
[Figure 3 schematic: the policy π θ maps the state s t = {H t , t, T} to an action x t via p θ (•) and α θ (•); the cumulative reward R t = Σ t i=1 γ i r i and the auxiliary loss L t = p θ (y|x, H t ) both propagate gradients ∇ θ R and ∇ θ L back to θ, and the history is updated as H t+1 := H t ∪ {(x t , y t )}.]
Figure 3: Summary of our proposed Neural Acquisition Process (NAP) architecture. At iteration t = 1, . . . , T , the state consists of s t = {H t , t, T }, respectively the history of collected points, the current iteration index and the total budget. The action is sampled from the policy x t ∼ π θ (•|s t ). For a set of locations x ⊆ A, the gradients flow back to the parameters θ from both the cumulative regret returns R t and the auxiliary likelihood loss L t ." }, { "figure_ref": [], "heading": "B.5 Hardware", "publication_ref": [], "table_ref": [], "text": "We train our model on a machine with 4 Tesla V100-SXM2-16GB GPUs and an Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 88 threads, with an average training time of approximately 10 hours per experiment." }, { "figure_ref": [], "heading": "C Additional Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "C.1 Ablation", "publication_ref": [ "b69", "b69", "b14", "b12" ], "table_ref": [], "text": "In this section we perform an ablation study to answer some of the questions that arise naturally from the proposed framework. Notably, is end-to-end training useful? Does the auxiliary loss help? Is it useful to learn the acquisition function and not simply a surrogate model? We run an additional experiment on a dataset from the HPOBench benchmark [60]. For reference, we detail the methods of this ablation in Table 3 for easier comparison.
Table 3: Variations of NAP and their components. This study uses datasets of HPOBench [60] for XGBoost hyperparameters, similarly to Volpp et al. [13].
Method       p θ   α θ   RL   Supervision   End-to-end
NAP (ours)   ✔     ✔     ✔    ✔             ✔
Pre-NAP      ✔     ✔     ✔    ✔             ✘
NAP-RL       ✔     ✔     ✔    ✘             ✔
NP-EI        ✔     ✘     ✘    ✔             ✘
It consists of hyperparameter configurations and their associated accuracy of the XGBoost model on a classification task. We analyse the effects of training end-to-end with the introduced auxiliary loss to better understand the method's strengths and limitations. There are six hyperparameters (learning rate, regularisation, etc.) and 48 classification tasks. We have 1000 hyperparameter configurations evaluated for each task, together with the corresponding XGBoost model accuracy, creating 48 datasets of different black-box functions. We meta-train on 20 datasets, validate on 13 and test on 15.
Is it worthwhile to learn a meta-acquisition function? A valid question to ask is whether or not we need to meta-learn an acquisition function. There exist popular methods for meta-learning models, as discussed in the main paper, so one could use such a surrogate model and apply it directly in BO, using its posterior distribution to compute an acquisition function. Such an approach matches our baseline NP-EI, where we train a PFN [11] on the same training data and apply it directly in BO. Comparing NAP and NP-EI, Figure 4 reveals the benefits of learning the acquisition function as part of the end-to-end architecture instead of using a pre-defined expected improvement acquisition function.
Does training end-to-end using the auxiliary loss help? Training end-to-end, we check that using supervised information through the auxiliary loss helps compared to end-to-end training with only the reinforcement learning reward. We name the latter NAP-RL and show the results in Figure 4, which suggest that, indeed, the inductive bias introduced with the auxiliary loss is beneficial for downstream performance.
Is end-to-end training beneficial? To investigate this question, we first pre-train the probabilistic model part of a NAP with the supervised auxiliary loss and then use PPO to update the meta-acquisition function while keeping the rest of the weights frozen. We denote this method Pre-NAP, as the architecture is partly pre-trained. Figure 4 shows that Pre-NAP underperforms the fully end-to-end variant: training NAP jointly end-to-end with both objectives translates into improved regret at test time, validating our hypothesis in Section 3." }, { "figure_ref": [ "fig_0", "fig_3", "fig_4" ], "heading": "C.2 HPO-B per search space results", "publication_ref": [], "table_ref": [], "text": "In addition to the aggregated regret plot in Figure 2 of Section 4, we show ranks and regrets per search space.
Figure 5 shows the regret for each method per search space, aggregated over all the test datasets of that search space and across 5 random seeds. Figure 6 shows the relative rank of each method (lower is better) per search space, aggregated across all the test datasets of that search space and 5 random seeds." }, { "figure_ref": [], "heading": "C.3 Time Comparison", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "While in our BO setting we assume that querying the black-box objective is the main bottleneck (hence our focus on sample efficiency), it is also interesting to analyse the time efficiency of the algorithms to gain another perspective on the various methods. In this section we summarise some average test running times to provide a different point of view. In short, we see that even though some methods require offline pre-training (e.g. MetaBO, FSBO, NAP), the time required to evaluate points on the objective function outweighs this cost. Hence, when measuring the total running time, it makes little difference whether some methods require this pre-training.
For example, in the antibody experiment, evaluating the objective is costly. This is true both in terms of monetary and time costs, as evaluating the objective could mean manufacturing the molecule and testing it in a wet-lab experiment. In our experiments we use a simulator as a proxy. The HPO-B black-box function is also very expensive to evaluate as it relies on training and testing several models. We use result files posted on the authors' repository, which only contain black-box values for some baselines.
However, we do have at our disposal the true black-boxes for the MIP and EDA experiments. By design of the experiment, evaluating one set of hyperparameters in the MIP experiment takes 2 hours. Compared to that, the time to train a GP model or to do a forward pass in NAP at test time is negligible. On EDA, the black-box time depends on the circuit, so we approximate an average running time of 1 minute per circuit on open-source circuits, but this can take several hours on industrial circuits.
Tables 4 and 5 compare the average test time of one seed across all methods. In the first column, we can see that methods which have to fit a GP during the BO loop (FSBO, MetaBO and GP-EI) are considerably slowed down compared to methods like NAP that only do forward passes through their network. This is because fitting the GP surrogate at each BO step is time consuming, and increasingly so, as its dominant computational cost is cubic in the number of observed points.
Note also that FSBO not only fits a GP at each step but also fine-tunes the MLP of its deep kernel, hence the extra time. The second column, which takes the black-box time into account, further underlines that even though NAP is faster at test time than e.g. FSBO or GP-EI, this time gain is negligible compared to the black-box evaluations. The third column additionally takes into account the pre-training time for the methods that require it. Note that for different test functions within the same search space, we can reuse the same model for NAP, NP-EI, MetaBO and FSBO without having to redo the pre-training, so we divided the pre-training time by the number of seeds and test functions. Hence, it does not add much time to the total.
It should be underlined that this way of presenting BO results is less readable than presenting regret vs. BO steps, as the more seeds and test tasks we have, the more negligible the pre-training time becomes compared to the black-box evaluation time." }, { "figure_ref": [], "heading": "D Discussions", "publication_ref": [ "b13" ], "table_ref": [ "tab_0" ], "text": "We give a more detailed explanation of Table 1 below.
OptFormer OptFormer encodes a history as follows: a short meta-data sequence describing the variables taken as input, followed by a sequence of trials (a trial corresponds to an ⟨x, y⟩ pair). Each trial is composed of the values present in x, the value of y and a separator to mark the end of the trial, i.e. D + 2 tokens per trial. They need to use positional encoding to keep the sequence consistent (for instance, the order of the dimensions is important to identify which dimension of x a token corresponds to). Because of that, their architecture is not history-order invariant.
The query independence of their model is debatable. We can achieve it by splitting the queries across batches, but it is not doable in a single batch without substantial modifications of their code, their masks and their positional encoding. Note that splitting queries across batches results in very slow inference. To evaluate OptFormer on HPO-B in a fair manner we had to do it, and it took us more than 2 weeks to obtain the results with 16 GPUs.
Transformer NP Nguyen and Grover [12] propose several transformer-based neural process architectures. Omitting the fact that they predict a Gaussian distribution over function values and not acquisitions, their TNP-D transformer architecture is the closest to ours. It does not rely on positional encoding, hence it is history-order invariant.
However, instead of summing the embeddings of x and y as we do, they concatenate them in a fixed representation, forcing them to set y = 0 in the queries. Hence, the only way to achieve query independence is to use the same queries during training and testing. As we cannot assume we know what the queries will be during testing, Property 3.3 is not respected for arbitrary queries.
Prior Fitted Transformer PFN satisfies the two Properties 3.2 and 3.3 since it does not rely on positional encoding and sums the embeddings of x and y.
The main differences with PFN are that our input is different and that we predict AF values with reinforcement learning." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank Prof. Frank Hutter and his team for their constructive feedback and for the availability of their benchmark code and datasets. We would also like to thank Massimiliano Patacchiola for his comments during the writing phase of the paper.
This work is supported by the CSTT on Generalisable Robot Learning via Machine Learning Models 2100332-GB and by the 2030 \"New Generation of AI\" -Major Project of China under grant No. 2022ZD0116408." } ]
2023-12-22
10.1109/TPAMI.2021.3079209
[ { "authors": "Jasper Snoek; Hugo Larochelle; Ryan P Adams", "journal": "", "ref_id": "b0", "title": "Practical bayesian optimization of machine learning algorithms", "year": "2012" }, { "authors": "Nate Gruver; Samuel Stanton; Polina Kirichenko; Marc Finzi; Phillip Maffettone; Vivek Myers; Emily Delaney; Peyton Greenside; Andrew Gordon; Wilson ", "journal": "", "ref_id": "b1", "title": "Effective surrogate models for protein design with bayesian optimization", "year": "2021" }, { "authors": "Fabio Muratore; Christian Eilers; Michael Gienger; Jan Peters", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b2", "title": "Data-efficient domain randomization with bayesian optimization", "year": "2021" }, { "authors": "Tianyi Bai; Yang Li; Yu Shen; Xinyi Zhang; Wentao Zhang; Bin Cui", "journal": "", "ref_id": "b3", "title": "Transfer learning for bayesian optimization: A survey", "year": "2023" }, { "authors": "Martin Wistuba; Josif Grabocka", "journal": "", "ref_id": "b4", "title": "Few-shot bayesian optimization with deep kernel surrogates", "year": "2021" }, { "authors": "Matthias Feurer; Benjamin Letham; Eytan Bakshy", "journal": "", "ref_id": "b5", "title": "Scalable meta-learning for bayesian optimization using ranking-weighted gaussian process ensembles", "year": "2018" }, { "authors": "Massimiliano Patacchiola; Jack Turner; Elliot J Crowley; O' Michael; Amos J Boyle; Storkey", "journal": "", "ref_id": "b6", "title": "Bayesian meta-learning for the few-shot setting via deep kernels", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b7", "title": "", "year": "2020" }, { "authors": "Zi Wang; George E Dahl; Kevin Swersky; Chansoo Lee; Zachary Nado; Justin Gilmer; Jasper Snoek; Zoubin Ghahramani", "journal": "", "ref_id": "b8", "title": "Pre-trained gaussian processes for bayesian optimization", "year": "2023" }, { "authors": "Wenlin Chen; Austin Tripp; José Miguel Hernández-Lobato", "journal": "", "ref_id": "b9", "title": "Meta-learning adaptive deep kernel gaussian processes for molecular property prediction", "year": "2023" }, { "authors": "Marta Garnelo; Dan Rosenbaum; Christopher Maddison; Tiago Ramalho; David Saxton; Murray Shanahan; Yee Whye Teh; Danilo Jimenez Rezende; S M Ali Eslami", "journal": "", "ref_id": "b10", "title": "Conditional neural processes", "year": "2018" }, { "authors": " Pmlr", "journal": "", "ref_id": "b11", "title": "", "year": "2018" }, { "authors": "Samuel Müller; Noah Hollmann; Sebastian Pineda-Arango; Josif Grabocka; Frank Hutter", "journal": "", "ref_id": "b12", "title": "Transformers can do bayesian inference", "year": "2022" }, { "authors": "Tung Nguyen; Aditya Grover", "journal": "PMLR", "ref_id": "b13", "title": "Transformer neural processes: Uncertainty-aware meta learning via sequence modeling", "year": "2022-07-23" }, { "authors": "Michael Volpp; Lukas P Fröhlich; Kirsten Fischer; Andreas Doerr; Stefan Falkner; Frank Hutter; Christian Daniel", "journal": "", "ref_id": "b14", "title": "Meta-learning acquisition functions for transfer learning in bayesian optimization", "year": "2020" }, { "authors": " Openreview", "journal": "", "ref_id": "b15", "title": "", "year": "2020" }, { "authors": "Bing-Jing Hsieh; Ping-Chun Hsieh; Xi Liu", "journal": "", "ref_id": "b16", "title": "Reinforced few-shot acquisition function learning for bayesian optimization", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b17", "title": "", "year": "2021" }, { "authors": "Timothy M Hospedales; Antreas 
Antoniou; Paul Micaelli; Amos J Storkey", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b18", "title": "Meta-learning in neural networks: A survey", "year": "2022" }, { "authors": "David Ha; Andrew M Dai; V Quoc; Le; Hypernetworks", "journal": "", "ref_id": "b19", "title": "5th International Conference on Learning Representations, ICLR 2017", "year": "2017" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b20", "title": "Attention is all you need", "year": "2017" }, { "authors": "Emilio Parisotto; Francis Song; Jack Rae; Razvan Pascanu; Caglar Gulcehre; Siddhant Jayakumar; Max Jaderberg; Raphaël Lopez Kaufman; Aidan Clark; Seb Noury; Matthew Botvinick; Nicolas Heess; Raia Hadsell", "journal": "PMLR", "ref_id": "b21", "title": "Stabilizing transformers for reinforcement learning", "year": "2020-07-18" }, { "authors": "Max Jaderberg; Volodymyr Mnih; Wojciech ; Marian Czarnecki; Tom Schaul; Joel Z Leibo; David Silver; Koray Kavukcuoglu", "journal": "", "ref_id": "b22", "title": "Reinforcement learning with unsupervised auxiliary tasks", "year": "2017" }, { "authors": "Carl Edward; Rasmussen Christopher; K I Williams", "journal": "MIT press", "ref_id": "b23", "title": "Gaussian Processes for Machine Learning", "year": "2006" }, { "authors": "Jonas Mockus; Vytautas Tiesis; Antanas Zilinskas", "journal": "Towards Global Optimization", "ref_id": "b24", "title": "The application of Bayesian methods for seeking the extremum", "year": "1978" }, { "authors": "Thomas Bäck; Hans-Paul Schwefel", "journal": "Evolutionary computation", "ref_id": "b25", "title": "An overview of evolutionary algorithms for parameter optimization", "year": "1993" }, { "authors": "James Bergstra; Yoshua Bengio", "journal": "Journal of machine learning research", "ref_id": "b26", "title": "Random search for hyper-parameter optimization", "year": "2012" }, { "authors": "Matthias Feurer; Jost Tobias Springenberg; Frank Hutter", "journal": "AAAI Press", "ref_id": "b27", "title": "Initializing bayesian hyperparameter optimization via meta-learning", "year": "2015" }, { "authors": "Martin Wistuba; Nicolas Schilling; Lars Schmidt-Thieme", "journal": "Mach. 
Learn", "ref_id": "b28", "title": "Scalable gaussian process-based transfer surrogates for hyperparameter optimization", "year": "2018" }, { "authors": "Valerio Perrone; Huibin Shen", "journal": "", "ref_id": "b29", "title": "Learning search spaces for bayesian optimization: Another view of hyperparameter transfer learning", "year": "2019-12-08" }, { "authors": "Pavel Izmailov; Alexander Novikov; Dmitry Kropotov", "journal": "PMLR", "ref_id": "b30", "title": "Scalable gaussian processes with billions of inducing inputs via tensor train decomposition", "year": "2018-04" }, { "authors": "Edwin V Bonilla; Kian ; Ming Adam Chai; Christopher K I Williams", "journal": "", "ref_id": "b31", "title": "Multitask gaussian process prediction", "year": "2007" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b32", "title": "", "year": "2007" }, { "authors": "Marta Garnelo; Dan Rosenbaum; Christopher Maddison; Tiago Ramalho; David Saxton; Murray Shanahan; Yee Whye Teh; Danilo Jimenez Rezende; S M Ali Eslami", "journal": "", "ref_id": "b33", "title": "Conditional neural processes", "year": "2018" }, { "authors": " Pmlr", "journal": "", "ref_id": "b34", "title": "", "year": "2018" }, { "authors": "Jonathan Gordon; P Wessel; Bruinsma; Y K Andrew; James Foong; Yann Requeima; Richard E Dubois; Turner", "journal": "", "ref_id": "b35", "title": "Convolutional conditional neural processes", "year": "2020" }, { "authors": " Openreview", "journal": "", "ref_id": "b36", "title": "", "year": "2020" }, { "authors": "Hyunjik Kim; Andriy Mnih; Jonathan Schwarz; Marta Garnelo; S M Ali Eslami; Dan Rosenbaum; Oriol Vinyals; Yee Whye Teh", "journal": "", "ref_id": "b37", "title": "Attentive neural processes", "year": "2019-09" }, { "authors": " Openreview", "journal": "", "ref_id": "b38", "title": "", "year": "2019" }, { "authors": "Richard S Sutton; Andrew G Barto", "journal": "MIT Press", "ref_id": "b39", "title": "Reinforcement learning -an introduction", "year": "1998" }, { "authors": "John Schulman; Filip Wolski; Prafulla Dhariwal; Alec Radford; Oleg Klimov", "journal": "", "ref_id": "b40", "title": "Proximal policy optimization algorithms", "year": "2017" }, { "authors": "Niranjan Srinivas; Andreas Krause; M Sham; Matthias W Kakade; Seeger", "journal": "Omnipress", "ref_id": "b41", "title": "Gaussian process optimization in the bandit setting: No regret and experimental design", "year": "2010" }, { "authors": "Jonathan Ho; Stefano Ermon", "journal": "Advances in neural information processing systems", "ref_id": "b42", "title": "Generative adversarial imitation learning", "year": "2016" }, { "authors": "Haoran Tang; Rein Houthooft; Davis Foote; Adam Stooke; Openai Xi Chen; Yan Duan; John Schulman; Filip Deturck; Pieter Abbeel", "journal": "Advances in neural information processing systems", "ref_id": "b43", "title": "# exploration: A study of count-based exploration for deep reinforcement learning", "year": "2017" }, { "authors": "Daishi Andrew Y Ng; Stuart Harada; Russell", "journal": "Citeseer", "ref_id": "b44", "title": "Policy invariance under reward transformations: Theory and application to reward shaping", "year": "1999" }, { "authors": "Jan Paul F Christiano; Tom Leike; Miljan Brown; Shane Martic; Dario Legg; Amodei", "journal": "Advances in neural information processing systems", "ref_id": "b45", "title": "Deep reinforcement learning from human preferences", "year": "2017" }, { "authors": "Xingyu Lin; Harjatin Baweja; George Kantor; David Held", "journal": "Advances in neural information 
processing systems", "ref_id": "b46", "title": "Adaptive auxiliary task weighting for reinforcement learning", "year": "2019" }, { "authors": "Pablo Hernandez-Leal; Bilal Kartal; Matthew E Taylor", "journal": "", "ref_id": "b47", "title": "Agent modeling as auxiliary task for deep reinforcement learning", "year": "2019" }, { "authors": "Yutian Chen; Xingyou Song; Chansoo Lee; Zi Wang; Richard Zhang; David Dohan; Kazuya Kawakami; Greg Kochanski; Arnaud Doucet; Marc Aurelio Ranzato; Sagi Perel; Nando De Freitas", "journal": "", "ref_id": "b48", "title": "Towards learning universal hyperparameter optimizers with transformers", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b49", "title": "", "year": "2022" }, { "authors": "Juho Lee; Yoonho Lee; Jungtaek Kim; Adam R Kosiorek; Seungjin Choi; Yee Whye Teh", "journal": "PMLR", "ref_id": "b50", "title": "Set transformer: A framework for attention-based permutation-invariant neural networks", "year": "2019-06-15" }, { "authors": "Kate Rakelly; Aurick Zhou; Chelsea Finn; Sergey Levine; Deirdre Quillen", "journal": "PMLR", "ref_id": "b51", "title": "Efficient off-policy meta-reinforcement learning via probabilistic context variables", "year": "2019-06-15" }, { "authors": "Sebastian Pineda-Arango; S Hadi; Martin Jomaa; Josif Wistuba; Grabocka", "journal": "", "ref_id": "b52", "title": "HPO-B: A largescale reproducible benchmark for black-box HPO based on openml", "year": "2021-12" }, { "authors": "Ksenia Bestuzheva; Mathieu Besançon; Wei-Kun Chen; Antonia Chmiela; Tim Donkiewicz; Leon Jasper Van Doornmalen; Oliver Eifler; Gerald Gaul; Ambros Gamrath; Gleixner", "journal": "", "ref_id": "b53", "title": "The scip optimization suite 8", "year": "2021" }, { "authors": "Ambros M Gleixner; Gregor Hendel; Gerald Gamrath; Tobias Achterberg; Michael Bastubbe; Timo Berthold; Philipp Christophel; Kati Jarck; Thorsten Koch; Jeff T Linderoth; Marco E Lübbecke; Hans D Mittelmann; B Derya; Ted K Özyurt; Domenico Ralphs; Yuji Salvagnin; Shinano", "journal": "Math. Program. 
Comput", "ref_id": "b54", "title": "MIPLIB 2017: data-driven compilation of the 6th mixed-integer programming library", "year": "2021" }, { "authors": "Asif Khan; Alexander I Cowen-Rivers; Antoine Grosnit; Derrick-Goh-Xin Deik; Philippe A Robert; Victor Greiff; Eva Smorodina; Puneet Rawat; Rahmad Akbar; Kamil Dreczkowski; Rasul Tutunov; Dany Bou-Ammar; Jun Wang; Amos Storkey; Haitham Bou-Ammar", "journal": "Cell Reports Methods", "ref_id": "b55", "title": "Toward realworld automated antibody design with combinatorial bayesian optimization", "year": "2023" }, { "authors": "Philippe A Robert; Rahmad Akbar; Robert Frank; Milena Pavlović; Michael Widrich; Igor Snapkov; Andrei Slabodkin; Maria Chernigovskaya; Lonneke Scheffer; Eva Smorodina; Puneet Rawat; Bhushan Brij; Mai Ha Mehta; Ingvild Vu; Aurél Frøberg Mathisen; Krzysztof Prósz; Alex Abram; Enkelejda Olar; Dag Miho; Tryslew Trygve; Fridtjof Haug; Sepp Lund-Johansen; Ingrid Hobaek Hochreiter; Günter Haff; Geir Klambauer; Victor Kjetil Sandve; Greiff", "journal": "Nature Computational Science", "ref_id": "b56", "title": "Unconstrained generation of synthetic antibody-antigen structures to guide machine learning methodology for antibody specificity prediction", "year": "2022" }, { "authors": "John Helen M Berman; Zukang Westbrook; Gary Feng; Gilliland; N Talapady; Helge Bhat; Weissig; N Ilya; Philip E Shindyalov; Bourne", "journal": "Nucleic acids research", "ref_id": "b57", "title": "The protein data bank", "year": "2000" }, { "authors": "Animesh Basak; Chowdhury ; Benjamin Tan; Ramesh Karri; Siddharth Garg", "journal": "", "ref_id": "b58", "title": "Openabc-d: A large-scale dataset for machine learning guided integrated circuit synthesis", "year": "2021" }, { "authors": "Robert Brayton; Alan Mishchenko", "journal": "Springer", "ref_id": "b59", "title": "Abc: An academic industrial-strength verification tool", "year": "2010" }, { "authors": "Chelsea Finn; Pieter Abbeel; Sergey Levine", "journal": "", "ref_id": "b60", "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "year": "2017-08-11" }, { "authors": " Pmlr", "journal": "", "ref_id": "b61", "title": "", "year": "2017" }, { "authors": "Yan Duan; Marcin Andrychowicz; Bradly C Stadie; Jonathan Ho; Jonas Schneider; Ilya Sutskever; Pieter Abbeel; Wojciech Zaremba", "journal": "", "ref_id": "b62", "title": "One-shot imitation learning", "year": "2017" }, { "authors": "Jonas Rothfuss; Vincent Fortuin; Martin Josifoski; Andreas Krause", "journal": "PMLR", "ref_id": "b63", "title": "PACOH: bayes-optimal meta-learning with pac-guarantees", "year": "2021-07" }, { "authors": "Yutian Chen; Matthew W Hoffman; Sergio Gómez Colmenarejo; Misha Denil; Timothy P Lillicrap; Matt Botvinick; Nando De Freitas", "journal": "PMLR", "ref_id": "b64", "title": "Learning to learn without gradient descent by gradient descent", "year": "2017-08-11" }, { "authors": "T V Vishnu; Pankaj Malhotra; Jyoti Narwariya; Lovekesh Vig; Gautam Shroff", "journal": "Springer", "ref_id": "b65", "title": "Metalearning for black-box optimization", "year": "2019" }, { "authors": "Tomaharu Iwata", "journal": "", "ref_id": "b66", "title": "End-to-end learning of deep kernel acquisition functions for bayesian optimization", "year": "2021" }, { "authors": "Huaxiu Yao; Long-Kai Huang; Linjun Zhang; Ying Wei; Li Tian; James Zou; Junzhou Huang; Zhenhui Li", "journal": "PMLR", "ref_id": "b67", "title": "Improving generalization in meta-learning via task augmentation", "year": "2021-07" }, { "authors": "V L Goncharov", 
"journal": "Acad. Sci. URSS (N.S.)", "ref_id": "b68", "title": "Sur la distribution des cycles dans les permutations", "year": "1944" }, { "authors": "Katharina Eggensperger; Philipp Müller; Neeratyoy Mallik; Matthias Feurer; René Sass; Aaron Klein; Noor H Awad; Marius Lindauer; Frank Hutter", "journal": "", "ref_id": "b69", "title": "Hpobench: A collection of reproducible multi-fidelity benchmark problems for HPO", "year": "2021-12" } ]
[ { "formula_coordinates": [ 2, 108, 666.91, 396, 24.49 ], "formula_id": "formula_0", "formula_text": "D k = {⟨x (k) i , y (k) i ⟩} n (k) i=1 consists of n (k) (noisy) evaluations of f k (x) for all k ∈ [1 : K]." }, { "formula_coordinates": [ 4, 116.98, 304.86, 378.05, 18.68 ], "formula_id": "formula_1", "formula_text": "States: s (k) t = [H (k) t , t (k) , T (k) ] Actions: a (k) t = x (k) t Rewards: r (k) t = max 1≤ℓ≤t (k) y (k) t ∀k." }, { "formula_coordinates": [ 4, 108, 332.37, 396, 28.63 ], "formula_id": "formula_2", "formula_text": "H (k) t+1 = H (k) t ⊔ {⟨x (k) t , y(k)" }, { "formula_coordinates": [ 4, 167.09, 377.68, 337.58, 33.41 ], "formula_id": "formula_3", "formula_text": "arg max π θ J(π θ ) = arg max π θ E k∼ptasks   E H (k) T (k) ∼pπ θ   T (k) t=1 γ t-1 r (k) t     ,(1)" }, { "formula_coordinates": [ 4, 485.26, 421.44, 15.03, 15.19 ], "formula_id": "formula_4", "formula_text": "(k) T (k)" }, { "formula_coordinates": [ 4, 107.64, 453.38, 397.03, 39.3 ], "formula_id": "formula_5", "formula_text": "p π θ H (k) T (k) = p y (k) T (k) -1 x (k) T (k) -1 π θ x (k) T (k) -1 |H (k) T (k) -1 . . . p y (k) 1 |x (k) 1 µ 0 x (k) 1 , (2) with µ 0 x (k) 1" }, { "formula_coordinates": [ 4, 108, 496.61, 17.2, 11.87 ], "formula_id": "formula_6", "formula_text": "x (k)" }, { "formula_coordinates": [ 5, 196.96, 124.74, 84.11, 13.74 ], "formula_id": "formula_7", "formula_text": "(k) t > max 1≤ℓ<t y (k)" }, { "formula_coordinates": [ 5, 335.21, 139.14, 85.33, 13.74 ], "formula_id": "formula_8", "formula_text": "(k) t > max 1≤ℓ<t y (k)" }, { "formula_coordinates": [ 5, 391.39, 212, 114.35, 12 ], "formula_id": "formula_9", "formula_text": "E H∼pπ θ [m H ] = O (log T )," }, { "formula_coordinates": [ 5, 311.53, 593.72, 85.73, 14.34 ], "formula_id": "formula_10", "formula_text": "D (k) = D (k) obs ⊔ D (k)" }, { "formula_coordinates": [ 5, 194.06, 637.84, 310.6, 17.3 ], "formula_id": "formula_11", "formula_text": "L(θ) = E k∼ptask,D (k) obs ,D (k) perd log p θ (y (pred) k |x (pred) k , D (k) obs ) .(3)" }, { "formula_coordinates": [ 6, 108, 97.58, 395.5, 26.45 ], "formula_id": "formula_12", "formula_text": "D (k) into D (k) obs and D (k)" }, { "formula_coordinates": [ 6, 315.96, 323.45, 189.78, 135.43 ], "formula_id": "formula_13", "formula_text": "D (k) , set H 0 = {∅} for t = 1, . . . , T do x t ∼ π θ (•|s t ) ▷ predict action y t = f (k) (x t ) ▷ execute action r t = y * ≤t ▷ collect reward H t+1 ← H t ∪ {(x t , y t )} ▷ update hist. end for R = T t=1 γ t r t ▷ cumul. reward D → D obs ⊔ D pred ▷ split source data L = p θ (y (pred) |x (pred) , D obs ) ▷ aux. loss θ ← θ + η( ∇ θ R + ∇ θ L ) ▷ update θ end for" }, { "formula_coordinates": [ 6, 119.13, 339.69, 161.31, 30.31 ], "formula_id": "formula_14", "formula_text": "π θ x (pred) t |s t ∝ e α θ (x (pred) t ,Ht,t,T ) npred i e α θ (x (pred) i ,Ht,t,T )" }, { "formula_coordinates": [ 7, 153.59, 85.18, 344.89, 13.53 ], "formula_id": "formula_15", "formula_text": "(pred) = (x (pred) 1 , . . . , x (pred) n ), we have g(x (pred) , H) = (g(x (pred) 1 , H), . . . , g(x (pred) n , H))." 
}, { "formula_coordinates": [ 7, 131.83, 212.81, 189.24, 102.27 ], "formula_id": "formula_16", "formula_text": "x 1 y 1 + x 2 y 2 + x 3 y 3 + x (pred) 1 x (pred) 2 t, T α θ (x (pred) 1 ) α θ (x (pred) 2 )" }, { "formula_coordinates": [ 7, 336.73, 213.09, 47.19, 45.22 ], "formula_id": "formula_17", "formula_text": "x 1 , y 1 x 1 , y 1 x 2 , y 2" }, { "formula_coordinates": [ 7, 336.44, 212.81, 143.81, 100.27 ], "formula_id": "formula_18", "formula_text": "1 x (pred) 1 x (pred) 2 x (pred)" }, { "formula_coordinates": [ 16, 135.4, 351.05, 77.58, 14.66 ], "formula_id": "formula_19", "formula_text": "• If z T = max 1≤ℓ≤n v ℓ ," }, { "formula_coordinates": [ 16, 135.4, 368.31, 368.6, 41.17 ], "formula_id": "formula_20", "formula_text": "order the first T -1 elements V \\{z T } to get k -1 informative steps, which is by definition C(k -1, T -1). • If z T = v j < max 1≤ℓ≤T v ℓ ," }, { "formula_coordinates": [ 16, 191.97, 527.33, 77.94, 21.21 ], "formula_id": "formula_21", "formula_text": "( n T )×C(k,T ) ( n T )×T ! = 1 T ! T k" }, { "formula_coordinates": [ 17, 191.59, 276.39, 227.62, 22.27 ], "formula_id": "formula_22", "formula_text": "f EDA (seq) = Area(seq) Area(2 × resyn2) + Delay(seq) Delay(2 × resyn2)" }, { "formula_coordinates": [ 18, 209.83, 76.19, 218.37, 139.07 ], "formula_id": "formula_23", "formula_text": "Policy π θ State s t H t T, t p θ (•) α θ (•) x t Action R t = t i=1 γ i r i Reward ∇ θ R L t = p θ (y|x, H t ) Auxiliary loss ∇ θ L H t+1 := H t ∪ {(x t , y t )}" }, { "formula_coordinates": [ 19, 189.78, 277.47, 232.43, 169.85 ], "formula_id": "formula_24", "formula_text": "p θ α θ RL Supervision End-to-end NAP (ours) ✔ ✔ ✔ ✔ ✔ Pre-NAP ✔ ✔ ✔ ✔ ✘ NAP-RL ✔ ✔ ✔ ✘ ✔ NP-EI ✔ ✘ ✘ ✔ ✘0" } ]
End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes
Meta-Bayesian optimisation (meta-BO) aims to improve the sample efficiency of Bayesian optimisation by leveraging data from related tasks. While previous methods successfully meta-learn either a surrogate model or an acquisition function independently, joint training of both components remains an open challenge. This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures. We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data. Early on, we notice that training transformer-based neural processes from scratch with RL is challenging due to insufficient supervision, especially when rewards are sparse. We formalise this claim with a combinatorial analysis showing that the widely used notion of regret as a reward signal exhibits a logarithmic sparsity pattern in trajectory lengths. To tackle this problem, we augment the RL objective with an auxiliary task that guides part of the architecture to learn a valid probabilistic model as an inductive bias. We demonstrate that our method achieves state-of-the-art regret results against various baselines in experiments on standard hyperparameter optimisation tasks and also outperforms others in the real-world problems of mixed-integer programming tuning, antibody design, and logic synthesis for electronic design automation.
Alexandre Maraval; Matthieu Zimmer; Antoine Grosnit; Haitham Bou Ammar
[ { "figure_caption": "Figure 2 :2Figure 2: Average regret vs. BO iterations with 5 initial points. (Left) Results on 6 search spaces on the HPO-B benchmark. (Middle-left) Results tuning SCIP for solving 42 different MIPs. (Middleright) Antibody CDR3 sequence optimisation on 32 test datasets corresponding to 32 different antigens. (Right) Logic synthesis operator sequence optimisation on 9 test datasets corresponding to 9 different circuits. For each method, error bars show confidence intervals computed across 5 runs on HPO-B and 10 runs on all the others.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "From this analysis we get that C(k, T ) = C(k -1, T -1) + (T -1)C(k, T -1). This relation, along with boundary values C(1, T ) = (T -1)! and C(T, T ) = 1, allow to identify C(k, T ) as the Stirling number of first kind T k . This number notably corresponds to the number of permutations in S T made of exactly k cycles.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Average regret vs iterations on HPOBench dataset for XGBoost. Error bars are confidence intervals across ten runs.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Average regret vs. BO iterations on each search space with 5 initial points. For each method, error bars show confidence intervals computed across 5 runs.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Average rank (lower is better) vs. BO iterations on each search space with 5 initial points. For each method, error bars show confidence intervals computed across 5 runs.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "We compare the properties of different transformer architectures. L is the number of tokens needed to encode the meta-data, and D denotes the dimensionality of X . Property 3.3 (Query independence). A NP g is query independent if for any choice of n queried locations x", "figure_data": "History-order inv. Query ind. AF valuesTokensNAP (ours)✔✔✔t + n predTNPs [12]✔✘✘t + n predOptFormer [41]✘✘✘L + (D + 2)(t + n pred )PFN [11]✔✔✘t + n pred", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "List of used hyperparameters in NAP.", "figure_data": "PPO", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Average test time of 1 seed on the MIP experiment.", "figure_data": "Method without bboxwith bboxwith bbox & pretrainGP-EI585sec25d 0hr 9min 45sec25d 0hr 9min 45secFSBO330sec25d 0hr 5min 30sec25d 0hr 10minMetaBO30sec25d 0hr 0min 30sec25d 0hr 12minNP-EI2sec25d 0hr 0min 2sec25d 0hr 36minNAP3sec25d 0hr 0min 3sec25d 1hr", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Average test time of 1 seed on the EDA experiment.", "figure_data": "Method without bboxwith bboxwith bbox & pretrainGP-EI17sec1hr 5min 17sec1hr 5min 17secFSBO516sec1hr 13min 36sec1hr 14min 6secMetaBO35sec1hr 5min 35sec1hr 12min 5secNP-EI8sec1hr 5min 8sec1hr 7min 34secNAP9sec1hr 5min 9sec1hr 7min 42sec", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work on hyperparameter tuning provides a method for optimising black-box objectives, which the citing paper adopts in its research on sample-efficient sequential model-based solvers."}, {"Category": "Extension or Continuation", "Citation": "[2]", "Explanation": "The cited work on drug and chip design extends the research on optimising black-box objectives to new domains, which the citing paper further explores in its study of sample-efficient sequential model-based solvers."}, {"Category": "Extension or Continuation", "Citation": "[3]", "Explanation": "The cited work on robotics extends the research on optimising black-box objectives to a new application area, which the citing paper further explores in its study of sample-efficient sequential model-based solvers."}, {"Category": "Supporting Evidence", "Citation": "[4]", "Explanation": "The cited work on meta-BO provides evidence of the need for data collected on related source tasks in order to improve sample efficiency on new target tasks, which the citing paper addresses in its research."}, {"Category": "Data Source", "Citation": "[5][6][7][8][9]", "Explanation": "The cited works on Gaussian processes provide data sources for surrogate modelling in meta-BO methods, which the citing paper uses in its research on transfer across tasks."}, {"Category": "Data Source", "Citation": "[10][11][12]", "Explanation": "The cited works on deep neural networks provide data sources for surrogate modelling in meta-BO methods, which the citing paper uses in its research on transfer across tasks."}, {"Category": "Methodological Basis", "Citation": "[13,14]", "Explanation": "The cited works provide a method for training a neural network to perform transfer, which the citing paper adopts as a basis for their research on end-to-end training protocol for meta-BO."}, {"Category": "Extension or Continuation", "Citation": "[10,12,15,16]", "Explanation": "The cited works on neural processes (NP) are used to model the acquisition function directly, which the citing paper further extends by proposing a new transformer-based NP to model the AF in a new RL problem."}, {"Category": "Supporting Evidence", "Citation": "[18]", "Explanation": "The cited work on the combination of RL with transformers is used to explain the early instability in training observed in the citing paper, providing evidence for the challenges faced in the research."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work introduces the concept of an auxiliary loss in transformer architecture, which the citing paper adopts in their own research to guide a part of the network to learn a valid probabilistic model."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work provides a discussion on the use of GPs as surrogate models in BO, which the citing paper adopts in their research to fit a probabilistic surrogate model for f (x)."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work introduces the concept of Expected Improvement (EI) as a common practice in BO for optimising acquisition functions. 
The citing paper adopts this method in their research to optimise the acquisition function in the second step of BO."}, {"Category": "Methodological Basis", "Citation": "[4,13]", "Explanation": "The cited works provide a transfer technique in BO that the citing paper adopts to improve the optimisation of a target blackbox function by reusing knowledge from past experiences in related source domains."}, {"Category": "Methodological Basis", "Citation": "[13,14]", "Explanation": "The cited works on end-to-end meta-BO pipelines provide a methodological basis for the citing paper to develop and design their own end-to-end meta-BO pipelines."}, {"Category": "Extension or Continuation", "Citation": "[20]", "Explanation": "The cited work on Gaussian processes (GPs) serves as a basis for the extension of GP-based methods to support transfer scenarios in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[27]", "Explanation": "The cited work on the computational complexity of GPs provides supporting evidence for the limitations of using GPs in transfer scenarios as discussed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[5]", "Explanation": "The cited work on deep networks for meta-learning probabilistic surrogates serves as a basis for the extension of such methods in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work introduces the concept of neural processes (NPs) and their use in combining flexibility and properties of stochastic processes, which the citing paper adopts in their research to achieve a specific goal."}, {"Category": "Extension or Continuation", "Citation": "[11]", "Explanation": "The cited work reports on the use of transformer architectures in parameterising the distribution p \u03b8 (\u2022), which the citing paper extends by adopting a transformer architecture in their research to make predictions at specific locations."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work on RL provides the methodological basis for learning an end-to-end model in the citing paper that predicts acquisition values in the absence of labelled acquisition data."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work provides the PPO algorithm, which the citing paper uses to determine the optimal policy in a state-of-the-art approach for proximal policy optimization."}, {"Category": "Extension or Continuation", "Citation": "End-to-End Meta-Bayesian Optimisation", "Explanation": "The cited work discusses the limitations of current transfer techniques in BO and proposes a new end-to-end approach for more scalable and easier-to-deploy algorithms in BO."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work provides a framework for balancing exploration and exploitation trade-offs in the MDP, which the citing paper adopts in the state variable s t to update the history of black-box evaluations."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work in [13] has a similar approach to discovering AFs using RL, but the citing paper uses a different parameterisation architecture and MDP definition, which is a methodological basis for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[4,13]", "Explanation": "The cited works provide a well-established literature on meta-BO that the citing paper follows in defining 
the Equation 1 for reward functions. The citing paper adopts the methods and techniques from the cited works to structure the research on end-to-end training of deep architectures in RL algorithms."}, {"Category": "Supporting Evidence", "Citation": "[32]", "Explanation": "The cited work highlights the challenge of learning from sparse reward signals in reinforcement learning, which is a foundational issue that the citing paper addresses in its research on BO."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work on imitation learning provides a method that the citing paper can adopt to improve reward signals in BO and reduce agent-environment interactions."}, {"Category": "Extension or Continuation", "Citation": "[36]", "Explanation": "The cited work on exploration bonuses is a continuation of the research on improving reward signals in BO, exploring new dimensions and variables to enhance gradient updates."}, {"Category": "Data Source", "Citation": "[37]", "Explanation": "The cited work on defining more informative rewards from prior knowledge or human interactions is a data source that the citing paper can use in its research on BO to improve the quality of reward signals."}, {"Category": "Extension or Continuation", "Citation": "[38]", "Explanation": "The cited work on learning from human feedback is a continuation of the research on BO, providing a data-intensive approach that the citing paper can use to improve the quality of reward signals in BO."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work on introducing auxiliary tasks within RL demonstrates significant gains, which inspires the citing paper to introduce an inductive bias in their method via an auxiliary supervised loss in their research on BO."}, {"Category": "Methodological Basis", "Citation": "[39]", "Explanation": "The cited work on discovering relevant inductive biases via auxiliary tasks in RL provides a methodological basis for the citing paper to improve the quality of reward signals in BO by introducing an auxiliary supervised loss."}, {"Category": "Methodological Basis", "Citation": "[40]", "Explanation": "The cited work on impressive empirical successes in RL via auxiliary tasks further supports the methodological basis of the citing paper to introduce an auxiliary supervised loss in their research on BO to improve the quality of reward signals."}, {"Category": "Methodological Basis", "Citation": "(K)", "Explanation": "The cited work provides a method for splitting a dataset into observed and predicted sets, which the citing paper adopts to augment their objective in a reinforcement learning task."}, {"Category": "Supporting Evidence", "Citation": "(ptask)", "Explanation": "The cited work provides a task distribution that the citing paper uses to guide their research on augmenting a reinforcement learning objective with supervised auxiliary loss."}, {"Category": "Data Source", "Citation": "(k)", "Explanation": "The cited work provides a dataset that the citing paper uses to train and evaluate their model for making predictions in a reinforcement learning task."}, {"Category": "Extension or Continuation", "Citation": "(k) obs ,D (k) perd", "Explanation": "The cited work introduces the concept of observed and predicted sets in a dataset, which the citing paper extends by using it to augment their objective in a reinforcement learning task."}, {"Category": "Extension or Continuation", "Citation": "(y (pred) k |x (pred) k , D (k) obs )", 
"Explanation": "The cited work provides a function for predicting multi-modal Riemannian posteriors, which the citing paper extends by using it in a head of their transformer to make predictions in a reinforcement learning task."}, {"Category": "Methodological Basis", "Citation": "[11,12]", "Explanation": "The cited works provide a basis for the parameterisation of the neural acquisition process (NAP) in the citing paper, as the authors note that the NAP is a new type of neural process that is similar to other neural processes in the cited works."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work by Volpp et al. provides a method for evaluating policies over a finite set of locations, which the citing paper adopts in their own research to evaluate the policy \u03c0 \u03b8 over the finite set of locations x (pred) for a given task."}, {"Category": "Methodological Basis", "Citation": "[42]", "Explanation": "The cited work provides a method of treating the history H as a set instead of an ordered sequence, which the citing paper adopts to achieve history-order invariance in their neural architecture."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work provides a method of summing the embedding of x and y to form an \u27e8x, y\u27e9 pair in the set of history H, which the citing paper uses to achieve history-order invariance in their neural architecture."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work shows the importance of relying on order-invariance in meta-RL algorithms, which the citing paper highlights as a key factor in the context of Bayesian optimization."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work introduces the concept of few-shot Bayesian optimisation (FSBO), which the citing paper adopts as a baseline for comparison in their experiments on hyperparameter optimisation (HPO)."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work introduces the MetaBO method, which the citing paper compares against in their experiments on HPO. The method is used as a baseline to assess the performance of the proposed approach."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work introduces the OptFormers method, which the citing paper uses as a baseline in their experiments on HPO. The method is compared against to evaluate the performance of the proposed approach."}, {"Category": "Data Source", "Citation": "[44]", "Explanation": "The cited work provides the HPO-B benchmark dataset, which the citing paper utilises in their experiments on HPO. The dataset is used as a source of data for the analysis and comparison of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[41]", "Explanation": "The cited work introduces the OptFormers method, which the citing paper builds upon in their experiments on HPO. 
The method is used as a baseline to assess the performance of the proposed approach and to explore new dimensions in the field of HPO."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work provides the base architecture that the citing paper builds upon to develop a new model for meta-BO."}, {"Category": "Data Source", "Citation": "RS", "Explanation": "The cited work is a random search method that the citing paper uses as a baseline for comparison in the meta-BO task."}, {"Category": "Extension or Continuation", "Citation": "MetaBO [13] and FSBO [5]", "Explanation": "The cited works are extensions of previous implementations in the field of meta-BO, providing a new version of the model for combinatorial and mixed spaces in the MIP solver tuning, antibody and EDA design tasks."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work provides the implementation of OptFormer with an EI AF for hyperparameter tuning tasks, which the citing paper extends to the discrete search space version of HPO-B."}, {"Category": "Methodological Basis", "Citation": "[44]", "Explanation": "The cited work provides the HPO-B benchmark dataset, which the citing paper uses in their experiment to evaluate the performance of hyperparameter optimisation methods."}, {"Category": "Methodological Basis", "Citation": "[45]", "Explanation": "The cited work provides the open-source SCIP solver that the citing paper uses in their research on hyperparameter tuning of MIP solvers."}, {"Category": "Data Source", "Citation": "[46]", "Explanation": "The cited work is the Benchmark suite from the MIPLib2017 that the citing paper utilizes in their study of finding the optimal parameters of MIP solvers."}, {"Category": "Data Source", "Citation": "[47]", "Explanation": "The cited work provides the representation of CDRH3 sequences in the form of a string of 11 characters, which the citing paper uses to model the problem in the experiment."}, {"Category": "Data Source", "Citation": "[48]", "Explanation": "The cited work provides the Absolut! software used to compute binding energies in the experiment, which the citing paper leverages to evaluate the performance of the CDRH3 sequences."}, {"Category": "Data Source", "Citation": "[49]", "Explanation": "The cited work provides a protein database bank that the citing paper uses to collect datasets of CDRH3 sequences and their binding energies for the experiment."}, {"Category": "Extension or Continuation", "Citation": "[50]", "Explanation": "The cited work in [50] is used as a reference to consider length 20 LS flows in the citing paper. 
The citing paper extends the research by exploring a new dimension of flow length in the LS process."}, {"Category": "Supporting Evidence", "Citation": "[51]", "Explanation": "The cited work provides the open-source ABC library that the citing paper uses to implement the balance function in their research on circuit design."}, {"Category": "Data Source", "Citation": "[50]", "Explanation": "The cited work is the source of the datasets used in the research on circuit design, as the citing paper collected datasets for 43 different circuits from the OpenABC library."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work provides a general definition of meta-learning, which serves as the methodological basis for the citing paper to follow in their research on learning to quickly solve new tasks based on experience gained from related tasks."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work introduces a method of using source data to learn a good initialisation of model parameters for making accurate predictions in new target tasks, which the citing paper adopts in their research on learning to make accurate predictions after few optimisation steps."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work presents the use of HyperNetworks to learn a general rule from related tasks to suggest helpful quantities for making predictions, which the citing paper may have used in their research on learning a general rule to predict helpful quantities."}, {"Category": "Data Source", "Citation": "[10]", "Explanation": "The cited work mentions the use of NPs in learning a general rule, which the citing paper may have used in their research on learning a general rule to predict helpful quantities."}, {"Category": "Data Source", "Citation": "[11]", "Explanation": "The cited work mentions the use of transformers in learning a general rule, which the citing paper may have used in their research on learning a general rule to predict helpful quantities."}, {"Category": "Methodological Basis", "Citation": "[54]", "Explanation": "The cited work suggests the use of Stein Variational Gradient Descent (SVGD) to estimate uncertainty with multiple models, which the citing paper adopts in their research to improve the accuracy of their uncertainty estimates."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work extends the use of SVGD to meta-learn AFs, which the citing paper also adopts in their research to improve the performance of their meta-learning approach for AFs."}, {"Category": "Extension or Continuation", "Citation": "[55], [56]", "Explanation": "The cited works train an RNN to predict the next suggestion in BO instead of predicting the acquisition function value, which the citing paper does not compare to given the lack of available implementation. However, the citing paper does compare to the outperforming approach developed by Chen et al. [41], which is an extension of the work by Chen et al. [55] and TV et al. [56]."}, {"Category": "Supporting Evidence", "Citation": "[5,[7][8][9]]", "Explanation": "The cited works are mentioned as a basis for learning deep kernel surrogates, which the citing paper uses to perform the transfer in their research."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work by Feurer et al. 
is referenced for its reliance on an ensemble of GPs, which the citing paper adopts in their research to perform a specific task."}, {"Category": "Methodological Basis", "Citation": "[57]", "Explanation": "The cited work by Iwata is mentioned as a method close to FSBO, with the citing paper learning a meta-model in an end-to-end fashion using RL to propagate gradients through the acquisition function and the GP back to the deep kernel."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work by Hsieh et al. provides a method of performing transfer through the acquisition function by using a neural network to meta-train on related source tasks, which the citing paper adopts as a basis for their own research."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work by Volpp et al. also contributes to the method of transfer through the acquisition function by using a neural network to meta-learn the acquisition function, which the citing paper further builds upon in their research."}, {"Category": "Data Source", "Citation": "[14]", "Explanation": "The data used in the cited work by Hsieh et al. to pre-train GP surrogates on all source tasks is a foundational element for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The data used in the cited work by Volpp et al. to train the neural acquisition function is also a data source for the study conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The citing paper extends the research of Hsieh et al. by exploring new dimensions and contexts in the use of the acquisition function for transfer learning."}, {"Category": "Extension or Continuation", "Citation": "[13]", "Explanation": "The citing paper also builds upon the research of Volpp et al. by further investigating the use of the neural acquisition function in transfer learning."}, {"Category": "Extension or Continuation", "Citation": "[4]", "Explanation": "The cited work by Bai et al. provides a more detailed survey on transfer and meta-learning in Bayesian optimization, which the citing paper extends by discussing the performance of NAP in HPO-B and the potential benefits of transfer in acquisition between tasks."}, {"Category": "Methodological Basis", "Citation": "[58]", "Explanation": "The cited work provides a data-driven approach to meta-overfitting in BO, which the citing paper plans to explore in future work to improve the performance of their architecture."}, {"Category": "Supporting Evidence", "Citation": "[59]", "Explanation": "The cited work provides a known result on the average number of cycles in a permutation sampled uniformly in S T , which the citing paper uses to calculate the expected number of informative rewards in a trajectory of length T."}, {"Category": "Data Source", "Citation": "[44]", "Explanation": "The cited work provides a table in Table 3 that contains detailed information for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[51]", "Explanation": "The reference sequence resyn2 is used in the objective function to normalise the area and delay metrics, indicating a methodological basis for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[41]", "Explanation": "The cited work by Chen et al. 
provides the continuous HPO-B benchmark dataset that the citing paper uses to evaluate the performance of XGBoost models in approximating black-box functions."}, {"Category": "Methodological Basis", "Citation": "[60]", "Explanation": "The cited work, HPOBench benchmark, provides the dataset used in the study conducted in the citing paper. The dataset is essential for the analysis and comparison of the methods in the ablation study."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work, PFN, is used as a pre-defined expected improvement acquisition function in the baseline NP-EI method, which is compared to the end-to-end architecture in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work by Nguyen and Grover proposes several transformer-based neural process architectures that the citing paper adopts or adapts in their research."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b11", "b7", "b30", "b2", "b35", "b15", "b22", "b37" ], "table_ref": [], "text": "Commonsense reasoning has recently received significant attention in NLP research (Bhargava and Ng, 2022), with a vast amount of datasets now available (Levesque, 2011;Gordon et al., 2012;Sap et al., 2019;Rashkin et al., 2018;Bisk et al., 2020;Talmor et al., 2019). Most existing methods for commonsense reasoning either fine-tune large language models (LMs) on these datasets (Lourie et al., 2021) or use knowledge graphs (KGs) (Pan et al., 2017) to train LMs (Liu et al., 2019a;Yasunaga et al., 2022). However, it is not always possible to have relevant training data available, it is thus crucial to develop unsupervised approaches to commonsense reasoning that do not rely on labeled data.\nIn this paper, we focus on the unsupervised multiple choice question answering (QA) task: given a question and a set of answer options, the model is expected to predict the most likely option. We propose BUCA, a binary classification framework for unsupervised commonsense QA. Our method roughly works as follows: we first convert knowledge graph triples into textual form using manually written templates, and generate positive and negative question-answer pairs. We then fine-tune a pretrained language model, and leverage contrastive learning to increase the ability to distinguish reasonable from unreasonable ones. Finally, we input each question and all options of the downstream commonsense QA task into BUCA to obtain the reasonableness scores and select the answer with the highest reasonableness score as the predicted answer. Experimental results on various commonsense reasoning benchmarks show the effectiveness of our proposed BUCA framework. Our main contributions are:\n• We propose a binary classification approach to using KGs for unsupervised commonsense question answering.\n• We conduct extensive experiments, showing the effectiveness of our approach by using much less data." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b24", "b31", "b21", "b32", "b28", "b17", "b3", "b5", "b0", "b16", "b34", "b9" ], "table_ref": [ "tab_2" ], "text": "Language models are widely used in unsupervised commonsense inference tasks, e.g. as an additional knowledge source or as a scoring model. Rajani et al. (2019) propose an explanation generation model for the CommonsenseQA dataset. Self-talk (Shwartz et al., 2020) uses prompts to stimulate GPT and generate new knowledge. SEQA (Niu et al., 2021) generates several candidate answers using GPT2 and then ranks each them. Another research direction in unsupervised commonsense reasoning is the use of e.g. commonsense KGs (Speer et al., 2016;Romero et al., 2019;Malaviya et al., 2020) to train the model (Chen et al., 2021;Geng et al., 2023). In Banerjee and Baral (2020), given the inputs of context, question and answer, the model learns to generate one of the inputs given the other two. Ma et al. (2021) update the model with a margin ranking loss computed on positive and negative examples from KGs. MICO (Su et al., 2022) uses the distance between the positive and negative question-answer pairs obtained from the KG to calculate the loss. However, all of the above approaches demand a large amount of training data, sometimes reaching million of training samples, while BUCA only needs tens of thousands, cf. Table 2. 
The most similar to our work is NLI-KB (Huang et al., 2021), which trains a model on NLI data, then applies the corresponding knowledge to each question-answer pair on the downstream task. Our paper, instead, shows that it is not the NLI data but the retrieved knowledge that helps." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [ "b22" ], "table_ref": [], "text": "We focus on the following multiple choice question answering (QA) task: given a question q and a set of options A, the model should select the most likely single answer A_i ∈ A. We consider an unsupervised setting in which the model does not have access to the training or validation data. Our BUCA approach first trains the model with a knowledge graph and then uses the trained model to test on multiple QA downstream tasks. Formally, a knowledge graph (KG) (Pan et al., 2017) G is a tuple (V, R, T ), where V is a set of entities, R is a set of relation types and T is a set of triples of the form (h, r, t) with h, t ∈ V the head and tail entities and r ∈ R the relation of the triple connecting h and t.\n\nOur approach has three main components: knowledge graph transfer to training data, training loss design, and downstream task testing:" }, { "figure_ref": [], "heading": "Converting Triples into Binary Classification", "publication_ref": [ "b34", "b10", "b8" ], "table_ref": [], "text": "Training Data. Inspired by previous work (Su et al., 2022), each KG triple is converted into question-answer pairs by using pre-defined templates, so that the obtained pairs are then used as the input of the classification task. We use the templates provided in (Hwang et al., 2020). For example, the ATOMIC triple (PersonX thanks PersonY afterwards, isAfter, PersonX asked PersonY for help on her homework) can be converted to \"After PersonX asked PersonY for help on her homework, PersonX thanks PersonY afterwards\". In the appendix we show the distribution of the converted sequence pairs. Along with the correct QA pairs created from the KG triples, our framework is also trained on negative QA pairs, so it can better discriminate between reasonable and unreasonable QA pairs. More precisely, in the training dataset, each correct QA pair generated from a triple tp = (h, r, t) has a corresponding negative pair obtained from a variation of tp in which t is substituted by t′, which is randomly drawn from the existing tails in the KG.\n\nTraining Loss. For our binary classification model, we add a classification head with two nodes to the pre-trained language model. After normalizing the values on these two nodes, we can obtain reasonable and unreasonable scores for the QA pairs. From the triple conversion step, we obtained n training examples, each consisting of a question q, correct answer a_c, and incorrect answer a_w. For each question-answer pair, we can then obtain the reasonable and unreasonable scores r_i^+ and r_i^- after applying a softmax layer. In each loss calculation, we jointly consider the correct and incorrect answers. For binary classification, we use two kinds of losses: Traditional Binary Loss (TBL).\n\nL = -\sum_{i=1}^{n} \big( \log(p^{+}_{a_c}) + \log(p^{-}_{a_w}) \big)\n\nwhere p^{+}_{a_c} and p^{-}_{a_w} are the probabilities of correct and incorrect answers, respectively corresponding to reasonable and unreasonable scores. 
Margin Ranking Loss.\n\nL = \sum_{i=1}^{n} \max(0, \eta - \log(p^{+}_{a_c}) + \log(p^{+}_{a_w})) + \max(0, \eta - \log(p^{-}_{a_w}) + \log(p^{-}_{a_c}))\n\nwhere η is a margin threshold hyper-parameter.\n\nIn order to pull the representational distance between reasonable question-answer pairs as close as possible and to push the representational distance between reasonable and unreasonable ones as far as possible, we use supervised contrastive learning (Gunel et al., 2021) along with the binary classification. This is done by treating, for a given example, all other examples in the same category as its positive examples.\n\nContrastive Loss of the i-th QA pair\n\nL_{scl} = \sum_{j=1}^{N} \mathbb{1}_{y_i = y_j} \log \frac{e^{\mathrm{sim}(h_j, h_i)/\tau}}{\sum_{k=1}^{N} \mathbb{1}_{i \neq k} e^{\mathrm{sim}(h_k, h_i)/\tau}}\n\nwhere τ is the temperature parameter and h denotes the feature vector.\n\nInference. In the prediction phase for each candidate answer, we calculate its reasonableness score.\n\nWe choose the answer with the highest reasonableness score as the predicted answer." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we first describe our experiments on five commonsense question answering datasets, followed by ablation studies and data analysis." }, { "figure_ref": [], "heading": "Datasets and Baselines", "publication_ref": [ "b33", "b29", "b7", "b18", "b30", "b35", "b20", "b31", "b4", "b36", "b34", "b9", "b16" ], "table_ref": [], "text": "We use two well-known commonsense KGs for training our framework: ConceptNet (Speer et al., 2017) and ATOMIC (Sap et al., 2018).\n\nFor evaluation, we use five commonsense QA datasets: COPA (Gordon et al., 2012), OpenBookQA (Mihaylov et al., 2018), SIQA (Sap et al., 2019), CSQA (Talmor et al., 2019), and SCT (Mostafazadeh et al., 2017), covering a wide range of topics within commonsense reasoning. We compare our approach with various baselines:\n\nRoBERTa-Large (Liu et al., 2019b), GPT2 (Radford et al., 2019), Self-talk (Shwartz et al., 2020), Dou (Dou and Peng, 2022), Wang (Wang and Zhao, 2022) and other unsupervised systems using KGs: SMLM (Banerjee and Baral, 2020), MICO (Su et al., 2022), NLI-KB (Huang et al., 2021) and Ma (Ma et al., 2021). Most reported results are collected from the literature. For NLI-KB, we used their publicly available code to get the results. Details of the KGs and datasets, as well as implementation details, can be found in the appendix. " }, { "figure_ref": [], "heading": "Main results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Table 1 shows the results for the five benchmarks. On other datasets our framework shows similar behavior with both KGs. As for the loss functions, the margin ranking loss is on average 0.8% higher than the binary loss on ConceptNet, and 0.1% higher on ATOMIC. These results are explained by the fact that the ranking loss separates the scores of reasonable and unreasonable answers more clearly. In light of this, we will only consider margin ranking loss in the below analysis." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "In this section, we analyze the effects of the backbone models, the effect of contrastive learning, and explore the vocabulary overlap between the knowledge training set and the downstream task as well as the accuracy of our BUCA method.\n\nBackbone Pre-trained LMs Our experiments using different backbone models show that in general the stronger the PLM the better the performance on the downstream task. 
Regarding the KGs, in the BERT-base and RoBERTa-base variants, the ATOMIC-trained models perform better than the ConceptNet-trained models, while in the RoBERTa-large one they perform similarly. This might be explained by the fact that as the model capacity increases it has more inherently available event-like commonsense knowledge, necessary in the ATOMIC-based datasets. Detailed results are shown in Table 3." }, { "figure_ref": [], "heading": "Effects of Contrastive Learning", "publication_ref": [ "b6", "b6" ], "table_ref": [ "tab_4" ], "text": "Our experiments show that the RoBERTa-large variant with contrastive learning outperforms the version without it on all datasets, regardless of the used KG. Detailed results are shown in Table 4.\n\nAccuracy of the Binary Classifier Inspired by Ghosal et al. (2022), we evaluate how often input sequences corresponding to correct and incorrect answers are accurately predicted. To this end, we use the RoBERTa-large variant trained on ATOMIC. Table 5 shows that our model tends to predict all answers as reasonable: since the negative examples in our training set are randomly selected, many of these QA pairs are semantically irrelevant or even ungrammatical. For the manually crafted candidate answers, many of them are semantically relevant and grammatical, so our model predicts them as reasonable. We also see that the accuracy metrics for SCT and COPA are the highest. Our findings are consistent with Ghosal et al. (2022)." }, { "figure_ref": [], "heading": "Data Analysis", "publication_ref": [ "b9", "b19", "b19" ], "table_ref": [ "tab_0", "tab_6", "tab_7" ], "text": "To better understand why transfer learning from CKGs is more suitable than from other datasets (i.e. MNLI or QNLI) in the commonsense QA task, we performed an analysis on the training data in NLI-KB (Huang et al., 2021) and the used CKGs. Following (Mishra et al., 2021), we first compare the vocabulary overlap of ConceptNet, ATOMIC and MNLI (training data) with our evaluation QA datasets. We follow the definition of overlap introduced in (Mishra et al., 2021). Table 6 shows that MNLI has higher vocabulary overlap with all the evaluation datasets than both used CKGs. However, the results for NLI-KB in Table 1 show that the vocabulary overlap is not a key factor for performance as otherwise, NLI-KB fine-tuned with the NLI datasets (before injecting knowledge) should perform better than the other models in the downstream task due to the high lexical similarity. We also analyze the distance to the sentence embeddings. Our results show that the MNLI entries performed poorly in commonsense knowledge retrieval for SIQA-queries as they are not reasonable answers. In contrast, the sentences generated from ATOMIC and ConceptNet successfully pair the SIQA-questions with reasonable answers. This reveals that, although MNLI has a higher lexical coverage, MNLI does not have suitable examples to match SIQA questions. Thus models fine-tuned with the NLI dataset hardly get any benefit for downstream commonsense reasoning tasks. Tables 7 and 8 present a random sample showing this, where reasonable alternatives are in bold." }, { "figure_ref": [], "heading": "CSQA Example", "publication_ref": [], "table_ref": [], "text": "Question: If you have leftover cake, where would you put it? Answer: refrigerator" }, { "figure_ref": [], "heading": "MNLI", "publication_ref": [], "table_ref": [], "text": "In the waste-paper basket. This entails in the garbage bin.\n\nIn the middle of the dinner plate (or is it a base drum?) 
This entails in the center of the dinner plate.\n\nWe always keep it in the hall drawer. This entails it's always kept in the drawer in the hall." }, { "figure_ref": [], "heading": "ATOMIC", "publication_ref": [], "table_ref": [], "text": "John cuts the cake. as a result, John wants put the rest of the cake in fridge John places in the oven. but before, John needed to mix the cake ingredients John puts in the fridge. but before, John needed to grab it off the table" }, { "figure_ref": [], "heading": "ConceptNet", "publication_ref": [], "table_ref": [], "text": "oven is the position of cake refrigerator is the position of moldy leftover fridge is the position of leftover " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b16", "b34" ], "table_ref": [ "tab_2" ], "text": "We presented a framework converting KGs into positive/negative question-answer pairs to train a binary classification model, discriminating whether a sentence is reasonable. Extensive experiments show the effectiveness of our approach, while using a reasonably small amount of data. For future work, we will explore how to better select negative cases.\n\nAs seen in Table 2, in comparison to related works: Ma (Ma et al., 2021) and MICO (Su et al., 2022), our methods used much less data from the CKGs (~5-8x Ma, ~2-20x MICO) while still maintaining competitive performance on the evaluation dataset." }, { "figure_ref": [], "heading": "A.1 Generation of QA pairs", "publication_ref": [ "b10" ], "table_ref": [ "tab_8" ], "text": "The QA pairs were generated using the templates in the ATOMIC paper (Hwang et al., 2020), which are compatible with relations in both ConceptNet and ATOMIC. These templates help to convert KG triples into natural sentences; examples are shown in Table 9. The head entity and mapped relation phrases are joined as a question. The correct tail entity and a randomly sampled tail from the dataset are used as the positive and negative answers, respectively, for contrastive learning." }, { "figure_ref": [], "heading": "A.2 Evaluation Datasets", "publication_ref": [ "b7", "b18", "b30", "b35", "b20" ], "table_ref": [], "text": "We evaluate our framework using five downstream QA tasks: COPA, OpenBookQA, SIQA, CSQA, and SCT, which cover a wide range of topics within commonsense reasoning. Accuracy is used as the evaluation metric. All experiments are performed in an unsupervised setting, where our model is not trained on the source task.\n\nChoice of Plausible Alternatives (COPA) (Gordon et al., 2012) is a two-choice question-answer dataset designed to evaluate performance in open-domain commonsense causal reasoning. Each entry contains a premise and two possible answers; the task is to select the answer that most likely has a causal relationship with the premise. The dataset consists of 500 questions for both the development and test sets.\n\nOpenBookQA (Mihaylov et al., 2018) is inspired by open book exams that assess human understanding in real life. This QA task requires a deeper understanding of both open book facts (e.g., metal is a heat conductor) and broad common knowledge (e.g., a steel spoon is made of metal) to answer questions like: Which of these objects conducts the most heat: A metal spoon, pair of jeans, or cotton made clothing? It contains 500 multiple-choice science questions for both development and test sets.\n\nSocialIQA (SIQA) (Sap et al., 2019) contains multiple-choice questions with topics concerned with emotional and social interactions in a variety of everyday situations. 
Each entry comes with a context, a question, and 3 candidate answers. The questions are generated using the ATOMIC KG by converting triples into question sentences using predefined templates, and the answers are crowdsourced. The dataset's development split is used as the evaluation dataset, containing 1,954 questions.\n\nCommonsenseQA (CSQA) (Talmor et al., 2019) contains questions focused on various commonsense aspects. Each entry contains a question and five candidate answers. The questions are constructed by crowd workers. The answer candidates include distractors comprised of hand-picked ones or nodes from ConceptNet. The development set is used as the evaluation set, containing 1,221 questions.\n\nStory Cloze Test (SCT) (Mostafazadeh et al., 2017) is an LSDSem'17 shared task, evaluating story understanding and script learning. Each entry contains a four-sentence story and two possible fifth sentences, where the model has to pick the most suitable ending for the story. The development set is used as the evaluation set, containing 1,572 different stories. " }, { "figure_ref": [], "heading": "B Ablation Studies", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "We present the full results for the ablation studies discussed in Section 4.3: Table 3 for the backbone models study; Table 4 for the influence of contrastive learning; and Table 5 for accuracy." }, { "figure_ref": [], "heading": "C Data Analysis", "publication_ref": [], "table_ref": [ "tab_0", "tab_0" ], "text": "In the analysis of the distance to sentence embeddings, we treat each entry in the CKG datasets as a possible answer and encode it using the SBERT pre-trained model (all-mpnet-base-v2) (Reimers and Gurevych, 2019, 2020). Then, the cosine-similarity between the SIQA question and the encoded sentences is calculated to rank their semantic relatedness. We retrieved the top 3 answers for each source and listed them by similarity score in descending order. Table 10 extends the results presented in Section 4.4; Table 11 shows the alternative answers from the CKG datasets for COPA questions." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work is supported by the Chang Jiang Scholars Program (J2019032)." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The method to select negative examples could be improved, as randomly selecting negative examples for training might lead to identifying most of the examples in the evaluation datasets as reasonable. Secondly, we did not explore using other numbers of candidates in the training set; we always use 2 candidate answers for each question." }, { "figure_ref": [], "heading": "Appendix A KGs, Datasets, and Implementation", "publication_ref": [], "table_ref": [], "text": "This section contains more experimental details. In particular, we give details of the used KGs and datasets. We also discuss implementation details." }, { "figure_ref": [], "heading": "ConceptNet", "publication_ref": [ "b33", "b12" ], "table_ref": [], "text": "ConceptNet (Speer et al., 2017) is a traditional KG that focuses on taxonomic, lexical and physical relations (e.g., IsA, RelatedTo, PartOf). In our experiment, we employed the CN-82K version which is uniformly sampled from a larger set of extracted ConceptNet entity-relations (Li et al., 2016)."
}, { "figure_ref": [], "heading": "ATOMIC", "publication_ref": [ "b29" ], "table_ref": [], "text": "The ATOMIC KG (Sap et al., 2018) focuses on social-interaction knowledge about everyday events, and thus has a higher coverage in the field of commonsense query answering. It consists of 880K knowledge triples across 9 relations (e.g. xNeed, oEffect, xReact). This includes mentions of topics such as causes and effects, personal feelings toward actions or events, and conditional statements. The ATOMIC dataset is collected and validated completely through crowdsourcing." }, { "figure_ref": [], "heading": "SIQA Example", "publication_ref": [], "table_ref": [], "text": "Question: After a long grueling semester, Tracy took the final exam and finished their course today. Now they would graduate. Why did Tracy do this? Answer: complete their degree on time" }, { "figure_ref": [], "heading": "MNLI", "publication_ref": [], "table_ref": [], "text": "Because I had a deadline. This entails I had to finish by that time.\nThe professors went home feeling that history had been made. This entails The professors returned home.\nThey got married after his first year of law school.This entails Their marriage took place after he finished his first year of law school. " }, { "figure_ref": [], "heading": "ATOMIC", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "COPA Example", "publication_ref": [], "table_ref": [], "text": "Question: The boy wanted to be muscular. As a result, Answer: He lifted weights." }, { "figure_ref": [], "heading": "MNLI", "publication_ref": [], "table_ref": [], "text": "Emboldened, the small boy proceeded. This entails the small boy felt bolder and continued.\nOut of shape, fat boy. This entails the boy was obese.\nWhen Sport Resort won the contract for the construction of a new hotel center for 1200 people around the Olympic Sports Arena (built as a reserve for the future, to have it ready in time for the next championships), Gonzo began to push his weight around, because he felt more secure. This entails when Sport Resort won the contract for the construction of a new hotel Gonzo felt more secure." }, { "figure_ref": [], "heading": "ATOMIC", "publication_ref": [], "table_ref": [], "text": "John wanted to build his physique. as a result the boy lifts weights\nThe boy starts working out. as a result, the boy wants to gain more muscle\nThe boy starts lifting weights. as a result, the boy will build muscle " } ]
2023-06-07
10.18653/v1/2020.emnlp-main.11
[ { "authors": "Pratyay Banerjee; Chitta Baral", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Selfsupervised knowledge triplet learning for zero-shot question answering", "year": "2020" }, { "authors": "Prajjwal Bhargava; Vincent Ng", "journal": "", "ref_id": "b1", "title": "Commonsense knowledge reasoning and generation with pretrained language models: A survey", "year": "2022" }, { "authors": "Yonatan Bisk; Rowan Zellers; Ronan Le Bras; Jianfeng Gao; Yejin Choi", "journal": "Proceedings of the AAAI Conference on Artificial Intelligence", "ref_id": "b2", "title": "Piqa: Reasoning about physical commonsense in natural language", "year": "2020" }, { "authors": "Jiaoyan Chen; Yuxia Geng; Zhuo Chen; Ian Horrocks; Jeff Z Pan; Huajun Chen", "journal": "", "ref_id": "b3", "title": "Knowledgeaware Zero-Shot Learning: Survey and Perspective", "year": "2021" }, { "authors": "Zi-Yi Dou; Nanyun Peng", "journal": "", "ref_id": "b4", "title": "Zero-shot commonsense question answering with cloze translation and consistency optimization", "year": "2022" }, { "authors": " Geng; X Chen; Z Zhuang; J Chen; J Z Pan; H Li; Z Chen; Yuan", "journal": "Journal of Web Semantics", "ref_id": "b5", "title": "Benchmarking knowledgedriven zero-shot learning", "year": "2023" }, { "authors": "Deepanway Ghosal; Navonil Majumder; Rada Mihalcea; Soujanya Poria", "journal": "", "ref_id": "b6", "title": "Two is better than many? binary classification as an effective approach to multichoice question answering", "year": "2022" }, { "authors": "Andrew Gordon; Zornitsa Kozareva; Melissa Roemmele", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning", "year": "2012" }, { "authors": "Beliz Gunel; Jingfei Du; Alexis Conneau; Veselin Stoyanov", "journal": "", "ref_id": "b8", "title": "Supervised contrastive learning for pre-trained language model fine-tuning", "year": "2021" }, { "authors": "Canming Huang; Weinan He; Yongmei Liu", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Improving Unsupervised Commonsense Reasoning Using Knowledge-Enabled Natural Language Inference", "year": "2021" }, { "authors": "Jena D Hwang; Chandra Bhagavatula; Le Ronan; Jeff Bras; Keisuke Da; Antoine Sakaguchi; Yejin Bosselut; Choi", "journal": "", "ref_id": "b10", "title": "COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs", "year": "2020" }, { "authors": "Hector J Levesque", "journal": "AAAI", "ref_id": "b11", "title": "The winograd schema challenge", "year": "2011" }, { "authors": "Xiang Li; Aynaz Taheri; Lifu Tu; Kevin Gimpel", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Commonsense Knowledge Base Completion", "year": "2016" }, { "authors": "Weijie Liu; Peng Zhou; Zhe Zhao; Zhiruo Wang; Qi Ju; Haotang Deng; Ping Wang", "journal": "", "ref_id": "b13", "title": "K-bert: Enabling language representation with knowledge graph", "year": "2019" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b14", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Nicholas Lourie; Le Ronan; Chandra Bras; Yejin Bhagavatula; Choi", "journal": "", "ref_id": "b15", "title": "Unicorn on rainbow: A universal commonsense reasoning model on a new 
multitask benchmark", "year": "2021" }, { "authors": "Kaixin Ma; Filip Ilievski; Jonathan Francis; Yonatan Bisk; Eric Nyberg; Alessandro Oltramari", "journal": "Proceedings of the AAAI Conference on Artificial Intelligence", "ref_id": "b16", "title": "Knowledge-driven data construction for zero-shot evaluation in commonsense question answering", "year": "2021" }, { "authors": "Chaitanya Malaviya; Chandra Bhagavatula; Antoine Bosselut; Yejin Choi", "journal": "", "ref_id": "b17", "title": "Commonsense knowledge base completion with structural and semantic context", "year": "2020" }, { "authors": "Todor Mihaylov; Peter Clark; Tushar Khot; Ashish Sabharwal", "journal": "", "ref_id": "b18", "title": "Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering", "year": "2018" }, { "authors": "Anshuman Mishra; Dhruvesh Patel; Aparna Vijayakumar; Lorraine Xiang; Pavan Li; Kartik Kapanipathi; Talamadupula", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Looking Beyond Sentence-Level Natural Language Inference for Question Answering and Text Summarization", "year": "2021" }, { "authors": "Nasrin Mostafazadeh; Michael Roth; Annie Louis; Nathanael Chambers; James Allen", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "LS-DSem 2017 Shared Task: The Story Cloze Test", "year": "2017" }, { "authors": "Yilin Niu; Fei Huang; Jiaming Liang; Wenkai Chen; Xiaoyan Zhu; Minlie Huang", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "A semanticbased method for unsupervised commonsense question answering", "year": "2021" }, { "authors": "J Z Pan; G Vetere; J M Gomez-Perez; H Wu", "journal": "Springer", "ref_id": "b22", "title": "Exploiting Linked Data and Knowledge Graphs for Large Organisations", "year": "2017" }, { "authors": "Alec Radford; Jeff Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b23", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Nazneen Fatema Rajani; Bryan Mccann; Caiming Xiong; Richard Socher", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Explain yourself! 
leveraging language models for commonsense reasoning", "year": "2019" }, { "authors": "Maarten Hannah Rashkin; Emily Sap; Noah A Allaway; Yejin Smith; Choi", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Event2Mind: Commonsense inference on events, intents, and reactions", "year": "2018" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b26", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b27", "title": "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation", "year": "2020" }, { "authors": "Julien Romero; Simon Razniewski; Koninika Pal; Jeff Z Pan; Archit Sakhadeo; Gerhard Weikum", "journal": "", "ref_id": "b28", "title": "Commonsense Properties from Query Logs and Question Answering Forums", "year": "2019" }, { "authors": "Maarten Sap; Ronan Lebras; Emily Allaway; Chandra Bhagavatula; Nicholas Lourie; Hannah Rashkin; Brendan Roof; Noah A Smith; Yejin Choi", "journal": "", "ref_id": "b29", "title": "ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning", "year": "2018" }, { "authors": "Maarten Sap; Hannah Rashkin; Derek Chen; Ronan Le Bras; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Social IQa: Commonsense reasoning about social interactions", "year": "2019" }, { "authors": "Vered Shwartz; Peter West; Le Ronan; Chandra Bras; Yejin Bhagavatula; Choi", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Unsupervised commonsense question answering with self-talk", "year": "2020" }, { "authors": "Robert Speer; Joshua Chin; Catherine Havasi", "journal": "", "ref_id": "b32", "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "year": "2016" }, { "authors": "Robyn Speer; Joshua Chin; Catherine Havasi", "journal": "AAAI Press", "ref_id": "b33", "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "year": "2017" }, { "authors": "Ying Su; Zihao Wang; Tianqing Fang; Hongming Zhang; Yangqiu Song; Tong Zhang", "journal": "", "ref_id": "b34", "title": "Mico: A multi-alternative contrastive learning framework for commonsense knowledge representation", "year": "2022" }, { "authors": "Alon Talmor; Jonathan Herzig; Nicholas Lourie; Jonathan Berant", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge", "year": "2019" }, { "authors": "Jiawei Wang; Hai Zhao", "journal": "International Committee on Computational Linguistics", "ref_id": "b36", "title": "ArT: All-round thinker for unsupervised commonsense question answering", "year": "2022" }, { "authors": "Michihiro Yasunaga; Antoine Bosselut; Hongyu Ren; Xikun Zhang; Christopher D Manning; Percy Liang; Jure Leskovec", "journal": "", "ref_id": "b37", "title": "Deep bidirectional language-knowledge graph pretraining", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 342.06, 552.64, 146.43, 33.71 ], "formula_id": "formula_0", "formula_text": "L = - n i=1 (log(p + ac ) + log(p - aw ))" }, { "formula_coordinates": [ 2, 320.79, 658.78, 188.97, 50.97 ], "formula_id": "formula_1", "formula_text": "L = n i=1 max(0, η -log(p + ac ) + log(p + aw )) +max(0, η -log(p - aw ) + log(p - ac ))" }, { "formula_coordinates": [ 3, 80.2, 411.44, 196.99, 33.71 ], "formula_id": "formula_2", "formula_text": "L scl = N j=1 1 y i =y j log e sim(h j ,h i )τ N k=1 1 i̸ =k e sim(h k ,h i )/τ" } ]
BUCA: A Binary Classification Approach to Unsupervised Commonsense Question Answering
Unsupervised commonsense reasoning (UCR) is becoming increasingly popular as the construction of commonsense reasoning datasets is expensive, and they are inevitably limited in their scope. A popular approach to UCR is to fine-tune language models with external knowledge (e.g., knowledge graphs), but this usually requires a large number of training examples. In this paper, we propose to transform the downstream multiple choice question answering task into a simpler binary classification task by ranking all candidate answers according to their reasonableness. To this end, for training the model, we convert the knowledge graph triples into reasonable and unreasonable texts. Extensive experimental results show the effectiveness of our approach on various multiple choice question answering benchmarks. Furthermore, compared with existing UCR approaches using KGs, ours is less data hungry. Our code is available at https://github.com/probe2/BUCA
Jie He; Simon Chi Lok; Víctor Gutiérrez-Basulto; Jeff Z Pan
[ { "figure_caption": "Figure 1 :1Figure 1: After BUCA is trained on the above question from the training set, it is then able to rate the reasonableness of each sentence of the downstream task.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Accuracy (%) on five public benchmarks. Our best scores are highlighted in bold, and the results for the best performing baseline are underlined. Recall that TBL and MRL refer to the loss functions used in BUCA.", "figure_data": "MethodsBackboneKnowledge SourceCOPA dev testOpenbookQA SIQA CSQA SCT dev test dev dev devRandom--50.0 50.0 25.025.033.325.050.0RoBERTa-LRoBERTa-L-54.8 58.4 31.231.639.731.265.0GPT2-LGPT2-L-62.4 63.6 31.229.442.840.466.7Self-talkGPT2GPT266.0-28.430.846.232.4-DouALBERTALBERT--41.639.844.150.9-WangGPT2GPT269.8---47.3-71.6SMLMRoBERTa-Le.g., ATOMIC--34.633.848.538.8-MICORoBERTa-LConcept73.2 75.2--44.651.0-MICORoBERTa-LATOMIC79.4 77.4--56.044.2-NLI-KBRoBERTa-LConcept65.0 62.2 35.035.646.949.071.2NLI-KBRoBERTa-LATOMIC65.2 61.6 39.037.246.752.172.1MaRoBERTa-LCSKG----63.267.4-BUCARoBERTa-L/TBLConcept84.4 90.6 43.047.253.563.587.3BUCARoBERTa-L/MRLConcept86.2 89.6 45.247.652.665.488.0BUCARoBERTa-L/TBLATOMIC85.0 86.0 45.844.260.258.788.4BUCARoBERTa-L/MRLATOMIC84.6 87.8 43.246.061.460.385.5", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Statistics for the training and validation data used by Ma, MICO and BUCA.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The Neg and Pos column indicate % of instances for which all answer choices are predicted as negative or positive. The Incor as Neg, Cor as Pos, and Accurate column indicate % of instances for which all incorrect answers are predicted as negative, the correct answer is predicted as positive, and all answers are predicted accurately as negative or positive. Accurate is the intersection of Incor as Neg and Cor as Pos.", "figure_data": "DatasetPrediction All Neg Pos Incor as Neg Cor as Pos AccurateCOPA (dev)0.288.011.299.011.0COPA (test)0.488.411.299.210.8OpenbookQA (dev)1.467.84.893.23.4OpenbookQA (test)1.873.82.893.01.0SIQA (dev)6.350.215.786.79.4CSQA (dev)1.235.16.594.25.2SCT (dev)0.387.811.899.411.6", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Alternative answers for SIQA-question.", "figure_data": "", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Alternative answers for CSQA question.", "figure_data": "", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "PersonX wants to go to the office, ATOMIC (PersonX leaves the room, Q: PersonX wants to go to the office, as a result, PersonX will oEffect, get dressed up) xWant, to go somewhere else) A: get dressed up B: to go somewhere else QA pairs generated by KG Triples A.3 Implementation details Our experiments are run on a single A100 GPU card. We use RoBERTa-Large as our backbone model. The training batch size is 196, and the maximal sequence length for training is 64. The learning rate is set to 5e-5 for all experiments. For experiments with the margin ranking loss, η is set to 1. The validation set is evaluated by accuracy and used to select a best model for further evaluation. 
The models are trained for 20 epochs and early stopped when the change in validation loss is within 1%.", "figure_data": "Triple | Source | Negative Triple | Generated QA Pairs; (chopstick, AtLocation, table) | ConceptNet | (bread, is created by, flour) | Q: Chopstick located or found at A: table B: flour", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" } ]
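The implementation details above (RoBERTa-Large backbone, 64-token inputs, margin ranking loss with η = 1) suggest a compact training setup. The following is a minimal sketch under stated assumptions, not the authors' released code (see the linked repository for that): it assumes each reasonable or unreasonable QA text is scored by a RoBERTa encoder with a linear head on the first token, which is an assumption about the exact pooling and head; the two example strings are illustrative only.

```python
# Minimal sketch (assumed architecture, not the official BUCA implementation):
# score question-answer texts with RoBERTa and train with a margin ranking loss (eta = 1).
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class PairScorer(nn.Module):
    def __init__(self, model_name="roberta-large"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]        # first-token representation
        return self.head(pooled).squeeze(-1)        # one reasonableness score per text

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = PairScorer()
loss_fn = nn.MarginRankingLoss(margin=1.0)          # eta = 1

# One reasonable and one unreasonable QA text derived from a KG triple (illustrative).
pos = "Q: Chopstick located or found at? A: table"
neg = "Q: Chopstick located or found at? A: flour"
batch = tokenizer([pos, neg], padding=True, truncation=True,
                  max_length=64, return_tensors="pt")
scores = model(batch["input_ids"], batch["attention_mask"])
loss = loss_fn(scores[0:1], scores[1:2], torch.ones(1))  # push positive above negative
loss.backward()                                          # optimizer step omitted
```

At inference time, the same scorer would rate every candidate answer of a downstream multiple-choice question and pick the highest-scoring one, matching the ranking-by-reasonableness idea described in the abstract.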
[{"Category": "Supporting Evidence", "Citation": "(Bhargava and Ng, 2022)", "Explanation": "The cited work provides a review of the recent attention given to commonsense reasoning in NLP research, which supports the claim that the topic has received significant interest in the field."}, {"Category": "Data Source", "Citation": "(Levesque, 2011)", "Explanation": "The cited work is acknowledged as a data source for the availability of a large number of datasets in the field of commonsense reasoning."}, {"Category": "Data Source", "Citation": "(Gordon et al., 2012)", "Explanation": "The cited work is mentioned as a data source for the availability of a large number of datasets in the field of commonsense reasoning."}, {"Category": "Data Source", "Citation": "(Sap et al., 2019)", "Explanation": "The cited work is acknowledged as a data source for the availability of a large number of datasets in the field of commonsense reasoning."}, {"Category": "Data Source", "Citation": "(Rashkin et al., 2018)", "Explanation": "The cited work is mentioned as a data source for the availability of a large number of datasets in the field of commonsense reasoning."}, {"Category": "Data Source", "Citation": "(Bisk et al., 2020)", "Explanation": "The cited work is acknowledged as a data source for the availability of a large number of datasets in the field of commonsense reasoning."}, {"Category": "Data Source", "Citation": "(Talmor et al., 2019)", "Explanation": "The cited work is mentioned as a data source for the availability of a large number of datasets in the field of commonsense reasoning."}, {"Category": "Methodological Basis", "Citation": "(Lourie et al., 2021)", "Explanation": "The cited work is discussed as a method for fine-tuning large language models (LMs) on commonsense reasoning datasets, which serves as a methodological basis for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Pan et al., 2017)", "Explanation": "The cited work is mentioned as a method for using knowledge graphs (KGs) to train language models (LMs), which could be adopted in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Liu et al., 2019a)", "Explanation": "The cited work is discussed as a method for training language models (LMs) using knowledge graphs (KGs), which could be adopted in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Yasunaga et al., 2022)", "Explanation": "The cited work is mentioned as a method for training language models (LMs) using knowledge graphs (KGs), which could be adopted in the research conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Pan et al., 2017)", "Explanation": "The cited work is discussed as a method for using knowledge graphs (KGs) to train language models (LMs), which could be extended in the research conducted in the citing paper to explore new dimensions or contexts in the field of commonsense reasoning."}, {"Category": "Supporting Evidence", "Citation": "(Rajani et al., 2019)", "Explanation": "The cited work by Rajani et al. (2019) provides a model for explanation generation in the CommonsenseQA dataset, which serves as a foundational model for the citing paper to build upon in their research on language models in commonsense inference tasks."}, {"Category": "Methodological Basis", "Citation": "(Shwartz et al., 2020)", "Explanation": "The cited work by Shwartz et al. 
(2020) uses prompts to stimulate GPT and generate new knowledge, which the citing paper adopts as a method to stimulate language models in their research on commonsense inference tasks."}, {"Category": "Data Source", "Citation": "(Niu et al., 2021)", "Explanation": "The cited work by Niu et al. (2021) generates several candidate answers using GPT2 and then ranks them, which the citing paper utilizes as a data source in their research on commonsense inference tasks."}, {"Category": "Extension or Continuation", "Citation": "(Chen et al., 2021)", "Explanation": "The cited work by Chen et al. (2021) uses commonsense KGs to train the model, which the citing paper extends by exploring the use of KGs in the training of language models in commonsense inference tasks."}, {"Category": "Extension or Continuation", "Citation": "(Geng et al., 2023)", "Explanation": "The cited work by Geng et al. (2023) also uses commonsense KGs to train the model, which the citing paper further extends by exploring the use of KGs in the training of language models in commonsense inference tasks."}, {"Category": "Extension or Continuation", "Citation": "(Banerjee and Baral, 2020)", "Explanation": "The cited work by Banerjee and Baral (2020) learns to generate one of the inputs given the other two inputs of context, question, and answer, which the citing paper extends by exploring the use of language models in generating inputs in the context of commonsense inference tasks."}, {"Category": "Extension or Continuation", "Citation": "(Ma et al., 2021)", "Explanation": "The cited work by Ma et al. (2021) updates the model with a margin ranking loss computed on positive and negative examples from KGs, which the citing paper further extends by exploring the use of KGs in the training of language models in commonsense inference tasks."}, {"Category": "Methodological Basis", "Citation": "(Su et al., 2022)", "Explanation": "The cited work, MICO, provides a method of using the distance between positive and negative question-answer pairs obtained from the KG to calculate the loss, which the citing paper adopts in their research to improve the performance of their model."}, {"Category": "Data Source", "Citation": "(Pan et al., 2017)", "Explanation": "The cited work provides the knowledge graph (KG) that the citing paper uses as a foundational element for the study conducted in the research."}, {"Category": "Data Source", "Citation": "(Su et al., 2022)", "Explanation": "The cited work provides the pre-defined templates used to convert KG triples into question-answer pairs for the classification task in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Hwang et al., 2020)", "Explanation": "The cited work provides the templates used to convert KG triples into question-answer pairs, which the citing paper adopts in their research to create the input for the classification task."}, {"Category": "Methodological Basis", "Citation": "(Gunel et al., 2021)", "Explanation": "The cited work provides a method of supervised contrastive learning that the citing paper adopts to improve the representational distance between question-answer pairs in a given category."}, {"Category": "Data Source", "Citation": "(Speer et al., 2017)", "Explanation": "The cited work, ConceptNet, is used as a data source for training the framework in the citing paper."}, {"Category": "Data Source", "Citation": "(Sap et al., 2018)", "Explanation": "The cited work, ATOMIC, is also used as a data source for training the framework in the citing 
paper."}, {"Category": "Supporting Evidence", "Citation": "(Gordon et al., 2012)", "Explanation": "The cited work, COPA, provides a dataset that serves as supporting evidence for the evaluation of the framework in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Mihaylov et al., 2018)", "Explanation": "The cited work, Open-BookQA, is another dataset that contributes to the evaluation of the framework in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Sap et al., 2019)", "Explanation": "The cited work, SIQA, is a dataset that further supports the evaluation of the framework in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Talmor et al., 2019)", "Explanation": "The cited work, CSQA, is another dataset that contributes to the evaluation of the framework in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Mostafazadeh et al., 2017)", "Explanation": "The cited work, SCT, is a dataset that also supports the evaluation of the framework in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Liu et al., 2019b)", "Explanation": "The cited work by Liu et al. provides the RoBERTa-Large model, which the citing paper uses as a base for their approach."}, {"Category": "Methodological Basis", "Citation": "(Radford et al., 2019)", "Explanation": "The cited work by Radford et al. introduces the GPT2 model, which the citing paper uses as a base for their approach."}, {"Category": "Extension or Continuation", "Citation": "(Shwartz et al., 2020)", "Explanation": "The cited work by Shwartz et al. introduces the Self-talk approach, which the citing paper extends by comparing it to their own approach."}, {"Category": "Extension or Continuation", "Citation": "(Dou and Peng, 2022)", "Explanation": "The cited work by Dou and Peng introduces the Wang approach, which the citing paper extends by comparing it to their own approach."}, {"Category": "Data Source", "Citation": "(Banerjee and Baral, 2020)", "Explanation": "The cited work by Banerjee and Baral introduces the SMLM model, which the citing paper uses as a data source for their approach."}, {"Category": "Data Source", "Citation": "(Su et al., 2022)", "Explanation": "The cited work by Su et al. introduces the MICO model, which the citing paper uses as a data source for their approach."}, {"Category": "Data Source", "Citation": "(Huang et al., 2021)", "Explanation": "The cited work by Huang et al. introduces the NLI-KB model, which the citing paper uses as a data source for their approach."}, {"Category": "Data Source", "Citation": "(Ma et al., 2021)", "Explanation": "The cited work by Ma et al. introduces the Ma model, which the citing paper uses as a data source for their approach."}, {"Category": "Supporting Evidence", "Citation": "(Ghosal et al., 2022)", "Explanation": "The cited work by Ghosal et al. 
provides a method for evaluating the accuracy of a binary classifier, which the citing paper adopts in their research to assess the performance of their model in predicting input sequences."}, {"Category": "Supporting Evidence", "Citation": "(Huang et al., 2021)", "Explanation": "The cited work provides the training data in the NLI-KB dataset, which is used in the citing paper to analyze the performance of different datasets in the commonsense QA task."}, {"Category": "Data Source", "Citation": "(Mishra et al., 2021)", "Explanation": "The cited work provides the definition of vocabulary overlap used in the analysis of the CKGs and the evaluation QA datasets in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Mishra et al., 2021)", "Explanation": "The cited work introduces the concept of vocabulary overlap, which the citing paper adopts in its analysis of the CKGs and the evaluation QA datasets in the commonsense QA task."}, {"Category": "Extension or Continuation", "Citation": "(Ma et al., 2021)", "Explanation": "The cited work by Ma et al. is used as a comparison to the proposed framework in the citing paper, highlighting the use of a small amount of data and the resulting performance in the evaluation dataset."}, {"Category": "Extension or Continuation", "Citation": "(Su et al., 2022)", "Explanation": "The cited work by Su et al. is also used as a comparison to the proposed framework in the citing paper, focusing on the use of a small amount of data and the performance in the evaluation dataset."}, {"Category": "Data Source", "Citation": "(Hwang et al., 2020)", "Explanation": "The cited work provides the templates used to generate the QA pairs in the citing paper, which is a crucial data source for the contrastive learning approach."}, {"Category": "Data Source", "Citation": "(Gordon et al., 2012)", "Explanation": "The cited work provides the dataset used in the evaluation of the framework, which is a key element in the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Mihaylov et al., 2018)", "Explanation": "The cited work serves as the source of the OpenBookQA dataset, which is used in the evaluation of the framework in the citing paper."}, {"Category": "Data Source", "Citation": "(Sap et al., 2019)", "Explanation": "The cited work provides the SocialIQA dataset, which the citing paper uses to generate questions and answer candidates for the evaluation of emotional and social interactions in everyday situations."}, {"Category": "Data Source", "Citation": "(Talmor et al., 2019)", "Explanation": "The cited work provides the CommonsenseQA dataset, which the citing paper uses to construct questions and answer candidates for evaluating various commonsense aspects."}, {"Category": "Data Source", "Citation": "(Mostafazadeh et al., 2017)", "Explanation": "The cited work provides the Story Cloze Test dataset, which the citing paper uses to evaluate story understanding and script learning in the LSDSem'17 shared task."}, {"Category": "Data Source", "Citation": "(Speer et al., 2017)", "Explanation": "The cited work, ConceptNet, is a knowledge graph that serves as the data source for the experiment conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Sap et al., 2018)", "Explanation": "The ATOMIC KG dataset is the source of social-interaction knowledge used in the citing paper for commonsense query answering research."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b47", "b48", "b14", "b15", "b6", "b50", "b25", "b3", "b3", "b21", "b14", "b14", "b3", "b2", "b51", "b36", "b44", "b37", "b50", "b21", "b39", "b40", "b56" ], "table_ref": [], "text": "Humans often resort to conversations and asking clarification questions to avoid misunderstandings when collaborating with others. Asking Clarification Questions (ACQs) is, therefore, a commonly used mechanism to boost efficiency on humanhuman as well as human-machine collaborative tasks (Shi et al., 2022;Zou et al., 2023;Shi et al., 2023;Feng et al., 2023). As an example of humanmachine collaboration, conversational systems are developed to not only have a natural conversation with people but also to answer various questions of topics ranging from different domains (e.g., news, movie, and music) in an accurate and efficient manner (Gao et al., 2018). To effectively and efficiently answer various questions, it is essential for many existing conversational systems to capture * Equal Contribution people's intents. Only then can conversational systems accurately reply to a series of questions from users (Anand et al., 2020;Zamani et al., 2022).\nNevertheless, one essential issue is that limited research exists on ACQs and most systems were trained with inconsistent and limited input of data resources. Indeed, in the literature, many studies introduced ACQs to assist conversational systems when applying to different / a mixture of domains (e.g., movie (Li et al., 2017) or open domain (Aliannejadi et al., 2019)). There is also a lack of commonly agreed benchmark datasets for the development of ACQs systems with comparable result analysis. However, on the other hand, in the literature (Aliannejadi et al., 2019;Zamani et al., 2020;Kumar and Black, 2020;Feng et al., 2023), a growing number of studies released publicly available datasets while showing a common interest in the ACQ research direction. This observed contradiction leads to a necessity for a comprehensive overview of the existing datasets as well as the current status of the ACQ research direction. By addressing this concern, many growing ACQs can be better designed, trained and tested with suitable features from properly selected datasets according to comprehensive guidance.\nTherefore, in this paper, we offer an overview of the current status of the ACQ research progress. In particular, we aggregate and compare the datasets that have been considered for evaluating recent ACQ techniques from various aspects, such as their dimension, resource, recency and semantic closeness. Afterwards, with the overall discussion of publicly available datasets, we shed light on the model performance while running experiments of corresponding representative techniques on such datasets. Note that, we also release our implementation code for such experiments 1 . Next, we summarised the concluding remarks as well as followup suggestions for developing the ACQ techniques. 
(Feng et al., 2023) -108K 260K github.com/sweetalyssum/clarit Qulac (Aliannejadi et al., 2019) 198 10K 3K github.com/aliannejadi/qulac ClariQ (Aliannejadi et al., 2021) 300 2M 4K github.com/aliannejadi/ClariQ TavakoliCQ (Tavakoli et al., 2021) 3 170K 7K github.com/Leila-Ta/Clarification_CQA MIMICS (Zamani et al., 2020) -462K 586K github.com/microsoft/MIMICS MANtIS (Penha et al., 2019) 14 80K 435 guzpenha.github.io/MANtIS/ ClariQ-FKw (Sekulić et al., 2021) 230 2K 2K github.com/isekulic/CQ-generation MSDialog (Qu et al., 2018) 12 35K 877 ciir.cs.umass.edu/downloads/msdialog MIMICS-Dou (Tavakoli et al., 2022) -1K 1K github.com/Leila-Ta/MIMICS-Duo\nConversational Question Answering ClarQ (Kumar and Black, 2020) 173 2M 2M github.com/vaibhav4595/ClarQ RaoCQ (Rao and Daumé III, 2018) 3 77K 770K github.com/raosudha89/ranking_clarification_questions AmazonCQ (Rao and Daumé III, 2019) 2 24K 179K github.com/raosudha89/clarification_question_generation_pytorch CLAQUA (Xu et al., 2019) 110 40K 40K github.com/msra-nlc/MSParS_V2.0\nOur Contributions. The main contributions of this work can be summarized as follows:\n• We systematically search through 77 relevant papers, selected as per their recency, reliability and use frequency, in the ACQ domain from top-tier venues.\n• We compare the ACQ datasets from their contributions to the development of ACQ techniques and experimentally show the performance of representative techniques.\n• We introduce a visualised semantic encoding strategy to explain dataset suitability when selected for their corresponding experiments.\n• We analytically outline promising open research directions in the construction of future datasets for ACQs, which sheds light on the development of future research." }, { "figure_ref": [], "heading": "Conversational Systems", "publication_ref": [ "b15", "b16", "b6", "b6", "b59", "b16", "b50", "b3" ], "table_ref": [], "text": "A conversational system functions to assist users while addressing various tasks or acting as a partner in casual conversations (Gao et al., 2018). In particular, conversation systems can be classified into four main categories: (1) Conversational Search (Conv. Search); (2) Conversational Question Answering (Conv. QA); (3) Task-oriented Dialogues Systems (TDSs); and (4) Social Chatbots (Gao et al., 2019;Anand et al., 2020). In particular, the first two types, Conv. Search and Conv. QA, extend the classic search and QA systems to a conversational nature (Anand et al., 2020;Zaib et al., 2021).\nFor TDSs and social chatbots, they are more recent research topics and were introduced to build systems for assisting users while addressing a specific task or offering emotional connection and companionship via conversations (Gao et al., 2019). However, due to the limited resources that investigate the challenge of asking clarification questions when developing these two systems, this study focuses on Conv. Search and Conv. QA systems. Moreover, ACQs in conversational systems partially focus on three main tasks, namely, Clarification Need Prediction (T 1 ), Asking Clarification Questions (T 2 ), and User Satisfaction with CQs (T 3 ) (Zamani et al., 2020;Tavakoli et al., 2022;Aliannejadi et al., 2019). First, T 1 evaluates the necessity of asking clarification questions when users provide their initial queries or requests. 
Next, with a positive decision, we turn to the action of providing suitable clarification questions (i.e., T 2 ) by following two main routines: generation or selection from a pool of candidate clarification questions. Afterwards, the third task T 3 is to evaluate the effectiveness of the corresponding clarification questions while considering user satisfaction levels from multiple aspects (e.g., the usefulness or relevance of clarification questions). An effective ACQ-encoded conversational system requires a joint effort to address the three tasks satisfactorily to enhance users' conversational experience. Therefore, in this survey, we explore the relevant ACQ datasets and discuss their suitability while addressing the above three tasks." }, { "figure_ref": [], "heading": "ACQ Datasets", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "In this section, we describe the main characteristics of the existing and relevant ACQ datasets. Note that we include some additional information, such as the corresponding institution, in Appendix A. A careful dataset selection and aggregation strat- egy 2 has been applied to this survey to ensure their recency and accessibility.\nTo offer an overview of dataset dimensions, in Table 1, we describe the ACQ datasets in statistics, together with links to access the datasets. The statistical information includes the number of the considered domains from the corresponding resource; the size of the whole dataset; the number of clarification questions in each dataset. These datasets can be grouped into three sets (large, medium and small, highlighted in pink, cyan and yellow colours) with varied scales of datasets: 1) Large datasets with greater than 10k clarification questions (i.e., ClariT, MIMICS, ClarQ, RaoCQ, AmazonCQ, CLAQUA). Note that all the Conv. QA datasets are classified as large datasets due to the fact that it is more convenient to prepare clarification questions within a QA pair than in a dialogue. 2) Medium datasets with no less than 1K clarification questions (i.e., Qulac, ClariQ, TavakoliCQ, ClariQ-FKw, MIMICS-Dou); 3) Small datasets that have no more than 1K instances and only include MANtIS and MSDialog. In what follows, we compare datasets for developing conversational search and QA systems, according to their key characteristics." }, { "figure_ref": [], "heading": "Conversational Search", "publication_ref": [ "b6", "b14", "b3", "b1", "b2", "b51", "b49", "b36", "b44", "b37", "b50" ], "table_ref": [], "text": "Conversational Search (Conv. Search) refers to information retrieval systems that permit a mixedinitiative interaction with one or more users using a conversational interface (Anand et al., 2020). To develop effective Conv. Search systems, many previous studies released a number of datasets and 2 We exclude datasets released before 2015 and the ones that are not publicly available. made them publicly available. 
Here, we briefly describe such datasets:\n• ClariT (Feng et al., 2023): The first clarification question dataset for task-oriented information seeking, which asks questions to clarify user requests and user profiles based on task knowledge.\n• Qulac (Aliannejadi et al., 2019): The first clarification question dataset in an opendomain information-seeking conversational search setting with a joint offline evaluation framework.\n• ClariQ (Aliannejadi et al., 2020(Aliannejadi et al., , 2021)):\nAn extended Qulac with additional crowdsourced topics, questions and answers in the training corpus as well as synthetic multi-turn conversations.\n• TavakoliCQ (Tavakoli et al., 2021;Tavakoli, 2020): It includes clarification questions collected from the StackExchange QA community and based on three resource categories that have the top number of posts.\n• MIMICS (Zamani et al., 2020): This dataset comprises three sub-datasets that are all sourced from the application of the clarification pane in Microsoft Bing. In particular, they differ in if such a sub-dataset is based on single or multiple clarification panes (i.e., MIMICS-Click or ClickExplore) or focusing on real search queries and their corresponding query-clarification pairs (i.e., MIMICS-Manual).\n• MANtIS (Penha et al., 2019): A multidomain (14 domains) conversational information-seeking dataset, sourced from StackExchange, like TavakoliCQ, with joint user intent annotations on the included utterances.\n• ClariQ-FKw (Sekulić et al., 2021): This dataset introduces facets (the keywords that disambiguate a query) to the ClariQ, which results in an updated version with a set of query-facet-clarification question triples.\n• MSDialog (Qu et al., 2018): This dataset was constructed from the dialogues on Microsoft Community3 -a forum that provides technical support for Microsoft products -and also details user intent types on an utterance level.\n• MIMICS-Duo (Tavakoli et al., 2022):\nA dataset, stands upon the queries from MIMICS-ClickExplore, that enables both online and offline evaluations for clarification selection and generation approach." }, { "figure_ref": [], "heading": "Conversational Question Answering", "publication_ref": [ "b59", "b21", "b39", "b40", "b56" ], "table_ref": [], "text": "The idea behind Conversational Question Answering (Conv. QA) is to ask the system a question about a provided passage offering a conversational interface (Zaib et al., 2021). Conv. QA has recently received growing attention in the research community while introducing multiple available large-scale datasets. A brief discussion of such datasets are as follows:\n• ClarQ (Kumar and Black, 2020): This dataset is sourced from the post-question pairs in StackExchange and developed with selfsupervised approaches within a bootstrapping framework.\n• RaoCQ (Rao and Daumé III, 2018): Another StackExchange-based dataset with a large volume of post-question-answer triples from three selected domains.\n• AmazonCQ (Rao and Daumé III, 2019): An Amazon platform-based Clarification QA dataset with questions targeting the missing information of products and answers provided by sellers or other users. In addition, a context is offered that contains both the product title and description. \nDataset Task Eval. Method T 1 T 2 T 3 Conv. 
Search ClariT (2023) ✓ G - Offline Qulac (2019) - R - Offline ClariQ (2021) ✓ R - Offline TavakoliCQ (2021) - G - Offline MIMICS (2020) ✓ R, G ✓ Offline/Online MANtIS (2019) - R, G - Offline ClariQ-FKw (2021) - G - Offline MSDialog (2018) - R, G - Offline MIMICS-Duo (2022) ✓ R, G ✓ Offline/Online Conv. QA ClarQ (2020) - R - Offline RaoCQ (2018) - R - Offline AmazonCQ (2019) - G - Offline CLAQUA (2019) ✓ G - Offline\n• CLAQUA (Xu et al., 2019): A clarificationfocus dataset that supports the supervised evaluation of text understanding and generation modules, along with a knowledge-based QA system (KBQA)." }, { "figure_ref": [], "heading": "Datasets Analysis", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "As discussed in Section 1, a major concern of developing the techniques for asking clarification questions is using suitable datasets to train, validate and test the corresponding approach. In particular, it is essential to be aware of the information on when, how and where a dataset is collected. Such information offers a comprehensive description of datasets for their various characteristics, such as their recency and reliability. Therefore, in Table 2, we describe the collection details of each ACQ dataset. In particular, we include the time when the datasets were built as well as the year the corresponding papers were published to indicate the recency of the datasets. In addition, we summarise the source of the data collection, which tells where the datasets came from. Next, we aggregate the main strategies for preparing the clarification questions. At first, due to our data selection strategy, most of the datasets are based on relatively recent information. However, we still observe that some datasets rely on the data collected years ago. with no time information on when their data was collected, which makes them incomparable based on this measure. On the other hand, regarding how and where the datasets were collected, the TREC WEB data, StackExchange and Bing are the commonly considered resource for preparing clarification questions in a dataset. Such platforms' search and question-answering nature is the leading cause of such a finding. Afterwards, the crowdsourcing strategy is commonly applied to generate qualified clarification questions. Note that the posts and comments of StackExchange are also widely used to provide clarification questions. According to the provided information, we conclude that the datasets have been collected based on varied strategies, on different periods and use inconsistent resources. However, it is difficult to tell how exactly a dataset is different from others and how to properly select a set of datasets to show the performance of a newly introduced model. Therefore, in this survey, we introduce a visualisation-based approach to assist the selection of datasets for an improved experimental setup.\nIn Figures 1a and1b, we use the t-distributed Stochastic Neighbor Embedding (i.e., t-SNE) method to visualize the semantic representation of clarification questions (semantic embeddings) for Conv. Search and Conv. QA datasets. As one can see from Figure 1a, Qulac and ClariQ datasets, and MIMICS and MIMICS-Dou datasets highly overlapped with each other. It was expected to be seen as ClariQ and MIMICS-Duo are built on top of Qulac and MIMICS, respectively. 
This indicates that achieving high-quality performance with a proposed asking clarification model on both Qulac and ClariQ (or MIMICS and MIMICS-Duo) is not satisfactory, as these datasets include clarification questions with close semantic meanings. Figure 1a shows that Conv. Search datasets form 5 distinct clusters that can be used to evaluate asking clarification models. For example, the models' generalisability can be evaluated on the ClariT, Qulac, TavakoliCQ, MIMICS, and MSDialog datasets, which have few overlapping instances among them. More importantly, comparing Figures 1a and 1b " }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "In this section, we describe the evaluation metrics applicable to the included datasets when evaluating ACQ approaches. In particular, as previously discussed, we group these metrics according to whether they are automatic or human-involved." }, { "figure_ref": [], "heading": "Automatic Evaluation", "publication_ref": [ "b17", "b18", "b17", "b8", "b54", "b53", "b38", "b8", "b34", "b7", "b27", "b44", "b46" ], "table_ref": [], "text": "With a ready dataset, ACQ-based conversational systems can be evaluated using a variety of automatic evaluation metrics. The widely used metrics can be categorized into two groups based on the strategy of giving clarification questions, i.e., ranking or generation. For the ranking route, the commonly used evaluation metrics include (1) MAP (Jarvelin, 2000), (2) Precision (Järvelin and Kekäläinen, 2017), (3) Recall (Jarvelin, 2000), (4) F1-score (Beitzel, 2006), (5) Normalized Discounted Cumulative Gain (nDCG) (Wang et al., 2013), (6) Mean Reciprocal Rank (MRR) (Voorhees et al., 1999;Radev et al., 2002), and (7) Mean Square Error (MSE) (Beitzel, 2006). The main idea behind these metrics is to evaluate how relevant the clarification questions ranked highest by the system are to the underlying user intent. On the other hand, common metrics for the generation route include (8) BLEU (Papineni et al., 2002), (9) METEOR (Banerjee and Lavie, 2005), and (10) ROUGE (Lin, 2004). BLEU and ROUGE were originally developed to evaluate machine translation and text summarization results, respectively. Recently, they have also been applied as evaluation metrics for the ACQ task (Sekulić et al., 2021;Zhang and Zhu, 2021;Shao et al., 2022). Both scores are based on the n-gram overlap between generated and reference questions, and the difference between them mirrors that between precision and recall: BLEU calculates the ratio of predicted terms that appear in the reference question, while ROUGE indicates the ratio of reference terms that are included in the predicted text. ROUGE-L, a variant of ROUGE that focuses on the longest common subsequence, has also recently been used to evaluate ACQ models. However, these metrics are limited in that they ignore human judgements; METEOR was introduced to address this concern by considering stems, WordNet synonyms, and paraphrases of n-grams.\nThe main advantage of automatic evaluation metrics is that they are inexpensive and easy to apply. However, they are not always aligned with human judgments. Therefore, recent studies also complement their automatic evaluation with human evaluation to show how the generated or selected CQs impact the performance of their conversational systems."
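To make the two metric families concrete, the sketch below computes one ranking metric (MRR) and two generation metrics (BLEU and ROUGE-L) with commonly used Python libraries. It is a generic illustration rather than the evaluation code used in any surveyed paper; the relevance labels and the reference/generated question strings are invented.

```python
# Illustrative only: one ranking metric (MRR) and two generation metrics (BLEU, ROUGE-L)
# for clarification questions. All inputs below are made-up examples.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer  # pip install rouge-score

def mean_reciprocal_rank(ranked_relevance):
    """ranked_relevance: per-query lists of 0/1 labels, ordered by system rank."""
    total = 0.0
    for labels in ranked_relevance:
        for rank, rel in enumerate(labels, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

# Ranking route: two queries, each with a ranked list of candidate CQs (1 = relevant).
print(mean_reciprocal_rank([[0, 1, 0], [1, 0, 0]]))  # -> 0.75

# Generation route: n-gram overlap between a generated CQ and a reference CQ.
reference = "would you like results about the windows operating system"
generated = "do you want results about the windows operating system"
bleu = sentence_bleu([reference.split()], generated.split(),
                     smoothing_function=SmoothingFunction().method1)
rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(reference, generated)["rougeL"].fmeasure
print(round(bleu, 3), round(rouge_l, 3))
```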
}, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [ "b26", "b26", "b26" ], "table_ref": [], "text": "In addition to automatic evaluation metrics, human evaluation provides a more accurate and qualitative evaluation of generated or ranked CQs. An essential reason is that automatic evaluation metrics mainly consider n-gram overlaps or ranking of CQs instead of their semantic meaning or other quality-wise aspects. Thus, human annotations are increasingly used to evaluate clarifying questions. The human annotation process consists of scoring generated or selected CQs based on several quality dimensions. Compared to automatic evaluation, et al., 2019), engangingness (Li et al., 2019), interestingness (Li et al., 2019), knowledgeable (Li et al., 2019), that evaluate a CQ by considering the whole conversation, instead of an individual queryquestion pair. However, the ACQ domain lacks a consistent or agreed terminology for the used human evaluation metrics. In addition, some of them could have overlapped focus when evaluating the clarification questions. For example, the usefulness can also be evaluated based on the knowledgeable of the corresponding clarification question." }, { "figure_ref": [], "heading": "Model Performance on ACQ", "publication_ref": [ "b9", "b20", "b9", "b3", "b2", "b20", "b45", "b33" ], "table_ref": [ "tab_3", "tab_4", "tab_5" ], "text": "In this section, to offer a complete view of the current progress of the ACQ task, we discuss the main observations of the recent ACQ techniques when running on various ACQ datasets. Moreover, for each of the ACQ-related tasks, i.e., T 1 , T 2 and T 3 , we show the performance of many commonly used baselines while running on the applicable datasets for offering some additional concluding remarks.\nFirst, according to our exploration of experimental results of recent ACQ techniques, we observe three main limitations of their inconsistent experimental setups, used baselines and model generalisability. Indeed, many research studies have inconsistent uses of datasets as well as incomparable results with distinct experimental setups. For example, Krasakis et al. ( 2020) and Bi et al. (2021) both used the Qulac dataset. In (Krasakis et al., 2020), they randomly kept 40 topics for testing their performance of a heuristic ranker. However, instead of following (Krasakis et al., 2020), Bi et al. (2021) used a few-turn-based setup while leveraging the Qulac dataset for asking clarification questions. Next, another common issue is the use of different baselines to show the leading performance of newly introduced techniques. For example, the study in (Aliannejadi et al., 2019) primarily employed ranking-based models, such as RM3, LambdaMART, and RankNet, to evaluate the performance of their question retrieval model. In contrast, the study in (Aliannejadi et al., 2021) utilized language models like RoBERTa and ELECTRA to evaluate the performance of their question relevance model. More importantly, many techniques were introduced while tested on a single dataset to show their top performance (e.g., (Krasakis et al., 2020;Sekulić et al., 2022;Zhao et al., 2022)), which lead to a significant generalisability concern. This also indicates the necessity of developing a benchmark while evaluating the ACQ techniques and identifying the exact state-of-theart. 
Next, to acquire an overview of model performance while running experiments on the included datasets, we present the experimental results with representative approaches on the three ACQs subtasks, i.e., T 1 , T 2 and T 3 that are discussed in Section 2. The details of our experiments can be found in Appendix B. Table 4 shows the results of two topperforming models (i.e., BERT and RandomForest) for the clarification need prediction task (T 1 ) from traditional ML and language models. A key observation is that the prediction of clarification need should be selectively made in a classification or regression setup. In particular, BERT, a language model that well classifies the classification need on ClariQ and CLAQUA datasets, does not consistently outperform a classic approach, Random-Forest, in addressing a regression-wise task (as per the results on MIMICS and MIMICS-Duo). Next, for the second sub-task, ask clarification questions, which can be addressed via generation or ranking. However, clarification question generation requires a detailed context description and associated information. The existing approaches (e.g., Seq2Seq models) could be either naive in solely taking the query as input for CQ generation or difficult to generalise to many datasets while using specific information. Therefore, in this study, we compare the ranking performance when applying some commonly used ranking baselines (i.e., BM25 and BM25 with query expanded via the Doc2Query technique (Nogueira et al., 2019)) on every dataset. Table 5 presents the experimental results of these two approaches on every dataset. Note that, we ignore the experimental results on ClariT, MIM-ICS, MIMICS-DUO and AmazonCQ since they are different from other datasets in having queries with multiple relevant clarification questions. For the results, we observe that the query expansion via Doc2Query can be effective for most of the conversational search datasets, due to their shorter queries. However, when query expansion is applied to a Conv. QA dataset, it is not promising for an improved performance. Another observation is that the Qulac, ClariQ and ClariQ-FKw datasets have similar clarification questions in their dataset as per Figure 1a and Doc2Query-based query expansion has limited improvement to BM25 on these datasets. However, for another two corpus, TavakoliCQ and MANtIS, with distinct clarification questions, a bigger improvement margin can be observed. This also indicates the usefulness of our introduced visualisation-based strategy for dataset selection.\nNext, for the third task, it is crucial to determine user satisfaction with clarification questions (CQs), as it provides insight into how well the CQs are serving their intended purpose. However, obtaining the necessary data for evaluating user satisfaction can be challenging. In the literature, only two datasets (i.e., MIMICS and MIMICS-Duo) include information for this task. In Table 6, we present the corresponding results. A similar observation to the clarification need prediction task is that the language model can assist an ACQ technique in effectively evaluating user satisfaction. However, due to the limited number of applicable datasets, this observation might not be consistent in a different context. This also aligns with the current status of the ACQ research task while evaluating the newly proposed ACQ techniques.\nOverall speaking, with the presented experimental results, we indicate the inconsistent performance of models while evaluated on different datasets. 
In particular, we also discuss the limited numbers of useful datasets while evaluating ACQ techniques (e.g., the models' performance on user satisfaction prediction)." }, { "figure_ref": [], "heading": "Discussion and Future Challenges", "publication_ref": [], "table_ref": [], "text": "From the exploration of datasets as well as the experimental results on them, in this section, we highlight the concluding remarks on the current status of the ACQ research task, mainly from the dataset point of view. In addition, we discuss the promising directions based on the main findings listed below. Findings.\n(1) Missing Standard Benchmark. Existing datasets are underdeveloped, and difficult to constitute a standard benchmark while introducing novel ACQ techniques. As a consequence, it is challenging to effectively and accurately compare the proposed techniques and capture the true state-of-the-art. (2) Few User-System Interactions Recorded for Evaluation. In the literature, only the MIMICS dataset was collected by using a clarification pane that simulates such interactions. This makes it challenging to evaluate models in a near-realistic scenario and to estimate how well they could perform in a real-world setting. (3) Inconsistent Dataset Collection and Formatting. Many included datasets in this paper are frequently presented in distinct structures and can only be applied with a tailored setup. This is a problem while developing techniques and evaluating them on multiple datasets. (4) Inconsistent Model Evaluation. Many newly introduced models apply customised evaluation strategies even while using an identical dataset for addressing a specific asking clarification task. This lead to difficulties in model performance comparison." }, { "figure_ref": [], "heading": "Future Research Directions.", "publication_ref": [ "b12" ], "table_ref": [], "text": "(1) Benchmark Development. For the development of an ACQs technique, it is important that the models are compared to a common-accepted benchmark to make the corresponding conclusions. However, according to the above findings, currently, it is still unavailable. Therefore, benchmark development is the first key future direction. (2) ACQ Evaluation Framework. Aside from the benchmark development, it is also essential for a proper evaluation of newly introduced techniques. In particu-lar, due to the human-machine interaction nature of the ACQ techniques, it is valuable for evaluation metrics to take user satisfaction information into account. In addition, the introduction of a corresponding evaluation framework can assist the development of ACQ techniques with systematic evaluations. (3) Large-Scale Human-to-Machine Dataset. Existing datasets have many limitations that increase the difficulty of developing largescale models for generating or ranking clarification questions. It remains challenging to collect and build large amounts of data. In the near future, researchers should optimize the process of ACQs based on the current retrieval technologies (see (Trippas et al., 2018) for a description of collecting such datasets). ( 4) Multi-Modal ACQs Dataset.\nRecently multi-modal conversational information seeking has received attention in conversational systems (Deldjoo et al., 2021). Amazon Alexa 4 organised the first conversational system challenge to incorporate multi-modal (voice and vision) customer experience. However, there is a lack of existing datasets containing multi-modal information for ACQs." 
}, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "In this section, we outline the key limitations of our research. Our findings on the ACQ models are not as advanced as the current state-of-the-art, but they serve as a benchmark for others to compare with when using similar datasets. Additionally, to conduct more extensive experiments on larger datasets and more advanced models, we require additional computational resources. Specifically, generating clarification questions is a demanding task as it requires the use of powerful language models. " }, { "figure_ref": [], "heading": "A Datasets Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.0.1 ClariT", "publication_ref": [ "b14" ], "table_ref": [], "text": "The ClariT dataset (Feng et al., 2023) was released in 2023 by researchers from the University College London. ClariT is the first dataset for asking clarification questions in task-oriented conversational information seeking. They built ClariT based on an existing dataset ShARC5 , which clarifies users' information needs in task-oriented dialogues. They extended dialogues in ShARC with user profiles to ask clarification questions considering personalized information. To ask clarification questions efficiently, they also removed unnecessary clarification questions in the original dialogues. The collected dataset consists of over 108k multi-turn conversations including clarification questions, user profiles, and corresponding task knowledge in general domains." }, { "figure_ref": [], "heading": "A.0.2 Qulac", "publication_ref": [ "b3" ], "table_ref": [], "text": "The Qulac (Questions for lack of carity) (Aliannejadi et al., 2019) dataset is a joint effort by researchers from the Università della Svizzera Italiana and the University of Massachusetts Amherst. Qulac is the first dataset as well as an offline evaluation framework for studying clarification questions in open-domain information-seeking conversational search systems. To acquire the clarification questions, they proposed a four-step strategy: (1) they defined the topics and their facets borrowed from TREC Web Track6 ; (2) they collected several candidates clarification questions for each query through crowdsourcing in which they asked human annotators to generate questions for a given query according to the results showed using a commercial search engine; (3) they assessed the relevance of the questions to each facet and collected new questions for those facets that require more specific questions; (4) finally, they collected the answers for every query-facet-question triplet. The collected dataset consists of over 10, 277 single-turn conversations including clarification questions and their answers on multi-faceted and ambiguous queries for 198 topics with 762 facets." }, { "figure_ref": [], "heading": "A.0.3 ClariQ", "publication_ref": [ "b1", "b2" ], "table_ref": [], "text": "The ClariQ dataset (Aliannejadi et al., 2020(Aliannejadi et al., , 2021) ) was released in 2020 by researchers from the University of Amsterdam, Microsoft, Google, Univer-sity of Glasgow, and MIPT. The ClariQ dataset was collected as part of the ConvAI37 challenge which was co-organized with the SCAI8 workshop. The ClariQ dataset is an extended version of Qulac, i.e., new topics, questions, and answers have been added in the training set using crowdsourcing. 
Like Qulac, ClariQ consists of single-turn conversations (initial_request, followed by clarification questions and answers). Moreover, it comes with synthetic multi-turn conversations (up to three turns). ClariQ features approximately 18K single-turn conversations, as well as 1.8 million multi-turn conversations." }, { "figure_ref": [], "heading": "A.0.4 TavakoliCQ", "publication_ref": [ "b51", "b49", "b49" ], "table_ref": [], "text": "Recently Tavakoli et al. (Tavakoli et al., 2021;Tavakoli, 2020), from RMIT University and the University of Massachusetts Amherst, explore the ACQs to provide insightful analysis into how they are used to disambiguate the user ambiguous request and information needs. To this purpose, they extracted a set of clarification questions from posts on the StackExchange question answering community (Tavakoli, 2020). They investigate three sites with the highest number of posts from three different categories covering a period from July 2009 to September 2019. Therefore, the created dataset includes three domains, i.e., business domain with 13, 187 posts, culture with 107, 266 posts, and life/arts with 55, 959 posts. To identify the potential clarification questions, they collected the comments of each post that contain at least one sentence with a question mark, excluding questions submitted by the author of the post and questions that appeared in quotation marks. Their finding indicates that the most useful clarification questions have similar patterns, regardless of the domain.\nA.0.5 MIMICS MIMICS (stands for the MIcrosoft's Mixed-Initiative Conversation Search Data) (Zamani et al., 2020). This is a large-scale dataset for search clarification which is introduced in 2020 by researchers from Microsoft. Recently, Microsoft Bing added a clarification pane to its results page to clarify faceted and ambiguous queries.9 Each clarification pane includes a clarification question and up to five candidate answers. They used in-ternal algorithms and machine learning models based on users' history with the search engine and content analysis to generate clarification questions and candidate answers. The final MIMICS dataset contains three datasets: (1) MIMICS-Click includes 414, 362 unique queries, each related to exactly one clarification pane, and the corresponding aggregated user interaction clicks; (2) MIMICS-ClickExplore contains the aggregated user interaction signals for over 64, 007 unique queries, each with multiple clarification panes, i.e., 168, 921 query-clarification pairs; (3) MIMICS-Manual includes over 2k unique real search queries and 2.8k query-clarification pairs. Each query-clarification pair in this dataset has been manually labeled by at least three trained annotators and the majority voting has been used to aggregate annotations. It also contains graded quality labels for each clarification question, the candidate answer set, and the landing result page for each candidate answer." }, { "figure_ref": [], "heading": "A.0.6 MANtIS", "publication_ref": [ "b36" ], "table_ref": [], "text": "The MANtIS (short for Multi-domAiN Information Seeking dialogues) dataset (Penha et al., 2019) is a large-scale dataset containing multi-domain and grounded information-seeking dialogues introduced by researchers from TU Delft. They built the MANtIS dataset using extraction of conversations from the StackExchange question answering community. This dataset includes 14 domains on StackExchange. 
Each question-answering thread of a StackExchange site is a conversation between an information seeker and an information provider. These conversations are included if (1) it takes place between exactly two users; (2) it consists of at least 2 utterances per user; (3) it has not been marked as spam, offensive, edited, or deprecated; (4) the provider's utterances contain at least a reference (a hyperlink), and; (5) the final utterance belongs to the seeker and contains positive feedback. The final MANtIS dataset includes 80k conversations over 14 domains. Then, to indicate the type of user intent, they sampled 1, 365 conversations from MANtIS and annotate their utterances according to the user intent, such as original question, follow-up question, potential answer, positive feedback, negative feedback, etc. The final sample contains 6, 701 user intent labels." }, { "figure_ref": [], "heading": "A.0.7 ClariQ-FKw", "publication_ref": [ "b44", "b1" ], "table_ref": [], "text": "The ClariQ-FKw (FKw stands for Facet Keywords) (Sekulić et al., 2021) was proposed by researchers from the University of Amsterdam and the Università della Svizzera Italiana in 2021. Their main objective was to use text generation-based large-scale language models to generate clarification questions for ambiguous queries and their facets, where by facets they mean keywords that disambiguate the query. The dataset includes queries, facets, and clarification questions, which form triplets construed on top of the ClariQ (Aliannejadi et al., 2020) dataset. To this end, they perform a simple data filtering to convert ClariQ data samples to the appropriate triplets and derive the facets from topic descriptions. The final ClariQ-FKw contains 2, 181 triplets." }, { "figure_ref": [], "heading": "A.0.8 MSDialog", "publication_ref": [ "b37" ], "table_ref": [], "text": "The MSDialog (Qu et al., 2018) proposed by researchers from the University of Massachusetts Amherst, RMIT University, Rutgers University, and Alibaba Group, is used to analyse informationseeking conversations by user intent distribution, co-occurrence, and flow patterns in conversational search systems. The MSDialog dataset is constructed based on the question-answering interactions between information seekers and providers on the online forum for Microsoft products. Thus, to create the MSDialog dataset, they first crawled over 35k multi-turn QA threads (i.e., dialogues) containing 300k utterances from the Microsoft Community10 -a forum that provides technical support for Microsoft products -and then annotated the user intent types on an utterance level based on crowdsourcing using Amazon Mechanical Turk (MTurk)11 . To provide a high-quality and consistent dataset, they selected about 2.4k dialogues based on four criteria, conversations 1) with 3 to 10 turns; 2) with 2 to 4 participants; 3) with at least one correct answer selected by the community, and; 4) that fall into one of the following categories: Windows, Office, Bing, and Skype, which are the major categories of Microsoft products. The final annotated dataset contains 2, 199 multi-turn dialogues with 10, 020 utterances." }, { "figure_ref": [], "heading": "A.0.9 MIMICS-Duo", "publication_ref": [ "b50" ], "table_ref": [], "text": "The MIMICS-Duo (Tavakoli et al., 2022) dataset is proposed by researchers at RMIT University, the University of Melbourne, and the University of Massachusetts Amherst. It provides the online and offline evaluation of clarification selection and generation approaches. 
It is constructed based on the queries in MIMICS-ClickExplore (Zamani et al., 2020), a sub-dataset of MIMICS (Zamani et al., 2020) that consists of online signals, such as user engagement based on click-through rate. The MIMICS-Duo contains over 300 search queries and 1, 034 query-clarification pairs." }, { "figure_ref": [], "heading": "A.0.10 ClarQ", "publication_ref": [ "b21", "b39", "b40", "b32", "b31" ], "table_ref": [], "text": "The ClarQ dataset (Kumar and Black, 2020) was created in 2020 by Carnegie Mellon University. The ClarQ is designed for large-scale clarification question generation models. To do this, the ClarQ dataset is built with a bootstrapping framework based on self supervision approaches on top of the post-comment tuples extracted from StackExchange12 question answering community. To construct the ClarQ, they first extracted the posts and their comments from 173 domains. Then, they filtered unanswered posts and only considered comments to posts with at least one final answer as a potential candidate for a clarification question. The ClarQ dataset consists of about 2 million postquestion tuples across 173 domains.\nA.0.11 RaoCQ Rao and Daumé III [2018] from the University of Maryland study the problem of ranking clarification questions and propose an ACQs dataset on top of StackExchange. To create this dataset, they use a dump of StackExchange and create a number of post-question-answer triplets, where the post is the initial unedited request, the question is the first comment containing a question (i.e., indicated by a question mark), and the answer is either the edits made to the post after the question (i.e., the edit closest in time following the question) or the author's answer of the post to the question in the comment section. The final dataset includes a total of 77, 097 triples across three domains askubuntu, unix, and superuser.\nA.0.12 AmazonCQ Rao and Daumé III [2019] from Microsoft and the University of Maryland, released a dataset for generating clarification questions. The dataset contains a context that is a combination of product title and description from the Amazon website,a question that is a clarification question asked to the product about some missing information in the context, and the answer that is the seller's (or other users') reply to the question. To construct this dataset, they combined the Amazon Question Answering dataset created by (McAuley and Yang, 2016) and the Amazon Review dataset proposed by (McAuley et al., 2015). The final dataset consists of 15, 859 contexts (i.e., product description) with 3 to 10 clarification questions, on average 7, per context." }, { "figure_ref": [], "heading": "A.0.13 CLAQUA", "publication_ref": [ "b56" ], "table_ref": [], "text": "The CLAQUA dataset (Xu et al., 2019) was created by researchers from of Peking University, the University of Science and Technology of China, and Microsoft Research Asia in 2019. They propose the CLAQUA dataset to provide a supervised resources for training, evaluation and creating powerful models for clarification-related text understanding and generation in knowledge-based question answering (KBQA) systems. The CLAQUA dataset is constructed in three steps, (1) sub-graph extraction, (2) ambiguous question annotation, and (3) clarification question annotation. In the first step, they extract ambiguous sub-graphs from an opendomain knowledge base, like FreeBase. 
They focus on shared-name ambiguity, where two entities have the same name and there is a lack of the necessary distinguishing information. Then, in the second step, they provide a table listing the shared entity names, their types, and their descriptions. Based on this table, annotators need to write ambiguous questions. Finally, in the third step, based on the entities and the annotated ambiguous question, annotators are required to summarize the distinguishing information and write a multi-choice clarification question including a special character that separates entity and pattern information. They provide these steps for single- and multi-turn conversations. The final CLAQUA dataset contains 17,163 single-turn and 22,213 multi-turn conversations." }, { "figure_ref": [], "heading": "B Experiments on Model Performance B.1 Clarification Need Prediction", "publication_ref": [ "b1", "b2", "b50", "b56", "b10", "b28", "b30", "b11", "b57", "b24", "b58", "b22", "b23", "b42", "b13", "b35", "b55", "b0" ], "table_ref": [], "text": "Clarification need prediction is a major task in search clarification: deciding whether to ask clarification questions. Among the discussed CQ datasets, only ClariQ (Aliannejadi et al., 2020, 2021), MIMICS (Zamani et al., 2020), MIMICS-Duo (Tavakoli et al., 2022), and CLAQUA (Xu et al., 2019) provide the necessary information for the clarification need prediction task. The ClariQ and CLAQUA datasets model the clarification need prediction task as a classification problem. They both pair the initial user request with a classification label that indicates the level of clarification required. In contrast to the ClariQ and CLAQUA datasets, the task in the MIMICS and MIMICS-Duo datasets is modelled as a regression task for predicting user engagement. Specifically, these datasets aim to predict the degree to which users find the clarification process useful and enjoy interacting with it. Based on this prediction, the system can make a decision on whether or not to request clarification. We subsequently evaluated the clarification need prediction task using a variety of traditional machine learning models and language models. The traditional machine learning models employed as baselines include Random Forest (Breiman, 2001), Decision Tree (Loh, 2011), Multinomial Naive Bayes (MultinomialNB) (Manning, 2008), Support Vector Machines (SVM) (Cortes and Vapnik, 1995), and Linear Regression (Yan and Su, 2009). The language model baselines utilized include BART (Lewis et al., 2019), XLNet (Yang et al., 2019), XLM (Lample and Conneau, 2019), Albert (Lan et al., 2019), distilBERT (Sanh et al., 2019), and BERT (Devlin et al., 2018). These models were applied to both classification and regression tasks. The input to the traditional ML models is a matrix of TF-IDF features extracted from the raw input text. We use Scikit-learn (Pedregosa et al., 2011), HuggingFace (Wolf et al., 2019), and TensorFlow (Abadi et al., 2016) for the implementation of the aforementioned models. A minimal sketch of this baseline setup is given below." },
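As an illustration of the traditional-ML baseline described above, the following minimal Python sketch builds a TF-IDF representation and trains one of the listed classifiers with Scikit-learn. The variables texts (initial user requests) and labels (clarification-need labels) are assumed to have been loaded from one of the datasets, and this is a sketch rather than the exact pipeline behind the reported results.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Split the raw request texts and their clarification-need labels.
X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.2, random_state=42)

# TF-IDF features feed a Random Forest classifier, mirroring the baselines listed above.
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100, random_state=42))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), digits=4))

For the regression variant used with MIMICS and MIMICS-Duo, the classifier can be swapped for a regressor (e.g., RandomForestRegressor) and the classification report replaced by MAE, MSE, and R².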
{ "figure_ref": [], "heading": "B.2 Question Relevance Ranking Baselines", "publication_ref": [ "b2", "b5", "b4", "b29" ], "table_ref": [ "tab_4" ], "text": "To address the second task, namely asking clarification questions, many studies have explored either generation or ranking strategies. However, as we argued in Section 5, the generation techniques require rich information for satisfactory performance and are difficult to apply to datasets that do not provide the specific information they require. Therefore, we consider the ranking task for summarising model performance on the asking clarification questions task and present the results of BM25 and Doc2Query + BM25. Note that the BM25-based techniques are considered due to their competitive performance in addressing the clarification question ranking task (Aliannejadi et al., 2021). We also compare some additional ranking techniques, such as PL2 (Amati and Van Rijsbergen, 2002), DPH (Amati et al., 2008) and a recent dense retriever (i.e., ColBERT (Khattab and Zaharia, 2020)). However, the inclusion of such approaches adds little when the goal is to compare the use of different datasets. Therefore, we only present the results of the above two approaches in Table 5. As for the implementation, we leverage PyTerrier (Macdonald and Tonellotto, 2020), a recently developed Python framework for conducting information retrieval experiments." }, { "figure_ref": [], "heading": "B.3 User Satisfaction with CQs", "publication_ref": [ "b50" ], "table_ref": [], "text": "In this experiment, we explored the task of determining user satisfaction with CQs by utilizing a variety of traditional machine learning models and language models on the ACQs datasets. To conduct this experiment, we employed the same models that we previously used for the clarification need prediction task. By using the same models for both tasks, we aim to examine how well these models perform in predicting user satisfaction with CQs and how their performance compares to their performance in predicting the need for clarification. This allows us to understand the strengths and limitations of these models in predicting user satisfaction and to make informed decisions on which models to use in future applications. Only two datasets (i.e., MIMICS (Zamani et al., 2020) and MIMICS-Duo (Tavakoli et al., 2022)) out of the 12 datasets provide user satisfaction information. In both MIMICS and MIMICS-Duo, each clarification question is given a label indicating how satisfied a user is with the clarification question. For MIMICS the labels are Good, Fair, or Bad. A good clarifying question is accurate, fluent, and grammatically correct. A fair clarifying question may not meet all of these criteria but is still acceptable. Otherwise, it is considered bad. In MIMICS-Duo, users' satisfaction with clarification questions is assessed on a 5-level scale: Very Bad, Bad, Fair, Good, and Very Good. Thus, we formulate the user satisfaction with CQs task as a supervised classification problem in our experiments." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This research is supported by the Engineering and Physical Sciences Research Council [EP/S021566/1] and the EPSRC Fellowship titled \"Task Based Information Retrieval\" [EP/P024289/1]." } ]
2023-05-25
[ { "authors": "Martín Abadi; Paul Barham; Jianmin Chen; Zhifeng Chen; Andy Davis; Jeffrey Dean; Matthieu Devin ; Sanjay; Geoffrey Ghemawat; Michael Irving; Isard", "journal": "", "ref_id": "b0", "title": "{TensorFlow}: a system for {Large-Scale} machine learning", "year": "2016" }, { "authors": "Mohammad Aliannejadi; Julia Kiseleva; Aleksandr Chuklin; Jeff Dalton; Mikhail Burtsev", "journal": "", "ref_id": "b1", "title": "Convai3: Generating clarifying questions for opendomain dialogue systems (clariq)", "year": "2020" }, { "authors": "Mohammad Aliannejadi; Julia Kiseleva; Aleksandr Chuklin; Jeff Dalton; Mikhail Burtsev", "journal": "", "ref_id": "b2", "title": "Building and evaluating open-domain dialogue corpora with clarifying questions", "year": "2021" }, { "authors": "Mohammad Aliannejadi; Hamed Zamani; Fabio Crestani; W Bruce Croft", "journal": "", "ref_id": "b3", "title": "Asking clarifying questions in open-domain information-seeking conversations", "year": "2019" }, { "authors": "Giambattista Amati; Giuseppe Amodeo; Marco Bianchi; Carlo Gaibisso; Giorgio Gambosi", "journal": "", "ref_id": "b4", "title": "Fub, iasi-cnr and university of tor vergata at trec 2008 blog track", "year": "2008" }, { "authors": "Gianni Amati; Cornelis Joost Van Rijsbergen", "journal": "ACM Transactions on Information Systems (TOIS)", "ref_id": "b5", "title": "Probabilistic models of information retrieval based on measuring the divergence from randomness", "year": "2002" }, { "authors": "Avishek Anand; Lawrence Cavedon; Hideo Joho; Mark Sanderson; Benno Stein", "journal": "", "ref_id": "b6", "title": "Conversational search (dagstuhl seminar 19461)", "year": "2020" }, { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "", "ref_id": "b7", "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "M Steven; Beitzel", "journal": "", "ref_id": "b8", "title": "On understanding and classifying web queries", "year": "2006" }, { "authors": "Keping Bi; Qingyao Ai; Bruce Croft", "journal": "", "ref_id": "b9", "title": "Asking clarifying questions based on negative feedback in conversational search", "year": "2021" }, { "authors": "Leo Breiman", "journal": "Machine learning", "ref_id": "b10", "title": "Random forests", "year": "2001" }, { "authors": "Corinna Cortes; Vladimir Vapnik", "journal": "Machine learning", "ref_id": "b11", "title": "Supportvector networks", "year": "1995" }, { "authors": "Yashar Deldjoo; Johanne R Trippas; Hamed Zamani", "journal": "", "ref_id": "b12", "title": "Towards multi-modal conversational information seeking", "year": "2021" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b13", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Yue Feng; A Hossein; Aldo Rahmani; Emine Lipani; Yilmaz", "journal": "", "ref_id": "b14", "title": "Towards asking clarification questions for information seeking on task-oriented dialogues", "year": "2023" }, { "authors": "Jianfeng Gao; Michel Galley; Lihong Li", "journal": "", "ref_id": "b15", "title": "Neural approaches to conversational ai", "year": "2018" }, { "authors": "Jianfeng Gao; Michel Galley; Lihong Li", "journal": "Now Foundations and Trends", "ref_id": "b16", "title": "Neural approaches to conversational AI: Question answering, task-oriented dialogues and social chatbots", "year": "2019" }, { "authors": "Kalervo Jarvelin", "journal": 
"", "ref_id": "b17", "title": "Ir evaluation methods for retrieving highly relevant documents", "year": "2000-07" }, { "authors": "Kalervo Järvelin; Jaana Kekäläinen", "journal": "ACM", "ref_id": "b18", "title": "Ir evaluation methods for retrieving highly relevant documents", "year": "2017" }, { "authors": "Omar Khattab; Matei Zaharia", "journal": "", "ref_id": "b19", "title": "Colbert: Efficient and effective passage search via contextualized late interaction over bert", "year": "2020" }, { "authors": "Antonios Minas Krasakis; Mohammad Aliannejadi; Nikos Voskarides; Evangelos Kanoulas", "journal": "", "ref_id": "b20", "title": "Analysing the effect of clarifying questions on document ranking in conversational search", "year": "2020" }, { "authors": "Vaibhav Kumar; Alan W Black", "journal": "", "ref_id": "b21", "title": "Clarq: A large-scale and diverse dataset for clarification question generation", "year": "2020" }, { "authors": "Guillaume Lample; Alexis Conneau", "journal": "", "ref_id": "b22", "title": "Crosslingual language model pretraining", "year": "2019" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b23", "title": "Albert: A lite bert for self-supervised learning of language representations", "year": "2019" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Ves Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b24", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2019" }, { "authors": "Jiwei Li; Alexander H Miller; Sumit Chopra; Marc'aurelio Ranzato; Jason Weston", "journal": "", "ref_id": "b25", "title": "Dialogue learning with human-in-the-loop", "year": "2017" }, { "authors": "Margaret Li; Jason Weston; Stephen Roller", "journal": "", "ref_id": "b26", "title": "Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons", "year": "2019" }, { "authors": "Chin-Yew Lin", "journal": "", "ref_id": "b27", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Wei-Yin Loh", "journal": "Wiley interdisciplinary reviews: data mining and knowledge discovery", "ref_id": "b28", "title": "Classification and regression trees", "year": "2011" }, { "authors": "Craig Macdonald; Nicola Tonellotto", "journal": "", "ref_id": "b29", "title": "Declarative experimentation ininformation retrieval using pyterrier", "year": "2020" }, { "authors": "D Christopher; Manning", "journal": "Syngress Publishing", "ref_id": "b30", "title": "Introduction to information retrieval", "year": "2008" }, { "authors": "Julian Mcauley; Christopher Targett; Qinfeng Shi; Anton Van Den; Hengel", "journal": "", "ref_id": "b31", "title": "Image-based recommendations on styles and substitutes", "year": "2015" }, { "authors": "Julian Mcauley; Alex Yang", "journal": "", "ref_id": "b32", "title": "Addressing complex and subjective product-related queries with customer reviews", "year": "2016" }, { "authors": "Rodrigo Nogueira; Wei Yang; Jimmy Lin; Kyunghyun Cho", "journal": "", "ref_id": "b33", "title": "Document expansion by query prediction", "year": "2019" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b34", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "F Pedregosa; G Varoquaux; A Gramfort; V Michel; B 
Thirion; O Grisel; M Blondel; P Prettenhofer; R Weiss; V Dubourg; J Vanderplas; A Passos; D Cournapeau; M Brucher; M Perrot; E Duchesnay", "journal": "Journal of Machine Learning Research", "ref_id": "b35", "title": "Scikit-learn: Machine learning in Python", "year": "2011" }, { "authors": "Gustavo Penha; Alexandru Balan; Claudia Hauff", "journal": "", "ref_id": "b36", "title": "Introducing mantis: a novel multi-domain information seeking dialogues dataset", "year": "2019" }, { "authors": "Chen Qu; Liu Yang; Bruce Croft; Johanne R Trippas; Yongfeng Zhang; Minghui Qiu", "journal": "", "ref_id": "b37", "title": "Analyzing and characterizing user intent in information-seeking conversations", "year": "2018" }, { "authors": "Hong Dragomir R Radev; Harris Qi; Weiguo Wu; Fan", "journal": "", "ref_id": "b38", "title": "Evaluating web-based question answering systems", "year": "2002" }, { "authors": "Sudha Rao; Hal Daumé; Iii ", "journal": "", "ref_id": "b39", "title": "Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information", "year": "2018" }, { "authors": "Sudha Rao; Hal Daumé; Iii ", "journal": "", "ref_id": "b40", "title": "Answer-based adversarial training for generating clarification questions", "year": "2019" }, { "authors": "Corbin Rosset; Chenyan Xiong; Xia Song; Daniel Campos; Nick Craswell; Saurabh Tiwary; Paul Bennett", "journal": "", "ref_id": "b41", "title": "Leading conversational search by suggesting useful questions", "year": "2020" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b42", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Abigail See; Stephen Roller; Douwe Kiela; Jason Weston", "journal": "", "ref_id": "b43", "title": "What makes a good conversation? 
how controllable attributes affect human judgments", "year": "2019" }, { "authors": "Ivan Sekulić; Mohammad Aliannejadi; Fabio Crestani", "journal": "", "ref_id": "b44", "title": "Towards facet-driven generation of clarifying questions for conversational search", "year": "2021" }, { "authors": "Ivan Sekulić; Mohammad Aliannejadi; Fabio Crestani", "journal": "", "ref_id": "b45", "title": "Exploiting document-based features for clarification in conversational search", "year": "2022" }, { "authors": "Taihua Shao; Fei Cai; Wanyu Chen; Honghui Chen", "journal": "Information Sciences", "ref_id": "b46", "title": "Self-supervised clarification question generation for ambiguous multi-turn conversation", "year": "2022" }, { "authors": "Zhengxiang Shi; Yue Feng; Aldo Lipani", "journal": "", "ref_id": "b47", "title": "Learning to execute or ask clarification questions", "year": "2022" }, { "authors": "Zhengxiang Shi; Jerome Ramos; Eun To; Xi Kim; Hossein A Wang; Aldo Rahmani; Lipani", "journal": "", "ref_id": "b48", "title": "When and what to ask through world states and text instructions: Iglu nlp challenge solution", "year": "2023" }, { "authors": "Leila Tavakoli", "journal": "", "ref_id": "b49", "title": "Generating clarifying questions in conversational search systems", "year": "2020" }, { "authors": "Leila Tavakoli; Johanne R Trippas; Hamed Zamani; Falk Scholer; Mark Sanderson", "journal": "", "ref_id": "b50", "title": "Mimics-duo: Offline & online evaluation of search clarification", "year": "2022" }, { "authors": "Leila Tavakoli; Hamed Zamani; Falk Scholer; William Bruce Croft; Mark Sanderson", "journal": "Journal of the Association for Information Science and Technology", "ref_id": "b51", "title": "Analyzing clarification in asynchronous informationseeking conversations", "year": "2021" }, { "authors": "Damiano Johanne R Trippas; Lawrence Spina; Hideo Cavedon; Mark Joho; Sanderson", "journal": "", "ref_id": "b52", "title": "Informing the design of spoken conversational search: Perspective paper", "year": "2018" }, { "authors": "Ellen M Voorhees", "journal": "Trec", "ref_id": "b53", "title": "The trec-8 question answering track report", "year": "1999" }, { "authors": "Yining Wang; Liwei Wang; Yuanzhi Li; Di He; Tie-Yan Liu", "journal": "PMLR", "ref_id": "b54", "title": "A theoretical analysis of ndcg type ranking measures", "year": "2013" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b55", "title": "Huggingface's transformers: State-ofthe-art natural language processing", "year": "2019" }, { "authors": "Jingjing Xu; Yuechen Wang; Duyu Tang; Nan Duan; Pengcheng Yang; Qi Zeng; Ming Zhou; Xu Sun", "journal": "", "ref_id": "b56", "title": "Asking clarification questions in knowledgebased question answering", "year": "2019" }, { "authors": "Xin Yan; Xiaogang Su", "journal": "", "ref_id": "b57", "title": "Linear regression analysis: theory and computing", "year": "2009" }, { "authors": "Zhilin Yang; Zihang Dai; Yiming Yang; Jaime Carbonell; Russ R Salakhutdinov; Quoc V Le", "journal": "Advances in neural information processing systems", "ref_id": "b58", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "year": "2019" }, { "authors": "Munazza Zaib; Wei Emma Zhang; Z Quan; Adnan Sheng; Yang Mahmood; Zhang", "journal": "", "ref_id": "b59", "title": "Conversational question answering: A survey", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 330.61, 120.93, 166.6, 163.51 ], "formula_id": "formula_0", "formula_text": "Dataset Task Eval. Method T 1 T 2 T 3 Conv. Search ClariT (2023) ✓ G - Offline Qulac (2019) - R - Offline ClariQ (2021) ✓ R - Offline TavakoliCQ (2021) - G - Offline MIMICS (2020) ✓ R, G ✓ Offline/Online MANtIS (2019) - R, G - Offline ClariQ-FKw (2021) - G - Offline MSDialog (2018) - R, G - Offline MIMICS-Duo (2022) ✓ R, G ✓ Offline/Online Conv. QA ClarQ (2020) - R - Offline RaoCQ (2018) - R - Offline AmazonCQ (2019) - G - Offline CLAQUA (2019) ✓ G - Offline" } ]
A Survey on Asking Clarification Questions Datasets in Conversational Systems
The ability to understand a user's underlying needs is critical for conversational systems, especially with the limited input users provide in a conversation. Thus, in such a domain, Asking Clarification Questions (ACQs) to reveal users' true intent from their queries or utterances arises as an essential task. However, a key limitation of existing ACQs studies is their incomparability, stemming from inconsistent use of data and distinct experimental setups and evaluation strategies. Therefore, in this paper, to assist the development of ACQs techniques, we comprehensively analyse the current ACQs research status, offering a detailed comparison of publicly available datasets and discussing the applied evaluation metrics, together with benchmarks for multiple ACQs-related tasks. In particular, based on a thorough analysis of the ACQs task, we discuss a number of corresponding research directions for the investigation of ACQs as well as the development of conversational systems.
Hossein A Rahmani; Xi Wang; Yue Feng; Qiang Zhang; Emine Yilmaz; Aldo Lipani
[ { "figure_caption": "Figure 1: tSNE on ACQ Datasets", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "reveals that clarification questions in Conv. Search are very focused while the clarification questions in Conv. QA datasets are more widely distributed. This indicates the high similarities among the Conv. Search-based data and the resulting necessity of properly selecting those publicly available datasets.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "A statistical summary of ACQ datasets for both Conv. Search and Conv. QA. The highlighted colours indicate the distinct corpus size of datasets (best viewed in colour).", "figure_data": "Dataset# Domains Scale # Clar. Q LinkConversational SearchClariT", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "A Summary of collection details of ACQ datasets. '-' means that the information is not available. 'SE' is StackExchange, 'MC' refers to Microsoft Community, and 'KB' is Knowledge Base. The detailed information of each dataset, such as the exact source domains, can be accessed in Appendix A.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Summary of tasks and evaluation method on ACQs datasets. The tasks can be generation and ranking, which are indicated by 'G' and 'R', respectively.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Clarification need prediction performance of best representative methods from traditional ML and language models (RandomForest and BERT) on datasets. ↑ or ↓ is added to BERT to indicate a consistent performance change on all evaluation metrics. (The results of all methods are added to Table 7 in Appendix B.1).", "figure_data": "ModelPrecision RecallF1ClariQRandomForest0.35400.38060.3717BERT0.38040.32490.3344CLAQUARandomForest0.28600.50000.3638BERT ↑0.63490.6250.6255ModelMAEMSER 2MIMICSRandomForest2.44047.969-0.0012BERT ↓2.45628.1277 -0.0211MIMICS-DuoRandomForest2.850211.206 -0.0079BERT ↓2.880111.2268 -0.0098", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Question relevance ranking performance evaluation on representative approaches. 'P' and 'R' refers to Precision and Recall. ↑ or ↓ is added to Doc2Query + BM25 to indicate a consistent performance change to BM25 on all evaluation metrics.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "User satisfaction prediction with CQs performance of running best representative methods from traditional ML and language models (MultinomialNB and distilBERT) on datasets. ↑ is added to distilBERT to indicate a consistent performance improvement on all evaluation metrics. (The results of all methods are added on Table8in Appendix B.3).", "figure_data": "ModelPrecision RecallF1MIMICSMultinomialNB0.82550.7842 0.7758distilBERT ↑0.94530.9397 0.939MIMICS-DuoMultinomialNB0.44070.2787 0.2336distilBERT0.27660.2803 0.2777", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "The performance of all methods on clarification need prediction on MIMICS and MIMICS-Duo. 
The best models are in bold.", "figure_data": "ModelMIMICS Precision RecallF1MIMICS-Duo Precision RecallF1RandomForest0.35400.38060.37170.28600.50000.3638DecisionTree0.21250.25200.20280.53290.50950.4305SVM0.28580.30240.27720.52810.50880.4333MultinomialNB0.29240.31860.28760.51850.51780.5166LogisticRegression0.27490.28780.28160.78620.50100.3660BART0.50830.33440.36570.58690.55030.5194XLNet0.13850.25000.17820.2860.50.3638XLM0.01190.25000.02270.2860.50.3638Albert0.29200.28770.28550.2860.50.3638distilBERT0.33910.33050.33220.59410.5940.5941BERT0.38040.32490.33440.63490.6250.6255MIMICSMIMICS-DuoMAEMSER 2MAEMSER 2RandomForest2.44047.969-0.00122.850211.206 -0.0079DecisionTree2.637410.0143 -0.25813.05214.2306 -0.2799SVR2.44478.1852 -0.02832.780114.6398 -0.3167MultinomialNB3.336416.7424 -1.10342.797118.942 -0.7037LogisticRegression3.408417.9488 -1.25492.797118.942 -0.7037BART2.39038.5296 -0.07162.723310.3239 0.0714XLNet2.45828.1836 -0.02812.797118.942 -0.7037XLM2.62149.9151 -0.24562.797118.942 -0.7037Albert2.43398.0300 -0.00882.797118.942 -0.7037distilBERT2.33257.86850.01152.774411.0613 0.0051BERT2.45628.1277 -0.02112.880111.2268 -0.0098", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "The performance of all methods on user satisfaction prediction with CQs on MIMICS and MIMICS-Duo. The best models are in bold.", "figure_data": "ModelMIMICS Precision RecallF1MIMICS-Duo Precision RecallF1RandomForest0.75220.5172 0.36860.12560.250.1672DecisionTree0.56480.5168 0.40500.22180.2311 0.2163SVM0.7360.5947 0.52120.23790.2498 0.2157MultinomialNB0.82550.7842 0.77580.44070.2787 0.2336LogisticRegression0.75220.5172 0.36860.37620.2542 0.1761BART0.93850.931 0.93020.12560.250.1672XLNet0.92190.9217 0.92170.12560.250.1672XLM0.93480.9309 0.93030.12560.250.1672Albert0.93850.931 0.93020.12560.250.1672distilBERT0.94530.9397 0.9390.27660.2803 0.2777BERT0.93850.931 0.93020.28510.264 0.2056", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" } ]
[{"Category": "Supporting Evidence", "Citation": "(Shi et al., 2022)", "Explanation": "The cited work by Shi et al. (2022) provides a basis for the discussion of ACQs in human-human and human-machine collaborative tasks, highlighting the importance of asking clarification questions to improve efficiency in these scenarios."}, {"Category": "Supporting Evidence", "Citation": "(Zou et al., 2023)", "Explanation": "The work by Zou et al. (2023) further supports the claim that ACQs are a common mechanism for boosting efficiency in human-machine collaboration, providing additional evidence in the context of human-machine collaboration."}, {"Category": "Supporting Evidence", "Citation": "(Shi et al., 2023)", "Explanation": "The work by Shi et al. (2023) adds to the discussion by discussing the use of ACQs in human-human and human-machine collaborative tasks, emphasizing the need for effective and efficient communication in these scenarios."}, {"Category": "Supporting Evidence", "Citation": "(Feng et al., 2023)", "Explanation": "The work by Feng et al. (2023) further supports the claim that ACQs are a key mechanism for improving efficiency in human-human and human-machine collaboration, highlighting the need for accurate and efficient responses in conversational systems."}, {"Category": "Methodological Basis", "Citation": "(Gao et al., 2018)", "Explanation": "The cited work by Gao et al. (2018) provides a methodological basis for the development of conversational systems that are capable of answering various questions accurately and efficiently, by focusing on capturing people's intents."}, {"Category": "Supporting Evidence", "Citation": "(Anand et al., 2020)", "Explanation": "The work by Anand et al. (2020) further supports the claim that capturing people's intents is essential for effective and efficient responses in conversational systems, highlighting the need for accurate and efficient responses in these systems."}, {"Category": "Supporting Evidence", "Citation": "(Zamani et al., 2022)", "Explanation": "The work by Zamani et al. (2022) adds to the discussion by discussing the use of ACQs in conversational systems, emphasizing the need for accurate and efficient responses in these systems."}, {"Category": "Data Source", "Citation": "(Li et al., 2017)", "Explanation": "The cited work by Li et al. (2017) is mentioned as a study that introduced ACQs to assist conversational systems in a specific domain, which provides a data source for the citing paper to reference in their research on ACQs."}, {"Category": "Data Source", "Citation": "(Aliannejadi et al., 2019)", "Explanation": "The cited work by Aliannejadi et al. (2019) is mentioned as another study that introduced ACQs to assist conversational systems in a different domain, providing another data source for the citing paper to reference in their research on ACQs."}, {"Category": "Extension or Continuation", "Citation": "(Kumar and Black, 2020)", "Explanation": "The cited work by Kumar and Black (2020) is mentioned as a study that released a publicly available dataset in the ACQ research direction, indicating a continuation of the research in this area and providing a dataset for the citing paper to reference in their research."}, {"Category": "Extension or Continuation", "Citation": "(Zamani et al., 2020)", "Explanation": "The cited work by Zamani et al. 
(2020) is mentioned as another study that released a publicly available dataset in the ACQ research direction, further indicating a continuation of the research in this area and providing another dataset for the citing paper to reference in their research."}, {"Category": "Extension or Continuation", "Citation": "(Feng et al., 2023)", "Explanation": "The cited work by Feng et al. (2023) is mentioned as a study that released a publicly available dataset in the ACQ research direction, further indicating a continuation of the research in this area and providing another dataset for the citing paper to reference in their research."}, {"Category": "Data Source", "Citation": "(Feng et al. 2023)", "Explanation": "The cited work is the source of the implementation code for the experiments conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Aliannejadi et al. 2019)", "Explanation": "The cited work provides the method of Qulac, which is adopted in the citing paper to conduct experiments on a dataset."}, {"Category": "Methodological Basis", "Citation": "(Aliannejadi et al. 2021)", "Explanation": "The cited work introduces the method of ClariQ, which the citing paper uses to run experiments on a dataset."}, {"Category": "Methodological Basis", "Citation": "(Tavakoli et al. 2021)", "Explanation": "The cited work provides the method of TavakoliCQ, which the citing paper utilizes in its experiments on a dataset."}, {"Category": "Data Source", "Citation": "(Zamani et al.", "Explanation": "The cited work is the source of the data used in the experiments conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Qu et al., 2018)", "Explanation": "The cited work, MSDialog, serves as the basis for the method used in the citing paper to generate conversational question answering systems."}, {"Category": "Data Source", "Citation": "(Tavakoli et al., 2022)", "Explanation": "The cited work, MIMICS-Dou, provides a dataset that the citing paper uses in their research on generating conversational question answering systems."}, {"Category": "Extension or Continuation", "Citation": "(Kumar and Black, 2020)", "Explanation": "The cited work, ClarQ, is extended in the citing paper to further explore the field of conversational question answering and generate systems that are more effective and efficient."}, {"Category": "Data Source", "Citation": "(Rao and Daum\u00e9 III, 2018)", "Explanation": "The cited work, RaoCQ, serves as a data source for the research conducted in the citing paper on generating conversational question answering systems."}, {"Category": "Data Source", "Citation": "(Penha et al., 2019)", "Explanation": "The cited work, MANtIS, provides a dataset that the citing paper uses in their research on generating conversational question answering systems."}, {"Category": "Methodological Basis", "Citation": "(Xu et al. 2019)", "Explanation": "The cited work provides a method for generating clarification questions, which the citing paper adopts in their research on ACQ datasets."}, {"Category": "Data Source", "Citation": "(Xu et al. 2019)", "Explanation": "The cited work introduces the MSParS dataset, which the citing paper utilizes in their research on ACQ techniques."}, {"Category": "Methodological Basis", "Citation": "(Gao et al., 2018)", "Explanation": "The cited work by Gao et al. 
provides a classification of conversational systems into four main categories, which the citing paper adopts in its own research to structure the discussion of conversation systems."}, {"Category": "Data Source", "Citation": "(Gao et al., 2019)", "Explanation": "The cited work by Gao et al. is used to acknowledge the origin of the categories of conversation systems discussed in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Anand et al., 2020)", "Explanation": "The cited work by Anand et al. is mentioned as a continuation of the research on conversational systems, specifically in the context of conversational search and conversational question answering systems."}, {"Category": "Data Source", "Citation": "(Zaib et al., 2021)", "Explanation": "The cited work by Zaib et al. is used to acknowledge the origin of the research on conversational search and conversational question answering systems."}, {"Category": "Methodological Basis", "Citation": "(Zamani et al., 2020)", "Explanation": "The cited work by Zamani et al. provides a framework for understanding the three main tasks in ACQ systems, which the citing paper builds upon to structure its own research on the topic."}, {"Category": "Extension or Continuation", "Citation": "(Tavakoli et al., 2022)", "Explanation": "The cited work by Tavakoli et al. extends the research on ACQ systems by providing a new perspective on the tasks involved in the system, which the citing paper further explores in its own study."}, {"Category": "Extension or Continuation", "Citation": "(Aliannejadi et al., 2019)", "Explanation": "The cited work by Aliannejadi et al. provides a new approach to understanding the tasks in ACQ systems, which the citing paper builds upon to expand the research in this area."}, {"Category": "Data Source", "Citation": "(Aliannejadi et al., 2019)", "Explanation": "The dataset Qulac is cited to acknowledge the origin of a clarification question dataset in an open-domain information-seeking conversational search setting with a joint offline evaluation framework."}, {"Category": "Data Source", "Citation": "(Aliannejadi et al., 2020)", "Explanation": "The dataset ClariQ is cited to highlight the extension of the Qulac dataset with additional crowdsourced topics, questions, and answers in the training corpus and synthetic multi-turn conversations."}, {"Category": "Data Source", "Citation": "(Tavakoli et al., 2021;Tavakoli, 2020)", "Explanation": "The cited work provides the source of the clarification questions collected from the StackExchange QA community, which serves as a dataset for the citing paper."}, {"Category": "Data Source", "Citation": "(Zamani et al., 2020)", "Explanation": "The cited work is the source of the MIMICS dataset, which includes three sub-datasets based on clarification panes in Microsoft Bing and is used in the citing paper for research purposes."}, {"Category": "Data Source", "Citation": "(Penha et al., 2019)", "Explanation": "The cited work is the source of the MANtIS dataset, a multidomain conversational information-seeking dataset sourced from StackExchange, which the citing paper uses for research on user intent annotations."}, {"Category": "Data Source", "Citation": "(Qu et al., 2018)", "Explanation": "The MSDialog dataset is used as a source of dialogues and user intent types for the research conducted in the cited work."}, {"Category": "Data Source", "Citation": "(Tavakoli et al., 2022)", "Explanation": "The MIMICS-Duo dataset is cited as a data source for the online and offline 
evaluations conducted in the research of the cited work."}, {"Category": "Data Source", "Citation": "(Zaib et al., 2021)", "Explanation": "The cited work provides the idea of Conversational Question Answering (Conv. QA), which is a conversational interface for asking the system a question about a provided passage."}, {"Category": "Supporting Evidence", "Citation": "(Kumar and Black, 2020)", "Explanation": "The cited work introduces the ClarQ dataset, which is a large-scale dataset sourced from post-question pairs in StackExchange and developed with self-supervised approaches within a bootstrapping framework."}, {"Category": "Supporting Evidence", "Citation": "(Rao and Daum\u00e9 III, 2018)", "Explanation": "The cited work introduces the RaoCQ dataset, which is another StackExchange-based dataset with a large volume of post-question-answer triples from three selected domains."}, {"Category": "Supporting Evidence", "Citation": "(Rao and Daum\u00e9 III, 2019)", "Explanation": "The cited work introduces the AmazonCQ dataset, which is an Amazon platform-based Clarification QA dataset with questions targeting the missing information of products and answers provided by sellers or other users."}, {"Category": "Data Source", "Citation": "(Xu et al., 2019)", "Explanation": "The cited work, CLAQUA, serves as the data source for the evaluation of text understanding and generation modules in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Jarvelin, 2000)", "Explanation": "The cited work by Jarvelin (2000) provides the basis for the use of MAP (Mean Average Precision) as a metric for evaluating the performance of ACQ-based conversational systems in the ranking route."}, {"Category": "Methodological Basis", "Citation": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2017)", "Explanation": "The cited work by J\u00e4rvelin and Kek\u00e4l\u00e4inen (2017) introduces the concept of Precision as a metric for evaluating the performance of ACQ-based conversational systems in the ranking route."}, {"Category": "Methodological Basis", "Citation": "(Jarvelin, 2000)", "Explanation": "The cited work by Jarvelin (2000) also provides the basis for the use of Recall as a metric for evaluating the performance of ACQ-based conversational systems in the ranking route."}, {"Category": "Methodological Basis", "Citation": "(Beitzel, 2006)", "Explanation": "The cited work by Beitzel (2006) introduces the concept of F1-score as a metric for evaluating the performance of ACQ-based conversational systems in the ranking route."}, {"Category": "Methodological Basis", "Citation": "(Wang et al., 2013)", "Explanation": "The cited work by Wang et al. (2013) provides the basis for the use of nDCG (Normalized Discounted Cumulative Gain) as a metric for evaluating the performance of ACQ-based conversational systems in the ranking route."}, {"Category": "Methodological Basis", "Citation": "(Voorhees et al., 1999;Radev et al., 2002)", "Explanation": "The cited works by Voorhees et al. (1999) and Radev et al. 
(2002) provide the basis for the use of MRR (Mean Reciprocal Rank) as a metric for evaluating the performance of ACQ-based conversational systems in the ranking route."}, {"Category": "Methodological Basis", "Citation": "(Beitzel, 2006)", "Explanation": "The cited work by Beitzel (2006) also provides the basis for the use of MSE (Mean Square Error) as a metric for evaluating the performance of ACQ-based conversational systems in the ranking route."}, {"Category": "Methodological Basis", "Citation": "(Papineni et al., 2002)", "Explanation": "The cited work by Papineni et al. (2002) introduces the BLEU metric, which the citing paper adopts in evaluating the performance of machine translation and text summarization results in the ACQ task."}, {"Category": "Methodological Basis", "Citation": "(Banerjee and Lavie, 2005)", "Explanation": "The cited work by Banerjee and Lavie (2005) presents the METEOR metric, which the citing paper uses in evaluating the performance of machine translation and text summarization results in the ACQ task."}, {"Category": "Methodological Basis", "Citation": "(Lin, 2004)", "Explanation": "The cited work by Lin (2004) introduces the ROUGE metric, which the citing paper applies in evaluating the performance of machine translation and text summarization results in the ACQ task."}, {"Category": "Extension or Continuation", "Citation": "(Sekuli\u0107 et al., 2021)", "Explanation": "The cited work by Sekuli\u0107 et al. (2021) extends the use of BLEU and ROUGE metrics in the ACQ task by applying them in evaluating the performance of models in the field."}, {"Category": "Extension or Continuation", "Citation": "(Zhang and Zhu, 2021)", "Explanation": "The cited work by Zhang and Zhu (2021) continues the use of BLEU and ROUGE metrics in the ACQ task by applying them in evaluating the performance of models in the field."}, {"Category": "Extension or Continuation", "Citation": "(Shao et al., 2022)", "Explanation": "The cited work by Shao et al. (2022) further extends the use of BLEU and ROUGE metrics in the ACQ task by applying them in evaluating the performance of models in the field."}, {"Category": "Supporting Evidence", "Citation": "(Li et al., 2019)", "Explanation": "The cited work by Li et al. 
provides a set of human evaluation metrics that are used in the citing paper to assess the quality of generated or selected clarifying questions."}, {"Category": "Data Source", "Citation": "(Krasakis et al., 2020)", "Explanation": "The cited work is used as a reference for the Qulac dataset, which is utilized in the experimental setup of the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Bi et al., 2021)", "Explanation": "The citing paper adopts the few-turn-based setup from the cited work to ask clarification questions in the Qulac dataset."}, {"Category": "Data Source", "Citation": "(Aliannejadi et al., 2019)", "Explanation": "The cited work provides the ranking-based models used in the study to evaluate the performance of the question retrieval model in the citing paper."}, {"Category": "Data Source", "Citation": "(Aliannejadi et al., 2021)", "Explanation": "The cited work provides the language models used in the study to evaluate the performance of the question relevance model in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Krasakis et al., 2020;Sekuli\u0107 et al., 2022;Zhao et al., 2022)", "Explanation": "The cited works introduced techniques that were tested on a single dataset to show their top performance, which led to a generalisability concern. The citing paper extends the research by developing a benchmark to evaluate ACQ techniques and identify the state-of-the-art performance."}, {"Category": "Supporting Evidence", "Citation": "(Nogueira et al., 2019)", "Explanation": "The cited work provides the query expansion technique (Doc2Query) that is used in the citing paper to improve the ranking performance in conversational search datasets."}, {"Category": "Data Source", "Citation": "(Trippas et al., 2018)", "Explanation": "The cited work is used to provide a description of collecting large datasets for ACQs techniques, which the citing paper uses in the development of their own research."}, {"Category": "Data Source", "Citation": "(Deldjoo et al., 2021)", "Explanation": "The cited work is used to acknowledge the existence of a multi-modal conversational information seeking challenge in conversational systems, which is relevant to the research conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "Amazon Alexa 4", "Explanation": "The cited work is mentioned as a reference to the first conversational system challenge that incorporated multi-modal customer experience, indicating that the citing paper may build upon or expand upon this challenge in some way."}, {"Category": "Data Source", "Citation": "(Feng et al., 2023)", "Explanation": "The ClariT dataset is the primary data source for the study conducted in the citing paper, providing the basis for the research on asking clarification questions in task-oriented conversational information seeking."}, {"Category": "Data Source", "Citation": "(Aliannejadi et al., 2019)", "Explanation": "The Qulac dataset is a foundational element for the study conducted in the citing paper, as it provides the data and evaluation framework for research on clarification questions in open-domain information-seeking conversational search systems."}, {"Category": "Data Source", "Citation": "(Aliannejadi et al., , 2021)", "Explanation": "The cited work is the ClariQ dataset, which serves as a foundational data source for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Tavakoli et al., 2021)", "Explanation": "The cited work by 
Tavakoli et al. is the source of the dataset used in the citing paper to explore the use of ACQs in disambiguation of user requests and information needs."}, {"Category": "Extension or Continuation", "Citation": "(Tavakoli, 2020)", "Explanation": "The cited work by Tavakoli is an extension of the research by Tavakoli et al., focusing on the same topic of ACQs and their use in disambiguation, but with a more in-depth analysis of the clarification questions extracted from the StackExchange community."}, {"Category": "Data Source", "Citation": "(Zamani et al., 2020)", "Explanation": "The cited work by Zamani et al. serves as the data source for the large-scale search clarification dataset used in the citing paper, providing a foundational element for the research conducted."}, {"Category": "Data Source", "Citation": "(Penha et al., 2019)", "Explanation": "The MANtIS dataset is a large-scale dataset used in the research conducted in the citing paper, providing a foundational element for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Aliannejadi et al., 2020)", "Explanation": "The cited work by Aliannejadi et al. (2020) serves as the data source for the ClariQ dataset, which the researchers from the University of Amsterdam and the Universit\u00e0 della Svizzera Italiana use to construct the ClariQ-FKw dataset."}, {"Category": "Data Source", "Citation": "(Qu et al., 2018)", "Explanation": "The MSDialog dataset is a foundational element for the study conducted in the citing paper, as it is used to analyze information-seeking conversations in conversational search systems."}, {"Category": "Data Source", "Citation": "(Tavakoli et al., 2022)", "Explanation": "The cited work is the source of the MIMICS-Duo dataset, which the citing paper utilizes in its research on clarification selection and generation approaches."}, {"Category": "Supporting Evidence", "Citation": "(Zamani et al., 2020)", "Explanation": "The cited work provides the online signals and query-clarification pairs that form the basis for the construction of the MIMICS-Duo dataset in the citing paper."}, {"Category": "Data Source", "Citation": "[2018]", "Explanation": "The cited work by Rao and Daum\u00e9 III from the University of Maryland provides the ACQs dataset, which serves as a data source for the study conducted in the citing paper on ranking clarification questions."}, {"Category": "Data Source", "Citation": "[2019]", "Explanation": "The cited work by Rao and Daum\u00e9 III provides a dataset for generating clarification questions, which the citing paper uses as a source of data for their research."}, {"Category": "Data Source", "Citation": "(McAuley and Yang, 2016)", "Explanation": "The cited work provides the Amazon Question Answering dataset that is used as a data source in the construction of the final dataset in the citing paper."}, {"Category": "Data Source", "Citation": "(McAuley et al., 2015)", "Explanation": "The cited work contributes the Amazon Review dataset, which is combined with the Amazon Question Answering dataset to form the final dataset in the citing paper."}, {"Category": "Data Source", "Citation": "(Xu et al., 2019)", "Explanation": "The CLAQUA dataset is a supervised resource for training and evaluation in knowledge-based question answering systems, and the citing paper utilizes this dataset as a foundational element for their research."}, {"Category": "Data Source", "Citation": "(Aliannejadi et al., , 2021)", "Explanation": "The cited work by Aliannejadi et al. 
provides the ClariQ dataset, which is a key data source for the clarification need prediction task in the citing paper."}, {"Category": "Data Source", "Citation": "Zamani et al., 2020", "Explanation": "The cited work by Zamani et al. presents the MIMICS dataset, which is a data source for the clarification need prediction task in the citing paper."}, {"Category": "Data Source", "Citation": "Tavakoli et al., 2022", "Explanation": "The cited work by Tavakoli et al. introduces the MIMICS-Duo dataset, which is a data source for the clarification need prediction task in the citing paper."}, {"Category": "Data Source", "Citation": "Xu et al., 2019", "Explanation": "The cited work by Xu et al. presents the CLAQUA dataset, which is a data source for the clarification need prediction task in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Breiman, 2001)", "Explanation": "The cited work by Breiman (2001) provides the Random Forest model, which is employed as a baseline in the prediction task for clarification needs in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Loh, 2011)", "Explanation": "The cited work by Loh (2011) introduces the Decision Tree model, which is used as a baseline in the prediction task for clarification needs in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Manning, 2008)", "Explanation": "The cited work by Manning (2008) presents the Multinomial Naive Bayes model, which is employed as a baseline in the prediction task for clarification needs in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Cortes and Vapnik, 1995)", "Explanation": "The cited work by Cortes and Vapnik (1995) introduces the Support Vector Machines model, which is utilized as a baseline in the prediction task for clarification needs in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Yan and Su, 2009)", "Explanation": "The cited work by Yan and Su (2009) presents the Linear Regression model, which is employed as a baseline in the prediction task for clarification needs in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Lewis et al., 2019)", "Explanation": "The cited work by Lewis et al. (2019) introduces the BART model, which is used as a baseline in the prediction task for clarification needs in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Yang et al., 2019)", "Explanation": "The cited work by Yang et al. (2019) presents the XLNet model, which is employed as a baseline in the prediction task for clarification needs in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Lample and Conneau, 2019)", "Explanation": "The cited work by Lample and Conneau (2019) introduces the XLM model, which is used as a baseline in the prediction task for clarification needs in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Lan et al., 2019)", "Explanation": "The cited work by Lan et al. (2019) presents the Albert model, which is employed as a baseline in the prediction task for clarification needs in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Sanh et al., 2019)", "Explanation": "The cited work by Sanh et al. (2019) introduces the distilBERT model, which is used as a baseline in the prediction task for clarification needs in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Devlin et al., 2018)", "Explanation": "The cited work by Devlin et al. 
(2018) presents the BERT model, which is employed as a baseline in the prediction task for clarification needs in the citing paper."}, {"Category": "Data Source", "Citation": "(Pedregosa et al., 2011)", "Explanation": "The cited work is a data source for the implementation of the models used in the citing paper."}, {"Category": "Data Source", "Citation": "(Wolf et al., 2019)", "Explanation": "The cited work is a data source for the implementation of the models used in the citing paper."}, {"Category": "Data Source", "Citation": "(Abadi et al., 2016)", "Explanation": "The cited work is a data source for the implementation of the models used in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Aliannejadi et al., 2021)", "Explanation": "The cited work provides a comparison of BM25-based techniques for addressing the clarification question ranking task, which the citing paper adopts to compare the performance of different approaches in addressing the same task."}, {"Category": "Data Source", "Citation": "(Amati and Van Rijsbergen, 2002)", "Explanation": "The cited work introduces the PL2 approach for information retrieval, which the citing paper uses to implement the PL2 technique in the context of the asking clarification question task."}, {"Category": "Data Source", "Citation": "(Amati et al., 2008)", "Explanation": "The cited work presents the DPH approach for information retrieval, which the citing paper uses to implement the DPH technique in the context of the asking clarification question task."}, {"Category": "Data Source", "Citation": "(Khattab and Zaharia, 2020)", "Explanation": "The cited work introduces the ColBERT approach for dense retrieval, which the citing paper uses to implement the ColBERT technique in the context of the asking clarification question task."}, {"Category": "Data Source", "Citation": "(Zamani et al., 2020)", "Explanation": "The dataset from Zamani et al. (2020) is used as a source of information for the user satisfaction task in the ACQs datasets."}, {"Category": "Data Source", "Citation": "(Tavakoli et al., 2022)", "Explanation": "The dataset from Tavakoli et al. (2022) is also used as a source of information for the user satisfaction task in the ACQs datasets."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5" ], "table_ref": [], "text": "Cyber-Physical Production Systems (CPPS) combine physical systems, such as machines and production lines, with cyber systems, such as sensors, software, and networks, to create an interconnected production environment [1]. CPPS offer many benefits, including improved efficiency, flexibility, and quality control. However, as with any complex system, CPPS can experience faults that can have significant consequences on the production process and lead to long downtimes [2].

Therefore, diagnosing faults in CPPS is crucial to ensure the smooth operation and maintenance of the production system. Diagnosis refers to the process of identifying the root cause of a problem or fault in a system [3]. In modern CPPS, this task is often too complex to be performed by humans. Therefore, computer-based methods are used to identify the root cause of a failure. These methods are often based on Artificial Intelligence (AI) [4].

Model-based diagnosis is a common approach used in CPPS to diagnose faults [5]. The basic idea is to compare the observed behavior of the system with a model of its expected behavior. The model represents the normal or expected behavior of the system and can be used to detect deviations from this behavior that may indicate a fault in the system.

To implement model-based diagnosis in CPPS, a model of the system is necessary. This model can be based on physical laws and equations or can be a more abstract representation of the system's behavior. The model should capture the essential features of the system and should be able to predict its behavior.

Once the model is developed, it is used to compare the actual behavior of the system with the expected behavior. This is done by collecting sensor data from the system and comparing it with the predictions of the model. Once a fault is detected, its root cause can be identified [6]. Model-based diagnosis can detect faults that may not be easily detected by other methods and can therefore provide insights into the root cause of the problem. A minimal sketch of this residual-based comparison is given below.
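The following minimal Python sketch illustrates the comparison step just described. The expected_behavior model, the sensor log format, and the threshold are purely illustrative assumptions, not the diagnosis algorithm developed later in this paper.

def detect_deviations(expected_behavior, sensor_log, threshold=0.05):
    """Return the time steps at which observations deviate from the model predictions."""
    symptoms = []
    for t, observed in sensor_log:          # sensor_log: iterable of (time step, measured value)
        predicted = expected_behavior(t)    # model prediction of the nominal behavior
        residual = abs(observed - predicted)
        if residual > threshold:            # a residual above the tolerance is treated as a symptom
            symptoms.append((t, residual))
    return symptoms

# Usage (illustrative): symptoms = detect_deviations(lambda t: 2.0 * t, [(0, 0.0), (1, 2.4)])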
Additionally, the machines may require significant maintenance effort and downtime for repairs.\nDespite the widespread use of Rotary Indexing Machines in production, relatively little research has been done on the diagnosis of failures in these machines, especially from the perspective of the actual production steps carried out on these machines. However, long downtimes due to failures are problematic, especially for the smaller companies employing these machines.\nBased on these observations, we formulated the following research questions: RQ1: Which properties distinguish RIMs from other production machines and how does this affect the diagnosis algorithm needed for these machines? RQ2: Can an SMT solver be employed to build a diagnosis algorithm for RIMs? RQ3: Which types of faults can be diagnosed with such an algorithm?\nThis paper proposes a diagnosis algorithm for RIMs, which focuses on the product processed by these machines. Based on an SMT solver, this algorithm traces the steps a product takes through the machine and is able to diagnose possible causes in case of failure.\nThe contributions of this paper are: 1. We provide an analysis of the properties of RIMs and how these influence the diagnosis of faults in these machines. 2. We suggest a diagnosis algorithm based on the product perspective capable of diagnosing faults in such a machine. 3. We test this algorithm on a model of a rotary indexing machine. As an example, a RIM to be constructed by a German company is analyzed.\nThe remainder of the paper is structured as follows. Section II will present a short survey of the State of the Art of diagnosis of rotary indexing machines. In Section III the characteristics of RIMs will be analyzed and it will be examined how these can be exploited for diagnosis. Furthermore, this section will introduce the model of the RIM used for development and evaluation of the diagnosis algorithms. Section IV will first formalize the problem and then describe the algorithms, followed by an evaluation in Section V. Section VI will close with a conclusion and a short outlook." }, { "figure_ref": [], "heading": "State of the Art", "publication_ref": [ "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13" ], "table_ref": [], "text": "Some research has been done on the diagnosis of rotary indexing machines. Most work, however, does not focus on the products or the processes taking part on the rotary indexing table, but on the rotational parts of the machine. Most of these approaches are based on Machine Learning (ML), since they mostly rely on vibration signals.\nThe work presented in [7] discusses an intelligent fault diagnosis technique that was developed to diagnose various faults in a deep groove ball bearing, which is an essential component of a rotating machine. The study used an experimental setup to generate faulty data, including inner race fault, outer race fault, and cage fault, along with the healthy condition. The time waveform of raw vibration data was transformed into a frequency spectrum using the fast Fourier transform (FFT) method and analyzed to detect the defective bearing. The study also applied a machine learning algorithm, the support vector machine (SVM), for fault diagnosis.\nSimilarly, the work presented in [8] uses a novel deep-learning-based technique for fault diagnosis in rotary machinery. 
The proposed technique inputs raw three-axis accelerometer signals as a high-definition 1D image into convolutional neural network (CNN) layers that automatically extract signal features, enabling high classification accuracy. The study achieved effective classification of different rotary machinery states using CNN for classification of raw three-axis accelerometer signals.\nFurthermore, [9] proposes a novel approach for fault diagnosis in rotating equipment using permutation entropy, signal processing, and artificial intelligence. Vibration signals are used to detect the faulty state of the bearing and determine the fault type in two separate steps. Permutation entropy is used for fault detection and wavelet packet transform and envelope analysis are used for fault isolation. The method uses a multi-output adaptive neuro-fuzzy inference system classifier to decide about the faulty bearing's condition by extracting the proper features of the signals.\nThe approach is evaluated using the Case Western Reserve University dataset and shows improved accuracy in diagnosing faults in rotating equipment compared to existing approaches.\nDiagnosis of production systems that are not specifically rotary indexing machines can also be carried out using ML approaches, but is often based on Model-Based Diagnosis approaches. The work in [10] suggests a new approach to Model-Based Diagnosis for CPS. The approach uses a learned quantitative model to derive residuals for generating a diagnosis model for root cause identification. This approach has advantages such as easy integration of new machine learning algorithms, seamless integration of qualitative models, and significant speed-up of diagnosis runtime. The paper defines the approach, discusses its advantages and disadvantages, and presents real-world use cases for evaluation.\nIn [11] a new model-based diagnosis approach for detecting and isolating faults in hybrid systems was presented. The approach involves modelling dynamic system behaviour using state space models and calculating Boolean residuals through an observer-pattern. The observer pattern is implemented using a symbolic system description specified in satisfiability theory modulo linear arithmetic. The residuals are used as fault symptoms, and the minimum cardinality diagnosis is obtained using Reiter's diagnosis lattice. This approach has the advantage of automating the diagnosis process and decoupling modelling and diagnosis. The paper also presents an evaluation of the approach using a four-tank model.\nThis approach was further developed in [12], presenting a novel approach for the automated reconfiguration of CPPS using residual-based fault detection and logical calculi. The approach operates on observed system data and information about the system topology to draw causal coherences, reducing modeling efforts. This automated reconfiguration is needed for autonomous systems, as the software controlling the system is often unable to adapt to unforeseen events and faults. The effectiveness of the approach is evaluated using a simulation of a CPPS, namely a tank-model from process engineering.\nAn algorithm that focuses on anomaly detection and diagnosis in a manufacturing process was suggested in [13]. This paper proposes a hybrid model for cyber-physical manufacturing systems (CPMS) that combines sensor data, context information, and expert knowledge to improve anomaly detection and diagnosis. 
The model uses a multi-model framework and context-sensitive adaptive threshold limits for anomaly detection, and classification models with expert knowledge for root cause diagnosis. The proposed approach was implemented using IoT to extract data from a computer numerical control machine, and results showed that context-sensitive modeling allowed for combining physics-based and data-driven models to detect anomalies and identify root causes such as worn or broken tools or wrong material.\nSpecifically for RIMs, research has been done on the human-machine interaction for automation, especially in the case of faults. The specific objective of the work done in [14] was to design and develop a low-cost and user-friendly Human Machine Interface (HMI) for an automated wheel assembly unit of the back wheel for a kids' swing car. The wheel assembly process is accomplished using a five-station automated rotary index table. Various electro-mechanical elements like proximity sensors, pneumatic cylinders, solenoid valves, and stepper motors are employed for automating the rotary table. The HMI utilizes a low-cost Arduino controller and a touch screen display for real-time monitoring; however, no anomaly detection or diagnosis algorithm is supplied." }, { "figure_ref": [ "fig_0" ], "heading": "Characteristics of Rotary Indexing Machines", "publication_ref": [], "table_ref": [], "text": "As described in Figure 1, there are two possible perspectives when diagnosing RIMs. The first is the perspective of the whole machine, which monitors the state of the moving parts of the machine, often focused on the rotating parts and the bearings. The analysis of these types of failures is interesting, because it is possible to predict the failures of, for example, bearings from vibrational data. However, this is not the focus of this paper. Instead, this work will focus on the second possible perspective, which is that of the product. While a RIM performs a rotating motion, the path of the product through the machine is linear, since the product enters the machine at one station, then goes through each station once and is ejected from the machine one station before the one where it entered. We therefore have a linear product path on a rotating machine. Furthermore, while the machine performs all steps simultaneously on multiple products, a product only sees each station once.\nThe machine used as an example here is a RIM with eight stations that assembles a product, performs a quality check and sorts products into OK/Not-OK. The first station is an input station for the first part of the product. This is then followed by several assembly stations, in combination with feeding of further product parts. Afterwards, two stations perform quality control on the product, followed by a station ejecting the OK products and a station ejecting the Not-OK products. Then, the cycle begins anew." }, { "figure_ref": [], "heading": "Model of the Machine", "publication_ref": [], "table_ref": [], "text": "To analyse the RIM and to later test the diagnosis algorithm, two models of the machine were developed. Both models are written in Python. 
Since they are supposed to recreate the data from the actual machine, no effects like vibrations or friction were integrated into the models, because these would also not be reported in the real machine.\nThe first model is constructed from the perspective of the product, which moves through each station once. The model simulates one run of a product through the machine. The output of the model looks like the following:\nThu Apr 27 11:18:58 2023 pneumatic cylinder in position 0\nThe output of the model is the current state of the station the product is at. It consists of a time stamp and the report of the state of the tool currently processing the product. In the real machine, this state would be reported by sensors, e.g. measuring the position of the tools. The model runs through all stations of the machine once.\nThe second model simulates the production process of the whole machine. This means that all stations are active simultaneously and several products are processed. The model continuously outputs the state of each station, similar to how the machine would report its internal state during the process cycle." }, { "figure_ref": [], "heading": "Diagnosis of Rotary Indexing Machines", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "As mentioned above, this work will focus on diagnosis of RIMs from the perspective of the product. While vibrations and damage to rotating parts will impact the quality of the product, many industrially installed RIMs do not have this type of sensor. Instead, sensors will be mostly installed for two main reasons: 1. To measure the position of tooling equipment, which is mainly needed as a reporter to the automation software to ensure a tool has reached a designated position before the next step is started.\n2. For quality control purposes. In case the machine features one or more sorting stations, this data is used to sort the product into OK/Not-OK.\nSince this project aimed at developing a diagnosis algorithm for this type of RIM, the algorithm should only make use of the existing sensors.\nAs summarized in Table 1, RIMs have several characteristics which can be exploited for diagnosis. As mentioned before, the product flow can be seen as linear. Therefore, for a diagnosis algorithm from the perspective of the product, only one data stream needs to be processed, and the rotating motion of the machine can be ignored. A benefit of RIMs is that, since all tooling operations on the product are controlled by the same automation software, only one data stream needs to be processed and analysed and the time stamps of the values in this data stream are synchronized. Keeping track of time-sensitive steps is therefore easier than in set-ups where different machines work on the same product and their time stamps might not be perfectly synchronized. Furthermore, since RIMs often include even the quality control on the same machine, these data points are also synchronized with the other data points from production and can be easily mapped to the product path." }, { "figure_ref": [], "heading": "Diagnosis Algorithm", "publication_ref": [], "table_ref": [], "text": "In the following, a diagnosis algorithm for a RIM will be developed, using the example of the machine modeled in the previous section. While the evaluation will be done using this specific machine, the approach can be generalized to other RIMs."
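Since the algorithm is developed against the simulated machine from Section 3.1, a minimal Python sketch of such a product-perspective simulation is given below. The station names, tool names, and timings are illustrative assumptions and do not correspond to the real RIM; only the report format (a time stamp plus the state of the tool currently processing the product) follows the description above.

import time
from dataclasses import dataclass

@dataclass
class Station:
    name: str        # e.g. "assembly 1"
    tool: str        # tool whose state is reported by a position sensor
    duration: float  # nominal processing time in seconds (illustrative)

# Illustrative layout; the real machine has eight stations.
STATIONS = [
    Station("input", "feeder", 0.2),
    Station("assembly 1", "pneumatic cylinder", 0.5),
    Station("assembly 2", "jack cylinder", 0.5),
    Station("quality control", "tightness tester", 0.4),
    Station("sorting", "ejector", 0.2),
]

def run_product_through_machine():
    """Simulate one run of a single product through all stations (product perspective)."""
    reports = []
    for station in STATIONS:
        # In the real machine, these states would be reported by the tool's sensors.
        reports.append(f"{time.asctime()} {station.tool} in position 0")
        time.sleep(station.duration)  # stand-in for the actual processing time
        reports.append(f"{time.asctime()} {station.tool} in position 1")
    return reports

if __name__ == "__main__":
    for line in run_product_through_machine():
        print(line)

The second, whole-machine model would run such a loop for every station in parallel, shifting the products by one position after each rotation.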
}, { "figure_ref": [], "heading": "Formalization of the Problem", "publication_ref": [], "table_ref": [], "text": "First, the problem of diagnosis in RIM needs to be formalized. For this, the processes within the RIM need to be described formally. Since the algorithm will focus on the product, only the product states need to be taken into account. Definition 1 (Product State). The product state S i ∈ S with S = {1, . . . , n} ⊂ N for n ∈ N is the condition of the product at a given time t = t s .\nA product's description is not only defined by the characteristics of the product at a defined time, but must also include its position in the RIM, since a product can have the same state S i while located at different positions within the RIM. Definition 2 (Product Position). The position P i of the product in the RIM is one element of a defined set of positions P , therefore P i ∈ P with P = {1, . . . , n} ⊂ N for n ∈ N. At each position P i a defined number of state transitions occur (see Definition 3).\nWith the Product State and the Product Position, a product can be fully described. To further describe the RIM, transitions between different states and positions need to be possible." }, { "figure_ref": [], "heading": "Definition 3 (State Transition).", "publication_ref": [], "table_ref": [], "text": "A state transition T i ∈ T with T = {1, . . . , n} ⊂ N for n ∈ N is a change in the products characteristics, leading to a change in the state of the product from state S i to state S j . At each product position P i , a defined number of state transitions happen.\nFor each state transition, a certain tool in the RIM is necessary. Each of these tools can have one ore more sensors attached to it, which report the position and other parameters of the tool. A state transition describes changes to the products characteristics, but not changes in position within the RIM. Definition 4 (Rotation). The rotation R i with R i ∈ R with R = {1, . . . , n} ⊂ N for n ∈ N changes the product position from a position P i to the next position P j within P .\nUsing the Definitions 1 to 4, the path of the product through the machine and its final state can be fully described.\nThe diagnosis algorithm should keep track of the path of the product through the machine and compare the expected state and position of the product with the reported values from the machine. This can be done step-wise, where each production step is the sum of all state transitions happening between two rotations. Definition 5 (Production Step). A production step Y i ∈ Y with Y = {1, . . . , n} ⊂ N for n ∈ Nis the entirety of all state transitions T n , . . . , m with n, m ∈ N taking place between two rotations R i and R j .\nA production step happens at a certain, defined position within the RIM, therefore the product position P i equals the production step Y i . The relation between state S, state transition T , position P , production step Y and rotation R is shown in Figure 2.\nSince RIMs carry out production steps in a clocked order, time is important for the analysis of the processes within the machine. However, in most cases the absolute time is not as important as the time between production steps. Therefore, the time a product spends in each production step is more important for the diagnosis than the absolute time. Definition 6 (Internal Time). The internal time t i is the time between the product entering the machine and the final product exiting the machine. 
The internal time t_i starts with t_i = 0 when the product enters the first production step of the RIM.\nEach product state is linked to a certain machine part, whose action triggered the state transition of the product at a given internal time. These machine parts report their actions via sensors in the RIM, including a time stamp. The state of a product is therefore not described directly, but can only be derived from the values of the sensors of the machine parts acting on the product. The actions which need to be taken by the machine to transform the product are pre-defined and their expected effects on the product are known. Definition 7 (Process Description). The process description M is a tuple (O, U, V, W), where\n• O is a vector (o_1, o_2, ..., o_i) with i, o ∈ N of the order of all production steps Y_i ∈ Y within the RIM\n• U is a vector (u_1, u_2, ..., u_n) with n ∈ N and u ∈ R of all timings within the RIM in internal time\n• V is a tuple of vectors V = (v_1, v_2, ..., v_k) with k = (1, ..., T_i), which is the mapping of all sensors onto a state transition T_i\n• W is a tuple of vectors W = (w_1, w_2, ..., w_k) with k = (1, ..., T_i), which is the mapping of all sensors measuring tool positions onto a tool within the RIM whose state they report.\nFigure 2: Formalization of the RIM. At each position P_i a defined number of state transitions T_i happen. The entirety of these state transitions T_1, ..., T_n is referred to as a production step Y_i. Once a production step is reported as finished, the rotation R_i moves the product to the next position.\nThe process description does not only define the order of steps within the production process, it also needs to contain the information about what the sensors measure and how this relates to the state transitions within the individual production steps. Within this process description, all sensors must be mapped onto the state transition in which they measure a machine parameter. Several sensors can be mapped onto one state transition, for example when one sensor measures the positions of a hydraulic cylinder and a second the pressure this cylinder exerts onto the product. The process description for this case should be step-wise, meaning each production step Y_i (see Definition 5) should be described individually. The task of step-wise diagnosis is to compare the set of expected sensor values E with the set of actual values K reported by the sensors in the RIM, including a time stamp, for each production step and to identify any inconsistencies between K and E. Once an inconsistency is found, a possible cause of this inconsistency should be identified using the process description M.\nSince the data in one production step is limited to sensors that report on actions which happened in this production step, it is possible that faults are detected that cannot be definitely explained, for example when multiple diagnoses are possible for an observed fault. In such cases it can be useful to use data from multiple production steps to identify the root cause of a fault. " }, { "figure_ref": [], "heading": "Description of the Algorithm", "publication_ref": [], "table_ref": [], "text": "In the following, the implementation of two different algorithms is described. 
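Both algorithms operate on the process description M = (O, U, V, W) from Definition 7 together with the expected and measured sensor values. As a minimal sketch, such a description could be represented in Python as plain dictionaries; all station names, sensor names, timings, and tolerances below are illustrative assumptions, not values from the real machine.

# Sketch of a process description M = (O, U, V, W) (Definition 7); all values are illustrative.
process_description = {
    # O: order of the production steps Y_i within the RIM
    "O": ["input", "assembly_1", "assembly_2", "tightness_test", "sort_ok", "sort_nok"],
    # U: expected timings (internal time, in seconds) for each production step
    "U": {"input": 0.2, "assembly_1": 0.5, "assembly_2": 0.5,
          "tightness_test": 0.4, "sort_ok": 0.2, "sort_nok": 0.2},
    # V: mapping of each sensor onto the state transition it observes
    "V": {"jack_cylinder_position": ("assembly_2", "T1"),
          "jack_cylinder_pressure": ("assembly_2", "T2"),
          "tightness_sensor": ("tightness_test", "T1")},
    # W: mapping of each tool-position sensor onto the tool whose state it reports
    "W": {"jack_cylinder_position": "jack cylinder",
          "jack_cylinder_pressure": "jack cylinder",
          "tightness_sensor": "tightness tester"},
}

# Expected sensor values E with tolerance ranges, and measured values K for one production step.
expected_values = {"jack_cylinder_position": (10.0, 0.5),  # (nominal value, +/- tolerance)
                   "jack_cylinder_pressure": (2.0, 0.2)}
measured_values = {"jack_cylinder_position": 10.1,
                   "jack_cylinder_pressure": 1.1}          # out of tolerance -> fault

Keeping M, E, and K outside the diagnosis code is what later allows the approach to be adapted to a different machine with minimal changes.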
The first algorithm is a step-wise diagnosis algorithm, solving the diagnosis problem described in Definition 8, while the second algorithm is a multi-step diagnosis algorithm, which addresses the problem shown in Definition 9.\nSince the machine runs at a high speed and it is not necessary to diagnose the production of a product which passes through the machine without problems and is not later identified as problematic, the diagnosis algorithm is only activated once a fault is registered in a product. This is most often the case in one of the quality controls. However, a product can also successfully pass the quality controls but still show faults which are only recognized later. In this case, the diagnosis algorithm might be used at a later point in time to trace the path of the product through the machine. It would therefore be necessary to save data from the RIM for later diagnosis.\nIn the case described here, the anomaly detection is on the level of the product, therefore an anomaly is detected when a faulty product is registered. An anomalous product triggers the diagnosis algorithm. The diagnosis on the level of the product then aims at identifying the production step and, ideally, the tool that was responsible for the fault in the finished product as a root cause.\nBoth algorithms take as input a list of expected sensor values, a list of actual sensor values measured in the RIM and the process description, which contains information about the production steps and their order, the timings of the processes in the RIM, the mapping of sensors onto a state transition and a mapping of sensors onto tools, according to Definition 7.\nAlgorithm 1 shows the pseudo-code of the implementation of the step-wise diagnosis algorithm. The algorithm uses an SMT solver to check the validity of the observed values K from the machine with respect to the expected values E. Since the observed values from the real machine will never be exactly equivalent to the expected ones, a tolerance range for the expected values is given, which can be defined symmetrically around the expected value or can be bigger in one direction than the other.\nThe algorithm then compares expected to real values for each production step of the RIM. When an inconsistency is encountered, the cause for this inconsistency needs to be identified. First, the algorithm checks whether the inconsistency was between the expected values for the timings in the production steps and the actual timings. If this is the case, a timing error and the state transition in which it occurred is reported. In case the fault is not a timing fault, the corresponding sensor which measured the inconsistent value needs to be identified. Once this is done, the tool which is described by this sensor can be identified and reported as the root cause for the faulty product.\nExcerpt from Algorithm 1 (fault reporting):\n▷ find sensor for conflict value\nreturn "Error in step " + step_number + ": Fault found. Most likely cause: " + name_conflict ▷ report sensor mapping value for conflict\nend if\nend for\nAlgorithm 2 describes the multi-step diagnosis algorithm. The basic algorithm is the same as Algorithm 1; the main difference is that Algorithm 2 also takes into account failures which cannot be explained within one step. This is possible when a fault can have multiple causes and this conflict cannot be resolved within one production step. In this case, the algorithm saves the possible explanation in a variable outside of the main loop, therefore building a memory of former possible root causes. 
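The core consistency check shared by both algorithms can be sketched with the Python bindings of an SMT solver such as Z3. The sketch below is only an illustrative reconstruction, not the authors' implementation: sensor names, tolerances, and the per-sensor checking strategy are assumptions. For these simple interval constraints a plain comparison would suffice, but the SMT encoding makes it easy to add richer constraints, such as timing conditions or dependencies between sensors, without changing the surrounding algorithm.

from z3 import Real, Solver, And, sat

def check_step(expected, measured, sensor_to_tool):
    """Return None if the step is consistent, otherwise (sensor, tool) of the conflict."""
    for sensor, (nominal, tol) in expected.items():
        s = Solver()
        x = Real(sensor)
        # The reported value must lie within the tolerance band around the expected value ...
        s.add(And(x >= nominal - tol, x <= nominal + tol))
        # ... and must equal the value actually measured in the RIM.
        s.add(x == measured[sensor])
        if s.check() != sat:  # inconsistency between E and K found
            return sensor, sensor_to_tool.get(sensor, "unknown tool")
    return None

# Illustrative data for one production step (assumed values).
expected = {"jack_cylinder_pressure": (2.0, 0.2)}
measured = {"jack_cylinder_pressure": 1.1}
sensor_to_tool = {"jack_cylinder_pressure": "jack cylinder"}

conflict = check_step(expected, measured, sensor_to_tool)
if conflict is not None:
    sensor, tool = conflict
    print(f"Fault found. Most likely cause: {tool} (reported by sensor '{sensor}')")

The multi-step variant would additionally keep conflicts that allow more than one explanation in a list Z and re-run the check with the observations of earlier production steps, roughly as in Algorithm 2's checkSAT(E, M, K, Z) call.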
In later steps, this memory is then used to check whether an explanation for these earlier faults can be found. For this, again, an SMT solver is employed, which uses the expected values and the measured values from the current step, together with the process description and the list of faults and possible explanations from earlier steps, to attempt finding a definite explanation for these earlier faults.\nThe advantage in using an SMT solver for this approach is that the implementation of the algorithms only needs minimal changes when the machine changes, since the input into the algorithm can be implemented externally. The process description and lists of expected sensor values can be implemented outside of the main algorithm, so that the approach can be easily adapted.\nBoth algorithms try to find the cause for the anomaly detected in the product. Therefore, the algorithms diagnose the process which led to the anomaly in the final product. However, the cause for the anomaly in the product is an anomaly in the production process.\nExcerpt from Algorithm 2 (handling of possible explanations):\nK_f ← name_conflict ▷ save name of conflict value\ncheck M for K_f\nCounter ← Counter + 1 ▷ count number of possible explanations\nend if\nif Counter == 1 then\nreturn "Fault found in step " + step_number + ". Most likely cause: " + M_f ▷ report sensor mapping value for conflict\nelse\nZ ← K_f\nreturn "Fault found in step " + step_number + ". More than one explanation possible"\nend if\nif more than one explanation was found in an earlier step then\ncheckSAT(E, M, K, Z)\nreturn "Explanation for fault in earlier step found!"\nend if\nend for" }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b14" ], "table_ref": [ "tab_3" ], "text": "To evaluate the algorithms described in the preceding section, they were implemented in Python. As an SMT solver, the Z3 solver [15] was used. The description of the RIM as provided by our cooperation partners was used to generate the process description, as well as the expected values for the sensor measurements and the expected times for each state transition. The model of the RIM described in Section 3.1 was used for simulations containing faults which the algorithms were supposed to diagnose.\nFigure 3 shows two examples of faults and how they can be diagnosed. In Figure 3A, the diagnosis algorithm is triggered when station 8 reports a product as Not-OK. This is due to station 6 reporting that the product failed the tightness test. The algorithms then check each station for consistency. When an inconsistency is found, Algorithm 1 reports it directly, while Algorithm 2 saves the possible explanations and checks whether another explanation is possible. In Figure 3A, only one explanation is possible and both algorithms would report it. In Figure 3B, however, two possible explanations for the same observation are possible. Also here, the product fails the tightness test; however, in station 4, the measured values show that the pressure on the product was incorrect. This can be explained by a broken jack cylinder or by a wrong position of a product part. In this step, this cannot be answered and Algorithm 1 would report both explanations. Algorithm 2 would save the explanations until the next step in the diagnosis, where it can be determined that the product was in the wrong position and this is then reported as the diagnosis.\nAs mentioned above, the diagnosis from the product perspective is triggered when a product shows an anomaly, mostly in the quality control. This then activates the diagnosis algorithm which aims at identifying the root cause of the anomaly observed in the product. 
The anomaly in the product is, however, caused by an anomaly in the machine behavior.\nFor evaluation of the two algorithms, five faults were simulated. Table 2 summarizes the results of the evaluation. Both algorithms performed well when the cause of the fault could be identified within a single step of the RIM. Algorithm 2 could additionally identify the root cause of a fault that could only be identified by combining the knowledge from two different steps. Both algorithms failed to identify when a sensor in the quality control was broken and when a broken piece was entered into the assembly process." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [ "tab_3", "tab_3" ], "text": "In this study, we addressed the topic of diagnosis of RIMs. Analyzing the characteristics of RIMs, we found that they offer some features which can be exploited for the development of diagnosis algorithms specifically suited for diagnosing causes of faults in products processed on these machines. Based on the identified characteristics, we developed two diagnosis algorithms. These are able to identify faults in the RIM which affect the product quality. As was shown in Table 2, both algorithms are able to identify faults whose root cause can be definitely identified within one production step. This does not mean that the fault needs to be detected in the same step. Faults are often only reported once the product is sorted as Not-OK. Once a product is reported as Not-OK, the algorithms start moving through the production process backwards. The reason for the sorting into Not-OK is usually found in the quality control stations; however, these only report which characteristics of the product did not fit with the expectations, but do not identify a reason for these faults. To identify the root cause, the algorithms check the expected sensor values for each station and compare them to the actual measurements. They also take into account the expected and actual times for each state transition and rotation. When Algorithm 1 encounters an inconsistency, it will check the system description for the sensor that is mapped onto the inconsistent value and report it as the cause. Should two explanations be possible in one step, Algorithm 1 cannot determine which explanation is more likely and will report both. In such cases, Algorithm 2 has an advantage, since it can save this information and check whether information from other (earlier) stations can identify the actual root cause. However, this is only possible in one direction, since the algorithm moves backwards through the production process. Therefore, only earlier production steps can be used to add further information for identification of a root cause. As can be seen in Table 2, both algorithms fail for certain types of faults. There are two main reasons for this. The first is the absence of certain sensors. No sensors are installed to measure vibrations or to check whether parts are damaged before entering the machine. In the real RIM, a system exists that checks one part of the final product before putting it into the assembly station. However, this was not part of this model. Other parts are also not checked for damage. Therefore, these types of faults cannot be detected. The machine will perform the assembly steps on them and in the quality control, they will show faults. But, since no sensors report the damage, the algorithms cannot identify this as the root cause. 
Another type of fault that cannot be identified is a broken sensor. In the case tested here, a sensor in the quality control was broken and reported a product as Not-OK, despite the product being OK. This could also not be diagnosed by the algorithms. Since Algorithm 2 performed better and is able to identify more complex faults, it would be most useful to continue development on this algorithm for usage in the actual RIM.\nAs pointed out before, the diagnosis to identify the root cause of an anomalous product needs to identify which anomaly in the production process led to the anomaly in the product. Therefore, both algorithms were implemented to identify anomalies within the production process which are in turn the root cause for the anomaly in the product." }, { "figure_ref": [], "heading": "Conclusion and Outlook", "publication_ref": [], "table_ref": [], "text": "Despite their widespread application, almost no research exists on the diagnosis of RIMs, especially considering the product perspective. A possible explanation for this is the fact that RIMs are often built and employed by small and medium-sized companies who lack the possibilities to develop anomaly detection or diagnosis algorithms on their own. Nevertheless, RIMs are ideally suited for diagnosis algorithms, since they incorporate processing and quality control of products into one machine, which means that all data will be generated within the same automation unit and can be reported in one file with one time stamp. This, and the fact that the product path can be treated as linear, makes diagnosis of possible faults in the products simple (RQ1). Since SMT solvers have been established for the diagnosis of various processes, including production processes, their use seems plausible also for the case of RIMs. However, to the best of the authors' knowledge, such algorithms have not yet been developed for RIMs. Here, two diagnosis algorithms based on SMT solvers for RIMs have been suggested (RQ2). The evaluation showed that both algorithms can diagnose faults whose root cause can be found in one production step, while Algorithm 2 is also able to diagnose faults whose root cause identification needs observations from several production steps.\nFigure 3: Two examples of faults diagnosed by the algorithms. Figure 3A shows an example of a fault that can be diagnosed with only one possible explanation. At first, station 8 reports that a product was Not-OK; this can be traced back to the quality control in station 6 measuring that the product was not tight. This defect can be traced to station 4, where a timing error in the jack cylinder occurred. Since no other explanation is found, this explanation is reported. Figure 3B shows an example of a failure with two possible explanations. Also here, the product is reported as Not-OK, which is explained by the measurement in station 6. Station 4 reports that the pressure was wrong during the assembly process. For this, two explanations are possible in the machine: the jack cylinder could be broken, or the part could have been in the wrong position from the preceding station. While Algorithm 1 would report both diagnoses, Algorithm 2 would use the observations from station 3 to determine that the part was in the wrong position, while the jack cylinder is not broken. In Algorithm 2, only the diagnosis that the part was in the wrong position would be reported. 
Some faults cannot be diagnosed with these algorithms, mainly faults that occur when a sensor is broken or when a damaged piece is placed into the machine (RQ3). In general, the diagnosis algorithms are heavily limited by the available sensors. It is, for example, not possible to identify faults due to vibrations, since no vibration sensors are installed. In this work, the algorithms were only tested on a model of the RIM. In the future, the algorithms should be tested on a real machine. Furthermore, more sensors should be integrated into the algorithm and dependencies between new sensors should be added to the system description. Further research should also implement a system description for a wider range of RIMs to prove the applicability of the algorithms to RIMs in general. Another feature to be added is the ability of the algorithm to use information not only in one direction." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was funded by the \"Zentrales Innovationsprogramm Mittelstand\" of the Bundesministerium für Wirtschaft und Energie within the project \"RepAssist: Automatische Diagnose- und Reparaturassistenz für mehrstufige Produktionsanlagen anhand von Anlagenkausalitäten\"." } ]
2023-05-25
[ { "authors": "László Monostori", "journal": "Procedia Cirp", "ref_id": "b0", "title": "Cyber-physical production systems: Roots, expectations and r&d challenges", "year": "2014" }, { "authors": "Nadia Galaske; Reiner Anderl", "journal": "Procedia CIRP", "ref_id": "b1", "title": "Disruption management for resilient processes in production systems", "year": "2016" }, { "authors": "Alban Grastien", "journal": "Citeseer", "ref_id": "b2", "title": "A spectrum of diagnosis approaches", "year": "2013" }, { "authors": "Marta Fernandes; Juan Manuel Corchado; Goreti Marreiros", "journal": "Applied Intelligence", "ref_id": "b3", "title": "Machine learning techniques applied to mechanical fault diagnosis and fault prognosis in the context of real industrial manufacturing use-cases: a systematic literature review", "year": "2022" }, { "authors": "Johan De; Kleer ; James Kurien", "journal": "IFAC Proceedings Volumes", "ref_id": "b4", "title": "Fundamentals of model-based diagnosis", "year": "2003" }, { "authors": "Sriram Narasimhan; Gautam Biswas", "journal": "IEEE Transactions on systems, man, and cybernetics-Part A: Systems and humans", "ref_id": "b5", "title": "Model-based diagnosis of hybrid systems", "year": "2007" }, { "authors": "Dip Kumar Saha; Md Emdadul Hoque; Hamed Badihi", "journal": "Sensors", "ref_id": "b6", "title": "Development of intelligent fault diagnosis technique of rotary machine element bearing: A machine learning approach", "year": "2022" }, { "authors": "Davor Kolar; Dragutin Lisjak; Michał Paj; Danijel Pavković", "journal": "Sensors", "ref_id": "b7", "title": "Fault diagnosis of rotary machines using deep convolutional neural network with wide three axis vibration signal input", "year": "2020" }, { "authors": "Saeed Rajabi; Saman Mehdi; Stefania Azari; Francesco Santini; Flammini", "journal": "Expert systems with applications", "ref_id": "b8", "title": "Fault diagnosis in industrial rotating equipment based on permutation entropy, signal processing and multi-output neuro-fuzzy classifier", "year": "2022" }, { "authors": "Andreas Bunte; Benno Stein; Oliver Niggemann", "journal": "", "ref_id": "b9", "title": "Model-based diagnosis for cyber-physical production systems based on machine learning and residual-based diagnosis models", "year": "2019" }, { "authors": "Alexander Diedrich; Alexander Maier; Oliver Niggemann", "journal": "", "ref_id": "b10", "title": "Model-based diagnosis of hybrid systems using satisfiability modulo theory", "year": "2019" }, { "authors": "Kaja Balzereit; Oliver Niggemann", "journal": "IEEE", "ref_id": "b11", "title": "Automated reconfiguration of cyber-physical production systems using satisfiability modulo theories", "year": "2020" }, { "authors": "A Miguel; Francisco P Saez; Kira Maturana; Dawn M Barton; Tilbury", "journal": "IEEE Transactions on Automation Science and Engineering", "ref_id": "b12", "title": "Context-sensitive modeling and analysis of cyber-physical manufacturing systems for anomaly detection and diagnosis", "year": "2019" }, { "authors": "Sree Cv Chiranth; Rajendra; Channakeshava", "journal": "", "ref_id": "b13", "title": "Assembly automation and human machine interface (hmi) implementation for rotary table indexing machine", "year": "2020" }, { "authors": "Leonardo De; Moura ; Nikolaj Bjørner", "journal": "Springer", "ref_id": "b14", "title": "Z3: An efficient smt solver", "year": "2008-04-06" } ]
[ { "formula_coordinates": [ 6, 332.89, 310.34, 212.5, 74.64 ], "formula_id": "formula_0", "formula_text": "K f ← nameconf lictR ▷ save name of conflict value check M for K f Counter ← +1 ▷ count number of possible explanations end if if C == 1 then" } ]
Diagnosis Algorithms for a Rotary Indexing Machine
Rotary Indexing Machines (RIMs) are widely used in manufacturing due to their ability to perform multiple production steps on a single product without manual repositioning, reducing production time and improving accuracy and consistency. Despite their advantages, little research has been done on diagnosing faults in RIMs, especially from the perspective of the actual production steps carried out on these machines. Long downtimes due to failures are problematic, especially for smaller companies employing these machines. To address this gap, we propose a diagnosis algorithm based on the product perspective, which focuses on the product being processed by RIMs. The algorithm traces the steps that a product takes through the machine and is able to diagnose possible causes in case of failure. We also analyze the properties of RIMs and how these influence the diagnosis of faults in these machines. Our contributions are three-fold. Firstly, we provide an analysis of the properties of RIMs and how they influence the diagnosis of faults in these machines. Secondly, we suggest a diagnosis algorithm based on the product perspective capable of diagnosing faults in such a machine. Finally, we test this algorithm on a model of a rotary indexing machine, demonstrating its effectiveness in identifying faults and their root causes.
Maria Krantz; Oliver Niggemann
[ { "figure_caption": "Figure 1 :1Figure1: There are two different perspectives on diagnosis of Rotary Indexing Machines. One possibility is to monitor the state of the machines, which is often focused on the rotating parts of the machine, as these have the most wear. The other option is to look at the quality of the products processed on the machine. In this case, a damaged product at the end of its processing cycle would indicate a fault in the machine in the preceding steps.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "individually. Definition 8 (Step-wise Diagnosis Problem). The step-wise diagnosis problem is a tuple (M, E, K), where • M is the process description according to Definition 7 • E is a vector (e 1 , e 2 , . . . , e n ) with e ∈ R and n ∈ N is the set of expected sensor values • K is a vector (k 1 , k 2 , . . . , k n ) with k ∈ R and n ∈ N,", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Definition 9 (9Multi-step Diagnosis Problem). The multistep diagnosis problem is a tuple (M, E, K, Z), where • M is the process description according to Definition 7 • E is a vector (e 1 , e 2 , . . . , e n ) with e ∈ R and n ∈ N containing the expected sensor values • K is a vector (k 1 , k 2 , . . . , k n ) with k ∈ R and n ∈ N, reported by the sensors in the RIM, including a time stamp • Z is a vector (z 1 , z 2 , . . . , z n ) with n ∈ N, in which the possible diagnoses can be saved", "figure_data": "", "figure_id": "fig_2", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Characteristics of RIMs which can be employed for diagnosis", "figure_data": "Characteristic of RIMUse in DiagnosisRotary MotionNot important for diagnosis from product perspectiveCan be employed to implementLinear Product Flowa diagnosis algorithm fromperspective of productInternal time stamps on sensorTime Stamps on Datadata can be used to trackcorrect processing of productQuality ControlDirect quality control enables easier diagnosisSingleAutomation Software", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Faults and Diagnoses by Algorithms", "figure_data": "FaultAlgorithm 1 Algorithm 2Timing Jack CylinderPart in Wrong Position×Pressure Broken××Jack Cylinder BrokenPart Broken××and checks whether another explanation is possible. In Fig-ure", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work provides a definition of Cyber-Physical Production Systems (CPPS) that the citing paper builds upon in discussing the combination of physical and cyber systems in a production environment."}, {"Category": "Data Source", "Citation": "[2]", "Explanation": "The cited work is acknowledged for its discussion of the potential consequences of faults in CPPS, which the citing paper uses to highlight the importance of diagnosing faults in CPPS."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work provides a definition of diagnosis in the context of CPPS, which the citing paper uses to discuss the process of identifying the root cause of a problem in a system."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work discusses the use of computer-based methods in diagnosing faults in CPPS, which the citing paper builds upon in discussing the need for such methods in modern CPPS."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work discusses the use of model-based diagnosis in CPPS, which the citing paper builds upon in discussing the common approach used in CPPS to diagnose faults."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work provides a method for identifying the root cause of a fault detected by model-based diagnosis in CPPS, which the citing paper adopts to further enhance the diagnosis process."}, {"Category": "Supporting Evidence", "Citation": "[7]", "Explanation": "The cited work provides a discussion on an intelligent fault diagnosis technique for diagnosing faults in a deep groove ball bearing, which is a relevant component for the study of rotary indexing machines."}, {"Category": "Extension or Continuation", "Citation": "[8]", "Explanation": "The cited work uses a novel deep learning-based technique for fault diagnosis in rotary machinery, which is a continuation of the research on diagnosing faults in rotary indexing machines."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work proposes a novel approach for fault diagnosis in rotating equipment that the citing paper builds upon for the classification of different rotary machinery states using CNN for raw three-axis accelerometer signals."}, {"Category": "Data Source", "Citation": "[10]", "Explanation": "The cited work provides a new approach to Model-Based Diagnosis for CPS that the citing paper uses to diagnose production systems in rotary indexing machines, leveraging the data and methods from the cited work."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work presents a new model-based diagnosis approach that the citing paper adopts for detecting and isolating faults in hybrid systems."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work presents a novel approach for automated reconfiguration of CPPS using residual-based fault detection and logical calculi, which the citing paper further develops and implements in the evaluation."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work (Z3 solver) is used as a tool for implementing the algorithms described in the citing paper, providing a method for solving the system of equations in the model of the RIM."}]
[ { "figure_ref": [], "heading": "", "publication_ref": [ "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b30", "b33", "b22", "b32", "b34", "b35" ], "table_ref": [], "text": "texture-based methods [6][7][8], motion-based methods [9][10][11], shape-based methods [12,13], rPPG-based methods [14][15][16][17][18][19][20][21][22][23][24], methods based on deep convolutional neural networks [25][26][27][28][29], and methods based on other liveness cues such as thermal signatures [30] and gaze [31]. Among these methods, the rPPG signal, which models the tiny periodic changes on skin color due to the heartbeat, shows good discriminative power to detect 3D mask attacks, since 3D masks are made from resin, plaster or silicone, which cannot produce such rPPG signals.\nNevertheless, methods based on rPPG signals face two challenges. One is the alignment error of face sequences. Faces are usually aligned through facial landmarks [22-24, 32, 33] to enhance the quality of the rPPG signals, whereas face landmarks are often difficult to locate precisely down to the pixel-wise level. For example, it is hard to precisely and consistently label the location of a nose tip, but the nose tip is often used in face alignment. Such a misalignment of facial landmarks can be tolerated to a certain extent in face recognition systems, but it greatly distorts the rPPG signal because the temporal rPPG signal from video frames is sensitive to their spatial positions. The other challenge is that the rPPG signal is weak and noisy. While the color change of rPPG signals is often out of the sensitivity of the human vision system, facial micro-motions and illumination variation significantly change the pixel values by a large amount, which results in low signal to noise ratio. These two challenges enormously affect the precision of rPPG signals. To address these challenges, three techniques are proposed in this paper.\nFirstly, to reduce the alignment error, a face-stitching algorithm is designed to align the faces in a video sequence to extract motion-robust rPPG signals, making use of not only the facial landmarks, but also facial keypoints. The proposed algorithm detects the facial keypoints down to the pixel-wise level by using the SIFT descriptor [34], which addresses the problem of ambiguous localization of facial landmarks in traditional facial alignment algorithms. At the same time, the facial landmarks are used as anchor points in video frames to avoid the problem of error propagation and address the drawback of unstable keypoint matching for two faces with a large pose difference. As the face motion between two consequent video frames in our application is small, an algorithm is proposed to accurately match keypoints, which minimizes not only the differences between feature representations, but also the spatial distance between two matched keypoints.\nSecondly, to enhance the rPPG signal and focus on the facial regions with rich blood vessels, a signal weighting mechanism based on the vascular density is proposed. Traditionally, rPPG signals are weighted either empirically [23] or in a datadriven way by minimizing the evaluation error [33], which lacks biological support. rPPG signals are originated from the periodical change of the skin color because of heartbeat [35], and hence it is conjectured that a facial region with rich blood vessels will result in a strong rPPG signal. 
Thus, larger weights should be given to regions with richer blood vessels when fusing the rPPG signals from different regions. The density of blood vessels is estimated by projecting the blood vessels of a biological specimen [36] onto the face after proper alignment.\nLastly, to explore the discriminant features in different color spaces, a feature fusion scheme is proposed to combine the rPPG signals in different color spaces. rPPG signals model the tiny color changes of skin pixels, and these tiny changes may exhibit different characteristics in different color channels of different color spaces, which could form robust patterns for face spoofing detection. Thus, the rPPG signals from different color spaces are combined to form a spatial-temporal feature representation to enhance the rPPG signal and boost the classification performance.\nIn the proposed framework, the pixel-wisely aligned faces are divided into regions of interest (ROIs), and the enhanced rPPG signals are extracted from each ROI, and encoded as a spatial-temporal representation. To improve the generalization capability of the classifier, an EfficientNet with a Gated Recurrent Unit (GRU) is designed to extract reliable spatial-temporal features for classification, where the lightweight EfficientNet Blocks are designed for spatial feature learning and the GRU is designed for temporal feature learning. The proposed Vascular-weighted Motion-robust rPPG (VMrPPG) is validated on five publicly available datasets for detecting spoofing attacks, and demonstrates a superior performance compared with the state-of-the-art rPPG-based algorithms for face spoofing detection.\nOur contributions can be summarized as follows: 1) An image stitching algorithm on both facial landmarks and facial keypoints is proposed to reduce the alignment error so as to improve the robustness of face alignment, which makes use of the temporal consistency of keypoints to align the face reliably down to the pixel-wise level. 2) Inspired by the characteristics of rPPG signals, a weighted spatial-temporal representation based on the distribution of blood vessels is proposed to highlight the face regions with rich blood vessels. 3) To make full use of the tiny color changes of skin pixels, a color-fusion scheme is proposed to combine the rPPG signals in different color spaces. 4) Lastly, to improve the generalization capability, a lightweight EfficientNet with GRU is proposed to detect the spoofing face." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "In the literature, many face anti-spoofing methods employ liveness cues such as textures, motions, shapes, and rPPG signals.\nIn this section, the spoofing detection methods based on the appearances of 3D mask attacks will be reviewed first, followed by the methods for rPPG signal enhancement." }, { "figure_ref": [], "heading": "A. 3D Mask Attack Detection", "publication_ref": [ "b36", "b5", "b6", "b37", "b38", "b6", "b5", "b37", "b37", "b17", "b39", "b10", "b8", "b40", "b11", "b41", "b12", "b42", "b16", "b17", "b19", "b18", "b20", "b13", "b43", "b44", "b40", "b20", "b23", "b45", "b25", "b24", "b26", "b26", "b46", "b47", "b48", "b27", "b49", "b30" ], "table_ref": [], "text": "Texture-based Methods exploit the difference in texture pattern between spoofing faces and genuine faces for spoofing detection [37]. Texture descriptors such as Local Binary Pattern (LBP) [6,7,38] and Binarized Statistical Image Features (BSIF) [39] have been used for face anti-spoofing. 
Kose et al. [7] introduced a multi-scale LBP on both RGB images and depth images to detect the abnormality of masked faces. Erdogmus and Marcel extended it to other descriptors such as modified LBP, transitional LBP, and direction-coded LBP [6,38]. LBP serves as a baseline method for many datasets, e.g., 3DMAD [38] and HKBU-Mars [18,40]. These handcrafted features can recognize unique textures on facial masks, but they often have limited discriminant power when faced with different illumination conditions or mask materials. Motion-based Methods mainly focus on unconscious subtle facial expressions such as eye blink, mouth movement, and facial muscle contraction, which cannot be observed on some rigid facial masks. Siddiqui et al. [11] encoded textures using LBP features at multiple scales and extracted micro-movement patterns in consecutive frames via Histogram of Oriented Optical Flows. Shao et al. [9] extracted the subtle facial motion using a VGG Net. Liu et al. [41] introduced the CNN-RNN architecture for facial motion feature extraction and classification. Shape-based Methods employ the differences in 3D face structures to detect spoofing attacks. Tang and Chen [12] applied Principal Curvature Measures and meshed SIFT-based features to face spoofing detection. Hamdan and Mokhtar [42] combined features from Legendre Moments Invariants and Linear Discriminant Analysis as the liveness cue, and classified them using Maximum Likelihood Estimation. These methods apply image transformation to extract shape features, while Wang et al. [13] obtained geometry features by reconstructing a 3D morphable model from RGB images, and combined them with LBP features under both handcrafted fusion and VGG-generated fusion. rPPG-based Methods extract the liveness clues of heartbeats through RGB cameras using remote photoplethysmography (rPPG) technology. rPPG signals were initially extracted for heart rate estimation [43] and were soon employed for face anti-spoofing. Under natural environments, researchers have noticed that the major challenge is the background noise. To tackle this problem, Liu et al. [17] divided the whole face into ROIs, and constructed the local rPPG correlation model, in which the phase and period information of rPPG signals from different ROIs are utilized as the liveness cues. This work is improved by using rPPG correspondence features [18] and multi-channel correspondence features [20] to differentiate genuine faces from spoofing attacks. In [19,21], the signal similarity between neighboring ROIs is further extended to three features, amplitude, gradient, and phase, measured by Euclidean distance, the normalized cross correlation (NCC) metric, and dot product. In PATRON [14], the respiratory signal is extracted from the rPPG signal as an auxiliary liveness cue. Another rPPG feature descriptor for face anti-spoofing is the long-term statistical spectral (LTSS) feature [44], which employs the first and second order statistics of the frequency spectrum of a signal. Its multi-scaled version (MS-LTSS) is fed into a Contextual Patch-based Convolutional Neural Network (CP-CNN) [45] to obtain better performance. The CNN-RNN classifier [41], C(2+1) Network [21] and the transformer architecture [24] have also been adopted to extract the temporal information embedded in rPPG signals.\nCNN-based Methods extract liveness clues directly from raw face sequences [46]. Liu et al. 
[26] introduced a Deep Tree Network to cluster the video frames into sub-groups via the Tree Routing Unit and classify them using the Supervised Feature Learning module. George et al. [25] designed a Multichannel Convolutional Neural Network to fuse the clues from multiple synchronized cameras using Domain Specific Units. To address the problem of insufficient samples for spoof traces, the Generative Adversarial Network (GAN) training strategy has been adopted recently [27]. Liu et al. introduced a Spoof Trace Disentanglement Network (STDN), in which the spoof traces are encoded into a hierarchical representation by the generator [27]. The extended work, STDN+ [47], explicitly estimates the spoofing-related patterns on the trace modeling. For multi-modal inputs, Liu et al. [48] designed a Modality Translation Network (MT-Net) for the generator to map the patterns from different modalities and a Modality Assistance Network (MA-Net) for feature translation between different modalities. Qin et al. [49] designed a meta-teacher optimization framework to supervise the process of learning rich spoofing cues. To adapt the face spoofing detection to different scenarios in an automatic way, Yu et al. [28] developed a Neural Architecture Search (NAS) for face antispoofing, named NAS-FAS. Although the searched neural networks cannot outperform the state-of-the-art expert-designed networks, its high-level abstraction is promising. Methods Based on Other Liveness Cues have also been developed. Agarwal et al. [50] declared that the thermal imaging spectrum shows a predominant power to detect 3D mask attacks whereas such technology is costly. Directing gaze information to build behavior patterns has been shown effective to resist face spoofing attacks [31]." }, { "figure_ref": [], "heading": "B. Remote Photoplethysmography Signal", "publication_ref": [ "b31", "b32", "b16", "b17", "b18", "b19", "b22", "b34", "b50", "b50", "b51", "b34", "b52", "b22" ], "table_ref": [], "text": "An rPPG signal is a set of complex and weak signals of heartbeats with noise. The rPPG signals have been used in many applications such as remote heart rate monitoring [32,33] and face anti-spoofing [17][18][19][20]. To suppress the noise in rPPG signals, methods have been developed for denoising and signal enhancements [23,35,51]. One way to filter the noise is to utilize the correlation information of different color channels. To filter the noise in general situations, CHROM [51] is designed by utilizing the knowledge of color model, which can be robust to non-white illuminations. Method 2SR [52] detects the rPPG signals by tracking hue changes of the skin. Wang et al. [35] improved this work by incorporating data-driven discovery and physiological properties of skin reflections. Another way is to handle the rPPG signals in the frequency domain. The energy terms within the frequency range of normal heartbeat contain most heartbeat information while those out of the range mostly contains noise. Based on this idea, Lovisotto et al. [53] enhanced the signal using a lowpass filter at 4 Hz with the Beat Separation algorithm. Yao et al. [23] applied a bandpass filter with the cut-off frequency at 0.8 Hz -3.3 Hz." }, { "figure_ref": [], "heading": "III. PROPOSED FACE ALIGNMENT VIA IMAGE STITCHING", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. 
Motivations of Face Alignment via Image Stitching", "publication_ref": [ "b33" ], "table_ref": [], "text": "As discussed, existing rPPG-based spoofing detection methods face two major challenges, face alignment errors and weak rPPG signals. Face alignment errors are caused by ambiguity of facial landmarks and localization errors of landmarks. The ambiguity arises from the fact that facial landmarks may not be precisely and consistently annotated at a pixel-wise level in the first place. Face recognition systems can tolerate face alignment errors to a certain extent, but the errors significantly affect the quality of extracted rPPG signals. Different from landmarks, the keypoints in SIFT feature space [34] can be robustly detected in a pixel-wise precision, which partially addresses the problem of ambiguity annotation of facial landmarks. However, keypoint-based face alignment has its own challenges: 1) there are only few matched keypoints when aligned directly to the reference face over a remarkable pose difference; 2) the errors are inevitably propagated through repeating alignments between successive video frames.\nTo tackle these challenges, SIFT keypoints and facial landmarks are jointly utilized for precise face alignment. The SIFT keypoints are utilized for pixel-wise matching and the facial landmarks serve as anchor points to guide the matching, so that the error propagation can be minimized. The proposed joint face alignment for rPPG signal extraction consists of three phases. Firstly, to match keypoints between successive frames of a face video, a matching mechanism is developed to utilize both spatial similarity and feature similarity for the detected SIFT keypoints. Secondly, the affine transform is applied to align each face to the template. Finally, a landmarkanchored face stitching method is proposed, in which SIFT features are used to align a face indirectly to the template through a set of intermediate video frames, to address the challenges of few matched keypoints between two faces of a large pose difference. Landmarks are detected and matched directly in most frames, which serve as anchor points to stop the error prorogation in successive matching of keypoints. A dynamic programming method is developed to derive the set of intermediate frames with the minimal alignment error, through which a chain of affine transforms is derived to stably and precisely align the face to the template." }, { "figure_ref": [], "heading": "B. Keypoint Matching by Maximizing both Spatial and Feature Similarities", "publication_ref": [ "b53", "b33", "b33" ], "table_ref": [], "text": "For face alignment between two successive frames in a video sequence, two matched keypoints should be spatially close, since the head movements between two successive frames are usually insignificant. To make good use of spatial similarity and feature similarity, a keypoint matching mechanism is designed to take both into consideration. Given a face video sequence, a face detection tool SeetaFace [54] is utilized firstly to crop the face region in each frame. Taking the first frame as the template, the subsequent frames are aligned to the template. For offline video sequences, the template can be the middle frame to mitigate the error propagation. 
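The per-frame preparation described above can be sketched as follows, with OpenCV's face detector standing in for SeetaFace (the tool actually used in this work) and OpenCV's SIFT implementation providing the keypoints and 128-D descriptors consumed by the matching step described next; the detector choice and its parameters are assumptions made for illustration only.

```python
# Illustrative per-frame preparation: crop the detected face, then extract
# SIFT keypoints and 128-D descriptors for the later matching step.
# OpenCV is assumed here as a stand-in for the SeetaFace detector.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
sift = cv2.SIFT_create()

def crop_face_and_sift(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])   # keep the largest face
    face = gray[y:y + h, x:x + w]
    keypoints, descriptors = sift.detectAndCompute(face, None)
    positions = np.array([kp.pt for kp in keypoints], dtype=np.float32)  # p = [x, y]
    return face, positions, descriptors
```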
The facial landmarks are also detected by SeetaFace and the keypoints are extracted by SIFT algorithm [34], with a feature vector f ∈ R 128 and a 2D space vector p = [x, y] ∈ N 2 .\nThe keypoint matching for faces in two frames (namely, the reference face and the query face) consists of two steps: initial matching to restrict the search space and fine matching. In [34], the k-Nearest-Neighbor (kNN) algorithm is applied on features alone to find the initial matching, while both spatial and feature distances are utilized in the proposed method. Denote the feature and spatial distances of two nearest neighbors in the query face to each keypoint in the reference face as d F 1 , d S1 and d F 2 , d S2 after proper range normalization. The fused distance is formulated as their Euclidean distance:\nd M i = d F i + d Si s.t. i ∈ {1, 2}.(1)\nAn initial match is granted when\nd M 1 ≤ δd M 2 .\nThe threshold δ is adaptable to different camera configurations for a balance of keypoint quantity and matching speed. In the second step, i.e., fine matching, the feature distance d F and spatial distance d S for each initial match are calculated. The initial feature and spatial distance sets of all initial matches, denoted as D F and D S , are assumed to follow the Gaussian distribution and the two distributions are independent. The joint distribution can be modeled as:\nG(d F , d S ) = G(d F ) • G λ (d S ) ∝ - (d F -µ F ) 2 2σ 2 F -λ (d S -µ S ) 2 2σ 2 S ,(2)\nwhere σ F , µ F and σ S , µ S are the standard deviation and mean of D F and D S , respectively. λ is a weight to balance two distances. An acceptance rate α ∈ (0, 1) is defined and the top α initial matches ordered by the joint distribution are the fine matches. The proposed method can accurately and efficiently derive the set of matched keypoints from face sequences." }, { "figure_ref": [], "heading": "C. Face Alignment via Affine Transform", "publication_ref": [ "b54", "b54" ], "table_ref": [], "text": "The out-plane rotation between two successive frames in a face video is negligible. Thus, the mapping between two successive frames is modeled as an affine transform, following the same design as in [55]. For each matched keypoint pair, the affine projection from the query face\nv = [x, y, 1] T to the reference face v ′ = [x ′ , y ′ , 1] T is formulated as:   x ′ y ′ 1   = P   x y 1   =   p 11 p 12 p 13 p 21 p 22 p 23 0 0 1     x y 1   ,(3)\nwhere the projection matrix P consists of 6 coefficients to solve. When the number of matched keypoints is sufficient, the Least Mean Square method [55] can be applied to find the projection matrix P . When the amount of matched keypoints is not sufficient to compute a transformation matrix, the matched facial landmarks are utilized alternatively." }, { "figure_ref": [ "fig_1" ], "heading": "D. Landmark-anchored Face Stitching", "publication_ref": [ "b4", "b7", "b39" ], "table_ref": [], "text": "1) Face Alignment Using Both Keypoints and Landmarks: Facial landmarks such as nose tips are difficult to annotate consistently to the pixel-wise precision. Traditional face alignment through annotated landmarks hence may introduce errors and distort the weak and noise-sensitive rPPG signal. SIFT keypoints could be matched precisely to the pixel-wise level. However, SIFT keypoint matching for two faces over a large pose difference may fail and lead to very few matched keypoints. 
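Before turning to the stitching strategy, the two-step keypoint matching of Eqns. (1)-(2) and the affine fit of Eqn. (3) can be sketched as below. This is an illustrative implementation rather than the authors' code: the brute-force distance computation and the way the two nearest neighbours are searched are assumptions, while delta, lam, and alpha follow the paper's symbols.

```python
# Sketch of the two-step keypoint matching (Eqns. (1)-(2)) and the
# least-squares affine fit of Eqn. (3). Search strategy is an assumption.
import numpy as np

def match_keypoints(pos_ref, des_ref, pos_qry, des_qry, delta=0.6, lam=3.0, alpha=0.5):
    # Range-normalized feature and spatial distances from every reference
    # keypoint to every query keypoint.
    d_feat = np.linalg.norm(des_ref[:, None, :] - des_qry[None, :, :], axis=2)
    d_spat = np.linalg.norm(pos_ref[:, None, :] - pos_qry[None, :, :], axis=2)
    d_feat = d_feat / (d_feat.max() + 1e-8)
    d_spat = d_spat / (d_spat.max() + 1e-8)
    d_fused = d_feat + d_spat                                    # Eqn. (1)

    # Initial matching: keep pairs whose best fused distance clearly
    # beats the second best (ratio test on the fused distance).
    order = np.argsort(d_fused, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(pos_ref))
    keep = d_fused[rows, best] <= delta * d_fused[rows, second]
    ref_idx, qry_idx = rows[keep], best[keep]

    # Fine matching: rank initial matches by the joint Gaussian score of
    # Eqn. (2) and keep the top-alpha fraction.
    dF, dS = d_feat[ref_idx, qry_idx], d_spat[ref_idx, qry_idx]
    score = -((dF - dF.mean()) ** 2) / (2 * dF.var() + 1e-8) \
            - lam * ((dS - dS.mean()) ** 2) / (2 * dS.var() + 1e-8)
    top = np.argsort(-score)[: max(3, int(alpha * len(score)))]
    return pos_ref[ref_idx[top]], pos_qry[qry_idx[top]]

def fit_affine(pts_qry, pts_ref):
    # Least-squares fit of the 6-parameter affine matrix P in Eqn. (3),
    # mapping query-face coordinates to reference-face coordinates.
    A = np.hstack([pts_qry, np.ones((len(pts_qry), 1))])         # rows [x, y, 1]
    X, _, _, _ = np.linalg.lstsq(A, pts_ref, rcond=None)         # shape (3, 2)
    P = np.vstack([X.T, [0.0, 0.0, 1.0]])                        # homogeneous 3x3
    return P
```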
In this paper, we propose to align the query face to the reference face through a series of intermediate faces mainly using SIFT keypoints, with landmarks serving as anchor points to prevent the error propagation in excessive intermediate matches. As summarized in Alg. 1, the proposed face alignment method searches for intermediate faces to minimize the alignment error by utilizing a dynamic programming method.\nFor all matched feature points between the reference face and the query face, the alignment error is defined as the Euclidean distance between the projected points from the query face and those of the reference face. Specifically, denote the reference face as F 1 and the query face as F k , k ≥ 2, k ∈ N + . We first aligns the query face to an intermediate face F k, 1 ≤ k < k, and then indirectly to the reference face. As both SIFT keypoints and facial landmarks are utilized, the total alignment error L(1, k) is defined as the sum of the alignment error using SIFT keypoints L K (1, k) and the alignment error using facial landmarks L L (1, k):\nL(1, k) = L K (1, k) + L L (1, k).(4)\nThese two types of errors are calculated differently. Firstly, the alignment error L K (1, k) using keypoints is derived. When the pose difference between F k and F 1 is large, there may be too few matched keypoints to directly align F k to F 1 . In this case, an intermediate face F k is used during the alignment between F k and F 1 , where F 1 is first aligned to F k and then F k is aligned to F k . Matched keypoints may be different between (F 1 , F k) and (F k, F k ). In this case, the keypoint alignment error from F 1 to F k through an intermediate F k is estimated as follows:\nL K (1, k) = L K (1, k) + 1 m m j=1 |P k,k v K k,j -v K k,j | 2 ,(5)\nwhere the first term is the recursive definition of the alignment error from F 1 to F k using keypoints, and the second term is the error of aligning F k to F k using keypoints.v K k,j represents the coordinate vector of the j-th matched SIFT keypoint in\nF k , v K k,j is the corresponding keypoint in F k and P k,k is the projection matrix from v K k,j of F k to v K k,j of F k,\nwhich can be estimated as outlined using Eqn. (3) in Section III-C.\nSecondly, we derive the alignment error L L (1, k) using landmarks. As the facial landmarks could be detected in almost all the frames, the alignment error through landmarks can be defined directly as follows:\nL L (1, k) = 1 n n j=1 |P 1,k v L k,j -v L 1,j | 2 ,(6)\nwhere v L k,j and v L 1,j represent the coordinate vector of the jth facial landmark out of n from the query face F k and the reference face F 1 , respectively. P 1,k is the projection matrix \nfor i = 1 to k ′ -1 do 5: Derive P i,k ′ using v K i , v K k ′ , v L i , v L k ′ as outlined in Section III-C 6:\nUpdate the alignment loss L(1, k ′ ) as defined in Eqn. ( 8)\n7: if L(1, k ′ ) < L min then 8: Update L min ← L(1, k ′ ) 9: Update k ← i 10: end if 11:\nend for 12:\nUpdate L K (1, k ′ ) using Eqn. (5) with\nk 13: Update P 1,k ′ ← P 1, k • P k,k ′ 14: end for 15: return {P 1,2 , ..., P 1,k }\nto align F 1 and F k . When an intermediate face F k is used, the projection matrix P 1,k can be estimated as follows,\nP 1,k = P 1, k • P k,k ,(7)\nwhere P 1, k and P k,k represent the projection matrices from F k to F 1 and from F k to F k, respectively. Finally, by integrating Eqn. 
( 4)-( 7), the alignment error from\nF 1 to F k through an intermediate face F k is calculated as, L(1, k) = L K (1, k) + 1 m m j=1 |P k,k v K k,j -v K k,j | 2 + 1 n n j=1 |P 1, kP k,k v L k,j -v L 1,j | 2 .(8)\nIn the next subsection, a dynamic programming solution is proposed to find a set of intermediate faces to minimize the face alignment error L(1, k).\n2) Dynamic Programming Solution for Face Alignment through Intermediate Faces: The target here is to find a set of intermediate faces {F k1 , F k2 , ..., F kq } between F 1 and F k , so that F k is aligned to F 1 through this set of intermediate faces to minimize the error defined in Eqn. (8). An enumeration approach will result in 2 k different combinations of intermediate faces, but this method may have a lot of redundancy. Note that L K (1, k ′ ) and P 1,k ′ , 1 < k ′ ≤ k, can be reused to reduce the complexity, which leads to the following dynamic programming solution summarized in Alg. 1.\nThe key to this DP algorithm is to reuse L K (1, k ′ ) and P 1,k ′ , 1 < k ′ ≤ k, during optimization. The projection matrices {P 1,2 , . . . , P 1,k } and the minimal losses {L K (1, 2), . . . , L K (1, k)} are derived in sequence during each iteration of the outer loop. The time-complexity of this dynamic programming algorithm is O(k 2 ), which is much lower than O(2 k ) for the native implementation using enumeration. Fig. 1 visualizes the improvements compared to the landmark-only face alignment. The test video is selected from the HKBU-Mars V2 dataset [40]. 200 frames are aligned using landmark-only, keypoint-only, and the proposed face alignment method. The region outside the template after the projection is tailored so that all the aligned faces have the same size. For each method, all the aligned faces are overlaid and the heatmap for pixel-wise standard deviations is visualized. It can be seen that the proposed face alignment improves the precision significantly, e.g., the variations of the aligned faces are greatly reduced. The proposed face alignment method lays a solid foundation for the subsequent rPPG signal extraction." }, { "figure_ref": [ "fig_3" ], "heading": "IV. PROPOSED RPPG-BASED FACE ANTI-SPOOFING", "publication_ref": [ "b55" ], "table_ref": [], "text": "A. Overview of Proposed Framework rPPG signals are often weak and noisy due to head motions, alignment errors, and illumination variations. A set of techniques are developed in this paper to enhance the rPPG signals.\nIn the previous section, a face alignment method via both landmarks and keypoints is proposed to robustly and precisely align the faces at the pixel-wise level. To focus on the ROIs with rich blood vessels, a novel signal weighting mechanism is proposed, where the weight for each ROI is determined by the density of blood vessels within the ROI. To exploit different patterns of rPPG signals in various color spaces, rPPG signals from multiple color spaces are combined to form the spatial-temporal representation. To learn a generalized model for 3D mask spoofing detection, a lightweight EfficientNet with a GRU is proposed by utilizing the compound scaling mechanism [56], which provides a wide adaption ability of modeling the rPPG signals in various applications.\nThe overall framework of the proposed Vascular-weighted Motion-robust rPPG is shown in Fig. 2, which includes four main building blocks. 1) Robust face alignment. Facial landmarks may not be stably detected to a pixel-wise level, but can be detected in most frames. 
In contrast, a few SIFT keypoints can be matched over a large pose difference, but they can be detected stably at a pixel level. The proposed face alignment method makes good use of both keypoints and landmarks to extract stable rPPG signals from the face video. " }, { "figure_ref": [ "fig_4" ], "heading": "B. Vascular-weighted Spatial-temporal Representation", "publication_ref": [ "b32", "b32", "b34", "b19", "b34", "b32" ], "table_ref": [], "text": "The faces in a video are first aligned using the proposed face stitching method in Section III. The rPPG signals of the predefined ROIs in each frame are calculated as the average pixel values of a color channel as in [33]. The extracted rPPG signals contain various types of ineluctable noise from head micro-motions, alignment errors, illumination variations, etc. To filter the noise outside the heart rate, a bandpass filter is applied with the cutoff frequency at 0.85Hz and 3.5Hz, following the range of normal heart rates.\nThe filtered signal may still contain noise. The phase information is then utilized to distinguish the noise and the real signal. The blood flows out of the heart at a constant speed and reaches each ROI at a certain time, leaving a peak on the rPPG signal. As the distance from the heart to each ROI varies, the peak arrives at different times. With the same frequency, the rPPG signal in one ROI has a phase difference from those in other ROIs, while the phase of the noise is random. This unique phase information can be modeled as a robust liveness clue. To capture the phase pattern, the extracted signals from different ROIs are stacked to construct an image-like spatialtemporal representation, similarly as in [33]. Formally, denote the rPPG signals from color channel c as:\nS c = [r c 1 , r c 2 , ..., r c M ] T . (9\n)\nwhere M refers to the number of ROIs. The rPPG signal in each ROI r c i is a sequence of the average pixel values of ROIs in color channel c, i.e., r c i = [r c i,1 , r c i,2 , ..., r c i,N ], where r c i,j\nindicates the average pixel value of the i-th ROI of the j-th frame. The phase information is well embedded in the spatial-temporal representation without explicitly extracting it. The magnitudes of rPPG signals from all ROIs are also important for spoofing detection. The rPPG signal originates from the periodical contraction of the facial blood vessels and presents periodical color changes on face skin [35]. Larger density of blood vessels beneath a ROI results in larger magnitude of color changes. Thus, the ROIs with richer blood vessels should have stronger rPPG signals. To highlight the rPPG signals in the region with rich blood vessels, a signal weighting mechanism is designed to assign the weight according to the density of the blood vessels.\nAn arterial cast of head is utilized to estimate the density of blood vessels in each ROI. To map the 3D arterial cast to 2D face image, the frontal view image is taken first and 5 landmarks (2 eye centers, 1 nose tip, and 2 mouth corners) are manually labeled. The skin area containing few blood vessels in the frontal view image are cropped. An affine transform is then applied using the 5 landmarks. One transformed arterial cast image is shown in Fig. 3(a), which suggests that the cheeks and mandibles have the richest blood vessels while the forehead lacks rich blood vessels. It matches the rPPG signals measured in [20], which uses the normalized per-pixel standard deviation to represent the rPPG signal in each pixel of a frontal human face. 
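A minimal sketch of the signal pipeline described in this section is given below for a single colour channel: per-ROI mean pixel traces, a band-pass filter over the normal heart-rate range (0.85-3.5 Hz), stacking into an image-like map, and a per-ROI weight driven by the aligned vascular-density mask. The inputs `roi_masks` and `vessel_density_map`, and the use of the mean density within each ROI as its weight, are assumptions made for illustration.

```python
# Sketch of the weighted spatial-temporal representation for one channel:
# per-ROI mean traces -> band-pass (0.85-3.5 Hz) -> vessel-density weights.
import numpy as np
from scipy.signal import butter, filtfilt

def weighted_st_representation(frames, roi_masks, vessel_density_map, fps=30.0):
    """frames: (N, H, W) aligned single-channel faces; roi_masks: (M, H, W) booleans."""
    # r[i, j]: average pixel value of ROI i in frame j -> raw rPPG traces.
    traces = np.stack([[frame[m].mean() for frame in frames] for m in roi_masks])

    # Band-pass to the normal heart-rate range.
    b, a = butter(4, [0.85, 3.5], btype="bandpass", fs=fps)
    traces = filtfilt(b, a, traces, axis=1)

    # Weight each ROI by the relative density of blood vessels beneath it
    # (mean of the aligned vascular-density mask inside the ROI -- an assumption).
    weights = np.array([vessel_density_map[m].mean() for m in roi_masks])
    weights = weights / (weights.max() + 1e-8)

    st_map = traces * weights[:, None]          # (M, N) image-like representation
    # Normalize to [0, 1] before feeding the classifier.
    st_map = (st_map - st_map.min()) / (st_map.max() - st_map.min() + 1e-8)
    return st_map
```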
As only the skin covering dense blood vessels presents visible color changes, the weights are estimated proportional to the area of such skin. C. Multi-color-space rPPG Representation rPPG signals represent the color changes of human face [35], while the color changes are affected by devices, illumination conditions, body conditions, etc. To enhance the generalization ability under cross-domain scenarios, the rPPG signals are normalized in the range of [0, 1]. Recent study [33] shows that the rPPG signals in different color spaces present different characteristics. For example, the rPPG signals in the blue channel is weaker as more blue light is absorbed by human skin. To construct a robust signal representation, the unique characteristics of rPPG signals in RGB, YUV, and Lab color spaces are jointly utilized in this paper." }, { "figure_ref": [], "heading": "D. Customized EfficientNet with GRU", "publication_ref": [ "b55", "b55", "b37", "b39", "b55", "b56" ], "table_ref": [ "tab_0" ], "text": "The proposed representation contains temporal changes of rPPG signals, and spatial correlations of rPPG signals from different ROIs. To extract the discriminant features from the proposed representation, the building blocks of EfficientNet [56] are utilized to make use of the power of EfficientNet in image classification [56]. As the number of subjects in face-spoofing datasets [38,40] is usually small, deeper networks may easily lead to overfit. To address this problem, the building blocks of EfficientNet are designed following the compound scaling mechanism but with a smaller size compared to the baseline architecture EfficientNet-B0 [56].\nIt should be noted that the horizontal axis is the proposed 2D spatial-temporal signal representation is the time axis. To obtain the liveness clues from both spatial and temporal dimension, a lightweight EfficientNet with GRU is designed to combine building blocks of EfficientNet with a Gated Recurrent Unit (GRU) [57]. The EfficientNet building blocks are designed to extract the spatial-temporal features at multiple scales, while the GRU is designed to explicitly model the temporal relations. More specifically, the Gated Recurrent Unit is applied at the end of EfficientNet, taking one column of features as the input followed by successive columns along the time axis. The GRU is then designed to learn the temporal correlations between rPPG signals at successive time instances. The detailed network architecture is shown in Table I. " }, { "figure_ref": [], "heading": "Stage", "publication_ref": [ "b57", "b58" ], "table_ref": [], "text": "Operator Resolution Channels Layers\ni O i H i × W i C i L i 1 Conv3x3 24 × 120 64 1 2\nMBConv1,k3x3 'MBConv' refers to mobile inverted bottleneck [58] with squeeze-and-excitation optimization [59]. The spatial-temporal representation has 18 channels: RGB, YUV, Lab, and their corresponding normalized channels. The proposed network outputs two confidence rates for live and spoof decisions.\nThe loss function is defined as the cross entropy between the ground-truth and the prediction labels. Denote u i,j as the confidence scores of the i-th batch of samples belonging to the j-th class and y i,j as their ground-truth labels. The loss function of N batches for M classes is calculated as:\nL = - N i=1 M j=1 y i,j log u i,j + (1 -y i,j ) log(1 -u i,j ). 
(11)\nThe superiority of EfficientNet mainly lies in the compound scaling mechanism, which can concentrate more on relevant regions and preserve more object details flexibly, depending on the size of training data. For rPPG-based face anti-spoofing, especially for the cross-dataset evaluation, the sizes of training data vary significantly. The compound-scaling mechanism could help handle the variety of application scenarios." }, { "figure_ref": [], "heading": "V. EXPERIMENTAL RESULTS", "publication_ref": [ "b37", "b59", "b17", "b39", "b60", "b37", "b59", "b17", "b39", "b60", "b5", "b0", "b2", "b15", "b21", "b16", "b18", "b19", "b17", "b23", "b13", "b14", "b20", "b18", "b53", "b33", "b32", "b23", "b17" ], "table_ref": [], "text": "A. Experimental Settings 1) Benchmark Datasets: Four datasets are used for evaluating 3D mask attack detection: 3DMAD [38], CSMAD [60], HKBU-Mars V1+ [18], and HKBU-Mars V2 [40], where the last two are collected under a similar protocol, but with no common video. And the Idiap Replay Attack dataset [61] is used to detect printed photo attacks and video replay attacks. 3DMAD dataset [38] consists of 2 sessions of videos on genuine faces and 1 session of videos on 3D mask attacks of 17 subjects. The used 3D masks are from Thatsmyface and the subjects vary in race, age, and gender. Each subject has 5 10-second videos in RGB and 5 10-second videos in depth. For a fair comparison, only the RGB videos are used. There are 300 frames of 640×480 pixels for each video. This dataset is collected under the indoor environment with well-controlled illumination conditions. CSMAD (Custom Silicone Mask Attack Dataset) [60] was collected from 14 subjects and 6 high-quality 3D masks. The dataset consists of 87 genuine videos and 159 attack videos, in which the 3D masks are worn on subjects' faces or mounted on an appropriate stand. The frame rate is approximately 30 frames per second (FPS), lasting from 5 to 12 seconds. Four lighting conditions are adopted, including flourscent ceiling light, left halogen lamp illuminating, right halogen lamp illuminating, and halogen lamp illuminating from both sides. Each video contains frames in visible light recorded by Intel RealSense SR300 and near-infrared images recorded by Seel Thermal Compact Pro. In this paper, only the visible-light videos are employed for evaluation. HKBU-Mars V1+ dataset [18] has 2 sessions of genuine attempts and 1 session of 3D mask attacks. It only contains RGB videos. Each session includes 60 10-second videos for 12 genuine subjects or 3D masks. The subjects vary in age and gender. Due to the privacy issue, the public version eliminate one subject's videos on sessions of genuine attempts. Six masks are made by Thatsmyface and the other two are made by REAL-f. The videos are recorded via Logeitech C920 webcamera with the resolution of 1280 × 720 pixels at 25 FPS, under controlled laboratory light conditions. HKBU-Mars V2 dataset [40] is much larger and covers more real-world variations. It contains 1 session of real-face videos and 1 session of spoofing attack videos. Every session has 504 10-second videos from 12 subjects or 3D masks. Similar to HKBU-Mars V1+, the subjects vary in age and gender. Six masks are made by Thatsmyface and the other six are made by REAL-f. In general, the subject's frontal face is recorded, allowing natural facial expressions and some head movements. Compared to HKBU-Mars V1+, this version introduces three more variations. 
1) Multiple devices are used with FPSs ranging from 14 to 50 and resolutions ranging from 640 × 480 to 1920 × 1080 pixels. 2) The devices are either fixed on tripods or handheld, resulting in larger motions. 3) There are various lighting conditions, including room light, low light, bright light, warm light, side light and up side light. Idiap Replay Attack Dataset [61] consists of 1300 video clips of 50 subjects performing real attempts, printed photo attacks, and video replay attacks. The videos of real attempts are generated by users attempting to access a laptop through a Macbook built-in webcam while the spoofing attacks are performed by displaying a photo or video recording of the same user for at least 9 seconds. All videos are of the resolution of 320 × 240 at 25 FPS. Two lighting conditions are employed, the controlled office light with homogeneous backgrounds and the adverse illumination with complex backgrounds.\n2) Compared Methods: The following methods are selected for comparison. Multi-Scale Local Binary Pattern (MS-LBP) [6] is the baseline method of nearly all 3D mask attack datasets. It extracts multi-scale LBP-histogram features of 833 dimensions and utilizes a support vector machine (SVM) with a linear kernel as the classifier.\nColor Texture Analysis (CTA) [1] extracts 434-dimension LBP features on both HSV and YCbCr color spaces. The extracted features are then classified by an SVM with an RBF kernel. It is chosen for comparison because of its good generalization ability in detecting general face attacks. FBNet-RGB [3] is a representative deep neural network, which ranks the second in the Multi-modal Face Anti-spoofing Attack Detection Challenge of CVPR 2019. The method extracts features from image patches using a sequence of residual blocks. GrPPG [16] utilizes the Power Spectral Density curve generated by the Fast Fourier Transform (FFT) as the feature representation and extracts rPPG signals from ROIs. The features are then classified by an SVM with a linear kernel. PPGSecure [22] extracts signals from both skin area and backgrounds to construct spectral features using the FFT. The features are then classified by an RBF SVM. LrPPG [17] TSrPPG [19] excavates the similarity features between rPPG signals from multiple face regions, and the dissimilarity features between face regions and background regions by measuring the correspondences of signal amplitudes, gradients, and phases. MCCFrPPG [20] extends the CFrPPG [18] by applying segments of Short-Time Fourier Transform (STFT) to obtain spectrogram features. The multi-channel rPPG correspondence features are then classified by a linear SVM. TransRPPG [24] adopts a vision transformer to extract the temporal information from the rPPG signals. The multi-scale spatial-temporal maps on both facial skin and background re-gions are generated with the size 63×300×3 and 15×300×3. Two network branches of share-weight Transformer Layers are designed to learn the attentional features. PATRON [14] separates the respiratory signal from the original rPPG signal as a new liveness cue. The similarity features are extracted from both respiratory signals and original rPPG signals, and then classified by an RBF SVM. SUNRISE [15] considers the similarity features of multiple ROIs from both temporal representation and spectral representation of rPPG signals, and uses multiple RBF SVMs on signal fragments of different sizes for classification. 
LeTSrPPG [21] extends the TSrPPG [19] by tuning the ROI frames with a C(2+1)D neural network for higher quality rPPGs. The network is trained via minimizing an rPPG regression loss measuring the similarity between rPPG signals of face ROIs and the dissimilarity between rPPG signals of face ROIs and that of background ROIs.\n3) Implementation Details: The latest SeetaFace V6 [54] is used to detect faces and facial landmarks. To handle videos under extreme light conditions, histogram equalization is applied to improve the image contrast. As recommended in [34], the threshold to choose the initial matched keypoints δ is set to 0.6 for the distance defined in Eqn. ( 1) so that more than 70 initially matched SIFT keypoints can be found in real time. The relative weight λ in Eqn. ( 2) is empirically set to 3 to highlight the spatial similarity on face sequences. The acceptance rate α is set to 0.5 to reserve sufficient and highquality point matches for calculating affine transformation.\nThe face is split into 24 ROIs following Niu et al. 's design [33], with 4 rows and 6 columns to reserve the integrity of mandible regions. To learn a unified pattern from videos with different FPSs, the extracted rPPG signals are normalized to 30 FPS via cubic interpolation. To make better use of the information from the whole video and adapt to videos of different sizes, a video is cut into segments of 120 frames, with an overlapping of 117 frames. Having 9 color channels and 9 normalized color channels, the size of the proposed weighted spatial-temporal representation is [24,120,18]. The learning rate is set to 0.1 initially with a decay to 10% every 4 epochs and a L2 regularization penalty of 5×10 -4 . A decision can be made for each video segment whether it is a genuine attempt or a spoofing attack. The final decision is the majority vote of the results from all video segments." }, { "figure_ref": [], "heading": "4) Evaluation Metrics:", "publication_ref": [], "table_ref": [], "text": "The following evaluation metrics are reported. Equal Error Rate (EER) refers to the value when the False Acceptance Rate (FAR) and the False Reject Rate (FRR) are equal.\nHalf Total Error Rate (HTER) is evaluated at a threshold for the EER on the development set, and it is calculated as HT ER = (F AR + F RR)/2. BPCER@APCER=0.1 represents the BPCER value when APCER is 0.1, where APCER (Attack Presentation Classification Error Rate) and BPCER (Bonafide Presentation Classification Error Rate) are similar to FAR and FRR, but the threshold for APCER=0.1 is determined using the development set and applied on the test set when calculating BPCER. " }, { "figure_ref": [], "heading": "B. Experimental Results of Intra-dataset Evaluation", "publication_ref": [ "b17", "b19", "b20", "b19", "b23", "b23", "b18", "b14", "b20", "b20", "b20", "b19", "b20", "b19", "b19", "b19", "b19" ], "table_ref": [ "tab_2", "tab_3", "tab_4", "tab_0", "tab_6" ], "text": "1) Experimental Results on the 3DMAD Dataset: For a fair comparison, the standard evaluation protocol, Leave-One-Out-Cross-Validation (LOOCV) [18,20], is used. In each fold, one subject is used as the test set, 8 subjects are randomly chosen as the train set, and the remaining 8 subjects are used as the development set. 20 rounds of LOOCV are conducted to avoid coincidence and the average results over 20 rounds are reported. The comparisons to state-of-the-art methods on 3DMAD are shown in Table II. We implemented and evaluated LeTSrPPG [21] on the 3DMAD dataset. 
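For reference, the EER and HTER figures reported in the following tables can be computed from raw classifier scores as sketched below, with the test-set threshold fixed at the development-set EER point as in the protocol above. This is a generic sketch, not the evaluation code used in the paper.

```python
# Sketch of EER/HTER computation from raw scores (genuine = 1, attack = 0).
import numpy as np
from sklearn.metrics import roc_curve

def eer_and_threshold(y_true, scores):
    fpr, tpr, thr = roc_curve(y_true, scores)      # FAR = fpr, FRR = 1 - tpr
    frr = 1.0 - tpr
    i = int(np.argmin(np.abs(fpr - frr)))          # operating point where FAR ~= FRR
    return (fpr[i] + frr[i]) / 2.0, thr[i]

def hter(y_true, scores, threshold):
    y_true = np.asarray(y_true)
    preds = np.asarray(scores) >= threshold
    far = np.mean(preds[y_true == 0])              # attacks accepted
    frr = np.mean(~preds[y_true == 1])             # genuine attempts rejected
    return (far + frr) / 2.0

# Protocol: pick the threshold at the EER of the development set, then
# report HTER on the test set with that fixed threshold.
#   _, t = eer_and_threshold(y_dev, s_dev)
#   hter_test = hter(y_test, s_test, t)
```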
The results of other compared methods are reported from their original papers.\nIt is witnessed that the proposed approach reduces the HTER of the test set of the previously best performed rPPGbased method, MCCFrPPG [20], from 5.60% to 2.16%, and reduces the EER of the previously best performed method TransRPPG [24] from 2.38% to 0.87%. In terms of the AUC, the proposed method increases the result of the previously best performed rPPG-based method, TransRPPG [24], from 98.80% to 99.58%. It is also worth to mention that the standard derivations of the HTER of test set for all evaluated methods are high. After checking the dataset, the genuine face videos for two subjects with aging face and swarthy skin are more likely to be wrongly classified as spoofing faces, which suggests that the current face spoofing detection methods are still greatly suffered from age and skin color variations. TSrPPG [19], SUNRISE [15], and LeTSrPPG [21] were originally designed for short-time observation scenarios. For a fair comparison, the proposed method is also evaluated on short-time observation (1 second) scenarios following the same protocol as in [21]. The results are summarized in Table III. All results of the compared methods are reported from their original papers. It can be seen that the proposed VMrPPG significantly outperforms all the compared methods on all four evaluation criteria. Specifically, compared with the previously best performed method, LeTSrPPG [21], the proposed method reduces the EER from to 4.41%, and increases the AUC from 94.40% to 98.09%.\n2) Experimental Results on the HKBU-Mars V1+ Dataset: With fewer subjects than the 3DMAD dataset, training on this dataset requires higher generalization ability on the model. On this dataset, the penalty for L2 regularization is elevated to 5 × 10 -3 and the learning rate decays after 4 epochs. 20 rounds of LOOCV are applied on the HKBU-Mars V1+ dataset. In each fold, one subject is used for testing, 5 subjects are randomly selected for training, and the 6 remaining subjects are used as the development set. The evaluation results are summarized in Table IV. It can be seen that the proposed VMrPPG achieves the best score on all four evaluation metrics, which outperforms the second best MCCFrPPG [20] by 1.37%, 0.54%, 2.33%, and 0.03% in terms of the HTER on the development set, the HTER on the test set, the EER, and the AUC, respectively. Different from the results on the 3DMAD dataset, the appearancebased methods present a sharp performance degradation on the HKBU-Mars V1+ dataset while rPPG-based methods remain their good performance. This phenomenon suggests that, under more scenario variations, the rPPG signals tend to be more robust than the appearance features.\nWe have also conducted the comparison experiments for short-time observations on the HKBU-Mars V1+ dataset, following the experimental settings in [21]. The results are summarized in Table V. Similar to the results on the 3DMAD dataset, the proposed VMrPPG ranks the first on all four evaluation criteria, which suggests that the proposed VMrPPG significantly outperforms the state-of-the-art methods for both long-time observations and short-time observations. 3) Experimental Results on the HKBU-Mars V2 Dataset: The standard evaluation protocol, 20 rounds of LOOCV, is applied on the HKBU-Mars V2 dataset, with the traindevelopment-test subject-amount split as (5, 6, 1). The results are summarized in Table VI. The results of other compared methods are obtained from [20]. 
The proposed VMrPPG consistently outperforms all the compared methods for all three evaluation metrics. Compared to the state-of-the-art method, MCCFrPPG [20], the performance gains of the proposed method are 1.26%, 0.26%, and 10.26% in terms of the EER, the AUC, and BPCER@APCER=0.01, respectively. The superiority of the rPPG-based methods over the appearance-based methods are more distinct, which exhibits the discriminant power of rPPG-based methods to overcome the variations of backgrounds, illumination conditions, and camera sensors.\n4) Experimental Results on the CSMAD Dataset: Following [20], the Leave-Half-Out-for-Training protocol is adopted on the CSMAD dataset, which sets aside 7 subjects of genuine faces and 3 subjects of mask attacks for training, and leaves the rest for testing. The results on the CSMAD dataset are summarized in Table VII. The proposed VMrPPG performs best on all evaluation metrics. Compared with the second-best rPPG-based method, MCCFrPPG [20], it reduces the HTER test by 3.39%, the EER by 2.70%, the BPCER@APCER=0.1 by 4.15%, and the BPCER@APCER=0.01 by 1.23%, and boosts the AUC by 2.76%. It is also noted that the rPPG-based methods perform poorer on this dataset than the other 3 datasets. After checking the failure cases, more than 70% of them are genuine faces with side lighting that are incorrectly classified as masked faces. The results show that the rPPG signals are still strongly affected by the illumination conditions.\nFrom the intra-dataset evaluation on all four dataset, it can be concluded that the proposed VMrPPG consistently and significantly outperforms all the state-of-the-art face antispoofing methods based on rPPG signals." }, { "figure_ref": [], "heading": "C. Experimental Results of Cross-dataset Evaluation", "publication_ref": [ "b19", "b7", "b8", "b19", "b19", "b19", "b19", "b19", "b19", "b19", "b17" ], "table_ref": [ "tab_7", "tab_7", "tab_7" ], "text": "To evaluate the generalization ability of the proposed method on unseen scenarios, a set of cross-dataset evaluations are conducted following the same protocol as in MCCFrPPG [20], in which the model is trained on Dataset A but evaluated on Dataset B (denoted as A → B). For the HKBU-Mars V1+ dataset, 6 subjects are randomly selected as the train set and the remaining 6 subjects are used as the development set. For the 3DMAD dataset, the train-development split is (8,9). The CSMAD dataset treats 7 subjects for real attempts and 3 subjects for mask attacks for training respectively, and the rest as the development set. The cross-dataset evaluations are also repeated for 20 rounds and the average results over 20 rounds are reported. The results are summarized in Table VIII.\n1) 3DMAD vs. HKBU-Mars V1+: The results of the compared methods are obtained from [20]. For the setting of \"3DMAD → HKBU-Mars V1+\", the proposed VM-rPPG performs best on all three evaluation metrics, i.e., the HTER test of 2.21%, the AUC of 99.73%, and the BPCER@APCER=0.01 of 6.18%, which are better than the second-best method, MCCFrPPG [20], by 1.25%, 0.13%, and 1.45%, respectively. The proposed VMrPPG ranks the second best for BPCER@APCER=0.1, which is only 0.25% worse than MCCFrPPG [20]. For \"HKBU-Mars V1+ → 3DMAD\", the proposed VMrPPG achieves the best performance in terms of all evaluation criteria. 
Compared to the previous best method MCCFrPPG [20], the proposed method achieves a performance gain of 1.31%, 0.54%, 1.16%, and 0.22% in terms of the HTER test, the AUC, the BPCER@APCER=0.1, and the BPCER@APCER=0.01, respectively.\n2) 3DMAD vs. CSMAD: In this experiment, the proposed VMrPPG ranks the first on 7 evaluation criteria out of 8, as shown in Table VIII. The only metric where the proposed method ranks the second is the AUC for \"3DMAD → CS-MAD\", which is 0.30% lower than MCCFrPPG [20]. But the proposed VMrPPG outperforms MCCFrPPG [20] on all the other 7 evaluation criteria, 6 of which are significant.\n3) HKBU-Mars V1+ vs. CSMAD: As shown in Table VIII, the proposed VMrPPG ranks the first on all 8 evaluation criteria in this experiment. Compared to the second-best method MCCFrPPG [20], the proposed VMrPPG reduces the HTER on the test set by 1.36% for \"HKBU-Mars V1+ → CSMAD\". For \"CSMAD → HKBU-Mars V1+\", the proposed method reduce the HTER on the test set by 0.65% compared to the second-best method CFrPPG [18]. Since the CSMAD dataset contains more illumination conditions than the 3DMAD or HKBU-Mars V1+ datasets, most rPPG-based methods can't achieve satisfactory results when training on these two datasets while testing on the CSMAD dataset.\nThe experimental results on all three cross-dataset evaluations demonstrate that the proposed VMrPPG achieves the excellent generalization ability when facing unseen scenarios. It is also noted that the texture-based methods are suffered from huge performance degradation on cross-dataset evaluations, while the drop of the rPPG-based methods are insignificant, which again demonstrate the superior performance of rPPGbased methods on detecting 3D mask attacks." }, { "figure_ref": [ "fig_8", "fig_9" ], "heading": "D. Ablation Studies", "publication_ref": [ "b32", "b19", "b61" ], "table_ref": [ "tab_8" ], "text": "To evaluate each component of the proposed method, a set of ablation studies are performed on the HKBU-Mars V2 dataset, which contains the greatest variations. The baseline method extracts the rPPG signals from the SeetaFace-aligned faces, encodes the signals on the spatial-temporal representation, and learns the features using a ResNet-18 as in [33]. The same evaluation protocol of 20 rounds of LOOCV [20] is applied here. With the same classifier and parameter settings, it is observed that the proposed stitching-based face alignment contributes the most distinct rPPG signals. Compared to the baseline fixing the intermediate face at the first frame and using only landmarks for alignment, the proposed method reduces the EER from 4.17% to 3.78%. The method using only SIFT keypoints and the one fixing the intermediate face at the previous frame fall behind, which presents the negative effects caused by the error propagation over a long alignment chain when facing notable pose variations.\n2) Effects of rPPG-based Face Anti-spoofing Framework: The improvements of the proposed VMrPPG are assessed progressively. The baseline feature representation is the spatialtemporal representation (STR) encoding rPPG signals from faces after stitching-based face alignment. The weighted multichannel STR (WMC-STR) expands the features from the RGB color space to the RGB, YUV and Lab color spaces, and weights the features via a facial vascular density mask. The proposed lightweight EfficientNet with GRU (ENetGRU) is employed to replace the baseline ResNet-18 and extract features from WMC-STR. 
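To illustrate the ENetGRU variant evaluated here, the sketch below combines a few MBConv-style blocks (depthwise convolution with squeeze-and-excitation) with a GRU that reads the resulting feature map column-by-column along the time axis. It is a hedged approximation of the design in Table I; the channel widths, depths, and the pooling over the ROI axis are assumptions rather than the exact configuration.

```python
# Illustrative PyTorch sketch of an EfficientNet-style block stack followed by
# a GRU over the time axis; sizes are assumptions, not the Table I settings.
import torch
import torch.nn as nn

class MBConv(nn.Module):
    """Mobile inverted bottleneck with squeeze-and-excitation."""
    def __init__(self, c_in, c_out, expand=4, stride=1):
        super().__init__()
        c_mid = c_in * expand
        self.expand = nn.Sequential(nn.Conv2d(c_in, c_mid, 1, bias=False),
                                    nn.BatchNorm2d(c_mid), nn.SiLU())
        self.dwise = nn.Sequential(nn.Conv2d(c_mid, c_mid, 3, stride=stride, padding=1,
                                             groups=c_mid, bias=False),
                                   nn.BatchNorm2d(c_mid), nn.SiLU())
        self.se = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                nn.Conv2d(c_mid, max(1, c_mid // 8), 1), nn.SiLU(),
                                nn.Conv2d(max(1, c_mid // 8), c_mid, 1), nn.Sigmoid())
        self.project = nn.Sequential(nn.Conv2d(c_mid, c_out, 1, bias=False),
                                     nn.BatchNorm2d(c_out))
        self.skip = (stride == 1 and c_in == c_out)

    def forward(self, x):
        h = self.dwise(self.expand(x))
        h = h * self.se(h)                      # channel re-weighting
        h = self.project(h)
        return x + h if self.skip else h

class ENetGRU(nn.Module):
    """MBConv feature extractor + GRU over time, binary live/spoof output."""
    def __init__(self, in_ch=18, width=32, hidden=64, n_classes=2):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1, bias=False),
                                  nn.BatchNorm2d(width), nn.SiLU())
        self.blocks = nn.Sequential(MBConv(width, width),
                                    MBConv(width, 2 * width, stride=2),
                                    MBConv(2 * width, 2 * width))
        self.gru = nn.GRU(input_size=2 * width, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (B, 18 channels, 24 ROIs, 120 frames)
        f = self.blocks(self.stem(x))           # (B, C, H', T')
        f = f.mean(dim=2).transpose(1, 2)       # pool ROI axis -> (B, T', C) sequence
        _, h = self.gru(f)
        return self.head(h[-1])                 # two confidence scores (live / spoof)

# logits = ENetGRU()(torch.randn(2, 18, 24, 120))
```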
The STR, WMC-STR, and ENet-GRU are applied progressively to evaluate the performance gain by including each of the proposed building blocks.\nThe comparison results are shown in Fig. 5. By utilizing the spatial-temporal representation and ResNet-18 only, the EER and AUC are 3.78% and 98.98%, respectively. By using the proposed weighted multi-channel STR, the EER decreases to 3.37% and the AUC increases to 99.29%, which indicates that the characteristics of rPPG signals in multiple color spaces are helpful to create distinct signal representations, and highlighting rPPG signals in regions with dense facial vessels is helpful for spoofing detection. After replacing ResNet-18 by ENetGRU, the EER further decreases to 2.78% and the AUC further increases to 99.56%, which suggests the advanced power of the proposed lightweight EfficientNet with GRU in capturing the discriminant features for spoofing detection.\n3) Effects of Using Multiple Color Spaces: To illustrate the benefits of utilizing multiple color spaces, the proposed method is compared to methods utilizing single color space, i.e., RGB, YUV, HSV, Lab, and a learning-based color space LC 1 C 2 [62]. All the other components remain unchanged as STR in Section V-D2. The results are summarized in Table IX. It is witnessed from Table IX that WMC-STR performs better than methods using any single color space, including the learning-based color space. The results show that the complementary information residing in different color spaces help construct a more robust rPPG representation.\n4) Impact of Dataset Variances: The proposed method is compared with the baseline method on four datasets, which does not have any proposed components, and reserves landmark-only face alignment, spatial-temporal representation, and ResNet-18. The same evaluation protocol is used for each dataset as in intra-dataset evaluation. The comparisons in terms of the EER and the AUC are shown in Fig. 6(a) and 6(b), respectively. Compared to the baseline method, the proposed method achieves a solid and consistent performance gain, i.e., 2.63%, 4.59%, 1.39%, and 1.73% in terms of the EER on the 3DMAD, HKBU-Mars V1+, HKBU-Mars V2, and CSMAD datasets, respectively, and 1.76%, 3.08%, 0.60%, 1.74% respectively in terms of the AUC. " }, { "figure_ref": [], "heading": "E. Experimental Results on Other Spoofing Attacks", "publication_ref": [ "b60", "b20", "b5", "b20" ], "table_ref": [ "tab_9" ], "text": "The rPPG signals can be used to detect various types of spoofing attacks from the recorded videos. To evaluate the ability of detecting other spoofing types, the proposed method is evaluated on the Idiap Replay Attack dataset [61] for detecting photo attacks and video replay attacks, following the same experimental settings and the short-time observation (1 second) protocol as in LeTSrPPG [21]. The comparison results to the state-of-the-art rPPG-based and appearance-based methods are summarized in Table X. It can be seen that the proposed VMrPPG significantly and consistently outperforms all the compared methods in detecting photo attacks and video replay attacks. Compared to the second-best method, MS-LBP [6], the proposed method uses less liveness cues but achieves better scores, with the reduction of 4.04% on the HTER on the development set, 2.51% on the HTER on the test set, 3.75% on the EER, and an improvement of 1.31% on the AUC. 
Compared with the previously best performed rPPGbased method, LeTSrPPG [21], the proposed method achieves significant performance gains on all four evaluation metrics." }, { "figure_ref": [ "fig_7", "fig_8", "fig_9", "fig_8" ], "heading": "F. Discussions", "publication_ref": [], "table_ref": [], "text": "The proposed VMrPPG aims to address the two challenges of existing rPPG-based methods: the alignment error of face sequences that may greatly distort the rPPG signals, and the weak and noisy rPPG signals. The ablation studies in Fig. 4 show that the proposed face alignment algorithm using both SIFT keypoints and facial landmarks mitigates the distortion in the rPPG signals caused by face alignment errors. The ablation studies in Fig. 5, Fig. 6, and Table IX demonstrate that the proposed signal weighting mechanism based on the vascular density and the color space fusion helps construct a robust rPPG signal representation. As evidenced in Fig. 5, the proposed customized EfficientNet with the GRU has strong discriminant power. To show the generalization ability of the proposed framework, we have conducted a series of experiments, e.g., the intra-dataset evaluations on the 3DMAD, HKBU-Mars V1+, HKBU-Mars V2 and CSMAD datasets, the intra-dataset evaluations for short-time observations on the 3DMAD, HKBU-Mars V1+ and Idiap Replay Attack datasets, and the cross-dataset evaluations among the 3DMAD, HKBU-Mars V1+, and CSMAD datasets. The proposed method also significantly outperforms all the compared methods for other spoofing attacks such as printed photo attacks and video-replay attacks, as demonstrated in Section V-E. In summary, the proposed method significantly outperforms the state-of-theart rPPG-based methods on multiple datasets under different evaluation protocols for different spoofing attacks." }, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "rPPG signals provide an effective liveness clue to detect 3D mask attacks. However, the rPPG signal has low signal to noise ratio and is sensitive to the spatial positions of video frames. Thus, the facial micro movements and inaccurate face alignment, though can be tolerated by a face recognition system, largely weaken the rPPG signal and hence make it ineffective. To address this challenge, we propose a landmarkanchored face stitching algorithm to align the face at a pixelwise level, design a vascular-weighted multi-channel spatialtemporal representation to rPPG signals, and extract reliable spatial-temporal features by a lightweight EfficientNet with a GRU. More precisely, our contributions are four-fold: 1) To align the face at the pixel level to enhance the rPPG signal quality, a landmark-anchored face stitching algorithm is proposed which utilizes the facial landmarks as anchor points to prevent the error propagation in the face alignment chain, and utilizes face stitching through keypoint to achieve an accurate and consistent face alignment. 2) The rPPG signal features are extracted from different color spaces to make use of the signal characteristics embedded in different color spaces. 3) The processed signals from each ROI are stacked as a spatial-temporal representation, and then weighted using the density of the facial blood vessels, to highlight the ROIs with rich blood vessels. 4) The lightweight EfficientNet with the GRU following the compound-scaling mechanism is developed for spatial-temporal feature learning. 
The proposed method is compared with state-of-the-art rPPG-based face anti-spoofing models under both intra-dataset and cross-dataset evaluations on five datasets. Experimental results show that the proposed approach significantly and consistently outperforms all the compared rPPG-based methods." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the National Natural Science Foundation of China under Grant 72071116 and the Ningbo Municipal Bureau of Science and Technology under Grants 2019B10026." } ]
2023-05-25
[ { "authors": "Z Boulkenafet; J Komulainen; A Hadid", "journal": "IEEE", "ref_id": "b0", "title": "Face anti-spoofing based on color texture analysis", "year": "2015" }, { "authors": "H Chen; G Hu; Z Lei; Y Chen; N M Robertson; S Z Li", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b1", "title": "Attention-based two-stream convolutional networks for face spoofing detection", "year": "2020" }, { "authors": "T Shen; Y Huang; Z Tong", "journal": "", "ref_id": "b2", "title": "FaceBagNet: Bag-of-local-features model for multi-modal face anti-spoofing", "year": "2019" }, { "authors": "W Sun; Y Song; C Chen; J Huang; A C Kot", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b3", "title": "Face spoofing detection based on local ternary label supervision in fully convolutional networks", "year": "2020" }, { "authors": "G Wang; H Han; S Shan; X Chen", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b4", "title": "Unsupervised adversarial domain adaptation for cross-domain face presentation attack detection", "year": "2021" }, { "authors": "N Erdogmus; S Marcel", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b5", "title": "Spoofing face recognition with 3D masks", "year": "2014" }, { "authors": "N Kose; J Dugelay", "journal": "", "ref_id": "b6", "title": "Shape and texture based countermeasure to protect face recognition systems against mask attacks", "year": "2013" }, { "authors": "H Steiner; A Kolb; N Jung", "journal": "", "ref_id": "b7", "title": "Reliable face anti-spoofing using multispectral SWIR imaging", "year": "2016" }, { "authors": "R Shao; X Lan; P C Yuen", "journal": "", "ref_id": "b8", "title": "Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face antispoofing", "year": "2017" }, { "authors": "R Shao; X Lan; P C Yuen", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b9", "title": "Joint discriminative learning of deep dynamic textures for 3D mask face anti-spoofing", "year": "2018" }, { "authors": "T A Siddiqui; S Bharadwaj; T I Dhamecha; A Agarwal; M Vatsa; R Singh; N Ratha", "journal": "", "ref_id": "b10", "title": "Face anti-spoofing with multifeature videolet aggregation", "year": "2016" }, { "authors": "Y Tang; L Chen", "journal": "Springer", "ref_id": "b11", "title": "Shape analysis based anti-spoofing 3D face recognition with mask attacks", "year": "2016" }, { "authors": "Y Wang; S Chen; W Li; D Huang; Y Wang", "journal": "Springer", "ref_id": "b12", "title": "Face anti-spoofing to 3D masks by combining texture and geometry features", "year": "2018" }, { "authors": "L Birla; P Gupta", "journal": "Expert Systems with Applications", "ref_id": "b13", "title": "PATRON: Exploring respiratory signal derived from non-contact face videos for face anti-spoofing", "year": "2022" }, { "authors": "L Birla; P Gupta; S Kumar", "journal": "IEEE Transactions on Dependable and Secure Computing", "ref_id": "b14", "title": "SUNRISE: Improving 3D mask face anti-spoofing for short videos using pre-emptive split and merge", "year": "2023" }, { "authors": "X Li; J Komulainen; G Zhao; P C Yuen; M Pietikäinen", "journal": "IEEE", "ref_id": "b15", "title": "Generalized face anti-spoofing by detecting pulse from face videos", "year": "2016" }, { "authors": "S Liu; P C Yuen; S Zhang; G Zhao", "journal": "Springer", "ref_id": "b16", "title": "3D mask face antispoofing with remote 
photoplethysmography", "year": "2016" }, { "authors": "S Liu; X Lan; P C Yuen", "journal": "", "ref_id": "b17", "title": "Remote photoplethysmography correspondence feature for 3D mask face presentation attack detection", "year": "2018" }, { "authors": "", "journal": "", "ref_id": "b18", "title": "Temporal similarity analysis of remote photoplethysmography for fast 3D mask face presentation attack detection", "year": "2020" }, { "authors": "", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b19", "title": "Multi-channel remote photoplethysmography correspondence feature for 3D mask face presentation attack detection", "year": "2021" }, { "authors": "", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b20", "title": "Learning temporal similarity of remote photoplethysmography for fast 3D mask face presentation attack detection", "year": "2022" }, { "authors": "E M Nowara; A Sabharwal; A Veeraraghavan", "journal": "", "ref_id": "b21", "title": "PPGSecure: Biometric presentation attack detection using photopletysmograms", "year": "2017" }, { "authors": "C Yao; S Wang; J Zhang; W He; H Du; J Ren; R Bai; J Liu", "journal": "", "ref_id": "b22", "title": "rPPG-based spoofing detection for face mask attack using efficientnet on weighted spatial-temporal representation", "year": "2021" }, { "authors": "Z Yu; X Li; P Wang; G Zhao", "journal": "IEEE Signal Processing Letters (SPL)", "ref_id": "b23", "title": "TransRPPG: Remote photoplethysmography transformer for 3D mask face presentation attack detection", "year": "2021" }, { "authors": "A George; Z Mostaani; D Geissenbuhler; O Nikisins; A Anjos; S Marcel", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b24", "title": "Biometric face presentation attack detection with multichannel convolutional neural network", "year": "2020" }, { "authors": "Y Liu; J Stehouwer; A Jourabloo; X Liu", "journal": "", "ref_id": "b25", "title": "Deep tree learning for zero-shot face anti-spoofing", "year": "2019" }, { "authors": "Y Liu; J Stehouwer; X Liu", "journal": "Springer", "ref_id": "b26", "title": "On disentangling spoof trace for generic face anti-spoofing", "year": "2020" }, { "authors": "Z Yu; J Wan; Y Qin; X Li; S Z Li; G Zhao", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b27", "title": "NAS-FAS: Staticdynamic central difference network search for face anti-spoofing", "year": "2021" }, { "authors": "A Liu; C Zhao; Z Yu; J Wan; A Su", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b28", "title": "Contrastive context-aware learning for 3D high-fidelity mask face presentation attack detection", "year": "2022" }, { "authors": "S Bhattacharjee; S Marcel", "journal": "", "ref_id": "b29", "title": "What you can't see can help you-extended-range imaging for 3D-mask presentation attack detection", "year": "2017" }, { "authors": "N Alsufyani; A Ali; S Hoque; F Deravi", "journal": "", "ref_id": "b30", "title": "Biometric presentation attack detection using gaze alignment", "year": "2018" }, { "authors": "X Niu; X Zhao; H Han; A Das; A Dantcheva; S Shan; X Chen", "journal": "", "ref_id": "b31", "title": "Robust remote heart rate estimation from face utilizing spatial-temporal attention", "year": "2019" }, { "authors": "X Niu; S Shan; H Han; X Chen", "journal": "IEEE Transactions on Image Processing", "ref_id": "b32", "title": "Rhythmnet: end-to-end heart rate estimation from face via 
spatial-temporal epresentation", "year": "2020" }, { "authors": "D G Lowe", "journal": "International Journal of Computer Vision (IJCV)", "ref_id": "b33", "title": "Distinctive image features from scale-invariant keypoints", "year": "2004" }, { "authors": "W Wang; A C Brinker; S Stuijk; G De Haan", "journal": "IEEE Transactions on Biomedical Engineering", "ref_id": "b34", "title": "Algorithmic principles of remote PPG", "year": "2017" }, { "authors": "G Sun; H Li; B Li", "journal": "Liaoning Science and Technology Publishing House", "ref_id": "b35", "title": "Colour Atlas of Human Blood Vessels Cast and Angiography", "year": "2018" }, { "authors": "S Jia; G Guo; Z Xu", "journal": "Pattern Recognition", "ref_id": "b36", "title": "A survey on 3D mask presentation attack detection and countermeasures", "year": "2020" }, { "authors": "N Erdogmus; S Marcel", "journal": "", "ref_id": "b37", "title": "Spoofing in 2D face recognition with 3D masks and anti-spoofing with Kinect", "year": "2013" }, { "authors": "J Kannala; E Rahtu", "journal": "", "ref_id": "b38", "title": "BSIF: Binarized statistical image features", "year": "2012" }, { "authors": "S Liu; B Yang; P C Yuen; G Zhao", "journal": "", "ref_id": "b39", "title": "A 3D mask face anti-spoofing database with real world variations", "year": "2016" }, { "authors": "Y Liu; A Jourabloo; X Liu", "journal": "", "ref_id": "b40", "title": "Learning deep models for face anti-spoofing: Binary or auxiliary supervision", "year": "2018" }, { "authors": "B Hamdan; K Mokhtar", "journal": "Signal, Image and Video Processing", "ref_id": "b41", "title": "A self-immune to 3D masks attacks face recognition system", "year": "2018" }, { "authors": "X Li; J Chen; G Zhao; M Pietikäinen", "journal": "", "ref_id": "b42", "title": "Remote heart rate measurement from face videos under realistic situations", "year": "2014" }, { "authors": "G Heusch; S Marcel", "journal": "", "ref_id": "b43", "title": "Pulse-based features for face presentation attack detection", "year": "2018" }, { "authors": "B Lin; X Li; Z Yu; G Zhao", "journal": "", "ref_id": "b44", "title": "Face liveness detection by rPPG features and contextual patch-based CNN", "year": "2019" }, { "authors": "Z Yu; Y Qin; X Li; C Zhao; Z Lei; G Zhao", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b45", "title": "Deep learning for face anti-spoofing: A survey", "year": "2023" }, { "authors": "Y Liu; X Liu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b46", "title": "Spoof trace disentanglement for generic face anti-spoofing", "year": "2023" }, { "authors": "A Liu; Z Tan; J Wan; Y Liang; Z Lei; G Guo; S Z Li", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b47", "title": "Face antispoofing via adversarial cross-modality translation", "year": "2021" }, { "authors": "Y Qin; Z Yu; L Yan; Z Wang; C Zhao; Z Lei", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b48", "title": "Meta-teacher for face anti-spoofing", "year": "2022" }, { "authors": "A Agarwal; D Yadav; N Kohli; R Singh; M Vatsa; A Noore", "journal": "", "ref_id": "b49", "title": "Face presentation attack with latex masks in multispectral videos", "year": "2017" }, { "authors": "G De Haan; V Jeanne", "journal": "IEEE Transactions on Biomedical Engineering", "ref_id": "b50", "title": "Robust pulse rate from chrominance-based rPPG", "year": "2013" }, { "authors": "W Wang; S Stuijk; G De Haan", "journal": 
"IEEE Transactions on Biomedical Engineering", "ref_id": "b51", "title": "A novel algorithm for remote photoplethysmography: Spatial subspace rotation", "year": "2016" }, { "authors": "G Lovisotto; H Turner; S Eberz; I Martinovic", "journal": "", "ref_id": "b52", "title": "Seeing red: PPG biometrics using smartphone cameras", "year": "2020" }, { "authors": "S Wu; M Kan; Z He; S Shan; X Chen", "journal": "Neurocomputing", "ref_id": "b53", "title": "Funnel-structured cascade for multi-view face detection with alignment-awareness", "year": "2017" }, { "authors": "C Geng; X Jiang", "journal": "Pattern Recognition", "ref_id": "b54", "title": "Face recognition based on the multi-scale local image structures", "year": "2011" }, { "authors": "M Tan; Q Le", "journal": "", "ref_id": "b55", "title": "EfficientNet: Rethinking model scaling for convolutional neural networks", "year": "2019" }, { "authors": "A Comas; T K Marks; H Mansour; S Lohit; Y Ma; X Liu", "journal": "", "ref_id": "b56", "title": "Turnip: Time-series U-Net with recurrence for NIR imaging PPG", "year": "2021" }, { "authors": "M Sandler; A Howard; M Zhu; A Zhmoginov; L C Chen", "journal": "", "ref_id": "b57", "title": "MobileNetv2: Inverted residuals and linear bottlenecks", "year": "2018" }, { "authors": "J Hu; L Shen; G Sun", "journal": "", "ref_id": "b58", "title": "Squeeze-and-excitation networks", "year": "2018" }, { "authors": "S Bhattacharjee; A Mohammadi; S Marcel", "journal": "IEEE", "ref_id": "b59", "title": "Spoofing deep face recognition with custom silicone masks", "year": "2018" }, { "authors": "I Chingovska; A Anjos; S Marcel", "journal": "", "ref_id": "b60", "title": "On the effectiveness of local binary patterns in face anti-spoofing", "year": "2012" }, { "authors": "Z Lu; X Jiang; A Kot", "journal": "Pattern Recognition", "ref_id": "b61", "title": "Color space construction by optimizing luminance and chrominance components for face recognition", "year": "2018" } ]
[ { "formula_coordinates": [ 4, 95.93, 208.46, 204.09, 9.65 ], "formula_id": "formula_0", "formula_text": "d M i = d F i + d Si s.t. i ∈ {1, 2}.(1)" }, { "formula_coordinates": [ 4, 184.86, 227.66, 56.58, 9.65 ], "formula_id": "formula_1", "formula_text": "d M 1 ≤ δd M 2 ." }, { "formula_coordinates": [ 4, 81.84, 340.55, 218.18, 41.42 ], "formula_id": "formula_2", "formula_text": "G(d F , d S ) = G(d F ) • G λ (d S ) ∝ - (d F -µ F ) 2 2σ 2 F -λ (d S -µ S ) 2 2σ 2 S ,(2)" }, { "formula_coordinates": [ 4, 48.96, 537.95, 251.06, 64.36 ], "formula_id": "formula_3", "formula_text": "v = [x, y, 1] T to the reference face v ′ = [x ′ , y ′ , 1] T is formulated as:   x ′ y ′ 1   = P   x y 1   =   p 11 p 12 p 13 p 21 p 22 p 23 0 0 1     x y 1   ,(3)" }, { "formula_coordinates": [ 4, 372.28, 339.06, 190.76, 11.03 ], "formula_id": "formula_4", "formula_text": "L(1, k) = L K (1, k) + L L (1, k).(4)" }, { "formula_coordinates": [ 4, 323.55, 487.57, 239.49, 30.32 ], "formula_id": "formula_5", "formula_text": "L K (1, k) = L K (1, k) + 1 m m j=1 |P k,k v K k,j -v K k,j | 2 ,(5)" }, { "formula_coordinates": [ 4, 311.98, 575.04, 251.06, 26.95 ], "formula_id": "formula_6", "formula_text": "F k , v K k,j is the corresponding keypoint in F k and P k,k is the projection matrix from v K k,j of F k to v K k,j of F k," }, { "formula_coordinates": [ 4, 356.73, 672.35, 206.31, 30.32 ], "formula_id": "formula_7", "formula_text": "L L (1, k) = 1 n n j=1 |P 1,k v L k,j -v L 1,j | 2 ,(6)" }, { "formula_coordinates": [ 5, 54.72, 140.41, 245.31, 45.97 ], "formula_id": "formula_8", "formula_text": "for i = 1 to k ′ -1 do 5: Derive P i,k ′ using v K i , v K k ′ , v L i , v L k ′ as outlined in Section III-C 6:" }, { "formula_coordinates": [ 5, 50.73, 200.19, 147.32, 58 ], "formula_id": "formula_9", "formula_text": "7: if L(1, k ′ ) < L min then 8: Update L min ← L(1, k ′ ) 9: Update k ← i 10: end if 11:" }, { "formula_coordinates": [ 5, 50.73, 258.99, 189.47, 48.15 ], "formula_id": "formula_10", "formula_text": "k 13: Update P 1,k ′ ← P 1, k • P k,k ′ 14: end for 15: return {P 1,2 , ..., P 1,k }" }, { "formula_coordinates": [ 5, 134.99, 362.33, 165.03, 11.44 ], "formula_id": "formula_11", "formula_text": "P 1,k = P 1, k • P k,k ,(7)" }, { "formula_coordinates": [ 5, 48.96, 415.65, 251.06, 88.56 ], "formula_id": "formula_12", "formula_text": "F 1 to F k through an intermediate face F k is calculated as, L(1, k) = L K (1, k) + 1 m m j=1 |P k,k v K k,j -v K k,j | 2 + 1 n n j=1 |P 1, kP k,k v L k,j -v L 1,j | 2 .(8)" }, { "formula_coordinates": [ 6, 390.6, 666.86, 168.56, 12.77 ], "formula_id": "formula_13", "formula_text": "S c = [r c 1 , r c 2 , ..., r c M ] T . (9" }, { "formula_coordinates": [ 6, 559.16, 669.32, 3.87, 8.64 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 7, 328.77, 467.14, 217.2, 25.5 ], "formula_id": "formula_15", "formula_text": "i O i H i × W i C i L i 1 Conv3x3 24 × 120 64 1 2" }, { "formula_coordinates": [ 7, 319.21, 689.75, 243.83, 30.32 ], "formula_id": "formula_16", "formula_text": "L = - N i=1 M j=1 y i,j log u i,j + (1 -y i,j ) log(1 -u i,j ). (11)" } ]
Mask Attack Detection Using Vascular-weighted Motion-robust rPPG Signals
Detecting 3D mask attacks to a face recognition system is challenging. Although genuine faces and 3D face masks show significantly different remote photoplethysmography (rPPG) signals, rPPG-based face anti-spoofing methods often suffer from performance degradation due to unstable face alignment in the video sequence and weak rPPG signals. To enhance the rPPG signal in a motion-robust way, a landmarkanchored face stitching method is proposed to align the faces robustly and precisely at the pixel-wise level by using both SIFT keypoints and facial landmarks. To better encode the rPPG signal, a weighted spatial-temporal representation is proposed, which emphasizes the face regions with rich blood vessels. In addition, characteristics of rPPG signals in different color spaces are jointly utilized. To improve the generalization capability, a lightweight EfficientNet with a Gated Recurrent Unit (GRU) is designed to extract both spatial and temporal features from the rPPG spatial-temporal representation for classification. The proposed method is compared with the state-of-the-art methods on five benchmark datasets under both intra-dataset and crossdataset evaluations. The proposed method shows a significant and consistent improvement in performance over other state-ofthe-art rPPG-based methods for face spoofing detection.
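The abstract describes a weighted spatial-temporal rPPG representation in which per-region colour traces are stacked into an image-like map and face regions with dense vasculature are emphasised. Purely to make that idea concrete, a minimal NumPy sketch is included below; the array shapes, the detrending step, and the weight values are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def spatial_temporal_representation(roi_means, vessel_weights):
    """Stack per-ROI colour traces into an image-like map and weight them.

    roi_means:      array of shape (num_rois, num_frames, num_channels); each
                    entry is the mean pixel value of one ROI in one frame.
    vessel_weights: array of shape (num_rois,), larger for regions with rich
                    blood vessels (e.g. cheeks, mandible) -- assumed values.
    """
    # Remove the slowly varying illumination component per ROI and channel.
    detrended = roi_means - roi_means.mean(axis=1, keepdims=True)
    # Emphasise ROIs with dense vasculature, where the rPPG signal is strong.
    weighted = detrended * vessel_weights[:, None, None]
    # Rows index ROIs, columns index time: an image-like representation that a
    # 2-D CNN backbone followed by a recurrent head could consume.
    return weighted

# Toy usage with random data standing in for aligned face-video statistics.
rng = np.random.default_rng(0)
rep = spatial_temporal_representation(
    roi_means=rng.normal(size=(16, 120, 3)),    # 16 ROIs, 120 frames, RGB
    vessel_weights=np.linspace(0.5, 1.5, 16),   # hypothetical vessel-density weights
)
print(rep.shape)  # (16, 120, 3)
```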
Chenglin Yao; Jianfeng Ren; Heshan Du; Jiang Liu; Xudong Jiang; Ruibin Bai
[ { "figure_caption": "Algorithm 1 2 : 3 :123Dynamic programming for robust face alignment Input: A set of keypoints {v K 1 , ..., v K k } and a set of landmarks {v L 1 , ..., v L k } Output: A set of projection matrices {P 1,2 , ..., P 1,k } 1: for k ′ = 2 to k do Let the minimum total alignment loss L min ← ∞ Let the index of intermediate face k ← 1 4:", "figure_data": "", "figure_id": "fig_0", "figure_label": "123", "figure_type": "figure" }, { "figure_caption": "Fig. 1 .1Fig. 1. Visualization of the Aligned Faces in Standard Deviation. The color map is within a scale of 5 in standard deviation. Deeper red represents higher standard deviation value.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "2) rPPG extraction from multiple color spaces. The rPPG signals are extracted from multiple ROIs to take account of their phase differences in different regions of a face. The extracted signals from RGB, YUV, and Lab color spaces are combined to form a consolidated signal representation. 3) Vascular-weighted", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig.2. Overall diagram of the proposed Vascular-weighted Motion-robust rPPG (VMrPPG). Four building blocks are presented: 1) Face alignment via image stitching; 2) rPPG signal extraction from multiple color spaces; 3) Spatial-temporal representation weighted using the density of blood vessels; and 4) A customized EfficientNet with a GRU. The photo of arterial cast originates from[36].", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Biology foundation of rPPG signals. The periodical blood vessel contraction results in the regular color changes of the skin. The cheeks and mandibles have rich blood vessels and hence strong rPPG signals. These regions are hence assigned larger weights as in Eqn. (10).", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "extracts a Local rPPG Confidence Map using the ridge regression of rPPG signals from multiple ROIs, and generate a Local rPPG Correlation Model as the features. The generated features are then classified by an RBF SVM. CFrPPG [18] extracts the correspondence feature of local rPPG signals after the Fast Fourier Transform. The correlation features are extracted based on the Power Spectral Density curve and classified by a linear SVM.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1 )1Effects of Stitching-based Face Alignment: The proposed stitching-based face alignment is compared with other face alignment methods. The proposed method utilizes both SIFT keypoints and facial landmarks for face alignment, and hence one compared method solely utilizes SIFT keypoints (Keypoint Only) and another solely utilizes facial landmarks (Landmark Only). As the proposed method utilizes dynamic programming to select intermediate faces to build a face alignment chain, another compared method fixes the intermediate face at the previous frame (Always Previous). Other components of the baseline method remain unchanged. The experimental results are shown in Fig. 4.", "figure_data": "", "figure_id": "fig_6", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Ablation study on face alignment methods. 
The EER of the proposed stitching-based face alignment is 0.39% lower than the baseline method.", "figure_data": "", "figure_id": "fig_7", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig.5. Ablation study on rPPG-based face anti-spoofing framework in terms of the EER and the AUC. The EER gradually decreases by 0.41% and 0.59%, and the AUC progressively increases by 0.31% and 0.27% when using weighted multi-channel STR (WMC-STR) and replacing ResNet-18 with the proposed lightweight EfficientNet with GRU (ENetGRU).", "figure_data": "", "figure_id": "fig_8", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Ablation studies on dataset variances. The proposed VMrPPG consistently and significantly outperforms the baseline method on all four datasets in terms of both the EER and the AUC.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "ARCHITECTURE OF THE PROPOSED NETWORK.", "figure_data": "", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "ON THE 3DMAD DATASET. THE BEST SCORES ARE MARKED IN BOLD AND THE SECOND BEST IN UNDERLINE. THE RPPG-BASED METHODS ARE MARKED WITH ⋆ WHILE APPEARANCE-BASED METHODS WITH ▲.", "figure_data": "MethodHTER devHTER testEERAUC▲ MS-LBP [6]1.25 ± 1.904.22 ± 10.302.6699.60▲ CTA [1]2.78 ± 3.604.40 ± 9.704.2499.30▲ FBNet-RGB [3]3.91 ± 2.405.66 ± 9.705.5498.60⋆ GrPPG [16]13.40 ± 4.2013.20 ± 13.2013.9092.60⋆ PPGSecure [22]15.20 ± 4.4015.90 ± 14.6015.8090.80⋆ LrPPG [17]9.06 ± 4.408.57 ± 13.308.8896.00⋆ CFrPPG [18]5.95 ± 3.306.82 ± 12.106.9497.10⋆ MCCFrPPG [20]4.42 ± 2.305.60 ± 8.805.0198.70⋆ TransRPPG [24]--2.3898.80⋆ LeTSrPPG [21]11.15 ± 2.9912.35 ± 10.867.6595.53⋆ Proposed VMrPPG1.34 ± 1.512.16 ± 4.170.8799.58", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "ON THE 3DMAD DATASET FOR SHORT-TIME OBSERVATIONS. 
THE RPPG-BASED METHODS DESIGNED SPECIFICALLY FOR SHORT-TIME OBSERVATION ARE MARKED WITH •.", "figure_data": "MethodHTER devHTER testEERAUC⋆ GrPPG [16]34.10 ± 5.7033.70 ± 11.6038.3065.90⋆ PPGSecure [22]33.30 ± 3.1033.00 ± 8.1034.8069.40⋆ LrPPG [17]45.20 ± 3.2044.80 ± 8.8045.3055.70⋆ CFrPPG [18]32.80 ± 1.7032.70 ± 7.4032.5070.80⋆ TransRPPG [24]20.70 ± 2.2020.60 ± 8.3020.8084.50• TSrPPG [19]13.10 ± 3.0013.40 ± 11.2013.3093.80• SUNRISE [15]--12.5093.70• LeTSrPPG [21]11.5 ± 2.7011.80 ± 8.6011.9094.40⋆ Proposed VMrPPG3.67 ± 1.335.00 ± 7.064.4198.09BPCER@APCER=0.01 represents the BPCER value whenAPCER is 0.01.Area Under the Curve (AUC) indicates the area under theReceiver Operating Characteristic curve.", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "ON THE HKBU-MARS V1+ DATASET.", "figure_data": "MethodHTER devHTER testEERAUC▲ MS-LBP [6]20.50 ± 8.9024.00 ± 25.6024.8084.50▲ CTA [1]22.40 ± 10.4023.40 ± 20.5023.3081.90▲ FBNet-RGB [3]35.00 ± 11.3036.10 ± 26.0036.4067.30⋆ GrPPG [16]15.40 ± 6.7015.40 ± 20.4016.2089.30⋆ PPGSecure [22]14.20 ± 5.8015.60 ± 16.4017.4090.70⋆ LrPPG [17]8.43 ± 2.908.67 ± 8.808.9497.10⋆ CFrPPG [18]3.24 ± 2.004.10 ± 4.904.0099.30⋆ MCCFrPPG [20]2.85 ± 1.803.38 ± 4.803.1099.70⋆ PATRON [14]--14.7087.80⋆ LeTSrPPG [21]7.64 ± 4.374.09 ± 7.337.2795.64⋆ Proposed VMrPPG1.48 ± 1.662.84 ± 5.010.7799.73", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "ON THE CSMAD DATASET UNDER THE LEAVE-HALF-OUT-FOR-TRAINING PROTOCOL[20].", "figure_data": "MethodHTER testEERAUCBPCER@ APCER=0.1BPCER@ APCER=0.01▲ MS-LBP [6]8.36 ± 4.209.2896.207.6555.80▲ CTA [1]11.10 ± 4.6012.9094.7016.6048.70▲ FBNet-RGB [3]39.30 ± 4.2040.3063.8083.1097.90⋆ GrPPG [16]35.70 ± 2.8037.2070.2068.6091.70⋆ PPGSecure [22]21.90 ± 5.7023.3083.6032.6056.40⋆ LrPPG [17]19.10 ± 5.0019.6087.6032.9082.50⋆ CFrPPG [18]12.50 ± 3.0012.2093.8015.7059.70⋆ MCCFrPPG [20]10.30 ± 2.9010.7094.6011.3034.00⋆ Proposed VMrPPG6.91 ± 2.408.0097.367.1532.77", "figure_id": "tab_6", "figure_label": "VII", "figure_type": "table" }, { "figure_caption": "ON THE 3DMAD, CSMAD, AND HKBU-MARS V1+ DATASETS UNDER THE CROSS-DATASET EVALUATION. 
\"A→B\" REFERS TO TRAINING ON DATASET A AND TESTING ON DATASET B.", "figure_data": "SettingsMethodsHTER testAUCA → B BPCER@ APCER=0.1BPCER@ APCER=0.01HTER testAUCB → A BPCER@ APCER=0.1BPCER@ APCER=0.01▲ MS-LBP [6]36.80 ± 2.90 60.7087.5097.0041.30 ± 14.00 62.2089.2099.50▲ CTA [1]71.80 ± 2.10 45.9096.8099.3055.70 ± 8.70 48.6089.9097.40▲ FBNet-RGB [3]34.00 ± 1.40 73.6065.7097.8012.30 ± 10.60 89.6026.8066.80A: 3DMAD B: HKBU-Mars V1+⋆ GrPPG [16] ⋆ PPGSecure [22] ⋆ LrPPG [17]35.90 ± 4.50 67.20 14.40 ± 1.40 91.80 4.46 ± 0.90 98.9075.80 16.90 1.3397.60 25.80 31.2036.50 ± 6.80 66.50 19.10 ± 2.30 87.20 8.46 ± 0.30 95.3086.30 26.20 8.7998.60 45.00 17.00⋆ CFrPPG [18]4.23 ± 0.3099.002.8319.904.81 ± 0.4098.104.4414.30⋆ MCCFrPPG [20]3.46 ± 0.6099.600.257.634.78 ± 0.8098.504.008.59⋆ Proposed VMrPPG 2.21 ± 0.79 99.730.506.183.47 ± 0.98 99.042.848.37▲ MS-LBP [6]50.60 ± 5.60 49.5089.9098.1042.70 ± 6.40 58.2083.9098.90▲ CTA [1]48.90 ± 5.80 50.7088.6098.1058.40 ± 7.80 46.5093.6099.30▲ FBNet-RGB [3]46.30 ± 2.30 56.6080.8099.0050.20 ± 18.10 52.5087.5098.50A: 3DMAD B: CSMAD⋆ GrPPG [16] ⋆ PPGSecure [22] ⋆ LrPPG [17]43.60 ± 3.70 52.70 43.60 ± 1.50 60.70 40.50 ± 2.60 67.0085.50 87.40 63.5096.50 98.60 82.3050.00 ± 0.00 50.00 24.80 ± 11.90 77.20 17.00 ± 7.20 84.7090.00 46.50 46.3099.00 64.60 99.90⋆ CFrPPG [18]22.70 ± 0.60 82.6051.1089.106.37 ± 1.0096.308.2616.70⋆ MCCFrPPG [20]9.98 ± 0.40 95.7010.9046.903.71 ± 0.8098.603.478.09⋆ Proposed VMrPPG 9.18 ± 1.04 95.4010.5738.512.94 ± 1.33 99.412.237.36▲ MS-LBP [6]42.30 ± 3.20 52.3085.4097.9045.00 ± 5.80 54.8087.1099.00▲ CTA [1]53.60 ± 5.00 48.8090.3098.7037.80 ± 4.80 61.5080.2096.20▲ FBNet-RGB [3]41.60 ± 3.70 57.0087.1099.1040.90 ± 6.80 56.1086.0097.20A: HKBU-Mars V1+ B: CSMAD⋆ GrPPG [16] ⋆ PPGSecure [22] ⋆ LrPPG [17]54.00 ± 11.40 48.90 52.20 ± 2.20 52.10 40.40 ± 2.90 65.4088.70 91.60 65.1098.60 99.90 81.5050.00 ± 0.00 50.00 37.60 ± 3.90 53.30 13.80 ± 8.00 88.6090.00 83.70 39.2099.00 96.00 98.10⋆ CFrPPG [18]22.50 ± 0.70 84.0047.8085.802.58 ± 0.8099.301.2917.30⋆ MCCFrPPG [20]10.80 ± 0.50 95.3012.0040.902.67 ± 0.9099.700.755.50⋆ Proposed VMrPPG 9.44 ± 1.10 95.3710.7537.581.93 ± 1.05 99.760.594.50", "figure_id": "tab_7", "figure_label": "VIII", "figure_type": "table" }, { "figure_caption": "STUDIES OF UTILIZING RPPG SIGNALS IN DIFFERENT COLOR SPACES.", "figure_data": "Color SpaceEERAUCRGB (STR)3.7898.98Lab3.7899.10YUV4.1798.84HSV13.1991.39LC 1 C 2 [62]3.8198.83WMC-STR3.3799.29", "figure_id": "tab_8", "figure_label": "IX", "figure_type": "table" }, { "figure_caption": "ON THE IDIAP REPLAY ATTACK DATASET FOR SHORT-TIME OBSERVATIONS. MS-LBP [6] 8.54 ± 1.40 8.43 ± 8.00 8.76 97.40 ⋆ GrPPG [16] 45.30 ± 0.60 45.30 ± 5.20 45.30 56.50 ⋆ PPGSecure [22] 39.10 ± 0.60 38.90 ± 4.50", "figure_data": "MethodHTER devHTER testEERAUC▲ 39.1065.50⋆ LrPPG [17]44.20 ± 0.6044.30 ± 4.8044.2059.00⋆ CFrPPG [18]36.30 ± 0.7036.20 ± 5.1036.3068.00⋆ LeTSrPPG [21]18.50 ± 0.7018.70 ± 6.1018.6088.90⋆ Proposed VMrPPG4.50 ± 0.405.92 ± 4.135.0198.71", "figure_id": "tab_9", "figure_label": "X", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[22-24, 32, 33]", "Explanation": "The cited works provide a method for face alignment through facial landmarks, which the citing paper adopts to enhance the quality of rPPG signals in the study of 3D mask attacks."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work provides a SIFT descriptor that the citing paper uses to detect facial keypoints at the pixel-wise level, which is a methodological basis for the proposed face-stitching algorithm in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work provides a method for empirically weighting rPPG signals, which the citing paper adopts in their research to enhance the rPPG signal and focus on facial regions with rich blood vessels."}, {"Category": "Data Source", "Citation": "[33]", "Explanation": "The cited work provides a data-driven method for weighting rPPG signals, which the citing paper uses to focus on facial regions with rich blood vessels in their research."}, {"Category": "Extension or Continuation", "Citation": "[35]", "Explanation": "The cited work provides insights into the origin of rPPG signals and the role of blood vessels in generating the signal, which the citing paper uses to enhance the rPPG signal and focus on facial regions with rich blood vessels in their research."}, {"Category": "Extension or Continuation", "Citation": "[36]", "Explanation": "The cited work provides a method for estimating the density of blood vessels in facial regions, which the citing paper uses to enhance the rPPG signal and focus on facial regions with rich blood vessels in their research."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work introduces texture-based methods for spoofing detection, which the citing paper adopts to detect the difference in texture pattern between spoofing faces and genuine faces."}, {"Category": "Data Source", "Citation": "[6,7,38]", "Explanation": "The cited works provide texture descriptors such as LBP, which the citing paper utilizes in its research on face anti-spoofing by introducing a multi-scale LBP on both RGB images and depth images."}, {"Category": "Data Source", "Citation": "[39]", "Explanation": "The cited work introduces the Binarized Statistical Image Features (BSIF) descriptor, which the citing paper uses in its research on face anti-spoofing by extending the LBP to other descriptors such as modified LBP, transitional LBP, and direction-coded LBP."}, {"Category": "Data Source", "Citation": "[18,40]", "Explanation": "The cited works provide datasets such as 3DMAD and HKBU-Mars, which the citing paper uses as a baseline for its research on face anti-spoofing by recognizing unique textures on facial masks."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work extends the research on texture descriptors by introducing a multi-scale LBP on both RGB images and depth images, which the citing paper adopts in its research on face anti-spoofing."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work by Shao et al. [9] is used as a methodological basis for extracting facial motion features using a VGG Net in the citing paper."}, {"Category": "Data Source", "Citation": "[41]", "Explanation": "The cited work by Liu et al. 
[41] is acknowledged as the source of the CNN-RNN architecture for facial motion feature extraction and classification in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[12]", "Explanation": "The cited work by Tang and Chen [12] is extended in the citing paper by applying Principal Curvature Measures and meshed SIFT-based features to face spoofing detection."}, {"Category": "Data Source", "Citation": "[42]", "Explanation": "The cited work by Hamdan and Mokhtar [42] is acknowledged as the source of the liveness cue features from Legendre Moments Invariants and Linear Discriminant Analysis, which are used in the citing paper for classification using Maximum Likelihood Estimation."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work by Wang et al. [13] is the source of the geometry features obtained by reconstructing a 3D morphable model from RGB images, which are combined with LBP features under both handcrafted fusion and VGG-generated fusion in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work by Liu et al. introduced the local rPPG correlation model, which the citing paper adopts to extract rPPG signals for heart rate estimation and face anti-spoofing in natural environments."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work on rPPG correspondence features is improved upon in the citing paper to differentiate genuine faces from spoofing attacks in face anti-spoofing."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work on multi-channel correspondence features is utilized in the citing paper to further improve rPPG signal analysis for face anti-spoofing."}, {"Category": "Methodological Basis", "Citation": "[19,21]", "Explanation": "The cited works on signal similarity between neighboring ROIs are extended in the citing paper to three features for rPPG analysis in face anti-spoofing."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work on PATRON is referenced in the citing paper to extract respiratory signals from rPPG signals as an auxiliary liveness cue in face anti-spoofing."}, {"Category": "Methodological Basis", "Citation": "[44]", "Explanation": "The cited work on the LTSS feature descriptor is used in the citing paper to analyze rPPG signals for face anti-spoofing by employing the first and second order statistics of the frequency spectrum of a signal."}, {"Category": "Methodological Basis", "Citation": "[47]", "Explanation": "The cited work, STDN+, provides a method for estimating spoofing-related patterns on trace modeling, which the citing paper adopts in their research to improve the performance of face spoofing detection."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The Modality Translation Network (MT-Net) and Modality Assistance Network (MA-Net) designed in the cited work by Liu et al. are used in the citing paper to translate patterns from different modalities and improve feature translation between modalities in face spoofing detection."}, {"Category": "Methodological Basis", "Citation": "[49]", "Explanation": "The meta-teacher optimization framework proposed in the cited work by Qin et al. 
is utilized in the citing paper to supervise the process of learning rich spoofing cues in face spoofing detection."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The Neural Architecture Search (NAS) for face antispoofing developed in the cited work by Yu et al. is adopted in the citing paper to adapt face spoofing detection to different scenarios in an automatic way."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The thermal imaging spectrum technology proposed in the cited work by Agarwal et al. is used in the citing paper to detect 3D mask attacks in face spoofing detection, as it shows a predominant power in this area."}, {"Category": "Methodological Basis", "Citation": "[32,33]", "Explanation": "The cited works provide foundational data and methodologies for remote heart rate monitoring applications, which the citing paper builds upon in its research."}, {"Category": "Data Source", "Citation": "[17][18][19][20]", "Explanation": "The cited works are the data sources for the face anti-spoofing applications mentioned in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[23,35,51]", "Explanation": "The cited works have developed methods for denoising and signal enhancement in rPPG signals, which the citing paper further extends by utilizing the knowledge of color model and data-driven discovery to improve the noise filtering in general situations."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work by Wang et al. has improved the tracking of hue changes in rPPG signals by incorporating data-driven discovery and physiological properties of skin reflections, which the citing paper adopts as a methodological basis for its research."}, {"Category": "Data Source", "Citation": "[51]", "Explanation": "The cited work by CHROM is a data source for the rPPG signal filtering method designed to be robust to non-white illuminations in general situations."}, {"Category": "Data Source", "Citation": "[52]", "Explanation": "The cited work by 2SR detects rPPG signals by tracking hue changes in the skin, which the citing paper uses as a data source for its research."}, {"Category": "Data Source", "Citation": "[35]", "Explanation": "The cited work by Wang et al. 
is a data source for the improved tracking of hue changes in rPPG signals mentioned in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[53]", "Explanation": "The cited work by [53] provides a method for enhancing the signal using a lowpass filter at 4 Hz with the Beat Separation algorithm, which the citing paper adopts in their research to improve the quality of the signal."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work by [23] applies a bandpass filter with a cut-off frequency of 0.8 Hz -3.3 Hz, which the citing paper may have used in their research to filter out unwanted frequencies and improve the signal quality."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work by SIFT keypoints provides a robust method for detecting keypoints in a pixel-wise precision, which the citing paper adopts to address the ambiguity of facial landmarks in face alignment."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work by SIFT algorithm is utilized in the citing paper to extract keypoints and feature vectors for face alignment in video sequences."}, {"Category": "Methodological Basis", "Citation": "[55]", "Explanation": "The cited work provides a method for modeling the mapping between two successive frames in a face video as an affine transform, which the citing paper adopts in their research on face video analysis."}, {"Category": "Methodological Basis", "Citation": "[40]", "Explanation": "The cited work provides the test video dataset used in the study conducted in the citing paper, which serves as a foundational element for the analysis and results presented."}, {"Category": "Methodological Basis", "Citation": "[56]", "Explanation": "The cited work on compound scaling mechanism is used in the proposed framework to provide a wide adaption ability in modeling rPPG signals in various applications."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work provides the method of calculating rPPG signals by taking the average pixel values of a color channel, which the citing paper adopts in the face stitching process to align the faces in a video."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work provides a method of constructing an image-like spatial-temporal representation of rPPG signals by stacking signals from different ROIs, which the citing paper adopts in their research to capture the phase pattern of rPPG signals."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work in [20] provides a method to measure the rPPG signal in each pixel of a frontal human face, which is used in the citing paper to estimate the weights of the rPPG signals in the region with rich blood vessels."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work shows that rPPG signals in different color spaces have unique characteristics, which the citing paper uses to construct a robust signal representation by jointly utilizing the characteristics in RGB, YUV, and Lab color spaces."}, {"Category": "Methodological Basis", "Citation": "[56]", "Explanation": "The cited work by EfficientNet is utilized in the citing paper to leverage the power of image classification in extracting discriminant features from the proposed representation of rPPG signals."}, {"Category": "Methodological Basis", "Citation": "[57]", "Explanation": "The Gated Recurrent Unit 
(GRU) in the proposed design of EfficientNet with GRU is utilized to model temporal relations in the spatial-temporal signal representation of rPPG signals."}, {"Category": "Methodological Basis", "Citation": "[58]", "Explanation": "The cited work introduces the mobile inverted bottleneck architecture, which the citing paper adopts in their proposed network for spatial-temporal representation."}, {"Category": "Methodological Basis", "Citation": "[59]", "Explanation": "The cited work presents the squeeze-and-excitation optimization technique, which the citing paper uses in the mobile inverted bottleneck architecture to improve the network performance."}, {"Category": "Data Source", "Citation": "[38]", "Explanation": "The 3DMAD dataset is used as a benchmark for evaluating 3D mask attack detection in the citing paper."}, {"Category": "Data Source", "Citation": "[60]", "Explanation": "The CSMAD dataset is a custom dataset used for evaluating 3D mask attack detection in the citing paper."}, {"Category": "Data Source", "Citation": "[18]", "Explanation": "The HKBU-Mars V1+ dataset is used for evaluating 3D mask attack detection in the citing paper."}, {"Category": "Data Source", "Citation": "[40]", "Explanation": "The HKBU-Mars V2 dataset is used for evaluating 3D mask attack detection in the citing paper."}, {"Category": "Data Source", "Citation": "[61]", "Explanation": "The Idiap Replay Attack dataset is used for evaluating printed photo and video replay attacks in the citing paper."}, {"Category": "Data Source", "Citation": "[18]", "Explanation": "The dataset used in the cited work (HKBU-Mars V1+) is employed in the citing paper for evaluation purposes, indicating a reliance on external data for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[40]", "Explanation": "The cited work, HKBU-Mars V2 dataset, is used as a data source for the videos recorded in the citing paper. 
The dataset is much larger and covers more real-world variations, making it a valuable source of data for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[61]", "Explanation": "The cited work, Idiap Replay Attack Dataset, is a source of video clips used in the research conducted in the citing paper to study the performance of various spoofing attacks under different lighting conditions."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work, Multi-Scale Local Binary Pattern (MS-LBP), serves as the baseline method for the research conducted in the citing paper on extracting LBP-histogram features and using a support vector machine (SVM) for classification in 3D mask attack datasets."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work, Color Texture Analysis (CTA), provides a method for extracting LBP features on both HSV and YCbCr color spaces, which the citing paper adopts in their research to study the performance of different spoofing attacks."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "FBNet-RGB is cited as a method for extracting features from image patches using residual blocks, which the citing paper adopts for their own research."}, {"Category": "Supporting Evidence", "Citation": "[16]", "Explanation": "GrPPG is cited for its use of the Power Spectral Density curve generated by the Fast Fourier Transform (FFT) as a feature representation, which the citing paper may have utilized in their own research."}, {"Category": "Supporting Evidence", "Citation": "[22]", "Explanation": "PPGSecure is cited for its extraction of signals from both skin area and backgrounds to construct spectral features using the FFT, which may have provided a basis for the citing paper in their own research."}, {"Category": "Supporting Evidence", "Citation": "[17]", "Explanation": "LrPPG and TSrPPG are cited for their excavation of similarity and dissimilarity features between rPPG signals from multiple face regions, which the citing paper may have leveraged in their own research."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work, MCCFrPPG, extends the CFrPPG by applying STFT to obtain spectrogram features, which the citing paper adopts in their research to extract temporal information from rPPG signals."}, {"Category": "Extension or Continuation", "Citation": "[24]", "Explanation": "The cited work, TransRPPG, adopts a vision transformer to extract spatial-temporal maps on facial skin and background regions, which the citing paper extends by designing two network branches of share-weight Transformer Layers to learn attentional features."}, {"Category": "Data Source", "Citation": "[14]", "Explanation": "The cited work, PATRON, separates respiratory signals from rPPG signals as a new liveness cue, which the citing paper utilizes as a new feature in their research to improve rPPG signal classification."}, {"Category": "Data Source", "Citation": "[15]", "Explanation": "The cited work, SUNRISE, considers similarity features of multiple ROIs from rPPG signals, which the citing paper uses in their research to extract features from both temporal and spectral representations of rPPG signals for classification."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work LeTSrPPG extends the TSrPPG by introducing a C(2+1)D neural network to improve the quality of rPPG signals, which the citing paper adopts in 
their research to achieve higher quality rPPGs."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work provides a design for rPPG signal extraction that the citing paper adopts in their research to learn a unified pattern from videos with different frame rates."}, {"Category": "Data Source", "Citation": "Video segments of 120 frames with an overlapping of 117 frames", "Explanation": "The cited data source is the method of cutting videos into segments for analysis, which the citing paper utilizes in their research to make better use of information and adapt to videos of different sizes."}, {"Category": "Extension or Continuation", "Citation": "Learning rate of 0.1 with a decay to 10% every 4 epochs and a L2 regularization penalty of 5\u00d710 -4", "Explanation": "The citing paper extends the research by setting the learning rate and L2 regularization penalty to specific values to improve the learning process and reduce overfitting in the training of the model."}, {"Category": "Supporting Evidence", "Citation": "[18,20]", "Explanation": "The cited works are used to establish the standard evaluation protocol for fair comparison in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work, LeTSrPPG, is implemented and evaluated in the citing paper to provide a method for comparison to other state-of-the-art methods."}, {"Category": "Data Source", "Citation": "[20]", "Explanation": "The cited work is used to report the results of the previously best performed rPPG-based method on the 3DMAD dataset, which serves as a data source for comparison in the citing paper."}, {"Category": "Data Source", "Citation": "[24]", "Explanation": "The cited work is used to report the results of the previously best performed method TransRPPG on the 3DMAD dataset, which serves as a data source for comparison in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[24]", "Explanation": "The cited work, TransRPPG, provides a method for face spoofing detection that the proposed method in the citing paper builds upon to achieve a higher AUC result of 99.58%. This indicates a strong foundation for the research conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[19]", "Explanation": "The cited work, TSrPPG, is originally designed for short-time observation scenarios. The citing paper extends the research by evaluating the proposed method in a similar scenario, following the protocol from the cited work."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work, SUNRISE, is also mentioned in the context of short-time observation scenarios. The citing paper further extends the research by evaluating the proposed method in this context."}, {"Category": "Extension or Continuation", "Citation": "[21]", "Explanation": "The cited work, LeTSrPPG, is mentioned in the context of short-time observation scenarios. The citing paper extends the research by evaluating the proposed method in this scenario, following the protocol from the cited work."}, {"Category": "Supporting Evidence", "Citation": "[21]", "Explanation": "The cited work, LeTSrPPG, serves as a benchmark for the performance of the proposed method in the citing paper. 
The comparison with LeTSrPPG demonstrates the improvement in EER and AUC achieved by the proposed method."}, {"Category": "Extension or Continuation", "Citation": "HKBU-Mars V1+ Dataset", "Explanation": "The cited dataset is used in the research conducted in the citing paper to evaluate the generalization ability of the model. The results obtained on the dataset are presented in Table IV, showing the performance of the model in a new context."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work MCCFrPPG is used to compare the performance of the proposed VMrPPG method, demonstrating that the proposed method outperforms the second best method in terms of HTER on the development and test sets."}, {"Category": "Extension or Continuation", "Citation": "[21]", "Explanation": "The cited work is used to conduct comparison experiments for short-time observations on the HKBU-Mars V1+ dataset, extending the research of the citing paper to include short-time observations."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work, MCCFrPPG [20], serves as a benchmark for comparison in the evaluation of the proposed VMrPPG method on the HKBU-Mars V2 dataset. The results show that the proposed method outperforms the compared methods in all three evaluation metrics, indicating the superior performance of the rPPG-based methods in overcoming variations in backgrounds, illumination conditions, and camera sensors."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work by [20] provides a protocol for training and testing on the CSMAD dataset, which the citing paper adopts to ensure a fair and consistent evaluation of the proposed VMrPPG method."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work provides the protocol for cross-dataset evaluations, which the citing paper adopts in its research to evaluate the generalization ability of the proposed method on unseen scenarios."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work MCCFrPPG is the previous best method in the field, and the proposed method in the citing paper achieves a performance gain in several evaluation criteria compared to it, indicating the improvement in the research."}, {"Category": "Extension or Continuation", "Citation": "3DMAD vs. CSMAD", "Explanation": "The proposed VMrPPG method in the citing paper is compared to the previous method CSMAD, and the results show that the proposed method ranks the first on 7 evaluation criteria out of 8, indicating an extension or continuation of the research in a new direction."}, {"Category": "Extension or Continuation", "Citation": "HKBU-Mars V1+ vs. 
CSMAD", "Explanation": "The proposed VMrPPG method in the citing paper is compared to the previous method CSMAD in a new experiment, and the results show that the proposed method ranks the first on all 8 evaluation criteria, indicating an extension or continuation of the research in a new context."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work MCCFrPPG provides a second-best method for cross-dataset evaluation, which the citing paper uses to compare the performance of the proposed VMrPPG method."}, {"Category": "Supporting Evidence", "Citation": "[18]", "Explanation": "The cited work CFrPPG serves as a second-best method for cross-dataset evaluation, and the citing paper uses it to demonstrate the performance improvement of the proposed VMrPPG method."}, {"Category": "Extension or Continuation", "Citation": "CSMAD dataset", "Explanation": "The citing paper extends the research by evaluating the performance of rPPG-based methods on the CSMAD dataset, which contains more illumination conditions than the 3DMAD or HKBU-Mars V1+ datasets."}, {"Category": "Extension or Continuation", "Citation": "Cross-dataset evaluations", "Explanation": "The citing paper further extends the research by conducting cross-dataset evaluations to demonstrate the generalization ability of the proposed VMrPPG method in unseen scenarios."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work provides the baseline method for extracting rPPG signals and learning features using a ResNet-18, which the citing paper adopts in the evaluation of the proposed method."}, {"Category": "Methodological Basis", "Citation": "(ResNet-18 by ENetGRU)", "Explanation": "The cited work provides a specific model (ResNet-18 by ENetGRU) that the citing paper adopts in their research to improve the performance of spoofing detection."}, {"Category": "Extension or Continuation", "Citation": "(the proposed method is compared to methods utilizing single color space)", "Explanation": "The cited work introduces the use of multiple color spaces in the proposed method, which the citing paper extends by comparing the method to those utilizing single color spaces."}, {"Category": "Data Source", "Citation": "(the proposed method is compared to methods utilizing single color space)", "Explanation": "The cited work provides the single color spaces (RGB, YUV, HSV, Lab, and LC 1 C 2 [62]) that the citing paper uses in the comparison of the proposed method to methods utilizing single color space."}, {"Category": "Extension or Continuation", "Citation": "[61]", "Explanation": "The cited work provides the Idiap Replay Attack dataset for evaluating the ability of the proposed method to detect photo and video replay attacks. The citing paper extends the research by using the dataset to assess the performance of the method in detecting these spoofing attacks."}, {"Category": "Supporting Evidence", "Citation": "[21]", "Explanation": "The cited work, LeTSrPPG, is used as a benchmark for comparison in the evaluation of the proposed VMrPPG method. The results show that the proposed method outperforms the LeTSrPPG method in detecting photo and video replay attacks."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b55", "b27", "b5", "b6", "b0", "b7", "b21", "b41", "b55" ], "table_ref": [], "text": "Anomaly detection involves the identification and localisation of instances in data that are inconsistent with nominal observations. Detecting out-of-distribution data is a pivotal task in many fields of industry [4,55], medicine [23,54] and video surveillance [27]. In a supervised setting, a model is trained on a dataset with normal and abnormal examples. However, anomalies are usually unforeseen and these models often struggle during inference. Conversely, unsupervised methods model the distribution of only nominal samples to detect anomalies as patterns that deviate from the nominal distribution. Thus, they are not restricted to a finite set of anomalies.\nRepresentation-based methods [6,7,9,16,36,47] rely on extracted features from pretrained neural networks to define the similarity metric for nominal samples and to approach the problem on a nearest neighbour strategy. Reconstruction-based methods [1,8,26] learn a generative model from only nominal training examples. Such models learn the entire distribution of nominal samples but are incapable of generating samples that deviate from this distribution. This allows for the detection of anomalies by comparing anomalous input with its predicted anomaly-free reconstruction. However, past methods have suffered from inferior reconstruction quality or insufficient coverage of the nominal distribution, both resulting in erroneous comparisons between the reconstruction and the input image.\nRecently, diffusion models [21,41] have gained popularity as prolific deep generative models. This paper revisits reconstruction-based anomaly detection framework, harnessing the potential of diffusion models to generate an impressive reconstruction of anomalous images, see Figure 1. In this paper, we show that plain diffusion models are inapplicable to the anomaly detection task. Thus, we make the following contributions. First, we propose a conditioning mechanism that guides the denoising process to amend each perturbed image until it approximates a target image. This conditioning mechanism increases Image AU-ROC from 85.7% to 92.4% and from 87.0% to 94.1% on MVTec [4] and VisA [55], respectively. Second, we discover that a combination of a pixel-wise and feature-wise comparison of the reconstruction and the input image boosts the detection and localisation precision. Third, we introduce an unsupervised domain adaptation technique to shift the domain of a pretrained feature extractor to the problem at hand. For this purpose, a similar image to a target image is generated by our denoising pipeline. The pretrained feature extractor is then fine-tuned by minimising the extracted features' distance from the two images. In order to avoid catastrophic forgetting of the pretrained network, we additionally include a distillation loss from a frozen feature extractor. Our domain adaptation technique instils invariance to nominal changes during reconstruction while preserving generality and learning the new domain. This domainadapted feature comparison further lifts results to an Image AUROC of 99.8% and 98.9% on MVTec and VisA, sur-Figure 1. Our approach achieves defect-free reconstruction of input images that are devoid of anomalies. An accurate anomaly detection heatmap is computed. Note that reconstructions are analogous to the expected nominal approximation of the input. 
In the category of cables, an incorrectly placed green cable has been corrected to a blue one by the model. Such corrected images may offer further benefit for the industry in repairing defects or worker training.\npassing not only reconstruction-based methods but state-ofthe-art (SOTA) representation-based models. We additionally introduce a compressed version of DDAD, denoted as DDAD-S, tailored for applications constrained by limited resources." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b12", "b13", "b18", "b1", "b17", "b5", "b6", "b10", "b4" ], "table_ref": [], "text": "Representation-based methods Self-supervised learning has been used in the past to learn image features [13,31,33], often by solving auxiliary tasks. In anomaly detection, [14,18] have demonstrated that high-quality features facilitate the detection of anomalous samples. DN2 [2] has successfully employed simple ResNets [17], pretrained on Imagenet [38], to extract informative features. Recent approaches such as SPADE [6] uses a memory bank of nominal extracted features, PaDiM [7] uses locally constrained bag-of-features, PatchCore [36] uses a memory bank and neighborhood-aware patch-level features, CFLOW and FastFlow [16,47] use normalizing flow [11,25], and US and RD4AD [5,9] use a knowledge distillation method [19] for anomaly detection. All rely on pretrained feature extractors without any adaptation to the domain of the current problem. These models may fail when a pretrained feature extractor cannot provide informative features. In this work, we utilise locally aware patch features, as proposed by [36], to improve the comparison of the input image and its reconstruction at inference time. We propose a method to transfer knowledge of the current domain of feature extractors used in the aforementioned models, achieving superior performance." }, { "figure_ref": [], "heading": "Reconstruction-based methods", "publication_ref": [ "b0", "b35", "b21", "b41", "b45" ], "table_ref": [], "text": "The initial frameworks for anomaly detection were developed based on the foundational concept that a generative model, trained on nominal samples, learns to accurately reconstruct nominal data while failing to reconstruct anomalies. Anomalous data typically deviate significantly from learned patterns leading to a poor reconstruction of anomalies at inference time. An early work [30] applied Variational Autoencoder (VAE) [26] to detect anomalies in skin disease images. However, reconstructions were blurry and anomalies weren't adequately removed. Various techniques have since been proposed, [3] use a perceptual loss based on structural similarity (SSIM) to improve learning. [39] deploy one generative model as a novelty detector connected end-to-end to a second network enhancing the inlier samples and distorted outliers. [34] use an adversarial autoencoder to effectively compute the likeli-hood of a sample generated by the inlier distribution. However, these methods are only capable of one-class classification and do not localise anomalies. Ganomaly [1] makes use of a conditional GAN [15,32], outperforming previous state-of-the-art models. [35,50] use a discriminative endto-end trainable surface anomaly paradigm for the detection and localisation of anomalies. These models rely on synthetic anomalies for training. Recently, denoising diffusion models have gained popularity for image, and audio generation [21,41]. In the medical domain, denoising diffusion models have been used to detect brain tumours [45]. 
AnoD-DPM [46] showed that these models outperform GANs for anomaly detection in the medical domain." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b21", "b41", "b20", "b42", "b43", "b9" ], "table_ref": [], "text": "Denoising diffusion models [21,41] are generative models, inspired by non-equilibrium thermodynamics, that aim to learn a distribution $p_\theta(x)$ closely resembling the data distribution $q(x)$. Diffusion models generate latent noisy variables $x_1, \dots, x_T$, having the same dimensions as the input data $x \sim q(x)$, by gradually adding noise $\epsilon \sim \mathcal{N}(0, I)$ at each time step $t$. This results in $x_T$ being complete noise, normally distributed with mean 0 and variance 1. Given a pre-defined variance schedule $\beta_1 < \beta_2 < \dots < \beta_T$ with $\beta_t \in (0, 1)$, the forward process over a series of $T$ steps is defined as follows:\n$q(x_{1:T} \mid x) = \prod_{t \geq 1} q(x_t \mid x_{t-1}), \qquad q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t; \sqrt{1-\beta_t}\, x_{t-1}, \beta_t I\big).$ (1)\nGiven the additivity property, merging multiple Gaussians results in a Gaussian distribution. Therefore, $x_t$ can be computed directly at any arbitrary time step $t$ by perturbing the input image $x$ as $q(x_t \mid x) = \mathcal{N}\big(x_t; \sqrt{\alpha_t}\, x, (1-\alpha_t) I\big)$, where $\alpha_t = \prod_{i=1}^{t}(1-\beta_i)$. Despite the ease with which noise is introduced to an image, undoing this perturbation is inherently challenging. This is referred to as the reverse or denoising process, defined in DDPM [20] by a parameterised function $p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1}; \mu_\theta(x_t, t), \beta_t I\big)$, where the mean is derived from the learnable noise estimator $\epsilon_\theta^{(t)}(x_t)$.\nDenoising Diffusion Implicit Models (DDIM) [42] accelerate DDPM by employing a non-Markovian sampling process: DDIM uses an implicit density model rather than the explicit one used in DDPM and suggests a sampling process $q_\sigma(x_{t-1} \mid x_t, x)$ defined by a new variance schedule. Based on $x_t = \sqrt{\alpha_t}\, x + \sqrt{1-\alpha_t}\, \epsilon$, one can predict the denoised observation $x_0$ as follows:\n$f_\theta^{(t)}(x_t) := \big(x_t - \sqrt{1-\alpha_t}\, \epsilon_\theta^{(t)}(x_t)\big)\big/\sqrt{\alpha_t}.$ (2)\nHaving defined the generative process $p_\theta^{(t)}(x_{t-1} \mid x_t) = q_\sigma\big(x_{t-1} \mid x_t, f_\theta^{(t)}(x_t)\big)$, new samples can be generated via\n$x_{t-1} = \sqrt{\alpha_{t-1}}\, f_\theta^{(t)}(x_t) + \sqrt{1-\alpha_{t-1}-\sigma_t^2}\; \epsilon_\theta^{(t)}(x_t) + \sigma_t \epsilon_t,$ (3)\nwhere $\sigma_t$ determines the stochasticity of the sampling process.\nThe connection between diffusion models and score matching [43] was established by [44], who derived a score-based function that estimates the deviation to be applied at each time step to obtain a less noisy image. It can be written as\n$\nabla_{x_t} \log p_\theta(x_t) = -\frac{1}{\sqrt{1-\alpha_t}}\, \epsilon_\theta^{(t)}(x_t),$ (4)\na property that [10] used to introduce a classifier guidance mechanism. Similarly, we leverage the score-based function to introduce our conditioned denoising process in the following section. Note that, in this paper, we refer to $x$ as the input image and $x_0$ as its reconstruction." }, { "figure_ref": [ "fig_2" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we detail our DDAD framework. We first present our proposed conditioning mechanism for reconstruction. We then explain how it is used to eradicate anomalies while preserving nominal information. We then present a robust approach to compare the reconstructed image with the input, resulting in an accurate anomaly localisation. An overview of DDAD is presented in Figure 2."
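Before the conditioned variant is introduced in the next subsection, the following minimal PyTorch sketch makes the background concrete: it implements the forward perturbation $x_t = \sqrt{\alpha_t}\,x + \sqrt{1-\alpha_t}\,\epsilon$ and one plain DDIM update (Eqs. 2-3). The calling convention of the noise estimator and the scalar schedule values are assumptions made for illustration; this is a sketch of the standard sampler, not the authors' code.

```python
import math
import torch

def perturb(x, alpha_t):
    """Forward process: jump directly to step t, x_t = sqrt(a_t)*x + sqrt(1-a_t)*eps."""
    eps = torch.randn_like(x)
    return math.sqrt(alpha_t) * x + math.sqrt(1.0 - alpha_t) * eps

def ddim_step(x_t, eps_pred, alpha_t, alpha_prev, sigma_t=0.0):
    """One DDIM denoising step (Eqs. 2-3), mapping x_t to x_{t-1}.

    alpha_t / alpha_prev are the cumulative products of (1 - beta_i) at steps t
    and t-1 (floats in (0, 1]); sigma_t = 0 gives the deterministic sampler.
    eps_pred is the noise predicted by a trained denoising U-Net at step t.
    """
    # Eq. 2: predicted denoised observation f_theta^(t)(x_t).
    f_t = (x_t - math.sqrt(1.0 - alpha_t) * eps_pred) / math.sqrt(alpha_t)
    # Eq. 3: deterministic direction towards x_{t-1} plus optional fresh noise.
    direction = math.sqrt(max(1.0 - alpha_prev - sigma_t ** 2, 0.0)) * eps_pred
    noise = sigma_t * torch.randn_like(x_t) if sigma_t > 0 else 0.0
    return math.sqrt(alpha_prev) * f_t + direction + noise
```

Iterating ddim_step from a partially noised image back to $t = 0$ yields a sample; the conditioned process described next replaces eps_pred with an adjusted estimate.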
}, { "figure_ref": [], "heading": "Conditioned Denoising Process for Reconstruction", "publication_ref": [], "table_ref": [], "text": "Given a target image y and a perturbed image x_t, our aim is to denoise x_t step by step so that the result closely resembles y. To this end, we condition the score function on the target image to obtain a posterior score function ∇_{x_t} log p_θ(x_t | y). Directly calculating this posterior score is challenging, however, since x_t and y do not share the same signal-to-noise ratio. To tackle this, we rely on the assumption that if the reconstructed image x_0 is similar to y, then adding the same noise that x_t carries to y yields x_t ∼ y_t. This helps to guide x_t towards y_t at each denoising step.
To compute y_t, we add ϵ_θ^{(t)}(x_t), the noise predicted by the trained diffusion model, to y. The condition is then modified by replacing y with y_t, resulting in ∇_{x_t} log p_θ(x_t | y_t) to guide the denoising process. Based on Bayes' rule, this decomposes as follows:
∇_{x_t} log p_θ(x_t | y_t) = ∇_{x_t} log p_θ(x_t) + ∇_{x_t} log p_θ(y_t | x_t).   (5)
The unconditional score term ∇_{x_t} log p_θ(x_t) can be calculated directly from Eq. 4. In many cases, calculating the conditional score (or likelihood) ∇_{x_t} log p_θ(y_t | x_t) is intractable. Nevertheless, having computed y_t allows this likelihood to be evaluated directly. Intuitively, the likelihood ∇_{x_t} log p_θ(y_t | x_t) can be viewed as a correction score for the deviation of x_t from y_t at each denoising step. Since both x_t and y_t carry the same noise, this deviation is present only at the image (signal) level. Consequently, the divergence can be calculated as y_t − x_t, and an adjusted noise term ε is obtained as follows:
ε = ϵ_θ^{(t)}(x_t) − w √(1 − α_t) (y_t − x_t),   (6)
where w controls the strength of the conditioning. Given ε, the new prediction f_θ^{(t)}(x_t) is calculated using Eq. 2. Finally, the less noisy image x_{t-1} is obtained via the denoising process:
x_{t-1} = √(α_{t-1}) f_θ^{(t)}(x_t) + √(1 − α_{t-1} − σ_t²) ε + σ_t ϵ_t.   (7)
Our reconstruction process is summarised in Algorithm 1.
Algorithm 1 Reconstruction Process
1: x_{T′} ← √(α_{T′}) x + √(1 − α_{T′}) ϵ_t
2: for all t = T′, ..., 1 do
3:   y_t ← √(α_t) y + √(1 − α_t) ϵ_θ^{(t)}(x_t)
4:   ε ← ϵ_θ^{(t)}(x_t) − w √(1 − α_t) (y_t − x_t)
5:   f_θ^{(t)}(x_t) ← (x_t − √(1 − α_t) ε) / √(α_t)
6:   x_{t-1} ← √(α_{t-1}) f_θ^{(t)}(x_t) + √(1 − α_{t-1} − σ_t²) ε + σ_t ϵ_t
7: end for
8: return x_0" }, { "figure_ref": [], "heading": "Reconstruction for Anomaly Detection", "publication_ref": [], "table_ref": [], "text": "For anomaly detection tasks, the target image y is set to the input image x. This enables the denoising process, which is conditioned on y, to generate an anomaly-free approximation of x. Since the model is trained only on nominal data, anomalous regions lie in low-probability-density regions of p_θ(x).
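For illustration, Algorithm 1 could be implemented roughly as below; eps_model and the alphas schedule are the same stand-ins as in the earlier background sketch, and the default values of T′, w and the choice of σ_t are ours rather than those of the released code.

import torch

def conditioned_reconstruction(x, y, eps_model, alphas, T_prime=250, w=3.0, eta=0.0):
    # Perturb the input to x_{T'} and denoise it back while guiding towards the target y.
    a = alphas[T_prime]
    x_t = a.sqrt() * x + (1.0 - a).sqrt() * torch.randn_like(x)
    for t in range(T_prime, 0, -1):
        a_t, a_prev = alphas[t], alphas[t - 1]
        eps = eps_model(x_t, t)
        y_t = a_t.sqrt() * y + (1.0 - a_t).sqrt() * eps                  # noisy target (line 3)
        eps_hat = eps - w * (1.0 - a_t).sqrt() * (y_t - x_t)             # adjusted noise (Eq. 6)
        f_t = (x_t - (1.0 - a_t).sqrt() * eps_hat) / a_t.sqrt()          # predicted x_0 (Eq. 2)
        sigma_t = eta * ((1.0 - a_prev) / (1.0 - a_t)).sqrt() * (1.0 - a_t / a_prev).sqrt()
        x_t = (a_prev.sqrt() * f_t                                       # denoising update (Eq. 7)
               + (1.0 - a_prev - sigma_t ** 2).clamp(min=0).sqrt() * eps_hat
               + sigma_t * torch.randn_like(x_t))
    return x_t  # the reconstruction x_0

Setting y = x, as done for anomaly detection, means the guidance continually pulls x_t back towards the nominal appearance of the input, whereas anomalous regions, lying in low-density areas of p_θ(x), receive little support from the learned denoiser.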
Therefore, during denoising, the reconstruction of anomalies falls behind the nominal part.\nOver an entire trajectory, earlier steps focus on the abstract picture of the image whereas later steps aim to reconstruct fine-grained details. Since anomalies mostly emerge at a fine level, the starting denoising time step can be set earlier than complete noise i.e. T ′ < T , where a sufficient amount of signal-to-noise ratio is present. Note that the model is trained on complete trajectories.\nWe label our model as DDAD-n, where n refers to the number of denoising iterations." }, { "figure_ref": [], "heading": "Anomaly Scoring", "publication_ref": [ "b11", "b6", "b17", "b48" ], "table_ref": [], "text": "In the simplest case, we can detect and localise anomalies via a pixel-wise comparison between the input and its reconstruction. However, comparing only pixel distances of two images may not capture all anomalies such as poked parts or dents, whereby visible colour variations are not present. Therefore, we additionally compute distances between image features extracted by deep neural networks to also capture perceptual similarity [12,52]. Features are sensitive to changes in edges and textures where a pixel-wise comparison may fail, but they are often robust against slight transformations. We discovered that employing both image and feature level comparisons yields the most precise anomaly localisation.\nGiven a reconstructed image x 0 and the target image y, we define a pixel-wise distance function D p and a featurewise distance function D f to derive the anomaly heatmap. D p is calculated based on the L 1 norm in pixel space. At the feature level, similar to PatchCore [36] and PaDiM [7], we utilise adaptive average pooling to spatially smooth each individual feature map. Features within a given patch are aggregated in a single representation, resulting in the same dimensionality as the input feature. Finally, a cosine similarity is utilised to define D f as:\nD f (x 0 , y) = j∈J (1 -cos(ϕ j (x 0 ), ϕ j (y))) ,(8)\nwhere ϕ [17,48] refers to a pretrained feature extractor and j ∈ J is the set of layers considered. We only use j ∈ {2, 3} to retain the generality of the used features [36]. Finally, we normalise the pixel-wise distance D p to share the same upper bound as the feature-wise distance D f . Consequently, the final anomaly score function is a combination of the pixel and the feature distance:\nD anomaly = v max(D f ) max(D p ) D p + D f ,(9)\nwhere v controls the importance of the pixel-wise distance." }, { "figure_ref": [], "heading": "Domain Adaptation", "publication_ref": [], "table_ref": [], "text": "In Section 4.3 we used a pretrained feature extractor for feature-wise comparison between an input image and its reconstruction. However, these networks are trained on Im-ageNet and do not adapt well to domain-specific characteristics of an anomaly detection task and a specific category. We propose a novel unsupervised domain adaptation technique by converging different extracted layers from two nearly identical images. This helps the networks become agnostic to nominal changes that may occur during reconstruction, at the same time learning the problem's domain.\nTo achieve this, we first sample a random image x from the training dataset and perturb it with noise to obtain x t . Similarly, we randomly select a target image y from the training dataset. Given a trained denoising model θ, a noisy image x t is denoised to x 0 to approximate y. 
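As a rough illustration of this pair-generation step (reusing the conditioned_reconstruction helper sketched earlier; the loader name and the assumption of unlabelled image batches are ours):

import torch

def make_adaptation_pairs(train_loader, eps_model, alphas, T_prime=250, w=3.0):
    # Build nearly identical (x_0, y) pairs from nominal images only.
    pairs = []
    with torch.no_grad():
        for x in train_loader:                      # batches of nominal training images
            y = x[torch.randperm(x.size(0))]        # random nominal targets from the same batch
            x0 = conditioned_reconstruction(x, y, eps_model, alphas, T_prime, w)
            pairs.append((x0, y))
    return pairs

Each resulting pair (x_0, y) should depict essentially the same nominal content, up to reconstruction noise.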
Features are then extracted from the reconstructed and target image, denoted as ϕ j (x 0 ) and ϕ j (y). With the assumption that x 0 ∼ y, their feature should be similar. Therefore, the network ϕ is fine-tuned by minimising the distance between extracted features. A loss function L Similarity , based on cosine similarity, is employed for each of the final activation layers of the j th spatial resolution block. This transfers the pretrained model ϕ to the domain-adapted network φ. Nevertheless, we observe that the generalisation of the network diminishes after several iterations while learning the patterns of the new dataset. To mitigate this, we incorporate a distillation loss from a frozen feature extractor ϕ which mirrors the state of the network ϕ prior to domain adaptation. This distillation loss safeguards the feature extractor from losing its generality during adaptation to the new domain. Consequently, the domain adaptation loss L DA can be expressed as follows:\nL DA = L Similarity (x 0 , y) + λ DL L DL (x 0 , y) = j∈J (1 -cos(ϕ j (x 0 ), ϕ j (y))) + λ DL j∈J 1 -cos(ϕ j (y), ϕ j (y)) + λ DL j∈J 1 -cos(ϕ j (x 0 ), ϕ j (x 0 )) ,(10)\nwhere λ DL determines the significance of distillation loss L DL . For our experiments, J is set as {1, 2, 3}. The resulting feature extractor is resilient to slight changes during reconstruction. In Appendix, Section 10.3, we highlight its role in making the model robust to nominal variation of the object and spurious anomalies in the background present in the reconstruction." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Evaluation Metrics", "publication_ref": [ "b55", "b22", "b4" ], "table_ref": [], "text": "We demonstrate the integrity of DDAD on three datasets: MVTec, VisA and MTD. Our model correctly classifies all samples in 11 out of 15 and 4 out of 12 categories in MVTec and VisA, respectively. The MVTec Anomaly Detection benchmark [4] is a widely known industrial dataset comprising 15 classes with 5 textures and 10 objects. Each category contains anomaly-free samples for training and various anomalous samples for testing ranging from small scratches to large missing components. We also evaluate our model on a new dataset called VisA [55]. This dataset is twice the size of MVTec comprising 9,621 normal and 1,200 anomalous high-resolution images. This dataset exhibits objects of complex structures placed in sporadic locations as well as multiple objects in one image. Anomalies include scratches, dents, colour spots, cracks, and structural defects. We also experimented on the Magnetic Tile Defects (MTD) dataset [22]. This dataset is a single-category dataset with 925 nominal training images and 5 sub-categories of different types of defects totalling 392 test images. We use 80% of defectfree images as the training set.\nFor MVTec and VisA datasets, we train the denoising network on images of size 256 × 256 and, for comparison, images are cropped to 224 × 224. No data augmentation is applied to any dataset, since augmentation transformations may masquerade as anomalies.\nWe assess the efficacy of our model by utilizing the Area Under Receiver Operator Characteristics (AUROC) metric, both at the image and pixel level. For image AUROC, we determine the maximum anomaly score across pixels and assign it as the overall anomaly score of the image. A oneclass classification is then used to calculate the image AU-ROC for anomaly detection. 
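For clarity, the image-level evaluation described here amounts to the following small sketch; the names are placeholders, the anomaly maps are the D_anomaly heatmaps of Eq. 9, and labels are 1 for anomalous test images.

import numpy as np
from sklearn.metrics import roc_auc_score

def image_level_auroc(anomaly_maps, labels):
    # Reduce each heatmap to a single score via its maximum, then score the one-class split.
    image_scores = np.array([np.asarray(m).max() for m in anomaly_maps])
    return roc_auc_score(np.asarray(labels), image_scores)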
For pixel level, in addition to pixel AUROC, we employ the Per Region Overlap (PRO) metric [5] for a more comprehensive evaluation of localisation performance. The PRO score treats anomaly regions of varying sizes equally, making it a more robust metric than pixel AUROC." }, { "figure_ref": [], "heading": "Experimental Setting", "publication_ref": [ "b9" ], "table_ref": [], "text": "To train our denoising model, we employ the modified UNet framework introduced in [10]. For our compact model DDAD-S, we reduced the base channels from 64 to 32 and the number of attention layers from 4 to 2. While DDAD comprises 32 million parameters, DDAD-S consists of only 8 million parameters. This reduction not only accelerates training and inference but also maintains comparable performance to our larger model. Consequently, DDAD-S proves to be a more viable choice for edge devices within a resource-constrained production line. Complete implementation details are provided in Appendix, Section 7. Furthermore, the selection of values of the two hyperparameters w and v are presented in Appendix, Section 8. Note that although the model is trained using T = 1000, we empirically identified T ′ = 250 as the optimal noise time step. This choice strikes a favourable balance between signal and noise in the context of our study." }, { "figure_ref": [ "fig_4", "fig_5", "fig_3" ], "heading": "Experimental Results and Discussions", "publication_ref": [ "b6" ], "table_ref": [ "tab_0", "tab_1", "tab_3" ], "text": "Anomaly detection results on MVTec, VisA and MTD datasets are shown in Tables 1,2, and 3 respectively. Our proposed framework DDAD outperforms all existing approaches, not only the reconstruction-based but also representation-based methods, achieving the highest Image AUROC in all datasets. The proposed use of diffusion models not only enables anomaly detection and localisation but also the reconstruction of anomalies, based on generative modelling, which has been a longstanding idea, having limited success in anomaly detection.\nIn Figure 4, we demonstrate the impact of each module of our framework on the MVTec dataset. Ablations with VisA are added to the Appendix, Section 9. We have shown plain diffusion models alone are not sufficient to lift reconstruction-based methods up to a competitive level. We have observed that applying the conditioning mechanism raises anomaly detection and localisation by 6.7% and 4.2%, respectively, in comparison to an unconditional denoising process, based on pixel-wise comparison. This demonstrates the ability of our guidance to increase the quality of reconstruction. Additionally, the use of diffusion-based domain adaptation adds 8.2% and 4.8% to the feature-wise comparison, and the combination of the pixel and feature level raises the final performance by 1.2% and 0.7% on anomaly detection and localisation respectively. Comprehensive analysis justification for the use of both pixel and feature comparisons is discussed in Appendix, Section 12. DDAD performance on the PRO metric is presented in Table 4. DDAD achieves SOTA results on VisA and competitive results to PaDiM [7] and PatchCore [36] in MVTec. The inferior pixel-level performance compared to image-level performance can be attributed to the initial denoising point T ′ = 250, which presents a greater challenge to reconstruct large missing components (such as some samples in the transistor category). 
However, starting from earlier time steps introduces ambiguities in the reconstruction and leads to increased inference time. Some failure modes of the model are presented in the Appendix, Section 13.1.\nFigures 1 and5 present the qualitative results obtained for reconstruction and anomaly segmentation. Note that anomalies are localised with remarkable accuracy in various samples of the VisA and MVTec datasets. The model's reconstruction outputs are particularly impressive, as they not only segment anomalous regions but also transform them into their nominal counterparts. For instance, the model regenerates missing links on transistors, erases blemishes on circuit boards, and recreates missing components on PCBs. These reconstructions hold significant value in industrial settings, as they provide valuable insights to workers, enabling them to identify defects and potentially resolve them. Figure 3 also qualitatively analyses the impact of conditioning as the hyperparameter w increases, emphasising that higher values of w lead to more pronounced conditioning in the reconstructions. Furthermore, this figure also includes a qualitative ablation of the feature-wise and pixel-wise comparisons. More detailed quantitative and qualitative results are included in the Appendix." }, { "figure_ref": [], "heading": "Inference Time", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "The trade-off between accuracy and computation time on the VisA dataset is depicted in Table 5. Among the tested approaches, DDAD-10 stands out by utilizing 10 iterations and delivering the most favourable results. However, DDAD-5 becomes an appealing option due to its faster inference time, which holds significant importance, especially in industrial applications. Despite the diffusion model's reputation of slow inference, our approach remains highly competitive with various representation-based models. Our unique conditioning mechanism enables competitive results with fewer denoising steps. This trend holds even with a compressed denoising network (DDAD-S). Our complete DDAD model requires 0.79GB of memory during inference, while DDAD-S only needs 0.59GB including the feature extractor's memory usage. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have introduced Denoising Diffusion Anomaly Detection (DDAD), a new reconstruction-based approach for de-tecting anomalies. Our model leverages the impressive generative capabilities of recent diffusion models to perform anomaly detection. We design a conditioned denoising process to generate an anomaly-free image that closely resembles the target image. Moreover, we propose an image comparison method based on pixel and feature matching for accurate anomaly localisation. Finally, we introduced a novel technique that utilises our denoising model to adapt a pretrained neural network to the problem's domain for expressive feature extraction. DDAD achieves state-of-the-art results on benchmark datasets, namely MVTec, VisA, and MTD, despite being a reconstruction-based method.\nLimitations and future work. In this work, we demonstrate that our contributions enhance inference speeds while maintaining equivalent anomaly detection performance. Nevertheless, we believe there is still room for improving anomaly localisation. Interventions such as dynamically selecting the denoising starting points or abstracting to a latent space for training are promising avenues to explore in future work." 
}, { "figure_ref": [], "heading": "Anomaly Detection with Conditioned Denoising Diffusion Models", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b48" ], "table_ref": [], "text": "DDAD is implemented in Python 3.8 and PyTorch 1.13. The denoising model undergoes training using the Adam optimiser, with a learning rate of 0.0003 and weight decay of 0.05. Fine-tuning of the feature extractor uses an AdamW optimiser with a learning rate of 0.0001. During fine-tuning, each batch is divided into two mini-batches, each of size 16 or 8. One mini-batch consists of input images, while the other comprises target images. The conditioning control parameter is set to w = 3 for fine-tuning the feature extractor. The balance between pixel-wise and feature-wise distance is established as v = 1 for MVTec and v = 7 for VisA. To smooth the anomaly heatmaps, a Gaussian filter with σ g = 4 is applied. All experiments are executed on a GeForce RTX 3090. The denoising network requires 4 to 6 hours of training, depending on the number of samples for each category.\nWe obtained the best results using WideResNet101 [48] as the feature extractor. The stochasticity parameter of σ for the denoising process is set equal to 1. Empirically, we achieved similar results in employing a denoising process that is either probabilistic or implicit. Nevertheless, it is essential to note that changing this hyperparameter affects reconstruction, and thus requires additional hyperparameter tuning." }, { "figure_ref": [], "heading": "MVTec", "publication_ref": [], "table_ref": [ "tab_6", "tab_5" ], "text": "In table 9 and table 10, the settings used to achieve the best result on DDAD and DDAD-S are demonstrated. We have trained DDAD and DDAD-S with a batch size of 32 and 16 respectively. For both models, the feature extracted is fine-tuned and the model is tested on a batch size of 16. Hyperparameter v is set to 1 to balance pixel and feature comparison. Results on the PRO metric and comparison with the other approaches are depicted in Table 7. Results on different denoising steps are presented in Table 6. We have observed setting λ DL = 0.1 for the MVTec dataset leads to the best result." }, { "figure_ref": [], "heading": "VisA", "publication_ref": [], "table_ref": [ "tab_0", "tab_8" ], "text": "Table 11 showcases the configuration employed to attain optimal results for DDAD. DDAD has undergone training and testing with a batch size of 32. For the categories mac-aroni2 and pcb1, we achieved better results with a batch size of 16 during fine-tuning. The hyperparameter v is established at 7; however, setting v to 1.5 for cashew yields a more precise detection. Results on the PRO metric are depicted in Table 8. We have observed setting λ DL = 0.01 for the VisA dataset leads to the best result." }, { "figure_ref": [], "heading": "Hyperparameters", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss the role of each hyperparameter introduced in the paper and how they solely affect the quality of reconstruction or precision of the localisation heatmap." 
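Before going through them individually, the settings reported above can be summarised in a single configuration sketch; the field names are ours, and the values are the defaults quoted in this supplementary material (category-specific overrides, e.g. for w, still apply).

from dataclasses import dataclass

@dataclass
class DDADConfig:
    image_size: int = 256        # training resolution (224 crops for comparison)
    T: int = 1000                # noise steps used for training
    T_prime: int = 250           # starting step of the denoising trajectory
    w: float = 3.0               # conditioning strength (varies per category)
    v: float = 1.0               # pixel/feature balance: 1 for MVTec, 7 for VisA
    lambda_dl: float = 0.1       # distillation weight: 0.1 for MVTec, 0.01 for VisA
    sigma_g: float = 4.0         # Gaussian smoothing of the anomaly heatmap
    lr_unet: float = 3e-4        # Adam, weight decay 0.05
    lr_feature: float = 1e-4     # AdamW for domain adaptation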
}, { "figure_ref": [ "fig_4" ], "heading": "Conditioning hyperparameter w", "publication_ref": [ "b55" ], "table_ref": [ "tab_10" ], "text": "Table 12 presents quantitative results on the impact of the hyperparameter w on reconstruction quality, illustrating how the conditioning mechanism reduces misclassification and mislocalisation in 13 out of 15 categories. To ensure a fair comparison, we exclusively used the pixel-wise distance to assess reconstruction quality on the MVTec dataset. As shown in Figure 4 (left), conditioning improves anomaly detection and localisation by 6.7% and 4.2%, respectively. Notably, in some categories, such as pill and tile, the conditioning mechanism improves reconstruction by up to 30%. The same improvement is observed in Figure 8 when our conditioning mechanism is applied in the denoising process. Figure 6 qualitatively illustrates the impact of conditioning on reconstruction.
By introducing the conditioning mechanism, we reconstruct anomalous regions while effectively preserving the pattern of nominal regions. In the provided example, the first row displays a sample from the pill category of the MVTec dataset [4], where red dots are often randomly distributed. A plain diffusion model fails to reconstruct the dots accurately. By increasing the conditioning parameter w, however, the model successfully reconstructs these red dots while simultaneously eliminating the anomaly (the yellow colour on the top left side of the pill) and replacing it with the nominal pattern.
The second row shows an example of a cable: the plain diffusion model correctly changed the colour of the top grey cable to green but, unlike the conditioned reconstruction, it failed to reconstruct the individual wires within the cable. The third row shows a printed part on the capsule, indicated by a red box, that is not reconstructed by a plain diffusion model; when conditioning is applied, the printed part is restored to its original form.
In the case of the hazelnut, the plain diffusion model produces an incorrectly rotated reconstruction. When conditioning is applied, the rotation is corrected and the hazelnut is reconstructed in the right orientation. Additionally, the rays on the hazelnut are reconstructed similarly to the input image, maintaining their original appearance. The last row showcases an example from the VisA dataset [55]. After the reconstruction process, certain normal parts, highlighted by the red boxes, are eliminated. This loss of information is rectified by conditioning the model on the input image, allowing the model to reconstruct these areas accurately. The conditioning mechanism thus prevents such alterations from being erroneously flagged as anomalous patterns, ensuring a precise reconstruction." }, { "figure_ref": [], "heading": "Hyperparameter v", "publication_ref": [], "table_ref": [], "text": "Tables 13 and 14 detail the influence of the hyperparameter v on the combination of pixel-wise and feature-wise comparisons.
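As a reminder of how v enters the final score, a compact sketch of the combination in Eq. 9 is given below; the feats callable stands in for the domain-adapted feature extractor returning layer activations, and the 3×3 average pooling is our approximation of the adaptive pooling used for patch smoothing.

import torch
import torch.nn.functional as F

def anomaly_map(x0, y, feats, v=1.0):
    size = x0.shape[-2:]
    d_p = (x0 - y).abs().mean(dim=1, keepdim=True)                 # pixel-wise distance D_p
    fx, fy = feats(x0), feats(y)                                   # dict: layer index -> feature map
    d_f = torch.zeros_like(d_p)
    for j in (2, 3):
        a = F.avg_pool2d(fx[j], 3, stride=1, padding=1)            # local patch smoothing
        b = F.avg_pool2d(fy[j], 3, stride=1, padding=1)
        sim = F.cosine_similarity(a, b, dim=1).unsqueeze(1)
        d_f = d_f + F.interpolate(1.0 - sim, size=size, mode="bilinear", align_corners=False)  # D_f (Eq. 8)
    return v * (d_f.max() / d_p.max()) * d_p + d_f                 # D_anomaly (Eq. 9)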
Most categories demonstrate that minor adjustments to this hyperparameter do not yield significant changes. This observation suggests that the combination technique accommodates a broad spectrum of anomalies and is not highly sensitive to the v hyperparameter. Nevertheless, we fine-tuned this hyperparameter to optimise results." }, { "figure_ref": [], "heading": "Ablation on VisA", "publication_ref": [], "table_ref": [], "text": "As demonstrated in Section 5.3, our conditioning approach significantly enhances the model's performance compared to plain diffusion models. This improvement on MVTec [4] is also evident in Figure 8, where the image AUROC, pixel AUROC, and PRO metrics have increased by 7.1%, 2.9%, and 6.0%, respectively, using pixel-wise comparison. While pixel-wise comparison alone achieves promising results of 94.1%, the overall performance increases to 98.9% after combining it with feature-wise comparison. We observed that a pretrained feature extractor performs poorly in feature-wise comparison. However, these results have significantly improved after domain adaptation " }, { "figure_ref": [ "fig_6" ], "heading": "Distillation loss to not forget", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "As quantitatively illustrated in Table 19, undertaking domain adaptation without incorporating a distillation loss leads to the feature extractor erasing its prior knowledge.\nIt is crucial to retain the pretrained information during the transition to a new domain, as the feature extractor's capability to discern anomalous features is rooted in its training on extensive data on ImageNet. We exemplify this phenomenon with Pill and Screw categories in Figure 7. The figure showcases how the introduction of distillation loss prevents AUROC deterioration over epochs, indicating that the feature extractor adapts to the new domain while preserving its pretrained knowledge. In the absence of distillation loss, the feature extractor begins to lose its generality, a critical aspect for extracting anomalous features. " }, { "figure_ref": [], "heading": "Robustness to anomalies on the background", "publication_ref": [], "table_ref": [], "text": "In industrial and production scenarios, a significant challenge often involves dealing with anomalies, such as dust or environmental changes in the background during photography. In this section, we highlight the robustness of the domain-adapted feature extractor to such spurious patterns. As depicted in Figure 9, a pretrained feature extractor erroneously identifies normal background elements, indicated by the blue boxes, as anomalies. However, after domain adaptation, the feature extractor becomes resilient, no longer misidentifying or mislocating these elements. In the first three samples, showcasing PCBs, not only are anomalies mislocalised, but the images are also misclassified." }, { "figure_ref": [], "heading": "Comparative Analysis of Present Diffusion-Based Anomaly Detection Models", "publication_ref": [ "b51", "b40", "b55" ], "table_ref": [ "tab_5" ], "text": "In this section, we compare our model with similar approaches that utilise denoising diffusion models for anomaly detection. We showcase a unique aspect of our architecture that sets it apart from others and demonstrates superior performance. AnoDDPM [46] demonstrated that starting from a fulllength Markovian chain is not imperative. 
Additionally, they demonstrated that a multi-scale simplex noise leads to better reconstruction. However, substituting Gaussian noise with simplex noise slows down inference: sampling simplex noise is typically O(n²) per sample, with the exact cost varying with implementation details and dimensionality, whereas Gaussian noise generation with conventional methods is commonly considered constant time, O(1), per sample. To avoid this replacement, we introduced a conditioning mechanism that enables the denoising process to start from higher time steps. This allows the reconstruction of components situated in low-density regions of the distribution while preserving the nominal part of the image.
DiffusionAD [51], developed concurrently with this work, employs two sub-networks for denoising and segmentation, inspired by DRAEM [50], showcasing the success of diffusion models over VAEs in anomaly detection. While a single denoising step accelerates the process, it makes the model akin to a VAE, moving directly from noise to signal, with the distinction that, in this case, the starting point is a certain noise-to-signal ratio. Additionally, DiffusionAD relies on external synthetic anomalies, potentially decreasing robustness to unseen anomalies. According to the published results, DDAD outperforms it by 1.1% on the image AUROC metric for the VisA dataset; results on pixel AUROC are not published.
Score-based perturbation resilience [40] formulates the problem from a geometric perspective, based on the assumption that samples deviating from the manifold of normal data cannot be restored in the same way as normal samples; hence, the gradient of the log-likelihood identifies anomalies. Unlike DiffusionAD and DRAEM, it does not rely on any external data, making it robust to a wide range of anomalies. However, this approach fails to outperform representation-based models in both anomaly segmentation and localisation. According to the results, DDAD outperforms it by 2.1% and 0.7% on the image AUROC and pixel AUROC metrics for the MVTec dataset. Lu et al. [29] leverage the KL divergence between the posterior and estimated distributions as the pixel-level anomaly score, with an MSE error on feature reconstruction serving as a feature-level score. This model relies on a pretrained feature extractor, which may not be adapted to the domain of the problem, and its outcomes are not competitive with representation-based models. DDAD outperforms it by 1.4% on the pixel AUROC metric for the MVTec dataset; results on image AUROC are not published.
Figure 6. Some qualitative results, showcasing the insufficiency of plain diffusion models for more accurate anomaly detection.
Table 16. Impact of the domain adaptation on VisA [55] only when compared in feature-wise distance. AUROC metric is in the format of (ImageAUROC, PixelAUROC)
To avoid reliance on external resources, we introduce a
Notably, the aforementioned papers did not benchmark on both MVTec and VisA, nor were they evaluated based on all three metrics: Image AUROC, Pixel AUROC, and PRO. In this paper, we demonstrate the robustness of our model through a comprehensive analysis of both MVTec and VisA, evaluating all three metrics. We show that DDAD not only outperforms reconstruction-based models but also representation-based models.\n12. The Importance of Combining Pixel-wise and Feature-wise Comparison\nIn Figure 10, we present six examples from MVTec (top three rows) and six examples from VisA (last three rows), where either pixel-wise or feature-wise comparison proves ineffective. In the last row, the initial PCB example fails in both scenarios. The feature-wise comparison identifies two anomalous regions, whereas the pixel-wise comparison does not identify any region as an anomaly. Intriguingly, after combining the approaches, the score of the region previously misidentified as an anomaly decreases. This region is now segmented as normal after the combination." }, { "figure_ref": [], "heading": "Additional Qualitative Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Mislocalisation", "publication_ref": [], "table_ref": [], "text": "Although our model achieved a high AUROC for anomaly detection, it faced challenges in accurately localising extreme rotations or figure alterations. For example, as depicted in Figure 11, when starting from time step 250, the model struggled to reconstruct these substantial changes. Conversely, beginning from larger time steps make the reconstruction process difficult and slow. Additionally, it is important to note that our conditioning approach aims to preserve the overall structure of the reconstructed image similar to the input image. However, in cases where there are drastic changes such as rotations or figure alterations, the conditioning mechanism may lead to mislocalisation." }, { "figure_ref": [], "heading": "Qualitative results on MTD", "publication_ref": [ "b55", "b22" ], "table_ref": [], "text": "To showcase the versatility of our model beyond the MVTec [4] and VisA [55] datasets, we also evaluated DDAD performance on an entirely different dataset called MTD [22]. This evaluation allows us to demonstrate the potential of our model across diverse datasets. In Figure 12, we present qualitative results illustrating the performance of our DDAD approach on the MTD dataset. " } ]
2023-12-03
[ { "authors": "Samet Akcay; Toby P Amir Atapour-Abarghouei; Breckon", "journal": "Springer", "ref_id": "b0", "title": "Ganomaly: Semi-supervised anomaly detection via adversarial training", "year": "2018" }, { "authors": "Liron Bergman; Niv Cohen; Yedid Hoshen", "journal": "", "ref_id": "b1", "title": "Deep nearest neighbor anomaly detection", "year": "2020" }, { "authors": "Paul Bergmann; Sindy Löwe; Michael Fauser; David Sattlegger; Carsten Steger", "journal": "", "ref_id": "b2", "title": "Improving unsupervised defect segmentation by applying structural similarity to autoencoders", "year": "2018" }, { "authors": "Paul Bergmann; Michael Fauser; David Sattlegger; Carsten Steger", "journal": "", "ref_id": "b3", "title": "Mvtec ad-a comprehensive real-world dataset for unsupervised anomaly detection", "year": "2004" }, { "authors": "Paul Bergmann; Michael Fauser; David Sattlegger; Carsten Steger", "journal": "", "ref_id": "b4", "title": "Uninformed students: Student-teacher anomaly detection with discriminative latent embeddings", "year": "2020" }, { "authors": "Niv Cohen; Yedid Hoshen", "journal": "", "ref_id": "b5", "title": "Sub-image anomaly detection with deep pyramid correspondences", "year": "2020" }, { "authors": "Thomas Defard; Aleksandr Setkov; Angelique Loesch; Romaric Audigier", "journal": "Springer", "ref_id": "b6", "title": "Padim: a patch distribution modeling framework for anomaly detection and localization", "year": "2021" }, { "authors": "David Dehaene; Pierre Eline", "journal": "", "ref_id": "b7", "title": "Anomaly localization by modeling perceptual features", "year": "2020" }, { "authors": "Hanqiu Deng; Xingyu Li", "journal": "", "ref_id": "b8", "title": "Anomaly detection via reverse distillation from one-class embedding", "year": "2022" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b9", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Laurent Dinh; Jascha Sohl-Dickstein; Samy Bengio", "journal": "", "ref_id": "b10", "title": "Density estimation using real NVP", "year": "2017" }, { "authors": "Alexey Dosovitskiy; Thomas Brox", "journal": "Advances in neural information processing systems", "ref_id": "b11", "title": "Generating images with perceptual similarity metrics based on deep networks", "year": "2016" }, { "authors": "Spyros Gidaris; Praveer Singh; Nikos Komodakis", "journal": "", "ref_id": "b12", "title": "Unsupervised representation learning by predicting image rotations", "year": "2018" }, { "authors": "Izhak Golan; Ran El-Yaniv", "journal": "", "ref_id": "b13", "title": "Deep anomaly detection using geometric transformations", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b14", "title": "", "year": "2018" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "Communications of the ACM", "ref_id": "b15", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "Denis Gudovskiy; Shun Ishizaka; Kazuki Kozuka", "journal": "", "ref_id": "b16", "title": "Cflow-ad: Real-time unsupervised anomaly detection with localization via conditional normalizing flows", "year": "2022" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b17", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Dan Hendrycks; Mantas Mazeika; 
Saurav Kadavath; Dawn Song", "journal": "Advances in neural information processing systems", "ref_id": "b18", "title": "Using self-supervised learning can improve model robustness and uncertainty", "year": "2019" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b19", "title": "Distilling the knowledge in a neural network", "year": "" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b20", "title": "Denoising diffusion probabilistic models", "year": "" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Yibin Huang; Congying Qiu; Kui Yuan", "journal": "The Visual Computer", "ref_id": "b22", "title": "Surface defect saliency of magnetic tile", "year": "2020" }, { "authors": "Jeremy Irvin; Pranav Rajpurkar; Michael Ko; Yifan Yu; Silviana Ciurea-Ilcus; Chris Chute; Henrik Marklund; Behzad Haghgoo; Robyn Ball; Katie Shpanskaya", "journal": "", "ref_id": "b23", "title": "Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison", "year": "2019" }, { "authors": "Jongheon Jeong; Yang Zou; Taewan Kim; Dongqing Zhang; Avinash Ravichandran; Onkar Dabeer", "journal": "", "ref_id": "b24", "title": "Winclip: Zero-/few-shot anomaly classification and segmentation", "year": "2023" }, { "authors": "P Durk; Prafulla Kingma; Dhariwal", "journal": "Advances in neural information processing systems", "ref_id": "b25", "title": "Glow: Generative flow with invertible 1x1 convolutions", "year": "2018" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b26", "title": "Auto-encoding variational bayes", "year": "2013" }, { "authors": "Wen Liu; Weixin Luo; Dongze Lian; Shenghua Gao", "journal": "", "ref_id": "b27", "title": "Future frame prediction for anomaly detection-a new baseline", "year": "2018" }, { "authors": "Zhikang Liu; Yiming Zhou; Yuansheng Xu; Zilei Wang", "journal": "", "ref_id": "b28", "title": "Simplenet: A simple network for image anomaly detection and localization", "year": "2023" }, { "authors": "Fanbin Lu; Xufeng Yao; Chi-Wing Fu; Jiaya Jia", "journal": "", "ref_id": "b29", "title": "Removing anomalies as noises for industrial defect localization", "year": "2023" }, { "authors": "Yuchen Lu; Peng Xu", "journal": "", "ref_id": "b30", "title": "Anomaly detection for skin disease images using variational autoencoder", "year": "2018" }, { "authors": "Michael Mathieu; Camille Couprie; Yann Lecun", "journal": "", "ref_id": "b31", "title": "Deep multi-scale video prediction beyond mean square error", "year": "2015" }, { "authors": "Mehdi Mirza; Simon Osindero", "journal": "", "ref_id": "b32", "title": "Conditional generative adversarial nets", "year": "2014" }, { "authors": "Mehdi Noroozi; Paolo Favaro", "journal": "Springer", "ref_id": "b33", "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "year": "2016" }, { "authors": "Stanislav Pidhorskyi; Ranya Almohsen; Gianfranco Doretto", "journal": "Advances in neural information processing systems", "ref_id": "b34", "title": "Generative probabilistic novelty detection with adversarial autoencoders", "year": "2018" }, { "authors": "Nicolae-Cȃtȃlin Ristea; Neelu Madan; Tudor Radu; Kamal Ionescu; Fahad Nasrollahi; Thomas B Shahbaz Khan; Mubarak Moeslund; Shah", "journal": "", "ref_id": "b35", "title": "Self-supervised predictive 
convolutional attentive block for anomaly detection", "year": "2022" }, { "authors": "Karsten Roth; Latha Pemula; Joaquin Zepeda; Bernhard Schölkopf; Thomas Brox; Peter Gehler", "journal": "", "ref_id": "b36", "title": "Towards total recall in industrial anomaly detection", "year": "2022" }, { "authors": "Marco Rudolph; Bastian Wandt; Bodo Rosenhahn", "journal": "", "ref_id": "b37", "title": "Same same but differnet: Semi-supervised defect detection with normalizing flows", "year": "2021" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev Satheesh; Sean Ma; Zhiheng Huang; Andrej Karpathy; Aditya Khosla; Michael Bernstein", "journal": "International journal of computer vision", "ref_id": "b38", "title": "Imagenet large scale visual recognition challenge", "year": "2015" }, { "authors": "Mohammad Sabokrou; Mohammad Khalooei; Mahmood Fathy; Ehsan Adeli", "journal": "", "ref_id": "b39", "title": "Adversarially learned one-class classifier for novelty detection", "year": "2018" }, { "authors": "Woosang Shin; Jonghyeon Lee; Taehan Lee; Sangmoon Lee; Jong Pil; Yun ", "journal": "", "ref_id": "b40", "title": "Anomaly detection using score-based perturbation resilience", "year": "2023" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "PMLR", "ref_id": "b41", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b42", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Yang Song; Stefano Ermon", "journal": "Advances in neural information processing systems", "ref_id": "b43", "title": "Generative modeling by estimating gradients of the data distribution", "year": "2019" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; P Diederik; Abhishek Kingma; Stefano Kumar; Ben Ermon; Poole", "journal": "", "ref_id": "b44", "title": "Score-based generative modeling through stochastic differential equations", "year": "2021" }, { "authors": "Julia Wolleb; Florentin Bieder; Robin Sandkühler; Philippe C Cattin", "journal": "Springer", "ref_id": "b45", "title": "Diffusion models for medical anomaly detection", "year": "2022" }, { "authors": "Julian Wyatt; Adam Leach; Sebastian M Schmon; Chris G Willcocks", "journal": "", "ref_id": "b46", "title": "Anoddpm: Anomaly detection with denoising diffusion probabilistic models using simplex noise", "year": "2022" }, { "authors": "Jiawei Yu; Ye Zheng; Xiang Wang; Wei Li; Yushuang Wu; Rui Zhao; Liwei Wu", "journal": "", "ref_id": "b47", "title": "Fastflow: Unsupervised anomaly detection and localization via 2d normalizing flows", "year": "2021" }, { "authors": "Sergey Zagoruyko; Nikos Komodakis", "journal": "", "ref_id": "b48", "title": "Wide residual networks", "year": "2016" }, { "authors": "Vitjan Zavrtanik; Matej Kristan; Danijel Skočaj", "journal": "Pattern Recognition", "ref_id": "b49", "title": "Reconstruction by inpainting for visual anomaly detection", "year": "2021" }, { "authors": "Vitjan Zavrtanik; Matej Kristan; Danijel Skočaj", "journal": "", "ref_id": "b50", "title": "Draema discriminatively trained reconstruction embedding for surface anomaly detection", "year": "2021" }, { "authors": "Hui Zhang; Zheng Wang; Zuxuan Wu; Yu-Gang Jiang", "journal": "", "ref_id": "b51", "title": "Diffusionad: Denoising diffusion for anomaly detection", "year": "2023" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver 
Wang", "journal": "", "ref_id": "b52", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Ying Zhao", "journal": "", "ref_id": "b53", "title": "Omnial: A unified cnn framework for unsupervised anomaly localization", "year": "2023" }, { "authors": "David Zimmerer; Jens Petersen; Gregor Köhler; Paul Jäger; Peter Full; Klaus Maier-Hein; Tobias Roß; Tim Adler; Annika Reinke; Lena Maier-Hein", "journal": "", "ref_id": "b54", "title": "Medical out-ofdistribution analysis challenge", "year": "2022" }, { "authors": "Yang Zou; Jongheon Jeong; Latha Pemula; Dongqing Zhang; Onkar Dabeer", "journal": "Springer", "ref_id": "b55", "title": "Spot-the-difference self-supervised pretraining for anomaly detection and segmentation", "year": "2006" } ]
[ { "formula_coordinates": [ 3, 85.25, 407.16, 201.12, 38.55 ], "formula_id": "formula_0", "formula_text": "q(x 1:T |x) = (t≥1) q(x t |x t-1 ), q(x t |x t-1 ) = N (x t ; 1 -β t x t-1 , β t I).(1)" }, { "formula_coordinates": [ 3, 50.11, 505.25, 84.67, 14.11 ], "formula_id": "formula_1", "formula_text": "α t = t i=1 (1 -β i )." }, { "formula_coordinates": [ 3, 50.11, 556.01, 162.33, 9.68 ], "formula_id": "formula_2", "formula_text": "p θ (x t-1 |x t ) = N (x t-1 ; µ θ (x t , t), β t I)" }, { "formula_coordinates": [ 3, 81.42, 696.22, 204.95, 19.14 ], "formula_id": "formula_3", "formula_text": "f (t) θ (x t ) := (x t - √ 1 -α t .ϵ (t) θ (x t ))/ √ α t .(2)" }, { "formula_coordinates": [ 3, 308.86, 72.47, 236.25, 64.13 ], "formula_id": "formula_4", "formula_text": "(t) θ (x t-1 |x t ) = q σ (x t-1 |x t , f (t) θ (x t )), accordingly via x t-1 = √ α t-1 f (t) θ (x t )+ 1 -α t-1 -σ 2 t .ϵ (t) θ (x t )+σ t ϵ t ,(3)" }, { "formula_coordinates": [ 3, 352.48, 226.38, 192.64, 23 ], "formula_id": "formula_5", "formula_text": "∇ xt log p θ (x t ) = - 1 √ 1 -α ϵ (t) θ (x t ),(4)" }, { "formula_coordinates": [ 3, 310.73, 668.33, 232.51, 9.65 ], "formula_id": "formula_6", "formula_text": "∇ xt log p θ (x t |y t ) = ∇ xt log p θ (x t ) + ∇ xt log p θ (y t |x t )." }, { "formula_coordinates": [ 4, 95.76, 417.79, 190.6, 19.14 ], "formula_id": "formula_7", "formula_text": "ε = ϵ (t) θ (x t ) -w √ 1 -α t (y t -x t ),(6)" }, { "formula_coordinates": [ 4, 56.32, 501.02, 230.04, 17.66 ], "formula_id": "formula_8", "formula_text": "x t-1 = √ α t-1 f (t) θ (x t ) + 1 -α t-1 -σ 2 t ε + σ t ϵ t .(7)" }, { "formula_coordinates": [ 4, 55.87, 572.83, 165.72, 98.9 ], "formula_id": "formula_9", "formula_text": "1: x T ′ ← √ α T ′ x + √ 1 -α T ′ ϵ t 2: for all t = T ′ , ..., 1 do 3: y t ← √ α t y + √ 1 -α t ϵ (t) θ (x t ) 4: ε ← ϵ (t) θ (x t ) -w √ 1 -α t (y t -x t ) 5: f (t) θ (x t ) ← (x t - √ 1 -α t .ε)/ √ α t 6: x t-1 ← √ α t-1 f (t) θ (x t ) 7:" }, { "formula_coordinates": [ 5, 78.75, 195.28, 207.62, 20.09 ], "formula_id": "formula_10", "formula_text": "D f (x 0 , y) = j∈J (1 -cos(ϕ j (x 0 ), ϕ j (y))) ,(8)" }, { "formula_coordinates": [ 5, 89.1, 320.6, 197.26, 23.23 ], "formula_id": "formula_11", "formula_text": "D anomaly = v max(D f ) max(D p ) D p + D f ,(9)" }, { "formula_coordinates": [ 5, 324.31, 153.82, 220.8, 95.53 ], "formula_id": "formula_12", "formula_text": "L DA = L Similarity (x 0 , y) + λ DL L DL (x 0 , y) = j∈J (1 -cos(ϕ j (x 0 ), ϕ j (y))) + λ DL j∈J 1 -cos(ϕ j (y), ϕ j (y)) + λ DL j∈J 1 -cos(ϕ j (x 0 ), ϕ j (x 0 )) ,(10)" } ]
Anomaly Detection with Conditioned Denoising Diffusion Models
Traditional reconstruction-based methods have struggled to achieve competitive performance in anomaly detection. In this paper, we introduce Denoising Diffusion Anomaly Detection (DDAD), a novel denoising process for image reconstruction conditioned on a target image. This ensures a coherent restoration that closely resembles the target image. Our anomaly detection framework employs the conditioning mechanism, where the target image is set as the input image to guide the denoising process, leading to a defectless reconstruction while maintaining nominal patterns. Anomalies are then localised via a pixel-wise and feature-wise comparison of the input and reconstructed image. Finally, to enhance the effectiveness of the feature-wise comparison, we introduce a domain adaptation method that utilises nearly identical generated examples from our conditioned denoising process to fine-tune the pretrained feature extractor. The veracity of DDAD is demonstrated on various datasets including MVTec and VisA benchmarks, achieving state-of-the-art results of 99.8% and 98.9% image-level AUROC respectively.
Arian Mousakhan; Thomas Brox; Jawad Tayyub
[ { "figure_caption": "θ(x t ). DDPM suggests the training objective ||ϵ", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "θ(x t ) -ϵ|| 2 to train the model.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure2. Framework of DDAD. After a denoising U-Net has been trained, the feature extractor is adapted to the problem domain by minimising the distance between the extracted features of a target image and a generated image which resembles the target image. At inference time, after perturbing the input image, the denoising process is conditioned on the same input image to make an anomaly-free reconstruction. Finally, the reconstructed image is compared with the input through both pixel and feature matching to generate an accurate anomaly localisation.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Top: Influence of conditioning parameter on reconstruction outcomes. Bottom: The first row illustrates a scenario where pixel-wise comparison proves ineffective, while the second row showcases a failure in feature-wise comparison. It is demonstrated that a combination leads to accurate detection in both cases.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Effectiveness of various components of our model on anomaly detection and segmentation. Left: Effectiveness of conditioning based on only pixel-wise image comparison. Middle: Performance increase due to domain adaptation of feature extractor. The conditioning is applied for reconstruction. Right: Impact of merging feature-wise and pixel-wise image comparison. All results are shown on MVTec [4] dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. First and second rows depict samples on 'metal nut', 'capsule', 'transistor', and 'grid' selected from MVTec [4]. Third and fourth rows depict samples of 'pcb4', 'chewing gum', 'pcb3' and 'capsules' selected from VisA [55].", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Role of distillation loss in fine-tuning to avoid pretrained knowledge loss.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .Figure 9 .Figure 10 .Figure 11 .891011Figure 8. Effectiveness of various components of our model on anomaly detection and segmentation. Left: Effectiveness of conditioning based on pixel-wise image comparison. Middle: Performance increase due to domain adaptation of feature extractor. Right: Impact of merging feature-wise and pixel-wise image comparison. All results are shown on the VisA [55] dataset.", "figure_data": "", "figure_id": "fig_7", "figure_label": "891011", "figure_type": "figure" }, { "figure_caption": "A detailed comparison of Anomaly Classification and Localisation performance of various methods on MVTec benchmark [4]in the format of (image AUROC,pixel AUROC). 
The first five rows represent texture categories,and the next nine rows represent object categories.", "figure_data": "Representation-basedReconstruction-basedMethodRD4AD[9] PatchCore[36] SimpleNet [28] GANomaly [1] RIAD [49] Score-based PR [40] DRAEM [50] DDAD-S-10 DDAD-10Carpet(98.9,98.9)(98.7,98.9)(99.7,98.2)(20.3,-)(84.2,96.3)(91.7,96.4)(97.0,95.5)(98.2,98.6)(99.3,98.7)Grid(100,99.3)(99.7,98.3)(99.7,98.8)(40.4,-)(99.6,98.8)(100,98.9)(99.9,99.7)(100,98.4)(100,99.4)Leather(100,99.4)(100,99.3)(100,99.2)(41.3,-)(100,99.4)(99.9,99.3)(100,98.6)(100,99.2)(100,99.4)Tile(99.3,95.6)(100,99.3)(99.8,97.0)(40.8,-)(98.7,89.1)(99.8,96.8)(99.6,99.2)(100,98.2)(100,98.2)Wood(99.2,95.3)(99.2,95.0)(100,94.5)(74.4,-)(93.0,85.8)(96.1,95.4)(99.1,96.4)(99.9,95.1)(100,95.0)Bottle(100,98.7)(100,98.6)(100,98.0)(25.1,-)(99.9,98.4)(100,95.9)(99.2,99.1)(100,98.5)(100,98.7)Cable(95.0,97.4)(99.5,98.4)(99.9,97.6)(45.7,-)(81.9,84.2)(94.2,96.9)(91.8,94.7)(99.8,98.3)(99.4,98.1)Capsule(96.3,98.7)(98.1,98.8)(97.7,98.9)(68.2,-)(88.4,92.8)(97.2,96.6)(98.5,94.3)(99.4,96.0)(99.4,95.7)Hazelnut(99.9,98.9)(100,98.7)(100,97.9)(53.7,-)(83.3,96.1)(98.6,98.7)(100,99.7)(99.8,98.4)(100,98.4)Metal nut(100,97.3)(100,98.4)(100,98.8)(27.0,-)(88.5,92.5)(96.6,96.6)(98.7,99.5)(100,98.1)(100,99.0)Pill(96.6,98.2)(99.8,98.9)(99.0,98.6)(47.2,-)(83.8,95.7)(96.1,98.2)(98.9,97.6)(99.5,99.1)(100,99.1)Screw(97.0,99.6)(98.1,99.4)(98.2,99.3)(23.1,-)(84.5,98.8)(98.6,99.5)(93.9,97.6)(98.3,99.0)(99.0,99.3)Toothbrush (99.5,99.1)(100,98.7)(99.7,98.5)(37.2,-)(100,98.9)(98.1,97.8)(100,98.1)(100,98.7)(100,98.7)Transistor(96.7,92.5)(100,96.3)(100,97.6)(44.0,-)(90.9,87.7)(98.7,94.7)(93.1,90.9)(100,95.3)(100,95.3)Zipper(98.5,98.2)", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Anomaly Classification and localisation performance (image AUROC,pixel AUROC) of various methods on VisA benchmark. The best results are highlighted in bold.WinCLIP[24] (95.4,88.9) (85.0,81.6) (92.1,84.7) (96.5,93.3) (80.3,88.5) (76.2,70.9) (63.7,59.3) (73.6,61.2) (51.2,71.6) (73.4,85.3) (79.6,94.4) (69.7,75.4) (78.1,79.6) SPD [55] (89.1,97.3) (68.1,86.3) (90.5,86.1) (99.3,96.9) (89.8,88.0) (85.7,98.8) (70.8,96.0) (92.7,97.7) (87.9,97.2) (85.4,96.7) (99.1,89.2) (95.6,95.4) (87.8,93.8) DRAEM [50] (91.8,96.6) (74.7,98.5) (95.1,83.5) (94.8,96.8) (97.4,87.2) (97.2,99.9) (85.0,99.2) (47.6,88.7) (89.8,91.3) (92.0,98.0) (98.6,96.8)", "figure_data": "MethodCandleCapsulesCashewChewing gumFryumMacaroni1 Macaroni2PCB1PCB2PCB3PCB4Pipe fryumAverage(100,98.8)(88.7,93.5)OmniAL [53] (85.1,90.5) (87.9,98.6) (97.1,98.9)(94.9,98.7)(97.0,89.3) (96.9,98.9) (89.9,99.1) (96.6,98.7) (99.4,83.2) (96.9,98.4) (97.4,98.5) (91.4,99.1) (94.2,96.0)DDAD-10(99.9,98.7) (100,99.5) (94.5,97.4)(98.1,96.5)(99.0,96.9) (99.2,98.7) (99.2,98.2) (100,93.4) (99.7,97.4) (97.2,96.3) (100,98.5)(100,99.5)(98.9,97.6)", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Image AUROC results of Anomaly Detection on MTD[22] ", "figure_data": "GANomaly [1] DifferNet [37] PatchCore-10 [36] DDAD-1076.797.797.998.3", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "PRO metric for anomaly localisation on MVTec AD [4]and VisA[55] dataset. 
The best results are highlighted in bold.", "figure_data": "MethodSPADE [6]PaDiM[7]RD4AD[9] PatchCore[36] DDAD-10MVTec91.792.193.993.592.3Method WinCLIP [24] DRAEM [50] RD4AD[9] PatchCore[36] DDAD-10VisA56.873.170.991.292.7", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Inference time per image and performance of the model on MVTec[4] with different number of denoising steps in the format of (Image AUROC, Pixel AUROC, PRO).", "figure_data": "MethodPatchCore-1%PaDiMDDAD-5PatchCore-10%Performance (99.0, 98.1, 93.5) (95.4,97.5,92.1) (99.3, 97.5, 91.2) (99.1,98.1,93.5)Time (s)0.170.190.210.22MethodDDAD-S-10DDAD-10SPADEDDAD-25Performance (99.7,97.9,91.3) (99.8,98.1,92.4) (85.3, 96.6, 91.5) (99.7, 97.9, 91.0)Time (s)0.340.380.660.90", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "DDAD Performance on MVTec [4], based on various denoising steps.Format (ImageAUROC, PixelAUROC)", "figure_data": "CategoriesCarpetGridLeatherTileWoodBottleCableCapsuleHazelnutMetal nutPillScrewToothbrush TransistorZipperAvgDDAD-5(94.3,96.4) (100,99.3) (100,99.1) (100,98.2) (99.5,94.4) (100,98.7) (99.6,98.2) (99.1,93.8) (100,98.2) (99.7,98.0) (99.9,98.8) (97.4,98.9) (100,98.6) (99.8,94.0) (100,98.3) (99.3, 97.5)DDAD-10(99.3,98.7) (100,99.4) (100,99.4) (100,98.2) (100,95.3) (100,98.7) (99.4,98.1) (99.4,95.7) (100,98.3) (100,98.9) (100,99.1) (99.0,99.3) (100,98.7)(100,95.3) (100,98.2) (99.8,98.1)DDAD-25(99.0,98.7) (100,99.3) (100,99.0) (100,98.3) (99.4,94.2) (100,98.7) (99.6,98.2) (99.6,95.4) (99.9,98.2) (99.5,98.7) (100,98.9) (99.1,99.3) (100,98.7)(100,95.0) (100,98.", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Anomaly Localisation Performance on MVTec [4], based on PRO metric.", "figure_data": "CategoriesCarpet Grid Leather Tile Wood Bottle Cable Capsule Hazelnut Metal nut Pill Screw Toothbrush Transistor Zipper AvgSPADE [6]94.786.797.275.987.495.590.993.795.494.494.696.093.587.492.691.7PaDiM [7]96.294.697.886.091.194.888.893.592.685.692.794.493.184.595.992.1RD4AD [9]97.097.699.190.690.996.691.095.895.592.396.498.294.578.095.493.9PatchCore [36]96.695.998.987.489.696.192.695.593.991.394.197.991.483.597.193.5DDAD-586.896.497.293.182.191.890.292.587.588.194.394.791.887.393.991.2DDAD-1093.997.397.793.182.991.888.993.486.791.195.596.392.690.193.292.3DDAD-2594.297.097.984.177.592.387.491.086.091.694.995.992.990.492.491.0DDAD-S-1093.793.996.593.284.390.687.691.685.487.495.196.992.491.888.691.3", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": ". Image AU-ROC, pixel AUROC, and PRO metrics increased by 32.2%, 22.1%, and 44.4%, respectively, when only feature-wise comparison is used. The inability of the pretrained feature extractor to extract informative features may explain the inferior performance of representation-based models compared to DDAD, where the backbone fails to provide better features. Detailed performance of feature-wise and pixelwise comparison for each category are shown in tables 15 and 16, respectively.", "figure_data": "10. Feature Extractor10.1. Different backbonesTables 17 and 18 provide a detailed analysis of results ob-tained using various backbones as the feature extractor. 
No-tably, while WideResNet101 yielded the best outcomes forboth MVTec and VisA, WideResNet50 demonstrated com-parable results.", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Anomaly Localization Performance on VisA[55], based on PRO metric.", "figure_data": "CategoriesCandle Capsules Cashew Chewing gum Fryum Macaroni1 Macaroni2 PCB1 PCB2 PCB3 PCB4 Pipe fryum AvgSPADE [6]93.236.157.493.991.361.363.438.442.280.371.661.765.9PaDiM [7]95.774.987.983.580.292.175.491.388.784.981.692.585.9RD4AD [9]92.256.979.092.581.071.968.043.246.480.372.268.370.9PatchCore [36]94.085.594.584.695.395.494.494.389.290.990.195.791.2DDAD-1096.695.080.385.294.298.599.393.393.386.695.594.792.7", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Setting for replicating results on MVTec [4].", "figure_data": "CategoriesCarpet Grid Leather Tile Wood Bottle Cable Capsule Hazelnut Metal nutPillScrew Toothbrush Transistor Zipperw041141133857920010Training epochs2500200020001000 2000100030001500200030001000 2000200020001000FE epochs0680165083144206", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Sensitivity of conditioning parameter w on MVTec [4] only when compared in pixel-wise distance. Format (ImageAUROC, PixelAUROC) (66.7,82.6) (100,99.2) (99.9,98.9) (66.6,64.8) (93.6,81.9) (96.3,87.5) (61.2,89.0) (80.7,76.9) (95.0,95.5) (79.1,90.7) (69.5,80.9) (96.5,98.8) (99.7,97.6) (82.1,82.5) (99.2,96.3) w = 1 (69.5,83.6) (100,99.4) (99.9,99.1) (75.6,72.1) (94.4,83.5) (96.3,89.8) (63.3,87.9) (84.8,85.5) (96.5,96.8) (82.9,90.9) (76.5,89.6) (97.7,99.1) (100,97.8) (85.9,84.0) (99.7,97.1) w = 2 (73.4,84.9) (100,99.5) (100,99.2) (86.0,78.8) (96.7,84.8) (96.0,90.9) (69.4,86.5) (86.8,90.3) (97.1,97.2) (85.0,90.3) (85.0,94.1) (98.6,99.2) (99.7,97.9) (87.0,84.9) (99.8,97.7)", "figure_data": "CategoriesCarpetGridLeatherTileWoodBottleCableCapsuleHazelnutMetal nutPillScrewToothbrush TransistorZipperw = 0w = 3(77.0,85.5) (100,99.6) (100,99.2) (92.9,83.9) (96.8,85.8) (95.2,91.2) (73.5,85.1) (89.7,92.4) (97.2,97.4) (86.0,89.2) (90.2,95.7) (99.1,99.3) (99.2,97.9) (86.1,85.0) (99.9,98.0)w = 4(79.3,86.2) (100,99.6) (100,99.3) (96.3,87.3) (96.8,86.7) (94.2,91.0) (74.4,83.8) (91.1,92.9) (97.6,97.5) (86.9,87.9) (92.6,96.6) (99.2,99.3) (98.9,97.8) (85.7,84.8) (99.9,98.2)w = 5(79.4,86.7) (100,99.6) (100,99.3) (98.3,89.4) (97.1,87.4) (93.0,90.7) (75.4,82.8) (92.2,92.9) (97.6,97.6) (87.7,86.5) (94.1,97.1) (99.4,99.3) (97.8,97.7) (85.2,84.5) (99.9,98.4)w = 6(79.9,87.1) (100,99.6) (100,99.4) (98.5,90.7) (97.3,88.0) (92.6,90.3) (77.0,81.9) (93.1,92.7) (97.7,97.6) (87.6,85.3) (94.3,97.5) (99.5,99.3) (96.7,97.6) (83.6,84.2) (100,98.5)", "figure_id": "tab_10", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Detailed results on parameter v on MVTec [4]. 
The format for AUROC is (Image AUROC, Pixel AUROC) (99.4,98.8) (100,99.3) (100,99.4) (100,98.2) (99.7,95.0) (100,98.7) (99.3,98.1) (99.4,95.7) (100,98.1) (99.9,98.9) (100,99.1) (98.8,99.3) (100,98.6) (92.6,91.5) (100,98.3) (99.3,97.8) v = 1.0 (99.3,98.7) (100,99.4) (100,99.4) (100,98.2) (100,95.0) (100,98.7) (99.4,98.1) (99.4,95.7) (100,98.3) (100,98.9) (100,99.1) (99.0,99.3) (100,98.7) (100,95.3) (100,98.2) (99.8,98.1) v = 2.0 (98.4,98.4) (100,99.4) (100,99.4) (100,98.3) (100,94.6) (100,98.7) (98.8,98.0) (98.9,95.9) (99.9,98.7) (98.4,98.8) (100,98.8) (99.2,99.4) (100,98.8) (98.7,92.0) (100,97.6) (99.5, 97.8)", "figure_data": "CategoriesCarpetGridLeatherTileWoodBottleCableCapsuleHazelnutMetal nutPillScrewToothbrush TransistorZipperAvgv = 0.8", "figure_id": "tab_11", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Detailed results on parameter v on VisA[55]. The format for AUROC is (Image AUROC, Pixel AUROC)", "figure_data": "CategoriesCandleCapsulesCashewChewing gumFryumMacaroni1 Macaroni2PCB1PCB2PCB3PCB4Pipe fryumv=5.0 (AUROC) (99.8,98.7) (100.0,99.4) (98.3,96.8)(98.3,96.8)(99.0,96.7) (99.3,98.8) (99.1,98.5) (99.9,94.1) (99.8,97.0) (98.4,95.8) (100.0,98.8) (99.9,99.5)v=5.0 (PRO)96.695.284.084.093.098.599.293.892.481.996.194.4v=6.0 (AUROC) (99.9,98.7) (100.0,99.5) (96.5,94.9)(98.1,96.8)(99.0,96.8) (99.2,98.7) (99.2,98.5) (99.8,93.0) (99.8,96.9) (97.5,96.4) (100.0,98.6) (99.9,99.5)v=6.0 (PRO)96.495.265.285.193.998.399.293.792.385.595.894.8v=7.0 (AUROC) (99.9,98.7) (100.0,99.5) (96.0,94.5)(98.1,96.5)(99.0,96.9) (99.2,98.7) (99.2,98.4) (100,93.4) (99.7,97.4) (97.5,96.3) (100.0,98.5) (100.0,99.5)v=7.0 (PRO)96.195.064.285.194.298.599.293.393.385.795.594.7v=8.0 (AUROC) (99.9,98.7) (100.0,99.4) (95.4,94.1)(98.1,96.2)(98.9,97.0) (99.1,98.6) (99.2,98.4) (99.8,91.2) (99.7,97.3) (98.4,95.6) (100.0,98.4) (100.0,99.5)v=8.0 (PRO)96.594.963.485.093.498.499.393.393.581.595.394.2", "figure_id": "tab_12", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "Impact of the conditioning on VisA[55] only when compared in pixel-wise distance. AUROC metric is in the format of (Image AUROC, Pixel AUROC) AUROC (91.9,95.9) (91.2,99.7) (87.4,63.7) (97.2,85.3) (94.9,95.4) (97.2,99.5) (80.4,98.2) (95.5,69.2) (98.8,95.4) (99.0,95.5) (98.9,96.1) (97.2,97.7) (94.1, 91.0)", "figure_data": "CategoriesCandleCapsulesCashewChewing gumFryumMacaroni1 Macaroni2PCB1PCB2PCB3PCB4Pipe fryumAvgW/O conditioning -AURO (79.6,88.1) (80.5,99.4) (87.4,63.7)(92.5,70.6)(85.9,94.5) (73.8,92.7) (69.3,94.3) (90.8,88.7) (98.6,97.1) (99.7,97.1) (98.9,93.2) (86.5,77.4) (87.0, 88.1)W/O conditioning -PRO82.494.139.853.293.195.397.493.494.395.978.174.882.7W conditioning -W conditioning -PRO94.397.739.877.193.699.799.488.492.594.590.397.288.7", "figure_id": "tab_13", "figure_label": "15", "figure_type": "table" } ]
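The tables above report anomaly classification and localisation results as (image AUROC, pixel AUROC) pairs, plus PRO for localisation. As a point of reference for how the two AUROC numbers are typically derived from per-pixel anomaly maps, here is a minimal NumPy/scikit-learn sketch; it illustrates the common convention (image score taken as the maximum pixel score) and is not the evaluation code behind these tables.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def image_and_pixel_auroc(anomaly_maps, gt_masks):
    """Compute image-level and pixel-level AUROC from per-pixel anomaly maps.

    anomaly_maps: float array of shape (N, H, W), higher = more anomalous.
    gt_masks:     binary array of shape (N, H, W), 1 marks defective pixels.
    """
    # Image-level score: max anomaly value per image (a common convention);
    # image label: 1 if the ground-truth mask contains any defective pixel.
    image_scores = anomaly_maps.reshape(len(anomaly_maps), -1).max(axis=1)
    image_labels = gt_masks.reshape(len(gt_masks), -1).max(axis=1)
    image_auroc = roc_auc_score(image_labels, image_scores)

    # Pixel-level score: every pixel is treated as one sample.
    pixel_auroc = roc_auc_score(gt_masks.ravel(), anomaly_maps.ravel())
    return image_auroc, pixel_auroc

# Tiny synthetic example: 4 images with 8x8 anomaly maps, one defective image.
rng = np.random.default_rng(0)
maps = rng.random((4, 8, 8))
masks = np.zeros((4, 8, 8), dtype=int)
masks[1, 2:4, 2:4] = 1          # mark a small defect region
maps[1, 2:4, 2:4] += 2.0        # make that defect stand out in the score map
print(image_and_pixel_auroc(maps, masks))
```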
[{"Category": "Methodological Basis", "Citation": "[4,55]", "Explanation": "The cited works provide foundational data in industry that the citing paper builds upon to develop methods for detecting out-of-distribution data."}, {"Category": "Supporting Evidence", "Citation": "[23,54]", "Explanation": "The cited works in medicine offer evidence of the importance of detecting anomalies in the field, which the citing paper further explores in its research."}, {"Category": "Extension or Continuation", "Citation": "[27]", "Explanation": "The cited work in video surveillance extends the research on anomaly detection by providing insights into the application of the method in a new context."}, {"Category": "Data Source", "Citation": "[6,7,9,16,36,47]", "Explanation": "The cited works provide a set of features extracted from pretrained neural networks that the citing paper uses in its representation-based method for detecting anomalies."}, {"Category": "Data Source", "Citation": "[1,8,26]", "Explanation": "The cited works offer a set of reconstruction-based methods that the citing paper leverages to learn a generative model for detecting anomalies in data."}, {"Category": "Supporting Evidence", "Citation": "[21,41]", "Explanation": "The cited works on diffusion models are referenced to support the claim that they have gained popularity as deep generative models and are useful in the context of anomaly detection."}, {"Category": "Extension or Continuation", "Citation": "[4]", "Explanation": "The MVTec dataset is mentioned in the context of anomaly detection, indicating that the cited work is an extension or continuation of research in this area."}, {"Category": "Extension or Continuation", "Citation": "[55]", "Explanation": "The VisA dataset is also mentioned in the context of anomaly detection, suggesting that the cited work is a continuation of research in this area."}, {"Category": "Methodological Basis", "Citation": "[13,31,33]", "Explanation": "The cited works have been used in the past to learn image features through self-supervised learning, which the citing paper adopts as a method to extract informative features for anomaly detection."}, {"Category": "Extension or Continuation", "Citation": "[14,18]", "Explanation": "The cited works have demonstrated the effectiveness of high-quality features in detecting anomalous samples, which the citing paper builds upon to further improve the detection of anomalous samples."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work has successfully employed ResNets to extract informative features for anomaly detection, which the citing paper adopts as a method to improve feature extraction in the current problem."}, {"Category": "Extension or Continuation", "Citation": "[6]", "Explanation": "The cited work has used a memory bank of nominal extracted features for anomaly detection, which the citing paper extends by using a memory bank of locally aware patch features to improve the comparison of input image and its reconstruction at inference time."}, {"Category": "Extension or Continuation", "Citation": "[7]", "Explanation": "The cited work has used locally constrained bag-of-features for anomaly detection, which the citing paper extends by using a memory bank of locally aware patch features to improve the comparison of input image and its reconstruction at inference time."}, {"Category": "Extension or Continuation", "Citation": "[36]", "Explanation": "The cited work has used a memory bank and neighborhood-aware patch-level 
features for anomaly detection, which the citing paper extends by using a memory bank of locally aware patch features to improve the comparison of input image and its reconstruction at inference time."}, {"Category": "Methodological Basis", "Citation": "[16,47]", "Explanation": "The cited works have used normalizing flow for anomaly detection, which the citing paper adopts as a method to improve the comparison of input image and its reconstruction at inference time."}, {"Category": "Extension or Continuation", "Citation": "[5,9]", "Explanation": "The cited works have used a knowledge distillation method for anomaly detection, which the citing paper extends by using a memory bank of locally aware patch features to improve the comparison of input image and its reconstruction at inference time."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work introduces the foundational concept of using a generative model to detect anomalies, which the citing paper builds upon in their research on skin disease image anomaly detection."}, {"Category": "Extension or Continuation", "Citation": "[3]", "Explanation": "The cited work proposes a perceptual loss based on structural similarity to improve learning in anomaly detection, which the citing paper further extends in their research on skin disease image anomaly detection."}, {"Category": "Extension or Continuation", "Citation": "[39]", "Explanation": "The cited work deploys a novelty detector connected end-to-end to a second network to enhance inlier samples and distorted outliers in anomaly detection, which the citing paper extends in their research on skin disease image anomaly detection."}, {"Category": "Extension or Continuation", "Citation": "[34]", "Explanation": "The cited work uses an adversarial autoencoder to effectively compute the likelihood of a sample generated by the inlier distribution in anomaly detection, which the citing paper further extends in their research on skin disease image anomaly detection."}, {"Category": "Extension or Continuation", "Citation": "[1]", "Explanation": "The cited work makes use of a conditional GAN to detect anomalies in skin disease images, which the citing paper builds upon in their research on skin disease image anomaly detection."}, {"Category": "Methodological Basis", "Citation": "[35,50]", "Explanation": "The cited works provide a discriminative end-to-end trainable surface anomaly paradigm for the detection and localization of anomalies, which the citing paper adopts in its own research."}, {"Category": "Data Source", "Citation": "[21,41]", "Explanation": "The cited works on denoising diffusion models are used as a data source for image and audio generation in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[45]", "Explanation": "The cited work on denoising diffusion models for brain tumour detection is extended in the citing paper to include a broader range of medical applications."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work on AnoD-DPM shows that denoising diffusion models outperform GANs for anomaly detection in the medical domain, which the citing paper leverages in its own research."}, {"Category": "Methodological Basis", "Citation": "[21,41]", "Explanation": "The cited works provide the basis for the development of denoising diffusion models, which the citing paper adopts in its research on learning a distribution p \u03b8 (x) that closely resembles the data distribution q(x). 
The methods and techniques used in the cited works are crucial for understanding the process of generating latent noisy variables and the forward process in the context of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[20]", "Explanation": "The cited work provides the function p \u03b8 (x t-1 |x t ) = N (x t-1 ; \u00b5 \u03b8 (x t , t), \u03b2 t I) that is used in the citing paper to define the denoising process in DDPM."}, {"Category": "Methodological Basis", "Citation": "[42]", "Explanation": "The cited work introduces the DDIM method, which the citing paper adopts to accelerate the denoising process in DDPM by using an implicit density model instead of the explicit one used in DDPM."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work introduces the connection between diffusion models and score matching, which the citing paper adopts in the generative process to generate new samples."}, {"Category": "Methodological Basis", "Citation": "[44]", "Explanation": "The cited work derives a score-based function to estimate the deviation in the time step, which the citing paper uses to estimate the deviation in the generation of new samples."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work introduces a classifier guidance mechanism that the citing paper leverages to implement a conditioned denoising process in their research."}, {"Category": "Methodological Basis", "Citation": "[12,52]", "Explanation": "The cited works provide the deep neural networks and feature extraction techniques used in the citing paper to compare image features and capture perceptual similarity in anomaly detection."}, {"Category": "Methodological Basis", "Citation": "[17,48]", "Explanation": "The cited work provides a pretrained feature extractor (\u03d5) that the citing paper utilizes in defining the cosine similarity (cos) for calculating the feature-wise distance (D f ) in the anomaly score function (Eq. 8). 
This method is adopted to enhance the performance of the anomaly detection system."}, {"Category": "Data Source", "Citation": "[4]", "Explanation": "The MVTec Anomaly Detection benchmark is cited as the source of a widely known industrial dataset that the citing paper uses for evaluation."}, {"Category": "Data Source", "Citation": "[55]", "Explanation": "The VisA dataset is cited as a new dataset that the citing paper uses for evaluation, with a focus on high-resolution images and complex object structures."}, {"Category": "Data Source", "Citation": "[22]", "Explanation": "The Magnetic Tile Defects (MTD) dataset is cited as a single-category dataset that the citing paper uses for evaluation, with a focus on a specific type of defects."}, {"Category": "Data Source", "Citation": "[5]", "Explanation": "The cited work introduces the Per Region Overlap (PRO) metric for evaluating localisation performance in the context of anomaly detection, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work introduces the modified UNet framework that the citing paper adopts for training the denoising model."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work, PaDiM, serves as a methodological basis for the comparison of performance metrics in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[36]", "Explanation": "The cited work, PatchCore, is discussed in the context of performance metrics in the citing paper, indicating an extension or continuation of the research."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The cited work, WideResNet101, is used as the feature extractor in the denoising model implemented in the citing paper. The choice of this model is likely based on its effectiveness in extracting features for the denoising task."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work, the MVTec dataset, provides a benchmark for evaluating the performance of the diffusion model in reconstructing anomalous regions in images. 
The citing paper uses this dataset to assess the impact of the conditioning mechanism on reconstruction quality and to compare the performance of the model with and without conditioning."}, {"Category": "Methodological Basis", "Citation": "(99.7, 97.9)", "Explanation": "The cited work provides the performance metrics of the DDAD-S-10 model, which the citing paper adopts to evaluate the performance of the model in the context of image reconstruction."}, {"Category": "Methodological Basis", "Citation": "(98.2,98.6)", "Explanation": "The cited work provides the performance metrics of the model, which the citing paper uses to compare the performance of the model in the context of image reconstruction."}, {"Category": "Methodological Basis", "Citation": "(100,98.4)", "Explanation": "The cited work provides the performance metrics of the model, which the citing paper uses to compare the performance of the model in the context of image reconstruction."}, {"Category": "Methodological Basis", "Citation": "(100,99.2)", "Explanation": "The cited work provides the performance metrics of the model, which the citing paper uses to compare the performance of the model in the context of image reconstruction."}, {"Category": "Methodological Basis", "Citation": "(100,98.2)", "Explanation": "The cited work provides the performance metrics of the model, which the citing paper uses to compare the performance of the model in the context of image reconstruction."}, {"Category": "Methodological Basis", "Citation": "(99.9,95.1)", "Explanation": "The cited work provides the performance metrics of the model, which the citing paper uses to compare the performance of the model in the context of image reconstruction."}, {"Category": "Methodological Basis", "Citation": "(100,98.5)", "Explanation": "The cited work provides the performance metrics of the model, which the citing paper uses to compare the performance of the model in the context of image reconstruction."}, {"Category": "Methodological Basis", "Citation": "(99.8,98.3)", "Explanation": "The cited work provides the performance metrics of the model, which the citing paper uses to compare the performance of the model in the context of image reconstruction."}, {"Category": "Methodological Basis", "Citation": "(99.4,96.0)", "Explanation": "The cited work provides the performance metrics of the model, which the citing paper uses to compare the performance of the model in the context of image reconstruction."}, {"Category": "Methodological Basis", "Citation": "(99.8,98.4)", "Explanation": "The cited work provides the performance metrics of the model, which the citing paper uses to compare the performance of the model in the context of image reconstruction."}, {"Category": "Methodological Basis", "Citation": "(100,98.1)", "Explanation": "The cited work provides the performance metrics of the model, which the citing paper uses to compare the performance of the model in the context of image reconstruction."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The cited work by AnoDDPM demonstrated the use of multi-scale simplex noise for better reconstruction in anomaly detection, which the citing paper adopts in their own architecture to improve performance."}, {"Category": "Methodological Basis", "Citation": "[51]", "Explanation": "The cited work, DiffusionAD, provides a method for denoising and segmentation in anomaly detection that the citing paper adopts in its own research."}, {"Category": "Data Source", "Citation": "[50]", "Explanation": "The 
cited work, DRAEM, serves as a data source for the development of the denoising and segmentation sub-networks in DiffusionAD, which the citing paper builds upon in its research."}, {"Category": "Extension or Continuation", "Citation": "[51]", "Explanation": "The citing paper extends the research of DiffusionAD by exploring the use of a single denoising step in anomaly detection, which is similar to the approach taken in the cited work but with a different focus on the starting point of the denoising process."}, {"Category": "Extension or Continuation", "Citation": "[40]", "Explanation": "The cited work, Score-based perturbation resilience, formulates the problem of anomaly detection with a geometric perspective that the citing paper builds upon in its own research, expanding the research in a new direction."}, {"Category": "Supporting Evidence", "Citation": "[29]", "Explanation": "The cited work by Lu et al. provides a method for anomaly detection that leverages the KL divergence and feature reconstruction error to score pixels and features, respectively. The citing paper builds upon this approach to develop a more effective method for anomaly detection."}, {"Category": "Extension or Continuation", "Citation": "[55]", "Explanation": "The cited work by VisA introduces a method for domain adaptation that the citing paper further extends to improve the accuracy of anomaly detection in the MVTec dataset."}, {"Category": "Data Source", "Citation": "[22]", "Explanation": "The cited work, MTD dataset, serves as a data source for the evaluation of DDAD performance in the citing paper."}]
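Several annotations above refer to an anomaly score that combines a pixel-wise distance between an input image and its denoised reconstruction with a feature-wise distance based on cosine similarity over a pretrained feature extractor (Eq. 8 of the cited DDAD paper, which is not reproduced here). The snippet below is only a hedged sketch of that general idea: the equal weighting, the mean-absolute-error pixel term, and the pooled per-layer features are illustrative assumptions, not the published formulation.

```python
import numpy as np

def cosine_distance(a, b, eps=1e-8):
    """1 - cosine similarity between two 1-D feature vectors."""
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

def anomaly_score(x, x_rec, feats_x, feats_rec, w_pixel=0.5, w_feat=0.5):
    """Toy per-image anomaly score from an image, its reconstruction, and
    per-layer feature vectors (e.g. pooled activations of a frozen CNN).

    x, x_rec:            arrays of shape (H, W) or (H, W, C)
    feats_x, feats_rec:  lists of 1-D feature vectors, one per chosen layer
    """
    # Pixel-wise term: mean absolute reconstruction error.
    d_pixel = float(np.mean(np.abs(x - x_rec)))
    # Feature-wise term: average cosine distance over the selected layers.
    d_feat = float(np.mean([cosine_distance(f, g) for f, g in zip(feats_x, feats_rec)]))
    return w_pixel * d_pixel + w_feat * d_feat

# Minimal usage with random stand-ins for an image, its reconstruction,
# and two layers of pooled features.
rng = np.random.default_rng(1)
img, rec = rng.random((32, 32)), rng.random((32, 32))
f1, g1 = rng.random(64), rng.random(64)
f2, g2 = rng.random(128), rng.random(128)
print(anomaly_score(img, rec, [f1, f2], [g1, g2]))
```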
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8" ], "table_ref": [], "text": "Significant advancements in Natural Language Processing (NLP), particularly in the context of large pre-trained models, have considerably impacted computer vision, enabling multi-modal learning, transfer learning, and spurring the development of new architectures [1]. These pre-trained NLP models have been instrumental in the extraction of high-level features from text data, subsequently enabling the fusion with visual features from images or videos, which in turn improves performance on tasks such as image captioning and video classification. Recently, OpenAI's Contrastive Language Image Pre-training (CLIP) [2] model has been particularly popular due to its superior performance in image classification, object detection, and semantic segmentation. These achievements underscore the potential of large vision-language pre-trained models across a spectrum of applications.\nThe remarkable performance of large pre-trained models on 2D vision tasks has stimulated researchers to investigate the potential applications of these models within the domain of 3D point cloud processing. Several approaches, including PointCLIP [3], PointCLIP v2 [4] and CLIP2Point [5] have been proposed over the years. While these methods demonstrate advantages in zero-shot and few-shot 3D point cloud processing tasks, experimental results on real-world datasets suggest that their efficacy in handling real-world tasks is limited. This limitation is primarily attributed to the significant domain gap between 2D depth maps derived from 3D projection and training images of CLIP. Consequently, the primary focus of our work is to minimize this domain gap and improve the performance of CLIP on zero-shot and few-shot 3D vision tasks.\nPrevious studies have made significant progress in addressing this challenge. For instance, [6] leverages prompt learning to adapt the input domain of the pre-trained visual encoder from computergenerated images of CAD objects to real-world images. The CLIP 2 [7] model aligns threedimensional space with a natural language representation that is applicable in real-world scenarios, facilitating knowledge transfer between domains without prior training. To further boost the performance, we introduce a style feature extraction step and a style transfer module via stable diffusion [8] to further mitigate the domain gap.\nIn this paper, we propose a novel pre-training framework, called DiffCLIP, to minimize the domain gap within both the visual and textual branches. On the visual side, we design a multi-view projection on the original 3D point cloud, resulting in multi-view depth maps. Each depth map undergoes a style transfer process guided by stable diffusion and ControlNet [9], generating photorealistic 2D RGB images. The images generated from the style transfer module are then input into the frozen image encoder of CLIP. For zero-shot tasks, we design handcrafted prompts for both the stable diffusion module and the frozen text encoder of CLIP. For few-shot tasks, we further establish a style-prompt generation module that inputs style features extracted from a pre-trained point-cloud-based network and processed via a meta-net, and outputs the projected features as the style-prompt. 
Coupled with the class label, the entire prompt is then fed into the frozen text encoder of CLIP to guide the downstream process.\nTo demonstrate the effectiveness of DiffCLIP, we evaluate its zero-shot and few-shot classification ability on both synthetic and real-world datasets like ModelNet and ScanObjectNN. Our main contributions are summarized as follows:\n1. We propose DiffCLIP, a novel neural network architecture that combines a pre-trained CLIP model with point-cloud-based networks for enhanced 3D understanding. 2. We develop a technique that effectively minimizes the domain gap in point-cloud processing, leveraging the capabilities of stable diffusion and style-prompt generation, which provides significant improvements in the task of point cloud understanding. 3. We conduct experiments on widely adopted benchmark datasets such as ModelNet and the more challenging ScanObjectNN to illustrate the robust 3D understanding capability of DiffCLIP and conduct ablation studies that evaluate the significance of each component within DiffCLIP's architecture.\n2 Related Work" }, { "figure_ref": [], "heading": "3D Shape Classification", "publication_ref": [ "b15", "b16", "b17", "b17", "b18" ], "table_ref": [], "text": "3D point cloud processing methods for classification can be divided into three categories: projectionbased [10; 11], volumetric-based [12; 13] and point-based [14; 15]. In projection-based methods, 3D shapes are often projected into multiple views to obtain comprehensive features. Then they are fused with well-designed weight. The most common instances of such a method are MVCNN [16] which employs a straightforward technique of max-pooling to obtain a global descriptor from multiview features, and MHBN [17] that makes use of harmonized bilinear pooling to integrate local convolutional features and create a smaller-sized global descriptor. Among all the point-based methods, PointNet [18] is a seminal work that directly takes point clouds as its input and achieves permutation invariance with a symmetric function. Since features are learned independently for each point in PointNet [18], the local structural information between points is hardly captured. Therefore, PointNet++ [19] is proposed to capture fine geometric structures from the neighborhood of each point.\nmethod, an intuitive multi-view fusion strategy, to make the projection processing as efficient as possible for the zero-shot task." }, { "figure_ref": [], "heading": "Domain Adaptation and Generalization in Vision-Language Models", "publication_ref": [ "b20", "b23", "b26", "b27" ], "table_ref": [], "text": "Vision-language pre-trained models such as CLIP exhibit impressive zero-shot generalization ability to the open world. Despite the good zero-shot performance, it is found that further adapting CLIP using task-specific data comes at the expense of out-of-domain (OOD) generalization ability. [20; 21]. Recent advances explore improving the OOD generalization of CLIP on the downstream tasks by adapter learning [22; 23], model ensembling [21], test-time adaptation [24], prompt learning [25; 26], and model fine-tuning [27]. Specifically, in StyLIP [28], style features are extracted hierarchically from the image encoder of CLIP to generate domain-specific prompts. To the best of our knowledge, our work is the first to extract style features of 3D point clouds directly from point-based processing networks. With the extracted style features, we are able to generate domain-specific prompts for the text encoder of CLIP." 
}, { "figure_ref": [], "heading": "Style Transfer using Diffusion Model", "publication_ref": [ "b28", "b29", "b30", "b31", "b32", "b33", "b8" ], "table_ref": [], "text": "Traditional style transfer is one of the domain generalization methods which helps reduce the domain gap between source and target domains by aligning their styles while preserving the content information [29]. For example, Domain-Specific Mappings [30] has been proposed to transfer the style of source domains to a target domain without losing domain-specific information. Fundamentally different from traditional methods of style transfer that operate on pre-existing images, text-to-image diffusion models generate an image based on a textual description that may include style-related information, thus embedding the style defined in textual descriptions into the generated images. These models are based on a diffusion process, which involves gradually smoothing out an input image to create a stylized version of it, showing a strong ability in capturing complex texture and style information in images. For instance, Glide [31], Disco Diffusion [32], Stable Diffusion [33],\nand Imagen [34] all support text-to-image generation and editing via a diffusion process, which can further be used to change the image styles via textual prompts variations. We refer to the textual prompts in diffusion models as style prompts as they often encapsulate the style of generated images.\nIn our work, we implement different text-to-image diffusion models and choose ControlNet [9] to help transfer depth map from 3D projection to a more realistic image style, minimizing its style gap with the training images of CLIP." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b2", "b3" ], "table_ref": [], "text": "DiffCLIP is a novel neural network architecture that enhances the performance of the CLIP model on 3D point processing tasks via style transfer and style-prompt generation. In Section 3.1, we first revisit PointCLIP [3] and PointCLIP v2 [4] that first introduce the CLIP model into the processing of 3D point clouds. Then in Section 3.2, we introduce the framework and motivation of each part of DiffCLIP, which minimizes the domain gap between 3D tasks and 2D pre-training data. In Section 3.3, we present the detailed usage of DiffCLIP on zero-shot and few-shot classification tasks." }, { "figure_ref": [], "heading": "Revisiting PointCLIP and PointCLIP v2", "publication_ref": [ "b2", "b3", "b34" ], "table_ref": [], "text": "PointCLIP [3] is a recent extension of CLIP, which enables zero-shot classification on point cloud data. It involves taking a point cloud and converting it into multiple depth maps from different angles without rendering. The predictions from the individual views are combined to transfer knowledge from 2D to 3D. Moreover, an inter-view adapter is used to improve the extraction of global features and fuse the few-shot knowledge acquired from 3D into CLIP that was pre-trained on 2D image data.\nSpecifically, for an unseen dataset of K classes, PointCLIP constructs the textual prompt by placing all category names into a manual template, termed prompt, and then encodes the prompts into a Ddimensional textual feature, acting as the zero-shot classifier W t ∈ R K×D . Meanwhile, the features of each projected image from M views are encoded as f i for i = 1, ..., M by the visual encoder of CLIP. 
On top of this, the classification logits_i of view i and the final logits_p of each point are calculated by
\mathrm{logits}_i = f_i W_t^{T}, \quad \forall i = 1, \ldots, M \quad (1a)
\mathrm{logits}_p = \sum_{i=1}^{M} \alpha_i \, \mathrm{logits}_i, \quad (1b)
where α_i is a hyper-parameter weighing the importance of view i.
PointCLIP v2 [4] introduces a realistic shape projection module to generate more realistic depth maps for the visual encoder of CLIP, further narrowing down the domain gap between projected point clouds and natural images. Moreover, a large-scale language model, GPT-3 [35], is leveraged to automatically design a more descriptive 3D-semantic prompt for the textual encoder of CLIP instead of the previous hand-crafted one." }, { "figure_ref": [ "fig_0" ], "heading": "DiffCLIP Framework", "publication_ref": [], "table_ref": [], "text": "The DiffCLIP framework is illustrated in Fig. 1. We describe each component of DiffCLIP in detail in the following sections." }, { "figure_ref": [], "heading": "Multi-View Realistic Projection", "publication_ref": [ "b7" ], "table_ref": [], "text": "In DiffCLIP, we use stable diffusion [8] to help transfer projected depth maps to a more realistic image style, minimizing the domain gap with the training images of CLIP. To generate realistic depth maps for better controlling the stable diffusion model, as well as to save computational resources for zero-shot and few-shot tasks, we design a multi-view realistic projection module, which has three steps: proportional sampling, central projection, and 2D max-pooling densifying.
Proportional Sampling. We randomly sample points on each face and edge of the object in the dataset, the number of which is proportional to the area of the faces and the length of the edges. Specifically, assuming that k_l points are sampled on a line with length l and k_f points are sampled on a face with area s, we set sampling threshold values β_1 and β_2, so the number of sampled points can be calculated by k_l = l/β_1 and k_f = s/β_2.
Central Projection. For each projection viewpoint n ∈ {1, 2, ..., N}, we select a specific projection center and projection plane, and use an affine transformation to calculate the coordinates of the projection point on the projection plane for each sampling point. The pixel intensity d is calculated as d = 1/dis, where dis is the distance between the sampling point and our projection plane. On the 2D plane, the projected point may not fall exactly on a pixel, so we use the nearest-neighbor pixel to approximate its density.
2D Max-pooling Densifying. In the central projection step, the nearest-neighbor pixel approximation may leave many pixels unassigned, causing the projected depth map to look unrealistic, sparse, and scattered. To tackle this problem, we densify the projected points via a local max-pooling operation to guarantee visual continuity. For each projected pixel of the depth map, we choose the max density d_max among four points: the pixel itself and the pixels to its right, bottom, and bottom right. We then assign the max density to the neighboring four pixels. Categories (right) in the ModelNet10 Dataset Using a \"Monitor\" Depth Map (left). For example, when transferring the style of \"monitor\" to \"bathtub\", the base of the monitor will be filled with blue rippling water. When transferring the style of \"monitor\" to \"chair\" or \"sofa\", characteristic textures of these objects are displayed. 
When using the \"monitor\" label to transfer its own style, the resulting image clearly generates the toolbar and menu bar on the computer screen." }, { "figure_ref": [ "fig_1" ], "heading": "Stable-Diffusion-Based Style Transfer", "publication_ref": [ "b7", "b8" ], "table_ref": [], "text": "To better use stable diffusion [8] in our task-specific domain, where the input is depth maps, we use ControlNet [9], a robust neural network training method to avoid overfitting and to preserve generalization ability when large models are trained for specific problems. In DiffCLIP, ControlNet employs a U-Net architecture identical to the one used in stable diffusion. It duplicates the weights of an extensive diffusion model into two versions: one that is trainable and another that remains fixed. The linked trainable and fixed neural network units incorporate a specialized convolution layer known as \"zero convolution.\" In this arrangement, the weights of the convolution evolve from zero to optimized parameters in a learned manner. ControlNet has several implementations with different image-based conditions to control large diffusion models, including Canny edge, Hough line, HED boundary, human pose, depth, etc. We use the frozen pre-trained parameters of ControlNet under depth condition, which is pre-trained on 3M depth-image-caption pairs from the internet. The pre-trained ControlNet is then used to generate our own realistic images of depth maps using different class labels as prompts. We illustrate the style-transfer effects of ControlNet in Fig. 2." }, { "figure_ref": [ "fig_2" ], "heading": "Style-Prompt Generation", "publication_ref": [ "b14", "b37", "b38", "b39" ], "table_ref": [], "text": "Optionally, when we want to do few-shot learning, we set up a style prompt generation module to embed style into prompts as is shown in the left part of Fig. 3.\nWe make use of \"style features\" in point cloud processing analogous to the usage in 2D vision. In 2D vision tasks, ConvNets act as visual encoders that extract features at different layers: higher-level layers tend to learn more abstract and global features while lower-level layers detect simple and local features such as edges, corners, and blobs [36; 37]. We conjecture that in the 3D point cloud case, the combination of features extracted from multiple levels could also improve the generalizability of the model, similar to how style features improve learning on RGB images in 2D vision tasks.\nDiffCLIP leverages CLIP's frozen vision encoder f v and text encoder f t . Additionally, a pre-trained point-based network, Point Transformer f p [15], is used in DiffCLIP. The Point Transformer operates by first encoding each point in the point cloud into a high-dimensional feature vector using a multilayer perception network [38], and then processing the encoded feature vectors are then processed by a series of self-attention layers, similar to the ones used in the Transformer [39] architecture.\nWe pre-train the Point Transformer network using our customized dataset, ShapeNet37, which consists of sampled data from ShapeNetCore [40]. The network comprises several blocks, each generating feature maps at different scales. To incorporate domain-specific characteristics, we project the output features from each block into a token and then concatenate them together with a class label to generate the style prompt as the input of CLIP's text encoder. 
" }, { "figure_ref": [], "heading": "Image Vectors of Multi-view", "publication_ref": [], "table_ref": [], "text": "Text" }, { "figure_ref": [], "heading": "Using DiffCLIP", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Zero-shot 3D classification", "publication_ref": [], "table_ref": [], "text": "For each 3D point cloud input x, after being projected into M views and densified, we obtain their smooth depth maps x ′ i , ∀i ∈ {1, ..., M }. These depth maps are then fed into ControlNet as conditions to guide the image generation of stable diffusion. We use the prompt \"a photo of a [class], best quality, extremely detailed\" as the default style prompt input to the stable diffusion model, denoted as t j , where j ∈ {1, ..., K}. The generated realistic images are represented as x ′′ ij = d(x ′ i , t j ), where the function d(•) denotes the stable diffusion combined with ControlNet. Based on the realistic images generated by the projection and style transfer stages, we employ CLIP to extract their visual features\nf ij = f v (x ′′ ij ).\nIn the textual branch, we use \"a photo of a [class]\" as the prompt and encode their textual features as the zero-shot classifier W t ∈ R K×D . Furthermore, the classification logits ij for each view i and each style transfer text guidance j are calculated separately as follows:\nlogits ij = f ij W T t , ∀i ∈ {1, ..., M }, j ∈ {1, ..., K}(2)\nFor each style transfer text guidance j, the classification logits j ∈ R 1×K is defined as M axP ([logits 1j , ..., logits M j ]), where M axP (•) represents taking the maximum value of each column in the matrix. Then, the probability matrix P ∈ R K×K is generated by P = [sof tmax(logits 1 ), ..., sof tmax(logits K )].\nWe design two strategies to calculate the final probability vector p = [p 1 , p 2 , ..., p K ] ∈ R K for each classification. Each of them includes a global logits part and a local one in order to make full use of raw probability distributions of each diffusion result from each view on all the classes. The first strategy is given by\np = β 1 p glo + β 2 p loc(3)\nwhere \np\nand β 1 and β 2 are hyper-parameters. p glo represents summing all values in the matrix that are no more than the diagonal by column, which represents the global information, and p loc returns the diagonal entries of P that represent the probabilities of the realistic images generated by text guidance j being classified into category j, which provides local information. We illustrate the detailed computation in our supplementary material. The second strategy to aggregate the matrix P to calculate the final logits is as follows:\np = norm(p glo ) * p loc(5)\nwhere\np glo = K Π K i=1 P i1 , ..., K Π K i=1 P iK p loc = max i∈{1,2,...,K} P i1 , ..., max i∈{1,2,...,K} P iK norm(p) = (p -min(p))/(max(p) -min(p))(6)\nEach entry in p glo represents the global feature of each category, which is defined as the geometric mean of the probabilities of that category across all style transfer results. Each entry in p loc represents the local feature for each category and is obtained from the diffusion result that is most similar to that category itself. The experiment results demonstrate that these two calculation strategies have their own strengths and weaknesses on different datasets." 
}, { "figure_ref": [], "heading": "Few-Shot 3D Classification", "publication_ref": [], "table_ref": [], "text": "In the context of few-shot classification tasks, a primary innovation of our model lies in the generation of style prompts. The image branch of the model remains identical to the one used in zero-shot classification tasks. As detailed in Section 3. To generate a style-prompt for a given (x, y) pair, where x ∈ R 2048×3 is an original 3D point cloud input, we define the full style prompt t y by\nt y = [c 1 (x)][c 2 (x)]...[c B (x)][CLS y ],(7)\nwhere [CLS y ] is the word embedding of label y. Subsequently, a zero-shot classifier W t ∈ R D×K can be generated.\nFollowing the computation of logits, as in zero-shot classification, we establish another trainable module, using a convolutional neural network to fuse logits s i of each view i ∈ {1, ..., M }, illustrated in the right portion of Fig. 4 Experiments and Results" }, { "figure_ref": [], "heading": "Zero-Shot Classification", "publication_ref": [ "b41", "b41", "b42", "b36", "b38", "b40" ], "table_ref": [ "tab_3", "tab_5" ], "text": "We evaluate the zero-shot classification performance of DiffCLIP on three well-known datasets: ModelNet10 [42], ModelNet40 [42], and ScanObjectNN [43]. For each dataset, we require no training data and adopt the full test set for evaluation. For the pre-trained CLIP model, we adopt ResNet-50 [37] as the visual encoder and transformer [39] as the textual encoder by default, and then try several other encoders in the ablation studies. We initially set the prompt of stable diffusion as \"a photo of a [class], best quality, extremely detailed\", and then we try multiple prompts to adapt the three different datasets later.\nPerformance. In Table 1, we present the performances of zero-shot DiffCLIP for three datasets with their best performance settings. Without any 3D training data, DiffCLIP achieves an accuracy of 43.2% on OBJ_BG of ScanObjectNN dataset, which is state-of-the-art performance, and an accuracy of 80.6% for zero-shot classification on ModelNet10, which is comparable to the state-of-the-art method, ReCon [41]. However, the performance of DiffCLIP is not as good on ModelNet40, in that we only have time to experiment with projections from 4 views at most. Objects in ModelNet40 dataset are randomly rotated, so we assume that more projection views will largely improve the classification performance. We leave this extra set of experiments until after the submission process.\nAblation Studies. We conduct ablation studies on zero-shot DiffCLIP concerning the importance of stable diffusion plus ControlNet module, as well as the influence of different projection views and projection view numbers on ModelNet10. For zero-shot DiffCLIP structure without style transfer stages, in which the depth maps are directly sent into the frozen image encoder of CLIP, the best classification accuracy reaches 58.8%, while implementing ViT/16 as the image encoder of CLIP. Hence, we can see the style transfer stage increases the accuracy by 21.8%. In terms of the number of projected views, we try 1 (randomly selected), 1 (elaborately selected), 4, and 8 projection view(s) on ModelNet10. In Table 2, records are taken from a bird's-eye view angle of 35 degrees, and the angles in the table represent the number of degrees of horizontal rotation around the Z-axis. The result of one elaborately selected view from a -135°angle outperforms others. 
Due to the characteristic of the ModelNet10 dataset that all objects are placed horizontally, projection from 4 views does not show obvious advantages. We also ablate on the choice of different visual backbones, shown in Table 3. The results suggest that ViT/16 yields the best results in our framework.
Prompt Design. In DiffCLIP, we also design several different prompts for both the textual branch of the CLIP model and the style prompt input of stable diffusion. For ModelNet10 and ModelNet40, we implement a default prompt for stable diffusion. The performance of multiple prompt designs for CLIP on ModelNet10 is shown in Table 4. We try to place the class tag at the beginning, middle, and end of sentences, among which the class tag located in the middle of the sentence performs best. Specifically, since real-world objects in ScanObjectNN are incomplete and unclear, we elaborately design its diffusion style prompt as \"a photo of a [class], behind the building, best quality, extremely detailed\", in order to generate an obstacle in front of the target object, which is shown to be more recognizable for CLIP's image encoder. " }, { "figure_ref": [], "heading": "Few-shot Experiments", "publication_ref": [ "b17", "b18" ], "table_ref": [], "text": "We experiment with DiffCLIP using the trainable Style-Prompt Generation Module and the trainable Multi-view Fusion Module under 1, 2, 4, and 8 shots on ModelNet10. For K-shot settings, we randomly sample K point clouds from each category of the training set. We initially set the learning rate to 1e-4 to train the meta-nets in the style prompt generation module, and then reduce the learning rate of the meta-nets to 1e-7 and train the multi-view fusion block. Due to the memory limitations of our GPU, the training batch size is set to 1. Default prompts are used for stable diffusion.
Performance. As is shown in Table 5, we compare the few-shot classification performance of DiffCLIP to PointNet [18] and PointNet++ [19] in terms of K-shot classification accuracy. " }, { "figure_ref": [], "heading": "Discussion and Limitations", "publication_ref": [], "table_ref": [], "text": "In zero-shot 3D classification tasks, DiffCLIP achieves state-of-the-art performance on OBJ_BG of the ScanObjectNN dataset and accuracy on ModelNet10 that is comparable to the state of the art. In few-shot 3D classification tasks, DiffCLIP surpasses PointNet in performance and approaches that of PointNet++. However, our model does have some notable room for improvement. First, DiffCLIP's performance on zero-shot tasks needs more exploration, such as projecting the 3D point cloud into more views and designing better functions to calculate the logits vector p. Second, DiffCLIP's performance on few-shot tasks still needs further improvement, for example through fine-tuning the initial CLIP encoders as well as ControlNet. Moreover, the ability of DiffCLIP to handle other 3D tasks, including segmentation and object detection, remains to be explored. We plan to leave these for future work.
Limitations. First, the implementation of stable diffusion introduces a time-intensive aspect to both the training and testing of the model due to the intricate computations required in the diffusion process, which can significantly elongate the time for model execution, thus potentially limiting the model's utility in time-sensitive applications. We also acknowledge that the scale of our pre-training dataset for the point transformer is presently limited. This constraint impacts the performance of DiffCLIP. 
A larger, more diverse dataset would inherently provide a richer source of learning for the model, thereby enhancing its capability to understand more data." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In " }, { "figure_ref": [], "heading": "B More Information on the DiffCLIP Pipeline", "publication_ref": [], "table_ref": [], "text": "Densify on ScanObjectNN dataset. In ModelNet datasets, only coordinates of a limited set of key points and normal vectors of faces are provided. To increase the density of our data representation, we perform densification on 2D depth maps after point sampling and projection. In contrast, the ScanobjectNN dataset provides coordinates for all points, so we use a different densification method. Specifically, we calculate the k-nearest neighbors (k = 4) for each point and construct triangular planes by connecting the point to all possible pairs of its neighboring points." }, { "figure_ref": [], "heading": "C Experimental Details", "publication_ref": [], "table_ref": [], "text": "Construction of Pretraining Dataset In Section 3.2.3, we mentioned that we use our customized dataset, ShapeNet37, which consists of sampled data from ShapeNetCore, to pretrain point transformer. Specifically, in order to thoroughly test the generalization ability of the DiffCLIP model, we removed all data from categories that overlapped with those in ModelNet10 and ModelNet40 datasets from the ShapeNet dataset. The remaining data belonged to 37 categories, which we named ShapeNet37. " }, { "figure_ref": [ "fig_4" ], "heading": "D An example of probability distribution matrix P", "publication_ref": [], "table_ref": [], "text": "To better illustrate the calculation process of probability matrix P mentioned in equation 3 and equation 5 in Section 3.3.1, we give the following matrix in Fig. 4 as an example. In equation 6, while calculating p glo , numbers in blue boxes, which are bigger than numbers in orange boxes, are ignored." }, { "figure_ref": [ "fig_6" ], "heading": "E Calculating the Logits of Style Transfer", "publication_ref": [], "table_ref": [], "text": "To better illustrate the result of style transfer through stable diffusion and logits' calculation, we draw the bar chart (Fig. 5) of detailed logits of an example. " } ]
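To illustrate the few-shot style-prompt construction of Eq. (7) described above, here is a small NumPy sketch in which block-wise features from a point-based backbone are mapped to prompt tokens and concatenated with the class-name embedding. The token dimension, the number of blocks, and the random linear meta-nets are illustrative assumptions standing in for the learned meta-net, not the paper's configuration.

```python
import numpy as np

def make_style_prompt(block_feats, class_embedding, rng):
    """Build a style prompt t_y = [c_1(x)]...[c_B(x)][CLS_y] (cf. Eq. (7)).

    block_feats:     list of B feature vectors, one per backbone block.
    class_embedding: (D,) word embedding of the class name.
    Each c_b is modelled here as a random linear map projecting the block
    feature to the token dimension D (a stand-in for a learned meta-net).
    """
    D = class_embedding.shape[0]
    tokens = []
    for f in block_feats:
        W = rng.normal(scale=1.0 / np.sqrt(f.shape[0]), size=(D, f.shape[0]))
        tokens.append(W @ f)                      # one prompt token per block
    tokens.append(class_embedding)                # append [CLS_y]
    return np.stack(tokens)                       # (B + 1, D) prompt sequence

# Toy usage: 4 backbone blocks with growing widths and a 512-d class embedding.
rng = np.random.default_rng(3)
feats = [rng.random(64), rng.random(128), rng.random(256), rng.random(512)]
cls_emb = rng.random(512)
print(make_style_prompt(feats, cls_emb, rng).shape)   # (5, 512)
```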
[ { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b0", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b1", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Renrui Zhang; Ziyu Guo; Wei Zhang; Kunchang Li; Xupeng Miao; Bin Cui; Yu Qiao; Peng Gao; Hongsheng Li", "journal": "", "ref_id": "b2", "title": "Pointclip: Point cloud understanding by clip", "year": "2022" }, { "authors": "Xiangyang Zhu; Renrui Zhang; Bowei He; Ziyao Zeng; Shanghang Zhang; Peng Gao", "journal": "", "ref_id": "b3", "title": "Pointclip v2: Adapting clip for powerful 3d open-world learning", "year": "2022" }, { "authors": "Tianyu Huang; Bowen Dong; Yunhan Yang; Xiaoshui Huang; W H Rynson; Wanli Lau; Wangmeng Ouyang; Zuo", "journal": "", "ref_id": "b4", "title": "Clip2point: Transfer clip to point cloud classification with image-depth pre-training", "year": "2022" }, { "authors": "Deepti Hegde; Maria Jose Jeya; Valanarasu; M Vishal; Patel", "journal": "", "ref_id": "b5", "title": "Clip goes 3d: Leveraging prompt tuning for language grounded 3d recognition", "year": "2023" }, { "authors": "Yihan Zeng; Chenhan Jiang; Jiageng Mao; Jianhua Han; Chaoqiang Ye; Qingqiu Huang; Dit-Yan Yeung; Zhen Yang; Xiaodan Liang; Hang Xu", "journal": "", "ref_id": "b6", "title": "Clipˆ2: Contrastive language-image-point pretraining from real-world point cloud data", "year": "2023" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b7", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b8", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Lei Li; Siyu Zhu; Hongbo Fu; Ping Tan; Chiew-Lan Tai", "journal": "", "ref_id": "b9", "title": "End-to-end learning local multiview descriptors for 3d point clouds", "year": "2020" }, { "authors": "Haoxuan You; Yifan Feng; Rongrong Ji; Yue Gao", "journal": "", "ref_id": "b10", "title": "Pvnet: A joint convolutional network of point cloud and multi-view for 3d shape recognition", "year": "2018" }, { "authors": "Daniel Maturana; Sebastian Scherer", "journal": "IEEE", "ref_id": "b11", "title": "Voxnet: A 3d convolutional neural network for realtime object recognition", "year": "2015" }, { "authors": "Gernot Riegler; Ali Osman Ulusoy; Andreas Geiger", "journal": "", "ref_id": "b12", "title": "Octnet: Learning deep 3d representations at high resolutions", "year": "2017" }, { "authors": "Meng-Hao Guo; Jun-Xiong Cai; Zheng-Ning Liu; Tai-Jiang Mu; Shi-Min Ralph R Martin; Hu", "journal": "Computational Visual Media", "ref_id": "b13", "title": "Pct: Point cloud transformer", "year": "2021" }, { "authors": "Hengshuang Zhao; Li Jiang; Jiaya Jia; Vladlen Philip Hs Torr; Koltun", "journal": "", "ref_id": "b14", "title": "Point transformer", "year": "2021" }, { "authors": "Hang Su; Subhransu Maji; Evangelos Kalogerakis; Erik Learned-Miller", "journal": "", "ref_id": "b15", "title": "Multi-view convolutional neural 
networks for 3d shape recognition", "year": "2015" }, { "authors": "Tan Yu; Jingjing Meng; Junsong Yuan", "journal": "", "ref_id": "b16", "title": "Multi-view harmonized bilinear network for 3d object recognition", "year": "2018" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b17", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "Charles Ruizhongtai; Qi ; Li Yi; Hao Su; Leonidas J Guibas", "journal": "Advances in neural information processing systems", "ref_id": "b18", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b19", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Mitchell Wortsman; Gabriel Ilharco; Jong Wook Kim; Mike Li; Simon Kornblith; Rebecca Roelofs; Raphael Gontijo Lopes; Hannaneh Hajishirzi; Ali Farhadi; Hongseok Namkoong", "journal": "", "ref_id": "b20", "title": "Robust fine-tuning of zero-shot models", "year": "2022" }, { "authors": "Peng Gao; Shijie Geng; Renrui Zhang; Teli Ma; Rongyao Fang; Yongfeng Zhang; Hongsheng Li; Yu Qiao", "journal": "", "ref_id": "b21", "title": "Clip-adapter: Better vision-language models with feature adapters", "year": "2021" }, { "authors": "Renrui Zhang; Rongyao Fang; Wei Zhang; Peng Gao; Kunchang Li; Jifeng Dai; Yu Qiao; Hongsheng Li", "journal": "", "ref_id": "b22", "title": "Tip-adapter: Training-free clip-adapter for better vision-language modeling", "year": "2021" }, { "authors": "Manli Shu; Weili Nie; De-An Huang; Zhiding Yu; Tom Goldstein; Anima Anandkumar; Chaowei Xiao", "journal": "", "ref_id": "b23", "title": "Test-time prompt tuning for zero-shot generalization in vision-language models", "year": "2022" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "International Journal of Computer Vision", "ref_id": "b24", "title": "Learning to prompt for vision-language models", "year": "2022" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "", "ref_id": "b25", "title": "Conditional prompt learning for vision-language models", "year": "2022" }, { "authors": "Yang Shu; Xingzhuo Guo; Jialong Wu; Ximei Wang; Jianmin Wang; Mingsheng Long", "journal": "", "ref_id": "b26", "title": "Clipood: Generalizing clip to out-of-distributions", "year": "2023" }, { "authors": "Shirsha Bose; Enrico Fini; Ankit Jha; Mainak Singha; Biplab Banerjee; Elisa Ricci", "journal": "", "ref_id": "b27", "title": "Stylip: Multi-scale style-conditioned prompt learning for clip-based domain generalization", "year": "2023" }, { "authors": "Bu Jin; Beiwen Tian; Hao Zhao; Guyue Zhou", "journal": "", "ref_id": "b28", "title": "Language-guided semantic style transfer of 3d indoor scenes", "year": "2022" }, { "authors": "Hsin-Yu Chang; Zhixiang Wang; Yung-Yu Chuang", "journal": "Springer", "ref_id": "b29", "title": "Domain-specific mappings for generative adversarial style transfer", "year": "2020" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b30", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": 
"2021" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b32", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans; Jonathan Ho; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b33", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b34", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "D Matthew; Rob Zeiler; Fergus", "journal": "Springer", "ref_id": "b35", "title": "Visualizing and understanding convolutional networks", "year": "2014" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b36", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Paul Werbos", "journal": "", "ref_id": "b37", "title": "Beyond regression: New tools for prediction and analysis in the behavioral sciences", "year": "1974" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b38", "title": "Attention is all you need", "year": "2017" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b39", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "Zekun Qi; Runpei Dong; Guofan Fan; Zheng Ge; Xiangyu Zhang; Kaisheng Ma; Li Yi", "journal": "", "ref_id": "b40", "title": "Contrast with reconstruct: Contrastive 3d representation learning guided by generative pretraining", "year": "2023" }, { "authors": "Zhirong Wu; Shuran Song; Aditya Khosla; Fisher Yu; Linguang Zhang; Xiaoou Tang; Jianxiong Xiao", "journal": "", "ref_id": "b41", "title": "3d shapenets: A deep representation for volumetric shapes", "year": "2015" }, { "authors": "Angelina Mikaela; Quang-Hieu Uy; Binh-Son Pham; Thanh Hua; Sai-Kit Nguyen; Yeung", "journal": "", "ref_id": "b42", "title": "Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 144.26, 500.64, 162.4, 12.69 ], "formula_id": "formula_0", "formula_text": "logits i = f i W T t , ∀i = 1, ..., M(1a)" }, { "formula_coordinates": [ 4, 356.1, 491.31, 148.57, 30.32 ], "formula_id": "formula_1", "formula_text": "logits p = M i=1 α i logits i ,(1b)" }, { "formula_coordinates": [ 6, 255.91, 592.47, 58.08, 12.32 ], "formula_id": "formula_2", "formula_text": "f ij = f v (x ′′ ij )." }, { "formula_coordinates": [ 6, 202.48, 654.9, 302.19, 12.69 ], "formula_id": "formula_3", "formula_text": "logits ij = f ij W T t , ∀i ∈ {1, ..., M }, j ∈ {1, ..., K}(2)" }, { "formula_coordinates": [ 7, 261.68, 138.05, 242.98, 9.68 ], "formula_id": "formula_4", "formula_text": "p = β 1 p glo + β 2 p loc(3)" }, { "formula_coordinates": [ 7, 135.51, 184.06, 6.37, 8.77 ], "formula_id": "formula_5", "formula_text": "p" }, { "formula_coordinates": [ 7, 257.94, 303.85, 246.73, 9.68 ], "formula_id": "formula_7", "formula_text": "p = norm(p glo ) * p loc(5)" }, { "formula_coordinates": [ 7, 204.67, 344.73, 300, 59.28 ], "formula_id": "formula_8", "formula_text": "p glo = K Π K i=1 P i1 , ..., K Π K i=1 P iK p loc = max i∈{1,2,...,K} P i1 , ..., max i∈{1,2,...,K} P iK norm(p) = (p -min(p))/(max(p) -min(p))(6)" }, { "formula_coordinates": [ 7, 231.28, 639.49, 273.39, 9.65 ], "formula_id": "formula_9", "formula_text": "t y = [c 1 (x)][c 2 (x)]...[c B (x)][CLS y ],(7)" } ]
DiffCLIP: Leveraging Stable Diffusion for Language Grounded 3D Classification
Large pre-trained models have had a significant impact on computer vision by enabling multi-modal learning, where the CLIP model has achieved impressive results in image classification, object detection, and semantic segmentation. However, the model's performance on 3D point cloud processing tasks is limited due to the domain gap between depth maps from 3D projection and training images of CLIP. This paper proposes DiffCLIP, a new pre-training framework that incorporates stable diffusion with ControlNet to minimize the domain gap in the visual branch. Additionally, a style-prompt generation module is introduced for few-shot tasks in the textual branch. Extensive experiments on the ModelNet10, ModelNet40, and ScanObjectNN datasets show that DiffCLIP has strong abilities for 3D understanding. By using stable diffusion and style-prompt generation, DiffCLIP achieves an accuracy of 43.2% for zero-shot classification on OBJ_BG of ScanObjectNN, which is state-of-the-art performance, and an accuracy of 80.6% for zero-shot classification on ModelNet10, which is comparable to state-of-the-art performance. Recently, several 3D large pre-trained models are proposed. CLIP [2] is a cross-modal text-image pre-training model based on contrastive learning. PointCLIP [3] and PointCLIP v2 [4] extend CLIP's 2D pre-trained knowledge to 3D point cloud understanding. Some other 3D processing methods based on large pre-trained model CLIP are proposed in recent years, including CLIP2Point [5], CLIP 2 [7], and CG3D [6]. Similar to some of those methods, our model combines widely used point cloud learning architectures with novel large pre-trained models such as CLIP. The key difference here is that our point-based network hierarchically guides the generation of prompts by using direct 3D point cloud features, which largely reduces the domain gap between training and testing data for better zero-shot and few-shot learning. We also design a simpler yet more effective projection
Sitian Shen; Zilin Zhu; Linqian Fan; Harry Zhang; Xinxiao Wu
[ { "figure_caption": "Figure 1 :1Figure 1: Framework Structure of DiffCLIP. In the visual branch, DiffCLIP has two modules: Multi-View Realistic Projection Module, which produces multi-view depth maps, and Stable-Diffusion-Based Style Transfer Module, which uses a pre-trained ControlNet and frozen Stable Diffusion network to transfer styles on the depth images. In the textual branch, DiffCLIP uses an optional Style-Prompt Generation Module for few-shot tasks and manual prompts for zero-shot tasks. Frozen CLIP image encoder and text encoder are used to generate feature representations of images and text which then go through a Multi-Modal Fusion Block.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Style Transfer using Stable Diffusion and ControlNet: Illustrating Results for 10Categories (right) in the ModelNet10 Dataset Using a \"Monitor\" Depth Map (left). For example, when transferring the style of \"monitor\" to \"bathtub\", the base of the monitor will be filled with blue rippling water. When transferring the style of \"monitor\" to \"chair\" or \"sofa\", characteristic textures of these objects are displayed. When using the \"monitor\" label to transfer its own style, the resulting image clearly generates the toolbar and menu bar on the computer screen.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Prompt generation module (left) and Multi-view Fusion Block (right) of DiffCLIP.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "2 . 3 ,23we assume that the point-based network consists of B blocks. To incorporate domain-specific characteristics, we establish B meta-nets as style projectors P b (•; θ b ) , ∀b ∈ {1, ..., B} to encode domain characteristics into B prefix features c b , where θ b represents the parameters of the b-th meta-net. Assuming that the textual feature from the b-th block is D b -dimension, the total dimension of our textual encoder is D = B b=1 D b . Specifically, c b ∈ R D b is computed by c b (x) = P b (F b (x); θ b ).", "figure_data": "", "figure_id": "fig_3", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: An example of matrix P .", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: An example of style transfer result. Logits of ten images through stable diffusion's style transfer and the following calculation from source depth condition, the 'Monitor', are shown.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "iK ≤ P KK ) , p loc = [P 11 , ..., P KK ]", "figure_data": "KKP i1 I(P i1 ≤ P 11 ), ...,P iK I(Pi=1i=1", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "3. 
Zero-shot classification accuracy (%) of DiffCLIP on ModelNet10[42], ModelNet40[42], ScanObjectNN[43].", "figure_data": "Zero-shot PerformanceMethodMN10 MN40ScanObjectNN OBJ_ONLY OBJ_BG PB_T50_RSPointCLIP [3]30.223.821.319.315.4Cheraghian [4]68.5----CLIP2Point [5]66.649.435.530.523.3PointMLP+CG3D [6]64.150.4--25.0PointTransformer+CG3D [6]67.350.6--25.6CLIP 2 [7]----39.1PointCLIP v2 [4]73.164.250.141.235.4ReCon [41]75.661.743.740.430.5DiffCLIP80.649.745.343.235.2", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Zero-shot classification accuracy (%) of Diff-CLIP on ModelNet10 using different prompt for CLIP.", "figure_data": "MethodModelNet10 1-shot 2-shot 4-shot 8-shotPointNet [18]42.944.361.478.0PointNet++ [19] 53.269.373.983.8DiffCLIP45.848.262.679.0Table 5: Few-shot classification accuracy(%) of DiffCLIP on ModelNet10 with dif-ferent shot numbers.", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "conclusion, we propose a new pre-training framework called DiffCLIP that addresses the domain gap in 3D point cloud processing tasks by incorporating stable diffusion with ControlNet in the visual branch and introducing a style-prompt generation module for few-shot tasks in the textual branch. The experiments conducted on ModelNet10, ModelNet40, and ScanObjectNN datasets demonstrate that DiffCLIP has strong abilities for zero-shot 3D understanding. The results show that DiffCLIP achieves state-of-the-art performance with an accuracy of 43.2% for zero-shot classification on OBJ_BG of ScanObjectNN dataset and a comparable accuracy with state-of-the-art of 80.6% for zero-shot classification on ModelNet10. These findings suggest that the proposed framework can effectively minimize the domain gap and improve the performance of large pre-trained models on 3D point cloud processing tasks.A Visual Encoder DetailsIn our experiment, we use ViT/32, ViT/16, and RN.×16 as three of the image encoders. ViT/32 is short for ViT-B/32, which represents vision transformer with 32 × 32 patch embeddings. ViT/16 has the same meaning. RN.×16 denotes ResNet-50 which requires 16 times more computations than the standard model.", "figure_data": "", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" } ]
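As a rough illustration of the style-prompt generation summarized in Fig. 3 and Eq. (7), the following PyTorch sketch builds prefix tokens from per-block point-cloud features. The two-layer MLP meta-net, the shared token dimension, and the class-token input are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StylePromptGenerator(nn.Module):
    """Builds the prefix tokens of Eq. (7): t_y = [c_1(x)]...[c_B(x)][CLS_y].

    Each meta-net P_b maps the b-th block feature F_b(x) to a prefix token c_b(x).
    """

    def __init__(self, block_dims, token_dim):
        super().__init__()
        self.meta_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(d, token_dim), nn.ReLU(), nn.Linear(token_dim, token_dim))
            for d in block_dims
        ])

    def forward(self, block_feats, class_token):
        # block_feats: list of B tensors, each of shape (batch, d_b)
        # class_token: (batch, token_dim) embedding of the class name [CLS_y]
        prefix = [net(f) for net, f in zip(self.meta_nets, block_feats)]  # c_b(x) = P_b(F_b(x); theta_b)
        return torch.stack(prefix + [class_token], dim=1)                 # (batch, B + 1, token_dim)
```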
[{"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work provides a foundation for the development of new architectures in the field of computer vision, which the citing paper builds upon in its research."}, {"Category": "Extension or Continuation", "Citation": "[2]", "Explanation": "The cited work, OpenAI's CLIP model, has been widely used in the field of image processing and is the basis for the development of new models in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The PointCLIP model is a key approach in the field of point cloud processing that the citing paper builds upon in its research."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The PointCLIP v2 model is another key approach in the field of point cloud processing that the citing paper builds upon in its research."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The CLIP2Point model is a key approach in the field of point cloud processing that the citing paper builds upon in its research."}, {"Category": "Extension or Continuation", "Citation": "[6]", "Explanation": "The cited work leverages prompt learning to adapt the input domain of the pre-trained visual encoder from computer-generated images of CAD objects to real-world images, which the citing paper builds upon to address the challenge of domain gap in real-world tasks."}, {"Category": "Extension or Continuation", "Citation": "[7]", "Explanation": "The cited work aligns three-dimensional space with a natural language representation that is applicable in real-world scenarios, facilitating knowledge transfer between domains without prior training. The citing paper further builds upon this work to improve the performance in real-world tasks."}, {"Category": "Extension or Continuation", "Citation": "[8]", "Explanation": "The cited work introduces a style feature extraction step and a style transfer module via stable diffusion to further mitigate the domain gap. The citing paper builds upon this work to further boost the performance in real-world tasks."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work, ControlNet, is used as a key component in the style transfer process of the multi-view projection module in the DiffCLIP architecture. 
The method of style transfer is adopted to generate photorealistic 2D RGB images from the depth maps, which is a crucial step in the process of visual understanding in DiffCLIP."}, {"Category": "Methodological Basis", "Citation": "[10; 11]", "Explanation": "The cited works provide a projection-based method for processing 3D point clouds, which the citing paper adopts to project 3D shapes into multiple views for feature extraction and fusion."}, {"Category": "Methodological Basis", "Citation": "[12; 13]", "Explanation": "The cited works present volumetric-based methods for processing 3D point clouds, which the citing paper may have used to process the data in a more efficient manner."}, {"Category": "Methodological Basis", "Citation": "[14; 15]", "Explanation": "The cited works offer point-based methods for processing 3D point clouds, which the citing paper may have used to process the data in a more precise and detailed manner."}, {"Category": "Supporting Evidence", "Citation": "[16]", "Explanation": "The cited work, MVCNN, is a common example of a projection-based method that employs max-pooling to obtain a global descriptor from multiview features, which the citing paper may have used to process the data in a more efficient and effective way."}, {"Category": "Supporting Evidence", "Citation": "[17]", "Explanation": "The cited work, MHBN, uses harmonized bilinear pooling to integrate local convolutional features and create a smaller-sized global descriptor, which the citing paper may have used to process the data in a more efficient and effective manner."}, {"Category": "Supporting Evidence", "Citation": "[18]", "Explanation": "The cited work, PointNet, is a seminal point-based method that directly takes point clouds as input and achieves permutation invariance with a symmetric function, which the citing paper may have used to process the data in a more precise and detailed manner."}, {"Category": "Supporting Evidence", "Citation": "[19]", "Explanation": "The cited work, PointNet++, captures fine geometric structures from the neighborhood of each point, which the citing paper may have used to process the data in a more efficient and effective manner."}, {"Category": "Extension or Continuation", "Citation": "[28]", "Explanation": "The cited work, StyLIP, is mentioned as a state-of-the-art method for improving the OOD generalization of CLIP on the downstream tasks by extracting style features from the image encoder of CLIP to generate domain-specific prompts. 
The citing paper extends this work by proposing a new method to extract style features of 3D point clouds from point-based processing networks and use them to generate domain-specific prompts for the text encoder of CLIP."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work provides a foundation for the study of style transfer in the context of domain generalization, which the citing paper builds upon in its research on reducing the domain gap between source and target domains."}, {"Category": "Extension or Continuation", "Citation": "[30]", "Explanation": "The cited work on Domain-Specific Mappings is extended in the citing paper to transfer the style of source domains to a target domain without losing domain-specific information."}, {"Category": "Supporting Evidence", "Citation": "[31]", "Explanation": "The cited work on Glide shows the strong ability of text-to-image diffusion models in capturing complex texture and style information in images, providing evidence for the citing paper to use these models in their research on text-to-image generation and editing via a diffusion process."}, {"Category": "Data Source", "Citation": "[32]", "Explanation": "The cited work on Disco Diffusion is acknowledged as a data source for text-to-image generation and editing via a diffusion process, which the citing paper utilizes in their research on changing image styles via textual prompts variations."}, {"Category": "Supporting Evidence", "Citation": "[33]", "Explanation": "The cited work on Stable Diffusion is acknowledged as a data source for text-to-image generation and editing via a diffusion process, which the citing paper utilizes in their research on changing image styles via textual prompts variations."}, {"Category": "Supporting Evidence", "Citation": "[34]", "Explanation": "The cited work on Imagen is acknowledged as a data source for text-to-image generation and editing via a diffusion process, which the citing paper utilizes in their research on changing image styles via textual prompts variations."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work, ControlNet, is used in the citing paper to help transfer depth map from 3D projection to a more realistic image style, indicating a methodological basis for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work PointCLIP is the first to introduce the CLIP model into the processing of 3D point clouds, providing a methodological basis for the development of DiffCLIP."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work PointCLIP v2 further builds upon the original PointCLIP by improving the performance of the CLIP model in 3D point processing tasks, providing a methodological basis for the development of DiffCLIP."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work, PointCLIP, is a method that enables zero-shot classification on point cloud data by converting point clouds into depth maps and combining predictions from individual views to transfer knowledge from 2D to 3D. 
The citing paper adopts this method to perform zero-shot classification on point cloud data."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work introduces a realistic shape projection module to generate more realistic depth maps for the visual encoder of CLIP, which the citing paper adopts to improve the performance of the visual encoder in the classification task."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work, stable diffusion, is used as a method in the citing paper to help transfer projected depth maps to a more realistic image style, minimizing the domain gap with the training images of CLIP."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work, stable diffusion, is used as a foundational method in the citing paper to improve the task-specific domain of depth map input in the research."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The cited work, ControlNet, is extended in the citing paper to use a U-Net architecture for specific problems in large model training and to preserve generalization ability in the task of depth map input."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work provides a dataset of 3M depth-image-caption pairs that is used in the pre-training of ControlNet under depth condition to generate realistic images of depth maps in the research."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work, Point Transformer, is used in DiffCLIP to process point clouds in the 3D space. The citing paper adopts the pre-trained network to extract features from the point cloud data."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work provides the multilayer perception network used in the Point Transformer to encode points in the point cloud into high-dimensional feature vectors."}, {"Category": "Methodological Basis", "Citation": "[39]", "Explanation": "The cited work provides the self-attention layers used in the Point Transformer to process the encoded feature vectors."}, {"Category": "Data Source", "Citation": "[40]", "Explanation": "The cited work provides the ShapeNet37 dataset used to pre-train the Point Transformer network, which consists of sampled data from ShapeNetCore."}, {"Category": "Extension or Continuation", "Citation": "[42]", "Explanation": "The cited work, ModelNet10 and ModelNet40, are used as datasets for zero-shot classification evaluation in the citing paper, which extends the research on these datasets to include new dimensions and variables."}, {"Category": "Data Source", "Citation": "[43]", "Explanation": "The cited work, ScanObjectNN, is used as a dataset for zero-shot classification evaluation in the citing paper, providing a data source for the study conducted in the paper."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work, PointNet, serves as a methodological basis for the few-shot classification performance comparison in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work, PointNet++, also serves as a methodological basis for the few-shot classification performance comparison in the citing paper."}]
[ { "figure_ref": [ "fig_0" ], "heading": "", "publication_ref": [ "b7", "b8", "b10", "b11", "b19", "b17" ], "table_ref": [], "text": "education, and healthcare consultation. Notably, ChatGPT has passed part of the US medical licensing exams, highlighting the potential of LLMs in the medical domain [4]- [8].\nSo far, several studies have been conducted to integrate LLMs into Computer-Assisted Diagnosis (CAD) systems of medical images. Conventional CAD typically operates following pure computer vision [9]- [11] or vision-language paradigms [12], [13]. LLMs have shown the ability to effectively interpret findings from medical images, thus mitigating the limitations of only visual interpretation. For example, in our pilot study, we leverage the intermediate results obtained from image CAD networks and then utilize LLMs to generate the final diagnostic reports [14].\nAlthough some efforts have been made to combine CAD networks and LLMs [14]- [16], it should be noted that these studies are limited in their scopes, which often focus on specific image domains. That is, such a system may only support a single image modality, a single organ, or a single application (such as Chest X-ray), which greatly limits the generalizability in the real clinical workflow. The primary reason for this limitation comes from the notable topologic and semantic variations observed among medical image modalities and organs, which present distinct challenges when attempting to encode various images with a single model. Furthermore, LLMs often generate diagnostic reports that exhibit discrepancy in writing style when compared to those produced by real human experts [14], implying the lack of expertise of general LLMs in the field of diagnostics. It is also articulable that the integration of LLMs and CADs may lead to medical dialogue systems [17]- [20]. In this way, patients will be able to interact through LLMs and acquire more medical advice and explanation, while this functionality is often missing in conventional CAD systems. However, existing studies show that the general LLMs typically produce medical advice based solely on their encoded knowledge, without considering specific knowledge in the medical domain [18]. As a result, patients may receive unreliable responses, dwarfing trust in CADs and thereby hindering the practical use of such systems in real medical scenarios.\nIn order to tackle the challenges mentioned above, we propose ChatCAD+ in this paper. And our contributions are made in the following aspects. A2: Atelectasis often has no symptoms unless it leads to low blood oxygen levels or pneumonia. … it does not necessarily mean that you are currently in serious trouble. However, it is always a good idea to consult with a doctor for further evaluation. (3) Knowledge-based reliable interaction. As illustrated in Fig. 1(c), ChatCAD+ does not directly provide medical advice. Instead, it first seeks help via our proposed knowledge retrieval module for relevant knowledge from professional sources, e.g. Merck Manuals, Mayo Clinic, and Cleveland Clinic. Then, the LLM considers the retrieved knowledge as a reference to provide reliable medical advice. In summary, our ChatCAD+ for the first time builds a universal and reliable medical dialogue system primarily. The improved quality of answers and diagnostic reports of the chatbot reveals the potential of LLMs in interactive medical consultation." }, { "figure_ref": [], "heading": "II. 
RELATED WORKS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Large Language Models in Healthcare", "publication_ref": [ "b20", "b23", "b24", "b27", "b22", "b28", "b17" ], "table_ref": [], "text": "Recent advances in Transformer architecture [21] and computing power have enabled the training of large language models with billions of parameters, leading to a significant improvement in their ability to summarize, translate, predict and generate human-like text [22]- [24]. In the pre-ChatGPT era, several healthcare language models have been developed based on general-purpose model weight and training schemes. BioBERT [25] and PubMedBERT [26] are examples of BERT [27] fine-tuned on PubMed, whereas ClinicalBERT [28] was further trained on the MIMIC-CXR dataset.\nAfter ChatGPT showed great potential of 100B-scale models, researchers expand the healthcare language model to a much larger scale and give very promising results. Med-PaLM [23] was developed in late 2022 using curated biomedical corpora and human feedback, and showed promising results, including a 67.6% accuracy on the MedQA exam. ChatGPT, which was not given supplementary medical training, still passed all three parts of the USMLE and achieved over 50% accuracy across all exams and surpassed 60% accuracy in the majority of them [29]. ChatCAD [14] combined medical image analysis models with ChatGPT and offered an interactive computer-aided diagnosis. ChatDoctor [17] was a medical chat model fine-tuned on LLaMA model using clinical QA that is synthesized by ChatGPT. DoctorGLM [18] demonstrated that finetuning an LLM for healthcare use could be done at an affordable cost." }, { "figure_ref": [], "heading": "B. Multi-modality Large Models", "publication_ref": [ "b29", "b30", "b31", "b33", "b34", "b35", "b36", "b37" ], "table_ref": [], "text": "Initially, end-to-end pretraining is a commonly employed technique for facilitating multi-modal input. Various model architectures have been suggested, including the dual-encoder structure of CLIP [30], encoder-decoder architecture Pali [31] and unified transformer architecture BEiT [32]. These methods employ large-scale image-text datasets for comprehensive pretraining. However, as model sizes grow, the computational expense of pretraining can become prohibitively high.\nTo reduce training costs, researchers have started utilizing pre-existing pretrained models. Frozen [33] fine-tuned an image encoder, whose outputs served as soft prompts for the language model. Flamingo [34] introduced cross-attention layers into the LLM to incorporate visual features, pre-training these new layers on billions of image-text pairs. BLIP-2 [35] leveraged both frozen image encoders and frozen LLMs to achieve stronger performance at a lower computation cost.\nIn more recent studies, researchers have opted not to train anything, but instead design prompts that allow LLMs to utilize vision models. For example, Visual-ChatGPT [36] connected ChatGPT and a series of visual foundation models to enable sending and receiving images during chatting. Chat-CAD [14] linked existing CAD models with LLMs to boost diagnostic accuracy and enhance patient care. Grounded-SAM combined a trained Grounding DINO model [37] and a trained SAM model [38] to detect and segment natural image objects with text inputs." }, { "figure_ref": [], "heading": "III. METHOD", "publication_ref": [], "table_ref": [], "text": "The proposed ChatCAD+ is a multi-modality system, which handles both image and text. 
Note that, in this paper, the term modality follows the definition of language plus image, which is somehow different from the often-used concept of medical image modality. As the input image is not restricted to a fixed image domain, it passes through universal interpretation, and the result is forwarded to template-aided diagnosis to get the report for the user. The text query is processed through the knowledge retrieval module, which retrieves clinically-sound knowledge to support the LLM. The process of universal interpretation can be divided into three steps. Initially, the proposed domain identification module is employed to determine the specific domain of the medical image. Subsequently, the corresponding domainspecific model is activated to interpret the image. Finally, the result of the interpretation is converted into a text description via the rule-based prob2text module for further processing." }, { "figure_ref": [ "fig_2", "fig_1" ], "heading": "A. Universal Interpretation", "publication_ref": [ "b29" ], "table_ref": [ "tab_0" ], "text": "Identifying the domain of a medical image is a crucial step for the subsequent operations of ChatCAD+. To achieve this, we employ a method of visual-language contrast that computes the cosine similarity between the input image and textual representations of various potential domains. This approach takes advantage of language's ability to densely describe the features of a particular type of image and its ease of extensibility. In particular, we employ the pre-trained CLIP model [30] to encode both the medical image and the text associated with domains of interest. In this study, we demonstrate upon three domains: Chest X-ray, Panoramic Dental X-ray, and Knee MRI. The workflow is depicted in Fig. 2. Assuming there are three domains D 1 , D 2 , D 3 , along with their textual representations M 1 , M 2 , M 3 , and also a visual representation denoted as I for the input image, the process is defined as follows\nD pred = argmax i∈{1,2,3} I • M i ∥I∥ ∥M i ∥ ,(1)\nwhere D pred denotes the prediction of the medical image domain. The module thereby calls the domain-specific CAD model to analyze visual information given D pred .\nThe gap between the CAD models for images and the following LLM is bridged by converting the output of CAD into text. For example, an image diagnosis model typically outputs tensors representing the likelihood of certain clinical findings. To establish a link between image and text, these tensors are transformed into textual descriptions according to diagnostic-related rules, which is denoted as prob2text in Fig. 1(a). The prob2text tool has been designed with the goal of presenting clinically relevant information in a manner that is more easily interpretable by LLMs. The details of the prompt design are illustrated in Table I. Using Chest X-ray as an example, we follow the three types (P1-P3) of prompt designs in [14] and adopt P3 (illustrative) as the recommended setting in this study. Concretely, it employs a grading system that maps the numerical scores into clinically illustrative descriptions of disease severity. The scores are divided into four levels based on their magnitude: \"No sign\", \"Small possibility\", \"Likely\", and \"Definitely\". The corresponding texts are then used to describe the likelihood of different observations in Chest Xray, providing a concise and informative summary of the patient's condition. The prompt design for Panoramic Dental X-ray and Knee MRI are in similar ways. 
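A minimal sketch of the domain-identification step in Eq. (1) and an illustrative P3-style prob2text mapping is given below. It relies on the open-source CLIP package; the exact domain prompt strings and the grading cut-offs are assumptions for illustration rather than values from ChatCAD+.

```python
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Illustrative textual representations M_i of the three supported domains.
DOMAIN_PROMPTS = {
    "chest x-ray": "a chest X-ray radiograph",
    "panoramic dental x-ray": "a panoramic dental X-ray",
    "knee mri": "an MRI scan of a knee joint",
}

def identify_domain(pil_image):
    """Eq. (1): return the domain whose text embedding is most cosine-similar to the image."""
    image = preprocess(pil_image).unsqueeze(0).to(device)
    text = clip.tokenize(list(DOMAIN_PROMPTS.values())).to(device)
    with torch.no_grad():
        img = model.encode_image(image)
        txt = model.encode_text(text)
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return list(DOMAIN_PROMPTS)[int((img @ txt.T).argmax())]

def prob2text(finding, score):
    """Illustrative P3-style grading; the four severity bands appear in the paper, the thresholds are assumed."""
    if score < 0.2:
        level = "No sign of"
    elif score < 0.5:
        level = "Small possibility of"
    elif score < 0.9:
        level = "Likely"
    else:
        level = "Definitely"
    return f"{level} {finding}."
```

The resulting sentences, one per finding, are what the LLM receives in place of raw probability tensors.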
And other prompt designs, such as P1 and P2 [14], will be discussed in a later section." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4" ], "heading": "B. Template-aided Diagnosis", "publication_ref": [], "table_ref": [], "text": "The LLM utilizes a text description to generate a diagnostic report based on visual characteristics. This process is facilitated through prompts like \"Write a report based on results from Network(s)\". It is important to note that, when applicable, an additional report generation network is commonly integrated for each identified image domain. This integration allows the LLM to combine the generated reports from different networks, leading to an overall enhancement of the report. Such paradigm has been studied in [14], wherein they noted that ChatGPT exhibits a distinct writing style that sets it apart from human experts. This distinction is a crucial aspect to consider when developing a professional diagnosis system. To address this concern, we offer top-k reports that share similar semantics with the text description from the report database via a template retrieval system, to guide the LLM in refining its writing style.\nThe proposed template retrieval system models each report in a database into the following TF-IDF features. It further adopts KD-Tree data structure to speed up the process of retrieval. The process is depicted in Fig. 3. For simplification, we take MIMIC-CXR as an example in this section. The retrieval system is built on the training set of the MIMIC-CXR dataset, with N denoting its size. Specifically, the TF-IDF score of a term t in a document d is computed as\nTF-IDF(t, d) = TF(t, d) • IDF(t),(2)\nwhere TF(t, d) represents the frequency of the term t appearing in the document d, and IDF(t) (inverse document frequency) quantifies the rarity or commonness of t across all documents. In particular, we determine IDF(t) by n t , which denotes the number of documents containing t, following:\nIDF(t) = -log n t N .(3)\nTo improve the specificity of disease-related information, we select 17 medical entities of thoracic diseases annotated in MIMIC-CXR as the dictionary of terms. We then compute the TF-IDF feature of each document in the MIMIC-CXR dataset, which combines the TF-IDF scores for every term in the dictionary. After calculating the TF-IDF features of each document d, we organize all documents in KD-Tree [39], which enables fast online querying. KD-Tree is a space-partitioning data structure used for organizing data in a high-dimensional space.\nIt can implement k-nearest-neighbor querying in O(log(n)) time on average. Since the classical KD-Tree works with L 2 distance, we transform the similarity metric to the L 2 distance by projecting all TF-IDF feature vectors of documents onto the surface of a unit hypersphere, as demonstrated in the right panel of Fig. 3. This allows for the efficient use of the KD-Tree structure to perform similarity queries. We observe that the cosine similarity between two vectors monotonically decreases as the angle between them increases. At the same time, after mapping each vector onto the surface of a unit hypersphere, their L 2 distance monotonically increases with the angle between them. 
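The template retrieval pipeline described above (TF-IDF features from Eqs. (2)-(3), projection onto the unit hypersphere, and KD-Tree querying) could be sketched as follows. The short term list is a stand-in for the 17 annotated thoracic-disease entities, and scikit-learn's KDTree is an assumed implementation choice rather than the authors' code. The formal argument that the unit-norm projection preserves the cosine ranking is given next.

```python
import numpy as np
from sklearn.neighbors import KDTree

# Stand-in for the 17 annotated thoracic-disease entities used as the term dictionary.
TERMS = ["atelectasis", "cardiomegaly", "consolidation", "edema", "pleural effusion"]

def tf_vector(text):
    return np.array([text.lower().count(t) for t in TERMS], dtype=float)

class TemplateIndex:
    def __init__(self, reports):
        self.reports = reports
        tf = np.stack([tf_vector(r) for r in reports])
        n_t = np.maximum((tf > 0).sum(axis=0), 1)                        # documents containing each term
        self.idf = -np.log(n_t / len(reports))                           # Eq. (3)
        feats = tf * self.idf                                            # Eq. (2)
        feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12    # project onto the unit sphere
        self.tree = KDTree(feats)                                        # L2 ranking now matches cosine ranking

    def query(self, description, k=3):
        q = tf_vector(description) * self.idf
        q /= np.linalg.norm(q) + 1e-12
        _, idx = self.tree.query(q[None, :], k=k)
        return [self.reports[i] for i in idx[0]]
```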
This relationship can be formally proved as follows:
cos(q, v) = cos(q/|q|, v/|v|) = cos(θ), L_2(q/|q|, v/|v|) = 2r · sin(θ/2). (4)
Here, q is the feature vector of the query, v is the feature vector of a sample in the database, r is the radius of the hypersphere (here equal to 1), θ is the angle between q/|q| and v/|v|, and θ ∈ [0, π]. Therefore, for any query vector q and any two vectors v_i and v_j in the database, cos(q, v_i) > cos(q, v_j) ⇔ θ_i < θ_j ⇔ L_2(q/|q|, v_i/|v_i|) < L_2(q/|q|, v_j/|v_j|). Since we only care about the rank of cosine similarity, we can convert every q and v_i to q/|q| and v_i/|v_i|, and then use the L_2 metric for querying. After the projection, the rank remains unchanged, and thus the KD-Tree can be utilized for efficient querying.
The querying is performed online as demonstrated at the bottom of Fig. 3. The global term frequency, which represents the frequency of a term throughout the entire training set, is utilized for the inference of the TF-IDF feature of the query description. Once the feature is obtained, the KD-Tree engine efficiently returns a list of k (we set k=3 by default) text documents that share the most similar features with the query.
After obtaining k reports as templates, the LLM is asked to write the report on the basis of the CAD network(s) while following the writing style of the templates. The improvement is illustrated in Section IV." }, { "figure_ref": [ "fig_5" ], "heading": "C. Reliable Interaction", "publication_ref": [], "table_ref": [], "text": "The proposed ChatCAD+ offers reliable medical advice via the construction of a professional knowledge database and LLM-based knowledge retrieval. In this study, we demonstrate using the Merck Manuals. The Merck Manuals are a series of healthcare reference books that provide evidence-based information on the diagnosis and treatment of diseases and medical conditions. The implementation of LLM-based knowledge retrieval leverages the Chain-of-Thought (CoT) technique, which is widely known for its ability to enhance the performance of LLMs in problem-solving. CoT breaks down a problem into a series of sub-problems and then synthesizes the answers to these sub-problems progressively to solve the original problem. Humans typically do not read the entire knowledge base but rather skim through topic titles to find what they are looking for. Taking inspiration from this approach, we have designed a prompt system that automatically guides the LLM to execute such searches. Correspondingly, we have structured the Merck Manuals database as a hierarchical dictionary, with topic titles of different levels serving as keys, as shown in Fig. 4.
The proposed knowledge retrieval method operates as follows in Algorithm 1. Initially, we provide the LLM with only the titles of five related medical topics in the database, and then allow the LLM to select the most relevant topic. Once the LLM has made its choice, we provide it with the relevant medical knowledge pertaining to that topic. If the topic has multiple sections, we provide the LLM with both the medical knowledge and a list of sections, and repeat this process. The LLM may return to the parent topic if it determines that none of the provided topics are relevant to the query. The system continues until the LLM has identified relevant medical knowledge, at which point it is prompted to display the relevant information and the system terminates." },
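A compact sketch of the hierarchical retrieval loop summarized in Algorithm 1 is given below. The nested-dictionary layout of the knowledge base and the `ask_llm` callable are assumptions for illustration; the actual prompt wording and stopping rules used by ChatCAD+ may differ.

```python
def retrieve_knowledge(knowledge, query, ask_llm, max_steps=10):
    """Walk a hierarchical knowledge base, letting the LLM pick one topic per level (cf. Algorithm 1).

    knowledge: nested dict, e.g. {"Pulmonary Disorders": {"Pleural Effusion": "...section text..."}}
    ask_llm:   callable(prompt) -> str, a stand-in for the chat model
    """
    node, path = knowledge, []
    for _ in range(max_steps):
        if isinstance(node, str):                  # reached leaf text: relevant knowledge found
            return path, node
        options = list(node) + (["(back)"] if path else [])
        reply = ask_llm(
            f"Question: {query}\nPick the single most relevant topic from {options}; "
            f"answer with the topic only, or (back) if none fit."
        ).strip()
        if reply == "(back)" and path:
            path.pop()                             # none relevant: return to the parent topic
        elif reply in node:
            path.append(reply)
        else:
            break                                  # unparseable reply: stop early
        node = knowledge
        for key in path:                           # re-walk from the root along the chosen path
            node = node[key]
    return path, None
```

The retrieved leaf text is then passed back to the LLM as the reference for composing its answer.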
EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Dataset and Implementations", "publication_ref": [ "b40", "b41", "b11", "b42", "b8" ], "table_ref": [], "text": "For a fair comparison, the performance of ChatCAD+ is assessed on the public MIMIC-CXR dataset [40] and two private datasets. The MIMIC-CXR is a large public dataset of Chest X-ray images, associated radiology reports, and other clinical data. The dataset consists of 377,110 de-identified Chest X-ray images and associated 227,827 radiology reports. The MIMIC-CXR dataset allows us to measure the quality of diagnostic accuracy and report generation, as most report generation methods are designed specifically for Chest X-ray.\nThe second dataset of Dental X-ray consists of 426 panoramic X-ray images that were collected from real-world clinics. The size of the image is 2903 × 1536. Each patient has corresponding clinical records of golden standard, measured and diagnosed by experienced experts.s\nThe third dataset of Knee MRI is a collection of images obtained from patients who visited Shanghai Sixth People's Hospital affiliated with Shanghai Jiao Tong University School of Medicine. The dataset consists of 1205 images captured using a Philips Achieva 3.0T TX MRI scanner. The imaging protocol used a T2-weighted sequence with fat suppression and fast spin echoes to acquire the images.\nDomain Specific CAD Models. For Chest CAD, the thoracic disease classifier [41] is pre-trained on the official training split of the CheXpert [42] dataset, with over 224k Chest X-rays. At the same time, the Chest X-ray report generation network [12] adopted in templated-aided diagnosis is pretrained on the training split of MIMIC-CXR. The periodontal diagnosis model proposed in [43] is utilized as Tooth CAD. For Knee CAD, we adopt the model proposed in [9]. During implementation, both pre-trained models were kept frozen.\nEvaluations. ChatGPT was adopted as the default LLM Is periodontitis serious?\nPeriodontitis is a common oral disease that, if left untreated, can lead to issues such as increased pocket depth and alveolar bone resorption, seriously impacting oral health. Therefore, if you suspect that you have periodontitis, it is recommended to seek timely medical attention for clinical assessment and necessary oral X-ray examinations in order to facilitate early diagnosis and treatment.\n[References] ''' … The resulting inflammation affects the periodontal ligament, gingiva, cementum, and alveolar bone. The gingiva progressively loses its attachment to the teeth, bone loss begins, and periodontal pockets deepen… '''" }, { "figure_ref": [], "heading": "Merch Manuals periodontitis", "publication_ref": [], "table_ref": [], "text": "How is it treated? Can medication alone be effective?\nThe treatment methods for periodontitis include removal of risk factors, oral hygiene measures such as scaling and root planing, systemic antibiotics, local medication, surgical intervention, or extraction of affected teeth. During the treatment process, it is important to control systemic factors that may affect periodontal health, such as poor oral hygiene, diabetes, and smoking. Therefore, if you are diagnosed with periodontitis, it is recommended to seek timely medical attention and follow a treatment plan provided by a professional dentist in order to better control and manage periodontitis. If the pleural effusion is causing symptoms such as difficulty breathing or chest pain, drainage may be needed. 
If the pleural effusion is due to underlying causes such as pneumonia or surgery, the primary disease should be treated. In many cases, asymptomatic pleural effusions do not require treatment as they can resolve on their own. Pleuritic pain can be managed with nonsteroidal anti-inflammatory drugs (NSAIDs) or other oral analgesics. Thoracentesis is a necessary method for treating symptomatic pleural effusions, and repeated thoracentesis may be needed for recurrent effusions. For chronic, recurrent, symptomatic pleural effusions, pleurodesis or intermittent drainage with indwelling catheters may be considered. If the pleural effusion is caused by pneumonia or malignant tumors, additional special treatment measures may be required.\n[References] '''…The effusion itself generally does not require treatment if it is asymptomatic because many effusions resorb spontaneously ..." }, { "figure_ref": [], "heading": "''' Merch Manuals pleural effusion", "publication_ref": [], "table_ref": [], "text": "Hello, based on medical knowledge, X-ray alone may not accurately diagnose pleural effusion. If X-ray density cannot determine whether it is fluid or lung parenchymal infiltration, or if it is necessary to determine if the suspected effusion is encapsulated, lateral decubitus X-ray, chest CT scan, or ultrasound examination should be performed. These examinations are more sensitive than upright X-ray and can detect effusions as small as 10 milliliters. Therefore, we recommend a comprehensive examination and diagnosis under the guidance of a doctor.\nIs X-ray enough?\n[References] '''…these tests are more sensitive than upright x-rays and can detect fluid volumes < 10ml '''" }, { "figure_ref": [], "heading": "Merch Manuals pleural effusion", "publication_ref": [ "b43", "b45", "b46" ], "table_ref": [], "text": "In this context, the diagnostic result suggests that there are mild defects present in the cartilage. Cartilage defects refer to any damage, irregularities, or abnormalities in the cartilage tissue. These defects can range in severity from mild to severe, with mild defects indicating relatively minor damage or abnormalities. It is important to consult with a healthcare professional or a specialist, such as an orthopedic surgeon, who can provide further interpretation of the MRI results and recommend appropriate treatment options based on the severity of the cartilage defect.\nAny idea to ease my knee pain?\nHere are some general strategies that may help ease knee pain: Quadriceps-strengthening exercises: Strengthening the quadriceps muscles can help provide support and stability to the knee joint. Exercises like leg extensions, squats, and lunges may be beneficial. Stabilizing pads, supports, or braces: Depending on the cause of your knee pain, using stabilizing pads or braces may help provide additional support and stability to the knee joint. Nonsteroidal anti-inflammatory drugs (NSAIDs): NSAIDs, such as ibuprofen or naproxen, can help reduce pain and inflammation in the knee joint. However, it's important to consult with a healthcare professional before starting any medication to ensure it is safe and appropriate for your situation. Identifying and addressing the specific cause: Knee pain can have various underlying causes, and treatment should be tailored. If you have chondromalacia, focusing on quadriceps strengthening exercises, balancing with hamstring exercises, and using arch supports may be helpful. Excessive pronation of the foot may require orthotic inserts. 
For stress fractures, it is important to rest and avoid weight-bearing activities. In cases of intra-articular pathology, surgery may be necessary, but this decision should be made by a qualified healthcare professional.\n[References] Merch Manuals knee-pain What tests should be done for knee pain?\nFor knee pain evaluation, MRI is usually not needed for initial assessment. It is reasonable to proceed with conservative treatment for several weeks and perform MRI if symptoms do not improve. However, in cases of severe trauma or suspected intraarticular injury that cannot be ruled out by other means, MRI should be performed.\nOther tests may be performed to investigate related injuries, such as arterial angiography or CT angiography for suspected arterial injury. Electromyography and/or nerve conduction studies are rarely performed directly, but are used for patients with typical neurological symptoms that persist for several weeks to months after injury.\n[References] '''… MRI is usually not needed at the initial evaluation. A reasonable approach is to do MRI if symptoms do not resolve after a few weeks of conservative management… ''' Merch Manuals knee-pain knee ability , and in all experiments. The quality of LLM-aided diagnosis was tested on the official test set of the MIMIC-CXR dataset, focusing on five findings (cardiomegaly, edema, consolidation, atelectasis, and pleural effusion). Due to limitations on the perhour accesses of OpenAI API-Key, we randomly selected 200 samples for each finding of interest, resulting in a total of 1000 test samples. To evaluate the performance of LLM-aided diagnosis, we used several widely-used Natural Language Generation (NLG) metrics, including BLEU [44], METEOR [45], and ROUGE-L [46]. Specifically, BLEU measures the similarity between generated and ground-truth reports by counting word overlap, whereas METEOR considers synonym substitution and evaluates performance on both sentence level and report level. Meanwhile, ROUGE-L evaluates based on the length of the longest common subsequence. We also measured three classification metrics, including precision (PR), recall (RC), and F1-score (F1), on the 1000 test samples. These metrics provide additional insight into the performance of LLM-aided diagnosis. If not specified, we choose P3 as the default prompt design in LLM-aided diagnosis and k=3 for template retrieval. Note that we adopt Chexbert labeler [47] to extract labels from generated reports, and calculate classification metrics on them." }, { "figure_ref": [ "fig_8", "fig_8" ], "heading": "B. Universal and Reliable CAD", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Fig. 5 demonstrates the universal application and reliability of our proposed method. The ChatCAD+ can process different medical images and provide proper diagnostic reports. Users can interact with ChatCAD+ and ask additional questions for further clarification.\nBy leveraging external medical knowledge, the enhanced ChatCAD+ is capable of delivering medical advice in a more professional manner. In Fig. 5, each reply from ChatCAD+ includes a reference, and the pertinent knowledge obtained from the external database is underlined for emphasis. For instance, in the scenario where the question is raised about whether a Chest X-ray is sufficient for diagnosing pleural effusion, ChatGPT would simply recommend a CT scan without providing a convincing explanation. 
In contrast, ChatCAD+ is capable of informing the patient that CT scans have the capability to detect effusions as small as 10 milliliters, thereby providing a more detailed and informative response.\nThe effectiveness of our proposed knowledge retrieval method is qualitatively compared. Existing knowledge retrieval methods tend to follow the paradigm of LangChain [48], which involves dividing the text into paragraphs and then utilizing sentence transformers to compare the similarity between the embedding of a user query and all paragraphs. This kind of knowledge retrieval method can hardly handle the professional medical knowledge database. We select the retrieval method adopted in [48] as the baseline and compare its performance with our proposed method in Table IV " }, { "figure_ref": [], "heading": "C. Quality of the Template-aided Diagnosis", "publication_ref": [ "b11" ], "table_ref": [ "tab_3", "tab_3", "tab_4" ], "text": "The diagnostic accuracy of different methods was compared and presented in Table II, with R2GenCMN [12] being the selected report generation network in template-aided diagnosis, and VLCI [13] being the state-of-the-art method. Notably, ChatCAD+ demonstrated superior performance in terms of RC and F1 when compared to R2GenCMN, suggesting higher accuracy and better information retrieval ability. Generally, ChatCAD+ was the second-best overall method based on average performance across all diseases of interest, as indicated by bold purple highlighting in Table II. However, it should be noted that ChatCAD+ exhibited inferior classification performance compared to ChatCAD. It seems that the introduction of template retrieval has a negative impact. However, we will demonstrate later that this degradation is a substitute for better natural language generation methods.\nIn Table III, the quality of language generation is assessed using various NLG metrics. ChatCAD+ demonstrates a substantial advantage over ChatCAD across all NLG metrics, as observed in the results obtained using P3. Compared with R2GenCMN, which imitates the human language style well while weak in diagnostic accuracy, ChatCAD+ also shows better performance on METEOR and Corpus BLEU, indicating that ChatCAD+ possesses both proficient report generation and diagnostic capability without favoring either.\nAdditionally, the impact of prompt designs is investigated. It is observed that the style of clinical text descriptions does not significantly affect most metrics, except for METEOR [49], where P3 achieves the highest METEOR score and P1 obtains the lowest. This phenomenon may be attributed to the similarity of P3 to human language style, as it reflects the severity of the disease using rhetoric, whereas P1 directly displays the probability without extensive post-process. ChatCAD+ shows a significant advantage over ChatCAD on all NLG metrics using P3, which underscores the rationality of utilizing template reports as exemplars." }, { "figure_ref": [ "fig_9", "fig_10" ], "heading": "D. Ablation Study", "publication_ref": [ "b49" ], "table_ref": [], "text": "In this subsection, we conduct an empirical study on the impact of template retrieval, which serves as a crucial component in enhancing the quality of report generation. Importance of template guidance. In Fig. 6, we compare the NLG performance with different numbers of retrieved templates. As k increases, the overall performance shows an upward trend. Moreover, in most cases, the largest improvement in performance occurs when k changes from 0 to 1. 
This indicates that regardless of the quantity, as long as templates are provided as references, the quality of generated reports can be significantly enhanced. Meanwhile, the performance improvement tends to saturate around k=3, and further increasing k does not result in more significant improvements, which validates the rationality of the adopted default setting. Influence on report length distribution. The distribution of report length is demonstrated in Fig. 7 by varying the value of k. Report length serves as an important criterion [50] to measure the similarity to ground truth radiology reports. A straightforward observation is that the distribution of Chat-CAD exhibits a significant shift from that of the radiologist. In contrast, ChatCAD+ shows a more fitted curve irrespective of the value of k." }, { "figure_ref": [], "heading": "V. LIMITATIONS AND DISCUSSION", "publication_ref": [], "table_ref": [], "text": "In this work, we have developed ChatCAD+, a universal and reliable interactive CAD system that can accept images of different domains as valid input. We expect that the proposed paradigm of universal CAD networks can motivate the advancement of cross-domain model literature in medical image domain. Furthermore, the template retrieval system enhances the report generation ability of ChatCAD+ and let it act more like a radiologist. The LLM-based knowledge retrieval method enables the LLM to search by itself, and utilize the external medical knowledge database to provide convincing medical treatments. In this way, our ChatCAD+ has utilized the Merck Manuals database as its primary external knowledge database, which may be considered limited in its scope. However, we are confident in the versatility of our proposed LLM-based knowledge retrieval method, as it can be readily adapted to incorporate other databases. By doing so, we anticipate that the reliability of ChatCAD+ will be greatly enhanced. There are several limitations that must be acknowledged in this study. While we have succeeded in enhancing the report generation capacity of our model to closely resemble that of a radiologist, it has come at the cost of reduced diagnostic accuracy.\nAdditionally, the knowledge retrieval method we employed is relatively slow, as its efficacy is closely tied to the response speed of the OpenAI API. These limitations provide an opportunity for future research to address them and improve upon the current findings.\nQ1 How to treat periodontitis? Is taking medication enough? K1 (Baseline) Esophageal cancer can occur at a young age. Unna-Thost disease and Vorner disease: autosomal dominant inheritance. Papillon-Lefèvre syndrome: autosomal recessive inheritance, onset within 6 months after birth. Severe periodontitis can lead to tooth loss. Vohwinkel syndrome: autosomal dominant inheritance, patients may experience progressive toe and finger amputation and high-frequency hearing loss. Diffuse non-epidermolytic palmoplantar keratoderma: this autosomal dominant inherited form develops in infancy, causing a sharply defined symmetric keratoderma affecting the entire palm and sole. Treatment: Symptomatic treatment includes the use of emollients, keratolytic agents, and physical methods for removal. K1 (Ours) Treatment of risk factors Scaling and root planing Sometimes oral antibiotics, antibiotic packs, or both Surgery or extraction Treatment of modifiable risk factors such as poor oral hygiene, diabetes, and smoking improves outcomes. ... 
If deeper pockets persist, systemic antibiotics can be used. A common regimen is amoxicillin 500 mg orally 3 times a day for 10 days. In addition, a gel containing doxycycline or microspheres of minocycline can be placed into isolated recalcitrant pockets. These drugs are resorbed in 2 weeks. Another approach is to surgically eliminate the pocket and recontour the bone (pocket reduction/elimination surgery) so that the patient can clean the depth of the normal crevice (sulcus) between the tooth and gingiva. In certain patients, regenerative surgery and bone grafting are done to encourage alveolar bone growth. Splinting of loose teeth and selective reshaping of tooth surfaces to eliminate traumatic occlusion may be necessary. Extractions are often necessary in advanced disease. Contributing systemic factors should be controlled before initiating periodontal therapy. Ninety percent of patients with necrotizing ulcerative periodontitis due to HIV (HIV-associated periodontitis) respond to combined treatment with scaling and planing, irrigation of the sulcus with povidone-iodine (which the dentist applies with a syringe), regular use of chlorhexidine mouth rinses, and systemic antibiotics, usually metronidazole 250 mg orally 3 times a day for 14 days. Localized aggressive periodontitis requires periodontal surgery plus oral antibiotics (eg, amoxicillin 500 mg 4 times a day or metronidazole 250 mg 3 times a day for 14 days). A1 (Ours) The treatment of periodontitis requires a comprehensive approach that considers risk factors, scaling and root planing, oral antibiotics, and local medications. For all cases of periodontitis, the first stage of treatment involves thorough cleaning, scaling and root planing, and reinforcement of daily oral hygiene. If deep periodontal pockets are present, systemic antibiotics may be used, with amoxicillin 500mg orally, three times a day for 10 days being a commonly used method. The second stage of treatment involves surgical removal of periodontal pockets and bone reshaping to allow the patient to self-clean the base of the gingival sulcus. Localized aggressive periodontitis requires periodontal surgery in combination with oral antibiotics. At the same time, systemic factors such as poor oral hygiene, diabetes, and smoking must be controlled to improve the prognosis. Therefore, drug therapy alone is not sufficient and comprehensive treatment is necessary. Q2 What tests should be done for knee pain? K2 (Baseline) Patellar tendon tear: the patella is palpable and displaced upwards (high-riding patella). Transverse patellar fracture: a gap between the two pieces of the patella may be palpable. However, due to significant local swelling, these symptoms may be mistaken for knee ligament injury with joint effusion. If a patient presents with knee swelling and pain after injury, the clinician will ask the patient to sit down and attempt to extend the injured leg to test knee extension actively or lie flat and elevate the injured leg. After knee swelling following trauma, remember to test the knee's ability for independent extension. Routine knee joint X-ray examination should be performed. Knee X-rays can show a high or low position of the patella. X-rays can reveal dislocation or patellar fractures, but they may also appear normal. K2 (Ours) MRI is usually not needed at the initial evaluation. A reasonable approach is to do MRI if symptoms do not resolve after a few weeks of conservative management. 
However, MRI is often done when severe injury or significant intra-articular injury is suspected or cannot be otherwise excluded. Other tests may be done to check for related injuries: Arteriography or CT angiography to check for suspected arterial injuries Electromyography and/or nerve conduction studies, which are rarely done immediately but more typically done when nerve symptoms persist weeks to months after the injury. A2 (Ours) For knee pain evaluation, MRI is usually not needed for initial assessment. It is reasonable to proceed with conservative treatment for several weeks and perform MRI if symptoms do not improve. However, in cases of severe trauma or suspected intra-articular injury that cannot be ruled out by other means, MRI should be performed. Other tests may be performed to investigate related injuries, such as arterial angiography or CT angiography for suspected arterial injury. Electromyography and/or nerve conduction studies are rarely performed directly, but are used for patients with typical neurological symptoms that persist for several weeks to months after injury. Therefore, which tests are needed depends on the specific situation of the patient. Q3 Is pleural effusion serious? K3 (Baseline) Under normal circumstances, a thin layer of pleural fluid (10-20mL) covers the surfaces of the visceral and parietal pleura, with a composition similar to plasma but with lower protein content (<1.5g/dL [<15 g/L]). \"Etiology\": \"Abs\": \" Pleural effusions are usually classified into exudates, transudates, and fluid accumulation based on laboratory characteristics of the fluid (see criteria for diagnosing exudative pleural effusions in the table).Regardless of unilateral or bilateral involvement, leaky effusions usually require little investigation and can be treated, while exudative effusions require identification of the underlying cause. Various factors can cause pleural effusions (see the table for causes of pleural effusions). Leaky pleural effusions are caused by an increase in systemic circulatory hydrostatic pressure and a decrease in plasma colloid osmotic pressure. K3 (Ours) Some pleural effusions are asymptomatic and are discovered incidentally during physical examination or on chest x-ray. Many cause dyspnea, pleuritic chest pain, or both. Pleuritic chest pain, a vague discomfort or sharp pain that worsens during inspiration, indicates inflammation of the parietal pleura. Pain is usually felt over the inflamed site, but referred pain is possible. The posterior and peripheral portions of the diaphragmatic pleura are supplied by the lower 6 intercostal nerves, and irritation there may cause pain in the lower chest wall or abdomen that may simulate intra-abdominal disease. Irritation of the central portion of the diaphragmatic pleura, innervated by the phrenic nerves, causes pain referred to the neck and shoulder. Physical examination reveals absent tactile fremitus, dullness to percussion, and decreased breath sounds on the side of the effusion. These findings can also be caused by pleural thickening. ... Pericardial rub is best heard over the left border of the sternum in the 3rd and 4th intercostal spaces, is characteristically a to-and-fro sound synchronous with the heartbeat, and is not influenced significantly by respiration. Sensitivity and specificity of the physical examination for detecting effusion are probably low. A3 (Ours) The severity of pleural effusion needs to be determined based on the patient's specific situation. 
Generally, a large amount of effusion can cause symptoms such as dyspnea and chest pain, but some patients may not have obvious symptoms. Physical examination can reveal reduced breath sounds on the effusion side, dullness to percussion, and possibly pleural friction rub. However, the sensitivity and specificity of physical examination for pleural effusion are both low, so clinical symptoms and imaging examinations need to be combined to determine the severity of the effusion and treatment plan. " }, { "figure_ref": [], "heading": "A4 (Ours)", "publication_ref": [], "table_ref": [], "text": "The treatment methods for pulmonary atelectasis include maintaining cough and deep breathing, chest physical therapy, exercise, continuous positive airway pressure, etc. However, the evidence for the effectiveness of these treatments is not sufficient, so the treatment plan needs to be developed according to the specific situation of the patient. If there is suspicion of tumor or foreign body obstruction, bronchoscopy examination is needed. Opioid drugs can be used to relieve severe pleural pain, but cough suppressant drugs should be avoided. For the causes of pulmonary atelectasis such as mucus plugs, foreign bodies, tumors, masses, pleural effusions, etc., targeted treatments should be used. " } ]
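For concreteness, the baseline retrieval paradigm compared above (dividing the knowledge text into paragraphs and ranking them against the user query with sentence-transformer embeddings) can be sketched as follows. This is a minimal illustration only: the embedding checkpoint, the paragraph-splitting rule, and the top_k value are assumptions for the example rather than the exact settings evaluated in Table IV and Table V.

```python
# A minimal sketch of the baseline paradigm compared above: split the knowledge text
# into paragraphs, embed the paragraphs and the user query with a sentence
# transformer, and return the most similar chunks. The checkpoint name, splitting
# rule, and top_k are illustrative assumptions, not the paper's exact settings.
from sentence_transformers import SentenceTransformer, util


def split_into_paragraphs(text: str) -> list[str]:
    """Naive paragraph-level chunking of the knowledge document."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]


def baseline_retrieve(query: str, knowledge_text: str, top_k: int = 3) -> list[str]:
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding checkpoint
    paragraphs = split_into_paragraphs(knowledge_text)
    query_emb = model.encode(query, convert_to_tensor=True)
    para_embs = model.encode(paragraphs, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, para_embs)[0]   # cosine similarity to each chunk
    top = scores.argsort(descending=True)[:top_k]
    return [paragraphs[int(i)] for i in top]
```

As discussed above, such similarity search over generic paragraph chunks struggles with a professional medical reference, which motivates the LLM-driven retrieval over the hierarchical Merck Manuals database used in ChatCAD+.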
2023-07-07
[ { "authors": " Openai", "journal": "", "ref_id": "b0", "title": "Chatgpt: Optimizing language models for dialogue", "year": "2023" }, { "authors": "H Touvron; T Lavril; G Izacard; X Martinet; M.-A Lachaux; T Lacroix; B Rozière; N Goyal; E Hambro; F Azhar; A Rodriguez; A Joulin; E Grave; G Lample", "journal": "", "ref_id": "b1", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "O.-A Contributors", "journal": "", "ref_id": "b2", "title": "Open-Assistant", "year": "2023" }, { "authors": "J Qiu; L Li; J Sun; J Peng; P Shi; R Zhang; Y Dong; K Lam; F P ; -W Lo; B Xiao", "journal": "", "ref_id": "b3", "title": "Large ai models in health informatics: Applications, challenges, and the future", "year": "2023" }, { "authors": "Y Huang; A Gomaa; T Weissmann; J Grigo; H B Tkhayat; B Frey; U S Gaipl; L V Distel; A Maier; R Fietkau", "journal": "", "ref_id": "b4", "title": "Benchmarking chatgpt-4 on acr radiation oncology in-training exam (txit): Potentials and challenges for ai-assisted medical education and decision making in radiation oncology", "year": "2023" }, { "authors": "J Holmes; Z Liu; L Zhang; Y Ding; T T Sio; L A Mcgee; J B Ashman; X Li; T Liu; J Shen", "journal": "", "ref_id": "b5", "title": "Evaluating large language models on a highly-specialized topic, radiation oncology physics", "year": "2023" }, { "authors": "S Biswas", "journal": "", "ref_id": "b6", "title": "Chatgpt and the future of medical writing", "year": "2023" }, { "authors": "V W Xue; P Lei; W C Cho", "journal": "Clinical and Translational Medicine", "ref_id": "b7", "title": "The potential impact of chatgpt in clinical and translational medicine", "year": "2023" }, { "authors": "Z Zhuang; L Si; S Wang; K Xuan; X Ouyang; Y Zhan; Z Xue; L Zhang; D Shen; W Yao", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b8", "title": "Knee cartilage defect assessment by graph representation and surface convolution", "year": "2022" }, { "authors": "Z Cui; Y Fang; L Mei; B Zhang; B Yu; J Liu; C Jiang; Y Sun; L Ma; J Huang", "journal": "Nature communications", "ref_id": "b9", "title": "A fully automatic ai system for tooth and alveolar bone segmentation from cone-beam ct images", "year": "2022" }, { "authors": "S Wang; X Ouyang; T Liu; Q Wang; D Shen", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b10", "title": "Follow my eye: Using gaze to supervise computer-aided diagnosis", "year": "2022" }, { "authors": "Z Chen; Y Shen; Y Song; X Wan", "journal": "", "ref_id": "b11", "title": "Generating radiology reports via memory-driven transformer", "year": "2021-08" }, { "authors": "W Chen; Y Liu; C Wang; G Li; J Zhu; L Lin", "journal": "", "ref_id": "b12", "title": "Visual-linguistic causal intervention for radiology report generation", "year": "2023" }, { "authors": "S Wang; Z Zhao; X Ouyang; Q Wang; D Shen", "journal": "", "ref_id": "b13", "title": "Chatcad: Interactive computer-aided diagnosis on medical image using large language models", "year": "2023" }, { "authors": "L Milecki; V Kalogeiton; S Bodard; D Anglicheau; J.-M Correas; M.-O Timsit; M Vakalopoulou", "journal": "", "ref_id": "b14", "title": "Medimp: Medical images and prompts for renal transplant representation learning", "year": "2023" }, { "authors": "C Niu; G Wang", "journal": "bioRxiv", "ref_id": "b15", "title": "Ct multi-task learning with a large image-text (lit) model", "year": "2023" }, { "authors": "L Yunxiang; L Zihan; Z Kai; D Ruilong; Z You", "journal": "", "ref_id": "b16", "title": 
"Chatdoctor: A medical chat model fine-tuned on llama model using medical domain knowledge", "year": "2023" }, { "authors": "H Xiong; S Wang; Y Zhu; Z Zhao; Y Liu; Q Wang; D Shen", "journal": "", "ref_id": "b17", "title": "Doctorglm: Fine-tuning your chinese doctor is not a herculean task", "year": "2023" }, { "authors": "H Wang; C Liu; N Xi; Z Qiang; S Zhao; B Qin; T Liu", "journal": "", "ref_id": "b18", "title": "Huatuo: Tuning llama model with chinese medical knowledge", "year": "" }, { "authors": "B Keno; K ; H Tianyu; C Shan", "journal": "", "ref_id": "b19", "title": "medalpaca: Finetuned large language models for medical question answering", "year": "" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "", "ref_id": "b20", "title": "Attention is all you need", "year": "2017" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell", "journal": "Advances in neural information processing systems", "ref_id": "b21", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "K Singhal; S Azizi; T Tu; S S Mahdavi; J Wei; H W Chung; N Scales; A Tanwani; H Cole-Lewis; S Pfohl", "journal": "", "ref_id": "b22", "title": "Large language models encode clinical knowledge", "year": "2022" }, { "authors": "C Raffel; N Shazeer; A Roberts; K Lee; S Narang; M Matena; Y Zhou; W Li; P J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b23", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "J Lee; W Yoon; S Kim; D Kim; S Kim; C H So; J Kang", "journal": "Bioinformatics", "ref_id": "b24", "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", "year": "2020" }, { "authors": "Y Gu; R Tinn; H Cheng; M Lucas; N Usuyama; X Liu; T Naumann; J Gao; H Poon", "journal": "ACM Transactions on Computing for Healthcare (HEALTH)", "ref_id": "b25", "title": "Domain-specific language model pretraining for biomedical natural language processing", "year": "2021" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b26", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "E Alsentzer; J R Murphy; W Boag; W.-H Weng; D Jin; T Naumann; M Mcdermott", "journal": "", "ref_id": "b27", "title": "Publicly available clinical bert embeddings", "year": "2019" }, { "authors": "T H Kung; M Cheatham; A Medinilla; C Chatgpt; L Sillos; C De Leon; M Elepano; R Madriaga; G Aggabao; Diaz-Candido", "journal": "medRxiv", "ref_id": "b28", "title": "Performance of chatgpt on usmle: Potential for ai-assisted medical education using large language models", "year": "2022" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "PMLR", "ref_id": "b29", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "X Chen; X Wang; S Changpinyo; A Piergiovanni; P Padlewski; D Salz; S Goodman; A Grycner; B Mustafa; L Beyer", "journal": "", "ref_id": "b30", "title": "Pali: A jointly-scaled multilingual language-image model", "year": "2022" }, { "authors": "W Wang; H Bao; L Dong; J Bjorck; Z Peng; Q Liu; K Aggarwal; O K Mohammed; S Singhal; S Som", "journal": "", "ref_id": "b31", "title": "Image as a foreign language: Beit pretraining 
for all vision and vision-language tasks", "year": "2022" }, { "authors": "M Tsimpoukelli; J L Menick; S Cabi; S Eslami; O Vinyals; F Hill", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b32", "title": "Multimodal few-shot learning with frozen language models", "year": "2021" }, { "authors": "J.-B Alayrac; J Donahue; P Luc; A Miech; I Barr; Y Hasson; K Lenc; A Mensch; K Millican; M Reynolds", "journal": "", "ref_id": "b33", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "J Li; D Li; S Savarese; S Hoi", "journal": "", "ref_id": "b34", "title": "Blip-2: Bootstrapping languageimage pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "C Wu; S Yin; W Qi; X Wang; Z Tang; N Duan", "journal": "", "ref_id": "b35", "title": "Visual chatgpt: Talking, drawing and editing with visual foundation models", "year": "2023" }, { "authors": "S Liu; Z Zeng; T Ren; F Li; H Zhang; J Yang; C Li; J Yang; H Su; J Zhu; L Zhang", "journal": "", "ref_id": "b36", "title": "Grounding dino: Marrying dino with grounded pre-training for open-set object detection", "year": "2023" }, { "authors": "A Kirillov; E Mintun; N Ravi; H Mao; C Rolland; L Gustafson; T Xiao; S Whitehead; A C Berg; W.-Y Lo; P Dollár; R Girshick", "journal": "", "ref_id": "b37", "title": "Segment anything", "year": "2023" }, { "authors": "J L Bentley", "journal": "Communications of the ACM", "ref_id": "b38", "title": "Multidimensional binary search trees used for associative searching", "year": "1975" }, { "authors": "A E Johnson; T J Pollard; S J Berkowitz; N R Greenbaum; M P Lungren; C -Y. Deng; R G Mark; S Horng", "journal": "Scientific data", "ref_id": "b39", "title": "Mimic-cxr, a deidentified publicly available database of chest radiographs with free-text reports", "year": "2019" }, { "authors": "W Ye; J Yao; H Xue; Y Li", "journal": "", "ref_id": "b40", "title": "Weakly supervised lesion localization with probabilistic-cam pooling", "year": "2020" }, { "authors": "J Irvin; P Rajpurkar; M Ko; Y Yu; S Ciurea-Ilcus; C Chute; H Marklund; B Haghgoo; R Ball; K Shpanskaya", "journal": "", "ref_id": "b41", "title": "Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison", "year": "2019" }, { "authors": "L Mei; Y Fang; Z Cui; K Deng; N Wang; X He; Y Zhan; X Zhou; M Tonetti; D Shen", "journal": "Springer", "ref_id": "b42", "title": "Hc-net: Hybrid classification network for automatic periodontal disease diagnosis", "year": "2023" }, { "authors": "K Papineni; S Roukos; T Ward; W.-J Zhu", "journal": "", "ref_id": "b43", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "S Banerjee; A Lavie", "journal": "", "ref_id": "b44", "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "C.-Y Lin", "journal": "", "ref_id": "b45", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "A Smit; S Jain; P Rajpurkar; A Pareek; A Y Ng; M P Lungren", "journal": "", "ref_id": "b46", "title": "Chexbert: combining automatic labelers and expert annotations for accurate radiology report labeling using bert", "year": "2020" }, { "authors": "", "journal": "", "ref_id": "b47", "title": "Chatglm application with local knowledge implementation", "year": "2023" }, { "authors": "M Denkowski; A Lavie", "journal": "Association for Computational 
Linguistics", "ref_id": "b48", "title": "Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems", "year": "2011-07" }, { "authors": "K Chaitanya; E Erdil; N Karani; E Konukoglu", "journal": "", "ref_id": "b49", "title": "Contrastive learning of global and local features for medical image segmentation with limited annotations", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 365.92, 102.8, 197.11, 23.23 ], "formula_id": "formula_0", "formula_text": "D pred = argmax i∈{1,2,3} I • M i ∥I∥ ∥M i ∥ ,(1)" }, { "formula_coordinates": [ 4, 106.9, 521.94, 193.12, 8.96 ], "formula_id": "formula_1", "formula_text": "TF-IDF(t, d) = TF(t, d) • IDF(t),(2)" }, { "formula_coordinates": [ 4, 134.86, 604.24, 165.16, 22.53 ], "formula_id": "formula_2", "formula_text": "IDF(t) = -log n t N .(3)" }, { "formula_coordinates": [ 4, 364.72, 484.98, 198.31, 48.48 ], "formula_id": "formula_3", "formula_text": "cos (⃗ q, ⃗ v) = cos ( ⃗ q |⃗ q| , ⃗ v |⃗ v| ) = cos(θ), L 2 ( ⃗ q |⃗ q| , ⃗ v |⃗ v| ) = 2r • sin( θ 2 ).(4)" }, { "formula_coordinates": [ 4, 315.29, 580.41, 4.23, 6.12 ], "formula_id": "formula_4", "formula_text": "⃗ v" }, { "formula_coordinates": [ 4, 311.97, 594.94, 251.06, 24.51 ], "formula_id": "formula_5", "formula_text": "⃗ v i and ⃗ v j in database, cos(⃗ q, ⃗ v i ) > cos(⃗ q, ⃗ v j ) ⇔ θ i < θ j ⇔ L 2 ( ⃗ q |⃗ q| , ⃗ vi | ⃗ vi| ) < L 2 ( ⃗ q |⃗ q| , ⃗ vj | ⃗ vj |" } ]
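To make the retrieval formulas concrete, the following minimal sketch embeds template reports with TF-IDF (Eqs. (2)-(3)), projects them onto the unit sphere, and indexes them with a KD-Tree so that ranking by L2 distance reproduces the cosine-similarity ranking (Eq. (4)). The toy corpus and the value of k are assumptions for illustration, and scikit-learn's smoothed IDF differs slightly from the plain form in Eq. (3).

```python
# A minimal sketch of the template retrieval described by Eqs. (2)-(4): reports are
# embedded with TF-IDF, projected onto the unit sphere, and indexed with a KD-Tree,
# so that querying by L2 distance reproduces the cosine-similarity ranking.
# The toy corpus and k are assumptions; scikit-learn's smoothed IDF also differs
# slightly from the plain IDF(t) = -log(n_t / N) in Eq. (3).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KDTree
from sklearn.preprocessing import normalize

corpus = [
    "no acute cardiopulmonary process",
    "small bilateral pleural effusions with basilar atelectasis",
    "moderate cardiomegaly with mild pulmonary edema",
]  # stand-in for the in-house template reports

vectorizer = TfidfVectorizer()
templates = normalize(vectorizer.fit_transform(corpus).toarray())  # unit-ball projection
tree = KDTree(templates)  # offline index built once

query = ["patient is likely to have pleural effusion and atelectasis"]
q = normalize(vectorizer.transform(query).toarray())
dist, idx = tree.query(q, k=2)  # smallest L2 distance <=> largest cosine similarity
retrieved = [corpus[i] for i in idx[0]]
```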
ChatCAD+: Towards a Universal and Reliable Interactive CAD using LLMs
The integration of Computer-Assisted Diagnosis (CAD) with Large Language Models (LLMs) holds great potential in clinical applications, particularly in the roles of virtual family doctors and clinic assistants. However, current works in this field are plagued by two limitations: a restricted scope of applicable image domains and the provision of unreliable medical advice, which together restrict their practical value. Furthermore, the mismatch in writing style between LLMs and radiologists undermines their practical usefulness. To tackle these challenges, we introduce ChatCAD+, which is designed to be universal and reliable. It is capable of handling medical images from diverse domains and of leveraging up-to-date information from reputable medical websites to provide reliable medical advice. Additionally, it incorporates a template retrieval system that improves report generation performance via exemplar reports, ensuring greater consistency with the expertise of human professionals. The source code is available on GitHub.
Zihao Zhao; Sheng Wang; Jinchen Gu; Yitao Zhu; Lanzhuju Mei; Zixu Zhuang; Zhiming Cui; Qian Wang; Dinggang Shen
[ { "figure_caption": "( 1 )1Universal image interpretation. Due to the difficulty in obtaining a unified CAD network tackling various images currently, ChatCAD+ incorporates a domain identification module to work with a variety of CAD new opacities are found, and there are no larger pleural effusion … A1: The REPORTS show that … Q2: Moderate but atelectasis? What a hell! Is it ok?", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 1 .1Fig. 1. Overview of the proposed ChatCAD+, which is composed of three key components. (a) Universal interpretation. This module automatically identifies the domain of the image query and selects the appropriate Computer Assisted Diagnosis (CAD) network(s) for image interpretation. The output is converted into medical text description for further processing. (b) Template-aided diagnosis. Several template reports are retrieved from the in-house dataset to instruct the writing of final diagnostic reports. (c) Reliable interaction. Given the text query, the Large Language Model (LLM) retrieves related knowledge from medical knowledge databases to provide reliable medical treatments and explanations.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. The illustration of domain identification. Cosine similarity will be computed between the input image and each pre-defined text phrase, and the text with the highest score is determined to represent the image domain. The corresponding CAD model is hence called to perform interpretation.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "CAD Model→(disease: prob) P1 (direct) \"{disease} score: {prob}\" P2 (simplistic) prob∈[0, 0.5): \"No Finding\" prob∈[0.5, 1]: \"The prediction is {disease}\" P3 (illustrative) prob∈[0, 0.2): \"No sign of {disease}\" prob∈[0.2, 0.5): \"Small possibility of {disease}\" prob∈[0.5, 0.9): \"Patient is likely to have {disease}\" prob∈[0.9, 1]: \"Definitely have {disease}\"", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig.3. Template retrieval system. The system performs online querying by leveraging the global term frequency learned during training to infer the TF-IDF score of the query description. All data points, during training and querying, are projected onto the surface of a unit ball, so that KD-Tree engine can swiftly retrieves a ranked list of the top k text documents that exhibit the highest similarity to the query features.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. The illustration of the hierarchical Merck Manuals database.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Diagnosis of periodontitis: The patient is suspected to have periodontitis. Examination of each tooth reveals suspected inflammation around tooth 16, tooth 24, tooth 26, tooth 34, tooth 36, and tooth 46.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "[References] '''… Treatment of risk factors. Scaling and root planning. Sometimes oral antibiotics, antibiotic packs or both. Surgery or extraction. 
Treatment of modifiable risk factors such as poor oral hygiene, diabetes, and smoking improves outcomes… ''' Merch Manuals periodontitis Based on the patient's examination results, it was found that the tip of the central catheter in the left internal jugular vein terminates at the junction of the bronchus and right atrium. The heart silhouette is moderately to severely enlarged, with mild pulmonary congestion, and bilateral pleural effusion with atelectasis of the lower lobes of the lungs. No pneumothorax is observed. These findings suggest that the patient may have cardiomegaly, pulmonary atelectasis, pulmonary edema, and pleural effusion. It is recommended that the patient seek medical attention as soon as possible and undergo further diagnosis and treatment.Pleural effusion? What should I do? Is it ok?", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig.5. Examples of universal and interactive medical consultation using ChatCAD+, with ChatGPT as the default LLM. The underlined text signifies information obtained from reputable medical websites, which is not comprehensively incorporated in ChatGPT.", "figure_data": "", "figure_id": "fig_8", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig.6. Ablation study on the MIMIC-CXR dataset w.r.t. the number of retrieved templates. A range of values , varying from 0 to 5, was chosen for k, and its influence is evaluated on several NLG metrics.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Distributions of report length when changing k.", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "ILLUSTRATION OF PROB2TEXT. P3 IS THE DEFAULT SETTING.", "figure_data": "", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "and Table V on the Merck Manuals. The results clearly illustrate the advancement of our method. On the contrary, [48] struggles to identify relevant medical topics due to limited vocabulary size. This issue can be worse if the question of the user does not explicitly point to any medical topics or involve multiple medical entities.", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "ACCURACY COMPARISON WITH DIFFERENT METHODS (BOLD BLACK: THE BEST METHOD. BOLD PURPLE: THE SECOND BEST.)", "figure_data": "MethodsCardiomegaly Edema Consolidation Atelectasis Pleural Effusion AveragePR0.6490.6000.4190.4990.8480.603R2GenCMN [12]RC F10.507 0.5690.286 0.3870.051 0.0910.448 0.4720.406 0.5490.339 0.414PR0.5710.8130.4390.5190.8410.636VLCI [13]RC F10.352 0.4350.060 0.1130.070 0.1210.312 0.3890.421 0.5610.243 0.324PR0.5810.5580.2890.4510.7530.526ChatCADRC F10.742 0.6520.472 0.5110.441 0.3490.688 0.5450.672 0.7100.603 0.553PR0.6050.6040.2640.4980.8340.561ChatCAD+RC F10.624 0.6140.323 0.4210.129 0.1730.552 0.5240.517 0.6380.429 0.474", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "RESULTS WITH PREVIOUS STUDIES AND DIFFERENT PROMPT SETTINGS. THE BEST VALUES ARE HIGHLIGHTED IN BOLD BLACK, WHILE THE SECOND-BEST VALUES ARE MARKED IN BOLD PURPLE. 
THE DEFAULT SETTING IS CHATCAD+ (P3).", "figure_data": "ModelBLEU-1 BLEU-2 BLEU-3 BLEU-4 Corpus BLEU METEOR ROUGE-LR2GenCMN0.3670.0830.0270.0120.0350.2100.183ChatCAD (P3)0.2850.0640.0170.0060.0360.2330.168ChatCAD+ (P1)0.3030.0690.0190.0070.0410.2270.168ChatCAD+ (P2)0.3130.0740.0210.0080.0440.2330.173ChatCAD+ (P3)0.3160.0760.0210.0080.0440.2410.174", "figure_id": "tab_4", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "BETWEEN KNOWLEDGE RETRIEVAL METHODS. COMPLETELY UNRELATED KNOWLEDGE IS MARKED IN RED, PARTIALLY RELATED KNOWLEDGE IS MARKED IN PURPLE. RELATED AND IMPORTANT KNOWLEDGE IS IN GREEN.", "figure_data": "", "figure_id": "tab_5", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "Common causes of pleural effusion include heart failure, pneumonia, and tuberculosis Q4 What should I do after getting lung atelectasis? K4 (Baseline) do not recommend oral or intravenous corticosteroids because they have not shown efficacy in metal fume fever. Patients with pre-existing lung disease (such as asthma or chronic obstructive pulmonary disease) rarely require treatment and hospitalization due to worsening of pre-existing conditions. If severe cases of either syndrome cause respiratory distress, treatment methods used for ARDS (such as mechanical ventilation and positive end-expiratory pressure [PEEP]) may also be used. \" , \"Atelectasis\": \"Abs\": \"Atelectasis is collapse of lung tissue with loss of volume. Patients may have dyspnea or respiratory failure if atelectasis is extensive. They may also develop pneumonia. Atelectasis is usually asymptomatic, but hypoxemia and pleuritic chest pain may be present in certain cases. Diagnosis is by chest x-ray. Treatment includes maintaining coughing and deep breathing and treating the cause.\" K4 (Ours) Maximizing cough and deep breathing If obstruction by tumor or foreign body is suspected, bronchoscopy Evidence for the efficacy of most treatments for atelectasis is weak or absent. Nonetheless, commonly recommended measures include chest physiotherapy to help maintain ventilation and clearance of secretions, and encouragement of lung expansion techniques such as directed cough, deep breathing exercises, and use of an incentive spirometer. In ambulatory patients, exercise (eg, walking) is a desirable way to promote deep breathing. For patients who are not intubated and do not have excessive secretions, continuous positive airway pressure may help. For patients who are intubated and mechanically ventilated, positive end-expiratory pressure and/or higher tidal volume ventilation may help. ... If other measures are ineffective or if a cause of obstruction other than mucous plugging is suspected, bronchoscopy should be done.", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "BETWEEN KNOWLEDGE RETRIEVAL METHODS. COMPLETELY UNRELATED KNOWLEDGE IS MARKED IN RED, PARTIALLY RELATED KNOWLEDGE IS MARKED IN PURPLE. RELATED AND IMPORTANT KNOWLEDGE IS IN GREEN.", "figure_data": "", "figure_id": "tab_7", "figure_label": "V", "figure_type": "table" } ]
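As a concrete illustration of the prob2text mapping listed in Table I, the following minimal sketch converts a CAD network's per-disease probabilities into the P3 ("illustrative") clinical text descriptions; the example disease scores are invented purely for demonstration.

```python
# A minimal sketch of the P3 ("illustrative") prob2text mapping from Table I, which
# converts a CAD network's per-disease probability into a clinical text description.
# The example scores are invented for demonstration purposes.
def prob2text_p3(disease: str, prob: float) -> str:
    if prob < 0.2:
        return f"No sign of {disease}"
    if prob < 0.5:
        return f"Small possibility of {disease}"
    if prob < 0.9:
        return f"Patient is likely to have {disease}"
    return f"Definitely have {disease}"


scores = {"Pleural Effusion": 0.86, "Cardiomegaly": 0.12}  # assumed CAD outputs
description = ". ".join(prob2text_p3(d, p) for d, p in scores.items()) + "."
```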
[{"Category": "Methodological Basis", "Citation": "[4]- [8]", "Explanation": "The cited works highlight the potential of LLMs in the medical domain, which provides a methodological basis for the citing paper to explore the integration of LLMs into CAD systems in the medical domain."}, {"Category": "Extension or Continuation", "Citation": "[14]", "Explanation": "The cited work in the pilot study is an extension of the research on combining CAD networks and LLMs, as it further explores the use of LLMs in generating final diagnostic reports in the context of medical images."}, {"Category": "Data Source", "Citation": "[14]- [16]", "Explanation": "The cited works in the study of combining CAD networks and LLMs are limited in their scopes, focusing on specific image domains. This highlights the reliance on external data and pre-existing models in the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work provides a comparison of diagnostic reports generated by LLMs and real human experts, which serves as a basis for the discussion on the limitations of LLMs in the field of diagnostics in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[17]- [20]", "Explanation": "The cited works on medical dialogue systems are extended in the citing paper to discuss the potential integration of LLMs and CADs in the field of medical advice and explanation."}, {"Category": "Supporting Evidence", "Citation": "[18]", "Explanation": "The cited work on the limitations of general LLMs in providing medical advice based on encoded knowledge supports the claim in the citing paper that patients may receive unreliable responses from LLMs in real medical scenarios."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work on Transformer architecture provides a methodological basis for the development of large language models in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[22]- [24]", "Explanation": "The cited works on the training of large language models in the healthcare context serve as a basis for the extension of research in the citing paper."}, {"Category": "Data Source", "Citation": "[25], [26], [28]", "Explanation": "The cited works on BioBERT, PubMedBERT, and ClinicalBERT provide data sources for the development of healthcare language models in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[23]", "Explanation": "The cited work on Med-PaLM serves as a basis for the extension of research in the healthcare language model to a much larger scale in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[29]", "Explanation": "The cited work on the USMLE exam results in ChatGPT serves as a basis for the extension of research in the development of healthcare language models in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "ChatCAD combined medical image analysis models with ChatGPT, providing a methodological basis for the development of an interactive computer-aided diagnosis system."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "ChatDoctor was a medical chat model fine-tuned on LLaMA model using clinical QA that is synthesized by ChatGPT, providing a methodological basis for the development of a fine-tuned LLM for healthcare use."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "DoctorGLM demonstrated that 
finetuning an LLM for healthcare use could be done at an affordable cost, providing a methodological basis for the development of a cost-effective LLM for healthcare use."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The dual-encoder structure of CLIP is mentioned as a model architecture for end-to-end pretraining, which the citing paper adopts as a method for facilitating multi-modal input."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The encoder-decoder architecture of Pali is also mentioned as a model architecture for end-to-end pretraining, which the citing paper may have considered as a method for facilitating multi-modal input."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The unified transformer architecture of BEiT is mentioned as a model architecture for end-to-end pretraining, which the citing paper may have considered as a method for facilitating multi-modal input."}, {"Category": "Data Source", "Citation": "[33]", "Explanation": "The frozen model fine-tuned on image-text datasets is mentioned as a pre-existing model that the citing paper may have used for fine-tuning the image encoder."}, {"Category": "Data Source", "Citation": "[34]", "Explanation": "The cross-attention layers introduced in the LLM and pre-trained on image-text pairs are mentioned as a pre-existing model that the citing paper may have used for fine-tuning."}, {"Category": "Data Source", "Citation": "[35]", "Explanation": "The BLIP-2 model leveraging both frozen image encoders and frozen LLMs is mentioned as a pre-existing model that the citing paper may have used for fine-tuning to achieve stronger performance at a lower computation cost."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The Visual-ChatGPT model connecting ChatGPT and visual foundation models to enable image input and output during chatting is mentioned as a pre-existing model that the citing paper may have used for design inspiration in creating prompts for LLMs to utilize vision models."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work, Chat-CAD, provides a method of linking CAD models with LLMs to boost diagnostic accuracy in patient care, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work, Grounding DINO model, is a method for detecting and segmenting natural image objects with text inputs, which the citing paper uses in their research to ground the SAM model."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "The cited work, SAM model, is a method for segmenting natural image objects with text inputs, which the citing paper uses in combination with the Grounding DINO model to improve the detection and segmentation of objects in natural images."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work, the pre-trained CLIP model, is used to encode the input image and text associated with domains of interest, which forms the basis for the visual-language contrast method employed in the citing paper to identify the domain of a medical image."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work provides a recommended prompt design for converting image output into text, which the citing paper adopts in the design of the prob2text tool to establish a link between image and text in the context of image 
diagnosis."}, {"Category": "Data Source", "Citation": "[14]", "Explanation": "The cited work provides the prompt designs P1 and P2 that the citing paper uses in their research for prompt design in Panoramic Dental X-ray and Knee MRI."}, {"Category": "Data Source", "Citation": "[40]", "Explanation": "The cited work, MIMIC-CXR dataset, is used as a public dataset for evaluating the performance of ChatCAD+ in terms of diagnostic accuracy and report generation."}, {"Category": "Data Source", "Citation": "(private datasets)", "Explanation": "The private datasets are used to measure the quality of diagnostic accuracy and report generation in the study conducted in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[41]", "Explanation": "The cited work provides a pre-trained thoracic disease classifier that the citing paper uses as a foundational element for the study of Chest CAD."}, {"Category": "Supporting Evidence", "Citation": "[12]", "Explanation": "The cited work provides a pre-trained report generation network for Chest X-ray that the citing paper adopts in templated-aided diagnosis for Chest CAD."}, {"Category": "Supporting Evidence", "Citation": "[43]", "Explanation": "The cited work provides a pre-trained periodontal diagnosis model that the citing paper utilizes in Tooth CAD."}, {"Category": "Supporting Evidence", "Citation": "[9]", "Explanation": "The cited work provides a pre-trained model for Knee CAD that the citing paper adopts in the implementation of the model."}, {"Category": "Data Source", "Citation": "[42]", "Explanation": "The cited work provides the official training split of the CheXpert dataset for the pre-training of the thoracic disease classifier in the study of Chest CAD."}, {"Category": "Data Source", "Citation": "MIMIC-CXR", "Explanation": "The training split of MIMIC-CXR is used in the pre-training of the report generation network for Chest X-ray in the study of templated-aided diagnosis for Chest CAD."}, {"Category": "Supporting Evidence", "Citation": "[44]", "Explanation": "The cited work provides the standard metrics for evaluating the quality of Natural Language Generation (NLG) in the context of LLM-aided diagnosis."}, {"Category": "Methodological Basis", "Citation": "[47]", "Explanation": "The cited work, Chexbert labeler, is used in the citing paper to extract labels from generated reports for calculating classification metrics."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work, R2GenCMN, is the report generation network used in template-aided diagnosis in the citing paper, providing a methodological basis for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work, VLCI, is the state-of-the-art method used in the citing paper, providing a methodological basis for the study conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[12]", "Explanation": "The cited work, R2GenCMN, is extended in the citing paper to compare the diagnostic accuracy of different methods, exploring new dimensions and variables in the field of report generation and template-aided diagnosis."}, {"Category": "Extension or Continuation", "Citation": "[13]", "Explanation": "The cited work, VLCI, is extended in the citing paper to assess the quality of language generation using various NLG metrics, exploring new dimensions and variables in the field of natural language generation."}, {"Category": "Methodological Basis", 
"Citation": "[49]", "Explanation": "The cited work introduces the concept of METEOR as a metric for evaluating the quality of text generation, which the citing paper adopts in its research on report generation and diagnostic accuracy."}, {"Category": "Methodological Basis", "Citation": "[50]", "Explanation": "The cited work provides a criterion for measuring the similarity of report length in radiology reports, which the citing paper adopts in their study to evaluate the performance of report generation."}]
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b25", "b4", "b38", "b20" ], "table_ref": [], "text": "Knowledge distillation (KD) generally optimizes a small student model by transferring knowledge from a large teacher model. While most existing works aim to make a student learn better from a given teacher, the training of the teacher itself usually follows the trivial way and is rarely investigated. However, without any intervention, large models suf-fer from high risk of coming into solutions that, while generalize well, are difficult for small models to mimic, which would unfavourably affect distillation. This argument is supported by recent work showing the optimization difficulty is a major barrier in knowledge distillation (Stanton et al., 2021), and is also confirmed by evidence that larger teacher with higher accuracy counter-intuitively makes worse student (Cho & Hariharan, 2019;Zhu & Wang, 2021;Mirzadeh et al., 2020). An illustration is shown in Fig. 1(a-c). Considering the function space from input image to target output, the subspace consisting of functions that the teacher could fit, F T (referred to as hypothesis space in machine learning), is larger than that of the student, F S , since the teacher has larger capacity. When the solution of the teacher is out of the subspace attainable to the student (F S ), the student would fail to mimic the teacher's solution well.\nOur proposed method, TriKD, is based on online knowledge distillation and inspired by the following motivation: could we make the teacher not only accurate, but also easy to mimic? In this paper, we try to achieve this goal through providing both the online teacher and the student with a common anchor, which constrains the two models to learn to solve the target task in a small-model friendly approach. The pre-trained anchor model is of equal capacity comparing with the student, which ensures the function expressed by the anchor, f A , is within F S and easily mimickable to the student. By penalizing the function distances from the anchor to the student and especially to the teacher, the anchor pulls the search space of both the student and especially the teacher near f A . The teacher then has good chance to also lie within or close to F S , leading to easy mimicking. Meanwhile, even being restricted to a small search space, we find that the large teacher could still reveal high-accuracy solutions thanks to its high capacity. Benefited from accurate but easy-to-mimic hints, the student can then mimic the teacher more faithfully and perform better after distillation. In short, the anchor model, teacher model, and student model formulate a novel triplet knowledge distillation mechanism. An illustration is shown in Fig. 1(d).\nSince an appropriate anchor is not trivial to find, we develop a curriculum strategy: the trained student from one TriKD generation is used as the anchor of the next generation, and a new pair of randomly initialized student and teacher join in. Generation by generation, the newly trained student Every neural network with compatible input and output format corresponds to a certain point on the plane, and the color represents the expected risk, darker means lower risk. The small model is the target student and its performance is our major interest. As the large teacher model has stronger fitting ability than the student, the collection of functions it could attain, FT, is also larger than FS. 
(a) When trained independently, the teacher model may step towards local minima out of the scope that the student could well fit. (b)(c) For both online and offline distillation, the large model is likely to lie beyond the subspace attainable to student model. This makes the student, though performing better, still lie far away from the optima, leading to a sub-optimal solution. (d) In our TriKD, a pre-trained anchor model is used to pull both the teacher and student models within or near the subspace attainable to the student model, making the teacher easy to mimic. The mutual learning between teacher and student then makes the student learn a high-quality solution with better generalization.\nbecomes better and better, and its performance finally converges. Considering Fig. 1(d), this process can be interpreted as gradually moving the anchor towards local minima.\nOverall, our main contributions are as below: 1). We propose a novel triplet knowledge distillation mechanism named TriKD. TriKD makes distillation more efficient by making the teacher not only accurate by also easy to mimic.\n2). To find a proper anchor model for TriKD, we propose a curriculum strategy where student in one generation serves as the anchor of the next generation. 3). Our TriKD achieves state-of-the-art performance on knowledge distillation, and also demonstrates better generalization in tackling the overfitting issue. 4). Theoretical analysis in a statistical perspective is given to analyze the rationality of triplet distillation." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Offline Knowledge Distillation", "publication_ref": [ "b13", "b8", "b28", "b26", "b29", "b23", "b16", "b14", "b33", "b2", "b9" ], "table_ref": [], "text": "Offline knowledge distillation makes the student learn from a pre-trained and fixed teacher. Hinton et al. (2015) propose mimicking the softened class distributions predicted by large teachers. Some studies (Ding et al., 2019;Wen et al., 2019) then go a further step to explore the trade-off between the supervision of soft logits and hard task label, and others (Tian et al., 2020;Xu et al., 2020) propose to introduce auxiliary tasks to enrich the transferred knowledge. Instead of final outputs, many works exploit the intermediate features (Romero et al., 2015;Kim et al., 2018;Jin et al., 2019;Zagoruyko & Komodakis, 2017;Chen et al., 2021) as transferred knowledge. Self-distillation, pioneered by Born again (Furlanello et al., 2018), makes the teacher share the same network architecture as the student, and continuously updates the student in an iterative manner. Our TriKD is related to Born again as it also involves such iterative training, but we use it to obtain a more reliable anchor." }, { "figure_ref": [], "heading": "Online Knowledge Distillation", "publication_ref": [ "b36", "b35", "b30", "b11", "b1" ], "table_ref": [], "text": "Online knowledge distillation makes multiple randomlyinitialized models collaboratively learn from scratch. This line of research is especially significant for scenarios without available pre-trained teacher model. 
A monumental work is deep mutual learning (DML) (Zhang et al., 2018).\nDuring the training phase, DML uses a pool of randomly initialized models as the student pool, and each student is guided by the output of other peers as well as the task label.\nBased on DML, some works (Zhang et al., 2020;Yao & Sun, 2020) additionally take intermediate features into account, and others (Guo et al., 2020;Chen et al., 2020) design different mimicking targets. Our TriKD is also built upon DML as the teacher and the student are all randomly initialized and learn mutually from each other, but we additionally incorporate an anchor model to enhance distillation." }, { "figure_ref": [], "heading": "'Larger Teacher, Worse Student'", "publication_ref": [ "b4", "b20", "b38", "b4", "b38", "b20", "b20" ], "table_ref": [], "text": "Intuitively, the performance of the student should increase when the teacher has larger capacity and higher performance. However, Cho et al. (Cho & Hariharan, 2019) identify that very large teacher actually makes the student deteriorate. This phenomenon has also been witnessed by following works (Mirzadeh et al., 2020;Zhu & Wang, 2021), and has been attributed to the capacity mismatch between teacher and student. To overcome this problem, ESKD (Cho & Hariharan, 2019) proposes an early-stopping strategy, and SCKD (Zhu & Wang, 2021) automatically adjusts the distillation process through considering the gradient similarity between the teacher's and the student's distillation loss. TAKD (Mirzadeh et al., 2020) divides the distillation process into multiple stages, and introduces intermediatesized models, called teacher assistant, to bridge the capacity gap between the original teacher and student. While TAKD (Mirzadeh et al., 2020) treats mimicking difficulty as an inherent property of teacher model capacity, i.e., larger teachers are inherently harder to mimic, we believe that a given large network with fixed capacity should be able to fit both hard and easy functions, and we could make a large teacher still easy to mimic by deliberately making the function it expresses easy. Detailed comparisons between TAKD and our TriKD are provided in C.1 in Appendix." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Triplet Distillation", "publication_ref": [], "table_ref": [], "text": "Our TriKD incorporates three models: online teacher T, student S, and anchor A. Among them, the anchor supervises both the teacher and student, and the student and the teacher learn mutually from each other. At the beginning of the distillation process, the anchor is already fully-trained on the target task, while the student and the teacher are randomly initialized. During distillation, the parameters of the anchor model keep fixed, while the parameters in the other two models are optimized, which is detailed below." }, { "figure_ref": [], "heading": "GUIDANCE FROM ANCHOR TO TEACHER/STUDENT", "publication_ref": [], "table_ref": [], "text": "The anchor A is designed to constrain the student S and the teacher T to learn to solve the target task in a studentfriendly manner. For this purpose, we first ensure the function expressed by the anchor itself, f A , is easily attainable to the student. This is achieved by making the anchor model A of the same architecture and size as the student S, and already trained on the target task. 
We then try to constrain the search space of both the teacher and the student to be near f A , which is realized through penalizing the KL-divergence from the anchor to the teacher/student:\nLKL(fA, fT) = N i=1 τ 2 KL (fA(xi)||fT(xi)) ,(1)\nLKL(fA, fS) = N i=1 τ 2 KL (fA(xi)||fS(xi)) ,(2)\nwhere x denotes training sample, N is the number of training samples, τ represents temperature used to soften the output distributions. Specifically,\nf (•) (x) = σ( z (•) (x) τ ),(3)\nwhere σ denotes the softmax function, and z is logit scores output by the penultimate layer of the neural network. In this way, the teacher is prevented from solutions that are far from the anchor, and thus has good chance to lie within or close to F S . It is then reasonable to expect that the function expressed by the teacher, f T , would be a relatively easy mimicking target to the student. We will show some experiment results supportive of this expectation in 4.3, which demonstrate that the constraint from the anchor does make mimicking easier, as teacher-student behavior similarity becomes substantially higher." }, { "figure_ref": [], "heading": "MUTUAL DISTILLATION BETWEEN TEACHER AND", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "STUDENT", "publication_ref": [ "b36" ], "table_ref": [], "text": "When not considering the anchor A, the rest part of TriKD follows the standard online knowledge distillation method DML (Zhang et al., 2018). Specifically, the student and the online teacher not only learn from the hard labels, but also mutually draw lessons from the training experiences of each other. From the student perspective, the loss regarding hard label is the standard cross-entropy loss L ce (f S ), defined as:\nLce(fS) = - N i=1 K k=1 y k i log(f k S (xi)),(4)\nK is the number of classes, y is hard classification label. Furthermore, the student also learns from the teacher:\nLKL(fT, fS) = N i=1 τ 2 KL (fT(xi)||fS(xi)) .(5)\nCombining with the constraint from anchor, the complete loss function for the student is:\nLS = w1Lce(fS) + w2LKL(fT, fS) + w3LKL(fA, fS). (6)\nSimilarly, the loss function for the teacher is in the symmetric form:\nLT = w4Lce(fT) + w5LKL(fS, fT) + w6LKL(fA, fT), (7\n)\nwhere w is the weight of each loss. For L ce , τ is fixed to 1, whereas for L KL , τ is a hyper-parameter to tune.\nOur TriKD is based on online knowledge distillation, and uses an additional anchor to make the teacher easy to mimic by constraining the search space. On the other hand, we hope the teacher, with large capacity and correspondingly strong learning ability, could still find a low-expected-risk solution to accurately guide the student, even though its search space is constrained by the anchor. Note that here exists a potential risk that if the constraint from the anchor is too strong (w 3 and w 6 are too large), the performance of the teacher may be upper-bounded by the anchor, thus leading to easy but inaccurate teacher solutions. However, experiments in 4.3 and 4.4 show that with proper hyper-parameters, the teacher can be both easy (4.3) and accurate (4.4) simultaneously. This means that low mimicking difficulty of the teacher could be attained even when the constraint from anchor is relatively mild, and the constraint would not barrier the accuracy of the teacher until its grows much stronger.\nThere is thus a range of constraint strength where the merits of both low-mimicking-difficulty and low-expected-risk teacher could be simultaneously enjoyed. 
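For clarity, a minimal sketch of how the student- and teacher-side objectives in Eq. (6) and Eq. (7) could be computed is given below. It assumes a PyTorch-style implementation with logits from the anchor, teacher, and student on the same batch; the loss weights, the temperature, and the detaching of mimicking targets are illustrative assumptions rather than the exact training recipe.

```python
# A minimal PyTorch-style sketch of the student and teacher objectives in Eqs. (6)
# and (7). The logits of the anchor, teacher, and student on the same batch are
# assumed as inputs; the loss weights, temperature, and detaching of mimicking
# targets are illustrative assumptions rather than the paper's exact recipe.
import torch.nn.functional as F


def kd_kl(target_logits, learner_logits, tau):
    """tau^2 * KL(softmax(target/tau) || softmax(learner/tau)), as in Eqs. (1)-(5)."""
    p_target = F.softmax(target_logits / tau, dim=1)
    log_p_learner = F.log_softmax(learner_logits / tau, dim=1)
    return F.kl_div(log_p_learner, p_target, reduction="batchmean") * tau * tau


def triplet_losses(logits_a, logits_t, logits_s, labels, tau=4.0,
                   w1=1.0, w2=1.0, w3=1.0, w4=1.0, w5=1.0, w6=1.0):
    # Eq. (6): the student learns from hard labels, the online teacher, and the anchor.
    loss_s = (w1 * F.cross_entropy(logits_s, labels)
              + w2 * kd_kl(logits_t.detach(), logits_s, tau)
              + w3 * kd_kl(logits_a.detach(), logits_s, tau))
    # Eq. (7): the teacher is supervised symmetrically by labels, the student, and the anchor.
    loss_t = (w4 * F.cross_entropy(logits_t, labels)
              + w5 * kd_kl(logits_s.detach(), logits_t, tau)
              + w6 * kd_kl(logits_a.detach(), logits_t, tau))
    return loss_s, loss_t
```

During training, loss_s and loss_t are back-propagated through the student and the teacher respectively, while the anchor's parameters stay frozen.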
With the aforementioned merits, the student could benefit substantially more from TriKD than existing distillation methods, and finally become more accurate than existing models." }, { "figure_ref": [ "fig_1" ], "heading": "Curriculum learning for Proper Anchor", "publication_ref": [], "table_ref": [], "text": "Intuitively, the selection of anchor model affects the performance of TriKD, and it is thus of great significance to find a proper anchor. However, such an appropriate anchor is not trivial to find. We therefore propose a curriculum strategy to achieve this goal.\nThe curriculum process is composed of a sequence of generations, each of which is a triplet distillation process as described in 3.1. In curriculum learning, the student of the gth generation will become the anchor of the (g + 1)th generation, denoted as:\nA g+1 = S * g ,(8)\nwhere S * g is the student trained in the gth generation. The student and the teacher are randomly re-initialized at the beginning of each generation. We empirically find that the performance of the student tend to raise within the first several generations; it then converges and more generations would not make further improvement. We can then take the student with converged performance as the final model, which is generally with better performance. Fig. 2 shows the whole pipeline of the proposed method.\nFor the first generation, as there is no available lastgeneration student to serve as the anchor, we simply pretrain the anchor model with only online distillation between it and the teacher. We also try to use a trivial one only trained with label, and find it achieves comparable performance but with slower convergence. Therefore, in this paper we use the student trained with vanilla online distillation as the anchor for generation 1, and we refer to the vanilla online distillation process itself as generation 0." }, { "figure_ref": [], "heading": "Theoretical Analysis", "publication_ref": [ "b18", "b18" ], "table_ref": [], "text": "We explain why TriKD could improve knowledge distillation in a formal context of the risk minimization decomposition. Lopez-Paz et al. (Lopez-Paz et al., 2015) decomposed the excess risk of the student trained only with hard label as follows:\nR(fS) -R(fR) ≤ O |FS|C √ n + ϵ1,(9)\nwhere R(•) denotes expected risk, f S is the student function in function class F S , f R is the real (target) function. The O(•) term is the estimation error, and ϵ term is approximation error. | • | C is some appropriate capacity measurement of function class. For distillation, the teacher learns from the target function, leading to the following excess risk:\nR(fT) -R(fR) ≤ O |FT|C n α + ϵ2,(10)\nand the student learns from the teacher, leading to the following excess risk:\nR(fS) -R(fT) ≤ O |FS|C n β + ϵ3,(11)\nwhere α, β range between [ 1 2 , 1], higher value means easier problem and faster learning. As analyzed in (Lopez-Paz et al., 2015), the effectiveness of vanilla knowledge distillation is theoretically ensured by the following inequality:\nO |FT|C n α +O |FS|C n β +ϵ2+ϵ3 ≤ O |FS|C √ n +ϵ1. (12)\nFurthermore, if the left side of Eq. ( 12) decreases, the excess risk of the student becomes lower, meaning better performance. Next, we show that introducing the anchor model A lowers the left side of Eq. 
( 12).\nConsidering vanilla online knowledge distillation, its loss function is:\nL online =w1Lce(fS) + w2LKL(fT, fS) + w4Lce(fT) + w5LKL(fS, fT).(13)\nTriKD can be equivalently recognized as minimizing L online , but with additional inequality constraints coming from the anchor:\nmin f S ,f T L online , s.t. LKL(fA, fS) < δ, LKL(fA, fT) < δ,(14)\nwhere L KL serves as a function distance metric to constrain the search space of the teacher and the student; δ is the distance threshold. Rather than directly solving Eq. ( 14), we can instead add penalty terms to the loss function to substitute the hard constraints, making the optimization much easier. We then get Eq. ( 6) and Eq. ( 7), which we actually optimize in practice. Considering Eq. ( 14), it means conducting the vanilla online distillation, but with constraints that shrink the search space of teacher T from the entire F T to its subset F ′ T :\nF ′ T = {f |f ∈ FT, LKL(fA, fT) < δ},(15)\nand similarly shrink the search space of student S from F S to its subset F ′ S :\nF ′ S = {f |f ∈ FS, LKL(fA, fS) < δ}.(16)\nThe student and especially the teacher are then asked to find a solution within the shrinked search space F ′ S and F ′ T . Following the left side of Eq. ( 12), the risk bound for our proposed TriKD is:\nO |F ′ T |C n α ′ + O |F ′ S |C n β ′ + ϵ ′ 2 + ϵ ′ 3 .(17)\nFirst, as\nF ′ S , F ′ T are subsets of F S , F T , we have |F ′ S | C ≤ |F S | C , |F ′ S | C ≤ |F S | C .\nNext, recall that TriKD is built upon two empirically-validated expectations: 1) the teacher would be easy to mimic if its search space is near f A (i.e. it is taken from F ′ T rather than F T ), and 2) even the search space is constrained to F ′ T , the teacher could still find a lowexpected-risk solution therein to provide accurate enough guidance. The first one implies that β ′ > β, i.e. the mimicking from student to teacher is easier in our case. The second one implies that\nO |F ′ T |C n α ′ + ϵ ′ 2 ≈ O |FT|C n α + ϵ2,(18)\nindicating the teacher would present similar expected risk either with or without anchor. Now we have analyzed all the involved variables except the ϵ 3 term, and they all support that the bound in Eq. ( 17) is lower than the left side of Eq. ( 12). Finally, considering ϵ 3 term, it signifies the approximation error from the student search space F S to the teacher function f T ∈ F T :\nϵ3 = inf f ∈F S R(f ) -R(fT).(19)\nAccording to Eq. ( 18), the difference in the R(f T ) term will be minor between TriKD and standard distillation; For the infimum term, in TriKD F ′ S replaces F S , and since F ′ S is a subset of F S , its infimum should be higher, making ϵ ′ 3 ≥ ϵ 3 . However, it is unclear how large the difference is because the infimum on F ′ S could still be very low. More importantly, the impact of the ϵ 3 term to the total distillation process is limited, because the expected risk of real models in practice are far from the best one they could theoretically attain. Therefore, the influence of the ϵ 3 term should be dwarfed by that of the other terms. Combining all the aforementioned changes together, the bound in Eq. ( 17) is lower than the left side of Eq. ( 12), signifying better distillation." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b26" ], "table_ref": [], "text": "In this section, we empirically validate our proposed methods from five aspects. 
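(For reference when reading the experiments, the generation-level curriculum of Section 3.2 can be outlined as below. This is a schematic sketch: the constructor callables, the `train_one_generation` routine, and the number of generations are illustrative placeholders, not the authors' code.)

```python
import copy

def trikd_curriculum(make_student, make_teacher, train_one_generation,
                     loader, num_generations=3):
    # Generation 0: vanilla online (mutual) distillation without an anchor,
    # used only to obtain the first anchor (see Sec. 3.2).
    anchor = train_one_generation(anchor=None, teacher=make_teacher(),
                                  student=make_student(), loader=loader)
    for g in range(1, num_generations + 1):
        # Student and teacher are randomly re-initialized each generation;
        # both are pulled toward the frozen anchor during triplet distillation.
        student = train_one_generation(anchor=anchor, teacher=make_teacher(),
                                       student=make_student(), loader=loader)
        anchor = copy.deepcopy(student)   # Eq. (8): A_{g+1} = S*_g
    return student                        # student of the final generation
```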
In 4.1 we compare TriKD with stateof-the-art knowledge distillation methods on image classification to show the general effectiveness of the proposed Table 1. Compare the top-1 accuracy (%) of different KD methods on CIFAR100. Bold and underline denote the best and the second best results, respectively. For methods from KD to CRD, we quote the results in Tian et al. (Tian et al., 2020). For Review to DKD, we show the results reported by their original authors. For DML, we report our reimplemented results. \"(•)\" means the result was not reported by the authors and we re-run their provided codes. Note that DML and TriKD do not involve pre-trained teacher model. method. In 4.2, we validate the proposed method on the finegrained problem of face recognition, with a special focus on the method's performance when confronting overfitting. In 4.3 and 4.4, we justify the rationality of our motivation. Specifically, in 4.3, we show TriKD makes the teacher an easier mimicking target from perspective of teacher-student behavior similarity; in 4.4 we show the performance of the teacher is not limited by the small volume of F ′ T . In 4.5, we conduct ablation studies to dissect the effect of each involved component. Detailed descriptions of experiment settings, as well as additional experiments and ablations, are provided in the Appendix." }, { "figure_ref": [], "heading": "Knowledge Distillation on Image Classification", "publication_ref": [ "b17", "b6", "b17", "b13", "b2", "b23", "b0", "b33", "b26", "b6" ], "table_ref": [ "tab_0" ], "text": "We compare TriKD with state-of-the-art knowledge distillation methods on two widely-used image classification benchmarks: CIFAR100 (Krizhevsky et al., 2009) and Im-ageNet (Deng et al., 2009). Given a pair of model architectures including one large and one small, we choose the small model as the anchor and as the student, and choose the big model as the teacher.\nCIFAR100 (Krizhevsky et al., 2009): results are shown in Table 1. TriKD averagely raises the student's performance by 3.84% comparing with the non-distillation baseline, and performs significantly better than vanilla KD (Hinton et al., 2015), with an average improvement by 2.16%. TriKD also outperforms state-of-the-art methods on all teacher-student pairs. Note that TriKD only uses the logits for knowledge transfer, but achieves better performance than those involving more complex information like intermediate feature map (Chen et al., 2021;Romero et al., 2015;Ahn et al., 2019), attention map (Zagoruyko & Komodakis, 2017), instance similarity (Tian et al., 2020), etc ImageNet (Deng et al., 2009): to validate the efficacy of our method on large-scale datasets, we also compare TriKD with other methods on ImageNet. As shown in Table 2, TriKD also outperforms other methods, showing that the proposed triplet distillation mechanism could steadily produce highquality models regardless of dataset volume." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Knowledge Distillation on Face Recognition", "publication_ref": [ "b3", "b31", "b13", "b36", "b34" ], "table_ref": [], "text": "We validate our proposed TriKD framework on the fine-grained problem of face recognition, with Mobile-FaceNet (Chen et al., 2018) as the main architecture. We use CASIA-WebFace (Yi et al., 2014) We first investigate the performance of the student w.r.t. its capacity. The 150k CASIA-WebFace subset is used for this experiment. The results are shown in Fig. 3. 
The Baseline(L) with only task label loss performs poorly, and starts in an underfitting state and then grows to an overfitting state. In contrast, our TriKD not only performs better than the baseline by a large margin in terms of all model sizes (even up to 10% in G5, MobileFaceNet 1.125X), but also overcomes the overfitting issue, making the performance consistently raise as model capacity grows. Ablative results are also shown in Fig. 3, indicating both the teacher and the anchor are indispensable. We defer detailed analysis of this ablation study to Sec.4.5.\nWe further compare TriKD with the existing methods including KD (Hinton et al., 2015), DML (Zhang et al., 2018), and BYOT (Zhang et al., 2019) " }, { "figure_ref": [], "heading": "Teacher-Student Behavior Similarity", "publication_ref": [ "b22", "b5" ], "table_ref": [ "tab_4" ], "text": "We introduce the anchor A in hopes that it could lower the difficulty for the student to mimic the teacher. If it does work as expected, we should see an increase in teacher-student behavior similarity because the student would mimic the teacher more faithfully. Here we conduct experiments to validate this phenomenon.\nWe show the KL-divergence between outputs of the student and the teacher trained on CIFAR100. For in-domain data, we report the results on CIFAR100. For out-of-domain data, where the student is more likely to act differently from the teacher, we report the results on SVHN (Netzer et al., 2011) and STL10 (Coates et al., 2011). Table 4 shows the results. Compared with offline knowledge distillation, online distillation has a huge advantage in increasing teacher-student behavior similarity. On the other hand, our TriKD steadily shows great improvement upon online distillation, showing that the anchor does make the mimicking easier. The increase in teacher-student behavior similarity shows that the anchor model successfully drives the large teacher into easy-to-mimic solutions, supporting the expectation in 3.1.1." }, { "figure_ref": [], "heading": "Performance of Teacher after TriKD", "publication_ref": [ "b36", "b26" ], "table_ref": [ "tab_5" ], "text": "In TriKD, the search space of the teacher is constrained by the anchor, and the teacher is expected to find a high-quality solution within the designated search space. This implies our expectation that the anchor would not barrier the teacher in chasing good solutions. Here we investigate the performance of teacher after TriKD to check if the expectation holds. The results are shown in Table 6. The teacher ac- tually outperforms its trivially-trained baseline, and also performs better than online distillation in most cases. The result indicates that the teacher is not encumbered by the constraint from anchor, and thus with TriKD, we can simultaneously enjoy the merits of an easy-to-mimic and accurate teacher model. Note that existing works have already shown that online knowledge distillation would make both the large model (teacher) and the small model (student) improve (Zhang et al., 2018). However, it is also shown in (Tian et al., 2020) that after switching from offline distillation to online distillation, the performance gain of the teacher could hardly trigger performance gain of the student. Our TriKD, in contrast, makes the accurate teacher model also easy to mimic, and thus the student could benefit more from distillation." 
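As a concrete reference for how the behavior-similarity numbers above can be computed, the following sketch averages the teacher-to-student KL-divergence over a held-out loader; the loader, model objects, and temperature are assumptions of this illustration rather than the exact evaluation script.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def behavior_similarity(teacher, student, loader, tau=1.0):
    # Average KL(teacher || student) over a dataset; lower values mean the
    # student mimics the teacher more faithfully (cf. Tables 4 and 5).
    teacher.eval(); student.eval()
    total, count = 0.0, 0
    for images, _ in loader:
        p_t = F.softmax(teacher(images) / tau, dim=1)
        log_p_s = F.log_softmax(student(images) / tau, dim=1)
        kl = (p_t * (p_t.clamp_min(1e-12).log() - log_p_s)).sum(dim=1)
        total += kl.sum().item()
        count += images.size(0)
    return total / count
```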
}, { "figure_ref": [ "fig_2" ], "heading": "Ablation study", "publication_ref": [ "b36", "b9" ], "table_ref": [ "tab_6" ], "text": "The proposed triplet distillation consists of three roles, i.e. the teacher T, and target student S and the Anchor A. From the student perspective, it is supervised by T, A and task label L. Here we investigate the influence of each role.\nFor CIFAR100, results are shown in Table 7. The L + T setting is similar to DML (Zhang et al., 2018). The L + A setting is similar to Born again (Furlanello et al., 2018), where the first generation anchor is a trivially trained model. In contrast, the first generation anchor in L + A * is trained with L + T. For both conditions we report the result after three iterative generations. The result shows that both A and T could boost the performance of the target student when introduced individually. However, simply combining these two methods through making the student of L + T the first-generation anchor of L + A brings minor improvement. Our TriKD, in contrast, further improves the performance of the target student.\nFor CASIA-Webface, results are shown in Fig. 3(a). The Baseline (L) with only task label loss starts in an underfitting state and then grows to an overfitting state. Then, adding only the anchor L + A and adding only the teacher L + T both bring impressive improvement, illustrating the effectiveness of each role. When including all three roles, further improvement is obtained, clearly illustrating the necessity and effectiveness of the three different roles. We refer readers to Appendix for more ablative experiments." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4" ], "heading": "Conclusion", "publication_ref": [ "b19", "b21", "b10", "b10" ], "table_ref": [], "text": "This work aims to address the problem of the student's limited ability and the unattainable optimization goal of the large teacher. We propose a novel triplet distillation mechanism, TriKD, to solve the mimicking difficulty problem. Besides teacher and student, we introduce a third model called anchor to make the teacher accurate but easy to mimic. To obtain a high-quality anchor, a curriculum strategy is proposed, which allows the student benefits from accurate but easy-to-mimic hints and obtain good performance, then it can be used as the new anchor for new students. Theoretical analysis in the context of risk minimization decomposition supports the rationality of our method. Furthermore, our TriKD achieves state-of-the-art performance on knowledge distillation and also demonstrates better generalization in tackling the over-fitting issue. In the future, we will explore how we could more efficiently find a proper anchor, and try to extend TriKD to more tasks.\nAccording to Proposition 3 in (Menon et al., 2020), for constant C > 0 and any student network S, the risk in vanilla knowledge distillation could be bounded as:\nE ( R(fS, D) -R(fS)) 2 ≤ 1 N V [L(fT(x), fS(x)] + C E ∥fT(x) -fR(x)∥ 2 2 , (20\n)\nwhere E denotes the expectation, V denotes the variance, R(•, D) is empirical risk on dataset D. L is the distillation loss, typically the KL-Divergence loss.\nIn TriKD, there are two types of supervision for the student, i.e. that from the teacher (f T ) and the anchor model (f A ), we apply two coefficients (w T , w A ) to combine them, and w T + w A = 1. Following Eq. 
( 20), the variance-bias decomposition of TriKD is:\n[l]E ( R(f S , D) -R(f S )) 2 ≤ 1 N V [L((w T f T (x) + w A f A (x)), f S (x)] + C (E [∥((w T f T (x) + w A f A (x)) -f R (x)∥ 2 ]) 2 .\n(21) This error bound establishes a fundamental variance-bias trade-off when performing distillation. Specifically, they show the fidelity of the distilled risk's approximation to the expected one mainly depends on two factors: how variable the loss is given a random instance (the variance term), and how well the mimicking target w T f T (x) + w A f A approximates the real output f R on average (the bias term). Our goal is to analyze how arranging the teacher model T and the anchor model A could lower the bound in Eq. ( 21).\nFor the Variance part, as shown in Fig. 4, we conduct experiments to explore how to lower it. There are basically four valid combinations, i.e. M0: S learns from A with vanilla distillation, M1: S learns from both A and T with vanilla offline distillation, M2: S learns from A with offline distillation and from T with online distillation, M3: T learns from A with vanilla distillation and S learns from T with online learning, M4: both S and T learns from A with vanilla distillation and S learns from T with online learning. Generally, we consider two main factors: the way model S learns from model T -vanilla offline distillation or online mutual distillation, and whether model T learns from model A. Fig. 4(a) reveals that online mutual learning makes important contribution to decrease the variance, and M4, which is used in TriKD, can gain lower variance when the size of model A is small comparing with T. Furthermore, we compare M4 with vanilla distillation (M0 and M1) as shown in Fig. 4(b), M4 can get the lowest variance in all the experiments settings. To sum up, the above experiments show that arranging the anchor A and the teacher T as in M4 and making A small can greatly help reduce the variance.\nFor the Bias part, it follows:\nC E ∥((wTfT(x) + wAfA(x)) -fR(x)∥ 2 2 ≤ C (E [wT∥(fT(x) -fR(x)∥2 + wA∥fA(x) -fR(x)∥2]) 2 . (22\n)\nThe second line is obtained based on triangular inequality. Minimizing this term means that we should make the introduced teacher model T as well as the anchor model A approximate the Bayes class-probability distribution f R better. In detail, it means the expected calibration error (ECE) (Naeini et al., 2015) of the two models should be small. In (Guo et al., 2017), the authors analysed the calibration measured by ECE in terms of different aspects, such as network depth, width, Batch Normalization and weight decay. The experiments in (Guo et al., 2017) showed that increasing width of a network will make the ECE first rise and then fall. To make it clearer, we conduct this experiment again in terms of the effect of network width on face recognition task (Webface) and image classification task (CI-FAR100), and all the models are trained enough epochs to ensure the model converges sufficiently. The backbones are MobilefaceNet and Resnet18 respectively, we applied various width including 0.5X, 1.0X, 2.0X, 3.0X, 4.0X. As shown in Fig. 5, we observe that increasing the network width positively affect model calibration. As a result, we can minimize the bias term through making the model T wider. The anchor A, however, faces a variance-bias tradeoff: as shown in the variance part, small anchor tend to benefit lowering the variance, but it could degrade the bias, and vice versa. 
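For completeness, the expected calibration error used in the bias analysis above can be estimated with a standard equal-width binning scheme, sketched below; the 15-bin choice and the data loader are assumptions of this illustration, not necessarily the exact protocol behind Fig. 5.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def expected_calibration_error(model, loader, n_bins=15):
    # ECE (Naeini et al., 2015): weighted average of |accuracy - confidence|
    # over equal-width confidence bins.
    model.eval()
    confidences, correct = [], []
    for images, labels in loader:
        probs = F.softmax(model(images), dim=1)
        conf, pred = probs.max(dim=1)
        confidences.append(conf)
        correct.append(pred.eq(labels).float())
    conf = torch.cat(confidences)
    corr = torch.cat(correct)
    bins = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.float().mean() * (corr[mask].mean() - conf[mask].mean()).abs()
    return ece.item()
```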
In this paper, we keep the anchor A small (the same size as the student) in favor of low variance, and we leave further exploration of the trade-off to future work.\nCombining the above two parts, we can introduce a large model T to M4, and keep the anchor A small, which forms our proposed TriKD." }, { "figure_ref": [], "heading": "B. Experimental Details", "publication_ref": [ "b17", "b12", "b32", "b24", "b6", "b31", "b7", "b3" ], "table_ref": [], "text": "CIFAR100 (Krizhevsky et al., 2009) dataset consists of 60K images from 100 categories with size of 32 × 32. In the standard protocol, 50k images are used for training and 10K for testing. We choose CIFAR-style resnet (He et al., 2016), wide-resnet (Zagoruyko & Komodakis, 2016) and vgg (Simonyan & Zisserman, 2014) as model architecture. We train all the models for 240 epochs. The initial learning rate is 0.1 and is decayed by a factor of 10 at 150, 180, and 210 epochs, respectively. We run experiments on one Tesla-V100 GPU with a batch size of 128. An SGD optimizer with 0.0005 weight decay and 0.9 momentum is adopted. For all the experiments, we set w 1 = w 2 = w 3 = w 4 = w 5 = w 6 = 1 at the beginning. After epoch 150, where the learning rate decays for the first time, we decrease w 1 to 0.1 and increase w 2 to 10. For all experiments except vgg, the temperature τ is set to 1 for L KL ; for vgg, we set it to 4.\nImageNet (Deng et al., 2009) consists of 1.28 million training images and 50k validation images from 1000 categories. Following the mainstream settings, all methods are trained on the entire training set and evaluated on the single-crop validation set. The input image resolution is 224 × 224 for both training and evaluation. We use resnet34 as teacher and resnet18 as student. We train all the models for 100 epochs. The initial learning rate is 0.1 and is decayed by a factor of 10 at 30, 60, and 90 epochs, respectively. We run experiments on one Tesla-V100 GPU with a batch size of 256. An SGD optimizer with a 0.0001 weight decay and 0.9 momentum is adopted. Due to limited resources, we simply set w 1 = w 2 = w 3 = w 4 = w 5 = w 6 = 1, and τ = 1. (Yi et al., 2014) et al., 2016) dataset is used for testing, which contains 1M images of 60k identities as the gallery set and 100k images of 530 identities from FaceScrub as the probe set. For better stability of training, Arcface loss (Deng et al., 2019) et al., 2018) in our experiments. Following the work of AM-Softmax loss, the faces are aligned and cropped out with size of 112 × 96. For optimization, SGD with momentum 0.9 is used and the batch size is 256. All the models are trained with 40k iterations. The learning rate starts from 0.1 and linearly reduces to 0. The setting of weight decay keeps the same as (Chen et al., 2018)." }, { "figure_ref": [], "heading": "CASIA-WebFace", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C. More experiments C.1. Comparing with TAKD", "publication_ref": [ "b20", "b38", "b4", "b20", "b20", "b4", "b38", "b13", "b20" ], "table_ref": [ "tab_9" ], "text": "Large models tend to generalize better. However, existing studies (Mirzadeh et al., 2020;Zhu & Wang, 2021;Cho & Hariharan, 2019) have shown that in knowledge distillation, the performance of the student would indeed deteriorate when the capacity of the teacher increases. 
To boost the performance of the student when the capacity gap between the teacher and the student is large, TAKD (Mirzadeh et al., 2020) proposed to bridge the gap by introducing intermediate-sized models named teacher assistant. Both TAKD and our TriKD attempt to reduce the difficulty for the student to mimic the teacher. However, TAKD treats learning difficulty as an inherent property of teacher model capacity, i.e. larger teachers are inherently harder, and smaller teachers are easier. In contrast, we believe that a given network architecture with fixed capacity should be able to fit both hard and easy functions, and we could make a large teacher still easy to mimic by deliberately making the function it expresses easy; the reason why large teacher usually fails in existing distillation frameworks is that the teacher would spontaneously learn to express sophisticated functions when trained without constraint. This is easy to understand when considering the teacher model's function identity: with larger capacity, the larger teacher should be able to easily fit the same function as a smaller teacher does, and thus in distillation a student supervised by a larger teacher should at least perform no worse than supervised by a smaller one. Here we also provide an experiment to compare our TriKD with TAKD. The experiment is conducted on CIFAR100. For fair comparision, following TAKD, we use resnet8 as the student and resnet110 as the teacher, and we use stochastic gradient descent with Nesterov momentum of 0.9 and learning rate of 0.1 for 150 epochs. we decrease learning rate to 0.01 on epoch 80 and 0.001 on epoch 120. Weight decay is set to 0.0001. The result is shown in Table 8. Its shows that our TriKD consistently outperforms TAKD with different teacher assistant size.\nWe further emphasize that our proposed TriKD is a general knowledge distillation method rather than specially designed for situations where the capacity gap between the teacher and the student is large, like (Mirzadeh et al., 2020;Cho & Hariharan, 2019;Zhu & Wang, 2021). The mimicking difficulty is a ubiquitous problem in knowledge distillation rather than exclusive to teacher-student pairs with extremely large capacity gap. Experiments also show that this method could greatly benefit the student even though the teacher is relatively small. (Hinton et al., 2015) and TAKD (Mirzadeh et al., 2020) " }, { "figure_ref": [], "heading": "C.2. Additional Results on Image Classification", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "We provide some additional results with more architectures on image classification. For the experiments in this section, we set the teacher to be 2 times as wide as the student. For experiments on ImageNet, all methods are trained for 120 epochs. For the hyper-parameters, SGD with momentum 0.9 is used for optimization and the batch size is 256. The learning rate starts from 0.1 and linearly reduces to 0. The weight decay set as 5e -4 for ShuffleNet V2, 1e -4 for ResNet18. For experiments on CIFAR100, all models are trained for 200 epochs. As for the hyper-parameters, SGD with momentum 0.9 is used for optimization and the batch size is 128. The learning rate starts from 0.1 and is multiplied by 0.1 at 60, 120 and 180 epochs. The weight decay is set as 5e -4. Table 9 shows the result." }, { "figure_ref": [], "heading": "C.3. 
Impact of Teacher Size", "publication_ref": [], "table_ref": [], "text": "The teacher, a large network with high fitting ability, represents the potential upper limit of student's performance.\nWithout losing flexibility, it can be set with any desired model size no less than the target model size. Table 10 shows the results of our TriKD with the teacher in different model size, i.e. 0.5×, 1.0×, 2.0× of the base network size. The experiment is conducted on face recognition and the network architecture is MobileFaceNet. As can be seen, our learning mechanism is stable w.r.t. different size of the teacher models, which can flexibly adapt to different training resources and better meet the trade-off between computational cost and performance. More specifically, larger teacher T induce better model S, which is consistent with our motivation and demonstrates that larger model T has an edge in exploring generalizable solutions. " }, { "figure_ref": [ "fig_5" ], "heading": "C.4. Iterate for different number of generations", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "As mentioned in 3.2, we adopt a curriculum strategy to obtain an appropriate anchor model for TriKD. Here we investigate how many generations are needed for this process.\nThe experiment is conducted on CIFAR100. Table 11 shows the results. Generation 0, as mentioned in 3.2, is a plain online distillation process without using an anchor. The result shows that it generally takes 1 to 2 generations (generation 0 not included) for the process to converge, and at that time the student generally reaches a good performance. We empirically find that the first and the second generations are the most likely to bring in improvement, and the following generations tend to bring in less, if any. Specifically, we attribute the improvement in the first and later generations to different mechanisms. The first generation's improvement is due to the introduction of the triplet relationship, and the later generations improves the student through using more accurate anchor; the former is qualitative, and the latter is majorly quantitative. As shown in Fig. 6, from a teacherstudent behavior similarity perspective, the KL-divergence between the teacher and the student drops dramatically after generation 1, but then drops slowly in the following generations. It means that it is the triplet relationship, rather than the curriculum process, that makes the mimicking easier. On the other hand, from the variance-bias perspective (see A), the curriculum learning can be identified as a means to gradually decrease the bias of the anchor." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Variance and Bias Analysis", "publication_ref": [], "table_ref": [], "text": "In this section, we empirically analyze how TriKD works from a variance-bias perspective. We will show that 1) TriKD reduces the variance of the target student, and 2) a large teacher induces a better-calibrated distribution for the student to mimic, leading to lower bias. We hope the analysis in this section could provide some extra insight." } ]
[ { "authors": "S Ahn; S X Hu; A Damianou; N D Lawrence; Z Dai", "journal": "", "ref_id": "b0", "title": "Variational information distillation for knowledge transfer", "year": "2019" }, { "authors": "D Chen; J.-P Mei; C Wang; Y Feng; C Chen", "journal": "", "ref_id": "b1", "title": "Online knowledge distillation with diverse peers", "year": "2020" }, { "authors": "P Chen; S Liu; H Zhao; J Jia", "journal": "", "ref_id": "b2", "title": "Distilling knowledge via knowledge review", "year": "2021" }, { "authors": "S Chen; Y Liu; X Gao; Z Han", "journal": "", "ref_id": "b3", "title": "Mobilefacenets: Efficient cnns for accurate real-time face verification on mobile devices", "year": "2018" }, { "authors": "J H Cho; B Hariharan", "journal": "", "ref_id": "b4", "title": "On the efficacy of knowledge distillation", "year": "2019" }, { "authors": "A Coates; A Ng; H Lee", "journal": "", "ref_id": "b5", "title": "An analysis of single-layer networks in unsupervised feature learning", "year": "2011" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "", "ref_id": "b6", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "J Deng; J Guo; N Xue; S Zafeiriou", "journal": "", "ref_id": "b7", "title": "Arcface: Additive angular margin loss for deep face recognition", "year": "2019" }, { "authors": "Q Ding; S Wu; H Sun; J Guo; S.-T Xia", "journal": "", "ref_id": "b8", "title": "Adaptive regularization of labels", "year": "2019" }, { "authors": "T Furlanello; Z C Lipton; M Tschannen; L Itti; A Anandkumar", "journal": "", "ref_id": "b9", "title": "Born again neural networks", "year": "2018" }, { "authors": "C Guo; G Pleiss; Y Sun; K Q Weinberger", "journal": "", "ref_id": "b10", "title": "On calibration of modern neural networks", "year": "2017" }, { "authors": "Q Guo; X Wang; Y Wu; Z Yu; D Liang; X Hu; P Luo", "journal": "", "ref_id": "b11", "title": "Online knowledge distillation via collaborative learning", "year": "2020" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b12", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "G Hinton; O Vinyals; J Dean", "journal": "", "ref_id": "b13", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "X Jin; B Peng; Y Wu; Y Liu; J Liu; D Liang; J Yan; X Hu", "journal": "", "ref_id": "b14", "title": "Knowledge distillation via route constrained optimization", "year": "2019" }, { "authors": "I Kemelmacher-Shlizerman; S M Seitz; D Miller; E Brossard", "journal": "", "ref_id": "b15", "title": "The megaface benchmark: 1 million faces for recognition at scale", "year": "2016" }, { "authors": "J Kim; S Park; N Kwak", "journal": "", "ref_id": "b16", "title": "Paraphrasing complex network: Network compression via factor transfer", "year": "2018" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b17", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "D Lopez-Paz; L Bottou; B Schölkopf; V Vapnik", "journal": "", "ref_id": "b18", "title": "Unifying distillation and privileged information", "year": "2015" }, { "authors": "A K Menon; A S Rawat; S J Reddi; S Kim; S Kumar", "journal": "", "ref_id": "b19", "title": "Why distillation helps: a statistical perspective", "year": "2020" }, { "authors": "S I Mirzadeh; M Farajtabar; A Li; N Levine; A Matsukawa; H Ghasemzadeh", "journal": "", "ref_id": "b20", "title": "Improved knowledge distillation via teacher 
assistant", "year": "2020" }, { "authors": "M P Naeini; G Cooper; M Hauskrecht", "journal": "", "ref_id": "b21", "title": "Obtaining well calibrated probabilities using bayesian binning", "year": "2015" }, { "authors": "Y Netzer; T Wang; A Coates; A Bissacco; B Wu; A Y Ng", "journal": "", "ref_id": "b22", "title": "Reading digits in natural images with unsupervised feature learning", "year": "2011" }, { "authors": "A Romero; N Ballas; S E Kahou; A Chassang; C Gatta; Y Bengio", "journal": "", "ref_id": "b23", "title": "Fitnets: Hints for thin deep nets", "year": "2015" }, { "authors": "K Simonyan; A Zisserman", "journal": "", "ref_id": "b24", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "S Stanton; P Izmailov; P Kirichenko; A A Alemi; A G Wilson", "journal": "", "ref_id": "b25", "title": "Does knowledge distillation really work", "year": "2021" }, { "authors": "Y Tian; D Krishnan; P Isola", "journal": "", "ref_id": "b26", "title": "Contrastive representation distillation", "year": "2020" }, { "authors": "F Wang; J Cheng; W Liu; H Liu", "journal": "IEEE Signal Processing Letters (SPL)", "ref_id": "b27", "title": "Additive margin softmax for face verification", "year": "2018" }, { "authors": "T Wen; S Lai; X Qian", "journal": "", "ref_id": "b28", "title": "Preparing lessons: Improve knowledge distillation with better supervision", "year": "2019" }, { "authors": "G Xu; Z Liu; X Li; C C Loy", "journal": "", "ref_id": "b29", "title": "Knowledge distillation meets self-supervision", "year": "2020" }, { "authors": "A Yao; D Sun", "journal": "", "ref_id": "b30", "title": "Knowledge transfer via dense cross-layer mutual-distillation", "year": "2020" }, { "authors": "D Yi; Z Lei; S Liao; S Z Li", "journal": "", "ref_id": "b31", "title": "Learning face representation from scratch", "year": "2014" }, { "authors": "S Zagoruyko; N Komodakis", "journal": "", "ref_id": "b32", "title": "Wide residual networks", "year": "2016" }, { "authors": "S Zagoruyko; N Komodakis", "journal": "", "ref_id": "b33", "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "year": "2017" }, { "authors": "L Zhang; J Song; A Gao; J Chen; C Bao; K Ma", "journal": "", "ref_id": "b34", "title": "Be your own teacher: Improve the performance of convolutional neural networks via self distillation", "year": "2019" }, { "authors": "X Zhang; S Lu; H Gong; Z Luo; M Liu", "journal": "", "ref_id": "b35", "title": "Amln: adversarial-based mutual learning network for online knowledge distillation", "year": "2020" }, { "authors": "Y Zhang; T Xiang; T M Hospedales; H Lu", "journal": "", "ref_id": "b36", "title": "Deep mutual learning", "year": "2018" }, { "authors": "B Zhao; Q Cui; R Song; Y Qiu; J Liang", "journal": "", "ref_id": "b37", "title": "Decoupled knowledge distillation", "year": "2022" }, { "authors": "Y Zhu; Y Wang", "journal": "", "ref_id": "b38", "title": "Student customized knowledge distillation: Bridging the gap between student and teacher", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 339.05, 611.02, 202.99, 26.84 ], "formula_id": "formula_0", "formula_text": "LKL(fA, fT) = N i=1 τ 2 KL (fA(xi)||fT(xi)) ,(1)" }, { "formula_coordinates": [ 3, 340.21, 657.78, 201.83, 26.84 ], "formula_id": "formula_1", "formula_text": "LKL(fA, fS) = N i=1 τ 2 KL (fA(xi)||fS(xi)) ,(2)" }, { "formula_coordinates": [ 4, 131.73, 89.95, 158.31, 20.43 ], "formula_id": "formula_2", "formula_text": "f (•) (x) = σ( z (•) (x) τ ),(3)" }, { "formula_coordinates": [ 4, 103.75, 384.88, 186.29, 27.03 ], "formula_id": "formula_3", "formula_text": "Lce(fS) = - N i=1 K k=1 y k i log(f k S (xi)),(4)" }, { "formula_coordinates": [ 4, 88.37, 453.3, 201.68, 26.84 ], "formula_id": "formula_4", "formula_text": "LKL(fT, fS) = N i=1 τ 2 KL (fT(xi)||fS(xi)) .(5)" }, { "formula_coordinates": [ 4, 64.6, 521.95, 225.44, 8.06 ], "formula_id": "formula_5", "formula_text": "LS = w1Lce(fS) + w2LKL(fT, fS) + w3LKL(fA, fS). (6)" }, { "formula_coordinates": [ 4, 62.86, 574.21, 223.7, 8.06 ], "formula_id": "formula_6", "formula_text": "LT = w4Lce(fT) + w5LKL(fS, fT) + w6LKL(fA, fT), (7" }, { "formula_coordinates": [ 4, 286.56, 574.5, 3.48, 7.77 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 4, 400.23, 411.08, 141.87, 12.69 ], "formula_id": "formula_8", "formula_text": "A g+1 = S * g ,(8)" }, { "formula_coordinates": [ 5, 102.37, 98.51, 187.67, 19.97 ], "formula_id": "formula_9", "formula_text": "R(fS) -R(fR) ≤ O |FS|C √ n + ϵ1,(9)" }, { "formula_coordinates": [ 5, 101.21, 206.07, 188.84, 19.75 ], "formula_id": "formula_10", "formula_text": "R(fT) -R(fR) ≤ O |FT|C n α + ϵ2,(10)" }, { "formula_coordinates": [ 5, 102.41, 265.13, 187.63, 19.75 ], "formula_id": "formula_11", "formula_text": "R(fS) -R(fT) ≤ O |FS|C n β + ϵ3,(11)" }, { "formula_coordinates": [ 5, 60.05, 349.73, 229.99, 19.97 ], "formula_id": "formula_12", "formula_text": "O |FT|C n α +O |FS|C n β +ϵ2+ϵ3 ≤ O |FS|C √ n +ϵ1. (12)" }, { "formula_coordinates": [ 5, 90.16, 460.33, 199.88, 20.82 ], "formula_id": "formula_13", "formula_text": "L online =w1Lce(fS) + w2LKL(fT, fS) + w4Lce(fT) + w5LKL(fS, fT).(13)" }, { "formula_coordinates": [ 5, 123.99, 530.7, 166.05, 39.74 ], "formula_id": "formula_14", "formula_text": "min f S ,f T L online , s.t. LKL(fA, fS) < δ, LKL(fA, fT) < δ,(14)" }, { "formula_coordinates": [ 5, 100.24, 704.84, 189.81, 12.96 ], "formula_id": "formula_15", "formula_text": "F ′ T = {f |f ∈ FT, LKL(fA, fT) < δ},(15)" }, { "formula_coordinates": [ 5, 353.98, 99.04, 188.06, 12.95 ], "formula_id": "formula_16", "formula_text": "F ′ S = {f |f ∈ FS, LKL(fA, fS) < δ}.(16)" }, { "formula_coordinates": [ 5, 354.68, 173.51, 187.36, 24.87 ], "formula_id": "formula_17", "formula_text": "O |F ′ T |C n α ′ + O |F ′ S |C n β ′ + ϵ ′ 2 + ϵ ′ 3 .(17)" }, { "formula_coordinates": [ 5, 307.44, 203.97, 234, 27.25 ], "formula_id": "formula_18", "formula_text": "F ′ S , F ′ T are subsets of F S , F T , we have |F ′ S | C ≤ |F S | C , |F ′ S | C ≤ |F S | C ." 
}, { "formula_coordinates": [ 5, 353.59, 333.41, 188.45, 24.87 ], "formula_id": "formula_19", "formula_text": "O |F ′ T |C n α ′ + ϵ ′ 2 ≈ O |FT|C n α + ϵ2,(18)" }, { "formula_coordinates": [ 5, 369.01, 457.1, 173.03, 14.06 ], "formula_id": "formula_20", "formula_text": "ϵ3 = inf f ∈F S R(f ) -R(fT).(19)" }, { "formula_coordinates": [ 9, 64.5, 245.62, 221.8, 44.91 ], "formula_id": "formula_21", "formula_text": "E ( R(fS, D) -R(fS)) 2 ≤ 1 N V [L(fT(x), fS(x)] + C E ∥fT(x) -fR(x)∥ 2 2 , (20" }, { "formula_coordinates": [ 9, 286.31, 282.76, 3.73, 7.77 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 9, 66.49, 405.34, 211.9, 57.61 ], "formula_id": "formula_23", "formula_text": "[l]E ( R(f S , D) -R(f S )) 2 ≤ 1 N V [L((w T f T (x) + w A f A (x)), f S (x)] + C (E [∥((w T f T (x) + w A f A (x)) -f R (x)∥ 2 ]) 2 ." }, { "formula_coordinates": [ 9, 310.18, 214.07, 228.53, 36.56 ], "formula_id": "formula_24", "formula_text": "C E ∥((wTfT(x) + wAfA(x)) -fR(x)∥ 2 2 ≤ C (E [wT∥(fT(x) -fR(x)∥2 + wA∥fA(x) -fR(x)∥2]) 2 . (22" }, { "formula_coordinates": [ 9, 538.31, 242.85, 3.73, 7.77 ], "formula_id": "formula_25", "formula_text": ")" } ]
Triplet Knowledge Distillation
In knowledge distillation, the teacher is generally much larger than the student, so the solution found by the teacher is often difficult for the student to learn. To ease this mimicking difficulty, we introduce a triplet knowledge distillation mechanism named TriKD. Besides the teacher and the student, TriKD employs a third role called the anchor model. Before distillation begins, the pre-trained anchor delimits a subspace within the full solution space of the target problem; solutions inside this subspace are expected to be easy targets that the student can mimic well. Distillation then proceeds in an online manner, with the teacher only allowed to express solutions within this subspace. Benefiting from accurate yet easy-to-mimic hints, the student finally performs well. Once the student is well trained, it can serve as the new anchor for new students, forming a curriculum learning strategy. Our experiments on image classification and face recognition with various models clearly demonstrate the effectiveness of the method, and TriKD is also effective in dealing with the overfitting issue. Moreover, our theoretical analysis, in the context of risk minimization decomposition, supports the rationality of the triplet distillation design.
Xijun Wang; Dongyang Liu; Meina Kan; Chunrui Han; Zhongqin Wu; Shiguang Shan
[ { "figure_caption": "Figure 1 .1Figure 1. An intuitive illustration of our motivation. The 2d plane represents the function space from input image to task-specific output.Every neural network with compatible input and output format corresponds to a certain point on the plane, and the color represents the expected risk, darker means lower risk. The small model is the target student and its performance is our major interest. As the large teacher model has stronger fitting ability than the student, the collection of functions it could attain, FT, is also larger than FS. (a) When trained independently, the teacher model may step towards local minima out of the scope that the student could well fit. (b)(c) For both online and offline distillation, the large model is likely to lie beyond the subspace attainable to student model. This makes the student, though performing better, still lie far away from the optima, leading to a sub-optimal solution. (d) In our TriKD, a pre-trained anchor model is used to pull both the teacher and student models within or near the subspace attainable to the student model, making the teacher easy to mimic. The mutual learning between teacher and student then makes the student learn a high-quality solution with better generalization.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure2. An overview of Triplet Knowledge Distillation. In the gth generation, a pre-trained anchor Ag supervises a pair of randomly initialized student Sg and teacher Tg; the student and the teacher also learn mutually from each other. After the gth generation, the student Sg will become the new anchor Ag+1 for the (g + 1)th generation. Supervision from task label is omitted in the figure.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure3. Evaluate TriKD with different student size on Megaface in terms of rank-1 face identification rate (%). The baseline is trained with hard label only. Besides the baseline and our TriKD, we also conduct ablative studies (L + T and L + A) to reveal the effect of anchor A and T, respectively.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Comparison of different ways to introduce model A and T (b) Comparison with the standard distillation", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Exploring how to arrange T and A to get a lower variance S. (a) and (b) reveal the variance of target model's losses on different conditions. There are basically four valid combinations (i.e. M1-M4) in terms of two main factors: the way model S learns from model T -standard offline distillation or online mutual learning, and whether model T learns from model A. Online denotes that two networks study with each other step by step during the training process. (a) illustrates that online mutual learning makes important contribution to decrease the variance, and M4 can gain lower variance when the size of model A is smaller than model T. (b) demonstrates that M4 can get the lowest variance under all the experimental settings compared with standard distillation (M0 and M1). Dataset: Webface.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Teacher-student behavior similarity w.r.t. generations. 
Generation 0 is vanilla online knowledge distillation without anchor. The networks are trained on the training set of CIFAR100, and KL-Divergence is measured on the test set of CIFAR100. Legend format: student (teacher).", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Compare different KD methods on ImageNet. Bold and underline denote the best and the second best results, respectively. The results of Review of and DKD are from their original paper. Results of other existing methods are quoted fromTian et al. (2020) ", "figure_data": "Teacherwrn-40-2 wrn-40-2 resnet56 resnet110 resnet110 resnet32x4 vgg13Studentwrn-16-2 wrn-40-1 resnet20resnet20resnet32resnet8x4vgg8Teacher75.6175.6172.3474.3174.3179.4274.64Student73.2671.9869.0669.0671.1472.5070.36KD(Hinton et al., 2015)74.9273.5470.6670.6773.0873.3372.98FitNet(Romero et al., 2015)73.5872.2469.2168.9971.0673.5071.02AT(Zagoruyko & Komodakis, 2017)74.0872.7770.5570.2272.3173.4471.43DML(Zhang et al., 2018)75.4174.7371.2271.4773.5275.3674.58VID(Ahn et al., 2019)74.1173.3070.3870.1672.6173.0971.23CRD(Tian et al., 2020)75.6474.3871.6371.5673.7575.4674.29Review(Chen et al., 2021)76.1275.0971.89(71.86)73.8975.6374.84DKD(Zhao et al., 2022)76.2474.8171.97(71.66)74.1176.3274.68TriKD(Ours)76.9475.9672.3472.5574.3176.8275.35Error(%) Methods Teacher Student KDAT OFD CRD Review DKD DML TriKD(Ours)Top-173.31 69.75 70.66 70.69 70.81 71.17 71.61 71.70 71.1871.88Top-591.42 89.07 89.88 90.01 89.98 90.13 90.51 90.41 90.0590.70", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "for training, and MegaFace(Kemelmacher-Shlizerman et al., 2016) for testing. Rank-1 face identification rate is reported. Unlike CIFAR100 and ImageNet, where the performance generally raises as the capacity of the model increases (at least within the scope of our interest), training with the CASIA-WebFace dataset is frequently bothered with the", "figure_data": "Baseline (L)L+TL+ATriKD (L+T+A)Baseline (L)L+TL+ATriKD (L+T+A)Rank-1 identi. rate of S (%)64 66 68 70 72 74 76 78 80 82 840.5X 0.625X 0.75X 0.875X 1X 1.125X 1.25X 1.375X 1.5X 64 66.8 68.3 69.2 69.7 69.5 69.4 69.3 68.3 69 72.9 75.7 77.9 79.3 80.4 81.5 82.1 82.4Rank-1 identi. rate of T(%)64 66 68 70 72 74 76 78 80 82 8468.6 75.8 0.5X 0.625X 0.75X 0.875X 68.6 68.6 68.6 78.8 79.7 80.368.6 80.7 1X68.6 81 1.125X 1.25X 1.375X 1.5X 68.6 68.6 68.6 81.6 81.5 81.5Student SizeStudent Size", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison with existing methods on MegaFace in terms of rank-1 face identification rate (%). Training set: CASIA-WebFace. Backbone: MobileFaceNet.", "figure_data": "Dataset Methods baseline KD DML BYOT TriKD (Ours)50k35.24 40.48 46.76 44.2655.95150k64.00 71.80 74.10 72.8079.30490k81.50 83.00 83.60 81.5084.50overfitting problem since each person has only about 50images, which is much smaller than that on general imagedataset. Intuitively, the constraint from the anchor preventsthe teacher from expressing overly complicated functions.Therefore, we naturally wonder if TriKD could help alle-viate the overfitting issue. Consequently, for experimentson face recognition, we especially care about the relation-ship between student capacity and performance. We fix themodel size of teacher, but adjust the model size of studentto investigate the relationship. 
For sake of convenience,in each generation we make the anchor model A slightlysmaller than the student model S, so that with training onlyone time we can obtain a serious of output models with in-creasing size. In all experiments unless otherwise specified,the student model starts with width 0.5X of MobileFaceNetand each generation uniformly increases the width of thenetwork by 0.125 times of the MobileFaceNet size. Theteacher model is 2.0X of MobileFaceNet in all generations.", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Teacher-student behavior similarity on CIFAR100. Format: KL-divergence on training set/ KL-divergence on test set. Lower KL-divergence signifies stronger behavior similarity.", "figure_data": "Methodswrn-40-2 wrn-16-2wrn-40-2 wrn-40-1resnet56 resnet20resnet32x4 resnet8x4Offline KD 0.315/0.721 0.335/0.934 0.485/0.710 0.339/0.799Online KD 0.088/0.228 0.094/0.233 0.133/0.205 0.075/0.247TriKD(Ours) 0.062/0.161 0.070/0.169 0.086/0.146 0.055/0.173Table 5. Teacher-student behavior similarity on SVHN and STL10.Format: KL-divergence on SVHN/ KL-divergence on STL10.Both on the test set. Lower KL-divergence signifies strongerteacher-student behavior similarity.Methodswrn-40-2 wrn-16-2wrn-40-2 wrn-40-1resnet56 resnet20resnet32x4 resnet8x4Offline KD 2.601/2.498 3.644/3.416 2.610/2.478 2.248/2.211Online KD 0.998/0.942 1.439/1.301 0.959/0.888 1.000/0.940TriKD(Ours) 0.761/0.711 1.096/0.987 0.673/0.625 0.726/0.680", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Teacher Top-1 accuracy on CIFAR-100. Vanilla means trained with task labels only. Online means online distillation.", "figure_data": "Teacherwrn-40-2 wrn-40-2 resnet56 resnet32x4 vgg13Studentwrn-16-2 wrn-40-1 resnet20 resnet8x4 vgg8Vanilla75.6175.6172.3479.4274.64Online KD77.7478.0574.0080.2875.91TriKD(Ours) 79.0178.7075.1280.0576.09", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Effect of each role in triplet distillation. L, T, and A represent the supervision from task label, online teacher, and anchor, respectively. The first-generation anchor in L+A is the model trained with L, while the first-generation anchor in L+T* and L+T+A is trained with L+T. The experiment is conducted on CIFAR100.", "figure_data": "Methodsresnet56 wrn-40-2 wrn-40-2 resnet32x4 vgg13 resnet20 wrn-40-1 wrn-16-2 resnet8x4 vgg8L69.2971.6373.4772.9270.10L+T71.2274.7375.4175.3674.58L+A71.7074.0675.1874.3571.63L+A*71.6074.4975.1274.5472.49L+T+A 72.3475.9676.9476.8275.35", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "used in MobileFaceNet is replaced with AM-Softmax loss (Wang Expected Calibration Error for Different Model Widths. We explore Expected Calibration Error in terms of network width on Face Recognition task (Webface) and Image classification task (CIFAR100), and all the models are trained enough epochs to ensure the model converges sufficiently. 
The backbones are MobilefaceNet and Resnet18 respectively, we applied various width including 0.5X, 1.0X, 2.0X, 3.0X, 4.0X.", "figure_data": "Expected Calibration Error for Different Model WidthsExpected Calibration Error for Different Model WidthsMobileFaceNetResnet180.00410.05Expected Calibration Error0.0033 0.0034 0.0035 0.0036 0.0037 0.0038 0.0039 0.004Expected Calibration Error0.045 0.01 0.015 0.02 0.025 0.03 0.035 0.040.00320.0050.003100.5X1.0X2.0X3.0X4.0X0.5X1.0X2.0X3.0X4.0XModel WidthModel WidthFigure 5.", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Compare TriKD with KD", "figure_data": "", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Additional results of TriKD w.r.t. different network architectures. Teacher is two times as wide as the student.", "figure_data": ". Dataset: CIFAR100. Stu-dent=resnet8, Teacher=resnet110. The results of KD and TAKDare quoted from the original TAKD paper.KDTAKDTriKDTA=56 TA=32 TA=20 TA=1461.4161.4761.5561.8261.5062.79", "figure_id": "tab_10", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Best accuracy(%) achieved by student after each generation. Except generation 0, where we use vanilla online distillation to train an initial anchor, for all generations we use the last-generation student as the anchor, and use randomly initialized student and teacher to form the triplet relationship. The experiment is conducted on CIFAR100.", "figure_data": "Generationsresnet56 resnet110 resnet110 wrn-40-2 wrn-40-2 resnet32x4 vgg13 resnet20 resnet20 resnet32 wrn-40-1 wrn-16-2 resnet8x4 vgg8071.2271.4773.5274.7375.4175.3674.58171.7671.8273.9975.3576.9476.2775.35272.3472.2474.3175.8776.9476.8275.35372.3472.5574.3175.9676.9476.8275.35472.3472.5574.3175.9676.9476.8275.35", "figure_id": "tab_11", "figure_label": "11", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Stanton et al., 2021)", "Explanation": "The cited work by Stanton et al. provides evidence that optimization difficulty is a major barrier in knowledge distillation, which the citing paper uses to support the claim that larger teacher models with high accuracy can be difficult for small models to mimic during the distillation process."}, {"Category": "Extension or Continuation", "Citation": "(Cho & Hariharan, 2019)", "Explanation": "The cited work by Cho and Hariharan extends the research on the optimization difficulty in knowledge distillation by showing that larger teacher models with higher accuracy can actually make worse students during the distillation process."}, {"Category": "Extension or Continuation", "Citation": "(Zhu & Wang, 2021)", "Explanation": "The cited work by Zhu and Wang also builds upon the research on the optimization difficulty in knowledge distillation by providing evidence that larger teacher models with higher accuracy can negatively impact the quality of the students during the distillation process."}, {"Category": "Extension or Continuation", "Citation": "(Mirzadeh et al., 2020)", "Explanation": "The cited work by Mirzade et al. further extends the research on the optimization difficulty in knowledge distillation by showing that larger teacher models with higher accuracy can result in students that are more difficult to train and mimic during the distillation process."}, {"Category": "Supporting Evidence", "Citation": "(Hinton et al., 2015)", "Explanation": "The cited work by Hinton et al. (2015) introduces the concept of mimicking softened class distributions predicted by large teachers, which serves as a foundational method for the study of knowledge distillation in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Ding et al., 2019;Wen et al., 2019)", "Explanation": "The cited works by Ding et al. (2019) and Wen et al. (2019) explore the trade-off between soft logits and hard task label supervision, which the citing paper extends by further expanding the study of this concept."}, {"Category": "Extension or Continuation", "Citation": "(Tian et al., 2020;Xu et al., 2020)", "Explanation": "The cited works by Tian et al. (2020) and Xu et al. (2020) introduce the use of auxiliary tasks to enrich transferred knowledge, which the citing paper extends by exploring this approach in the study of knowledge distillation."}, {"Category": "Extension or Continuation", "Citation": "(Romero et al., 2015;Kim et al., 2018;Jin et al., 2019;Zagoruyko & Komodakis, 2017;Chen et al., 2021)", "Explanation": "The cited works by Romero et al. (2015), Kim et al. (2018), Jin et al. (2019), Zagoruyko & Komodakis (2017), and Chen et al. (2021) exploit intermediate features as transferred knowledge, which the citing paper extends by further exploring this approach in the study of knowledge distillation."}, {"Category": "Supporting Evidence", "Citation": "Born again (Furlanello et al., 2018)", "Explanation": "The work by Furlanello et al. (2018) on self-distillation is a key reference in the citing paper, as it introduces the concept of iterative training for knowledge distillation, which the citing paper builds upon in the study of TriKD."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2018)", "Explanation": "The cited work, deep mutual learning (DML), serves as the foundational method for the citing paper to use in the training phase of online knowledge distillation. 
DML is a key element in the process of collaboratively learning from scratch using a pool of randomly initialized models as the student pool and each student being guided by the output of other peers and the task label."}, {"Category": "Methodological Basis", "Citation": "(Cho & Hariharan, 2019)", "Explanation": "The cited work by Cho and Hariharan (2019) identifies the problem of capacity mismatch between teacher and student in knowledge distillation, and proposes the early-stopping strategy to address it. The citing paper adopts this strategy in their own research to improve the performance of the student."}, {"Category": "Methodological Basis", "Citation": "(Mirzadeh et al., 2020)", "Explanation": "The cited work by Mirzade et al. (2020) introduces the concept of teacher assistant to bridge the capacity gap between the original teacher and student in knowledge distillation. The citing paper adopts this method to improve the performance of the student in their research."}, {"Category": "Methodological Basis", "Citation": "(Zhu & Wang, 2021)", "Explanation": "The cited work by Zhu and Wang (2021) proposes the SCKD method to automatically adjust the distillation process based on the gradient similarity between the teacher and student. The citing paper adopts this method to improve the performance of the student in their research."}, {"Category": "Extension or Continuation", "Citation": "(Mirzadeh et al., 2020)", "Explanation": "The cited work by Mirzadeh et al. (2020) treats mimicking difficulty as an inherent property of teacher model capacity, but the citing paper believes that a given large network with fixed capacity should be able to fit both hard and easy functions and proposes a new method, TriKD, to make a large teacher still easy to mimic by deliberately making the function it expresses easy."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2018)", "Explanation": "The cited work by Zhang et al. (2018) provides the standard method of online knowledge distillation, which the citing paper adopts in the rest part of TriKD to learn from hard labels and training experiences of the student and the online teacher."}, {"Category": "Methodological Basis", "Citation": "(Lopez-Paz et al., 2015)", "Explanation": "The cited work by Lopez-Paz et al. provides a formal decomposition of the excess risk in knowledge distillation, which the citing paper adopts to explain the performance of TriKD in improving the distillation process."}, {"Category": "Methodological Basis", "Citation": "(Lopez-Paz et al., 2015)", "Explanation": "The cited work provides a theoretical basis for the effectiveness of vanilla knowledge distillation, which the citing paper leverages to analyze the performance of the method in the context of online knowledge distillation."}, {"Category": "Supporting Evidence", "Citation": "(Tian et al., 2020)", "Explanation": "The cited work by Tian et al. 
provides the results of a study on knowledge distillation methods, which the citing paper uses to compare the performance of their proposed method, TriKD, in the context of image classification."}, {"Category": "Extension or Continuation", "Citation": "(Review to DKD)", "Explanation": "The cited work on DKD is extended in the citing paper by further exploring the method and its performance in the context of image classification."}, {"Category": "Extension or Continuation", "Citation": "(DML)", "Explanation": "The cited work on DML is extended in the citing paper by reimplementing the method and reporting the results in the context of image classification."}, {"Category": "Methodological Basis", "Citation": "(Krizhevsky et al., 2009)", "Explanation": "The cited work by Krizhevsky et al. (2009) provides the dataset used in the study conducted in the citing paper on the CIFAR100 image classification benchmark."}, {"Category": "Methodological Basis", "Citation": "(Deng et al., 2009)", "Explanation": "The cited work by Deng et al. (2009) provides the dataset used in the study conducted in the citing paper on the ImageNet image classification benchmark."}, {"Category": "Extension or Continuation", "Citation": "(Hinton et al., 2015)", "Explanation": "The cited work by Hinton et al. (2015) is a foundational method in knowledge distillation, and the citing paper extends this work by introducing a new method called TriKD for improving the performance of the student model in image classification tasks."}, {"Category": "Supporting Evidence", "Citation": "(Chen et al., 2021)", "Explanation": "The cited work by Chen et al. provides a comparison of different methods involving intermediate feature maps, which supports the claim in the citing paper that TriKD outperforms other methods in terms of performance."}, {"Category": "Supporting Evidence", "Citation": "(Romero et al., 2015)", "Explanation": "The cited work by Romero et al. also provides a comparison of different methods involving intermediate feature maps, which further supports the claim in the citing paper that TriKD outperforms other methods in terms of performance."}, {"Category": "Supporting Evidence", "Citation": "(Ahn et al., 2019)", "Explanation": "The cited work by Ahn et al. also provides a comparison of different methods involving attention maps, which further supports the claim in the citing paper that TriKD outperforms other methods in terms of performance."}, {"Category": "Supporting Evidence", "Citation": "(Tian et al., 2020)", "Explanation": "The cited work by Tian et al. provides a comparison of different methods involving instance similarity, which further supports the claim in the citing paper that TriKD outperforms other methods in terms of performance."}, {"Category": "Supporting Evidence", "Citation": "(Zagoruyko & Komodakis, 2017)", "Explanation": "The cited work by Zagoruyko and Komodakis provides a comparison of different methods involving attention maps, which further supports the claim in the citing paper that TriKD outperforms other methods in terms of performance."}, {"Category": "Supporting Evidence", "Citation": "(Deng et al., 2009)", "Explanation": "The cited work by Deng et al. provides a comparison of TriKD with other methods on ImageNet, which further supports the claim in the citing paper that the proposed triplet distillation mechanism is effective regardless of dataset volume."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2018)", "Explanation": "The cited work by Chen et al. 
serves as the main architecture for the fine-grained face recognition problem in the citing paper, providing a foundational method for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Yi et al., 2014)", "Explanation": "The cited work by Yi et al. is used as a data source for the fine-grained face recognition problem in the citing paper, providing a dataset for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Hinton et al., 2015)", "Explanation": "The cited work by Hinton et al. serves as the basis for the comparison of TriKD with the existing methods in the citing paper, providing a foundational understanding of the methods used in the field."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2018)", "Explanation": "The cited work by Zhang et al. serves as a methodological basis for the comparison of TriKD with the existing methods in the citing paper, providing insights into the DML method."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2019)", "Explanation": "The cited work by Zhang et al. serves as a methodological basis for the comparison of TriKD with the existing methods in the citing paper, providing insights into the BYOT method."}, {"Category": "Supporting Evidence", "Citation": "(Netzer et al., 2011)", "Explanation": "The cited work by Netzer et al. (2011) provides the out-of-domain data source of SVHN, which is used in the experiments to test the effectiveness of the anchor model in driving the large teacher into easy-to-mimic solutions."}, {"Category": "Supporting Evidence", "Citation": "(Coates et al., 2011)", "Explanation": "The cited work by Coates et al. (2011) provides the out-of-domain data source of STL10, which is used in the experiments to test the effectiveness of the anchor model in driving the large teacher into easy-to-mimic solutions."}, {"Category": "Supporting Evidence", "Citation": "(Zhang et al., 2018)", "Explanation": "The cited work by Zhang et al. (2018) has already demonstrated the potential benefits of online knowledge distillation in improving both the teacher and student models."}, {"Category": "Extension or Continuation", "Citation": "(Tian et al., 2020)", "Explanation": "The cited work by Tian et al. (2020) has shown that the performance gain of the teacher after switching to online distillation is not always accompanied by a similar gain in the student model. The citing paper builds upon this finding to explore a new approach, TriKD, that makes the teacher model more accurate and easy to mimic, thus potentially leading to a more significant performance improvement in the student model."}, {"Category": "Supporting Evidence", "Citation": "(Zhang et al., 2018)", "Explanation": "The cited work by Zhang et al. (2018) provides the L + T setting, which is similar to the proposed method in the citing paper and serves as a foundational method for the study of triplet distillation."}, {"Category": "Supporting Evidence", "Citation": "(Furlanello et al., 2018)", "Explanation": "The cited work by Furlanello et al. 
(2018) introduces the L + A setting, which is similar to the proposed method in the citing paper and provides a basis for the study of anchor-based training in triplet distillation."}, {"Category": "Extension or Continuation", "Citation": "Our TriKD", "Explanation": "The citing paper extends the research on triplet distillation by introducing the TriKD method, which further improves the performance of the target student in the context of the proposed study."}, {"Category": "Methodological Basis", "Citation": "(Menon et al., 2020)", "Explanation": "The cited work provides a proposition that forms the basis for the risk analysis in the citing paper, specifically in the context of knowledge distillation."}, {"Category": "Supporting Evidence", "Citation": "(Naeini et al., 2015)", "Explanation": "The cited work by Naeini et al. (2015) introduces the concept of expected calibration error (ECE), which is used in the citing paper to measure the calibration of the teacher and anchor models in terms of the Bayes class-probability distribution f R."}, {"Category": "Supporting Evidence", "Citation": "(Guo et al., 2017)", "Explanation": "The experiments conducted by Guo et al. (2017) in the cited work provide evidence that increasing the width of a network can lead to a rise in ECE, which is relevant to the research in the citing paper on the calibration of the teacher and anchor models."}, {"Category": "Methodological Basis", "Citation": "(He et al., 2016)", "Explanation": "The cited work by He et al. (2016) provides the model architecture of CIFAR-style resnet, which the citing paper adopts in their research on image classification tasks."}, {"Category": "Methodological Basis", "Citation": "(Zagoruyko & Komodakis, 2016)", "Explanation": "The cited work by Zagoruyko and Komodakis (2016) introduces the wide-resnet model architecture, which the citing paper uses in their study of image classification tasks."}, {"Category": "Methodological Basis", "Citation": "(Simonyan & Zisserman, 2014)", "Explanation": "The cited work by Simonyan and Zisserman (2014) presents the vgg model architecture, which the citing paper employs in their research on image classification tasks."}, {"Category": "Data Source", "Citation": "(Deng et al., 2009)", "Explanation": "The cited work, ImageNet, serves as the data source for the training and evaluation of the models in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Yi et al., 2014)", "Explanation": "The cited work provides a set of parameters and a specific dataset for testing, which the citing paper adopts in their research to evaluate the performance of their model."}, {"Category": "Data Source", "Citation": "(Deng et al., 2019)", "Explanation": "The cited work is acknowledged as the source of the Arcface loss function used in the experiments of the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Chen et al., 2018)", "Explanation": "The citing paper extends the work of AM-Softmax loss by aligning and cropping faces in a specific size for training, building upon the research of the cited work."}, {"Category": "Methodological Basis", "Citation": "(Mirzadeh et al., 2020)", "Explanation": "The cited work by Mirzade et al. 
(2020) provides the methodological basis for the study conducted in the citing paper, as it discusses the issue of performance deterioration in knowledge distillation when the capacity of the teacher increases."}, {"Category": "Extension or Continuation", "Citation": "(Zhu & Wang, 2021)", "Explanation": "The cited work by Zhu and Wang (2021) extends the research on knowledge distillation by exploring the performance of the student in the context of a large capacity gap between the teacher and the student."}, {"Category": "Extension or Continuation", "Citation": "(Cho & Hariharan, 2019)", "Explanation": "The cited work by Cho and Hariharan (2019) continues the research on knowledge distillation by focusing on the issue of the performance of the student in the context of a large capacity gap between the teacher and the student."}, {"Category": "Methodological Basis", "Citation": "(Mirzadeh et al., 2020)", "Explanation": "The cited work by Mirzade et al. (2020) provides a method for knowledge distillation that the citing paper adopts in their research, specifically focusing on situations where the capacity gap between the teacher and the student is large."}, {"Category": "Extension or Continuation", "Citation": "(Cho & Hariharan, 2019)", "Explanation": "The cited work by Cho and Hariharan (2019) is an extension of the research on knowledge distillation, exploring new dimensions and variables in the field."}, {"Category": "Extension or Continuation", "Citation": "(Zhu & Wang, 2021)", "Explanation": "The cited work by Zhu and Wang (2021) is a continuation of the research on knowledge distillation, building upon previous studies to further advance the field."}]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b16", "b1", "b28", "b9", "b1", "b13", "b23", "b29", "b27", "b21", "b31", "b17", "b26", "b32", "b5", "b2", "b19", "b19", "b31", "b17", "b26", "b27" ], "table_ref": [ "tab_0" ], "text": "In recent years, the need to accommodate non-regular structures in data science has brought a boom in machine learning methods on graphs. Graph deep learning (GDL) has already made a significant impact on the applied sciences and industry, with ground-breaking achievements in computational biology [17,2,28,10], and a wide adoption as a general-purpose tool in social media, e-commerce, and online marketing platforms, among others. These achievements pose exciting theoretical challenges: can the success of GDL models be grounded in solid mathematical frameworks? Since the input space of a GDL model is non-Euclidean, i.e., graphs can be of any size and any topology, less is known about GDL than standard neural networks. We claim that contemporary theories of GDL are missing an important ingredient: meaningful notions of metric on the input space, namely, graph similarity measures that are defined for all graphs of any size, which respect and describe in some sense the behavior of GDL models. In this paper, we aim at providing an analysis of GDL by introducing such appropriate metrics, using graphon theory.\nA graphon is an extension of the notion of a graph, where the node set is parameterized by a probability space instead of a finite set. Graphons can be seen as limit objects of graphs, as the number of nodes increases to infinity, under an appropriate metric. One result from graphon theory (that reformulates Szemerédi's regularity lemma from discrete mathematics) states that any sufficiently large graph behaves as if it was randomly sampled from a stochastic block model with a fixed number of classes. This result poses an \"upper bound\" on the complexity of graphs: while deterministic large graphs may appear to be complex and intricate, they are actually approximately regular and behave random-like. In this paper we extend this regularity result to an appropriate setting for message passing neural networks (MPNNs), a popular GDL model. Since MPNNs take as input a graph with a signal defined over the nodes (a graph-signal), we extend graphon theory from a theory of graphs to a theory of graph-signals. We define a metric, called the graph-signal cut distance (see Figure 1 for illustration), and formalize regularity statements for MPNNs of the following sort.\n(1) Any deterministic graph-signal behaves as if it was randomly sampled from a stochastic block model, where the number of blocks only depends on how much we want the graphsignal to look random-like, and not on the graph-signal itself.\n(2) If two graph-signals behave as if they were sampled from the same stochastic block model, then any (regular enough) MPNN attains approximately the same value on both.\nFormally, (1) is proven by extending Szemerédi's weak regularity lemma to graphon-signals. As a result of this new version of the regularity lemma, we show that the space of graph-signals is a dense subset of the space of graphon-signals, which is shown to be compact. Point (2) is formalized by proving that MPNNs with Lipschitz continuous message functions are Lipschitz continuous mappings from the space of graph-signals to an output space, in the graphon-signal cut distance.\nWe argue that the above regularity result is a powerful property of MPNNs. 
To illustrate this, we use the new regularity result to prove two corollaries. First, a generalization bound of MPNNs, showing that if the learned MPNN performs well on the training graph-signals, it is guaranteed to also perform well on test graph-signals. This is shown by first bounding the covering number of the graphon-signal space, and then using the Lipschitzness of MPNNs. Second, we prove that MPNNs are stable to graph-signal subsampling. This is done by first showing that randomly subsampling a graphon-signal produces a graph-signal which is close in cut distance to the graphon-signal, and then using the Lipschitzness of MPNNs.\nAs opposed to past works that analyze MPNNs using graphon analysis, we do not assume any generative model on the data. Our results apply to any regular enough MPNN on any distribution of graph-signals, making the analysis rather universal. We note that past works about generalization in GNNs [14,23,29,27] consider special assumptions on the data distribution, and often on the MPNN model. Our work provides upper bounds under no assumptions on the data distribution, and only mild Lipschitz continuity assumptions on the message passing functions. Hence, our theory bounds the generalization error when all special assumptions (that are often simplistic) from other papers are not met. We show that when all assumptions fail, MPNNs still have generalization and sampling guarantees, albeit much slower ones. See Table 1. This is also true for past sampling theorems, e.g., [21,31,18,26,32].\nThe problem with graph-signal domains. Since the input space of MPNNs is non-Euclidean, results like universal approximation theorems and generalization bounds are less well developed for MPNNs than Euclidean deep learning models. For example, analysis like in [6] is limited to graphs of fixed sizes, seen as adjacency matrices. The graph metric induced by the Euclidean metric on adjacency matrices is called edit-distance. This reduction of the graph problem to the Euclidean case does not describe the full complexity of the problem. Indeed, the edit-distance is defined for weighted graphs, and non-isomorphic simple graphs are always far apart in this metric. This is an unnatural description of the reality of machine learning on graphs, where different large non-isomorphic simple graphs can describe the same large-scale phenomenon and have similar outputs for the same MPNN.\nOther papers that consider graphs of arbitrary but bounded size are based on taking the union of the Euclidean edit-distance spaces up to a certain graph size [3]. If one omits the assumption that all graphs are limited by a predefined size, the edit-metric becomes non-compact -a topology too fine to explain the behavior of real MPNNs. For example, two graphs with different number of nodes are always far apart in edit-distance, while most MPNN architectures in practice are not sensitive to the addition of one node to a large graph. In [19], the expressivity of GNNs is analyzed on spaces of graphons. It is assumed that graphons are Lipschitz continuous kernels. The metric on the graphon space is taken as the L ∞ distance between graphons as functions. We claim that the Lipschitz continuity of the graphons in [19], the choice of the L ∞ metric, and the choice of an arbitrary compact subset therein, are not justified as natural models for graphs, and are not grounded in theory. 
Note that graphon analysis is measure theoretic, and results like the regularity lemma are no longer true when requiring Lipschitz continuity for the graphons. Lastly, in papers like [31,18,26,27], the data is assumed to be generated by one, or a few graphons, which limits the data distribution significantly. We claim that this discrepancy between theory and practice is an artifact of the inappropriate choices of the metric on the space of graphs, and the choice of a limiting generative model for graphs." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "For n ∈ N, we denote [n] = {1, . . . , n}. We denote the Lebesgue p space over the measure space X by L p (X ), or, in short, L p . We denote by µ the standard Lebesgue measure on [0, 1]. A partition is a sequence\nP k = {P 1 , . . . , P k } of disjoint measurable subsets of [0, 1] such that k j=1 P j = [0, 1].\nThe partition is called equipartition if µ(P i ) = µ(P j ) for every i, j ∈ [k]. We denote the indicator function of a set S by 1 S . See Appendix A for more details. We summarize our notations in Appendix I." }, { "figure_ref": [], "heading": "Message passing neural networks", "publication_ref": [ "b14", "b10" ], "table_ref": [], "text": "Most graph neural networks used in practice are special cases of MPNN (see [15] and [11] of a list of methods). MPNNs process graphs with node features, by repeatedly updating the feature at each node using the information from its neighbors. The information is sent between the different nodes along the edges of the graph, and hence, this process is called message passing. Each node merges all messages sent from its neighbors using an aggregation scheme, where typical choices is to sum, average or to take the coordinate-wise maximum of the messages. In this paper we focus on normalized sum aggregation (see Section 4.1). For more details on MPNNs we refer the reader to Appendix E." }, { "figure_ref": [], "heading": "Szemerédi weak regularity lemma", "publication_ref": [ "b12", "b25", "b12" ], "table_ref": [], "text": "The following is taken from [13,25]. Let G = {V, E} be a simple graph with nodes V and edges E. For any two subsets U, S ⊂ V , denote the number of edges with one end point at U and the other at S by e G (U, S).\nLet P = {V 1 , . . . , V k } be a partition of V . The partition is called equipartition if ||V i | -|V j || ≤ 1 for every i, j ∈ [k].\nGiven two node set U, S ⊂ V , if the edges between each pair of classes V i and V j were random, we would expect the number of edges of G connecting U and S to be close to the expected value e P(U,S) :=\nk i=1 k j=1 e G (Vi,Vj ) |Vi||Vj | |V i ∩ U | |V j ∩ S|.\nHence, the irregularity, that measures how non-random like the edges between {V j } k j=1 are, is defined to be\nirreg G (P) = max U,S⊂V |e G (U, S) -e P (U, S)| / |V | 2 . (1\n)\nTheorem 2.1 (Weak Regularity Lemma [13]). For every ϵ > 0 and every graph G = (V, E), there is an equipartition\nP = {V 1 , . . . , V k } of V into k ≤ 2 c/ϵ 2 classes such that irreg G (P) ≤ ϵ.\nHere, c is a universal constant that does not depend on G and ϵ.\nTheorem 2.1 asserts that we can represent any large graph G by a smaller, coarse-grained version of it: the weighted graph G ϵ with node set V ϵ = {V 1 , . . . , V k }, where the edge weight between the nodes V i and V j is e G (Vi,Vj ) |Vi|,|Vj | . 
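To make the coarse-grained graph G_eps and the irregularity (1) concrete, here is a minimal NumPy sketch; it is our illustration, not part of the original construction, and all function names are ours. It computes the block densities e_G(V_i, V_j)/(|V_i||V_j|) and the irregularity contribution of a single pair of node subsets U, S; the full irregularity irreg_G(P) is the maximum of this quantity over all subsets, which is only tractable by brute force on very small graphs, and edges are counted here as ordered pairs, one common convention.

```python
import numpy as np

def coarse_grained_graph(A, parts):
    """Block densities e_G(V_i, V_j) / (|V_i| |V_j|): the weighted graph G_eps."""
    k = len(parts)
    D = np.zeros((k, k))
    for i, Vi in enumerate(parts):
        for j, Vj in enumerate(parts):
            D[i, j] = A[np.ix_(Vi, Vj)].sum() / (len(Vi) * len(Vj))
    return D

def irregularity_term(A, parts, U, S):
    """|e_G(U, S) - e_P(U, S)| / |V|^2 for one pair of node subsets U, S."""
    n = A.shape[0]
    D = coarse_grained_graph(A, parts)
    e_G = A[np.ix_(U, S)].sum()                      # edges counted as ordered pairs (u, s)
    e_P = sum(
        D[i, j] * len(np.intersect1d(Vi, U)) * len(np.intersect1d(Vj, S))
        for i, Vi in enumerate(parts)
        for j, Vj in enumerate(parts)
    )
    return abs(e_G - e_P) / n ** 2
```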
The \"large-scale\" structure of G is given by G ϵ , and the number of edges between any two subsets of nodes U i ⊂ V i and U j ⊂ V j is close to the \"expected value\" e P(Ui,Uj ) . Hence, the deterministic graph G \"behaves\" as if it was randomly sampled from G ϵ ." }, { "figure_ref": [], "heading": "Graphon analysis", "publication_ref": [ "b3", "b24" ], "table_ref": [], "text": "A graphon [4,24] can be seen as a weighted graph with a \"continuous\" node set, or more accurately, the nodes are parameterized by an atomless standard probability space called the graphon domain. Since all such graphon domains are equivalent to [0, 1] with the standard Lebesgue measure (up to a measure preserving bijection), we take [0, 1] as the node set. The space of graphons W 0 is defined to be the set of all measurable symmetric function W :\n[0, 1] 2 → [0, 1], W (x, y) = W (y, x).\nThe edge weight W (x, y) of a graphon W ∈ W 0 can be seen as the probability of having an edge between the nodes x and y.\nGraphs can be seen as special graphons. Let I m = {I 1 , . . . , I m } be an interval equipartition: a partition of [0, 1] into intervals of equal length. The graph G = {V, E} with adjacency matrix A = {a i,j } m i,j=1 induces the graphon W G , defined by W G (x, y) = a ⌈xm⌉,⌈ym⌉1 . Note that W G is piecewise constant on the partition I m . We hence identify graphs with their induced graphons. A graphon can also be seen as a generative model of graphs. Given a graphon W , a corresponding random graph is generated by sampling i.i.d. nodes {X n } from he graphon domain, and connecting each pair X n , X m in probability W (X n , X m ) to obtain the edges of the graph." }, { "figure_ref": [], "heading": "Regularity lemma for graphons", "publication_ref": [ "b25", "b25" ], "table_ref": [], "text": "A simple way to formulate the regularity lemma in the graphon language is via stochastic block models (SBM). A SBM is a piecewise constant graphon, defined on a partition of the graphon domain [0, 1]. The number of classes of the SBM is defined to be the number of sets in the partition. A SBM is seen as a generative model for graphs, where graphs are randomly sampled from the graphon underlying the SBM, as explained above. Szemerédi weak regularity lemma asserts that for any error tolerance ϵ, there is a number of classes k, such that any deterministic graph (of any size and topology) behaves as if it was randomly sampled from a SBM with k classes, up to error ϵ. Hence, in some sense, every graph is approximately quasi-random.\nTo write the weak regularity lemma in the graphon language, the notion of irregularity (1) is extended to graphons. For any measurable W : [0, 1] 2 → R the cut norm is defined to be\n∥W ∥ □ = sup U,S⊂[0,1] U ×S W (x, y)dxdy ,\nwhere U, S ⊂ [0, 1] are measurable. It can be verified that the irregularity (1) is equal to the cut norm of a difference between graphons induced by adequate graphs. The cut metric between two graphons W, V ∈ W 0 is defined to be d\n□ (W, V ) = ∥W -V ∥ □ . The cut distance is defined to be δ □ (W, V ) = inf ϕ∈S [0,1] ∥W -V ϕ ∥ □ ,\nwhere S [0,1] is the space of measure preserving bijections [0, 1] → [0, 1], and V ϕ (x, y) = V (ϕ(x), ϕ(y)) (see Section 3.1 and Appendix A.3 for more details). The cut distance is a pseudo metric on the space of graphons. By considering equivalence classes of graphons with zero cut distance, we can construct a metric space W 0 for which δ □ is a metric. The following version of the weak regularity lemma is from [25,Lemma 7].\nTheorem 2.2. 
For every graphon W ∈ W 0 and ϵ > 0 there exists a step graphon W ′ ∈ W 0 with respect to a partition of at most ⌈2 c/ϵ 2 ⌉ sets such that δ □ (W, W ′ ) ≤ ϵ, for some universal constant c.\nThe exact definition of a step graphon is given in Definition 3.3. It is possible to show, using Theorem 2.2, that W 0 is a compact metric space [25,Lemma 8]. Instead of recalling this construction here, we refer to Section 3.4 for the extension of this construction to graphon-signals." }, { "figure_ref": [], "heading": "Graphon-signal analysis", "publication_ref": [], "table_ref": [], "text": "A graph-signal (G, f ) is a graph G, that may be weighted or simple, with node set [n], and a signal f ∈ R n×k that assigns the value f j ∈ R k for every node j ∈ [n]. A graphon-signal will be defined in Section 3.1 similarly to a graph-signal, but over the node set [0, 1]. In this section, we show how to extend classical results in graphon theory to a so called graphon-signal theory. All proofs are given in the appendix." }, { "figure_ref": [], "heading": "The graphon signal space", "publication_ref": [], "table_ref": [], "text": "For any r > 0, define the signal space\nL ∞ r [0, 1] := {f ∈ L ∞ [0, 1] | ∀x ∈ [0, 1], |f (x)| ≤ r} .(2)\nWe define the following \"norm\" on L ∞ r [0, 1] (which is not a vector space).\nDefinition 3.1 (Cut norm of a signal). For a signal f : [0, 1] → R, the cut norm ∥f ∥ □ is defined as\n∥f ∥ □ := sup S⊆[0,1] S f (x)dµ(x) ,(3)\nwhere the supremum is taken over the measurable subsets S ⊂ [0, 1].\nIn Appendix A.2 we prove basic properties of signal cut norm. One important property is the equivalence of the signal cut norm to the\nL 1 norm ∀f ∈ L ∞ r [0, 1], ∥f ∥ □ ≤ ∥f ∥ 1 ≤ 2∥f ∥ □ .\nGiven a bound r on the signals, we define the space of graphon-signals to be the set of pairs\nWL r := W 0 × L ∞ r [0, 1]. We define the graphon-signal cut norm, for measurable W, V : [0, 1] 2 → R and f, g : [0, 1] → R, by ∥(W, f )∥ □ = ∥W ∥ □ + ∥f ∥ □ .\nWe define the graphon-signal cut metric by\nd □ (W, f ), (V, g) = ∥(W, f ) -(V, g)∥ □ .\nWe next define a pseudo metric that makes the space of graphon-signals a compact space. Let S ′ [0,1] be the set of measurable measure preserving bijections between co-null sets of [0, 1], namely,\nS ′ [0,1] = {ϕ : A → B | A, B co-null in [0, 1], and ∀S ∈ A, µ(S) = µ(ϕ(S))} ,\nwhere ϕ is a measurable bijection and A, B, S are measurable. For ϕ ∈ S ′ [0,1] , we define W ϕ (x, y) := W (ϕ(x), ϕ(y)), and f ϕ (z) = f (ϕ(z)). Note that W ϕ and f ϕ are only defined up to a null-set, and we arbitrarily set W, W ϕ , f and f ϕ to 0 in their respective null-sets, which does not affect our analysis. Define the cut distance between two graphon-signals (W, f ), (V, g) ∈ WL r by\nδ □ (W, f ), (V, g) = inf ϕ∈S ′ [0,1] d □ (W, f ), (V, g) ϕ .(4)\nHere, (V, g) ϕ := (V ϕ , g ϕ ). More details on this construction are given in Appendix A.3. The graphon-signal cut distance δ □ is a pseudo-metric, and can be made into a metric by introducing the equivalence relation:\n(W, f ) ∼ (V, g) if δ □ ((W, f ), (V, g)) = 0. The quotient space WL r := WL r / ∼ of equivalence classes [(W, f )] of graphon-signals (W, f ) is a metric space with the metric δ □ ([(W, f )], [(V, g)]) = δ □ ((W, f ), (V, g)).\nBy abuse of terminology, we call elements of WL r also graphon-signals. A graphon-signal in WL r is defined irrespective of a specific \"indexing\" of the nodes in [0, 1]." 
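As an illustration of the cut norm and cut distance for step graphon-signals, the following is our sketch, not part of the original text. On an equipartition, the signal cut norm equals max of the integrals of the positive and negative parts (see Appendix A.2), and for step graphons the supremum in the cut norm can be restricted to unions of blocks (see Lemma B.12 in the appendix), so both can be evaluated by brute force for a small number of blocks. The last function only gives an upper bound on the cut distance, since it restricts the measure preserving bijections to permutations of the intervals.

```python
import numpy as np
from itertools import combinations, permutations

def signal_cut_norm(f):
    """Cut norm of a step signal on an equipartition: max of the integrals of f_+ and f_-."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    return max(np.clip(f, 0, None).sum(), np.clip(-f, 0, None).sum()) / n

def graphon_cut_norm(W):
    """Cut norm of a step kernel on an equipartition, brute force over block subsets (small k only)."""
    W = np.asarray(W, dtype=float)
    k = W.shape[0]
    M = W / k ** 2                      # integral of W over each block pair
    subsets = [list(c) for r in range(k + 1) for c in combinations(range(k), r)]
    return max(abs(M[np.ix_(S, T)].sum()) for S in subsets for T in subsets)

def cut_distance_upper_bound(W1, f1, W2, f2):
    """Upper bound on the cut distance between two step graphon-signals on the same equipartition,
    minimizing only over interval-permuting measure preserving bijections."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    W1, W2 = np.asarray(W1, float), np.asarray(W2, float)
    best = np.inf
    for p in permutations(range(len(f1))):
        p = list(p)
        d = graphon_cut_norm(W1 - W2[np.ix_(p, p)]) + signal_cut_norm(f1 - f2[p])
        best = min(best, d)
    return best
```

Exact computation of the cut norm over general measurable subsets is combinatorial in nature, which is why the sketch brute-forces over block subsets and is only meant for a handful of blocks.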
}, { "figure_ref": [], "heading": "Induced graphon-signals", "publication_ref": [], "table_ref": [], "text": "Any graph-signal can be identified with a corresponding graphon-signal as follows.\nDefinition 3.2. Let (G, f ) be a graph-signal with node set [n] and adjacency matrix\nA = {a i,j } i,j∈[n] . Let {I k } n k=1 with I k = [(k -1)/n, k/n) be the equipartition of [0, 1] into n intervals. The graphon- signal (W, f ) (G,f ) = (W G , f f ) induced by (G, f ) is defined by W G (x, y) = n i,j=1 a ij 1 Ii (x)1 Ij (y),\nand\nf f (z) = n i f i 1 Ii (z).\nWe denote (W, f ) (G,f ) = (W G , f f ). We identify any graph-signal with its induced graphon-signal. This way, we define the cut distance between a graph-signal and a graphon-signal. As before, the cut distance between a graph-signal (G, f ) and a graphon-signal (W, g) can be interpreted as how much the deterministic graph-signal (G, f ) \"looks like\" it was randomly sampled from (W, g). To formulate our regularity lemma, we first define spaces of step functions." }, { "figure_ref": [ "fig_1" ], "heading": "Graphon-signal regularity lemma", "publication_ref": [ "b5" ], "table_ref": [], "text": "Definition 3.3. Given a partition P k , and d ∈ N, we define the space S d P k of step functions of dimension d over the partition P k to be the space of functions F : [0, 1] d → R of the form\nF (x 1 , . . . , x d ) = j=(j1,...,j d )∈[k] d c j d l=1 1 Pj l (x l ),(5)\nfor any choice of\n{c j ∈ R} j∈[k] d .\nWe call any element of W 0 ∩ S 2 P k a step graphon with respect to P k . A step graphon is also called a stochastic block model (SBM). We call any element of L ∞ r [0, 1] ∩ S 1 P k a step signal. We also call\n[WL r ] P k := (W 0 ∩ S 2 P k ) × (L ∞ r [0, 1] ∩ S 1 P k )\nthe space of SBMs with respect to P k . In Appendix B.2 we give a number of versions of the graphon-signal regularity lemma. Here, we show one version in which the partition is fixed regardless of the graphon-signal. Theorem 3.4 (Regularity lemma for graphon-signals -equipartition). For any c > 1, and any sufficiently small ϵ > 0, for every n ≥ 2 ⌈ 9c 4ϵ 2 ⌉ and every (W, f ) ∈ WL r , there exists a step graphon-signal\n(W n , f n ) ∈ [WL r ] In such that δ □ (W, f ), (W n , f n ) ≤ ϵ,(6)\nwhere I n is the equipartition of [0, 1] into n intervals.\nFigure 2 illustrates the graphon-signal regularity lemma. By identifying graph-signals with their induced graphon-signals, (6) shows that the space of graph-signals is dense in the space of graphon-signals with cut distance.\nSimilarly to the classical case, Theorem 3.4 is interpreted as follows. While deterministic graph-signals may seem intricate and complex, they are actually regular, and \"look like\" random graph-signals that were sampled from SBMs, where the number of blocks of the SBM only depends on the desired approximation error between the SBM and the graph-signal, and not on the graph-signal itself.\nRemark 3.5. The lower bound n ≥ 2 ⌈ 9c 4ϵ 2 ⌉ on the number of steps in the graphon-signal regularity lemma is essentially tight in the following sense. There is a universal constant C such that for every ϵ > 0 there exists a graphon-signal (W, f ) such that no step graphon-signal (W ′ , f ′ ) with less than 2 ⌈ C ϵ 2 ⌉ steps satisfies δ □ (W, f ), (W ′ , f ′ ) ≤ ϵ. To see this, [8, Theorem 1.4, Theorem 7.1] shows that the bound in the standard weak regularity lemma (for graphs/graphons) is essentially tight in the above sense. 
For the graphon-signal case, we can take the graphon W ′ from [8, Theorem 7.1] which does not allow a regularity partition with less than 2 ⌈ C ϵ 2 ⌉ steps, and consider the graphon-signal (W ′ , 1), which then also does not allow such a regularity partition." }, { "figure_ref": [], "heading": "Compactness of the graphon-signal space and its covering number", "publication_ref": [], "table_ref": [], "text": "We prove that WL r is compact using Theorem 3.4, similarly to [25, Lemma 8]. Moreover, we can bound the number of balls of radius ϵ required to cover WL r .\nTheorem 3.6. The metric space ( WL r , δ □ ) is compact. Moreover, given r > 0 and c > 1, for every sufficiently small ϵ > 0, the space WL r can be covered by\nκ(ϵ) = 2 k 2 (7) balls of radius ϵ, where k = ⌈2 9c 4ϵ 2 ⌉.\nThe Proof of Theorem 3.6 is given in Appendix C. This is a powerful result -the space of arbitrarily large graph-signals is dense in the \"small\" space WL r . We will use this property in Section 4.3 to prove a generalization bound for MPNNs." }, { "figure_ref": [], "heading": "Graphon-signal sampling lemmas", "publication_ref": [ "b24" ], "table_ref": [], "text": "In this section we prove that randomly sampling a graphon signal produces a graph-signal that is close in cut distance to the graphon signal. Let us first describe the sampling setting. More details on the construction are given in Appendix D.1. Let Λ = (λ 1 , . . . λ k ) ∈ [0, 1] k be k independent uniform random samples from [0, 1], and (W, f ) ∈ WL r . We define the random weighted graph W (Λ) as the weighted graph with k nodes and edge weight w i,j = W (λ i , λ j ) between node i and node j. We similarly define the random sampled signal f (Λ) with value f i = f (λ i ) at each node i. Note that W (Λ) and f (Λ) share the sample points Λ. We then define a random simple graph as follows. We treat each w i,j = W (λ i , λ j ) as the parameter of a Bernoulli variable e i,j , where P(e i,j = 1) = w i,j and P(e i,j = 0) = 1 -w i,j . We define the random simple graph G(W, Λ) as the simple graph with an edge between each node i and node j if and only if e i,j = 1.\nWe note that, given a graph signal (G, f ), sampling a graph-signal from (W, f ) (G,f ) is equivalent to subsampling the nodes of G independently and uniformly (with repetitions), and considering the resulting subgraph and subsignal. Hence, we can study the more general case of sampling a graphon-signal, where graph-signal sub-sampling is a special case. We now extend [24,Lemma 10.16], which bounds the cut distance between a graphon and its sampled graph, to the case of a sampled graphon-signal.\nTheorem 3.7 (Sampling lemma for graphon-signals). Let r > 1. There exists a constant K 0 > 0 that depends on r, such that for every k ≥ K 0 , every (W, f ) ∈ WL r , and for\nΛ = (λ 1 , . . . λ k ) ∈ [0, 1] k independent uniform random samples from [0, 1], we have E δ □ W, f , W (Λ), f (Λ) < 15 log(k) ,\nand\nE δ □ W, f , G(W, Λ), f (Λ) < 15 log(k) .\nThe proof of Theorem 3.7 is given in Appendix D.2" }, { "figure_ref": [], "heading": "Graphon-signal analysis of MPNNs", "publication_ref": [], "table_ref": [], "text": "In this section, we propose utilizing the compactness of the graphon-signal space under cut distance, and the sampling lemma, to prove regularity results for MPNNs, uniform generalization bounds, and stability to subsampling theorems." 
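The sampling procedure behind Theorem 3.7 can be sketched as follows; this is our illustration, and the two-block SBM-style graphon and signal at the end are arbitrary choices, not taken from the text. Draw Λ = (λ_1, ..., λ_k) i.i.d. uniformly from [0, 1], evaluate the weighted graph W(Λ) and the signal f(Λ), and optionally replace each weight w_ij by a Bernoulli edge to obtain the simple graph G(W, Λ).

```python
import numpy as np

def sample_graph_signal(W, f, k, rng=None, simple=True):
    """Sample (W(Lambda), f(Lambda)) or (G(W, Lambda), f(Lambda)) from a graphon-signal (W, f).

    W: symmetric callable [0,1]^2 -> [0,1]; f: callable [0,1] -> R (both vectorized).
    Returns a k x k (weighted or 0/1) adjacency matrix and the k sampled signal values.
    """
    rng = np.random.default_rng(rng)
    lam = rng.uniform(0.0, 1.0, size=k)            # Lambda = (lambda_1, ..., lambda_k)
    P = W(lam[:, None], lam[None, :])              # w_ij = W(lambda_i, lambda_j)
    if not simple:
        return P, f(lam)                           # the weighted graph W(Lambda)
    U = rng.uniform(size=(k, k))
    A = (np.triu(U, 1) < np.triu(P, 1)).astype(float)
    return A + A.T, f(lam)                         # Bernoulli edges e_ij with P(e_ij = 1) = w_ij

# Illustrative two-block SBM graphon with a block-wise signal (our choice, not from the text).
W = lambda x, y: np.where((x < 0.5) == (y < 0.5), 0.8, 0.1)
f = lambda x: np.where(x < 0.5, 1.0, -1.0)
A, sig = sample_graph_signal(W, f, k=200, rng=0)
```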
}, { "figure_ref": [], "heading": "MPNNs on graphon signals", "publication_ref": [ "b27", "b30", "b36", "b8", "b20", "b22" ], "table_ref": [], "text": "Next, we define MPNNs on graphon-signals, in such a way that the application of a MPNN on an induced graphon-signal is equivalent to applying the MPNN on the graph-signal and then inducing it. A similar construction was presented in [27], for average aggregation, but we use normalized sum aggregation.\nAt each layer, we define the message function Φ(x, y) as a linear combination of simple tensors as follows. Let K ∈ N. For every k ∈ [K], let ξ k r , ξ k t : R d → R p be Lipschitz continuous functions that we call the receiver and transmitter message functions respectively. Define the message function Φ :\nR 2d → R p by Φ(a, b) = K k=1 ξ k r (a)ξ k t (b),\nwhere the multiplication is elementwise along the feature dimension. Given a signal f , define the message kernel\nΦ f : [0, 1] 2 → R p by Φ f (x, y) = Φ(f (x), f (y)) = K k=1 ξ k r (f (x))ξ k t (f (y)).\nWe see the x variable of Φ f (x, y) as the receiver of the message, and y as the transmitter. Define the aggregation of a message kernel Q : [0, 1] 2 → R p , with respect to the graphon W ∈ W 0 , to be the signal\nAgg(W, Q) ∈ L ∞ r [0, 1], defined by Agg(W, Q)(x) = 1 0 W (x, y)Q(x, y)dy,\nfor an appropriate r > 0. A message passing layer (MPL) takes the form\nf (t) → Agg(W, Φ(t+1)\nf (t) )\n, where f (t) is the signal at layer t. Each MPL is optionally followed by an update layer, which updates the signal pointwise via\nf (t+1) = µ (t+1) f (t) (x), Agg(W, Φ (t+1) f (t) )(x)\n, where µ (t+1) is a learnable mapping called the update function. A MPNN is defined by choosing the number of layers T , and defining message and update functions {µ t ,\n( t ξ k r ), ( t ξ k t )} k∈[K],t∈[T ]\n. A MPNN only modifies the signal, and keeps the graph/graphon intact. We denote by Θ t (W, f ) the output of the MPNN applied on (W, f ) ∈ WL r at layer t ∈ [T ]. More details on the construction are given in Appendix E.1.\nThe above construction is rather general. Indeed, it is well known that many classes of functions\nF : R d ×R d → R C (e.g., L 2 functions) can be approximated by (finite) linear combinations of simple tensors F (a, b) ≈ K k=1 ξ k 1 (a)ξ k 2 (b).\nHence, message passing based on general message functions Φ : R 2d → R p can be approximated by our construction. Moreover, many well-known MPNNs can be written using our formulation with a small K, e.g., [30,36] and spectral convolutional networks [9,20,22], if we replace the aggregation in these method with normalized sum aggregation.\nIn Appendix E.1 we show that for any graph-signal (G, f ), we have\nΘ t (W, f ) (G,f ) = (W, f ) Θt(G,f )\n, where the MPNN on a graph-signal is defined with normalized sum aggregation\nAgg(G, Φ f ) i = 1 n j∈[n] a i,j (Φ f ) i,j .\nHere, n is the number of nodes, and {a i,j } i,j∈[n] is the adjacency matrix of G. Hence, we may identify graph-signals with their induced graphon-signals when analyzing MPNNs." }, { "figure_ref": [], "heading": "Lipschitz continuity of MPNNs", "publication_ref": [], "table_ref": [], "text": "We now show that, under the above construction, MPNNs are Lipschitz continuous with respect to cut distance.\nTheorem 4.1. Let Θ be a MPNN with T layers. Suppose that there exist constants L, B > 0 such that for every layer t ∈ [T ], every y ∈ {t, r} and every k ∈ [K],\nµ t (0) , t ξ k y (0) ≤ B,and\nL µ t , Lt ξ k y < L,\nwhere L µ t and Lt ξ k y are the Lipschitz constants of µ t and t ξ k y . 
Then, there exists a constant L Θ (that depends on T, K, B and L) such that for every\n(W, f ), (V, g) ∈ WL r , ∥Θ(W, f ) -Θ(V, g)∥ □ ≤ L Θ ∥f -g∥ □ + ∥W -V ∥ □ .\nThe constant L Θ depends exponentially on T , and polynomially on K, B and L. For formulas of L Θ , under different assumptions on the hypothesis class of the MPNN, we refer to Appendix F." }, { "figure_ref": [], "heading": "A generalization theorem for MPNN", "publication_ref": [ "b33", "b13", "b23", "b27", "b29", "b6", "b7" ], "table_ref": [ "tab_0" ], "text": "In this section we prove a uniform generalization bound for MPNNs. For background on generalization analysis, we refer the reader to Appendix G.1. While uniform generalization bounds are considered a classical approach in standard neural networks, the approach is less developed in the case of MPNNs. For some works on generalization theorems of MPNNs, see [33,14,23,27,29].\nWhen a MPNN is used for classification or regression, Θ T is followed by global pooling. Namely, for the output signal g : [0, 1] → R p , we return g(x)dx ∈ R p . This is then typically followed by a learnable mapping R p → R C . In our analysis, we see this mapping as part of the loss, which can hence be learnable. The combined loss is assumed to be Lipschitz continuous2 .\nWe model the ground truth classifier into C classes as a piecewise constant function C : WL r → {0, 1} C , where the sets of different steps in WL r are Borel measurable sets, correspond to different classes. We consider an arbitrary probability Borel measure ν on WL r as the data distribution. More details on the construction are given in Appendix G.2.\nLet Lip( WL r , L 1 ) be the space of Lipschitz continuous mappings Υ : WL r → R C with Lipschitz constant L 1 . By Theorem 4.1, we may assume that our hypothesis class of MPNNs is a subset of Lip( WL r , L 1 ) for some given L 1 . Let X = (X 1 , . . . , X N ) be independent random samples from the data distribution ( WL r , ν). Let Υ X be a model that may depend on the sampled data, e.g., via training. Let E be a Lipschitz continuous loss function3 with Lipschitz constant L 2 . For every function Υ in the hypothesis class Lip( WL r , L 1 ) (i.e. Υ X ), define the statistical risk\nR(Υ) = E E(Υ, C) = E(Υ(x), C(x))dν(x). We define the empirical risk R(Υ X , X) = 1 N N i=1 E Υ X (X i ), C(X i ) .\nTheorem 4.2 (MPNN generalization theorem). Consider the above classification setting, and let L = L 1 L 2 . Let X 1 , . . . , X N be independent random samples from the data distribution ( WL r , ν).\nThen, for every p > 0, there exists an event U p ⊂ WL r N , with probability\nν N (U p ) ≥ 1 -Cp -2 C 2 N , in which R(Υ X ) -R(Υ X , X) ≤ ξ -1 (N/2C) 2L + 1 √ 2 L + E(0, 0) 1 + log(2/p) ,(8)\nwhere ξ(ϵ) = κ(ϵ) 2 log(κ(ϵ))\nϵ 2\n, κ is the covering number of WL r given in (7), and ξ -1 is the inverse function of ξ.\nThe theorem is proved in Appendix G.4. Note that the term ξ -1 (N/2C) in (8) decreases to zero as the size of the training set N goes to infinity.\nIn Table 1 we compare the assumptions and dependency on the number of data points of different generalization theorems. All past works consider special assumptions. Our work provides upper bounds under no assumptions on the data distribution, and only mild assumptions on the MPNN (Lipschitz continuity of the message passing and update functions). 
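To make the MPNN construction of Section 4.1 concrete on a finite graph-signal, the following is a minimal NumPy sketch; it is ours, and the tanh message and update functions, the feature shapes, and the random graph are arbitrary illustrative choices, not the architecture of any cited method. It implements one message passing layer with message kernel of the form Phi(x_i, x_j) = sum_k xi_r^k(x_i) xi_t^k(x_j), normalized sum aggregation, and mean pooling as the readout; since the message and update functions here are compositions of linear maps and tanh, they are Lipschitz, in the spirit of the assumptions of Theorem 4.1.

```python
import numpy as np

def mpnn_layer(A, X, xi_r, xi_t, update):
    """One MPL with message kernel Phi(x_i, x_j) = sum_k xi_r[k](x_i) * xi_t[k](x_j)
    (elementwise product) and normalized sum aggregation Agg_i = (1/n) sum_j A_ij Phi(x_i, x_j)."""
    n = A.shape[0]
    agg = 0.0
    for xr, xt in zip(xi_r, xi_t):
        R, T = xr(X), xt(X)               # both (n, p)
        agg = agg + R * (A @ T)           # sum_j A_ij R_i T_j = R_i * (A T)_i; avoids an n x n x p tensor
    return update(X, agg / n)             # normalized sum aggregation, then pointwise update

# Toy instantiation (illustrative only).
rng = np.random.default_rng(0)
n, d, p, K = 50, 3, 4, 2
A = (rng.uniform(size=(n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T                              # simple undirected graph
X = rng.normal(size=(n, d))                                 # node signal
W_r = [rng.normal(size=(d, p)) for _ in range(K)]
W_t = [rng.normal(size=(d, p)) for _ in range(K)]
xi_r = [lambda Z, Wk=Wk: np.tanh(Z @ Wk) for Wk in W_r]     # Lipschitz receiver functions
xi_t = [lambda Z, Wk=Wk: np.tanh(Z @ Wk) for Wk in W_t]     # Lipschitz transmitter functions
update = lambda Z, M: np.tanh(np.concatenate([Z, M], axis=1))
H = mpnn_layer(A, X, xi_r, xi_t, update)                    # (n, d + p) updated features
readout = H.mean(axis=0)                                    # global pooling, as in the classification setting
```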
In Table 2 " }, { "figure_ref": [], "heading": "Generalization analysis paper", "publication_ref": [ "b13", "b29", "b27" ], "table_ref": [], "text": "Assumption on the graphs No weight sharing General MPL Dependency on N Generalization Limits of GNNs [14] bounded degree\n✗ ✗ N -1/2 PAC-bayesian MPNN [23] bounded degree ✗ ✗ N -1/2 PAC-bayesian GCN [23] bounded degree ✓ ✗ N -1/2\nVC meets 1WL [29] bounded color complexity ✓ ✗ N -1/2 Generalization Analysis of MPNNs [27] sampled from a small set of graphons\n✓ ✓ N -1/2\nOur graphon-signal theory non\n✓ ✓ ξ -1 (N )" }, { "figure_ref": [], "heading": "Stability of MPNNs to graph-signal subsampling", "publication_ref": [ "b15", "b4", "b6", "b21", "b31", "b17", "b26", "b32" ], "table_ref": [], "text": "When working with very large graphs, it is often the practice to subsample the large graph, and apply a MPNN to the smaller subsampled graph [16,5,7]. Here, we show that such an approach is justified theoretically. Namely, any (Lipschitz continuous) MPNN has approximately the same outcome on the large graph and its subsampled version.\nTransferability and stability analysis [21,31,18,26,32] often studies a related setting. Namely, it is shown that a MPNN applied on a randomly sampled graph G approximates the MPNN on the graphon W from which the graph is sampled. However, previous analyses assumed that the generating graphon W has metric properties. Namely, it is assumed that there is some probability metric space M which is the graphon domain, and the graphon W : M × M → [0, 1] is Lipschitz continuous with respect to M, where the dimension of M affects the asymptotics. This is an unnatural setting, as general graphons are only assumed to be measurable, not continuous. Constraining the construction to Lipschitz continuous graphons with a uniformly bounded Lipschitz constant only accounts for a small subset of WL r , and, hence, limits the analysis significantly. In comparison, our analysis applies to any graphon-signal in WL r . When we only assume that the graphon is measurable, [0, 1] is only treated as a standard (atomless) probability space, which is very general, and equivalent for example to [0, 1] d for any d ∈ N, and to any Polish space. Note that graphon theory allows restricting the graphon domain to [0, 1] since [0, 1], as a measure space, is very generic. \nΣ(Λ) = G(W, Λ), Θ G(W, Λ), f (Λ) . Then E δ □ Σ, Σ(Λ) < 15 log(k) L." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "We presented an extension of graphon theory to a graphon-signal theory. Especially, we extended well-known regularity, compactness, and sampling lemmas from graphons to graphon-signals. We then showed that the normalized sum aggregation of MPNNs is in some sense compatible with the graphon-signal cut distance, which leads to the Lipschitz continuity of MPNNs with respect to cut distance. This then allowed us to derive generalization and sampling theorems for MPNNs. The strength of our analysis is in its generality and simplicity-it is based on a natural notion of graph similarity, that allows studying the space of all graph-signals, it applies to any graph-signal data distribution, and does not impose any restriction on the number of parameters of the MPNNs, only to their regularity through the Lipschitzness of the message functions.\nThe main limitation of the theory is the very slow asymptotics of the generalization and subsampling theorems. 
This follows the fact that the upper bound on the covering number of the compact space WL r grows faster than the covering number of any finite-dimensional compact space. Yet, we believe that our work can serve as a point of departure for future works, that 1) will model subspaces of WL r of lower complexity, which approximate the support of the data-distribution in real-life settings of graph machine learning, and, 2) will lead to improved asymptotics. Another open problem is to find an essentially tight estimate of the covering number of WL r , which may be lower than the estimate presented in this paper." }, { "figure_ref": [], "heading": "A Basic definitions and properties of graphon-signals", "publication_ref": [], "table_ref": [], "text": "In this appendix, we give basic properties of graphon-signals, cut norm, and cut distance." }, { "figure_ref": [], "heading": "A.1 Lebesgue spaces and signal spaces", "publication_ref": [], "table_ref": [], "text": "For 1 ≤ p < ∞, the space L p [0, 1] is the space of (equivalence classes up to null-set) of measurable functions f :\n[0, 1] → R, with finite L 1 norm ∥f ∥ p = 1 0 |f (x)| p dx 1/p < ∞. The space L ∞ [0, 1] is the space of (equivalence classes) of measurable functions with finite L ∞ norm ∥f ∥ ∞ = ess sup x∈[0,1] |f (x)| = inf{a ≥ 0 | |f (x)| ≤ a for almost every x ∈ [0, 1]}." }, { "figure_ref": [], "heading": "A.2 Properties of cut norm", "publication_ref": [], "table_ref": [], "text": "Every f ∈ L ∞ r [0, 1] can be written as f = f + -f -, where\nf + (x) = f (x) f (x) > 0 0 f (x) ≤ 0.\nand f -is defined similarly. It is easy to see that the supremum in (3) is attained for S which is either the support of f + or f -, and\n∥f ∥ □ = max{∥f + ∥ 1 , ∥f -∥ 1 }.\nAs a result, the signal cut norm is equivalent to the L 1 norm\n1 2 ∥f ∥ 1 ≤ ∥f ∥ □ ≤ ∥f ∥ 1 .(9)\nMoreover, for every r > 0 and measurable function\nW : [0, 1] 2 → [-r, r], 0 ≤ ∥W ∥ □ ≤ ∥W ∥ 1 ≤ ∥W ∥ 2 ≤ ∥W ∥ ∞ ≤ r.\nThe following lemma is from [24, Lemma 8.10].\nLemma A.1. For every measurable W :\n[0, 1] 2 → R, the supremum sup S,T ⊂[0,1] S T W (x, y)dxdy\nis attained for some S, T ." }, { "figure_ref": [], "heading": "A.3 Properties of cut distance and measure preserving bijections", "publication_ref": [], "table_ref": [], "text": "Recall that we denote the standard Lebesgue measure of [0, 1] by µ. Let S [0,1] be the space of measurable bijections [0, 1] → [0, 1] with measurable inverse, that are measure preserving, namely, for every measurable\nA ⊂ [0, 1], µ(A) = µ(ϕ(A)). Recall that S ′ [0,1]\nis the space of measurable bijections between co-null sets of [0, 1].\nFor ϕ ∈ S [0,1] or ϕ ∈ S ′ [0,1] , we define W ϕ (x, y) := W (ϕ(x), ϕ(y)). In case ϕ ∈ S ′ [0,1] , W ϕ is only define up to a null-set, and we arbitrarily set W to 0 in this null-set. This does not affect our analysis, as the cut norm is not affected by changes to the values of functions on a null sets. The cut-metric between graphons is then defined to be\nδ □ (W, W ϕ ) = inf ϕ∈S [0,1] ∥W -W ϕ ∥ □ = inf ϕ∈S [0,1] sup S,T ⊆[0,1] S×T W (x, y) -W (ϕ(x), ϕ(y)) dxdy .\nRemark A.2. Note that δ □ can be defined equivalently with respect to ϕ ∈ S ′ [0,1] . Indeed, By [24, Equation (8.17) and Theorem 8.13], δ □ can be defined equivalently with respect to the measure preserving maps that are not necessarily invertible. These include the extensions of mappings from S ′ [0,1] by defining ϕ(x) = 0 for every x in the co-null set underlying ϕ.\nSimilarly to the graphon case, the graphon-signal distance δ □ is a pseudo-metric. 
By introducing an equivalence relation (W, f ) ∼ (V, g) if δ □ ((W, f ), (V, g)) = 0, and the quotient space WL r := WL r / ∼, WL r is a metric space with a metric δ □ defined by δ\n□ ([(W, f )], [V, g)]) = d □ (W, V ) where [(W, f )], [(V, g)],\nare the equivalence classes of (W, f ) and (V, g) respectively. By abuse of terminology, we call elements of WL r also graphon-signals.\nRemark A.3. We note that WL r ̸ = W 0 × L ∞ r [0, 1] (for the natural definition of L ∞ r [0, 1]), since in WL r we require that the measure preserving bijection is shared between the graphon W and the signal f . Sharing the measure preserving bijetion between W and f is an important modelling requirement, as ϕ is seen as a \"re-indexing\" of the node set [0, 1]. When re-indexing a node x, both the neighborhood W (x, •) of x and the signal value f (x) at x should change together, otherwise, the graphon and the signal would fall out of alignment.\nWe identify graphs with their induced graphons and signal with their induced signals" }, { "figure_ref": [], "heading": "B Graphon-signal regularity lemmas", "publication_ref": [], "table_ref": [], "text": "In this appendix, we prove a number of versions of the graphon-signal regularity lemma, where Theorem 3.4 is one version." }, { "figure_ref": [], "heading": "B.1 Properties of partitions and step functions", "publication_ref": [ "b3" ], "table_ref": [], "text": "Given a partition P k and d ∈ N, the next lemma shows that there is an equiparition E n such that the space S d En uniformly approximates the space \nS d P k in L 1 [0,\n∥F -F ′ ∥ 1 ≤ d∥F ∥ ∞ k n .\nProof. Let P k = {P 1 , . . . , P k } be a partition of [0, 1]. For each i, we divide P i into subsets P i = {P i,1 , . . . , P i,mi } of measure 1/n (up to the last set) with a residual, as follows. If µ(P i ) < 1/n, we choose P i = {P i,1 = P i }. Otherwise, we take P i,1 , . . . , P i,mi-1 of measure 1/n, and µ(P i,mi ) ≤ 1/n. We call P i,mi the remainder. We now define the sequence of sets of measure 1/n Q := {P 1,1 , . . . , P 1,m1-1 , P 2,1 , . . . , P 2,m2-1 , . . . , P k,1 , . . . , P k,m k -1 },\nwhere, by abuse of notation, for any i such that m i = 1, we set {P i,1 , . . . , P i,mi-1 } = ∅ in the above formula. Note that in general ∪Q ̸ = [0, 1]. We moreover define the union of residuals\nΠ := P 1,m1 ∪ P 2,m2 ∪ • • • ∪ P k,m k . Note that µ(Π) = 1 -µ(∪Q) = 1 -l 1 n = h/n\n, where l is the number of elements in Q, and h = n -l. Hence, we can partition Π into h parts {Π 1 , . . . Π h } of measure 1/n with no residual. Thus we have obtain the equipartition of [0, 1] to n sets of measure 1/n\nE n := {P 1,1 , . . . , P 1,m1-1 , P 2,1 , . . . , P 2,m2-1 , . . . , S k,1 , . . . , S k,m k -1 , Π 1 , Π 2 , . . . , Π h }.(11)\nFor convenience, we also denote\nE n = {Z 1 , . . . , Z n }. Let F (x) = j=(j1,...,j d )∈[k] d c j d l=1 1 Pj l (x l ) ∈ S d P k .\nWe can write F with respect to the equipartition E n as\nF (x) = j=(j1,...,j d )∈[n] d ; ∀l=1,...,d, Zj l ̸ ⊂Π cj d l=1 1 Zj l (x l ) + E(x),\nfor some {c j } with the same values as the values of {c j }. Here, E is supported in the set\nΠ (d) ⊂ [0, 1] d , defiedby\nΠ (d) = Π × [0, 1] d-1 ∪ [0, 1] × Π × [0, 1] d-2 ∪ . . . ∪ [0, 1] d-1 × Π .\nConsider the step function\nF ′ (x) = j=(j1,...,j d )∈[n] d ; ∀l=1,...,d, Zj l ̸ ⊂Π cj d l=1 1 Zj l (x l ) ∈ S d En .\nSince µ(Π) = k/n, we have µ(Π (d) ) = dk/n, and so\n∥F -F ′ ∥ 1 ≤ d∥F ∥ ∞ k n . Lemma B.2. Let {Q 1 , Q 2 , . . . , Q m } partition of [0, 1]. Let {I 1 , I 2 , . . . 
, I m } be a partition of [0, 1]\ninto intervals, such that for every j ∈ [m], µ(Q j ) = µ(I j ). Then, there exists a measure preserving bijection ϕ :\n[0, 1] → [0, 1] ∈ S ′ [0,1] such that 4 ϕ(Q j ) = I j\nProof. By the definition of a standard probability space, the measure space induced by [0, 1] on a non-null subset Q j ⊆ [0, 1] is a standard probability space. Moreover, each Q j is atomless, since [0, 1] is atomless. Since there is a measure-preserving bijection (up to null-set) between any two atomless standard probability spaces, we obtain the result.\nLemma B.3. Let S = {S j ⊂ [0, 1]} m-1\nj=0 be a collection of measurable sets (that are not disjoint in general), and d ∈ N. Let C d S be the space of functions F : [0, 1] d → R of the form\nF (x) = j=(j1,...,j d )∈[m] d c j d l=1 1 Sj l (x l ),\nfor some choice of {c j ∈ R} j∈[m] d . Then, there exists a partition P k = {P 1 , . . . , P k } into k = 2 m sets, that depends only on S, such that\nC d S ⊂ S d P k .\nProof. The partition P k = {P 1 , . . . , P k } is defined as follows. Let\nP = P ⊂ [0, 1] | ∃ x ∈ [0, 1], P = ∩{S j ∈ S|x ∈ S j } .\nWe must have | P| ≤ 2 m . Indeed, there are at most 2 m different subsets of S for the intersections.\nWe endow an arbitrarily order to P and turn it into a sequence. If the size of P is strictly smaller than 2 m , we add enough copies of {∅} to P to make the size of the sequence 2 m , that we denote by P k , where k = 2 m .\nThe following simple lemma is proved similarly to Lemma B.3. We give it without proof.\nLemma B. 4.\nLet P k = {P 1 , . . . , P k }, Q m = {Q 1 , . . . , Q k } be two partitions.\nThen, there exists a partition Z km into km sets such that for every d,\nS d P k ⊂ S d Z mk , and S d Qm ⊂ S d Z mk ." }, { "figure_ref": [], "heading": "B.2 List of graphon-signal regularity lemmas", "publication_ref": [ "b25", "b25" ], "table_ref": [], "text": "The following lemma from [25, Lemma 4.1] is a tool in the proof of the weak regularity lemma.\nLemma B.5. Let K 1 , K 2 , . . . be arbitrary nonempty subsets (not necessarily subspaces) of a Hilbert space H. Then, for every ϵ > 0 and v ∈ H there is m ≤ ⌈1/ϵ 2 ⌉ and v i ∈ K i and\nγ i ∈ R, i ∈ [m], such that for every w ∈ K m+1 w, v -( m i=1 γ i v i ) ≤ ϵ ∥w∥∥v∥.(12)\nThe following theorem is an extension of the graphon regularity lemma from [25] to the case of graphon-signals. Much of the proof follows the steps of [25].\nTheorem B.6 (Weak regularity lemma for graphon-signals). Let ϵ, ρ > 0. For every (W, f ) ∈ WL r there exists a partition P k of [0, 1] into k = ⌈r/ρ⌉ 2 2⌈1/ϵ 2 ⌉ sets, a step function graphon\nW k ∈ S 2 P k ∩ W 0 and a step function signal f k ∈ S 1 P k ∩ L ∞ r [0, 1], such that ∥W -W k ∥ □ ≤ ϵ and ∥f -f k ∥ □ ≤ ρ.(13)\nProof. We first analyze the graphon part. In Lemma B.5, set H = L 2 ([0, 1] 2 ) and for all i ∈ N, set\nK i = K = {1 S×T | S, T ⊂ [0, 1] measurable} .\nThen, by Lemma B.5, there exists m ≤ ⌈1/ϵ 2 ⌉ two sequences of sets\nS m = {S i } m i=1 , T m = {T i } m i=1 , a sequence of coefficients {γ i ∈ R} m i=1 ,and\nW ′ ϵ = m i=1 γ i 1 Si×Ti ,\nsuch that for any V ∈ K, given by V (x, y) = 1 S (x)1 T (y), we have\nV (x, y) W (x, y) -W ′ ϵ (x, y) dxdy = S T W (x, y) -W ′ ϵ (x, y) dxdy(14)\n≤ ϵ∥1 S×T ∥∥W ∥ ≤ ϵ.(15)\nWe may choose exactly m = ⌈1/ϵ 2 ⌉ by adding copies of the empty set to S m and T m , if the constant m guaranteed by Lemma B.5 is strictly less than ⌈1/ϵ 2 ⌉. Let W ϵ (x, y) = (W ′ ϵ (x, y) + W ′ ϵ (y, x))/2. 
By the symmetry of W , it is easy to see that Equation ( 15) is also true when replacing W ′ ϵ by W ϵ . Indeed,\nV (x, y) W (x, y) -W ϵ (x, y) dxdy ≤ 1/2 V (x, y) W (x, y) -W ′ ϵ (x, y) dxdy + 1/2 V (y, x) W (x, y) -W ′ ϵ (x, y) dxdy ≤ ϵ.\nConsider the concatenation of the two sequences T m , S m given by Y 2m = T m ∪ S m . Note that in the notation of Lemma B.3, W ϵ ∈ C 2 Y2m . Hence, by Lemma B.3, there exists a partition\nQ n into n = 2 2m = 2 2⌈ 1 ϵ 2 ⌉\nsets, such that W ϵ is a step graphon with respect to Q n . To analyze the signal part, we partition the range of the signal [-r, r] into j = ⌈r/ρ⌉ intervals {J i } j i=1 of length less or equal to 2ρ, where the left edge point of each\nJ i is -r + (i -1) ρ r . Consider the partition of [0, 1] based on the preimages Y j = {Y i = f -1 (J i )} j i=1 .\nIt is easy to see that for the step signal\nf ρ (x) = j i=1 a i 1 Yi (x),\nwhere a i the midpoint of the interval Y i , we have\n∥f -f ρ ∥ □ ≤ ∥f -f ρ ∥ 1 ≤ ρ.\nLastly, by Lemma B.4, there is a partition P k of [0, 1] into k = ⌈r/ρ⌉ 2 2⌈1/ϵ 2 ⌉ sets such that\nW ϵ ∈ S 2\nP k and f ρ ∈ S 1 P k ." }, { "figure_ref": [], "heading": "Corollary B.7 (Weak regularity lemma for graphon-signals -version 2)", "publication_ref": [], "table_ref": [], "text": ". Let r > 0 and c > 1.\nFor every sufficiently small ϵ > 0 (namely, ϵ that satisfies ( 17)), and for every (W, f ) ∈ WL r there exists a partition\nP k of [0, 1] into k = 2 ⌈2c/ϵ 2 ⌉ sets, a step graphon W k ∈ S 2 P k ∩ W 0 and a step signal f k ∈ S 1 P k ∩ L ∞ r [0, 1], such that d □ (W, f ), (W k , f k ) ≤ ϵ.\nProof. First, evoke Theorem B.6, with errors ∥W -W k ∥ □ ≤ ν and ∥f -f k ∥ □ ≤ ρ = ϵ -ν. We now show that there is some ϵ 0 > 0 such that for every ϵ < ϵ 0 , there is a choice of ν such that the number of sets in the partition, guaranteed by Theorem B.6, satisfies\nk(ν) := ⌈r/(ϵ -ν)⌉ 2 2⌈1/ν 2 ⌉ ≤ 2 ⌈2c/ϵ 2 ⌉ . Denote c = 1 + t. In case ν ≥ 2 2(1 + 0.5t)/ϵ 2 -1 ,(16)\nwe have\n2 2⌈1/ν 2 ⌉ ≤ 2 2(1+0.5t)/ϵ 2 .\nOn the other hand, for\nν ≤ ϵ - r 2 t/ϵ 2 -1 ,\nwe have ⌈r/(ϵ -ν)⌉ ≤ 2 2(0.5t)/ϵ 2 .\nThe reconcile these two conditions, we restrict to ϵ such that\nϵ - r 2 t/ϵ 2 -1 ≥ 2 2(1 + 0.5t)/ϵ 2 -1 . (17\n)\nThere exists ϵ 0 that depends on c and r (and hence also on t) such that for every ϵ < ϵ 0 (17) is satisfied. Indeed, for small enough ϵ,\n1 2 t/ϵ 2 -1 = 2 -t/ϵ 2 1 -2 -t/ϵ 2 < 2 -t/ϵ 2 < ϵ r 1 - 1 1 + 0.1t , so ϵ - r 2 t/ϵ 2 -1 > ϵ(1 + 0.1t).\nMoreover, for small enough ϵ,\n2 2(1 + 0.5t)/ϵ 2 -1 = ϵ 1 (1 + 0.5t) -ϵ 2 < ϵ/(1 + 0.4t).\nHence, for every ϵ < ϵ 0 , there is a choice of ν such that\nk(ν) = ⌈r/(ϵ -ν)⌉ 2 2⌈1/ν 2 ⌉ ≤ 2 2(0.5t)/ϵ 2 2 2(1+0.5t)/ϵ 2 ≤ 2 ⌈2c/ϵ 2 ⌉ .\nLastly, we add as many copies of ∅ to P k(ν) as needed so that we get a sequence of k = 2 ⌈2c/ϵ 2 ⌉ sets." }, { "figure_ref": [], "heading": "Theorem B.8 (Regularity lemma for graphon-signals -equipartition version)", "publication_ref": [ "b0" ], "table_ref": [], "text": ". Let c > 1 and r > 0. For any sufficiently small ϵ > 0, and every (W, f ) ∈ WL r there exists ϕ ∈ S ′ [0,1] , a step function graphon\n[W ϕ ] n ∈ S 2\nIn ∩ W 0 and a step signal\n[f ϕ ] n ∈ S 1 In ∩ L ∞ r [0, 1], such that d □ (W ϕ , f ϕ ) , [W ϕ ] n , [f ϕ ] n ≤ ϵ, (18\n)\nwhere \nI n is the equipartition of [0, 1] into n = 2 ⌈2c/ϵ 2 ⌉ intervals. Proof. 
Let c = 1 + t > 1, ϵ > 0 and 0 < α, β < 1.\n∥W k -W n ∥ □ ≤ 2ϵβ and ∥f k -f n ∥ 1 ≤ rϵβ.\nHence, by the triangle inequality,\nd □ (W, f ), (W n , f n ) ≤ d □ (W, f ), (W k , f k ) + d □ (W k , f k ), (W n , f n ) ≤ ϵ(α + (2 + r)β).\nIn the following, we restrict to choices of α and β which satisfy α + (2 + r)β = 1. Consider the function n : (0, 1) → N defined by\nn(α) := ⌈2 4(1+0.5t) (ϵα) 2 +1 /(ϵβ)⌉ = ⌈(2 + r) • 2 9(1+0.5t) 4(ϵα) 2 +1 /(ϵ(1 -α))⌉.\nUsing a similar technique as in the proof of Corollary B.7, there is ϵ 0 > 0 that depends on c and r (and hence also on t) such that for every ϵ < ϵ 0 , we may choose α 0 (that depends on ϵ) which satisfies\nn(α 0 ) = ⌈(2 + r) • 2 2(1+0.5t) (ϵα 0 ) 2 +1 /(ϵ(1 -α 0 ))⌉ < 2 ⌈ 2c ϵ 2 ⌉ .(19)\nMoreover, there is a choice α 1 which satisfies\nn(α 1 ) = ⌈(2 + r) • 2 2(1+0.5t) (ϵα 1 ) 2 +1 /(ϵ(1 -α 1 ))⌉ > 2 ⌈ 2c ϵ 2 ⌉ . (20\n)\nWe note that the function n : (0, 1) → N satisfies the following intermediate value property. For every 0 < α 1 < α 2 < 1 and every m ∈ N between n(α 1 ) and n(α 2 ), there is a point α\n∈ [α 1 , α 2 ] such that n(α) = m. This follows the fact that α → (2 + r) • 2 2(1+0.5t) (ϵα) 2 +1 /(ϵ(1 -α)) is a continuous function.\nHence, by ( 19) and ( 20), there is a point α (and\nβ such that α + (2 + r)β = 1) such that n(α) = n = ⌈2 2(1+0.5t) (ϵα) 2 +1 /(ϵβ)⌉ = 2 ⌈2c/ϵ 2 ⌉ .\nBy a slight modification of the above proof, we can replace n with the constant n = ⌈2 2c ϵ 2 ⌉. As a result, we can easily prove that for any n ′ ≥ 2 ⌈ 2c ϵ 2 ⌉ we have the approximation property (18) with n ′ instead of n. This is done by choosing an appropriate c ′ > c and using Theorem B.8 on c ′ , giving a constant\nn ′ = ⌈2 2c ′ ϵ 2 ⌉ ≥ 2 ⌈ 2c ϵ 2 ⌉ = n.\nThis leads to the following corollary.\nCorollary B.9 (Regularity lemma for graphon-signals -equipartition version 2). Let c > 1 and r > 0. For any sufficiently small ϵ > 0, for every n ≥ 2 ⌈ 2c ϵ 2 ⌉ and every (W, f ) ∈ WL r , there exists ϕ ∈ S ′ [0,1] , a step function graphon\n[W ϕ ] n ∈ S 2\nIn ∩ W 0 and a step function signal\n[f ϕ ] n ∈ S 1 In ∩ L ∞ r [0, 1], such that d □ W ϕ , f ϕ , [W ϕ ] n , [f ϕ ] n ≤ ϵ,\nwhere I n is the equipartition of [0, 1] into n intervals.\nNext, we prove that we can use the average of the graphon and the signal in each part for the approximating graphon-signal. For that we define the projection of a graphon signal upon a partition.\nDefinition B.10. Let P n = {P 1 , . . . , P n } be a partition of [0, 1], and (W, f ) ∈ WL r . We define the projection of (W, f ) upon (S 2 P × S 1 P ) ∩ WL r to be the step graphon-signal (W, f ) Pn = (W Pn , f Pn ) that attains the value\nW Pn (x, y) = Pi×Pj W (x, y)dxdy , f Pn (x) = Pi f (x)dx\nfor every (x, y) ∈ P i × P j .\nAt the cost of replacing the error ϵ by 2ϵ, we can replace W ′ with its projection. This was shown in [1]. Since this paper does not use the exact same setting as us, for completeness, we write a proof of the claim below." }, { "figure_ref": [], "heading": "Corollary B.11 (Regularity lemma for graphon-signals -projection version).", "publication_ref": [], "table_ref": [], "text": "For any c > 1, and any sufficiently small ϵ > 0, for every n ≥ 2 ⌈ 8c ϵ 2 ⌉ and every (W, f ) ∈ WL r , there exists ϕ ∈ S ′ [0,1] , such that\nd □ W ϕ , f ϕ , [W ϕ ] In , [f ϕ ] In ≤ ϵ.\nwhere I n is the equipartition of [0, 1] into n intervals.\nWe first prove a simple lemma.\nLemma B.12. Let P n = {P 1 , . . . , P n } be a partition of [0, 1], and Let V, R ∈ S 2 Pn ∩ W 0 . 
Then, the supremum of \nsup S,T ⊂[0,1] S T V (x, y) -R(x,\n∩ L ∞ r [0, 1], the supremum of sup S⊂[0,1] S f (x) -g(x) dx(22)\nis attained for S of the form S = i∈s P i ,\nwhere s ⊂ [n].\nProof. First, by Lemma A.1, the supremum of ( 21) is attained for some S, T ⊂ [0, 1]. Given the maximizers S, T , without loss of generality, suppose that S T\nV (x, y) -R(x, y) dxdy > 0.\nwe can improve T as follows. Consider the set t ⊂ [n] such that for every j ∈ t S T ∩Pj V (x, y) -R(x, y) dxdy > 0.\nBy increasing the set T ∩ P j to P j , we can only increase the size of the above integral. Indeed,\nS Pj V (x, y) -R(x, y) dxdy = µ(P j ) µ(T ∩ P j ) S T ∩Pj V (x, y) -R(x, y) dxdy ≥ S T ∩Pj V (x, y) -R(x, y) dxdy.\nHence, by increasing T to\nT ′ = {j|T ∩Pj ̸ =∅} P j ,\nwe get\nS T ′ V (x, y) -R(x, y) dxdy ≥ S T\nV (x, y) -R(x, y) dxdy.\nWe similarly replace each T ∩ P j such that S T ∩Pj V (x, y) -R(x, y) dxdy ≤ 0 by the empty set. We now repeat this process for S, which concludes the proof for the graphon part.\nFor the signal case, let f = f + -f -, and suppose without loss of generality that ∥f ∥ □ = ∥f ∥ 1 . It is easy to see that the supremum of ( 22) is attained for the support of f + , which has the required form.\nProof. Proof of Corollary B.11 Let W n ∈ S Pn ∩W 0 be the step graphon guaranteed by Corollary B.9, with error ϵ/2 and measure preserving bijection ϕ ∈ S ′ [0,1] . Without loss of generality, we suppose that W ϕ = W . Otherwise, we just denote W ′ = W ϕ and replace the notation W with W ′ in the following. By Lemma B.12, the infimum underlying ∥W Pn -W n ∥ □ is attained for for some\nS = i∈s P i , T = j∈t P j .\nWe now have, by definition of the projected graphon,\n∥W n -W Pn ∥ □ = i∈s,j∈t Pi Pj (W Pn (x, y) -W n (x, y))dxdy = i∈s,j∈t Pi Pj (W (x, y) -W n (x, y))dxdy = S T (W (x, y) -W n (x, y))dxdy = ∥W n -W ∥ □ .\nHence, by the triangle inequality,\n∥W -W Pn ∥ □ ≤ ∥W -W n ∥ □ + ∥W n -W Pn ∥ □ < 2∥W n -W ∥ □ . A similar argument shows ∥f -f Pn ∥ □ < 2∥f n -f ∥ □ .\nHence,\nd □ W ϕ , f ϕ , [W ϕ ] In , [f ϕ ] In ≤ 2d □ W ϕ , f ϕ , [W ϕ ] n , [f ϕ ] n ≤ ϵ." }, { "figure_ref": [], "heading": "C Compactness and covering number of the graphon-signal space", "publication_ref": [ "b25", "b35", "b11" ], "table_ref": [], "text": "In this appendix we prove Theorem 3.6. Given a partition P k , recall that\n[WL r ] P k := (W 0 ∩ S 2 P k ) × (L ∞ r [0, 1] ∩ S 1 P k )\nis called the space of SBMs or step graphon-signals with respect to P k . Recall that WL r is the space of equivalence classes of graphon-signals with zero δ □ distance, with the δ □ metric (defined on arbitrary representatives). By abuse of terminology, we call elements of WL r also graphon-signals.\nTheorem C.1. The metric space ( WL r , δ □ ) is compact.\nThe proof is a simple extension of [25,Lemma 8] from the case of graphon to the case of graphon-signal. The proof relies on the notion of martingale. A martingale is a sequence of random variables for which, for each element in the sequence, the conditional expectation of the next value in the sequence is equal to the present value, regardless of all prior values. The Martingale convergence theorem states that for any bounded martingale {M n } n over the probability pace X, the sequence {M n (x)} n converges for almost every x ∈ X, and the limit function is bounded (see [35,12])." }, { "figure_ref": [], "heading": "Proof of Theorem", "publication_ref": [], "table_ref": [], "text": "C.1. 
Consider a sequence {[(W n , f n )]} n∈N ⊂ WL r , with (W n , f n ) ∈ WL r .\nFor each k, consider the equipartition into m k intervals I m k , where m k = 2 30⌈(r 2 +1)⌉k 2 . By Corollary B.11, there is a measure preserving bijection ϕ n,k (up to nullset) such that\n∥(W n , f n ) ϕ n,k -(W n , f n ) ϕ n,k Im k ∥ □;r < 1/k, where (W n , f n ) ϕ n,k Im k is the projection of (W n , f n ) ϕ n,k upon I m k (Definition B.10). For every fixed k, each pair of functions (W n , f n ) ϕ n,k Im k is defined via m 2 k + m k values in [0, 1]. Hence, since [0, 1] m 2 k +m k\nis compact, there is a subsequence {n k j } j∈N , such that all of these values converge. Namely, for each k, the sequence\n{(W n k j , f n k j ) ϕ n k j ,k Im k } ∞ j=1 converges pointwise to some step graphon-signal (U k , g k ) in [WL r ] P k as j → ∞. Note that I m l is a refinement of I m k for every l > k.\nAs as a result, by the definition of projection of graphon-signals to partitions, for every l > k, the value of (W\nϕ n,k n\n) Im k at each partition set I i m k × I j m k can be obtained by averaging the values of (W ϕ n,l n\n) Im l at all partition sets I i ′ m l × I j ′ m l that are subsets of I i m k × I j m k . A similar property applies also to the signal. Moreover, by taking limits, it can be shown that the same property holds also for (U k , g k ) and (U l , g l ). We now see {(U k , g k )} ∞ k=1 as a sequence of random variables over the standard probability space [0, 1] 2 . The above discussion shows that {(U k , g k )} ∞ k=1 is a bounded martingale. By the martingale convergence theorem, the sequence {(U k , g k )} ∞ k=1 converges almost everywhere pointwise to a limit (U, g), which must be in WL r .\nLastly, we show that there exist increasing sequences {k z ∈ N} ∞ z=1 and {t z = n kz jz } z∈N such that (W tz , f tz ) ϕ tz ,kz converges to (U, g) in cut distance. By the dominant convergence theorem, for each z ∈ N there exists a k z such that\n∥(U, g) -(U kz , g kz )∥ 1 < 1 3z\n.\nWe choose such an increasing sequence {k z } z∈N with k z > 3z. Similarly, for ever z ∈ N, there is a j z such that, with the notation t z = n kz jz ,\n∥(U kz , g kz ) -(W tz , f tz ) ϕ tz ,kz Im kz ∥ 1 < 1 3z ,\nand we may choose the sequence {t z } z∈N increasing. Therefore, by the triangle inequality and by the fact that the L 1 norm bounds the cut norm,\nδ □ (U, g), (W tz , f tz ) ≤ ∥(U, g) -(W tz , f tz ) ϕ tz ,kz ∥ □ ≤ ∥(U, g) -(U kz , g kz )∥ 1 + ∥(U kz , g kz ) -(W tz , f tz ) ϕ tz ,kz Im kz ∥ 1 + ∥(W tz , f tz ) ϕ tz ,kz Im kz -(W tz , f tz ) ϕ tz ,kz ∥ □ ≤ 1 3z + 1 3z + 1 3z ≤ 1 z .\nBy abuse of notation, we also treat W (Λ ′ ) as a weighted graph with k nodes and the adjacency matrix W (Λ ′ ). We denote by Λ = (λ 1 , . . . , λ k ) :\n(λ ′ 1 , . . . λ ′ k ) → (λ ′ 1 , . . . λ ′ k )\nthe identity random variable in [0, 1] k . We hence call (λ 1 , . . . , λ k ) random independent samples from [0, 1]. We call the random variable W (Λ) a random sampled weighted graph.\nGiven\nf ∈ L ∞ r [0, 1] and Λ ′ = (Λ ′ 1 , . . . , Λ ′ k ) ∈ [0, 1] k ,\nwe denote by f (Λ ′ ) the discrete signal with k nodes, and value f (λ ′ i ) for each node i = 1, . . . , k. We define the sampled signal as the random variable f (Λ).\nWe then define the random sampled simple graph as follows. First, for a deterministic Λ ′ ∈ [0, 1] k , we define a 2D array of Bernoulli random variables {e i,j (Λ ′ )} i,j∈ [k] where e i,j (Λ ′ ) = 1 in probability W (λ ′ i , λ ′ j ), and zero otherwise, for i, j ∈ [k]. 
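To make the construction concrete, the following is a minimal NumPy sketch of the sampled objects W(Λ), f(Λ) and the Bernoulli edge variables described above; the particular graphon and signal are placeholders chosen only for illustration and do not appear in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder graphon and signal, chosen only for illustration.
W = lambda x, y: x * y                  # symmetric measurable W : [0,1]^2 -> [0,1]
f = lambda x: np.cos(2 * np.pi * x)     # signal in L^inf_r[0,1] with r = 1

k = 500
Lam = rng.uniform(size=k)               # Lambda = (lambda_1, ..., lambda_k), i.i.d. uniform on [0,1]

# Random sampled weighted graph W(Lambda): edge weight W(lambda_i, lambda_j).
W_Lam = W(Lam[:, None], Lam[None, :])

# Sampled signal f(Lambda): value f(lambda_i) at node i.
f_Lam = f(Lam)

# Random simple graph sampled from W(Lambda): independent Bernoulli edges
# e_ij = 1 with probability W(lambda_i, lambda_j).
U = rng.uniform(size=(k, k))
upper = np.triu(U < W_Lam, 1)           # sample each unordered pair once
A = (upper | upper.T).astype(int)       # symmetric 0/1 adjacency matrix, no self-loops

print(A.shape, A.mean(), f_Lam.shape)
```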
We define the probability space {0, 1} k×k with normalized counting measure, defined for any S ⊂ {0, 1} k×k by\nP Λ ′ (S) = z∈S i,j∈[k] P Λ ′ ;i,j (z i,j ), where P Λ ′ ;i,j (z i,j ) = W (λ ′ i , λ ′ j ) if z i,j = 1 1 -W (λ ′ i , λ ′ j ) if z i,j = 0.\nWe denote the identity random variable by G(W, Λ ′ ) : z → z, and call it a random simple graph sampled from W (Λ ′ ).\nNext we also allow to \"plug\" the random variable Λ into Λ ′ . For that, we define the joint probability space Ω = [0, 1] k × {0, 1} k×k with the product σ-algebra of the Lebesgue sets in [0, 1] k with the power set σ-algebra of {0, 1} k×k , with measure, for any measurable S ⊂ Ω,\nµ(S) = [0,1] k P Λ ′ S(Λ ′ ) dΛ ′ , where S(Λ ′ ) ⊂ {0, 1} k×k := {z = {z i,j } i,j∈[k] ∈ {0, 1} k×k | (Λ ′ , z) ∈ S}.\nWe call the random variable G(W, Λ) : Λ ′ × z → z the random simple graph generated by W . We extend the domains of the random variables W (Λ), f (Λ) and G(W, Λ ′ ) to Ω trivially (e.g.,\nf (Λ)(Λ ′ , z) = f (Λ)(Λ ′ ) and G(W, Λ ′ )(Λ ′ , z) = G(W, Λ ′ )(z))\n, so that all random variables are defined over the same space Ω. Note that the random sampled graphs and the random signal share the same sample points.\nGiven a kernel U ∈ W 1 , we define the random sampled kernel U (Λ) similarly. Similarly to the above construction, given a weighted graph H with k nodes and edge weights h i,j , we define the simple graph sampled from H as the random variable simple graph G(H) with k nodes and independent Bernoulli variables e i,j ∈ {0, 1}, with P(e i,j = 1) = h i,j , as the edge weights. The following lemma is taken from [24, Equation (10.9)].\nLemma D.1. Let H be a weighted graph of k nodes. Then\nE d □ (G(H), H) ≤ 11 √ k .\nThe following is a simple corollary of Lemma D.1, using the law of total probability.\nCorollary D.2. Let W ∈ W 0 and k ∈ N. Then E d □ (G(W, Λ), W (Λ)) ≤ 11 √ k ." }, { "figure_ref": [], "heading": "D.2 Sampling lemmas of graphon-signals", "publication_ref": [], "table_ref": [], "text": "The following lemma, from [24, Lemma 10.6], shows that the cut norm of a kernel is approximated by the cut norm of its sample.\nLemma D.3 (First Sampling Lemma for kernels). Let U ∈ W 1 , and Λ ∈ [0, 1] k be uniform independent samples from [0, 1]. Then, with probability at least\n1 -4e - √ k/10 , - 3 k ≤ ∥U [Λ]∥ 2 -∥U ∥ 2 ≤ 8 k 1/4 .\nWe derive a version of Lemma D.3 with expected value using the following lemma.\nLemma D.4. Let z : Ω → [0, 1] be a random variable over the probability space Ω. Suppose that in an event E ⊂ Ω of probability 1 -ϵ we have z < α. Then\nE(z) ≤ (1 -ϵ)α + ϵ.\nProof.\nE(z) = Ω z(x)dx = E z(x)dx + Ω\\E z(x)dx ≤ (1 -ϵ)α + ϵ.\nAs a result of this lemma, we have a simple corollary of Lemma D.3." }, { "figure_ref": [], "heading": "Corollary D.5 (First sampling lemma -expected value version", "publication_ref": [ "b24", "b24", "b3" ], "table_ref": [], "text": "). Let U ∈ W 1 and Λ ∈ [0, 1] k be chosen uniformly at random, where k ≥ 1. Then E |∥U [Λ]∥ 2 -∥U ∥ 2 | ≤ 14 k 1/4 .\nProof. By Lemma D.4, and since 6/k\n1/4 > 4e - √ k/10 , E ∥U [Λ]∥ 2 -∥U ∥ 2 ≤ 1 -4e - √ k/10 8 k 1/4 + 4e - √ k/10 < 14 k 1/4 .\nWe note that a version of the first sampling lemma, Lemma D.3, for signals instead of kernels, is just a classical Monte Carlo approximation, when working with the L 1 [0, 1] norm, which is equivalent to the signal cut norm.\nLemma D.6 (First sampling lemma for signals). Let f ∈ L ∞ r [0, 1]. Then E |∥f (Λ)∥ 1 -∥f ∥ 1 | ≤ r k 1/2 .\nProof. 
By standard Monte Carlo theory, since r 2 bounds the variance of f (λ), where λ is a random uniform sample from [0, 1], we have\nV(∥f (Λ)∥ 1 ) = E |∥f (Λ)∥ 1 -∥f ∥ 1 | 2 ≤ r 2 k .\nHere, V denotes variance, and we note that\nE∥f (Λ)∥ 1 = 1 k k j=1 |f (λ j )| = ∥f ∥ 1 . Hence, by Cauchy Schwarz inequality, E |∥f (Λ)∥ 1 -∥f ∥ 1 | ≤ E |∥f (Λ)∥ 1 -∥f ∥ 1 | 2 ≤ r k 1/2 .\nWe now extend [24,Lemma 10.16], which bounds the cut distance between a graphon and its sampled graph, to the case of a sampled graphon-signal.\nTheorem D.7 (Second sampling lemma for graphon signals). Let r > 1. Let k ≥ K 0 , where K 0 is a constant that depends on r, and let (W, f ) ∈ WL r . Then,\nE δ □ (W, f ), (W (Λ), f (Λ)) < 15 log(k)\n,\nand\nE δ □ (W, f ), (G(W, Λ), f (Λ)) < 15 log(k) .\nThe proof follows the steps of [24,Lemma 10.16] and [4]. We note that the main difference in our proof is that we explicitly write the measure preserving bijection that optimizes the cut distance. While this is not necessary in the classical case, where only a graphon is sampled, in our case we need to show that there is a measure preserving bijection that is shared by the graphon and the signal. We hence write the proof for completion.\nProof. Denote a generic error bound, given by the regularity lemma Theorem B.8 by ϵ. If we take n intervals in the Theorem B.8 , then the error in the regularity lemma will be, for c such that\n2c = 3, ⌈3/ϵ 2 ⌉ = log(n) so 3/ϵ 2 + 1 ≥ log(n).\nFor small enough ϵ, we increase the error bound in the regularity lemma to satisfy\n4/ϵ 2 > 3/ϵ 2 + 1 ≥ log(n).\nMore accurately, for the equipartition to intervals I n , there is ϕ ′ ∈ S ′ [0,1] and a piecewise constant graphon signal\n([W ϕ ] n , [f ϕ ] n ) such that ∥W ϕ ′ -[W ϕ ′ ] n ∥ □ ≤ α 2 log(n) and ∥f ϕ ′ -[f ϕ ′ ] n ∥ □ ≤ (1 -α) 2 log(n) , for some 0 ≤ α ≤ 1. If we choose n such that n = ⌈ √ k r log(k) ⌉,\nthen an error bound in the regularity lemma is\n∥W ϕ ′ -[W ϕ ′ ] n ∥ □ ≤ α 2 1 2 log(k) -log log(k) -log(r) and ∥f ϕ ′ -[f ϕ ′ ] n ∥ □ ≤ (1 -α) 2 1 2 log(k) -log log(k) -log(r)\n, for some 0 ≤ α ≤ 1. Without loss of generality, we suppose that ϕ ′ is the identity. This only means that we work with a different representative of [(W, f )] ∈ WL r throughout the proof. We hence have\nd □ (W, W n ) ≤ α 2 √ 2 log(k) -2 log log(k) -2 log(r) and ∥f -f n ∥ 1 ≤ (1 -α) 4 √ 2 log(k) -2 log log(k) -2 log(r)\n, for some step graphon-signal\n(W n , f n ) ∈ [WL r ]\nIn . Now, by the first sampling lemma (Corollary D.5),\nE d 2 W (Λ), W n (Λ) -d 2 (W, W n ) ≤ 14 k 1/4 .\nMoreover, by the fact that f -\nf n ∈ L ∞ 2r [0, 1], Lemma D.6 implies that E ∥f (Λ) -f n (Λ)∥ 1 -∥f -f n ∥ 1 ≤ 2r k 1/2 .\nTherefore,\nE d □ W (Λ), W n (Λ) ≤ E d □ W (Λ), W n (Λ) -d □ (W, W n ) + d □ (W, W n ) ≤ 14 k 1/4 + α 2 √ 2 log(k) -2 log log(k) -2 log(r)\n.\nSimilarly, we have\nE∥f (Λ) -f n (Λ)∥ 1 ≤ E ∥f (Λ) -f n (Λ)∥ 1 -∥f -f n ∥ 1 + ∥f -f n ∥ 1 ≤ 2r k 1/2 + (1 -α) 4 √ 2 log(k) -2 log log(k) -2 log(r)\n. Now, let π Λ be a sorting permutation in [k], such that\nπ Λ (Λ) := {Λ π -1 Λ (i) } k i=1 = (λ ′ 1 , . . . , λ ′ k )\nis a sequence in a non-decreasing order. Let {I i k = [i-1, i)/k} k i=1 be the intervals of the equipartition I k . The sorting permutation π Λ induces a measure preserving bijection ϕ that sorts the intervals I i k . Namely, we define, for every x ∈ [0, 1],\nif x ∈ I i k , ϕ(x) = J i,πΛ(i) (x),(24)\nwhere J i,j : I i k → I j k are defined as x → x -i/k + j/k, for all x ∈ I i k . 
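For concreteness, the sorting permutation π Λ and the interval-sorting bijection ϕ of Equation (24) can be written in a few lines of NumPy; this is only an illustration and plays no role in the argument.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 8
Lam = rng.uniform(size=k)

# Sorting permutation pi_Lambda: node i is sent to the rank of lambda_i, so that
# (lambda_{pi^{-1}(1)}, ..., lambda_{pi^{-1}(k)}) is non-decreasing (0-based here).
pi = np.empty(k, dtype=int)
pi[np.argsort(Lam)] = np.arange(k)      # pi[i] = rank of lambda_i

def phi(x):
    # Interval-sorting bijection of Equation (24): a point in the i-th interval
    # [i/k, (i+1)/k) is translated into the pi(i)-th interval.
    x = np.asarray(x)
    i = np.minimum((x * k).astype(int), k - 1)
    return x - i / k + pi[i] / k

midpoints = (np.arange(k) + 0.5) / k    # midpoint of I_i maps to midpoint of I_{pi(i)}
print(np.round(phi(midpoints), 3))
```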
By abuse of notation, we denote by W n (Λ) and f n (Λ) the induced graphon and signal from W n (Λ) and f n (Λ) respectively. Hence, W n (Λ) ϕ and f n (Λ) ϕ are well defined. Note that the graphons W n and W n (Λ) ϕ are stepfunctions, where the set of values of W n (Λ) ϕ is a subset of the set of values of W n . Intuitively, since k ≫ m, we expect the partition {[λ ′ i , λ ′ i+1 )} k i=1 to be \"close to a refinement\" of I n in high probability. Also, we expect the two sets of values of W n (Λ) ϕ and W n to be identical in high probability. Moreover, since Λ ′ is sorted, when inducing a graphon from the graph W n (Λ) and \"sorting\" it to W n (Λ) ϕ , we get a graphon that is roughly \"aligned\" with W n . The same philosophy also applied to f n and f n (Λ) ϕ . We next formalize these observations.\nFor each i ∈ [n], let λ ′ ji be the smaller point of Λ ′ that is in I i n , set j i = j i+1 if Λ ′ ∩ I i n = ∅, and set j n+1 = k + 1. For every i = 1, . . . , n, we call\nJ i := [j i -1, j i+1 -1)/k\nthe i-th step of W n (Λ) ϕ (which can be the empty set). Let a i = ji-1 k be the left edge point of J i . Note that a i = |Λ ∩ [0, i/n)| /k is distributed binomially (up to the normalization k) with k trials and success in probability i/n.\n∥W n -W n (Λ) ϕ ∥ □ ≤ ∥W n -W n (Λ) ϕ ∥ 1 = i k I i n ∩Ji I k n ∩J k W n (x, y) -W n (Λ) ϕ (x, y) dxdy + i j̸ =i k l̸ =k I i n ∩Jj I k n ∩J l W n (x, y) -W n (Λ) ϕ (x, y) dxdy = i j̸ =i k l̸ =k I i n ∩Jj I k n ∩J l W n (x, y) -W n (Λ) ϕ (x, y) dxdy = i k I i n \\Ji I k n \\J k W n (x, y) -W n (Λ) ϕ (x, y) dxdy ≤ i k I i n \\Ji I k n \\J k 1dxdy ≤ 2 i I i n \\Ji 1dxdy ≤ 2 i (|i/n -a i | + |(i + 1)/n -a i+1 |).\nHence,\nE∥W n -W n (Λ) ϕ ∥ □ ≤ 2 i (E |i/n -a i | + E |(i + 1)/n -a i+1 |) ≤ 2 i E(i/n -a i ) 2 + E (i + 1)/n -a i+12\nBy properties of the binomial distribution, we have E(ka i ) = ik/n, so\nE(ik/n -ka i ) 2 = V(ka i ) = k(i/n)(1 -i/n).\nAs a result\nE∥W n -W n (Λ) ϕ ∥ □ ≤ 5 n i=1 (i/n)(1 -i/n) k ≤ 2 n 1 (i/n)(1 -i/n) k di,\nand for n > 10,\n≤ 5 n √ k 1.1 0 z -z 2 dz ≤ 5 n √ k 1.1 0 √ zdz ≤ 10/3(1.1) 3/2 n √ k < 4 n √ k . Now, by n = ⌈ √ k r log(k) ⌉ ≤ √ k r log(k) + 1, for large enough k, E∥W n -W n (Λ) ϕ ∥ □ ≤ 4 1 r log(k) + 4 1 √ k ≤ 5 r log(k)\n.\nSimilarly,\nE∥f n -f n (Λ) ϕ ∥ 1 ≤" }, { "figure_ref": [], "heading": "log(k)", "publication_ref": [], "table_ref": [], "text": ".\nNote that in the proof of Corollary B.7, in Equation ( 16), α is chosen close to 1, and especially, for small enough ϵ, α > 1/2. Hence, for large enough k,\nE(d 2 (W, W (Λ) ϕ )) ≤ d □ (W, W n ) + E d □ (W n , W n (Λ) ϕ ) + E(d □ (W n (Λ), W (Λ))) ≤ α 2 √ 2 log(k) -2 log log(k) -2 log(r) + 5 r log(k) + 14 k 1/4 + α 2 √ 2 log(k) -2 log log(k) -2 log(r) ≤ α 6 log(k) , Similarly, for each k, if 1 -α < 1 √ log(k)\n, then\nE(d □ (f, f (Λ) ϕ )) ≤ (1 -α) 2 √ 2 log(k) -2 log log(k) -2 log(r) + 5 log(k) + 2r k 1/2 + (1 -α) 4 √ 2 log(k) -2 log log(k) -2 log(r) ≤ 14 log(k) . 
Moreover, for each k such that 1 -α > 1 √ log(k)\n, if k is large enough (where the lower bound of k depends on r), we have\n5 log(k) + 2r k 1/2 < 5.5 log(k) < 1 log(k) 6 log(k) < (1 -α) 6 log(k) so, by 6 √ 2 < 9, E(d □ (f, f (Λ) ϕ )) ≤ (1 -α) 2 √ 2 log(k) -2 log log(k) -2 log(r) + 2 log(k) + 2r k 1/2 + (1 -α) 4 √ 2 log(k) -2 log log(k) -2 log(r) ≤ (1 -α) 15 log(k) .\nLastly, by Corollary D.2,\nE d □ W, G(W, Λ) ϕ ≤ E d □ W, W (Λ) ϕ + E d □ W (Λ) ϕ , G(W, Λ) ϕ ≤ α 6 log(k) + 11 √ k ≤ α 7 log(k) ,\nAs a result, for large enough k,\nE δ □ (W, f ), (W (Λ), f (Λ)) < 15 log(k) ,and\nE δ □ (W, f ), (G(W, Λ), f (Λ)) < 15 log(k) .\nGiven a graph-signal (G, f ), with f ∈ R n×d with adjacency matrix A ∈ R n×n , a spectral convolutional layer based on a polynomial filter p(λ) = J j=0 λ j C j , where C j ∈ R d×p , is defined to be\np(A)f = J j=0 1 n j A j f C j ,\nfollowed by a pointwise non-linearity like ReLU. Such a convolutional layer can be seen as J + 1 MPLs. We first apply J MPLs, where each MPL is of the form\nθ(f ) = f , 1 n Af .\nWe then apply an update layer\nU (f ) = f C\nfor some C ∈ R (J+1)d×p , followed by the pointwise non-linearity. The message part of θ can be written in our formulation with Φ(a, b) = b, and the update part of θ with η(c, d) = (c, d). The last update layer U is linear followed by the pointwise non-linearity." }, { "figure_ref": [], "heading": "F Lipschitz continuity of MPNNs", "publication_ref": [], "table_ref": [], "text": "In this appendix we prove Theorem 4.1. For v ∈ R d , we often denote by |v| = ∥v∥ ∞ . We define the L 1 norm of a measurable function h : [0, 1] → R d by\n∥h∥ 1 := 1 0 |h(x)| dx = 1 0 ∥h(x)∥ ∞ dx.\nSimilarly, ∥h∥ ∞ := sup\nx∈R d |h(x)| = sup x∈R d ∥h(x)∥ ∞ .\nWe define Lipschitz continuity with respect to the infinity norm. Namely, Z :\nR d → R c is called Lipschitz continuous with Lipschitz constant L if |Z(x) -Z(y)| = ∥Z(x) -Z(y)∥ ∞ ≤ L∥x -z∥ ∞ = L |x -z| .\nWe denote the minimal Lipschitz bound of the function Z by L Z .\nWe extend L ∞ r [0, 1] to the space of functions f : [0, 1] → R d with the above L 1 norm. Define the space K q of kernels bounded by q > 0 to be the space of measurable functions\nK : [0, 1] 2 → [-q, q].\nThe cut norm, cut metric, and cut distance are defined as usual for kernels in K q ." }, { "figure_ref": [], "heading": "F.1 Lipschitz continuity of message passing and update layers", "publication_ref": [ "b24", "b24" ], "table_ref": [], "text": "In this subsection we prove that message passing layers and update layers are Lipschitz continuous with respect to he graphon-signal cut metric.\nLemma F.1 (Product rule for message kernels). Let Φ f , Φ g be the message kernels corresponding to the signals f, g. Then\n∥Φ f -Φ g ∥ L 1 [0,1] 2 ≤ K k=1 L ξ k r ∥ξ k t ∥ ∞ + ∥ξ k r ∥ ∞ L ξ k t ∥f -g∥ 1 .\nProof. Suppose p = 1 For every x, y ∈ [0, 1] 2\n|Φ f (x, y) -Φ g (x, y)| = K k=1 ξ k r (f (x))ξ k t (f (y)) - K k=1 ξ k r (g(x))ξ k t (g(y)) ≤ K k=1 ξ k r (f (x))ξ k t (f (y)) -ξ k r (g(x))ξ k t (g(y)) ≤ K k=1 ξ k r (f (x))ξ k t (f (y)) -ξ k r (g(x))ξ k t (f (y)) + ξ k r (g(x))ξ k t (f (y)) -ξ k r (g(x))ξ k t (g(y)) ≤ K k=1 L ξ k r |f (x) -g(x)| ξ k t (f (y)) + ξ k r (g(x)) L ξ k t |f (y) -g(y)| .\nHence,\n∥Φ f -Φ g ∥ L 1 [0,1] 2 ≤ K k=1 1 0 1 0 L ξ k r |f (x) -g(x)| ξ k t (f (y)) + ξ k r (g(x)) L ξ k t |f (y) -g(y)| dxdy ≤ K k=1 L ξ k r ∥f -g∥ 1 ∥ξ k t ∥ ∞ + ∥ξ k r ∥ ∞ L ξ k t ∥f -g∥ 1 = K k=1 L ξ k r ∥ξ k t ∥ ∞ + ∥ξ k r ∥ ∞ L ξ k t ∥f -g∥ 1 .\nLemma F.2. Let Q, V be two message kernels, and W ∈ W 0 . 
Then\n∥Agg(W, Q) -Agg(W, V )∥ 1 ≤ ∥Q -V ∥ 1 .\nProof.\nAgg(W, Q)(x) -Agg(W, V )(x) = 1 0 W (x, y)(Q(x, y) -V (x, y))dy So ∥Agg(W, Q) -Agg(W, V )∥ 1 = 1 0 1 0 W (x, y)(Q(x, y) -V (x, y))dy dx ≤ 1 0 1 0 |W (x, y)(Q(x, y) -V (x, y))| dydx ≤ 1 0 1 0 |(Q(x, y) -V (x, y))| dydx = ∥Q -V ∥ 1 .\nAs a result of Lemma F.2 and the product rule Lemma F.1, we have the following corollary, that computes the error in aggregating two message kernels with the same graphon.\nCorollary F.3. ∥Agg(W, Φ f ) -Agg(W, Φ g )∥ 1 ≤ K k=1 L ξ k r ∥ξ k t ∥ ∞ + ∥ξ k r ∥ ∞ L ξ k t ∥f -g∥ 1 .\nNext we fix the message kernel, and bound the difference between the aggregation of the message kernel with respect to two different graphons. Let L + [0, 1] be the space of measurable function f : [0, 1] → [0, 1]. The following lemma is a trivial extension of [24,Lemma 8.10] \nfrom K 1 to K r . Lemma F.4. For any kernel Q ∈ K r ∥Q∥ □ = sup f,g∈L + [0,1] [0,1] 2 f (x)Q(x, y)g(y)dxdy ,\nwhere the supremum is attained for some f, g\n∈ L + [0, 1].\nThe following Lemma is proven as part of the proof of [24,Lemma 8.11].\nLemma F.5. For any kernel\nQ ∈ K r sup f,g∈L ∞ 1 [0,1] [0,1] 2 f (x)Q(x, y)g(y)dxdy ≤ 4∥Q∥ □ .\nFor completeness, we give here a self-contained proof.\nProof. Any function f ∈ L ∞ 1 [0, 1] can be written as f = f + -f -, where f + , f -∈ L + [0, 1]. Hence, by Lemma F.4, sup f,g∈L ∞ 1 [0,1] [0,1] 2 f (x)Q(x, y)g(y)dxdy = sup f+,f-,g+,g-∈L + [0,1] [0,1] 2 (f + (x) -f -(x))Q(x, y)(g + (y) -g -(y))dxdy ≤ s∈{+,-} sup fs,gs∈L + [0,1] [0,1] 2 f s (x)Q(x, y)g s (y)dxdy = 4∥Q∥ □ .\nNext we state a simple lemma.\nLemma F.6. Let f = f + -f -be a signal, where f + , f -: [0, 1] → (0, ∞) are measurable. Then the supremum in the cut norm ∥f ∥ □ = sup S⊂[0,1] S f (x)dx is attained as the support of either\nf + or f -. Lemma F.7. Let f ∈ L ∞ r [0, 1] , W, V ∈ W 0 , and suppose that ξ k r (f (x)) , ξ k t (f (x)) ≤ ρ for every x ∈ [0, 1] and k = 1, . . . , K. Then ∥Agg(W, Φ f ) -Agg(V, Φ f )∥ □ ≤ 4Kρ 2 ∥W -V ∥ □ .\nMoreover, if ξ k r and ξ k t are non-negatively valued for every k = 1, . . . , K, then\n∥Agg(W, Φ f ) -Agg(V, Φ f )∥ □ ≤ Kρ 2 ∥W -V ∥ □ .\nProof. Let T = W -V . Let S be the maximizer of the supremum underlying the cut norm of Agg(T, Φ f ). Suppose without loss of generality that S Agg(T, Φ\nf )(x)dx > 0. Denote q k r (x) = ξ k r (f (x)) and q k t (x) = ξ k t (f (x)). We have S Agg(W, Φ f )(x) -Agg(V, Φ f )(x) dx = S Agg(T, Φ f )(x)dx = K k=1 S 1 0 q k r (x)T (x, y)q k t (y)dydx. Let v k r (x) = q k r (x)/ρ x ∈ S 0 x / ∈ S.(25)\nMoreover, define v k t = q k t /ρ, and note that v\nk r , v k t ∈ L ∞ 1 [0, 1]. We hence have, by Lemma F.5, S Agg(T, Φ f )(x)dx = K k=1 ρ 2 1 0 1 0 v k r (x)T (x, y)v k t (y)dydx ≤ K k=1 ρ 2 1 0 1 0 v k r (x)T (x, y)v k t (y)dydx ≤ 4Kρ 2 ∥T ∥ □ . Hence, ∥Agg(W, Φ f ) -Agg(V, Φ f )∥ □ ≤ 4Kρ 2 ∥T ∥ □\nLastly, in case ξ k r , ξ k t are nonnegatively valued, so are q k r , q k t , and hence by Lemma F.4,\nS Agg(T, Φ f )(x)dx ≤ Kρ 2 ∥T ∥ □ . Theorem F.8. Let (W, f ), (V, g) ∈ WL r , and suppose that ξ k r (f (x)) , ξ k t (f (x)) ≤ ρ and L ξ k t , L ξ k t < L for every x ∈ [0, 1] and k = 1, . . . , K. Then, ∥Agg(W, Φ f ) -Agg(V, Φ g )∥ □ ≤ 4KLρ∥f -g∥ □ + 4Kρ 2 ∥W -V ∥ □ .\nProof. By Lemma F.1, Lemma F.2 and Lemma F.7,\n∥Agg(W, Φ f ) -Agg(V, Φ g )∥ □ ≤ ∥Agg(W, Φ f ) -Agg(W, Φ g )∥ □ + ∥Agg(W, Φ g ) -Agg(V, Φ g )∥ □ ≤ K k=1 L ξ k r ∥ξ k t ∥ ∞ + ∥ξ k r ∥ ∞ L ξ k t ∥f -g∥ 1 + 4Kρ 2 ∥W -V ∥ □ ≤ 4KLρ∥f -g∥ □ + 4Kρ 2 ∥W -V ∥ □ .\nLastly, we show that update layers are Lipschitz continuous. 
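Before turning to update layers, the discretized aggregation behind Corollary F.3 can be checked numerically. The sketch below uses an arbitrary graphon, signal, and a single pair of bounded Lipschitz message functions, all chosen only for illustration.

```python
import numpy as np

# Discretized aggregation Agg(W, Phi_f)(x) = int_0^1 W(x, y) Phi_f(x, y) dy on a grid,
# with K = 1 and placeholder message functions xi_r, xi_t.
n = 1000
x = (np.arange(n) + 0.5) / n

xi_r = np.tanh                                    # receiver message function (bounded, Lipschitz)
xi_t = lambda v: 1.0 / (1.0 + np.exp(-v))         # transmitter message function (bounded, Lipschitz)

def agg(W_mat, f_vec):
    Phi = xi_r(f_vec)[:, None] * xi_t(f_vec)[None, :]   # message kernel Phi_f(x_i, y_j)
    return (W_mat * Phi).mean(axis=1)                   # average over y approximates the integral

W_mat = np.minimum(x[:, None], x[None, :])        # placeholder graphon W(x, y) = min(x, y)
f_vec = np.sin(2 * np.pi * x)
g_vec = f_vec + 0.01 * np.cos(2 * np.pi * x)      # small perturbation of the signal

# The L1 gap of the aggregated signals is controlled by ||f - g||_1, cf. Corollary F.3.
print(np.abs(agg(W_mat, f_vec) - agg(W_mat, g_vec)).mean(), np.abs(f_vec - g_vec).mean())
```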
Since the update function takes two functions f : [0, 1] → R di (for generally two different output dimensions d 1 , d 2 ), we \"concatenate\" these two inputs and treat it as one input f : [0, 1] → R d1+d2 . Lemma F.9. Let η : R d+p → R s be Lipschitz with Lipschitz constant L η , and let f, g ∈ L ∞ r [0, 1] with values in R d+p for some d, p ∈ N.\nThen\n∥η(f ) -η(g)∥ 1 ≤ L η ∥f -g∥ 1 . Proof. ∥η(f ) -η(g)∥ 1 = 1 0 η f (x) -η g(x) dx ≤ 1 0 L η |f (x) -g(x)| dx = L η ∥f -g∥ 1 ." }, { "figure_ref": [], "heading": "F.3 Lipschitz continuity theorems for MPNNs", "publication_ref": [ "b15" ], "table_ref": [], "text": "The following recurrence sequence will govern the propagation of the Lipschitz constant of the MPNN and the bound of signal along the layers.\nLemma F. 16. Let a = (a 1 , a 2 , . . .) and b = (b 1 , b 2 , . . .). The solution to e t+1 = a t e t + b t , with initialization e 0 , is\ne t = Z t (a, b, e 0 ) := t-1 j=0 a j e 0 + t-1 j=1 j-1 i=1 a t-i b t-j ,(26)\nwhere, by convention, 0 i=1 a t-i := 1.\nIn case there exist a, b ∈ R such that a i = a and b i = b for every i, e t = a t e 0 + t-1 j=0 a j b.\nSetting 1.\nTheorem F.17. Let Θ be a MPNN with T layers. Suppose that for every layer and every y and k,\n∥ t ξ k y ∥ ∞ , ∥η t ∥ ∞ ≤ ρ, L η t , Lt ξ k y < L.\nLet (W, f ), (V, g) ∈ WL r . Then, for MPNN with no update function\n∥Θ t (W, f ) -Θ t (V, g)∥ □ ≤ (4KLρ) t ∥f -g∥ □ + t-1 j=0 (4KLρ) j 4Kρ 2 ∥W -V ∥ □ ,\nand for MPNN with update function\n∥Θ t (W, f ) -Θ t (V, g)∥ □ ≤ (4KL 2 ρ) t ∥f -g∥ □ + t-1 j=0 (4KL 2 ρ) j 4Kρ 2 L∥W -V ∥ □ .\nProof. We prove for MPNNs with update function, where the proof without update function is similar. We can write a recurrence sequence for a bound ∥Θ t (W, f ) -Θ t (V, g)∥ □ ≤ e t , by Theorem F.8 and Lemma F.9, as\ne t+1 = 4KL 2 ρe t + 4Kρ 2 L∥W -V ∥ □ .\nThe proof now follows by applying Lemma F.16 with a = 4KL 2 ρ and b = 4Kρ 2 L.\nSetting 2.\nLemma F.18. Let Θ be a MPNN with T layers. Suppose that for every layer t and every y ∈ {r, t} and k ∈\n[K], η t (0) , t ξ k y (0) ≤ B, L η t , Lt ξ k y < L with L, B > 1. Let (W, f ) ∈ WL r .\nThen, for MPNN without update function, for every layer t,\n∥Θ t (W, f )∥ ∞ ≤ (2KL 2 B 2 ) 2 t ∥f ∥ 2 t ∞ ,\nand for MPNN with update function, for every layer t,\n∥Θ t (W, f )∥ ∞ ≤ (2KL 3 B 2 ) 2 t ∥f ∥ 2 t ∞ ,\nProof. We first prove for MPNNs without update functions. Denote by C t a bound on ∥ t f ∥ ∞ , and let C 0 be a bound on ∥f ∥ ∞ . By Lemma F.10, we may choose bounds such that\nC t+1 ≤ K(LC t + B) 2 = KL 2 C 2 t + 2KLBC t + KB 2 .\nWe can always choose C t , K, L > 1, and therefore,\nC t+1 ≤ KL 2 C 2 t + 2KLBC t + KB 2 ≤ 2KL 2 B 2 C 2 t .\nDenote a = 2KL 2 B 2 . We have\nC t+1 = a(C t ) 2 = a(aC 2 t-1 ) 2 = a 1+2 C 4 t-1 = a 1+2 (a(C t-2 ) 2 ) 4 = a 1+2+4 (C t-2 ) 8 = a 1+2+4+8 (C t-3 ) 16 ≤ a 2 t C 2 t 0 .\nNow, for MPNNs with update function, we have\nC t+1 ≤ LK(LC t + B) 2 + B = KL 3 C 2 t + 2KL 2 BC t + KB 2 L + B ≤ 2KL 3 B 2 C 2 t ,\nand we proceed similarly.\nTheorem F.19. Let Θ be a MPNN with T layers. Suppose that for every layer t and every y ∈ {r, t} and k ∈\n[K], η t (0) , t ξ k y (0) ≤ B, L η t , Lt ξ k y < L, with L, B > 1.\nLet (W, g), (V, g) ∈ WL r . 
Then, for MPNNs without update functions\n∥Θ t (W, f ) -Θ t (V, g)∥ □ ≤ t-1 j=0 4K(L 2 r j + LB)∥f -g∥ □ + t-1 j=1 j-1 i=1 4K(L 2 r t-i + LB)4K(Lr t-j + B) 2 ∥W -V ∥ □ , where r i = (2KL 2 B 2 ) 2 i ∥f ∥ 2 i\n∞ , and for MPNNs with update functions\n∥Θ t (W, f ) -Θ t (V, g)∥ □ ≤ t-1 j=0 4K(L 3 r j + L 2 B)∥f -g∥ □ + t-1 j=1 j-1 i=1 4K(L 3 r t-i + L 2 B)4KL(Lr t-j + B) 2 ∥W -V ∥ □ , where r i = (2KL 3 B 2 ) 2 i ∥f ∥ 2 i ∞ .\nProof. We prove for MPNNs without update functions. The proof for the other case is similar. By Corollary F.11, since the signals at layer t are bounded by\nr t = (2KL 2 B 2 ) 2 t ∥f ∥ 2 t ∞ ,\nwe have\n∥Θ t+1 (W, f ) -Θ t+1 (V, g)∥ □ ≤ 4K(L 2 r t + LB)∥Θ t (W, f ) -Θ t (V, g)∥ □ + 4K(Lr t + B) 2 ∥W -V ∥ □ .\nwhere in the notations of Lemma F.16,\na t = 4K L 2 (L t ∥f ∥ ∞ + t-1 j=1 L j B) + LB and b t = K L(L t ∥f ∥ ∞ + t-1 j=1 L j B) + B ∥W -V ∥ □ .\nNext, for MPNNs with update functions, there is a bound that satisfies\ne t = 4K(L 3 r t-1 + L 2 B)e t-1 + K(L 2 r + LB)∥W -V ∥ □ = 4K L 3 L 2t ∥f ∥ ∞ + t-1 j=1 L 2j (LB + B) + L 2 B e t-1 + K L 2 L 2t ∥f ∥ ∞ + t-1 j=1 L 2j (LB + B) + LB ∥W -V ∥ □ .\nHence, by Lemma F.16, and Z defined by ( 26),\ne t = O(K t L 3t+2t 2 r t B t ) ∥f -g∥ □ + ∥W -V ∥ □ ." }, { "figure_ref": [], "heading": "G Generalization bound for MPNNs", "publication_ref": [], "table_ref": [], "text": "In this appendix we prove Theorem 4.2." }, { "figure_ref": [], "heading": "G.1 Statistical learning and generalization analysis", "publication_ref": [ "b28" ], "table_ref": [], "text": "In the statistical setting of learning, we suppose that the dataset comprises independent random samples from a probability space that describes all possible data P. We suppose that for each x ∈ P there is a ground truth value y x ∈ Y, e.g., the ground truth class or value of x, where Y is, in general, some measure space. The loss is a measurable function L : Y 2 → R + that defines similarity in Y. Given a measurable function Θ : P → Y, that we call the model or network, its accuracy on all potential inputs is defined as the statistical risk R stat (Θ) = E x∼P L(Θ(x), y x ) .\nThe goal in learning is to find a network Θ, from some hypothesis space T , that has a low statistical risk. In practice, the statistical risk cannot be computed analytically. Instead, we suppose that a dataset X = {x m } M m=1 ⊂ P of M ∈ N random independent samples with corresponding values {y m } M m=1 ⊂ Y is given. We estimate the statistical risk via a \"Monte Carlo approximation,\" called the empirical risk R emp (Θ) = 1 M M m=1 L(Θ(x m ), y m ). The network Θ is chosen in practice by optimizing the empirical risk. The goal in generalization analysis is to show that if a learned Θ attains a low empirical risk, then it is also guaranteed to have a low statistical risk.\nOne technique for bounding the statistical risk in terms of the empirical risk is to use the bound R stat (Θ) ≤ R emp (Θ) + E, where E is the generalization error E = sup Θ∈T |R stat (Θ) -R emp (Θ)|, and to find a bound for E. Since the trained network Θ = Θ X depends on the data X , the network is not a constant when varying the dataset, and hence the empirical risk is not really a Monte Carlo approximation of the statistical risk in the learning setting. If the network Θ was fixed, then Monte Carlo theory would have given us a bound of E 2 of order O κ(p)/M in an event of probability 1 -p, where, for example, in Hoeffding's inequality Theorem G.2, κ(p) = log(2/p). Let us call such an event a good sampling event. 
Since the good sampling event depends on Θ, computing a naive bound to the generalization error would require intersecting all good sampling events for all Θ ∈ T . such that for every (X 1 , . . . , X N ) ∈ E p Lip , for every bounded Lipschitz continuous function F : X → R d with Lipschitz constant L F , we have\nF (x)dµ(x) - 1 N N i=1 F (X i ) ∞ ≤ 2ξ -1 (N )L f + 1 √ 2 ξ -1 (N )∥F ∥ ∞ (1 + log(2/p)),(27)\nwhere ξ(r) = κ(r) 2 log(κ(r))\nr 2\nand ξ -1 is the inverse function of ξ.\nProof. Let r > 0. There exists a covering of X by a set of balls {B j } j∈[J] of radius r, where J = κ(r). For j = 2, . . . , J, we define I j := B j \\ ∪ i<j B i , and define I 1 = B 1 . Hence, {I j } j∈[J] is a family of measurable sets such that I j ∩ I i = ∅ for all i ̸ = j ∈ [J], j∈[J] I j = χ, and diam(I j ) ≤ 2r for all j ∈ [J], where by convention diam(∅) = 0. For each j ∈ [J], let z j be the center of the ball B j .\nNext, we compute a concentration of error bound on the difference between the measure of I j and its Monte Carlo approximation, which is uniform in j ∈ [J]. Let j ∈ [J] and q ∈ (0, 1). By Hoeffding's inequality Theorem G.2, there is an event E q j with probability µ(E q j ) ≥ 1 -q, in which\n1 N N i=1 1 Ij (X i ) -µ(I k ) ∞ ≤ 1 √ 2 log(2/q) √ N . (28\n)\nConsider the event\nE Jq Lip = J j=1 E q j ,\nwith probability µ N (E Jq Lip ) ≥ 1 -Jq. In this event, (28) holds for all j ∈ J . We change the failure probability variable p = Jq, and denote E p Lip = E Jq Lip . Next we bound uniformly the Monte Carlo approximation error of the integral of bounded Lipschitz continuous functions F : χ → R F . Let F : χ → R F be a bounded Lipschitz continuous function with Lipschitz constant L F . We define the step function\nF r (y) = j∈[J]\nF (z j )1 Ij (y).\nThen, \n1 N N i=1 F (X i ) - χ F (y)dµ(y) ∞ ≤ 1 N N i=1 F (X i ) - 1 N N i=1 F r (X i ) ∞ + 1 N N i=1 F r (X i ) - χ F r(\nTo bound (1), we define for each X i the unique index j i ∈ [J] s.t. X i ∈ I ji . We calculate,\n1 N N i=1 F (X i ) - 1 N N i=1 F r (X i ) ∞ ≤ 1 N N i=1 F (X i ) - j∈J F (z j )1 Ij (X i ) ∞ = 1 N N i=1 ∥F (X i ) -F (z ji )∥ ∞ ≤rL F .\nWe proceed by bounding (2). In the event of E p Lip , which holds with probability at least 1 -p, equation ( 28) holds for all j ∈ J . In this event, we get\n1 N N i=1 F r (X i ) - χ F r (y)dµ(y) ∞ = j∈[J] 1 N N i=1 F (z j )1 Ij (X i ) - Ij F (z j )dy ∞ ≤ j∈[J] ∥F ∥ ∞ 1 N N i=1\n1 Ij (X i ) -µ(I j )\n≤ J∥F ∥ ∞ 1 √ 2 log(2J/p) √ N .\nRecall that J = κ(r). Then, with probability at least 1 -p\n1 N N i=1 F r (X i ) - χ F r (y)dµ(y) ∞ ≤ κ(r)∥F ∥ ∞ 1 √2\nlog(κ(r)) + log(2/p) √ N .\nTo bound (3), we calculate Lastly, choosing r = ξ -1 (N ) for ξ(r) = κ(r) 2 log(κ(r)) Since the event E p Lip is independent of the choice of F : χ → R F , the proof is finished." }, { "figure_ref": [], "heading": "G.4 A generalization theorem for MPNNs", "publication_ref": [ "b31" ], "table_ref": [], "text": "The following generalization theorem of MPNN is now a direct result of Theorem G.3.\nSince (31) is true for any Υ ′ ∈ Lip( WL r , L 1 ), it is also true for Υ X for any realization of X, so we also have R(Υ X ) -R(Υ X , X) ≤ 2ξ -1 (N/2C)L + 1 √ 2 ξ -1 (N/2C)(L + E(0, 0))(1 + log(2/p)).\nLastly, we denote\nE p = E mult ∩ C i=1 E p i ." }, { "figure_ref": [], "heading": "G.5 Experiments", "publication_ref": [ "b34", "b28" ], "table_ref": [ "tab_0" ], "text": "The nontrivial part in our construction of the MPNN architecture is the choice of normalized sum aggregation as the aggregation method of the MPNNs. 
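Concretely, a single message passing layer with normalized sum aggregation can be sketched as follows. The linear message and update functions below are placeholders, and this is not the implementation used for the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def mpl_normalized_sum(A, f, W_msg, W_upd):
    """One message passing layer with normalized sum aggregation.

    A     : (n, n) 0/1 adjacency matrix
    f     : (n, d) node signal
    W_msg : (d, p) weights of a linear transmitter message function xi_t
    W_upd : (d + p, s) weights of a linear update function, followed by ReLU
    """
    n = A.shape[0]
    msg = f @ W_msg                       # xi_t(f_j) for every node j
    agg = (A @ msg) / n                   # normalized sum: (1/n) * sum_j a_ij * xi_t(f_j)
    return np.maximum(np.concatenate([f, agg], axis=1) @ W_upd, 0.0)

# Toy usage on a random graph; all weights are placeholders.
n, d, p, s = 50, 4, 8, 8
A = np.triu(rng.uniform(size=(n, n)) < 0.2, 1)
A = (A | A.T).astype(float)
f = rng.normal(size=(n, d))
out = mpl_normalized_sum(A, f, rng.normal(size=(d, p)), rng.normal(size=(d + p, s)))
print(out.shape)                          # (50, 8)
```

Dividing the aggregated messages by the number of nodes n, rather than by the degree, is what allows the layer to extend to graphons.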
We hence show the accuracy and generalization gap of this aggregation scheme in practice in Table 1.\nMost MPNNs typically use sum, mean or max aggregation. Intuitively, normalized sum aggregation is close to average aggregation, due its \"normalized nature.\" For example, normalized sum and mean aggregations are well behaved for dense graphs with number of nodes going to infinity, while sum aggregation diverges for such graphs. Moreover, sum aggregation cannot be extended to graphons, while normalized sum and mean aggregations can. In Table 2, we first show that MPNNs with normalized sum aggregation perform well and generalize well. We then compare the normalized sum aggregation MPNNs (in rows 1 and 3 of Table 2) to baseline MPNNs with mean aggregation (rows 2 and 4 in Table 2), and show that normalized sum aggregation is not worse than mean aggregation.\nThe source code, courtesy of Ningyuan (Teresa) Huang, is available as part of https://github. com/nhuang37/finegrain_expressivity_GNN . Table 2: Standard MPNN architectures with normalized sum aggregation (nsa) and mean aggregation (ma), 3-layers with 512-hidden-dimension, and global mean pooling, denoted by \"MPNN-nsa\" and \"MPNN-ma.\" We use the MPNNs GIN [34] and GraphConv [28], and report the mean accuracy ± std over ten data splits. Nsa has good generalization and better performance than ma." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "I thank Ningyuan (Teresa) Huang for providing the experiments of Table 2.\nRon Levie is partially funded by ISF (Israel Science Foundation) grant #1937/23: Analysis of graph deep learning using graphon theory." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b4" ], "table_ref": [], "text": "The next theorem bounds the covering number of WL r .\nTheorem C.2. Let r > 0 and c > 1. For every sufficiently small ϵ > 0, the space WL r can be covered by\nballs of radius ϵ in cut distance, where k = ⌈2 2c/ϵ 2 ⌉.\nProof. Let 1 < c < c ′ and 0 < α < 1. Given an error tolerance αϵ > 0, using Theorem B.8, we take the equipartition I n into n = 2 ⌈ 2c α 2 ϵ 2 ⌉ intervals, for which any graphon-signal (W, f ) ∈ WL r can be approximated by some (W, f ) n in [ WL r ] In , up to error αϵ. Consider the rectangle R n,r = [0, 1] n 2 × [-r, r] n . We identify each element of [ WL r ] In with an element of R n,r using the coefficients of (5). More accurately, the coefficients c i,j of the step graphon are identifies with the first n 2 entries of a point in R n,r , and the the coefficients b i of the step signals are identifies with the last n entries of a point in R n,r . Now, consider the quantized rectangle Rn,r , defined as Rn,r = (1 -α)ϵZ) n 2 +2rn ∩ R n,r .\nNote that Rn consists of\npoints. Now, every point x ∈ R n,r can be approximated by a quantized version x Q ∈ Rn,r up to error in normalized ℓ 1 norm\nwhere we re-index the entries of x and x Q in a 1D sequence. Let us denote by (W, f ) Q the quantized version of (W n , f n ), given by the above equivalence mapping between (W, f ) n and R n,r . We hence have\nWe now choose the parameter α. Note that for any c ′ > c, there exists ϵ 0 > 0 that depends on c ′ -c, such that for any ϵ < ϵ 0 there is a choice of α (close to 1) such that\nwhere k = ⌈2 2c ′ /ϵ 2 ⌉. This is shown similarly to the proof of Corollary B.7 and Theorem B.8. We now replace the notation c ′ → c, which concludes the proof." 
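To get a sense of the size of the covering, the bound k = ⌈2^{2c/ϵ^2}⌉ of Theorem C.2 can be evaluated directly; the constant c = 2 below is an arbitrary admissible choice.

```python
import math

def covering_number(eps, c=2.0):
    # Upper bound of Theorem C.2 on the number of eps-balls (in cut distance)
    # needed to cover WL_r; any fixed c > 1 is admissible for small enough eps.
    return math.ceil(2 ** (2 * c / eps ** 2))

for eps in (1.0, 0.7, 0.5, 0.4):
    print(eps, covering_number(eps))
```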
}, { "figure_ref": [], "heading": "D Graphon-signal sampling lemmas", "publication_ref": [], "table_ref": [], "text": "In this appendix, we prove Theorem 3.7. We denote by W 1 the space of measurable functions U : [0, 1] → [-1, 1], and call each U ∈ W 1 a kernel." }, { "figure_ref": [], "heading": "D.1 Formal construction of sampled graph-signals", "publication_ref": [], "table_ref": [], "text": "Let W ∈ W 0 be a graphon, and" }, { "figure_ref": [], "heading": "E Graphon-signal MPNNs", "publication_ref": [], "table_ref": [], "text": "In this appendix we give properties and examples of MPNNs." }, { "figure_ref": [], "heading": "E.1 Properties of graphon-signal MPNNs", "publication_ref": [], "table_ref": [], "text": "Consider the construction of MPNN from Section 4.1. We first explain how a MPNN on a grpah is equivalent to a MPNN on the induced graphon.\nLet G be a graph of n nodes, with adjacency matrix A = {a i,j } i,j∈[n] and signal f ∈ R n×d . Consider a MPL θ, with receiver and transmitter message functions\nwhere K ∈ N, and update function µ : R d+p → R s . The application of the MPL on (G, f ) is defined as follows. We first define the message kernel\nWe then aggregate the message kernel with normalized sum aggregation\nLastly, we apply the update function, to obtain the output θ(G, f ) of the MPL with value at each node i θ(G, f\nLemma E.1. Consider a MPL θ as in the above setting. Then, for every graph signal (G, A, f ),\nProof. Let {I i , . . . , I n } be the equipartition to intervals. For each j ∈ [n], let y j ∈ I j be an arbitrary point. Let i ∈ [n] and x ∈ I i . We have\nTherefore, for every i ∈ [n] and every x ∈ I i ," }, { "figure_ref": [], "heading": "E.2 Examples of MPNNs", "publication_ref": [ "b36" ], "table_ref": [], "text": "The GIN convolutional layer [36] is defined as follows. First, the message function is\nwhere M is a multi-layer perceptron (MLP) and ϵ a constant. Each layer may have a different MLP and different constant ϵ. The standard GIN is defined with sum aggregation, but we use normalized sum aggregation." }, { "figure_ref": [], "heading": "F.2 Bounds of signals and MPLs with Lipschitz message and update functions", "publication_ref": [], "table_ref": [], "text": "We will consider three settings for the MPNN Lipschitz bounds. In all settings, the transmitter, receiver, and update functions are Lipschitz. In the first setting all message and update functions are assumed to be bounded. In the second setting, there is no additional assumption over Lipschtzness of the transmitter, receiver, and update functions. In the third setting, we assume that the message function Φ is also Lipschitz with Lipschitz bound L Φ , and that all receiver and transmitter functions are non-negatively bounded (e.g., via an application of ReLU or sigmoid in their implementation). Note that in case K = 1 and all functions are differentiable, by the product rule, Φ can be Lipschitz only in two cases: if both ξ r and ξ t are bounded and Lipschitz, or if either ξ r or ξ t is constant, and the other function is Lipschitz. When K > 1, we can have combinations of these cases.\nWe next derive bounds for the different settings. A bound for setting 1 is given in Theorem F.8. Moreover, When the receiver and transmitter message functions and the update functions are bounded, so is the signal at each layer." 
}, { "figure_ref": [], "heading": "Bounds for setting 2.", "publication_ref": [ "b27", "b25", "b25" ], "table_ref": [], "text": "Next we show boundedness when the receiver and transmitter message and update functions are only assumed to be Lipschitz.\nDefine the formal bias B ξ of a function ξ : R d1 → R d2 to be ξ(0) [27]. We note that the formal bias of an affine-linear operator is its classical bias.\nLemma F.10. Let (W, f ) ∈ WL r , and suppose that for every y ∈ {r, t} and k = 1, . . . , K\nProof. Let y ∈ {r, t}. We have\nNext, we have a direct result of Theorem F.8.\nCorollary F.11. Suppose that for every y ∈ {r, t} and k = 1, . . . , K\nThen, for every\nBound for setting 3.\nLemma F.12. Let (W, f ) ∈ WL r , and suppose that\nProof. We have\nAdditional bounds.\nLemma F.13. Let f be a signal, W, V ∈ W 0 , and suppose that ∥Φ f ∥ ∞ ≤ ρ for every k = 1, . . . , K, and that ξ k r and ξ k t are non-negatively valued. Then\nProof. The proof follows the steps of Lemma F.7 until Equation (25), from where we proceed differently. Since all of the functions q k r and q k t , k ∈ [K], and since ∥Φ f ∥ ∞ ≤ ρ, the product of each q k r (x)q k t (y) must be also bounded by ρ for every x ∈ [0, 1] and k ∈ [K]. Hence, we may replace the normalization in Equation (25) with\nwhere for every k ∈\nTheorem F.14. Let (W, f ), (V, g) ∈ WL r , and suppose that ∥Φ∥ ∞, ∥ξ k r ∥ ∞ , ∥ξ k t ∥ ∞ ≤ ρ, all message functions ξ are non-negative valued, and\nThe proof follows the steps of Theorem F.8.\nCorollary F.15. Suppose that for every y ∈ {r, t} and k = 1, . . . , K andξ, Φ are all non-negatively valued. Then, for every (W, f ), (V, g) ∈ WL r ,\nThe proof follows the steps of Corollary F.11.\nWe hence derive a recurrence sequence for a bound ∥Θ t (W, f ) -Θ t (V, g)∥ □ ≤ e t , as\nWe now apply Lemma F.16." }, { "figure_ref": [], "heading": "Setting 3.", "publication_ref": [ "b34" ], "table_ref": [], "text": "Lemma F.20. Suppose that for every layer t and every y ∈ {r, t} and k = 1, . . . , K, andξ, Φ are all non-negatively valued. Then, for MPNNs without update function\nand for MPNNs with update function\nProof. We first prove for MPNNs without update functions. By Lemma F.10, there is a bound e t of ∥Θ t (W, f )∥ ∞ that satisfies e t = Le t-1 + B.\nSolving this recurrence sequence via Lemma F.16 concludes the proof. Lastly, for MPNN with update functions, we have a bound that satisfies e t = L 2 e t-1 + LB + B, and we proceed as before.\nLemma F.21. Suppose that for every y ∈ {r, t} and k = 1, . . . , K\nand ξ, Φ are all non-negatively valued. Let (W, g), (V, g) ∈ WL r . Then, for MPNNs without update functions\nand for MPNNs with update functions\nProof. We start with MPNNs without update functions. By Corollary F.15 and Lemma F.20, there is a bound e t on the error ∥Θ t (W, Φ f ) -Θ t (V, Φ g )∥ □ at step t that satisfies\nHence, by Lemma F.16, and Z defined by ( 26),\nUniform convergence bounds are approaches for intersecting adequate sampling events that allow bounding the generalization error more efficiently. This intersection of events leads to a term in the generalization bound, called the complexity/capacity, that describes the richness of the hypothesis space T . This is the philosophy behind approaches such as VC-dimension, Rademacher dimension, fat-shattering dimension, pseudo-dimension, and uniform covering number (see, e.g., [34])." 
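As a sanity check of the Monte Carlo rate discussed above for a single fixed model, the gap between the empirical and statistical risks of a fixed bounded loss can be simulated; the scalar loss below is a toy placeholder, unrelated to the graphon-signal setting, and the simulation does not implement the uniform bound itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of the Monte Carlo rate for a *fixed* model: the loss of a fixed
# Theta on a random input is modelled by a bounded function of x ~ U[0, 1].
loss = lambda x: (np.sin(7 * x) + 1.0) / 2.0        # placeholder loss with values in [0, 1]
R_stat = 0.5 + (1.0 - np.cos(7.0)) / 14.0           # exact statistical risk E[loss(x)]

for M in (10**2, 10**3, 10**4, 10**5):
    gaps = [abs(loss(rng.uniform(size=M)).mean() - R_stat) for _ in range(200)]
    print(M, float(np.mean(gaps)), 1.0 / np.sqrt(M))  # gap shrinks at the 1/sqrt(M) rate
```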
}, { "figure_ref": [], "heading": "G.2 Classification setting", "publication_ref": [ "b23" ], "table_ref": [], "text": "We define a ground truth classifier into C classes as follows. Let C : WL r → R C be a measurable piecewise constant function of the following form. There is a partition of WL r into disjoint measurable sets B 1 , . . . , B C ⊂ WL r such that C i=1 B i = WL r , and for every i ∈ [C] and every\nwhere e i ∈ R C is the standard basis element with entries (e i ) j = δ i,j , where δ i,j is the Kronecker delta.\nWe define an arbitrary data distribution as follows. Let B be the Borel σ-algebra of WL r , and ν be any probability measure on the measurable space ( WL r , B). We may assume that we complete B with respect to ν, obtaining the σ-algebra Σ. If we do not complete the measure, we just denote Σ = B. Defining ( WL r , Σ, ν) as a complete measure space or not will not affect our construction.\nLet S be a metric space. Let Lip(S, L) be the space of Lipschitz continuous mappings Υ : S → R C with Lipschitz constant L. Note that by Theorem 4.1, for every i ∈ [C], the space of MPNN with Lipschitz continuous input and output message functions and Lipschitz update functions, restricted to B i , is a subset of Lip(B i , L 1 ) which is the restriction of Lip( WL r , L 1 ) to B i ⊂ WL r , for some L 1 > 0. Moreover, B i has finite covering κ(ϵ) given in (23). Let E be a Lipschitz continuous loss function with Lipschitz constant L 2 . Therefore, since C| Bi is in Lip(B i , 0), for any Υ ∈ Lip( WL r , L 1 ), the function" }, { "figure_ref": [], "heading": "G.3 Uniform Monte Carlo approximation of Lipschitz continuous functions", "publication_ref": [ "b27", "b27", "b23", "b23" ], "table_ref": [], "text": "The proof of Theorem 4.2 is based on the following Theorem G.3, which studies uniform Monte Carlo approximations of Lipschitz continuous functions over metric spaces with finite covering.\nDefinition G.1. A metric space M is said to have covering number κ : (0, ∞) → N, if for every ϵ > 0, the space M can be covered by κ(ϵ) ball of radius ϵ.\nTheorem G.2 (Hoeffding's Inequality). Let Y 1 , . . . , Y N be independent random variables such that a ≤ Y i ≤ b almost surely. Then, for every k > 0,\nThe following theorem is an extended version of [27,Lemma B.3], where the difference is that we use a general covering number κ(ϵ), where in [27,Lemma B.3] the covering number is exponential in ϵ. For completion, we repeat here the proof, with the required modification.\nTheorem G.3 (Uniform Monte Carlo approximation for Lipschitz continuous functions). Let X be a probability metric space 5 , with probability measure µ, and covering number κ(ϵ). Let X 1 , . . . , X N be drawn i.i.d. from X . Then, for every p > 0, there exists an event E p Lip ⊂ X N (regarding the choice of (X 1 , . . . , X N )), with probability\nLet Lip( WL r , L 1 ) denote the space of Lipschitz continuous functions Θ : WL r → R C with Lipschitz bound bounded by L 1 and ∥Θ∥ ∞ ≤ L 1 . We note that the theorems of Appendix F.2 prove that MPNN with Lipschitz continuous message and update functions, and bounded formal biases, are in Lip( WL r , L 1 ).\nTheorem G.4 (MPNN generalization theorem). Consider the classification setting of Appendix G.2. Let X 1 , . . . , X N be independent random samples from the data distribution ( WL r , Σ, ν). Then, for every p > 0, there exists an event E p ⊂ WL r N regarding the choice of (X 1 , . . . 
, X N ), with probability\nin which for every function Υ in the hypothesis class Lip( WL r , L 1 ), with we have\nwhere ξ(r) = κ(r) 2 log(κ(r))\n, κ is the covering number of WL r given in (23), and ξ -1 is the inverse function of ξ.\nProof. For each i ∈ [C], let S i be the number of samples of X that falls within B i . The random variable (S 1 , . . . , S C ) is multinomial, with expected value (N/C, . . . , N/C) and variance\n). We now use Chebyshev's inequality, which states that for any a > 0,\nWe choose a N C = N 2C , so a = N 1/2 2C 1/2 , and\nTherefore,\nWe intersect these events of i ∈ [C], and get an event E mult of probability more than 1 -2 C 2 N in which S i > N 2C for every i ∈ [C]. In the following, given a set B i we consider a realization M = S i , and then use the law of total probability.\nFrom Theorem G.3 we get the following. For every p > 0, there exists an event E p i ⊂ B M i regarding the choice of (X 1 , . . . , X M ) ⊂ B i , with probability\nsuch that for every function Υ ′ in the hypothesis class Lip( WL r , L 1 ), we have\nwhere ξ(r) = κ(r) 2 log(κ(r))\n, κ is the covering number of WL r given in (23), and ξ -1 is the inverse function of ξ. In the last inequality, we use the bound, for every x ∈ WL r , " }, { "figure_ref": [], "heading": "H Stability of MPNNs to graph subsampling", "publication_ref": [], "table_ref": [], "text": "Proof. By Lipschitz continuity of Θ,\nHence,\nand the claim of the theorem follows from Theorem 3.7.\nAs explained in Section 3.5, the above theorem of stability of MPNNs to graphon-signal sampling also applies to subsampling graph-signals." }, { "figure_ref": [], "heading": "I Notations", "publication_ref": [], "table_ref": [], "text": "[n] = {1, . . . , n}.\nL p (X ) or L p : Lebesgue p space over the measure space X . µ: standard Lebesgue measure on [0, 1]. P k = {P 1 , . . . , P k }: partition (page 3) G = {V, E}: simple graph with nodes V and edges E. A = {a i,j } m i,j=1 : graph adjacency matrix (page 4). e G (U, S): the number of edges with one end point at U and the other at S, where U, S ⊂ V (page 3).\ne P(U,S) : density of of edges between U and S (page 3). irreg G (P): irregularity Equation (1). W 0 : space of graphons (page 4). W : graphon (page 4). W G : induced graphon from the graph G (page 4). ∥W ∥ □ : cut norm (page 4). d □ (W, V ): cut metric (page 4). δ □ (W, V ): cut distance (page 4). S [0,1] : the space of measure preserving bijections [0, 1] → [0, 1] (page 4). S ′ [0,1] : the set of measurable measure preserving bijections between co-null sets of [0, 1] (page 5). V ϕ (x, y) = V (ϕ(x), ϕ(y)) (page 4). W 0 : space of graphons modulo zero cut distance (page 4). L ∞ r [0, 1]: signal space Equation (2). ∥f ∥ □ : cut norm of a signal Definition 3.1. WL r : graphon-signal space (page 5). ∥(W, f )∥ □ : graphon-signal cut distance (page 5). δ □ (W, f ), (V, g) : graphon-signal cut distance Equation (4). WL r : graphon-signal space modulo zero cut distance (page 5). " } ]
[ { "authors": "N Alon; W De La Vega; R Kannan; M Karpinski", "journal": "Journal of Computer and System Sciences", "ref_id": "b0", "title": "Random sampling and approximation of max-csps", "year": "2003" }, { "authors": "K Atz; F Grisoni; G Schneider", "journal": "Nature Machine Intelligence", "ref_id": "b1", "title": "Geometric deep learning on molecular representations", "year": "2021" }, { "authors": "W Azizian; M Lelarge", "journal": "", "ref_id": "b2", "title": "Expressive power of invariant and equivariant graph neural networks", "year": "2021" }, { "authors": "C Borgs; J T Chayes; L Lovász; V T Sós; K Vesztergombi", "journal": "Advances in Mathematics", "ref_id": "b3", "title": "Convergent sequences of dense graphs i: Subgraph frequencies, metric properties and testing", "year": "2008" }, { "authors": "J Chen; T Ma; C Xiao", "journal": "", "ref_id": "b4", "title": "FastGCN: Fast learning with graph convolutional networks via importance sampling", "year": "2018" }, { "authors": "Z Chen; S Villar; L Chen; J Bruna", "journal": "NeurIPS. Curran Associates, Inc", "ref_id": "b5", "title": "On the equivalence between graph isomorphism testing and function approximation with gnns", "year": "2019" }, { "authors": "W.-L Chiang; X Liu; S Si; Y Li; S Bengio; C.-J Hsieh", "journal": "Association for Computing Machinery", "ref_id": "b6", "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks", "year": "2019" }, { "authors": "D Conlon; J Fox", "journal": "Geometric and Functional Analysis", "ref_id": "b7", "title": "Bounds for graph regularity and removal lemmas", "year": "2012" }, { "authors": "M Defferrard; X Bresson; P Vandergheynst", "journal": "NeurIPS. Curran Associates Inc", "ref_id": "b8", "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "year": "2016" }, { "authors": "J M S ", "journal": "Cell", "ref_id": "b9", "title": "A deep learning approach to antibiotic discovery", "year": "2020" }, { "authors": "M Fey; J E Lenssen", "journal": "", "ref_id": "b10", "title": "Fast graph representation learning with PyTorch Geometric", "year": "2019" }, { "authors": "G B Folland", "journal": "John Wiley & Sons", "ref_id": "b11", "title": "Real analysis: modern techniques and their applications", "year": "1999" }, { "authors": "A M Frieze; R Kannan", "journal": "Combinatorica", "ref_id": "b12", "title": "Quick approximation to matrices and applications", "year": "1999" }, { "authors": "V Garg; S Jegelka; T Jaakkola", "journal": "PMLR", "ref_id": "b13", "title": "Generalization and representational limits of graph neural networks", "year": "2020-07" }, { "authors": "J Gilmer; S S Schoenholz; P F Riley; O Vinyals; G E Dahl", "journal": "", "ref_id": "b14", "title": "Neural message passing for quantum chemistry", "year": "2017" }, { "authors": "W L Hamilton; R Ying; J Leskovec", "journal": "", "ref_id": "b15", "title": "Inductive representation learning on large graphs", "year": "" }, { "authors": "J M Jumper; R Evans; A Pritzel; T Green; M Figurnov; O Ronneberger; K Tunyasuvunakool; R Bates; A Zídek; A Potapenko; A Bridgland; C Meyer; S A A Kohl; A Ballard; A Cowie; B Romera-Paredes; S Nikolov; R Jain; J Adler; T Back; S Petersen; D A Reiman; E Clancy; M Zielinski; M Steinegger; M Pacholska; T Berghammer; S Bodenstein; D Silver; O Vinyals; A W Senior; K Kavukcuoglu; P Kohli; D Hassabis", "journal": "Nature", "ref_id": "b16", "title": "Highly accurate protein structure prediction with alphafold", "year": "2021" }, { 
"authors": "N Keriven; A Bietti; S Vaiter", "journal": "", "ref_id": "b17", "title": "Convergence and stability of graph convolutional networks on large random graphs", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b18", "title": "", "year": "2020" }, { "authors": "N Keriven; A Bietti; S Vaiter", "journal": "NeurIPS. Curran Associates, Inc", "ref_id": "b19", "title": "On the universality of graph neural networks on large random graphs", "year": "2021" }, { "authors": "T N Kipf; M Welling", "journal": "", "ref_id": "b20", "title": "Semi-supervised classification with graph convolutional networks", "year": "2017" }, { "authors": "R Levie; W Huang; L Bucci; M Bronstein; G Kutyniok", "journal": "Journal of Machine Learning Research", "ref_id": "b21", "title": "Transferability of spectral graph convolutional neural networks", "year": "2021" }, { "authors": "R Levie; F Monti; X Bresson; M M Bronstein", "journal": "IEEE Transactions on Signal Processing", "ref_id": "b22", "title": "Cayleynets: Graph convolutional neural networks with complex rational spectral filters", "year": "2019" }, { "authors": "R Liao; R Urtasun; R Zemel", "journal": "", "ref_id": "b23", "title": "A PAC-bayesian approach to generalization bounds for graph neural networks", "year": "2021" }, { "authors": "L M Lovász", "journal": "Colloquium Publications", "ref_id": "b24", "title": "Large networks and graph limits", "year": "2012" }, { "authors": "L M Lovász; B Szegedy", "journal": "GAFA Geometric And Functional Analysis", "ref_id": "b25", "title": "Szemerédi's lemma for the analyst", "year": "2007" }, { "authors": "S Maskey; R Levie; G Kutyniok", "journal": "Applied and Computational Harmonic Analysis", "ref_id": "b26", "title": "Transferability of graph neural networks: An extended graphon approach", "year": "2023" }, { "authors": "S Maskey; R Levie; Y Lee; G Kutyniok", "journal": "NeurIPS. 
Curran Associates, Inc", "ref_id": "b27", "title": "Generalization analysis of message passing neural networks on large random graphs", "year": "2022" }, { "authors": "O Méndez-Lucio; M Ahmad; E A Del Rio-Chanona; J K Wegner", "journal": "Nature Machine Intelligence", "ref_id": "b28", "title": "A geometric deep learning approach to predict binding conformations of bioactive molecules", "year": "2021" }, { "authors": "C Morris; F Geerts; J Tönshoff; M Grohe", "journal": "", "ref_id": "b29", "title": "Wl meet vc", "year": "2023" }, { "authors": "C Morris; M Ritzert; M Fey; W L Hamilton; J E Lenssen; G Rattan; M Grohe", "journal": "", "ref_id": "b30", "title": "Weisfeiler and leman go neural: Higher-order graph neural networks", "year": "2019-07" }, { "authors": "L Ruiz; L F O Chamon; A Ribeiro", "journal": "IEEE Transactions on Signal Processing", "ref_id": "b31", "title": "Graphon signal processing", "year": "2021" }, { "authors": "L Ruiz; Z Wang; A Ribeiro", "journal": "", "ref_id": "b32", "title": "Graphon and graph neural network stability", "year": "2021" }, { "authors": "F Scarselli; A C Tsoi; M Hagenbuchner", "journal": "Neural Networks", "ref_id": "b33", "title": "The vapnik-chervonenkis dimension of graph and recursive neural networks", "year": "2018" }, { "authors": "S Shalev-Shwartz; S Ben-David", "journal": "Cambridge University Press", "ref_id": "b34", "title": "Understanding Machine Learning: From Theory to Algorithms", "year": "2014" }, { "authors": "D Williams", "journal": "Cambridge University Press", "ref_id": "b35", "title": "Probability with Martingales", "year": "1991" }, { "authors": "K Xu; W Hu; J Leskovec; S Jegelka", "journal": "", "ref_id": "b36", "title": "How powerful are graph neural networks?", "year": "1994" }, { "authors": "( W Agg; Q ", "journal": "", "ref_id": "b37", "title": "aggregation of message Q with respect to graphon W (page 8)", "year": "" }, { "authors": "( W ", "journal": "", "ref_id": "b38", "title": "the output of the MPNN applied on (W, f ) ∈ WL r at layer t ∈ [T ] (page 8)", "year": "" } ]
[ { "formula_coordinates": [ 3, 149.56, 487.99, 365.38, 14.11 ], "formula_id": "formula_0", "formula_text": "P k = {P 1 , . . . , P k } of disjoint measurable subsets of [0, 1] such that k j=1 P j = [0, 1]." }, { "formula_coordinates": [ 3, 90, 726.9, 423, 21.67 ], "formula_id": "formula_1", "formula_text": "Let P = {V 1 , . . . , V k } be a partition of V . The partition is called equipartition if ||V i | -|V j || ≤ 1 for every i, j ∈ [k]." }, { "formula_coordinates": [ 4, 308.94, 123.14, 151.44, 14.61 ], "formula_id": "formula_2", "formula_text": "k i=1 k j=1 e G (Vi,Vj ) |Vi||Vj | |V i ∩ U | |V j ∩ S|." }, { "formula_coordinates": [ 4, 198.84, 161.95, 311.08, 17.55 ], "formula_id": "formula_3", "formula_text": "irreg G (P) = max U,S⊂V |e G (U, S) -e P (U, S)| / |V | 2 . (1" }, { "formula_coordinates": [ 4, 509.92, 164.86, 4.24, 8.8 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 173.07, 200.25, 303.89, 13.8 ], "formula_id": "formula_5", "formula_text": "P = {V 1 , . . . , V k } of V into k ≤ 2 c/ϵ 2 classes such that irreg G (P) ≤ ϵ." }, { "formula_coordinates": [ 4, 365.47, 390.74, 149.47, 10.31 ], "formula_id": "formula_6", "formula_text": "[0, 1] 2 → [0, 1], W (x, y) = W (y, x)." }, { "formula_coordinates": [ 4, 217.46, 681.68, 168.08, 17.23 ], "formula_id": "formula_7", "formula_text": "∥W ∥ □ = sup U,S⊂[0,1] U ×S W (x, y)dxdy ," }, { "formula_coordinates": [ 5, 231.6, 111.96, 274.89, 37.59 ], "formula_id": "formula_8", "formula_text": "□ (W, V ) = ∥W -V ∥ □ . The cut distance is defined to be δ □ (W, V ) = inf ϕ∈S [0,1] ∥W -V ϕ ∥ □ ," }, { "formula_coordinates": [ 5, 189.41, 467.63, 324.75, 12.69 ], "formula_id": "formula_9", "formula_text": "L ∞ r [0, 1] := {f ∈ L ∞ [0, 1] | ∀x ∈ [0, 1], |f (x)| ≤ r} .(2)" }, { "formula_coordinates": [ 5, 234.14, 538.88, 280.03, 17.29 ], "formula_id": "formula_10", "formula_text": "∥f ∥ □ := sup S⊆[0,1] S f (x)dµ(x) ,(3)" }, { "formula_coordinates": [ 5, 214.87, 596.77, 173.26, 32.6 ], "formula_id": "formula_11", "formula_text": "L 1 norm ∀f ∈ L ∞ r [0, 1], ∥f ∥ □ ≤ ∥f ∥ 1 ≤ 2∥f ∥ □ ." }, { "formula_coordinates": [ 5, 90, 651.05, 423, 35.79 ], "formula_id": "formula_12", "formula_text": "WL r := W 0 × L ∞ r [0, 1]. We define the graphon-signal cut norm, for measurable W, V : [0, 1] 2 → R and f, g : [0, 1] → R, by ∥(W, f )∥ □ = ∥W ∥ □ + ∥f ∥ □ ." }, { "formula_coordinates": [ 5, 281.27, 694.41, 172.2, 10.36 ], "formula_id": "formula_13", "formula_text": "d □ (W, f ), (V, g) = ∥(W, f ) -(V, g)∥ □ ." }, { "formula_coordinates": [ 5, 135.41, 740.2, 332.17, 12.94 ], "formula_id": "formula_14", "formula_text": "S ′ [0,1] = {ϕ : A → B | A, B co-null in [0, 1], and ∀S ∈ A, µ(S) = µ(ϕ(S))} ," }, { "formula_coordinates": [ 6, 199.69, 166.22, 314.48, 19.36 ], "formula_id": "formula_15", "formula_text": "δ □ (W, f ), (V, g) = inf ϕ∈S ′ [0,1] d □ (W, f ), (V, g) ϕ .(4)" }, { "formula_coordinates": [ 6, 90, 219.35, 423, 36.22 ], "formula_id": "formula_16", "formula_text": "(W, f ) ∼ (V, g) if δ □ ((W, f ), (V, g)) = 0. The quotient space WL r := WL r / ∼ of equivalence classes [(W, f )] of graphon-signals (W, f ) is a metric space with the metric δ □ ([(W, f )], [(V, g)]) = δ □ ((W, f ), (V, g))." }, { "formula_coordinates": [ 6, 89.46, 333.41, 425.07, 70.87 ], "formula_id": "formula_17", "formula_text": "A = {a i,j } i,j∈[n] . Let {I k } n k=1 with I k = [(k -1)/n, k/n) be the equipartition of [0, 1] into n intervals. 
The graphon- signal (W, f ) (G,f ) = (W G , f f ) induced by (G, f ) is defined by W G (x, y) = n i,j=1 a ij 1 Ii (x)1 Ij (y)," }, { "formula_coordinates": [ 6, 348.95, 373.95, 88.21, 30.32 ], "formula_id": "formula_18", "formula_text": "f f (z) = n i f i 1 Ii (z)." }, { "formula_coordinates": [ 6, 203.53, 705.44, 310.63, 31.83 ], "formula_id": "formula_19", "formula_text": "F (x 1 , . . . , x d ) = j=(j1,...,j d )∈[k] d c j d l=1 1 Pj l (x l ),(5)" }, { "formula_coordinates": [ 6, 166.74, 745.58, 63.56, 10.24 ], "formula_id": "formula_20", "formula_text": "{c j ∈ R} j∈[k] d ." }, { "formula_coordinates": [ 7, 128.24, 134.36, 186.44, 13.15 ], "formula_id": "formula_21", "formula_text": "[WL r ] P k := (W 0 ∩ S 2 P k ) × (L ∞ r [0, 1] ∩ S 1 P k )" }, { "formula_coordinates": [ 7, 155.63, 205.53, 358.54, 32.26 ], "formula_id": "formula_22", "formula_text": "(W n , f n ) ∈ [WL r ] In such that δ □ (W, f ), (W n , f n ) ≤ ϵ,(6)" }, { "formula_coordinates": [ 7, 89.23, 578.82, 424.94, 36.26 ], "formula_id": "formula_23", "formula_text": "κ(ϵ) = 2 k 2 (7) balls of radius ϵ, where k = ⌈2 9c 4ϵ 2 ⌉." }, { "formula_coordinates": [ 8, 89.36, 259.93, 422.97, 55.95 ], "formula_id": "formula_24", "formula_text": "Λ = (λ 1 , . . . λ k ) ∈ [0, 1] k independent uniform random samples from [0, 1], we have E δ □ W, f , W (Λ), f (Λ) < 15 log(k) ," }, { "formula_coordinates": [ 8, 200.45, 334.27, 202.11, 23.7 ], "formula_id": "formula_25", "formula_text": "E δ □ W, f , G(W, Λ), f (Λ) < 15 log(k) ." }, { "formula_coordinates": [ 8, 145.22, 572.37, 209.08, 42.86 ], "formula_id": "formula_26", "formula_text": "R 2d → R p by Φ(a, b) = K k=1 ξ k r (a)ξ k t (b)," }, { "formula_coordinates": [ 8, 158.48, 630.89, 248.99, 51.31 ], "formula_id": "formula_27", "formula_text": "Φ f : [0, 1] 2 → R p by Φ f (x, y) = Φ(f (x), f (y)) = K k=1 ξ k r (f (x))ξ k t (f (y))." }, { "formula_coordinates": [ 8, 135.43, 713.36, 249.9, 45.42 ], "formula_id": "formula_28", "formula_text": "Agg(W, Q) ∈ L ∞ r [0, 1], defined by Agg(W, Q)(x) = 1 0 W (x, y)Q(x, y)dy," }, { "formula_coordinates": [ 9, 414.36, 109.33, 92.82, 11.87 ], "formula_id": "formula_29", "formula_text": "f (t) → Agg(W, Φ(t+1)" }, { "formula_coordinates": [ 9, 487.86, 112.46, 23.17, 12.05 ], "formula_id": "formula_30", "formula_text": "f (t) )" }, { "formula_coordinates": [ 9, 239.62, 137.76, 185.69, 15.19 ], "formula_id": "formula_31", "formula_text": "f (t+1) = µ (t+1) f (t) (x), Agg(W, Φ (t+1) f (t) )(x)" }, { "formula_coordinates": [ 9, 346.24, 164.38, 94.17, 12.2 ], "formula_id": "formula_32", "formula_text": "( t ξ k r ), ( t ξ k t )} k∈[K],t∈[T ]" }, { "formula_coordinates": [ 9, 90, 224.16, 423, 25.98 ], "formula_id": "formula_33", "formula_text": "F : R d ×R d → R C (e.g., L 2 functions) can be approximated by (finite) linear combinations of simple tensors F (a, b) ≈ K k=1 ξ k 1 (a)ξ k 2 (b)." }, { "formula_coordinates": [ 9, 385.98, 286.81, 125.2, 9.96 ], "formula_id": "formula_34", "formula_text": "Θ t (W, f ) (G,f ) = (W, f ) Θt(G,f )" }, { "formula_coordinates": [ 9, 230.72, 318.09, 146.13, 27.27 ], "formula_id": "formula_35", "formula_text": "Agg(G, Φ f ) i = 1 n j∈[n] a i,j (Φ f ) i,j ." 
}, { "formula_coordinates": [ 9, 205.77, 480.28, 119.54, 12.69 ], "formula_id": "formula_36", "formula_text": "µ t (0) , t ξ k y (0) ≤ B,and" }, { "formula_coordinates": [ 9, 335.5, 482.35, 65.06, 11.78 ], "formula_id": "formula_37", "formula_text": "L µ t , Lt ξ k y < L," }, { "formula_coordinates": [ 9, 183.3, 520.14, 236.4, 35.27 ], "formula_id": "formula_38", "formula_text": "(W, f ), (V, g) ∈ WL r , ∥Θ(W, f ) -Θ(V, g)∥ □ ≤ L Θ ∥f -g∥ □ + ∥W -V ∥ □ ." }, { "formula_coordinates": [ 10, 89.49, 256.54, 309.78, 40.9 ], "formula_id": "formula_39", "formula_text": "R(Υ) = E E(Υ, C) = E(Υ(x), C(x))dν(x). We define the empirical risk R(Υ X , X) = 1 N N i=1 E Υ X (X i ), C(X i ) ." }, { "formula_coordinates": [ 10, 89.36, 355.32, 424.81, 73.77 ], "formula_id": "formula_40", "formula_text": "ν N (U p ) ≥ 1 -Cp -2 C 2 N , in which R(Υ X ) -R(Υ X , X) ≤ ξ -1 (N/2C) 2L + 1 √ 2 L + E(0, 0) 1 + log(2/p) ,(8)" }, { "formula_coordinates": [ 10, 170.62, 448.54, 6.71, 6.73 ], "formula_id": "formula_41", "formula_text": "ϵ 2" }, { "formula_coordinates": [ 10, 96.38, 614.25, 418.65, 22.54 ], "formula_id": "formula_42", "formula_text": "✗ ✗ N -1/2 PAC-bayesian MPNN [23] bounded degree ✗ ✗ N -1/2 PAC-bayesian GCN [23] bounded degree ✓ ✗ N -1/2" }, { "formula_coordinates": [ 10, 373.09, 647.43, 141.94, 6.66 ], "formula_id": "formula_43", "formula_text": "✓ ✓ N -1/2" }, { "formula_coordinates": [ 10, 373.09, 657.94, 143.31, 6.66 ], "formula_id": "formula_44", "formula_text": "✓ ✓ ξ -1 (N )" }, { "formula_coordinates": [ 11, 88.43, 377.24, 356.5, 51.24 ], "formula_id": "formula_45", "formula_text": "Σ(Λ) = G(W, Λ), Θ G(W, Λ), f (Λ) . Then E δ □ Σ, Σ(Λ) < 15 log(k) L." }, { "formula_coordinates": [ 14, 89.64, 288.71, 422.86, 100.66 ], "formula_id": "formula_46", "formula_text": "[0, 1] → R, with finite L 1 norm ∥f ∥ p = 1 0 |f (x)| p dx 1/p < ∞. The space L ∞ [0, 1] is the space of (equivalence classes) of measurable functions with finite L ∞ norm ∥f ∥ ∞ = ess sup x∈[0,1] |f (x)| = inf{a ≥ 0 | |f (x)| ≤ a for almost every x ∈ [0, 1]}." }, { "formula_coordinates": [ 14, 238.19, 447.34, 120.45, 20.69 ], "formula_id": "formula_47", "formula_text": "f + (x) = f (x) f (x) > 0 0 f (x) ≤ 0." }, { "formula_coordinates": [ 14, 239.71, 515.19, 123.58, 10.31 ], "formula_id": "formula_48", "formula_text": "∥f ∥ □ = max{∥f + ∥ 1 , ∥f -∥ 1 }." }, { "formula_coordinates": [ 14, 252.64, 556.42, 261.52, 22.31 ], "formula_id": "formula_49", "formula_text": "1 2 ∥f ∥ 1 ≤ ∥f ∥ □ ≤ ∥f ∥ 1 .(9)" }, { "formula_coordinates": [ 14, 208.75, 587.49, 209.58, 33.8 ], "formula_id": "formula_50", "formula_text": "W : [0, 1] 2 → [-r, r], 0 ≤ ∥W ∥ □ ≤ ∥W ∥ 1 ≤ ∥W ∥ 2 ≤ ∥W ∥ ∞ ≤ r." }, { "formula_coordinates": [ 14, 239.51, 651.25, 146.32, 46.7 ], "formula_id": "formula_51", "formula_text": "[0, 1] 2 → R, the supremum sup S,T ⊂[0,1] S T W (x, y)dxdy" }, { "formula_coordinates": [ 15, 186.78, 152.74, 204.07, 12.94 ], "formula_id": "formula_52", "formula_text": "A ⊂ [0, 1], µ(A) = µ(ϕ(A)). Recall that S ′ [0,1]" }, { "formula_coordinates": [ 15, 149.02, 235.4, 304.96, 47.82 ], "formula_id": "formula_53", "formula_text": "δ □ (W, W ϕ ) = inf ϕ∈S [0,1] ∥W -W ϕ ∥ □ = inf ϕ∈S [0,1] sup S,T ⊆[0,1] S×T W (x, y) -W (ϕ(x), ϕ(y)) dxdy ." 
}, { "formula_coordinates": [ 15, 89.64, 380.51, 424.52, 20.69 ], "formula_id": "formula_54", "formula_text": "□ ([(W, f )], [V, g)]) = d □ (W, V ) where [(W, f )], [(V, g)]," }, { "formula_coordinates": [ 15, 303.48, 634.11, 53.33, 13.15 ], "formula_id": "formula_55", "formula_text": "S d P k in L 1 [0," }, { "formula_coordinates": [ 15, 251.11, 714.45, 100.78, 22.31 ], "formula_id": "formula_56", "formula_text": "∥F -F ′ ∥ 1 ≤ d∥F ∥ ∞ k n ." }, { "formula_coordinates": [ 16, 90, 224.02, 353.47, 13.47 ], "formula_id": "formula_58", "formula_text": "Π := P 1,m1 ∪ P 2,m2 ∪ • • • ∪ P k,m k . Note that µ(Π) = 1 -µ(∪Q) = 1 -l 1 n = h/n" }, { "formula_coordinates": [ 16, 123.76, 282.78, 390.41, 10.38 ], "formula_id": "formula_59", "formula_text": "E n := {P 1,1 , . . . , P 1,m1-1 , P 2,1 , . . . , P 2,m2-1 , . . . , S k,1 , . . . , S k,m k -1 , Π 1 , Π 2 , . . . , Π h }.(11)" }, { "formula_coordinates": [ 16, 104.94, 303.91, 290.4, 52.66 ], "formula_id": "formula_60", "formula_text": "E n = {Z 1 , . . . , Z n }. Let F (x) = j=(j1,...,j d )∈[k] d c j d l=1 1 Pj l (x l ) ∈ S d P k ." }, { "formula_coordinates": [ 16, 167.35, 383.75, 268.3, 33.51 ], "formula_id": "formula_61", "formula_text": "F (x) = j=(j1,...,j d )∈[n] d ; ∀l=1,...,d, Zj l ̸ ⊂Π cj d l=1 1 Zj l (x l ) + E(x)," }, { "formula_coordinates": [ 16, 90, 438.46, 101.57, 10.31 ], "formula_id": "formula_62", "formula_text": "Π (d) ⊂ [0, 1] d , defiedby" }, { "formula_coordinates": [ 16, 150.51, 459.04, 301.98, 10.81 ], "formula_id": "formula_63", "formula_text": "Π (d) = Π × [0, 1] d-1 ∪ [0, 1] × Π × [0, 1] d-2 ∪ . . . ∪ [0, 1] d-1 × Π ." }, { "formula_coordinates": [ 16, 172.1, 502.12, 258.8, 33.51 ], "formula_id": "formula_64", "formula_text": "F ′ (x) = j=(j1,...,j d )∈[n] d ; ∀l=1,...,d, Zj l ̸ ⊂Π cj d l=1 1 Zj l (x l ) ∈ S d En ." }, { "formula_coordinates": [ 16, 90, 567.34, 423, 59.82 ], "formula_id": "formula_65", "formula_text": "∥F -F ′ ∥ 1 ≤ d∥F ∥ ∞ k n . Lemma B.2. Let {Q 1 , Q 2 , . . . , Q m } partition of [0, 1]. Let {I 1 , I 2 , . . . , I m } be a partition of [0, 1]" }, { "formula_coordinates": [ 16, 143.43, 639.81, 180.98, 33.81 ], "formula_id": "formula_66", "formula_text": "[0, 1] → [0, 1] ∈ S ′ [0,1] such that 4 ϕ(Q j ) = I j" }, { "formula_coordinates": [ 17, 90, 109.8, 172.22, 11.87 ], "formula_id": "formula_67", "formula_text": "Lemma B.3. Let S = {S j ⊂ [0, 1]} m-1" }, { "formula_coordinates": [ 17, 220.76, 147.19, 161.47, 31.83 ], "formula_id": "formula_68", "formula_text": "F (x) = j=(j1,...,j d )∈[m] d c j d l=1 1 Sj l (x l )," }, { "formula_coordinates": [ 17, 279.64, 213.2, 43.72, 13.36 ], "formula_id": "formula_69", "formula_text": "C d S ⊂ S d P k ." }, { "formula_coordinates": [ 17, 184.67, 254.6, 235.88, 12.17 ], "formula_id": "formula_70", "formula_text": "P = P ⊂ [0, 1] | ∃ x ∈ [0, 1], P = ∩{S j ∈ S|x ∈ S j } ." }, { "formula_coordinates": [ 17, 155.9, 356.39, 266.16, 9.71 ], "formula_id": "formula_71", "formula_text": "Let P k = {P 1 , . . . , P k }, Q m = {Q 1 , . . . , Q k } be two partitions." }, { "formula_coordinates": [ 17, 225.7, 388.25, 151.6, 13.36 ], "formula_id": "formula_72", "formula_text": "S d P k ⊂ S d Z mk , and S d Qm ⊂ S d Z mk ." 
}, { "formula_coordinates": [ 17, 89.47, 468.48, 424.9, 62.58 ], "formula_id": "formula_73", "formula_text": "γ i ∈ R, i ∈ [m], such that for every w ∈ K m+1 w, v -( m i=1 γ i v i ) ≤ ϵ ∥w∥∥v∥.(12)" }, { "formula_coordinates": [ 17, 90, 603.16, 424.17, 33.8 ], "formula_id": "formula_74", "formula_text": "W k ∈ S 2 P k ∩ W 0 and a step function signal f k ∈ S 1 P k ∩ L ∞ r [0, 1], such that ∥W -W k ∥ □ ≤ ϵ and ∥f -f k ∥ □ ≤ ρ.(13)" }, { "formula_coordinates": [ 17, 203.53, 670.43, 195.94, 9.71 ], "formula_id": "formula_75", "formula_text": "K i = K = {1 S×T | S, T ⊂ [0, 1] measurable} ." }, { "formula_coordinates": [ 17, 90, 690.84, 424.38, 24.28 ], "formula_id": "formula_76", "formula_text": "S m = {S i } m i=1 , T m = {T i } m i=1 , a sequence of coefficients {γ i ∈ R} m i=1 ,and" }, { "formula_coordinates": [ 17, 260.06, 723.96, 82.89, 30.32 ], "formula_id": "formula_77", "formula_text": "W ′ ϵ = m i=1 γ i 1 Si×Ti ," }, { "formula_coordinates": [ 18, 155.34, 136.93, 358.83, 19.31 ], "formula_id": "formula_78", "formula_text": "V (x, y) W (x, y) -W ′ ϵ (x, y) dxdy = S T W (x, y) -W ′ ϵ (x, y) dxdy(14)" }, { "formula_coordinates": [ 18, 305.79, 160.76, 208.38, 9.71 ], "formula_id": "formula_79", "formula_text": "≤ ϵ∥1 S×T ∥∥W ∥ ≤ ϵ.(15)" }, { "formula_coordinates": [ 18, 112.31, 244.11, 377.47, 58.45 ], "formula_id": "formula_80", "formula_text": "V (x, y) W (x, y) -W ϵ (x, y) dxdy ≤ 1/2 V (x, y) W (x, y) -W ′ ϵ (x, y) dxdy + 1/2 V (y, x) W (x, y) -W ′ ϵ (x, y) dxdy ≤ ϵ." }, { "formula_coordinates": [ 18, 90, 326.73, 423, 23.93 ], "formula_id": "formula_81", "formula_text": "Q n into n = 2 2m = 2 2⌈ 1 ϵ 2 ⌉" }, { "formula_coordinates": [ 18, 90, 363.43, 423.2, 27.17 ], "formula_id": "formula_82", "formula_text": "J i is -r + (i -1) ρ r . Consider the partition of [0, 1] based on the preimages Y j = {Y i = f -1 (J i )} j i=1 ." }, { "formula_coordinates": [ 18, 255.75, 402.21, 91.5, 30.79 ], "formula_id": "formula_83", "formula_text": "f ρ (x) = j i=1 a i 1 Yi (x)," }, { "formula_coordinates": [ 18, 241.06, 460.13, 120.88, 10.3 ], "formula_id": "formula_84", "formula_text": "∥f -f ρ ∥ □ ≤ ∥f -f ρ ∥ 1 ≤ ρ." }, { "formula_coordinates": [ 18, 90, 498.49, 36.16, 11.23 ], "formula_id": "formula_85", "formula_text": "W ϵ ∈ S 2" }, { "formula_coordinates": [ 18, 89.47, 555.46, 423.53, 51.4 ], "formula_id": "formula_86", "formula_text": "P k of [0, 1] into k = 2 ⌈2c/ϵ 2 ⌉ sets, a step graphon W k ∈ S 2 P k ∩ W 0 and a step signal f k ∈ S 1 P k ∩ L ∞ r [0, 1], such that d □ (W, f ), (W k , f k ) ≤ ϵ." }, { "formula_coordinates": [ 18, 90, 661.2, 424.17, 62.68 ], "formula_id": "formula_87", "formula_text": "k(ν) := ⌈r/(ϵ -ν)⌉ 2 2⌈1/ν 2 ⌉ ≤ 2 ⌈2c/ϵ 2 ⌉ . Denote c = 1 + t. In case ν ≥ 2 2(1 + 0.5t)/ϵ 2 -1 ,(16)" }, { "formula_coordinates": [ 18, 251.71, 741.94, 99.58, 12.44 ], "formula_id": "formula_88", "formula_text": "2 2⌈1/ν 2 ⌉ ≤ 2 2(1+0.5t)/ϵ 2 ." }, { "formula_coordinates": [ 19, 262.92, 119.24, 77.16, 22.49 ], "formula_id": "formula_89", "formula_text": "ν ≤ ϵ - r 2 t/ϵ 2 -1 ," }, { "formula_coordinates": [ 19, 221.09, 199.21, 288.65, 22.49 ], "formula_id": "formula_90", "formula_text": "ϵ - r 2 t/ϵ 2 -1 ≥ 2 2(1 + 0.5t)/ϵ 2 -1 . (17" }, { "formula_coordinates": [ 19, 509.74, 205.89, 4.43, 8.8 ], "formula_id": "formula_91", "formula_text": ")" }, { "formula_coordinates": [ 19, 90, 265.51, 324.3, 59.52 ], "formula_id": "formula_92", "formula_text": "1 2 t/ϵ 2 -1 = 2 -t/ϵ 2 1 -2 -t/ϵ 2 < 2 -t/ϵ 2 < ϵ r 1 - 1 1 + 0.1t , so ϵ - r 2 t/ϵ 2 -1 > ϵ(1 + 0.1t)." 
}, { "formula_coordinates": [ 19, 189.09, 353.36, 235.98, 22.31 ], "formula_id": "formula_93", "formula_text": "2 2(1 + 0.5t)/ϵ 2 -1 = ϵ 1 (1 + 0.5t) -ϵ 2 < ϵ/(1 + 0.4t)." }, { "formula_coordinates": [ 19, 163.06, 408.22, 276.88, 12.44 ], "formula_id": "formula_94", "formula_text": "k(ν) = ⌈r/(ϵ -ν)⌉ 2 2⌈1/ν 2 ⌉ ≤ 2 2(0.5t)/ϵ 2 2 2(1+0.5t)/ϵ 2 ≤ 2 ⌈2c/ϵ 2 ⌉ ." }, { "formula_coordinates": [ 19, 167, 506.35, 49.97, 11.23 ], "formula_id": "formula_95", "formula_text": "[W ϕ ] n ∈ S 2" }, { "formula_coordinates": [ 19, 219.06, 506.35, 290.68, 36.81 ], "formula_id": "formula_96", "formula_text": "[f ϕ ] n ∈ S 1 In ∩ L ∞ r [0, 1], such that d □ (W ϕ , f ϕ ) , [W ϕ ] n , [f ϕ ] n ≤ ϵ, (18" }, { "formula_coordinates": [ 19, 509.74, 532.8, 4.43, 8.8 ], "formula_id": "formula_97", "formula_text": ")" }, { "formula_coordinates": [ 19, 90, 555.7, 277.37, 30.89 ], "formula_id": "formula_98", "formula_text": "I n is the equipartition of [0, 1] into n = 2 ⌈2c/ϵ 2 ⌉ intervals. Proof. Let c = 1 + t > 1, ϵ > 0 and 0 < α, β < 1." }, { "formula_coordinates": [ 19, 201.42, 745.58, 200.16, 10.37 ], "formula_id": "formula_99", "formula_text": "∥W k -W n ∥ □ ≤ 2ϵβ and ∥f k -f n ∥ 1 ≤ rϵβ." }, { "formula_coordinates": [ 20, 108.6, 132.64, 385.81, 10.3 ], "formula_id": "formula_100", "formula_text": "d □ (W, f ), (W n , f n ) ≤ d □ (W, f ), (W k , f k ) + d □ (W k , f k ), (W n , f n ) ≤ ϵ(α + (2 + r)β)." }, { "formula_coordinates": [ 20, 165.45, 184.51, 272.1, 14.58 ], "formula_id": "formula_101", "formula_text": "n(α) := ⌈2 4(1+0.5t) (ϵα) 2 +1 /(ϵβ)⌉ = ⌈(2 + r) • 2 9(1+0.5t) 4(ϵα) 2 +1 /(ϵ(1 -α))⌉." }, { "formula_coordinates": [ 20, 190.8, 243.03, 323.37, 15.74 ], "formula_id": "formula_102", "formula_text": "n(α 0 ) = ⌈(2 + r) • 2 2(1+0.5t) (ϵα 0 ) 2 +1 /(ϵ(1 -α 0 ))⌉ < 2 ⌈ 2c ϵ 2 ⌉ .(19)" }, { "formula_coordinates": [ 20, 190.8, 285.01, 318.94, 15.74 ], "formula_id": "formula_103", "formula_text": "n(α 1 ) = ⌈(2 + r) • 2 2(1+0.5t) (ϵα 1 ) 2 +1 /(ϵ(1 -α 1 ))⌉ > 2 ⌈ 2c ϵ 2 ⌉ . (20" }, { "formula_coordinates": [ 20, 509.74, 291.04, 4.43, 8.8 ], "formula_id": "formula_104", "formula_text": ")" }, { "formula_coordinates": [ 20, 90, 323.67, 423, 37.17 ], "formula_id": "formula_105", "formula_text": "∈ [α 1 , α 2 ] such that n(α) = m. This follows the fact that α → (2 + r) • 2 2(1+0.5t) (ϵα) 2 +1 /(ϵ(1 -α)) is a continuous function." }, { "formula_coordinates": [ 20, 211.87, 352.05, 301.4, 33.94 ], "formula_id": "formula_106", "formula_text": "β such that α + (2 + r)β = 1) such that n(α) = n = ⌈2 2(1+0.5t) (ϵα) 2 +1 /(ϵβ)⌉ = 2 ⌈2c/ϵ 2 ⌉ ." }, { "formula_coordinates": [ 20, 168.08, 452.48, 105.54, 14.54 ], "formula_id": "formula_107", "formula_text": "n ′ = ⌈2 2c ′ ϵ 2 ⌉ ≥ 2 ⌈ 2c ϵ 2 ⌉ = n." }, { "formula_coordinates": [ 20, 305.51, 501.43, 53.06, 11.23 ], "formula_id": "formula_108", "formula_text": "[W ϕ ] n ∈ S 2" }, { "formula_coordinates": [ 20, 90, 515.83, 294.63, 36.73 ], "formula_id": "formula_109", "formula_text": "[f ϕ ] n ∈ S 1 In ∩ L ∞ r [0, 1], such that d □ W ϕ , f ϕ , [W ϕ ] n , [f ϕ ] n ≤ ϵ," }, { "formula_coordinates": [ 20, 176.51, 674.68, 249.98, 17.23 ], "formula_id": "formula_110", "formula_text": "W Pn (x, y) = Pi×Pj W (x, y)dxdy , f Pn (x) = Pi f (x)dx" }, { "formula_coordinates": [ 21, 213.9, 147.75, 175.2, 12.38 ], "formula_id": "formula_111", "formula_text": "d □ W ϕ , f ϕ , [W ϕ ] In , [f ϕ ] In ≤ ϵ." 
}, { "formula_coordinates": [ 21, 214.86, 239.37, 133.31, 17.23 ], "formula_id": "formula_112", "formula_text": "sup S,T ⊂[0,1] S T V (x, y) -R(x," }, { "formula_coordinates": [ 21, 242.64, 305.48, 271.53, 47.7 ], "formula_id": "formula_113", "formula_text": "∩ L ∞ r [0, 1], the supremum of sup S⊂[0,1] S f (x) -g(x) dx(22)" }, { "formula_coordinates": [ 21, 89.35, 406, 60.65, 8.8 ], "formula_id": "formula_114", "formula_text": "where s ⊂ [n]." }, { "formula_coordinates": [ 21, 137.52, 567.6, 333.13, 52.56 ], "formula_id": "formula_115", "formula_text": "S Pj V (x, y) -R(x, y) dxdy = µ(P j ) µ(T ∩ P j ) S T ∩Pj V (x, y) -R(x, y) dxdy ≥ S T ∩Pj V (x, y) -R(x, y) dxdy." }, { "formula_coordinates": [ 21, 260.16, 642.86, 82.67, 22.6 ], "formula_id": "formula_116", "formula_text": "T ′ = {j|T ∩Pj ̸ =∅} P j ," }, { "formula_coordinates": [ 21, 166.72, 689.7, 164.6, 17.23 ], "formula_id": "formula_117", "formula_text": "S T ′ V (x, y) -R(x, y) dxdy ≥ S T" }, { "formula_coordinates": [ 22, 247.21, 241.99, 108.58, 19.91 ], "formula_id": "formula_118", "formula_text": "S = i∈s P i , T = j∈t P j ." }, { "formula_coordinates": [ 22, 155.27, 307.24, 292.46, 90.96 ], "formula_id": "formula_119", "formula_text": "∥W n -W Pn ∥ □ = i∈s,j∈t Pi Pj (W Pn (x, y) -W n (x, y))dxdy = i∈s,j∈t Pi Pj (W (x, y) -W n (x, y))dxdy = S T (W (x, y) -W n (x, y))dxdy = ∥W n -W ∥ □ ." }, { "formula_coordinates": [ 22, 89.62, 430.17, 351.73, 44.18 ], "formula_id": "formula_120", "formula_text": "∥W -W Pn ∥ □ ≤ ∥W -W n ∥ □ + ∥W n -W Pn ∥ □ < 2∥W n -W ∥ □ . A similar argument shows ∥f -f Pn ∥ □ < 2∥f n -f ∥ □ ." }, { "formula_coordinates": [ 22, 132.66, 504.25, 337.69, 12.38 ], "formula_id": "formula_121", "formula_text": "d □ W ϕ , f ϕ , [W ϕ ] In , [f ϕ ] In ≤ 2d □ W ϕ , f ϕ , [W ϕ ] n , [f ϕ ] n ≤ ϵ." }, { "formula_coordinates": [ 22, 208.28, 634.98, 186.44, 13.36 ], "formula_id": "formula_122", "formula_text": "[WL r ] P k := (W 0 ∩ S 2 P k ) × (L ∞ r [0, 1] ∩ S 1 P k )" }, { "formula_coordinates": [ 23, 169.8, 205.39, 324.06, 9.71 ], "formula_id": "formula_123", "formula_text": "C.1. Consider a sequence {[(W n , f n )]} n∈N ⊂ WL r , with (W n , f n ) ∈ WL r ." }, { "formula_coordinates": [ 23, 89.64, 250.57, 424.74, 59.9 ], "formula_id": "formula_124", "formula_text": "∥(W n , f n ) ϕ n,k -(W n , f n ) ϕ n,k Im k ∥ □;r < 1/k, where (W n , f n ) ϕ n,k Im k is the projection of (W n , f n ) ϕ n,k upon I m k (Definition B.10). For every fixed k, each pair of functions (W n , f n ) ϕ n,k Im k is defined via m 2 k + m k values in [0, 1]. Hence, since [0, 1] m 2 k +m k" }, { "formula_coordinates": [ 23, 90, 333.93, 423, 49.59 ], "formula_id": "formula_125", "formula_text": "{(W n k j , f n k j ) ϕ n k j ,k Im k } ∞ j=1 converges pointwise to some step graphon-signal (U k , g k ) in [WL r ] P k as j → ∞. Note that I m l is a refinement of I m k for every l > k." 
}, { "formula_coordinates": [ 23, 297.01, 383.62, 16.62, 12.74 ], "formula_id": "formula_126", "formula_text": "ϕ n,k n" }, { "formula_coordinates": [ 23, 241.2, 531.33, 116.19, 22.31 ], "formula_id": "formula_127", "formula_text": "∥(U, g) -(U kz , g kz )∥ 1 < 1 3z" }, { "formula_coordinates": [ 23, 220.94, 595.11, 161.11, 22.31 ], "formula_id": "formula_128", "formula_text": "∥(U kz , g kz ) -(W tz , f tz ) ϕ tz ,kz Im kz ∥ 1 < 1 3z ," }, { "formula_coordinates": [ 23, 130.89, 657.22, 340.72, 80.33 ], "formula_id": "formula_129", "formula_text": "δ □ (U, g), (W tz , f tz ) ≤ ∥(U, g) -(W tz , f tz ) ϕ tz ,kz ∥ □ ≤ ∥(U, g) -(U kz , g kz )∥ 1 + ∥(U kz , g kz ) -(W tz , f tz ) ϕ tz ,kz Im kz ∥ 1 + ∥(W tz , f tz ) ϕ tz ,kz Im kz -(W tz , f tz ) ϕ tz ,kz ∥ □ ≤ 1 3z + 1 3z + 1 3z ≤ 1 z ." }, { "formula_coordinates": [ 25, 309.21, 122.4, 110.27, 12.55 ], "formula_id": "formula_130", "formula_text": "(λ ′ 1 , . . . λ ′ k ) → (λ ′ 1 , . . . λ ′ k )" }, { "formula_coordinates": [ 25, 133.33, 158.27, 190.42, 12.55 ], "formula_id": "formula_131", "formula_text": "f ∈ L ∞ r [0, 1] and Λ ′ = (Λ ′ 1 , . . . , Λ ′ k ) ∈ [0, 1] k ," }, { "formula_coordinates": [ 25, 89.64, 257.97, 302.7, 65.36 ], "formula_id": "formula_132", "formula_text": "P Λ ′ (S) = z∈S i,j∈[k] P Λ ′ ;i,j (z i,j ), where P Λ ′ ;i,j (z i,j ) = W (λ ′ i , λ ′ j ) if z i,j = 1 1 -W (λ ′ i , λ ′ j ) if z i,j = 0." }, { "formula_coordinates": [ 25, 89.64, 403.76, 351.57, 52.62 ], "formula_id": "formula_133", "formula_text": "µ(S) = [0,1] k P Λ ′ S(Λ ′ ) dΛ ′ , where S(Λ ′ ) ⊂ {0, 1} k×k := {z = {z i,j } i,j∈[k] ∈ {0, 1} k×k | (Λ ′ , z) ∈ S}." }, { "formula_coordinates": [ 25, 113.61, 486.69, 254.11, 10.87 ], "formula_id": "formula_134", "formula_text": "f (Λ)(Λ ′ , z) = f (Λ)(Λ ′ ) and G(W, Λ ′ )(Λ ′ , z) = G(W, Λ ′ )(z))" }, { "formula_coordinates": [ 25, 248.61, 611.2, 105.78, 23.67 ], "formula_id": "formula_135", "formula_text": "E d □ (G(H), H) ≤ 11 √ k ." }, { "formula_coordinates": [ 25, 90, 664.88, 278.27, 43.05 ], "formula_id": "formula_136", "formula_text": "Corollary D.2. Let W ∈ W 0 and k ∈ N. Then E d □ (G(W, Λ), W (Λ)) ≤ 11 √ k ." }, { "formula_coordinates": [ 26, 232.54, 168.45, 193.1, 49.83 ], "formula_id": "formula_137", "formula_text": "1 -4e - √ k/10 , - 3 k ≤ ∥U [Λ]∥ 2 -∥U ∥ 2 ≤ 8 k 1/4 ." }, { "formula_coordinates": [ 26, 257.95, 281.01, 87.09, 9.3 ], "formula_id": "formula_138", "formula_text": "E(z) ≤ (1 -ϵ)α + ϵ." }, { "formula_coordinates": [ 26, 170.34, 319.36, 262.32, 17.23 ], "formula_id": "formula_139", "formula_text": "E(z) = Ω z(x)dx = E z(x)dx + Ω\\E z(x)dx ≤ (1 -ϵ)α + ϵ." }, { "formula_coordinates": [ 26, 89.21, 382.07, 423.79, 55.28 ], "formula_id": "formula_140", "formula_text": "). Let U ∈ W 1 and Λ ∈ [0, 1] k be chosen uniformly at random, where k ≥ 1. Then E |∥U [Λ]∥ 2 -∥U ∥ 2 | ≤ 14 k 1/4 ." }, { "formula_coordinates": [ 26, 164.96, 442.24, 273.07, 49.83 ], "formula_id": "formula_141", "formula_text": "1/4 > 4e - √ k/10 , E ∥U [Λ]∥ 2 -∥U ∥ 2 ≤ 1 -4e - √ k/10 8 k 1/4 + 4e - √ k/10 < 14 k 1/4 ." }, { "formula_coordinates": [ 26, 90, 563.09, 321.25, 41.75 ], "formula_id": "formula_142", "formula_text": "Lemma D.6 (First sampling lemma for signals). Let f ∈ L ∞ r [0, 1]. Then E |∥f (Λ)∥ 1 -∥f ∥ 1 | ≤ r k 1/2 ." }, { "formula_coordinates": [ 26, 207.67, 647, 187.67, 23.89 ], "formula_id": "formula_143", "formula_text": "V(∥f (Λ)∥ 1 ) = E |∥f (Λ)∥ 1 -∥f ∥ 1 | 2 ≤ r 2 k ." 
}, { "formula_coordinates": [ 26, 90, 679.93, 423.37, 56.82 ], "formula_id": "formula_144", "formula_text": "E∥f (Λ)∥ 1 = 1 k k j=1 |f (λ j )| = ∥f ∥ 1 . Hence, by Cauchy Schwarz inequality, E |∥f (Λ)∥ 1 -∥f ∥ 1 | ≤ E |∥f (Λ)∥ 1 -∥f ∥ 1 | 2 ≤ r k 1/2 ." }, { "formula_coordinates": [ 27, 209.86, 172.03, 179.33, 23.7 ], "formula_id": "formula_145", "formula_text": "E δ □ (W, f ), (W (Λ), f (Λ)) < 15 log(k)" }, { "formula_coordinates": [ 27, 204.6, 213.21, 193.81, 23.7 ], "formula_id": "formula_146", "formula_text": "E δ □ (W, f ), (G(W, Λ), f (Λ)) < 15 log(k) ." }, { "formula_coordinates": [ 27, 89.75, 337.25, 250.89, 49.35 ], "formula_id": "formula_147", "formula_text": "2c = 3, ⌈3/ϵ 2 ⌉ = log(n) so 3/ϵ 2 + 1 ≥ log(n)." }, { "formula_coordinates": [ 27, 246.48, 412.36, 110.04, 10.81 ], "formula_id": "formula_148", "formula_text": "4/ϵ 2 > 3/ϵ 2 + 1 ≥ log(n)." }, { "formula_coordinates": [ 27, 90, 447.13, 288.5, 133.03 ], "formula_id": "formula_149", "formula_text": "([W ϕ ] n , [f ϕ ] n ) such that ∥W ϕ ′ -[W ϕ ′ ] n ∥ □ ≤ α 2 log(n) and ∥f ϕ ′ -[f ϕ ′ ] n ∥ □ ≤ (1 -α) 2 log(n) , for some 0 ≤ α ≤ 1. If we choose n such that n = ⌈ √ k r log(k) ⌉," }, { "formula_coordinates": [ 27, 90, 606.95, 338.9, 76.7 ], "formula_id": "formula_150", "formula_text": "∥W ϕ ′ -[W ϕ ′ ] n ∥ □ ≤ α 2 1 2 log(k) -log log(k) -log(r) and ∥f ϕ ′ -[f ϕ ′ ] n ∥ □ ≤ (1 -α) 2 1 2 log(k) -log log(k) -log(r)" }, { "formula_coordinates": [ 27, 191.16, 718.41, 219.48, 34.93 ], "formula_id": "formula_151", "formula_text": "d □ (W, W n ) ≤ α 2 √ 2 log(k) -2 log log(k) -2 log(r) and ∥f -f n ∥ 1 ≤ (1 -α) 4 √ 2 log(k) -2 log log(k) -2 log(r)" }, { "formula_coordinates": [ 28, 218.32, 160.48, 76.96, 9.65 ], "formula_id": "formula_152", "formula_text": "(W n , f n ) ∈ [WL r ]" }, { "formula_coordinates": [ 28, 206.93, 192.3, 189.15, 22.49 ], "formula_id": "formula_153", "formula_text": "E d 2 W (Λ), W n (Λ) -d 2 (W, W n ) ≤ 14 k 1/4 ." }, { "formula_coordinates": [ 28, 213.01, 222.91, 183.96, 43.93 ], "formula_id": "formula_154", "formula_text": "f n ∈ L ∞ 2r [0, 1], Lemma D.6 implies that E ∥f (Λ) -f n (Λ)∥ 1 -∥f -f n ∥ 1 ≤ 2r k 1/2 ." }, { "formula_coordinates": [ 28, 138.29, 300.28, 326.42, 46.18 ], "formula_id": "formula_155", "formula_text": "E d □ W (Λ), W n (Λ) ≤ E d □ W (Λ), W n (Λ) -d □ (W, W n ) + d □ (W, W n ) ≤ 14 k 1/4 + α 2 √ 2 log(k) -2 log log(k) -2 log(r)" }, { "formula_coordinates": [ 28, 145.64, 383.49, 307.77, 43.29 ], "formula_id": "formula_156", "formula_text": "E∥f (Λ) -f n (Λ)∥ 1 ≤ E ∥f (Λ) -f n (Λ)∥ 1 -∥f -f n ∥ 1 + ∥f -f n ∥ 1 ≤ 2r k 1/2 + (1 -α) 4 √ 2 log(k) -2 log log(k) -2 log(r)" }, { "formula_coordinates": [ 28, 221.23, 462.35, 160.54, 15.3 ], "formula_id": "formula_157", "formula_text": "π Λ (Λ) := {Λ π -1 Λ (i) } k i=1 = (λ ′ 1 , . . . 
, λ ′ k )" }, { "formula_coordinates": [ 28, 236.76, 533.06, 277.41, 12.69 ], "formula_id": "formula_158", "formula_text": "if x ∈ I i k , ϕ(x) = J i,πΛ(i) (x),(24)" }, { "formula_coordinates": [ 28, 248.66, 700.55, 105.37, 9.65 ], "formula_id": "formula_159", "formula_text": "J i := [j i -1, j i+1 -1)/k" }, { "formula_coordinates": [ 29, 119.14, 131.87, 364.37, 195.35 ], "formula_id": "formula_160", "formula_text": "∥W n -W n (Λ) ϕ ∥ □ ≤ ∥W n -W n (Λ) ϕ ∥ 1 = i k I i n ∩Ji I k n ∩J k W n (x, y) -W n (Λ) ϕ (x, y) dxdy + i j̸ =i k l̸ =k I i n ∩Jj I k n ∩J l W n (x, y) -W n (Λ) ϕ (x, y) dxdy = i j̸ =i k l̸ =k I i n ∩Jj I k n ∩J l W n (x, y) -W n (Λ) ϕ (x, y) dxdy = i k I i n \\Ji I k n \\J k W n (x, y) -W n (Λ) ϕ (x, y) dxdy ≤ i k I i n \\Ji I k n \\J k 1dxdy ≤ 2 i I i n \\Ji 1dxdy ≤ 2 i (|i/n -a i | + |(i + 1)/n -a i+1 |)." }, { "formula_coordinates": [ 29, 146.75, 359.16, 303.06, 52.53 ], "formula_id": "formula_161", "formula_text": "E∥W n -W n (Λ) ϕ ∥ □ ≤ 2 i (E |i/n -a i | + E |(i + 1)/n -a i+1 |) ≤ 2 i E(i/n -a i ) 2 + E (i + 1)/n -a i+12" }, { "formula_coordinates": [ 29, 206.34, 442.8, 190.32, 11.72 ], "formula_id": "formula_162", "formula_text": "E(ik/n -ka i ) 2 = V(ka i ) = k(i/n)(1 -i/n)." }, { "formula_coordinates": [ 29, 197.86, 484.32, 207.28, 61.36 ], "formula_id": "formula_163", "formula_text": "E∥W n -W n (Λ) ϕ ∥ □ ≤ 5 n i=1 (i/n)(1 -i/n) k ≤ 2 n 1 (i/n)(1 -i/n) k di," }, { "formula_coordinates": [ 29, 90, 574.77, 363.8, 86.06 ], "formula_id": "formula_164", "formula_text": "≤ 5 n √ k 1.1 0 z -z 2 dz ≤ 5 n √ k 1.1 0 √ zdz ≤ 10/3(1.1) 3/2 n √ k < 4 n √ k . Now, by n = ⌈ √ k r log(k) ⌉ ≤ √ k r log(k) + 1, for large enough k, E∥W n -W n (Λ) ϕ ∥ □ ≤ 4 1 r log(k) + 4 1 √ k ≤ 5 r log(k)" }, { "formula_coordinates": [ 29, 242.32, 685.06, 84.32, 11.72 ], "formula_id": "formula_165", "formula_text": "E∥f n -f n (Λ) ϕ ∥ 1 ≤" }, { "formula_coordinates": [ 30, 90, 131.87, 384.67, 141.23 ], "formula_id": "formula_166", "formula_text": "E(d 2 (W, W (Λ) ϕ )) ≤ d □ (W, W n ) + E d □ (W n , W n (Λ) ϕ ) + E(d □ (W n (Λ), W (Λ))) ≤ α 2 √ 2 log(k) -2 log log(k) -2 log(r) + 5 r log(k) + 14 k 1/4 + α 2 √ 2 log(k) -2 log log(k) -2 log(r) ≤ α 6 log(k) , Similarly, for each k, if 1 -α < 1 √ log(k)" }, { "formula_coordinates": [ 30, 90, 279.52, 390.1, 102.39 ], "formula_id": "formula_167", "formula_text": "E(d □ (f, f (Λ) ϕ )) ≤ (1 -α) 2 √ 2 log(k) -2 log log(k) -2 log(r) + 5 log(k) + 2r k 1/2 + (1 -α) 4 √ 2 log(k) -2 log log(k) -2 log(r) ≤ 14 log(k) . Moreover, for each k such that 1 -α > 1 √ log(k)" }, { "formula_coordinates": [ 30, 90, 404.92, 366.62, 155 ], "formula_id": "formula_168", "formula_text": "5 log(k) + 2r k 1/2 < 5.5 log(k) < 1 log(k) 6 log(k) < (1 -α) 6 log(k) so, by 6 √ 2 < 9, E(d □ (f, f (Λ) ϕ )) ≤ (1 -α) 2 √ 2 log(k) -2 log log(k) -2 log(r) + 2 log(k) + 2r k 1/2 + (1 -α) 4 √ 2 log(k) -2 log log(k) -2 log(r) ≤ (1 -α) 15 log(k) ." }, { "formula_coordinates": [ 30, 141.61, 594.33, 308.77, 42.65 ], "formula_id": "formula_169", "formula_text": "E d □ W, G(W, Λ) ϕ ≤ E d □ W, W (Λ) ϕ + E d □ W (Λ) ϕ , G(W, Λ) ϕ ≤ α 6 log(k) + 11 √ k ≤ α 7 log(k) ," }, { "formula_coordinates": [ 30, 90, 668.47, 303.14, 44.55 ], "formula_id": "formula_170", "formula_text": "E δ □ (W, f ), (W (Λ), f (Λ)) < 15 log(k) ,and" }, { "formula_coordinates": [ 30, 204.6, 711.7, 193.81, 23.7 ], "formula_id": "formula_171", "formula_text": "E δ □ (W, f ), (G(W, Λ), f (Λ)) < 15 log(k) ." 
}, { "formula_coordinates": [ 32, 252.97, 146.04, 97.07, 30.32 ], "formula_id": "formula_172", "formula_text": "p(A)f = J j=0 1 n j A j f C j ," }, { "formula_coordinates": [ 32, 265.54, 215.66, 71.92, 22.31 ], "formula_id": "formula_173", "formula_text": "θ(f ) = f , 1 n Af ." }, { "formula_coordinates": [ 32, 278.54, 258.98, 45.22, 8.77 ], "formula_id": "formula_174", "formula_text": "U (f ) = f C" }, { "formula_coordinates": [ 32, 216.41, 387.55, 170.18, 26.29 ], "formula_id": "formula_175", "formula_text": "∥h∥ 1 := 1 0 |h(x)| dx = 1 0 ∥h(x)∥ ∞ dx." }, { "formula_coordinates": [ 32, 261.65, 435.46, 119.89, 17.21 ], "formula_id": "formula_176", "formula_text": "x∈R d |h(x)| = sup x∈R d ∥h(x)∥ ∞ ." }, { "formula_coordinates": [ 32, 90, 459.89, 423, 45.1 ], "formula_id": "formula_177", "formula_text": "R d → R c is called Lipschitz continuous with Lipschitz constant L if |Z(x) -Z(y)| = ∥Z(x) -Z(y)∥ ∞ ≤ L∥x -z∥ ∞ = L |x -z| ." }, { "formula_coordinates": [ 32, 257.77, 561.02, 87.46, 10.81 ], "formula_id": "formula_178", "formula_text": "K : [0, 1] 2 → [-q, q]." }, { "formula_coordinates": [ 32, 173.46, 695.81, 256.07, 30.55 ], "formula_id": "formula_179", "formula_text": "∥Φ f -Φ g ∥ L 1 [0,1] 2 ≤ K k=1 L ξ k r ∥ξ k t ∥ ∞ + ∥ξ k r ∥ ∞ L ξ k t ∥f -g∥ 1 ." }, { "formula_coordinates": [ 33, 111.07, 133.27, 371.59, 136.09 ], "formula_id": "formula_180", "formula_text": "|Φ f (x, y) -Φ g (x, y)| = K k=1 ξ k r (f (x))ξ k t (f (y)) - K k=1 ξ k r (g(x))ξ k t (g(y)) ≤ K k=1 ξ k r (f (x))ξ k t (f (y)) -ξ k r (g(x))ξ k t (g(y)) ≤ K k=1 ξ k r (f (x))ξ k t (f (y)) -ξ k r (g(x))ξ k t (f (y)) + ξ k r (g(x))ξ k t (f (y)) -ξ k r (g(x))ξ k t (g(y)) ≤ K k=1 L ξ k r |f (x) -g(x)| ξ k t (f (y)) + ξ k r (g(x)) L ξ k t |f (y) -g(y)| ." }, { "formula_coordinates": [ 33, 130.29, 301.94, 342.06, 116.28 ], "formula_id": "formula_181", "formula_text": "∥Φ f -Φ g ∥ L 1 [0,1] 2 ≤ K k=1 1 0 1 0 L ξ k r |f (x) -g(x)| ξ k t (f (y)) + ξ k r (g(x)) L ξ k t |f (y) -g(y)| dxdy ≤ K k=1 L ξ k r ∥f -g∥ 1 ∥ξ k t ∥ ∞ + ∥ξ k r ∥ ∞ L ξ k t ∥f -g∥ 1 = K k=1 L ξ k r ∥ξ k t ∥ ∞ + ∥ξ k r ∥ ∞ L ξ k t ∥f -g∥ 1 ." }, { "formula_coordinates": [ 33, 212.08, 470.59, 178.83, 9.65 ], "formula_id": "formula_182", "formula_text": "∥Agg(W, Q) -Agg(W, V )∥ 1 ≤ ∥Q -V ∥ 1 ." }, { "formula_coordinates": [ 33, 90, 501.98, 378.93, 133.45 ], "formula_id": "formula_183", "formula_text": "Agg(W, Q)(x) -Agg(W, V )(x) = 1 0 W (x, y)(Q(x, y) -V (x, y))dy So ∥Agg(W, Q) -Agg(W, V )∥ 1 = 1 0 1 0 W (x, y)(Q(x, y) -V (x, y))dy dx ≤ 1 0 1 0 |W (x, y)(Q(x, y) -V (x, y))| dydx ≤ 1 0 1 0 |(Q(x, y) -V (x, y))| dydx = ∥Q -V ∥ 1 ." }, { "formula_coordinates": [ 33, 90, 696.69, 366.37, 51.32 ], "formula_id": "formula_184", "formula_text": "Corollary F.3. ∥Agg(W, Φ f ) -Agg(W, Φ g )∥ 1 ≤ K k=1 L ξ k r ∥ξ k t ∥ ∞ + ∥ξ k r ∥ ∞ L ξ k t ∥f -g∥ 1 ." }, { "formula_coordinates": [ 34, 90, 135.87, 418.78, 65.84 ], "formula_id": "formula_185", "formula_text": "from K 1 to K r . Lemma F.4. For any kernel Q ∈ K r ∥Q∥ □ = sup f,g∈L + [0,1] [0,1] 2 f (x)Q(x, y)g(y)dxdy ," }, { "formula_coordinates": [ 34, 289.94, 213.3, 45.78, 10.31 ], "formula_id": "formula_186", "formula_text": "∈ L + [0, 1]." }, { "formula_coordinates": [ 34, 194.62, 253.2, 213.76, 48.14 ], "formula_id": "formula_187", "formula_text": "Q ∈ K r sup f,g∈L ∞ 1 [0,1] [0,1] 2 f (x)Q(x, y)g(y)dxdy ≤ 4∥Q∥ □ ." }, { "formula_coordinates": [ 34, 90, 330.07, 424.38, 131.18 ], "formula_id": "formula_188", "formula_text": "Proof. Any function f ∈ L ∞ 1 [0, 1] can be written as f = f + -f -, where f + , f -∈ L + [0, 1]. 
Hence, by Lemma F.4, sup f,g∈L ∞ 1 [0,1] [0,1] 2 f (x)Q(x, y)g(y)dxdy = sup f+,f-,g+,g-∈L + [0,1] [0,1] 2 (f + (x) -f -(x))Q(x, y)(g + (y) -g -(y))dxdy ≤ s∈{+,-} sup fs,gs∈L + [0,1] [0,1] 2 f s (x)Q(x, y)g s (y)dxdy = 4∥Q∥ □ ." }, { "formula_coordinates": [ 34, 90, 535.32, 422.9, 62.45 ], "formula_id": "formula_189", "formula_text": "f + or f -. Lemma F.7. Let f ∈ L ∞ r [0, 1] , W, V ∈ W 0 , and suppose that ξ k r (f (x)) , ξ k t (f (x)) ≤ ρ for every x ∈ [0, 1] and k = 1, . . . , K. Then ∥Agg(W, Φ f ) -Agg(V, Φ f )∥ □ ≤ 4Kρ 2 ∥W -V ∥ □ ." }, { "formula_coordinates": [ 34, 195.95, 627.33, 211.11, 12.38 ], "formula_id": "formula_190", "formula_text": "∥Agg(W, Φ f ) -Agg(V, Φ f )∥ □ ≤ Kρ 2 ∥W -V ∥ □ ." }, { "formula_coordinates": [ 34, 90, 660.76, 422.99, 92.05 ], "formula_id": "formula_191", "formula_text": "f )(x)dx > 0. Denote q k r (x) = ξ k r (f (x)) and q k t (x) = ξ k t (f (x)). We have S Agg(W, Φ f )(x) -Agg(V, Φ f )(x) dx = S Agg(T, Φ f )(x)dx = K k=1 S 1 0 q k r (x)T (x, y)q k t (y)dydx. Let v k r (x) = q k r (x)/ρ x ∈ S 0 x / ∈ S.(25)" }, { "formula_coordinates": [ 35, 90, 150.87, 403.76, 138.17 ], "formula_id": "formula_192", "formula_text": "k r , v k t ∈ L ∞ 1 [0, 1]. We hence have, by Lemma F.5, S Agg(T, Φ f )(x)dx = K k=1 ρ 2 1 0 1 0 v k r (x)T (x, y)v k t (y)dydx ≤ K k=1 ρ 2 1 0 1 0 v k r (x)T (x, y)v k t (y)dydx ≤ 4Kρ 2 ∥T ∥ □ . Hence, ∥Agg(W, Φ f ) -Agg(V, Φ f )∥ □ ≤ 4Kρ 2 ∥T ∥ □" }, { "formula_coordinates": [ 35, 89.6, 321.58, 423.12, 95.03 ], "formula_id": "formula_193", "formula_text": "S Agg(T, Φ f )(x)dx ≤ Kρ 2 ∥T ∥ □ . Theorem F.8. Let (W, f ), (V, g) ∈ WL r , and suppose that ξ k r (f (x)) , ξ k t (f (x)) ≤ ρ and L ξ k t , L ξ k t < L for every x ∈ [0, 1] and k = 1, . . . , K. Then, ∥Agg(W, Φ f ) -Agg(V, Φ g )∥ □ ≤ 4KLρ∥f -g∥ □ + 4Kρ 2 ∥W -V ∥ □ ." }, { "formula_coordinates": [ 35, 159.6, 450.14, 283.3, 76.61 ], "formula_id": "formula_194", "formula_text": "∥Agg(W, Φ f ) -Agg(V, Φ g )∥ □ ≤ ∥Agg(W, Φ f ) -Agg(W, Φ g )∥ □ + ∥Agg(W, Φ g ) -Agg(V, Φ g )∥ □ ≤ K k=1 L ξ k r ∥ξ k t ∥ ∞ + ∥ξ k r ∥ ∞ L ξ k t ∥f -g∥ 1 + 4Kρ 2 ∥W -V ∥ □ ≤ 4KLρ∥f -g∥ □ + 4Kρ 2 ∥W -V ∥ □ ." }, { "formula_coordinates": [ 35, 90, 637.99, 331.5, 94.3 ], "formula_id": "formula_195", "formula_text": "∥η(f ) -η(g)∥ 1 ≤ L η ∥f -g∥ 1 . Proof. ∥η(f ) -η(g)∥ 1 = 1 0 η f (x) -η g(x) dx ≤ 1 0 L η |f (x) -g(x)| dx = L η ∥f -g∥ 1 ." }, { "formula_coordinates": [ 38, 200.73, 184.39, 313.43, 30.79 ], "formula_id": "formula_196", "formula_text": "e t = Z t (a, b, e 0 ) := t-1 j=0 a j e 0 + t-1 j=1 j-1 i=1 a t-i b t-j ,(26)" }, { "formula_coordinates": [ 38, 221.06, 374.95, 160.88, 13.86 ], "formula_id": "formula_197", "formula_text": "∥ t ξ k y ∥ ∞ , ∥η t ∥ ∞ ≤ ρ, L η t , Lt ξ k y < L." }, { "formula_coordinates": [ 38, 135.67, 420.31, 331.66, 30.32 ], "formula_id": "formula_198", "formula_text": "∥Θ t (W, f ) -Θ t (V, g)∥ □ ≤ (4KLρ) t ∥f -g∥ □ + t-1 j=0 (4KLρ) j 4Kρ 2 ∥W -V ∥ □ ," }, { "formula_coordinates": [ 38, 127.81, 481.96, 347.38, 30.32 ], "formula_id": "formula_199", "formula_text": "∥Θ t (W, f ) -Θ t (V, g)∥ □ ≤ (4KL 2 ρ) t ∥f -g∥ □ + t-1 j=0 (4KL 2 ρ) j 4Kρ 2 L∥W -V ∥ □ ." }, { "formula_coordinates": [ 38, 219.8, 566.95, 163.4, 12.38 ], "formula_id": "formula_200", "formula_text": "e t+1 = 4KL 2 ρe t + 4Kρ 2 L∥W -V ∥ □ ." }, { "formula_coordinates": [ 38, 89.35, 647.62, 297.07, 40.7 ], "formula_id": "formula_201", "formula_text": "[K], η t (0) , t ξ k y (0) ≤ B, L η t , Lt ξ k y < L with L, B > 1. Let (W, f ) ∈ WL r ." 
}, { "formula_coordinates": [ 38, 226.11, 698.04, 150.78, 14.34 ], "formula_id": "formula_202", "formula_text": "∥Θ t (W, f )∥ ∞ ≤ (2KL 2 B 2 ) 2 t ∥f ∥ 2 t ∞ ," }, { "formula_coordinates": [ 38, 226.11, 741.93, 150.78, 14.34 ], "formula_id": "formula_203", "formula_text": "∥Θ t (W, f )∥ ∞ ≤ (2KL 3 B 2 ) 2 t ∥f ∥ 2 t ∞ ," }, { "formula_coordinates": [ 39, 187.76, 141.57, 227.48, 12.69 ], "formula_id": "formula_204", "formula_text": "C t+1 ≤ K(LC t + B) 2 = KL 2 C 2 t + 2KLBC t + KB 2 ." }, { "formula_coordinates": [ 39, 192.16, 180.9, 218.68, 12.69 ], "formula_id": "formula_205", "formula_text": "C t+1 ≤ KL 2 C 2 t + 2KLBC t + KB 2 ≤ 2KL 2 B 2 C 2 t ." }, { "formula_coordinates": [ 39, 173.23, 220.23, 256.04, 30.45 ], "formula_id": "formula_206", "formula_text": "C t+1 = a(C t ) 2 = a(aC 2 t-1 ) 2 = a 1+2 C 4 t-1 = a 1+2 (a(C t-2 ) 2 ) 4 = a 1+2+4 (C t-2 ) 8 = a 1+2+4+8 (C t-3 ) 16 ≤ a 2 t C 2 t 0 ." }, { "formula_coordinates": [ 39, 209.56, 277.31, 183.38, 45.05 ], "formula_id": "formula_207", "formula_text": "C t+1 ≤ LK(LC t + B) 2 + B = KL 3 C 2 t + 2KL 2 BC t + KB 2 L + B ≤ 2KL 3 B 2 C 2 t ," }, { "formula_coordinates": [ 39, 89.35, 374.79, 298.46, 38.77 ], "formula_id": "formula_208", "formula_text": "[K], η t (0) , t ξ k y (0) ≤ B, L η t , Lt ξ k y < L, with L, B > 1." }, { "formula_coordinates": [ 39, 89.35, 423.55, 389.94, 98.32 ], "formula_id": "formula_209", "formula_text": "∥Θ t (W, f ) -Θ t (V, g)∥ □ ≤ t-1 j=0 4K(L 2 r j + LB)∥f -g∥ □ + t-1 j=1 j-1 i=1 4K(L 2 r t-i + LB)4K(Lr t-j + B) 2 ∥W -V ∥ □ , where r i = (2KL 2 B 2 ) 2 i ∥f ∥ 2 i" }, { "formula_coordinates": [ 39, 89.35, 546.98, 395.56, 99.29 ], "formula_id": "formula_210", "formula_text": "∥Θ t (W, f ) -Θ t (V, g)∥ □ ≤ t-1 j=0 4K(L 3 r j + L 2 B)∥f -g∥ □ + t-1 j=1 j-1 i=1 4K(L 3 r t-i + L 2 B)4KL(Lr t-j + B) 2 ∥W -V ∥ □ , where r i = (2KL 3 B 2 ) 2 i ∥f ∥ 2 i ∞ ." }, { "formula_coordinates": [ 39, 250.57, 682.83, 101.86, 14.34 ], "formula_id": "formula_211", "formula_text": "r t = (2KL 2 B 2 ) 2 t ∥f ∥ 2 t ∞ ," }, { "formula_coordinates": [ 39, 147.62, 725.88, 307.75, 26.48 ], "formula_id": "formula_212", "formula_text": "∥Θ t+1 (W, f ) -Θ t+1 (V, g)∥ □ ≤ 4K(L 2 r t + LB)∥Θ t (W, f ) -Θ t (V, g)∥ □ + 4K(Lr t + B) 2 ∥W -V ∥ □ ." }, { "formula_coordinates": [ 41, 90, 132.45, 316.13, 81.19 ], "formula_id": "formula_213", "formula_text": "a t = 4K L 2 (L t ∥f ∥ ∞ + t-1 j=1 L j B) + LB and b t = K L(L t ∥f ∥ ∞ + t-1 j=1 L j B) + B ∥W -V ∥ □ ." }, { "formula_coordinates": [ 41, 162.66, 241.51, 277.69, 83.53 ], "formula_id": "formula_214", "formula_text": "e t = 4K(L 3 r t-1 + L 2 B)e t-1 + K(L 2 r + LB)∥W -V ∥ □ = 4K L 3 L 2t ∥f ∥ ∞ + t-1 j=1 L 2j (LB + B) + L 2 B e t-1 + K L 2 L 2t ∥f ∥ ∞ + t-1 j=1 L 2j (LB + B) + LB ∥W -V ∥ □ ." }, { "formula_coordinates": [ 41, 195.25, 357.75, 212.51, 14.01 ], "formula_id": "formula_215", "formula_text": "e t = O(K t L 3t+2t 2 r t B t ) ∥f -g∥ □ + ∥W -V ∥ □ ." }, { "formula_coordinates": [ 43, 127.14, 146.1, 387.03, 31.82 ], "formula_id": "formula_216", "formula_text": "F (x)dµ(x) - 1 N N i=1 F (X i ) ∞ ≤ 2ξ -1 (N )L f + 1 √ 2 ξ -1 (N )∥F ∥ ∞ (1 + log(2/p)),(27)" }, { "formula_coordinates": [ 43, 171.09, 196.27, 7.3, 6.73 ], "formula_id": "formula_217", "formula_text": "r 2" }, { "formula_coordinates": [ 43, 209.96, 318.52, 299.78, 31.82 ], "formula_id": "formula_218", "formula_text": "1 N N i=1 1 Ij (X i ) -µ(I k ) ∞ ≤ 1 √ 2 log(2/q) √ N . 
(28" }, { "formula_coordinates": [ 43, 509.74, 328.88, 4.43, 8.8 ], "formula_id": "formula_219", "formula_text": ")" }, { "formula_coordinates": [ 43, 271.81, 368.45, 59.38, 30.32 ], "formula_id": "formula_220", "formula_text": "E Jq Lip = J j=1 E q j ," }, { "formula_coordinates": [ 43, 245.13, 480.91, 57.12, 22.6 ], "formula_id": "formula_221", "formula_text": "F r (y) = j∈[J]" }, { "formula_coordinates": [ 43, 152.11, 526.29, 300.38, 65.77 ], "formula_id": "formula_222", "formula_text": "1 N N i=1 F (X i ) - χ F (y)dµ(y) ∞ ≤ 1 N N i=1 F (X i ) - 1 N N i=1 F r (X i ) ∞ + 1 N N i=1 F r (X i ) - χ F r(" }, { "formula_coordinates": [ 43, 147.12, 668.4, 314.99, 83.31 ], "formula_id": "formula_224", "formula_text": "1 N N i=1 F (X i ) - 1 N N i=1 F r (X i ) ∞ ≤ 1 N N i=1 F (X i ) - j∈J F (z j )1 Ij (X i ) ∞ = 1 N N i=1 ∥F (X i ) -F (z ji )∥ ∞ ≤rL F ." }, { "formula_coordinates": [ 44, 122.25, 147.44, 364.73, 69.08 ], "formula_id": "formula_225", "formula_text": "1 N N i=1 F r (X i ) - χ F r (y)dµ(y) ∞ = j∈[J] 1 N N i=1 F (z j )1 Ij (X i ) - Ij F (z j )dy ∞ ≤ j∈[J] ∥F ∥ ∞ 1 N N i=1" }, { "formula_coordinates": [ 44, 272.73, 224.4, 115.51, 23.8 ], "formula_id": "formula_226", "formula_text": "≤ J∥F ∥ ∞ 1 √ 2 log(2J/p) √ N ." }, { "formula_coordinates": [ 44, 216.68, 280.27, 152.84, 60.57 ], "formula_id": "formula_227", "formula_text": "1 N N i=1 F r (X i ) - χ F r (y)dµ(y) ∞ ≤ κ(r)∥F ∥ ∞ 1 √2" }, { "formula_coordinates": [ 46, 252.2, 186.37, 98.61, 30.32 ], "formula_id": "formula_228", "formula_text": "E p = E mult ∩ C i=1 E p i ." } ]
A graphon-signal analysis of graph neural networks
We present an approach for analyzing message passing graph neural networks (MPNNs) based on an extension of graphon analysis to a so-called graphon-signal analysis. An MPNN is a function that takes a graph and a signal on the graph (a graph-signal) and returns some value. Since the input space of MPNNs is non-Euclidean, i.e., graphs can be of any size and topology, properties such as generalization are less well understood for MPNNs than for Euclidean neural networks. We claim that one important missing ingredient in past work is a meaningful notion of graph-signal similarity measure that endows the space of inputs to MPNNs with a regular structure. We present such a similarity measure, called the graphon-signal cut distance, which makes the space of all graph-signals a dense subset of a compact metric space - the graphon-signal space. Informally, two deterministic graph-signals are close in cut distance if they "look like" they were sampled from the same random graph-signal model. Hence, our cut distance is a natural notion of graph-signal similarity, which allows comparing any pair of graph-signals of any size and topology. We prove that MPNNs are Lipschitz continuous functions over the graphon-signal metric space. We then give two applications of this result: 1) a generalization bound for MPNNs, and 2) the stability of MPNNs to subsampling of graph-signals. Our results apply to any regular enough MPNN on any distribution of graph-signals, making the analysis rather universal.
Ron Levie
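To make the setup described in the abstract concrete, the sketch below is a minimal, self-contained illustration (it is not code from the paper): a graphon-signal (W, f) is fixed, random graph-signals (G(W, Λ), f(Λ)) of two sizes are sampled from it, and a single message-passing layer with normalized sum aggregation plus an average-pooling readout is applied to both. The particular graphon W, signal f, layer weights, and readout are arbitrary choices made only for this demonstration.

```python
# Minimal sketch (assumption: the graphon W, signal f, and layer weights below
# are invented for illustration; they are not taken from the paper).
import numpy as np

def W(x, y):
    # A symmetric graphon on [0, 1]^2 with values in [0, 1].
    return 0.25 + 0.5 * np.exp(-np.abs(x - y))

def f(x):
    # A bounded signal on [0, 1].
    return np.sin(2.0 * np.pi * x)

def sample_graph_signal(k, rng):
    # Lambda = (lambda_1, ..., lambda_k): i.i.d. uniform points in [0, 1].
    lam = rng.uniform(0.0, 1.0, size=k)
    # G(W, Lambda): simple random graph with edge {i, j} present w.p. W(lambda_i, lambda_j).
    probs = W(lam[:, None], lam[None, :])
    upper = np.triu(rng.uniform(size=(k, k)) < probs, 1)
    adj = (upper | upper.T).astype(float)
    return adj, f(lam)

def mpnn_layer(adj, signal):
    # One message-passing layer with normalized sum aggregation (divide by n).
    n = signal.shape[0]
    aggregated = adj @ signal / n
    return np.tanh(0.7 * signal + 1.3 * aggregated)

rng = np.random.default_rng(0)
for k in (50, 2000):
    adj, signal = sample_graph_signal(k, rng)
    readout = float(mpnn_layer(adj, signal).mean())  # average-pooling readout
    print(f"k = {k:4d}  readout = {readout:.4f}")
```

Because both samples come from the same graphon-signal, the two printed readouts should be close; this is the informal content of the stability-to-subsampling statement in the abstract.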
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the graph-signal cut distance. Left: a stochastic block model (SBM) with three classes and a signal. The color of the class represents the value of the signal at this class. The thickness of the edges between the classes (including self-loops) represents the probability/density of edges between the classes. Middle: a small graph-signal which looks like was sampled from the SMB. The color of the nodes represents the signal values. Right: a large graph-signal which looks like was sampled from the SMB. In graphon-signal cut distance, these two graph-signals are close to each other.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of the graphon-signal regularity lemma. The values of the graphon are in gray scale over [0, 1] 2 , and the signal is plotted in color on the diagonal of [0, 1] 2 . (a) A graphonsignal. (b) Representation of the same graphon-signal under the \"good\" permutation/measure preserving bijection guaranteed by the regularity lemma. (c) The approximating step graphon-signal guaranteed by the regularity lemma.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Theorem 4 . 3 .43Consider the setting of Theorem 4.2, and let Θ be a MPNN with Lipschitz constant L. Denote Σ = W, Θ(W, f ) , and", "figure_data": "", "figure_id": "fig_2", "figure_label": "43", "figure_type": "figure" }, { "figure_caption": "XFF (z j ) 1 FFF1Ij dµ(y) -χ (y)dµ(y) ∞ ≤ j∈[J] Ij ∥F (z j ) -F (y)∥ ∞ dµ(y) ≤ rL F .By plugging the bounds of (1), (2) and (3) into (29), we get1 N N i=1 (X i )χ (y)dµ(y) ∞ ≤ 2rL F + κ(r)∥F ∥ ∞ 1 √2log(κ(r)) + log(2/p)", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "F(y)dµ(y) ∞ ≤ 2ξ -1 (N )L f + 1 √ 2 ξ -1 (N )∥F ∥ ∞ (1 + log(2/p)).", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "in Appendix G.5 we present experiments that illustrate the generalization capabilities of MPNNs with normalized sum aggregation. Comparison of the assumptions made by different GNN generalization analysis papers.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "1] d norm (see Definition 3.3).", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[17,2,28,10]", "Explanation": "The cited works provide foundational data and methodologies in computational biology that the citing paper builds upon in its research on GDL models."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work by [14] provides a generalization bound of MPNNs that the citing paper builds upon to show that the learned MPNN performs well on training and test graph-signals."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work by [23] contributes to the analysis of MPNNs by providing a way to analyze the data distribution and the MPNN model, which the citing paper uses to improve the analysis of MPNNs."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work by [29] offers insights into the generalization of GNNs, which the citing paper utilizes to further analyze the performance of MPNNs in GNNs."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work by [27] provides a way to analyze the data distribution and the MPNN model in GNNs, which the citing paper builds upon to improve the analysis of MPNNs in GNNs."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work by [6] is used to provide a limited analysis of MPNNs in the context of graphs of fixed sizes. The analysis is based on the edit-distance metric induced by the Euclidean metric on adjacency matrices, which is used to reduce the graph problem to the Euclidean case."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work provides a method of taking the union of the Euclidean edit-distance spaces up to a certain graph size, which the citing paper adopts in their research to consider graphs of arbitrary but bounded size."}, {"Category": "Extension or Continuation", "Citation": "[19]", "Explanation": "The cited work analyzes the expressivity of GNNs on spaces of graphons, which the citing paper extends by considering the metric on the graphon space as the L \u221e distance between graphons as functions."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work provides a model for graphons that the citing paper claims is not justified and not grounded in theory, and the citing paper adopts a different approach in their research."}, {"Category": "Extension or Continuation", "Citation": "[31,18,26,27]", "Explanation": "The cited works are mentioned in the context of data generation by graphons, and the citing paper extends the research by exploring new dimensions and variables in the data distribution."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work provides a list of methods used in practice for graph neural networks, which the citing paper builds upon to discuss the special cases of MPNNs."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work is another method used in practice for graph neural networks, which the citing paper references to provide a more comprehensive understanding of the methods discussed in the text."}, {"Category": "Methodological Basis", "Citation": "[13,25]", "Explanation": "The cited works provide the basis for the concept of equipartition in graph analysis, which the citing paper adopts in their research on the number of edges in a graph."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The 
cited work provides the Weak Regularity Lemma, which the citing paper uses to represent large graphs in a smaller, coarse-grained version for analysis."}, {"Category": "Methodological Basis", "Citation": "[4,24]", "Explanation": "The cited works provide the concept of graphons and the definition of the space of graphons, which the citing paper adopts to model the node set in a weighted graph."}, {"Category": "Data Source", "Citation": "The cited work is used to acknowledge the equivalence of all graphon domains to [0, 1] with the standard Lebesgue measure, which the citing paper utilizes in its research on the space of graphons."}, {"Category": "Methodological Basis", "Citation": "(1)", "Explanation": "The cut norm is defined as a method to measure the difference between two graphons, which is used in the citing paper to calculate the irregularity of a given graphon."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work provides a similar construction for average aggregation that the citing paper builds upon in defining the message function for MPNNs on graphon-signals using normalized sum aggregation."}, {"Category": "Methodological Basis", "Citation": "[30,36]", "Explanation": "The cited works provide a basis for the construction of message passing based on general message functions, which the citing paper uses in their research to approximate the message passing process."}, {"Category": "Methodological Basis", "Citation": "[9,20,22]", "Explanation": "The cited works on spectral convolutional networks are used in the citing paper to replace the aggregation in the method with normalized sum aggregation, which contributes to the research on message passing based on general message functions."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work by [33] provides a foundation for generalization analysis in MPNNs, which the citing paper builds upon in its research on uniform generalization bounds."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work by [14] contributes to the development of generalization theorems in MPNNs, which the citing paper further extends in its study of uniform generalization bounds."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work by [23] offers insights on generalization analysis in MPNNs, serving as a basis for the citing paper to explore uniform generalization bounds in this area."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work by [27] provides a contribution to the generalization theorems of MPNNs, which the citing paper leverages in its research on uniform generalization bounds for this class of models."}, {"Category": "Methodological Basis", "Citation": "[29]", "Explanation": "The cited work by [29] contributes to the study of generalization theorems in MPNNs, offering a foundation for the citing paper to analyze uniform generalization bounds in this context."}, {"Category": "Supporting Evidence", "Citation": "[14]", "Explanation": "The cited work provides a theoretical foundation for the generalization limits of GNNs by establishing a bounded degree assumption on the graphs."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work introduces a PAC-bayesian approach to MPNN and GCN models, which the citing paper adopts in their research to establish a theoretical basis for their study."}, {"Category": "Data Source", "Citation": 
"[29]", "Explanation": "The cited work is the source of the bounded color complexity assumption in the generalization analysis of MPNNs, which the citing paper utilizes in their research to underpin their study."}, {"Category": "Extension or Continuation", "Citation": "[27]", "Explanation": "The cited work on generalization analysis of MPNNs extends the research by sampling from a small set of graphons, which the citing paper builds upon in their study to explore new dimensions and variables in the field."}, {"Category": "Our graphon-signal theory non", "Explanation": "The cited work is a new theory that the citing paper has developed to non-parametrically model the graphon-signal relationship in a more general and flexible way."}, {"Category": "Methodological Basis", "Citation": "[16,5,7]", "Explanation": "The cited works provide the practice of subsampling large graphs and applying MPNNs to the smaller subsampled graphs, which the citing paper adopts as a methodological basis for their research."}, {"Category": "Extension or Continuation", "Citation": "[21,31,18,26,32]", "Explanation": "The cited works study the transferability and stability analysis of MPNNs on randomly sampled graphs, which the citing paper extends by applying the analysis to a more general setting of graphons without metric properties."}, {"Category": "Methodological Basis", "Citation": "[0,1]", "Explanation": "The cited work provides a standard probability space and a measure space that the citing paper uses to define a non-null subset and an atomless measure space for a specific Q j subset of the interval [0, 1]. This method is adopted to study the properties of the Q j subset in the context of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[25, Lemma 4.1]", "Explanation": "The cited lemma provides a tool for the proof of the weak regularity lemma in the citing paper, which the authors adopt to underpin their research on graphon-signals."}, {"Category": "Methodological Basis", "Citation": "(c > 1 and r > 0)", "Explanation": "The cited work provides a specific set of values for the parameters c and r that the citing paper adopts in its research on step function graphons and step signals."}, {"Category": "Supporting Evidence", "Citation": "(for any sufficiently small \u03f5 > 0)", "Explanation": "The cited work offers a general condition for the value of \u03f5 to hold true, which the citing paper uses to ensure the validity of its research on step function graphons and step signals."}, {"Category": "Data Source", "Citation": "(for every (W, f ) \u2208 WL r)", "Explanation": "The cited work provides a specific set of data in the form of (W, f ) that the citing paper uses in its research on step function graphons and step signals."}, {"Category": "Extension or Continuation", "Citation": "(there exists \u03d5 \u2208 S \u2032 [0,1] , a step function graphon [W \u03d5 ] n \u2208 S 2 In \u2229 W 0 and a step signal [f \u03d5 ] n \u2208 S 1 In \u2229 L \u221e r [0, 1])", "Explanation": "The cited work extends the research on step function graphons and step signals by introducing new concepts and definitions, such as the step function graphon [W \u03d5 ] n and the step signal [f \u03d5 ] n."}, {"Category": "Methodological Basis", "Citation": "[W \u03d5 ] n", "Explanation": "The cited work provides the step function graphon and signal for the approximating graphon-signal in the cited work, which the citing paper adopts to conduct its research."}, {"Category": "Data Source", "Citation": "[f \u03d5 ] n", 
"Explanation": "The cited work provides the step function signal for the approximating graphon-signal in the cited work, which the citing paper utilizes in its research."}, {"Category": "Extension or Continuation", "Citation": "In \u2229 W 0", "Explanation": "The cited work extends the research by considering the intersection of the graphon and the signal in a specific context."}, {"Category": "Extension or Continuation", "Citation": "In \u2229 L \u221e r [0, 1]", "Explanation": "The cited work further extends the research by considering the intersection of the graphon and the signal in a specific range of values."}, {"Category": "Extension or Continuation", "Citation": "where I n is the equipartition of [0, 1] into n intervals", "Explanation": "The cited work extends the research by providing a specific method for partitioning the graphon and signal into n intervals."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work in [1] provides a method for replacing the error term in a specific setting, which the citing paper adopts to improve the accuracy of their research."}, {"Category": "Methodological Basis", "Citation": "[24,Lemma 10.16]", "Explanation": "The cited work provides a theorem that bounds the cut distance between a graphon and its sampled graph, which the citing paper extends to the case of a sampled graphon-signal in Theorem D.7."}, {"Category": "Data Source", "Citation": "[4]", "Explanation": "The cited work is referenced in the proof of Theorem D.7 as a source of information for the main difference in the proof, which is the need to show a measure preserving bijection that is shared by the graphon and the signal."}, {"Category": "Methodological Basis", "Citation": "Theorem B.8", "Explanation": "The cited work provides a generic error bound in the regularity lemma, which the citing paper adopts to establish a specific error bound for n intervals in the theorem."}, {"Category": "Extension or Continuation", "Citation": "log(n)", "Explanation": "The citing paper extends the work of log(n) by using it to calculate the error in the regularity lemma for a given number of intervals."}, {"Category": "Methodological Basis", "Citation": "2c = 3", "Explanation": "The cited work establishes a relationship between 2c and 3, which the citing paper uses to determine the error in the regularity lemma for a given value of c."}, {"Category": "Methodological Basis", "Citation": "3/\u03f5 2 + 1 \u2265 log(n)", "Explanation": "The cited work provides a method for calculating the error in the regularity lemma based on the value of 3/\u03f5 2 + 1 and the number of intervals n."}, {"Category": "Methodological Basis", "Citation": "4/\u03f5 2 > 3/\u03f5 2 + 1 \u2265 log(n)", "Explanation": "The cited work offers a method for increasing the error bound in the regularity lemma to satisfy a specific condition based on the value of 4/\u03f5 2 and the number of intervals n."}, {"Category": "Data Source", "Citation": "[W \u03d5 ] n , [f \u03d5 ] n ", "Explanation": "The cited work provides a specific data source in the form of a piecewise constant graphon signal and a graphon signal, which the citing paper uses to calculate the error in the regularity lemma."}, {"Category": "Methodological Basis", "Citation": "\u2225W \u03d5 \u2032 -[W \u03d5 \u2032 ] n \u2225 \u25a1 \u2264 \u03b1 2 log(n)", "Explanation": "The cited work provides a method for calculating the error in the regularity lemma based on the difference between the graphon signal and its n-interval 
approximation."}, {"Category": "Methodological Basis", "Citation": "\u2225f \u03d5 \u2032 -[f \u03d5 \u2032 ] n \u2225 \u25a1 \u2264 (1 -\u03b1) 2 log(n)", "Explanation": "The cited work offers a method for calculating the error in the regularity lemma based on the difference between the graphon signal and its n-interval approximation, with a different error bound than the previous method."}, {"Category": "Methodological Basis", "Citation": "(K k=1 L \u03be k r \u2225\u03be k t \u2225 \u221e + \u2225\u03be k r \u2225 \u221e L \u03be k t \u2225f -g\u2225 1 )", "Explanation": "The cited work provides the product rule for message kernels, which the citing paper uses to prove the Lipschitz continuity of message passing layers and update layers in the graphon-signal cut metric."}, {"Category": "Methodological Basis", "Citation": "(K k=1 L \u03be k r \u2225f -g\u2225 1 )", "Explanation": "The cited work provides the method of calculating the L1 norm of the difference between two message kernels, which the citing paper adopts in its own research to measure the distance between the two message kernels."}, {"Category": "Extension or Continuation", "Citation": "(K k=1 L \u03be k r \u2225f -g\u2225 1 = K k=1 L \u03be k r \u2225\u03be k t \u2225 \u221e + \u2225\u03be k r \u2225 \u221e L \u03be k t \u2225f -g\u2225 1 )", "Explanation": "The citing paper extends the method presented in the cited work by introducing the L1 norm of the difference between the message kernels and the L1 norm of the difference between the message kernels and the message kernel t."}, {"Category": "Methodological Basis", "Citation": "[24,Lemma 8.10]", "Explanation": "The cited work provides a method for extending a result from a specific kernel to a more general class of kernels, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "(f + -f -)", "Explanation": "The cited work provides a method for writing any function f in L \u221e 1 [0, 1] as the sum of two functions f + and f - in L + [0, 1], which the citing paper adopts in its research."}, {"Category": "Supporting Evidence", "Citation": "(F.4)", "Explanation": "The cited lemma provides a foundational result that the citing paper builds upon to establish a key inequality in the analysis of a function and its associated cut norm."}, {"Category": "Extension or Continuation", "Citation": "(F.6)", "Explanation": "The cited lemma is extended in the citing paper to state a new result that further elaborates on the properties of a function and its support in the context of the cut norm."}, {"Category": "Extension or Continuation", "Citation": "(F.7)", "Explanation": "The cited lemma is again extended in the citing paper to provide a new result that builds upon the previous extension and further elaborates on the properties of a function in the context of the cut norm."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work, GIN, serves as the basis for the MPNN architecture used in the citing paper for aggregating and processing graph data."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work, GraphConv, is also used as a method for processing graph data in the MPNN architecture of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work defines the GIN convolutional layer, which the citing paper adopts in their research to process the input data and extract features."}, {"Category": "Methodological Basis", 
"Citation": "(2027)", "Explanation": "The cited work by (2027) provides the definition of formal bias for a function, which the citing paper uses in its analysis of the bias of an affine-linear operator."}, {"Category": "Methodological Basis", "Citation": "(34)", "Explanation": "The cited work provides a framework for understanding the complexity/capacity of the hypothesis space in generalization bounds, which the citing paper leverages to describe the richness of the hypothesis space in their research."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b41", "b8", "b24", "b1", "b48", "b29", "b30", "b36" ], "table_ref": [], "text": "Grammatical error correction (GEC) aims to remove all underlying textual errors in a given sentence without changing its meaning (Bryant et al., 2022). During the past decade, GEC has attracted a lot of research interest and has been integrated into many real-life applications like writing assistants.\nA significant effort has been undertaken to build high-quality datasets for research on GEC. Most GEC datasets are for English (Yannakoudakis et al., 2011;Dahlmeier et al., 2013;Napoles et al., 2017;Bryant et al., 2019), which mainly collect sentences from learner essays. For Chinese GEC (CGEC), datasets are relatively scarce. Similar to English GEC, most of them are built from essays written by learners, including NLPCC18 (Zhao et al., 2018), CGED (Rao et al., 2018(Rao et al., , 2020)), YACLC (Wang et al., 2021), and MuCGEC (Zhang et al., 2022a).\nBesides learner GEC, there is also great demand for correcting errors made by native speakers. For English GEC, researchers have already constructed\nRef. 1" }, { "figure_ref": [], "heading": "目前现有的汉语依存树库规模较小。", "publication_ref": [], "table_ref": [], "text": "The scale of current existing Chinese dependency treebank is relatively small." }, { "figure_ref": [], "heading": "Ref. 2", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "目前现有的汉语依存树库规模较小。", "publication_ref": [ "b23", "b11", "b35", "b39", "b23", "b38" ], "table_ref": [ "tab_0" ], "text": "The scale of current existing Chinese dependency treebank is relatively small. several native datasets, e.g., GMEG (Napoles et al., 2019) and CWEB (Flachs et al., 2020). For CGEC, such research has just begun. CCTC (Wang et al., 2022) is the first native CGEC dataset composed of web documents written by natives. Another recent work, FCGEC (Xu et al., 2022), collects sentences from the questions in Chinese examinations. Among all the above datasets, only GMEG (Napoles et al., 2019) targets texts from multiple domains. The lack of multi-domain datasets inevitably introduces biases in the construction and evaluation of CGEC approaches. First, cuttingedge CGEC approaches (Li et al., 2022a;Zhang et al., 2022b;Wu and Wu, 2022) are all evaluated under the in-domain setting, where the training and test sets are from the same domain. It remains unclear how well those approaches generalize to out-of-domain inputs, which is important for practical application. Second, all CGEC approaches are only evaluated in a single domain, basically learner essays. This can be misleading since an approach that outperforms others in one domain may actually perform poorly in another.\nTo alleviate these problems, this work proposes NaSGEC (pronounced as /\"neIsgek/), a multidomain dataset from native speaker texts for Chinese GEC. NaSGEC comprises 12,500 sentences from 3 native text domains: social media platform (MEDIA), undergraduate theses (THESIS), and Chinese examinations (EXAM). These domains are closely related to real-life GEC application scenarios, i.e., writing aid, paper proofreading, and Chinese teaching. Based on detailed data analysis (see Section 3), we demonstrate that they have diverse writing styles and error distributions, thus posing great challenges for existing models and will be an ideal testbed for domain adaptation techniques. 
Furthermore, there are usually different correction methods for an error, as shown in Table 1. Hence, we assign each sentence to two annotators for annotation and one expert for double-checking, leading to multiple high-quality references.\nUsing NaSGEC, we conduct extensive experiments. We evaluate the performance of the stateof-the-art (SOTA) CGEC model on NaSGEC with different kinds of training data. We first train the model on commonly-used human-annotated training sets. Since these training sets are collected from learner texts while NaSGEC is a native dataset, we also generate synthetic training data from native texts. The multi-domain property of NaSGEC enables us to shed light on the domain problem in CGEC. We conduct domain transfer experiments and design three indicators for evaluating domain differences. In summary, our main contributions can be concluded as follows:\n(1) We propose NaSGEC, a multi-domain CGEC dataset from native speaker texts, which contains 12.5k sentences with multiple references. We also conduct detailed data analysis on it.\n(2) We launch benchmark experiments on NaS-GEC with SOTA CGEC models and different training data. We find models have their own advantages in specific domains, suggesting that the multi-domain NaSGEC can support a more comprehensive evaluation.\n(3) Based on NaSGEC, we perform preliminary domain transfer experiments and analysis. We find using small-scale in-domain data for finetuning can significantly boost model performance. We also analyze the similarity between domains by comparing cross-domain transfer performance. We devise several indicators of domain shifts to gain more insights.\nTo further improve model performance in a specific domain, we propose a simple domainaware data augmentation method.\n(4) We systematically compare NaSGEC to previously released CGEC datasets, including both learner and native ones.\nAll codes and models have been released at https://github.com/HillZhang1999/ NaSGEC. We will also release the dataset after improving it according to reviewers' comments." }, { "figure_ref": [ "fig_0" ], "heading": "Construction of NaSGEC", "publication_ref": [], "table_ref": [], "text": "This section describes the construction process of NaSGEC in detail. As shown in Figure 1, we first collect raw sentences from three domains. Then, each sentence is assigned to two annotators for independent annotation. To guarantee data quality, an expert will carefully review the annotation results." }, { "figure_ref": [], "heading": "Data Collection", "publication_ref": [], "table_ref": [], "text": "NaSGEC collects data from 3 native Chinese text domains, which cover both formal and informal writing styles and errors of different difficulties.\nThe MEDIA domain contains 4k sentences from articles posted on the Wechat public account platform1 , which is one of the most popular social media platforms in China. Articles in this platform covers rich topics. We also notice that the sentences in it are mostly informal and often expressed in a spoken-language tone. During our preliminary annotation, we found that errors in this domain are extremely sparse, so direct annotation would result in high costs to acquire enough erroneous sentences. Therefore, we turn to select sentences by voting with multiple competitive CGEC models. Specifically, we utilize large-scale pseudo training data to train three seq2seq-based models and three seq2edit-based models. 
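To make the selection procedure concrete, the sketch below shows one way such a model-voting filter could be implemented. The callable model interface, the function names, and the strict-majority threshold over the six systems (three Seq2Seq and three Seq2Edit, with sentences kept when more than half of them propose an edit, as described in the continuation of this paragraph) are written out as illustrative assumptions rather than the exact pipeline.

```python
# Hypothetical sketch of the voting-based pre-selection of likely-erroneous sentences.
# `models` stands for the three Seq2Seq and three Seq2Edit systems trained on
# large-scale pseudo data; their callable interface here is an assumption.
from typing import Callable, List

def select_candidates(sentences: List[str],
                      models: List[Callable[[str], str]]) -> List[str]:
    """Keep sentences that more than half of the models would change."""
    min_votes = len(models) // 2 + 1          # strict majority, e.g. 4 of 6 models
    candidates = []
    for sent in sentences:
        # A model "votes" for a sentence if its corrected output differs from the input.
        votes = sum(1 for correct in models if correct(sent) != sent)
        if votes >= min_votes:
            candidates.append(sent)
    return candidates
```

Applied to the crawled sentences, a filter of this kind keeps only the small fraction that several independent systems agree to edit, which is what reduces annotation cost in a low error-density domain.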
Then, we only choose candidate sentences corrected by more than half of those models for annotation. We crawl 1M candidate sentences from the Wechat public account platform, and accumulate about 120k potentially wrong sentences from them with the abovementioned method. Finally, we randomly pick 4k sentences for annotation.\nThe THESIS domain consists of 1.5k sentences from undergraduate theses. We first collect 120 dissertations written by Chinese undergraduates majoring in computer science, with about 40k sentences in total. Intuitively, texts in this domain are usually formal and contain technical terms. Similar to MEDIA, errors in THESIS are also very sparse.\nTo save costs, we adopt the same method as in MEDIA to select sentences for annotation.\nThe EXAM domain contains 7k sentences from the ungrammatical sentence judgment questions in Chinese examinations. Such questions are elaborately designed by experts and ask students to choose 1-3 ungrammatical sentences from 4 candidates. We crawl them from a public educational website 2 , as well as their answers and analyses." }, { "figure_ref": [], "heading": "Annotation Workflow", "publication_ref": [ "b32", "b24" ], "table_ref": [], "text": "For groundwork, we extend the annotation guidelines of MuCGEC (Zhang et al., 2022a) to accommodate errors made by native speakers. We subsequently use them to instruct our annotators and gradually improve them according to annotator feedback before starting the annotation process. For example, we define how to distinguish dialect from errors after discussing with annotators.\n2 http://www.gzywtk.com/ During annotation, we ask our annotators to directly rewrite the whole sentence to craft a grammatical and fluent version of it with its intended meaning. The so-called direct rewriting annotation paradigm has proven efficient and effective in GEC (Sakaguchi et al., 2016;Napoles et al., 2017).\nSince multiple acceptable correction ways usually exist, we assign each sentence to two random annotators for independent annotation. Following Zhang et al. (2022a), we ask each annotator to submit the best reference in his/her mind to improve the annotation efficiency. Then, an expert reviewer will check these two submissions in a double-blind manner. Besides directly rejecting incorrect submissions, the reviewer also needs to supplement other correct references missed by annotators. If annotators make wrong submissions, they are required to learn from their mistakes for selfimprovement. The learning method is re-typing one of the correct references determined by reviewers. All annotations are conducted with the support of our developed online annotation platform, which is presented in Appendix A. We select and show some typical annotation examples in Appendix F." }, { "figure_ref": [ "fig_1" ], "heading": "Annotation Process", "publication_ref": [ "b31", "b48", "b35", "b39" ], "table_ref": [ "tab_2" ], "text": "We hired 13 well-educated native undergraduates familiar with Chinese grammar as our annotators. 2 graduate students, who participated in the compilation of guidelines, served as the reviewers. Annotators received detailed instructions before annotating; those with low annotation quality were warned during annotating. We established a chat group to allow annotators to ask questions. All annotators and reviewers were paid properly. The whole annotation process took more than 4 months. 3 Analysis of NaSGEC Overall statistics. 
We list detailed statistics of NaSGEC and other existing datasets for comparison in Table 2. We use the tool 3 released with MuCGEC (Zhang et al., 2022a) to extract the edits of references and original sentences. Such edits are span-level edits merged from character-based ones based on pre-defined linguistic rules. Within NaSGEC, the average length of sentences varies across domains. The sentences in THESIS are the longest, probably because students tend to write long sentences in dissertations to explain technical concepts more clearly. Regarding the average number of edits and references, we observe that erroneous sentences in EXAM need the fewest edits to correct but have the most correction ways. The reason may be that each erroneous sentence in EXAM typically only has one complicated error to challenge students, which is often varied in its valid corrections. As reflected by the type-token ratio (Richards, 1987), MEDIA has the greatest lexical variety, intuitively due to the diversity of its topics. All the above analysis indicates systematical discrepancies across NaSGEC's domains.\nWe also present the statistics of two mainstream learner datasets, i.e., NLPCC18 (Zhao et al., 2018) and MuCGEC (Zhang et al., 2022a). Compared with those learner datasets, sentences in NaSGEC are significantly longer but contain much fewer edits, as natives make mistakes far less frequently than learners and seldom make obvious mistakes. Besides, sentences in NaSGEC also have more name entities and a higher level of lexical variety, showing that natives have a larger vocabulary.\n3 https://github.com/HillZhang1999/ MuCGEC/tree/main/scorers/ChERRANT Moreover, we also compare two newly published native datasets, CCTC (Wang et al., 2022) and FCGEC (Xu et al., 2022). The salient feature of CCTC is its low error density. Only 9.3% of sentences in CCTC contain errors, and each erroneous sentence just has one error (reflected by Avg. Edits). As for FCGEC, it is quite similar to the EXAM domain of NaSGEC, which is unsurprising since they share the same provenance.\nError type distributions. We use the tool provided by MuCGEC to classify extracted edits into 4 error types according to their correction operations. Figure 2 shows the distributions of these error types in NaSGEC and other datasets for comparison.\nWithin NaSGEC, the most frequent error type in MEDIA and THEIS is substituted errors. After further decomposition, we find that the majority of substituted errors in these 2 domains are caused by spelling or misuse of punctuation, as native speakers usually make such minor mistakes due to carelessness when typing essays or papers. The MEDIA domain also has a considerable proportion of missing errors, mainly caused by missing punctuation. Such errors often occur in informal texts, as the absence of punctuation generally does not affect the understanding of the sentence. Compared with the other domains, EXAM has a more even type distribution, where the proportion of substituted, missing, and redundant errors is quite close.\nLike MEDIA and THESIS domains of NaSGEC, the learner dataset MuCGEC also has a high proportion of substituted and missing errors. After a deeper look into samples, we find that learners are more prone to misuse verbs or nouns due to lexical or grammatical unfamiliarity, and they also tend to miss more specific words instead of punctuation. 
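The four coarse error types compared above are derived from the correction operation of each span-level edit. The exact bucketing rules belong to the MuCGEC tool; the snippet below is only a plausible reconstruction (the word-order test in particular is a simplifying assumption) meant to make the classification concrete.

```python
# Illustrative (not the tool's exact) mapping from a span-level edit to one of the
# four coarse error types: missing, redundant, word-order, substituted.
from collections import Counter

def classify_edit(src_span: str, tgt_span: str) -> str:
    if src_span == "" and tgt_span != "":
        return "missing"        # correction inserts content
    if src_span != "" and tgt_span == "":
        return "redundant"      # correction deletes content
    if Counter(src_span) == Counter(tgt_span):
        return "word-order"     # same characters, different order
    return "substituted"        # content is replaced

def type_distribution(edits):
    """Relative frequency of each type over a list of (src_span, tgt_span) edits."""
    counts = Counter(classify_edit(s, t) for s, t in edits)
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}
```

Distributions computed in this way are what the error-type comparison across domains and datasets is based on.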
Among all datasets, CCTC has the most unbalanced distribution: the substituted errors account for nearly 70%, and we find most of them are caused by spelling. Although both come from Chinese examinations, FCGEC and NaSGEC-EXAM still have some discrepancies, such as FCGEC contains more redundant errors, which may be due to different annotation guidelines and data sources.\nAnnotation Accuracy. We measure each annotator's accuracy by comparing all his/her submissions against the golden references determined by reviewers. Overall, the average annotation accuracy is 77.46%. Such a low figure clearly indicates the difficulty of the CGEC task. Moreover, it also highlights the importance of our review mechanism: about a quarter of references in our dataset will be problematic without our strict expert checking." }, { "figure_ref": [], "heading": "Benchmark Experiments on NaSGEC", "publication_ref": [], "table_ref": [], "text": "This section provides benchmark results for NaS-GEC with a current SOTA CGEC model. Following previous work, we train the model on humanannotated training data from learner texts. However, there exists a gap between learner training data and our native dataset. So we also use synthetic native training data to mitigate the gap." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b16", "b38" ], "table_ref": [], "text": "Model. Our benchmark models are based on BART (Lewis et al., 2020), a pre-trained Seq2Seq model that has recently achieved SOTA performance on mainstream CGEC datasets (Zhang et al., 2022b;Wu and Wu, 2022) 4 . We provide the implementation and training details in Appendix B.\nEvaluation metric. We use the character-based metric proposed by Zhang et al. (2022a). Concretely, we align the system output and golden reference with the input sentence to extract two groups of character-based edits. Then, we merge them into spans based on rules and compare them to calculate the precision (P), recall (R), and F 0.5 score. In the GEC community, there is a consensus that a good system should correct errors accurately to ensure a positive user experience. Therefore, most work uses F 0.5 , which places more emphasis on precision by weighting precision twice as recall. We do not use previous word-based metrics since we find they will introduce uncertainty into evaluation due to word segmentation errors." }, { "figure_ref": [], "heading": "Training Data", "publication_ref": [ "b43", "b48", "b42" ], "table_ref": [], "text": "Real learner training data. There are two public available large-scale human-annotated CGEC training datasets, which refer to HSK (Zhang, 2009) and Lang8 (Zhao et al., 2018). Both of them focus on errors occurring in learner essays. Lang8 has about 1.2M sentence pairs, and HSK contains about 150k. We combine them together for training and randomly select 5k of them as the dev set following previous work (Zhang et al., 2022a).\nPseudo native training data. So far, there has been no large-scale training data for errors made by native speakers. As manual annotation is expensive, we create synthetic native training data based on heuristic rules. We first extract 100M clean sentences from the WuDaoCorpora (Yuan et al., 2021), which is mainly composed of articles crawled from native websites. Then, we inject errors into clean sentences by randomly replacing, inserting, deleting and swapping tokens. To better generate spelling errors, we also utilize confusion sets. The proportion of each error is set empirically. 
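A minimal sketch of this corruption procedure is given below. The operation probabilities follow the values reported in Appendix D (corrupt a token with probability 0.05, then replace / insert / delete / swap with probabilities 0.55 / 0.2 / 0.2 / 0.05), while `vocab` and `confusion_set` are placeholders for the actual vocabulary and phonetic/glyph confusion sets; the real pipeline additionally alternates between word- and character-level segmentation.

```python
import random

def corrupt(tokens, vocab, confusion_set, p_corrupt=0.05):
    """Inject synthetic errors into a clean token sequence (probabilities from Appendix D)."""
    out, i = [], 0
    while i < len(tokens):
        tok = tokens[i]
        if random.random() >= p_corrupt:              # most tokens stay untouched
            out.append(tok)
            i += 1
            continue
        op = random.choices(["replace", "insert", "delete", "swap"],
                            weights=[0.55, 0.20, 0.20, 0.05])[0]
        if op == "replace":
            # half of the time draw from the confusion set, otherwise from the vocabulary
            pool = confusion_set.get(tok) if random.random() < 0.5 else None
            out.append(random.choice(pool) if pool else random.choice(vocab))
        elif op == "insert":
            # insert a duplicate or a random token before the current one
            out.append(tok if random.random() < 0.5 else random.choice(vocab))
            out.append(tok)
        elif op == "delete":
            pass                                      # simply drop the token
        elif op == "swap" and i + 1 < len(tokens):
            out.extend([tokens[i + 1], tok])          # swap with the following token
            i += 1
        else:
            out.append(tok)
        i += 1
    return out
```

Pairing each corrupted sentence with its clean original yields the kind of pseudo native training pairs used in the benchmark experiments.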
More details can be found in Appendix D." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [ "tab_3", "tab_3" ], "text": "Table 3 shows all experimental results. We evaluate models on the whole data of each domain.\nIn the MEDIA and THESIS domains, the pseudo native training data significantly outperforms the real learner data, although the former is automatically crafted. This shows the text domain of train- ing data can greatly influence model performance.\nIn the EXAM domain, the real learner training data instead outperforms the pseudo native data substantially. We speculate the reason is that most errors in the EXAM domain are carefully designed to be difficult, which can hardly be simulated by simple rules but may occur in learner essays.\nWe also combine both data to make full use of them. We train our model on one kind of data until it converges, then continue to train it on another. As shown in the last two rows of Table 3, the data combinations lead to minor performance improvements in two domains, i.e., THESIS and EXAM.\nFinally, the best F 0.5 scores are 45.79, 31.87, and 20.02 for the MEDIA, THESIS, and EXAM domains, respectively, achieved by 3 different models. It is worth noting that, although all models only have slight differences regarding overall average performance (the largest gap is just 1.72 F 0.5 ), they exhibit quite divergent behaviors in different domains (up to 13.72 F 0.5 gap). This clearly demonstrates the value of NaSGEC as a multi-domain dataset to support a more comprehensive model evaluation." }, { "figure_ref": [], "heading": "Domain Analysis Within NaSGEC", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct domain transfer experiments on NaSGEC by splitting data and performing fine-tuning. We devise indicators of GEC domain shifts to gain more insights into the connections and differences between our domains. To further improve model performance in specific domains, we also propose a simple domain-aware data augmentation method." }, { "figure_ref": [], "heading": "Domain Transfer Experiments", "publication_ref": [], "table_ref": [ "tab_3", "tab_6" ], "text": "We perform domain transfer experiments by finetuning the baseline on training data from different domains. best model in it according to Table 3 as its baseline.\nAfter fine-tuning, we evaluate and compare all three fine-tuned models on this domain's test set. All experimental results are presented in Table 5. We also perform error type analysis in Appendix E.\nIn-domain results. For in-domain results (finetune on one domain and evaluate on the same domain), we have the following observations. First, the best performance in each domain is achieved by fine-tuning baselines on training data from the same domain, showing that in-domain data benefits more than out-of-domain data. For example, although THESIS-train is much smaller than training sets in other domains, the THESIStuned model still performs best on THESIS-test.\nSecond, fine-tuning models on little in-domain data can bring very significant performance improvements. Specifically, in-domain fine-tuning leads to 10.89, 7.22, and 22.72 F 0.5 improvements in MEDIA, Thesis, and EXAM, respectively.\nOut-of-domain results. For out-of-domain results (fine-tune on one domain and evaluate on another), we have the following observations. 
First, in the MEDIA domain, fine-tuning the baseline with THESIS-train can lead to performance gain and vice versa, which indicates that the ME-DIA and THESIS domains are relatively similar.\nSecond, in the EXAM domain, fine-tuning with MEDIA-train and THESIS-train both hurt the performance of the baseline. In turn, fine-tuning with EXAM-train reduces the baseline performance in MEDIA and THESIS. This point to an obvious difference between EXAM and the other 2 domains.\nSummary. Overall, fine-tuning models on training data from different domains leads to considerable performance changes, emphasizing the importance of domain in GEC. This also encourages us to study domain adaptation for GEC in the future." }, { "figure_ref": [], "heading": "Indicators of Domain Shifts", "publication_ref": [ "b14" ], "table_ref": [ "tab_7", "tab_7" ], "text": "The domain transfer experiments reveal that there exist appreciable domain shifts in GEC. To better understand domain shifts in GEC, we further devise 3 indicators from a statistical perspective:\n• Vocabulary Overlap (VO) is defined as the ratio of the vocabulary of the target domain covered by the source domain. Higher VO represents better vocabulary coverage. Since larger data usually covers vocabulary better, we sample 1,000 tokens from each domain when calculating VO to make it comparable.\n• Type Distribution Similarity (TDS) is measured as the Kullback-Leibler (KL) divergence (Kullback and 1951) between the error type distributions of two domains.\nThe lower TDS indicates closer error type distributions. We extract and classify errors with the tool from MuCGEC (Zhang et al., 2022a).\n• Error Pattern Overlap (EPO) is computed as the ratio of the error patterns in the target domain occurring in the source domain. We define an error pattern as a mapping from the erroneous span to the corresponding correct span. To eliminate the influence of data sizes, we randomly extract 300 edits from each domain to calculate EPO.\nWe treat all 3 training sets as the source domains and all 3 test sets as the target domains. Then, we count the above indicators between them, as shown in Table 6. With the help of these indicators, we revisit the results of domain transfer experiments and gain more insights, as shown below.\nExplanation for in-domain results. In the previous section, we observe that using in-domain data for fine-tuning consistently outperforms outof-domain data. Here, we find that the in-domain training sets best cover the vocabulary of the test sets, as reflected by VO. After looking at TDS and EPO, we also find that in-domain training sets have the error distributions most similar to the test sets, in terms of both error types and patterns. These results show that different domains have their own characteristics in word selection and error distribution, which explains why using in-domain data contributes more than our-of-domain data.\nExplanation for out-of-domain results. Previously, we also observe that the MEDIA and THE-SIS domains can benefit each other via fine-tuning, while the EXAM domain is unable to help or get help from other domains. From Table 6, we find that TDS/EPO is relatively low/high between ME-DIA and THESIS, exhibiting that these two domains have similar error distributions. The reason can be that they are both built from realistic writing scenes, although MEDIA is informal writing while THESIS is formal writing. As indicated by high TDS and low EPO compared to other domains, EXAM has the most distinct error distribution. 
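Before turning to why EXAM stands apart, the sketch below makes the three indicators computable. The sample sizes follow the text (1,000 tokens per domain for VO, 300 edits per domain for EPO); the tokenization, the smoothing constant, and the direction of the KL term are illustrative assumptions.

```python
import math
import random
from collections import Counter

def vocabulary_overlap(src_tokens, tgt_tokens, sample=1000):
    """VO: share of the (sampled) target-domain vocabulary covered by the source domain."""
    src_vocab = set(random.sample(src_tokens, sample))
    tgt_vocab = set(random.sample(tgt_tokens, sample))
    return len(tgt_vocab & src_vocab) / len(tgt_vocab)

def type_distribution_similarity(src_dist, tgt_dist, eps=1e-9):
    """TDS: KL divergence between error-type distributions (lower = more similar)."""
    types = set(src_dist) | set(tgt_dist)
    return sum((tgt_dist.get(t, 0) + eps)
               * math.log((tgt_dist.get(t, 0) + eps) / (src_dist.get(t, 0) + eps))
               for t in types)

def error_pattern_overlap(src_edits, tgt_edits, sample=300):
    """EPO: share of target error patterns (wrong span -> correct span) also seen in the source."""
    src_patterns = set(random.sample(src_edits, sample))
    tgt_patterns = random.sample(tgt_edits, sample)
    return sum(p in src_patterns for p in tgt_patterns) / len(tgt_patterns)
```

Here an error pattern is the (erroneous span, corrected span) pair extracted from an edit, so `src_edits` and `tgt_edits` are lists of such tuples.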
The possible reason is that errors in EXAM are deliberately designed to challenge native students and seldom occur in natives' daily writing. Such differences in error distribution can be strong evidence to explain the out-of-domain transfer phenomena." }, { "figure_ref": [], "heading": "Domain-aware Data Augmentation", "publication_ref": [ "b42" ], "table_ref": [ "tab_8" ], "text": "As previously mentioned, the writing style and error distribution of the training data have a significant impact on the model's performance in a specific domain. Hence, we propose a simple domainaware data augmentation method by adapting the two aspects of pseudo data to the target domain.\nWe first perform the style adaptation, which means using the raw data with a writing style simi- lar to the target domain for augmentation. For the MEDIA domain, we collect 100k raw sentences from the Wechat public account platform. For the THESIS domain, we collect 100k raw sentences from academic papers in the Chinese Scientific Literature (CSL) dataset (Li et al., 2022b). We exclude EXAM since it is difficult to gather sufficient raw data that comes from the same source.\nWe then conduct the error adaptation. We inject 4 kinds of errors (missing, substituted, redundant, and word-order errors) to the raw sentence by rules and carefully control the error type distribution to simulate the target domain.\nThe experimental results are shown in Table 7. The domain-aware data augmentation (+ both) leads to significant performance gains, even with the in-domain real training data (Finetuned Baseline). Only using either style adaptation (+ style adaptation, without adjusting error type distribution) or error adaptation (+ error adaptation, using 100k data from a general domain, i.e., WuDaoCorpora (Yuan et al., 2021)) still improves performance compared to the baseline, while the improvement is more marginal than simultaneously using both of them. Overall, this is a straightforward attempt, and we hope future work could study more methods for GEC domain adaptation based on NaSGEC." }, { "figure_ref": [], "heading": "Comparison with Existing Datasets", "publication_ref": [ "b48", "b29", "b30" ], "table_ref": [ "tab_10", "tab_10", "tab_9", "tab_10" ], "text": "In this section, we compare NaSGEC with existing CGEC datasets, including both native and learner datasets, by analysis of domain shift indicators (Table 8) and domain transfer experiments (Table 9). Specifically, the baseline in Table 9 is trained with real learner data for MuCGEC and FCGEC, and pseudo native data for CCTC.\nNaSGEC vs. Existing learner datasets. Most existing CGEC datasets are for learners. We select MuCGEC (Zhang et al., 2022a) from them for comparison, because it actually covers several previous learner datasets, e.g., NLPCC18 (Zhao et al., 2018) and CGED (Rao et al., 2018(Rao et al., , 2020)).\nFrom domain shift indicators in Table 8, we have two observations. First, VO is always high from our domains to MuCGEC, implying our data cover the vocabulary of MuCGEC well. This may be because learners tend to use more common words. Second, all our domains get a mediocre level of TDS and EPO, revealing that errors made by native speakers differ from those made by learners. 
This illustrates why directly fine-tuning models on native data can not further boost performance on learner data.\nFrom domain transfer experiments in Table 9, we can see fine-tuning on domains of NaSGEC always results in performance degradation on MuCGEC, among them EXAM brings the least decline.\nWe encourage future work to explore better ways to transfer between native and learner domains, which will allow us to apply the rich experience of learner GEC to under-explored native GEC." }, { "figure_ref": [], "heading": "NaSGEC vs. Existing native datasets.", "publication_ref": [ "b35", "b39" ], "table_ref": [ "tab_9", "tab_2" ], "text": "There are two existing native CGEC datasets, i.e., CCTC (Wang et al., 2022) and FCGEC (Xu et al., 2022).\nAs shown in Table 8, CCTC is most like the MEDIA domain of NaSGEC, possibly because they are both collected from natives' informal writing. EPO from MEDIA and THESIS to CCTC is higher than 40%, even exceeding their in-domain overlap ratios. As mentioned in Section 3, CCTC has a very high proportion of spelling errors. Spelling errors in Chinese, such as misusing \"的/地/得\", have fixed patterns and thus can be easily covered. In contrast, our data contains more long-tail and challenging grammatical errors.\nLooking at transfer experiments, the recall of the baseline in CCTC greatly increased when fine-tuned on MEDIA and THESIS, but the precision keeps low. After carefully examining, we think this is due to the difference in error density. As shown in Table 2, about 65.2% and 70.0% of sentences in MEDIA and THESIS have errors, while the number in CCTC is just 9.3%. Therefore, fine-tuning the baseline on our data will make it correct errors more aggressively, which causes poor precision in low error-density domains. In view of this, we hope future work can study how to transfer GEC models across domains with different error densities.\nFor FCGEC, fine-tuning the model on the EXAM domain of NaSGEC leads to a huge improvement of over 22 F 0.5 scores, indicating they are highly compatible. The indicator results also confirm this point. We hope they can be two complementary resources to facilitate CGEC for Chinese teaching." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b41", "b8", "b24", "b23", "b11", "b48", "b29", "b30", "b36", "b35", "b42", "b39", "b20", "b28", "b6", "b12", "b27", "b19", "b40", "b4", "b15", "b5", "b22", "b10", "b7", "b37", "b44" ], "table_ref": [], "text": "Dataset. Most GEC datasets are built for English. Early English GEC datasets, such as FCE (Yannakoudakis et al., 2011), NUCLE (Dahlmeier et al., 2013), and JFLEG (Napoles et al., 2017), are built from student essays written by non-native English learners. After realizing the flaw of the limited text domain, researchers propose GMEG (Napoles et al., 2019) and CWEB (Flachs et al., 2020), two new datasets that broaden the target domain of English GEC to native speakers' daily writing.\nEarly CGEC work also primarily constructs datasets from learner essays, including NLPCC18 (Zhao et al., 2018), CGED (Rao et al., 2018(Rao et al., , 2020)), YACLC (Wang et al., 2021), and MuCGEC (Zhang et al., 2022a). Concurrently with our work, some newly released CGEC datasets take native writing domains into account. CCTC (Wang et al., 2022) annotates 1,500 web documents written by native speakers from the WuDaoCorpora (Yuan et al., 2021). FCGEC (Xu et al., 2022) mainly consists of sentences from multi-choice questions in Chinese examinations. 
Another work, NaCGEC (Ma et al., 2022), collects data from Chinese examinations and news sites.\nTo the best of our knowledge, NaSGEC is the first CGEC dataset that annotates texts from multiple native domains under a unified scheme, which enables us to perform domain-wise experiments and analysis in CGEC for the first time. Domain Adaptation. Domain adaptation has been extensively studied in various NLP tasks (Ramponi and Plank, 2020), such as machine trans-lation (Chu and Wang, 2018;Jiang et al., 2020;Pham et al., 2021), syntax parsing (Li et al., 2019;Yang et al., 2022), and information extraction (Chen and Qian, 2021;Lekhtman et al., 2021).\nCompared with other fields, research on domain adaptation for GEC is under-explored. Existing studies lie in adapting GEC models to a specific first language or proficiency level of the second language learners (Chollampatt et al., 2016;Nadejde and Tetreault, 2019). In this work, we build a multi-domain CGEC dataset from different writing scenarios and conduct basic cross-domain experiments, which can promote related research. We believe this is a valuable research direction for GEC even in the Large Language Model era (Fang et al., 2023;Coyne and Sakaguchi, 2023;Wu et al., 2023;Zhang et al., 2023)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper presents NaSGEC, a new multi-domain native CGEC dataset, which consists of 12,500 sentences from three representative native domains. We clearly describe the construction process and perform detailed data analysis. We conduct benchmark experiments with the SOTA BART-based CGEC model and two kinds of training data. We also launch domain transfer experiments and devise domain shift indicators, in order to have a clearer understanding of our domains. We hope NaSGEC can spur future work on cross-domain GEC evaluation, domain adaptation for GEC, and more." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "We think the limitations of our work are three-fold.\n(1) As discussed in Section 2.1, we employ existing CGEC models to select sentences for annotation when building the MEDIA and THESIS domains of NaSGEC. Although this reduces annotation costs, it inevitably introduces biases into our dataset. For instance, the proportion of complex syntax-or semantic-related errors may be lower than that in reality, since existing CGEC models fail to identify them.\nNote that although we manage to mitigate such biases by voting with multiple models, this issue still exists. Future work should explore how to automatically mine erroneous sentences from a low error-density domain with minimal biases.\n(2) The current size of our dataset is relatively small. We will continuously collect more data from more diverse domains. Compared with other domains, THESIS has a much smaller data size (1.5k), as authorized papers are hard to obtain. In the future, we plan to cooperate with universities and thus accumulate more authorized data to enrich this domain.\n(3) Based on our multi-domain NaSGEC, we have reported and analyzed cross-domain performance preliminarily. However, besides fine-tuning with small-scale data in the target domain, many other potentially helpful domain adaptation techniques can be tried. We believe cross-domain GEC is a valuable research topic and encourage future work to study it with NaSGEC." 
}, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "Data license. For the EXAM and MEDIA domains of NaSGEC, we only collect sentences from public corpora or websites. For the THESIS domain, we have obtained permission from the authors of dissertations.\nAnnotation payment. During annotation, all annotators/reviewers were paid according to their finished task numbers and quality. The average salaries for annotators and reviewers are about 25 and 34 RMB per hour, respectively." }, { "figure_ref": [ "fig_2" ], "heading": "A Annotation Tool", "publication_ref": [], "table_ref": [], "text": "We present the annotation interface of our annotation tool in Figure 3. Given a potentially erroneous sentence, the annotator can rewrite it in a text box if he/she finds this sentence contains errors. If the sentence is correct, the annotator can directly click the Error Free button and submit.\nSpecifically, when annotating the MEDIA and THESIS domains, we provide annotators with the context of each sentence. Because sentences in these domains are extracted from complete essays or dissertations, they may need cross-sentence information to correct. We ask our annotators to mark such sentences with the Need Context button to facilitate future study in document-level CGEC." }, { "figure_ref": [], "heading": "Annotation Interface", "publication_ref": [], "table_ref": [], "text": "TASK : 1 Original Sentence:\n!\"#$%&'()*+$,&-./01234/56789\nThe deepest impression was their last meeting, time lines intertwined and memories flowed. " }, { "figure_ref": [], "heading": "B Experimental Details", "publication_ref": [ "b26" ], "table_ref": [ "tab_11", "tab_3", "tab_6", "tab_10" ], "text": "We use the fairseq toolkit5 (Ott et al., 2019) to build our benchmark models. Our model is based on the large variant of the Chinese BART (Shao et al., 2021) 6 , which has about 400M parameters. Following Zhang et al. (2022b), we extend the original vocabulary of the Chinese BART to cover some common but missed Chinese characters and punctuation, e.g., Chinese quotation marks, which they find can greatly improve model performance.\nWe list detailed experimental hyper-parameter settings in Table 10. The total training time for using real learner data (about 1.35M sentence pairs) is about 10 hours. The total training time for using pseudo native data (about 100M sentence pairs) is about 7 days. Due to the limitation of time and computation resources, the benchmark results in Table 3 are reported over a single run. The finetuning time is about 20 minutes. All fine-tuning results in Table 5 and Table 9 are averaged over 3 runs with distinct random seeds." }, { "figure_ref": [], "heading": "C Results of the Seq2Edit Model", "publication_ref": [ "b16", "b21", "b0", "b25", "b47" ], "table_ref": [], "text": "Besides Seq2Seq-based models like BART (Lewis et al., 2020), there is another competitive CGEC paradigm called Seq2Edit. The Seq2Edit-based models first predict a sequence of edits, and then apply them to the erroneous sentence to conduct corrections (Malmi et al., 2019;Awasthi et al., 2019). Recently, Zhang et al. (2022a) adapt GEC-ToR (Omelianchuk et al., 2020), a widely-used Seq2Edit model in English, to Chinese and find it can achieve promising performance. Hence, we follow their efforts and test the ability of Chinese GECToR on NaSGEC, as shown in We can see that, in MEDIA and EXAM, Seq2Seq outperforms Seq2Edit substantially. 
However, in THESIS, Seq2Edit performs significantly better. We attribute this to Seq2Edit's natural ability to copy. Seq2Edit can directly copy tokens from the source sentence by predicting the Keep tag. In THESIS, there are many English words and technical terms, which Seq2Seq tends to mis-correct while Seq2Edit keeps unchanged. So Seq2Edit achieves a much higher precision in this domain. In view of this, we plan to enhance our BART-based benchmark models with the copy mechanism (Zhao et al., 2019) or other approaches in the future." }, { "figure_ref": [], "heading": "D Pseudo Data Generation", "publication_ref": [ "b42", "b9" ], "table_ref": [], "text": "We use rule-based corruption to generate largescale synthetic training data from clean native corpora. Specifically, we randomly select 100M sentences from the WuDao corpora (Yuan et al., 2021) 7 as the seed corpus, which is mainly composed of website articles written by native speakers. We select tokens for corruption with a probability of 0.05 and perform the following operations with corresponding probabilities (in parentheses):\n• Replacement (0.55): We replace the current token with another token in its confusion set (0.5) or a random token from the whole vocabulary (0.5).\n• Insertion (0.2): We insert the same token (0.5) or a random token from the whole vocabulary (0.5) before the current token • Deletion (0.2): We delete the current token.\n• Swap (0.05): We swap the current token and the token after it.\nFollowing Dai et al. (2022), we inject noises from both character and word granularity to achieve better performance, which means each sentence is segmented into either the word (0.5) or character (0.5) sequence before corruption. The word-level and character-level confusion sets are built considering phonetics and glyphs.\nWe also show the effect of the size of pseudo data in Figure 6. When the data size increases, the model performance continuously improves in the MEDIA and THESIS domains, whereas the model performance in the EXAM domain keeps low." }, { "figure_ref": [], "heading": "E Error Type Performance", "publication_ref": [], "table_ref": [], "text": "In In all domains, models repair redundant errors consistently well, as their corrections do not need to generate new content and are the easiest and most deterministic. In contrast, models encounter difficulties in handling word-order errors universally since such errors require long-range structural knowledge to correct.\nIn terms of substituted and missing errors, models exhibit divergent behaviours. The performance on substituted errors in MEDIA is very high, probably because they are often spelling and punctuation errors. However, as another realistic writing scene, THESIS has a much inferior performance on substituted errors due to the low correction precision. After studying cases, we find THESIS contains many English words (e.g., LSTM) and technical terms (e.g., 支持向量机, supporting vector machine), which usually cause miscorrection. Besides, the performance on substituted errors in EXAM is also quite low, owing to their complexity.\nConsidering missing errors, the model performs much better in MEDIA than others. As discussed before, we observe that a large proportion of missing errors in MEDIA is caused by missing punctuation, which well-trained models can easily handle. 
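The per-type scores discussed here, like the overall benchmark results, are reported as character-based precision, recall, and F0.5. The following is a minimal sketch of that computation; it simplifies to a single reference and assumes the span-level edits have already been extracted and merged, whereas the actual metric aligns each output against multiple references and keeps the best-scoring one.

```python
def prf(hyp_edits: set, ref_edits: set, beta: float = 0.5):
    """Edit-overlap precision, recall and F_beta (F0.5 weights precision twice as much as recall)."""
    tp = len(hyp_edits & ref_edits)                   # system edits that match the reference
    p = tp / len(hyp_edits) if hyp_edits else 1.0     # treat an empty output as fully precise
    r = tp / len(ref_edits) if ref_edits else 1.0
    if p == 0.0 and r == 0.0:
        return p, r, 0.0
    f = (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
    return p, r, f

# Example: the system proposes 3 edits, 2 of which match a reference with 4 gold edits:
# p = 0.67, r = 0.50, F0.5 ≈ 0.63 -- closer to precision than to recall, as intended.
```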
Domain: MEDIA Source 30日下午齐鲁晚报的一名读者报料称,南湖镇两个女孩泥水,正在医院抢救。 On the afternoon of the 30th a reader of the Qilu Evening News reported that two girls in Nanhu Town muddy water, and were being rescued in the hospital." }, { "figure_ref": [], "heading": "Ref. 1", "publication_ref": [], "table_ref": [], "text": "30日下午,齐鲁晚报的一名读者报料称,南湖镇两个女孩溺水,正在医院抢救。 On the afternoon of the 30th, a reader of the Qilu Evening News reported that two girls in Nanhu Town drowned, and were being rescued in the hospital." }, { "figure_ref": [], "heading": "Source", "publication_ref": [], "table_ref": [], "text": "应当注意的是,重音切记过多。过多则显示不了孰轻孰重。 It is worth noting that too much stress should be remembered. Too much stress can not show which is more important.\nRef " }, { "figure_ref": [], "heading": "Source word2vec的基本结构是一个输入隐藏输出的三层神经网络。", "publication_ref": [], "table_ref": [], "text": "The basic structure of word2vec is a three-layer neural network with input hidden output." }, { "figure_ref": [], "heading": "Ref. 1 word2vec的基本结构是一个包含输入层、隐藏层和输出层的三层神经网络。", "publication_ref": [], "table_ref": [], "text": "The basic structure of word2vec is a three-layer neural network including the input layer, hidden layer and output layer." }, { "figure_ref": [], "heading": "Ref. 2 word2vec的基本结构是一个由输入层、隐藏层和输出层组成的三层神经网络。", "publication_ref": [], "table_ref": [], "text": "The basic structure of word2vec is a three-layer neural network composed of the input layer, hidden layer and output layer. Domain: EXAM Source 止咳祛痰片,它里面的主要成分是远志、桔梗、贝母、氯化铵等配制而成的。 Zhike Qutan Tablet, the main components of which are mainly compounded of Milkwort, Platycodon grandiflorum, Fritillaria, Ammonium chloride, etc." }, { "figure_ref": [], "heading": "Ref. 1", "publication_ref": [], "table_ref": [], "text": "止咳祛痰片,它里面的主要成分是远志、桔梗、贝母、氯化铵等配制而成的。 Zhike Qutan Tablet, the main components of which are mainly compounded of Milkwort, Platycodon grandiflorum, Fritillaria, Ammonium chloride, etc." }, { "figure_ref": [], "heading": "Ref. 2", "publication_ref": [], "table_ref": [], "text": "止咳祛痰片,它里面的主要成分是远志、桔梗、贝母、氯化铵等配制而成的。 Zhike Qutan Tablet, the main components of which are mainly compounded of Milkwort, Platycodon grandiflorum, Fritillaria, Ammonium chloride, etc." }, { "figure_ref": [], "heading": "Source", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "同学们临走时总是忘记关灯。从这一件平凡的小事中,却说明了一个大问题。", "publication_ref": [], "table_ref": [], "text": "The students always forget to turn off the lights when they leave. From this trivial matter, shows a big problem." }, { "figure_ref": [], "heading": "Ref. 1 同学们临走时总是忘记关灯。从这一件平凡的小事中,我们却发现了一个大问题。", "publication_ref": [], "table_ref": [], "text": "The students always forget to turn off the lights when they leave. From this trivial matter, we found a big problem." }, { "figure_ref": [], "heading": "Ref. 2 同学们临走时总是忘记关灯。从这一件平凡的小事中,却说明了一个大问题。", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "The students always forget to turn off the lights when they leave. From This trivial matter shows a big problem.\nTable 13: Annotation examples in NaSGEC. \"Source\" denotes the source sentence, \"Ref\" denotes the reference." }, { "figure_ref": [], "heading": "F Annotation Examples", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "We show some real annotation examples from NaS-GEC in Table 13. We also present screenshots of all data sources of our domains in Figure 5." 
}, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank all anonymous reviewers and the meta reviewer for their insightful comments, which will definitely help us improve this work in the future. This work was supported by the National Natural Science Foundation of China (Grant No. 62176173) and Alibaba Group through Alibaba Innovative Research Program, and also supported by Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions." } ]
10.1007/978-3-319-99501-4_41
[ { "authors": "Abhijeet Awasthi; Sunita Sarawagi; Rasna Goyal; Sabyasachi Ghosh; Vihari Piratla", "journal": "", "ref_id": "b0", "title": "Parallel iterative edit models for local sequence transduction", "year": "2019" }, { "authors": "Christopher Bryant; Mariano Felice; Øistein E Andersen; Ted Briscoe", "journal": "", "ref_id": "b1", "title": "The BEA-2019 shared task on grammatical error correction", "year": "2019" }, { "authors": "Christopher Bryant; Zheng Yuan; Muhammad ; Reza Qorib; Hannan Cao; Hwee Tou Ng; Ted Briscoe", "journal": "", "ref_id": "b2", "title": "Grammatical error correction: A survey of the state of the art", "year": "2022" }, { "authors": "Wanxiang Che; Zhenghua Li; Ting Liu", "journal": "", "ref_id": "b3", "title": "LTP: A Chinese language technology platform", "year": "2010" }, { "authors": "Zhuang Chen; Tieyun Qian", "journal": "", "ref_id": "b4", "title": "Bridge-based active domain adaptation for aspect term extraction", "year": "2021" }, { "authors": "Shamil Chollampatt; Duc Tam Hoang; Hwee Tou Ng", "journal": "", "ref_id": "b5", "title": "Adapting grammatical error correction based on the native language of writers with neural network joint models", "year": "2016" }, { "authors": "Chenhui Chu; Rui Wang", "journal": "", "ref_id": "b6", "title": "A survey of domain adaptation for neural machine translation", "year": "2018" }, { "authors": "Steven Coyne; Keisuke Sakaguchi", "journal": "", "ref_id": "b7", "title": "An analysis of gpt-3's performance in grammatical error correction", "year": "2023" }, { "authors": "Daniel Dahlmeier; Hwee Tou Ng; Mei Siew; Wu", "journal": "", "ref_id": "b8", "title": "Building a large annotated corpus of learner English: The nus corpus of learner English", "year": "2013" }, { "authors": "Yong Dai; Linyang Li; Cong Zhou; Zhangyin Feng; Enbo Zhao; Xipeng Qiu; Piji Li; Duyu Tang", "journal": "", "ref_id": "b9", "title": "Is whole word masking always better for Chinese BERT?\": Probing on Chinese grammatical error correction", "year": "2022" }, { "authors": "Tao Fang; Shu Yang; Kaixin Lan; Derek F Wong; Jinpeng Hu; Lidia S Chao; Yue Zhang", "journal": "", "ref_id": "b10", "title": "Is chatgpt a highly fluent grammatical error correction system? 
a comprehensive evaluation", "year": "2023" }, { "authors": "Simon Flachs; Ophélie Lacroix; Helen Yannakoudakis; Marek Rei; Anders Søgaard", "journal": "", "ref_id": "b11", "title": "Grammatical error correction in low error density domains: a new benchmark and analyses", "year": "2020" }, { "authors": "Haoming Jiang; Chen Liang; Chong Wang; Tuo Zhao", "journal": "", "ref_id": "b12", "title": "Multi-domain neural machine translation with word-level adaptive layer-wise domain mixing", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b13", "title": "Adam: a method for stochastic optimization", "year": "2014" }, { "authors": "Solomon Kullback; Richard A Leibler", "journal": "The annals of mathematical statistics", "ref_id": "b14", "title": "On information and sufficiency", "year": "1951" }, { "authors": "Entony Lekhtman; Yftah Ziser; Roi Reichart", "journal": "", "ref_id": "b15", "title": "DILBERT: Customized pre-training for domain adaptation with category shift, with an application to aspect extraction", "year": "2021" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b16", "title": "BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Jiquan Li; Junliang Guo; Yongxin Zhu; Xin Sheng; Deqiang Jiang; Bo Ren; Linli Xu; ; ", "journal": "", "ref_id": "b17", "title": "Sequenceto-action: Grammatical error correction with action guided sequence generation", "year": "2022" }, { "authors": "Yudong Li; Yuqing Zhang; Zhe Zhao; Linlin Shen; Liu Weijie; Mao Weiquan; Zhang Hui", "journal": "", "ref_id": "b18", "title": "CSL: A Large-scale Chinese Scientific Literature Dataset", "year": "2022" }, { "authors": "Zhenghua Li; Xue Peng; Min Zhang; Rui Wang; Luo Si", "journal": "", "ref_id": "b19", "title": "Semi-supervised domain adaptation for dependency parsing", "year": "2019" }, { "authors": "Shirong Ma; Yinghui Li; Rongyi Sun; Qingyu Zhou; Shulin Huang; Dingchao Zhang; Li Yangning; Ruiyang Liu; Zhongli Li; Yunbo Cao; Haitao Zheng; Ying Shen", "journal": "", "ref_id": "b20", "title": "Linguistic rules-based corpus generation for native Chinese grammatical error correction", "year": "2022" }, { "authors": "Eric Malmi; Sebastian Krause; Sascha Rothe; Daniil Mirylenka; Aliaksei Severyn", "journal": "", "ref_id": "b21", "title": "Encode, tag, realize: high-precision text editing", "year": "2019" }, { "authors": "Maria Nadejde; Joel R Tetreault", "journal": "", "ref_id": "b22", "title": "Personalizing grammatical error correction: Adaptation to proficiency level and L1", "year": "2019" }, { "authors": "Courtney Napoles; Maria Nȃdejde; Joel Tetreault", "journal": "TACL", "ref_id": "b23", "title": "Enabling robust grammatical error correction in new domains: data sets, metrics, and analyses", "year": "2019" }, { "authors": "Courtney Napoles; Keisuke Sakaguchi; Joel Tetreault", "journal": "", "ref_id": "b24", "title": "JFLEG: a fluency corpus and benchmark for grammatical error correction", "year": "2017" }, { "authors": "Kostiantyn Omelianchuk; Vitaliy Atrasevych; Artem Chernodub; Oleksandr Skurzhanskyi", "journal": "", "ref_id": "b25", "title": "GECToR-grammatical error correction: tag, not rewrite", "year": "2020" }, { "authors": "Myle Ott; Sergey Edunov; Alexei Baevski; Angela Fan; Sam Gross; Nathan Ng; David Grangier; Michael Auli", "journal": "", "ref_id": 
"b26", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "year": "2019" }, { "authors": "Minh Quang Pham; Josep ; Maria Crego; François Yvon", "journal": "TACL", "ref_id": "b27", "title": "Revisiting multi-domain machine translation", "year": "2021" }, { "authors": "Alan Ramponi; Barbara Plank", "journal": "", "ref_id": "b28", "title": "Neural unsupervised domain adaptation in NLP -A survey", "year": "2020" }, { "authors": "Gaoqi Rao; Qi Gong; Baolin Zhang; Endong Xun", "journal": "", "ref_id": "b29", "title": "Overview of NLPTEA-2018 share task Chinese grammatical error diagnosis", "year": "2018" }, { "authors": "Gaoqi Rao; Erhong Yang; Baolin Zhang", "journal": "", "ref_id": "b30", "title": "Overview of NLPTEA-2020 shared task for Chinese grammatical error diagnosis", "year": "2020" }, { "authors": "Brian Richards", "journal": "Journal of Child Language", "ref_id": "b31", "title": "Type/token ratios: What do they really tell us", "year": "1987" }, { "authors": "Keisuke Sakaguchi; Courtney Napoles; Matt Post; Joel Tetreault", "journal": "TACL", "ref_id": "b32", "title": "Reassessing the goals of grammatical error correction: fluency instead of grammaticality", "year": "2016" }, { "authors": "Yunfan Shao; Zhichao Geng; Yitao Liu; Junqi Dai; Fei Yang; Li Zhe; Hujun Bao; Xipeng Qiu", "journal": "", "ref_id": "b33", "title": "CPT: a pre-trained unbalanced transformer for both Chinese language understanding and generation", "year": "2021" }, { "authors": "Christian Szegedy; Vincent Vanhoucke; Sergey Ioffe; Jon Shlens; Zbigniew Wojna", "journal": "", "ref_id": "b34", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "Baoxin Wang; Xingyi Duan; Dayong Wu; Wanxiang Che; Zhigang Chen; Guoping Hu", "journal": "", "ref_id": "b35", "title": "CCTC: A cross-sentence Chinese text correction dataset for native speakers", "year": "2022" }, { "authors": "Yingying Wang; Cunliang Kong; Liner Yang; Yijun Wang; Xiaorong Lu; Renfen Hu; Shan He; Zhenghao Liu; Yun Chen; Erhong Yang; Maosong Sun", "journal": "", "ref_id": "b36", "title": "YACLC: a Chinese learner corpus with multidimensional annotation", "year": "2021" }, { "authors": "Haoran Wu; Wenxuan Wang; Yuxuan Wan; Wenxiang Jiao; Michael Lyu", "journal": "", "ref_id": "b37", "title": "Chatgpt or grammarly? 
evaluating chatgpt on grammatical error correction benchmark", "year": "2023" }, { "authors": "Xiuyu Wu; Yunfang Wu", "journal": "", "ref_id": "b38", "title": "From spelling to grammar: A new framework for Chinese grammatical error correction", "year": "2022" }, { "authors": "Lvxiaowei Xu; Jian-Cheng Wu; Jiawei Peng; Jiayu Fu; Ming Cai", "journal": "", "ref_id": "b39", "title": "FCGEC: Fine-grained corpus for Chinese grammatical error correction", "year": "2022" }, { "authors": "Sen Yang; Leyang Cui; Ruoxi Ning; Di Wu; Yue Zhang", "journal": "", "ref_id": "b40", "title": "Challenges to open-domain constituency parsing", "year": "2022" }, { "authors": "Helen Yannakoudakis; Ted Briscoe; Ben Medlock", "journal": "", "ref_id": "b41", "title": "A new dataset and method for automatically grading ESOL texts", "year": "2011" }, { "authors": "Sha Yuan; Hanyu Zhao; Zhengxiao Du; Ming Ding; Xiao Liu; Yukuo Cen; Xu Zou; Zhilin Yang; Jie Tang", "journal": "AI Open", "ref_id": "b42", "title": "WuDaoCorpora: A super large-scale Chinese corpora for pre-training language models", "year": "2021" }, { "authors": "Baolin Zhang", "journal": "International Chinese Language Education", "ref_id": "b43", "title": "Features and functions of the HSK dynamic composition corpus (in Chinese)", "year": "2009" }, { "authors": "Yue Zhang; Leyang Cui; Deng Cai; Xinting Huang; Tao Fang; Wei Bi", "journal": "", "ref_id": "b44", "title": "Multi-task instruction tuning of llama for specific scenarios: A preliminary study on writing assistance", "year": "2023" }, { "authors": "Yue Zhang; Zhenghua Li; Zuyi Bao; Jiacheng Li; Bo Zhang; Chen Li; Fei Huang; Min Zhang; ; ", "journal": "", "ref_id": "b45", "title": "MuCGEC: a multi-reference multi-source evaluation dataset for Chinese grammatical error correction", "year": "2022" }, { "authors": "Yue Zhang; Bo Zhang; Zhenghua Li; Zuyi Bao; Chen Li; Min Zhang", "journal": "", "ref_id": "b46", "title": "SynGEC: Syntax-enhanced grammatical error correction with a tailored gecoriented parser", "year": "2022" }, { "authors": "Wei Zhao; Liang Wang; Kewei Shen; Ruoyu Jia; Jingming Liu", "journal": "", "ref_id": "b47", "title": "Improving grammatical error correction via pre-training a copy-augmented architecture with unlabeled data", "year": "2019" }, { "authors": "Yuanyuan Zhao; Nan Jiang; Weiwei Sun; Xiaojun Wan", "journal": "", "ref_id": "b48", "title": "Overview of the NLPCC 2018 shared task: grammatical error correction", "year": "2018" } ]
[]
NaSGEC: a Multi-Domain Chinese Grammatical Error Correction Dataset from Native Speaker Texts
We introduce NaSGEC, a new dataset to facilitate research on Chinese grammatical error correction (CGEC) for native speaker texts from multiple domains. Previous CGEC research primarily focuses on correcting texts from a single domain, especially learner essays. To broaden the target domain, we annotate multiple references for 12,500 sentences from three native domains, i.e., social media, scientific writing, and examination. We provide solid benchmark results for NaSGEC by employing cutting-edge CGEC models and different training data. We further perform detailed analyses of the connections and gaps between our domains from both empirical and statistical views. We hope this work can inspire future studies on an important but under-explored direction: cross-domain GEC.
Yue Zhang; Bo Zhang; Haochen Jiang; Zhenghua Li; Chen Li; Fei Huang; Min Zhang
[ { "figure_caption": "Figure 1 :1Figure 1: The construction procedure of NaSGEC.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The distributions of 4 kinds of error in 3 domains of NaSGEC and other CGEC datasets.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Our annotation interface.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 Figure 4 :44Figure 4 shows our review interface. The reviewer can choose whether accept each submission by clicking the check box before it. Considering other valid answers may be missed by annotators, the reviewer can also click the Add button to input a new correction for supplementary.", "figure_data": "", "figure_id": "fig_3", "figure_label": "44", "figure_type": "figure" }, { "figure_caption": "Figure 5 :Figure 6 :56Figure 5: The screenshots of data sources for our 3 domains.", "figure_data": "", "figure_id": "fig_4", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "A native CGEC example with two references from the THESIS domain of NaSGEC.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "DatasetWriter #Sent. #Err. Sent. (Perc.) Avg. Length Avg. Edits Avg. Refs Avg. NEs Type-Token NLPCC18(Zhao et al., 2018) ", "figure_data": "Learner 2,0001,983 (99.2%)29.72.01.10.390.43MuCGEC (Zhang et al., 2022a) Learner 7,0636,544 (92.7%)38.53.22.30.380.42CCTC (Wang et al., 2022)Native25,2072,331 (9.3%)41.81.01.00.680.53FCGEC (Xu et al., 2022)Native41,34022,517 (54.6%)53.11.51.71.910.49NaSGEC (MEDIA)Native4,0002,605 (65.2%)49.01.81.40.790.55NaSGEC (THESIS)Native1,5001,050 (70.0%)60.51.91.50.670.45NaSGEC (EXAM)Native7,0004,849 (69.3%)55.91.41.71.000.51NaSGECNative12,5008,504 (68.0%)54.31.61.60.890.52", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Dataset statistics, including the writer, the number of sentences (#Sent.), the number and percentage of erroneous sentences (#Err. Sent. (Perc.)), the average length (characters) of sentences (Avg. Length), the average number of edits per reference (Avg. Edits), the average number of references (Avg. Refs), the average number of named entities per sentence (Avg. NEs, extracted by the LTP toolkit(Che et al., 2010)), the average ratio of vocabulary size by the total number of tokens (Type-token, calculated followingFlachs et al. (2020)).", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Real Learner 35.96 29.15 34.35 24.16 34.06 25.65 23.01 11.31 19.06 27.71 24.84 27.08 Pseudo Native 53.39 29.17 45.79 30.86 33.52 31.15 9.78 2.60 6.30 31.34 21.76 28.80 Pseudo Native ⇒ Real Learner 38.37 31.16 36.67 25.67 35.09 27.13 24.48 11.59 20.02 29.51 25.95 28.72 Real Learner ⇒ Pseudo Native 51.90 26.20 43.39 31.61 31.97 31.87 10.77 2.52 6.51 31.43 20.23 28.29 Benchmark results on NaSGEC. \"Pseudo Native ⇒ Real Learner\" means that we first train the model on pseudo native data, then on real learner data. 
The same goes for \"Real Learner ⇒ Pseudo Native\".", "figure_data": "MEDIATHESISEXAMAveragePRF 0.5PRF 0.5PRF 0.5PRF 0.5", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Data split statistics of NaSGEC.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "To facilitate fine-tuning, we split data into training/dev/test sets. The split statistics are listed in Table 4. For each domain, we select the Baseline 53.77/28.24/45.54 28.39/33.15/29.23 21.88/9.83/17.57 MEDIA 61.35/42.72/56.43 31.96/42.29/33.60 20.85/7.17/15.09 THESIS 52.65/33.40/47.21 34.96/43.96/36.45 20.61/8.54/16.07 EXAM 49.16/24.74/41.06 27.93/31.58/28.59 48.29/24.23/40.29 Results of transfer experiments on NaSGEC.", "figure_data": "Test →MEDIATHESISEXAMTrain ↓P/R/F 0.5P/R/F 0.5P/R/F 0.5", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Vocabulary Overlap (VO), Type Distribution Similarity (TDS), and Error Pattern Overlap (EPO) between training and test sets from different domains of NaSGEC. Specifically, VO and EPO are averaged over 3 calculations.", "figure_data": "Target →MEDIA-testTHESIS-testEXAM-testSource ↓VO (%) TDS EPO (%) VO (%) TDS EPO (%) VO (%) TDS EPO (%)MEDIA-train65.030.00125.8463.130.05031.7563.100.1845.07THESIS-train56.470.02522.7775.730.00933.0565.610.1615.94EXAM-train62.970.2106.9466.330.13910.2968.300.00114.89", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Results of domain-aware data augmentation.", "figure_data": "MEDIATHESISPRF 0.5PRF 0.5Pretrained Baseline53.77 28.24 45.54 28.39 33.15 29.23+ style adaptation 54.31 29.79 46.63 29.09 34.91 30.09+ error adaptation 54.64 32.04 47.88 29.77 37.79 31.09+ both57.29 32.41 49.66 31.17 43.17 33.00Finetuned Baseline61.35 42.72 56.43 34.96 43.96 36.45+ style adaptation 61.49 43.08 56.65 35.27 44.71 36.83+ error adaptation 61.72 43.65 57.00 35.12 45.30 36.77+ both62.02 43.92 57.30 36.01 46.24 37.68", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Vocabulary Overlap (VO), Type Distribution Similarity (TDS), and Error Pattern Overlap (EPO) from domains of NaSGEC to existing CGEC datasets. 
Specifically, VO and EPO are averaged over 3 calculations.", "figure_data": "Target →MuCGECCCTCFCGECSource ↓ VO (%) TDS EPO (%) VO (%) TDS EPO (%) VO (%) TDS EPO (%)MEDIA72.500.0315.7964.430.06542.2664.930.2763.98THESIS70.200.0456.4354.670.12940.0760.430.2295.99EXAM70.030.0787.3157.830.4278.4768.470.01013.26Test →MuCGECCCTCFCGECTrain ↓P/R/F 0.5P/R/F 0.5P/R/F 0.5Baseline 53.84/29.77/46.34 19.41/45.99/21.94 33.50/10.93/23.71MEDIA52.67/21.88/41.10 20.88/55.40/23.85 32.07/5.12/15.62THESIS 60.61/21.09/44.09 17.98/55.73/20.80 34.10/8.15/20.83EXAM57.06/25.41/45.68 16.73/45.34/19.15 50.00/32.32/45.07", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Results of transfer experiments from domains of NaSGEC to existing CGEC datasets.", "figure_data": "", "figure_id": "tab_10", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Our hyper-parameter settings.", "figure_data": "", "figure_id": "tab_11", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "BothBART and GECToR are trained on real learner training data described in Section 4.2.", "figure_data": "MEDIAPRF 0.5BART35.96 29.15 34.35GECToR 33.36 19.85 29.36THESISPRF 0.5BART24.16 34.06 25.65GECToR 42.29 18.20 33.44EXAMPRF 0.5BART23.01 11.31 19.06GECToR 20.93 8.80 16.41", "figure_id": "tab_12", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Experimental results of the Seq2Edit-based model (GECToR) compared with the Seq2Seq-based model (BART) on NaSGEC.", "figure_data": "", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "we evaluate the error type performance of each domain's best model on NaSGEC. The best model denotes the fine-tuned model achieving the highest F 0.5 score in Table 5. S 59.91/51.66/58.06 29.79/60.64/33.17 25.38/15.07/22.33 M 67.56/32.54/55.59 47.37/15.38/33.46 44.21/19.62/35.35 R 59.41/42.44/55.01 65.71/34.85/55.83 66.10/41.42/59.06 W 40.00/12.77/28.04 42.25/12.75/28.88 29.74/9.46/20.82", "figure_data": "MEDIATHESISEXAMP/R/F 0.5P/R/F 0.5P/R/F 0.5", "figure_id": "tab_14", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "The fine-grained performance of each domain's best model regarding error types. S: Substituted errors, M: Missing errors, R: Redundant errors, W: Word-order errors.", "figure_data": "", "figure_id": "tab_15", "figure_label": "12", "figure_type": "table" }, { "figure_caption": ". 1 应当注意的是,重音切忌过多。过多则显示不了孰轻孰重。 It is worth noting that too much stress should be avoided. Too much stress can not show which is more important. It is worth noting that avoiding too much stress should be remembered. Too much stress can not show which is more important. At present, the most widely used stemming method is the Poter-Stemmer algorithm, which is based on the suffix for glass. At present, the most widely used stemming method is the Poter-Stemmer algorithm, which is based on the suffix for stripping.", "figure_data": "Ref. 2应当注意的是,重音切记不要过多。过多则显示不了孰轻孰重。Domain: THESISSource目前应用最为广泛的词干提取方法为波特词干算法(Poter-Stemmer),它基于后缀进行玻璃。Ref. 1目前应用最为广泛的词干提取方法为波特词干算法(Poter-Stemmer),它基于后缀进行剥离。", "figure_id": "tab_16", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Data Source", "Citation": "(Yannakoudakis et al., 2011)", "Explanation": "The cited work is a dataset for English GEC that has been used as a source of data for research on the topic."}, {"Category": "Data Source", "Citation": "(Dahlmeier et al., 2013)", "Explanation": "The cited work is another dataset for English GEC that has been used as a data source for research in the field."}, {"Category": "Data Source", "Citation": "(Napoles et al., 2017)", "Explanation": "The cited work is a dataset for English GEC that has been utilized in research to gather data and build models."}, {"Category": "Data Source", "Citation": "(Bryant et al., 2019)", "Explanation": "The cited work is a dataset for English GEC that has been referenced in research to provide a data source for studies in the field."}, {"Category": "Data Source", "Citation": "(Zhao et al., 2018)", "Explanation": "The cited work is a dataset for Chinese GEC (CGEC) that has been used as a data source in research on the topic."}, {"Category": "Data Source", "Citation": "(Rao et al., 2018)", "Explanation": "The cited work is a dataset for Chinese GEC (CGED) that has been used as a data source in research on the topic."}, {"Category": "Data Source", "Citation": "(Wang et al., 2021)", "Explanation": "The cited work is a dataset for Chinese GEC (YACLC) that has been used as a data source in research on the topic."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2022a)", "Explanation": "The cited work is a dataset for Chinese GEC (MuCGEC) that has been used as a data source in research on the topic."}, {"Category": "Supporting Evidence", "Citation": "(Napoles et al., 2019)", "Explanation": "The cited work, GMEG, is mentioned as a native dataset that targets texts from multiple domains, which is important for the construction and evaluation of CGEC approaches."}, {"Category": "Supporting Evidence", "Citation": "(Flachs et al., 2020)", "Explanation": "The cited work, CWEB, is also mentioned as a native dataset that contributes to the research on CGEC."}, {"Category": "Supporting Evidence", "Citation": "(Wang et al., 2022)", "Explanation": "The cited work, CCTC, is the first native CGEC dataset composed of web documents written by natives, which is a significant contribution to the field of CGEC."}, {"Category": "Supporting Evidence", "Citation": "(Xu et al., 2022)", "Explanation": "The cited work, FCGEC, collects sentences from the questions in Chinese examinations, providing another valuable dataset for CGEC research."}, {"Category": "Supporting Evidence", "Citation": "(Li et al., 2022a)", "Explanation": "The cited work, Li et al., 2022a, is mentioned as a cutting-edge CGEC approach that is evaluated under the in-domain setting, providing insights into the current state of the art in CGEC research."}, {"Category": "Supporting Evidence", "Citation": "(Zhang et al., 2022b)", "Explanation": "The cited work, Zhang et al., 2022b, is also mentioned as a cutting-edge CGEC approach that is evaluated under the in-domain setting, further supporting the claim that in-domain evaluation is the current standard in CGEC research."}, {"Category": "Supporting Evidence", "Citation": "(Wu and Wu, 2022)", "Explanation": "The cited work, Wu and Wu, 2022, is another cutting-edge CGEC approach that is evaluated under the in-domain setting, further highlighting the need for out-of-domain evaluation in CGEC research."}, {"Category": "Methodological Basis", "Citation": "(Sakaguchi et al., 2016)", "Explanation": "The cited work by Sakaguchi et al. 
(2016) has established the direct rewriting annotation paradigm, which the citing paper adopts in their annotation process to improve the efficiency of GEC."}, {"Category": "Methodological Basis", "Citation": "(Napoles et al., 2017)", "Explanation": "The cited work by Napoles et al. (2017) has also contributed to the direct rewriting annotation paradigm, which the citing paper uses in their annotation process to improve the efficiency of GEC."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2022a)", "Explanation": "The cited work by Zhang et al. (2022a) has provided the annotation guidelines for MuCGEC, which the citing paper extends to accommodate errors made by native speakers in their annotation process."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2022a)", "Explanation": "The cited work is used to extract the edits of references and original sentences in the analysis of NaSGEC, providing a data source for the study conducted in the citing paper."}, {"Category": "Data Source", "Citation": "(Richards, 1987)", "Explanation": "The cited work by Richards (1987) is used to provide the type-token ratio metric for analyzing the lexical variety in the datasets mentioned in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Zhao et al., 2018)", "Explanation": "The cited work by Zhao et al. (2018) is used to present the statistics of a mainstream learner dataset, which the citing paper extends by providing a comparison with the NaSGEC dataset."}, {"Category": "Extension or Continuation", "Citation": "(Zhang et al., 2022a)", "Explanation": "The cited work by Zhang et al. (2022a) is used to present the statistics of another mainstream learner dataset, which the citing paper extends by providing a comparison with the NaSGEC dataset."}, {"Category": "Data Source", "Citation": "(Wang et al., 2022)", "Explanation": "The cited work by Wang et al. (2022) is used to present the statistics of a newly published native dataset, which the citing paper uses to compare with the NaSGEC dataset."}, {"Category": "Data Source", "Citation": "(Xu et al., 2022)", "Explanation": "The cited work by Xu et al. (2022) is used to present the statistics of another newly published native dataset, which the citing paper uses to compare with the NaSGEC dataset."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2022a)", "Explanation": "The cited work by Zhang et al. (2022a) provides a character-based evaluation metric that the citing paper adopts to measure the performance of their system in correcting errors in text."}, {"Category": "Data Source", "Citation": "(Zhang, 2009)", "Explanation": "The cited work by Zhang (2009) is the source of a public available large-scale human-annotated CGEC training dataset called HSK, which the citing paper utilizes in their research on errors in learner essays."}, {"Category": "Data Source", "Citation": "(Zhao et al., 2018)", "Explanation": "The cited work by Zhao et al. (2018) is the source of another public available large-scale human-annotated CGEC training dataset called Lang8, which the citing paper combines with the HSK dataset to form a larger training set for their research on errors in learner essays."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2022a)", "Explanation": "The cited work by Zhang et al. 
(2022a) provides a method of randomly selecting a dev set of 5k sentence pairs from the combined HSK and Lang8 training dataset, which the citing paper follows in their own research on errors in learner essays."}, {"Category": "Extension or Continuation", "Citation": "(Yuan et al., 2021)", "Explanation": "The cited work by Yuan et al. (2021) is the source of the WuDaoCorpora dataset, which the citing paper uses to extract clean sentences for the creation of synthetic native training data based on heuristic rules. This extension or continuation of the research aims to address the lack of large-scale training data for errors made by native speakers."}, {"Category": "Supporting Evidence", "Citation": "(Zhang et al., 2022a)", "Explanation": "The cited work provides a tool for error classification in GEC, which the citing paper uses to extract and classify errors in the data for further analysis and understanding of domain shifts in GEC."}, {"Category": "Methodological Basis", "Citation": "(Li et al., 2022b)", "Explanation": "The cited work provides the raw data used for the error adaptation in the data augmentation method proposed in the citing paper, which serves as a methodological basis for the study conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Yuan et al., 2021)", "Explanation": "The cited work, WuDaoCorpora, is used in the citing paper to provide a general domain dataset of 100k data for error adaptation, which is a continuation of the study on GEC domain adaptation based on NaSGEC."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2022a)", "Explanation": "The cited work by Zhang et al. serves as the source of the MuCGEC dataset, which is used for comparison with NaSGEC in the citing paper."}, {"Category": "Data Source", "Citation": "(Wang et al., 2022)", "Explanation": "The cited work, CCTC, is a native CGEC dataset that serves as the data source for the in-domain overlap ratio analysis in the citing paper."}, {"Category": "Data Source", "Citation": "(Xu et al., 2022)", "Explanation": "The cited work, FCGEC, is another native CGEC dataset that is also used as a data source for the in-domain overlap ratio analysis in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(Wang et al., 2022)", "Explanation": "The cited work, CCTC, is mentioned in the context of the in-domain overlap ratio analysis in the citing paper, indicating a possible extension or continuation of the research on native CGEC datasets."}, {"Category": "Extension or Continuation", "Citation": "(Xu et al., 2022)", "Explanation": "The cited work, FCGEC, is also mentioned in the context of the in-domain overlap ratio analysis in the citing paper, suggesting a possible extension or continuation of the research on native CGEC datasets."}, {"Category": "Data Source", "Citation": "(Wang et al., 2022)", "Explanation": "The cited work, CCTC, is a data source for the web documents used in the study of the citing paper."}, {"Category": "Data Source", "Citation": "(Yuan et al., 2021)", "Explanation": "The cited work, WuDaoCorpora, is a data source for the web documents used in the study of the citing paper."}, {"Category": "Data Source", "Citation": "(Xu et al., 2022)", "Explanation": "The cited work, FCGEC, is a data source for the sentences used in the study of the citing paper."}, {"Category": "Data Source", "Citation": "(Ma et al., 2022)", "Explanation": "The cited work, NaCGEC, is a data source for the texts used in the study of the citing paper."}, 
{"Category": "Data Source", "Citation": "(Ramponi and Plank, 2020)", "Explanation": "The cited work is a data source for the study of domain adaptation in various NLP tasks."}, {"Category": "Data Source", "Citation": "(Chu and Wang, 2018)", "Explanation": "The cited work is a data source for the study of machine translation in domain adaptation."}, {"Category": "Data Source", "Citation": "(Jiang et al., 2020)", "Explanation": "The cited work is a data source for the study of machine translation in domain adaptation."}, {"Category": "Data Source", "Citation": "(Pham et al., 2021)", "Explanation": "The cited work is a data source for the study of machine translation in domain adaptation."}, {"Category": "Data Source", "Citation": "(Li et al., 2019)", "Explanation": "The cited work is a data source for the study of syntax parsing in domain adaptation."}, {"Category": "Data Source", "Citation": "(Yang et al., 2022)", "Explanation": "The cited work is a data source for the study of syntax parsing in domain adaptation."}, {"Category": "Data Source", "Citation": "(Chen and Qian, 2021)", "Explanation": "The cited work is a data source for the study of information extraction in domain adaptation."}, {"Category": "Data Source", "Citation": "(Lekhtman et al., 2021)", "Explanation": "The cited work is a data source for the study of information extraction in domain adaptation."}, {"Category": "Data Source", "Citation": "(Chollampatt et al., 2016)", "Explanation": "The cited work provides a specific first language and proficiency level of second language learners for adapting GEC models in the field of domain adaptation."}, {"Category": "Data Source", "Citation": "(Nadejde and Tetreault, 2019)", "Explanation": "The cited work contributes to the field of domain adaptation for GEC by providing a specific first language and proficiency level of second language learners in adapting GEC models."}, {"Category": "Extension or Continuation", "Citation": "(Fang et al., 2023)", "Explanation": "The cited work extends the research on GEC in the Large Language Model era by exploring the use of multi-domain CGEC datasets in different writing scenarios."}, {"Category": "Extension or Continuation", "Citation": "(Coyne and Sakaguchi, 2023)", "Explanation": "The cited work continues the research on GEC in the Large Language Model era by focusing on the use of multi-domain CGEC datasets in different writing scenarios."}, {"Category": "Extension or Continuation", "Citation": "(Wu et al., 2023)", "Explanation": "The cited work further extends the research on GEC in the Large Language Model era by exploring the use of multi-domain CGEC datasets in different writing scenarios."}, {"Category": "Extension or Continuation", "Citation": "(Zhang et al., 2023)", "Explanation": "The cited work continues the research on GEC in the Large Language Model era by focusing on the use of multi-domain CGEC datasets in different writing scenarios."}, {"Category": "Data Source", "Citation": "(Shao et al., 2021)", "Explanation": "The cited work provides the large variant of the Chinese BART model used in the citing paper to build the benchmark models."}, {"Category": "Methodological Basis", "Citation": "(Zhang et al., 2022b)", "Explanation": "The cited work is cited to extend the original vocabulary of the Chinese BART to cover common Chinese characters and punctuation, which improves model performance in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "(Ott et al., 2019)", "Explanation": "The cited work is used to build 
the benchmark models in the citing paper using the fairseq toolkit, providing foundational data and methods for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Lewis et al., 2020)", "Explanation": "The cited work by Lewis et al. (2020) introduces the Seq2Seq-based model BART, which the citing paper adopts as a method for conducting CGEC in Chinese."}, {"Category": "Extension or Continuation", "Citation": "(Malmi et al., 2019)", "Explanation": "The cited work by Malmi et al. (2019) introduces the Seq2Edit model, which the citing paper builds upon to develop a new GEC-ToR model for Chinese."}, {"Category": "Extension or Continuation", "Citation": "(Awasthi et al., 2019)", "Explanation": "The cited work by Awasthi et al. (2019) also contributes to the development of the Seq2Edit model, which the citing paper further extends to test its ability in Chinese."}, {"Category": "Data Source", "Citation": "(Zhang et al., 2022a)", "Explanation": "The cited work by Zhang et al. (2022a) provides the Chinese GEC-ToR model that the citing paper uses in its study of NaSGEC in Chinese."}, {"Category": "Supporting Evidence", "Citation": "(Omelianchuk et al., 2020)", "Explanation": "The cited work by Omelianchuk et al. (2020) introduces the GEC-ToR model, which the citing paper cites to support the use of Seq2Edit in Chinese for NaSGEC."}, {"Category": "Methodological Basis", "Citation": "(Zhao et al., 2019)", "Explanation": "The cited work by Zhao et al. (2019) introduces the copy mechanism, which the citing paper plans to adopt in their future research to enhance the precision of their BART-based benchmark models."}, {"Category": "Data Source", "Citation": "(Yuan et al., 2021)", "Explanation": "The cited work provides the seed corpus for the rule-based corruption method used in the citing paper to generate synthetic training data from clean native corpora."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b27", "b25", "b28", "b13", "b23", "b28", "b31", "b12", "b1", "b32", "b15", "b13", "b29" ], "table_ref": [], "text": "The development of quality document encoders is of paramount importance for several NLP applications, such as long document classification tasks with biomedical (Johnson et al., 2016), or legal (Chalkidis et al., 2022b) documents, as well as information retrieval tasks (Chalkidis et al., 2021a;Rabelo et al., 2022;Nentidis et al., 2022). Despite the recent advances in the development of transformer-based sentence encoders (Reimers and Gurevych, 2019;Gao et al., 2021;Liu et al., 2021;Klein and Nabi, 2022a) via unsupervised contrastive learning, little do we know about the potential of neural document-level encoders targeting the encoding of long documents (Ks of words).\nTraining Corpus Average Text Length Reimers and Gurevych (2019) The computational complexity of standard Transformer-based models (Vaswani et al., 2017;Devlin et al., 2019) (PLMs) given the quadratic self-attention operations poses challenges in encoding long documents. To address this computational problem, researchers have introduced efficient sparse attention networks, such as Longformer (Beltagy et al., 2020), BigBird (Zaheer et al., 2020), and Hierarchical Transformers (Chalkidis et al., 2022a). Nonetheless, fine-tuning such models in downstream tasks is computationally expensive; hence we need to develop efficient document encoders that produce quality document representations that can be used for downstream tasks out-ofthe-box, i.e., without fully (end-to-end) fine-tuning the pre-trained encoder, if not at all. Besides computational complexity, building good representation models for encoding long documents can be challenging due to document length. Long documents contain more information than shorter documents, making it more difficult to capture all the relevant information in a fixed-size representation. In addition, long documents may have sections with different topics, which increases the complexity of encoding that usually leads to collapsing representations (Jing et al., 2022). More-over, long documents can be semantically incoherent, meaning that content may not be logically related or may contain irrelevant information. For these reasons, it is challenging to create a quality representation that captures the most important information in the document.\nTo the best of our knowledge, we are the first to explore the application of self-contrastive learning for long documents (Table 1). The contributions of our work are threefold:\n(i) We train Longfomer-based document encoders using a state-of-the-art self-contrastive learning method, SimCSE by Gao et al. (2021).\n(ii) We further enhance the quality of the latent representations using convex neural networks based on functional Bregman divergence. The network is optimized based on self-contrastive loss with divergence loss functions (Rezaei et al., 2021).\n(iii) We perform extensive experiments to highlight the empirical benefits of learning representation using unsupervised contrastive and our proposed enhanced self-contrastive divergence loss. We compare our method with baselines on three long document topic classification tasks from the legal and biomedical domain." 
}, { "figure_ref": [], "heading": "Related Work Document Encoders", "publication_ref": [ "b24", "b26", "b20", "b17", "b28", "b9", "b0", "b23", "b14", "b13" ], "table_ref": [], "text": "The need for quality document representations has always been an active topic of NLP research. Initial work on statistical NLP focused on representing documents as Bag of Words (BoW), in which direction TF-IDF representations were the standard for a long time. In the early days of deep learning in NLP, models developed to represent words with latent representations, such as Word2Vec (Mikolov et al., 2013), and GloVe (Pennington et al., 2014). Within this research domain, the use of word embedding centroids as document embeddings, and the development of the Doc2Vec (Le and Mikolov, 2014) model were proposed. Given the advanced compute needs to encode documents with neural networks, follow-up work mainly developed around sentence/paragraph-level representations, such as Skip Thoughts of Kiros et al. (2015), which relies on an RNN encoder. In the era of pre-trained Transformer-based language models, Reimers and Gurevych (2019) proposed the Sentence Transformers framework in order to develop quality dense sentence representations. Many works followed a similar direction relying on a selfsupervised contrastive learning setup, where most ideas are adopted mainly from Computer Vision literature (Chen et al., 2020;Bardes et al., 2022).\nSelf-Supervised Contrastive Learning in NLP Several self-contrastive methods have been proposed so far for NLP applications. To name a few: MirrorRoBERTa (Liu et al., 2021), SCD (Klein and Nabi, 2022b), miCSE (Klein and Nabi, 2022a), DeCluTR (Giorgi et al., 2021), and SimCSE (Gao et al., 2021) -described in Section 3.2-, all create augmented versions (views) of the original sentences using varying dropout and comparing their similarity. The application of such methods is limited to short sentences and relevant downstream tasks, e.g., sentence similarity, while these methods do not use any additional component to maximize diversity in latent feature representations." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Base Model -Longformer", "publication_ref": [ "b1" ], "table_ref": [], "text": "We experiment with Longformer (Beltagy et al., 2020), a well-known and relatively simple sparseattention Transformer. Longformer uses two sets of attention, namely sliding window attention and global attention. Instead of using the full attention mechanism, the sliding-window attention gives local context higher importance. Given a fixed window size w, each token attends to 1 2 w tokens on the respective side. The required memory for this is O(n × w). Sliding-window attention is combined with global attention from/to the [CLS] token." }, { "figure_ref": [], "heading": "Domain-Adapted Longformer:", "publication_ref": [ "b1", "b5", "b22" ], "table_ref": [], "text": "As a baseline, we use Longformer DA models which are Longformer models warm-started from domain-specific PLMs.\nTo do so, we clone the original positional embeddings 8× to encode sequences up to 4096 tokens. The rest of the parameters (word embeddings, transformers layers) can be directly transferred, with the exception of Longformer's global attention K, Q, V matrices, which we warm-start from the standard (local) attention matrices, following Beltagy et al. (2020). 
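A minimal sketch of the warm-start recipe just described: positional embeddings are cloned 8x to reach 4,096 positions, and the global-attention K, Q, V matrices are initialized from the standard (local) attention matrices. The state-dict key names and patterns below are hypothetical placeholders rather than the exact parameter names of Legal-BERT, BioBERT, or the Longformer implementation.

```python
import torch

def warm_start_longformer(bert_state, n_copies=8):
    """Turn a BERT-style state dict into a Longformer-style one (illustrative sketch only)."""
    long_state = dict(bert_state)  # word embeddings and transformer layers transfer directly

    # Clone the 512-position embedding matrix n_copies times, giving 4096 positions.
    pos_key = "embeddings.position_embeddings.weight"            # hypothetical key name
    long_state[pos_key] = bert_state[pos_key].repeat(n_copies, 1)

    # Warm-start the global-attention K, Q, V matrices from the local attention ones.
    for name, tensor in bert_state.items():
        for proj in ("query", "key", "value"):
            if f".{proj}." in name and "attention" in name:      # hypothetical key pattern
                long_state[name.replace(f".{proj}.", f".{proj}_global.")] = tensor.clone()
    return long_state

# Toy check with fake parameters of BERT-base shape.
toy = {
    "embeddings.position_embeddings.weight": torch.randn(512, 768),
    "encoder.layer.0.attention.self.query.weight": torch.randn(768, 768),
    "encoder.layer.0.attention.self.key.bias": torch.randn(768),
}
converted = warm_start_longformer(toy)
print(converted["embeddings.position_embeddings.weight"].shape)  # torch.Size([4096, 768])
print("encoder.layer.0.attention.self.query_global.weight" in converted)  # True
```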
All parameters are updated during training.\nFor legal applications (Section 4.1), we warmstart our models from Legal-BERT (Chalkidis et al., 2020), a BERT model pre-trained on diverse English legal corpora, while for the biomedical one, we use BioBERT (Lee et al., 2020), a BERT model pre-trained on biomedical corpora. " }, { "figure_ref": [ "fig_0" ], "heading": "Self-supervised Contrastive Learning", "publication_ref": [ "b13" ], "table_ref": [], "text": "To use our Longformer DA for self-supervised contrastive learning, we need to use a Siamese network architecture (left part of Figure 1). Assume we have mini-batch D = {(x i )} N i=1 of N documents. As positive pairs (x i , x i + ), the method uses augmented (noised) versions of the input feature x i . As negative pairs (x i , x i -), all remaining N-1 documents in a mini-batch are used. The augmentations take place in the encoder block f θ of the model. θ is the parameterization of the encoder. We use the SimCSE (Gao et al., 2021) framework, in which case the encoder f θ is a pre-trained language model, Longformer DA in our case, and augmentation comes in the form of varying token dropout (masking) rate (τ). The loss objective used in the unsupervised version of SimCSE is the multiple negatives ranking loss (ℓ mnr ):\nℓ mnr = - 1 n n i=1 exp ( f (s i , si )) j exp f s i , s j (1)\nwhere si is the positive augmented input sequence in the mini-batch, and s j are the negatives. Multiple negatives ranking loss takes a pair of representations (s i , si ) and compares these with negative samples in a mini-batch. In our experiments, we train such models, dubbed Longformer DA+SimCSE ." }, { "figure_ref": [ "fig_0" ], "heading": "Bregman Divergence Loss", "publication_ref": [ "b29", "b16" ], "table_ref": [], "text": "We complement this method with an additional ensemble of subnetworks optimized by functional Bregman divergence aiming to improve the output document latent representations further. Specifically, the embedding of self-contrastive networks further passes to k-independent subnetworks to promote diversity in feature representations.\nThe s i and s j vectors from the contrastive framework are mapped to k-independent ensemble of neural networks that are optimized using functional Bregman divergence. \nG(s a , s b ) = ( s a (x)w ŝa (x)dx + ϵ ŝa ) - ( s a (x)w ŝb (x)dx + ϵ ŝb )(4)\nEach sub-network produces a separate output (right part of Figure 1). The divergence is then computed using the output at point ŝa and ŝb using the projections as input. We convert the divergence to similarity using a Gaussian kernel as done by Rezaei et al. (2021).1 \nψ = exp -G/2σ 2 (5)\nThe mini-batch has size N. For empirical distributions s a α(z i ), s b (z j ) where i and j are the respective index for the two branches and z the projector representation, we have:\nℓ Bregman(s a (z i ),s b (z j )) = -log(exp ψ i, j N t=1 exp ψ i,k )(6)\nThe final objective function is computed on the combination of as follows: (SCOTUS). This is a single-label multi-class topic classification task, where given a SCOTUS opinion, the model has to predict the relevant area among 14 issue areas (labels).\nL Total = ℓ mnr + λ • ℓ Bregman (7) Method ECtHR SCOTUS MIMIC Avg. Training Efficiency µ-F 1 m-F 1 µ-F 1 m-F 1 µ-F 1 m-F 1 µ-F 1 Time (h)\nMIMIC (Johnson et al., 2016) dataset contains approx. 50k discharge summaries from US hospitals. Each summary is annotated with one or more codes (labels) from the ICD-9 hierarchy, which has 8 levels in total. 
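A self-contained sketch of the training objective described in this section: the multiple negatives ranking loss over two dropout-augmented views (Eq. 1), a kernelized divergence similarity computed by k independent sub-networks (Eqs. 5-6), and their weighted combination (Eq. 7). The sub-network architecture and the use of a squared-distance divergence in place of the learned functional Bregman divergence are simplifying assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubNetwork(nn.Module):
    """One member of the k-network ensemble: a small projection head (simplified stand-in)."""
    def __init__(self, dim=768, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.BatchNorm1d(hidden),
                                 nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, x):
        return self.net(x)

def mnr_loss(view_a, view_b, scale=20.0):
    """Multiple negatives ranking loss (Eq. 1): the matching document in the other view is
    the positive; all remaining in-batch documents act as negatives."""
    scores = scale * F.cosine_similarity(view_a.unsqueeze(1), view_b.unsqueeze(0), dim=-1)
    labels = torch.arange(view_a.size(0), device=view_a.device)
    return F.cross_entropy(scores, labels)

def divergence_loss(view_a, view_b, subnets, sigma=1.0):
    """Contrastive term over kernelized divergences (Eqs. 5-6), averaged over sub-networks.
    Squared Euclidean distance is used here as a simple Bregman-divergence stand-in."""
    loss = 0.0
    for net in subnets:
        za, zb = net(view_a), net(view_b)                 # (N, h) projections
        div = torch.cdist(za, zb, p=2) ** 2               # pairwise divergences G
        psi = torch.exp(-div / (2 * sigma ** 2))          # Gaussian kernel similarity (Eq. 5)
        labels = torch.arange(za.size(0), device=za.device)
        loss = loss + F.cross_entropy(psi, labels)        # softmax over psi rows (Eq. 6)
    return loss / len(subnets)

# Toy usage: two dropout-augmented views of the same 4 documents, combined as in Eq. 7.
torch.manual_seed(0)
emb_a, emb_b = torch.randn(4, 768), torch.randn(4, 768)
subnets = nn.ModuleList(SubNetwork() for _ in range(3))
total_loss = mnr_loss(emb_a, emb_b) + 0.1 * divergence_loss(emb_a, emb_b, subnets)
print(float(total_loss))
```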
We use the 1st level of ICD-9, including 19 categories, respectively. This is a multi-label topic classification task, where given the discharge summary, the model has to predict the relevant ICD-9 top-level codes (labels)." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [], "table_ref": [], "text": "To get insights into the quality of the learned representations out-of-the-box, we train classifiers using document embeddings as fixed (frozen) feature representations. We consider two linear classification settings: (i) Linear evaluation plugging a MLP classification head on top of the document embeddings;\n(ii) Linear evaluation plugging a linear classifier on top of the document embeddings." }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [ "b30", "b15" ], "table_ref": [ "tab_1", "tab_1", "tab_1", "tab_2", "tab_1", "tab_2" ], "text": "In Table 2, we present the results for all examined Longformer variants across the three examined datasets and two settings using macro-F1 (m-F 1 ) and micro-F1 (µ-F 1 ) scores.\nClassification performance: In the last line of Table 2, we present the results for the baseline Longformer DA model fine-tuned end-to-end, which is a 'ceiling' for the expected performance, comparing to the two examined linear settings, where the document encoders are not updated. We observe that in the SCOTUS dataset training models with an MLP head are really close to the ceiling performance (approx. 1-4p.p. less in µ-F 1 ). The gap is smaller for both models trained with the self-contrastive objective (+SimCSE, +Sim-CSE+Bregman), especially the one with the additional Bregman divergence loss, where the performance decrease in µ-F 1 is only 1 p.p.\nIn the other two datasets (ECtHR and MIMIC), the performance of the linear models is still approx. 10-15 p.p. behind the ceilings in µ-F 1 . In ECtHR, we find that self-contrastive learning improves performance in the first settings by 3 p.p. in µ-F 1 , while the additional divergence Bregman loss does not really improve performance. This is not the Model µ-F 1 m-F 1 Longformer DA 54.9 48.1 » + SimCSE 51.8 43.6 » + SimCSE + Bregman 56.9 48.5 case, in the second linear setting (second group in Table 2), where the baseline outperforms both models. Similarly in MIMIC, we observe that selfcontrastive learning improves performance in the first settings by 3 p.p. in µ-F 1 , but the performance is comparable given linear classifiers. Overall, our enhanced self-contrastive method leads to the best results compared to its counterparts.\nIn Table 3, we also present results on SCOTUS in a few-shot setting using the SetFit (Tunstall et al., 2022) framework, where Bregman divergence loss improves performance compared to the baselines.\nGiven the overall results, we conclude that building subnetwork ensembles on top of the document embeddings can be a useful technique for encoding long documents and can help avoid the problem of collapsing representations, where the model is unable to capture all the relevant information in the input. Our approach has several advantages for long-document processing:\nEfficiency considerations: In Table 2, we observe that in both linear settings where fixed document representations are used, the training time is 2-8× decreased compared to end-to-end fine-tuning, while approx. 0.5% of the parameters are trainable across cases, which directly affects the compute budget. 
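The two linear-evaluation settings above keep the document encoder frozen and only fit a lightweight classifier on top of fixed document embeddings. The sketch below illustrates the second (purely linear) setting with a logistic-regression probe; the random feature matrices stand in for pre-computed embeddings and are not real data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Stand-ins for frozen Longformer document embeddings (e.g., mean-pooled, 768-dimensional)
# and SCOTUS-style labels over 14 issue areas; real features would come from the encoder.
X_train, y_train = rng.normal(size=(200, 768)), rng.integers(0, 14, size=200)
X_test, y_test = rng.normal(size=(50, 768)), rng.integers(0, 14, size=50)

clf = LogisticRegression(max_iter=1000)   # linear probe; the document encoder stays frozen
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("micro-F1:", f1_score(y_test, pred, average="micro"))
print("macro-F1:", f1_score(y_test, pred, average="macro"))
```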
We provide further information on the size of the models in Appendix B.\nAvoidance of collapsing representations: When processing long documents, there is a risk that the representation will collapse (Jing et al., 2022), meaning that the model will not be able to capture all the relevant information in the input. By mapping the document embedding from the base encoder into smaller sub-networks, the risk of collapsing representations is reduced, as the divergence loss attempts to reduce redundancy in the feature representation by minimizing the correlation. The results shown in Table 3 in a low-resource setting further highlight the advantage of training a Longformer with contrastive divergence learning." }, { "figure_ref": [], "heading": "Conclusions and Future Work", "publication_ref": [], "table_ref": [], "text": "We proposed and examined self-supervised contrastive divergence learning for learning representation of long documents. Our proposed method is composed of a self-contrastive learning framework followed by an ensemble of neural networks that are optimized by functional Bregman divergence. Our method showed improvement compared to the baselines on three long document topic classifications in the legal and biomedical domains, while the improvement is more vibrant in a few-shot learning setting. In future work, we would like to further investigate the impact of the Bregman divergence loss in more classification datasets and other NLP tasks, e.g., document retrieval." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b2", "b11" ], "table_ref": [], "text": "In this work, we focus on small and medium size models (up to 134M parameters), while recent work in Large Language Models (LLMs) targets models with billions of parameters (Brown et al., 2020;Chowdhery et al., 2022). It is unclear how well the performance improvement from the examined network architecture would translate to other model sizes or baseline architectures, e.g., GPT models.\nFurther on, it is unclear how these findings may translate to other application domains and datasets, or impact other NLP tasks, such as document retrieval/ranking. We will investigate these directions in future work. ,5,8,10,20] 74 " }, { "figure_ref": [], "heading": "Hyper-parameters", "publication_ref": [], "table_ref": [], "text": "µ-F 1 m-F 1 µ-F 1 m-F 1 µ-F 1 m-F 1 µ-F 1 m-F 1 µ-F 1 m-F 1 g ∈ [2" }, { "figure_ref": [], "heading": "B Number of parameters", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C Pooling methods", "publication_ref": [], "table_ref": [], "text": "We evaluate Mean, Max and [CLS] pooling. Results for end-to-end fine-tuning can be found in the table 6. Our results show that using mean pooling during continued pre-training in combination with max-pooling for classification could further enhance the performance instead of using the same pooling method for both stages." }, { "figure_ref": [], "heading": "D Neural network Architecture", "publication_ref": [], "table_ref": [], "text": "Our model contains two linear layers with one activation layer and two batch normalization layers. We also compare the model without batch normalization layers. The comparison is made on the SCO-TUS dataset using end-to-end fine-tuning. 
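Appendix C compares mean, max, and [CLS] pooling for turning token representations into a single document embedding. The helper below is an illustrative reading of those three pooling choices over masked token states; tensor shapes and the masking convention are assumptions for the example.

```python
import torch

def pool_tokens(hidden_states, attention_mask, method="mean"):
    """Collapse (batch, seq_len, dim) token states into one document vector per example."""
    mask = attention_mask.unsqueeze(-1).float()                      # (batch, seq_len, 1)
    if method == "mean":
        return (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    if method == "max":
        return hidden_states.masked_fill(mask == 0, float("-inf")).max(dim=1).values
    if method == "cls":
        return hidden_states[:, 0]                                   # first ([CLS]) position
    raise ValueError(f"unknown pooling method: {method}")

# Toy usage: 2 documents, 6 token positions, 8-dim states; the second document is padded.
states = torch.randn(2, 6, 8)
mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 1, 1, 0, 0, 0]])
for method in ("mean", "max", "cls"):
    print(method, pool_tokens(states, mask, method).shape)           # torch.Size([2, 8])
```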
One can see that removing batch normalization worsens performance.\nNormalization m-F 1 µ-F 1 Batch Norm 75.6 63.5 w/o Batch Norm 72.5 63.1 " }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "Mina Rezaei and Bernd Bischl were supported by the Bavarian Ministry of Economic Affairs, Regional Development and Energy through the Center for Analytics -Data -Applications (ADA-Center) within the framework of BAYERN DIGITAL II (20-3410-2-9-8). M. R. and B. B. were supported by the German Federal Ministry of Education and Research (BMBF) Munich Center for Machine Learning (MCML). This work was also partly funded by the Innovation Fund Denmark (IFD). 2" }, { "figure_ref": [], "heading": "A Hyper-parameter Optimization", "publication_ref": [ "b29", "b13" ], "table_ref": [], "text": "Continued Pre-training: We define the search space based on previous studies such as Rezaei et al. (2021) and Gao et al. (2021). For the contrastive Bregman divergence, we benchmark the performance for the first-stage hyper-parameters on the downstream task to tune the respective hyper-parameters. We use mean pooling for all settings. The learning rate, the total optimization steps, the use of a batch-norm layer, the σ parameter, the number of sub-networks g, and the batch size are grid-searched. The temperature (0.1) and the input length of 4096 are fixed beforehand. The learning rate for these models was 3e-5. We run 50,000 optimization steps for each model." }, { "figure_ref": [], "heading": "Training for classification tasks:", "publication_ref": [], "table_ref": [], "text": "We used AdamW as the optimizer. Bayesian optimization is used to tune the hyper-parameters: learning rate, number of epochs, and batch size. We use mean pooling for all settings. Early stopping is set to a patience score of 3. 3 These parameters were fixed after some early experiments. We use a learning rate of 1e-4 and run ECTHR and SCOTUS for 20 and" } ]
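The classification-stage recipe described in this appendix (AdamW, a learning rate of 1e-4, up to 20 epochs, early stopping with a patience of 3) could be realized roughly as follows; the loop below is an illustrative reading of those stated hyper-parameters, with placeholder data, not the authors' training script.

```python
import torch
import torch.nn as nn

def train_classifier_head(model, train_loader, dev_loader, lr=1e-4, max_epochs=20, patience=3):
    """Fit a classification head with AdamW and early stopping on the dev loss."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    best_dev, bad_epochs = float("inf"), 0
    for _ in range(max_epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            dev_loss = sum(loss_fn(model(x), y).item() for x, y in dev_loader)
        if dev_loss < best_dev:
            best_dev, bad_epochs = dev_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:          # early stopping with a patience of 3
                break
    return model

# Toy usage: a linear head over frozen 768-d document embeddings with 14 classes.
head = nn.Linear(768, 14)
toy_batches = [(torch.randn(8, 768), torch.randint(0, 14, (8,))) for _ in range(4)]
train_classifier_head(head, toy_batches, toy_batches)
```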
2024-03-26
10.18653/v1/2020.findings-emnlp.261
[ { "authors": "Adrien Bardes; Jean Ponce; Yann Lecun", "journal": "", "ref_id": "b0", "title": "Vicreg: Variance-invariance-covariance regularization for self-supervised learning", "year": "2022" }, { "authors": "Iz Beltagy; Matthew E Peters; Arman Cohan", "journal": "", "ref_id": "b1", "title": "Longformer: The long-document transformer", "year": "2020" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b3", "title": "", "year": "" }, { "authors": "Ilias Chalkidis; Xiang Dai; Manos Fergadiotis; Prodromos Malakasiotis; Desmond Elliott", "journal": "", "ref_id": "b4", "title": "An exploration of hierarchical attention transformers for efficient long document classification", "year": "2022" }, { "authors": "Ilias Chalkidis; Manos Fergadiotis; Prodromos Malakasiotis; Nikolaos Aletras; Ion Androutsopoulos", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "LEGAL-BERT: The muppets straight out of law school", "year": "2020" }, { "authors": "Ilias Chalkidis; Manos Fergadiotis; Nikolaos Manginas; Eva Katakalou; Prodromos Malakasiotis", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Regulatory compliance through Doc2Doc information retrieval: A case study in EU/UK legislation where text similarity has limitations", "year": "2021" }, { "authors": "Ilias Chalkidis; Manos Fergadiotis; Dimitrios Tsarapatsanis; Nikolaos Aletras; Ion Androutsopoulos; Prodromos Malakasiotis", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Paragraph-level rationale extraction through regularization: A case study on European court of human rights cases", "year": "2021" }, { "authors": "Ilias Chalkidis; Abhik Jana; Dirk Hartung; Michael Bommarito; Ion Androutsopoulos; Daniel Katz; Nikolaos Aletras", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "LexGLUE: A benchmark dataset for legal language understanding in English", "year": "2022" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "", "ref_id": "b9", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b10", "title": "", "year": "" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani 
Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b11", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b12", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Tianyu Gao; Xingcheng Yao; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "SimCSE: Simple Contrastive Learning of Sentence Embeddings", "year": "2021" }, { "authors": "John Giorgi; Osvald Nitski; Bo Wang; Gary Bader", "journal": "", "ref_id": "b14", "title": "Declutr: Deep contrastive learning for unsupervised textual representations", "year": "2021" }, { "authors": "Li Jing; Pascal Vincent; Yann Lecun; Yuandong Tian", "journal": "", "ref_id": "b15", "title": "Understanding dimensional collapse in contrastive self-supervised learning", "year": "2022" }, { "authors": "E W Alistair; Tom J Johnson; Lu Pollard; Li-Wei H Shen; Mengling Lehman; Mohammad Feng; Benjamin Ghassemi; Peter Moody; Leo Szolovits; Roger G Anthony Celi; Mark", "journal": "Scientific data", "ref_id": "b16", "title": "Mimic-iii, a freely accessible critical care database", "year": "2016" }, { "authors": "Ryan Kiros; Yukun Zhu; Richard Russ R Salakhutdinov; Raquel Zemel; Antonio Urtasun; Sanja Torralba; Fidler", "journal": "Advances in neural information processing systems", "ref_id": "b17", "title": "Skip-thought vectors", "year": "2015" }, { "authors": "Tassilo Klein; Moin Nabi", "journal": "", "ref_id": "b18", "title": "micse: Mutual information contrastive learning for low-shot sentence embeddings", "year": "2022" }, { "authors": "Tassilo Klein; Moin Nabi", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "SCD: Selfcontrastive decorrelation of sentence embeddings", "year": "2022" }, { "authors": "Quoc Le; Tomas Mikolov", "journal": "", "ref_id": "b20", "title": "Distributed representations of sentences and documents", "year": "2014" }, { "authors": " Pmlr", "journal": "", "ref_id": "b21", "title": "", "year": "" }, { "authors": "Jinhyuk Lee; Wonjin Yoon; Sungdong Kim; Donghyeon Kim; Sunkyu Kim; Chan Ho; So ; Jaewoo Kang", "journal": "Bioinformatics", "ref_id": "b22", "title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", "year": "2020" }, { "authors": "Fangyu Liu; Ivan Vulić; Anna Korhonen; Nigel Collier", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Fast, effective, and self-supervised: Transforming masked language models into universal lexical and sentence encoders", "year": "2021" }, { "authors": "Tomas Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b24", "title": "Efficient Estimation of Word Representations in Vector Space", "year": "2013" }, { "authors": "Anastasios Nentidis; Georgios Katsimpras; Eirini Vandorou; Anastasia Krithara; Antonio Miranda-Escalada; Luis Gasco; Martin Krallinger; Georgios Paliouras", "journal": "Springer", "ref_id": "b25", "title": "Overview of bioasq 2022: The tenth bioasq challenge on large-scale biomedical semantic indexing 
and question answering", "year": "2022-09-05" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher D Manning", "journal": "", "ref_id": "b26", "title": "GloVe: Global vectors for word representation", "year": "2014" }, { "authors": "Juliano Rabelo; Randy Goebel; Mi-Young Kim; Yoshinobu Kano; Masaharu Yoshioka; Ken Satoh", "journal": "The Review of Socionetwork Strategies", "ref_id": "b27", "title": "Overview and discussion of the competition on legal information extraction/entailment (coliee)", "year": "2021" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b28", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Mina Rezaei; Farzin Soleymani; Bernd Bischl; Shekoofeh Azizi", "journal": "", "ref_id": "b29", "title": "Deep bregman divergence for contrastive learning of visual representations", "year": "2021" }, { "authors": "Lewis Tunstall; Nils Reimers; Unso Eun; Seo Jo; Luke Bates; Daniel Korat; Moshe Wasserblat; Oren Pereg", "journal": "", "ref_id": "b30", "title": "Efficient Few-Shot Learning Without Prompts", "year": "2022" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b31", "title": "Attention is all you need", "year": "2017" }, { "authors": "Manzil Zaheer; Guru Guruganesh; Avinava Kumar; Joshua Dubey; Chris Ainslie; Santiago Alberti; Philip Ontanon; Anirudh Pham; Qifan Ravula; Li Wang; Yang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b32", "title": "Big bird: Transformers for longer sequences", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 108.78, 559.63, 181.09, 33.02 ], "formula_id": "formula_0", "formula_text": "ℓ mnr = - 1 n n i=1 exp ( f (s i , si )) j exp f s i , s j (1)" }, { "formula_coordinates": [ 3, 333.89, 408.85, 191.25, 39.62 ], "formula_id": "formula_1", "formula_text": "G(s a , s b ) = ( s a (x)w ŝa (x)dx + ϵ ŝa ) - ( s a (x)w ŝb (x)dx + ϵ ŝb )(4)" }, { "formula_coordinates": [ 3, 374.34, 557.44, 150.8, 12.61 ], "formula_id": "formula_2", "formula_text": "ψ = exp -G/2σ 2 (5)" }, { "formula_coordinates": [ 3, 332.21, 642.3, 192.93, 32.34 ], "formula_id": "formula_3", "formula_text": "ℓ Bregman(s a (z i ),s b (z j )) = -log(exp ψ i, j N t=1 exp ψ i,k )(6)" }, { "formula_coordinates": [ 3, 358.95, 723.69, 166.19, 11.56 ], "formula_id": "formula_4", "formula_text": "L Total = ℓ mnr + λ • ℓ Bregman (7) Method ECtHR SCOTUS MIMIC Avg. Training Efficiency µ-F 1 m-F 1 µ-F 1 m-F 1 µ-F 1 m-F 1 µ-F 1 Time (h)" }, { "formula_coordinates": [ 8, 92.49, 76.31, 409.84, 25.89 ], "formula_id": "formula_5", "formula_text": "µ-F 1 m-F 1 µ-F 1 m-F 1 µ-F 1 m-F 1 µ-F 1 m-F 1 µ-F 1 m-F 1 g ∈ [2" } ]
Efficient Document Embeddings via Self-Contrastive Bregman Divergence Learning
Learning quality document embeddings is a fundamental problem in natural language processing (NLP), information retrieval (IR), recommendation systems, and search engines. Despite recent advances in the development of transformer-based models that produce sentence embeddings with self-contrastive learning, the encoding of long documents (Ks of words) is still challenging with respect to both efficiency and quality considerations. Therefore, we train Longfomer-based document encoders using a state-of-the-art unsupervised contrastive learning method (SimCSE). Further on, we complement the baseline methodsiamese neural network-with additional convex neural networks based on functional Bregman divergence aiming to enhance the quality of the output document representations. We show that overall the combination of a self-contrastive siamese network and our proposed neural Bregman network outperforms the baselines in two linear classification settings on three long document topic classification tasks from the legal and biomedical domains.
Daniel Saggau; Mina Rezaei; Bernd Bischl; Ilias Chalkidis
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of our proposed self-contrastive method combining SimCSE of Gao et al. (2021) (left part) with the additional Bregman divergence networks and objective of Rezaei et al. (2021) (right part).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Gϕ (s a , s b ) = ϕ(s a ) -ϕ(s b )-[s a (x)s b (x)]δϕ(s b )(x)dµ(x) (2) s a and s b are vectors output by the self-contrastive network, and ϕ is a strictly convex function and can be described via a linear functional, consisting of weights w k and biases ϵ k . The function ϕ(s a ) is approximate by: ϕ(s a ) = sup (w,ϵ w )∈Q s a (x)w(x)dx + ϵ w (3) We take the empirical distribution of the projection representation to compute ŝa and ŝb . Specifically we define: ŝi = argmax k [ s a (x)w k (x)dx + ϵ k ] for i = (a,b). Using the above specification and ϕ(s a ), we get the following functional divergence term:", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Params (%) Test Results for all methods across all datasets. Best performance in bold, and second-best score is underlined. We also report average training time and the percentage of parameters that are trainable.", "figure_data": "Document Embedding + MLPLongformer DA61.4 47.8 65.7 50.5 63.9 48.3 63.64.5h0.5%Longformer DA+SimCSE64.4 55.0 69.2 57.5 66.0 52.9 66.5»»Longformer DA+SimCSE+Bregman 64.8 56.3 69.7 58.8 66.7 51.7 67.1»»Document Embedding + Linear LayerLongformer DA73.7 62.4 69.3 59.0 59.4 21.7 67.51h0.5%Longformer DA+SimCSE70.6 56.2 69.6 60.9 59.2 23.0 66.5»»Longformer DA+SimCSE+Bregman 73.3 59.5 71.4 62.0 59.6 22.7 68.1»»End-to-End Fine-tuning (Ceiling)Longformer DA78.8 71.5 75.2 63.2 78.9 56.4 77.68h100%Where λ is a scalar hyperparameter to weigh therelative importance of the Bregman divergence andcontrastive loss. In our experiments, we train suchmodels, dubbed Longformer DA+SimCSE+Bregman .4 Experimental Set-up4.1 Datasets and TasksECtHR (Chalkidis et al., 2021b) dataset contains11k cases from the European Court of HumanRights (ECtHR). This is a multi-label topic classi-fication task, where given the facts of an ECtHRcase, the model has to predict the alleged violatedECtHR article among ten such articles (labels).SCOTUS (Chalkidis et al., 2022b) dataset con-tains 4.7k cases from the Supreme Court of US", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Test Results for all Longformer variants for SCOTUS. Best performance in bold, and second-best score is underlined.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Table5shows the number of parameters for the different models. Modding the transformer to a Longformer adds 6M parameters for LegalBERT small and 24M parameters for BioBERT medium. 
By working with LegalBERT-small and BioBERTbase we cover both small and medium sized models.", "figure_data": "Model#ParamsBioBert Base109MLongformer Base148MLegalBERT small35MLongformer Legal-DA + SimCSE + Bregman41MLongformer Bio-DA134MLongformer MLP.27M", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Number of Parameters for the Longformer variants.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Test results for various pooling operators with end-to-end tuning on SCOTUS for Longformer DA .", "figure_data": "Pooling operatorm-F 1 µ-F 1Mean + Max Pooling 78.3 70.6Mean Poolig76.9 68.1Max Pooling77.6 69.5[CLS] Pooling77.1 69.5", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "F1 performance for ablation model without batch norm layers for end-to-end fine-tuning on SCO-TUS.", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) provides a method for training transformer-based models, which the citing paper adopts in their research on long document classification tasks."}, {"Category": "Data Source", "Citation": "(Johnson et al., 2016)", "Explanation": "The cited work by Johnson et al. (2016) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Chalkidis et al., 2022b)", "Explanation": "The cited work by Chalkidis et al. (2022b) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Chalkidis et al., 2021a)", "Explanation": "The cited work by Chalkidis et al. (2021a) is a data source for the information retrieval tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Rabelo et al., 2022)", "Explanation": "The cited work by Rabelo et al. (2022) is a data source for the information retrieval tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Nentidis et al., 2022)", "Explanation": "The cited work by Nentidis et al. (2022) is a data source for the information retrieval tasks mentioned in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) provides a method for training transformer-based models, which the citing paper adopts in their research on long document classification tasks."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", 
"Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in 
the citing paper."}, {"Category": "Data Source", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The cited work by Reimers and Gurevych (2019) is a data source for the long document classification tasks mentioned in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Beltagy et al., 2020)", "Explanation": "The cited work introduces efficient sparse attention networks, which the citing paper adopts to address the computational problem of long document encoders in downstream tasks."}, {"Category": "Methodological Basis", "Citation": "(Zaheer et al., 2020)", "Explanation": "The cited work introduces the BigBird model, which the citing paper may have used to develop efficient document encoders for downstream tasks."}, {"Category": "Methodological Basis", "Citation": "(Chalkidis et al., 2022a)", "Explanation": "The cited work introduces hierarchical transformers, which the citing paper may have used to develop efficient document encoders for downstream tasks."}, {"Category": "Data Source", "Citation": "(Jing et al., 2022)", "Explanation": "The cited work may have provided data or information on the challenges of encoding long documents, which the citing paper uses to build good representation models for encoding long documents."}, {"Category": "Methodological Basis", "Citation": "(Gao et al., 2021)", "Explanation": "The cited work by Gao et al. (2021) provides a state-of-the-art self-contrastive learning method that the citing paper adopts in training Longfomer-based document encoders."}, {"Category": "Extension or Continuation", "Citation": "(Rezaei et al., 2021)", "Explanation": "The cited work by Rezaei et al. (2021) is further extended in the citing paper to enhance the quality of the latent representations using functional Bregman divergence and self-contrastive loss with divergence loss functions."}, {"Category": "Methodological Basis", "Citation": "(Mikolov et al., 2013)", "Explanation": "The cited work on Word2Vec provides a foundational method for developing word embedding centroids as document embeddings in the research on document representation."}, {"Category": "Methodological Basis", "Citation": "(Pennington et al., 2014)", "Explanation": "The cited work on GloVe contributes to the development of word embedding centroids as document embeddings in the research on document representation."}, {"Category": "Methodological Basis", "Citation": "(Le and Mikolov, 2014)", "Explanation": "The development of the Doc2Vec model in the cited work is a foundational method for representing documents in the research on document representation."}, {"Category": "Methodological Basis", "Citation": "(Kiros et al., 2015)", "Explanation": "The cited work on Skip Thoughts provides a method for developing sentence/paragraph-level representations in the research on document representation."}, {"Category": "Methodological Basis", "Citation": "(Reimers and Gurevych, 2019)", "Explanation": "The Sentence Transformers framework in the cited work is a method for developing quality dense sentence representations in the research on document representation."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2020)", "Explanation": "The cited work by Chen et al. provides a self-supervised contrastive learning setup that the citing paper adopts in their research on NLP applications."}, {"Category": "Supporting Evidence", "Citation": "(Bardes et al., 2022)", "Explanation": "The cited work by Bardes et al. 
contributes to the self-supervised contrastive learning setup in NLP applications, providing additional evidence to support the research in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Liu et al., 2021)", "Explanation": "The cited work by Liu et al. introduces the MirrorRoBERTa method for self-contrastive learning in NLP applications, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Klein and Nabi, 2022a)", "Explanation": "The cited work by Klein and Nabi introduces the miCSE method for self-contrastive learning in NLP applications, which the citing paper adopts in their research."}, {"Category": "Methodological Basis", "Citation": "(Klein and Nabi, 2022b)", "Explanation": "The cited work by Klein and Nabi introduces the SCD method for self-contrastive learning in NLP applications, which the citing paper references in their research."}, {"Category": "Methodological Basis", "Citation": "(Giorgi et al., 2021)", "Explanation": "The cited work by Giorgi et al. introduces the DeCluTR method for self-contrastive learning in NLP applications, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Gao et al., 2021)", "Explanation": "The cited work by Gao et al. introduces the SimCSE method for self-contrastive learning in NLP applications, which the citing paper references in their research."}, {"Category": "Methodological Basis", "Citation": "(Beltagy et al., 2020)", "Explanation": "The cited work, Longformer, serves as the basis for the method used in the citing paper, as it is a well-known and simple sparse-attention Transformer that the paper adopts for its research."}, {"Category": "Methodological Basis", "Citation": "(Beltagy et al., 2020)", "Explanation": "The cited work provides the warm-starting method for the global attention matrices in the Longformer models used in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(Gao et al., 2021)", "Explanation": "The cited work, SimCSE, provides the framework for the augmentation process used in the citing paper to generate positive and negative pairs for contrastive learning."}, {"Category": "Methodological Basis", "Citation": "(Rezaei et al., n.d.)", "Explanation": "The cited work by Rezaei et al. 
provides the methodology of using a Gaussian kernel to convert the divergence to similarity, which the citing paper adopts in their research to improve the output document latent representations."}, {"Category": "Data Source", "Citation": "(2021).1", "Explanation": "The cited work provides the specific data used in the analysis of the citing paper, which is the mini-batch size of N in the mini-batch for empirical distributions s a \u03b1(z i ), s b (z j ) where i and j are the respective index for the two branches and z the projector representation."}, {"Category": "Methodological Basis", "Citation": "(SCOTUS)", "Explanation": "The cited work provides the methodology of a single-label multi-class topic classification task, which the citing paper adopts in their own research to predict the relevant area among 14 issue areas (labels) in SCOTUS opinions."}, {"Category": "Extension or Continuation", "Citation": "Method ECtHR SCOTUS MIMIC Avg.", "Explanation": "The cited work is an extension of the research on the topic of multi-class topic classification, as the citing paper builds upon the methodology of ECtHR SCOTUS to further explore the area of multi-class topic classification in the context of MIMIC and other datasets."}, {"Category": "Supporting Evidence", "Citation": "(Tunstall et al., 2022)", "Explanation": "The cited work provides the SetFit framework that the citing paper uses in the few-shot setting to present results on SCOTUS."}, {"Category": "Methodological Basis", "Citation": "(Jing et al., 2022)", "Explanation": "The cited work by Jing et al. (2022) highlights the risk of representation collapse in long document processing, which the citing paper addresses by mapping document embeddings into smaller subnetworks to reduce the risk of collapsing representations."}, {"Category": "Extension or Continuation", "Citation": "(Brown et al., 2020)", "Explanation": "The cited work by Brown et al. (2020) focuses on large language models with billions of parameters, which is a different model size compared to the small and medium size models (up to 134M parameters) studied in the citing paper. The cited work may provide insights on how the performance improvement from the examined network architecture in the citing paper can be extended to larger model sizes."}, {"Category": "Extension or Continuation", "Citation": "(Chowdhery et al., 2022)", "Explanation": "The cited work by Chowdhery et al. (2022) also targets large language models with billions of parameters, which is a different model size compared to the small and medium size models (up to 134M parameters) studied in the citing paper. The cited work may provide insights on how the performance improvement from the examined network architecture in the citing paper can be extended to larger model sizes."}, {"Category": "Extension or Continuation", "Citation": "(Brown et al., 2020)", "Explanation": "The cited work by Brown et al. (2020) may provide insights on how the performance improvement from the examined network architecture in the citing paper can be extended to other application domains and datasets, or impact other NLP tasks such as document retrieval/ranking."}, {"Category": "Methodological Basis", "Citation": "(Rezaei et al., 2021)", "Explanation": "The cited work by Rezaei et al. 
provides a benchmark for the search space of hyper-parameters in the first stage of the model, which the citing paper uses to tune the parameters for the contrastive Bregman divergence."}, {"Category": "Methodological Basis", "Citation": "(Gao et al., 2021)", "Explanation": "The cited work by Gao et al. also provides a benchmark for the search space of hyper-parameters in the first stage of the model, which the citing paper uses to tune the parameters for the contrastive Bregman divergence."}]
[ { "figure_ref": [ "fig_5" ], "heading": "", "publication_ref": [], "table_ref": [], "text": "Abstract. Text-conditional medical image generation has vital implications for radiology, such as augmenting small real-world medical datasets, preserving data privacy, and enabling patient-specific data modeling. However, this field has lagged behind text-to-natural image generation, particularly in handling 3D medical imaging modalities like CT and MRI, which are crucial for critical care. In this paper, we introduce Gener-ateCT, the first approach to generating 3D medical imaging conditioned on free-form medical text prompts. GenerateCT incorporates a text encoder and three key components: a novel causal vision transformer for encoding 3D CT volumes, a text-image transformer for aligning CT and text tokens, and a text-conditional super-resolution diffusion model. Given the absence of directly comparable methods in 3D medical imaging, we established baselines with cutting-edge methods to demonstrate our method's effectiveness. GenerateCT significantly outperforms these methods across all key metrics. Importantly, we explored GenerateCT's clinical applications by evaluating its utility in a multi-abnormality classification task. First, we established a baseline by training a multi-abnormality classifier on our real dataset. To further assess the model's generalization to external datasets and its performance with unseen prompts in a zero-shot scenario, we employed an external dataset to train the classifier, setting an additional benchmark. We conducted two experiments in which we doubled the training datasets by synthesizing an equal number of volumes for each set using GenerateCT. The first experiment demonstrated an 11% improvement in the AP score when training the classifier jointly on real and generated volumes. The second experiment showed a 7% improvement when training on both real and generated volumes based on unseen prompts. Moreover, GenerateCT enables the scaling of synthetic training datasets to arbitrary sizes. As an example, we generated 100,000 3D CT volumes, fivefold the number in our real dataset, and trained the classifier exclusively on these synthetic volumes. Impressively, this classifier surpassed the performance of the one trained on all available real data by a margin of 8%. Lastly, domain experts evaluated the generated volumes, confirming a high degree of alignment with the text prompt. Our code and pre-trained models are available at: https://github.com/ibrahimethemhamamci/GenerateCT. Fig. 1: GenerateCT is a cascaded framework that generates high-resolution and highfidelity 3D chest CT volumes based on medical language text prompts." }, { "figure_ref": [ "fig_5" ], "heading": "Introduction", "publication_ref": [ "b23", "b1", "b6", "b9", "b11", "b14", "b25", "b30", "b33", "b44", "b20", "b30", "b45", "b24", "b0", "b5", "b14", "b33" ], "table_ref": [], "text": "The text-conditional generation of synthetic images holds significant promise for the medical field by producing text-aligned and clinically-ready images, bypassing the need for labeling. It facilitates the large-scale adoption of machine learning, enhancing radiological workflows, accelerating medical research, and improving patient care. 
Additionally, it addresses key challenges in medical image analysis, such as data scarcity, patient privacy concerns, imbalanced class distribution, and the need for trained clinicians for manual annotation [24].\nThe field of natural image generation from free-flowing text has seen remarkable progress [2,7,10,12,15,26,30,31,34,45]. Despite these advancements, the medical field has yet to fully capitalize on the potential of generative modeling, due to the significant distribution shift between natural and medical images-and even among different medical domains [21]. A particular application is the generation of 2D chest X-rays from radiology reports [5], achieved by fine-tuning a pre-trained, open-source text-to-image model [31]. However, extending this method to more spatially complex modalities, such as 3D computed tomography (CT) or magnetic resonance imaging (MRI), remains unexplored. The primary challenge is the exponential increase in computational complexity associated with the nature of 3D medical imaging [8]. Furthermore, unlike in 2D generation, there are no pre-trained 3D models available for fine-tuning [46]. Generating 2D slices instead of 3D volumes also poses significant challenges in the medical field due to the lack of spatial context. Additionally, the scarcity of 3D medical imaging data paired with radiology reports limits development [25].\nRecognizing this gap, we propose GenerateCT, the first method for the synthesis of 3D medical imaging conditioned on free-form text prompts (see Figure 1), specifically targeting high-resolution 3D chest CT volumes. Our framework consists of three key components: The first is a novel causal vision transformer, CT-ViT, which encodes the 3D CT volumes into tokens. CT-ViT is trained to reconstruct 3D CT volumes autoregressively, allowing us to maintain high axial resolution and generate a variable number of axial slices, thus providing a variable axial field-of-view [1]. Second, a bidirectional text-image transformer aligns the CT tokens with the encoded tokens of the free-form radiology text. This alignment is facilitated using masked CT token prediction [6]. Third, we employ a cascaded diffusion model [15] to enhance the in-plane resolution of the generated low-resolution volumes. This step is also text-conditioned, ensuring faithful resolution enhancement based on the input prompt [34].\nGenerateCT's uniqueness, being the first of its kind in 3D medical imaging, means that no directly comparable methods exist, further highlighting its novelty. Regardless, to demonstrate the effectiveness of our framework, we have designed some baseline methods using state-of-the-art generation models. First, to show the importance of 3D generation architecture for ensuring consistency in 3D chest CT volumes, we employ two text-conditional 2D image generation methods for comparison. We also implement a text-to-video generation model for 3D chest CT synthesis to highlight our framework's optimized benefits over other 3D generation approaches. Furthermore, we perform a comprehensive ablation study to underscore the effectiveness of GenerateCT's cascaded architecture.\nGenerateCT synthesizes text-aligned 3D chest CT volumes, bypassing the need for labeling. Since GenerateCT is the first of its kind, we, more importantly, assessed its clinical utility in a multi-abnormality classification task. Initially, we established a baseline training classifier on all available real volumes. 
We expanded our training data by creating an equivalent number of synthetic volumes with GenerateCT, yielding an 11% improvement in mean AP by training on this joint dataset. Furthermore, GenerateCT allowed us to massively scale our synthetic dataset; we produced 100,000 3D CT volumes, five times our original dataset size, and trained the classifier on this synthetic data alone. Remarkably, this approach outperformed training with the complete set of real data by 8%. We then evaluated the model's performance on an external dataset and with unseen prompts in a zero-shot scenario, proving GenerateCT's high generalization.\nGenerateCT synthesizes high-fidelity 3D chest CT volumes from free-form text prompts. To our knowledge, this is the first approach to explore text-to-3D medical imaging generation. Our contributions can be summarized as follows:\n-We propose a novel text-to-CT generation framework capable of producing 3D chest CT volumes conditioned on medical language text prompts. -At the core of GenerateCT is CT-ViT, which enables autoregressive encoding and decoding of 3D CT volumes for flexible axial field-of-view handling. -We conduct a thorough evaluation of our approach's generative abilities compared with reasonably designed baselines across multiple image-quality metrics. Also, human domain experts evaluate generated 3D chest CT volumes underscoring a high degree of text alignment, realism, and consistency. -We assess the generated volumes' clinical value and text-alignment by performing a multi-abnormality classification task in two settings: (a) data augmentation, with an increase of up to a factor of five, and (b) a zero-shot scenario, where no prompts from the training set are used for generation. -To facilitate out-of-the-box 3D chest CT volume generation based on text prompts, we make all trained models (and code) publicly available. " }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b23", "b30", "b17", "b13", "b15", "b37", "b42", "b16", "b36", "b39", "b40", "b13", "b41", "b36", "b13", "b17", "b10" ], "table_ref": [], "text": "Text-conditioned medical image generation. Due to the increasing demand for data, medical image generation has emerged as an important research direction. Recent studies [5,24] have explored the generation of 2D medical images based on medical language text prompts. These studies have successfully adapted pre-trained latent diffusion models [31], utilizing publicly available chest X-rays and corresponding radiology reports [18]. With GenerateCT, we expand this capability to include the text-conditioned generation of 3D medical imaging.\nText-conditioned video generation. has seen significant advancements and can be split into two primary research streams: diffusion-based [4, 14,16,38,43] and transformer-based auto-regressive methods [17,37,40,41]. Diffusion-based techniques, utilizing 3D U-Net architectures, typically generate shorter and lowresolution videos with a preset number of frames, but can enhance resolution and duration through cascaded diffusion models [14]. In contrast, transformerbased methods offer adaptability, handling variable frame numbers, and producing longer videos, albeit at lower dimensions [42]. In this context, our method extends the concept of text-conditional video generation to 3D medical imaging, essentially treating 3D CT volumes as a video. 
GenerateCT combines a transformer-based [37] and a diffusion-based method [14], which enables the generation of high-resolution CT volumes with flexible and increased slice counts.\nDatasets for text-conditioned medical image generation. Training models to generate medical images from text requires paired imaging data with corresponding radiology reports. While publicly available 2D medical imaging datasets like MIMIC-CXR [18] exist, there is a scarcity of publicly accessible 3D medical imaging datasets with reports. Creating such datasets is challenging due to their larger size, the expertise required for annotating 3D images, and strict data-sharing restrictions. The limited availability of such datasets is evident, as even a study focusing on multi-abnormality detection in chest CT volumes [11] made only a small portion of their dataset publicly accessible. This highlights the urgent need for more publicly available 3D medical imaging data and the potential for text-conditioned 3D medical image data generation, which can drive further research in this field. To address this challenge, we have made our fully trained models publicly available. We hope that this will enable researchers to generate their own datasets using text prompts, virtually without restrictions." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_5" ], "heading": "Dataset Preparation", "publication_ref": [ "b8", "b22" ], "table_ref": [], "text": "Our dataset comprises 25,701 non-contrast 3D chest CT volumes with a 512×512 resolution and varying axial slice counts ranging from 100 to 600. These volumes originate from 21,314 unique patients and have been reconstructed using multiple methods appropriate for different window settings [39]. This results in a total of 49,138 CT volumes, considering different reconstruction methods.\nWe partitioned the volumes into a training set comprising 20,000 unique patients and a testing set comprising 1,314 unique patients, ensuring no patient overlap. Each CT volume is accompanied by metadata that includes the patient's age, sex, and imaging specifics. Moreover, these volumes are paired with radiological reports that are categorized into separate sections: clinical information, technique, findings, and impression. The text prompts are formatted as {age} years old {sex}: {impression} using the impression section and metadata, as shown in Fig. 1. We convert the CT volumes into their respective Hounsfield Units (HU) using the slope and intercept values retrieved from the metadata. These values are clipped to the range [-1000 HU, +1000 HU], representing the practical lower and upper limits of the HU scale [9,23]." }, { "figure_ref": [ "fig_0" ], "heading": "GenerateCT: Text-Conditional 3D CT Generation", "publication_ref": [ "b35", "b43", "b18", "b19", "b5", "b28", "b14", "b26", "b32", "b2", "b13" ], "table_ref": [], "text": "GenerateCT, as shown in Fig. 2, consists of three primary components, each trained in a distinct stage. (1) CT-ViT: an encoder network (Φ_e^CTViT) accepts a low-resolution CT volume x_lr ∈ ℝ^(201×128×128) and outputs embedded CT tokens z_x ∈ ℝ^(101×8×8). The decoder network (Φ_d^CTViT) then utilizes these embedded CT tokens to reconstruct CT volumes (x̂_lr) in the same space.
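To make this first stage concrete, the following is a minimal PyTorch/einops sketch of a CT-ViT-style patch tokenizer that maps a 201×128×128 volume to a 101×8×8 grid of 512-dimensional tokens, using 16×16 patches for the first axial slice and 2×16×16 patches for the remaining slices, as detailed in the paragraphs that follow. The module, argument, and variable names are illustrative assumptions for exposition, not the released GenerateCT implementation.

import torch
import torch.nn as nn
from einops import rearrange

class CTPatchTokenizer(nn.Module):
    """Sketch of a CT-ViT-style patch embedder (illustrative, not the official code)."""
    def __init__(self, channels=1, patch=16, t_patch=2, dim=512):
        super().__init__()
        self.patch, self.t_patch = patch, t_patch
        # First axial slice: 16x16 spatial patches.
        self.first_proj = nn.Linear(channels * patch * patch, dim)
        # Remaining slices: 2x16x16 spatio-temporal patches.
        self.rest_proj = nn.Linear(channels * t_patch * patch * patch, dim)

    def forward(self, volume):
        # volume: (B, C, S, H, W) with S = 1 + T * t_patch axial slices.
        first, rest = volume[:, :, :1], volume[:, :, 1:]
        x1 = rearrange(first, "b c 1 (h p1) (w p2) -> b 1 h w (c p1 p2)",
                       p1=self.patch, p2=self.patch)
        x2 = rearrange(rest, "b c (t pt) (h p1) (w p2) -> b t h w (c pt p1 p2)",
                       pt=self.t_patch, p1=self.patch, p2=self.patch)
        # Token grid: first-slice tokens followed by the temporal tokens.
        return torch.cat([self.first_proj(x1), self.rest_proj(x2)], dim=1)

if __name__ == "__main__":
    vol = torch.randn(1, 1, 201, 128, 128)   # 201 axial slices at 128x128
    print(CTPatchTokenizer()(vol).shape)      # torch.Size([1, 101, 8, 8, 512])

In this sketch, the resulting token grid is what the subsequent spatial transformer (attending within each 8×8 in-plane grid) and causal transformer (attending along the slice axis) process, and what the vector-quantization step discretizes.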
Concisely, the process is represented as:\nz x = Φ CTViT e (x lr ) and xlr = Φ CTViT d (z x ).\nThe encoder network first extracts non-overlapping patches of 16 × 16 pixels from the first slice of a 3D chest CT volume, and 2 × 16 × 16 patches from the remaining slices. Each patch is then linearly transformed into a D-dimensional space, where D is the latent space dimension, set to 512. For the first frame, that data is reshaped from\nB × C × 1 × (H • p 1 ) × (W • p 2 ) to B × 1 × H × W × (C • p 1 • p 2 ).\nHere, B represents the batch size, C the number of channels, H and W the height and width of the slices, and p 1 and p 2 the spatial patch sizes. A linear layer then transforms the final dimension to D, resulting in a tensor with dimensions\nB × 1 × H p1 × W p2 × D.\nThe remaining slices undergo a similar reshaping and linear transformation, from\nB × 1 × (T • p t ) × (H • p 1 ) × (W • p 2 ) to B × T × H × W × (C • p t • p 1 • p 2 )\nand finally to B × T × H p1 × W p2 × D, with p t representing the temporal patch size and T the number of temporal patches.\nAfter combining the initial and subsequent frame embeddings, the resulting tensor is\nB × (1 + T ) × H p1 × W p2 × D.\nThis tensor is processed by two transformer networks in sequence. The spatial transformer operates on a reshaped tensor of\n(B • (1 + T )) × ( H p1 • W p2 ) × D\n, outputting a tensor of the same size. The causal transformer then processes this output, reshaped to\n( H p1 • W p2 ) × (B • (1 + T )) × D,\nand produces an output maintaining these dimensions. This process preserves both the spatial and latent dimensions after each transformer layer, ensuring 3D volumetric information retention throughout the network's processing stages.\nThe CT-ViT decoder mirrors the encoding process by transforming tokens back into their original voxel space, reconstructing 3D CT volumes while preserving the axial dimensionality of the input. This capability enables the generation of 3D CT volumes with varying numbers of axial slices. Additionally, CT-ViT incorporates vector quantization to create a discrete latent space. This technique quantizes the encoder outputs into a set of entries from a learned codebook, as described in [36]. Besides, the model's autoregressive training process combines multiple loss functions, including the L2 loss from ViT-VQGAN [44] to ensure consistency during the reconstruction, image perceptual loss [19] for perceptual similarity, and an adversarial loss function in alignment with StyleGAN [20]. Vision-Language Transformer: Token Modeling. In GenerateCT's second stage, we align CT and text spaces using masked visual token modeling [6]. This involves the previously trained CT-ViT encoder (Φ * CTViT e ) and its produced CT tokens (z *\nx ), which are masked (mask[z * x ]) and input into a bidirectional transformer (Φ M T ). The radiology report (r), encoded with a text encoder (Φ T5X ), serves as a conditional input [29]. The transformer's role is to predict these masked CT tokens based on the text embedding, incorporating cross-attention with the input CT tokens. These predicted CT tokens are then processed by the frozen CT-ViT decoder (Φ * CTViT d ), expected to reconstruct the input 3D CT volume accurately. 
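The masked visual token modeling used in this second stage can be sketched as a single training step, shown below. This assumes a MaskGIT-style bidirectional transformer that cross-attends to frozen T5 text embeddings; positional embeddings, the token-critic head, and the iterative unmasking schedule used at inference are omitted, and all names are illustrative rather than taken from the public code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedCTTokenTransformer(nn.Module):
    """Illustrative MaskGIT-style predictor over discrete CT-ViT token ids."""
    def __init__(self, codebook_size=8192, dim=512, depth=8, heads=8, text_dim=768):
        super().__init__()
        self.mask_id = codebook_size                      # extra id used as [MASK]
        self.tok_emb = nn.Embedding(codebook_size + 1, dim)
        self.text_proj = nn.Linear(text_dim, dim)
        layer = nn.TransformerDecoderLayer(dim, heads, dim * 4, batch_first=True)
        self.transformer = nn.TransformerDecoder(layer, depth)   # self-attn + cross-attn
        self.to_logits = nn.Linear(dim, codebook_size)

    def forward(self, token_ids, text_emb):
        x = self.tok_emb(token_ids)                # (B, N, dim) flattened CT tokens
        ctx = self.text_proj(text_emb)             # (B, L, dim) encoded report tokens
        x = self.transformer(tgt=x, memory=ctx)    # no causal mask -> bidirectional
        return self.to_logits(x)                   # (B, N, codebook_size)

def masked_modeling_step(model, token_ids, text_emb, mask_ratio=0.5):
    """Cross-entropy on masked positions only (the reconstruction loss)."""
    mask = torch.rand_like(token_ids, dtype=torch.float) < mask_ratio
    inputs = torch.where(mask, torch.full_like(token_ids, model.mask_id), token_ids)
    logits = model(inputs, text_emb)
    return F.cross_entropy(logits[mask], token_ids[mask])

Here token_ids would be the flattened discrete CT-ViT codebook indices of one volume (6,464 tokens for the 101×8×8 grid) and text_emb the T5-encoded prompt in the {age} years old {sex}: {impression} format; the codebook size, depth, and masking ratio are placeholders.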
The forward pass in the text-CT alignment stage, utilizing masked token modeling with the trained CT-ViT, is represented as follows:\nẑ*_x = Φ_MT(mask[z*_x], Φ*_T5X(r)) and x̂_lr = Φ*_d^CTViT(ẑ*_x).\nThe training for this vision-language transformer model also integrates reconstruction loss and token critic loss. Reconstruction loss assesses the model's capability to predict masked video codebook IDs in sequences, using cross-entropy to quantify the difference between predicted and actual tokens. Additionally, the critic loss includes a component evaluating whether video codebook ID sequences are authentic or fabricated, employing binary cross-entropy to gauge the alignment between the predicted critics' probabilities and the actual labels.\nDuring inference, all CT tokens are masked and predicted by the bidirectional transformer, based on the text embeddings and the CT tokens previously predicted. These tokens are then reconstructed using the CT-ViT decoder.\nDiffusion Steps: Text-Conditional Super-Resolution. GenerateCT's final stage employs a diffusion-based, text-conditional super-resolution model (Φ_Diff) to enhance the resolution of each slice from initially synthesized low-resolution 3D CT volumes in the axial dimension. Using a cascaded diffusion approach [15], this process sequentially employs diffusion steps that enhance image resolution by upsampling and introducing finer details. This cascaded method outperforms traditional U-Net diffusion models [27,33] in terms of memory efficiency, achieved by incorporating a cross-attention layer with T5 embedded text tokens at the bottleneck stage, which replaces self-attention layers [3]. This layer conditions the diffusion on both the encoded text prompt and the initial low-resolution image. Optimal cascading steps have been identified through an ablation study (Tab. 1). Notably, using CT-ViT reconstructed volumes as input, instead of the original downsampled volumes, enhances performance, aligning with the principles of noisy conditioning [14]. The upsampling process is denoted as x = Φ_Diff(x̂_lr, Φ*_T5X(r)), where x represents the final generated high-resolution 3D chest CT volume, with dimensions of 201×512×512, based on the prompt.\nThe training of the model employs a loss function designed to minimize the disparity between denoised and actual high-resolution images. This function incorporates a Mean Squared Error (MSE) component for pixel accuracy and integrates noise levels into the loss weighting, ensuring that noisy samples are properly accounted for. The overall loss is the mean of these noise-weighted MSE values, quantifying the denoised slices' deviation from the actual slices. Inference. After training, GenerateCT can generate 3D chest CT volumes (x) from a given novel radiological text prompt (r), formally defined as follows:\nz*_r = Φ*_T5X(r) and x = Φ*_Diff(Φ*_d^CTViT(Φ*_MT([empty], z*_r)), z*_r),\nwhere [empty] represents fully masked CT token placeholders. This process involves encoding the prompt, predicting CT tokens with the masked transformer, and then decoding these tokens to create the synthetic 3D CT volume." }, { "figure_ref": [ "fig_0" ], "heading": "Implementation Details", "publication_ref": [ "b21" ], "table_ref": [], "text": "We trained the CT-ViT model on 49,138 CT volumes (see Sec. 3.1). We employed the Adam optimizer [22] with β1 and β2 hyperparameters set to 0.9 and 0.99, respectively, a learning rate of 0.00003, and an effective batch size of 32.
The training was conducted for one week on a node with 8 A100 GPUs, completing 100,000 iterations. Subsequently, we trained the MaskGIT transformer using a paired dataset, which included the same 3D CT volumes with the same resolution as CT-ViT and medical language text prompts (Fig. 2) from their corresponding radiology reports. The Adam optimizer was used with identical β1 and β2 values, and the learning rate was maintained at 0.00003. However, we adjusted the effective batch size to 4 and introduced a cosine annealing warmup scheduler with a warmup phase of 10,000 steps and a maximum limit of 4,000,000 steps. This training stage, also executed on 8 A100 GPUs, lasted one week, concluding after 500,000 iterations. Finally, we trained the super-resolution diffusion model on the CT slices, each initially resized to 128 × 128. The super-resolution model then upscaled these to 512 × 512, using the original volumes as ground truth.\nFor this model, the same text prompts used for the 3D volumes were provided as conditioning for all slices of a 3D CT. We retained the previous hyperparameters for the Adam optimizer and set the learning rate to 0.0005. This final training phase was carried out on 8 A100 GPUs over a week, reaching 275,000 iterations." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_2", "fig_1", "fig_3" ], "heading": "Experimental Results", "publication_ref": [ "b34", "b10", "b12", "b33", "b30", "b36" ], "table_ref": [], "text": "Quantitative evaluation. We evaluated the quality of 3D chest CT volumes generated by different methods utilizing the following metrics, see Tab. 1:\n-Fréchet Video Distance (FVD) quantifies the dissimilarity between generated and real volumes by extracting image features with the I3D model [35], which is well-suited for videos, denoted as FVD I3D . Recognizing its limitations for medical imaging, we also employed the 3D CT-Net model [11], trained on our dataset for multi-abnormality classification (detailed in Sec. 5). This approach (denoted as FVD CT-Net ) allows for domain-relevant feature extraction and distance computation, providing a more appropriate comparison. -Fréchet Inception Distance (FID) assesses the quality of generated images, but at a slice-level using the InceptionV3 model [13]. FID may not be fully suitable for our 3D generation, as individual 2D CT slices might not accurately reflect volume-level findings, potentially leading to misleading results. -The CLIP score quantifies the alignment between text prompts and generated volumes, a process achieved by utilizing the pretrained CLIP model [28], which is designed to correlate visual and textual content effectively. Within our training dataset, comprising paired volumes and radiology text reports, we attained a CLIP score of 27.4, serving as a benchmark for alignment.\nComparisons with baseline methods. Given GenerateCT's uniqueness as the first framework of its kind in 3D medical imaging, there are no directly comparable methods. Thus, to demonstrate its effectiveness, we designed the following baseline methods for comparative analysis (see Tab. -Base w/ Imagen. To assess the importance of our 3D generation architecture for achieving spatial consistency in 3D chest CT volumes, we employed a text-conditional 2D image generation method, Imagen [34], for slice-wise generation. We conditioned Imagen on the slice number alongside the text prompt during training. Then the generated slices are combined in the order of the conditioning slice number to form a 3D chest CT volume. Fig. 
3 shows that, even though high resolution and accurate axial slices were achieved, the chest CT volumes generated by this 2D baseline lack spatial consistency, further highlighting the need for a dedicated 3D generation algorithm. -Base w/ SD. To demonstrate that even a pre-trained 2D text-to-image model is not sufficient for 3D medical image generation, we fine-tuned Stable Diffusion (SD) [31]. Despite slightly outperforming Imagen, fine-tuning SD failed to produce spatially consistent and accurate 3D volumes, as seen in the sagittal and coronal planes (Fig. 3). This effort also highlighted the computational complexities of direct, text-conditional 3D medical image generation: generating just one 2D axial slice with SD required 13 GB of GPU memory. The memory requirement would escalate exponentially when utilizing a basic 3D diffusion model to generate a 3D chest CT volume consisting of over 200 slices, underscoring its limitations for such medical applications and the imperative for an optimally engineered framework like GenerateCT.\n(Figure: example text prompts shown alongside the generated volumes: "64 years old female: Cardiomegaly, pericardial effusion. Bilateral pleural effusion."; "34 years old female: Bilateral consolidations and ground-glass opacities."; "47 years old male: Bilateral mosaic attenuation pattern."; "44 years old male: The overall examination is within normal limits.")\n-Base w/ Phenaki. To highlight that even 3D generation models might not capture the nuanced medical details of chest CT volumes, we adapted a state-of-the-art text-to-video generation model, Phenaki [37], for 3D chest CT generation. Although spatial consistency increased, Phenaki failed to generate medically detailed CT volumes (Fig. 3). This underscores the unique challenges of text-conditional high-resolution 3D medical image generation and the necessity for an optimized solution like our cascaded architecture.\nAblation study. GenerateCT's cascaded architecture was evaluated across different stages. We tested three X-Stage Cascaded Models (XSCM), which combine a transformer-based 3D text-conditional generation model, followed by X-1 diffusion-based super-resolution steps to produce high-resolution 3D CT volumes. As seen in Tab. 1, FVD CT-Net consistently showed lower scores compared to FVD I3D , a result of CT-Net's specific training on 3D CT volumes. As the number of super-resolution steps increased, both FVD I3D and FVD CT-Net along with FID and CLIP showed enhanced performance. However, the 4SCM model was an outlier due to its significantly low initial resolution. The 3SCM model, achieving a CLIP score of 27.1 close to the baseline of 27.4, demonstrated excellent alignment with the text prompts. Therefore, the 3SCM model, outperforming others in all key metrics, was selected as the optimal configuration for GenerateCT. Qualitative results. GenerateCT effectively translates specific text prompts into 3D chest CT volumes, as shown in Fig. 4. The initial three volumes show distinct pathologies, marked with colored text and areas, consistent across slices, contrasting with a fourth volume of a healthy lung. These volumes display diversity in size, orientation, age, and sex, emphasizing the range of data producible from the text prompts. Fig. 3 further demonstrates GenerateCT's ability to create comprehensive 3D images by including both sagittal and coronal slices in addition to axial ones. Fig. 5 showcases the model's cross-attention between text and generated volumes, emphasizing regions corresponding to specific pathologies.
This involves averaging attention outputs across heads and relevant tokens corresponding to each pathology in the input prompts, then upscaling the lowdimensional cross-attention outputs to high-resolution CT volume dimensions using an affine transformation. Such visualizations show GenerateCT's precision in aligning text with the relevant regions, translating medical terms into spatially accurate and clinically significant image features, such as cardiomegaly around the heart, pleural effusion at the effusion site, and consolidation in the affected lung area. We showcase slices from 3D chest CT volumes in the raw HU range of [-1000, +1000], diverging from standard windowing for more authentic representation. Supplementary material offers varied windowing examples. Expert evaluation. A blinded study, involving two radiologists with 4 and 11 years of experience, evaluated 200 3D chest CT volumes, which were equally divided into 100 real and 100 synthetic. The radiologists were tasked with determining whether each volume was real or synthetic and with verifying the match between text prompts and volume findings (Tab. 2). In the first task, even though they were aware that half of the volumes were synthetic and that 3D volumes were provided for evaluation instead of 2D slices, both radiologists exhibited significant misclassification rates. This underscores GenerateCT's ability to create 3D CT volumes that are highly indistinguishable and spatially accurate. The disparity in false negative rates for real versus synthetic volumes was not statistically significant (p = 0.0636, unpaired T-test), emphasizing synthesized volumes' realism. In the second task, the radiologists found that a comparable number of synthetic volumes, such as 70, accurately matched the given text prompts, similar to the real volumes. This indicates a high level of alignment between the generated 3D chest CT volumes and their corresponding text prompts." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Clinical Value of GenerateCT", "publication_ref": [ "b10", "b10", "b10" ], "table_ref": [], "text": "Utilizing GenerateCT in data augmentation. We assessed the clinical potential of GenerateCT within a radiological framework. To set a benchmark, a multi-abnormality classification model [11] was initially trained on 20,000 real 3D chest CT volumes from our dataset, each representing a unique patient profile (see Sec. 3.1). The baseline achieved a mean average precision (AP) of 0.254 and an area under the receiver operating characteristic curve (AUROC) of 0.631. We then generated 20,000 synthetic volumes using text prompts and trained the classifier on this mixed dataset of real and synthetic data. The results showed an 11% improvement in mean AP and a 6% increase in mean AUROC compared to training on real data alone (see Fig. 6). Further experimentation involved expanding the synthetic dataset to 100,000 3D volumes using repeated prompts. Training exclusively on this synthetic data led to an 8% increase in mean AP and a 4% rise in mean AUROC compared to the real-data model. Given the synthetic-data model's outperformance over the real-data model, alongside computational limitations (each generated volume takes 184 seconds and is 400 MB, totaling 40 TB for 100,000 volumes), further extensions have not been pursued.\nThe results, detailed in Fig. 6, demonstrate GenerateCT's effectiveness in clinical settings. 
First, data augmentation, even by a single factor, significantly boosts performance, underscoring its potential for researchers who have realworld data and aim to enhance performance. Second, training on a larger, fully synthetic dataset after a fivefold increase yielded notably better scores compared to the real-data-only model, highlighting GenerateCT's contribution to data privacy. This approach enables researchers to train and share generation models, like ours, facilitating the creation of synthetic data using text prompts, thus having even better performance without privacy or data-sharing concerns. Third, the increase in scores with repetitive prompt use for data generation indicates GenerateCT's ability to generate variable data using the same prompts. Further training details and accuracies by abnormality type are in the supplementary. Utilizing GenerateCT in a zero-shot setting. To evaluate GenerateCT's ability to generalize to external datasets, we conducted an experiment using RadChestCT [11], which consists of 3,630 chest CT volumes with a mean abnormality label frequency of 0.129. We created a new dataset using text prompts not included in our GenerateCT training, matching RadChestCT's training set in terms of volume count and abnormality distribution. The classifier [11] was trained on this synthetic dataset, the original RadChestCT dataset, and a combination of both. The results were promising: the model trained on the synthetic data achieved close performance metrics to the model trained on real patient data, with mean APs of 0.177 (real) versus 0.146 (synthetic) and mean AU-ROCs of 0.613 (real) versus 0.536 (synthetic). Training on the combined dataset significantly increased the performance (mean AUROC 0.623, mean AP 0.190). This demonstrates that GenerateCT's key benefits extend to external datasets and its potential for clinical applications, even with unseen prompts. The supplementary provides further training details and accuracies by abnormality type." }, { "figure_ref": [], "heading": "Conclusion and Discussion", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce GenerateCT, the first text-conditional 3D medical image generation framework, specifically for 3D chest CT volumes. Our experiments demonstrate its capability to generate realistic, high-quality volumes from text prompts, as well as its clinical applications in multi-abnormality classification. As a major step forward in this domain, we make GenerateCT fully open-source to lay a solid foundation for future research and development. Limitations. Despite its innovative approach, GenerateCT faces several challenges. Its uniqueness leads to a lack of benchmarks, which limits comprehensive evaluation, though we established baseline methods for comparison. While it is designed to handle 3D CT volumes of varying sizes in-depth, a detailed assessment of this capability remains to be performed. Our dataset, sourced from a single institution, may lack sufficient diversity, raising concerns about bias and limited applicability. Expanding training beyond just the impression sections could enhance outcomes. Moreover, the significant computational demands associated with its 3D nature pose challenges in resource-constrained settings. 
This supplementary document enhances and expands upon the findings detailed in the main paper, focusing on three critical dimensions:\n- Enhanced Qualitative Results: It introduces an expanded collection of examples featuring various windowing techniques for comparison, demonstrating how GenerateCT effectively creates 3D CT volumes from text descriptions.\n- Detailing Clinical Application: A comprehensive examination is presented on the utility of GenerateCT within a clinical setting, particularly focusing on its role in data augmentation for the classification of multiple abnormalities.\n- Generalization and Adaptability in Clinical Settings: Further, it presents a detailed exploration of another practical clinical application of GenerateCT, where we illustrate its ability to generalize to external datasets and its proficiency in generating 3D chest CT volumes from unseen prompts.\nFig. 1 includes example prompts such as: 22 years old male - cardiomegaly and minimal ground-glass opacities in both lungs, interlobular septal thickening and lymphadenopathy; 34 years old male - atelectasis, ground-glass opacity mostly in the anterior, mosaic attenuation pattern, centrilobular emphysema, bronchiectasis; 36 years old female - cardiomegaly, segmental-subsegmental bronchiectasis, bronchial wall thickening with accompanying ground-glass opacity in both lungs, consolidation; each slice is rendered raw and under lung and mediastinal windows." }, { "figure_ref": [ "fig_5", "fig_0" ], "heading": "Comprehensive Qualitative Results", "publication_ref": [], "table_ref": [], "text": "This section showcases a broad spectrum of 3D chest CT volumes generated by GenerateCT. Fig. 1 displays 2D axial slices from synthetic 3D CT volumes, illustrating both the raw HU range of [-1000, +1000] and various windowing techniques. These methods align with clinical practice and reveal the generative details derived from medical text descriptions. This emphasizes GenerateCT's precision in capturing spatial details as well as its adeptness in handling dynamic ranges. Furthermore, Fig. 2 highlights the efficacy of GenerateCT's cross-attention mechanism in accurately associating specific pathologies mentioned in text prompts with the corresponding areas across different window settings.\nThese visualizations demonstrate the model's exceptional ability to convert medical language into clinically relevant, spatially precise image features, showcasing its potential to create detailed and accurate 3D images from textual prompts." }, { "figure_ref": [], "heading": "Utilizing GenerateCT in Data Augmentation", "publication_ref": [], "table_ref": [], "text": "In this section, we take a closer look at a practical clinical application of GenerateCT. Through a case study, we demonstrate the training of a multi-abnormality classification model with synthetic chest CT volumes generated from medical text prompts. This detailed examination underscores the substantial potential of GenerateCT in data augmentation, particularly in scenarios where obtaining real patient data is limited or challenging. Furthermore, we highlight GenerateCT's contribution to data privacy. Our approach enables researchers to train and share models similar to ours, promoting the creation of synthetic data through text prompts, thereby enhancing performance without compromising privacy or raising data-sharing concerns. Additionally, we show that GenerateCT can reliably generate diverse data, even when using the same prompts repeatedly."
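The pathology-wise cross-attention maps discussed in the main paper and in Fig. 2 above are obtained by averaging attention over heads and over the prompt tokens of a pathology phrase, then upsampling the low-resolution token map to the CT volume size. The sketch below is a minimal illustration under assumptions: it takes attention weights of shape (heads, CT tokens, text tokens) and uses trilinear interpolation as a stand-in for the affine upscaling; tensor shapes and names are illustrative, not the released implementation.

```python
import torch
import torch.nn.functional as F

def pathology_attention_volume(attn, token_grid, pathology_token_ids, out_shape):
    """attn: (heads, n_ct_tokens, n_text_tokens) cross-attention weights for one volume."""
    heads, n_ct, _ = attn.shape
    t, h, w = token_grid
    assert n_ct == t * h * w, "CT token count must match the (T', H', W') token grid"
    # Average over heads and over the text tokens belonging to the pathology phrase.
    amap = attn[:, :, pathology_token_ids].mean(dim=(0, 2))          # (n_ct,)
    amap = amap.reshape(1, 1, t, h, w)                                # low-resolution 3D map
    # Upsample to the CT volume resolution (trilinear interpolation as a stand-in).
    amap = F.interpolate(amap, size=out_shape, mode="trilinear", align_corners=False)
    amap = (amap - amap.min()) / (amap.max() - amap.min() + 1e-8)     # normalize for overlay
    return amap[0, 0]                                                 # (D, H, W) heatmap

attn = torch.rand(8, 4 * 14 * 14, 32)                                 # hypothetical weights
heatmap = pathology_attention_volume(attn, (4, 14, 14), [5, 6], (201, 420, 420))
```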
}, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b10", "b31", "b8" ], "table_ref": [], "text": "Our initial step involved training a multi-abnormality classification model on all our available training data, comprising 20,000 unique patient profiles with 18 different abnormality labels, using real chest CT volumes. This baseline achieved a mean average precision (AP) of 0.254 and a mean area under the receiver operating characteristic curve (AUROC) of 0.631. To illustrate GenerateCT's effectiveness in scenarios with available real patient data, we augmented the training dataset by creating an equal number of synthetic volumes with GenerateCT, effectively doubling it. Furthermore, to demonstrate GenerateCT's efficacy in situations lacking real patient data and its capacity to generate large numbers of synthetic volumes, we produced 100,000 CT volumes, fivefold the number in our original dataset, through the repetitive use of the same prompts and trained the classifier solely on this synthetic data. Our experiment utilized the CT-Net model [11], with its default parameters for classifying 18 distinct abnormalities. The Stochastic Gradient Descent optimizer [32] was employed with a learning rate of 0.001 and a weight decay of 0.0000001. All training sessions spanned 15 epochs with a batch size of 12, conducted on three A6000 48G GPUs. For consistency, all volumes were resized to 420×420×201, and HU values were calibrated to a range of [-1000, +200], focusing on heart and lung abnormalities [9]." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "Tab. 1 details the model's performance in various training scenarios, highlighting AUROC and AP metrics across 18 abnormalities. We observed an 11% improvement in mean AP and a 6% increase in mean AUROC when training on both real and an equal number of synthetic volumes, compared to using only real data.\nExpanding the synthetic dataset to 100,000 volumes and training exclusively on this data resulted in an 8% rise in mean AP and a 4% increase in mean AUROC compared to the model trained on all the real data available to us. Validation was performed on the same real-patient dataset across all training scenarios. These results underscore GenerateCT's effectiveness in data augmentation; the significant performance gain from merely doubling the dataset size is beneficial for researchers with access to real-world data. Moreover, training with a larger, entirely synthetic dataset produced superior results compared to the real-data-only model, underscoring GenerateCT's role in ensuring data privacy. This approach facilitates the training and sharing of models like ours, allowing the generation of synthetic data using text prompts, thus enhancing performance while avoiding privacy or data-sharing concerns. Furthermore, the consistent improvement in performance metrics, even with repetitive use of the same prompts, illustrates GenerateCT's ability to produce varied data from identical inputs.\nIn conclusion, the results in Tab. 1 establish GenerateCT as a valuable asset in data augmentation. Our experimental findings underscore GenerateCT's capability to generate detailed and realistic 3D chest CT volumes that accurately align with diverse text prompts. These outcomes mark a significant advancement in 3D medical imaging, suggesting that GenerateCT can be a powerful tool for enhancing diagnostic and treatment planning processes.
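For concreteness, the classifier training configuration described in the experimental setup above can be summarized as the following minimal sketch. The model and dataset objects are hypothetical stand-ins, and the binary cross-entropy loss is an assumption (the loss is not specified in the text); the HU range, volume size, optimizer settings, epoch count, and batch size follow the description.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

def preprocess_volume(volume_hu: torch.Tensor) -> torch.Tensor:
    """Clip to [-1000, +200] HU and resize to 420x420x201 voxels, as described above."""
    v = volume_hu.float().clamp(-1000, 200).unsqueeze(0).unsqueeze(0)   # (1, 1, D, H, W)
    v = torch.nn.functional.interpolate(v, size=(201, 420, 420), mode="trilinear",
                                        align_corners=False)
    return v.squeeze(0)                                                  # (1, 201, 420, 420)

def train_classifier(model: nn.Module, dataset) -> nn.Module:
    loader = DataLoader(dataset, batch_size=12, shuffle=True, num_workers=4)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, weight_decay=1e-7)
    criterion = nn.BCEWithLogitsLoss()           # assumed multi-label loss over 18 labels
    for _ in range(15):                          # 15 epochs
        for volumes, labels in loader:           # labels: (B, 18) float multi-hot
            optimizer.zero_grad()
            loss = criterion(model(volumes), labels)
            loss.backward()
            optimizer.step()
    return model
```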
Moreover, the potential of GenerateCT to simulate realistic, high-resolution medical images based on textual descriptions opens new avenues for future applications in healthcare." }, { "figure_ref": [], "heading": "Utilizing GenerateCT in a Zero-Shot Setting", "publication_ref": [ "b10" ], "table_ref": [], "text": "In this section, we detail the application of GenerateCT in a zero-shot scenario, evaluating the model's ability to generalize to external datasets and perform with unseen prompts. We selected RadChestCT [11] as the external dataset, which comprises 3,630 3D chest CT volumes featuring 83 different abnormalities and a mean abnormality label frequency of 0.129." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "Initially, we established a baseline by training the classifier on RadChestCT, which included 2,286 3D CT volumes for training and 1,344 for validation. Each volume was associated with labels for 83 unique abnormalities. Subsequently, we generated a new dataset matching the volume count and abnormality distribution of RadChestCT's training set, resulting in 2,286 synthetic 3D CT volumes. The generation process employed structured medical language text prompts, {age} years old {sex}: {impression}, where {impression} denoted the specific abnormalities. These text prompts were novel, not included in the original training data for GenerateCT, and featured a unique distribution of abnormalities. Due to the absence of age and sex parameters in RadChestCT, these were assigned randomly. The classifier underwent training using both the synthetic dataset and a combination of synthetic and real data. To ensure consistency, we applied the same preprocessing and model parameters as described in Sec. 2." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "Tab. 2 presents the scores for each training scenario across all 83 abnormalities, noting comparable results between models trained on synthetic and real data: a mean AP of 0.146 and AUROC of 0.536 for synthetic, against 0.177 AP and 0.613 AUROC for real data. This similarity is significant, given that both scenarios used the same real patient dataset for validation, originating from a different institutional setup than that used for GenerateCT training. Training jointly with synthetic and real patient data showed a modest increase in both mean AUROC (0.623) and mean AP (0.190), underscoring the value of synthetic data in model training. The results in Tab. 2 establish GenerateCT as a valuable tool for data generation from unseen prompts. Our experimental results highlight GenerateCT's ability to create detailed and realistic 3D chest CT volumes that correspond accurately to diverse text prompts not used during training. This demonstrates the extension of GenerateCT's key benefits, mentioned in Sec. 2, to external datasets and its potential for clinical application with unseen prompts. " } ]
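The zero-shot prompt construction above follows the template {age} years old {sex}: {impression}, with age and sex drawn at random because RadChestCT does not provide them. The snippet below is a minimal sketch of that construction; the impression strings and the age range are illustrative assumptions.

```python
import random

def build_prompt(impression: str, rng: random.Random) -> str:
    age = rng.randint(18, 90)                     # assumed range; not specified in the text
    sex = rng.choice(["male", "female"])
    return f"{age} years old {sex}: {impression}"

rng = random.Random(0)
impressions = [
    "Pleural effusion and atelectasis.",
    "Cardiomegaly. Coronary artery wall calcification.",
]
prompts = [build_prompt(imp, rng) for imp in impressions]
```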
2024-03-11
[ { "authors": "A Arnab; M Dehghani; G Heigold; C Sun; M Lučić; C Schmid", "journal": "Proceedings of the IEEE/CVF international conference on computer vision", "ref_id": "b0", "title": "Vivit: A video vision transformer", "year": "2021" }, { "authors": "Y Balaji; S Nah; X Huang; A Vahdat; J Song; K Kreis; M Aittala; T Aila; S Laine; B Catanzaro", "journal": "", "ref_id": "b1", "title": "ediffi: Text-to-image diffusion models with an ensemble of expert denoisers", "year": "2022" }, { "authors": "I Beltagy; M E Peters; A Cohan", "journal": "", "ref_id": "b2", "title": "Longformer: The long-document transformer", "year": "2020" }, { "authors": "A Blattmann; R Rombach; H Ling; T Dockhorn; S W Kim; S Fidler; K Kreis", "journal": "", "ref_id": "b3", "title": "Align your latents: High-resolution video synthesis with latent diffusion models", "year": "2023" }, { "authors": "P Chambon; C Bluethgen; J B Delbrouck; R Van Der Sluijs; M Połacin; J M Z Chaves; T M Abraham; S Purohit; C P Langlotz; A Chaudhari", "journal": "", "ref_id": "b4", "title": "Roentgen: Vision-language foundation model for chest x-ray generation", "year": "2022" }, { "authors": "H Chang; H Zhang; L Jiang; C Liu; W T Freeman", "journal": "", "ref_id": "b5", "title": "Maskgit: Masked generative image transformer", "year": "2022" }, { "authors": "W Chen; H Hu; C Saharia; W W Cohen", "journal": "", "ref_id": "b6", "title": "Re-imagen: Retrieval-augmented text-to-image generator", "year": "2022" }, { "authors": "A Clark; J Donahue; K Simonyan", "journal": "", "ref_id": "b7", "title": "Adversarial video generation on complex datasets", "year": "2019" }, { "authors": "T D Denotter; J Schubert", "journal": "Hounsfield unit", "ref_id": "b8", "title": "", "year": "2019" }, { "authors": "M Ding; Z Yang; W Hong; W Zheng; C Zhou; D Yin; J Lin; X Zou; Z Shao; H Yang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b9", "title": "Cogview: Mastering text-to-image generation via transformers", "year": "2021" }, { "authors": "R L Draelos; D Dov; M A Mazurowski; J Y Lo; R Henao; G D Rubin; L Carin", "journal": "Medical image analysis", "ref_id": "b10", "title": "Machine-learning-based multiple abnormality prediction with large-scale chest computed tomography volumes", "year": "2021" }, { "authors": "S Gu; D Chen; J Bao; F Wen; B Zhang; D Chen; L Yuan; B Guo", "journal": "", "ref_id": "b11", "title": "Vector quantized diffusion model for text-to-image synthesis", "year": "2022" }, { "authors": "M Heusel; H Ramsauer; T Unterthiner; B Nessler; S Hochreiter", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "J Ho; W Chan; C Saharia; J Whang; R Gao; A Gritsenko; D P Kingma; B Poole; M Norouzi; D J Fleet", "journal": "", "ref_id": "b13", "title": "Imagen video: High definition video generation with diffusion models", "year": "2022" }, { "authors": "J Ho; C Saharia; W Chan; D J Fleet; M Norouzi; T Salimans", "journal": "J. Mach. Learn. 
Res", "ref_id": "b14", "title": "Cascaded diffusion models for high fidelity image generation", "year": "2022" }, { "authors": "J Ho; T Salimans; A Gritsenko; W Chan; M Norouzi; D J Fleet", "journal": "", "ref_id": "b15", "title": "Video diffusion models", "year": "2022" }, { "authors": "W Hong; M Ding; W Zheng; X Liu; J Tang", "journal": "", "ref_id": "b16", "title": "Cogvideo: Large-scale pretraining for text-to-video generation via transformers", "year": "2022" }, { "authors": "A E Johnson; T J Pollard; S J Berkowitz; N R Greenbaum; M P Lungren; C Y Deng; R G Mark; S Horng", "journal": "Scientific data", "ref_id": "b17", "title": "Mimic-cxr, a de-identified publicly available database of chest radiographs with free-text reports", "year": "2019" }, { "authors": "J Johnson; A Alahi; L Fei-Fei", "journal": "Springer", "ref_id": "b18", "title": "Perceptual losses for real-time style transfer and super-resolution", "year": "2016" }, { "authors": "T Karras; S Laine; M Aittala; J Hellsten; J Lehtinen; T Aila", "journal": "", "ref_id": "b19", "title": "Analyzing and improving the image quality of stylegan", "year": "2020" }, { "authors": "A Kebaili; J Lapuyade-Lahorgue; S Ruan", "journal": "Journal of Imaging", "ref_id": "b20", "title": "Deep learning approaches for data augmentation in medical imaging: A review", "year": "2023" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b21", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "R Lamba; J P Mcgahan; M T Corwin; C S Li; T Tran; J A Seibert; J M Boone", "journal": "AJR. American journal of roentgenology", "ref_id": "b22", "title": "Ct hounsfield numbers of soft tissues on unenhanced abdominal ct scans: variability between two different manufacturers' mdct scanners", "year": "2014" }, { "authors": "H Lee; W Kim; J H Kim; T Kim; J Kim; L Sunwoo; E Choi", "journal": "", "ref_id": "b23", "title": "Unified chest x-ray and radiology report generation model with multi-view chest x-rays", "year": "2023" }, { "authors": "N Linna; C E Kahn", "journal": "International Journal of Medical Informatics", "ref_id": "b24", "title": "Applications of natural language processing in radiology: A systematic review", "year": "2022" }, { "authors": "A Nichol; P Dhariwal; A Ramesh; P Shyam; P Mishkin; B Mcgrew; I Sutskever; M Chen", "journal": "", "ref_id": "b25", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "A Q Nichol; P Dhariwal", "journal": "PMLR", "ref_id": "b26", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "PMLR", "ref_id": "b27", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "C Raffel; N Shazeer; A Roberts; K Lee; S Narang; M Matena; Y Zhou; W Li; P J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b28", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "A Ramesh; P Dhariwal; A Nichol; C Chu; M Chen", "journal": "", "ref_id": "b29", "title": "Hierarchical textconditional image generation with clip latents", "year": "2022" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b30", "title": "High-resolution image synthesis with latent diffusion models", "year": 
"2022" }, { "authors": "S Ruder", "journal": "", "ref_id": "b31", "title": "An overview of gradient descent optimization algorithms", "year": "2016" }, { "authors": "C Saharia; W Chan; H Chang; C Lee; J Ho; T Salimans; D Fleet; M Norouzi", "journal": "", "ref_id": "b32", "title": "Palette: Image-to-image diffusion models", "year": "2022" }, { "authors": "C Saharia; W Chan; S Saxena; L Li; J Whang; E L Denton; K Ghasemipour; R Gontijo Lopes; B Karagol Ayan; T Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b33", "title": "Photorealistic textto-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "T Unterthiner; S Van Steenkiste; K Kurach; R Marinier; M Michalski; S Gelly", "journal": "", "ref_id": "b34", "title": "Fvd: A new metric for video generation", "year": "2019" }, { "authors": "A Van Den Oord; O Vinyals", "journal": "Advances in neural information processing systems", "ref_id": "b35", "title": "Neural discrete representation learning", "year": "2017" }, { "authors": "R Villegas; M Babaeizadeh; P J Kindermans; H Moraldo; H Zhang; M T Saffar; S Castro; J Kunze; D Erhan", "journal": "", "ref_id": "b36", "title": "Phenaki: Variable length video generation from open domain textual description", "year": "2022" }, { "authors": "V Voleti; A Jolicoeur-Martineau; C Pal", "journal": "", "ref_id": "b37", "title": "Masked conditional video diffusion for prediction, generation, and interpolation", "year": "2022" }, { "authors": "M J Willemink; P B Noël", "journal": "European radiology", "ref_id": "b38", "title": "The evolution of image reconstruction for ct-from filtered back projection to artificial intelligence", "year": "2019" }, { "authors": "C Wu; L Huang; Q Zhang; B Li; L Ji; F Yang; G Sapiro; N Duan", "journal": "", "ref_id": "b39", "title": "Godiva: Generating open-domain videos from natural descriptions", "year": "2021" }, { "authors": "C Wu; J Liang; L Ji; F Yang; Y Fang; D Jiang; N Duan", "journal": "Springer", "ref_id": "b40", "title": "Nüwa: Visual synthesis pre-training for neural visual world creation", "year": "2022" }, { "authors": "W Yan; Y Zhang; P Abbeel; A Srinivas", "journal": "", "ref_id": "b41", "title": "Videogpt: Video generation using vq-vae and transformers", "year": "2021" }, { "authors": "R Yang; P Srivastava; S Mandt", "journal": "", "ref_id": "b42", "title": "Diffusion probabilistic modeling for video generation", "year": "2022" }, { "authors": "J Yu; X Li; J Y Koh; H Zhang; R Pang; J Qin; A Ku; Y Xu; J Baldridge; Y Wu", "journal": "", "ref_id": "b43", "title": "Vector-quantized image modeling with improved vqgan", "year": "2021" }, { "authors": "J Yu; Y Xu; J Y Koh; T Luong; G Baid; Z Wang; V Vasudevan; A Ku; Y Yang; B K Ayan", "journal": "", "ref_id": "b44", "title": "Scaling autoregressive models for content-rich text-to-image generation", "year": "2022" }, { "authors": "C Zhang; C Zhang; M Zhang; I S Kweon", "journal": "", "ref_id": "b45", "title": "Text-to-image diffusion model in generative ai: A survey", "year": "2023" } ]
[ { "formula_coordinates": [ 6, 115.2, 454.48, 183.45, 12.73 ], "formula_id": "formula_0", "formula_text": "z x = Φ CTViT e (x lr ) and xlr = Φ CTViT d (z x )." }, { "formula_coordinates": [ 6, 34.02, 526.42, 345.83, 21.67 ], "formula_id": "formula_1", "formula_text": "B × C × 1 × (H • p 1 ) × (W • p 2 ) to B × 1 × H × W × (C • p 1 • p 2 )." }, { "formula_coordinates": [ 6, 108.62, 572.42, 96.76, 13.47 ], "formula_id": "formula_2", "formula_text": "B × 1 × H p1 × W p2 × D." }, { "formula_coordinates": [ 7, 34.02, 37.17, 345.83, 21.61 ], "formula_id": "formula_3", "formula_text": "B × 1 × (T • p t ) × (H • p 1 ) × (W • p 2 ) to B × T × H × W × (C • p t • p 1 • p 2 )" }, { "formula_coordinates": [ 7, 73.32, 84.68, 114.75, 13.47 ], "formula_id": "formula_4", "formula_text": "B × (1 + T ) × H p1 × W p2 × D." }, { "formula_coordinates": [ 7, 34.02, 108.59, 123.62, 13.47 ], "formula_id": "formula_5", "formula_text": "(B • (1 + T )) × ( H p1 • W p2 ) × D" }, { "formula_coordinates": [ 7, 260.16, 123.07, 119.68, 13.47 ], "formula_id": "formula_6", "formula_text": "( H p1 • W p2 ) × (B • (1 + T )) × D," }, { "formula_coordinates": [ 7, 85.03, 438, 244.4, 12.69 ], "formula_id": "formula_7", "formula_text": "ẑ * x = Φ M T mask[z * x ], Φ * T5X (r) and xlr = Φ * CTViT d (ẑ * x )." }, { "formula_coordinates": [ 8, 67.29, 332.27, 279.29, 12.69 ], "formula_id": "formula_8", "formula_text": "z * r = Φ * T5X (r) and x = Φ * Diff Φ * CTViT d Φ * M T ([empty], z * r ), z * r ," } ]
GenerateCT: Text-Conditional Generation of 3D Chest CT Volumes
Ibrahim Ethem Hamamci; Sezgin Er; Anjany Sekuboyina; Enis Simsar; Alperen Tezcan; Ayse Gulnihan Simsek; Sevval Nil Esirgun; Furkan Almas; Irem Doğan; Muhammed Furkan Dasdelen; Chinmay Prabhakar; Hadrien Reynaud; Sarthak Pati; Christian Bluethgen; Mehmet Kemal Ozdemir; Bjoern Menze
[ { "figure_caption": "Fig. 2 :2Fig. 2: The GenerateCT architecture consists of three main components. (1) The CT-ViT encoder architecture processes the embeddings of CT patches from raw slices S through a spatial transformer followed by a causal transformer (auto-regressive indepth), generating CT tokens. (2) The vision-language transformer is trained to reconstruct masked tokens based on the frozen CT-ViT encoder's predictions, conditioned on T5X text prompt tokens. (3) A text-conditional diffusion model is employed to upsample low-resolution slices from generated 3D chest CT volumes. Finally, Gener-ateCT demonstrates the capability to generate high-resolution 3D chest CT volumes with arbitrary slice numbers conditioned on medical language text prompts.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig.3: Axial, sagittal, and coronal slices of ground-truth and synthetic 3D chest CT volumes generated by different methods, based on the text prompt: \"26 years old male: Findings compatible with COVID-19 pneumonia\". The results showcase GenerateCT's proficiency in crafting detailed and spatially consistent 3D volumes. Although comparing with ground truth is not customary in text-to-image research, its inclusion here acts as a reference point, highlighting GenerateCT's ability to produce diverse volumes that accurately align with text prompts, instead of merely replicating training data.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Three sequential slices from each synthetic 3D chest CT within the practical HU range of [-1000 HU, +1000 HU] generated based on the given prompt, showcasing GenerateCT's proficiency in preserving spatial consistency across successive slices. Abnormalities referenced in the prompts are color-highlighted, underscoring our method's precision in translating textual descriptions into clinically accurate volumetric features.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "\"\"Fig. 5 :5Fig. 5: Cross-attention maps for showing specific abnormalities in the text-conditional generation of chest CT volumes, highlighting GenerateCT's precision in aligning text with relevant regions. Colors from blue to red represent the weights from low to high.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig.6: Comparative analysis of multi-abnormality classification models with incremental data augmentation using GenerateCT underscores its significant clinical utility, especially in low-data environments. Given that the mean frequency of abnormalities in the test set is 0.179, lower mean AP values are expected for models.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 1 :1Fig. 1: Example 2D slices of generated 3D CT volumes with varied windowing settings. Each example includes three windowing settings for the same slice: (1) within the raw HU range of [-1000 HU, +1000 HU], (2) lung window within the range of [-1000 HU, +150 HU], and (3) mediastinal window within the range of [-125 HU, +225 HU]. 
This highlights GenerateCT's ability to produce highly detailed and clinically accurate 3D chest CT volumes based on text descriptions.", "figure_data": "", "figure_id": "fig_5", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Cross-attention maps illustrate specific abnormalities in the text-conditional generation of 3D chest CT volumes with varied windowing settings, underscoring Gen-erateCT's precision in translating medical terminology into clinically relevant image features in the corresponding areas. Although our work generates comprehensive 3D chest CT volumes, we present only 2D axial slices due to presentation and visualization constraints. These slices act as representative examples to demonstrate the depth and detail GenerateCT can achieve, providing insights into its ability to accurately depict complex anatomical structures and abnormalities in a three-dimensional context.", "figure_data": "", "figure_id": "fig_6", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Quantitative results for GenerateCT and its variants, compared with baseline methods, demonstrate our method's superior performance across all key metrics, underscoring its effectiveness in generating 3D chest CT volumes from medical text prompts. Sampling time tests were conducted on an NVIDIA A100 80GB GPU.", "figure_data": "MethodOut Time(s) FVD I3D ↓ FVD CT-Net ↓ FID↓ CLIP↑Base w/ Imagen Base w/ SD Base w/ Phenaki 3D 2D 2D234 367 233557.7 3513.5 1886.817.319 21.194 9.5534160.8 151.7 104.324.8 23.5 25.2Ours (2SCM) Ours (3SCM) Ours (4SCM)3D 3D 3D102 184 2441661.4 1092.3 1201.48.9021 8.1745 8.586986.9 55.8 71.325.9 27.1 26.6", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Labeling outcomes by experts for authenticity prediction and text prompt alignment with real and synthetic 3D CT volume. The statistical analysis underscores the convincing realism and text alignment of the generated 3D chest CT volumes.", "figure_data": "First Radiologist (4 years)Second Radiologist (11 years)TaskReal Volumes Synthetic Volumes Real Volumes Synthetic VolumesReal: 74 3D Realism Synthetic: 26Real: 41 Synthetic: 59Real: 71 Synthetic: 29Real: 36 Synthetic: 64Matched: 82 Alignment Mismatched: 18 Mismatched: 34 Mismatched: 17 Mismatched: 30 Matched: 66 Matched: 83 Matched: 70", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance metrics for all abnormalities with incremental data augmentation using GenerateCT. This highlights its significant clinical utility, especially in data augmentation, and its applicability in scenarios where real patient data sharing is challenging. It facilitates sharing trained models rather than private patient data, especially since the synthetic-only model outperforms the real data-only model.", "figure_data": "MeanthickeningInterlobular septalBronchiectasisConsolidationPeribronchial thickeningMosaic attenuation patternPleural effusionPulmonary fibrotic sequelaLung opacityLung noduleAtelectasisEmphysemaLymphadenopathyHiatal herniaC. 
artery wall calcificationPericardial effusionCardiomegalyArterial wall calcificationMedical materialAbnormality0.631 0.254 0.669 0.282 0.601 0.234 0.619 0.247 0.656 0.274 0.1790.699 0.132 0.768 0.141 0.614 0.121 0.645 0.124 0.691 0.185 0.0700.573 0.111 0.658 0.132 0.535 0.098 0.582 0.095 0.598 0.098 0.0930.655 0.235 0.693 0.264 0.592 0.154 0.599 0.168 0.6355 0.185 0.1460.513 0.073 0.503 0.152 0.551 0.084 0.580 0.099 0.604 0.158 0.0690.739 0.152 0.712 0.195 0.594 0.097 0.612 0.087 0.661 0.125 0.0560.777 0.323 0.815 0.365 0.632 0.198 0.678 0.205 0.725 0.286 0.1250.531 0.256 0.638 0.258 0.558 0.241 0.592 0.240 0.624 0.245 0.2410.603 0.477 0.785 0.490 0.549 0.485 0.542 0.506 0.598 0.545 0.3950.560 0.483 0.621 0.456 0.523 0.415 0.562 0.420 0.685 0.452 0.4490.609 0.314 0.585 0.352 0.595 0.284 0.550 0.276 0.625 0.297 0.2310.522 0.202 0.621 0.254 0.512 0.235 0.542 0.254 0.582 0.288 0.1930.616 0.345 0.679 0.399 0.591 0.301 0.612 0.351 0.642 0.345 0.2450.544 0.159 0.542 0.152 0.638 0.298 0.652 0.325 0.685 0.345 0.1400.649 0.384 0.691 0.452 0.794 0.428 0.790 0.493 0.825 0.493 0.2460.667 0.044 0.685 0.056 0.642 0.079 0.690 0.094 0.662 0.152 0.0260.804 0.310 0.745 0.352 0.590 0.142 0.593 0.154 0.599 0.184 0.1020.648 0.434 0.605 0.452 0.714 0.445 0.705 0.415 0.715 0.405 0.2920.650 0.143 0.702 0.156 0.594 0.109 0.623 0.141 0.656 0.149 0.082AUROC AP AUROC AP AUROC AP AUROC AP AUROC AP Test Set20k Real 20k Real+20k Synth 20k Synthetic 40k Synthetic 100k Synthetic", "figure_id": "tab_6", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance metrics for different abnormalities across various training datasets, highlighting GenerateCT's generation capability based on unseen prompts.", "figure_data": "Real DataSynthetic DataComposite DataAbnormalityAUROCAPAUROCAPAUROCAPTest SetAir trapping0.5610.0440.6210.0500.6330.0510.031Airspace disease0.6050.2580.5710.2100.6070.2330.171Aneurysm0.5770.0150.4930.0120.5870.0200.011Arthritis0.5150.2840.5100.2980.5050.2820.279Aspiration0.6160.0910.5180.0510.6240.0920.049Atelectasis0.5750.3490.5790.3560.5960.4080.290Atherosclerosis0.5500.3140.4730.2810.5250.2970.294Bandlike or linear0.4610.1560.5110.1910.4830.1660.177Breast implant0.3870.0160.3250.0120.5500.0660.017Breast surgery0.4990.0300.5040.0260.4840.0370.023Bronchial thickening0.5560.0800.4740.0740.5660.0860.070Bronchiectasis0.7040.3130.5430.1790.6660.2340.154Bronchiolectasis0.7390.0680.4750.0210.6830.0440.021Bronchiolitis0.4430.0240.4920.0250.5090.0260.025Bronchitis0.5330.0100.5670.0230.5700.0110.008CABG0.7540.1180.5040.0680.7640.1150.041Calcification0.4260.6690.5010.7270.4280.6760.721Cancer0.5930.6140.5230.5750.6180.6360.563Cardiomegaly0.7520.2380.6220.1420.7980.3140.094Catheter or port0.6600.2180.5910.1200.6810.2660.084Cavitation0.6040.0560.4930.0580.5890.0560.040Chest tube0.8640.1230.6400.0370.8810.1730.018Clip0.4880.0980.5320.1170.4910.1060.092Congestion0.8850.0420.7010.0150.9510.2660.005Consolidation0.6900.2860.5650.1930.6800.2560.139Coronary artery disease0.5670.6080.5000.5680.5820.6070.566Cyst0.4970.1690.4690.1560.4880.1620.167Debris0.6970.0810.5720.0480.6970.1110.038Deformity0.5800.0620.4750.0510.5510.0570.052Density0.5360.1060.4990.0950.5380.1160.092Dilation or ectasia0.5710.0630.4580.0510.5890.0660.046Distention0.5920.0200.6530.0560.6410.0190.011Emphysema0.6230.3290.4210.2300.6140.3520.275Fibrosis0.7920.3320.5740.1520.7750.2590.118Fracture0.6010.0940.5360.0750.5880.0970.070GI 
tube0.9000.1920.7100.0670.9100.2690.018Granuloma0.4480.0710.4110.0660.4500.0710.080Groundglass0.5940.4150.5240.3410.5890.4220.325Hardware0.4470.0220.5130.0280.4160.0210.026Heart failure0.8780.0560.5850.0130.9510.1990.009Heart valve replacement0.7450.0430.7210.0590.8580.1650.014Hemothorax0.8890.1250.7210.0110.8330.0320.005Hernia0.5230.1200.4880.1260.5480.1260.115Honeycombing0.9030.2580.5660.0450.8460.1050.032Infection0.5390.3550.4480.3010.5380.3540.317Infiltrate0.4130.0150.4380.0210.3520.0140.018Inflammation0.5290.0870.4290.0760.5210.0860.082Interstitial lung disease0.7390.3620.5650.1960.7420.3040.152Lesion0.4670.2340.4870.2460.4820.2350.251Lucency0.5740.0280.5670.0280.5560.0410.018Lung resection0.5190.2220.5160.2360.5450.2420.229Lymphadenopathy0.6820.2600.5800.1910.6860.2720.151Mass0.4980.1230.5410.1490.5050.1280.128Mucous plugging0.5190.0280.4130.0270.4800.0270.028Nodule0.6490.8580.6000.8550.6820.8730.800Nodule >1cm0.5150.1360.5440.1580.4990.1210.128Opacity0.3690.4560.5390.5710.6340.6670.543Pacemaker/defibrillator0.7780.1280.5630.0790.8570.2610.049Pericardial effusion0.6260.2070.5440.1670.6290.2360.143Pericardial thickening0.5010.0240.5510.0760.5380.0260.025Plaque0.6080.0340.4080.0230.5660.0310.024Pleural effusion0.7700.4240.6560.3080.7920.5070.199Pleural thickening0.5830.1200.5730.1250.5490.1180.100Pneumonia0.6290.0790.5690.0670.6640.0960.050Pneumonitis0.6770.0700.5780.0340.6890.0520.027Pneumothorax0.7800.1960.5760.0300.8150.1930.024Postsurgical0.5540.5250.5170.5030.5370.5210.485Pulmonary edema0.8160.1440.6380.0810.8520.2170.034Reticulation0.7470.2110.5590.1210.7100.1650.090Scarring0.4480.1930.4620.2190.5310.2470.227Scattered calcifications0.5190.1870.5060.1900.4910.1870.183Scattered nodules0.4970.2160.4630.2110.4940.2250.223Secretion0.5870.0190.5300.0190.5990.0210.014Septal thickening0.7930.1760.6120.1050.7940.1950.060Soft tissue0.4750.1660.5580.2060.4660.1600.171Staple0.5010.0320.5360.0400.4620.0330.031Stent0.5800.0400.5500.0640.5540.0370.032Sternotomy0.7430.1860.5360.0860.7790.2410.068Suture0.5070.0280.5340.0220.4660.0220.020Tracheal tube0.9370.2340.7100.0330.9310.2320.013Transplant0.7010.1740.5740.0990.7130.1780.074Tree in bud0.5730.0640.3990.0200.5910.0350.023Tuberculosis0.5340.0050.3660.0030.4670.0060.003Mean0.6130.1770.5360.1460.6230.1900.129", "figure_id": "tab_7", "figure_label": "2", "figure_type": "table" } ]
[{"Category": "Extension or Continuation", "Citation": "[21]", "Explanation": "The cited work highlights the need for a more spatially complex modality in medical image analysis, which the citing paper extends by exploring the generation of 3D CT and MRI images from free-flowing text."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work highlights the exponential increase in computational complexity associated with 3D medical imaging, which serves as the basis for the discussion in the citing paper on the challenges in generating 3D medical imaging."}, {"Category": "Extension or Continuation", "Citation": "[46]", "Explanation": "The cited work mentions the lack of pre-trained 3D models for fine-tuning, which the citing paper extends by proposing a new method for generating 3D medical imaging conditioned on free-form text prompts."}, {"Category": "Extension or Continuation", "Citation": "[25]", "Explanation": "The cited work highlights the scarcity of 3D medical imaging data paired with radiology reports, which the citing paper extends by proposing a method to address this challenge in the medical field."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work mentions the use of a novel causal vision transformer, CT-ViT, to encode 3D CT volumes into tokens, which the citing paper adopts in its framework to generate 3D medical imaging."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work mentions the use of a bidirectional text-image transformer to align CT tokens with the encoded tokens of free-form radiology text, which the citing paper uses in its framework to generate 3D medical imaging."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work on masked CT token prediction provides the method used in the citing paper to facilitate alignment in the generation of 3D chest CT volumes."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work on cascaded diffusion model is extended in the citing paper to enhance the in-plane resolution of the generated low-resolution volumes in a text-conditioned manner."}, {"Category": "Extension or Continuation", "Citation": "[34]", "Explanation": "The cited work on text-conditioned resolution enhancement is further extended in the citing paper to ensure faithful resolution enhancement based on the input prompt."}, {"Category": "Supporting Evidence", "Citation": "[3]", "Explanation": "The cited work on text-to-video generation in 3D chest CT synthesis highlights the optimized benefits of the citing paper over other 3D generation approaches."}, {"Category": "Data Source", "Citation": "[1]", "Explanation": "The cited work on the use of state-of-the-art generation models in the citing paper serves as a data source for the design of baseline methods."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work on the use of text-conditional 2D image generation methods in the citing paper is used as a methodological basis for comparison in the design of baseline methods."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work introduces a method of using cascaded diffusion models to enhance the resolution and duration of video generation, which the citing paper builds upon in their research on text-conditioned video generation."}, {"Category": "Extension or Continuation", "Citation": "[18]", "Explanation": "The cited work, 
MIMIC-CXR, is a publicly available 2D medical imaging dataset that the citing paper extends by focusing on the generation of 3D medical images with text conditioning."}, {"Category": "Data Source", "Citation": "[39]", "Explanation": "The cited work provides the method used to reconstruct the chest CT volumes in the dataset, which is essential for the analysis and research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[9,23]", "Explanation": "The cited works provide the range of values for the Hounsfield Units (HU) that are used to convert the CT volumes into their respective HU values, which is a critical step in the data processing and analysis performed in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[36]", "Explanation": "The cited work introduces the concept of vector quantization, which the citing paper adopts in the CT-ViT decoder to create a discrete latent space for the encoder outputs."}, {"Category": "Supporting Evidence", "Citation": "[44]", "Explanation": "The cited work provides the L2 loss function from ViT-VQGAN that the citing paper incorporates in the CT-ViT decoder to ensure consistency during the reconstruction process."}, {"Category": "Supporting Evidence", "Citation": "[19]", "Explanation": "The cited work contributes the image perceptual loss that the citing paper uses in the CT-ViT decoder to ensure perceptual similarity in the generated 3D CT volumes."}, {"Category": "Supporting Evidence", "Citation": "[20]", "Explanation": "The cited work provides the StyleGAN adversarial loss function that the citing paper incorporates in the CT-ViT decoder to ensure alignment in the generation of 3D CT volumes."}, {"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work on masked visual token modeling serves as the methodological basis for the second stage of GenerateCT, which involves aligning CT and text spaces using a transformer model to predict masked CT tokens based on the text embedding and cross-attention with input CT tokens."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work introduces a diffusion-based text-conditional super-resolution model that the citing paper adopts in the final stage of their research to enhance the resolution of low-resolution 3D CT volumes in the axial dimension."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work on noisy conditioning is used as a basis for the upsampling process in the citing paper, which enhances performance and aligns with the principles of noisy conditioning."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work by Adam is used to set the hyperparameters for the training process in the citing paper, which is essential for optimizing the model performance."}, {"Category": "Data Source", "Citation": "(see Sec. 3.1)", "Explanation": "The data used in the training of the CT-ViT model is acknowledged in the cited work, which is crucial for the study conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "Fig. 
2", "Explanation": "The use of medical language text prompts in the paired dataset is an extension of the research conducted in the cited work, exploring a new dimension in the training process of the MaskGIT transformer."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work introduces the I3D model for feature extraction, which the citing paper adopts in their evaluation of the quality of generated 3D chest CT volumes."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work presents the CT-Net model for multi-abnormality classification, which the citing paper uses to extract domain-relevant features and compute distance metrics for a more appropriate comparison in their evaluation of generated volumes."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work introduces the InceptionV3 model for image quality assessment at a slice-level, which the citing paper employs in their evaluation of generated images in 3D chest CT volumes."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work, the pretrained CLIP model, is utilized in the citing paper to quantify the alignment between text prompts and generated volumes, which serves as a benchmark for the training dataset and the process of correlation between visual and textual content."}, {"Category": "Extension or Continuation", "Citation": "[34]", "Explanation": "The cited work, Imagen, is used in the baseline method to assess the importance of the 3D generation architecture in achieving spatial consistency in 3D chest CT volumes. The method is designed to condition Imagen on the slice number and text prompt during training to generate slices that are combined in order to form a 3D volume."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work, Stable Diffusion, is used as a pre-trained 2D text-to-image model for fine-tuning in the citing paper to generate 3D medical images. However, the results show that the fine-tuned model still lacks spatial consistency and accuracy, highlighting the need for a dedicated 3D generation algorithm."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work, Phenaki, is adapted to generate 3D chest CT volumes in the citing paper. The adaptation of the model highlights the use of a state-of-the-art text-to-video generation model in the research conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[11]", "Explanation": "The cited work provides a multi-abnormality classification model that the citing paper extends by training a classifier on the real 3D chest CT volumes to set a benchmark for assessing the clinical potential of GenerateCT in a radiological framework."}, {"Category": "Extension or Continuation", "Citation": "[11]", "Explanation": "The cited work, RadChestCT, is used as a basis for creating a new dataset for the citing paper, which is then used to train a classifier model. The citing paper extends the research by exploring the performance of the model on a new dataset and comparing it to the model trained on real patient data."}, {"Category": "Data Source", "Citation": "[11]", "Explanation": "The cited work, CT-Net, is a model used in the experiment to classify 18 distinct abnormalities in chest CT volumes. 
The citing paper relies on this model to conduct its research and analysis."}, {"Category": "Data Source", "Citation": "[9]", "Explanation": "The cited work provides the range of HU values used in the training sessions to calibrate the volumes in the citing paper."}]
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b2", "b3", "b4", "b5", "b6", "b7", "b10", "b0", "b1", "b0" ], "table_ref": [], "text": "Short-term action anticipation in egocentric videos is the task of predicting the actions that are likely to be performed by the camera wearer in the near future, along with foreseeing the next-active-object interaction and an estimate of the time at which the interaction will occur. The computer vision community has made significant progress in the field of action anticipation in egocentric videos, where existing methods predict only the action labels [3,4,5,6]. However, the use of the next active objects [7,8,9] has not been widely explored in the current literature. Recently, [10] proposed the use of next active objects for anticipating future actions. As described in [10], the task of short-term anticipation remains challenging since it requires the ability to anticipate both the mode of action and the time at which the action will begin, known as the time to contact.\nThe next active objects play a crucial role in understanding the nature of interactions happening in a video. They provide important context for predicting future actions as they indicate which objects are likely to be involved in the next action [11]. In this vein, we propose a novel approach for addressing the problem of STA in egocentric videos. Our approach utilizes a guided attention mechanism between the spatiotemporal features extracted from video clips and objects to enhance the spatial object-centric information, as proposed in [1]. Our model builds on top of StillFast [2].\nThe main contribution of this paper is to show the importance of the proposed guided attention mechanism for next active object-based STA. Our approach aims to better capture the visual cues related to the next active objects, which we assume are highly correlated with the action that will follow. The proposed GANO model is trained and evaluated on the largest egocentric video dataset: Ego4D [10]. Experimental results demonstrate that GANO v2 outperforms the state-of-the-art (SOTA) egocentric action anticipation methods. Additionally, we refer the reader to [1], which investigates the impact of guided attention on the performance of the GANO model with transformer-based prediction heads on \"v1\" of the EGO4D dataset. The results show that incorporating guided attention, in other words, combining the information from spatiotemporal features and objects, improves the STA performance." }, { "figure_ref": [], "heading": "OUR APPROACH", "publication_ref": [ "b0" ], "table_ref": [], "text": "We now describe the details of our method, GANO v2 . However, we refer the readers to the original paper [1] for more details on Guided-Attention." }, { "figure_ref": [], "heading": "Backbone", "publication_ref": [ "b11" ], "table_ref": [], "text": "Given an input video clip, the proposed model takes as input a high-resolution last observed frame from the clip and a low-resolution sampled video V = {v_i}_{i=1}^{T}, where v_i ∈ R^{C×H_o×W_o}. An object detector [12] pre-trained on [10] is used to extract object detections for each sampled video frame. Each detection consists of a bounding box (x1, y1, x2, y2) along with its class label. To process the input image and video simultaneously, the proposed model comprises a two-branch backbone. A 2D CNN backbone processes the high-resolution frame v_T and produces a stack of 2D features at different spatial resolutions, ψ.
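Before the fast branch is described, it may help to fix the shape of the inputs just introduced: the high-resolution last observed frame, the low-resolution sampled clip V, and one list of (class label, x1, y1, x2, y2) detections per sampled frame. The containers below are only an illustrative sketch; names, resolutions, and the clip length are assumptions rather than the released implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple
import torch

@dataclass
class Detection:
    label: int                                   # class id from the pre-trained detector
    box: Tuple[float, float, float, float]       # (x1, y1, x2, y2)

@dataclass
class STAInput:
    last_frame_hr: torch.Tensor                  # (C, H, W) high-resolution last observed frame
    clip_lr: torch.Tensor                        # (C, T, h, w) low-resolution sampled clip
    detections: List[List[Detection]]            # one detection list per sampled frame

sample = STAInput(
    last_frame_hr=torch.rand(3, 800, 1067),
    clip_lr=torch.rand(3, 32, 224, 224),
    detections=[[Detection(label=3, box=(0.10, 0.25, 0.48, 0.66))] for _ in range(32)],
)
```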
The \"fast\" branch consists of two parts: (1) A 3D CNN backbone processes the video, V, and outputs a stack of 3D features. (2) In parallel, an MLP is employed to generate object embeddings from the object detections (class label , x1, y1, x2, y2) for the input video frames. In the final process, the stack of 3D features is fused with object embeddings using the Guided-Attention approach." }, { "figure_ref": [], "heading": ". Object Guided Attention.", "publication_ref": [ "b12" ], "table_ref": [], "text": "We use Objects-Guided Multi-head Attention to efficiently fuse spatiotemporal information across the video clip, and object detections and then infer long-term dependencies across both. Using a single attention head does not suffice as our goal is to allow detection embeddings to attend to co-related patches from the video clip. Therefore, we modify the Multi-Head Attention described in [13] in a way that it can take the inputs from both modalities. To do so, we set Query Q, Key K, and Value V as follows.\nQ = f vid (F i ), where i ∈ [1, ..N], K, V = f ob j (O j ), where j ∈ [1, ...M], Object-Guided Attention(Q,K,V) = Concat(h 1 , ...h h )W o ,\nwhere\nh i = Attention(QW i Q , KW i K , VW i V ), and Attention(Q, K, V) = so f tmax( QK T d k )V(1)\nwhere W i Q , W i K , andW i V are learnable parameter matrices and d k represents the dimensions of K. The output of this Object-Guided Multi-Attention is the attended features for the provided object embeddings, denoted as F i for a single feature layer i. The entire 3D feature stack, Ψ, (Ψ = {F i } N i=1 ) for N feature layers, is sent to the Combined Feature Pyramid Network." }, { "figure_ref": [], "heading": "Models", "publication_ref": [], "table_ref": [], "text": "Data Split Noun N+V N+TTC Overall FRCNN+SF. [ " }, { "figure_ref": [], "heading": ". Feature Pyramid Network and Prediction Head", "publication_ref": [ "b1", "b15", "b1", "b1" ], "table_ref": [], "text": "We adopt the Combined Feature Pyramid Layer and Predicting head from [2] for the purpose of fusing 2D and 3D feature stacks for mid-level feature fusion and final prediction respectively. The 3D feature maps, Ψ are interpolated and averaged out temporally to match the shape of ψ, followed by a 3×3 convolutional layer. The resulting summed to the 2D features, ψ, and then passed through another 3x3 convolutional layer. The resulting feature maps are then fed to a standard Feature Pyramid Layer [15].\nThe prediction head is based on Detectron2 [16] implementation. It consists of a Region Proposal Network (RPN) which predicts region proposals from the feature pyramid. A RoiAlign layer is then used to extract features from the region proposals. As mentioned in [2], a global average pooling from the feature pyramid is also applied to the final layer of feature pyramid outputs and concatenated with local features from region proposals, to be followed by a dense layer. The resulting representations are summed to the original local features through a residual connection. The final features are then used to predict the object class, bounding boxes, verbs, and TTC. We refer readers to [2] for further details." }, { "figure_ref": [], "heading": ". Training and Implementation details", "publication_ref": [], "table_ref": [], "text": "The model is trained end-to-end using classification and regression loss for verb and TTC prediction. In addition, we also employ the standard faster-RCNN losses. We performed experiments on the large-scale egocentric dataset EGO4D [10]. 
We preprocess the input video clips by randomly scaling the height between 248 and 280px and taking 224px crops at training time. We sample 32 consecutive frames as input for the low-resolution stream. The object detections are extracted on the original \"high\" resolution frame and are then scaled down to match the input shape of the video frames. In our experiments, we use a ResNet-50 as the 2D CNN and an X3D-M as the 3D CNN. GANO v2 was trained with an SGD optimizer for 20 epochs with a cosine learning rate schedule (base rate 1e-5), a batch size of 4, and a weight decay of 1e-6 on two NVIDIA Tesla V100 GPUs." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_0", "tab_1" ], "text": "The results in Table 1 show that GANO v2 outperforms the baseline methods on the validation set and achieves the best overall Top-5 mAP on the test set of \"v2\" of the EGO4D dataset. We also conducted an ablation study in Table 2 to investigate the impact of the guided attention mechanism on different feature layers of the 3D CNN output, Ψ. We note that performance improves when the multi-head attention fusion is applied to the last feature layer of the 3D CNN rather than only to the initial one. However, we achieve the best performance when attention is employed on all the feature layers." }, { "figure_ref": [], "heading": "CONCLUSION AND LIMITATIONS", "publication_ref": [], "table_ref": [], "text": "We have presented the Guided-Attention for Next Active Object v2 (GANO v2 ) architecture as used in the EGO4D 2023 challenge. We propose an end-to-end architecture for short-term anticipation that involves predicting the next-active-object class, its location (bounding box), the future action, and the time to contact. Our model obtains better performance compared to other submissions on the test set of \"v2\" of the dataset. A limitation is that it relies on the performance of the object detector for guided attention. In the future, we plan to improve performance by exploring different modalities and fusion-based methods." } ]
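The layer ablation in Table 2 amounts to choosing which layers of the 3D feature stack receive the object-guided fusion. A minimal sketch of that switch is given below; the per-layer attention modules, shared embedding dimension, and feature shapes are illustrative assumptions, not the released implementation.

```python
import torch
from torch import nn

class SelectiveGuidedFusion(nn.Module):
    def __init__(self, num_layers: int, fuse_layers, dim: int = 256, heads: int = 8):
        super().__init__()
        self.fuse_layers = set(fuse_layers)
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(num_layers)]
        )

    def forward(self, feature_stack, obj_embeds):
        fused = []
        for i, feats in enumerate(feature_stack):          # feats: (B, N_i, dim)
            if i in self.fuse_layers:
                feats, _ = self.attn[i](feats, obj_embeds, obj_embeds)
            fused.append(feats)
        return fused

# "All" row of Table 2: apply the fusion to every layer of a 4-layer 3D feature stack.
stack = [torch.rand(2, 49 * (2 ** i), 256) for i in range(4)]
fusion = SelectiveGuidedFusion(num_layers=4, fuse_layers=[0, 1, 2, 3])
fused_stack = fusion(stack, torch.rand(2, 5, 256))
```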
2023-10-04
[ { "authors": "Sanket Thakur; Cigdem Beyan; Pietro Morerio; Vittorio Murino; Alessio Del Bue", "journal": "", "ref_id": "b0", "title": "Enhancing next active object-based egocentric action anticipation with guided attention", "year": "2023" }, { "authors": "Francesco Ragusa; Giovanni Maria Farinella; Antonino Furnari", "journal": "", "ref_id": "b1", "title": "Stillfast: An end-to-end approach for short-term object interaction anticipation", "year": "2023" }, { "authors": "Rohit Girdhar; Kristen Grauman", "journal": "", "ref_id": "b2", "title": "Anticipative Video Transformer", "year": "2021" }, { "authors": "Miao Liu; Siyu Tang; Yin Li; James Rehg", "journal": "", "ref_id": "b3", "title": "Forecasting human object interaction: Joint prediction of motor attention and actions in first person video", "year": "2020" }, { "authors": "Antonino Furnari; Giovanni Maria Farinella", "journal": "", "ref_id": "b4", "title": "What would you expect? anticipating egocentric actions with rolling-unrolling lstms and modality attention", "year": "2019" }, { "authors": " Chao-Yuan; Yanghao Wu; Karttikeya Li; Haoqi Mangalam; Bo Fan; Jitendra Xiong; Christoph Malik; Feichtenhofer", "journal": "", "ref_id": "b5", "title": "MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition", "year": "2022" }, { "authors": "Sanket Thakur; Cigdem Beyan; Pietro Morerio; Vittorio Murino; Alessio Del Bue", "journal": "", "ref_id": "b6", "title": "Anticipating next active objects for egocentric videos", "year": "2023" }, { "authors": "Antonino Furnari; Sebastiano Battiato; Kristen Grauman; Giovanni Maria Farinella", "journal": "Journal of Visual Communication and Image Representation", "ref_id": "b7", "title": "Next-active-object prediction from egocentric videos", "year": "2017" }, { "authors": "Hamed Pirsiavash; Deva Ramanan", "journal": "", "ref_id": "b8", "title": "Detecting activities of daily living in first-person camera views", "year": "2012" }, { "authors": "Kristen Grauman; Andrew Westbury; Eugene ", "journal": "", "ref_id": "b9", "title": "Ego4d: Around the World in 3,000 Hours of Egocentric Video", "year": "2022" }, { "authors": "Eadom Dessalene; Chinmaya Devaraj; Michael Maynord; Cornelia Fermuller; Yiannis Aloimonos", "journal": "IEEE TPAMI", "ref_id": "b10", "title": "Forecasting action through contact representations from first person video", "year": "2021" }, { "authors": "Kaiming Shaoqing Ren; Ross He; Jian Girshick; Sun", "journal": "", "ref_id": "b11", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki ", "journal": "", "ref_id": "b12", "title": "Attention is all you need", "year": "2017" }, { "authors": "Christoph Feichtenhofer; Haoqi Fan; Jitendra Malik; Kaiming He", "journal": "", "ref_id": "b13", "title": "Slowfast networks for video recognition", "year": "2019" }, { "authors": "Tsung-Yi Lin; Piotr Dollar; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b14", "title": "Feature pyramid networks for object detection", "year": "2017-07" }, { "authors": "Yuxin Wu; Alexander Kirillov; Francisco Massa; Wan-Yen Lo; Ross Girshick", "journal": "", "ref_id": "b15", "title": "Detectron2", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 324.83, 491.78, 224.55, 58.31 ], "formula_id": "formula_0", "formula_text": "Q = f vid (F i ), where i ∈ [1, ..N], K, V = f ob j (O j ), where j ∈ [1, ...M], Object-Guided Attention(Q,K,V) = Concat(h 1 , ...h h )W o ," }, { "formula_coordinates": [ 2, 349.26, 550.56, 209.73, 59.06 ], "formula_id": "formula_1", "formula_text": "h i = Attention(QW i Q , KW i K , VW i V ), and Attention(Q, K, V) = so f tmax( QK T d k )V(1)" } ]
GUIDED ATTENTION FOR NEXT ACTIVE OBJECT @ EGO4D SHORT TERM OBJECT INTERACTION ANTICIPATION CHALLENGE
In this technical report, we describe our solution based on the Guided-Attention mechanism [1] for the short-term object interaction anticipation (STA) task of the EGO4D challenge. The approach fuses object detections with the spatiotemporal features extracted from video clips to enhance motion and contextual information, and further decodes the object-centric and motion-centric information to address STA in egocentric videos. For the challenge, we build our model on top of StillFast [2], with Guided Attention applied to the fast network. Our model obtains better performance on the validation set and also achieves state-of-the-art (SOTA) results on the test set of the EGO4D Short-Term Object Interaction Anticipation Challenge.
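To make the fusion step concrete, the following is a minimal PyTorch sketch of the object-guided attention described above: queries come from the 3D (fast) video patch features, keys/values from per-frame object-detection features, combined by multi-head attention. The module name, tensor shapes, and the residual/normalization choices are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class GuidedAttentionFusion(nn.Module):
    """Sketch of object-guided attention: video patch tokens attend to object tokens."""

    def __init__(self, embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, video_feats: torch.Tensor, obj_feats: torch.Tensor) -> torch.Tensor:
        # video_feats: (B, N, D) flattened spatiotemporal patch tokens (queries, Q)
        # obj_feats:   (B, M, D) projected object-detection tokens (keys/values, K and V)
        attended, _ = self.attn(query=video_feats, key=obj_feats, value=obj_feats)
        # Residual connection (an assumption here) keeps the original motion features intact.
        return self.norm(video_feats + attended)

# Toy usage with hypothetical shapes.
fusion = GuidedAttentionFusion()
video_feats = torch.randn(2, 196, 256)   # B=2 clips, N=196 patch tokens
obj_feats = torch.randn(2, 10, 256)      # M=10 detected objects per clip
out = fusion(video_feats, obj_feats)     # -> (2, 196, 256)
```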
Sanket Thakur; Cigdem Beyan; Pietro Morerio; Vittorio Murino; Alessio Del Bue
[ { "figure_caption": "Fig. 1 .1Fig.1. Our GANO v2 model uses a low-resolution video clip with sampled frames and a high-resolution target frame. Object detections are extracted for sampled input frames and are fused with patch features using a multi-head attention layer. The resulting attended 3D feature stack is merged with the 2D feature stack using a feature pyramid network and followed by a prediction head. The prediction head uses an RPN network to generate local feature which is fused with global features from P t , with a Global Average Pooling operation, and concatenated with local features. These features are fed into a fusion network and then summed to the original local features through residual connections. The local-global representations are then used to predict the final prediction for NAO bounding boxes, object class, verb, and TTC.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Results% in Top-5 mean Average Precision on the validation and test sets of EGO4D v2. In the header of the table, N+V stands for Noun + Verb and N+TTC stands for Noun + Time to Contact. Best results per column within a section of comparable results (horizontal lines) are reported in bold", "figure_data": "14]val21.07.457.042.98StillFast [2]val20.26 10.377.163.96GANO v2 (Ours)val20.52 10.427.283.99FRCNN+SF. [14]test26.15 9.458.693.61StillFast [2]test25.06 13.299.145.12GANO v2 (Ours)test25.67 13.609.025.16Guided Fusion in Layer Noun N+V N+TTC Overall118.79.426.273.22420.47 10.407.203.96All20.52 10.427.283.99", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Guided Attention fusion prediction for each output layer of 3D CNN. Results% in Top-5 mean Average Precision on the validation set of EGO4D v2.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work introduces the use of next active objects for anticipating future actions, which the citing paper builds upon in their research on short-term action anticipation in egocentric videos."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces a guided attention mechanism that the citing paper adopts to enhance the spatial object-centric information in the video clips."}, {"Category": "Extension or Continuation", "Citation": "[2]", "Explanation": "The cited work, StillFast, is built upon in the citing paper to develop a new model for the next active object-based STA."}, {"Category": "Data Source", "Citation": "[10]", "Explanation": "The cited work, Ego4D dataset, is the largest egocentric video dataset used in the study conducted in the citing paper for training and evaluation of the GANO model."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work provides a detailed description of the Guided-Attention method, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work is used to extract object detections for the input video frames, which serves as a methodological basis for the object processing in the proposed model."}, {"Category": "Data Source", "Citation": "[10]", "Explanation": "The cited work is acknowledged as the pre-training dataset for the object detector used in the proposed model, providing a foundational data source for the object processing."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work provides a Multi-Head Attention mechanism that the citing paper modifies to suit the specific needs of the research, allowing for the efficient fusion of spatiotemporal information and the detection of co-related patches in both video and object detection inputs."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work provides the Combined Feature Pyramid Layer and Predicting head for fusing 2D and 3D features and final prediction, which the citing paper adopts to implement the mid-level feature fusion and final prediction process."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b19", "b23", "b13", "b25", "b5", "b5", "b23", "b0" ], "table_ref": [], "text": "With the emergence of dialogue data (Zhang et al., 2020b), and the evolution of pre-trained language models (Qiu et al., 2020), end-to-end task-oriented dialogue (TOD) systems (Su et al., 2022;Lee, 2021;Tian et al., 2022) gradually replaced the previous modular cascading dialogue systems (Gao et al., 2018). The end-to-end TOD system adopts a uniform training objective, preventing the error propagation problem in pipelined dialogue systems (Gao et al., 2018). Nonetheless, the end-to-end paradigm requires more training data to perform better (Su et al., 2022). Meanwhile, TOD data is enormously expensive to annotate (Budzianowski et al., 2018) as it simultaneously contains dialogue state tracking, dialogue action prediction, and response generation. It is also expensive to annotate large amounts of complicated dialogue data for * Corresponding Author.\nThere are 18 colleges I have found , would you prefer one in town centre or in the west ? I'm looking for a college type attraction.\nSure , we have thirteen options , 10 of which are free . May I suggest King's college , or Hughes hall ? I would like to visit on in town centre please.\n[Attraction] {type : college}" }, { "figure_ref": [], "heading": "Dialogue States", "publication_ref": [], "table_ref": [], "text": "[Attraction] {type : college}" }, { "figure_ref": [], "heading": "Dialogue States", "publication_ref": [], "table_ref": [], "text": "[Attraction] {type : college, area : centre}" }, { "figure_ref": [], "heading": "Dialogue States", "publication_ref": [], "table_ref": [], "text": "[Attraction] {type : college, area : centre}" }, { "figure_ref": [], "heading": "Dialogue States", "publication_ref": [], "table_ref": [], "text": "Okay , may I have their postcode , entrance fee , and phone number ?\n[Attraction] {type : college, area : centre}" }, { "figure_ref": [], "heading": "Dialogue States", "publication_ref": [], "table_ref": [], "text": "[Attraction] {type : college, area : centre}" }, { "figure_ref": [], "heading": "Dialogue States", "publication_ref": [], "table_ref": [], "text": "Can you provide the postcode, entrance fee and phone number ?\n[Attraction] {type : college, area : centre}" }, { "figure_ref": [], "heading": "Dialogue States", "publication_ref": [], "table_ref": [], "text": "[Attraction] {type : college, area : centre}" }, { "figure_ref": [], "heading": "Dialogue States", "publication_ref": [ "b17", "b23", "b8", "b13", "b23", "b24", "b1", "b31", "b9", "b2", "b0", "b3", "b4" ], "table_ref": [], "text": "Low-Resource Training Phrase :\nPrediction :\nParaphrasing Sure , the post code to King's college is CB21ST , the entrance fee is free , and phone number 3645351.\nSure , The post code is CB21ST, the entrance fee is free. Miss: King's college, 3645352\nFigure 1: The TOD training and prediction procedure in the low-resource scenario. When the user utterance is rephrased, the predictions miss some entities.\neach emerging domain (Mi et al., 2022). Therefore, improving data utilization efficiency in lowresource scenarios becomes critical for end-to-end TOD.\nPrevious approaches (Zhang et al., 2020b;Su et al., 2022) improve the transferability of models on downstream tasks and capacity to handle small samples by conducting self-supervised or semisupervised further-pretraining (He et al., 2022) of models on data from additional dialogue domains. 
However, these further pre-trains on million-level datasets may require hundreds of GPU hours and are resource-constrained. Then on specific downstream dialogue tasks, a unified multi-task generative paradigm (Lee, 2021;Su et al., 2022) was applied to end-to-end dialogue tasks. Although this generative approach demonstrates better generalization and outcomes, we argue that heterogeneity and duality between data are ignored. Here, heterogeneity refers to the formative discrepancy between uncertain, unstructured discourse (e.g., user utterances and system responses) and deterministic, structured dialogue states. Accordingly, the underlying alignment information and knowledge contained within the heterogeneous data is not fully exploited in the above approach.\nTo address the above challenges, we propose an innovative multijugate dual learning framework in TOD (MDTOD). Contrary to previous work on reconstructing user discourse based on belief states (Sun et al., 2022;Chen et al., 2020), we observed that modeling the duality between user utterance and system responses can further uncover alignment information of entities between user utterance, system responses, and dialogue states. Specifically, the model is required to reconstruct the user discourse based on the dialogue state and also to deduce the user utterance backward based on the system response. Consequently, the model can further learn the mapping relationship between the heterogeneous information, and improve the performance of the end-to-end TOD system in low-resource scenarios.\nHowever, proper dual training increases the likelihood of the model learning spurious data correlations. It is evidenced by the fact that comparable model performance can be attained using only highfrequency phrases as the training set (Yang et al., 2022). As a result, the model does not generalize well to test samples with significant expression variations or domain differences, as illustrated in Figure 1. To accomplish this, we expand the oneto-one dual learning paradigm to multijugate dual learning by capitalizing on the property of semantic representation variety. Given a deterministic dialog state as a constraint (Hokamp and Liu, 2017), a specific user utterance (system response) is rewritten into multiple utterances (responses) with the same semantics but various expressions utilizing decoding methods such as beam search or random sampling. Consequently, the richer representation of information permits the spurious correlation of shallow statistical patterns acquired by the model to be effectively mitigated, thereby enhancing the model's generalization (Cui et al., 2019).\nOur proposed method exploits the entity alignment information among heterogeneous data by designing a dual learning task; it also mitigates the phenomenon of false correlations and increases the generalization capacity of models via rephraseenhanced multijugate dual learning. As a result, the method does not introduce any additional trainable model parameters. It can be directly integrated into end-to-end TOD systems in arbitrary low-resource scenarios as a training approach to increase data utilization efficiency. We show the effectiveness of our method in several task-oriented datasets, including MultiWOZ2.0 (Budzianowski et al., 2018), MultiWOZ2.1 (Eric et al., 2020), and KVRET (Eric et al., 2017). We also demonstrate the advantages of our approach in low-resource scenarios. 
All code and parameters will be made public.\nOur primary contributions are summarized below:\n• A novel, model-independent, dual learning technique intended for low-resource end-toend TOD systems is presented that can be incorporated directly into the training of any TOD system.\n• To address the issue of spurious correlations impacting the generalization of models, a paradigm of paraphrase-enhanced multijugate dual learning is presented. 2 Related Work" }, { "figure_ref": [], "heading": "Task-Oriented Dialogue Systems", "publication_ref": [ "b5", "b10", "b32", "b18", "b20", "b16", "b23", "b13" ], "table_ref": [], "text": "TOD aims to complete user-specific goals via multiple turns of dialogue. Prior work focused mainly on TOD subtasks based on the pipeline paradigm (Gao et al., 2018), but it was prone to error propagation between modules. Therefore, recent research has attempted to model dialogue tasks from an endto-end generation approach. DAMD (Zhang et al., 2020a) generates the different outputs of a conversation process via multiple decoders and expands multiple dialogue actions dependent on the dialogue state. A portion of the study (Hosseini-Asl et al., 2020;Yang et al., 2020;Peng et al., 2021) models the individual dialogue tasks in the TOD as cascading generation tasks using GPT2 (Radford et al., 2019) of the decoder architecture as the backbone network. Multi-task approaches (Lin et al., 2020;Su et al., 2022;Lee, 2021) utilizing I would like to find a cheap place to stay that has 4 stars and has free parking.\nContext Ct 1 I would like to find a cheap place to stay that has 4 stars and has free parking." }, { "figure_ref": [], "heading": "Context Ct 1", "publication_ref": [], "table_ref": [], "text": "Is there any cheap place to stay with 4 stars and free parking ?" }, { "figure_ref": [], "heading": "Context Ct 2", "publication_ref": [], "table_ref": [], "text": "Is there any cheap place to stay with 4 stars and free parking ?" }, { "figure_ref": [], "heading": "Context Ct 2", "publication_ref": [], "table_ref": [], "text": "I am looking for a reasonably priced place that is 4 stars and has free parking." }, { "figure_ref": [], "heading": "Context Ct 3", "publication_ref": [], "table_ref": [], "text": "I am looking for a reasonably priced place that is 4 stars and has free parking." }, { "figure_ref": [], "heading": "Context Ct 3", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "MDTOD", "publication_ref": [], "table_ref": [], "text": "I have found 8 places that match, would you like me to book one of them for you ?\nResponse Rt 1 I have found 8 places that match, would you like me to book one of them for you ?" }, { "figure_ref": [], "heading": "Response Rt 1", "publication_ref": [], "table_ref": [], "text": "There are 8 places that meet your requirements. should I book one for you?" }, { "figure_ref": [], "heading": "Response Rt 2", "publication_ref": [ "b21", "b14" ], "table_ref": [], "text": "There are 8 places that meet your requirements. should I book one for you? Figure 2: The overall structure of multijugate dual learning. To get paraphrase-enhanced multiple contexts Ct and responses Rt , the contexts and responses in each dialogue turn will be paraphrased based on deterministic dialogue states using an off-the-shelf paraphrase model. 
Then, the multijugate dual learning is performed between the paraphrase-enhanced contexts Ct and dialogue states and between the paraphrase-enhanced responses Rt and dialogue states, respectively.\nencoder-decoder architectures such as T5 (Raffel et al., 2020) or BART (Lewis et al., 2020) exist for modeling dialogue sub-tasks as sequence-tosequence generating tasks.\nAlthough the methods mentioned above use a uniform end-to-end approach to model TOD, none performs well in low-resource scenarios. To this end, we devise a rephrase-enhanced multijugate dual learning to exploit the entity alignment information more adequately and to obtain more robust performance." }, { "figure_ref": [], "heading": "Dual Learning for Generation", "publication_ref": [ "b7", "b28", "b6", "b15", "b24", "b1", "b33", "b30", "b2" ], "table_ref": [], "text": "Dual learning aims to utilize the paired structure of data to acquire effective feedback or regularization information, thus enhancing model training performance. Dual learning was initially introduced in unsupervised machine translation (He et al., 2016) and combined with reinforcement learning to optimize two agents iteratively. DSL (Xia et al., 2017) then extended dual learning to supervised settings to take advantage of pairwise relationships of parallel corpora. Similar work (Guo et al., 2020) employs cycle training to enable unsupervised mutual generation of structured graphs and text. MPDL (Li et al., 2021) expands the duality in dialogue tasks to stylized dialogue generation without the parallel corpus. A portion of the work (Sun et al., 2022;Chen et al., 2020) integrates the idea of duality into the dialogue state tracking. Some of the work (Zhang et al., 2018;Yang et al., 2018;Cui et al., 2019) introduces dual learning in dialogue generation to enhance responses' diversity, personality, or coherence. However, each method mentioned above requires multiple models or combines reinforcement learning and dual modeling, considerably increasing the task's complexity and training difficulty.\nIn contrast to previous work, our proposed multijugate dual learning objectives share the same model parameters. It does not require modifications to the original training objectives of the maximum likelihood estimation, making training more straightforward and more readily applicable to other tasks." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "End-to-End Task-Oriented Dialogue System", "publication_ref": [ "b13", "b10" ], "table_ref": [], "text": "Typically, end-to-end TOD systems consist of subtasks such as dialogue state prediction and response generation (Lee, 2021). End-to-end TOD systems typically model the several subtasks of the dialogue process as sequence generation tasks to facilitate the unification of model structure, and training objectives (Hosseini-Asl et al., 2020). Denote the TOD dataset as D TOD = {Dial i , DB} N i=1 , where DB is the database. 
In a multi-turn dialogue Dial i , where the user utterance in the t-th turn is U t , and the system response is R t , the dialogue history or dialogue context can be expressed as follows:\nC t = [U 0 , R 0 , • • • , U t-1 , R t-1 , U t ].\n(1)\nAfter that, the model generates the dialogue state B t based on the previous dialogue context C t :\nL B = N i=1 T i t=1 -log P θ (B t |C t ),(2)\nwhere N represents the total number of sessions in the dataset, T i symbolizes the total number of turns per session and θ denotes an arbitrary generation model. The system then searches the database with the criterion B t and retrieves the database result D t . Then, the TOD system generate the response R t based on the context U t , dialogue state B t and database query result D t for each round:\nL R = N i=1 T i t=1 -log P θ (R t |C t , B t , D t ).(3)\nFinally, a human-readable response text containing the entity is obtained by combining the belief state and the search results from the database." }, { "figure_ref": [], "heading": "Multijugate Dual Learning", "publication_ref": [], "table_ref": [], "text": "This section describes how to design dual learning objectives in the training process of TOD. Also, we expound on how to construct multijugate dual learning by paraphrasing user utterances and system responses with representational diversity based on deterministic dialogue states." }, { "figure_ref": [], "heading": "Dual Learning in TOD", "publication_ref": [], "table_ref": [], "text": "We define the deterministic dialogue state S t = [B t ; D t ] consisting of two informational components: the belief state B t and the database query results D t .\nAs illustrated in Figure 2, dialogue states can be viewed as information with a unique manifestation of determinism (Zhang et al., 2020a) without regard to the order of dialogue actions. Utilizing dialogue state as a constraint, the natural language of context and response could be viewed as data with different representations of uncertainty. Therefore, we designed the dual task in TOD to learn the mapping relationship between the utterance of linguistic forms and dialogue state representation.\nLet f cb : C t -→ B t denote the forward learning objective of generating belief states according to the context referred to by Eq.2, and f bc : B t -→ C t denote the reverse learning objective of reconstructing the context according to the belief states, then the dual learning task between user utterance and dialogue state is defined as maximizing the following logarithmic probability:\nlog i∈N t∈T i P θ (S i t |C i t ; f cb )(C i t |S i t ; f bc ). (4)\nSimilarly, let f cr : C t -→ R t , f rc : R t -→ C t denote the dual learning task between the dialogue context C t and the system response R t :\nlog i∈N t∈T i P θ (R i t |C i t ; f cr )(C i t |R i t ; f rc ). (5)\nAccordingly, the loss function of the total dual learning objective is the sum of the above two components:\nL Dual = E i∼N t∼T i -(log P θ (S i t , R i t |C i t ; f cr , f cb ) + log P θ (C i t |S i t ; f bc ) + log P θ (C i t |R i t ; f rc )). (6)\nFurthermore, the two dual learning objectives share a set of model parameters in a multi-task paradigm, thus ensuring knowledge transfer between the dual tasks." 
}, { "figure_ref": [], "heading": "Construction of Multijugate Relations", "publication_ref": [ "b26" ], "table_ref": [], "text": "Dual learning enhances data usage efficiency by acquiring additional entity alignment information between heterogeneous data, but it does not lessen the effect of spurious correlations on model generalization. Leveraging the deterministic properties of dialogue states and the uncertainty of linguistic representations, we expand the original one-toone dual learning to multijugate dual learning by paraphrases. Theoretically, several semantically identical but inconsistently expressed contexts or system responses exist for a deterministic dialogue state. Consequently, given (S t , C t ) or (S t , R t ), we rephrase the context C t and the response R t restricted by the entities in dialogue state S t with the following constraint generation method:\nCt ∼ P(C t , S t ), Rt ∼ P(S t , R t ). (7)\nSpecifically, we utilize an off-the-shelf paraphrasing model with the dialogue context C t as the model input. Also the value in the dialogue state S t will be treated as a constraint to limit the decoding. Then, beam search is employed in the generation to obtain K different contexts Ct or responses Rt as the result of paraphrase generation.\nMoreover, since the context C t of the current turn depends on the dialogue history\n(• • • , C t-1 , S t-1 , R t-1\n) of the previous turn, rewriting the context or responses of each turn results in a combinatorial explosion. Therefore, a heuristic was adopted whereby the dialogue context C t and system response R t would only be rewritten once every dialogue turns. The method for producing the final paraphrase is:\nCij t ∼ N i=1 T i t=1 M j=1 P(C ij t , S ij t ),(8)\nRij t ∼ N i=1 T i t=1 M j=1 P(S ij t , R ij t ),(9)\nwhere M represents the number of single samples to be rewritten. In practice, as the proportion of training data increases, the number of M decreases. In addition, paraphrasing was preferred over word substitution or addition/deletion-based techniques (Wei and Zou, 2019) because word substitution is based on a particular probability of word-level alterations, preventing the modification of phrases with false correlation. Moreover, section 4.4.3 approved paraphrasing produces more diverse and high-quality augmented content, alleviating the risk of spurious relevance more effectively." }, { "figure_ref": [], "heading": "Multijugate Dual Learning for Training", "publication_ref": [], "table_ref": [], "text": "By acquiring paraphrase-enhanced samples, the original one-to-one dual learning can be augmented with multijugate dual learning, allowing the model to completely leverage the entity alignment information between heterogeneous data while maintaining appropriate generalization. The overall framework of our method is illustrated in Figure 2. Consequently, the final loss function for multijugate dual learning of TOD is as follows:\nLDual = E i∼N t∼T i j∼M -(log P θ (S ij t , R ij t |C ij t ; f cr , f cb ) + log P θ (C ij t |S ij t ; f bc )(C ij t |R ij t ; f rc )). (10\n)\n4 Experiments\nIn the context of an end-to-end dialogue scenario, we examine the comprehensive performance of multijugate dual learning on several dialogue datasets, including performance on dialogue state tracking and end-to-end task completion. In addition, evaluation studies were conducted in a scenario with limited resources to assess how effectively dual learning utilizes the knowledge contained within the data. 
In addition, the impact of several dual learning components and rewriting procedures on the method's overall performance is investigated." }, { "figure_ref": [], "heading": "Datasets and Evaluation Metrics", "publication_ref": [ "b0", "b3", "b4" ], "table_ref": [], "text": "MultiWOZ2.0 (Budzianowski et al., 2018), Mul-tiWOZ2.1 (Eric et al., 2020), and KVRET (Eric et al., 2017), three of the most extensively investigated datasets in the task-oriented dialogue domain, were analyzed. MultiWOZ2.0 is the first proposed dialogues dataset across seven domains, and Multi-WOZ2.1 is the version with several MultiWOZ2.0 annotation problems fixed. Following earlier research, we simultaneously evaluate both datasets to assess the robustness of the model against mislabeling. KVRET is a multi-turn TOD dataset containing three domains: calendar scheduling, weather query, and navigation. Detailed statistics of the three datasets are illustrated in Table 7.\nFor the selection of metrics under the end-to-end dialogue task, we use the standard and widely used Inform, Success, BLEU, and Combined score, where Inform measures whether the system's responses refer to the entity requested by the user, Success measures whether the system has answered all of the user's requests, BLEU measures the quality of the model generation. The Combined score indicates the overall performance of the taskoriented system. It is calculated using the formula: Combined Score = (Inform + Success) * 0.5 + BLEU. For the dialogue state tracking task, the Joint Goal Accuracy (JGA) is applied to quantify the fraction of total turns where the model predicts that all slots in one turn are correct. " }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [], "table_ref": [], "text": "We did comparison experiments with the following potent baselines. ( 1 further learn the alignment information between entities and thus improve the success rate of the task. Meanwhile, T5+DL achieves higher values on BLEU with different proportions of training data, indicating that the dual learning objective between user utterance and system response is also beneficial for improving the quality of text generation. In addition, MDTOD with multijugate dual learning achieves better results, indicating that controlled rephrasing can further enhance the effect of dual learning." }, { "figure_ref": [], "heading": "Dual Learning in Dialogue State Tracking", "publication_ref": [], "table_ref": [], "text": "To further investigate the effectiveness of the dual learning task between user utterance and dialogue state on the gain of TOD in multijugate dual learning, we conducted experiments on the Mul-tiWOZ2.0 dataset for dialogue state tracking in low-resource scenarios. We set four different quantitative training sizes of 1%, 5%, 10% and 20% to represent different degrees of low-resource scenarios.\nWe can infer from the experimental results in learning components, removing dual learning between context and system responses resulted in a 1.87-point performance decrease, indicating that fully exploiting the implicit alignment information between context and system responses was more effective at enhancing the model's overall performance. Additionally, deleting both dual learning components resulted in a 2.02 points decrease in the combined score, demonstrating that both dual learning objectives are effective for this strategy." 
}, { "figure_ref": [], "heading": "Mitigating Spurious Correlation for Generalization", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "This section explores the generalizability of dual learning across domains when different numbers of paraphrases are tested, i.e., on a domain that does not appear in the training process, to examine the effect of rephrasing enhanced multijugate dual learning for mitigating spurious correlations of entities and improving generalization. In the In-Car dataset, we explore the ability of MDTOD to generalize to both the scheduling and weather domains separately. The Goal Score is calculated as (inform + success) * 0.5 to signify task accomplishment. As indicated in Table 5, the model exhibits some improvement in task completion rate and text generation performance in both new domains when using rephrased augmented multijugate dual learning. Further, when the number of paraphrases is 2, a boost of 4.21 points is obtained on the Goal Score compared to no additional rephrasing mechanism. This improvement indicates that the multiple conjugations further alleviate the shallow spurious correlations among entities captured by the model, thus improving the task completion rate." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Effect of Different Paraphrases", "publication_ref": [ "b26" ], "table_ref": [], "text": "To investigate the impact of various rephrasing techniques on the construction of multijugate dual learning, we examined the impact of easy data aug- mentation (EDA) (Wei and Zou, 2019), synonym replacement (SYN), and paraphrasing (PARA) to generate augmented data with limited resources.\nAs demonstrated in the upper part of Figure 3, both PARA and EDA demonstrate minor improvements as the number of augmented data increases, with PARA exceeding EDA. The results indicate that PARA generates higher-quality augmented data, whereas SYN increases noise.\nThe results in Figure 3 indicate that increasing the number of PARA leads to an increase in the completion rate of dialogue goals. In contrast, EDA and SYN provide a minor boost or decrease in the model's performance. This analysis reveals that a rephrasing strategy enables better discourse rewriting under dialogue state constraints, alleviating the spurious correlation issue and enhancing the model's generalizability." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose a novel multijugate dual learning for task-oriented dialogues in low-resource scenarios. Exploiting the duality between deterministic dialogue states and uncertain utterances enables the entity alignment information in heterogeneous data to be fully exploited. Meanwhile, paraphraseenhanced multijugate dual learning alleviates the spurious correlation of shallow pattern statistics. Experiments on several TOD datasets show that the proposed method achieves state-of-the-art results in both end-to-end response generation and dialogue state tracking in low-resource scenarios." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Multijugate dual learning improves the model's performance in TOD tasks in low-resource scenarios, but the introduction of the dual training objects increases the required graphics memory and training steps. In addition, the rephrasing mechanism necessitates an additional paraphraser to rewrite the training samples; hence, the number of training samples increases according to the number of paraphrases. 
Despite this, we find that the higher training cost associated with multijugate dual learning is preferable to employing a large quantity of dialogue data for further pre-training or manually labeling data.\nConsidered from a different angle, the scenario described above presents possibilities for future research, such as the development of higher-quality rephrasing algorithms to filter the augmented text. In the meantime, multijugate dual learning is a learning objective between structured and unstructured texts. Therefore it may be to any task involving heterogeneous data, such as generative information extraction, and data-to-set generation." }, { "figure_ref": [], "heading": "Generation-based Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Joint Goal Accuracy", "publication_ref": [ "b27", "b22", "b36", "b12", "b1", "b21", "b18", "b23", "b13", "b24" ], "table_ref": [], "text": "Model 2.0 2.1 TRADE (Wu et al., 2019) 48.62 46.00 COMER (Ren et al., 2019) 48.79 -DSTQA (Zhou and Small, 2019) 51.44 51.17 SOM-DST (Kim et al., 2020) 51.38 52.57 dual-DST (Chen et al., 2020) -49.88 T5-Base (Raffel et al., 2020) 52.16 52.08 SimpleTOD † (Hosseini-Asl et al., 2020) 51.37 50.14 SOLOIST † (Peng et al., 2021) 53.20 53.36 PPTOD † (Su et al., 2022) 53.57 51.68 MTTOD (Lee, 2021) 53.56 53.44 BORT (Sun et al., 2022) 54.00 -MDTOD 54.41 53.85 " }, { "figure_ref": [], "heading": "C Case Analysis", "publication_ref": [], "table_ref": [ "tab_16" ], "text": "We present partial selections of paraphrases in Table 10 to demonstrate the effect of the rephraser. As shown in the first example, when the constraints are set to the entities \"hail\" and \"los angeles\", the rephraser still produces paraphrases that are fluent and satisfy the constraints.\nIn addition, we illustrate a sample of the dialog generated by MDTOD in Table 11 . The dialogue begins with the user seeking an Indian restaurant in the center of town, and the model correctly extracts the values of the slots \"food\" and \"area\". When the conversation proceeds to turn 2, MDTOD generates more belief states than oracle's belief states, but the model generates the correct results. The reason is that there are some labeling errors in Multi-WOZ2.0, while MDTOD can still generate correct belief states, which shows the robustness of MD-TOD. When the conversation progressed to turn 5, MDTOD still predicted the correct belief state despite the user changing the reservation time from 13:30 to 12:30, indicating that the model understood the semantic meaning of the current input sentences rather than simply repeating the belief state from the previous turn. " }, { "figure_ref": [], "heading": "Examples", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Key Research and Development Program of China (No.2020AAA0108700) and National Natural Science Foundation of China (No.62022027)." }, { "figure_ref": [], "heading": "A Implementation Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Setup for Experiments", "publication_ref": [], "table_ref": [], "text": "All of our experiments utilize Huggingface's checkpoints. The backbone network of the end-to-end dialogue model is T5-base. For the generation of paraphrases, we adopt tuner007/pegasus_paraphrase 1 directly and construct multiple paraphrases with beam search in decoding. 
The AdamW optimizer was applied to train the dialogue model and adjusted using linear scheduling with a warmup technique. For the entire dataset in MultiWOZ, we trained 10 epochs with a batch size of 3. Training epochs were relatively increased in the scenario with limited resources. All trials were executed on NVIDIA GeForce RTX 3090 GPU (24G) or NVIDIA A800 (80G). Without additional specifications, the average of three runs with different random seeds was taken as the final result for all experiments. " }, { "figure_ref": [], "heading": "B Experiments with Full Training Data", "publication_ref": [], "table_ref": [], "text": "" } ]
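The Methodology and Implementation Details sections above cast every forward and dual objective as a standard sequence-to-sequence likelihood over one shared T5-base backbone. The snippet below is a minimal sketch of how a single multijugate dual-learning step could be written with Hugging Face `transformers`; the prompt prefixes, serialization of states, and example strings are assumptions for illustration, not the authors' exact format.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def seq2seq_loss(src: str, tgt: str) -> torch.Tensor:
    """Negative log-likelihood of tgt given src under the shared T5 model."""
    enc = tokenizer(src, return_tensors="pt", truncation=True)
    labels = tokenizer(tgt, return_tensors="pt", truncation=True).input_ids
    return model(**enc, labels=labels).loss

def multijugate_dual_step(context: str, state: str, response: str) -> torch.Tensor:
    # Forward objectives: context -> dialogue state, context (+ state) -> response.
    loss = seq2seq_loss("predict state: " + context, state)
    loss = loss + seq2seq_loss("predict response: " + context + " " + state, response)
    # Dual (reverse) objectives: reconstruct the context from the state and from the response.
    loss = loss + seq2seq_loss("reconstruct from state: " + state, context)
    loss = loss + seq2seq_loss("reconstruct from response: " + response, context)
    return loss

# Each paraphrase-augmented variant of a turn contributes one such summed loss.
loss = multijugate_dual_step(
    context="[user] i need a cheap 4 star hotel with free parking",
    state="[hotel] price cheap , stars 4 , parking yes",
    response="i found [value_choice] places that match . shall i book one ?",
)
loss.backward()
```

Because all four directions share the same parameters, no extra trainable modules are introduced, matching the paper's claim that the method is model-independent.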
2023-05-25
10.18653/v1/w17-5506
[ { "authors": "Pawel Budzianowski; Tsung-Hsien Wen; Bo-Hsiang Tseng; Iñigo Casanueva; Stefan Ultes; Milica Osman Ramadan; Gasic", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Multiwoz -A largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling", "year": "2018-10-31" }, { "authors": "Zhi Chen; Lu Chen; Yanbin Zhao; Su Zhu; Kai Yu", "journal": "", "ref_id": "b1", "title": "Dual learning for dialogue state tracking", "year": "2020" }, { "authors": "Shaobo Cui; Rongzhong Lian; Di Jiang; Yuanfeng Song; Siqi Bao; Yong Jiang", "journal": "", "ref_id": "b2", "title": "Dal: Dual adversarial learning for dialogue generation", "year": "2019" }, { "authors": "Mihail Eric; Rahul Goel; Shachi Paul; Abhishek Sethi; Sanchit Agarwal; Shuyang Gao; Adarsh Kumar; Anuj Kumar Goyal; Peter Ku; Dilek Hakkani-Tür", "journal": "European Language Resources Association", "ref_id": "b3", "title": "Multiwoz 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines", "year": "2020-05-11" }, { "authors": "Mihail Eric; Lakshmi Krishnan; François Charette; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Key-value retrieval networks for task-oriented dialogue", "year": "2017-08-15" }, { "authors": "Jianfeng Gao; Michel Galley; Lihong Li", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Neural approaches to conversational AI", "year": "2018" }, { "authors": "Qipeng Guo; Zhijing Jin; Xipeng Qiu; Weinan Zhang; David Wipf; Zheng Zhang", "journal": "", "ref_id": "b6", "title": "Cyclegt: Unsupervised graph-to-text and text-to-graph generation via cycle training", "year": "2020" }, { "authors": "Di He; Yingce Xia; Tao Qin; Liwei Wang; Nenghai Yu; Tie-Yan Liu; Wei-Ying Ma", "journal": "", "ref_id": "b7", "title": "Dual learning for machine translation", "year": "2016-12-05" }, { "authors": "Wanwei He; Yinpei Dai; Yinhe Zheng; Yuchuan Wu; Zheng Cao; Dermot Liu; Peng Jiang; Min Yang; Fei Huang; Luo Si; Jian Sun; Yongbin Li", "journal": "AAAI Press", "ref_id": "b8", "title": "GALAXY: A generative pre-trained model for taskoriented dialog with semi-supervised learning and explicit policy injection", "year": "2022-02-22" }, { "authors": "Chris Hokamp; Qun Liu", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Lexically constrained decoding for sequence generation using grid beam search", "year": "2017" }, { "authors": "Ehsan Hosseini-Asl; Bryan Mccann; Chien-Sheng Wu; Semih Yavuz; Richard Socher", "journal": "", "ref_id": "b10", "title": "A simple language model for task-oriented dialogue", "year": "2020-12-06" }, { "authors": "Hyunmin Jeon; Gary Geunbae; Lee ", "journal": "", "ref_id": "b11", "title": "Domain state tracking for a simplified dialogue system", "year": "2021" }, { "authors": "Sungdong Kim; Sohee Yang; Gyuwan Kim; Sang-Woo Lee", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Efficient dialogue state tracking by selectively overwriting memory", "year": "2020" }, { "authors": "Yohan Lee", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Improving end-to-end dialog system with A simple auxiliary task", "year": "2021-11" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": 
"b14", "title": "BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020-07-05" }, { "authors": "Jinpeng Li; Yingce Xia; Rui Yan; Hongda Sun; Dongyan Zhao; Tie-Yan Liu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b15", "title": "Stylized dialogue generation with multi-pass dual learning", "year": "2021" }, { "authors": "Zhaojiang Lin; Andrea Madotto; Genta Indra Winata; Pascale Fung", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Mintl: Minimalist transfer learning for task-oriented dialogue systems", "year": "2020-11-16" }, { "authors": "Fei Mi; Yasheng Wang; Yitong Li", "journal": "AAAI Press", "ref_id": "b17", "title": "CINS: comprehensive instruction for few-shot learning in task-oriented dialog systems", "year": "2022-02-22" }, { "authors": "Baolin Peng; Chunyuan Li; Jinchao Li; Shahin Shayandeh; Lars Liden; Jianfeng Gao", "journal": "Trans. Assoc. Comput. Linguistics", "ref_id": "b18", "title": "SOLOIST: building task bots at scale with transfer learning and machine teaching", "year": "2021" }, { "authors": "Xipeng Qiu; Tianxiang Sun; Yige Xu; Yunfan Shao; Ning Dai; Xuanjing Huang", "journal": "", "ref_id": "b19", "title": "Pre-trained models for natural language processing: A survey", "year": "2020" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b20", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b21", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Liliang Ren; Jianmo Ni; Julian Mcauley", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Scalable and accurate dialogue state tracking via hierarchical sequence generation", "year": "2019" }, { "authors": "Yixuan Su; Lei Shu; Elman Mansimov; Arshit Gupta; Deng Cai; Yi-An Lai; Yi Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Multi-task pre-training for plug-and-play task-oriented dialogue system", "year": "2022-05-22" }, { "authors": "Haipeng Sun; Junwei Bao; Youzheng Wu; Xiaodong He", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "BORT: back and denoising reconstruction for end-to-end task-oriented dialog", "year": "2022-07-10" }, { "authors": "Xin Tian; Yingzhan Lin; Mengfei Song; Siqi Bao; Fan Wang; Huang He; Shuqi Sun; Hua Wu", "journal": "", "ref_id": "b25", "title": "Q-TOD: A query-driven task-oriented dialogue system", "year": "2022" }, { "authors": "Jason Wei; Kai Zou", "journal": "", "ref_id": "b26", "title": "EDA: Easy data augmentation techniques for boosting performance on text classification tasks", "year": "2019" }, { "authors": "Chien-Sheng Wu; Andrea Madotto; Ehsan Hosseini-Asl; Caiming Xiong; Richard Socher; Pascale Fung", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Transferable multi-domain state generator for task-oriented dialogue systems", "year": "2019" }, { "authors": "Yingce Xia; Tao Qin; Wei Chen; Jiang Bian; Nenghai Yu; Tie-Yan Liu", "journal": "", "ref_id": "b28", "title": "Dual supervised learning", "year": "2017" }, { "authors": " Pmlr", 
"journal": "", "ref_id": "b29", "title": "", "year": "" }, { "authors": "Min Yang; Wenting Tu; Qiang Qu; Zhou Zhao; Xiaojun Chen; Jia Zhu", "journal": "Neural Networks", "ref_id": "b30", "title": "Personalized response generation by dual-learning based domain adaptation", "year": "2018" }, { "authors": "Shiquan Yang; Xinting Huang; Jey Han Lau; Sarah M Erfani", "journal": "", "ref_id": "b31", "title": "Robust task-oriented dialogue generation with contrastive pre-training and adversarial filtering", "year": "2022" }, { "authors": "Yunyi Yang; Yunhao Li; Xiaojun Quan", "journal": "", "ref_id": "b32", "title": "UBAR: towards fully end-to-end task-oriented dialog systems with GPT-2", "year": "2020" }, { "authors": "Hainan Zhang; Yanyan Lan; Jiafeng Guo; Jun Xu; Xueqi Cheng", "journal": "", "ref_id": "b33", "title": "coherence for sequence to sequence model in dialogue generation", "year": "2018-07-13" }, { "authors": "Yichi Zhang; Zhijian Ou; Zhou Yu; ; ", "journal": "AAAI Press", "ref_id": "b34", "title": "Taskoriented dialog systems that consider multiple appropriate responses under the same context", "year": "2020-02-07" }, { "authors": "Yizhe Zhang; Siqi Sun; Michel Galley; Yen-Chun Chen; Chris Brockett; Xiang Gao; Jianfeng Gao; Jingjing Liu; Bill Dolan", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "DIALOGPT : Largescale generative pre-training for conversational response generation", "year": "2020-07-05" }, { "authors": "Li Zhou; Kevin Small", "journal": "", "ref_id": "b36", "title": "Multi-domain dialogue state tracking as dynamic knowledge graph enhanced question answering", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 101.86, 152.76, 156.28, 10.63 ], "formula_id": "formula_0", "formula_text": "C t = [U 0 , R 0 , • • • , U t-1 , R t-1 , U t ]." }, { "formula_coordinates": [ 4, 109.94, 224.22, 179.93, 33.96 ], "formula_id": "formula_1", "formula_text": "L B = N i=1 T i t=1 -log P θ (B t |C t ),(2)" }, { "formula_coordinates": [ 4, 87.24, 397.93, 202.63, 33.96 ], "formula_id": "formula_2", "formula_text": "L R = N i=1 T i t=1 -log P θ (R t |C t , B t , D t ).(3)" }, { "formula_coordinates": [ 4, 329.55, 232.86, 195.6, 25.59 ], "formula_id": "formula_3", "formula_text": "log i∈N t∈T i P θ (S i t |C i t ; f cb )(C i t |S i t ; f bc ). (4)" }, { "formula_coordinates": [ 4, 322.16, 334.68, 202.98, 25.59 ], "formula_id": "formula_4", "formula_text": "log i∈N t∈T i P θ (R i t |C i t ; f cr )(C i t |R i t ; f rc ). (5)" }, { "formula_coordinates": [ 4, 311.96, 436.33, 213.18, 45.32 ], "formula_id": "formula_5", "formula_text": "L Dual = E i∼N t∼T i -(log P θ (S i t , R i t |C i t ; f cr , f cb ) + log P θ (C i t |S i t ; f bc ) + log P θ (C i t |R i t ; f rc )). (6)" }, { "formula_coordinates": [ 5, 108.73, 99.12, 181.14, 13.39 ], "formula_id": "formula_6", "formula_text": "Ct ∼ P(C t , S t ), Rt ∼ P(S t , R t ). (7)" }, { "formula_coordinates": [ 5, 69.59, 253.32, 100.55, 10.63 ], "formula_id": "formula_7", "formula_text": "(• • • , C t-1 , S t-1 , R t-1" }, { "formula_coordinates": [ 5, 113.28, 372.65, 176.59, 33.96 ], "formula_id": "formula_8", "formula_text": "Cij t ∼ N i=1 T i t=1 M j=1 P(C ij t , S ij t ),(8)" }, { "formula_coordinates": [ 5, 113.38, 412.78, 176.49, 33.96 ], "formula_id": "formula_9", "formula_text": "Rij t ∼ N i=1 T i t=1 M j=1 P(S ij t , R ij t ),(9)" }, { "formula_coordinates": [ 5, 318.59, 90.72, 202, 66.66 ], "formula_id": "formula_10", "formula_text": "LDual = E i∼N t∼T i j∼M -(log P θ (S ij t , R ij t |C ij t ; f cr , f cb ) + log P θ (C ij t |S ij t ; f bc )(C ij t |R ij t ; f rc )). (10" }, { "formula_coordinates": [ 5, 520.6, 147.92, 4.54, 9.46 ], "formula_id": "formula_11", "formula_text": ")" } ]
Multijugate Dual Learning for Low-Resource Task-Oriented Dialogue System
Dialogue data in real scenarios tends to be sparsely available, leaving data-starved end-to-end dialogue systems inadequately trained. We discover that data utilization efficiency in low-resource scenarios can be enhanced by mining the alignment information between uncertain utterances and deterministic dialogue states. We therefore implement dual learning in task-oriented dialogues to exploit the correlation between heterogeneous data. In addition, the one-to-one duality is converted into a multijugate duality to reduce the influence of spurious correlations in dual training and improve generalization. Without introducing additional parameters, our method can be incorporated into arbitrary networks. Extensive empirical analyses demonstrate that our proposed method improves the effectiveness of end-to-end task-oriented dialogue systems on multiple benchmarks and obtains state-of-the-art results in low-resource scenarios.
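The multijugate duality mentioned above is built by generating several rephrasings of each utterance with an off-the-shelf paraphraser (the implementation details name tuner007/pegasus_paraphrase with beam-search decoding) while treating dialogue-state values as constraints. The sketch below approximates this with over-generation plus a post-hoc check that every state value survives, rather than the constrained decoding used in the paper; function and variable names are illustrative assumptions.

```python
from typing import List
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

MODEL_NAME = "tuner007/pegasus_paraphrase"
tokenizer = PegasusTokenizer.from_pretrained(MODEL_NAME)
paraphraser = PegasusForConditionalGeneration.from_pretrained(MODEL_NAME)

def constrained_paraphrases(utterance: str, state_values: List[str], k: int = 4) -> List[str]:
    """Return up to k rephrasings that still contain every dialogue-state value."""
    batch = tokenizer([utterance], truncation=True, padding=True, return_tensors="pt")
    outputs = paraphraser.generate(
        **batch,
        num_beams=10,
        num_return_sequences=10,  # over-generate, then filter by the state constraints
        max_length=60,
    )
    candidates = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    kept = [c for c in candidates
            if all(v.lower() in c.lower() for v in state_values)]
    return kept[:k]

# Example: rewrite a user turn while preserving the state values "cheap", "4", "parking".
print(constrained_paraphrases(
    "i would like to find a cheap place to stay that has 4 stars and has free parking .",
    state_values=["cheap", "4", "parking"],
))
```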
Shimin Li; Xiaotian Zhang; Yanjun Zheng; Linyang Li; Xipeng Qiu
[ { "figure_caption": "Figure 3 :3Figure 3: To investigate the impact of various rephrasing strategies on multijugate dual learning.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "like to find a cheap place to stay that has 4 stars and has free parking.", "figure_data": "Original Context Ct Original Context CtOriginal Response RtI would like to find a cheap place to stay that has 4I have found eight places that match, would you like I have found eight places that match, would you likestars and has free parking.me to book one of them for you ? me to book one of them for you ?", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Database Results Database ResultsOff-the-Shelf Paraphraser Off-the-Shelf ParaphraserPrice PriceInternet InternetParking ParkingCheap Cheap✔ ✔✔ ✔Cheap Cheap✔ ✔…… ……", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Off-the-Shelf Paraphraser Off-the-Shelf Paraphraser", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "log 𝑃 𝜃 (𝑆 𝑡 |𝐶 𝑡 ; 𝑓 𝑐𝑏 )(𝐶 𝑡 |𝑆 𝑡 ; 𝑓 𝑏𝑐 ) log 𝑃 𝜃 (𝑅 𝑡 |𝐶 𝑡 ; 𝑓 𝑐𝑟 )(𝐶 𝑡 |𝑅 𝑡 ; 𝑓 𝑟𝑐 ) 𝐶 𝑡 ∼ 𝒫 𝜙 (𝐶 𝑡 , 𝑆 𝑡 )", "figure_data": "Price:cheapArea:NoneParking:YesName:NoneWIFI:NoneStar4Match # : 8", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The performance of MDTOD is evaluated at 5%, 10%, and 20% of the data size. Comb. denotes Combined Score.", "figure_data": "MultiWOZ 2.05% Training set10% Training set20% Training setModelInform Success BLEU Comb. Inform Success BLEU Comb. Inform Success BLEU Comb.MD-Sequicity 49.4019.7010.3044.8558.1034.7011.4057.8064.4042.1013.0066.25DAMD52.5031.8011.6053.7555.3030.3013.0055.8062.6044.1014.9068.25SOLOIST69.3052.3011.8072.6069.9051.9014.6075.5074.0060.1015.2582.29MinTL75.4860.9613.9882.2078.0866.8715.4687.9482.4868.5713.0088.53UBAR73.0460.2816.0382.8979.2068.7016.0990.0482.5066.6017.7292.26T5-Base77.8063.3014.5684.9481.0067.0015.1789.1784.2072.7017.7196.16BORT69.8045.9011.0068.9074.5060.6015.5083.1082.1065.6014.3088.10PPTOD79.8663.4814.8986.5584.4268.3615.5791.9684.9471.7017.0195.32MTTOD82.0064.0014.4887.4982.1071.1016.2192.8189.5078.5015.5399.53MDTOD85.65 (±2.35)62.20 (±2.70)15.24 (±1.04)89.16 (±1.48)86.30 (±0.90)71.50 (±0.60)14.47 (±1.19)93.37 (±1.04)90.25 (±0.55)80.90 (±0.42)16.40 (±1.15)101.97 (±0.73)", "figure_id": "tab_5", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The performance is evaluated at 10%, 20%, and 50% of the data size. The numbers in parentheses indicate the variance of the four runs. ±1.07 16.14 ±1.48 22.37 ±1.17 31.22 ±2.32 MinTL 9.25 ±2.33 21.28 ±1.94 30.32 ±2.14 35.96 ±1.25 SOLOIST 13.21 ±1.97 26.53 ±1.62 32.42 ±1.13 38.68 ±0.98 PPTOD base 29.72 ±0.61 40.20 ±0.39 43.35 ±0.64 46.96 ±0.40 MDTOD 21.22 ±2.86 40.90 ±0.20 45.10 ±1.40 47.89 ±0.55", "figure_data": "ModelTraining Set1%5%10%20%SimpleTOD 7.91", "figure_id": "tab_7", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "DST evaluated at different proportions of low resources. 
The results are the means and standard deviations of the four runs.", "figure_data": "", "figure_id": "tab_8", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "", "figure_data": "that MDTOD had the greatest accuracyat three different magnitudes, 5%, 10%, and 20%.MDTOD is lower than PPTOD at 1% magnitude", "figure_id": "tab_9", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Different setting of multijugate dual learning.", "figure_data": "due to that PPTOD performs further pre-training ona large amount of additional dialogue data and thuscan achieve relatively better results in extremelylow-resource scenarios. Conversely, MDTOD doesnot perform any additional pre-training, but stillachieves the highest accuracy in the case of theother three magnitudes of data, indicating that mul-tijugate dual learning between user utterances anddialogue states is an important component thatmakes the overall approach effective.4.4 Analysis4.4.1 Dismantling multijugate dual learningTo investigate the effect of different dual learningcomponents and paraphrase augmentation on theproposed technique, we conducted ablation experi-ments by omitting various components using a 10%data size setting. In Table 4, Para represents theapproach of paraphrase augmentation, DU-DL rep-resents dual learning between context and dialoguestate, and RU-DL indicates dual learning betweencontext and system response.As shown in Table 4, the model's performancedecreases slightly when only dual learning is re-tained and the paraphrase enhancement is removed,indicating that multijugate dual learning can par-tially mitigate the overfitting problem caused bypairwise learning and thereby improve the model'sgeneralization capability. Among the various dual", "figure_id": "tab_10", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The outcomes of the cross-domain evaluation. X / * → * denotes that the * domain is excluded from the training set and only the * domain is tested.", "figure_data": "", "figure_id": "tab_11", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results of the performance comparison between MDTOD and other generative models, using Mul-tiWOZ 2.0 and 2.1 datasets, for the dialogue state tracking. †: The results provided in the publications of these approaches could not be reproduced in MultiWOZ2.1 or with an unfair evaluation script, so we corrected these results based on their open source code.", "figure_data": "eters or use a more powerful pre-training model fordialogue. Despite this, Dual-Dialog earns the high-est results, proving that dual learning can morethoroughly exploit the information included in theoriginal data and enhance the performance of task-oriented dialogue systems despite the vast amountof data. Our proposed strategy likewise achievesthe greatest BLEU on MultiWOZ2.0, showing thatthe quality of the model's generated responses hasbeen substantially enhanced.B.2 Dialogue State TrackingTo further investigate the influence of bipartite mod-eling between uncertain user utterances and deter-ministic belief states in dual learning on TOD sys-tems, we compared MDTOD with different generat-ing paradigm baselines while performing the beliefstate tracking task. According to Table 8, MD-TOD obtained up-to-date results for both datasetsin the belief state tracking challenge. On Multi-WOZ 2.0 and 2.1, our suggested technique achievesa 0.41 JGA improvement above the previous high-est BORT and MTTOD. 
Dual learning betweendialogue states and user utterances can learn en-tity alignment information in the data, resulting inimproved performance in belief state tracking.", "figure_id": "tab_12", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Full dataset comparison results between MDTOD and baselines under end-to-end settings. †: the results in(Su et al., 2022) are utilized. ‡: reproduced results operating the author's open-source code.", "figure_data": "MultiWOZ 2.0MultiWOZ 2.1", "figure_id": "tab_13", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Paraphrased Utterance plan a dinner with Martha later this week at 11 am. Paraphrased Response what day of the workweek do you want the reminder set for?", "figure_data": "Constraints[weather] [value_weather_attribute] hail [value_location] los angelesOriginal Utteranceis there going to be hail in los angeles this weekend ?Original Responseon Sunday hail is predicted to fall in san mateoParaphrased Utterance will hail hit los angeles this weekend?Paraphrased Response on sunday hail is foreshadow to fall in san mateoConstraints[schedule] [value_event] dentist appointmentOriginal Utterancegive me the date and time of my dentist appointmentOriginal Responseyour dentist appointment is at [value_time] on [value_date] .Paraphrased Utterance tell me the date and time of the dentist appointmentParaphrased Response your tooth doctor appointment is at [value_time] on [value_date] .Constraints[schedule] [value_party] martha [value_event] dinner [value_time] 11amOriginal Utteranceschedule dinner with Martha later this week at 11am.Original Responsewhat day of the week do you want the reminder set for?", "figure_id": "tab_14", "figure_label": "", "figure_type": "table" }, { "figure_caption": "A partial sample of the rephrased user utterances and system responses generated by paraphraser, where the constraints are marked in green.User Utterance okay when you find something i would like to make a reservation for 2 people on a sunday at 13:30 . Oracle Belief State [restaurant] {food : chinese, area : centre} Generated Belief State [restaurant] {food : chinese, area : centre, time : 13:30, day : sunday, people : 2} Oracle Reference do you have a price range in mind ? Delexicalized Response i was able to book you at [value_name] . your reference number is [value_reference] . is there anything else i can help you with ? Lexicalized Response i was able to book you at charlie chan . your reference number is 4wngilmf . is there anything else i can help you with ? User Utterance i don't mind about the price range . Oracle Belief State [restaurant] {food : chinese, area : centre} Generated Belief State [restaurant] {food : chinese, area : centre, time : 13:30, day : sunday, people : 2} Oracle Reference okay , how about lan hong house ? it is a chinese restsurant in the centre area and is in the moderate price range . will that work for you ? Delexicalized Response i would recommend [value_name] . , but i was n't able to book the restaurant for that date and time . would you like to try a different time ? Delexicalized Response booking was successful . the table will be reserved for 15 minutes . reference number is : [value_reference] . anything else i can help with ? Lexicalized Response booking was successful . the table will be reserved for 15 minutes . reference number is : f1ntkwzo . anything else i can help with ? 
{food : chinese, area : centre, name : lan hong house, time : 12:30, day : sunday, people : 2} Generated Belief State [restaurant] {food : chinese, area : centre, time : 12:30, day : sunday, people : 2} Oracle Reference you are booked for 12:30 on sunday for 2 people . your reference number is f1ntkwzo . is there something else i can help you with ? Delexicalized Response booking was successful . the table will be reserved for 15 minutes . reference number is : [value_reference] . anything else i can help with ? Reference thank you , if you need anything else let me know . good bye . Delexicalized Response thank you for using our system ! Lexicalized Response thank you for using our system !", "figure_data": "SNG0586DialogueUser Utterancehi, i am looking for a restaurant in the centre of town serving north indian food .Oracle Belief State[restaurant] {food : north indian, area : centre}Turn 0Generated Belief State[restaurant] {food : north indian, area : centre}Oracle Referencewe don't have any specific restaurants in that category. let's try something else ?Delexicalized Responsei am sorry , there are no restaurants serving [value_food] food in the centre . would you like to try a different cuisine ?Lexicalized Responsei am sorry , there are no restaurants serving north indian food in the centre . would you like to try a different cuisine ?User Utterancethat 's too bad . how about chinese ?Oracle Belief State[restaurant] {food : chinese, area : centre}Turn 1Generated Belief State[restaurant] {food : chinese, area : centre}Oracle Referenceokay sure what do you have ?Delexicalized Responsethere are [value_choice] [value_food] restaurants in the [value_area] . do you have a price range preference ?Lexicalized Responsethere are 3 chinese restaurants in the centre . do you have a price range preference ?Turn 2Turn 3Lexicalized Responsei would recommend lan hong house .User Utterancethat 's fine . can you try booking it ?Oracle Belief State[restaurant] {food : chinese, area : centre, time : 13:30, day : sunday, people : 2}Turn 4Generated Belief State[restaurant] {food : chinese, area : centre, time : 13:30, day : sunday, people : 2}Oracle Reference User Utterance i am sorry Turn 5 how about 12:30 ? Oracle Belief State booking was successful . the table will be reserved for 15 minutes . reference [restaurant] Lexicalized Response number is : f1ntkwzo . anything else i can help with ?User Utterancethanks so much . have a nice day . bye .Turn 6Oracle Belief State Generated Belief State[restaurant] {food : chinese, area : centre, name : lan hong house, time : 12:30, day : sunday, people : 2}Oracle", "figure_id": "tab_15", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_16", "figure_label": "11", "figure_type": "table" } ]
[{"Category": "Data Source", "Citation": "(Zhang et al., 2020b)", "Explanation": "The cited work is a dialogue data source that the citing paper uses in its research on end-to-end task-oriented dialogue systems."}, {"Category": "Methodological Basis", "Citation": "(Qiu et al., 2020)", "Explanation": "The cited work is a pre-trained language model that the citing paper adopts in its research on end-to-end task-oriented dialogue systems."}, {"Category": "Extension or Continuation", "Citation": "(Su et al., 2022)", "Explanation": "The citing paper extends the research on end-to-end task-oriented dialogue systems by exploring new dimensions, contexts, or variables."}, {"Category": "Extension or Continuation", "Citation": "(Lee, 2021)", "Explanation": "The citing paper continues the research on end-to-end task-oriented dialogue systems by building upon the work of Lee (2021)."}, {"Category": "Extension or Continuation", "Citation": "(Tian et al., 2022)", "Explanation": "The citing paper expands the research on end-to-end task-oriented dialogue systems by exploring new methods and techniques."}, {"Category": "Methodological Basis", "Citation": "(Gao et al., 2018)", "Explanation": "The cited work is a modular cascading dialogue system that the citing paper adopts in its research on end-to-end task-oriented dialogue systems."}, {"Category": "Data Source", "Citation": "(Budzianowski et al., 2018)", "Explanation": "The cited work is a dialogue data source that the citing paper uses in its research on end-to-end task-oriented dialogue systems."}, {"Category": "Methodological Basis", "Citation": "(Mi et al., 2022)", "Explanation": "The cited work by Mi et al. (2022) provides a method for improving data utilization efficiency in low-resource scenarios, which the citing paper builds upon to improve the end-to-end TOD system."}, {"Category": "Methodological Basis", "Citation": "(Sun et al., 2022)", "Explanation": "The cited work by Sun et al. (2022) provides a method for reconstructing user discourse based on belief states, which the citing paper adopts in their research to model the duality between user utterance and system responses."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2020)", "Explanation": "The cited work by Chen et al. (2020) also contributes to the method of reconstructing user discourse based on belief states, which the citing paper further builds upon to model the duality between user utterance and system responses."}, {"Category": "Methodological Basis", "Citation": "(Yang et al., 2022)", "Explanation": "The cited work by Yang et al. (2022) is used to demonstrate the impact of high-frequency phrases on model performance, which serves as a basis for the citing paper to further discuss the generalization of the model in low-resource scenarios."}, {"Category": "Methodological Basis", "Citation": "(Budzianowski et al., 2018)", "Explanation": "The cited work by Budzianowski et al. (2018) provides the MultiWOZ2.0 dataset, which the citing paper uses as a task-oriented dataset for evaluating the effectiveness of their proposed method in end-to-end TOD systems."}, {"Category": "Data Source", "Citation": "(Eric et al., 2020)", "Explanation": "The cited work by Eric et al. (2020) contributes the MultiWOZ2.1 dataset, which the citing paper utilizes in their research to demonstrate the advantages of their approach in low-resource scenarios."}, {"Category": "Data Source", "Citation": "(Eric et al., 2017)", "Explanation": "The cited work by Eric et al. 
(2017) provides the KVRET dataset, which the citing paper uses in their research to evaluate the effectiveness of their method in end-to-end TOD systems in low-resource scenarios."}, {"Category": "Methodological Basis", "Citation": "(Radford et al., 2019)", "Explanation": "The cited work, GPT2, serves as the backbone network for the end-to-end generation approach in the citing paper, providing a method for modeling individual dialogue tasks in the TOD as cascading generation tasks."}, {"Category": "Methodological Basis", "Citation": "(Raffel et al., 2020)", "Explanation": "The cited work, T5, is used as a method for modeling dialogue sub-tasks in sequence-to-sequence generating tasks."}, {"Category": "Methodological Basis", "Citation": "(Lewis et al., 2020)", "Explanation": "The cited work, BART, is also used as a method for modeling dialogue sub-tasks in sequence-to-sequence generating tasks."}, {"Category": "Methodological Basis", "Citation": "(He et al., 2016)", "Explanation": "The cited work introduces the concept of dual learning in unsupervised machine translation, which the citing paper builds upon to optimize two agents in a reinforcement learning setting."}, {"Category": "Extension or Continuation", "Citation": "(Xia et al., 2017)", "Explanation": "The cited work extends dual learning to supervised settings, which the citing paper further expands to take advantage of pairwise relationships in parallel corpora."}, {"Category": "Extension or Continuation", "Citation": "(Guo et al., 2020)", "Explanation": "The cited work employs cycle training to enable mutual generation of structured graphs and text, which the citing paper extends to dialogue tasks in a similar fashion."}, {"Category": "Extension or Continuation", "Citation": "(Li et al., 2021)", "Explanation": "The cited work expands the duality in dialogue tasks to stylized dialogue generation without a parallel corpus, which the citing paper further extends in the context of dialogue state tracking."}, {"Category": "Extension or Continuation", "Citation": "(Sun et al., 2022)", "Explanation": "The cited work integrates the idea of duality into dialogue state tracking, which the citing paper further extends in the context of dialogue generation to enhance responses in terms of diversity, personality, and coherence."}, {"Category": "Extension or Continuation", "Citation": "(Chen et al., 2020)", "Explanation": "The cited work introduces dual learning in dialogue generation to enhance responses in various aspects, which the citing paper further extends in the context of dialogue state tracking."}, {"Category": "Extension or Continuation", "Citation": "(Zhang et al., 2018)", "Explanation": "The cited work introduces dual learning in dialogue generation to enhance responses in various aspects, which the citing paper further extends in the context of dialogue state tracking."}, {"Category": "Extension or Continuation", "Citation": "(Yang et al., 2018)", "Explanation": "The cited work introduces dual learning in dialogue generation to enhance responses in various aspects, which the citing paper further extends in the context of dialogue state tracking."}, {"Category": "Extension or Continuation", "Citation": "(Cui et al., 2019)", "Explanation": "The cited work introduces dual learning in dialogue generation to enhance responses in various aspects, which the citing paper further extends in the context of dialogue state tracking."}, {"Category": "Methodological Basis", "Citation": "(Lee, 2021)", "Explanation": "The cited work by Lee 
provides a framework for end-to-end TOD systems that includes subtasks such as dialogue state prediction and response generation, which the citing paper adopts in its research on the same topic."}, {"Category": "Extension or Continuation", "Citation": "(Hosseini-Asl et al., 2020)", "Explanation": "The cited work by Hosseini-Asl et al. discusses the use of sequence generation tasks in end-to-end TOD systems, which the citing paper extends by exploring new dimensions and variables in the dialogue process."}, {"Category": "Data Source", "Citation": "(Budzianowski et al., 2018)", "Explanation": "The cited work, MultiWOZ2.0, is a dataset that the citing paper uses to evaluate the performance of a model in the task-oriented dialogue domain."}, {"Category": "Data Source", "Citation": "(Eric et al., 2017)", "Explanation": "The cited work, KVRET, is a multi-turn TOD dataset that the citing paper uses to assess the robustness of a model against mislabeling in the task-oriented dialogue task."}, {"Category": "Data Source", "Citation": "(Eric et al., 2020)", "Explanation": "The cited work, MultiWOZ2.1, is a version of the MultiWOZ2.0 dataset with fixed annotation problems that the citing paper uses to evaluate the performance of a model in the task-oriented dialogue task."}, {"Category": "Methodological Basis", "Citation": "(Wu et al., 2019)", "Explanation": "The cited work, TRADE, provides a method for training a model to perform a specific task, which the citing paper builds upon in their research."}, {"Category": "Data Source", "Citation": "(Ren et al., 2019)", "Explanation": "The cited work, COMER, is the data source for the model used in the citing paper to perform a specific task."}, {"Category": "Methodological Basis", "Citation": "(Zhou and Small, 2019)", "Explanation": "The cited work, DSTQA, provides a method for training a model to perform a specific task, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Kim et al., 2020)", "Explanation": "The cited work, SOM-DST, provides a method for training a model to perform a specific task, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Chen et al., 2020)", "Explanation": "The cited work, dual-DST, provides a method for training a model to perform a specific task, which the citing paper builds upon in their research."}, {"Category": "Data Source", "Citation": "(Raffel et al., 2020)", "Explanation": "The cited work, T5-Base, is the data source for the model used in the citing paper to perform a specific task."}, {"Category": "Methodological Basis", "Citation": "(Hosseini-Asl et al., 2020)", "Explanation": "The cited work, SimpleTOD, provides a method for training a model to perform a specific task, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Peng et al., 2021)", "Explanation": "The cited work, SOLOIST, provides a method for training a model to perform a specific task, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Su et al., 2022)", "Explanation": "The cited work, PPTOD, provides a method for training a model to perform a specific task, which the citing paper builds upon in their research."}, {"Category": "Data Source", "Citation": "(Lee, 2021)", "Explanation": "The cited work, MTTOD, is the data source for the model used in the citing paper to perform a specific task."}, {"Category": "Data Source", "Citation": 
"(Sun et al., 2022)", "Explanation": "The cited work, BORT, is the data source for the model used in the citing paper to perform a specific task."}, {"Category": "Methodological Basis", "Citation": "(Sun et al., 2022)", "Explanation": "The cited work, BORT, provides a method for training a model to perform a specific task, which the citing paper builds upon in their research."}, {"Category": "Methodological Basis", "Citation": "(Sun et al., 2022)", "Explanation": "The cited work, BORT, provides a method for training a model to perform a specific task, which the citing paper builds upon in their research."}]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b13", "b16", "b24", "b37", "b16", "b3", "b31", "b40", "b3" ], "table_ref": [], "text": "Object pose estimation is a fundamentally important task in computer vision with a multitude of real-world applications, e.g., in autonomous driving, 3D reconstruction, or in virtual and augmented reality applications. Pose estimation has been studied in depth on the instance level [14,17,19,25,38], and on the category-level for very specific object classes like cars [11] and faces [26]. However, it remains unclear how to learn category-level 3D pose estimation for general object categories. The main reason is that current models require large-scale annotated data, but annotating data with 3D poses is prohibitively expensive.\nWe aim to approach this challenging open research problem by developing models that learn from limited manual annotation and large-scale synthetic data with automated annotations. In particular, we build on recent results that develop a render-and-compare approach to categorylevel pose estimation [17,34] and demonstrated more efficient learning from few examples [35] compared to standard deep neural network-based methods, due to their inherent 3D-aware network architecture. However, these methods still suffer from a lower pose prediction accuracy when learned from few examples, compared to models learned from large-scale annotated data.\nIn this work, we aim to close the performance gap between models trained on a limited number of annotated real images and fully supervised models. To achieve this, we first introduce an advanced method to generate realistic synthetic data, and second, we extend models that demonstrate good generalization capabilities, to make them even better.\nThe major obstacle that prevents the community from using generated data rendered using computer graphics is that most current object pose estimation approaches [21,30,32,41] are sensitive to domain shift. This means that their performance degrades significantly when trained on synthetic images and then evaluated on real-world images. To address this issue, we create and develop SyntheticP3D, a synthetic dataset with high-quality realistic images and accurate 3D annotations with no manual efforts. As outlined in Figure 1, the dataset generation begins with the rendering the CAD models with a graphics-based renderer. To narrow the gap between synthetic images and natural images, we propose a graphics-guided style transfer module that utilizes a pre-trained style transfer generative model to produce high-quality images while maintaining 3D consistency. We also introduce an out-of-distribution (OOD)aware generation design that can effectively break the spurious correlations between task-related semantic information and domain-specific features. SyntheticP3D can improve model's robustness in OOD scenes with only a negligible Figure 1: Our approach learns 3D pose estimation from SyntheticP3D, where CAD models are rendered under randomly sampled viewpoints and lighting with various textures and backgrounds. Using SyntheticP3D, we propose an effective method that allows for accurate 3D pose estimation on real data, even in challenging domains considered to be out-ofdistribution for standard benchmarks. 
degradation in in-distribution benchmark performance.\nAs a second contribution, we develop a domain robust object pose estimation approach based on prior work on neural mesh models [34] that use inverse rendering on discriminative neural features for pose estimation. In particular, our approach represents an object category as a cuboid mesh and learns a generative model of neural feature activations at each mesh vertex for pose estimation via differentiable rendering. The feature representations at each vertex are trained to be invariant to instance-specific details and changes in 3D pose using contrastive learning. We extend the model to achieve better domain generalization by enhancing the consistency among vertex features across domains, and reweighting predictions to depend more on reliable features. To better adapt our model to real-world image domains, we fine-tune it on unlabeled real-world images using pseudo-labels from unlabeled data.\nWe summarize the contributions of our paper as follows:\n• We create the SyntheticP3D dataset by rendering CAD models in various poses, lighting conditions, and backgrounds, with high-quality realistic images with 3D annotations. As a result, models trained on Synthet-icP3D dataset achieve better accuracy on real-world images and generalize well to out-of-distribution scenarios.\n• We introduce a novel training and inference process for neural mesh models that enables them to perform unsupervised domain adaptation via feature consistency.\n• Our results show that our proposed model combined with our synthetic data generalizes almost as well as fully supervised models, when using only 50 training samples per class. Using 10% of the annotated data it can even outperform fully supervised models. Moreover, it generalizes more robustly in realistic out-ofdistribution scenarios, despite being trained on minimal real data." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b31", "b35", "b23", "b40", "b35", "b19", "b14", "b21", "b9", "b17", "b41", "b41", "b17", "b9" ], "table_ref": [], "text": "Category-level 3D pose estimation. Category-level 3D pose estimation estimates the 3D orientations of objects in a certain category. A classical approach was to formulate pose estimation as a classification problem [21,32]. Subsequent works can be categorized into keypoint-based methods and render-and-compare methods [5,36]. Keypointbased methods [24,41] first detect semantic keypoints and then predict the optimal 3D pose by solving a Perspectiven-Point problem. Render-and-compare methods [5,36] predict the 3D pose by fitting a 3D rigid transformation to minimize a reconstruction loss. Recently, NVSM [35] proposed a semi-supervised approach and investigated pose estimation in few-shot settings. Annotations of 3D poses are hard to obtain, and most previous works are largely limited by the number and quality of 3D annotations on real images. In this work, we propose to incorporate synthetic images generated from CAD models to address this challenge.\nUnsupervised domain adaptation. Unsupervised domain adaptation (UDA) leverages both labeled source domain data and unlabeled target domain data to learn a model that works well in the target domain. One approach is to learn domain-invariant feature representations by minimizing domain divergence in a latent feature space [20,28,31]. 
Another line of work adopts adversarial loss [15,33] to extract domain-invariant features, where a domain classifier is trained to distinguish the source and target distributions.\nRecent works have also investigated UDA in downstream tasks, such as human pose estimation [4] and parsing deformable animals [22]. However, previous works often limited their scope to improving pose estimation or segmentation performance on i.i.d. data. Self-training methods [10,18,42,43] were proposed to address this problem. [42,43] formulated self-training as a general EM algorithm and proposed a confidence regularized framework. [18] proposed a self-ensembling framework to bootstrap models using unlabeled data. Moreover, [10] extended the previous work to unsupervised domain adaptation and investigated self-ensembling in closing domain gaps. In this work, we introduce an approach that leverages 3D cross-domain consistency in a contrastive learning framework." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "We introduce our SyntheticP3D dataset and the data generation process in Section 3.1, and we describe our proposed cross-domain object pose estimation approach in Section 3.2." }, { "figure_ref": [], "heading": "SyntheticP3D", "publication_ref": [ "b28", "b1", "b15", "b11", "b15" ], "table_ref": [], "text": "We generate realistic-looking synthetic images for training to reduce the domain generalization gap. Given CAD models C = {C_i}_{i=1}^N and 2D background images B = {B_j}_{j=1}^K, our synthetic image generation can be formulated as\nI_render = R(C_i, ξ), I_synthetic = I_render ⊕ B_j, (1)\nwhere ξ ∈ SO(3) represents a randomized object pose, R is an off-the-shelf renderer, and ⊕ overlays the rendered object image onto the background image B_j based on the object mask.\nAlthough the image generation pipeline we employ yields image samples with 3D annotations at no additional cost, there is a significant domain gap between synthetic and real images. This gap presents great challenges for deep learning models to apply knowledge learned from synthetic data to natural images. Moreover, the generation of synthetic data is often biased towards the domain style of the testing benchmark, leading to models trained on the abundant synthetic data overfitting on domain-specific features. The overfitting can result in a drop in performance when evaluated on out-of-distribution (OOD) datasets. To address these issues, we propose two novel designs for our SyntheticP3D dataset that improve both the in-distribution and out-of-distribution performance.\nGraphics-guided style transfer. Rendering photo-realistic images from CAD models is a challenging task, despite the plentiful CAD models available online and the technological advancements in modern renderers. Achieving high levels of realism requires detailed object materials and textures, which are not available in most publicly available CAD models [3,37]. Moreover, simulating authentic lighting conditions demands professional expertise to set up various types of lights and largely increases the rendering time of synthetic images. In fact, modern generative models [29,39] are capable of generating high-resolution, detailed images with realistic textures and specular reflections. We propose to utilize such models to generate high-quality training images with 3D annotations.\nTherefore, we design a graphics-guided style transfer module that can rapidly generate photo-realistic images without relying on high-quality CAD models. 
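As a concrete illustration of the composition step I_synthetic = I_render ⊕ B_j from Eq. (1), the following minimal sketch overlays a rendered object onto a background image using the object mask stored in the render's alpha channel. The renderer itself (Blender in this work) is not shown, and the function name and array conventions are assumptions made for this example.

```python
import numpy as np

def composite_onto_background(render_rgba: np.ndarray, background_rgb: np.ndarray) -> np.ndarray:
    """Overlay a rendered object (H, W, 4), whose alpha channel is the object mask,
    onto a background image (H, W, 3); pixels outside the mask come from the background."""
    assert render_rgba.shape[:2] == background_rgb.shape[:2], "render and background must share resolution"
    alpha = render_rgba[..., 3:4].astype(np.float32) / 255.0        # object mask in [0, 1]
    foreground = render_rgba[..., :3].astype(np.float32)
    composite = alpha * foreground + (1.0 - alpha) * background_rgb.astype(np.float32)
    return np.clip(composite, 0, 255).astype(np.uint8)
```

The same overlay operator ⊕ reappears in Eq. (2), where it is applied to the style-transferred image instead of the raw rendering.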
As demonstrated in Figure 1, we start by rendering the image I render = R(C i , ξ) with the graphics-based renderer. Then we use a Canny edge detector [2] to produce a 2D edge map E encoding the semantic and structural information of the object C in the 2D lattice. The edge map is used to guide the generation of a high-quality image I ′ synthetic using a pre-trained style transfer generative model Ψ. The generative model Ψ takes an edge map as input and generates a high-quality realistic image consistent with the semantics provided in the edge map. By leveraging the edge map input, our approach effectively retains the semantic and structural information of C, enabling us to obtain 3D annotations for high-quality image I ′ synthetic directly from the rendering parameters. Formally this module is given by\nI render = R(C i , ξ) E = CannyEdge(I render ) I ′ synthetic = Ψ(E) ⊕ B j(2)\nNote that the style transfer generative model can be trained with abundant 2D images from the Internet. The highquality synthetic training data with 3D annotations come at no extra cost with the help of our graphics-guided module.\nEarly experiments revealed that the style transfer network exhibits mode collapse, resulting in textureless objects with similar colors (see Figure 2). We propose two Figure 2: Top: Visualizations of the SyntheticP3D. The naïve approach yields textureless objects with similar colors. We promote diverse textures and colors with 3Dconsistent prior noise and simple prompt engineering. Bottom: Visualizations of 3D consistent prior noise for diverse texture generation. approaches that address this issue. To promote varied textures from the style transfer generative model, we choose to render the CAD models with textures from the Describable Texture Dataset (DTD) [7]. This strategy introduces 3Dconsistent prior noise into the edge maps, which compels the model to generate a variety of textures and colors. To further encourage color diversity, we include random colors in our prompts in the form of \"[color] [category]\", such as \"red car\" and \"green aeroplane\". This approach allows us to produce a wide range of colors while maintaining 3D consistencies.\nOOD-aware generation. From a causal perspective, the OOD robustness problem can be attributed to the spurious correlation between task-related semantic features, such as object parts and their locations, and domain-specific features, such as backgrounds [16]. Models trained on real images would inevitably learn from such spurious correlation, resulting in a high in-distribution benchmark performance (largely due to overfitting) and poor OOD robustness. Previous methods struggled to break the spurious correlation in real images [12,16], which involves complex data augmentations or swapping features as a regularization.\nThe fully controllable generation of our synthetic dataset allows us to disentangle task-related semantics of foreground objects, including CAD models and poses, from domain-specific features such as background images. To this end, we collected 100 images from the Internet, and during our synthetic data generation process, we fully randomized the selection of B j , independent of the foreground object category. In Section 4.6, we demonstrate that our OOD-aware design significantly enhances our model's OOD robustness while only marginally degrading in-distribution performance." 
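To make the graphics-guided style transfer step of Eq. (2) and the prompt-based color diversification concrete, the sketch below extracts a Canny edge map from the rendered image and hands it, together with a "[color] [category]" prompt, to a stand-in for the pre-trained edge-conditioned generator Ψ. The generator interface, the color palette, and the Canny thresholds are illustrative assumptions rather than the exact configuration used to build SyntheticP3D.

```python
import random
import cv2
import numpy as np

COLOR_WORDS = ["red", "green", "blue", "yellow", "white", "black"]   # illustrative palette

def edge_guided_style_transfer(render_rgb: np.ndarray, category: str, generator) -> np.ndarray:
    """Sketch of Eq. (2): I_render -> edge map E -> Psi(E, prompt).

    `generator` stands in for the pre-trained style transfer model Psi and is
    assumed to map an edge map plus a text prompt to a realistic RGB image."""
    gray = cv2.cvtColor(render_rgb, cv2.COLOR_RGB2GRAY)
    edge_map = cv2.Canny(gray, 100, 200)                  # 2D edge map E encoding object structure
    prompt = f"{random.choice(COLOR_WORDS)} {category}"   # e.g. "red car", to encourage color diversity
    return generator(edge_map, prompt)                    # I'_synthetic, before the background overlay
```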
}, { "figure_ref": [], "heading": "Domain Consistency 3D Pose Estimation via", "publication_ref": [ "b3", "b0" ], "table_ref": [], "text": "Render-and-Compare\nOur work builds on and significantly extends neural mesh models (NMMs) [34]. Specifically, we introduce the domain contrastive loss and the cross-domain feature consistency.\nNeural Mesh Models represent objects as a neural mesh N = {V, C} with a set of vertices that represent a cuboid mesh V = {V r ∈ R 3 } R r=1 and learnable features for each vertex\nC = {C r ∈ R c } R r=1\n, where c is the number of channels and R is the number of vertices, that the C is learned via a running average of features from training images. During training, we first extract feature map F = Φ W (I), where Φ is the feature extractor with weights W . The feature extractor is trained with the contrastive loss that increases features' spatial distinguishability from each other [1]:\nL con (F ) = - i∈F G ( j∈F G\\{i} ∥f i -f j ∥ 2 + j∈BG ∥f i -f j ∥ 2 ),\nwhere FG and BG indicate pixels assigned as foreground or background respectively, i, j is pixels on the extracted feature map F .\nAt test time, we can infer the object pose m by minimizing the feature reconstruction error w.r.t. the pose m with gradient descent\nL rec (F, N, m, b) = -ln p(F | N, m, b) = - i∈F G ln 1 σ r √ 2π - 1 2σ 2 r ∥f i -C r ∥ 2 - i ′ ∈BG ln 1 σ √ 2π - 1 2σ 2 ∥f i ′ -b∥ 2 . (3\n)\nwhere FG and BG indicates pixels assigned as foreground or background respectively, b is learnt features that represent backgrounds, and σ is the variance. L rec is also used in training to train the neural features on the mesh." }, { "figure_ref": [], "heading": "Domain Contrastive Loss.", "publication_ref": [ "b0" ], "table_ref": [], "text": "To further improve the domain generalization ability of the NMMs, we improve the feature C to be invariant to variations between synthetic and real images.\nTo achieve this, we introduce a domain-contrastive loss that encourages features in real and synthetic data to become similar to each other:\nL domain (C, {F }) = R r=1 N n=1 ∥f n,r -C r ∥ 2 , (4\n)\nwhere {f n,r } N n=1 are corresponding features for the vertex r on the neural mesh in F n . F n is the feature map of the n-th real image. The correspondence between the neural mesh N and the real data is obtained with pseudo labels introduced below.\nFinally, our full model is trained by optimizing the joint loss:\nL joint = L con + L rec + αL domain ,(5)\nwith α being a weight parameter that ensures that both losses are approximately on the same scale.\nUnsupervised domain adaptation with pseudo labels.\nThe core challenge of our approach lies in finding the corresponding features for every vertex on the neural mesh in the real data without access to any pose annotations. To resolve this problem, we first train a neural mesh from synthetic data where we have ground-truth annotations. We train the parameters of the neural texture C through maximum likelihood estimation (MLE) by minimizing the negative log-likelihood of the feature representations over the whole training set. The correspondence between the feature vectors f i and vertices r in the synthetic data is computed using the annotated 3D pose. To reduce the computational cost of optimizing Equation 3, we follow [1] and update C in a moving average manner. Given a synthetically trained neural mesh, we start by estimating the 3D poses {m est } of the real data by optimizing the pose parameters m to maximize the likelihood p(F | N, m, b). 
Then we perform unsupervised domain adaptation using pseudo-labels produced by the estimated poses. Specifically, we project the mesh N to the 2D lattice with the estimated pose m est and obtain the corresponding features {f n,r } from the real images for every vertex r. Finally, we proceed to fine-tune the neural mesh model by optimizing Equation 5 to obtain the domain-adapted neural mesh.\nIn the following section, we demonstrate that our proposed unsupervised domain adaptation approach is highly efficient at bridging the domain gap between real and synthetic data, giving accurate predictions on real data without using any real annotations, and outperforming state-of-theart models when fine-tuned with very few annotated real data." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we present our main experimental results. We start by describing the experimental setup in Section 4.1. Then we study the performance of approach on 3D pose estimation under unsupervised and semi-supervised settings in Section 4.2. We also report experimental results on out-of-distribution data in Section 4.4 to demonstrate the generalization ability of our model." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b39", "b3", "b40", "b12", "b40", "b3", "b3", "b40" ], "table_ref": [], "text": "Datasets. We first evaluate 3D pose estimation by our model and baseline models on PASCAL3D+ dataset [37]. The PASCAL3D+ dataset contains 11045 training images and 10812 validation images of 12 man-made object categories with category and object pose annotations. We evaluate 3D pose estimation under 5 different settingsunsupervised, semi-supervised with 7, 20, and 50 images [35], as well as the fully-supervised setting. To investigate model robustness in out-of-distribution scenarios, we evaluate our method on the OOD-CV dataset [40]. The OOD-CV dataset includes out-of-distribution examples from 10 categories of PASCAL3D+ and is a benchmark to evaluate out-of-distribution robustness to individual nuisance factors including pose, shape, texture, context and weather.\nEvaluation. 3D pose estimation aims to recover the 3D rotation parameterized by azimuth, elevation, and inplane rotation of the viewing camera. Following previous works [34,41], we evaluate the error between the predicted rotation matrix and the ground truth rotation matrix:\n∆ (R pred , R gt ) = ∥log m(R T pred Rgt)∥ F √ 2\n. We report the accuracy of the pose estimation under common thresholds, π 6 and π 18 . Training Setup. We use an ImageNet [9] pre-trained ResNet50 [13] as feature extractor. The dimensions of the cuboid mesh N are defined such that for each category most of the object area is covered. Which takes around 1 hour per category on a machine with 2 RTX Titan Xp GPUs. We implement our approach in PyTorch [23] and apply the rasterisation implemented in PyTorch3D [27].\nSyntheticP3D. We sample the synthetic training data using the CAD models provided in the PASCAL3D+ and OOD-CV datasets. We use Blender [8] as our renderer to generate the synthetic images. For each category we sample the azimuth pose randomly in the range [0, 360], the elevation in range [-90, 90] and the in-plane rotation in the range [-5, 5] degrees. We sample 7000 images per class and randomize the texture of the CAD model by sampling textures from the describable texture database [6]. 
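Returning to the evaluation protocol above, the rotation error Δ(R_pred, R_gt) = ||logm(R_pred^T R_gt)||_F / √2 and the accuracy under the π/6 and π/18 thresholds can be computed as in the sketch below. This is an independent illustration of the metric, not the evaluation code used in the paper.

```python
import numpy as np
from scipy.linalg import logm

def rotation_error(R_pred: np.ndarray, R_gt: np.ndarray) -> float:
    """Geodesic distance between two rotation matrices in radians:
    ||logm(R_pred^T R_gt)||_F / sqrt(2)."""
    R_rel = R_pred.T @ R_gt
    log_rel = np.real(logm(R_rel))            # skew-symmetric for a proper rotation
    return float(np.linalg.norm(log_rel, ord="fro") / np.sqrt(2.0))

def pose_accuracy(errors, threshold=np.pi / 6) -> float:
    """Fraction of predictions with rotation error below the threshold
    (pi/6 and pi/18 are the thresholds reported in the experiments)."""
    return float(np.mean(np.asarray(errors) < threshold))
```

For a valid relative rotation this quantity equals the geodesic angle, so an equivalent closed form is arccos((trace(R_rel) - 1) / 2).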
The background images are sampled from a collection of 100 images that we collected from the internet by searching for the keywords \"wallpaper\"+[\"street, jungle, market, beach\"] (see examples in the supplementary materials).\nBaselines. We compare our model to fully supervised methods for category-level 3D pose estimation, including StarMap [41] and NeMo [34] using their official implementation and training setup. Following common practice, we also evaluate a popular baseline that formulates pose estimation as a classification problem. In particular, we evaluate the performance of a deep neural network classifier that uses the same backbone as NeMo. We train a ResNet50 1: Few-shot pose estimation results on 6 vehicle classes of PASCAL3D+ following the evaluation protocol in [35]. We indicate the number of annotations during training for each category and evaluate all approaches using Accuracy (in percent, higher better) and Median Error (in degree, lower better). We also include the fully supervised baseline [34] (Full Sup.) which is trained from the full dataset (hundreds of images per category).\n[13], which performs pose estimation for all categories in a single classification task. We report the result using the implementation provided by [41].\nFew-shot Learning. We further compare our approach at a recently proposed semi-supervised few-shot learning setting [35]. This means we use 7, 20, and 50 annotated images for training from the Pascal3D+ dataset, and evaluate 6 vehicle categories (aeroplane, bicycle, boat, bus, car, motorbike), which have a relatively evenly distributed pose regarding the azimuth angle. In order to utilize the unlabelled images, a common pseudo-labelling strategy is used for all baselines. Specifically, we first train a model on the annotated images, and use the trained model to predict a pseudolabel for all unlabelled images in the training set. We keep those pseudo-labels with a confidence threshold τ = 0.9, and we utilize the pseudo-labeled data as well as the annotated data to train the final model. The state-of-the-art baseline in this few-shot setting is NVSM [35]." }, { "figure_ref": [], "heading": "Few-Shot 3D Pose Estimation", "publication_ref": [], "table_ref": [], "text": "Table 1 shows the performance of our approach and all baselines at semi-supervised few-shot 3D pose estimation on 6 vehicle classes of the PASCAL3D+ dataset. All models are evaluated using 7, 20, and 50 (per class) training images with annotated 3D pose and a collection of unlabelled training data (as described in Section 4.1). Among the models trained without our SyntheticP3D dataset, the ResNet50 classification baseline and NeMo achieve a comparable performance using few annotated images. Notably, NVSM is by far the best performing baseline when using only 7 or 20 annotated images per object class. However, when using 50 annotated images, the NeMo baseline outperforms NVSM by a margin of 3.7%.\nUsing our SyntheticP3D dataset, our proposed CC3D outperforms all baselines across all few-shot data regimes. Remarkably, our model constantly outperforms the prior arts by a margin of > 20% in both π 6 and π 18 accuracy. We further observe that the NeMo model trained using our SyntheticP3D dataset (SyntheticP3D +NeMo) and domain adapted as described in Section 4.1 also significantly outperforms the original NeMo baseline, hence demonstrating the effectiveness of our synthetic data. 
Nevertheless, it does not match the performance of our proposed 3D-aware contrastive consistency approach. Finally, we note that using 50 annotated images our CC3D model even performs competitively with the fully supervised baseline, trailing it by only 8.2%@π/6 and 8.1%@π/18, and hence significantly closing the gap between fully supervised models and models trained on synthetic data." }, { "figure_ref": [], "heading": "Comparison to Supervised Approaches", "publication_ref": [], "table_ref": [], "text": "Table 2 summarizes our results when comparing to fully supervised models trained on the full annotated dataset. In the experiment SyntheticP3D+CC3D, we first pre-train with synthetic data and then use L_domain for fine-tuning with unlabeled real images. In experiments named \"SyntheticP3D+CC3D+X%\", we additionally use labelled data for a final fine-tuning, where X% denotes the fraction of available real image labels.\nWhen annotations of real images are not available, our proposed CC3D outperforms the NeMo and ResNet50 baselines that use the same training data (SyntheticP3D + Res50, SyntheticP3D + NeMo) by a significant margin. Notably, SyntheticP3D + NeMo can bridge the synthetic-to-real domain gap much better compared to SyntheticP3D + Res50, outperforming it by > 10% at π/6 and π/18. Our CC3D further outperforms SyntheticP3D + NeMo by 4.5% and 1.9% at π/6 and π/18 respectively, while also reducing the median prediction error by 2.1%. It is worth noting that these results are achieved without access to any real image annotation, which demonstrates the effectiveness of our proposed approach. Table 2: Pose estimation results on PASCAL3D+. We evaluate all models using Accuracy (percentage, higher better) and Median Error (degree, lower better). We compare the state-of-the-art fully supervised baselines (StarMap, NeMo, Res50) to models learned on synthetic data and transferred to real (SyntheticP3D + Res50, SyntheticP3D + NeMo, SyntheticP3D + CC3D) and SyntheticP3D + CC3D trained with 10% and 50% of annotated data. Note how CC3D outperforms other approaches when trained without real annotations, and even outperforms the SOTA methods using only 10% of the annotated data.\nWhen annotations of real images are available, our proposed CC3D outperforms the fully supervised state-of-the-art using only 50% of the annotated data that is available to the fully supervised methods, by 1.3%@π/6 and 10.4%@π/18. The large performance increase in the high-accuracy evaluation at π/18 indicates that our method can leverage the detailed annotations in the synthetic data to learn a representation that benefits from real annotations exceptionally well. Remarkably, even when using only 10% of the data that is available to the SOTA supervised methods, our approach can match their performance and even outperform them in terms of the finer π/18 accuracy by a fair margin. This demonstrates the enhanced efficiency that our proposed CC3D approach enables for 3D pose estimation." }, { "figure_ref": [], "heading": "Robust 3D Pose Estimation", "publication_ref": [], "table_ref": [], "text": "In Table 3 we illustrate the performance of our CC3D approach and several baselines at 3D pose estimation on the OOD-CV dataset to investigate their robustness under domain shifts to shape, pose, texture, context, and weather. We observe that the fully supervised ResNet50 baseline has on average a similar performance under OOD shifts as the NeMo model. 
We note that the NeMo model achieves a higher accuracy on the Pascal3D+ data (Table 2) and hence indicating less robustness compared to the ResNet50.\nAll models trained without real annotations achieve a lower performance compared to the fully supervised baselines. However, the performance gap between the fully supervised and unsupervised baselines is lower compared to the PASCAL3D+ dataset. This can be attributed to the much larger variability in the synthetic data regarding texture, context, pose and background. Notably, there only remains a large performance gap in terms of OOD robustness to texture and weather shifts in the data between supervised and unsupervised models, indicating that the variability in the texture of the synthetic data is not sufficiently realistic. We also note that our CC3D achieves the highest OOD robustness among the unsupervised models.\nWhen fine-tuned with 10% of real data the performance of the unsupervised models is enhanced significantly. Notably, our CC3D is able to close the gap to fully supervised models in terms of OOD robustness due to the large variability in the synthetic data and its ability to transfer this knowledge to real images.\nWe provide some qualitative results in Figure 3 to visualize our model's predictions on PASCAL3D+ and OOD-CV datasets." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "As shown in Table 4, we evaluate the contribution of each proposed component. Specifically, we evaluate various settings on five categories (aeroplane, boat, car, motorbike, and train) of the PASCAL3D+ dataset. We use \"SyntheticP3D + CC3D\" as the full model. The graphicsguided style transfer, denoted \"style transfer\", produced high-quality synthetic data with diverse textures and colors using a style transfer network. The unsupervised domain adaptation, denoted \"unsup adaptation\", adapts the synthetically trained model to real data with a domain contrastive loss on pseudo-labels (Eq 4)." }, { "figure_ref": [], "heading": "Breaking Spurious Correlation with Domain-Nonspecific Synthetic Data", "publication_ref": [ "b15" ], "table_ref": [], "text": "From the causal perspective, the OOD robustness problem is mainly due to the spurious correlation between domain-specific features and task-related semantic features [16]. Our proposed OOD-aware generation can effectively break such spurious correlation by generating synthetic data with domain-nonspecific backgrounds and demonstrate large improvements on OOD-CV dataset. As an ablation study, we re-generate the synthetic dataset with domain-specific backgrounds (e.g., cars have backgrounds on roads), denoted SyntheticP3D-Spurious. As shown in Table 5, models trained on SyntheticP3D achieves much better OOD robustness with a negligible degradation on the in-distribution benchmark (i.e., PASCAL3D+)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we introduced an approach for learning category-level 3D pose estimation using a novel synthetic dataset that is generated from CAD models. To bridge the domain gap between real and synthetic images, we introduced a new domain adaptation algorithm that lever- Table 3: Robustness of pose estimation methods on the OOD-CV dataset. We report the performance on OOD shifts in the object shape, 3D pose, texture, context and weather. 
We compare fully supervised baselines (NeMo, Res50) to models learned on synthetic data and transferred to real (SyntheticP3D + Res50, SyntheticP3D + NeMo, SyntheticP3D + CC3D) and when fine-tuning these models with 10% real annotated data. Note how our CC3D model achieves higher robustness compared to other models trained without real annotation. When fine-tuned on 10% of the training data in OOD-CV (+10%) it performs on par at π 6 and outperforms all baselines at π 18 . Table 5: Ablation study on the OOD-aware generation with which we can effectively break spurious correlation between domain-specific features and task-related semantic features. Our method with SyntheticP3D demonstrate much better OOD robustness at the cost of a small degradation on in-distribution dataset.\nages the 3D mesh geometry to obtain consistent pseudocorrespondences between synthetic and real images. In particular, we generate pseudo-labels on unlabeled real images for semi-supervised learning achieving robust cross-domain consistency through a 3D-aware statistical approach. Our experimental results demonstrate that our CC3D can greatly reduce the domain gap to fully-supervised models trained on real data when trained without any annotation of real images, and even performing competitively to state-of-the-art models when fine-tuned with very few annotated real data. Moreover, our proposed model outperforms the next best baseline by 10.4%@ π 18 using only 50% of the data used for the baseline. We also show that with the help of synthetic data and our proposed domain adaptation, we can effectively improve model robustness in very challenging out-of-distribution scenarios." } ]
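The adaptation recipe summarized above (estimate poses on unlabeled real images with the synthetically trained neural mesh, keep confident predictions as pseudo-labels, then fine-tune with the joint objective) can be condensed into the following high-level sketch. Every name here is a placeholder: the pose optimizer, the confidence measure, and the reuse of the τ = 0.9 threshold mentioned for the few-shot setting are assumptions made for illustration, not the paper's implementation.

```python
def adapt_with_pseudo_labels(estimate_pose, finetune_step, unlabeled_images, tau=0.9, epochs=5):
    """High-level sketch of unsupervised adaptation via pseudo-labels.

    estimate_pose(image) is assumed to run gradient-based render-and-compare and
    return (pose, confidence); finetune_step(image, pose) is assumed to update the
    vertex features with L_joint = L_con + L_rec + alpha * L_domain."""
    pseudo_labeled = []
    for image in unlabeled_images:
        pose, confidence = estimate_pose(image)
        if confidence >= tau:                    # keep only confident pseudo-labels
            pseudo_labeled.append((image, pose))

    for _ in range(epochs):
        for image, pose in pseudo_labeled:
            finetune_step(image, pose)           # optimize the joint objective on real images
    return pseudo_labeled
```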
2023-05-25
[ { "authors": "Yutong Bai; Angtian Wang; Adam Kortylewski; Alan Yuille", "journal": "", "ref_id": "b0", "title": "Coke: Localized contrastive learning for robust keypoint detection", "year": "2020" }, { "authors": "John Canny", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b1", "title": "A computational approach to edge detection", "year": "1986" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b2", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "Wenzheng Chen; Huan Wang; Yangyan Li; Hao Su; Zhenhua Wang; Changhe Tu; Dani Lischinski; Daniel Cohen-Or; Baoquan Chen", "journal": "IEEE", "ref_id": "b3", "title": "Synthesizing training images for boosting human 3d pose estimation", "year": "2016" }, { "authors": "Zijian Xu Chen; Jie Dong; Andreas Song; Otmar Geiger; Hilliges", "journal": "", "ref_id": "b4", "title": "Category level object pose estimation via neural analysis-by-synthesis", "year": "2020" }, { "authors": "M Cimpoi; S Maji; I Kokkinos; S Mohamed; A Vedaldi", "journal": "", "ref_id": "b5", "title": "Describing textures in the wild", "year": "2014" }, { "authors": "Mircea Cimpoi; Subhransu Maji; Iasonas Kokkinos; Sammy Mohamed; Andrea Vedaldi", "journal": "", "ref_id": "b6", "title": "Describing textures in the In Proceedings of the IEEE conference on computer vision and pattern recognition", "year": "2014" }, { "authors": "", "journal": "Stichting Blender Foundation", "ref_id": "b7", "title": "Blender -a 3D modelling and rendering package", "year": "2018" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Ieee", "ref_id": "b8", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Geoffrey French; Michal Mackiewicz; Mark Fisher", "journal": "", "ref_id": "b9", "title": "Self-ensembling for visual domain adaptation", "year": "2017" }, { "authors": "Andreas Geiger; Philip Lenz; Christoph Stiller; Raquel Urtasun", "journal": "The International Journal of Robotics Research", "ref_id": "b10", "title": "Vision meets robotics: The kitti dataset", "year": "2013" }, { "authors": "Vipul Gupta; Zhuowan Li; Adam Kortylewski; Chenyu Zhang; Yingwei Li; Alan Yuille", "journal": "", "ref_id": "b11", "title": "Swapmix: Diagnosing and regularizing the over-reliance on visual context in visual question answering", "year": "2022" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b12", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Yisheng He; Wei Sun; Haibin Huang; Jianran Liu; Haoqiang Fan; Jian Sun", "journal": "", "ref_id": "b13", "title": "Pvn3d: A deep point-wise 3d keypoints voting network for 6dof pose estimation", "year": "2020-06" }, { "authors": "Judy Hoffman; Eric Tzeng; Taesung Park; Jun-Yan Zhu; Phillip Isola; Kate Saenko; Alexei Efros; Trevor Darrell", "journal": "Pmlr", "ref_id": "b14", "title": "Cycada: Cycle-consistent adversarial domain adaptation", "year": "2018" }, { "authors": "Maximilian Ilse; Jakub M Tomczak; Patrick Forré", "journal": "PMLR", "ref_id": "b15", "title": "Selecting data augmentation for simulating interventions", "year": "2021" }, { "authors": "Shun Iwase; Xingyu Liu; Rawal Khirodkar; Rio Yokota; Kris M Kitani", "journal": "", "ref_id": "b16", "title": "Repose: Fast 6d 
object pose refinement via deep texture rendering", "year": "2021-10" }, { "authors": "Samuli Laine; Timo Aila", "journal": "", "ref_id": "b17", "title": "Temporal ensembling for semisupervised learning", "year": "2016" }, { "authors": "Yi Li; Gu Wang; Xiangyang Ji; Yu Xiang; Dieter Fox", "journal": "", "ref_id": "b18", "title": "Deepim: Deep iterative matching for 6d pose estimation", "year": "2018-09" }, { "authors": "Xiaofeng Liu; Yuzhuo Han; Song Bai; Yi Ge; Tianxing Wang; Xu Han; Site Li; Jane You; Jun Lu", "journal": "", "ref_id": "b19", "title": "Importanceaware semantic segmentation in self-driving with discrete wasserstein training", "year": "2020" }, { "authors": "Arsalan Mousavian; Dragomir Anguelov; John Flynn; Jana Kosecka", "journal": "", "ref_id": "b20", "title": "3d bounding box estimation using deep learning and geometry", "year": "2017" }, { "authors": "Jiteng Mu; Weichao Qiu; Gregory D Hager; Alan L Yuille", "journal": "", "ref_id": "b21", "title": "Learning from synthetic animals", "year": "2020" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga", "journal": "", "ref_id": "b22", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Georgios Pavlakos; Xiaowei Zhou; Aaron Chan; Konstantinos G Derpanis; Kostas Daniilidis", "journal": "IEEE", "ref_id": "b23", "title": "6-dof object pose from semantic keypoints", "year": "2017" }, { "authors": "Sida Peng; Yuan Liu; Qixing Huang; Xiaowei Zhou; Hujun Bao", "journal": "", "ref_id": "b24", "title": "Pvnet: Pixel-wise voting network for 6dof pose estimation", "year": "2019-06" }, { "authors": "Rajeev Ranjan; M Vishal; Rama Patel; Chellappa", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b25", "title": "Hyperface: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition", "year": "2017" }, { "authors": "Nikhila Ravi; Jeremy Reizenstein; David Novotny; Taylor Gordon; Wan-Yen Lo; Justin Johnson; Georgia Gkioxari", "journal": "", "ref_id": "b26", "title": "Accelerating 3d deep learning with pytorch3d", "year": "2020" }, { "authors": "Artem Rozantsev; Pascal Mathieu Salzmann; Fua", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b27", "title": "Beyond sharing weights for deep domain adaptation", "year": "2018" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily Denton; Seyed Kamyar; Seyed Ghasemipour; Burcu Karagol Ayan; Sara Mahdavi; Rapha Gontijo Lopes", "journal": "", "ref_id": "b28", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Hao Su; Yangyan Charles R Qi; Leonidas J Li; Guibas", "journal": "", "ref_id": "b29", "title": "Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views", "year": "2015" }, { "authors": "Baochen Sun; Jiashi Feng; Saenko ", "journal": "", "ref_id": "b30", "title": "Return of frustratingly easy domain adaptation", "year": "2016" }, { "authors": "Shubham Tulsiani; Jitendra Malik", "journal": "", "ref_id": "b31", "title": "Viewpoints and keypoints", "year": "2015" }, { "authors": "Eric Tzeng; Judy Hoffman; Kate Saenko; Trevor Darrell", "journal": "", "ref_id": "b32", "title": "Adversarial discriminative domain adaptation", "year": "2017" }, { 
"authors": "Angtian Wang; Adam Kortylewski; Alan Yuille", "journal": "", "ref_id": "b33", "title": "Nemo: Neural mesh models of contrastive features for robust 3d pose estimation", "year": "2021" }, { "authors": "Angtian Wang; Shenxiao Mei; Alan L Yuille; Adam Kortylewski", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b34", "title": "Neural view synthesis and matching for semisupervised few-shot learning of 3d pose", "year": "2021" }, { "authors": "He Wang; Srinath Sridhar; Jingwei Huang; Julien Valentin; Shuran Song; Leonidas J Guibas", "journal": "", "ref_id": "b35", "title": "Normalized object coordinate space for category-level 6d object pose and size estimation", "year": "2019" }, { "authors": "Yu Xiang; Roozbeh Mottaghi; Silvio Savarese", "journal": "", "ref_id": "b36", "title": "Beyond pascal: A benchmark for 3d object detection in the wild", "year": "2014" }, { "authors": "Yu Xiang; Tanner Schmidt; Venkatraman Narayanan; Dieter Fox", "journal": "", "ref_id": "b37", "title": "Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes", "year": "2017" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b38", "title": "Adding conditional control to text-to-image diffusion models", "year": "" }, { "authors": "Bingchen Zhao; Shaozuo Yu; Wufei Ma; Mingxin Yu; Shenxiao Mei; Angtian Wang; Ju He; Alan Yuille; Adam Kortylewski", "journal": "", "ref_id": "b39", "title": "Ood-cv: A benchmark for robustness to out-ofdistribution shifts of individual nuisances in natural images", "year": "2022" }, { "authors": "Xingyi Zhou; Arjun Karpur; Linjie Luo; Qixing Huang", "journal": "", "ref_id": "b40", "title": "Starmap for category-agnostic keypoint and viewpoint estimation", "year": "2018" }, { "authors": "Yang Zou; Zhiding Yu; Jinsong Kumar; Wang", "journal": "", "ref_id": "b41", "title": "Unsupervised domain adaptation for semantic segmentation via class-balanced self-training", "year": "2018" }, { "authors": "Yang Zou; Zhiding Yu; Xiaofeng Liu; Jinsong Kumar; Wang", "journal": "", "ref_id": "b42", "title": "Confidence regularized self-training", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 81.37, 602.37, 204.99, 9.81 ], "formula_id": "formula_0", "formula_text": "I_{\\mathrm{render}} = R(C_i, \\xi), \\quad I_{\\mathrm{synthetic}} = I_{\\mathrm{render}} \\oplus B_j \\quad (1)" }, { "formula_coordinates": [ 3, 366.93, 579.87, 178.18, 40.66 ], "formula_id": "formula_1", "formula_text": "I_{\\mathrm{render}} = R(C_i, \\xi), \\quad E = \\mathrm{CannyEdge}(I_{\\mathrm{render}}), \\quad I'_{\\mathrm{synthetic}} = \\Psi(E) \\oplus B_j \\quad (2)" }, { "formula_coordinates": [ 4, 336.1, 236.71, 79.16, 12.2 ], "formula_id": "formula_2", "formula_text": "C = \\{C_r \\in \\mathbb{R}^c\\}_{r=1}^{R}" }, { "formula_coordinates": [ 4, 308.86, 332.39, 241.34, 22.6 ], "formula_id": "formula_3", "formula_text": "L_{\\mathrm{con}}(F) = -\\sum_{i \\in FG} \\Big( \\sum_{j \\in FG \\setminus \\{i\\}} \\|f_i - f_j\\|^2 + \\sum_{j \\in BG} \\|f_i - f_j\\|^2 \\Big)," }, { "formula_coordinates": [ 4, 333.48, 451.12, 207.76, 74.17 ], "formula_id": "formula_4", "formula_text": "L_{\\mathrm{rec}}(F, N, m, b) = -\\ln p(F \\mid N, m, b) = -\\sum_{i \\in FG} \\Big( \\ln \\frac{1}{\\sigma_r \\sqrt{2\\pi}} - \\frac{1}{2\\sigma_r^2} \\|f_i - C_r\\|^2 \\Big) - \\sum_{i' \\in BG} \\Big( \\ln \\frac{1}{\\sigma \\sqrt{2\\pi}} - \\frac{1}{2\\sigma^2} \\|f_{i'} - b\\|^2 \\Big). \\quad (3" }, { "formula_coordinates": [ 4, 541.24, 505.55, 3.87, 8.64 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 4, 343.32, 681.15, 197.92, 30.2 ], "formula_id": "formula_6", "formula_text": "L_{\\mathrm{domain}}(C, \\{F\\}) = \\sum_{r=1}^{R} \\sum_{n=1}^{N} \\|f_{n,r} - C_r\\|^2 , \\quad (4" }, { "formula_coordinates": [ 4, 541.24, 691.88, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 5, 103.94, 163.07, 182.42, 9.81 ], "formula_id": "formula_8", "formula_text": "L_{\\mathrm{joint}} = L_{\\mathrm{con}} + L_{\\mathrm{rec}} + \\alpha L_{\\mathrm{domain}} , \\quad (5)" }, { "formula_coordinates": [ 5, 308.86, 326.22, 153.83, 19.15 ], "formula_id": "formula_9", "formula_text": "\\Delta(R_{\\mathrm{pred}}, R_{\\mathrm{gt}}) = \\frac{\\|\\operatorname{logm}(R_{\\mathrm{pred}}^{T} R_{\\mathrm{gt}})\\|_F}{\\sqrt{2}}" } ]
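Formulas (4) and (5) above define the domain-contrastive term and the joint training objective. A minimal PyTorch rendering of these two terms is sketched below; the tensor shapes, and the assumption that per-vertex features from real images have already been gathered via the pseudo-label pose projections, are ours, while the contrastive and reconstruction terms are treated as precomputed inputs rather than re-implemented.

```python
import torch

def domain_contrastive_loss(vertex_features: torch.Tensor, mesh_features: torch.Tensor) -> torch.Tensor:
    """Eq. (4): sum over vertices r and real images n of ||f_{n,r} - C_r||^2.

    vertex_features: (N, R, c) features gathered from N real images for the R mesh vertices
                     (assumed to come from the pseudo-label pose projections).
    mesh_features:   (R, c) learnable vertex features C of the neural mesh."""
    diff = vertex_features - mesh_features.unsqueeze(0)   # broadcast over the N real images
    return (diff ** 2).sum(dim=-1).sum()

def joint_loss(l_con: torch.Tensor, l_rec: torch.Tensor, l_domain: torch.Tensor, alpha: float) -> torch.Tensor:
    """Eq. (5): L_joint = L_con + L_rec + alpha * L_domain."""
    return l_con + l_rec + alpha * l_domain
```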
Robust Category-Level 3D Pose Estimation from Synthetic Data
Obtaining accurate 3D object poses is vital for numerous computer vision applications, such as 3D reconstruction and scene understanding. However, annotating real-world objects is time-consuming and challenging. While synthetically generated training data is a viable alternative, the domain shift between real and synthetic data is a significant challenge. In this work, we aim to narrow the performance gap between models trained on synthetic data and few real images and fully supervised models trained on large-scale data. We achieve this by approaching the problem from two perspectives: 1) We introduce SyntheticP3D, a new synthetic dataset for object pose estimation generated from CAD models and enhanced with a novel algorithm. 2) We propose a novel approach (CC3D) for training neural mesh models that perform pose estimation via inverse rendering. In particular, we exploit the spatial relationships between features on the mesh surface and a contrastive learning scheme to guide the domain adaptation process. Combined, these two approaches enable our models to perform competitively with state-of-the-art models using only 10% of the respective real training images, while outperforming the SOTA model by 10.4% with a threshold of π/18 using only 50% of the real training data. Our trained model further demonstrates robust generalization to out-of-distribution scenarios despite being trained with minimal real data.
Jiahao Yang; Wufei Ma; Angtian Wang; Xiaoding Yuan; Alan Yuille; Adam Kortylewski
[ { "figure_caption": "PASCAL3D+ ACC π 6 ↑ ACC π 18 ↑ MedErr ↓ full model 79.2 (-0.0) 52.0 (-0.0) 14.1 (-0.0) -style transfer 75.9 (-3.3) 47.8 (-4.2) 17.1 (+3.0) -unsup adaptation 76.5 (-2.7) 49.0 (-3.0) 16.0 (+1.9) -style transfer -unsup adaptation 70.6 (-8.6) 46.5 (-5.5) 23.6 (+9.5) Table 4: Ablation study on the unsupervised domain adaptation and graphics-guided style transfer module on the PASCAL3D+ dataset (aeroplane, boat, car, motorbike, and train).", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "PASCAL3D+ ACC π 6 ↑ ACC π 18 ↑18MedErr ↓ SynP3D+CC3D 76.3 (-0.0) 41.4 (-0.0) 15.5 (-0.0) SynP3D-Spurious+CC3D 77.0 (+0.7) 42.8 (+1.4) 14.8 (-0.7) OOD-CV ACC π 6 ↑ ACC π 18 ↑ MedErr ↓ SynP3D+CC3D 48.2 (-0.0) 14.8 (-0.0) 37.0 (-0.0) SynP3D-Spurious+CC3D 42.7 (-5.5) 16.4 (+1.6) 45.7 (+8.7)", "figure_data": "", "figure_id": "fig_1", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure3: Qualitative results of our proposed model on the PASCAL3D+ and OOD-CV datasets. We illustrate the predicted 3D pose using the CAD models from the respective datasets. Note that in our approach object are represent as cuboid without detailed shape. Our SynP3D+CC3D model is able to estimate the pose correctly for a variety of objects in challenging scenarios, such as unusual background (d & e), complex textures (f) and object shapes (c) and camera views (b).", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" } ]
[{"Category": "Supporting Evidence", "Citation": "[14,17,19,25,38]", "Explanation": "The cited works provide a deep understanding of object pose estimation on the instance level, which forms the basis for the research conducted in the citing paper on category-level pose estimation for general object categories."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work on category-level pose estimation for specific object classes like cars provides a methodological basis for the development of models that learn from limited manual annotation and large-scale synthetic data with automated annotations in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work on category-level pose estimation for faces extends the research on object pose estimation to a new object class, which the citing paper builds upon in its development of models for category-level pose estimation for general object categories."}, {"Category": "Data Source", "Citation": "[21,30,32,41]", "Explanation": "The cited works are used as a reference to highlight the sensitivity of current object pose estimation approaches to domain shift, which is a key challenge in using generated data rendered using computer graphics."}, {"Category": "Supporting Evidence", "Citation": "[34]", "Explanation": "The cited work on neural mesh models is used as a basis for the development of a domain robust object pose estimation approach in the citing paper."}, {"Category": "Data Source", "Citation": "[21,32]", "Explanation": "The cited works provide the data used in the classification problem of category-level 3D pose estimation."}, {"Category": "Methodological Basis", "Citation": "[24,41]", "Explanation": "The cited works are keypoint-based methods that detect semantic keypoints and predict the optimal 3D pose by solving a Perspectiven-Point problem, which the citing paper adopts in their research."}, {"Category": "Extension or Continuation", "Citation": "[5,36]", "Explanation": "The cited works are render-and-compare methods that predict the 3D pose by fitting a 3D rigid transformation to minimize a reconstruction loss. 
The citing paper builds upon this approach to further explore the field of category-level 3D pose estimation."}, {"Category": "Supporting Evidence", "Citation": "[35]", "Explanation": "The cited work proposes a semi-supervised approach for pose estimation in few-shot settings, providing evidence to support the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[35]", "Explanation": "The cited work investigates the use of synthetic images generated from CAD models in the field of category-level 3D pose estimation, which the citing paper utilizes in their research."}, {"Category": "Data Source", "Citation": "[35]", "Explanation": "The cited work also discusses the challenge of obtaining annotations of 3D poses and the limited number and quality of real image annotations, which the citing paper addresses by incorporating synthetic images."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work proposes a semi-supervised approach for pose estimation in few-shot settings, which the citing paper adopts in their research to address the challenge of obtaining annotations of 3D poses."}, {"Category": "Methodological Basis", "Citation": "[10,18,42,43]", "Explanation": "The cited works propose a self-training approach that is adopted in the citing paper to address the problem of domain gap in unsupervised domain adaptation."}, {"Category": "Methodological Basis", "Citation": "[42,43]", "Explanation": "The cited works formulate self-training as a general EM algorithm and propose a confidence regularized framework that is utilized in the citing paper to improve performance in unsupervised domain adaptation."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work proposes a self-ensembling framework that is used in the citing paper to bootstrap models using unlabeled data in the context of unsupervised domain adaptation."}, {"Category": "Methodological Basis", "Citation": "[29,39]", "Explanation": "The cited works are used as a basis for the design of a graphics-guided style transfer module in the citing paper to generate high-quality training images with 3D annotations."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work by Canny provides the Canny edge detector, which is used in the citing paper to produce a 2D edge map for guiding the generation of a high-quality image using a pre-trained style transfer generative model."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work, the Describable Texture Dataset (DTD), is used as a source of textures to promote varied textures and colors in the style transfer generative model. 
The DTD provides a dataset of textures that the model can use to generate diverse textures and colors in the final output."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work by [16] is used to highlight the issue of spurious correlation between task-related semantic features and domain-specific features in real images, which the citing paper uses to motivate the need for a fully controllable synthetic dataset to disentangle these features."}, {"Category": "Methodological Basis", "Citation": "[34]", "Explanation": "The cited work introduces the concept of neural mesh models (NMMs) that the citing paper builds upon by extending the model to include a domain contrastive loss and cross-domain feature consistency."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work introduces the contrastive loss function for training the feature extractor, which the citing paper adopts in their research to increase the spatial distinguishability of features."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The cited work is used to reduce the computational cost of optimizing the neural mesh model by following a specific updating method for the parameters."}, {"Category": "Data Source", "Citation": "[37]", "Explanation": "The dataset used in the study is the PASCAL3D+, which is cited to acknowledge its origin and the data used in the research."}, {"Category": "Data Source", "Citation": "[35]", "Explanation": "The number of images used in the semi-supervised setting is cited to provide context and data for the study."}, {"Category": "Data Source", "Citation": "[40]", "Explanation": "The OOD-CV dataset is cited to highlight the out-of-distribution scenarios evaluated in the study."}, {"Category": "Data Source", "Citation": "[37]", "Explanation": "The category and object pose annotations in the PASCAL3D+ dataset are cited to provide a basis for the evaluation of 3D pose estimation."}, {"Category": "Data Source", "Citation": "[34,41]", "Explanation": "The cited works provide the error metric used in the evaluation of the predicted rotation matrix in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work provides the pre-trained model used as a feature extractor in the training setup of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work provides the feature extraction method used in the training setup of the citing paper."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The cited work provides the implementation framework used in the development of the approach in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work provides the rasterization method used in the implementation of the approach in the citing paper."}, {"Category": "Data Source", "Citation": "[41]", "Explanation": "The cited work provides the official implementation and training setup for the StarMap method used in the citing paper for category-level 3D pose estimation."}, {"Category": "Data Source", "Citation": "[34]", "Explanation": "The cited work provides the official implementation and training setup for the NeMo method used in the citing paper for category-level 3D pose estimation."}, {"Category": "Extension or Continuation", "Citation": "[35]", "Explanation": "The cited work is a common practice in the field of category-level 3D pose estimation, and the citing 
paper extends the evaluation protocol to include a deep neural network classifier that uses the same backbone as the NeMo method."}, {"Category": "Supporting Evidence", "Citation": "[13]", "Explanation": "The cited work performs pose estimation for all categories in a single classification task, which provides a basis for the comparison of the performance of the approaches in the citing paper."}, {"Category": "Data Source", "Citation": "[35]", "Explanation": "The cited work provides the data and the few-shot learning setting for the comparison of the performance of the approaches in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[35]", "Explanation": "The cited work, NVSM, serves as the state-of-the-art baseline in the few-shot setting, providing a methodological basis for the citing paper to compare and build upon in their research."}, {"Category": "Extension or Continuation", "Citation": "[16]", "Explanation": "The cited work provides a causal perspective on the OOD robustness problem and the proposed OOD-aware generation method is an extension that effectively breaks spurious correlation and improves OOD robustness."}]
[ { "figure_ref": [ "fig_3" ], "heading": "INTRODUCTION", "publication_ref": [ "b46", "b1", "b11", "b13" ], "table_ref": [], "text": "The spread of misinformation in the modern media ecosystem has become an urgent social issue [46]. In order to combat the proliferation of misleading information, fact-checking becomes an essential task, which aims to assess the factuality of a given claim made in written or spoken language based on the collected evidence [1,11]. Figure 1 shows a real-world claim that originates and circulates on Arabic social media. The claim is included in the multilingual dataset XFact [13]. Here, we translate the claim into English for illustration. In order to evaluate the factuality of this claim, a journalist needs to search through potentially many sources to find the statistics of mRNA vaccines, and perform the numerical comparison based on the evidence." }, { "figure_ref": [], "heading": "This article confirms that", "publication_ref": [], "table_ref": [], "text": "Professor Dolores Cahill, from Dublin, believes that 30% of the vacancies received in the middle of the test…… 2. At least \"30% of those vaccinated\" with mRNA Covid-19 vaccine \"will be dead within three months\"......" }, { "figure_ref": [], "heading": "Search Snippets", "publication_ref": [ "b43" ], "table_ref": [], "text": "Claim: 30% of people injected with mRNA Covid-19 vaccine will die within three months.\nVerdict: False Source Documents True 1…What Dolores said was wrong. 43,600 people participated in mRNA vaccine trials. After eight months, no deaths were recorded." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3" ], "heading": "In countries that have been vaccinated for more than three months, such as the United States, no deaths have been reported to prove immunization-related…", "publication_ref": [ "b19", "b36", "b45", "b40", "b19", "b43", "b3", "b13", "b37", "b36", "b19", "b39", "b44" ], "table_ref": [], "text": "False Figure 1: An example claim from the XFact dataset. The example is translated into English for illustration. Search snippet 1 is generated automatically by the search engine, which is a short summary of source document 1. One will predict the claim to be true only based on the search snippets, but the claim is false if the document is provided.\nThough evidence plays a significant role in fact-checking, early efforts in automatic systems only use the claim to predict the factuality [19,36,45]. Schuster et al. [40] demonstrated that relying on surface patterns of claims without considering other evidence fails to identify well-presented misinformation. To address this issue, recent efforts asked annotators to mutate sentences from Wikipedia articles to create claims and evidence [19,43]. These synthetic claims cannot replace real-world claims that are circulating in the media ecosystem as shown in Figure 1. Therefore, other works chose to crawl real-world claims from fact-checking websites [3,13,37], and used the snippets returned by search engines as the evidence. However, such snippets may not provide sufficient information to verify the claim. Take Figure 1 as an example. Based on the snippets only, the verdict of the claim is True as necessary information about deaths after vaccination is missing. Through manual inspections, we found that only 46% of search snippets provide sufficient information, while 82% of source documents provide Table 1: Comparisons of fact-checking datasets. 
Type in the header means the type of evidence used, such as sentence (sent), metadata (meta), question-answer pairs (qa pairs), etc. Source means where the evidence is collected from, such as Wikipedia, fact-checking websites. Retrieved denotes if the evidence is given or retrieved from the source. PunditFact [36] no deaths have been reported in the vaccine trials and countries that have been vaccinated for more than 3 months. Based on the evidence, one can predict the factuality of the given claim is False.\nAiming for improving real-world fact-checking systems, we propose to incorporate full text from source documents as evidence. Unlike previous synthetic datasets, where gold evidence sentences are annotated [19,39,44], the key challenge of using source documents is how to extract related sentences as evidence. Therefore, we are not able to train an evidence extractor in a supervised manner. On the other hand, source documents returned by the search engine contain lots of irrelevant information. Taking such noisy sentences as evidence propagates errors to the downstream claim verification module. In order to address these two issues, we develop a latent variable model, allowing discrete evidence extraction and claim verification in an end-to-end fashion. Our model directly controls sparsity and contiguity for maintaining a better balance in keeping relevant sentences as evidence and removing irrelevant sentences. Experiments on two datasets under different settings demonstrate the effectiveness of the proposed approaches. Our key contributions are summarized as follows:\n• We conduct extensive analyses on the real-world multilingual dataset XFact, then propose to incorporate source documents and introduce two enriched datasets. • We propose a joint system that models evidence extraction as a latent variable, maintaining a better balance between keeping relevant and removing irrelevant information. • Experiments show that modeling source documents lead to significant improvements upon best-reported models." }, { "figure_ref": [], "heading": "RELATED WORK 2.1 Fact-Checking Datasets", "publication_ref": [ "b12", "b36", "b49", "b40", "b9", "b35", "b15", "b25", "b43", "b19", "b3", "b13", "b36", "b49", "b40", "b19", "b39", "b44", "b15", "b25", "b3", "b13" ], "table_ref": [], "text": "We reviewed the existing fact-checking dataset as summarized in Table 1. Following Guo et al. [12], we grouped the datasets into two categories: real-world and synthetic. Real-world datasets consist of claims that are naturally occurred and fact-checked by journalists, while synthetic datasets contain claims created artificially by mutating sentences from Wikipedia articles. Early real-world efforts predicted the veracity solely based on the claims or with metadata [36,49], but relying on surface patterns of claims without considering the state of the world fails to identify well-presented misinformation [40]. Therefore, later works proposed to incorporate evidence into the dataset. Ferreira and Vlachos [9] used the headlines of selected news articles, and Pomerleau and Rao [35] used the entire articles instead as the evidence for the same claims. Instead of using news articles, Hanselowski et al. [15] and Kotonya and Toni [25] extracted summaries accompanying fact-checking articles about the claims as evidence. The aforementioned works assume that evidence is given for every claim, which is not conducive to developing systems that need to retrieve evidence from a large knowledge source. 
In order to integrate the evidence retrieval for better fact-checking, other efforts created claims artificially. Thorne et al. [43] and Jiang et al. [19] considered Wikipedia as the source of evidence and annotated the sentences supporting or refuting each claim. To address this, Augenstein et al. [3] and Gupta and Srikumar [13] retrieved evidence from the Internet, but the search results were not annotated. Thus, it is possible that irrelevant information is present in the evidence, while information that is necessary for verification is missing. To construct a better evidence-based dataset, we retrieve documents from web pages and select relevant evidence sentences from documents as evidence. Such a design makes the dataset suitable to train fact-checking systems that can extract evidence from web sources and validate real-world claims based on evidence found on the Internet.\nEarly efforts predicted the veracity solely based on the claims or with metadata [36,49], but studies show that predictions that do not consider evidence fails to identify misinformation [40]. Therefore, synthetic datasets [19,39,44] considered Wikipedia as the source of evidence and annotated the sentences from articles as evidence. However, these efforts restricted world knowledge to a single source (i.e. Wikipedia), which is not ideal to develop systems that collect evidence from heterogeneous sources. On the other hand, real-world efforts [15,25] extracted summaries accompanying fact-checking articles about the claims as evidence. Nonetheless, using fact-checking articles is not realistic, as they are not available during inference. To address this issue, other datasets [3,13] included search snippets generated by Google as evidence. Unlike prior real-world efforts, we propose to directly incorporate retrieved documents to provide more information for better verification." }, { "figure_ref": [], "heading": "Fact-Checking Systems", "publication_ref": [ "b16", "b44", "b29", "b34", "b28", "b38", "b55", "b56", "b3", "b13", "b25" ], "table_ref": [], "text": "When verifying synthetic claims, systems often operate as a pipeline consisting of an evidence extraction module and a verification module. Relevant articles are first retrieved from the Wikipedia dump by using entity linking, or TF-IDF [16,44]. After obtaining the evidence sentences, a textual entailment model is applied for the claim verification [29,34]. Recent systems employ graph-based models to aggregate evidence. This allows the verification of more complex claims where several pieces of information can be combined [28,38,55,56]. Due to the difficulty of annotating evidence under real-world scenarios, most systems assume the evidence sentences are given, ignoring the challenge of evidence extraction [3,13,25]. Different from these methods, our proposal involves treating evidence extraction as a latent variable. This innovative design empowers our system to efficiently gather evidence from various web sources, thereby verifying real-world claims through evidence discovered online." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "DATASET ANALYSIS 3.1 Search Snippets Analysis", "publication_ref": [ "b13", "b10" ], "table_ref": [], "text": "We investigated the usage of search snippets as evidence for verifying real-world claims. To evaluate the information provided by the search snippets, instances from XFact [13] were manually examined in two phases. The examination team has fifteen members. 
Five of them are involved in the first phase, while the other five participants are in the second phase. All annotators are undergraduate students who are fluent in English. To ensure examination consistency, they were trained by the authors and went through several pilot examinations. We randomly selected 100 instances and translated them into English. For inter-annotator agreement, we randomly selected 20% of claims to be annotated by 5 annotators. We calculated the Fleiss K score [10] to be 0.75.\nIn the first phase, each annotator was given 50 claims with their corresponding label, search snippets, and source documents. Annotators were required to answer (yes or no) if the snippets and documents provide sufficient information to predict the label of the given claim. We reported the average results in Figure 2, only 46% of snippets provide sufficient information to verify the claim. Our same analysis suggests that for 82% of the instances, using documents provides sufficient evidence to determine the factuality. Annotators were also asked to label if each snippet and document is related to the claim. 78% of source documents are not directly related to the claim (redundancy), while only 37% of search snippets contain irrelevant information. We also noticed that more than 52% of the sentences that can be served as evidence were in three consecutive paragraphs in a document. In the second phase, each annotator was given the claim with search snippets and source documents. Annotators were asked to infer the labels of 50 claims based on the snippets or documents. As shown in Figure 2, human predictions are more accurate when source documents are given (71% vs. 40%). However, we notice that the performance gap between snippets and documents is smaller in prediction accuracy. One reason is that the documents contain more irrelevant information that may affect the prediction accuracy. Based on these results, we conclude that if a fact-checking system is able to extract relevant sentences from source documents, it will gain benefits from the additional contextual clues." }, { "figure_ref": [], "heading": "Multilingual Dataset Extension", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Next, we extend XFact with source documents. XFact contains URLs of web pages and search snippets of these web pages generated by Google search. We only include web pages that the XFact provides the URLs. We also filtered the web pages published after the claims were made to avoid possible information leakage. In detail, we use the HTTP Get method to obtain the web pages according to the given URLs, then utilize xpath to locate all the text content under the <body><p> tags of the web page, so as to exclude the advertisements, brand information, contact information and other irrelevant contents in the web page. Multimedia information (e.g. pictures, videos) is also removed. With this pre-processing procedure, we are able to get textual contents (source documents) from 71% of the web pages. Due to the difficulty of identifying the evidence sentences from source documents in 25 languages, gold labels of evidence are not annotated. For websites with an anti-crawling function, the above method will not return web pages. In this scenario, we obtain content by manually opening web pages. Table 2 shows the statistics of the extended XFact." 
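The pre-processing step above is only described at a high level. As a rough illustration of the fetch-and-extract procedure (an HTTP GET followed by an XPath query over the <body><p> tags), the following Python sketch shows one possible implementation; the requests and lxml libraries, the function name extract_source_document, and the example URL are our own assumptions rather than the authors' actual code.

```python
# Minimal sketch (not the authors' implementation) of the page pre-processing:
# fetch a page with HTTP GET and keep only the text under <body><p> tags,
# dropping ads, navigation, contact blocks, and multimedia content.
import requests
from lxml import html


def extract_source_document(url: str, timeout: int = 10) -> str:
    """Fetch a web page and return the concatenated paragraph text."""
    response = requests.get(url, timeout=timeout)
    response.raise_for_status()

    tree = html.fromstring(response.text)
    paragraphs = tree.xpath("//body//p")       # text content under <body><p>
    texts = [p.text_content().strip() for p in paragraphs]
    return "\n".join(t for t in texts if t)    # skip empty fragments


if __name__ == "__main__":
    # Hypothetical URL; in practice the URLs come from the XFact metadata.
    print(extract_source_document("https://example.com/fact-check-article")[:500])
```

For pages protected by anti-crawling measures, this automated path fails and, as noted above, the content is collected manually instead.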
}, { "figure_ref": [], "heading": "Distribution of the Dataset", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "For training and development, the top twelve languages based on the number of labeled examples are included. The average number of examples per language is 1784, with Serbian being the smallest (835). The dataset is split into training (75%), development (10%), and 𝛼 1 test set (15%). This leaves us with 13 languages for our zero-shot test set (and 𝛼 3 ). The remaining set of sources form our out-ofdomain test set (and 𝛼 2 ). In total, X-FACT covers the following 25 languages (shown with their ISO 639-1 code for brevity): ar, az, bn, de, es, fa, fr, gu, hi, id, it, ka, mr, no, nl, pa, pl, pt, ro, ru, si, sr, sq, ta, tr.\nThere are 7 possible labels for each claim in XFact and EFact: True, Mostly-True, Partly-True, Mostly False, False, Unverifiable and Other. Table 3 shows the composition of training, development, and test sets of XFact and EFact, respecitvely." }, { "figure_ref": [], "heading": "Monolingual Dataset Construction", "publication_ref": [ "b3", "b13" ], "table_ref": [], "text": "In order to comprehensively evaluate the proposed paradigm on real-world claims, we further construct an English dataset EFact. Augenstein et al. [3] introduced a real-world English dataset with search snippets as evidence. However, it is constructed 4 years ago, and more than half (58%) of the URLs provided in the dataset are invalid. The content of the web pages is either deleted or expired, so we are not able to get the texts on the web pages. Following Gupta and Srikumar [13], we build the monolingual version of XFact. In summary, we scrape fact-checked claims from dedicated agencies and result in a total of 10,000 English claims. We collected a list of nonpartisan fact-checkers compiled by International Fact-Checking Network (IFCN) 1 , and Duke Reporter's Lab2 . After obtaining the list, we first queried Google's Fact Check Explorer (GFCE) 3 for all the fact-checks done by a particular website. Then we crawled the linked article on the website and additional metadata such as author, URL, and date of the claim. We removed duplicate claims and examples where the label appeared in the claim itself. For websites not linked through GFCE, we skipped these websites as the verdict of the claim is not well-specified.\nNext, we normalized the verdict of claims into 7 labels similar to XFact. The label set contains five labels with a decreasing level of truthfulness: True, Mostly-True, Partly-True, Mostly False, and False. To encompass several other cases where assigning a label" }, { "figure_ref": [], "heading": "Claim", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Documents", "publication_ref": [ "b3", "b13" ], "table_ref": [ "tab_2" ], "text": "Evidence Extractor Claim Verifier Verdict is difficult due to lack of evidence or subjective interpretations, we introduced Unverifiable as another label. A final label Other was used to denote cases that do not fall under the above-specified categories. Following the process described, we reviewed each factchecker's rating system along with some examples and manually mapped these labels to our newly designed label scheme. Table 3 shows the label distribution of EFact.\nWhen verifying a claim, journalists first find information related to the fact and evaluate it given the collected evidence. To validate real-world claims, we chose to incorporate full text in the web pages as evidence. 
In order to collect evidence from the web sources, we first submitted each claim as a query to the Google Search API by following Augenstein et al. [3] and Gupta and Srikumar [13]. The top 10 search results are retrieved. For each result, we saved the search rank, URL, timestamp and document. For a small percentage of the claims, Google search did not yield any results. We removed these claims from our training, development, and test sets. We have two measures to ensure the reliability of the evidence. Firstly, we maintained a list of misinformation and disinformation websites, all search results from these websites will be filtered out. Secondly, we filtered out results from fact-checking websites to prevent the answer from being trivially found." }, { "figure_ref": [ "fig_1" ], "heading": "MODEL", "publication_ref": [], "table_ref": [], "text": "As shown in Figure 3, the proposed model contains two modules: evidence extractor and claim verifier. We propose the Sparsity and Contiguity Assisted Latent Evidence Extractor (SCALE) as the evidence extractor, which extracts the evidence by assigning binary masks (0 or 1) to sentences. After obtaining the evidence sentences, the claim verifier predicts the verdict of the claim conditioned on the extracted evidence. We provide the pseudo code in Algorithm 1. These two modules are jointly trained in an end-to-end manner." }, { "figure_ref": [], "heading": "Motivation", "publication_ref": [ "b12", "b18", "b4", "b26" ], "table_ref": [], "text": "There are three main challenges to incorporating source documents for verifying real-world claims.\n• The first one is that we cannot train an evidence extractor in a supervised learning manner, as gold evidence sentences in the documents are not available. • Next, the source documents encompass a substantial amount of irrelevant information, since they are aggregated from heterogeneous web sources. • Lastly, evidence is crucial for generating justifications to convince readers [12]. Extracted sentences are encouraged to be contiguous, which improves readability [18]. Aiming at addressing these challenges, we use SCALE to build the model. Firstly, we can view the evidence extraction as a latent variable, and jointly train it with claim verification based on SCALE. Unlike other information retrievers (e.g. TF-IDF), the evidence extractor in the proposed joint system can solicit optimization feedback obtained from the claim verification. On the other hand, SCALE is more stable when compared with other latent variable models [4,26], which rely on sampling-based gradient estimators and thus exhibit high variance.\nSecondly, using SCALE can control sparsity and contiguity in the evidence extraction. Imposing sparsity helps to strike a balance between removing irrelevant information and keeping relevant information in the document. Encouraging contiguity aids in extracting continuous evidence sentences for better readability." }, { "figure_ref": [], "heading": "Algorithm 1 Pseudo code implementation of our joint model", "publication_ref": [], "table_ref": [], "text": "Require: Batch size 𝑁 , Claim 𝑐, Document 𝑑, Labels 𝑇\n1: for Sampled Mini-batch {𝑐 𝑘 } 𝑁 𝑘=1 , {𝑑 𝑘 } 𝑁 𝑘=1 , {𝑇 𝑘 } 𝑁 𝑘=1 do 2:\nfor All 𝑘 ∈ {1, ..., 𝑁 } do 3:\n𝐶 𝑘 , 𝐷 𝑘 = BERT_encoder(𝑐 𝑘 ), BERT_encoder(𝑑 𝑘 )." }, { "figure_ref": [], "heading": "4:", "publication_ref": [], "table_ref": [], "text": "𝑍 𝑘 = SCALE_Extractor(𝐶 𝑘 , 𝐷 𝑘 )." 
}, { "figure_ref": [], "heading": "5:", "publication_ref": [], "table_ref": [], "text": "𝑇 ′ 𝑘 = Verifier (concat(𝐶 𝑘 , 𝐷 𝑘 * 𝑍 𝑘 ))." }, { "figure_ref": [], "heading": "6:", "publication_ref": [], "table_ref": [], "text": "⊲ Verifier can be BERT or KGAT " }, { "figure_ref": [], "heading": "Evidence Extractor", "publication_ref": [ "b8" ], "table_ref": [], "text": "We proposed a latent variable model SCALE to compute sentencelevel values μ ∈ [0, 1] 𝐿 , which will be used to mask the sentences in the source document. Only the sentences have been assigned non-zero value will be considered as evidence for claim verification.\nSentence Representation. We use BERT [8] to encode the claim and sentences in the document. We feed the sentences independently to the BERT and use the representations of CLS tokens as the sentence representations. Then we concatenate the sentence representations as the claim representations: 𝒙 𝑐 ∈ R 𝐷 ×𝐿 𝑐 and document representations: 𝒙 𝑑 ∈ R 𝐷 ×𝐿 𝑑 where 𝐷 is the embedding size and 𝐿 𝑐 , 𝐿 𝑑 is the number of sentences.\nFactor Graph. Finding the highest-scored evidence sentences under certain constraints can be viewed as a structured prediction problem. Essentially, the global structure can be represented as assignments to multiple variables, and posit a decomposition of Those susceptible to immune response may have a reaction to mRNA vaccines." }, { "figure_ref": [], "heading": "Most people will not experience severe side effects include fever and fatigue.", "publication_ref": [], "table_ref": [], "text": "Severe side effects are defined as those that prevent daily activity.\nThe COVID-19 mRNA vaccines have efficacy rates of 90-95%." }, { "figure_ref": [ "fig_2" ], "heading": "This may explain the intense reactions such as aches and fevers reported in some recipients.", "publication_ref": [ "b33" ], "table_ref": [], "text": "Reactions though severe were transient. the problem into local factors 𝑓 . Each of 𝑓 will impose constraints on the evidence sentences. In this work, we introduce two factors: BUDGET and PAIR to control the sparsity and continuity of the extracted sentences. In detail, we assume a factor graph F , which consists of each factor 𝑓 ∈ F corresponding to a subset of variables 𝝁 𝑓 = (𝝁 𝑖 ) 𝑖 ∈ 𝑓 in it. Note that 𝝁 is a 𝐿 = 𝐿 𝑐 ×𝐿 𝑑 dimensional binary mask selecting the evidence sentences from documents that are aligned to the claim.\nThe following local sub-problem is required to be tractable for any factor:\nμ 𝑓 = arg max 𝝁 𝑓 ∈ {0,1} |𝑓 | 𝒔 ⊤ 𝑓 𝝁 𝑓 + ℎ 𝑓 𝝁 𝑓 ,(1)\nwhere the first item indicates the selection of sentences in the documents according to the degree of importance matrix: 𝒔 𝑓 = 𝒙 𝑐 • 𝒙 𝑇 𝑑 , and the second item represents the local score functions of each factor: ℎ 𝑓 (𝝁 𝑓 ) = 𝐿-1 𝑖=1 𝑟 𝑖,𝑖+1 𝜇 𝑖,𝑖+1 , where 𝑟 𝑖,𝑖+1 ∈ R are edge scores in factor graph F . Figure 4 illustrates how BUDGET factor and PAIR factor are used for imposing constraints on extracting evidence sentences from source documents. PAIR factor is used to impose continuity, and BUDGET factor is use to induce sparsity. We instantiate a factor graph with 𝐿 binary variables (one for each sentence) and a pairwise factor for every pair of contiguous sentences:\nF = {𝑃𝐴𝐼𝑅(𝜇 𝑖 , 𝜇 𝑖+1 ; 𝑟 𝑖,𝑖+1 ) : 1 ≤ 𝑖 ≤ 𝐿},(2)\nA binary pairwise Markov Random Field (MRF) in Equation 1with PAIR factor can be derived as:\n𝑠𝑐𝑜𝑟𝑒 (𝜇; 𝑠) = 𝐿 ∑︁ 𝑖=1 𝑠 𝑖 𝜇 𝑖 + 𝐿-1 ∑︁ 𝑖=1 𝑟 𝑖,𝑖+1 𝜇 𝑖 𝜇 𝑖,𝑖+1 ,(3)\nwhere 𝑟 𝑖,𝑖+1 ≥ 0 encourages contiguity on the evidence extraction. 
We further impose sparsity by adding the BUDGET factor 𝐿 𝑖=1 𝜇 𝑖 ≤ 𝐾 and obtain F as follows:\nF = {𝐵𝑈 𝐷𝐺𝐸𝑇 (𝜇 1 , ..., 𝜇 𝐿 ; 𝐾)} ∪{𝑃𝐴𝐼𝑅(𝜇 𝑖 , 𝜇 𝑖+1 ; 𝑟 𝑖,𝑖+1 ) : 1 ≤ 𝑖 ≤ 𝐿},(4)\nwhere 𝐾 is a hyperparameter to control the sparsity. More sentences are extracted as 𝐾 increases.\nMarginal Inference. To identify the highest-scoring global structure, it is essential to maximize the global score function, denoted as score(𝝁; 𝒔), which combines information coming from all factors. This can be viewed as the maximum a posteriori (MAP) inference. Formally, it can be written as:\nμ = arg max 𝝁 ∈ {0,1} 𝐿 𝒔 ⊤ 𝝁 + ∑︁ 𝑓 ∈ F ℎ 𝑓 𝝁 𝑓 score(𝝁;𝒔 ) .(5)\nThe solution to the MAP problem is a vector μ whose entries are 0 and 1. However, it is often difficult to obtain an exact maximization algorithm for complex structured problems that involve interacting sub-problems that have global agreement constraints [33]. Therefore, we can define a Gibbs distribution 𝑝 (𝝁; 𝒔) ∝ 𝑒𝑥𝑝 (𝑠𝑐𝑜𝑟𝑒 (𝝁; 𝒔)).\nThe MAP in Equation 5is the mode of this distribution." }, { "figure_ref": [], "heading": "SCALE.", "publication_ref": [ "b48", "b24", "b31" ], "table_ref": [], "text": "Due to the overlapping interaction of the factors 𝑓 ∈ F . the MAP problem is often intractable. Continuous relaxation can be used to replace the discrete constraints 𝝁 ∈ {0, 1} 𝐿 , which is known as LP-MAP inference [48]. When the factor graph F does not have cycles, these continuous relaxations are nearly optimal [24,31] as for many structured prediction tasks in natural language processing. Formally, we can rewrite the Equation 5 as:\nμ = arg max 𝝁 ∈ [0,1] 𝐿 score(𝝁; 𝒔).(6)\nHowever, LP-MAP is not suitable to train with backpropagation. Consequently, we introduce the SCALE model to address the optimization problem. Arbitrary factor graphs can be instantiated as long as a MAP oracle for each factor is provided. In detail, SCALE is the 𝑙 2 regularized LP-MAP as:\nμ = arg max 𝝁 ∈ [0,1] 𝐿 score(𝝁; 𝒔) -1/2∥𝝁 ∥ 2 . (7\n)" }, { "figure_ref": [], "heading": "Claim Verifier", "publication_ref": [ "b13", "b8", "b28", "b44", "b28" ], "table_ref": [], "text": "Finally, the verifier makes predictions conditioned on the selected evidence and the claim 𝒄: ŷ = pred(𝝁 ⊙ 𝒙 ∥ 𝒄) to obtain the verdict label. ⊙ and ∥ denote the element-wise product and concatenation, respectively. In practice, we adopt two types of claim verifiers as follows: (1) BERT-Based Model: Following the state-of-theart model on XFact [13], we use a multi-layer perceptron with embeddings from BERT [8] to predict the verdict of the claim. (2) Graph-Based Model: Kernel graph attention network [28] is the SOTA graph-based verifier on FEVER [44]. Following Liu et al. [28], we construct the evidence graph based on the output embeddings of the claim and selected evidence. Node and edge kernels are then used to conduct fine-grained evidence propagation. The updated node representations are used to predict the verdict of the claim." }, { "figure_ref": [], "heading": "EXPERIMENTS AND ANALYSES 5.1 Baselines", "publication_ref": [], "table_ref": [], "text": "We adopt seven representative extractor baselines. Pipeline extractors select sentences without seeking supervision from the verdict prediction. Joint extractors train evidence extraction and claim verification jointly. As an indicator of label distribution, we include a majority baseline with the most frequent label of the distribution." 
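As a concrete illustration of the extractor objective defined in Equations 3-7 above, the toy NumPy sketch below evaluates the relaxed, l2-regularized score of a candidate sentence mask under the unary relevance scores, the PAIR contiguity bonus, and the BUDGET sparsity constraint. All names are ours, and the sketch only evaluates the objective; it does not reproduce the differentiable LP-MAP solver that SCALE actually uses to maximize it during training.

```python
# Toy evaluation of the SCALE objective: score(mu; s) - 1/2 ||mu||^2
# with a hard BUDGET constraint and a PAIR bonus for adjacent sentences.
import numpy as np


def scale_objective(mu, s, r, budget):
    """mu: relaxed mask in [0,1]^L, s: sentence scores (L,),
    r: non-negative PAIR scores for adjacent sentences (L-1,), budget: K."""
    assert np.all((mu >= 0) & (mu <= 1)), "mask must lie in [0, 1]"
    if mu.sum() > budget:                           # BUDGET factor: sum_i mu_i <= K
        return -np.inf
    unary = float(s @ mu)                           # sum_i s_i * mu_i
    pairwise = float(r @ (mu[:-1] * mu[1:]))        # contiguity bonus (Equation 3)
    return unary + pairwise - 0.5 * float(mu @ mu)  # l2 term from Equation 7


# Toy usage with 6 sentences and a budget of K = 3.
rng = np.random.default_rng(0)
s = rng.normal(size=6)
r = np.full(5, 0.2)
mu = np.zeros(6)
mu[np.argsort(-s)[:3]] = 1.0                        # naive greedy mask, ignores PAIR
print(scale_objective(mu, s, r, budget=3))
```

In the full model, this maximization is solved jointly over all factors, and the resulting relaxed mask is used to select document sentences before they are passed to the claim verifier.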
}, { "figure_ref": [], "heading": "Pipeline Extractors.", "publication_ref": [ "b2", "b19", "b43", "b34", "b8", "b17", "b53", "b52", "b51" ], "table_ref": [], "text": "(1) Rule-based Extractor: This is a simple baseline that includes the 𝑁 sentences adjacent the snippet in the source document. In practice, we choose 𝑁 = 6 and 𝑁 = 12.\n(2) Surface Extractor: Following previous efforts on synthetic datasets [2,19,43], we use TF-IDF to extract sentences in the source documents as the evidence.\n(3) Semantic Extractor: Following Nie et al. [34], we extract evidence based on semantic similarity. BERT [8] is used to get the representations of the claim and sentences in the source document. Cosine similarity is used for selecting evidence.\n(4) Hybrid Extractor: We employ rankSVM to choose sentences based on the feature sets of rankings returned by TF-IDF as well as similarity scores calculated using BERT.\n(5) CONCRETE: Huang et al. [17] introduces a pioneering factchecking framework utilizing cross-lingual retrieval, gathering evidence from various languages using a retriever. (6) CofCED: Yang et al. [53] employs a hierarchical encoder for web text representation, develops cascaded selectors for verdictexplainable sentence selection.\n(7) FDHN: Xu and Kechadi [52] presents a fuzzy logic-based hybrid model that combines deep learning with textual and numerical context analysis to enhance fake news detection. (8) GETRAL: Wu et al. [51] models claims and evidences as graphstructured data, focusing on capturing long-distance semantic dependencies through neighborhood propagation." }, { "figure_ref": [], "heading": "Joint Extractors.", "publication_ref": [ "b13", "b26", "b50", "b32", "b30", "b4", "b23", "b5", "b27" ], "table_ref": [], "text": "(1) Attention: Following Gupta and Srikumar [13], we get relevance weights between the output embeddings of all retrieved sentences and the claim via dot product attention.\nThen we obtain the evidence by filtering the weighted sentences.\n(2) Reinforce: We follow Lei et al. [26] by assigning a binary Bernoulli variable to each sentence from source documents. The evidence extractor is optimized using REINFORCE [50]. A 𝐿 0 regularizer is used to impose sparsity.\n(3) FusedMax: We used fusedmax [32] to encourage attention to contiguous segments of text, by adding an additional total variation regularizer, inspired by the fused lasso.\n(4) Gumbel: Following Maddison et al. [30], we employ the Gumbel-Max trick to reparameterize the Bernoulli variables.\n(5) HardKuma: We follow Bastings et al. [4] by adopting Hard-Kuma variables and use reparameterized gradients estimates [23].\n(6) UNIREX: We adopt the rationale extraction in Chan et al. [5]. [27] uses an auxiliary module to ensure alignment between the selected rationale and the original input." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b13", "b19", "b43", "b22", "b47", "b54", "b39" ], "table_ref": [], "text": "Following Gupta and Srikumar [13], we evaluate our proposed model on three test sets. The out-of-domain and zero-shot test sets aim to measure the transfer abilities of a fact-checking system across different domains and languages. (3) Zero-Shot: The test set (𝛼 3 ) includes claims from languages not contained in the training set. Models that overfit language-specific artifacts will have poor performance on 𝛼 3 . We reported the mean F1 score and standard deviation by 5 runs.\nHyperparameters of the pipeline and joint extractors. 
Following previous efforts on synthetic datasets [19,43], we configure the pipeline extractor to select five pieces of evidence from source documents. For the pipeline extractors, we set the retrieved evidence obtained from TF-IDF to be more than 5 words for surface extractor. We use the mBERT default tokenizer with max-length as 256 to preprocess data for semantic extractor. We use the default parameters in scikit-learn with RBF kernel for the hybrid extractor. For the joint extractors, we build the models based on their official implementations and tune the hyper-parameters on the dev set. For the proposed model, we use Adam [22] with 1𝑒-5 learning rate with 0.5 decay. The predictor hidden size is set to 200.\nDocuments from specialized domains such as science and ecommerce have also been considered [47,54]. Schuster et al. [39] constructed VitaminC based on factual revisions to Wikipedia, in which evidence pairs are nearly identical in language and content, with the exception that one supports a claim while the other does not. However, these efforts restricted world knowledge to a single source (Wikipedia), ignoring the challenge of retrieving evidence from heterogeneous sources on the web." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_4", "tab_5" ], "text": "XFact: Table 4 shows our model surpassing others on three tests using search snippets or source documents. It also generalizes across domains (𝛼 2 ) or languages (𝛼 3 ) with robust standard deviation. Providing more context around snippets enhances results. Yet, adding more sentences results in a performance improvement of less than 1%. The enhancement in the model's performance is not always due to the potential non-adjacency of evidence sentences.\nJoint extractors using source documents perform better than snippet models, with our model showing a 5.39% improvement, highlighting the value of more context. The graph-based model has a higher F1 score than the BERT-based one, as real-world claim verification demands multiple evidence synthesis. Comparing extractors with source documents, hybrid extractors outperform both EFact: We further present the experimental results on the monolingual dataset in Table 5. We observe similar experimental results as for XFact. Our proposed model consistently outperforms baselines under different settings. Given the search snippets, our model outperforms other joint models by 0.88% on average. When the source documents are provided, the performance gap becomes larger (2.6% on average). Such results further illustrate the effectiveness of incorporating source documents to verify real-world claims and the superiority of the proposed model when more contexts are given." }, { "figure_ref": [], "heading": "Analysis and Discussion", "publication_ref": [], "table_ref": [], "text": "In this section, we give detailed analyses for baselines on XFact with snippets and documents.\nEffect of Factors: As shown in Figure 5, as 𝐾 increases, more sentences in the source documents are selected as evidence. However, the F1 score of the model is not monotonically increasing as 𝐾 increases, as irrelevant information is included. The model achieves the best performance when 𝐾 = 30, where 11.74 sentences (191.07 tokens) are selected as evidence on average. If we remove the BUD-GET factor completely, the performance will drop from 46.04 to 45.03 in terms of the F1 score. On the other hand, the PAIR constraint is also beneficial to the model. 
Models with PAIR constraints consistently achieve better results." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Effect of Metadata:", "publication_ref": [ "b12" ], "table_ref": [ "tab_6", "tab_7", "tab_8" ], "text": "We further study the impact of concatenating metadata (e.g. language, publish date) to the claim for verification. From Table 6, the proposed model gains performance improvements from the metadata. However, improvements in the model with documents are smaller when compared with the model that uses snippets. Also, the impact of metadata is less significant than the ratio of extracted sentences. The model without metadata can achieve competitive results than models with metadata but less optimal extraction ratio. One potential reason is that the model can extract important information similar to metadata from documents, so the impact of metadata is diminished.\nEffect of Evidence: In Table 7, we vary the number of evidence from source documents for pipeline extractors and report the F1 scores on the test set with the BERT-based model. The variation results indicate that both the quantity and quality of retrieved evidence affect the performance. Using less evidence could not provide enough information to help the verifiers to predict the factual label.\nIn contrast, introducing too much evidence will bring irrelevant and noisy sentences thus impeding the veracity prediction.\nHuman Evaluaion: We asked 5 annotators to annotate 100 claims sampled from the test set 𝛼 1 . Since XFact is a multilingual dataset, we first translated these claims and source documents into English. Each annotator is required to select sentences that would be able to verify the claim as evidence. We compare sentences extracted by different extractors including surface, semantic, and hybrid extractors with different numbers of sentences with the gold evidence annotated by the annotators, and show the results in Table 8. Compared with the other baseline model, our method can obtain a 7.98% F1 performance boost on average. When compared with pipeline extractors, our model can maintain a better balance between precision and recall. Such results show the effectiveness of our approach that jointly models evidence extraction and claim verification.\nCase Study: We present the case study of extracting evidence on source documents using joint extractors based on attention and SCALE. From Figure 6, we observe that attention based joint extractor extract much more evidence on source documents than SCALE based extractor, thus introducing more irrelevant information and making it more difficult for verifiers to predict factual labels. We attribute the accurate and effective evidence obtained by SCALE to constrained (e.g. sparsity, contiguity), deterministic and fully differentiable extracting capabilities.\nError Analysis: We present a typical error case made by the proposed model on the out-of-domain set as shown in Figure 7. The gold label of this claim is Partly True, while the model predicted it as True. The extracted sentences shows the model does able to find relevant evidence to support the claim. However, the claim verifier has difficulties in predicting labels with similar ratings (partly true v.s. true). One reason is that the claim verifier is not trained on this domain, which makes it harder to distinguish labels with similar ratings. Through calculation, we found that more than 80% of the errors are caused by predicting similar labels within two ratings. 
For example, predicting mostly false or partly true claims as false. On the in-domain test set, the proposed model made 24% fewer such errors. Another reason is that distinguishing similar ratings is a very difficult problem. Even for human fact-checkers, ratings of the same claim are not consistent [12]. The training corpus potentially exhibits such inconsistency, which confuses the model when predicting factuality." }, { "figure_ref": [], "heading": "LIMITATIONS", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose to incorporate the full text of web pages for verifying real-world claims. Though the proposed fact-checking system significantly outperforms baselines, it still has the following three major limitations. Firstly, the training corpus only contains claims selected and verified by fact-checkers, as it is crawled from fact-checking agencies. Fact-checkers select and verify claims based on their judgements as well as public interest. Thus, there is no guarantee that the training corpus covers all topics. Secondly, evidence in the retrieved web pages can appear in tables, PDFs, images, audio, and video. Human fact-checkers are able to extract relevant information from these heterogeneous sources, while our fact-checking system can only extract textual sentences as evidence. Thirdly, unlike artificial fact-checking datasets that assume world knowledge is restricted to Wikipedia, real-world datasets require knowledge from more diverse sources. Using a search engine is an effective way to obtain related knowledge, but it also raises the concern of untrustworthy evidence. Not all web documents returned by the search engine are equally trustworthy, and sometimes trustworthy sources contradict each other. Almost all existing fact-checking systems, including ours, are unable to handle disagreeing or untrustworthy evidence." }, { "figure_ref": [], "heading": "CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this paper, we first analyzed the real-world dataset XFact and then proposed to incorporate retrieved documents as evidence to enrich the dataset. A latent variable model is further developed to jointly select evidence and predict factuality. Experiments indicate that retrieved documents can provide sufficient contextual clues to the model even when gold evidence sentences are not annotated. Our model maintains a balance between keeping relevant information and removing irrelevant information from source documents." } ]
2024-01-27
10.18653/v1/D19-1475
[ { "authors": "", "journal": "Surface", "ref_id": "b0", "title": "", "year": "2821" }, { "authors": "Bill Adair; Chengkai Li; Jun Yang; Cong Yu", "journal": "", "ref_id": "b1", "title": "Progress toward \"the holy grail\": The continued quest to automate fact-checking", "year": "2017" }, { "authors": "Rami Aly; Zhijiang Guo; M Schlichtkrull; James Thorne; Andreas Vlachos; Christos Christodoulopoulos; O Cocarascu; Arpit Mittal", "journal": "", "ref_id": "b2", "title": "FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information", "year": "2021" }, { "authors": "Isabelle Augenstein; Christina Lioma; Dongsheng Wang; Lucas Chaves Lima; Casper Hansen; Christian Hansen; Jakob Grue Simonsen", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "MultiFC: A Real-World Multi-Domain Dataset for Evidence-Based Fact Checking of Claims", "year": "2019" }, { "authors": "Jasmijn Bastings; Wilker Aziz; Ivan Titov", "journal": "", "ref_id": "b4", "title": "Interpretable Neural Predictions with Differentiable Binary Variables", "year": "2019-07-28" }, { "authors": "Aaron Chan; Maziar Sanjabi; Lambert Mathias; Liang Tan; Shaoliang Nie; Xiaochang Peng; Xiang Ren; Hamed Firooz", "journal": "PMLR", "ref_id": "b5", "title": "Unirex: A unified learning framework for language model rationale extraction", "year": "2022" }, { "authors": "Jifan Chen; Aniruddh Sriram; Eunsol Choi; Greg Durrett", "journal": "", "ref_id": "b6", "title": "Generating Literal and Implied Subquestions to Fact-check Complex Claims", "year": "2022-12-07" }, { "authors": "Wenhu Chen; Hongmin Wang; Jianshu Chen; Yunkai Zhang; Hong Wang; Shiyang Li; Xiyou Zhou; William Yang; Wang ", "journal": "", "ref_id": "b7", "title": "TabFact: A Large-scale Dataset for Table-based Fact Verification", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "William Ferreira; Andreas Vlachos", "journal": "", "ref_id": "b9", "title": "Emergent: a novel data-set for stance classification", "year": "2016" }, { "authors": "L Joseph; Fleiss", "journal": "Psychological bulletin", "ref_id": "b10", "title": "Measuring nominal scale agreement among many raters", "year": "1971" }, { "authors": "Lucas Graves", "journal": "Reuters Institute for the Study of Journalism", "ref_id": "b11", "title": "Understanding the Promise and Limits of Automated Factchecking", "year": "2018" }, { "authors": "Zhijiang Guo; Michael Sejr Schlichtkrull; Andreas Vlachos", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b12", "title": "A Survey on Automated Fact-Checking", "year": "2021" }, { "authors": "Ashim Gupta; Vivek Srikumar", "journal": "", "ref_id": "b13", "title": "X-Fact: A New Benchmark Dataset for Multilingual Fact Checking", "year": "2021" }, { "authors": "Vivek Gupta; Maitrey Mehta; Pegah Nokhiz; Vivek Srikumar", "journal": "", "ref_id": "b14", "title": "IN-FOTABS: Inference on Tables as Semi-structured Data", "year": "2020" }, { "authors": "Andreas Hanselowski; Christian Stab; Claudia Schulz; Zile Li; Iryna Gurevych", "journal": "", "ref_id": "b15", "title": "A Richly Annotated Corpus for Different Tasks in Automated Fact-Checking", "year": "2019" }, { "authors": "Andreas Hanselowski; Hao Zhang; Zile Li; Daniil Sorokin; Benjamin Schiller; Claudia 
Schulz; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "UKP-Athene: Multi-Sentence Textual Entailment for Claim Verification", "year": "2018" }, { "authors": "Kung-Hsiang Huang; Chengxiang Zhai; Heng Ji", "journal": "", "ref_id": "b17", "title": "CONCRETE: Improving Cross-lingual Fact-checking with Cross-lingual Retrieval", "year": "2022" }, { "authors": "Sarthak Jain; Sarah Wiegreffe; Yuval Pinter; Byron C Wallace", "journal": "", "ref_id": "b18", "title": "Learning to Faithfully Rationalize by Construction", "year": "2020" }, { "authors": "Yichen Jiang; Shikha Bordia; Zheng Zhong; Charles Dognin; Maneesh Singh; Mohit Bansal", "journal": "", "ref_id": "b19", "title": "HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification", "year": "2020" }, { "authors": "Kashif Khan; Ruizhe Wang; Pascal Poupart", "journal": "", "ref_id": "b20", "title": "WatClaimCheck: A new Dataset for Claim Entailment and Inference", "year": "2022-05-22" }, { "authors": "Jude Khouja", "journal": "", "ref_id": "b21", "title": "Stance Prediction and Claim Verification: An Arabic Perspective", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b22", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b23", "title": "Auto-Encoding Variational Bayes", "year": "2014-04-14" }, { "authors": "Terry Koo; Alexander M Rush; Michael Collins; Tommi S Jaakkola; David A Sontag", "journal": "", "ref_id": "b24", "title": "Dual Decomposition for Parsing with Non-Projective Head Automata", "year": "2010-09-11" }, { "authors": "Neema Kotonya; Francesca Toni", "journal": "", "ref_id": "b25", "title": "Explainable Automated Fact-Checking for Public Health Claims", "year": "2020" }, { "authors": "Tao Lei; Regina Barzilay; Tommi S Jaakkola", "journal": "", "ref_id": "b26", "title": "Rationalizing Neural Predictions", "year": "2016-11-01" }, { "authors": "Wei Liu; Haozhao Wang; Jun Wang; Zhiying Deng; Yuankai Zhang; Cheng Wang; Ruixuan Li", "journal": "", "ref_id": "b27", "title": "Enhancing the Rationale-Input Alignment for Selfexplaining Rationalization", "year": "2024" }, { "authors": "Zhenghao Liu; Chenyan Xiong; Maosong Sun; Zhiyuan Liu", "journal": "", "ref_id": "b28", "title": "Finegrained Fact Verification with Kernel Graph Attention Network", "year": "2020" }, { "authors": "Jackson Luken; Nanjiang Jiang; Marie-Catherine De Marneffe", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "QED: A fact verification system for the FEVER shared task", "year": "2018" }, { "authors": "Chris J Maddison; Andriy Mnih; Yee Whye Teh", "journal": "", "ref_id": "b30", "title": "The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables", "year": "2017-04-24" }, { "authors": "F T André; Martins; A T Mário; Figueiredo; M Q Pedro; Noah A Aguiar; Eric P Smith; Xing", "journal": "J. Mach. Learn. 
Res", "ref_id": "b31", "title": "AD 3 : alternating directions dual decomposition for MAP inference in graphical models", "year": "2015" }, { "authors": "Vlad Niculae; Mathieu Blondel", "journal": "", "ref_id": "b32", "title": "A Regularized Framework for Sparse and Structured Neural Attention", "year": "2017-09" }, { "authors": "Vlad Niculae; F T André; Martins", "journal": "PMLR", "ref_id": "b33", "title": "LP-SparseMAP: Differentiable Relaxed Optimization for Sparse Structured Prediction", "year": "2020-07" }, { "authors": "Yixin Nie; Haonan Chen; Mohit Bansal", "journal": "AAAI Press", "ref_id": "b34", "title": "Combining Fact Extraction and Verification with Neural Semantic Matching Networks", "year": "2019-01-27" }, { "authors": "Dean Pomerleau; Delip Rao", "journal": "Fake News Challenge", "ref_id": "b35", "title": "The fake news challenge: Exploring how artificial intelligence technologies could be leveraged to combat fake news", "year": "2017" }, { "authors": "Eunsol Hannah Rashkin; Jin Yea Choi; Svitlana Jang; Yejin Volkova; Choi", "journal": "", "ref_id": "b36", "title": "Truth of Varying Shades: Analyzing Language in Fake News and Political Fact-Checking", "year": "2017" }, { "authors": "Michael Schlichtkrull; Zhijiang Guo; Andreas Vlachos", "journal": "", "ref_id": "b37", "title": "AVeriTeC: A dataset for real-world claim verification with evidence from the web", "year": "2023" }, { "authors": "Vladimir Michael Sejr Schlichtkrull; Barlas Karpukhin; Mike Oguz; Wentau Lewis; Sebastian Yih; Riedel", "journal": "", "ref_id": "b38", "title": "Joint Verification and Reranking for Open Fact Checking Over Tables", "year": "2021" }, { "authors": "Tal Schuster; Adam Fisch; Regina Barzilay", "journal": "", "ref_id": "b39", "title": "Get Your Vitamin C! 
Robust Fact Verification with Contrastive Evidence", "year": "2021" }, { "authors": "Tal Schuster; Roei Schuster; Darsh J Shah; Regina Barzilay", "journal": "Computational Linguistics", "ref_id": "b40", "title": "The Limitations of Stylometry for Detecting Machine-Generated Fake News", "year": "2020" }, { "authors": "Shaden Shaar; Nikolay Babulkov; Giovanni Da San; Preslav Martino; Nakov", "journal": "", "ref_id": "b41", "title": "That is a Known Lie: Detecting Previously Fact-Checked Claims", "year": "2020" }, { "authors": "Kishore Gautam; Durgesh Shahi; Nandini", "journal": "", "ref_id": "b42", "title": "FakeCovid -A Multilingual Cross-domain Fact Check News Dataset for COVID-19", "year": "2020" }, { "authors": "James Thorne; Andreas Vlachos; Christos Christodoulopoulos; Arpit Mittal", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "FEVER: a Large-scale Dataset for Fact Extraction and VERification", "year": "2018" }, { "authors": "James Thorne; Andreas Vlachos; Oana Cocarascu; Christos Christodoulopoulos; Arpit Mittal", "journal": "", "ref_id": "b44", "title": "The Fact Extraction and VERification (FEVER) Shared Task", "year": "2009" }, { "authors": "Nicolas Turenne", "journal": "PloS one", "ref_id": "b45", "title": "The rumour spectrum", "year": "2018" }, { "authors": "Soroush Vosoughi; Deb Roy; Sinan Aral", "journal": "Science", "ref_id": "b46", "title": "The spread of true and false news online", "year": "2018" }, { "authors": "David Wadden; Shanchuan Lin; Kyle Lo; Lucy Lu Wang; Madeleine Van Zuylen; Arman Cohan; Hannaneh Hajishirzi", "journal": "", "ref_id": "b47", "title": "Fact or Fiction: Verifying Scientific Claims", "year": "2020" }, { "authors": "Martin J Wainwright; M I Jordan", "journal": "Found. Trends Mach. 
Learn", "ref_id": "b48", "title": "Graphical Models, Exponential Families, and Variational Inference", "year": "2008" }, { "authors": "William Yang; Wang ", "journal": "", "ref_id": "b49", "title": "Liar, Liar Pants on Fire\": A New Benchmark Dataset for Fake News Detection", "year": "2017" }, { "authors": "Williams Ronald", "journal": "Machine learning", "ref_id": "b50", "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "year": "1992" }, { "authors": "Junfei Wu; Weizhi Xu; Qiang Liu; Shu Wu; Liang Wang", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b51", "title": "Adversarial contrastive learning for evidence-aware fake news detection with graph neural networks", "year": "2023" }, { "authors": "Cheng Xu; M-Tahar Kechadi", "journal": "", "ref_id": "b52", "title": "Fuzzy Deep Hybrid Network for Fake News Detection", "year": "2023" }, { "authors": "Zhiwei Yang; Jing Ma; Hechang Chen; Hongzhan Lin; Ziyang Luo; Yi Chang", "journal": "", "ref_id": "b53", "title": "A Coarse-to-fine Cascaded Evidence-Distillation Neural Network for Explainable Fake News Detection", "year": "2022" }, { "authors": "Wenxuan Zhang; Yang Deng; Jing Ma; Wai Lam", "journal": "", "ref_id": "b54", "title": "AnswerFact: Fact Checking in Product Question Answering", "year": "2020" }, { "authors": "Wanjun Zhong; Jingjing Xu; Duyu Tang; Zenan Xu; Nan Duan; Ming Zhou; Jiahai Wang; Jian Yin", "journal": "", "ref_id": "b55", "title": "Reasoning Over Semantic-Level Graph for Fact Checking", "year": "2020" }, { "authors": "Jie Zhou; Xu Han; Cheng Yang; Zhiyuan Liu; Lifeng Wang; Changcheng Li; Maosong Sun", "journal": "", "ref_id": "b56", "title": "GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification", "year": "2019" } ]
[ { "formula_coordinates": [ 5, 58.47, 394.24, 188.24, 17.22 ], "formula_id": "formula_0", "formula_text": "1: for Sampled Mini-batch {𝑐 𝑘 } 𝑁 𝑘=1 , {𝑑 𝑘 } 𝑁 𝑘=1 , {𝑇 𝑘 } 𝑁 𝑘=1 do 2:" }, { "formula_coordinates": [ 5, 371.91, 544.01, 186.83, 18.47 ], "formula_id": "formula_1", "formula_text": "μ 𝑓 = arg max 𝝁 𝑓 ∈ {0,1} |𝑓 | 𝒔 ⊤ 𝑓 𝝁 𝑓 + ℎ 𝑓 𝝁 𝑓 ,(1)" }, { "formula_coordinates": [ 5, 366.73, 697.22, 192.01, 8.43 ], "formula_id": "formula_2", "formula_text": "F = {𝑃𝐴𝐼𝑅(𝜇 𝑖 , 𝜇 𝑖+1 ; 𝑟 𝑖,𝑖+1 ) : 1 ≤ 𝑖 ≤ 𝐿},(2)" }, { "formula_coordinates": [ 6, 101.34, 112.82, 193.25, 24.75 ], "formula_id": "formula_3", "formula_text": "𝑠𝑐𝑜𝑟𝑒 (𝜇; 𝑠) = 𝐿 ∑︁ 𝑖=1 𝑠 𝑖 𝜇 𝑖 + 𝐿-1 ∑︁ 𝑖=1 𝑟 𝑖,𝑖+1 𝜇 𝑖 𝜇 𝑖,𝑖+1 ,(3)" }, { "formula_coordinates": [ 6, 108.41, 179, 186.17, 22.29 ], "formula_id": "formula_4", "formula_text": "F = {𝐵𝑈 𝐷𝐺𝐸𝑇 (𝜇 1 , ..., 𝜇 𝐿 ; 𝐾)} ∪{𝑃𝐴𝐼𝑅(𝜇 𝑖 , 𝜇 𝑖+1 ; 𝑟 𝑖,𝑖+1 ) : 1 ≤ 𝑖 ≤ 𝐿},(4)" }, { "formula_coordinates": [ 6, 103.69, 294.78, 190.89, 40.79 ], "formula_id": "formula_5", "formula_text": "μ = arg max 𝝁 ∈ {0,1} 𝐿 𝒔 ⊤ 𝝁 + ∑︁ 𝑓 ∈ F ℎ 𝑓 𝝁 𝑓 score(𝝁;𝒔 ) .(5)" }, { "formula_coordinates": [ 6, 124.77, 495.25, 169.81, 15.15 ], "formula_id": "formula_6", "formula_text": "μ = arg max 𝝁 ∈ [0,1] 𝐿 score(𝝁; 𝒔).(6)" }, { "formula_coordinates": [ 6, 99.65, 576.57, 191.76, 17.45 ], "formula_id": "formula_7", "formula_text": "μ = arg max 𝝁 ∈ [0,1] 𝐿 score(𝝁; 𝒔) -1/2∥𝝁 ∥ 2 . (7" }, { "formula_coordinates": [ 6, 291.41, 579.17, 3.17, 7.94 ], "formula_id": "formula_8", "formula_text": ")" } ]
Give Me More Details: Improving Fact-Checking with Latent Retrieval
Evidence plays a crucial role in automated fact-checking. When verifying real-world claims, existing fact-checking systems either assume that the evidence sentences are given or rely on the search snippets returned by a search engine. Such methods ignore the challenges of collecting evidence and may not provide sufficient information to verify real-world claims. Aiming to build a better fact-checking system, we propose to incorporate the full text of source documents as evidence and introduce two enriched datasets: the first is multilingual, while the second is monolingual (English). We further develop a latent variable model that jointly extracts evidence sentences from documents and performs claim verification. Experiments indicate that including source documents provides sufficient contextual clues even when gold evidence sentences are not annotated. The proposed system achieves significant improvements over the best-reported models under different settings.
Xuming Hu; Junzhe Chen; Zhijiang Guo; Philip S Yu
[ { "figure_caption": "Figure 2 :2Figure 2: Comparison of information sufficiency, redundancy, and prediction accuracy when humans are given search snippets and source documents.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Overview of the model.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Factor graph for the evidence extractor.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "( 1 )1In-Domain: The test set (𝛼 1 ) is distributionally similar to the training set, and contains claims from the same languages and sources as the training set. (2) Out-of-Domain: The test set (𝛼 2 ) contains claims from the same languages as the training set but from different domains. A model performs well on both 𝛼 1 and 𝛼 2 can generalize across different domains.", "figure_data": "", "figure_id": "fig_3", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: A case on extracted sentences from source documents based on attention and SCALE.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: An error case on extracted sentences from source documents based on SCALE.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Statistics of XFact and EFact.", "figure_data": "DatasetTypeTrainDevTestClaims19,0792,5359,106Snippets85,856 11,154 47,507XFactDocuments 91,579 12,089 52,552Avg #Words in the Claim27.8Avg #Words in the Snippets32.6Avg #Words in the Documents 662.4Claims8,0028011,197Snippets44,3594,5836,472EFactDocuments 46,5244,6506,954Avg #Words in the Claim15.6Avg #Words in the Snippets37.1Avg #Words in the Documents 544.42", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Label distribution of XFact and EFact.", "figure_data": "LabelFalse Mostly False Partly True Mostly True True Unverifiable OtherTrain10,8362,8726,5653,5895,716521537Development9381055401865217965Test (In-Domain)1,41214883128479710185Test (Out-of-Domain)9800696029013812Test (Zero-Shot)1,4484117607132726054Train1,5492,0771,8344901,306426320Development155208183491314332Test232311275731956348", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results on three different test sets of XFact with search snippets (Snip), extra sentences surrounding the snippets (Snip+) and source documents (Doc). 
† denotes results from Gupta and Srikumar[13].", "figure_data": "Extractors / VerifiersBERT-Based ModelGraph-Based ModelIn-Domain Out-of-Domain Zero-Shot In-Domain Out-of-Domain Zero-ShotMajority6.90 †10.60 †7.60 †---Atten [13]38.90 †15.70 †16.50 †39.43±1.2416.04±0.9416.88±1.04Reinforce [26]39.18±1.3717.25±1.4217.66±1.4539.46±1.3517.40±1.2717.92±1.34FusedMax [32]38.24±1.3416.82±1.3117.04±1.5838.41±1.5217.08±1.2417.31±1.17Snip JointGumbel [30] HardKuma [4]38.31±1.28 38.26±1.1316.61±0.86 16.78±1.4917.11±1.05 17.23±1.0638.55±1.34 38.42±0.7716.82±0.95 16.94±1.2817.33±1.18 17.44±0.93UNIREX [5]38.47±1.0116.98±0.8417.47±1.0538.77±0.8217.02±0.7317.64±1.19DAR [27]38.76±1.2417.24±0.8217.88±1.1938.91±0.9517.13±0.7317.62±1.01Ours40.88±1.1418.46±0.9318.73±1.21 41.21±1.2418.79±1.0319.04±1.08Snip+ PipeRule (6 Sentences) Rule (12 Sentences) 41.57±1.37 41.73±1.1919.04±1.08 18.89±1.4219.05±1.63 18.73±1.5142.02±1.33 41.83±1.4019.57±1.37 18.97±1.2919.21±1.29 18.79±1.38PipeDoc(7) DAR: Discriminatively Aligned Rationalization (DAR)", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results on test set of EFact with search snippets (Snip), extra sentences surrounding the snippets (Snip+) and source documents (Doc).", "figure_data": "Extractors / VerifiersBERTGraphMajority7.58-Atten36.83 †36.75 †Reinforce37.58±1.46 37.84±1.45FusedMax36.53±0.87 36.83±0.81Snip JointGumbel HardKuma36.29±0.93 35.70±0.80 37.49±1.41 37.39±1.30UNIREX37.65±1.24 37.44±1.35DAR37.52±1.27 37.41±1.28Ours38.09±1.20 37.92±1.31Snip+ PipeRule (6 Sents) Rule (12 Sents) 38.42±1.01 38.11±0.88 39.66±1.48 39.76±1.56Surface39.85±1.62 40.02±1.17PipeSemantic40.35±1.68 40.42±1.34Hybrid40.27±1.10 40.58±1.35Atten40.13±0.92 20.04±0.65DocReinforce FusedMax41.65±0.95 40.95±1.18 42.39±1.18 42.03±1.12JointGumbel HardKuma42.12±1.45 42.32±1.19 43.38±1.11 43.55±1.16UNIREX43.46±1.24 43.72±1.05DAR43.37±1.13 43.70±1.10Ours44.87±1.10 44.24±0.98Figure 5: Effects of Factors (BUDGET and PAIR). BUDGETis imposed to control the sparsity of the sentence selection.𝐾 is the hyper-parameter to control it. PAIR is imposed toencourage contiguity.surface and semantic ones in pipeline methods. Joint approachesbeat all pipeline ones, our model has improved by an average of2.68% compared to the pipeline model's state-of-the-art, GETRAL,emphasizing the value of more context. emphasizing the signifi-cance of joint training for evidence extraction and factuality pre-diction. This joint training provides explicit feedback and greaterrobustness in terms of standard deviation. The inconsistency inpipeline extractors' results is due to excess irrelevant data. Ourmodel surpasses other joint models in performance and robustness,showing a 1.82% improvement over the SOTA joint model, DAR.This is primarily due to SCALE allowing for deterministic andfully differentiable evidence extraction, resulting in a sturdy andwell-generalized model.", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Effect of metadata: F1 results on the test set.", "figure_data": "ExtractorsmBERT-Based Graph-BasedSnippets w/o Meta40.8841.21Snippets42.1443.53Documents w/o Meta46.0446.36Documents (𝐾=30)46.4246.89Documents (𝐾=10)43.8545.83Documents (𝐾=20)45.7446.02Documents (𝐾=50)45.5946.35", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Effects of evidence: F1 results on the test set are reported. 
#E indicates the number of evidence.", "figure_data": "#E1351015Documents (Surface)41.06 42.84 42.76 42.26 41.64Documents (Semantic) 41.63 42.35 42.89 42.40 42.13Documents (Hybrid)42.04 42.56 42.98 42.52 42.19", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Human evaluation of extracted evidence. (5), (10),(15) denote the number of extracted sentences.", "figure_data": "ExtractorsPrecision RecallF1Semantic (5)21.4810.78 14.36Surface (5)26.687.8912.18Hybrid (5)23.359.9813.98Semantic (10)20.7723.25 21.94Surface (10)19.5028.55 23.17Hybrid (10)23.9820.99 22.39Semantic (15)11.2742.64 17.83Surface (15)9.0548.77 15.27Hybrid (15)12.4752.66 20.16HardKuma20.1926.54 22.94Ours22.7933.76 27.21", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" } ]
[{"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work provides the multilingual dataset XFact that the citing paper uses to evaluate the factuality of claims in Arabic social media."}, {"Category": "Supporting Evidence", "Citation": "[19]", "Explanation": "The cited work by Schuster et al. demonstrated the importance of considering evidence in fact-checking, which is a foundational concept for the citing paper in understanding the need to address the issue of relying on surface patterns of claims without considering other evidence."}, {"Category": "Extension or Continuation", "Citation": "[3]", "Explanation": "The cited work by other works in the field of fact-checking websites is extended in the citing paper to further explore the use of real-world claims in the media ecosystem for fact-checking."}, {"Category": "Data Source", "Citation": "[36]", "Explanation": "The cited work, PunditFact, is a source of information that provides the necessary data for the citing paper to analyze the factuality of a given claim in the context of vaccine trials and countries that have been vaccinated for more than 3 months."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "The cited work by Guo et al. is used to group the fact-checking datasets into two categories, which serves as the basis for the classification in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[36]", "Explanation": "The cited work by Guo et al. is extended by the citing paper to group the datasets into two categories, which is a continuation of the research in the cited work."}, {"Category": "Extension or Continuation", "Citation": "[49]", "Explanation": "The cited work by Guo et al. is extended by the citing paper to group the datasets into two categories, which is a continuation of the research in the cited work."}, {"Category": "Extension or Continuation", "Citation": "[9]", "Explanation": "The cited work by Ferreira and Vlachos is extended by the citing paper to use the headlines of selected news articles as evidence for the same claims, which is a continuation of the research in the cited work."}, {"Category": "Extension or Continuation", "Citation": "[35]", "Explanation": "The cited work by Pomerleau and Rao is extended by the citing paper to use the entire articles as evidence for the same claims, which is a continuation of the research in the cited work."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The cited work by Hanselowski et al. is extended by the citing paper to extract summaries accompanying fact-checking articles about the claims as evidence, which is a continuation of the research in the cited work."}, {"Category": "Extension or Continuation", "Citation": "[25]", "Explanation": "The cited work by Kotonya and Toni is extended by the citing paper to extract summaries accompanying fact-checking articles about the claims as evidence, which is a continuation of the research in the cited work."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work by Augenstein et al. 
serves as a data source for the citing paper, providing a method for retrieving evidence from the Internet to construct a better evidence-based dataset."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work by Gupta and Srikumar also serves as a data source for the citing paper, providing a method for retrieving evidence from the Internet to construct a better evidence-based dataset."}, {"Category": "Extension or Continuation", "Citation": "[39]", "Explanation": "The cited work by Augenstein et al. is extended in the citing paper to consider Wikipedia as the source of evidence and annotate sentences from articles as evidence, creating a better evidence-based dataset for fact-checking systems."}, {"Category": "Extension or Continuation", "Citation": "[44]", "Explanation": "The cited work by Augenstein et al. is further extended in the citing paper to consider Wikipedia as the source of evidence and annotate sentences from articles as evidence, creating a better evidence-based dataset for fact-checking systems."}, {"Category": "Data Source", "Citation": "[15]", "Explanation": "The cited work is a real-world effort that provides summaries of fact-checking articles as evidence, which the citing paper uses to develop systems for evidence collection from heterogeneous sources."}, {"Category": "Data Source", "Citation": "[25]", "Explanation": "The cited work is another real-world effort that also uses fact-checking articles as evidence for system development, but the citing paper highlights the need for a more realistic approach that is not available during inference."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work is a dataset that includes search snippets generated by Google as evidence, which the citing paper uses to address the issue of not having fact-checking articles available during inference."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work is another dataset that also includes search snippets as evidence, but the citing paper highlights the need for a more realistic approach that is not available during inference."}, {"Category": "Extension or Continuation", "Citation": "[15]", "Explanation": "The citing paper extends the work of the cited work by proposing a new approach to directly incorporate retrieved documents as evidence for better verification in system development."}, {"Category": "Methodological Basis", "Citation": "[16,44]", "Explanation": "The cited works provide the basis for using entity linking or TF-IDF in evidence extraction for claim verification in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[29,34]", "Explanation": "The cited works introduce the use of textual entailment models for claim verification in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[28,38,55,56]", "Explanation": "The cited works present the use of graph-based models for evidence aggregation in claim verification, which the citing paper builds upon."}, {"Category": "Data Source", "Citation": "Most systems assume the evidence sentences are given", "Explanation": "The cited work highlights the need for real-world evidence annotation in claim verification, which the citing paper addresses by treating evidence extraction as a latent variable."}, {"Category": "Supporting Evidence", "Citation": "[13]", "Explanation": "The cited work, XFact, is the source of the instances used in the study conducted in the citing paper. 
The instances are used to evaluate the information provided by search snippets in verifying real-world claims."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work by Augenstein et al. is used to provide a method for submitting claims as queries to the Google Search API, which is utilized in the citing paper to collect evidence for verifying claims."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work by Gupta and Srikumar is also used to provide a method for submitting claims as queries to the Google Search API, which is utilized in the citing paper to collect evidence for verifying claims."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work by [12] is used to highlight the importance of evidence in generating justifications to convince readers, which the citing paper builds upon in its research on claim verification."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work by [18] is mentioned to encourage the use of contiguous evidence sentences in the model for improved readability, which the citing paper adopts in its research on claim verification."}, {"Category": "Supporting Evidence", "Citation": "[4,26]", "Explanation": "The cited works by [4,26] are used to compare the stability of SCALE with other latent variable models, providing supporting evidence for the claim that SCALE is more stable in the context of claim verification."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work, BERT, is used as the method to encode the claim and sentences in the document, providing the sentence representations that are essential for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[33]", "Explanation": "The cited work provides a method for solving complex structured problems with global agreement constraints, which the citing paper adopts in its research to address the MAP problem."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The cited work introduces the concept of continuous relaxation for the MAP problem, which the citing paper adopts in the form of LP-MAP inference to address the optimization problem."}, {"Category": "Extension or Continuation", "Citation": "[24,31]", "Explanation": "The cited works provide theoretical insights on the near-optimality of continuous relaxations for structured prediction tasks in natural language processing, which the citing paper extends by applying the concept to the problem of training with backpropagation."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work on XFact provides a state-of-the-art model for claim verification, which the citing paper adopts in the form of a multi-layer perceptron with embeddings from BERT to predict the verdict of a claim."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work on kernel graph attention network is the SOTA graph-based verifier on FEVER, which the citing paper uses to construct the evidence graph and perform fine-grained evidence propagation in the claim verification process."}, {"Category": "Methodological Basis", "Citation": "(1)", "Explanation": "The rule-based extractor serves as a simple baseline for the study conducted in the citing paper, providing a foundational method for comparison and analysis."}, {"Category": "Data Source", "Citation": "(2)", "Explanation": "The surface extractor 
utilizes TF-IDF to extract sentences in the source documents as evidence, serving as a data source for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(3)", "Explanation": "The semantic extractor employs BERT to get representations of the claim and sentences in the source document, using cosine similarity for evidence selection. This method serves as a methodological basis for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "(4)", "Explanation": "The hybrid extractor utilizes rankSVM to choose sentences based on feature sets of rankings returned by TF-IDF and similarity scores calculated using BERT, providing a methodological basis for the study conducted in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "(5)", "Explanation": "The CONCRETE framework introduced by Huang et al. [17] serves as a continuation of the study on factchecking, utilizing cross-lingual retrieval to gather evidence from various languages using a retriever."}, {"Category": "Extension or Continuation", "Citation": "(6)", "Explanation": "The CofCED framework developed by Yang et al. [53] employs a hierarchical encoder for web text representation and cascaded selectors for verdictexplainable sentence selection, serving as an extension of the study on factchecking."}, {"Category": "Methodological Basis", "Citation": "[51]", "Explanation": "The cited work by Wu et al. introduces a model that focuses on capturing long-distance semantic dependencies through neighborhood propagation, which the citing paper adopts in their research to improve fake news detection."}, {"Category": "Methodological Basis", "Citation": "[52]", "Explanation": "The cited work by Xu and Kechadi presents a hybrid model that combines deep learning with textual and numerical context analysis to enhance fake news detection, which the citing paper adopts in their research to improve fake news detection."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work by Gupta and Srikumar provides the method of using dot product attention to obtain relevance weights between output embeddings of retrieved sentences and the claim, which the citing paper adopts in their research on evidence extraction."}, {"Category": "Methodological Basis", "Citation": "[26]", "Explanation": "The cited work by Lei et al. provides the method of using a binary Bernoulli variable to filter sentences from source documents in evidence extraction, which the citing paper follows in their research."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The cited work on fusedmax encourages attention to contiguous segments of text by adding a total variation regularizer, which the citing paper adopts in their research to improve evidence extraction."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work by Maddison et al. employs the Gumbel-Max trick to reparameterize Bernoulli variables, which the citing paper adopts in their research to improve evidence extraction."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work by Bastings et al. adopts Hard-Kuma variables and reparameterized gradients estimates, which the citing paper follows in their research to improve evidence extraction."}, {"Category": "Methodological Basis", "Citation": "[5]", "Explanation": "The cited work by Chan et al. 
adopts the rationale extraction method, which the citing paper adopts in their research to improve evidence extraction."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work by Gupta and Srikumar provides the test sets used in the citing paper to evaluate the transfer abilities of a fact-checking system across different domains and languages."}, {"Category": "Data Source", "Citation": "[22]", "Explanation": "The cited work by Kingma and Ba introduces the Adam optimization algorithm, which the citing paper uses in their model for training."}, {"Category": "Methodological Basis", "Citation": "[39]", "Explanation": "The cited work by Schuster et al. introduces the concept of factual revisions to Wikipedia for evidence retrieval, which the citing paper adopts in their model to improve evidence retrieval from heterogeneous sources on the web."}, {"Category": "Extension or Continuation", "Citation": "[47,54]", "Explanation": "The cited works by other authors have also considered the use of documents from specialized domains in evidence retrieval, which the citing paper extends by considering the challenge of retrieving evidence from heterogeneous sources on the web."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work by [12] provides a study on the inconsistency in human fact-checking ratings, which serves as a methodological basis for the citing paper in understanding the challenges in distinguishing similar labels in factuality prediction."}]
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b3", "b4", "b5", "b6", "b7", "b6", "b4", "b2", "b5", "b8", "b8", "b9", "b2", "b0", "b8" ], "table_ref": [], "text": "LiDAR sensors are commonly used in autonomous driving applications alongside cameras and radars. LiDARs offer precise and rich depth information independently of the lighting conditions, helping an autonomous vehicle to understand and navigate its surroundings. The biggest drawback of LiDAR sensors is their sensitivity to adverse weather conditions such as rain, snow, and fog. Most perception algorithms developed for LiDAR sensors are trained and tested in favorable weather conditions. However, their performance is seen to degrade in adverse weather [1], [2], severely reducing their reliability. With the increasing number of autonomous vehicles on the roads, it is fundamental that they can navigate the environment and safely interact with other road users regardless of weather conditions.\nIn the literature, only a few approaches have been proposed to address this problem [4], [5], [6], [7], [8]. Sebastian et al. [7] classify weather conditions from a LiDAR scan. Zhang et al. [5] measure the degradation of LiDAR measurements in rainy conditions. Both methods only provide information on a scan-wise level, which does not allow the identification of adverse weather effects in the point cloud. However, detecting which points belong to solid obstacles and which are derived from adverse weather effects is essential for the reliable operation of an autonomous vehicle. 1 Institute of Measurement, Control, and Microtechnology, Ulm University, Germany {firstname.lastname}@uni-ulm.de We propose a method for detecting adverse weather effects in LiDAR data based on energy outlier detection. In the figure, we show a scene from the RoadSpray [3] dataset, where both a leading vehicle and the ego vehicle are driving on a wet surface, generating trailing spray. Our model is trained to associate low energy scores (darker colors) with inliers and high energy scores (lighter colors) with outliers, allowing for robust classification of adverse weather effects.\nHeinzler et al. [6] propose a semantic segmentation network to classify adverse weather on a point-wise level. Due to the challenge of labeling adverse weather data, they test their model on data collected in a weather chamber, where a static scenario is created and artificial fog and rain are overlayed. However, real-world scenarios are much more challenging since complex dynamic scenarios can occur under different weather conditions.\nIn this work, we address the limitations mentioned above by proposing a method for the point-wise detection of adverse weather conditions. Instead of directly classifying adverse weather points, we reframe the task as an outlier detection problem. For this purpose, we adapt the energybased outlier detection framework proposed in [9] to the 3D point cloud domain. More specifically, we rewrite the energy score formulation proposed in [9] for the point cloud domain and extend the proposed energy loss function to account for large class imbalances. We use this approach to differentiate between inlier points (buildings, vehicles, pedestrians, etc.) and outlier points (spray, rain, snow, fog, etc.). Fig. 1 shows a qualitative result of our approach in detecting spray. In extensive experiments, we show that our method performs better than previous state-of-the-art methods on real and simulated data. 
When training on datasets containing a single adverse weather effect, our approach shows higher robustness to unseen adverse weather effects, making it more applicable to real-world applications where multiple weather effects can occur simultaneously. Furthermore, our approach allows for the combined semantic segmentation of inlier points and the detection of outlier points. When we compare our method with a state-of-the-art network for semantic segmentation [10], we see that our solution performs better in detecting points generated by adverse weather effects.\nThe lack of publicly available labeled data is the primary reason for the limited research on LiDAR in adverse weather. With this work, we help increase the research opportunities in this field by releasing the SemanticSpray dataset, which provides semantic labels for the RoadSpray [3] dataset. The dataset contains scenes of vehicles driving on wet surfaces, which can generate a trailing spray corridor. This effect is seen to be highly problematic in high-speed scenarios where a large number of spray points are introduced in the LiDAR measurements [1], impeding the sensors' field of view and, in extreme cases, causing perception systems to fail.\nIn summary, our main contributions are:\n• We adapt the energy-based outlier detection framework [9] to detect adverse weather in LiDAR point clouds by formulating the point energy score and extending the energy loss function to account for large class imbalances. • We show that our method outperforms the previous state-of-the-art approaches in detecting adverse weather effects in LiDAR data and has greater robustness to unseen weather effects. • We show how our method can be adapted to perform both outlier detection and semantic segmentation of the inlier classes using different network architectures. • We help expand the critical research field of LiDAR perception in adverse weather conditions by releasing the SemanticSpray dataset." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Adverse Weather Detection in LiDAR Data", "publication_ref": [ "b0", "b4", "b10", "b6", "b4", "b5", "b11", "b7", "b7", "b12", "b12", "b13" ], "table_ref": [], "text": "Walz et al. [1] perform a study on the effect of vehicle spray on LiDAR and camera-based object detectors using a vehicle-mounted setup to simulate spray. The results show that state-of-the-art detectors are negatively affected by this weather effect, introducing misdetections and ghost objects in the scene. Zhang et al. [5] propose a method for estimating the degree of degradation for LiDAR scans in rainy conditions. They use an auto-encoder trained with the DeepSAD framework [11] to output an anomaly score for an input Li-DAR scan. Sebastian et al. [7] propose a CNN-based method for classifying weather and road surface conditions in LiDAR data. Similar to [5], the method is not meant for classification on a point-wise level but rather on the entire LiDAR scan. Heinzler et al. [6] use a lightweight CNN architecture to detect rainfall and fog on a point-wise level. They train their network on data recorded inside a weather chamber where a static scenario is created using vehicles and mannequins, and then artificial fog and rain are overlayed. Although the proposed method performs well on the test data recorded inside the weather chamber, a generalization to dynamic realworld scenarios is not trivial [12]. Stanislas et al. 
[8] propose a CNN and voxel-based approach for detecting airborne particles in robotic applications. Their CNN-based method uses a state-of-the-art CNN architecture developed for image segmentation. Their voxel-based approach instead uses a fully 3D convolutional backbone, resulting in high inference times. Although the primary goal of [8] is to detect smoke and dust, we test their approaches on adverse weather since airborne particles also include snow, rain, fog, and spray.\nA variety of filtering-based methods has also been developed for the detection of outliers in point clouds. This includes voxel grid down-sampling, statistical outlier removal (SOR), and radius outlier removal (ROR). However, these general-purpose filters perform poorly on adverse weather condition detection [13]. For this reason, Charron et al. [13] propose the DROR filter, which aims to detect snow points by dynamically adjusting the radius of neighbor search to compensate for the non-uniform density of LiDAR point clouds. Kurup et al. [14] propose DSOR, which further improves DROR by considering the mean distance between neighbors. Overall, filtering-based approaches have a significant advantage over learning-based methods of not requiring training data. However, they have high computational complexity and generalize poorly to unmodeled weather effects, limiting their use in real-world applications." }, { "figure_ref": [], "heading": "B. Anomaly Detection Methods", "publication_ref": [ "b14", "b10", "b8", "b15", "b8", "b16", "b15", "b15", "b8" ], "table_ref": [], "text": "Many approaches have been developed for anomaly detection, mainly for image data. The outlier exposure method introduced in [15] proposes to train a network with an auxiliary dataset of outliers, allowing the network to generalize to unseen anomalies. Ruff et al. [11] use a similar approach, redefining the loss function to map inlier data inside a hypersphere and outliers outside it. The anomaly score is then derived by computing the distance of the input from the center of the hypersphere. Recently, energy-based methods (EBMs) have emerged as the state-of-the-art for anomaly detection both on an image [9] and pixel [16] level. These methods rely on the energy function [9] to map the classification logits to a single real number. The network is trained to create an energy gap between inlier and outlier data which can be used for classification. In our work, we use a similar approach for the point-wise detection of adverse weather effects in LiDAR data. Liu et al. [17] propose the abstention learning paradigm, where a model is trained to abstain from making a classification when it is not certain that the input is an inlier. For this purpose, an additional class is added to the last classification layer of a network, and its output value is used for uncertainty estimation. Similar to [16], we use the idea of an additional class to extend our proposed network. However, different from [16], we do not optimize the abstention learning loss function and instead use an adapted version of the energy loss function proposed in [9] to detect outliers." }, { "figure_ref": [ "fig_0" ], "heading": "III. METHOD", "publication_ref": [], "table_ref": [], "text": "This section describes our proposed method for detecting adverse weather points in LiDAR data. More specifically, our goal is to classify in a binary way if a point in a point cloud is an inlier (not adverse weather) or an outlier (adverse weather). In Fig. 2, we show an overview of the approach." 
}, { "figure_ref": [], "heading": "A. Preliminaries", "publication_ref": [ "b8" ], "table_ref": [], "text": "Brief Introduction to EBMs. Given a classification model f (x) : R D → R K , which maps an input x ∈ R D to K-logits, the energy function is defined as:\nE(x; f ) = -log k∈{1,...,K} e f (k|x) ,(1)\nwith f (k|x) being the k-th output logit. The output of E(x; f ) : R D → R represents the energy score of the input x. EBMs are characterized by their objective function, which aims to associate low energy scores with inlier inputs and high energy scores with outliers. The learned energy gap can then be used to classify inliers and outliers using an appropriate energy threshold [9]. Point Energy Score. We rewrite the energy function to process 3D inputs on a point-wise level. Let p = [x 1 , . . . , x N ] ∈ R N ×C be a point cloud with N total points and x i ∈ R C a single point with C features. Given a classification model f (p) : R N ×C → R N ×K which outputs K-logits for each point x i ∈ p, we define the point energy function as:\nE i (p; f ) = -log k∈{1,...,K} e fi(k|p) .\n(\n)2\nThe output of the function E i (p; f ) : R N ×C → R represents the point energy score of x i , with i = {1, . . . , N }. Here, f i (k|p) is the k-th output logit for point x i ." }, { "figure_ref": [], "heading": "B. Point-wise Adverse Weather Detection", "publication_ref": [ "b17", "b18", "b8", "b17", "b18", "b4", "b8", "b5", "b4" ], "table_ref": [], "text": "Dataset. Given a point cloud p, we define the associated point-wise label set as y = {y 1 , . . . , y N } ∈ N N . We reserve the labels y i ∈ Y in = {1, . . . , Y } for inlier classes (buildings, vehicles, pedestrians, etc.) and the label y i ∈ Y out = {Y + 1} for the outlier class (rain, snow, fog, etc.).\nNetwork Architecture. Using the abstention learning (AL) framework, we extend the last classification layer of f (p) with an additional class. This class is not used to classify outliers directly but instead allows the model to abstain from classification when it is not confident that a point belongs to one of the Y in classes.\nAlthough our method is not restricted to a particular network architecture, we propose a model for the specific case of adverse weather detection, which we name Ad-verseWeatherNet (AWNet). Fig. 3 shows an overview of the architecture. AWNet first voxelizes the input point cloud and then passes it through a stack of squeeze-excitation attention layers [18]. Each attention layer independently weights the channels of each voxel, and then a combined weight is derived by summing the individual weights. This value modulates the voxelized input channels via element-wise multiplication. The result is then processed using the sparse convolutions backbone presented in [19], which allows for rich feature extraction without the high computational cost of full 3D convolutions. Finally, the extracted features are classified using a set of fully connected layers.\nLoss Function. The network f (p) is trained using the following loss function:\nℓ total = ℓ cls + λℓ energy .(3)\nThe first term in (3) is a semantic segmentation term, which in our case is the standard negative log-likelihood (NLL) loss function defined for an input point cloud p, and associated point-wise labels y as:\nℓ cls = E xi∈p -log f i (y i |p) k∈{1,...,K} f i (k|p) y i ∈ Y in ,(4)\nwith f i (y i |p) being the y i -th output logit. As shown in [9], minimizing the NLL results in the model associating a low energy score with the inlier inputs. 
Moreover, the loss function allows learning the semantic segmentation of the inlier classes.\nFig. 3. AWNet architecture. We use a stack of three squeeze-excitation [18] (SE) attention layers to weight the channels of each voxel. Then, we extract features using the 3D sparse convolution proposed in [19] and classify each voxel using two fully connected (FC) layers.\nThe second term in (3) is defined for an input point cloud p and associated point-wise labels y as:\n\ell_{energy} = \mathbb{E}_{x_i \in p,\, y_i \in Y_{in}} \left[ \omega_{in}^{-1} \max(0, E_i(p; f) - m_{in})^2 \right] + \mathbb{E}_{x_i \in p,\, y_i \in Y_{out}} \left[ \omega_{out}^{-1} \max(0, m_{out} - E_i(p; f))^2 \right], \quad (5)\nwith:\n\omega_{in} = 1 + \sum_{y_i \in y} \mathbb{1}_{y_i \in Y_{in}}, \qquad \omega_{out} = 1 + \sum_{y_i \in y} \mathbb{1}_{y_i \in Y_{out}}. \quad (6)\nThe ℓ_energy (5) term is a hinge loss function that penalizes inlier energies greater than the margin parameter m_in ∈ R and outlier energies lower than m_out ∈ R [9]. This creates an energy gap that can be used to define an adequate threshold to differentiate between inlier and outlier points. The task of point-wise anomaly detection in the case of adverse weather conditions can be a highly imbalanced problem. Even in severe precipitation, the number of non-adverse weather points can largely outnumber adverse weather ones. This large imbalance causes the ℓ_energy (5) term to saturate, resulting in insufficient supervision during training for the outlier energy association. We address this problem using the terms described in (6), which weight the energy loss terms proportionally to the number of inlier and outlier points present in the point cloud. The parameter λ ∈ R is used to weight the loss term ℓ_energy (5)." }, { "figure_ref": [], "heading": "C. Training and Inference", "publication_ref": [ "b8", "b15", "b2", "b2", "b2" ], "table_ref": [], "text": "Training. Like other energy-based methods, we use a fine-tuning approach for training [9], [16]. Given a model trained on a dataset containing only inlier points, we first extend its final classification layer using the AL framework described in Section III-B. Then, we fine-tune the model weights on a dataset that contains both inlier and outlier points using the loss function ℓ_total (3). This approach has the advantage of being efficient since it does not require retraining the entire network.\nInference. At inference time, our model returns the energy score for each point in the point cloud. An appropriate energy threshold τ ∈ R can be chosen depending on the application at hand. For example, in autonomous driving applications, τ can be chosen so that a high fraction (e.g., 95%) of inlier points is correctly classified. The threshold τ is common for all points during inference. The decision rule g_i for a point x_i ∈ p can then be formulated as:\ng_i(p; \tau; f) = \begin{cases} \text{inlier} & \text{if } E_i(p; f) \le \tau, \\ \text{outlier} & \text{otherwise}. \end{cases} \quad (7)\nIV. THE SEMANTICSPRAY DATASET\nWithin this work, we also release the SemanticSpray dataset. The data is based on the recently released RoadSpray [3] dataset, which performs large-scale testing of the effect of vehicle spray on LiDAR, camera, and radar sensors. The RoadSpray dataset provides unlabeled scenes of vehicles driving on a wet surface at different speeds in highway-like scenarios. For the sensor setup and additional information on the experiments, we refer the reader to the original publication [3]. To create the SemanticSpray dataset, we manually label each point in a LiDAR scan as one of three categories: background, vehicle, and spray. All static objects (road, vegetation, buildings, etc.) are labeled as background.
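The class-balanced energy loss (Eqs. 5-6) and the thresholded decision rule (Eq. 7) could be implemented roughly as sketched below. This is an illustrative reading of the equations rather than the authors' code; the margin defaults and the quantile-based threshold selection follow the surrounding text, and all names are assumptions.

```python
import torch

def energy_loss(energies, inlier_mask, m_in=-5.0, m_out=5.0):
    """Class-balanced hinge loss on per-point energies (Eqs. 5-6).

    energies:    (N,) point energy scores from Eq. (2).
    inlier_mask: (N,) bool tensor, True where y_i is an inlier label.
    """
    w_in = 1.0 + inlier_mask.sum()        # omega_in
    w_out = 1.0 + (~inlier_mask).sum()    # omega_out
    loss = energies.new_zeros(())
    if inlier_mask.any():                 # penalize inlier energies above m_in
        loss = loss + torch.clamp(energies[inlier_mask] - m_in, min=0.0).pow(2).mean() / w_in
    if (~inlier_mask).any():              # penalize outlier energies below m_out
        loss = loss + torch.clamp(m_out - energies[~inlier_mask], min=0.0).pow(2).mean() / w_out
    return loss

def pick_threshold(inlier_energies, keep_rate=0.95):
    """Choose tau so that roughly `keep_rate` of inlier points fall below it."""
    return torch.quantile(inlier_energies, keep_rate)

def is_outlier(energies, tau):
    """Eq. (7): points with energy above tau are flagged as adverse weather."""
    return energies > tau
```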
The moving vehicles in the scene are labeled as vehicle. The spray class contains the spray generated by the ego vehicle and other moving vehicles in the scene. In total, we provide semantic labels for 16565 dynamic scenes, with approximately 6.23 • 10 6 background, 4.84 • 10 4 vehicle and 2.15 • 10 4 spray points. The dataset is available for download at https:// semantic-spray-dataset.github.io. The dataset toolkit is instead available at https://github.com/ aldipiroli/semantic_spray_dataset." }, { "figure_ref": [], "heading": "V. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Experiment Setup", "publication_ref": [ "b5", "b12", "b13", "b7", "b13", "b5", "b19", "b20", "b21", "b14", "b8", "b15", "b20", "b22", "b18" ], "table_ref": [], "text": "Baselines. We evaluate our approach against the other methods for adverse weather [6], [13], [14] and airborne particle detection in LiDAR data [8]. Each learning-based baseline is trained to classify inlier (non-adverse weather), and outlier (adverse weather) points using the default parameters. Previous methods evaluate only on a single dataset which usually includes a single adverse weather condition. However, this testing approach is unrealistic since, in many real-world scenarios, multiple weather effects can occur simultaneously, i.e., snowfall and wet surface spray. Therefore, we test each method on the adverse weather effects used during training and on unseen ones.\nDatasets. We evaluate the performance in a variety of different adverse weather conditions. SemanticSpray is the dataset presented in Section IV, which contains spray effects in highway-like scenarios. We use 7898 scans for training and 8667 for testing. The WADS [14] dataset was recorded in snowy conditions while driving in urban environments. We use 1011 scans for training and 918 for testing 1 . The DENSE [6] dataset contains static scenarios inside a weather chamber where artificial rainfall and fog are generated. We use the official train (61900 scans), and test (19787 scans) splits. To test our proposed method on more complex scenarios under foggy conditions, we use the simulation method proposed in [20] to augment with fog the NuScenes dataset [21] (NuScenes-Fog). The dataset has 27449 scans for training and 6019 for testing. Since our main goal is to differentiate between inliers and outliers, unless otherwise stated, we consider all the possible non-adverse weather classes (e.g., vehicle, pedestrian, building, etc.) as the inlier class. Similarly, we consider all of the adverse weather classes (e.g., snow, fog, and spray) as the outlier class.\nEvaluation Metrics. Following prior work [22], [15], [9], [16], we evaluate the outlier detection task using the area under receiver operating characteristics (AUROC), area under the precision-recall curve (AUPR) and the false positive rate at 95% true positive rate (FPR95). To compare against statistical filters, we use precision and recall metrics. Additionally, we use the intersection over union (IoU) and the mean IoU (mIoU) to evaluate semantic segmentation performances.\nImplementation Details. We pretrain AWNet using the NuScenes [21] dataset on the foreground (vehicles, pedestrians, cyclists, etc.) and background class (everything else). We train for 30 epochs on 20% of the training set using the NLL loss, Adam optimizer [23], and constant learning rate of 10 -4 . 
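As a reference for the evaluation metrics listed above, the sketch below shows one way AUROC, AUPR, and FPR95 could be computed with scikit-learn. It assumes adverse-weather points are treated as the positive class and that higher scores indicate outliers; conventions vary, so this is an assumption rather than the paper's exact protocol.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve

def fpr_at_95_tpr(y_true, y_score):
    """False positive rate at the first operating point with TPR >= 0.95.

    y_true:  1 for adverse-weather (outlier) points, 0 for inliers.
    y_score: per-point outlier score, e.g. the point energy score.
    """
    fpr, tpr, _ = roc_curve(y_true, y_score)
    idx = int(np.searchsorted(tpr, 0.95))
    return float(fpr[min(idx, len(fpr) - 1)])

# auroc = roc_auc_score(y_true, y_score)
# aupr = average_precision_score(y_true, y_score)
```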
For the energy-based fine-tuning, we train for a maximum of 20 epochs using Adam optimizer and training parameters m in = -5, m out = 5 and λ = 0.1.\nWhen training on the SemanticSpray and DENSE datasets, we set ω in and ω out equal to 1 since there is a lower imbalance between inlier and outlier points. We implement AWNet with voxel size of [0.1, 0.1, 0.2] m in the x, y and z direction respectively. A full description of the sparse backbone parameters is given in [19]. The final classification layer comprises two fully connected layers of size 256. Point energy score distributions of AWNet trained on the SemanticSpray dataset and tested on the SemanticSpray (top) and WADS (bottom) datasets. Although the network is trained only on spray data, the model associates similar energy scores to snow and spray points. This highlights the robustness of our method to unseen weather effects since a common classification threshold τ can be chosen to classify both effects." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4" ], "heading": "B. Results", "publication_ref": [ "b11", "b3", "b9", "b2", "b9", "b2" ], "table_ref": [ "tab_4" ], "text": "Training and Testing on the Same Weather Effect. We start the evaluation by comparing the performance of AWNet against the other methods by training and testing in the same weather conditions. We report the results in Tab. I. Our method achieves the best average performance across all tested datasets in terms of AUROC and FPR95, improving the latter by 2.9% points compared to the second-best method (Particle-UNet). This last result shows that our method is well suited for safety-critical applications like autonomous driving, where a high percentage of adverse weather effects needs to be detected without having a large number of false positives. Fig. 1 Training and Testing on Different Weather Conditions. In Tab. II, we report the results of the methods trained on a single weather condition D train and tested on unseen (during training) weather conditions D test . We see that AWNet outperforms the other methods by a large margin across all metrics. When training on snowy conditions (WADS) and testing on spray (SemanticSpray), AWNet achieves an improvement of 3.17% AUROC, 24.12% AUPR and 17.29% FPR95 points against the second best network (Particle-VoxelNet). We see similar performances when training on spray and testing on snowfall. Although snowfall and spray are very different effects, our method can detect both as adverse weather, even when training with only one of the weather effects. This result shows that compared to the other approaches, the proposed point-wise energy-based outlier detection has a higher robustness to unseen adverse weather effects. This can also be observed in Fig. 4, where AWNet associates similar energy scores for both seen and unseen weather effects. This property is desirable for real-world applications where is not possible to cover all possible weather variations during training. A qualitative example of AWNet trained on Seman-ticSpray and tested on WADS is reported in Fig. 5-bottom. We also see that both voxel-based approaches (AWNet and Particle-VoxelNet) perform better than the CNN-based methods when training and testing on different sensors. For example, the WADS dataset was recorded using a 64 layers LiDAR whereas the SemanticSpray dataset with a 32 layers sensor. CNN-based methods rely on a range image projection which is dependent on both the sensor resolution and sampling rate. 
In contrast, voxel-based methods are less affected by the sensor properties since they use a fixed voxel partitioning. In our experiments, we also tested the generalizability of models trained on the DENSE dataset. However, similar to what [12] reported, the restricted field of view and the internal structure of the fog chamber yield poor performance on outdoor datasets with a 360° field of view.\nTraining With Simulated Data. All four methods perform well when training and testing with simulated fog data (NuScenes-Fog). When testing on snowfall (WADS), AWNet improves by 5.55% FPR95 points over the second-best method (Particle-VoxelNet). However, when testing on the SemanticSpray dataset, AWNet performs worse than Particle-VoxelNet in terms of AUPR, suggesting that there is still a margin for improvement.\nComparison Against Statistical Filters. In Tab. III, we show the comparison between our proposed method and the statistical filters. Our method largely outperforms both filters on all datasets. DROR and DSOR perform well on snow (their design task) and fog detection. When testing on spray, we observe lower performance instead. Unlike snowfall and fog, spray appears in dense clusters, making statistical properties like the number of neighbors less effective for point-wise filtering.\nSimultaneous Semantic Segmentation and Outlier Detection. In many applications, the segmentation of the inlier classes might be needed in addition to outlier detection. Our method is well suited for this task since, in addition to differentiating between inliers and outliers, we also learn the semantic segmentation of the inlier classes using ℓ_cls (4). We compare the performance of AWNet against Cylinder3D [10], a state-of-the-art semantic segmentation network, on the WADS dataset. Both networks are trained from scratch using only the WADS dataset. We train AWNet for 30 epochs using only ℓ_cls (4) and then fine-tune for 20 epochs using ℓ_total (3) with parameters m_in = -4.5, m_out = 0, and λ = 0.1. During inference, we determine the semantic class of a point by choosing the maximum softmax logit among the K + 1 outputs. In addition, we classify as snowfall all points x_i for which E(x_i; f) > τ, with τ = -0.25. We train Cylinder3D using the default parameters described in [10]. Additionally, we train the same network using our proposed approach (Cylinder3D-E). First, we train for 30 epochs with the default parameters; then we fine-tune for 20 epochs using both the original loss function and ℓ_energy (5) with parameters m_in = -5, m_out = 5, and λ = 1. For inference, we use the same approach described for AWNet with τ = 4.5. During training and evaluation, we also use the unlabeled class, since we observe that a small portion of the snowfall points is incorrectly labeled as such. We show the results in Tab. IV. Overall, Cylinder3D achieves a higher mIoU across the inlier classes than AWNet. However, on the adverse weather class (snowfall), AWNet outperforms Cylinder3D by 8.28% IoU points. Using our fine-tuning approach (Cylinder3D-E) improves the performance of Cylinder3D on the outlier class (snowfall) by 0.73% IoU points, showing that our method can be used with different network architectures.\nInference Time. We test the inference time of AWNet and the other methods on the SemanticSpray dataset using an NVIDIA RTX 2080 Ti.
To process a single LiDAR scan, AWNet takes on average 15.37 ms, Particle-VoxelNet 409.14 ms, Particle-UNet 4.32 ms, WeatherNet 4.18 ms, DROR 199.27 ms, and DSOR 71.50 ms. Although both CNN methods have faster inference, AWNet is still real-time capable given the 10-20 Hz acquisition frequency of common LiDAR sensors. Furthermore, compared to the previous voxel-based state-of-the-art approach (Particle-VoxelNet), AWNet has a more than 26-times faster inference time.\nDiscussion. Our approach demonstrates promising results in distinguishing between inliers and outliers when training and testing under similar weather conditions. Additionally, it offers good generalization performance for detecting unseen weather effects. However, our method has some limitations. For instance, when the point distributions of inliers and outliers are similar (e.g., snow and vegetation in Fig. 5-top), some inlier points incorrectly receive a high point energy score. This issue is further accentuated when testing under unseen weather conditions (Fig. 5-bottom)." }, { "figure_ref": [], "heading": "C. Ablation Studies", "publication_ref": [ "b5", "b5" ], "table_ref": [], "text": "Components Contribution. Tab. V shows the contribution of each component used when training AWNet on the WADS dataset. When we pretrain AWNet on inlier data, we see a small improvement in snowfall detection. When testing the same model on spray (unseen during training), we instead see an improvement of 0.73% AUROC, 7.54% AUPR, and 4.30% FPR95 points. This can be attributed to the pretrained model already having an internal representation of the inlier class, allowing for a better generalization to the outlier class during training. This result is important since most LiDAR perception datasets are recorded in good weather conditions. Our method shows that this large amount of data can be leveraged to improve the detection of adverse weather without the cost of additional labeled data. When we compare the results of AWNet trained from scratch on WADS and tested on SemanticSpray to Particle-VoxelNet, we still see an improvement of 1.98% AUROC, 16.39% AUPR, and 12.46% FPR95 points, showing that even without pretraining our method has better robustness to unseen weather effects. The point-wise energy loss weighting term described in (6) further improves the generalization performance of AWNet when testing on spray. In our experiments, we observed that in datasets with a large imbalance between inliers and outliers (WADS and NuScenes-Fog), the energy weighting term helps the model converge consistently during training. Finally, we see that both the AL framework and the SE attention modules increase performance when testing on the WADS and SemanticSpray datasets.\nDifferent Network Architectures. In Tab. VI, we test the performance of our proposed method using different network architectures. All models are trained from scratch, using the proposed loss function (3) and the class-wise weighting described in (6). We set λ = 0.01 and margin parameters m_in = -5, m_out = 5 in all experiments. The results indicate that our approach can improve performance when evaluating on both known and unknown adverse weather conditions. For instance, our method improves Particle-UNet's FPR95 score on WADS by 2.8% points. Similarly, Particle-VoxelNet's FPR95 score on SemanticSpray improves by 13.12% points. The performance gains can be attributed to the energy loss function, which provides additional supervision during training.
By penalizing energy outputs in the range [m_in, m_out], the network learns a more distinguishable representation of inlier and outlier points." }, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we propose a new method for detecting adverse weather effects in LiDAR point clouds. We reframe the task as an outlier detection problem and use the recently proposed energy-based outlier detection framework to robustly detect adverse weather points. Extensive experiments on datasets containing spray, rain, snow, and fog show that our method performs better than previous state-of-the-art methods. Furthermore, our proposed method has higher robustness to unseen weather effects, increasing its applicability in real-world settings. Finally, we contribute to the expansion of the critical research field of LiDAR perception in adverse weather conditions by releasing the SemanticSpray dataset, which contains labeled scenes of vehicle spray data in highway-like scenarios." } ]
2023-06-29
[ { "authors": "S Walz; M Bijelic; F Kraus; W Ritter; M Simon; I Doric", "journal": "IEEE", "ref_id": "b0", "title": "A benchmark for spray from nearby cutting vehicles", "year": "2021" }, { "authors": "M J Mirza; C Buerkle; J Jarquin; M Opitz; F Oboril; K.-U Scholl; H Bischof", "journal": "", "ref_id": "b1", "title": "Robustness of object detectors in degrading weather conditions", "year": "2021" }, { "authors": "C Linnhoff; L Elster; P Rosenberger; H Winner", "journal": "", "ref_id": "b2", "title": "Road spray in lidar and radar data for individual moving objects", "year": "2022" }, { "authors": "A Piroli; V Dallabetta; M Walessa; D A Meissner; J Kopp; K C J Dietmayer", "journal": "", "ref_id": "b3", "title": "Robust 3d object detection in cold weather conditions", "year": "2022" }, { "authors": "C Zhang; Z Huang; M H Ang; D Rus", "journal": "IEEE", "ref_id": "b4", "title": "Lidar degradation quantification for autonomous driving in rain", "year": "2021" }, { "authors": "R Heinzler; F Piewak; P Schindler; W Stork", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b5", "title": "Cnn-based lidar point cloud de-noising in adverse weather", "year": "2020" }, { "authors": "G Sebastian; T Vattem; L Lukic; C Bürgy; T Schumann", "journal": "IEEE", "ref_id": "b6", "title": "Rangeweathernet for lidar-only weather and road condition classification", "year": "2021" }, { "authors": "L Stanislas; J Nubert; D Dugas; J Nitsch; N Sünderhauf; R Siegwart; C Cadena; T Peynot", "journal": "Springer", "ref_id": "b7", "title": "Airborne particle classification in lidar point clouds using deep learning", "year": "2021" }, { "authors": "W Liu; X Wang; J Owens; Y Li", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b8", "title": "Energy-based out-ofdistribution detection", "year": "2020" }, { "authors": "X Zhu; H Zhou; T Wang; F Hong; Y Ma; W Li; H Li; D Lin", "journal": "", "ref_id": "b9", "title": "Cylindrical and asymmetrical 3d convolution networks for lidar segmentation", "year": "2021" }, { "authors": "L Ruff; R A Vandermeulen; N Görnitz; A Binder; E Müller; K.-R Müller; M Kloft", "journal": "", "ref_id": "b10", "title": "Deep semi-supervised anomaly detection", "year": "2020" }, { "authors": "J Egelhof; P Wolf; K Berns", "journal": "", "ref_id": "b11", "title": "Disturbance and particle detection in lidar data", "year": "2022" }, { "authors": "N Charron; S Phillips; S L Waslander", "journal": "IEEE", "ref_id": "b12", "title": "De-noising of lidar point clouds corrupted by snowfall", "year": "2018" }, { "authors": "A Kurup; J Bos", "journal": "", "ref_id": "b13", "title": "Dsor: A scalable statistical filter for removing falling snow from lidar point clouds in severe winter weather", "year": "2021" }, { "authors": "D Hendrycks; M Mazeika; T Dietterich", "journal": "", "ref_id": "b14", "title": "Deep anomaly detection with outlier exposure", "year": "2019" }, { "authors": "Y Tian; Y Liu; G Pang; F Liu; Y Chen; G Carneiro", "journal": "", "ref_id": "b15", "title": "Pixelwise energy-biased abstention learning for anomaly segmentation on complex urban driving scenes", "year": "2021" }, { "authors": "Z Liu; Z Wang; P P Liang; R R Salakhutdinov; L.-P Morency; M Ueda", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b16", "title": "Deep gamblers: Learning to abstain with portfolio theory", "year": "2019" }, { "authors": "J Hu; L Shen; G Sun", "journal": "", "ref_id": "b17", "title": "Squeeze-and-excitation networks", "year": "2018" }, { "authors": 
"S Shi; Z Wang; J Shi; X Wang; H Li", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b18", "title": "From points to parts: 3d object detection from point cloud with part-aware and partaggregation network", "year": "2020" }, { "authors": "M Hahner; C Sakaridis; D Dai; L Van Gool", "journal": "", "ref_id": "b19", "title": "Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather", "year": "2021" }, { "authors": "H Caesar; V Bankiti; A H Lang; S Vora; V E Liong; Q Xu; A Krishnan; Y Pan; G Baldan; O Beijbom", "journal": "", "ref_id": "b20", "title": "nuscenes: A multimodal dataset for autonomous driving", "year": "2020" }, { "authors": "D Hendrycks; K Gimpel", "journal": "", "ref_id": "b21", "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "year": "2017" }, { "authors": "D P Kingma; J Ba", "journal": "CoRR", "ref_id": "b22", "title": "Adam: A method for stochastic optimization", "year": "2014" } ]
[ { "formula_coordinates": [ 3, 105.28, 360.27, 193.52, 22.6 ], "formula_id": "formula_0", "formula_text": "E(x; f) = -\log \sum_{k \in \{1,\dots,K\}} e^{f(k|x)}, \quad (1)" }, { "formula_coordinates": [ 3, 102.44, 563.7, 147.92, 22.6 ], "formula_id": "formula_1", "formula_text": "E_i(p; f) = -\log \sum_{k \in \{1,\dots,K\}} e^{f_i(k|p)}. \quad (2)" }, { "formula_coordinates": [ 3, 392.11, 536.76, 165.89, 9.81 ], "formula_id": "formula_3", "formula_text": "\ell_{\mathrm{total}} = \ell_{\mathrm{cls}} + \lambda \ell_{\mathrm{energy}}. \quad (3)" }, { "formula_coordinates": [ 3, 323.46, 627.27, 234.54, 37.43 ], "formula_id": "formula_4", "formula_text": "\ell_{\mathrm{cls}} = \mathbb{E}_{x_i \in p}\left[-\log \frac{f_i(y_i|p)}{\sum_{k \in \{1,\dots,K\}} f_i(k|p)}\right], \quad y_i \in Y_{\mathrm{in}}, \quad (4)" }, { "formula_coordinates": [ 4, 64.7, 215, 234.11, 62.75 ], "formula_id": "formula_5", "formula_text": "\ell_{\mathrm{energy}} = \mathbb{E}_{x_i \in p}\left[\omega_{\mathrm{in}}^{-1} \max(0, E_i(p; f) - m_{\mathrm{in}})^2 \mid y_i \in Y_{\mathrm{in}}\right] + \mathbb{E}_{x_i \in p}\left[\omega_{\mathrm{out}}^{-1} \max(0, m_{\mathrm{out}} - E_i(p; f))^2 \mid y_i \in Y_{\mathrm{out}}\right] \quad (5)" }, { "formula_coordinates": [ 4, 63.93, 315.68, 231, 19.61 ], "formula_id": "formula_6", "formula_text": "\omega_{\mathrm{in}} = 1 + \sum_{y_i \in y} \mathbb{1}_{y_i \in Y_{\mathrm{in}}}, \quad \omega_{\mathrm{out}} = 1 + \sum_{y_i \in y} \mathbb{1}_{y_i \in Y_{\mathrm{out}}}. \quad (6)" }, { "formula_coordinates": [ 4, 346.89, 370.12, 211.11, 24.22 ], "formula_id": "formula_8", "formula_text": "g_i(p; \tau; f) = \begin{cases} \text{inlier} & \text{if } E_i(p; f) \le \tau, \\ \text{outlier} & \text{otherwise.} \end{cases} \quad (7)" } ]
Energy-based Detection of Adverse Weather Effects in LiDAR Data
Autonomous vehicles rely on LiDAR sensors to perceive the environment. Adverse weather conditions like rain, snow, and fog negatively affect these sensors, reducing their reliability by introducing unwanted noise into the measurements. In this work, we tackle this problem by proposing a novel approach for detecting adverse weather effects in LiDAR data. We reformulate the problem as an outlier detection task and use an energy-based framework to detect outliers in point clouds. More specifically, our method learns to associate low energy scores with inlier points and high energy scores with outliers, allowing for robust detection of adverse weather effects. In extensive experiments, we show that our method performs better in adverse weather detection and has higher robustness to unseen weather effects than previous state-of-the-art methods. Furthermore, we show how our method can be used to perform simultaneous outlier detection and semantic segmentation. Finally, to help expand the research field of LiDAR perception in adverse weather, we release the SemanticSpray dataset, which contains labeled vehicle spray data in highway-like scenarios.
Aldi Piroli; Vinzenz Dallabetta; Johannes Kopp; Marc Walessa; Daniel Meissner; Klaus Dietmayer
[ { "figure_caption": "2 BMW2Fig. 1.We propose a method for detecting adverse weather effects in LiDAR data based on energy outlier detection. In the figure, we show a scene from the RoadSpray[3] dataset, where both a leading vehicle and the ego vehicle are driving on a wet surface, generating trailing spray. Our model is trained to associate low energy scores (darker colors) with inliers and high energy scores (lighter colors) with outliers, allowing for robust classification of adverse weather effects.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Given an input point cloud p, we aim to detect if a point x i ∈ p is caused by adverse weather. We reframe the problem as an outlier detection task and use the proposed point energy score to detect outlier points. During training, we minimize the loss function ℓ total (3), which results in the model f (p) associating a low energy score with inlier points and a high energy score with outliers. This creates an energy gap between the two categories, which can be used to select a classification threshold τ . The top-right plot shows an example of the energy gap between inlier and outlier points on the test set of the WADS [14] dataset when training f (p) on snowy conditions (WADS training set). During inference, points are classified as outliers (red points in the bottom-right plot) if their energy score is greater than the threshold τ .", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 4.Point energy score distributions of AWNet trained on the SemanticSpray dataset and tested on the SemanticSpray (top) and WADS (bottom) datasets. Although the network is trained only on spray data, the model associates similar energy scores to snow and spray points. This highlights the robustness of our method to unseen weather effects since a common classification threshold τ can be chosen to classify both effects.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "and Fig. 5-top show qualitative results of the point energy score output of AWNet trained and tested on vehicle spray and snow respectively.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. The top row shows an example of AWNet trained and tested on the WADS dataset. The bottom row shows instead an example of AWNet trained on SemanticSpray and tested on WADS. On the left column, we show the semantic ground truth labels, whereas on the right, the output point energy score where lighter colors represent inlier scores and brighter colors outliers. The ground truth classes are represented with the following colors: • unlabeled, • car, • road, • other-ground, • building, • vegetation, • other-obstacle, • snowfall.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "RESULTS OF ADVERSE WEATHER DETECTION WHEN TRAINING AND TESTING ON THE SAME WEATHER EFFECT. VALUES ARE IN PERCENTAGE. 
↑ MEANS HIGHER VALUES ARE BETTER, AND ↓ THAT LOWER VALUES ARE BETTER.", "figure_data": "MethodDtestAUROC ↑ AUPR ↑FPR95 ↓WADS97.1897.1712.62SemanticSpray99.6999.240.31Particle-UNet [8]NuScenes-Fog99.9899.980.02DENSE99.6198.960.68average99.1198.843.41WADS98.7198.482.02SemanticSpray99.1698.960.70Particle-VoxelNet [8]NuScenes-Fog99.8899.870.15DENSE92.0282.0137.57average97.4494.8310.11WADS97.4096.6810.59SemanticSpray99.4695.472.88WeatherNet [6]NuScenes-Fog99.8999.600.04DENSE99.4998.901.94average99.0697.663.86WADS98.2696.891.24SemanticSpray99.8499.250.02AWNet (ours)NuScenes-Fog99.9999.980.01DENSE99.2797.550.77average99.3498.420.51", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "{13, 14, 17, 20, 23, 26, 30, 34, 35, 36} / {11, 12, 15, 16, 18, 22, 24, 28, 37, 76}.", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "RESULTS OF ADVERSE WEATHER DETECTION WHEN TRAINING ON AN ADVERSE WEATHER EFFECT AND TESTING ON AN UNSEEN (DURING TRAINING) WEATHER EFFECT. FOR EACH RESULT, DTRAIN REPRESENTS THE TRAINING SET, AND DTEST THE TEST SET. VALUES ARE IN PERCENTAGE. ↑ MEANS HIGHER VALUES ARE BETTER, AND ↓ THAT LOWER VALUES ARE BETTER.", "figure_data": "MethodD trainDtestAUROC ↑ AUPR ↑FPR95 ↓WADSSemanticSpray84.3352.1459.59SemanticSpray WADS74.5763.7570.17Particle-UNet [8]NuScenes-FogSemanticSpray61.9455.4083.22NuScenes-FogWADS93.3092.7435.58average78.5366.0162.14WADSSemanticSpray96.1369.4918.15SemanticSpray WADS93.0290.1423.10Particle-VoxelNet [8]NuScenes-FogSemanticSpray87.5482.0928.57NuScenes-FogWADS95.3295.0610.07average93.0084.1919.97WADSSemanticSpray94.4071.7233.94SemanticSpray WADS81.5868.5358.64WeatherNet [6]NuScenes-FogSemanticSpray85.1757.3465.12NuScenes-FogWADS88.8387.0058.99average87.4971.1554.17WADSSemanticSpray99.3093.610.86SemanticSpray WADS96.7491.937.19AWNet (ours)NuScenes-FogSemanticSpray96.2970.5221.24NuScenes-FogWADS97.3595.264.52average97.4287.838.45", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "OF OUR PROPOSED METHOD AGAINST STATISTICAL FILTERS. OUR METHOD IS TRAINED AND TESTED ON THE SAME DATASET. VALUES ARE IN PERCENTAGE. ↑ MEANS HIGHER VALUES ARE BETTER, AND ↓ THAT LOWER VALUES ARE BETTER.", "figure_data": "MethodDtestPrecision ↑ Recall ↑WADS85.4888.11SemanticSpray50.6864.24DROR [13]NuScenes-Fog70.0084.09DENSE57.9267.65average66.0276.02WADS76.9288.85SemanticSpray51.5756.58DSOR [14]NuScenes-Fog75.4789.72DENSE64.5669.76average67.1376.23WADS91.1195.45SemanticSpray87.4999.32AWNet (ours)NuScenes-Fog99.2299.84DENSE91.4395.73average92.3197.58", "figure_id": "tab_4", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "SEGMENTATION EVALUATION WHEN TRAINING AND TESTING ON WADS. VALUES ARE IN PERCENTAGE.", "figure_data": "Modelunlabeledcarroadother-groundIoU building vegetationother-obstaclesnowfallmIoUCylinder3D [10]29.7663.33 51.3162.3468.4153.3215.4178.7552.83Cylinder3D-E [10] + ours32.5865.29 47.7661.7166.8452.0917.1279.4852.86AWNet (ours)12.4456.43 58.0660.7767.6460.0617.3287.0352.47", "figure_id": "tab_5", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "STUDIES OF OUR PROPOSED METHOD WHEN TRAINING ON WADS. VALUES ARE IN PERCENTAGE. 
↑ MEANS HIGHER VALUES ARE BETTER, AND ↓ THAT LOWER VALUES ARE BETTER.", "figure_data": "DtestPretrained ModelWeighted Energy LossALSE AttentionAUROC ↑ AUPR ↑FPR95 ↓--✓✓97.9796.751.33WADS✓ ✓-✓✓ -✓ ✓98.23 98.0897.05 96.541.22 1.40✓✓✓-98.1396.671.28✓✓✓✓98.2696.891.24--✓✓98.1185.885.69SemanticSpray✓ ✓-✓✓ -✓ ✓98.84 98.9693.42 87.821.39 3.36✓✓✓-98.6090.892.53✓✓✓✓99.3093.610.86", "figure_id": "tab_6", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "OF OUR PROPOSED LOSS FUNCTION TO DIFFERENT NETWORK ARCHITECTURES. ALL MODELS ARE TRAINED FROM SCRATCH ON THE WADS DATASET. VALUES ARE IN PERCENTAGE. ↑ MEANS HIGHER VALUES ARE BETTER, AND ↓ THAT LOWER VALUES ARE BETTER.", "figure_data": "MethodDtestAUROC ↑AUPR ↑ FPR95 ↓Particle-UNet [8]97.6297.099.82Particle-VoxelNet [8]WADS98.7298.352.19WeatherNet [6]97.5196.438.66Particle-UNet [8]87.4352.5653.37Particle-VoxelNet [8]SemanticSpray98.4888.485.03WeatherNet [6]93.7772.1531.60", "figure_id": "tab_7", "figure_label": "VI", "figure_type": "table" } ]
[{"Category": "Data Source", "Citation": "[1], [2]", "Explanation": "The cited works provide the basis for understanding the performance of LiDAR sensors in adverse weather conditions, which is essential for the citing paper to address the problem of LiDAR sensitivity to such conditions."}, {"Category": "Supporting Evidence", "Citation": "[4], [5], [6], [7], [8]", "Explanation": "The cited works provide a foundation for the development of approaches to address the problem of LiDAR sensitivity to adverse weather conditions, which the citing paper builds upon in its research."}, {"Category": "Methodological Basis", "Citation": "[3]", "Explanation": "The cited work, RoadSpray, provides a dataset that the citing paper uses to train a model for detecting adverse weather effects in LiDAR data based on energy outlier detection."}, {"Category": "Extension or Continuation", "Citation": "[6]", "Explanation": "The cited work by Heinzler et al. proposes a semantic segmentation network to classify adverse weather on a point-wise level. The citing paper builds upon this work by testing the model on data collected in a weather chamber, but also recognizes the limitations of the approach in real-world scenarios with complex dynamic scenarios under different weather conditions."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work provides the energy-based outlier detection framework that the citing paper adapts to the point cloud domain, serving as the methodological basis for the point-wise detection of adverse weather conditions."}, {"Category": "Supporting Evidence", "Citation": "[10]", "Explanation": "The cited work [10] is a state-of-the-art network for semantic segmentation that the citing paper compares their method with, providing a basis for evaluating the performance of the method in detecting points generated by adverse weather effects."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work [3] is the RoadSpray dataset that the citing paper uses to provide semantic labels for the dataset, which is a key data source for the research on LiDAR in adverse weather."}, {"Category": "Extension or Continuation", "Citation": "[1]", "Explanation": "The cited work [1] is a study on the effects of spray points in high-speed scenarios, which the citing paper builds upon by discussing the challenges posed by the large number of spray points in LiDAR measurements and the potential for perception systems to fail in extreme cases."}, {"Category": "Methodological Basis", "Citation": "[1]", "Explanation": "The study by Walz et al. on the effect of vehicle spray on object detectors provides a methodological basis for the citing paper to understand the impact of weather conditions on LiDAR and camera-based object detection systems."}, {"Category": "Data Source", "Citation": "[5]", "Explanation": "The method proposed by Zhang et al. for estimating the degree of degradation in LiDAR scans in rainy conditions serves as a data source for the citing paper to benchmark the performance of their own method in a similar context."}, {"Category": "Data Source", "Citation": "[7]", "Explanation": "The method proposed by Sebastian et al. for classifying weather and road surface conditions in LiDAR data is a data source for the citing paper to assess the performance of their own method in a similar context."}, {"Category": "Data Source", "Citation": "[6]", "Explanation": "The use of a lightweight CNN architecture by Heinzler et al. 
to detect rainfall and fog on a point-wise level is a data source for the citing paper to understand the state-of-the-art in point-wise weather detection methods."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work provides a method for generalizing the proposed approach to real-world scenarios, which the citing paper adopts to improve the performance of the system in dynamic environments."}, {"Category": "Extension or Continuation", "Citation": "[8]", "Explanation": "The cited work proposes a CNN and voxel-based approach for detecting airborne particles in robotic applications, which the citing paper tests on adverse weather conditions to expand the research on the detection of snow, rain, fog, and spray."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work proposes a filter for detecting snow points in point clouds, which the citing paper utilizes to improve the performance of the system in adverse weather conditions."}, {"Category": "Supporting Evidence", "Citation": "[14]", "Explanation": "The cited work, DSOR, is mentioned as a method that improves the performance of DROR by considering the mean distance between neighbors. This suggests that the citing paper is building upon the work of [14] to further enhance the performance of the method."}, {"Category": "Methodological Basis", "Citation": "[15]", "Explanation": "The cited work introduces the outlier exposure method, which the citing paper adopts to train a network with an auxiliary dataset of outliers to generalize to unseen anomalies."}, {"Category": "Methodological Basis", "Citation": "[11]", "Explanation": "The cited work uses a similar approach to the cited work, redefining the loss function to map inlier data inside a hypersphere and outliers outside it, which the citing paper leverages in their research."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work introduces the energy function, which the citing paper uses in their work for the point-wise detection of adverse weather effects in LiDAR data."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work uses energy-based methods for anomaly detection on a pixel level, which the citing paper builds upon for the point-wise detection of adverse weather effects in LiDAR data."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work proposes the abstention learning paradigm, where a model is trained to abstain from making a classification when it is not certain that the input is an inlier, which the citing paper adopts in their research for the point-wise detection of adverse weather effects in LiDAR data."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work introduces the idea of adding an additional class to the last classification layer of a network for uncertainty estimation, which the citing paper adopts in their research to extend their proposed network."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work provides the energy function and energy gap concept that the citing paper adopts to process 3D inputs on a point-wise level and classify inliers and outliers using an energy threshold."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work introduces the concept of squeeze-excitation attention layers, which the citing paper adopts in the design of the Ad-verseWeatherNet (AWNet) model for 
adverse weather detection."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work presents a sparse convolutions backbone that the citing paper adopts for feature extraction in the point cloud processing stage."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work provides the NLL loss function and the energy score concept, which the citing paper adopts in the model to associate low energy scores with inlier inputs and learn semantic segmentation of inlier classes."}, {"Category": "Data Source", "Citation": "[18]", "Explanation": "The cited work introduces the squeeze excitation (SE) attention layers, which the citing paper uses in the stack of three SE attention layers to weight the channels of each voxel in the AWNet architecture."}, {"Category": "Methodological Basis", "Citation": "[19]", "Explanation": "The cited work proposes the 3D sparse convolution used in the AWNet architecture to extract features and classify each voxel in the point cloud using two fully connected (FC) layers."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work introduces the concept of hinge loss function and energy gap to differentiate inlier and outlier points, which the citing paper adopts to address the problem of unbalanced data in point-wise anomaly detection during training."}, {"Category": "Methodological Basis", "Citation": "[9], [16]", "Explanation": "The cited works provide a finetuning approach for training the model, which the citing paper adopts in its own research to improve the classification of inlier and outlier points in a point cloud."}, {"Category": "Data Source", "Citation": "[3]", "Explanation": "The cited work, RoadSpray dataset, serves as the basis for the creation of the SemanticSpray dataset in the citing paper. 
The data is used to perform large-scale testing of the effect of vehicle spray on various sensors, which is the starting point for the data labeling process in the SemanticSpray dataset."}, {"Category": "Data Source", "Citation": "[13]", "Explanation": "The cited work is used as a data source for the evaluation of the approach in the citing paper, which is related to the detection of adverse weather in LiDAR data."}, {"Category": "Data Source", "Citation": "[14]", "Explanation": "The cited work is also used as a data source for the evaluation of the approach in the citing paper, focusing on the detection of adverse weather in LiDAR data."}, {"Category": "Data Source", "Citation": "[8]", "Explanation": "The cited work is used as a data source for the evaluation of the approach in the citing paper, specifically in the context of airborne particle detection in LiDAR data."}, {"Category": "Data Source", "Citation": "[20]", "Explanation": "The cited work is used to augment the NuScenes dataset with fog, providing a more complex and challenging scenario for the proposed method to test on."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The NuScenes dataset is the source of the data used in the simulation method proposed in the cited work, which is then used to augment the dataset and create a new version (NuScenes-Fog) for the proposed method to test on."}, {"Category": "Methodological Basis", "Citation": "[22], [15], [9], [16]", "Explanation": "The evaluation metrics used in the cited works are adopted in the citing paper to assess the performance of the proposed method in the outlier detection task."}, {"Category": "Data Source", "Citation": "[21]", "Explanation": "The NuScenes dataset is used as a training set for the AWNet model in the cited work, providing a foundational data source for the research conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[23]", "Explanation": "The use of the Adam optimizer in the training process of the AWNet model is adopted from the cited work, serving as a methodological basis for the research conducted in the citing paper."}, {"Category": "Data Source", "Citation": "[12]", "Explanation": "The cited work is mentioned in the context of the DENSE dataset, and the citing paper acknowledges the results reported in the cited work regarding the field of view and internal structure of the fog chamber."}, {"Category": "Extension or Continuation", "Citation": "(NuScenes-Fog)", "Explanation": "The cited work is used to train and test the four methods on simulated fog data, and the citing paper extends the research by evaluating the performance of the methods on the NuScenes-Fog dataset."}, {"Category": "Extension or Continuation", "Citation": "(WADS)", "Explanation": "The cited work is used to test the generalizability of the models trained on the DENSE dataset, and the citing paper extends the research by evaluating the performance of the methods on the WADS dataset."}, {"Category": "Extension or Continuation", "Citation": "(SemanticSpray)", "Explanation": "The cited work is used to test the performance of the methods on the SemanticSpray dataset, and the citing paper extends the research by evaluating the performance of the methods on this dataset as well."}, {"Category": "Extension or Continuation", "Citation": "[10]", "Explanation": "The citing paper extends the research of Cylinder3D by testing the performance of AWNet against the state-of-the-art semantic segmentation network on the WADS dataset."}, 
{"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work provides the default training parameters for the Cylinder3D model, which the citing paper adopts in their own research to train the model using the proposed approach."}]
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b49", "b51", "b16", "b3", "b61", "b56", "b80", "b84", "b9", "b15", "b43", "b84", "b85", "b40", "b68", "b80", "b40", "b43", "b80", "b85", "b85", "b43", "b40", "b80", "b64", "b77", "b86", "b86", "b64", "b77", "b0", "b54", "b58", "b80" ], "table_ref": [], "text": "Appearance-based gaze estimation is a promising solution to indicate human user attention in various settings with a single webcam as the input device, such as human- robot interaction [50,52], social interaction [17,34], and entertainment [14,62]. Machine learning-based methods have been evolving from convolutional neural networks (CNN) [57,81,85] to vision transformer (ViT) [10,16] for more robust performance under the in-the-wild usage setting. There is also a trend to include the face image instead of just the eye region [44,85]. Despite their effectiveness, it is well known that data-driven methods are prone to be over-fitted to data bias to lose their generalization ability across environments, thus, are limited in real-world applications.\nIn the research community, this issue has often been analyzed quantitatively in cross-domain evaluations with training and testing on different datasets [86]. There is a significant performance drop if the trained model is tested on different domains regarding personal appearances, head poses, gaze directions, and lighting conditions [41,69,81]. To improve the robustness of the deep estimation model, many large-scale datasets have been collected and contributed to the community [22,41,44,81,86]. In-the-wild setting datasets achieved diverse lighting [86] and large subject scale [44], while controlled settings can capture images with extreme head pose and gaze direction [41,81]. However, acquiring gaze labels is costly, and constructing a comprehensive training dataset covering all conditions is not trivial.\nAnother line of work created synthetic data as additional training data [65,78,87]. Although synthetic images from gaze redirection can be used to augment existing training data [87], the label accuracy from the learning-based generative model is not good enough as training data alone, especially for large angles. Learning-by-synthesis using 3D graphics has been proven effective in eye-only gaze estimation [65,73,78]. However, the domain gap between real and synthetic images is difficult to fill [63], and no effective pipeline has been proposed to synthesize and adapt full-face images as training data.\nIn this work, we propose tackling the goal of generalizable appearance-based gaze estimation by leveraging data synthesis and the domain adaptation approach. As shown in the top of Fig. 1, we perform single-image 3D face reconstruction to synthesize data for large head poses and extend the gaze direction ranges. Based on the synthetic data, we propose a novel unsupervised domain adaptation framework combining disentangled representation learning and a self-training strategy. Our proposed disentangling autoencoder (DisAE) structure is first trained on the synthetic source domain for learning gaze representation expected to better generalize to unseen domains. The model is then trained on unlabeled target domains in a self-training manner [1,48,54,55,76]. Based on the characteristics of our synthetic data, we propose to use background-switching data augmentation consistency loss for the synthetic-real domain adaptation. 
Experiments with multiple target datasets show that the proposed pipeline significantly improves performance from the source dataset before reconstruction. The single-image face reconstruction in our approach is accessible for most real-world settings, yet the multi-view face reconstruction could pose an upper bound in terms of performance. We also analyze in detail how far single-view reconstruction can approach the training accuracy of multiview reconstruction. This manuscript is based on our previous publication [59], and parts of the text and figures are reused from the previous version. The major changes are as follows. First, we fully update the model training pipeline by introducing DisAE and a self-training strategy for unsupervised domain adaptation. Second, we added an experiment using multi-view reconstruction data from the ETH-XGaze datasets [81] to analyze the upper-bound performance of synthetic training. This results in re-calibrated camera parameters for the ETH-XGaze dataset, promoting future multi-view gaze estimation tasks.\nIn summary, the contributions of this work are threefold.\n(i) We are the first to propose a novel approach using single-view 3D face reconstruction to create synthetic training data for appearance-based gaze estimation.\nWe utilize the property of the synthetic data to perform the background switching for image appearance argumentation.\n(ii) We propose a novel unsupervised domain adaptation approach combining feature disentanglement and selftraining strategy. Experiments show that the proposed method is particularly effective in addressing synthetic-real domain gaps.\n(iii) We provide a detailed comparison with multi-view face reconstruction to analyze the single-view performance. We release the re-calibrated camera extrinsic parameters for the ETH-XGaze dataset to facilitate further research." }, { "figure_ref": [], "heading": "Related works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Appearance-Based Gaze Estimation", "publication_ref": [ "b29", "b65", "b64", "b65", "b77", "b78", "b83", "b10", "b12", "b28", "b55", "b8", "b43", "b82", "b84", "b19", "b22", "b63", "b84", "b85", "b84", "b85", "b43", "b43", "b40", "b80" ], "table_ref": [], "text": "While traditional model-based gaze estimation relies on 3D eyeball models and geometric features [26,30], appearance-based methods [66] use a direct mapping from the image to the gaze direction, enabling their use in a wider range of settings and with less hardware dependency. Previous work on appearance-based gaze estimation has mostly used single eye [65,66,72,[77][78][79]84] or two-eye images as the input [11,13,29,56]. Recent works using full-face input [9,44,83,85] have shown higher robustness and accuracy of gaze estimation than the eye-only methods.\nTo alleviate the data hunger of deep learning methods, multiple datasets have been proposed. Most of the gaze datasets usually collected the data in indoor environments that lack variant lighting conditions [20,23,36,64]. Later works switched to in-the-wild data collection to cover variant lighting conditions [85,86]. However, these datasets have limited ranges of head pose and gaze directions due to the data collection devices such as the laptop [85,86], cellphone [44], and tablet [36,44]. Recent datasets have fur-ther extended diversity in head pose and environment conditions [41,81]. 
However, acquiring training datasets that meet the requirements for head pose and appearance variations in the deployment environment still requires significant effort. In this work, we extend the head pose ranges of source datasets with full-face synthetic data for the gaze estimation task." }, { "figure_ref": [], "heading": "Learning-by-Synthesis for Gaze Estimation", "publication_ref": [ "b64", "b73", "b32", "b77", "b86" ], "table_ref": [], "text": "Previous studies have created synthetic training data for the gaze estimation task to bypass the burden of real-world data collection. One direction is to use multi-view stereo reconstruction [65]. However, the multi-view setup has the drawback that the environment is limited to the laboratory conditions. Another group of methods used hand-crafted computer graphics models to generate the samples with arbitrary head poses, gaze directions, and lighting conditions [73,74]. Unfortunately, these generated samples from the graphics models have a non-negligible domain gap between the synthesis and realisim. Gaze redirection has been proposed to generate synthetic data for the personal gaze estimator training [33,78,87]. However, these approaches cannot guarantee that the generated samples have exactly the target gaze label. Alternatively, this work uses a singleimage 3D face reconstruction approach for accurate data synthesis, enabling us to generate synthetic training data with higher realism and precision than previous methods." }, { "figure_ref": [], "heading": "Domain Gap in Gaze Estimation", "publication_ref": [ "b55", "b78", "b28", "b44", "b55", "b24", "b74", "b6", "b54", "b30", "b41", "b44", "b70", "b44" ], "table_ref": [], "text": "The cross-domain gap is a significant challenge in appearance-based gaze estimation, and it becomes more critical with synthetic data. To tackle this, previous works either improved the generalizability by devising better gaze representation learning from the source domain [8, 56,79] or directly used target domain data with unsupervised domain adaptation [29,45,48]. For instance, disentangling transforming encoder-decoder [56] separates the features to get more domain-variant gaze features. PureGaze [8] extracts the purified gaze features out of the image feature using a self-adversarial learning strategy. However, domain generalization in gaze estimation remains challenging due to numerous influencing factors. This study is intended to synthesize data tailored to the head pose distribution of the target domain and primarily consider adaptation rather than generalization.\nUnsupervised domain adaptation has succeeded in tasks like classification and segmentation [25,35,75], but only limited work has focused on regression tasks [7,[53][54][55], of which gaze estimation is particularly challenging. Sim-GAN [63] adapts the synthetic training data to be similar to real target images before training, while recent methods are focusing more on directly adapting the model by target domain using self-supervised learning [31,37,42,45]. Liu et al. [48] proposed a framework using collaborative learning that adapts to the target domain with only very few images guided by outlier samples. Some methods pinpointed some specific issues such as jitter samples [47] and in-plane rotation inconsistency [3] and developed specific self-supervised learning strategies to address them. Gaze-CLR [38] leveraged multi-view consistency as the underlying principle of the proposed contrastive learning framework. Wang et al. 
[71] proposed a contrastive learning framework based on an assumption of the similarity between gaze labels and features. LatentGaze [45] leveraged generative adversarial networks to transform the target domain to the source domain for easier estimation. In summary, most of the previous work only focused on adapting a wide range to a narrow range, and the effectiveness on synthetic source data has not been evaluated. Thus, we propose a gaze estimation model that specifically learns better gaze representation from synthetic data and adapts to the real domain using unlabeled data." }, { "figure_ref": [], "heading": "3D Face Reconstruction", "publication_ref": [ "b50", "b79", "b88", "b67", "b45", "b57", "b5", "b27", "b89", "b90", "b39", "b60" ], "table_ref": [], "text": "There has been significant progress in the monocular 3D face reconstruction techniques in recent years [92]. Despite the fact that reconstructed 3D faces have also been used to augment face recognition training data [51,80,89], no prior work has explored its usage in full-face appearance-based gaze estimation yet. Most of the methods based on 3D morphable models [15,68] approximate facial textures via the appearance basis [5, 46,58] that the appearances of the eye region can be distorted. To preserve accurate gaze labels after reconstruction, the proposed data synthesis approach utilizes 3D face reconstruction methods that sample texture directly from the input image [4, 6,18,27,28,90,91]. In addition, since many prior works rely on orthogonal or weak perspective projection models, we also investigate how to precisely align the reconstruction results with the source camera coordinate system.\nIn addition to the monocular-based methods, there are multi-view stereo methods that produce a better 3D geometry [40] and Neural Radiance Field (NeRF) that represents the scene implicitly as a radiance field to achieve fine-level details. However, these methods require a long processing time and a large number of views, and they are sensitive to low-quality images [61]." }, { "figure_ref": [], "heading": "Novel-View Synthesis", "publication_ref": [], "table_ref": [], "text": "Our proposed method consists of two parts: data synthesis and synthetic-real domain adaptation. In this section, we introduce the data synthesis process with 3D face reconstruction to generate samples with a large range of head poses. To fill the gap between the synthesis and the real sample, we propose a domain-adaptive gaze estimation model described in Sec. 4. Figure 2. Overview of the data synthesis pipeline. We assume that 3D face reconstruction methods generate facial meshes under an orthogonal projection model. We then convert the mesh via the proposed projective matching to align with the ground-truth gaze position in the input camera coordinate system." }, { "figure_ref": [], "heading": "Overview", "publication_ref": [], "table_ref": [], "text": "Fig. 2 shows the overview of our data synthesis pipeline. Given an ordinary single-view gaze dataset, we apply 3D face reconstruction on each sample to synthesize face images with novel head poses while preserving accurate gaze direction annotations. We adopt a simple 3D face reconstruction to create the 3D face mesh in the pixel coordinate system, and propose a transformation process, named projective matching, to obtain the 3D face mesh in the camera coordinate system. Finally, 2D face images can be rendered using camera perspective projection with the 3D face mesh." 
}, { "figure_ref": [], "heading": "3D Face Reconstruction", "publication_ref": [ "b43", "b80", "b83", "b40", "b5", "b18", "b38", "b89" ], "table_ref": [], "text": "We assume that the source gaze dataset consists of 1) face images, 2) the projection matrix (intrinsic parameters) C of the camera, and 3) the 3D gaze target position g ∈ R 3 in the camera coordinate system. Most of the existing gaze datasets satisfy our requirements [44,81,84], and yaw-pitch annotations can also be converted assuming a distance to the dummy target [41]. State-of-the-art learning-based 3D face reconstruction methods usually take a cropped face patch as input and output a 3D facial mesh, which is associated with the input image in an orthographic projection way. Without loss of generality, we assume that the face reconstruction method takes a face bounding box defined with center (c x , c y ), width w b , and height h b in pixels and then resized to a fixed input size by factor (s x , s y ). The reconstructed facial mesh is defined as a group of N vertices V p = {v\n(i) p } N i=0 . Each vertex is represented as v (i) p = [u (i) , v (i) , d (i)\n] ⊤ in the right-handed coordinate system, where u and v directly correspond to the pixel locations in the input face patch and d is the distance to the u-v plane in the same pixel unit. This representation has been used by recent works [6,19,27,39,90], and we can convert arbitrary 3D representation to it by projecting the reconstructed 3D face onto the input face patch.\nOur goal is to convert the vertices of the reconstructed 3D face V p to another 3D representation V c = {v\n(i) c } N i=0 where each vertex v (i) c = [x (i) , y (i) , z (i) ]\n⊤ is in the original camera coordinate system so that it can be associated with the gaze annotation g. In this way, the gaze target location can also be represented in the facial mesh coordinate system, and we can render the facial mesh under arbitrary head or camera poses together with the ground-truth gaze direction information." }, { "figure_ref": [ "fig_2" ], "heading": "Projective Matching", "publication_ref": [ "b81", "b84", "b20" ], "table_ref": [], "text": "Projective matching, in a nutshell, is to approximate parameters for transforming the V p to V c such that V c matches the perspective projection.\nIn detail, since u and v of each reconstructed vertex v p are assumed to be aligned with the face patch coordinate system, v c must be on the back-projected ray as\nv c = λ C -1 p o ||C -1 p o || = λ C -1 T -1 p ||C -1 T -1 p|| ,(1)\nwhere\np o = [u o , v o , 1] ⊤ and p = [u, v, 1]\n⊤ indicates the pixel locations in the original image and the face patch in the homogeneous coordinate system, respectively, and\nT =   s x 0 -s x (c x -w 2 ) 0 s y -s y (c y -h 2 ) 0 0 1   (2)\nrepresents the cropping and resizing operation to create the face patch, i.e., p = T p o . The scalar λ indicates scaling along the back-projection ray and physically means the distance between the camera origin and v c . Since Eq. ( 1) does not explain anything about d, our task can be understood as finding λ, which also maintains the relationship between u, v, and d. Therefore, as illustrated in Fig. 3, we propose to define λ as a function of d as λ = αd+ β. α indicates a scaling factor from the pixel to physical (e.g., millimeter) unit, and β is the bias term to align αd with the camera coordinate system. Please note that α and β are constant parameters determined for each input image and applied to all vertices from the same image. 
We first fix α based on the distance between two eye centers (midpoints of two eye corner landmarks) compared to a physical reference 3D face model. 3D face reconstruction methods usually require facial landmark detection as a pre-processing step. Thus we can naturally assume that we know the corresponding vertices in V p to the eye corner landmarks. We use a 3D face model with 68 landmarks (taken from the OpenFace library [2]) as our reference. We set α = l r /l p , where l p and l r are the eye-center distances in V p and in the reference model, respectively.\nWe then determine β by aligning the reference landmark depth in the camera coordinate system. In this work, we use the face center as a reference, which is defined as the centroid of the eyes and the mouth corner landmarks, following previous works on full-face gaze estimation [82,85]. We use the same face center as the origin of the gaze vector through the data normalization and the gaze estimation task.\nWe approximate β as the distance between the groundtruth 3D reference location and the scaled/reconstructed location as β = ||v|| -α d. d is the reconstructed depth values computed as the mean of six landmark vertices corresponding to the eye and mouth corner obtained in a similar way as when computing α. v is the centroid of the 3D locations of the same six landmarks in the camera coordinate system, which are obtained by minimizing the projection error of the reference 3D model to the 2D landmark locations using the Perspective-n-Point (PnP) algorithm [21]." }, { "figure_ref": [ "fig_3" ], "heading": "Training Data Synthesis", "publication_ref": [ "b87", "b85", "b80", "b59" ], "table_ref": [], "text": "With the 3D face mesh V c in the original camera coordinate system, we can render 2D face images under arbitrary head poses with the ground-truth gaze vector. To render a face image in a new camera coordinate system defined by the extrinsic parameters R e , t e , we project the vertex v c and gaze target position g onto the new system by applying the transformation R e v c + t e and R e g + t e , respectively. To render a face image from a source head pose R s , t s to a target head pose R t , t t , we transform the vertices and gaze position by applying the transformation R t (R s ) -1 (v c -t s ) + t t , similar for g.\nIn this work, we further augment the synthetic images in terms of lighting conditions and background appearances by virtue of the flexible synthetic rendering. We set the background to random colors or scenes by modifying the blending settings. Although most 3D face reconstruction methods do not reconstruct lighting and albedo, we maximize the diversity of rendered images by controlling the global illumination. In the PyTorch3D renderer, the ambient color [r, g, b] represents the ambient light intensity, ranging from 0 to 1, in which 1 is the default value for full lighting. For weak-light images, we set them to be a random value between 0.25 and 0.75. Overall, among all generated images, the ratio of black, random color, and random scene are set to 1:1:3, and half of them are weak lighting. Random scene images are taken from the Places365 dataset [88], and we apply blurring to them before rendering faces. Fig. 4 shows examples of the synthesized images using MPIIFaceGaze [86] and ETH-XGaze [81].\nIn the experiments, we applied 3DDFA [27] to reconstruct 3D faces from the source dataset. After projective matching, we rendered new images using the PyTorch3D library [60]." 
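As a concrete illustration of the projective matching step above, the sketch below converts reconstructed patch-space vertices [u, v, d] into the camera coordinate system using the back-projected ray and the scaling λ = αd + β. The landmark bookkeeping (eye-center and eye/mouth-corner indices), variable names, and the assumption that the PnP-estimated face-center centroid is given are our own simplifications, not the authors' released code.

    # Hypothetical NumPy sketch of projective matching; an illustrative
    # approximation under stated assumptions, not the reference implementation.
    import numpy as np

    def projective_matching(V_p, C, T, l_ref, eye_center_idx, corner_idx, v_bar):
        # V_p: (N, 3) reconstructed vertices [u, v, d] in face-patch pixels.
        # C: (3, 3) camera intrinsics; T: (3, 3) crop-and-resize matrix (p = T p_o).
        # l_ref: eye-center distance of the reference 3D face model.
        # eye_center_idx: indices of the two eye-center vertices (assumed known).
        # corner_idx: indices of the six eye/mouth-corner landmark vertices.
        # v_bar: (3,) centroid of the same six landmarks from PnP, in camera coords.
        l_p = np.linalg.norm(V_p[eye_center_idx[0]] - V_p[eye_center_idx[1]])
        alpha = l_ref / l_p                           # pixel-to-physical scale
        d_bar = V_p[corner_idx, 2].mean()             # mean reconstructed landmark depth
        beta = np.linalg.norm(v_bar) - alpha * d_bar  # aligns scaled depth with the camera

        uv1 = np.c_[V_p[:, :2], np.ones(len(V_p))]    # homogeneous patch coordinates
        rays = (np.linalg.inv(C) @ np.linalg.inv(T) @ uv1.T).T   # back-projected rays
        rays /= np.linalg.norm(rays, axis=1, keepdims=True)
        lam = alpha * V_p[:, 2] + beta                # lambda = alpha * d + beta
        return lam[:, None] * rays                    # (N, 3) vertices V_c in camera coords

Vertices obtained this way can then be moved to a new head pose via R_t (R_s)^{-1} (v_c - t_s) + t_t and rendered with a standard perspective camera, as described above for the PyTorch3D-based rendering.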
}, { "figure_ref": [], "heading": "Domain Adaptation with Feature Disentanglement", "publication_ref": [ "b44", "b58", "b54" ], "table_ref": [], "text": "The data synthesis presented in the Sec. 3 can render realistic face regions with accurate gaze labels, and these generated samples could be directly used to train a model for the cross-domain task. However, there is still an image appearance gap between the synthetic and real samples. In particular, the influence of the background, hair, clothing, and other non-face areas of the synthesized images on the gaze estimation model cannot be ignored. To fill the gap, we propose a gaze estimation framework that can adjust to the target domain by unsupervised domain adaptation. However, gaze-unrelated features, such as image appearance and head pose, make the adaptation unstable [45,59]. To avoid being disrupted by the unrelated features, we first devise the disentangling auto-encoder (DisAE) (Sec. 4.1) to separate the gaze-related features during the supervising training with the synthetic data from the source domain. Then, we further adapt the DisAE to the target domain in a selftraining approach [54,55]. Since our synthetic source domain has random images as the background (Sec. 3.4), we propose to use background-switching consistency loss on the target domain as one of the self-training objectives." }, { "figure_ref": [], "heading": "Disentangling Auto-Encoder", "publication_ref": [ "b86" ], "table_ref": [], "text": "As the base architecture for adaptation, we propose a disentangling auto-encoder to separate the gaze-unrelated features to reduce their influences on gaze estimation. Specifically, as shown in the top of Fig. 5, we propose to use an encoder-decoder architecture to disentangle appearance, head, and gaze embeddings. For prediction, we use an MLP to predict head pose ĥ, and a vision transformer to predict gaze direction ĝ. Finally, all features are concatenated and fed into a decoder to reconstruct the image. After this feature disentanglement, the extracted features are strongly correlated to gaze and ease the influence from other gazeunrelated features, and the pre-trained model is expected to be more suited to domain adaptation.\nTo ensure the features are disentangled, we prepare three additional subnets denoted as ψ a , a face recognition network that predicts appearance embeddings, ψ h for predicting head pose, and ψ g for predicting gaze [87], all having a ResNet-18 structure. The three subnets are trained on the source domain, and then we train the DisAE using the losses conducted by these three subnets. Please note that we only use the source domain's synthetic data to pre-train the DisAE. As shown in the top of Fig. 5, the pre-training loss mainly consists of two components, encoding-decoding and feature swapping." }, { "figure_ref": [], "heading": "Encoding-Decoding", "publication_ref": [], "table_ref": [], "text": "Encoding-decoding losses are defined between each input image and the output (decoded image and estimation result) from the DisAE architecture. Reconstruction loss is defined as L rec = |I -Î| 1 , where Î is the reconstructed image and I is the ground-truth image. Estimator loss is the commonly used ℓ1 loss on the gaze labels g defined as the pitch and yaw dimensions. In addition to gaze loss, we will calculate the head pose loss with the dataset's head pose label h. 
Taken together, the estimator loss is defined as L est = |g -ĝ| 1 + λ head |h -ĥ| 1 , where ĝ and ĥ are the predicted gaze and head directions.\nGaze consistency loss aims to make the gaze features insensitive to changes in the appearance features. We add N (0, 0.1) random noise to the appearance features before feeding them into the decoder. The reconstructed image Î with noise is expected to have the same gaze direction as the original image I. Therefore, we use the pre-trained gaze subnet ψ g to compute a gaze consistency loss between the two images as L gc = |ψ g (I) -ψ g ( Î)| 1 ." }, { "figure_ref": [], "heading": "Feature Swapping", "publication_ref": [], "table_ref": [], "text": "Feature swapping losses are defined between image pairs whose disentangled features are swapped after passing through the encoder. Feature exchange consistency loss is introduced to enable the disentangling of the face appearance from the geometric gaze and head pose features. In detail, we swap the appearance embeddings between two samples I 1 and I 2 and decode them into Ĩ1 and Ĩ2 . Then we constrain their gaze and head pose to be the same, as illustrated in the top of Fig. 5. The loss is formulated as\nL ex (I 1 , I 2 ) = ψ |ψ(I 1 ) -ψ( Ĩ1 )| 1 + |ψ(I 2 ) -ψ( Ĩ2 )| 1 ,\n(3) where the sum runs over the head and gaze subnets ψ, and the total loss for pre-training DisAE is the sum of all the above losses\nL source = L est + λ rec L rec + λ gc L gc + λ ex L ex .\n(4)" }, { "figure_ref": [], "heading": "Self-Training on Target Domains", "publication_ref": [ "b87" ], "table_ref": [], "text": "After the source-domain-only supervised training of DisAE, we leverage the unlabeled target-domain data using an augmentation consistency loss, inspired by the wide use of data augmentation in self-training [48,54,55]. The basic idea is that we apply data augmentations on unlabeled target-domain images and train the model so that its output is consistent before and after the augmentation. These data augmentations tune the gaze-related features in DisAE on the target domain without the gaze label.\nThe augmentation consistency loss is defined as the ℓ1 loss between the gaze prediction of the original target image and that of the augmented target images as\nL aug = λ bg L bg + λ flip L flip , where L bg = |ĝ t -ĝ bg t | 1 and L flip = |ĝ t -ĝ flip t | 1 .\nAs one of the data augmentations, we propose a background-switching augmentation loss, which is particularly suitable for the synthetic data. Specifically, we obtain the facial region mask from the detected landmarks and change the background to random images from the Places365 dataset [88], which should not alter the gaze direction. Since our synthetic source domain has random background regions, there is no gaze-relevant information in non-face regions, and the pre-trained DisAE is expected to focus on face regions. By training the model in the target domain so that gaze estimation remains consistent across images with swapped backgrounds, the model can be adapted to take advantage of this property.
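Putting the pre-training terms above together (Eq. 3 and Eq. 4), a compact sketch could look as follows, reusing the DisAE-like module sketched earlier. The frozen subnets psi_g and psi_h play the roles of ψ g and ψ h; swapping appearance embeddings by rolling the batch and the decode helper are simplifying assumptions of this example.

```python
import torch
import torch.nn.functional as F

def disae_pretrain_loss(model, psi_g, psi_h, img, gaze, head,
                        lam_head=0.5, lam_rec=1.0, lam_gc=1.0, lam_ex=1.0):
    """Sketch of the source-domain pre-training loss (Eq. 4) for a DisAE-like model."""
    z_app, z_head, z_gaze, h_hat, g_hat, rec = model(img)

    l_rec = F.l1_loss(rec, img)                                         # L_rec
    l_est = F.l1_loss(g_hat, gaze) + lam_head * F.l1_loss(h_hat, head)  # L_est

    # Gaze consistency: perturb the appearance embedding with N(0, 0.1) noise,
    # decode, and require the frozen gaze subnet to predict the same gaze.
    rec_noisy = model.decode(z_app + 0.1 * torch.randn_like(z_app), z_head, z_gaze)
    l_gc = F.l1_loss(psi_g(rec_noisy), psi_g(img))                      # L_gc

    # Feature exchange: swap appearance embeddings between samples (roll the batch),
    # then require the frozen head/gaze subnets to give unchanged predictions.
    rec_swap = model.decode(torch.roll(z_app, shifts=1, dims=0), z_head, z_gaze)
    l_ex = (F.l1_loss(psi_g(rec_swap), psi_g(img)) +
            F.l1_loss(psi_h(rec_swap), psi_h(img)))                     # L_ex

    return l_est + lam_rec * l_rec + lam_gc * l_gc + lam_ex * l_ex      # L_source
```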
Since most geometric augmentations may alter the results of gaze estimation, we only consider flipping as another data augmentation. We flip the target image horizontally and negate the yaw value of the corresponding gaze direction.\nIn addition to the augmentation consistency loss, we use similar losses to those in the pre-training process. We still compute the same gaze consistency loss L gc for the target domain as L gc = |ψ g (I t ) -ψ g ( Ît )| 1 . Since head pose labels can be obtained through the data normalization process even for the unlabeled target-domain images, we define the estimator loss as\nL est = |g s -ĝs | 1 + λ head (|h s -ĥs | 1 + |h t -ĥt | 1 ),\nwhere ĝs and ĥs are the predicted gaze and head directions from the source-domain input I s , respectively, and ĥt is the predicted head direction of the target-domain input I t .\nConsequently, the total loss for the adaptation stage is defined as\nL adapt = L est + λ gc L gc + L aug .\n(5)" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "The face recognition subnet ψ a is trained using a triplet loss [70], and the estimation subnets ψ h and ψ g are both trained using an ℓ1 loss. For the source-only training stage of the DisAE, all subnets ψ a , ψ h , and ψ g are trained fully supervised with source-domain data. For the target-domain adaptation stage, target-domain data is also used for fully supervised training of the face recognition subnet ψ a and the head subnet ψ h , while we use the same gaze subnet ψ g trained on the source domain. We train the DisAE using the Adam optimizer [43] for 12 epochs, setting the learning rate to 0.001 with a decay factor of 0.1 every five epochs. Apart from the target sample flipping in Sec. 4.2, we also horizontally flip the images of the whole source training set to alleviate the inconsistent accuracy between horizontally symmetric images.\nDuring adaptation, we randomly draw 2,000 samples from the target-domain datasets to adapt the model for ten epochs, and the final result is the average of five random-seed repetitions. For the coefficients, we empirically set λ head = 0.5, λ rec = 1.0, λ ex = 1.0, λ gc = 1.0. For the augmentation consistency losses, the DisAE instances shown in the bottom of Fig. 5 share the same weights. Each type of augmentation loss is computed separately and added together, and we set λ bg = 0.5 and λ flip = 0.5." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We first verify the effectiveness of the angle range extension of the synthesized training data presented in Sec. 3 through data extrapolation experiments. We then evaluate the proposed DisAE model for reducing the synthetic-to-real domain gap with an ablation study. Finally, we also compare multiple reconstruction methods in terms of their effect on the final gaze estimation performance." }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b84", "b80", "b55", "b40", "b11", "b80", "b81", "b80", "b20" ], "table_ref": [], "text": "MPIIFaceGaze [85] consists of over 38,000 images of 15 subjects with varying lighting conditions. Since we only use this dataset for data synthesis, we selected images with frontal head poses, i.e., both the pitch and yaw angles of the head pose are smaller than 15°. To ensure that the number of samples from each subject is balanced for training, we randomly down-sampled or up-sampled the number of images of each subject to 1,500.
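A corresponding sketch of the adaptation objective (Eq. 5) is shown below. The pre-computed augmented views (a background-switched and a horizontally flipped copy of each target image) and the [pitch, yaw] ordering of the gaze output are assumptions of this example rather than guaranteed details of the original implementation.

```python
import torch
import torch.nn.functional as F

def adaptation_loss(model, psi_g, img_s, gaze_s, head_s,
                    img_t, head_t, img_t_bg, img_t_flip,
                    lam_head=0.5, lam_gc=1.0, lam_bg=0.5, lam_flip=0.5):
    """Sketch of the adaptation objective (Eq. 5). img_t_bg is the target image with a
    replaced background and img_t_flip its horizontal flip, both prepared beforehand."""
    *_, h_s, g_s, _ = model(img_s)              # source: supervised gaze and head pose
    *_, h_t, g_t, rec_t = model(img_t)          # target: head pose from data normalization

    l_est = F.l1_loss(g_s, gaze_s) + lam_head * (F.l1_loss(h_s, head_s) +
                                                 F.l1_loss(h_t, head_t))

    # Gaze consistency on the target domain via the frozen gaze subnet psi_g
    l_gc = F.l1_loss(psi_g(rec_t), psi_g(img_t))

    # Augmentation consistency: background switching and horizontal flipping
    g_bg = model(img_t_bg)[-2]
    g_flip = model(img_t_flip)[-2]
    g_flip = torch.stack([g_flip[:, 0], -g_flip[:, 1]], dim=1)   # negate yaw of the flip
    l_aug = lam_bg * F.l1_loss(g_t, g_bg) + lam_flip * F.l1_loss(g_t, g_flip)

    return l_est + lam_gc * l_gc + l_aug        # L_adapt
```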
ETH-XGaze [81] contains over one million images of 110 subjects under varying head poses. We follow the official evaluation protocol and use the public evaluation server to retrieve the test results. EYEDIAP [22] consists of over four hours of video data, using continuous screen targets (CS) or 3D floating object targets (FT). We treated the screen-target and floating-target subsets separately and sampled one image every five frames from the VGA videos using the pre-processing provided by Park et al. [56]. Gaze360 [41] consists of indoor and outdoor images of 238 subjects with wide ranges of head poses and gaze directions. We followed the preprocessing of Cheng et al. [12], which excluded cases with invisible eyes, resulting in 84,902 images.\nWe apply the data normalization scheme commonly used in appearance-based gaze estimation [81,82] to all datasets. We also directly render the 3D facial mesh in the normalized camera space. Unless otherwise noted, we follow the ETH-XGaze dataset [81] and set the virtual camera's focal length to 960 mm and the distance from the camera origin to the face center to 300 mm. Face images are rendered at 448 × 448 pixels and down-scaled to 224 × 224 pixels before being fed into CNNs. The 3D head pose is obtained by fitting a 6-landmark 3D face model to the 2D landmark locations provided by the datasets, using the PnP algorithm [21]." }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b9", "b66", "b23" ], "table_ref": [], "text": "We compare our method with several state-of-the-art gaze estimation methods. Besides the simple yet strong baseline ResNet [32], Gaze-TR [10] is one of the state-of-the-art backbone architectures for single-image gaze estimation. It first extracts gaze features with a ResNet and feeds the feature maps into a transformer encoder followed by an MLP to output the gaze directions. PureGaze [8] first extracts image features using a ResNet, followed by an MLP for gaze estimation and decoding blocks for image reconstruction. PnP-GA [48] is a domain adaptation model using the Mean-Teacher [67] structure and can be applied to many existing architectures. We follow the original implementation, which only uses 10 target samples for adaptation. DANN [24] includes a gradient reversal layer and a domain classifier on top of the backbone, forcing the model to learn invariant features from the source and target domains." }, { "figure_ref": [], "heading": "Data Extrapolation", "publication_ref": [], "table_ref": [], "text": "We explore the most practical setting, data extrapolation, i.e., extending head poses and gaze directions from small ranges using synthesized data samples. For this purpose, we extend the source MPIIFaceGaze dataset to a similar head pose distribution as the target ETH-XGaze and EYEDIAP datasets, respectively. Note that we use the training set of ETH-XGaze as the target head pose distribution. We use the head pose values obtained through the data normalization process, and each source image is reconstructed and rendered with 16 new head poses randomly chosen from the target dataset. To avoid extreme profile faces with fully occluded eyes, we discarded the cases whose pitch-yaw vector's ℓ2-norm is larger than 80° during the data synthesis. As a result, MPIIFaceGaze is extended to three synthetic datasets targeting ETH-XGaze, EYEDIAP CS, and EYEDIAP FT, each with 360,000 images. We refer to these datasets as MPII-NV."
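The pose-sampling rule used for this extrapolation (16 target head poses per source image, rejecting near-profile poses whose pitch-yaw ℓ2-norm exceeds 80°) can be summarized by a short sketch; the array layout and function name below are assumptions made for illustration.

```python
import numpy as np

def sample_target_head_poses(target_poses_deg, n_per_image=16, max_norm_deg=80.0, seed=0):
    """target_poses_deg: (M, 2) normalized head poses (pitch, yaw) in degrees drawn
    from the target dataset. Returns n_per_image poses for one source image, after
    rejecting extreme profile poses."""
    rng = np.random.default_rng(seed)
    poses = np.asarray(target_poses_deg, dtype=float)
    valid = poses[np.linalg.norm(poses, axis=1) <= max_norm_deg]   # drop extreme profiles
    idx = rng.choice(len(valid), size=n_per_image, replace=False)
    return valid[idx]
```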
}, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Comparison of Datasets", "publication_ref": [ "b40" ], "table_ref": [], "text": "We evaluate how our data synthesis approach improves performance compared to other baseline training datasets. As a real-image baseline, we compare the Gaze360 [41] dataset which mostly covers a wide gaze range. The head pose and gaze distributions of the source and target real datasets (blue) and the synthetic datasets (green) are shown in Fig. 6, together with the gaze distribution of Gaze360 (head pose is not provided). Since we synthesize the data based on head pose distribution, it can be seen that the gaze distribution does not exactly match the target but only roughly overlaps.\nThe comparison of training data under several baseline models is presented in Tab. 1 and numbers represent the angular error. We compare different SOTA models in terms of gaze estimation performances when training on different training sets. From the table, we can see all models trained on our synthesized dataset MPII-NV, achieve the best performances on both ETH-XGaze and EYEDIAP-CS compared to other training datasets. Note the MPII-NV is purely an extension of the original MPIIFaceGaze with large head poses. The significant improvements from models trained on MPII-NV over MPIIFaceGaze indicate that our synthetic data pipeline can produce useful data for cross-dataset training. For the EYEDIAP-FT, better performance was obtained when using real data ETH-XGaze. One hypothesis is that EYEDIAP FT has a larger offset between gaze and head pose due to the use of physical gaze targets, such that our data synthesized based on head pose cannot fully reproduce the target gaze distribution (Fig. 6)." }, { "figure_ref": [], "heading": "Comparison of Models", "publication_ref": [], "table_ref": [], "text": "With the synthetic data, we evaluate the proposed DisAE for both cross-dataset and unsupervised domain adaptation settings. In Tab. 2, we fix the MPII-NV as a training dataset to compare DisAE with SOTA methods. The top block of Tab. 2 shows performances of cross-dataset evaluation without adaptation, i.e., all methods are only trained with source-domain samples. We can see that DisAE outperforms the three baseline models across all test datasets, showing the advantage of the feature disentanglement even without domain adaptation. The bottom block of Tab. 2 shows the domain adaptation with unlabeled samples from the target test sets. We can observe that our proposed DisAE model successfully adapts to all target domains, showing superior estimation errors in the last row.\nTo evaluate the effectiveness of combining DisAE with our self-training strategy, we apply the same augmentation consistency adaptation to the Res-18 networks and refer to it as Res18 + aug in Tab. 2. This baseline achieves worse results than the DisAE showing that the unsupervised domain adaptation is difficult to be handled with simple data augmentation due to the gaze-unrelated features that exist in face images. Conversely, our proposed DisAE effectively alleviates this issue by focusing on gaze-related features, enhancing the model's adaptability to the target domain. Furthermore, we apply DANN on the DisAE by feeding the disentangled gaze features into the gradient reverse layer and domain classifier for a domain classification loss. As in DisAE + DANN, though the DANN does not show remarkable effects on most of the test sets, DisAE demonstrates more stable adapting performance compared to the basic ResNet. 
" }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "We first conduct an ablation study on the source domain data augmentation with image flipping. As shown in the top block of Tab. 3, DisAE shows lower error than the nonflipping setting on all test datasets, proofing that flipping (doubling) the training data is a valuable and simple approach to deal with the inconsistency in the symmetric images.\nFor the domain adaptation stage, we examine individual loss terms based on Eq. 5. Note that we separately explore the flipping augmentation (L flip ) and background replacement augmentation (L bg ). From the table, we can see that all proposed losses can gradually improve the accuracy on most of the test datasets. The L gc causes a negative effect only on EYEDIAP FT, probably because of the low resolution and the float point existence of the EYEDIAP FT dataset. " }, { "figure_ref": [], "heading": "Analysis of Data Quality", "publication_ref": [ "b80" ], "table_ref": [], "text": "In this section, we aim to compare the face reconstruction performance between our proposed single-view and a more complex multi-view method. The primary goal of this comparison is to establish an upper bound of performance when using synthetic data for training gaze estimation models, given that the multi-view synthetic data offers higher photorealism. In addition, we compare multiple methods of single-view reconstructions to analyze the impact of reconstruction performance on model accuracy. We reconstruct the frontal-camera images of ETH-XGaze [81] and rotate them exactly to the other cameras, resulting in the synthetic version of ETH-XGaze, denoted as XGazeF-NV." }, { "figure_ref": [ "fig_6" ], "heading": "Multi-view Reconstruction", "publication_ref": [ "b80", "b48" ], "table_ref": [], "text": "As ETH-XGaze [81] is a camera-synchronized dataset, we implement multi-view reconstruction using the Agisoft Metashape software [49]. Since reconstruction quality under dark environments drops extensively, we only reconstruct the full-light frames.\nThrough preliminary analysis, it is confirmed that there is a discrepancy between the external parameters provided by the dataset and the actual camera images. This is possibly due to the drift of the camera position after camera calibration, and the discrepancy varies for each subject. Therefore, we first use Metashape to optimize the camera extrinsic parameters for each subject. The Metashape optimization takes the 18 images of each frame of the target subject as input, as well as the fixed intrinsic parameters provided by the dataset. This provides updated extrinsic parameters for each frame, and we discard frames whose camera position diverges by more than 10 mm from the raw average position. We use the average value of the other remaining frames as the final extrinsic parameters for the subject.\nWe then use the recalibrated extrinsic parameters and the original intrinsic parameters to perform the multi-view face reconstruction. Based on the reconstruction results, we render and denote the dataset as MVS-XGazeF-NV. Random factors for background augmentation are kept the same, and sample images from these reconstruction methods are shown in Fig. 7." }, { "figure_ref": [ "fig_6" ], "heading": "Comparision of Reconstruction Methods", "publication_ref": [ "b57" ], "table_ref": [], "text": "We compare the multi-view face reconstruction with SOTA single-view methods 3DDFA [27] and DECA [18]. 
As the simplest additional baseline, Landmarks Fitting, we fit the BFM model [58] to the 68 detected 2D facial landmarks to obtain a facial shape, whose texture is given by the RGB values obtained by projecting the shape onto the original image.\nWe separate the 80 ETH-XGaze training subjects into four folds to perform a leave-one-fold-out evaluation with a ResNet-18 model. We compare the performance of models trained on synthetic data generated by the different face reconstruction methods. The average errors are shown in Tab. 4. From the top three-row block, we observe that the three single-view methods produce similar performance, and we attribute this to the similar appearance and texture (resolution) among these single-view methods.\nOn the other hand, as expected, the high-quality MVS-XGazeF-NV shows the lowest error, which is even close to the performance of training on the real ETH-XGaze shown in the last two rows, and represents an upper bound of synthetic training. Qualitatively, there are image quality differences between the single-view methods and multi-view stereo in Fig. 7, such as the artifacts around the eyebrows, which may cause the above performance gap. However, despite performing worse than the multi-view method, single-view methods are easier to deploy in real-world applications without complicated multi-view synchronization. More importantly, there is potential for reducing the gap between single-view and multi-view methods. As shown in the fifth and second rows of Tab. 4, the proposed background augmentation reduces the error of XGazeF-NV-3DDFA, which shows that appearance diversity is helpful for the generation of training data. Besides the background augmentation, refining the reconstruction quality, especially the texture, has the potential to push the single-view reconstruction methods closer to the performance of the upper-bound multi-view methods." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This work presents an effective data synthesis pipeline and an unsupervised domain adaptation approach for full-face appearance-based gaze estimation. Our approach utilizes 3D face reconstruction to synthesize novel-head-pose training datasets while keeping accurate gaze labels via projective matching. The proposed DisAE model can learn gaze-related features from the synthetic data and thus can effectively generalize to other domains. The DisAE can be further adapted to target domains through the self-training process using the proposed background-switching consistency loss. Through experiments, we show that the generated synthetic data can benefit model training, and our approach achieves better performance than existing SOTA methods in cross-dataset and unsupervised domain adaptation evaluations. Furthermore, the experiments also verified that synthetic data can reach performance comparable to real data, pointing out the potential of synthetic training in future work." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work was supported by JSPS KAKENHI Grant Number JP21K11932." } ]
2023-05-25
[ { "authors": "Massih-Reza Amini; Vasilii Feofanov; Loic Pauletto; Emilie Devijver; Yury Maximov", "journal": "", "ref_id": "b0", "title": "Self-training: A survey", "year": "2022" }, { "authors": "Tadas Baltrusaitis; Amir Zadeh; Yao ; Chong Lim; Louis-Philippe Morency", "journal": "", "ref_id": "b1", "title": "Openface 2.0: Facial behavior analysis toolkit", "year": "2018" }, { "authors": "Yiwei Bao; Yunfei Liu; Haofei Wang; Feng Lu", "journal": "", "ref_id": "b2", "title": "Generalizing gaze estimation with rotation consistency", "year": "2022-06" }, { "authors": "David W Peter N Belhumeur; David J Jacobs; Neeraj Kriegman; Kumar", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b3", "title": "Localizing parts of faces using a consensus of exemplars", "year": "2013" }, { "authors": "Volker Blanz; Thomas Vetter", "journal": "", "ref_id": "b4", "title": "A morphable model for the synthesis of 3d faces", "year": "1999" }, { "authors": "Adrian Bulat; Georgios Tzimiropoulos", "journal": "", "ref_id": "b5", "title": "How far are we from solving the 2d & 3d face alignment problem? (and a dataset of 230,000 3d facial landmarks)", "year": "2017" }, { "authors": "Xinyang Chen; Sinan Wang; Jianmin Wang; Mingsheng Long", "journal": "", "ref_id": "b6", "title": "Representation subspace distance for domain adaptation regression", "year": "2021" }, { "authors": "Yihua Cheng; Yiwei Bao; Feng Lu", "journal": "", "ref_id": "b7", "title": "Puregaze: Purifying gaze feature for generalizable gaze estimation", "year": "2022" }, { "authors": "Yihua Cheng; Shiyao Huang; Fei Wang; Chen Qian; Feng Lu", "journal": "", "ref_id": "b8", "title": "A coarse-to-fine adaptive network for appearancebased gaze estimation", "year": "2020" }, { "authors": "Yihua Cheng; Feng Lu", "journal": "", "ref_id": "b9", "title": "Gaze estimation using transformer", "year": "2022" }, { "authors": "Yihua Cheng; Feng Lu; Xucong Zhang", "journal": "", "ref_id": "b10", "title": "Appearancebased gaze estimation via evaluation-guided asymmetric regression", "year": "2018" }, { "authors": "Yihua Cheng; Haofei Wang; Yiwei Bao; Feng Lu", "journal": "", "ref_id": "b11", "title": "Appearance-based gaze estimation with deep learning: A review and benchmark", "year": "2021" }, { "authors": "Yihua Cheng; Xucong Zhang; Feng Lu; Yoichi Sato", "journal": "IEEE Trans. 
Image Process", "ref_id": "b12", "title": "Gaze estimation by exploring two-eye asymmetry", "year": "2020" }, { "authors": "Florin Peter M Corcoran; Stefan Nanu; Petronel Petrescu; Bigioi", "journal": "IEEE Transactions on Consumer Electronics", "ref_id": "b13", "title": "Real-time eye gaze tracking for gaming design and consumer electronics systems", "year": "2012" }, { "authors": "Yu Deng; Jiaolong Yang; Sicheng Xu; Dong Chen; Yunde Jia; Xin Tong", "journal": "", "ref_id": "b14", "title": "Accurate 3d face reconstruction with weakly-supervised learning: From single image to image set", "year": "2019" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b15", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "J Nathan; Emery", "journal": "Neuroscience & biobehavioral reviews", "ref_id": "b16", "title": "The eyes have it: the neuroethology, function and evolution of social gaze", "year": "2000" }, { "authors": "Yao Feng; Haiwen Feng; Michael J Black; Timo Bolkart", "journal": "TOG", "ref_id": "b17", "title": "Learning an animatable detailed 3d face model from in-the-wild images", "year": "2021-07" }, { "authors": "Yao Feng; Fan Wu; Xiaohu Shao; Yanfeng Wang; Xi Zhou", "journal": "", "ref_id": "b18", "title": "Joint 3d face reconstruction and dense alignment with position map regression network", "year": "2018" }, { "authors": "Tobias Fischer; Hyung ; Jin Chang; Yiannis Demiris", "journal": "", "ref_id": "b19", "title": "Rtgene: Real-time eye gaze estimation in natural environments", "year": "2018" }, { "authors": "Martin A Fischler; Robert C Bolles", "journal": "Commun. 
ACM", "ref_id": "b20", "title": "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography", "year": "1981" }, { "authors": "Kenneth Alberto; Funes Mora; Florent Monay; Jean-Marc Odobez", "journal": "", "ref_id": "b21", "title": "Eyediap: A database for the development and evaluation of gaze estimation algorithms from rgb and rgb-d cameras", "year": "2014" }, { "authors": "Kenneth Alberto; Funes Mora; Florent Monay; Jean-Marc Odobez", "journal": "", "ref_id": "b22", "title": "Eyediap: A database for the development and evaluation of gaze estimation algorithms from rgb and rgb-d cameras", "year": "2014" }, { "authors": "Yaroslav Ganin; Victor Lempitsky", "journal": "", "ref_id": "b23", "title": "Unsupervised domain adaptation by backpropagation", "year": "2015" }, { "authors": "Huan Gao; Jichang Guo; Guoli Wang; Qian Zhang", "journal": "", "ref_id": "b24", "title": "Cross-domain correlation distillation for unsupervised domain adaptation in nighttime semantic segmentation", "year": "2022" }, { "authors": "Elias Daniel; Guestrin ; Moshe Eizenman", "journal": "IEEE TBE", "ref_id": "b25", "title": "General theory of remote gaze estimation using the pupil center and corneal reflections", "year": "2006" }, { "authors": "Jianzhu Guo; Xiangyu Zhu; Zhen Lei", "journal": "", "ref_id": "b26", "title": "3ddfa", "year": "2018" }, { "authors": "Jianzhu Guo; Xiangyu Zhu; Yang Yang; Fan Yang; Zhen Lei; Stan Z Li", "journal": "", "ref_id": "b27", "title": "Towards fast, accurate and stable 3d dense face alignment", "year": "2020" }, { "authors": "Zidong Guo; Zejian Yuan; Chong Zhang; Wanchao Chi; Yonggen Ling; Shenghao Zhang", "journal": "", "ref_id": "b28", "title": "Domain adaptation gaze estimation by embedding with prediction consistency", "year": "2020" }, { "authors": "Dan Witzner; Hansen ; Qiang Ji", "journal": "IEEE Trans. Pattern Anal. Mach. 
Intell", "ref_id": "b29", "title": "In the eye of the beholder: A survey of models for eyes and gaze", "year": "2010" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b30", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b31", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Zhe He; Adrian Spurr; Xucong Zhang; Otmar Hilliges", "journal": "", "ref_id": "b32", "title": "Photo-realistic monocular gaze redirection using generative adversarial networks", "year": "2019" }, { "authors": "Leonard R Philip S Holzman; Deborah L Proctor; Nicholas J Levy; Herbert Y Yasillo; Stephen W Meltzer; Hurt", "journal": "Archives of general psychiatry", "ref_id": "b33", "title": "Eye-tracking dysfunctions in schizophrenic patients and their relatives", "year": "1974" }, { "authors": "Jiaxing Huang; Dayan Guan; Aoran Xiao; Shijian Lu; Ling Shao", "journal": "", "ref_id": "b34", "title": "Category contrast for unsupervised domain adaptation in visual tasks", "year": "2022" }, { "authors": "Qiong Huang; Ashok Veeraraghavan; Ashutosh Sabharwal", "journal": "Machine Vision and Applications", "ref_id": "b35", "title": "Tabletgaze: dataset and analysis for unconstrained appearance-based gaze estimation in mobile tablets", "year": "2017" }, { "authors": "Ashish Jaiswal; Ramesh Ashwin; Mohammad Zaki Babu; Debapriya Zadeh; Fillia Banerjee; Makedon", "journal": "Technologies", "ref_id": "b36", "title": "A survey on contrastive self-supervised learning", "year": "2020" }, { "authors": "Swati Jindal; Roberto Manduchi", "journal": "", "ref_id": "b37", "title": "Contrastive representation learning for gaze estimation", "year": "2022" }, { "authors": "Amin Jourabloo; Xiaoming Liu", "journal": "", "ref_id": "b38", "title": "Pose-invariant 3d face alignment", "year": "2015" }, { "authors": "Berk Kaya; Suryansh Kumar; Francesco Sarno; Vittorio Ferrari; Luc Van Gool", "journal": "", "ref_id": "b39", "title": "Neural radiance fields approach to deep multi-view photometric stereo", "year": "2022" }, { "authors": "Petr Kellnhofer; Adria Recasens; Simon Stent; Wojciech Matusik; Antonio Torralba", "journal": "", "ref_id": "b40", "title": "Gaze360: Physically unconstrained gaze estimation in the wild", "year": "2019" }, { "authors": "Adnan Khan; Sarah Albarri; Muhammad Arslan; Manzoor ", "journal": "IEEE", "ref_id": "b41", "title": "Contrastive self-supervised learning: a survey on different architectures", "year": "2022" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "ICLR", "ref_id": "b42", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Kyle Krafka; Aditya Khosla; Petr Kellnhofer; Harini Kannan; Suchendra Bhandarkar; Wojciech Matusik; Antonio Torralba", "journal": "", "ref_id": "b43", "title": "Eye tracking for everyone", "year": "2016" }, { "authors": "Isack Lee; Jun-Seok Yun; Hee Hyeon Kim; Youngju Na; Bong Seok; Yoo", "journal": "", "ref_id": "b44", "title": "Latentgaze: Cross-domain gaze estimation through gaze-aware analytic latent code manipulation", "year": "2022" }, { "authors": "Tianye Li; Timo Bolkart; Michael J Black; Hao Li; Javier Romero", "journal": "", "ref_id": "b45", "title": "Learning a model of facial shape and expression from 4D scans", "year": "2017" }, { "authors": "Ruicong Liu; Yiwei Bao; Mingjie Xu; Haofei Wang; Yunfei Liu; Feng 
Lu", "journal": "", "ref_id": "b46", "title": "Jitter does matter: Adapting gaze estimation to new domains", "year": "2022" }, { "authors": "Yunfei Liu; Ruicong Liu; Haofei Wang; Feng Lu", "journal": "", "ref_id": "b47", "title": "Generalizing gaze estimation with outlier-guided collaborative adaptation", "year": "2021" }, { "authors": "", "journal": "Agisoft LLC", "ref_id": "b48", "title": "Agisoft metashape", "year": "2022" }, { "authors": "Päivi Majaranta; Andreas Bulling", "journal": "", "ref_id": "b49", "title": "Eye tracking and eyebased human-computer interaction", "year": "2014" }, { "authors": "Iacopo Masi; Tal Hassner; Anh Tuan Tran; Gérard Medioni", "journal": "", "ref_id": "b50", "title": "Rapid synthesis of massive face sets for improved face recognition", "year": "2017" }, { "authors": "Bilge Mutlu; Toshiyuki Shiwa; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita", "journal": "", "ref_id": "b51", "title": "Footing in human-robot conversations: how robots might shape participant roles using gaze cues", "year": "2009" }, { "authors": "Ismail Nejjar; Qin Wang; Olga Fink", "journal": "", "ref_id": "b52", "title": "Dare-gram: Unsupervised domain adaptation regression by aligning inverse gram matrices", "year": "2023" }, { "authors": "Takehiko Ohkawa; Yu-Jhe Li; Qichen Fu; Ryosuke Furuta; Kris M Kitani; Yoichi Sato", "journal": "", "ref_id": "b53", "title": "Domain adaptive hand keypoint and pixel localization in the wild", "year": "2022" }, { "authors": "Takehiko Ohkawa; Takuma Yagi; Atsushi Hashimoto; Yoshitaka Ushiku; Yoichi Sato", "journal": "IEEE Access", "ref_id": "b54", "title": "Foreground-aware stylization and consensus pseudo-labeling for domain adaptation of first-person hand segmentation", "year": "2021" }, { "authors": "Seonwook Park; Shalini De Mello; Pavlo Molchanov; Umar Iqbal; Otmar Hilliges; Jan Kautz", "journal": "", "ref_id": "b55", "title": "Few-shot adaptive gaze estimation", "year": "2019" }, { "authors": "Seonwook Park; Adrian Spurr; Otmar Hilliges", "journal": "", "ref_id": "b56", "title": "Deep pictorial gaze estimation", "year": "2018" }, { "authors": "Pascal Paysan; Reinhard Knothe; Brian Amberg; Sami Romdhani; Thomas Vetter", "journal": "", "ref_id": "b57", "title": "A 3d face model for pose and illumination invariant face recognition", "year": "2009" }, { "authors": "Jiawei Qin; Takuru Shimoyama; Yusuke Sugano", "journal": "", "ref_id": "b58", "title": "Learning-by-novel-view-synthesis for full-face appearancebased 3d gaze estimation", "year": "2022" }, { "authors": "Nikhila Ravi; Jeremy Reizenstein; David Novotny; Taylor Gordon; Wan-Yen Lo; Justin Johnson; Georgia Gkioxari", "journal": "", "ref_id": "b59", "title": "Accelerating 3d deep learning with pytorch3d", "year": "2020" }, { "authors": "Alexandru Radu; Sven Rosu; Behnke", "journal": "IEEE", "ref_id": "b60", "title": "Neuralmvs: Bridging multi-view stereo and novel view synthesis", "year": "2022" }, { "authors": "Lorenzo Scalera; Stefano Seriani; Paolo Gallina; Mattia Lentini; Alessandro Gasparetto", "journal": "Robotics", "ref_id": "b61", "title": "Human-robot interaction through eye tracking for artistic drawing", "year": "2021" }, { "authors": "Ashish Shrivastava; Tomas Pfister; Oncel Tuzel; Joshua Susskind; Wenda Wang; Russell Webb", "journal": "", "ref_id": "b62", "title": "Learning from simulated and unsupervised images through adversarial training", "year": "2017" }, { "authors": "Brian A Smith; Qi Yin; Steven K Feiner; Shree K Nayar", "journal": "", "ref_id": "b63", "title": "Gaze 
locking: Passive eye contact detection for humanobject interaction", "year": "2013" }, { "authors": "Yusuke Sugano; Yasuyuki Matsushita; Yoichi Sato", "journal": "", "ref_id": "b64", "title": "Learning-by-synthesis for appearance-based 3d gaze estimation", "year": "2014" }, { "authors": "Kar-Han Tan; David J Kriegman; Narendra Ahuja", "journal": "", "ref_id": "b65", "title": "Appearance-based eye gaze estimation", "year": "2002" }, { "authors": "Antti Tarvainen; Harri Valpola", "journal": "Proc. NeurIPS", "ref_id": "b66", "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "year": "2017" }, { "authors": "Anh Tuan Tran; Tal Hassner; Iacopo Masi; Gerard Medioni", "journal": "", "ref_id": "b67", "title": "Regressing robust and discriminative 3d morphable models with a very deep neural network", "year": "2017" }, { "authors": "Kang Wang; Rui Zhao; Hui Su; Qiang Ji", "journal": "", "ref_id": "b68", "title": "Generalizing eye tracking with bayesian adversarial learning", "year": "2019" }, { "authors": "Xun Wang; Xintong Han; Weilin Huang; Dengke Dong; Matthew R Scott", "journal": "", "ref_id": "b69", "title": "Multi-similarity loss with general pair weighting for deep metric learning", "year": "2019" }, { "authors": "Yaoming Wang; Yangzhou Jiang; Jin Li; Bingbing Ni; Wenrui Dai; Chenglin Li; Hongkai Xiong; Teng Li", "journal": "", "ref_id": "b70", "title": "Contrastive regression for domain adaptation on gaze estimation", "year": "2022" }, { "authors": "Ulrich Weidenbacher; Georg Layher; P-M Strauss; Heiko Neumann", "journal": "", "ref_id": "b71", "title": "A comprehensive head pose and gaze database", "year": "2007" }, { "authors": "Erroll Wood; Tadas Baltrušaitis; Louis-Philippe Morency; Peter Robinson; Andreas Bulling", "journal": "", "ref_id": "b72", "title": "Learning an appearance-based gaze estimator from one million synthesised images", "year": "2016" }, { "authors": "Erroll Wood; Tadas Baltrušaitis; Louis-Philippe Morency; Peter Robinson; Andreas Bulling", "journal": "", "ref_id": "b73", "title": "A 3D Morphable Model of the Eye Region", "year": "2016" }, { "authors": "Aoran Xiao; Jiaxing Huang; Weihao Xuan; Ruijie Ren; Kangcheng Liu; Dayan Guan; Abdulmotaleb El Saddik; Shijian Lu; Eric Xing", "journal": "", "ref_id": "b74", "title": "3d semantic segmentation in the wild: Learning generalized models for adverse-condition point clouds", "year": "2023" }, { "authors": "Qizhe Xie; Zihang Dai; Eduard Hovy; Thang Luong; Quoc Le", "journal": "Advances in NIPS", "ref_id": "b75", "title": "Unsupervised data augmentation for consistency training", "year": "2020" }, { "authors": "Yunyang Xiong; J Hyunwoo; Vikas Kim; Singh", "journal": "", "ref_id": "b76", "title": "Mixed effects neural networks (menets) with applications to gaze estimation", "year": "2019" }, { "authors": "Yu Yu; Gang Liu; Jean-Marc Odobez", "journal": "", "ref_id": "b77", "title": "Improving fewshot user-specific gaze adaptation via gaze redirection synthesis", "year": "2019" }, { "authors": "Yu Yu; Jean-Marc Odobez", "journal": "", "ref_id": "b78", "title": "Unsupervised representation learning for gaze estimation", "year": "2020" }, { "authors": "Yuxiao Hu; Dalong Jiang; Shuicheng Yan; Lei Zhang; Hongjiang Zhang", "journal": "", "ref_id": "b79", "title": "Automatic 3d reconstruction for face recognition", "year": "2004" }, { "authors": "Xucong Zhang; Seonwook Park; Thabo Beeler; Derek Bradley; Siyu Tang; Otmar Hilliges", "journal": "", "ref_id": 
"b80", "title": "Eth-xgaze: A large scale dataset for gaze estimation under extreme head pose and gaze variation", "year": "2020" }, { "authors": "Xucong Zhang; Yusuke Sugano; Andreas Bulling", "journal": "", "ref_id": "b81", "title": "Revisiting data normalization for appearance-based gaze estimation", "year": "2018" }, { "authors": "Xucong Zhang; Yusuke Sugano; Andreas Bulling; Otmar Hilliges", "journal": "", "ref_id": "b82", "title": "Learning-based region selection for end-to-end gaze estimation", "year": "2020" }, { "authors": "Xucong Zhang; Yusuke Sugano; Mario Fritz; Andreas Bulling", "journal": "", "ref_id": "b83", "title": "Appearance-based gaze estimation in the wild", "year": "2015" }, { "authors": "Xucong Zhang; Yusuke Sugano; Mario Fritz; Andreas Bulling", "journal": "", "ref_id": "b84", "title": "It's written all over your face: Full-face appearancebased gaze estimation", "year": "2017" }, { "authors": "Xucong Zhang; Yusuke Sugano; Mario Fritz; Andreas Bulling", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b85", "title": "Mpiigaze: Real-world dataset and deep appearancebased gaze estimation", "year": "2019" }, { "authors": "Yufeng Zheng; Seonwook Park; Xucong Zhang; Shalini De Mello; Otmar Hilliges", "journal": "", "ref_id": "b86", "title": "Self-learning transformations for improving gaze and head redirection", "year": "2020" }, { "authors": "Bolei Zhou; Agata Lapedriza; Aditya Khosla; Aude Oliva; Antonio Torralba", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b87", "title": "Places: A 10 million image database for scene recognition", "year": "2017" }, { "authors": "Hang Zhou; Jihao Liu; Ziwei Liu; Yu Liu; Xiaogang Wang", "journal": "", "ref_id": "b88", "title": "Rotate-and-render: Unsupervised photorealistic face rotation from single-view images", "year": "2020" }, { "authors": "Xiangyu Zhu; Zhen Lei; Xiaoming Liu; Hailin Shi; Stan Z Li", "journal": "", "ref_id": "b89", "title": "Face alignment across large poses: A 3d solution", "year": "2016" }, { "authors": "Xiangyu Zhu; Xiaoming Liu; Zhen Lei; Stan Z Li", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b90", "title": "Face alignment in full pose range: A 3d total solution", "year": "2019" }, { "authors": "Michael Zollhöfer; Justus Thies; Pablo Garrido; Derek Bradley; Thabo Beeler; Patrick Pérez; Marc Stamminger; Matthias Nießner; Christian Theobalt", "journal": "Computer Graphics Forum", "ref_id": "b91", "title": "State of the art on monocular 3d face reconstruction, tracking, and applications", "year": "2018" } ]
[ { "formula_coordinates": [ 4, 50.11, 589.34, 236.25, 26.71 ], "formula_id": "formula_0", "formula_text": "(i) p } N i=0 . Each vertex is represented as v (i) p = [u (i) , v (i) , d (i)" }, { "formula_coordinates": [ 4, 258.06, 244.36, 220.28, 470.58 ], "formula_id": "formula_1", "formula_text": "(i) c } N i=0 where each vertex v (i) c = [x (i) , y (i) , z (i) ]" }, { "formula_coordinates": [ 4, 351.15, 435.28, 193.96, 24.8 ], "formula_id": "formula_2", "formula_text": "v c = λ C -1 p o ||C -1 p o || = λ C -1 T -1 p ||C -1 T -1 p|| ,(1)" }, { "formula_coordinates": [ 4, 336.66, 469.88, 147.21, 11.23 ], "formula_id": "formula_3", "formula_text": "p o = [u o , v o , 1] ⊤ and p = [u, v, 1]" }, { "formula_coordinates": [ 4, 362.69, 514.09, 182.43, 34.93 ], "formula_id": "formula_4", "formula_text": "T =   s x 0 -s x (c x -w 2 ) 0 s y -s y (c y -h 2 ) 0 0 1   (2)" }, { "formula_coordinates": [ 6, 311.78, 549.38, 230.42, 22.69 ], "formula_id": "formula_5", "formula_text": "L ex (I 1 , I 2 ) = ψ |ψ(I 1 ) -ψ( Ĩ1 )| 1 + |ψ(I 2 ) -ψ( Ĩ2 )| 1 ," }, { "formula_coordinates": [ 6, 338.26, 620.04, 177.46, 9.81 ], "formula_id": "formula_6", "formula_text": "L source = L est + λ rec L rec + λ gc L gc + λ ex L ex ." }, { "formula_coordinates": [ 7, 50.11, 545.29, 236.25, 38 ], "formula_id": "formula_7", "formula_text": "L aug = λ bg L bg + λ flip L flip , where L bg = |ĝ t -ĝbg t | 1 and L flip = |ĝ t -ĝflip t | 1 ." }, { "formula_coordinates": [ 7, 337.3, 576.34, 207.82, 12.43 ], "formula_id": "formula_8", "formula_text": "L est = |g s -ĝs | 1 + λ head (|h s -ĥs | 1 + |h t -ĥt | 1 )," }, { "formula_coordinates": [ 7, 365.85, 657.07, 122.28, 9.81 ], "formula_id": "formula_9", "formula_text": "L adapt = L est + λ gc L gc + L aug ." } ]
Domain-Adaptive Full-Face Gaze Estimation via Novel-View-Synthesis and Feature Disentanglement
Along with the recent development of deep neural networks, appearance-based gaze estimation has succeeded considerably when training and testing within the same domain. Compared to the within-domain task, the variance across different domains makes the cross-domain performance drop severely, preventing gaze estimation from being deployed in real-world applications. Among all the factors, the ranges of head pose and gaze are believed to play a significant role in the final performance of gaze estimation, while collecting data with large ranges is expensive. This work proposes an effective model training pipeline consisting of a training data synthesis and a gaze estimation model for unsupervised domain adaptation. The proposed data synthesis leverages single-image 3D face reconstruction to expand the range of head poses from the source domain without requiring a 3D facial shape dataset. To bridge the inevitable gap between synthetic and real images, we further propose an unsupervised domain adaptation method suitable for synthetic full-face data. We propose a disentangling auto-encoder network to separate gaze-related features and introduce a background augmentation consistency loss to utilize the characteristics of the synthetic source domain. Through comprehensive experiments, we show that a model using only monocular-reconstructed synthetic training data can perform comparably to one trained on real data with a large label range. Our proposed domain adaptation approach further improves the performance on multiple target domains.
Jiawei Qin; Takuru Shimoyama; Xucong Zhang; Yusuke Sugano
[ { "figure_caption": "Figure 1 .1Figure 1. Overview of our approach. Top: We synthesize the data with a large range of head poses stemming from the 3D face reconstruction and propose a feature disentangling auto-encoder network pre-trained only by the synthetic data. Bottom: We leverage self-training for adapting the model to unlabeled target domains.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Determining the location of Vc via parameters α and β. α indicates a scaling factor from the pixel to physical (e.g., millimeter) unit, and β is the bias term to align αd to the camera coordinate system.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Examples of the synthesized images. The first row shows the source images from MPIIFaceGaze[85] and ETH-XGaze[81] datasets. For MPIIFaceGaze, the second and third rows show synthesized images in full and weak lighting. For ETH-XGaze, the second row shows the real images from the dataset, and the third row shows our synthetic images with the same head poses as the real samples. For each synthetic example, the three columns show the black, color, and scene background in turn. The red arrows indicate gaze direction vectors.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure5. The overview of our synthetic-real domain adaptation approach. Top: An encoder-decoder structure for feature disentanglement (DisAE). We prepare three subnets ψ a , ψ h , and ψ g to support the disentanglement of appearance, head, and gaze features. The gaze features are fed into a vision transformer to get the predicted gaze direction ĝ while the head features are fed into an MLP to get the predicted head pose direction ĥ. Bottom: augmentation consistency is proposed during the unsupervised domain adaptation of DisAE towards the target domain.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Head pose (top row) and gaze direction (bottom row) distributions of original datasets and our synthetic data for (a) source MPIIFaceGaze, (b) target ETH-XGaze, (c) synthesized dataset by extending MPIIFaceGaze to ETH-XGaze distribution, (d) target EYE-DIAP (CS), (e) synthesized dataset to EYEDIAP (CS) distribution, (f) target EYEDIAP (FT), (g) synthesized dataset to EYEDIAP (FT) distribution, and (h) Gaze360 (head pose is not provided).", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Examples of the synthesized XGaze-NV datasets using the different reconstruction methods. The last row shows the samples of the real dataset.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "augmentation in self-training [48, 54, 55]. The", "figure_data": "ℒ %&gaze subnet 𝜓 #appheadEncoderhead gaze app ℒ '(&Decoder𝑰 !Encoderhead gaze app gazeDecoder𝑰 # !𝑰𝑰 \"𝑰 \"𝑰 # \"Vision TransfomerMLP𝒉 % 𝒈 'ℒ (#$app subnet 𝜓 ! ℒ (+ ),,head subnet 𝜓 \" ℒ (+ -().gaze subnet 𝜓 #𝑰 #ℒ )*%𝑰 $", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of gaze estimation errors in degree. 
From left to right columns list the training sets, methods, and test sets.", "figure_data": "Training DatasetsModelETH-XGaze Train TestEYEDIAP [22] CS FTMPIIFaceGaze [85]31.96 32.62 13.02 23.01ETH-XGaze Train [81] Gaze360 [41]ResNet18 [32]-17.17 17.55 9.67 -9.8113.81 16.04MPII-NV12.84 13.99 5.3717.04MPIIFaceGaze [85]32.30 32.82 12.09 22.65ETH-XGaze Train [81] Gaze360 [41]PureGaze18 [8]-16.72 17.06 7.61 -8.7912.86 13.59MPII-NV12.93 14.07 5.4516.22MPIIFaceGaze [85]29.62 30.16 14.21 24.41ETH-XGaze Train [81] Gaze360 [41]GazeTR18 [10]-16.41 16.91 8.23 -8.9112.61 13.08MPII-NV11.36 12.01 5.5814.70ModelETH-XGazeEYEDIAPTrainTestCSFTResNet18 [32]12.84 13.99 5.37 17.04PureGaze18 [8]12.93 14.07 5.45 16.22Gaze-TR18 [10]11.36 12.01 5.58 14.70DisAE11.21 12.00 5.22 13.50Res18 + PnP-GA [48] 12.43 15.33 4.78 17.00Res18 + DANN [24]15.32 15.51 6.93 15.00Res18 + aug15.13 16.82 5.45 16.29DisAE + DANN [24] 11.13 11.92 5.20 13.50DisAE + aug (ours)10.99 11.89 4.63 12.69", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparision of our method with baseline methods. The top block are source-domain-only methods, and the bottom block are methods that utilize unlabeled target-domain data.", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "gc +L bg +L flip (ours) 10.99 11.89 4.63 12.69 Adaptation effect with respect to the individual loss.", "figure_data": "ETH-XGazeEYEDIAPTrainTestCSFTDisAE w.o. flip11.32 12.03 5.90 13.85DisAE11.21 12.00 5.22 13.50L gc11.10 11.90 5.22 13.43L bg11.12 12.00 4.96 11.54L flip11.40 12.21 5.36 11.40L bg +L flip11.32 12.12 4.81 11.34L gc +L bg11.03 11.99 4.97 13.00L gc +L flip11.09 11.95 4.80 12.94L", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison of synthetic datasets using different reconstruction methods. The gaze estimation error is the average of the four-fold split. The model used is ResNet18 for all rows. * indicates the green background version without the background augmentation.", "figure_data": "DatasetsWithin XGaze-TestXGazeF-NV-3DDFA10.0311.32XGazeF-NV-DECA10.0211.47XGazeF-NV-fitting10.2811.67XGazeF-NV-3DDFA *20.3322.85MVS-XGazeF-NV *7.907.43MVS-XGazeF-NV6.716.19ETH-XGaze6.626.20", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[50,52]", "Explanation": "The cited works provide a promising solution to indicate human user attention in various settings with a single webcam as the input device, which the citing paper builds upon in its research on human-robot interaction."}, {"Category": "Methodological Basis", "Citation": "[17,34]", "Explanation": "The cited works offer insights into social interaction as a research area for the citing paper to explore in its study of human-robot interaction."}, {"Category": "Methodological Basis", "Citation": "[14,62]", "Explanation": "The cited works provide a reference for the citing paper to follow in its research on entertainment settings with a single webcam as the input device."}, {"Category": "Methodological Basis", "Citation": "[10,16]", "Explanation": "The cited works introduce the use of vision transformer (ViT) in machine learning-based methods for more robust performance in the in-the-wild usage setting, which the citing paper may consider in its research."}, {"Category": "Methodological Basis", "Citation": "[44,85]", "Explanation": "The cited works highlight the trend of including the face image in addition to the eye region in appearance-based gaze estimation methods, which the citing paper may consider in its research."}, {"Category": "Supporting Evidence", "Citation": "[86]", "Explanation": "The cited work provides a quantitative analysis of the data bias issue in data-driven methods, which supports the claim that these methods are prone to over-fitting and limited in real-world applications."}, {"Category": "Supporting Evidence", "Citation": "[41,69,81]", "Explanation": "The cited works highlight the performance drop in cross-domain evaluations of trained models in different domains regarding personal appearances, head poses, gaze directions, and lighting conditions, which further supports the claim of data bias in data-driven methods."}, {"Category": "Supporting Evidence", "Citation": "[22,41,44,81,86]", "Explanation": "The cited works have collected and contributed to the community large-scale datasets that have improved the robustness of the deep estimation model in the citing paper."}, {"Category": "Data Source", "Citation": "[22,41,44,81,86]", "Explanation": "The cited works have provided the data source for the in-the-wild setting datasets that the citing paper has utilized in its research to improve the robustness of the deep estimation model."}, {"Category": "Extension or Continuation", "Citation": "[65,78,87]", "Explanation": "The cited works have created synthetic data as additional training data, which the citing paper has extended by using the data to augment existing training data and improve the robustness of the deep estimation model in a more comprehensive way."}, {"Category": "Supporting Evidence", "Citation": "[1]", "Explanation": "The cited work is used to support the idea of using a self-training strategy for training the model on unlabeled target domains in a self-training manner."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "The cited work provides the methodological basis for the use of a self-training strategy in the proposed framework for training the model on unlabeled target domains."}, {"Category": "Data Source", "Citation": "[54]", "Explanation": "The cited work is used to acknowledge the origin of the unlabeled target domains used in the research."}, {"Category": "Data Source", "Citation": "[55]", "Explanation": "The cited work is used to acknowledge the 
origin of the unlabeled target domains used in the research."}, {"Category": "Data Source", "Citation": "[76]", "Explanation": "The cited work is used to acknowledge the origin of the unlabeled target domains used in the research."}, {"Category": "Extension or Continuation", "Citation": "[59]", "Explanation": "The cited work is an extension of the research presented in the manuscript, with the major changes being the use of a background-switching data augmentation consistency loss for the synthetic-real domain adaptation and the analysis of the performance of single-view face reconstruction compared to multi-view face reconstruction."}, {"Category": "Methodological Basis", "Citation": "[81]", "Explanation": "The cited work by ETH-XGaze provides a dataset of multi-view face reconstruction data that the citing paper uses to analyze the upper-bound performance of synthetic training in the context of appearance-based gaze estimation."}, {"Category": "Data Source", "Citation": "[20,23,36,64]", "Explanation": "The cited works are mentioned as the source of data in indoor environments that lack variant lighting conditions, which the citing paper uses to address the data hunger of deep learning methods."}, {"Category": "Data Source", "Citation": "[85,86]", "Explanation": "The cited works are mentioned as the source of in-the-wild data collection that covers variant lighting conditions, which the citing paper uses to address the data hunger of deep learning methods."}, {"Category": "Data Source", "Citation": "[36,44]", "Explanation": "The cited works are mentioned as the source of data collection using a tablet, which the citing paper uses to address the data hunger of deep learning methods and cover limited ranges of head pose and gaze directions."}, {"Category": "Data Source", "Citation": "[41,81]", "Explanation": "The cited works are mentioned as the source of data that has further extended diversity in head pose and environment conditions, which the citing paper uses to address the data hunger of deep learning methods."}, {"Category": "Methodological Basis", "Citation": "[65]", "Explanation": "The cited work provides a method for multi-view stereo reconstruction that the citing paper adopts to create synthetic training data for the gaze estimation task."}, {"Category": "Methodological Basis", "Citation": "[73,74]", "Explanation": "The cited works used hand-crafted computer graphics models to generate samples with arbitrary head poses, gaze directions, and lighting conditions, which the citing paper adopts to create synthetic training data for the gaze estimation task."}, {"Category": "Methodological Basis", "Citation": "[33,78,87]", "Explanation": "The cited works proposed gaze redirection to generate synthetic data for personal gaze estimator training, which the citing paper adopts to create synthetic training data for the gaze estimation task."}, {"Category": "Data Source", "Citation": "[33,78,87]", "Explanation": "The cited works used a single-image 3D face reconstruction approach for accurate data synthesis, which the citing paper uses as a data source to create synthetic training data for the gaze estimation task."}, {"Category": "Methodological Basis", "Citation": "[25,35,75]", "Explanation": "The cited works on domain adaptation in classification and segmentation tasks provide a methodological basis for the citing paper to adapt the methods to the regression task of gaze estimation."}, {"Category": "Extension or Continuation", "Citation": "[53][54][55]", "Explanation": "The cited works 
on domain adaptation in regression tasks are extended in the citing paper to address the specific challenges of gaze estimation in the target domain."}, {"Category": "Methodological Basis", "Citation": "[63]", "Explanation": "Sim-GAN adapts the synthetic training data to be similar to real target images before training, providing a method for data adaptation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[31,37,42,45]", "Explanation": "Recent methods focus on directly adapting the model by target domain using self-supervised learning, which the citing paper may adopt or adapt in their own research."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "Liu et al. proposed a framework using collaborative learning that adapts to the target domain with only very few images guided by outlier samples, providing a method for data adaptation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[38]", "Explanation": "Gaze-CLR leveraged multi-view consistency as the underlying principle of the proposed contrastive learning framework, which the citing paper may build upon in their research."}, {"Category": "Methodological Basis", "Citation": "[71]", "Explanation": "Wang et al. proposed a contrastive learning framework based on an assumption of the similarity between gaze labels and features, which the citing paper may use as a basis for their own research."}, {"Category": "Methodological Basis", "Citation": "[45]", "Explanation": "LatentGaze leveraged generative adversarial networks to transform the target domain to the source domain for easier estimation, providing a method for data adaptation in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[92]", "Explanation": "The cited work on monocular 3D face reconstruction techniques provides a foundational method for the proposed data synthesis approach in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[51,80,89]", "Explanation": "The cited works on using reconstructed 3D faces to augment face recognition training data have inspired the exploration of its usage in full-face appearance-based gaze estimation in the citing paper."}, {"Category": "Data Source", "Citation": "[4, 6,18,27,28,90,91]", "Explanation": "The cited works on 3D face reconstruction methods that sample texture directly from the input image are used as a data source for the proposed data synthesis approach in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[40]", "Explanation": "The cited work on multi-view stereo methods for producing better 3D geometry provides a methodological basis for the proposed data synthesis approach in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work on 3D face reconstruction methods that sample texture directly from the input image is used to align the reconstruction results with the source camera coordinate system in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[61]", "Explanation": "The cited work highlights the limitations of existing methods in processing images, which provides a basis for the citing paper to address these issues in their research."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work provides a method for converting yaw-pitch annotations to meet the requirements of the source gaze dataset, which the citing paper adopts in their research."}, {"Category": "Data Source", 
"Citation": "[44,81,84]", "Explanation": "The cited works are the source of the data used in the study conducted in the citing paper, as they satisfy the requirements of the source gaze dataset."}, {"Category": "Methodological Basis", "Citation": "State-of-the-art learning-based 3D face reconstruction methods", "Explanation": "The cited methods are the basis for the face reconstruction process in the citing paper, as they are used to take a face bounding box and output a 3D facial mesh in an orthographic projection way."}, {"Category": "Methodological Basis", "Citation": "[6,19,27,39,90]", "Explanation": "The cited works have established a representation for face vertices in the right-handed coordinate system, which the citing paper adopts in its research to project the reconstructed 3D face onto the input face patch."}, {"Category": "Methodological Basis", "Citation": "[2]", "Explanation": "The cited work provides a reference to a 3D face model with 68 landmarks, which the citing paper uses as a basis for their research on facial landmark detection and 3D face reconstruction."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work on the Perspective-n-Point (PnP) algorithm is used as a method to obtain the centroid of 3D locations of landmarks in the camera coordinate system, which is necessary for the gaze estimation task in the citing paper."}, {"Category": "Data Source", "Citation": "[88]", "Explanation": "The cited work, Places365 dataset, is used as a source of images for the random scene images in the synthesized images generated in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work, 3DDFA, is used in the experiments of the citing paper to reconstruct 3D faces from the source dataset for the purpose of rendering new images."}, {"Category": "Methodological Basis", "Citation": "[60]", "Explanation": "The PyTorch3D library is used in the experiments of the citing paper to render new images in the process of generating synthesized images."}, {"Category": "Methodological Basis", "Citation": "[45,59]", "Explanation": "The cited works provide a method for adapting the gaze estimation model in the target domain, which the citing paper adopts to address the image appearance gap between synthetic and real samples."}, {"Category": "Methodological Basis", "Citation": "[87]", "Explanation": "The cited work provides a specific structure for the subnets used in the DisAE architecture, which the citing paper adopts to train the model and achieve the desired results."}, {"Category": "Methodological Basis", "Citation": "[88]", "Explanation": "The cited work, Places365 dataset, is used as a source of random images for background-switching augmentation in the proposed method of the citing paper."}, {"Category": "Data Source", "Citation": "[22]", "Explanation": "The cited work, EYEDIAP, is a dataset that the citing paper uses to sample images for their research on gaze estimation."}, {"Category": "Data Source", "Citation": "[56]", "Explanation": "The cited work, pre-processing provided by Park et al., is a method used by the citing paper to sample images from the EYEDIAP dataset for their research on gaze estimation."}, {"Category": "Data Source", "Citation": "[41]", "Explanation": "The cited work, Gaze360, is a dataset that the citing paper uses to study the effects of head poses and gaze directions on gaze estimation performance."}, {"Category": "Data Source", "Citation": "[81,82]", "Explanation": "The cited 
works provide the data normalization scheme commonly used in appearance-based gaze estimation, which the citing paper adopts in their research to process the data in a standard way."}, {"Category": "Extension or Continuation", "Citation": "[12]", "Explanation": "The cited work excludes cases with invisible eyes from the dataset, which the citing paper further extends by including these cases in their research to analyze a more comprehensive set of data."}, {"Category": "Methodological Basis", "Citation": "[21]", "Explanation": "The cited work provides the PnP algorithm for fitting a 3D face model to the 2D landmark locations in the datasets, which the citing paper uses as a methodological basis for obtaining 3D head pose information in their research."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work, Gaze-TR, serves as a backbone architecture for single-image gaze estimation in the citing paper, providing a method for extracting gaze features from ResNet and using a transformer encoder and MLP to output gaze directions."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "PureGaze is another method mentioned in the cited work that is used for gaze estimation and image reconstruction by extracting image features using a ResNet and an MLP."}, {"Category": "Methodological Basis", "Citation": "[48]", "Explanation": "PnP-GA is a domain adaptation model that is used in the cited work for applying on existing structures, with a Mean-Teacher structure for adaptation and a focus on learning invariant features from source and target domains."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "DANN is another method mentioned in the cited work that includes a gradient reverse layer and a domain classifier on top of the backbone, focusing on learning invariant features from source and target domains."}, {"Category": "Methodological Basis", "Citation": "[41]", "Explanation": "The cited work, Gaze360, is used as a real-image baseline in the comparison of training datasets, providing a benchmark for evaluating the performance of the data synthesis approach in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[81]", "Explanation": "The cited work, ETH-XGaze, is used as a dataset for training gaze estimation models in the citing paper. 
The dataset is rotated to create a synthetic version, XGazeF-NV, which is then used to establish an upper bound of performance in the study."}, {"Category": "Data Source", "Citation": "[81]", "Explanation": "The cited work, ETH-XGaze, is a camera-synchronized dataset that serves as the data source for the multi-view reconstruction process in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[49]", "Explanation": "The Agisoft Metashape software is the method used for multi-view reconstruction in the citing paper, as it is implemented to optimize the camera extrinsic parameters and perform the face reconstruction process."}, {"Category": "Methodological Basis", "Citation": "[27]", "Explanation": "The cited work 3DDFA provides the method used in the comparison of multi-view face reconstruction with SOTA single-view methods in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[18]", "Explanation": "The cited work DECA is also used in the comparison of multi-view face reconstruction with SOTA single-view methods in the citing paper."}, {"Category": "Data Source", "Citation": "[58]", "Explanation": "The cited work BFM model is used to fit the facial landmarks in the comparison of multi-view face reconstruction with SOTA single-view methods in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "Tab. 4", "Explanation": "The cited table in the text provides a comparison of the performance of models trained on synthetic data generated by different face reconstruction methods, which extends the discussion on the performance of multi-view face reconstruction methods in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "Fig.", "Explanation": "The cited figure in the text shows the image quality differences between the single-view methods and Multi-view Stereo, which further illustrates the performance of multi-view face reconstruction methods discussed in the citing paper."}]
[ { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Introduction", "publication_ref": [ "b16", "b34", "b6", "b5", "b35", "b25", "b2", "b31", "b36", "b45", "b43", "b46", "b45", "b12", "b40", "b14", "b15", "b15", "b12", "b45", "b46", "b45" ], "table_ref": [], "text": "Catastrophic forgetting [17] refers to deep neural networks forget the acquired knowledge from the previous tasks disastrously while learning the current task. This is in sharp contrast to humans who are able to incrementally learn new knowledge from the ever-changing world. To bridge the gap between artificial intelligence and human intelligence, incremental learning (IL) [35,7,6,36] has emerged as a new paradigm to enable AI systems to continuously learn from new data over time.\nIn the past few years, a variety of methods [26,3,32,37] have been proposed to alleviate catastrophic forgetting in IL. In this work, we are interested in a very challenging scenario, called class-incremental learning (CIL) [46,44,47]. CIL aims to identify all the previously learned classes with no task identifier available at the inference time. Unfortunately, CIL often suffers from catastrophic forgetting because of the overlapping representations between the previous tasks and the current one in the feature space [46]. To deal with this issue, many prior studies adopt exemplar-based approaches to preserve some old class samples in a memory buffer. These methods, however, suffer from memory limitations and privacy issues. Thus, some works propose non-exemplar-based methods [13,41,15,16] that incrementally learn new tasks without storing raw samples in a memory. Most of these methods mainly focus on building large network structures [16], or designing regularization loss [13] to mitigate catastrophic forgetting, but they do not perform well in CIL.\nFigure 1: Comparison of buffer size and accuracy for different methods on TinyImageNet under zero-base and 10 phrases setting. YONO that only stores and replays one prototype for each class can achieve higher accuracy than the baselines.\nRecently, few studies [46,47], such as PASS and SSRE, propose to store one prototype (class mean) for each old class and then use augmented prototypes to train a model. Surprisingly, we find that the PASS [46] using prototype augmentation via Gaussian noise will degrade the prediction accuracy compared to that without prototype augmentation, as illustrated in Fig. 1. This is because class mean may not represent the centroid of different representations in a high-dimensional space such that the recovered representations of old classes may overlap with similar classes. It thus motivates us to optimize the learning of prototype for each class in CIL.\nIn this work, we develop YONO, a new nonexemplar model that only needs to store and replay one class-representative prototype. The key challenge lies in how to find a more representative prototype for each class so that these prototypes can be distant from each other. To address this challenge, we propose a new attentional mean-shift method to dynamically aggregate the representations of samples in each class into a prototype in a high-density region. Then we only replay one prototype for each old class without using synthetic data when training a model. To further improve the accuracy of YONO, we extend it to YONO+ that generates synthetic data from stored prototypes. 
Accordingly, we develop a novel approach that combines a high-dimensional space rotation matrix and Gaussian distribution to create synthetic data of old classes from stored prototypes. Extensive experiments are carried out to evaluate the performance of our methods on multiple benchmarks. Experimental results demonstrate that both YONO and YONO+ can significantly outperform the baselines in terms of accuracy and average forgetting. Moreover, replaying synthetic data can further improve the performance of YONO.\nOur contributions are four-fold: 1) we propose a novel non-exemplar model, called YONO, that can achieve good performance by only replaying one stored prototype for each class without using synthetic data, 2) to our best knowledge, we are the first to explore the prototype optimization in IL, 3) we extend YONO to YONO+, which develops a new data synthesis technique that creates high-quality data from stored prototypes, 4) the evaluation results demonstrate the superiority of our methods over the non-exemplar baselines in terms of accuracy and average forgetting." }, { "figure_ref": [], "heading": "Proposed Method", "publication_ref": [], "table_ref": [], "text": "In this section, we first elaborate on the proposed YONO that adopts attentional mean-shift method to compute a representative prototype for each class. In order to further improve the prediction accuracy, we extend YONO to develop a YONO+ with synthetic data generated from prototypes.\nProblem Description. By learning from a sequence tasks each associated with a subset of classes C t and a training set of n t examples drawn from these classes, i.e., D t ≜ {x i , y i } nt i=1 with y i ∈ C t , class-incremental learning (CIL) aims to train a model f (x; [θ, w]) ≜ G(F (x; θ); w) that predicts probabilities of all previous classes C 1:t ≜ t i=1 C i for any given example x. The model is composed of a feature extractor F (•; θ) producing compact representations and a classifier G(•; w). Given x, the probabilities over all classes C 1:t are predicted as softmax(wF (x; θ))." }, { "figure_ref": [], "heading": "YONO", "publication_ref": [], "table_ref": [], "text": "In the following, we introduce YONO, which performs two main steps in each iteration: (i) prototype learning for memory condensation that computes a prototype per class; and (ii) new task learning with prototype replay.\nFor (i), we propose a novel attentional mean-shift method to compute a prototype for each class as a mode searching of the class distribution. By training the representation for prototype-based classification, we are able to concentrate most samples to their class prototypes and keep each class a compact cluster in the representation space. This strategy significantly mitigates inter-class interference, which is a primary reason for forgetting.\nFor (ii), when learning a new task, we augment its training set with previous tasks' class prototypes from the memory. In YONO, replaying only the prototypes of previous classes suffice to retain their classes' features and mitigate catastrophic forgetting. " }, { "figure_ref": [], "heading": "Prototype Learning for Memory Condensation", "publication_ref": [ "b30" ], "table_ref": [], "text": "When learning task-t defined on a set of classes C t , for each class k ∈ C t , we construct a graph of the class's sample representations z i = F (x i ; θ) that connects class-k's prototype p k and apply graph attention [31] to move p k towards a high-density region in the representation space. 
We achieve this by mean-shift of the prototype: in each step, we move p k towards a weighted average over all samples belonging to class-k (their normalized representations in specific) and normalize the new p k , i.e.,\np k ← (1 -λ)p k + λ i∈[nt]:yi=k a k,i • z i ∥z i ∥ 2 , p k ← p k ∥p k ∥ 2 ,(1)\nwhere λ controls the step size of the mean-shift and n t is the size of training set for task-t. Unlike the original mean-shift algorithm, the weights a k,i are determined by learnable dot-product attention between each sample z i and the prototype p k in the representation space, i.e.,\na k ≜ softmax(ā k ), āk ≜ [ā k,1 , • • • , āk,nt ], āk,i = c(z i , p k ) ≜ ⟨z i , p k ⟩ ∥z i ∥ 2 • ∥p k ∥ 2 .(2)\nIn practice, when the number of samples n t is large, we can apply a mini-batch version of Eq. ( 1) for multiple steps, where i ∈ [n t ] is replaced by i ∈ B (B is a mini-batch of samples). We then store the prototype of each class in the memory, which will be used to train the model together with learned tasks' prototypes and a new task's data." }, { "figure_ref": [], "heading": "New Task Learning with Prototype Replay", "publication_ref": [ "b3" ], "table_ref": [], "text": "In YONO, we train the representation model F (•; θ) to produce z i = F (x i ; θ) for each sample x i to be close to its class prototype and distant from other classes' prototypes. We achieve this by minimizing the Arcface [4] loss for task-t, i.e.,\nL t,P (θ) ≜ 1 n t k∈Ct i∈[nt]:yi=k arcface(z i , p, k).(3)\nThe arcface(•, •, •) loss is defined by\narcface(z, p, k) = -log exp(cos[c -1 (z, p k ) + δ]/τ ) exp(cos[c -1 (z, p k ) + δ]/τ ) + l∈C1:t,l̸ =k exp(c(z, p l )/τ ) ,(4)\nwhere c -1 (z, p k ) ≜ arccos(c(z, p k )) denotes the angle between z and p k , τ is a temperature parameter and δ is a margin penalty to restrict the angle c -1 (z i , p k ) from being too large. Hence, samples belonging to each class are enforced to concentrate on their prototype and form a compact cluster distant from other classes in the representation space, which effectively reduces the harmful interference between classes that leads to future catastrophic forgetting.\nMeanwhile, in order to mitigate the catastrophic forgetting of previous tasks' classes C 1:t-1 , YONO replays the stored class prototypes while training with the current task's data, which means we augment the training set for task-t with prototypes from previous classes C 1:t-1 . The corresponding training objective for classification on C 1:t is to minimize the negative log-likelihood on all the task-t's data and prototypes for all the previous t -1 tasks, i.e.,\nL t,C (θ, w) ≜ 1 n t k∈Ct i∈[nt]:yi=k arcface(z i , w, k) + 1 |C 1:t-1 | k∈C1:t-1 arcface(p k , w, k).(5)\nHence, the training objective L t (θ, w) of YONO at task-t combines the prototype-learning loss for task-t in Eq. 3 and the prototype-replay augmented loss in Eq. ( 5), i.e., YONO : min\nθ,w L t (θ, w) = L t,P (θ) + L t,C (θ, w).(6)\nIn summary, L t,P (θ) mainly focuses on moving the current task's samples to their associated class prototype in the representation space so the prototypes retain most information of the task. On the other hand, L t,C (θ, w) trains the representation model's parameter θ and the classifier layer(s) w in an end-to-end manner on an augmented dataset composed of both the current task's data and the prototypes so the model can learn new tasks without suffering from forgetting previous tasks." 
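To make the two steps above concrete, the following is a minimal PyTorch-style sketch of the attentional mean-shift update of Eq. (1)-(2) together with an Arcface-style margin loss in the spirit of Eq. (4). It is an illustrative reading of the equations, not the authors' released implementation; the function and argument names (attentional_mean_shift, arcface_loss, feats, proto) and the default values are assumptions.

```python
import torch
import torch.nn.functional as F

def attentional_mean_shift(feats: torch.Tensor, proto: torch.Tensor,
                           lam: float = 0.6, steps: int = 1) -> torch.Tensor:
    """Move one class prototype toward a high-density region of its samples (Eq. (1)-(2)).

    feats: (n, d) representations z_i of the samples of class k.
    proto: (d,)   current prototype p_k (e.g., initialized with the class mean).
    """
    z = F.normalize(feats, dim=1)               # z_i / ||z_i||_2
    p = F.normalize(proto, dim=0)
    for _ in range(steps):
        a = torch.softmax(z @ p, dim=0)         # attention weights a_{k,i} from cosine scores
        p = (1.0 - lam) * p + lam * (a.unsqueeze(1) * z).sum(dim=0)
        p = F.normalize(p, dim=0)               # keep the prototype on the unit sphere
    return p

def arcface_loss(z: torch.Tensor, prototypes: torch.Tensor, labels: torch.Tensor,
                 delta: float = 0.25, tau: float = 0.1) -> torch.Tensor:
    """Angular-margin loss in the spirit of Eq. (4).

    z:          (n, d) sample representations.
    prototypes: (c, d) class prototypes p (or classifier rows w).
    labels:     (n,)   integer class ids.
    """
    cos = F.normalize(z, dim=1) @ F.normalize(prototypes, dim=1).T   # c(z_i, p_l)
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))               # c^{-1}(z_i, p_l)
    is_target = F.one_hot(labels, num_classes=prototypes.shape[0]).bool()
    logits = torch.where(is_target, torch.cos(theta + delta), cos) / tau
    return F.cross_entropy(logits, labels)
```

In a mini-batch regime, feats would simply be the class-k samples of the current batch, matching the mini-batch variant of Eq. (1) mentioned above.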
}, { "figure_ref": [], "heading": "YONO+", "publication_ref": [], "table_ref": [], "text": "Although prototype-only replay in YONO is highly effective in mitigating catastrophic forgetting, it might be insufficient to cover all useful information of the whole distribution for each class without replay on different instances. Hence, we propose an extension YONO+ with the replay of synthetic data generated from the prototypes in the memory." }, { "figure_ref": [], "heading": "Data Synthesis from Prototypes", "publication_ref": [], "table_ref": [], "text": "By using āk,i computed in Eq. ( 2) for each learned (and can not be accessed) sample z i , we are able to synthesize a data point z ′ i that has a similar angular distance to the prototype p k as z i for replay. This leads to YONO+ whose replay of each previous class is conducted on multiple synthetic data points instead of a single prototype.\nIn particular, we firstly derive a rotation matrix R(p k , u) that can recover p k from a unit vector u = [1, 0, • • • , 0] on an unit m-sphere, i.e., p k = R(p k , u) × u. To synthesize a sample z ′ i of class-k as a proxy to z i (a previously learned sample of class-k), we then randomly draw v i in the vicinity of u, i.e.,\nv i = [ã k,i , ϵ 2 , • • • , ϵ m ], ãk,i ∼ T N (µ, σ, µ -κσ, µ + κσ),(7)\nwhere T N (µ, σ, µ -κσ, µ + κσ) denotes a symmetric truncated Gaussian distribution with a mean of µ, a variance of σ, and a lower/upper bound of µ ± κσ. To make sure that ∥v i ∥ 2 = 1, we draw ϵ i ∼ N (0, 1) for i ∈ {2, • • • , m} at first and then rescale them by\nϵ i ← 1-(ã k,i +ϵ1) 2 / m i=2 ϵ 2 i • ϵ i .\nEmpirically, by choosing κ = 1.96, which leads to a small variance (1 -µ)/κ of ãk,i , the synthetic data z ′ i are close to their prototypes and have a similar distribution as the real data z i . Thereby, we have u T v i = ãk,i , whose distribution approximates the distribution of cosine similarity āk,i between real sample z i and its associated class prototype p k ." }, { "figure_ref": [], "heading": "Next, we create z", "publication_ref": [], "table_ref": [], "text": "′ i from v i . As p k = R(p k , u) × u, we can apply the same rotation matrix R(p k , u) to v i to achieve z ′ i , i.e., z ′ i = R(p k , u) × v i .(8)\nBy applying the same rotation, the similarity between u and v i is preserved between p k and z ′ i . By sampling the synthetic data point z ′ i for each previously removed sample z i using the above synthesis, we are able to create a dataset for all seen classes in C 1:t that can be used in the replay." }, { "figure_ref": [], "heading": "New Task Learning with Synthetic Data Replay", "publication_ref": [ "b45", "b8", "b8" ], "table_ref": [], "text": "When learning a new task-t, YONO+ also replays the synthetic dataset D ′ t generated from all previous tasks' prototypes p k , i.e.,\nD ′ t ≜ {(z ′ i , k) : k ∈ C 1:t-1 , z ′ i = R(p k , u) × v i , v i = [ã k,i , ϵ 2 , • • • , ϵ m ]} .(9)\nThe training objective for task-t with the replay of previous tasks' data synthesized from the stored prototypes is\nL t,C+ (θ, w) ≜ 1 n t k∈Ct i∈[nt]:yi=k arcface(z i , w, k) + 1 |D ′ t | (z,k)∈D ′ t arcface(z, w, k).(10)\nHence, the training objective L t (θ, w) of YONO+ at task-t combines the prototype-learning loss for task-t in Eq. 3 and the synthetic-data replay augmented loss in Eq. ( 10), i.e.,\nYONO+ : min θ,w L t (θ, w) = L t,P (θ) + L t,C+ (θ, w). 
(11\n)\n2.3 Practical Improvement to YONO/YONO+ Finally, we adopt the following techniques to further enhance the model performance.\nKnowledge Distillation. Following previous incremental learning methods [46,9], we apply knowledge distillation (KD) [9] when training F (•; θ) on the current task data x ∼ D t by minimizing the difference between F (x; θ) and the representations F (x; θ t-1 ) produced by previous task model θ t-1 , i.e.,\nL t,KD (θ) ≜ 1 n t i∈[nt] ∥F (x i ; θ) -F (x i ; θ t-1 )∥ 2 2 . (12\n)\nMinimizing the above KD loss aims at retaining the knowledge of the previous task's model. In YONO and YONO+, we can augment their objectives L t (θ, w) in Eq. ( 6) and Eq. ( 11) with L t,KD (θ).\nModel Interpolation. In addition to KD, we apply model interpolation to retain the knowledge of the previous model θ t-1 and avoid overfitting to the current task. Specifically, after learning task-t, we update the current θ t by the following interpolation between θ t-1 and θ t , i.e.,\nθ t ← (1 -β)θ t-1 + βθ t ,(13)\nwhere β ∈ [0, 1] and we set β = 0.6 in experiments. Since θ t is mainly trained on task-t, such simple interpolation between θ t-1 and θ t leads to a more balanced performance on all tasks.\n\"Partial Freezing\" of Classifier. Each row w k in the classifier parameter w corresponds to a class k ∈ C 1:t . Since the current task-t mainly focuses on classes k ∈ C t , we apply a much smaller learning rate η ′ ≪ η (η is the learning rate for other parameters) to w k associated with previous classes to avoid significant drift of their classifier parameters, i.e.,\nw k ← w k -η ′ ∇ w k L t (θ, w), ∀k ∈ C 1:t-1(14)\nWe provide the complete procedure of YONO and YONO+ in Algorithm 1." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [ "b10", "b38", "b12", "b45", "b46", "b44", "b19", "b22", "b34", "b41", "b22", "b22", "b0" ], "table_ref": [], "text": "In this section, we first evaluate the performance of the proposed YONO and YONO+ on CIFAR-100 [11] and TinyImageNet [39]. Then we evaluate the quality of synthetic data generated from memorized prototypes. Finally, we do ablation studies to explore the impact of main components and certain hyperparameters on model performance. The detailed model configurations and hyperparameter settings are presented in Appendix 3.1.\nBaselines. We compare the proposed YONO and YONO+ with non-exemplar-based methods, including LwF [13], PASS [46], SSRE [47], IL2A [45], and FeTrIL [20]. We also compare them with some exemplar-based methods, such as iCaRL [23], BiC [35], and WA [42]. Following prior work [23], we respectively report the results of CNN predictions (i.e., iCaRL-CNN) and nearest-mean-of-exemplars classification (i.e., iCaRL-NME) for the iCaRL. We measure the performance of different methods with two commonly used metrics in IL: average accuracy [23] and average forgetting [1]." }, { "figure_ref": [], "heading": "Model Configurations and Hyper-parameter Settings", "publication_ref": [ "b18", "b42", "b7", "b27", "b45", "b22", "b8", "b22" ], "table_ref": [], "text": "We implement the proposed methods in PyTorch [19] and run the baselines using PyCIL [43], which is a well-known toolbox for CIL. In the experiments, we train the ResNet-18 [8] from scratch using the SGD [28] optimizer with an initial learning rate of 0.01. Then the learning rate is multiplied by 0.1 per 20 epochs. The weights for prototype learning, classification and KD loss in YONO and YONO+ are 1, 1 and 30, respectively. 
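Before the experimental details, here is a small, hedged sketch of the prototype-based data synthesis described in Section 2.2.1 (Eq. (7)-(8)). The plane rotation that maps u = [1, 0, …, 0] onto a prototype is one simple construction consistent with the text (the exact matrix used by the authors is not specified here), and mu/sigma stand for per-class statistics of the cosine similarities ā_{k,i}, which are assumed to be recorded when the prototype is learned; all names are illustrative.

```python
import torch

def rotate_u_to_p(p: torch.Tensor) -> torch.Tensor:
    """Return an orthogonal matrix R with R @ u = p, where u = [1, 0, ..., 0].

    R acts only inside the 2-D plane spanned by u and p and leaves the
    orthogonal complement untouched (one simple choice of such a map).
    """
    d = p.shape[0]
    p = p / p.norm()
    u = torch.zeros(d)
    u[0] = 1.0
    cos_t = torch.dot(u, p)
    w = p - cos_t * u                      # component of p orthogonal to u
    sin_t = w.norm()
    if sin_t < 1e-8:                       # p is (anti-)parallel to u
        return torch.eye(d) * torch.sign(cos_t)
    w = w / sin_t
    return (torch.eye(d)
            + sin_t * (torch.outer(w, u) - torch.outer(u, w))
            + (cos_t - 1.0) * (torch.outer(u, u) + torch.outer(w, w)))

def synthesize(p: torch.Tensor, mu: float, sigma: float, n: int = 32,
               kappa: float = 1.96) -> torch.Tensor:
    """Draw n synthetic unit-norm representations around prototype p (Eq. (7)-(8))."""
    d = p.shape[0]
    R = rotate_u_to_p(p)
    # First coordinate ~ Gaussian clipped to [mu - kappa*sigma, mu + kappa*sigma]
    # (clipping is a simple stand-in for exact truncated-normal sampling).
    a = torch.clamp(mu + sigma * torch.randn(n), mu - kappa * sigma, mu + kappa * sigma)
    a = torch.clamp(a, -1.0 + 1e-6, 1.0 - 1e-6)
    eps = torch.randn(n, d - 1)
    eps = eps * (torch.sqrt(1.0 - a**2) / eps.norm(dim=1)).unsqueeze(1)  # so that ||v_i|| = 1
    v = torch.cat([a.unsqueeze(1), eps], dim=1)
    return v @ R.T                          # rows are z'_i = R v_i
```

Because R is orthogonal and each v_i has unit norm, the cosine between z'_i and the prototype equals the sampled ã_{k,i}, mirroring the argument below Eq. (7).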
We train the model with batch size 256 for 60 epochs in each task. The experimental results are averaged over three random seeds. We run experiments with different incremental phase settings (i.e., 5 and 10 phases) under the zero-base setting. In each training step of Algorithm 1, if YONO+ is used, data synthesis creates a mini-batch of data (z ′ i , k) of previous classes from the prototypes {p k : k ∈ C1:t-1} by Eq. (7)-(8) and the loss Lt(θ, w) in Eq. (11) is computed on the two mini-batches; otherwise (YONO), a mini-batch of prototypes (p k , k) with k ∈ C1:t-1 is drawn and the loss Lt(θ, w) in Eq. (6) is computed on the two mini-batches. The feature extractor is then updated as θ ← θ -η∇ θ Lt(θ, w), the classifier is updated as w k ← w k -η∇ w k Lt(θ, w) for k ∈ Ct and w k ← w k -η ′ ∇ w k Lt(θ, w) for k ∈ C1:t-1, and model interpolation is applied as θ ← (1 -β)θ ′ + βθ. To form the task sequence, we evenly split the classes of each dataset into several tasks. Following prior work [46], the classes in each dataset are arranged in a fixed random order. As for the memory size of the exemplar-based approaches mentioned above, we use herd selection [23] to select exemplars of previous tasks under different settings. Specifically, we store 20 samples of each old class following the settings in [9,23]." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3" ], "heading": "Evaluation Results", "publication_ref": [], "table_ref": [], "text": "First, we compare the proposed two methods with the baselines on CIFAR-100 and TinyImageNet under different settings: 5 phases and 10 phases. As shown in Fig. 3, we can observe that both YONO and YONO+ outperform all the non-exemplar methods in terms of accuracy under the zero-base setting. In particular, YONO can achieve accuracy comparable to that of YONO+, which uses data synthesis from stored prototypes. The reason why both YONO and YONO+ outperform PASS is that our attentional mean-shift method can learn a compact prototype in a high-density region for each class, which reduces inter-class interference and thus mitigates forgetting. We also compare our approaches with some exemplar-based models. We can observe from Fig. 3 (d) that the proposed YONO and YONO+ outperform some exemplar-based methods on TinyImageNet.\nMoreover, we present a comparison of average accuracy and forgetting for different methods, as shown in Table 1. It can be seen that the proposed YONO and YONO+ achieve higher average accuracy than the other non-exemplar baselines. While SSRE and FeTrIL have lower average forgetting than our methods, their accuracy drops rapidly in the initial tasks, resulting in low accuracy in the final task, as shown in Figure 3. In reality, the lower forgetting in SSRE and FeTrIL is attributed to the sharp drop in accuracy in the early tasks and the limited learning progress in the subsequent tasks. Moreover, the proposed methods can even outperform some exemplar-based methods in terms of average accuracy and forgetting." }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "Evaluation on Quality of Synthetic Data", "publication_ref": [ "b32", "b3" ], "table_ref": [], "text": "Next, we evaluate the quality of synthetic data generated from stored prototypes in YONO+. In this experiment, we randomly choose task t from CIFAR-100 and TinyImageNet when training with 10 phases under the zero-base setting. Then we compare the synthetic data for each class from stored prototypes and the representations encoded by the extractor F (•; θ) after the training of task t. Following prior works [33,4], we use a two-layer MLP to map the high-dimensional representations into a 2D space for visualization, as shown in Fig. 4.
We can observe that the synthetic data generated from stored prototypes in Fig. 4 (a) and (c) form a compact cluster, whose distributions are very similar to those of " }, { "figure_ref": [ "fig_7", "fig_7" ], "heading": "Evaluation on Reliability of YONO+", "publication_ref": [], "table_ref": [], "text": "In addition, we evaluate the reliability for the proposed YONO+ and PASS in learning the representations from input data samples. Fig. 5 illustrates the distribution of representations encoded by the extractor on the CIFAR-100 under base-0 phase 10 setting for the first three tasks. Both YONO+ and PASS demonstrate effective maintenance of the decision boundary in the first task. However, in the subsequent tasks 2 and 3, YONO+ can still encode the input data into a representation within a certain boundary, whereas PASS cannot. In Fig. 5 (b),(c),(e) and (f), the light grey points represent the distribution of data from old tasks. We can see from (b) and (c) that our approach can still separate the old tasks from the current task, while PASS fails to distinguish between the distributions of data from previous tasks and the current one. This is because our attentional mean shift method can form a compact cluster for the samples in each class and also use synthetic data replay to constrain the boundary of old tasks. (f) PASS for Task 3 " }, { "figure_ref": [ "fig_8" ], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "Finally, we implement ablation studies to explore the impact of some hyper-parameters and components on the performance of our method.\nEffect of important components. We first study the impact of some important components, such as prototype, synthetic data replay, model interpolation (MI), on prediction accuracy. Fig. 6 illustrates the comparison results on CIFAR-100 under 10 phases setting. We can observe from it that the saved prototype plays an important role in our proposed methods. The prediction accuracy will drop a lot without it. Additionally, YONO+ with synthetic data replay can slightly improve the prediction accuracy compared to YONO. Besides, MI can help improve model performance since it can retain the knowledge of prior model while ensuring the good performance of the current model. Effect of important hyperparameters. We also explore the impact of some hyperparameters, such as margin penalty δ in Arcface (Eq. ( 4)), λ in attentional mean-shift method, β for model interpolation in Eq. ( 13) and η ′ for \"partial freezing\" of classifier in Eq. ( 14), on the prediction accuracy of our method. The detailed experimental results are presented in Appendix A." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b12", "b21", "b9", "b8", "b25", "b28", "b31", "b12", "b20", "b4", "b23", "b14", "b33", "b2", "b26", "b34", "b22", "b4", "b23", "b1", "b11", "b24", "b29", "b36", "b15", "b29", "b39", "b37", "b13", "b17", "b45" ], "table_ref": [], "text": "Regularization-based method. It aims to alleviate catastrophic forgetting by introducing additional regularization terms to correct the gradients and protect the old knowledge learned by the model [13,22,10,9]. For example, some works [26,29,32] adopt weights regularization to regularize the variation of model parameters. However, it is very hard to design reasonable and reliable metrics to measure the importance of model parameters. Others [13,21,5] mainly adopt KD to transfer the output of prediction function from previously-learned tasks to the current task. 
However, these methods need to replay some old data samples.\nReplay-based method. The replay-based methods mainly include experience replay [24,15] and generative replay [34,3,27]. The former approach stores some old training examples within a memory buffer while the latter mainly uses generative models to generate data samples of old classes. For experience replay, some approaches mainly adopt rehearsal samples with knowledge distillation [35,23,5] and some [24,2] apply regularization on the gradients for the sake of using the rehearsal samples more sufficiently. While these methods can mitigate catastrophic forgetting, they will suffer from data privacy issues and need a large-size memory. In order to mitigate privacy issues, some works [12,25] adopt deep generative models to generate pseudo samples of previous tasks. Nevertheless, these methods either suffer from the instability of generative models or still need to store some old examples.\nArchitecture-based method. This approach can be divided into parameters isolation [30,37] and dynamic architecture. Parameters isolation methods adopt individual parameters for each task, thus they need a large memory to store the extended network for each previous task during training [16,30,40,38]. In addition, the architecture-based methods [14,18] dynamically expand the network if its capacity is not large enough for new tasks. Those methods can achieve remarkable performance, but they are not applicable to a large number of tasks.\nMost Relevant Work. The work closely related to ours is PASS [46], which saves one prototype for each class and then augments the prototype via Gaussian noise for model training. This method can save the storage of memory buffers and maintain the decision boundary of previous tasks to some degree. However, PASS uses the mean value of representations in each class as a prototype, which cannot represent the centroid of a cluster exactly. As a result, its prediction performance degrades after prototype augmentation since some representations of old classes may overlap with other similar classes. In contrast, we try to learn a more representative prototype from data samples in each class so that the class margins can be maximized, thereby mitigating forgetting." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we developed two non-exemplar-based methods, YONO and YONO+, for class-incremental learning. Specifically, YONO only needs to store and replay one prototype for each class without generating synthetic data from stored prototypes. As an extension of YONO, YONO+ proposed to create synthetic replay data from stored prototypes via a high-dimensional rotation matrix and Gaussian noise. The evaluation results on multiple benchmarks demonstrated that both YONO and YONO+ can significantly outperform the baselines in terms of accuracy and average forgetting. In particular, the proposed YONO achieved comparable performance to YONO+ with data synthesis. Importantly, this work offered a new perspective of optimizing class prototypes for exemplar-free incremental learning." }, { "figure_ref": [], "heading": "A More Ablation Studies", "publication_ref": [], "table_ref": [], "text": "We conduct more ablation studies to explore some important hyper-parameters on model performance.\nA.1 Effect of λ in mean-shift method Table. 2 shows the impact of hyper-parameter λ in mean-shift on prototype learning using CIFAR100. 
We can observe from it that as λ increases from 0.3 to 0.9, the prediction accuracy of YONO+ almost remains the same. Thus, we can conclude that the proposed method is not sensitive to λ, which controls the step size in the attentional mean-shift algorithm. We choose λ = 0.6 in our experiments. We also investigate the effect of the margin penalty δ in the Arcface loss on the prediction accuracy of our method. It can be observed from Fig. 7 (a) that as δ increases from 0.15 to 0.45, the average accuracy drops once δ > 0.25. Therefore, δ should be limited to no more than 0.25. Accordingly, we choose δ = 0.25 in our experiments. " }, { "figure_ref": [], "heading": "A.3 Effect of parameter β in model interpolation", "publication_ref": [], "table_ref": [], "text": "In addition, we investigate the influence of the hyperparameter β in model interpolation on the prediction accuracy, as shown in Fig. 7 (b). It can be seen that when β = 0.6, the proposed method has the best performance. When β increases from 0.6 to 0.9, the prediction accuracy gradually drops." }, { "figure_ref": [], "heading": "A.4 Effect of small learning rate η ′", "publication_ref": [], "table_ref": [], "text": "Moreover, we study the effect of the learning rate η ′ in the \"Partial Freezing\" of the classifier on the model performance.\nAs illustrated in Fig. 7 (c), it can be observed that when η ′ is 0 or larger than 0.005, the prediction accuracy drops significantly. When η ′ is a very small value, such as 0.001, the proposed method has the best performance. Hence, we choose η ′ = 0.001 in our experiments." }, { "figure_ref": [], "heading": "B Evaluation on Half-base Setting", "publication_ref": [], "table_ref": [], "text": "We also compare the proposed methods with the baselines under the half-base setting with 5 and 10 phases. Fig. 8 illustrates the accuracy comparison of different approaches on CIFAR-100 and TinyImageNet using three random seeds. We can observe that the proposed YONO and YONO+ outperform the non-exemplar-based methods on both CIFAR-100 and TinyImageNet under different settings, except for one special scenario with 10 phases where FeTrIL has slightly higher accuracy than our methods on CIFAR-100. In addition, both YONO and YONO+ can achieve higher accuracy than most exemplar-based methods on TinyImageNet. " } ]
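Since Appendices A.3 and A.4 study the interpolation coefficient β and the small learning rate η ′, the following PyTorch-style sketch shows one way the corresponding mechanisms of Eq. (13)-(14) can be realized. The gradient-hook trick for "partial freezing" is an assumption for illustration, not necessarily the authors' implementation, and prev_model, curr_model, and classifier_weight are placeholder names.

```python
import torch

@torch.no_grad()
def interpolate_models(prev_model, curr_model, beta: float = 0.6) -> None:
    """Model interpolation of Eq. (13): theta_t <- (1 - beta) * theta_{t-1} + beta * theta_t."""
    for p_prev, p_curr in zip(prev_model.parameters(), curr_model.parameters()):
        p_curr.mul_(beta).add_(p_prev, alpha=1.0 - beta)

def partially_freeze_classifier(classifier_weight: torch.Tensor, old_class_ids,
                                lr: float = 0.01, lr_old: float = 0.001) -> None:
    """Approximate the per-row learning rate of Eq. (14) with a gradient hook.

    Rows of the classifier weight belonging to previous classes get their
    gradients rescaled by lr_old / lr, which with vanilla SGD matches using
    the smaller learning rate eta' for those rows.
    """
    scale = lr_old / lr
    mask = torch.ones_like(classifier_weight)
    mask[old_class_ids] = scale            # shrink updates of old-class rows w_k
    classifier_weight.register_hook(lambda g: g * mask)
```

With plain SGD this gradient rescaling is equivalent to using the smaller learning rate η ′ for the old-class rows; with momentum or weight decay it is only an approximation.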
2023-05-25
[ { "authors": "A Chaudhry; P K Dokania; T Ajanthan; P H Torr", "journal": "", "ref_id": "b0", "title": "Riemannian walk for incremental learning: Understanding forgetting and intransigence", "year": "2018" }, { "authors": "A Chaudhry; M Ranzato; M Rohrbach; M Elhoseiny", "journal": "", "ref_id": "b1", "title": "Efficient lifelong learning with a-gem", "year": "2018" }, { "authors": "Y Cong; M Zhao; J Li; S Wang; L Carin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b2", "title": "Gan memory with no forgetting", "year": "2020" }, { "authors": "J Deng; J Guo; N Xue; S Zafeiriou", "journal": "", "ref_id": "b3", "title": "Arcface: Additive angular margin loss for deep face recognition", "year": "2019" }, { "authors": "A Douillard; M Cord; C Ollion; T Robert; E Valle", "journal": "Springer", "ref_id": "b4", "title": "Podnet: Pooled outputs distillation for small-tasks incremental learning", "year": "2020" }, { "authors": "A Douillard; A Ramé; G Couairon; M Cord", "journal": "", "ref_id": "b5", "title": "Dytox: Transformers for continual learning with dynamic token expansion", "year": "2022" }, { "authors": "A Gepperth; B Hammer", "journal": "", "ref_id": "b6", "title": "Incremental learning algorithms and applications", "year": "2016" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b7", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "S Hou; X Pan; C C Loy; Z Wang; D Lin", "journal": "", "ref_id": "b8", "title": "Learning a unified classifier incrementally via rebalancing", "year": "2019" }, { "authors": "J Kirkpatrick; R Pascanu; N Rabinowitz; J Veness; G Desjardins; A A Rusu; K Milan; J Quan; T Ramalho; A Grabska-Barwinska", "journal": "Proceedings of the national academy of sciences", "ref_id": "b9", "title": "Overcoming catastrophic forgetting in neural networks", "year": "2017" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b10", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "T Lesort; H Caselles-Dupré; M Garcia-Ortiz; A Stoian; D Filliat", "journal": "IEEE", "ref_id": "b11", "title": "Generative models from the perspective of continual learning", "year": "2019" }, { "authors": "Z Li; D Hoiem", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b12", "title": "Learning without forgetting", "year": "2017" }, { "authors": "Z Li; M Meng; Y He; Y Liao", "journal": "Springer", "ref_id": "b13", "title": "Continual learning with laplace operator based node-importance dynamic architecture neural network", "year": "2021" }, { "authors": "D Lopez-Paz; M Ranzato", "journal": "Advances in neural information processing systems", "ref_id": "b14", "title": "Gradient episodic memory for continual learning", "year": "2017" }, { "authors": "A Mallya; S Lazebnik", "journal": "", "ref_id": "b15", "title": "Packnet: Adding multiple tasks to a single network by iterative pruning", "year": "2018" }, { "authors": "M Mccloskey; N J Cohen", "journal": "Elsevier", "ref_id": "b16", "title": "Catastrophic interference in connectionist networks: The sequential learning problem", "year": "1989" }, { "authors": "O Ostapenko; M Puscas; T Klein; P Jahnichen; M Nabi", "journal": "", "ref_id": "b17", "title": "Learning to remember: A synaptic plasticity driven framework for continual learning", "year": "2019" }, { "authors": "A Paszke; S Gross; S Chintala; G Chanan; E Yang; Z Devito; Z Lin; A Desmaison; L Antiga; A Lerer", 
"journal": "", "ref_id": "b18", "title": "Automatic differentiation in pytorch", "year": "2017" }, { "authors": "G Petit; A Popescu; H Schindler; D Picard; B Delezoide", "journal": "", "ref_id": "b19", "title": "Fetril: Feature translation for exemplar-free class-incremental learning", "year": "2023" }, { "authors": "Q Pham; C Liu; S Hoi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b20", "title": "Dualnet: Continual learning, fast and slow", "year": "2021" }, { "authors": "A Rannen; R Aljundi; M B Blaschko; T Tuytelaars", "journal": "", "ref_id": "b21", "title": "Encoder based lifelong learning", "year": "2017" }, { "authors": "S.-A Rebuffi; A Kolesnikov; G Sperl; C H Lampert", "journal": "", "ref_id": "b22", "title": "icarl: Incremental classifier and representation learning", "year": "2017" }, { "authors": "M Riemer; I Cases; R Ajemian; M Liu; I Rish; Y Tu; G Tesauro", "journal": "", "ref_id": "b23", "title": "Learning to learn without forgetting by maximizing transfer and minimizing interference", "year": "2018" }, { "authors": "A Rios; L Itti", "journal": "", "ref_id": "b24", "title": "Closed-loop memory gan for continual learning", "year": "2018" }, { "authors": "R Roady; T L Hayes; H Vaidya; C Kanan", "journal": "", "ref_id": "b25", "title": "Stream-51: Streaming classification and novelty detection from videos", "year": "2020" }, { "authors": "M Rostami; S Kolouri; P K Pilly", "journal": "", "ref_id": "b26", "title": "Complementary learning for overcoming catastrophic forgetting using experience replay", "year": "2019" }, { "authors": "S Ruder", "journal": "", "ref_id": "b27", "title": "An overview of gradient descent optimization algorithms", "year": "2016" }, { "authors": "J Schwarz; W Czarnecki; J Luketina; A Grabska-Barwinska; Y W Teh; R Pascanu; R Hadsell", "journal": "PMLR", "ref_id": "b28", "title": "Progress & compress: A scalable framework for continual learning", "year": "2018" }, { "authors": "J Serra; D Suris; M Miron; A Karatzoglou", "journal": "PMLR", "ref_id": "b29", "title": "Overcoming catastrophic forgetting with hard attention to the task", "year": "2018" }, { "authors": "P Veličković; G Cucurull; A Casanova; A Romero; P Lio; Y Bengio", "journal": "", "ref_id": "b30", "title": "Graph attention networks", "year": "2017" }, { "authors": "L Wang; K Yang; C Li; L Hong; Z Li; J Zhu", "journal": "", "ref_id": "b31", "title": "Ordisco: Effective and efficient usage of incremental unlabeled data for semi-supervised continual learning", "year": "2021" }, { "authors": "Y Wen; K Zhang; Z Li; Y Qiao", "journal": "Springer", "ref_id": "b32", "title": "A discriminative feature learning approach for deep face recognition", "year": "2016" }, { "authors": "C Wu; L Herranz; X Liu; J Van De Weijer; B Raducanu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b33", "title": "Memory replay gans: Learning to generate new categories without forgetting", "year": "2018" }, { "authors": "Y Wu; Y Chen; L Wang; Y Ye; Z Liu; Y Guo; Y Fu", "journal": "", "ref_id": "b34", "title": "Large scale incremental learning", "year": "2019" }, { "authors": "J Xie; S Yan; X He", "journal": "", "ref_id": "b35", "title": "General incremental learning with domain-aware categorical representations", "year": "2022" }, { "authors": "M Xue; H Zhang; J Song; M Song", "journal": "", "ref_id": "b36", "title": "Meta-attention for vit-backed continual learning", "year": "2022" }, { "authors": "S Yan; J Xie; X He", "journal": "", "ref_id": "b37", "title": 
"Der: Dynamically expandable representation for class incremental learning", "year": "2021" }, { "authors": "L Yao; J Miller", "journal": "CS", "ref_id": "b38", "title": "Tiny imagenet classification with convolutional neural networks", "year": "2015" }, { "authors": "J Yoon; E Yang; J Lee; S J Hwang", "journal": "", "ref_id": "b39", "title": "Lifelong learning with dynamically expandable networks", "year": "2017" }, { "authors": "L Yu; B Twardowski; X Liu; L Herranz; K Wang; Y Cheng; S Jui; J V D Weijer", "journal": "", "ref_id": "b40", "title": "Semantic drift compensation for class-incremental learning", "year": "2020" }, { "authors": "B Zhao; X Xiao; G Gan; B Zhang; S.-T Xia", "journal": "", "ref_id": "b41", "title": "Maintaining discrimination and fairness in class incremental learning", "year": "2020" }, { "authors": "D.-W Zhou; F.-Y Wang; H.-J Ye; D.-C Zhan", "journal": "", "ref_id": "b42", "title": "Pycil: A python toolbox for class-incremental learning", "year": "2021" }, { "authors": "D.-W Zhou; Q.-W Wang; H.-J Ye; D.-C Zhan", "journal": "", "ref_id": "b43", "title": "A model or 603 exemplars: Towards memory-efficient class-incremental learning", "year": "2022" }, { "authors": "F Zhu; Z Cheng; X.-Y Zhang; C.-L Liu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b44", "title": "Class-incremental learning via dual augmentation", "year": "2021" }, { "authors": "F Zhu; X.-Y Zhang; C Wang; F Yin; C.-L Liu", "journal": "", "ref_id": "b45", "title": "Prototype augmentation and self-supervision for incremental learning", "year": "2021" }, { "authors": "K Zhu; W Zhai; Y Cao; J Luo; Z.-J Zha", "journal": "", "ref_id": "b46", "title": "Self-sustaining representation expansion for non-exemplar classincremental learning", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 183.56, 442.44, 357.11, 27.27 ], "formula_id": "formula_0", "formula_text": "p_k ← (1 - λ) p_k + λ Σ_{i∈[n_t]: y_i=k} a_{k,i} · z_i / ∥z_i∥_2 ,  p_k ← p_k / ∥p_k∥_2 ,  (1)" },
{ "formula_coordinates": [ 3, 141.04, 519.65, 399.63, 23.23 ], "formula_id": "formula_1", "formula_text": "a_k ≜ softmax(ā_k),  ā_k ≜ [ā_{k,1}, ⋯, ā_{k,n_t}],  ā_{k,i} = c(z_i, p_k) ≜ ⟨z_i, p_k⟩ / (∥z_i∥_2 · ∥p_k∥_2).  (2)" },
{ "formula_coordinates": [ 3, 211.32, 643.34, 329.35, 27.27 ], "formula_id": "formula_2", "formula_text": "L_{t,P}(θ) ≜ (1/n_t) Σ_{k∈C_t} Σ_{i∈[n_t]: y_i=k} arcface(z_i, p, k).  (3)" },
{ "formula_coordinates": [ 3, 135.04, 698.68, 405.63, 26.29 ], "formula_id": "formula_3", "formula_text": "arcface(z, p, k) = -log [ exp(cos[c^{-1}(z, p_k) + δ]/τ) / ( exp(cos[c^{-1}(z, p_k) + δ]/τ) + Σ_{l∈C_{1:t}, l≠k} exp(c(z, p_l)/τ) ) ],  (4)" },
{ "formula_coordinates": [ 4, 127.65, 173.81, 413.02, 27.27 ], "formula_id": "formula_4", "formula_text": "L_{t,C}(θ, w) ≜ (1/n_t) Σ_{k∈C_t} Σ_{i∈[n_t]: y_i=k} arcface(z_i, w, k) + (1/|C_{1:t-1}|) Σ_{k∈C_{1:t-1}} arcface(p_k, w, k).  (5)" },
{ "formula_coordinates": [ 4, 251.35, 240.43, 289.32, 14.66 ], "formula_id": "formula_5", "formula_text": "min_{θ,w} L_t(θ, w) = L_{t,P}(θ) + L_{t,C}(θ, w).  (6)" },
{ "formula_coordinates": [ 4, 182.76, 495.61, 357.91, 9.68 ], "formula_id": "formula_6", "formula_text": "v_i = [ã_{k,i}, ϵ_2, ⋯, ϵ_m],  ã_{k,i} ∼ TN(µ, σ, µ - κσ, µ + κσ),  (7)" },
{ "formula_coordinates": [ 4, 194.39, 536.26, 129.99, 11.64 ], "formula_id": "formula_7", "formula_text": "ϵ_i ← √(1 - (ã_{k,i} + ϵ_1)^2) / √(Σ_{i=2}^{m} ϵ_i^2) · ϵ_i ." },
{ "formula_coordinates": [ 4, 72, 586.75, 468.67, 34.41 ], "formula_id": "formula_8", "formula_text": "z′_i from v_i. As p_k = R(p_k, u) × u, we can apply the same rotation matrix R(p_k, u) to v_i to achieve z′_i, i.e., z′_i = R(p_k, u) × v_i.  (8)" },
{ "formula_coordinates": [ 4, 156.78, 711.12, 383.89, 12.69 ], "formula_id": "formula_9", "formula_text": "D′_t ≜ {(z′_i, k) : k ∈ C_{1:t-1}, z′_i = R(p_k, u) × v_i, v_i = [ã_{k,i}, ϵ_2, ⋯, ϵ_m]} .  (9)" },
{ "formula_coordinates": [ 5, 135.45, 91.72, 405.22, 28.9 ], "formula_id": "formula_10", "formula_text": "L_{t,C+}(θ, w) ≜ (1/n_t) Σ_{k∈C_t} Σ_{i∈[n_t]: y_i=k} arcface(z_i, w, k) + (1/|D′_t|) Σ_{(z,k)∈D′_t} arcface(z, w, k).  (10)" },
{ "formula_coordinates": [ 5, 199.66, 158.11, 336.86, 14.66 ], "formula_id": "formula_11", "formula_text": "YONO+ : min_{θ,w} L_t(θ, w) = L_{t,P}(θ) + L_{t,C+}(θ, w).  (11" },
{ "formula_coordinates": [ 5, 536.52, 158.43, 4.15, 8.64 ], "formula_id": "formula_12", "formula_text": ")" },
{ "formula_coordinates": [ 5, 206.33, 263.48, 330.19, 27.27 ], "formula_id": "formula_13", "formula_text": "L_{t,KD}(θ) ≜ (1/n_t) Σ_{i∈[n_t]} ∥F(x_i; θ) - F(x_i; θ_{t-1})∥_2^2 .  (12" },
{ "formula_coordinates": [ 5, 536.52, 270.54, 4.15, 8.64 ], "formula_id": "formula_14", "formula_text": ")" },
{ "formula_coordinates": [ 5, 254.79, 367.09, 285.88, 9.65 ], "formula_id": "formula_15", "formula_text": "θ_t ← (1 - β) θ_{t-1} + β θ_t ,  (13)" },
{ "formula_coordinates": [ 5, 219.37, 449.93, 321.29, 12.39 ], "formula_id": "formula_16", "formula_text": "w_k ← w_k - η′ ∇_{w_k} L_t(θ, w), ∀k ∈ C_{1:t-1}  (14)" },
{ "formula_coordinates": [ 6, 61.44, 266.95, 279.84, 37.26 ], "formula_id": "formula_17", "formula_text": "Update classifier for k ∈ C_t: w_k ← w_k - η ∇_{w_k} L_t(θ, w); Update classifier for k ∈ C_{1:t-1}: w_k ← w_k - η′ ∇_{w_k} L_t(θ, w); Model interpolation: θ ← (1 - β) θ′ + β θ;" } ]
CONDENSED PROTOTYPE REPLAY FOR CLASS INCREMENTAL LEARNING
Incremental learning (IL) suffers from catastrophic forgetting of old tasks when learning new tasks. This can be addressed by replaying previous tasks' data stored in a memory, which however is usually prone to size limits and privacy leakage. Recent studies store only class centroids as prototypes and augment them with Gaussian noises to create synthetic data for replay. However, they cannot effectively avoid class interference near their margins that leads to forgetting. Moreover, the injected noises distort the rich structure between real data and prototypes, hence even detrimental to IL. In this paper, we propose YONO that You Only Need to replay One condensed prototype per class, which for the first time can even outperform memory-costly exemplar-replay methods. To this end, we develop a novel prototype learning method that (1) searches for more representative prototypes in high-density regions by an attentional mean-shift algorithm and (2) moves samples in each class to their prototype to form a compact cluster distant from other classes. Thereby, the class margins are maximized, which effectively reduces interference causing future forgetting. In addition, we extend YONO to YONO+, which creates synthetic replay data by random sampling in the neighborhood of each prototype in the representation space. We show that the synthetic data can further improve YONO. Extensive experiments on IL benchmarks demonstrate the advantages of YONO/YONO+ over existing IL methods in terms of both accuracy and forgetting.
Jiangtao Kong; Zhenyu Zong; Tianyi Zhou; Huajie Shao
[ { "figure_caption": "Figure 2 :2Figure 2: Framework of the proposed YONO and YONO+. YONO only needs to replay one stored prototype for each class while YONO+ is trained on synthetic data generated from stored prototypes. (a) Memory condensation learns a compact prototype for each class inspired by mean shift algorithm. (b) Data synthesis aims to generate the representations of old classes from stored prototypes using a m-dimensional space rotation matrix and Gaussian distribution.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 : 2 for epoch=1 → E do 3123YONO and YONO+ input :Training data D1:T with classes C1:T , epochs E, steps S, iterates R, learning rate η, η ′ , δ, β initialize :Memory M ← ∅, θ, w 1 for t = 1 → T do Compute features zi = F (xi; θ) for (xi, yi) ∈ Dt; 4 Compute prototype p k for every class k ∈ Ct by iterating Eq. (1) for R iterations; 5 Save prototypes: M ← M ∪ {(p k , k) : k ∈ Ct}; 6 for step=1 → S do Draw a mini-batch of data (xi, yi) ∼ Dt;", "figure_data": "", "figure_id": "fig_1", "figure_label": "123", "figure_type": "figure" }, { "figure_caption": "18Save current task model as θ ′ ← θ; output :Feature extractor F (•; θ) and classifier G(•; w)", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Accuracy comparison of different methods on CIFAR-100 and TinyImageNet under different settings. Solid lines represent non-exemplar-based approaches while the dashed lines denote exemplar-based methods.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Table 1 :1Average accuracy and forgetting of the proposed YONO and baselines on CIFAR-100 and TinyImageNet under different settings.\"b0-10\" means zero-base with 10 phases, \"b0-5\" means zero-base with 5 phases. Bold: the best among non-exemplar methods., Red: the second best among non-exemplar methods, and Blue: the best among exemplar-based methods Average Accuracy and Forgetting on CIFAR-100 and TinyImageNet Method CIFAR-Acc [%]↑ CIFAR-Fgt [%]↓ Tiny-Acc [%]↑ Tiny-Fgt [", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Visualization of the distribution of representations generated from stored prototypes and those encoded by the extractor in the representation space. \"-R\" means generated representations, \"-E\" means extracted representations.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visualization of the distribution of representations encoded by YONO+ and PASS on CIFAR-100 base-0 phase 10 setting. The lighter gray points in \"Task 2\" and \"Task 3\" represent the distribution of the previous tasks' data.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Ablation study of different components in YONO+. 
\"w/o P\" means without prototype, \"w/o MI\" means without Model Interpolation.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "𝒕𝒕 𝜽𝜽, 𝒘𝒘 = 𝑳𝑳 𝒕𝒕,𝒕𝒕 𝜽𝜽 + 𝑳𝑳 𝒕𝒕,𝑪𝑪 𝜽𝜽, 𝒘𝒘 (𝒛𝒛 𝑫𝑫 , 𝑩𝑩) 𝒕𝒕𝑴𝑴𝑴𝑴𝒕𝒕𝑴𝑴𝒕𝒕𝑴𝑴𝒕𝒕𝑴𝑴𝒕𝒕: 𝒕𝒕 𝑩𝑩 , 𝑩𝑩 ∈ 𝑪𝑪 𝟏𝟏:𝒕𝒕 𝒕𝒕 𝜽𝜽, 𝒘𝒘 = 𝑳𝑳 𝒕𝒕,𝒕𝒕 𝜽𝜽 + 𝑳𝑳 𝒕𝒕,𝑪𝑪+ 𝜽𝜽, 𝒘𝒘", "figure_data": "Recoverable Memory Bank𝑫𝑫𝒕𝒕𝒕𝒕𝒕𝒕𝒕𝒕 𝒕𝒕𝑴𝑴𝑴𝑴𝒕𝒕𝑴𝑴𝒕𝒕𝑴𝑴𝒕𝒕𝑴𝑴𝒕𝒕 𝑴𝑴𝑴𝑴𝑴𝑴𝑴𝑴𝑴𝑴𝑴𝑴 𝑴𝑴𝒐𝒐Data Synthesis(𝒃𝒃)𝐃𝐃𝐃𝐃𝐃𝐃𝐃𝐃 𝐒𝐒𝐒𝐒𝐒𝐒𝐃𝐃𝐒𝐒𝐒𝐒𝐒𝐒𝐒𝐒𝐒𝐒Task 𝟏𝟏: Task 𝟐𝟐: 在此处键入公式。 𝑪𝑪 𝟏𝟏 𝑪𝑪 𝟐𝟐 Task 𝒕𝒕 -𝟏𝟏: ⋮ ⋮ 𝑪𝑪 𝒕𝒕-𝟏𝟏 Task 𝒕𝒕: 𝑪𝑪 𝒕𝒕No𝑝𝑝 𝑘𝑘 , 𝑘𝑘𝑧𝑧 𝑖𝑖 ′ , 𝑘𝑘 ⋮𝑪𝑪 𝟏𝟏:𝒕𝒕-𝟏𝟏⋮𝑳𝑳 𝒕𝒕,𝑪𝑪+ (𝜽𝜽, 𝒘𝒘)1 𝑅𝑅 𝑘𝑘,𝑖𝑖 � { {𝑢𝑢, 𝒗𝒗 𝑫𝑫 } {𝑝𝑝 3 , 𝒛𝒛 𝑖𝑖 ′ } {𝑝𝑝 2 , 𝒛𝒛 𝑖𝑖 ′ = ℛ(𝑝𝑝 𝑘𝑘 , 𝑢𝑢) × 𝑐𝑐 𝑖𝑖 1 ′ } {𝑝𝑝 1 , 𝒛𝒛 𝑖𝑖 1 𝒛𝒛 𝑖𝑖 ′ },� 𝑅𝑅 𝑘𝑘,𝑖𝑖⇒ 1 } 𝒗𝒗 𝑫𝑫𝒕𝒕 𝑩𝑩 , 𝑩𝑩 ∈ 𝑪𝑪 𝒕𝒕𝑪𝑪 𝒕𝒕𝑳𝑳 𝒕𝒕,𝑪𝑪 (𝜽𝜽, 𝒘𝒘)(𝒕𝒕)𝐌𝐌𝐒𝐒𝐌𝐌𝐌𝐌𝐌𝐌𝐒𝐒 𝐂𝐂𝐌𝐌𝐒𝐒𝐂𝐂𝐒𝐒𝐒𝐒𝐒𝐒𝐃𝐃𝐃𝐃𝐒𝐒𝐌𝐌𝐒𝐒𝑳𝑳 𝒕𝒕,𝒕𝒕 𝜽𝜽ClassifierMemory Condensation𝑮𝑮(. ; 𝒘𝒘)𝑧𝑧 5 𝑅𝑅 𝑘𝑘,51Current Data (𝓓𝓓 𝒕𝒕 )𝑭𝑭(. ; 𝜽𝜽)Features 𝒛𝒛 𝑫𝑫 = 𝑭𝑭(𝒙𝒙 𝑫𝑫 ; 𝜽𝜽) (𝒕𝒕)min 𝜽𝜽,𝒘𝒘 min 𝜽𝜽,𝒘𝒘YONO: YONO+:𝑧𝑧 4𝑅𝑅 𝑘𝑘,4 𝑧𝑧 3𝑅𝑅 𝑘𝑘,3𝒕𝒕 𝑩𝑩 𝑅𝑅 𝑘𝑘,2 𝑧𝑧 2𝑅𝑅 𝑘𝑘,1 𝑧𝑧 1𝒕𝒕 𝑩𝑩", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Hence, we can conclude that YONO+ can generate high-quality synthetic data to improve the model performance.", "figure_data": "10iCaRL-CNN [23] 64.2959.5442.9151.1151.05 38.75 51.85 59.06ExemplariCaRL-NME [23] 69.55 BiC [35] 66.5765.03 56.3422.82 10.3831.71 17.0456.34 45.24 33.41 40.57 60.20 42.12 17.32 20.92WA [42]68.0264.4421.3428.9160.21 45.92 20.46 32.17LwF [13]58.1547.4343.8051.8049.58 43.76 45.79 54.40SSRE [47]58.0546.5815.4412.1346.74 38.47 16.25 19.94IL2A [45]59.7841.9626.9425.0740.39 31.72 20.89 26.10Non-ExpFeTrIL [20] PASS [46]61.41 60.3348.61 51.9418.88 23.6616.14 18.7853.32 43.57 14.69 13.64 45.91 40.15 18.00 16.69PASS w/o Aug59.0255.5228.1129.5548.24 42.71 24.01 26.00YONO (Ours)65.3057.6624.7317.6057.23 54.71 14.50 27.33YONO+ (Ours)66.6460.9822.2616.8758.55 58.42 14.08 22.66", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work introduces the concept of catastrophic forgetting in deep neural networks, which the citing paper builds upon to discuss the challenges in IL and CIL."}, {"Category": "Extension or Continuation", "Citation": "[35,7,6,36]", "Explanation": "The cited works are related to IL and the citing paper further extends the research in this area by exploring new methods to address the issue of catastrophic forgetting in IL."}, {"Category": "Data Source", "Citation": "[26,3,32,37]", "Explanation": "The cited works provide data and methods that the citing paper utilizes in its research on alleviating catastrophic forgetting in IL."}, {"Category": "Extension or Continuation", "Citation": "[46,44,47]", "Explanation": "The cited works are related to CIL and the citing paper further extends the research in this area by discussing the challenges and methods for identifying all the previously learned classes in CIL."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work proposes a regularization loss to mitigate catastrophic forgetting in non-exemplar-based methods, which the citing paper adopts in their research to improve the performance of their methods in CIL."}, {"Category": "Methodological Basis", "Citation": "[16]", "Explanation": "The cited work focuses on building large network structures in non-exemplar-based methods, which the citing paper uses to design their own methods for improving the performance in CIL."}, {"Category": "Extension or Continuation", "Citation": "[46]", "Explanation": "The cited work proposes PASS and SSRE to store and use prototypes for each old class in CIL, which the citing paper extends by exploring the use of prototype augmentation via Gaussian noise to improve the performance in this task."}, {"Category": "Methodological Basis", "Citation": "[31]", "Explanation": "The cited work introduces the concept of graph attention, which the citing paper adopts to move the class prototype towards a high-density region in the representation space during the learning of task-t."}, {"Category": "Methodological Basis", "Citation": "[4]", "Explanation": "The cited work introduces the Arcface loss function, which the citing paper adopts in their research to minimize the distance between class prototypes and samples in the representation model F (\u2022; \u03b8). This function is used to train the model and improve the performance of the representation model."}, {"Category": "Methodological Basis", "Citation": "( 10), i.e., YONO+: min \u03b8,w L t (\u03b8, w) = L t,P (\u03b8) + L t,C+ (\u03b8, w). 
(11)", "Explanation": "The cited work introduces the YONO+ model, which the citing paper adopts as a method for training the F (\u2022; \u03b8) function on the current task data by minimizing the difference between the representations produced by the previous task model and the current task model."}, {"Category": "Supporting Evidence", "Citation": "[11]", "Explanation": "The cited work, CIFAR-100, serves as a dataset for evaluating the performance of the proposed YONO and YONO+ methods in the citing paper."}, {"Category": "Supporting Evidence", "Citation": "[39]", "Explanation": "The cited work, TinyImageNet, is used as a dataset to evaluate the performance of the proposed YONO and YONO+ methods in the citing paper."}, {"Category": "Extension or Continuation", "Citation": "[13]", "Explanation": "The cited work, LwF, is compared with the proposed YONO and YONO+ methods in the citing paper, indicating an extension of the research on exemplar-based methods in the field of image learning."}, {"Category": "Extension or Continuation", "Citation": "[46]", "Explanation": "The cited work, PASS, is compared with the proposed YONO and YONO+ methods in the citing paper, showing an extension of the research on non-exemplar-based methods in the field of image learning."}, {"Category": "Extension or Continuation", "Citation": "[47]", "Explanation": "The cited work, SSRE, is compared with the proposed YONO and YONO+ methods in the citing paper, indicating an extension of the research on non-exemplar-based methods in the field of image learning."}, {"Category": "Extension or Continuation", "Citation": "[45]", "Explanation": "The cited work, IL2A, is compared with the proposed YONO and YONO+ methods in the citing paper, showing an extension of the research on non-exemplar-based methods in the field of image learning."}, {"Category": "Extension or Continuation", "Citation": "[20]", "Explanation": "The cited work, FeTrIL, is compared with the proposed YONO and YONO+ methods in the citing paper, indicating an extension of the research on non-exemplar-based methods in the field of image learning."}, {"Category": "Extension or Continuation", "Citation": "[23]", "Explanation": "The cited work, iCaRL, is compared with the proposed YONO and YONO+ methods in the citing paper, showing an extension of the research on exemplar-based methods in the field of image learning."}, {"Category": "Extension or Continuation", "Citation": "[35]", "Explanation": "The cited work, BiC, is compared with the proposed YONO and YONO+ methods in the citing paper, indicating an extension of the research on exemplar-based methods in the field of image learning."}, {"Category": "Extension or Continuation", "Citation": "[42]", "Explanation": "The cited work, WA, is compared with the proposed YONO and YONO+ methods in the citing paper, showing an extension of the research on exemplar-based methods in the field of image learning."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work, ResNet-18, serves as the model architecture used in the experiments conducted in the citing paper to train the model from scratch."}, {"Category": "Data Source", "Citation": "[19]", "Explanation": "The cited work, PyTorch, is the platform used to implement the proposed methods in the PyTorch framework, indicating a reliance on external data and pre-existing models for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work, SGD optimizer, is the method 
used to train the model in the experiments conducted in the citing paper, providing a methodological basis for the study."}, {"Category": "Methodological Basis", "Citation": "[43]", "Explanation": "The cited work, PyCIL toolbox, is a well-known tool used in the experiments to run the baselines, indicating a reliance on external methods and tools for the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "( 11)", "Explanation": "The cited work provides a method for computing the loss function in Eq. (6) on two mini-batches, which the citing paper adopts in their research on updating the feature extractor."}, {"Category": "Data Source", "Citation": "on the two mini-batches", "Explanation": "The cited work provides a data source in the form of two mini-batches, which the citing paper uses in their research on updating the feature extractor."}, {"Category": "Methodological Basis", "Citation": "Compute loss Lt(\u03b8, w) in Eq. ( 6) on the two mini-batches", "Explanation": "The cited work provides a method for computing the loss function in Eq. (6) on the two mini-batches, which the citing paper adopts in their research on updating the feature extractor."}, {"Category": "Methodological Basis", "Citation": "Update feature extractor: \u03b8 \u2190 \u03b8 -\u03b7\u2207 \u03b8 Lt(\u03b8, w);", "Explanation": "The cited work provides a method for updating the feature extractor by using the loss function in Eq. (6), which the citing paper adopts in their research on updating the feature extractor."}, {"Category": "Methodological Basis", "Citation": "[33,4]", "Explanation": "The cited works provide the two-layer MLP mapping method used in the citing paper to visualize the high-dimensional representations of synthetic data generated from stored prototypes in YONO+."}, {"Category": "Methodological Basis", "Citation": "[13,22,10,9]", "Explanation": "The cited works provide a basis for the regularization terms introduced in the citing paper to correct gradients and protect old knowledge in the model."}, {"Category": "Extension or Continuation", "Citation": "[26,29,32]", "Explanation": "The cited works are extended in the citing paper by adopting weight regularization to measure the importance of model parameters in a more reasonable and reliable way."}, {"Category": "Extension or Continuation", "Citation": "[13,21,5]", "Explanation": "The cited works are further extended in the citing paper by using knowledge distillation to transfer output of prediction function from previously-learned tasks to the current task without the need to replay old data samples."}, {"Category": "Data Source", "Citation": "[24,15]", "Explanation": "The cited works are used as a data source for the experience replay method in the citing paper, which stores old training examples within a memory buffer."}, {"Category": "Data Source", "Citation": "[34,3,27]", "Explanation": "The cited works are used as a data source for the generative replay method in the citing paper, which uses generative models to generate data samples of old classes."}, {"Category": "Methodological Basis", "Citation": "[24,2]", "Explanation": "The cited works are used as a basis for the use of regularization on gradients in the citing paper, in order to use rehearsal samples more sufficiently."}, {"Category": "Methodological Basis", "Citation": "[46]", "Explanation": "The work by PASS is closely related to the citing paper and provides a method for saving prototypes for each class and augmenting them with Gaussian noise for 
model training. The citing paper adopts this method to improve the storage of memory buffers and maintain the decision boundary of previous tasks."}]
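The Algorithm 1 and Figure 2 captions in the record above outline the YONO/YONO+ training skeleton: condense each class into a single prototype with a mean-shift-style update, store only the prototypes, and (for YONO+) synthesize old-class representations from the stored prototypes via a rotation plus Gaussian noise. The Python/NumPy sketch below is a hypothetical illustration of that skeleton only; the kernel-weighted shift, the rotation strength, the noise scale, and the stand-in feature extractor are assumptions of this sketch, not the authors' Eq. (1) or their released code.

import numpy as np

rng = np.random.default_rng(1)
DIM, R_ITERS, BANDWIDTH = 64, 5, 1.0

def condense_prototype(feats, iters=R_ITERS, h=BANDWIDTH):
    """Mean-shift-style condensation: move the class mean toward a high-density region."""
    p = feats.mean(axis=0)
    for _ in range(iters):
        w = np.exp(-np.linalg.norm(feats - p, axis=1) ** 2 / (2 * h ** 2))  # kernel weights
        p = (w[:, None] * feats).sum(axis=0) / w.sum()                      # weighted shift
    return p

def synthesize_from_prototype(p, n, noise=0.1):
    """YONO+-style synthesis (assumed form): perturb the stored prototype with a random
    QR-orthogonalized rotation plus Gaussian noise to emulate old-class features."""
    q, _ = np.linalg.qr(rng.normal(size=(p.size, p.size)))   # random rotation matrix
    rotated = p + 0.05 * (q @ p - p)                          # mild rotation about p
    return rotated + noise * rng.normal(size=(n, p.size))

# Toy incremental stream: 3 tasks, 2 new classes each, 100 feature vectors per class.
memory = {}                                    # class id -> single stored prototype
for task in range(3):
    for k in (2 * task, 2 * task + 1):
        feats = rng.normal(loc=k, scale=0.5, size=(100, DIM))  # stand-in for F(x; theta)
        memory[k] = condense_prototype(feats)
    # Replay: one prototype per old class (YONO) or synthetic features around it (YONO+).
    replay = {k: synthesize_from_prototype(p, n=20) for k, p in memory.items()}
    print(f"task {task}: {len(memory)} prototypes stored, "
          f"{sum(v.shape[0] for v in replay.values())} synthetic samples")

Keeping a single prototype per class means the memory footprint grows only with the number of classes, which is the property the caption highlights when contrasting YONO with exemplar-based replay.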
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b5", "b9", "b16", "b9", "b13", "b25" ], "table_ref": [], "text": "In recent years, most cities around the world have seen growing traffic levels, and associated traffic congestion have started to show a number of negative effects, both at the micro and macro level. At the micro level, passengers experience frustration due to delays and also face increased risks of collisions. At the macro level, unproductive time spent in traffic damages economic health, while wasted fuel and traffic jams increase air and noise pollution. As a result, there is a growing need for effective traffic signal control methods, which can play a significant role in alleviating traffic congestion.\nCurrent traffic signal control methods can be broadly classified into two categories: fixed-time control and adaptive control. In fixed-time control, the duration of different traffic light phases are pre-determined, often optimized offline from historical data. However, urban traffic, on top of having considerable stochasticity, also shows significant temporal and spatial variations. For example, higher congestion is often seen due to temporal peaks at the end of a work day. Similarly, the spatial structure of traffic networks often gives rise to tidal patterns, with high congestion in particular lane directions. To account for this variability, adaptive traffic signal control (ATSC) methods aims at dynamically adjusting traffic signal phases online, based on current traffic conditions.\nMulti-Agent Reinforcement learning (MARL) is one such adaptive and versatile data-driven method, which has recently shown great promise in ATSC and general control tasks [6,10,17]. ATSC is cast as a MARL problem in which each agent controls a single traffic intersection, based on locally-sensed real-time traffic conditions and communication with neighboring intersections. Thus, each agent learns a policy which maps the current traffic conditions at the intersection into control outputs (e.g., phase selection, phase duration). This lends it an advantage over conventional ATSC methods, which rely on complex dynamics models and heuristic assumptions. An alternative to MARL is to train a single centralized RL agent, which is responsible for controlling all traffic intersections. However, while centralization allows for direct maximization of a global reward/objective such as average trip time, training such a centralized method is infeasible in practice due to the exponentially growing joint action space, and the high latency associated with information centralization.\nAlthough the MARL formulation of ATSC alleviates most issues associated with centralized methods, it introduces new challenges as the performance of control policies that optimize local objectives for each agent (intersection) will not be equivalent to that of a centralized global RL agent if the local objectives aren't wellaligned with the global (team-/network-level) one. Since, traffic networks have complex spatio-temporal patterns and significant interdependence between agents, greedily optimizing each agent's local reward usually does not optimize global (network-level) objectives. A possible solution to this is to directly sum each agent's local reward with that of neighboring agents into a large neighborhood reward, which becomes a new, more global objective optimized by each agent. 
The idea here is to couple neighboring agents via their rewards, whereby improving their neighbors' local rewards via their own actions directly affects their own long-term return. However, such a neighborhood reward has high variance since it is now conditioned on the actions of multiple agents, making it difficult for an agent to determine its true marginal contribution. A recent work introduced COMA [10], which learns a complex team-level network allowing agents to estimate their own marginal contribution to the team reward via counterfactual reasoning. Specifically, COMA's centralized value function network uses both the global team reward and the states and actions of all agents as input, which becomes exponentially harder to train in larger teams. In this work, we propose to spatially distribute the global credit assignment problem into a collection of local marginal contribution calculations, as a natural means to balance the tradeoff between scalability and cooperative performance. To this end, we learn a shared value network, similar to COMA's but only conditioned on each agent's states/actions and those of its direct neighbors, which agents can use to estimate their own marginal contribution to the local neighborhood reward. By relying on a fixed number of neighbors, our locally-centralized value network allows for significantly improved scalability, while minimally affecting the quality of the learned solutions by leveraging the natural fixed structure of ATSC, where the natural flow of traffic means that neighboring agents must be more tightly coupled. We further introduce important modifications to the advantage calculation that help improve and stabilize policy updates. We present results of an extensive set of simulations conducted on a range of benchmark traffic networks using two standard traffic simulators, SUMO [14] and CityFlow [26]. We show that our framework -SocialLight -results in improved cooperation and natural scalability to larger networks compared to existing state-of-the-art ATSC baselines. To the best of our knowledge, we are also the first work to show effective performance on both these standard traffic simulators (to help the community standardize benchmarking on both simulators, our open-source code can be found at https://github.com/marmotlab/SocialLight). Finally, through a series of ablation studies, we also show that the modified advantages, in combination with the counterfactual baseline derived from COMA, help improve the speed and stability of training in comparison to vanilla A3C/COMA." }, { "figure_ref": [], "heading": "RELATED WORK 2.1 Conventional Traffic Signal Control", "publication_ref": [ "b12", "b17", "b23", "b27", "b30" ], "table_ref": [], "text": "Traffic signal control is a versatile problem with many possible objectives to optimize and different scopes of optimization. Conventional methods can be broadly categorized into adaptive or fixed-time control based on the ability of the method to adapt to current traffic conditions. They are also categorized based on the scope of their optimization: some methods only consider optimization over a single isolated traffic intersection, while others consider a network of traffic intersections (multiple intersections). Here, we briefly list seminal works for each of these categories: • Single intersection, Fixed time: The Webster method [13] obtains a closed-form solution for the optimal cycle length and phase split based on a set of modeling assumptions.
• Single intersection, Adaptive: SCATS [18] is a popular adaptive-control method, which has even been deployed in numerous urban cities around the world. It takes in predefined signal plans and iteratively selects from these traffic signals according to a defined performance measure. • Multiple intersections, Fixed time: GreenWave [24] optimizes the timing offsets between different intersections to minimize the number of stops for vehicles traveling along a specific direction. • Multiple intersections, Adaptive: Max-pressure control [28] addresses the risk of oversaturation at an intersection by balancing queue lengths between neighboring intersections. This list is not exhaustive and for more details, we refer the reader to the recent survey by Wei et al. [31]. While conventional traffic control methods are currently the standard for real-world deployments, they rely on accurate traffic models." }, { "figure_ref": [], "heading": "RL-based Traffic Signal Control", "publication_ref": [ "b10", "b22", "b32", "b14", "b0", "b2", "b19", "b31", "b21", "b23", "b22", "b33", "b5", "b20", "b21", "b28", "b29", "b28", "b27", "b4", "b5", "b20", "b29", "b36", "b3", "b26" ], "table_ref": [], "text": "Model-free RL is particularly suitable for ATSC due to its ability to learn from and find structure in large amounts of raw data. Early works in RL explored different ATSC problem formulations on simplified traffic environments. Out of these variants, the most common variant has been learning to select the next traffic light phase using a set of features describing the local traffic conditions. [11,23,33]. In contrast, Li et al. [15], Aslani et al. [1] and Casas et al. [3] focused on learning policies for selecting the traffic signal timing (also known as the phase duration). While most methods focused on learning policies from a low-dimensional feature space, Mousavi et al. [20] used a CNN to directly map from image snapshots (obtained using a simulator) to the policy for selecting the next traffic phase. Similarly, Wei at al. [32] used both extracted features and real-world images to learn the policy. Taking note of the fact that traffic intersections in the real world vary greatly, Oroojlooy et al. [22] proposed At-tendLight, which uses two attention networks to learn a universal model applicable to intersections with any number of roads, lanes, phases (possible signals), and traffic flow. Multiple works described above showed that deep RL agents can effectively control individual intersections. Hence, the focus of more recent works has shifted to developing methods for a network of traffic intersections that more closely resemble real-world traffic systems where different traffic intersections are highly interconnected. From lessons learned through conventional methods such as Greenwave [24], it is evident that coordination between these intersections is necessary to achieve effective performance for the network. Current coordination methods for multi-agent traffic signal control can be broadly grouped into two -joint action learners and independent learners. Joint action learners use a single global agent to control the traffic for all intersections [23,34]. While joint action learners allow for direct optimization of a global objective, they find it difficult to scale beyond a few intersections. On the other hand, independent learners train an individual policy for each intersection, while considering other agents/intersections to be a part of the environment [6,21,22,29,30]. Wei at al. 
[29] proposed PressLight, which extends max pressure [28] to multi-agent RL by rewarding each agent for minimizing the pressure at an intersection. Scaling this up, Chen et al. [5] proposed MPLight, a deep MARL framework which uses parameter sharing to train policies with pressure-based objectives for large-scale networks. A fundamental challenge with achieving cooperation among independent learners is partial observability, as individual traffic intersections are unable to observe nearby intersections. To address this, Chu et al. [6] proposed MA2C, a MARL algorithm that adds fingerprints of its neighbors to each agent's observation for improved observability, and a spatial discount factor to reduce learning difficulty. With a similar motivation, Nishi et al. [21] used a graph convolutional neural network (GCNN) to automatically extract traffic features between distant intersections. As an alternative to direct state augmentations or feature extractions, Wei et al. [30] proposed to learn a communication mechanism between different intersections. Their framework, referred to as CoLight, uses graph attentional networks (GATs) to facilitate communication between intersections. More recently, Zhang et al. [37] further showed performance improvements by learning phase correlation with an attention mechanism over a queue-length-based state representation. Finally, making use of the Centralized Training Decentralized Execution (CTDE) paradigm, some works have focused on learning a centralized critic to guide independent policy learning [4,27]." }, { "figure_ref": [], "heading": "BACKGROUND 3.1 Multi-Agent Reinforcement Learning", "publication_ref": [ "b11", "b18", "b24", "b34", "b8", "b35" ], "table_ref": [], "text": "Multi-agent Reinforcement Learning generally optimizes a global objective over a cooperative game involving numerous agents. Formally, a MARL problem can be formulated as a set of Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) [12] characterized by a tuple $G = (\psi, S, A, P, R, \rho, O, Z, \gamma)$, where $\psi$ is a finite set of all agents ($|\psi| = n$), $S$ is the state space, and $A$ is the joint action space, defined as $A = A^1 \times \dots \times A^n$ (with $P$ and $R$ the transition and reward functions, and $O$ and $Z$ the joint observation space and observation function). Let $\Pi^i : Z^i \times A^i \to [0, 1]$ denote the stochastic policy for agent $i$; then the joint policy of the multi-agent system is given by $\pi(\mathbf{a}_t|\mathbf{s}_t) = \prod_{i \in \psi} \pi^i_{\theta_i}(a^t_i|z^t_i)$, assuming the policy of each agent is parameterized by $\theta_i$. The Multi-Agent RL objective thereby is to find an optimal joint policy $\pi$ that maximizes the discounted returns over all agents, $J(\pi) = \mathbb{E}_{\tau \sim \pi}[\sum_{t=0}^{\infty} \sum_{i=0}^{N} r^t_i]$. Here, $\tau$ denotes the global trajectory $(\mathbf{s}_0, \mathbf{a}_0, \mathbf{s}_1, \mathbf{a}_1, \dots, \mathbf{s}_t, \mathbf{a}_t)$. Finally, $\rho$ and $\gamma$ represent the initial state distribution and the discount factor, respectively. This objective can be optimized in a centralized manner by parameterizing the global policy $\pi(\mathbf{a}_t|\mathbf{s}_t)$. However, such a centralized approach usually scales poorly, given the exponentially growing state-action space of the agents. Independent learning algorithms [19,25,35] have been effective in many multi-agent settings to optimize $\pi^i_\theta(a^t_i|z^t_i)$ over the local returns $R_i(\tau) = \sum_{t=0}^{\infty} r^t_i$, with either a local observation value critic $V^{\pi_i}_i(z_i) = \mathbb{E}_\tau[R_i(\tau)|z^0_i = z_i]$ or $Q^{\pi_i}_i(z_i, a_i) = \mathbb{E}_\tau[R_i(\tau)|z^0_i = z_i, a^0_i = a_i]$ to estimate local advantages $A^{\pi_i}_i(z_i, a_i) = Q^{\pi_i}_i(z_i, a_i) - V^{\pi_i}(z_i)$ for policy improvement.
For cooperation, multi-agent actor-critic methods [9,36] propose to learn centralized critics 𝑄 𝜋 (s, a) that optimize global returns\n𝑅(𝜏) = 𝐸𝜏 [ ∞ 𝑡 =0 𝑁 𝑖=0 𝑟 𝑡 𝑖 ]\nwith individual policies for decentralized execution in their environments." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Traffic Terminology", "publication_ref": [], "table_ref": [], "text": "Definition 1 (Traffic movement): One way by which vehicles can traverse the intersection, i.e., from one incoming lane to one connected outgoing lane. The traffic movement 𝑚 𝑖 𝑗 between lane 𝑖 and outgoing lane 𝑗 is denoted as (𝑙 𝑖𝑛 𝑖 , 𝑙 𝑜𝑢𝑡 𝑗 ), and the activation of the movement is defined as 𝑚 𝑖 𝑗 = 1.\nDefinition 2 (Traffic signal phase): A set of simultaneously allowed traffic movements, allowing only vehicles under these activated traffic movements to traverse the intersection. We denote the signal phase as 𝑝 = 𝑚 𝑖 𝑗 |𝑚 𝑖 𝑗 = 1 , where 𝑖 ∈ L in and 𝑗 ∈ L out .\nDefinition 3 (Traffic Agent and traffic network): A traffic agent is in charge of one intersection and relies on the real-time traffic conditions within its own area to control the signal phases. A traffic network is a multi-agent network G(V, E), where the vertices V are traffic agents and the edges E define the road network connecting them. An agent 𝑖 has an immediate neighbor 𝑗 if 𝐸 𝑖 𝑗 = 1. In practice, this means that agents 𝑖 and 𝑗 are directly connected.\nFig. 1 depicts an example single intersection which is composed of twelve incoming lanes and twelve outgoing lanes. Considering a single connection between incoming and outgoing lanes, i.e., each incoming lane is only connected to one outgoing lane, there is a total of twelve movements (left-turn, go-straight, and right-turn for each direction). Therefore, we can define eight phases, shown in the right side of the Fig. 1. Currently, the W-E left-turn phase is activated at the intersection, allowing vehicles at the left-turn lanes of the west and east directions to move. network, as well as the current traffic phase of each agent. However, obtaining this information is infeasible in practice. Instead, each agent is only allowed to observe a portion of this full state, which contains the incoming queue lengths, current local traffic phase, average waiting time of queued vehicles, and pressure, which can all be locally measured via Induction Loop Detectors commonly found in modern traffic networks. In this work, aligned with recent work in the community, we use a combination of these features to define each agent's local state." }, { "figure_ref": [], "heading": "Traffic as a MARL Problem", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Actions.", "publication_ref": [ "b7", "b1" ], "table_ref": [], "text": "We let agents directly select one of the 8 traffic phases, and execute that phase for a pre-specified duration (i.e., there is not fixed cycle among phases), for maximal adaptability. Note other works have also considered switching phases in a fixed cycle without specified phase duration [8], or setting phase durations within a fixed cycle length [2]." }, { "figure_ref": [], "heading": "Rewards.", "publication_ref": [ "b36" ], "table_ref": [], "text": "The global objective for traffic control is to minimize the cumulative trip time of all vehicles. However, optimizing over total trip time is hard, since vehicles usually accumulate local delays as they pass through multiple junctions in their journey. 
Thus, different local reward structures are often used instead, such as cumulative delay, queue length, pressure, or waiting time, which align well enough with the global trip time objective. In this work, we specifically opt to use queue lengths, to implicitly maximize throughput at each intersection. This has recently been shown to be superior compared to other local reward formulations [37]." }, { "figure_ref": [], "heading": "SOCIALLIGHT", "publication_ref": [ "b8" ], "table_ref": [], "text": "SocialLight introduces a new learning mechanism where individual traffic agents learn to cooperate by marginalizing out their true contributions to a neighborhood reward within an independent learning framework. We outline the components proposed in SocialLight to train traffic light control policies for a junction. We first provide an intuition that motivates SocialLight. Then we present the adaptations within the asynchronous actor-critic framework that are inspired by COMA [9] to address these challenges. Finally, we introduce modifications to the vanilla COMA formulation of the advantages to enhance training stability and improve convergence." }, { "figure_ref": [], "heading": "Notation", "publication_ref": [], "table_ref": [], "text": "Given a traffic network G(V, E), the local neighborhood for agent $i$ is denoted as $\mathcal{V}_i = \{i\} \cup \mathcal{N}_i$, where $\mathcal{N}_i$ are the agents in the immediate neighborhood of agent $i$. Each agent learns a policy network $\pi_\theta(a^t_i|z^t_{\mathcal{V}_i})$ and a critic network $Q_\phi(z^t_{\mathcal{V}_i}, a^t_{\mathcal{V}_i})$, where $z^t_{\mathcal{V}_i}$ is the augmented observation comprising the observation of the agent and its neighbors, i.e., the concatenation of $z^t_i$ with $[z^t_j]_{j \in \mathcal{N}_i}$, and $a^t_{\mathcal{V}_i}$ denotes the joint action of the agent and its neighbors, i.e., the concatenation of $a^t_i$ with $[a^t_j]_{j \in \mathcal{N}_i}$. An agent $i$ receives an individual reward $r^t_i$, which can be any reward function computed using local traffic conditions, such as the queue length over its incoming lanes or local max pressure. Through reward sharing, an individual agent sums up its own reward with those received by its neighbors. The neighborhood reward for agent $i$ is defined as $r^t_{\mathcal{V}_i} = \sum_{j \in \mathcal{V}_i} r^t_j$." }, { "figure_ref": [], "heading": "Changes to the Policy Gradient", "publication_ref": [], "table_ref": [], "text": "We learn a critic $Q_\phi(z^t_{\mathcal{V}_i}, a^t_{\mathcal{V}_i})$ for each agent, i.e., conditioned on the joint observation space and action space of the neighborhood $\mathcal{V}_i$, to marginalize individual contributions over the neighborhood reward. Hence, in our setting, a naive COMA update is given by $\hat{A}(z^t_{\mathcal{V}_i}, a^t_i) = Q_\phi(z^t_{\mathcal{V}_i}, a^t_{\mathcal{V}_i}) - \sum_{a^t_i} \pi_\theta(a^t_i|z^t_{\mathcal{V}_i}) Q_\phi(z^t_{\mathcal{V}_i}, a^t_{\mathcal{V}_i})$, (1) where the second term is a counterfactual baseline that marginalizes an agent's expected contribution by fixing the actions of neighboring agents. However, we observed that using the original COMA advantages made learning unstable, especially with multiple agents updating shared parameters. While the counterfactual baseline's expected contribution to the gradient is zero, we suspect that the instability primarily arises from the high bias in the critic's gradients during the first few epochs of training. To reduce this bias, we introduce two modifications. First, we take inspiration from Temporal Differences (TD), which reduces bias by estimating advantages from rolling out the trajectory and bootstrapping the critic estimates at a future state. However, unbiasing advantages with the estimated return from the full trajectory roll-out comes at the cost of larger variance in the policy gradient.
This bias-variance trade-off problem is then further addressed via standard Generalized Advantage Estimation (GAE) over the modified COMA advantages. TD advantages do not require training an additional network. Hence, we propose a similar modification to the COMA advantages to resemble TD advantages, as follows: $\hat{A}_1(z^t_{\mathcal{V}_i}, a^t_i) = r^t_{\mathcal{V}_i} + \gamma \sum_{a^{t+1}_i} \pi_\theta(a^{t+1}_i|z^{t+1}_{\mathcal{V}_i}) Q_\phi(z^{t+1}_{\mathcal{V}_i}, a^{t+1}_{\mathcal{V}_i}) - \sum_{a^t_i} \pi_\theta(a^t_i|z^t_{\mathcal{V}_i}) Q_\phi(z^t_{\mathcal{V}_i}, a^t_{\mathcal{V}_i})$ (2) Note that our key distinction lies in the estimate of the bootstrapped value, which we obtain by evaluating the counterfactual baseline at the future state. In contrast, TD(1) advantages in a multi-agent setting bootstrap the discounted return over the expectation of the joint action over the policies of all agents $j \in \mathcal{V}_i$, that is, $\mathbb{E}_{\prod_{j \in \mathcal{V}_i} \pi(a^{t+1}_j|z^{t+1}_{\mathcal{V}_i})}[Q(z^{t+1}_{\mathcal{V}_i}, a^{t+1}_{\mathcal{V}_i})]$. However, this fails to capture the dependence of future returns on the future actions of neighboring agents. Hence, setting the bootstrapped return as the counterfactual baseline at the future state reduces the variability of future returns on the actions of future neighboring agents, thereby reducing the variance in the gradient updates to the policy network. The above advantages via TD errors still use a biased critic for the one-step lookahead during the first few epochs. Inspired by the success of GAE, we also use a GAE-type computation to trade off the bias and variance of the policy gradients, for improved learning stability. The generalized advantages are given as: $A_{GAE}(z^t_{\mathcal{V}_i}, a^t_i) = \sum_{l=0}^{\infty} (\gamma\delta)^l \hat{A}_1(z^{t+l}_{\mathcal{V}_i}, a^{t+l}_i)$ (3) Note that the Generalized Advantage Estimate implicitly weights the $n$-step TD advantages by a factor $\delta^n$, as follows: $\hat{A}_n(z^t_{\mathcal{V}_i}, a^t_i) = \sum_{l=0}^{n-1} \gamma^l r^{t+l}_{\mathcal{V}_i} + \gamma^n \sum_{a^{t+n}_i} \pi_\theta(a^{t+n}_i|z^{t+n}_{\mathcal{V}_i}) Q_\phi(z^{t+n}_{\mathcal{V}_i}, a^{t+n}_{\mathcal{V}_i}) - \sum_{a^t_i} \pi_\theta(a^t_i|z^t_{\mathcal{V}_i}) Q_\phi(z^t_{\mathcal{V}_i}, a^t_{\mathcal{V}_i})$ (4) The discounting factor $\delta$ regulates the bias-variance tradeoff, where the variance in the gradient estimator increases with the time horizon due to the influence of the returns on the actions of the team in the neighborhood. The policy gradient for a rollout of length $T$ is thereby given as: $\nabla L_\pi(\theta) = \sum_{t=0}^{T} \nabla_\theta \log(\pi_\theta(a^t_i|z^t_{\mathcal{V}_i})) A_{GAE}(z^t_{\mathcal{V}_i}, a^t_i)$ (5)" }, { "figure_ref": [ "fig_4" ], "heading": "Critic Training", "publication_ref": [], "table_ref": [], "text": "The critic introduced in the above section estimates returns over the joint action space of the agents in the neighborhood $\mathcal{N}_i$. However, having the critic output $|A|^n$ values, where $|A|$ represents the size of the action space of one agent, is impractical. We address this problem by using a critic representation similar to COMA, which allows for an efficient evaluation of the baseline. In this work, the critic is a neural network that takes the local observations of the neighborhood $z^t_{\mathcal{V}_i}$ and the actions of the other agents $a^t_{\{\mathcal{V}_i - i\}}$, and outputs a vector of length $|A|$. Note that the actions of other agents are one-hot encoded; the network is depicted in Fig. 2. In doing so, the advantage term can be computed in a single pass through a dot product between the outputs of the actor and critic networks. To conform with the TD error used to compute advantages via the counterfactual baseline computed at the joint future agent state, the targets of the critic are modified similarly.
Here, the TD(1) target is given by $G^t_i = r^t_{\mathcal{V}_i} + \gamma \sum_{a^{t+1}_i} \pi_\theta(a^{t+1}_i|z^{t+1}_{\mathcal{V}_i}) Q_\phi(z^{t+1}_{\mathcal{V}_i}, a^{t+1}_{\mathcal{V}_i})$ (6) More generally, the $n$-step TD return is formulated as $G^{t:t+n}_i = \sum_{l=0}^{n-1} \gamma^l r^{t+l}_{\mathcal{V}_i} + \gamma^n \sum_{a^{t+n}_i} \pi_\theta(a^{t+n}_i|z^{t+n}_{\mathcal{V}_i}) Q_\phi(z^{t+n}_{\mathcal{V}_i}, a^{t+n}_{\mathcal{V}_i})$ (7) The TD($\lambda$) targets $G^\lambda_{t,i}$ can be computed via these modified $n$-step returns, and the critic network is regressed against these targets over a rollout of length $T$. The critic loss reads $L_Q(\phi) = \frac{1}{T} \sum_{t=0}^{T} (Q_\phi(z^{t+n}_{\mathcal{V}_i}, a^{t+n}_{\mathcal{V}_i}) - G^\lambda_{t,i})^2$ (8)" }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [ "b13", "b25" ], "table_ref": [], "text": "Our experiments aim to answer the following questions: (1) Does SocialLight improve over the current state of the art, when keeping standard definitions of the state space and reward? (2) What is the impact of the proposed individual contribution marginalization mechanism in comparison to standard reward sharing? (3) Do our modified advantages improve the stability of training, compared to simply applying COMA advantages locally? To answer these questions, our experiments are conducted in both the SUMO and CityFlow traffic simulators [14,26]. We first benchmark the performance of SocialLight against common baselines developed over both the synthetic SUMO and real-world CityFlow traffic datasets, and measure the same standard traffic metrics in both cases. We then perform an ablation to study the impact of individual contribution marginalization on the traffic performance over artificially generated traffic flows on a Manhattan road-map. Moreover, we show the improvement in the learning stability of our network over both synthetic and real traffic datasets." }, { "figure_ref": [], "heading": "Description of Traffic Datasets", "publication_ref": [ "b15", "b25", "b6", "b6" ], "table_ref": [], "text": "We conduct experiments on two different microscopic traffic simulators, SUMO [16] and CityFlow [26], with synthetic and real-world datasets, respectively. A traffic simulation on either simulator comprises a road network and a traffic flow dataset. Here, the road network defines the positions of the intersections, the attributes of the roads (e.g., number and length of lanes, speed limits, and lane connections), and the phase settings. The traffic flow datasets define the travel information of all vehicles, characterized by the origin-destination (O-D) pair and the time at which the vehicle enters the network. Note that our simulations are conducted over homogeneous intersections that have the same settings for the roads and traffic light phases. 5.1.1 Synthetic Traffic Datasets. We use a synthetic traffic dataset with a Manhattan road network on the SUMO simulator that is adapted from the benchmark method MA2C [7]. The road network is a 5×5 traffic grid network with 25 intersections. Each intersection is formed by two-lane streets (W-E) with a speed limit of 72 km/h and one-lane avenues with a speed limit of 40 km/h. The traffic flow datasets are artificially generated at run time with different fixed seeds, to fairly compare among algorithms. Aligned with previous work [7], all episodes consider a fixed traffic flow composed of travel information (O-D pair) for all vehicles, while the seed sets the (random) initial position and speed of these vehicles."
}, { "figure_ref": [ "fig_5" ], "heading": "Real Traffic Datasets.", "publication_ref": [ "b29", "b30", "b37", "b36" ], "table_ref": [], "text": "The real traffic dataset considers three city networks -Jinan, Hangzhou and New York. These datasets serve as popular benchmarks for ATSC [30,31,38]. The road networks are extracted from a portion of real-world traffic map for simulations, and traffic flow datasets were compiled from cameras during different time periods. As shown in Figure 3, there are a total of 12 (3𝑥4) intersections in the Jinan map, 16 (4𝑥4) intersections in the Hangzhou map, and 192(28𝑥7) intersections in the New York map. The traffic flow datasets used in these different maps are described in [37]; there are three different flow datasets for Jinan, two for Hangzhou, and two for New York." }, { "figure_ref": [], "heading": "Traffic Performance on SUMO Synthetic Traffic Datasets", "publication_ref": [ "b6" ], "table_ref": [], "text": "We compare our method SocialLight with current state-of-the-art baselines optimized on the various traffic datasets. Following the baselines [7] for a fair comparison, we keep the same experiment settings, as well as similar POMDP settings in terms of the definitions of actions and rewards. The state is modified to include current traffic phase." }, { "figure_ref": [], "heading": "Baseline methods.", "publication_ref": [ "b12", "b6", "b6", "b6" ], "table_ref": [], "text": "(1) Greedy [13]: Greedily chooses the phase associated with lanes with maximum incoming queue length. (2) IQL-LR [7]: A linear regression based independent Q-learning (IQL) algorithm, where each local agent learns its own policy independently by considering other agents as part of the environment's dynamics. (3) IA2C [7]: An extension of IQL-LR which relies on advantage actor-critic (A2C) algorithm instead of IQL. (4) MA2C [7]: A cooperative MARL algorithm, which includes the observations and fingerprints of neighboring agents in each agent's state. to alleviate the instability caused by partial observability. MA2C further introduces a spatial discount factor to scale down the observation and rewards signals of the neighboring agents, to encourage agents towards neighborhood-level cooperation.\n(5) A3C: The distributed learning framework with parameter sharing relying on A3C algorithm, where each agent learns to maximize its individual objective. (6) A3C(nr): The distributed learning framework where each agent is to maximize the neighborhood reward rather than individual reward." }, { "figure_ref": [], "heading": "Analysis.", "publication_ref": [], "table_ref": [], "text": "We first observe that SocialLight outperforms the heuristic based Greedy baseline, as well as the Deep RL baselines MA2C, IA2C, IQL-LR and even A3C with and without neighborhood reward over all traffic metrics (average queue length, speed, intersection delay, cumulative delay, and average total trip time). Prior methods IA2C, MA2C and IQL-LR are outperformed significantly by both A3C with and without neighborhood rewards, even though these are on policy actor-critic methods with the same POMDP settings. We believe that this may be due to the way in which neighboring states are aggregated into each agents' individual state via discounted summation in these baselines, which results in poorer policies. SocialLight, on the other hand, significantly outperforms standard A3C methods with and without neighborhood reward in terms of average trip time and cumulative delays. 
This is most likely due to the proposed contribution marginalization scheme within the neighborhood reward which tightly couples the given agent with neighboring agents to maximize throughput across the neighborhood, thereby reducing travel time and delays." }, { "figure_ref": [], "heading": "Traffic Performance on CityFlow Real Traffic Datasets", "publication_ref": [ "b36" ], "table_ref": [], "text": "We evaluate SocialLight on real traffic datasets. The experiment settings and POMDP settings are unchanged with respect to [37] for a fair comparison among all methods." }, { "figure_ref": [], "heading": "Baseline methods.", "publication_ref": [ "b12", "b27", "b29", "b4", "b36" ], "table_ref": [], "text": "(1) FixedTime [13]: Fixed time control considers a fixed cycle over phases with a pre-defined total phase length and predefined phase split over the total cycle length. (2) MaxPressure [28] : MP (max-pressure) control greedily selects the phase that can minimize the intersection pressure, where the pressure is calculated by the difference between the vehicles of incoming lanes and connected outgoing lanes. (3) Co-Light [30] : A state-of-the-art method that uses Graph Attention Neural Networks to accomplish junction level cooperation and has been trained via Deep Q learning. (4) MP-Light [5] MP-Light incorporates pressure in their states and reward to achieve state-of-the art scalable ATSC over a city-level traffic network. MP-Light also applies the FRAP based training architecture that uses phase competition within various traffic movements to improve control performance. (5) Attention-Light Attention-Light [37] is a recent state-ofthe-art model that incorporates self-attention to learn the phase correlation and competition in contrast to FRAP that uses human knowledge. This method has shown to outperform FRAP based MPLight and the CoLight over numerous traffic datasets." }, { "figure_ref": [], "heading": "Analysis.", "publication_ref": [ "b36" ], "table_ref": [ "tab_1" ], "text": "Our results show that SocialLight in Table 1 outperforms all existing baselines over a wide variety of traffic flows in terms of reducing average travel time. In particular, we observe the highest performance gains over the New York traffic flow sets that comprises 196 traffic agents. We report more modest performance gains over the Hangzhou and Jinan traffic flow datasets. Over traffic flow datasets compiled for Jinan and Hangzhou urban networks, SocialLight shows modest improvements compared to the state-of-the-art AttentionLight. Despite large gains on NewYork, the relatively modest performance gains of SocialLight over the Hangzhou and Jinan can be attributed to the saturation. This saturation results from their relatively small scale in terms of the number of agents in the training datasets. This saturation is evident from the performance classical baselines such as Max-Pressure over these traffic flow. For instance, the average trip time performance is actually observed to degrade with Co-Light and MP-Light as compared to the MaxPressure baseline on the Hangzhou and Jinan datasets. Attention-light is found to have very marginal performance gains as compared to the MaxPressure baseline (except for Hangzhou-2). This observations indicate that any potential performance gains from these datasets would be minimal due to We then turn our attention to the New York traffic flow sets, which are harder to learn over due to their sheer scale. 
Of the two flow sets there, New York-2 is the hardest traffic dataset [37] in terms of vehicle arrival rate. Most RL methods (CoLight, MPLight and AttentionLight) produce modest performance gains compared to the classical FixedTime and MaxPressure baselines on both the New york traffic flowsets. Co-light improves over the New York-2 dataset compared to the classical methods, but fails to do so over the New York-1 dataset. In contrast, AttentionLight fails to improve over the classic MaxPressure controller in the NewYork-2 dataset. We believe that these improvements indicate the need for true cooperation, which Co-Light and AttentionLight only achieve via their network designs.\nSocialLight pushes the state-of-the-art in terms of traffic performance on both these New York datasets, where our performance gains over the current state-of-the-art AttentionLight and CoLight are even more pronounced (≥ 20%). We believe that this is due to the impact of agents marginalizing individual contributions to the neighborhood reward, which encourages better cooperation as each agent understands its role in the local traffic performance. By overlapping the agents' neighborhoods (i.e., local areas of enhanced cooperation), our approach effectively leads to improved network-wide cooperation and thus large performance gains." }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "Impact of Individual Contribution Marginalization on Learning and Final Performance", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "This ablation study aims to identify the impact of contribution marginalization on both the learning process and the final performance of trained policies. We compare SocialLight with vanilla Asynchronous Actor Critic (A3C) with reward sharing (rewards of neighbors summed up). As shown in Fig. 4, SocialLight exhibits improved sample efficiency and improved returns as compared to A3C even though both algorithms are on-policy and have the same architectures and augmented state inputs for their policy networks. Furthermore, we note that decentralized contribution marginalization introduced in SocialLight is also shown to have improved the stability of training. While both algorithms converge, Fig 4 indicates larger variances in the returns, average speed and intersection delays during training for A3C with neighborhood reward sharing.\nWe further compare both the networks over a validation set of synthetic traffic flows generated from separate seeds that generate traffic flows during training. The improvement over the total trip time and cumulative delay shown in Table 2 highlight how the marginalization of individual contributions promote cooperation between traffic agents for large performance gains." }, { "figure_ref": [ "fig_6" ], "heading": "Impact of Modified Advantages on Training Stability and Convergence", "publication_ref": [ "b8" ], "table_ref": [], "text": "Further analysis shows the impact of the modified advantages that marginalizes individual contributions. Compared to simply applying COMA advantages [9], our modifications are shown to improve training stability as shown in Fig 4 and overall returns. We believe that original COMA advantages converge to sup-optimal policies due to the highly distributed nature of our training process, where high initial biases in the critic inhibit monotonic policy improvement during the initial phase of training. 
As both the policy and networks learn in conjunction, the policy network learned via the original COMA advantages converge to sub-optimal traffic control policies, differently from SocialLight." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [ "b6", "b6", "b5", "b6", "b36", "b36", "b36" ], "table_ref": [], "text": "This paper presents SocialLight, a fully decentralized training framework that learns cooperative traffic light control policies via distributedly marginalizing individual contributions to each agent's local neighborhood reward. We show that our method improves over the scalability for cooperative learning, thereby improving final traffic performance (average trip time) especially over large traffic networks such as the New York grid with 196 traffic intersections. These performance gains suggest that our method could improve the overall quality of learned policies on real-life citywide networks.\nOur method leverages the fixed spatial structure of traffic systems to define overlapping neighborhoods over which agents can marginalize their contributions, to scale cooperative learning without the need for a centralized critic. Future work will focus on extending this idea to general mixed competitive-cooperative games, where such distributed spatial structures are more difficult to identify and leverage. There, we will aim to develop methods to learn suitable neighborhoods over which individual agents can marginalize their contributions over to improve the scalability of cooperative learning.\n7 SUPPLEMENTAL MATERIAL 7.1 Experiment Settings 7.1.1 SUMO Dataset. Each simulation lasts for 3600 seconds, where the phase duration Δ𝑇 is fixed to 5 seconds. Hence there are 720 RL time steps in a single simulation episode. After a traffic phase change, the yellow time Δ𝑡 𝑦 is set to 2s [7]. (1) Actions: The traffic signal directly controls one of the 5 available phases of the traffic light [7]. Each agent executes a phase for a fixed duration in the simulator as specified by the total phase duration Δ𝑇 . (2) Observation:\nThe local state of an individual agent is defined by its current traffic phase, waiting time, and traffic queue length at each incoming lane and outgoing lane at the traffic intersection.\nHere, the waiting time is the normalized cumulative delay of the first vehicle along an incoming lane. The outgoing traffic queue lengths and incoming traffic queue lengths are obtained from near-intersection induction-loop detectors (ILD). (3) Rewards: Agent rewards are set to the incoming queue lengths over the lane area detectors in SUMO. These detectors measure incoming traffic upto a certain distance from the intersection. This has been a standard reward structue for many prior works [6,7,37].\n7.2.2 CityFlow Dataset. The Markov Decision Process Settings(State, Actions and Rewards) are defined as follows:\n(1) State: Following [37], the local state of the agent comprises the one-hot encoded current phase (action) taken by the agent and the queue length across each incoming traffic lane at an intersection. (2) Action: The traffic agent executes one of the 8 phases as described in 1.\n(3) Rewards: Following [37], agent rewards are set to the negative of the queue lengths over incoming traffic lanes in the intersection. Optimizing this reward metric maximizes the throughput from the given intersection. With reward sharing, each traffic agent would then intend to maximize the throughput through the local neighborhood." 
}, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work was partly supported by A*STAR, CISCO Systems (USA) Pte. Ltd and National University of Singapore under its Cisco-NUS Accelerated Digital Economy Corporate Laboratory (Award I21001E0002)." } ]
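Sections 4.2-4.3 of the SocialLight text above describe how each agent's advantage is assembled: a per-agent critic outputs one Q-value per own phase given the neighborhood observation and the one-hot actions of its neighbors, the counterfactual baseline is the dot product between the policy and that Q vector (Eq. (1)), the one-step advantage bootstraps with the baseline at the next step (Eq. (2)), a GAE-style sum accumulates it (Eq. (3)), and the critic is regressed against bootstrapped targets (Eqs. (6), (8)). The NumPy sketch below is a hypothetical, minimal rendering of that computation for a single agent; the toy linear critic and policy, the tensor shapes, and the use of TD(1) targets in place of TD(lambda) are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

T = 8          # rollout length
A = 8          # number of traffic phases (own action space)
OBS = 16       # size of the augmented neighborhood observation z_{V_i}
N_NEI = 4      # number of neighbors whose one-hot actions feed the critic
gamma, delta = 0.99, 0.95   # discount and GAE factor (delta in Eq. (3))

def critic_q(z, neighbor_actions_onehot, W):
    """Toy linear critic: maps [z, one-hot neighbor actions] -> Q vector of length A."""
    x = np.concatenate([z, neighbor_actions_onehot.ravel()])
    return x @ W                     # shape (A,)

def policy_probs(z, theta):
    """Toy softmax policy over the agent's own A phases."""
    logits = z @ theta               # shape (A,)
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Fake rollout data for one agent i: neighborhood observations, own actions,
# neighbors' actions, and the shared neighborhood reward r_{V_i}.
W = rng.normal(scale=0.1, size=(OBS + N_NEI * A, A))
theta = rng.normal(scale=0.1, size=(OBS, A))
z = rng.normal(size=(T + 1, OBS))
own_a = rng.integers(0, A, size=T)
nei_a = np.eye(A)[rng.integers(0, A, size=(T + 1, N_NEI))]   # one-hot, (T+1, N_NEI, A)
r_nei = rng.normal(size=T)

q = np.stack([critic_q(z[t], nei_a[t], W) for t in range(T + 1)])      # (T+1, A)
pi = np.stack([policy_probs(z[t], theta) for t in range(T + 1)])       # (T+1, A)
baseline = (pi * q).sum(axis=1)      # counterfactual baseline, second term of Eq. (1)

# Modified one-step advantage of Eq. (2): bootstrap with the *baseline* at t+1.
a1 = r_nei + gamma * baseline[1:] - baseline[:-1]

# GAE-style accumulation of Eq. (3).
adv = np.zeros(T)
acc = 0.0
for t in reversed(range(T)):
    acc = a1[t] + gamma * delta * acc
    adv[t] = acc

# TD(1) critic targets of Eq. (6): reward plus discounted baseline at t+1
# (the paper uses TD(lambda) targets; TD(1) keeps this sketch short).
targets = r_nei + gamma * baseline[1:]
critic_loss = np.mean((q[np.arange(T), own_a] - targets) ** 2)   # cf. Eq. (8)
print(adv.round(3), critic_loss.round(3))

Bootstrapping with the counterfactual baseline at t+1, rather than with a Q-value at a sampled joint action, is the design choice the text emphasizes: it keeps the advantage estimate from varying with whichever actions the neighbors happen to take next, which is where the claimed variance reduction comes from.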
2023-04-20
10.1016/j.trc.2017.09.020
[ { "authors": "Mohammad Aslani; Mohammad Saadi Mesgari; Marco Wiering", "journal": "Transportation Research Part C: Emerging Technologies", "ref_id": "b0", "title": "Adaptive traffic signal control with actor-critic methods in a real-world traffic network with different traffic disruption events", "year": "2017" }, { "authors": "Mohammad Aslani; Saadi Mesgari; Marco Wiering", "journal": "Transportation Research Part C Emerging Technologies", "ref_id": "b1", "title": "Adaptive traffic signal control with actor-critic methods in a real-world traffic network with different traffic disruption events", "year": "2017" }, { "authors": "Noe Casas", "journal": "", "ref_id": "b2", "title": "Deep deterministic policy gradient for urban traffic light control", "year": "2017" }, { "authors": "Chi-Chun Chao; Jun-Wei Hsieh; Bor-Shiun Wang", "journal": "", "ref_id": "b3", "title": "Cooperative Reinforcement Learning on Traffic Signal Control", "year": "2022" }, { "authors": "Chacha Chen; Hua Wei; Nan Xu; Guanjie Zheng; Ming Yang; Yuanhao Xiong; Kai Xu; Zhenhui Li", "journal": "", "ref_id": "b4", "title": "Toward a thousand lights: Decentralized deep reinforcement learning for large-scale traffic signal control", "year": "2020" }, { "authors": "Tianshu Chu; Jie Wang; Lara Codecà; Zhaojian Li", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b5", "title": "Multi-agent deep reinforcement learning for large-scale traffic signal control", "year": "2019" }, { "authors": "Tianshu Chu; Jie Wang; Lara Codecà; Zhaojian Li", "journal": "", "ref_id": "b6", "title": "Multi-Agent Deep Reinforcement Learning for Large-scale Traffic Signal Control", "year": "2019" }, { "authors": "Samah El-Tantawy; Baher Abdulhai", "journal": "", "ref_id": "b7", "title": "Multi-Agent Reinforcement Learning for Integrated Network of Adaptive Traffic Signal Controllers (MARLIN-ATSC)", "year": "2012" }, { "authors": "Jakob Foerster; Gregory Farquhar; Triantafyllos Afouras; Nantas Nardelli; Shimon Whiteson", "journal": "", "ref_id": "b8", "title": "Counterfactual Multi-Agent Policy Gradients", "year": "2017" }, { "authors": "Jakob Foerster; Gregory Farquhar; Triantafyllos Afouras; Nantas Nardelli; Shimon Whiteson", "journal": "", "ref_id": "b9", "title": "Counterfactual multi-agent policy gradients", "year": "2018" }, { "authors": "Wade Genders; Saiedeh Razavi", "journal": "", "ref_id": "b10", "title": "Using a deep reinforcement learning agent for traffic signal control", "year": "2016" }, { "authors": "Maxim Jayesh K Gupta; Mykel Egorov; Kochenderfer", "journal": "Springer", "ref_id": "b11", "title": "Cooperative multi-agent control using deep reinforcement learning", "year": "2017" }, { "authors": "Peter Koonce; Lee Rodegerdts", "journal": "United States. 
Federal Highway Administration", "ref_id": "b12", "title": "Traffic signal timing manual", "year": "2008" }, { "authors": "Daniel Krajzewicz", "journal": "Springer", "ref_id": "b13", "title": "Traffic simulation with SUMO-simulation of urban mobility", "year": "2010" }, { "authors": "Li Li; Yisheng Lv; Fei-Yue Wang", "journal": "IEEE/CAA Journal of Automatica Sinica", "ref_id": "b14", "title": "Traffic signal timing via deep reinforcement learning", "year": "2016" }, { "authors": "Pablo Alvarez Lopez; Michael Behrisch; Laura Bieker-Walz; Jakob Erdmann; Yun-Pang Flötteröd; Robert Hilbrich; Leonhard Lücken; Johannes Rummel; Peter Wagner; Evamarie Wießner", "journal": "", "ref_id": "b15", "title": "Microscopic Traffic Simulation using SUMO", "year": "2018" }, { "authors": "Ryan Lowe; Yi I Wu; Aviv Tamar; Jean Harb; Pieter Openai; Igor Abbeel; Mordatch", "journal": "Advances in neural information processing systems", "ref_id": "b16", "title": "Multi-agent actor-critic for mixed cooperative-competitive environments", "year": "2017" }, { "authors": " Lowrie", "journal": "", "ref_id": "b17", "title": "Scats-a traffic responsive method of controlling urban traffic. Sales information brochure published by Roads & Traffic Authority", "year": "1990" }, { "authors": "Volodymyr Mnih; Adria Puigdomenech Badia; Mehdi Mirza; Alex Graves; Timothy Lillicrap; Tim Harley; David Silver; Koray Kavukcuoglu", "journal": "PMLR", "ref_id": "b18", "title": "Asynchronous methods for deep reinforcement learning", "year": "1928" }, { "authors": "Seyed Sajad Mousavi; Michael Schukat; Enda Howley", "journal": "IET Intelligent Transport Systems", "ref_id": "b19", "title": "Traffic light control using deep policy-gradient and value-function-based reinforcement learning", "year": "2017" }, { "authors": "Tomoki Nishi; Keisuke Otaki; Keiichiro Hayakawa; Takayoshi Yoshimura", "journal": "IEEE", "ref_id": "b20", "title": "Traffic signal control based on reinforcement learning with graph convolutional neural nets", "year": "2018" }, { "authors": "Afshin Oroojlooy; Mohammadreza Nazari; Davood Hajinezhad; Jorge Silva", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Attendlight: Universal attention-based reinforcement learning model for traffic signal control", "year": "2020" }, { "authors": "L A Prashanth; Shalabh Bhatnagar", "journal": "IEEE", "ref_id": "b22", "title": "Reinforcement learning with average cost for adaptive control of traffic lights at intersections", "year": "2011" }, { "authors": "Elena S Roger P Roess; William R Prassas; Mcshane", "journal": "Pearson/Prentice Hall", "ref_id": "b23", "title": "Traffic engineering", "year": "2004" }, { "authors": "John Schulman; Filip Wolski; Prafulla Dhariwal; Alec Radford; Oleg Klimov", "journal": "", "ref_id": "b24", "title": "Proximal policy optimization algorithms", "year": "2017" }, { "authors": "Zheng Tang; Milind Naphade; Ming-Yu Liu; Xiaodong Yang; Stan Birchfield; Shuo Wang; Ratnesh Kumar; David Anastasiu; Jenq-Neng Hwang", "journal": "", "ref_id": "b25", "title": "Cityflow: A city-scale benchmark for multi-target multi-camera vehicle tracking and reidentification", "year": "2019" }, { "authors": "Elise Van Der Pol; Frans A Oliehoek", "journal": "", "ref_id": "b26", "title": "Coordinated deep reinforcement learners for traffic light control", "year": "2016" }, { "authors": "Pravin Varaiya", "journal": "Transportation Research Part C: Emerging Technologies", "ref_id": "b27", "title": "Max pressure control of a network of 
signalized intersections", "year": "2013" }, { "authors": "Hua Wei; Chacha Chen; Guanjie Zheng; Kan Wu; Vikash Gayah; Kai Xu; Zhenhui Li", "journal": "", "ref_id": "b28", "title": "Presslight: Learning max pressure control to coordinate traffic signals in arterial network", "year": "2019" }, { "authors": "Hua Wei; Nan Xu; Huichu Zhang; Guanjie Zheng; Xinshi Zang; Chacha Chen; Weinan Zhang; Yanmin Zhu; Kai Xu; Zhenhui Li", "journal": "", "ref_id": "b29", "title": "Colight: Learning network-level cooperation for traffic signal control", "year": "2019" }, { "authors": "Hua Wei; Guanjie Zheng; Vikash Gayah; Zhenhui Li", "journal": "", "ref_id": "b30", "title": "A Survey on Traffic Signal Control Methods", "year": "2019" }, { "authors": "Hua Wei; Guanjie Zheng; Huaxiu Yao; Zhenhui Li", "journal": "", "ref_id": "b31", "title": "Intellilight: A reinforcement learning approach for intelligent traffic light control", "year": "2018" }, { "authors": "Marco Wiering; Jelle Van Veenen; Jilles Vreeken; Arne Koopman", "journal": "", "ref_id": "b32", "title": "Intelligent traffic light control", "year": "2004" }, { "authors": "Donghan Xie; Zhi Wang; Chunlin Chen; Daoyi Dong", "journal": "", "ref_id": "b33", "title": "Iedqn: Information exchange dqn with a centralized coordinator for traffic signal control", "year": "2020" }, { "authors": "Chao Yu; Akash Velu; Eugene Vinitsky; Yu Wang; Alexandre Bayen; Yi Wu", "journal": "", "ref_id": "b34", "title": "The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games", "year": "2021" }, { "authors": "Chao Yu; Akash Velu; Eugene Vinitsky; Yu Wang; Alexandre Bayen; Yi Wu", "journal": "", "ref_id": "b35", "title": "The Surprising Effectiveness of PPO in Cooperative Multi-Agent Games", "year": "2021" }, { "authors": "Liang Zhang; Qiang Wu; Jianming Deng", "journal": "", "ref_id": "b36", "title": "AttentionLight: Rethinking queue length and attention mechanism for traffic signal control", "year": "2022" }, { "authors": "Guanjie Zheng; Yuanhao Xiong; Xinshi Zang; Jie Feng; Hua Wei; Huichu Zhang; Yong Li; Kai Xu; Zhenhui Li", "journal": "", "ref_id": "b37", "title": "Learning phase competition for traffic signal control", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 356.47, 140.61, 102.13, 12.91 ], "formula_id": "formula_0", "formula_text": "𝐽 (𝜋) = 𝐸𝜏∼𝜋 [ ∞ 𝑡 =0 𝑁 𝑖=0 𝑟 𝑡 𝑖 ]." }, { "formula_coordinates": [ 3, 317.96, 229.34, 241.76, 48.6 ], "formula_id": "formula_1", "formula_text": "𝜋 𝑖 𝜃 (𝑎 𝑡 𝑖 |𝑧 𝑡 𝑖 ) over the local returns 𝑅 𝑖 (𝜏) = ∞ 𝑡 =0 𝑟 𝑡 𝑖 with either a local observation value critic 𝑉 𝜋 𝑖 𝑖 (𝑧 𝑖 ) = 𝐸𝜏 [𝑅 𝑖 (𝜏)|𝑧 0 𝑖 = 𝑧 𝑖 ] or 𝑄 𝜋 𝑖 𝑖 (𝑧 𝑖 , 𝑎 𝑖 ) = 𝐸𝜏 [𝑅 𝑖 (𝜏)|𝑧 0 𝑖 = 𝑧 𝑖 , 𝑎 0 𝑖 = 𝑎 𝑖 ] to estimate local advan- tages 𝐴 𝜋 𝑖 𝑖 (𝑧 𝑖 , 𝑎 𝑖 ) = 𝑄 𝜋 𝑖 𝑖 (𝑧 𝑖 , 𝑎 𝑖 ) -𝑉 𝜋 𝑖 (𝑧 𝑖 )" }, { "formula_coordinates": [ 3, 317.78, 299.23, 89.46, 12.91 ], "formula_id": "formula_2", "formula_text": "𝑅(𝜏) = 𝐸𝜏 [ ∞ 𝑡 =0 𝑁 𝑖=0 𝑟 𝑡 𝑖 ]" }, { "formula_coordinates": [ 4, 317.87, 413.65, 35.49, 11.27 ], "formula_id": "formula_3", "formula_text": "𝜋 𝜃 (𝑎 𝑡 𝑖 |𝑧 𝑡 V 𝑖" }, { "formula_coordinates": [ 4, 431.86, 413.65, 44.03, 11.27 ], "formula_id": "formula_4", "formula_text": "𝑄 𝜙 (𝑧 𝑡 V 𝑖 , 𝑎 𝑡 V 𝑖" }, { "formula_coordinates": [ 4, 386.89, 436.59, 107.71, 11.27 ], "formula_id": "formula_5", "formula_text": "V 𝑖 = [𝑧 𝑡 𝑖 ] 𝑗 ∈N 𝑖 [𝑧 𝑡 𝑗 ]. The 𝑎 𝑡 V 𝑖" }, { "formula_coordinates": [ 4, 473.94, 449.71, 71.53, 11.27 ], "formula_id": "formula_6", "formula_text": "V 𝑖 = [𝑎 𝑡 𝑖 ] 𝑗 ∈N 𝑖 [𝑎 𝑡 𝑗" }, { "formula_coordinates": [ 4, 419.15, 517.62, 58.52, 11.27 ], "formula_id": "formula_7", "formula_text": "𝑟 𝑡 V 𝑖 = 𝑗 ∈V 𝑖 𝑟 𝑡 𝑗 ." }, { "formula_coordinates": [ 4, 332.21, 686.56, 225.99, 23.9 ], "formula_id": "formula_8", "formula_text": "Â(𝑧 𝑡 V 𝑖 , 𝑎 𝑡 𝑖 ) = 𝑄 𝜙 (𝑠 𝑡 V 𝑖 , 𝑎 𝑡 V 𝑖 ) - ∑︁ 𝑎 𝑡 𝑖 𝜋 𝜃 (𝑎 𝑡 𝑖 |𝑠 𝑡 V 𝑖 )𝑄 𝜙 (𝑠 𝑡 V 𝑖 , 𝑎 𝑡 V 𝑖 ),(1)" }, { "formula_coordinates": [ 5, 83.28, 294.93, 210.77, 52.21 ], "formula_id": "formula_9", "formula_text": "(𝑧 𝑡 V 𝑖 , 𝑎 𝑡,𝑖 ) = 𝑟 𝑡 V 𝑖 + 𝛾 ∑︁ 𝑎 𝑡 +1 𝑖 𝜋 𝜃 (𝑎 𝑡 +1 𝑖 |𝑧 𝑡 +1 V 𝑖 )𝑄 𝜙 (𝑧 𝑡 +1 V 𝑖 , 𝑎 𝑡 +1 V 𝑖 ) - ∑︁ 𝑎 𝑡 𝑖 𝜋 𝜃 (𝑎 𝑡 𝑖 |𝑧 𝑡 V 𝑖 )𝑄 𝜙 (𝑧 𝑡 V 𝑖 , 𝑎 𝑡 V 𝑖 )(2)" }, { "formula_coordinates": [ 5, 108.66, 405.81, 98.12, 15.46 ], "formula_id": "formula_10", "formula_text": "𝑗 ∈V 𝑖 𝜋 (𝑎 𝑡 +1 𝑖 |𝑧 𝑡 +1 V 𝑖 ) 𝑄 (𝑧 𝑡 +1 V 𝑖 , 𝑎 𝑡 +1 V 𝑖" }, { "formula_coordinates": [ 5, 102.17, 530.62, 191.87, 27.38 ], "formula_id": "formula_11", "formula_text": "𝐴 𝐺𝐴𝐸 (𝑧 𝑡 V 𝑖 , 𝑎 𝑡 𝑖 ) = ∞ ∑︁ 𝑙=0 (𝛾𝛿) 𝑙 Â1 (𝑧 𝑡 +𝑙 V 𝑖 , 𝑎 𝑡 +𝑙 𝑖 )(3)" }, { "formula_coordinates": [ 5, 60.56, 588, 233.49, 66.64 ], "formula_id": "formula_12", "formula_text": "Â𝑛 (𝑧 𝑡 V 𝑖 , 𝑎 𝑡 𝑖 ) = 𝑛-1 ∑︁ 𝑙=0 𝛾 𝑙 𝑟 𝑡 +𝑙 V 𝑖 + 𝛾 𝑛 ∑︁ 𝑎 𝑡 +𝑛 𝑖 𝜋 𝜃 (𝑎 𝑡 +𝑛 𝑖 |𝑧 𝑡 +𝑛 V 𝑖 )𝑄 𝜙 (𝑧 𝑡 +𝑛 V 𝑖 , 𝑎 𝑡 +𝑛 V 𝑖 ) - ∑︁ 𝑎 𝑡 𝑖 𝜋 𝜃 (𝑎 𝑡 𝑖 |𝑧 𝑡 V 𝑖 )𝑄 𝜙 (𝑧 𝑡 V 𝑖 , 𝑎 𝑡 V 𝑖 )(4)" }, { "formula_coordinates": [ 5, 350.56, 97.82, 207.64, 25.96 ], "formula_id": "formula_13", "formula_text": "∇𝐿 𝜋 (𝜃 ) = 𝑇 ∑︁ 𝑡 =0 ∇ 𝜃 𝑙𝑜𝑔(𝜋 𝜃 (𝑎 𝑖 𝑡 |𝑧 𝑡 V 𝑖 ) 𝐴 𝐺𝐴𝐸 (𝑧 𝑡 V 𝑖 , 𝑎 𝑡 𝑖 )(5)" }, { "formula_coordinates": [ 5, 354.57, 338.19, 203.63, 24.2 ], "formula_id": "formula_14", "formula_text": "𝐺 𝑡 𝑖 = 𝑟 𝑡 V 𝑖 + 𝛾 ∑︁ 𝑎 𝑡 +1 𝑖 𝜋 𝜃 (𝑎 𝑡 +1 𝑖 |𝑧 𝑡 +1 V 𝑖 )𝑄 𝜙 (𝑧 𝑡 +1 V 𝑖 , 𝑎 𝑡 +1 V 𝑖 )(6)" }, { "formula_coordinates": [ 5, 326.52, 384.74, 231.68, 28.36 ], "formula_id": "formula_15", "formula_text": "𝐺 𝑡 :𝑡 +𝑛 𝑖 = 𝑛-1 ∑︁ 𝑙=0 𝛾 𝑙 𝑟 𝑡 +𝑙 V 𝑖 + 𝛾 𝑛 ∑︁ 𝑎 𝑡 +𝑛 𝑖 𝜋 𝜃 (𝑎 𝑡 +𝑛 𝑖 |𝑧 𝑡 +𝑛 V 𝑖 )𝑄 𝜙 (𝑧 𝑡 +𝑛 V 𝑖 , 𝑎 𝑡 +𝑛 V 𝑖 )(7)" }, { "formula_coordinates": [ 5, 356.36, 460.37, 201.84, 25.96 ], "formula_id": "formula_16", "formula_text": "𝐿 𝑄 (𝜙) = 1 𝑇 𝑇 ∑︁ 𝑡 =0 (𝑄 𝜙 (𝑧 𝑡 +𝑛,V 𝑖 , 𝑎 𝑡 +𝑛,V 𝑖 ) -𝐺 𝜆 𝑡,𝑖 ) 2 (8)" } ]
SocialLight: Distributed Cooperation Learning towards Network-Wide Traffic Signal Control
Many recent works have turned to multi-agent reinforcement learning (MARL) for adaptive traffic signal control, seeking to optimize the travel time of vehicles over large urban networks. However, achieving effective and scalable cooperation among junctions (agents) remains an open challenge, as existing methods often rely on extensive, non-generalizable reward shaping or on non-scalable centralized learning. To address these problems, we propose SocialLight, a new MARL method for traffic signal control that learns cooperative traffic control policies by estimating, in a distributed manner, each agent's individual marginal contribution to its local neighborhood. SocialLight builds on the Asynchronous Advantage Actor-Critic (A3C) framework and keeps learning scalable by training a locally-centralized critic conditioned on the states and actions of neighboring agents, which each agent uses to estimate its individual contribution through counterfactual reasoning. We further introduce important modifications to the advantage calculation that help stabilize policy updates: these modifications decouple the impact of the neighbors' actions from the computed advantages, thereby reducing the variance of the gradient updates. We benchmark our trained networks against state-of-the-art traffic signal control methods on standard benchmark scenarios in two traffic simulators, SUMO and CityFlow. Our results show that SocialLight exhibits improved scalability to larger road networks and better performance across standard traffic metrics.
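The locally-centralized critic described in the abstract is conditioned on a neighborhood-augmented observation, the corresponding joint neighborhood action, and a shared neighborhood reward. The sketch below shows one way these inputs could be assembled; the function name, container types, and concatenation order are illustrative assumptions rather than the authors' implementation.

```python
from typing import Dict, List

import numpy as np


def build_local_critic_inputs(
    obs: Dict[int, np.ndarray],
    actions: Dict[int, int],
    rewards: Dict[int, float],
    neighbors: Dict[int, List[int]],
    agent_id: int,
):
    """Assemble the neighborhood-augmented inputs for `agent_id`:
    z_Vi (own + neighbors' observations), a_Vi (own + neighbors' actions),
    and r_Vi (sum of rewards over the local neighborhood)."""
    nbrs = neighbors[agent_id]
    # Augmented observation: own observation concatenated with the neighbors'.
    z_v = np.concatenate([obs[agent_id]] + [obs[j] for j in nbrs])
    # Joint neighborhood action: own action followed by the neighbors' actions.
    a_v = np.array([actions[agent_id]] + [actions[j] for j in nbrs])
    # Neighborhood reward: sum of rewards over the local neighborhood.
    r_v = rewards[agent_id] + sum(rewards[j] for j in nbrs)
    return z_v, a_v, r_v
```

Under this reading, the critic would be evaluated on (z_v, a_v) and trained toward a return built from r_v, consistent with the formulas listed earlier in this record.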
Harsh Goel; Yifeng Zhang; Mehul Damani; Guillaume Sartoretti
[ { "figure_caption": "Figure 1 :1Figure 1: Single intersection with 8 traffic light phases", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "1 ×1𝐴 2 × .... × 𝐴 𝑛 and P : S × A × S → [0, 1] denotes the global transition dynamics. The reward function 𝑅 : 𝑆 × 𝐴 × 𝑆 → R 𝑛 computes a set of private rewards [𝑟 𝑡 𝑖 ] for each agent 𝑖 at each time-step 𝑡. These rewards can be global (i.e each agent receives the same global reward) or local via reward shaping. In partially observable settings each agent cannot access the true global state of the environment s 𝑡 . Instead, it draws an observation via observation models O = [𝑂 1 , 𝑂 2 , ..., 𝑂 𝑁 ] where 𝑂 𝑖 : S → Z 𝑖 . Here Z = [𝑍 1 , 𝑍 2 , ...𝑍 𝑛 ] are the agents' observation spaces.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "3. 3 . 131Problem Definition. Given the current traffic conditions, the goal of traffic agents in the network is to select their own optimal signal phase 𝑎 𝑡 for a fixed phase duration, until the next decision time-step 𝑡 + 1, to maximize a global cumulative objective.", "figure_data": "", "figure_id": "fig_2", "figure_label": "31", "figure_type": "figure" }, { "figure_caption": "3.3.2 Observations.The true global state for a traffic systems comprises the position and current travel time of each vehicle in the", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of SocialLight: Fig a) highlights the distributed training framework where each agent maintains an actor network and a local critic network with parameter sharing. The local critic is conditioned on the augmented observations and the actions of neighboring agents to marginalize individual credit via a counterfactual baseline. This baseline is then used to compute the individual advantages used for individual policy improvement. Fig b) Details the actor and critic network architectures. ( Image helped prepared by Stuti Mittal)", "figure_data": "", "figure_id": "fig_4", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Real-world traffic road network for CityFlow dataset (From left to right, Hangzhou map, Jinan map and New York map)", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Training Plots of methods over average rewards, vehicle speed and intersection delay on the SUMO Manhattan synthetic data-set. SocialLight is shown in Blue, A3C with Neighborhood Rewards is in red and SocialLight with original COMA advantages which we refer to as decCOMA is in green. Observe that decCOMA fails to improve over cumulative returns.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "7. 1 . 212CityFlow dataset. We adapt the same traffic phase duration and yellow duration settings as SUMO for the simulations.", "figure_data": "", "figure_id": "fig_7", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "7. 22MDP Settings 7.2.1 SUMO Dataset. 
The Markov Decision Process Settings(State, Actions and Rewards) are defined as follows:", "figure_data": "", "figure_id": "fig_8", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "leverage the locally centralized critic to compute advantages by marginalizing individual contributions via a counterfactual baseline that is inspired by COMA for policy improvement. COMA learns a central critic 𝑄 (s 𝑡 , a 𝑡 ) over the global state 𝑠 𝑡 and global action a 𝑡 = 𝑖 ∈𝑁 [𝑎𝑡 𝑖 ] for all 𝑁 agents. The counterfactual baseline and COMA advantages for a global state is given by 𝑄 (s 𝑡 , a 𝑡 ) -𝐸 𝑎 𝑡 𝑖 𝑄 (s 𝑡 , a 𝑡 ). In contrast our method computes a local critic 𝑄 𝜙 (𝑧 𝑡", "figure_data": "V 𝑖, 𝑎 𝑡 V 𝑖", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "SocialLight vs Baselines on CityFlow", "figure_data": "MethodsNew YorkHangzhouJinan1212123Fixed Time1397.37 1660.29 432.316 359.44 364.36 289.74 316.69Max Pressure1177.75 1535.77262.35 348.68 275.79 223.06 234.93Co-Light1221.77 1476.18271.07 297.26 276.33 237.14 278.16MP -Light1168.49 1597.24343.47 282.14 300.93 259.10 261.45Attention-Light978.621571.68259.62 284.75 268.68 212.76 216.34SocialLight760.94 1114.47 255.68 301.16 226.21 209.81 205.38Performance Gain 22.23 % 24.53 %1.54 %-15.68% 1.41 % 5.09 %the SUMO simulator. By simulating a diverse range of traffic sce-narios to train and test on, we can eliminate networks potentiallyover-fitting to a single scenario that may misrepresent our analysis.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "SocialLight vs Baselines on SUMO -Manhattan", "figure_data": "Metrics(Average)GreedyIA2CMA2CIQL-LRA3CA3C (nr)SocialLightQueue Length5.00 (2.86)3.27 (2.00)2.24 (1.28)3.76(2.74)1.71 (1.14)1.11 (0.89)0.74 (0.69)Speed (m/s)1.56 (1.33)1.70 (1.32)2.31 (1.22)3.04(3.24)4.55 (2.78)4.94 (2.34)5.36 (2.66)Intersection Delay (s)60.97 (47.15)58.27 (46.08)21.96 (19.83)92.59(94.69)38.23 (32.83)25.43 (22.53)10.07 (9.02)Cumulative Delay (s)595.78 (464.23) 446.85 (410.71) 325.02 (269.77) 214.521(342.95) 184.60 (297.99) 159.23 (226.64) 106.51 (165.18)Trip Time (s)885.47 (572.63) 704.62 (501.33) 597.88 (399.09) 462.862(453.69) 395.04 (387.06) 386.01 (305.74) 309.81 (243.95)", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
[{"Category": "Methodological Basis", "Citation": "[6]", "Explanation": "The cited work introduces the concept of multi-agent reinforcement learning (MARL) as a method for adaptive traffic signal control (ATSC), which the citing paper adopts in their own research to dynamically adjust traffic signal phases based on current traffic conditions."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work also highlights the use of MARL in ATSC, providing a methodological basis for the citing paper to further explore the application of this method in ATSC."}, {"Category": "Methodological Basis", "Citation": "[17]", "Explanation": "The cited work mentions the use of MARL in general control tasks, which the citing paper may build upon in their research to further explore the versatility of this method in ATSC and other control tasks."}, {"Category": "Methodological Basis", "Citation": "[10]", "Explanation": "The cited work introduces the COMA method, which the citing paper adopts to learn a complex team-level network and allow agents to estimate their own marginal contribution to the team reward via counterfactual reasoning."}, {"Category": "Methodological Basis", "Citation": "[14]", "Explanation": "The cited work SUMO is used as a standard traffic simulator in the research conducted in the citing paper, providing a basis for the evaluation of the proposed method."}, {"Category": "Extension or Continuation", "Citation": "[26]", "Explanation": "The cited work CityFlow is also used as a standard traffic simulator in the research conducted in the citing paper, further extending the evaluation of the proposed method to a range of benchmark traffic networks."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work introduces the Webster method for obtaining a closed-form solution for optimal cycle length and phase split, which the citing paper adopts as a methodological basis for their research on traffic signal control."}, {"Category": "Data Source", "Citation": "[18]", "Explanation": "The cited work, SCATS, is a popular adaptive-control method that the citing paper utilizes in their research on traffic signal control. The citing paper acknowledges the origin of the method and its use in the field of traffic control."}, {"Category": "Methodological Basis", "Citation": "[24]", "Explanation": "The cited work GreenWave optimizes the timing offsets between different intersections, which the citing paper adopts in their research to improve the efficiency of traffic control."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work Max-pressure control addresses the risk of oversaturation at an intersection by balancing queue lengths between neighboring intersections, which the citing paper incorporates in their research to enhance the traffic control system."}, {"Category": "Data Source", "Citation": "[31]", "Explanation": "The cited work by Wei et al. 
provides a survey on traffic control methods, which the citing paper uses as a reference to understand the state-of-the-art in the field and build upon existing research."}, {"Category": "Methodological Basis", "Citation": "[11,23,33]", "Explanation": "The cited works provide a common variant of the ATSC problem formulation that involves learning to select the next traffic light phase using features describing local traffic conditions, which the citing paper adopts as a basis for their own research."}, {"Category": "Data Source", "Citation": "[15,1,3]", "Explanation": "The cited works by Li et al., Aslani et al., and Casas et al. focus on learning policies for selecting traffic signal timing, which the citing paper utilizes as a data source for their own research on the same topic."}, {"Category": "Extension or Continuation", "Citation": "[20]", "Explanation": "The work by Mousavi et al. uses a CNN to map from image snapshots to the policy for selecting the next traffic phase, which the citing paper extends by exploring a similar approach in their own research."}, {"Category": "Methodological Basis", "Citation": "[32]", "Explanation": "The work by Wei et al. uses both extracted features and real-world images to learn the policy for selecting the next traffic phase, which the citing paper adopts as a methodological basis for their own research in the same area."}, {"Category": "Methodological Basis", "Citation": "[22]", "Explanation": "The cited work At-tendLight is used as a methodological basis for the development of a universal model for learning traffic control in intersections with various road, lane, phase, and traffic flow characteristics."}, {"Category": "Extension or Continuation", "Citation": "[23,34]", "Explanation": "The cited works on joint action learners for multi-agent traffic signal control provide a basis for the development of methods for coordinating traffic in a network of intersections, which is a natural extension of the research in the citing paper."}, {"Category": "Data Source", "Citation": "[24]", "Explanation": "The cited work on Greenwave is a data source for the study of traffic coordination in a network of intersections, as it highlights the need for coordination between different traffic intersections to achieve effective performance."}, {"Category": "Methodological Basis", "Citation": "[6,21,22,29,30]", "Explanation": "The cited works on independent learners for multi-agent traffic signal control provide a methodological basis for the development of individual policies for controlling traffic in each intersection, while considering other agents and intersections as part of the environment."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work by [37] introduced a new method of learning phase correlation with an attention mechanism, which the citing paper adopts in their research to improve performance in their own work."}, {"Category": "Extension or Continuation", "Citation": "[4,27]", "Explanation": "The cited works by [4] and [27] have focused on learning a centralized critic to guide independent policy learning, which the citing paper extends in their research to further explore this approach in their own work."}, {"Category": "Methodological Basis", "Citation": "[12]", "Explanation": "The cited work introduces the concept of Dec-POMDPs, which the citing paper adopts to formulate a multi-agent reinforcement learning problem as a set of Dec-POMDPs."}, {"Category": "Methodological Basis", "Citation": "[9,36]", 
"Explanation": "The cited works propose a method for learning centralized critics that the citing paper adopts in their research to optimize global returns in multi-agent actor-critic methods for cooperation."}, {"Category": "Methodological Basis", "Citation": "[8]", "Explanation": "The cited work by [8] has considered switching traffic phases in a fixed cycle without specified phase duration, which the citing paper adopts in their research to improve adaptability in traffic control."}, {"Category": "Data Source", "Citation": "[2]", "Explanation": "The cited work by [2] has set phase durations within a fixed cycle length, which the citing paper utilizes in their research to study the effects of phase durations in traffic control."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work provides a comparison of different local reward structures and shows that queue lengths are superior in maximizing throughput at each intersection, which the citing paper adopts as a basis for their research."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work COMA is used as a basis for the adaptations made in the asynchronous actor-critic framework to address the challenges in training traffic light control policies in SocialLight."}, {"Category": "Methodological Basis", "Citation": "[14,26]", "Explanation": "The cited works provide the SUMO and Cityflow traffic simulators that the citing paper uses in its experiments to benchmark the performance of SocialLight and study the impact of individual contribution marginalization on the traffic performance."}, {"Category": "Data Source", "Citation": "[16]", "Explanation": "The cited work SUMO is the source of the road network and traffic flow dataset used in the experiments on the SUMO simulator."}, {"Category": "Data Source", "Citation": "[26]", "Explanation": "The cited work CityFlow is the source of the road network and traffic flow dataset used in the experiments on the CityFlow simulator."}, {"Category": "Data Source", "Citation": "[7]", "Explanation": "The cited work provides the traffic flow datasets that the citing paper uses to conduct their research and compare algorithms."}, {"Category": "Data Source", "Citation": "[30,31,38]", "Explanation": "The real traffic dataset is cited to acknowledge the origin of the data used in the study conducted in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work provides the experiment settings and POMDP settings for a fair comparison in the citing paper, which the citing paper adopts to ensure a consistent and accurate evaluation of the proposed method."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work introduces the Greedy method, which the citing paper adopts in its research to choose the phase associated with lanes with maximum incoming queue length."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work presents the IQL-LR and IA2C algorithms, which the citing paper utilizes as methodological bases for its research on local agent learning and A2C extension."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work introduces the MA2C algorithm, which the citing paper adopts to address the instability caused by partial observability in cooperative MARL research."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The cited work presents the A3C and A3C(nr) 
algorithms, which the citing paper uses as methodological bases for distributed learning research with parameter sharing and neighborhood reward maximization."}, {"Category": "Data Source", "Citation": "[37]", "Explanation": "The cited work provides the real traffic datasets used in the evaluation of SocialLight in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[13]", "Explanation": "The cited work introduces the concept of fixed time control, which the citing paper adopts in their research to control the phases of a traffic junction."}, {"Category": "Methodological Basis", "Citation": "[28]", "Explanation": "The cited work presents the max-pressure control method, which the citing paper uses to select the phase that minimizes the intersection pressure in their research."}, {"Category": "Methodological Basis", "Citation": "[30]", "Explanation": "The cited work introduces the Co-Light method, which the citing paper utilizes in their research to achieve junction-level cooperation in traffic control."}, {"Category": "Data Source", "Citation": "[5]", "Explanation": "The cited work provides the MP-Light model, which the citing paper uses to incorporate pressure in their states and reward for state-of-the-art scalable ATSC over a city-level traffic network in their research."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work presents the Attention-Light model, which the citing paper adopts in their research to incorporate self-attention in learning phase correlation and competition for state-of-the-art traffic control performance."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work provides the New York traffic flowsets, which the citing paper uses to evaluate the performance of RL methods in terms of vehicle arrival rate."}, {"Category": "Methodological Basis", "Citation": "[9]", "Explanation": "The cited work, COMA advantages, serves as the basis for the modifications made in the citing paper to improve training stability and overall returns in the training process of the policy network."}, {"Category": "Data Source", "Citation": "[7]", "Explanation": "The cited work provides the value of the yellow time in a traffic phase change, which is used as a parameter in the simulation episode in the citing paper."}, {"Category": "Methodological Basis", "Citation": "[7]", "Explanation": "The citing paper adopts the available phases of the traffic light from the cited work to control the traffic signal in the simulation."}, {"Category": "Data Source", "Citation": "[6,7,37]", "Explanation": "The cited works provide the standard reward structure for traffic light control, which the citing paper uses in its research on traffic control in SUMO."}, {"Category": "Methodological Basis", "Citation": "[37]", "Explanation": "The cited work provides the framework and methodology for defining the state, action, and reward parameters in the Markov Decision Process settings used in the citing paper."}]
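One of the baselines referenced in the citation notes above is max-pressure control [28], which selects the signal phase relieving the most intersection pressure. As a hedged illustration of the textbook idea only (the lane layout, movement encoding, and pressure definition below are assumptions, not necessarily the exact variant benchmarked in this record), a minimal sketch:

```python
from typing import Dict, List, Tuple


def max_pressure_phase(
    queues_in: Dict[str, int],
    queues_out: Dict[str, int],
    phase_movements: Dict[int, List[Tuple[str, str]]],
) -> int:
    """Pick the phase whose permitted movements have the largest total
    pressure, i.e. incoming minus outgoing queue length per movement."""
    def pressure(phase: int) -> int:
        return sum(
            queues_in[src] - queues_out[dst]
            for src, dst in phase_movements[phase]
        )

    return max(phase_movements, key=pressure)


# Hypothetical example: two phases over four approaches.
queues_in = {"N": 8, "S": 6, "E": 2, "W": 3}
queues_out = {"N": 1, "S": 0, "E": 4, "W": 2}
phases = {0: [("N", "S"), ("S", "N")], 1: [("E", "W"), ("W", "E")]}
print(max_pressure_phase(queues_in, queues_out, phases))  # -> 0 (N-S carries more pressure)
```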